0.8.3 backports #2022

Merged
merged 30 commits on Dec 2, 2015

Commits
e6fa1db
Add special handler to allow logger messages during shutdown
drcrallen May 22, 2015
91e4a9a
Support multiple outer aggregators of same type and provide more help…
dclim Oct 1, 2015
7157091
add comment explaining logic
dclim Oct 15, 2015
8611b47
Fix Race in jar upload during hadoop indexing - https://github.com/dr…
nishantmonu51 Oct 9, 2015
cf4223c
"druid.manager.segment" should be "druid.manager.segments"
gianm Oct 21, 2015
710ab39
Fix documentation about lookup
b-slim Oct 23, 2015
0bff947
fixing hadoop test scope dependencies in indexing-hadoop
himanshug Oct 26, 2015
05a89ae
EventReceiverFirehose: Drain buffer when closed, until empty.
gianm Oct 28, 2015
698e057
ForkingTaskRunner: Log without buffering.
gianm Nov 7, 2015
289a884
Some changes that make it possible to restart tasks on the same hardw…
gianm Oct 28, 2015
6c2a305
forward cancellation request to all brokers, fixes #1802
xvrl Oct 9, 2015
e4458bc
druid aggregators based on datasketches lib http://datasketches.githu…
himanshug Oct 30, 2015
2326e3e
adding datasketches module to top level pom
himanshug Oct 30, 2015
cc7d106
adding datasketches aggregator to documentation
himanshug Oct 30, 2015
3e917f8
changing names to be explicit about theta sketch algorithm
himanshug Nov 6, 2015
6d0e236
further simplifying the api, users just need to use thetaSketch as ag…
himanshug Nov 10, 2015
087756c
update doc with new thetaSketch api
himanshug Nov 10, 2015
5ba19c6
fix doc - correct default value for maxRowsInMemory
nishantmonu51 Nov 2, 2015
f712b24
RemoteTaskActionClient: Fix statusCode check.
gianm Nov 5, 2015
3c5d8ac
Update curator to 2.9.1
xvrl Nov 5, 2015
1db46e4
DataSchema: Exclude metric names from dimension list.
gianm Nov 7, 2015
242933d
update to sketches-core-0.2.2 .
himanshug Nov 10, 2015
65e2b75
reformat datasketches module to satisfy druid style guidelines
himanshug Nov 19, 2015
bcdd3a7
Ability to skip Incremental Index during query using query context
nishantmonu51 Nov 12, 2015
d823fb6
add examples for duration and period granularities
Oct 15, 2015
c632497
separate ingestion and query thread pool
pjain1 Sep 28, 2015
b5003fd
Add EventReceiverFirehoseMonitor
Sep 23, 2015
52e46c9
enable query caching on intermediate realtime persists
xvrl Nov 10, 2015
70883e3
EC2 autoscaler: avoid hitting aws filter limits
xvrl Nov 11, 2015
85cb4dc
optimize index merge
binlijin Nov 12, 2015
141 changes: 141 additions & 0 deletions common/src/main/java/io/druid/common/config/Log4jShutdown.java
@@ -0,0 +1,141 @@
/*
 * Licensed to Metamarkets Group Inc. (Metamarkets) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. Metamarkets licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package io.druid.common.config;

import org.apache.logging.log4j.core.util.Cancellable;
import org.apache.logging.log4j.core.util.ShutdownCallbackRegistry;

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

public class Log4jShutdown implements ShutdownCallbackRegistry, org.apache.logging.log4j.core.LifeCycle
{
  private final AtomicReference<State> state = new AtomicReference<>(State.INITIALIZED);
  private final Queue<Runnable> shutdownCallbacks = new ConcurrentLinkedQueue<>();
  private final AtomicBoolean callbacksRun = new AtomicBoolean(false);

  @Override
  public Cancellable addShutdownCallback(final Runnable callback)
  {
    if (callback == null) {
      throw new NullPointerException("callback");
    }
    if (!isStarted()) {
      throw new IllegalStateException("Not started");
    }
    final Cancellable cancellable = new Cancellable()
    {
      private volatile boolean cancelled = false;
      private final AtomicBoolean ran = new AtomicBoolean(false);

      @Override
      public void cancel()
      {
        cancelled = true;
      }

      @Override
      public void run()
      {
        if (!cancelled) {
          if (ran.compareAndSet(false, true)) {
            callback.run();
          }
        }
      }
    };
    shutdownCallbacks.add(cancellable);
    if (!isStarted()) {
      // We are shutting down in the middle of registering... Make sure the callback fires
      callback.run();
      throw new IllegalStateException("Shutting down while adding shutdown hook. Callback fired just in case");
    }
    return cancellable;
  }

  @Override
  public State getState()
  {
    return state.get();
  }

  @Override
  public void initialize()
  {
    // NOOP, state is always at least INITIALIZED
  }

  @Override
  public void start()
  {
    if (!state.compareAndSet(State.INITIALIZED, State.STARTED)) { // Skip STARTING
      throw new IllegalStateException(String.format("Expected state [%s] found [%s]", State.INITIALIZED, state.get()));
    }
  }

  @Override
  public void stop()
  {
    if (callbacksRun.get()) {
      return;
    }
    if (!state.compareAndSet(State.STARTED, State.STOPPED)) {
      throw new IllegalStateException(String.format("Expected state [%s] found [%s]", State.STARTED, state.get()));
    }
  }

  public void runCallbacks()
  {
    if (!callbacksRun.compareAndSet(false, true)) {
      // Already run, skip
      return;
    }
    stop();
    RuntimeException e = null;
    for (Runnable callback = shutdownCallbacks.poll(); callback != null; callback = shutdownCallbacks.poll()) {
      try {
        callback.run();
      }
      catch (RuntimeException ex) {
        if (e == null) {
          e = new RuntimeException("Error running callback");
        }
        e.addSuppressed(ex);
      }
    }
    if (e != null) {
      throw e;
    }
  }

  @Override
  public boolean isStarted()
  {
    return State.STARTED.equals(getState());
  }

  @Override
  public boolean isStopped()
  {
    return State.STOPPED.equals(getState());
  }
}
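
For context, a minimal usage sketch of the class above — the `main`-method framing and the log-flushing callback are illustrative assumptions, not code from this PR:

```java
import io.druid.common.config.Log4jShutdown;
import org.apache.logging.log4j.core.util.Cancellable;

public class Log4jShutdownExample
{
  public static void main(String[] args)
  {
    Log4jShutdown shutdown = new Log4jShutdown();
    shutdown.start(); // INITIALIZED -> STARTED; required before registering callbacks

    // Hypothetical callback: close appenders last, so messages logged
    // during shutdown are not lost.
    Cancellable closeAppenders = shutdown.addShutdownCallback(
        new Runnable()
        {
          @Override
          public void run()
          {
            System.out.println("closing log appenders");
          }
        }
    );

    // Cancelled callbacks are skipped when the queue is drained:
    // closeAppenders.cancel();

    // Typically called from the process's own shutdown hook: drains the queue,
    // runs each callback exactly once, and wraps any RuntimeExceptions in a
    // single exception with the originals attached as suppressed exceptions.
    shutdown.runCallbacks();
  }
}
```
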
2 changes: 1 addition & 1 deletion docs/content/configuration/coordinator.md
@@ -34,7 +34,7 @@ The coordinator node uses several of the global configs in [Configuration](../co
|Property|Description|Default|
|--------|-----------|-------|
|`druid.manager.config.pollDuration`|How often the manager polls the config table for updates.|PT1m|
|`druid.manager.segment.pollDuration`|The duration between polls the Coordinator does for updates to the set of active segments. Generally defines the amount of lag time it can take for the coordinator to notice new segments.|PT1M|
|`druid.manager.segments.pollDuration`|The duration between polls the Coordinator does for updates to the set of active segments. Generally defines the amount of lag time it can take for the coordinator to notice new segments.|PT1M|
|`druid.manager.rules.pollDuration`|The duration between polls the Coordinator does for updates to the set of active rules. Generally defines the amount of lag time it can take for the coordinator to notice rules.|PT1M|
|`druid.manager.rules.defaultTier`|The default tier from which default rules will be loaded from.|_default|
|`druid.manager.rules.alertThreshold`|The duration after a failed poll upon which an alert should be emitted.|PT10M|
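
As an aside on the fix above, a coordinator `runtime.properties` fragment would now spell the key with the plural form; the values below are just the documented defaults:

```properties
# Was druid.manager.segment.pollDuration before this fix
druid.manager.segments.pollDuration=PT1M
druid.manager.rules.pollDuration=PT1M
druid.manager.config.pollDuration=PT1m
```
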
1 change: 1 addition & 0 deletions docs/content/configuration/index.md
@@ -118,6 +118,7 @@ The following monitors are available:
|`io.druid.server.metrics.HistoricalMetricsMonitor`|Reports statistics on Historical nodes.|
|`com.metamx.metrics.JvmMonitor`|Reports JVM-related statistics.|
|`io.druid.segment.realtime.RealtimeMetricsMonitor`|Reports statistics on Realtime nodes.|
|`io.druid.server.metrics.EventReceiverFirehoseMonitor`|Reports how many events have been queued in the EventReceiverFirehose.|
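
The new monitor is enabled like any other, by adding it to the monitor list; a minimal `runtime.properties` sketch, assuming the standard `druid.monitoring.monitors` property:

```properties
# Illustrative: report buffered-event counts alongside JVM stats
druid.monitoring.monitors=["io.druid.server.metrics.EventReceiverFirehoseMonitor", "com.metamx.metrics.JvmMonitor"]
```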

### Emitting Metrics

10 changes: 10 additions & 0 deletions docs/content/configuration/indexing-service.md
@@ -251,6 +251,7 @@ Middle managers pass their configurations down to their child peons. The middle
|`druid.indexer.runner.javaOpts`|-X Java options to run the peon in its own JVM.|""|
|`druid.indexer.runner.maxZnodeBytes`|The maximum size Znode in bytes that can be created in Zookeeper.|524288|
|`druid.indexer.runner.startPort`|The port that peons begin running on.|8100|
|`druid.indexer.runner.separateIngestionEndpoint`|Use a separate server, and consequently a separate Jetty thread pool, for ingesting events.|false|
|`druid.worker.ip`|The IP of the worker.|localhost|
|`druid.worker.version`|Version identifier for the middle manager.|0|
|`druid.worker.capacity`|Maximum number of tasks the middle manager can accept.|Number of available processors - 1|
@@ -272,6 +273,15 @@ Additional peon configs include:
|`druid.indexer.task.hadoopWorkingPath`|Temporary working directory for Hadoop tasks.|/tmp/druid-indexing|
|`druid.indexer.task.defaultRowFlushBoundary`|Highest row count before persisting to disk. Used for index-generating tasks.|50000|
|`druid.indexer.task.defaultHadoopCoordinates`|Hadoop version to use with HadoopIndexTasks that do not request a particular version.|org.apache.hadoop:hadoop-client:2.3.0|
|`druid.indexer.task.gracefulShutdownTimeout`|Wait this long on middleManager restart for restorable tasks to gracefully exit.|PT5M|
|`druid.indexer.task.directoryLockTimeout`|Wait this long for zombie peons to exit before giving up on their replacements.|PT10M|

If `druid.indexer.runner.separateIngestionEndpoint` is set to true, the following configurations are available for the ingestion server on the peon:

|Property|Description|Default|
|--------|-----------|-------|
|`druid.indexer.server.chathandler.http.numThreads`|Number of threads for HTTP requests.|Math.max(10, (Number of available processors * 17) / 16 + 2) + 30|
|`druid.indexer.server.chathandler.http.maxIdleTime`|The Jetty max idle time for a connection.|PT5m|
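
Taken together, a hypothetical peon/middleManager `runtime.properties` fragment exercising the new options might look like this (illustrative values, not recommendations):

```properties
# Serve ingestion on a separate server and Jetty thread pool
druid.indexer.runner.separateIngestionEndpoint=true
druid.indexer.server.chathandler.http.numThreads=40
druid.indexer.server.chathandler.http.maxIdleTime=PT5m

# Graceful-restart behavior for restorable tasks
druid.indexer.task.gracefulShutdownTimeout=PT5M
druid.indexer.task.directoryLockTimeout=PT10M
```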

If the peon is running in remote mode, there must be an overlord up and running. Peons in remote mode can set the following configurations:

141 changes: 141 additions & 0 deletions docs/content/development/datasketches-aggregators.md
@@ -0,0 +1,141 @@
---
layout: doc_page
---

## DataSketches aggregator
Druid aggregators based on the [datasketches](http://datasketches.github.io/) library. Note that sketch algorithms are approximate; see the "Accuracy" section of the datasketches documentation for details.

At ingestion time, this aggregator creates theta sketch objects, which are stored in Druid segments. Logically, a theta sketch can be thought of as a Set data structure. At query time, sketches are read and aggregated (set-unioned) together. By default, the end result is an estimate of the number of unique entries in the sketch. You can also use post aggregators to compute the union, intersection, or difference of sketch columns in the same row.

Note that you can use the `thetaSketch` aggregator on a column that was not ingested with it; it will simply return the estimated cardinality of that column. Using it at ingestion time as well is still recommended, since that makes querying faster.

### Aggregators

```json
{
  "type" : "thetaSketch",
  "name" : <output_name>,
  "fieldName" : <metric_name>,

  //The following boolean field is optional and should only be used at
  //indexing time if your input data contains theta sketch objects, which
  //would be the case if you use the datasketches library outside of Druid,
  //say with Pig/Hive, to produce the data that you are ingesting into Druid.
  "isInputThetaSketch": false,

  //The following field is optional (default = 16384) and must be a power of 2.
  //Internally, size refers to the maximum number of entries the sketch object
  //will retain; a higher size means higher accuracy but more space to store
  //the sketches.
  //Note that after you index with a particular size, Druid persists the sketch
  //in segments, and at query time you must use a size greater than or equal to it.
  //See [theta-size](http://datasketches.github.io/docs/ThetaSize.html) for details.
  //In general, we recommend sticking to the default size, which has worked well.
  "size": 16384
}
```

### Post Aggregators

#### Sketch Estimator
```json
{ "type" : "thetaSketchEstimate", "name": <output name>, "fieldName" : <the name field value of the thetaSketch aggregator>}
```

#### Sketch Operations
```json
{ "type" : "thetaSketchSetOp", "name": <output name>, "func": <UNION|INTERSECT|NOT>, "fields" : <the name field value of the thetaSketch aggregators>}
```

### Examples

Assume you have a dataset containing (timestamp, product, user_id), and you want to answer questions like:

* How many unique users visited product A?
* How many unique users visited both product A and product B?

To answer these questions, you would index your data with the following aggregator:

```json
{ "type": "thetaSketch", "name": "user_id_sketch", "fieldName": "user_id" }
```

Then a sample query for "How many unique users visited product A?" looks like:
```json
{
  "queryType": "groupBy",
  "dataSource": "test_datasource",
  "granularity": "ALL",
  "dimensions": [],
  "aggregations": [
    { "type": "thetaSketch", "name": "unique_users", "fieldName": "user_id_sketch" }
  ],
  "filter": { "type": "selector", "dimension": "product", "value": "A" },
  "intervals": [ "2014-10-19T00:00:00.000Z/2014-10-22T00:00:00.000Z" ]
}
```

And a sample query for "How many unique users visited both products A and B?":

```json
{
  "queryType": "groupBy",
  "dataSource": "test_datasource",
  "granularity": "ALL",
  "dimensions": [],
  "filter": {
    "type": "or",
    "fields": [
      { "type": "selector", "dimension": "product", "value": "A" },
      { "type": "selector", "dimension": "product", "value": "B" }
    ]
  },
  "aggregations": [
    {
      "type" : "filtered",
      "filter" : {
        "type" : "selector",
        "dimension" : "product",
        "value" : "A"
      },
      "aggregator" : {
        "type": "thetaSketch", "name": "A_unique_users", "fieldName": "user_id_sketch"
      }
    },
    {
      "type" : "filtered",
      "filter" : {
        "type" : "selector",
        "dimension" : "product",
        "value" : "B"
      },
      "aggregator" : {
        "type": "thetaSketch", "name": "B_unique_users", "fieldName": "user_id_sketch"
      }
    }
  ],
  "postAggregations": [
    {
      "type": "thetaSketchEstimate",
      "name": "final_unique_users",
      "field": {
        "type": "thetaSketchSetOp",
        "name": "final_unique_users_sketch",
        "func": "INTERSECT",
        "fields": [
          { "type": "fieldAccess", "fieldName": "A_unique_users" },
          { "type": "fieldAccess", "fieldName": "B_unique_users" }
        ]
      }
    }
  ],
  "intervals": [
    "2014-10-19T00:00:00.000Z/2014-10-22T00:00:00.000Z"
  ]
}
```
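
The same pattern covers the other set operations; for instance, a `UNION` over the two filtered sketches above would estimate users who visited either product (output names here are hypothetical):

```json
{
  "type": "thetaSketchEstimate",
  "name": "either_product_users",
  "field": {
    "type": "thetaSketchSetOp",
    "name": "either_product_users_sketch",
    "func": "UNION",
    "fields": [
      { "type": "fieldAccess", "fieldName": "A_unique_users" },
      { "type": "fieldAccess", "fieldName": "B_unique_users" }
    ]
  }
}
```
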
2 changes: 1 addition & 1 deletion docs/content/ingestion/realtime-ingestion.md
@@ -134,7 +134,7 @@ The tuningConfig is optional and default parameters will be used if no tuningCon
|Field|Type|Description|Required|
|-----|----|-----------|--------|
|type|String|This should always be 'realtime'.|no|
|maxRowsInMemory|Integer|The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 5 million)|
|maxRowsInMemory|Integer|The number of rows to aggregate before persisting. This number is the post-aggregation rows, so it is not equivalent to the number of input events, but the number of aggregated rows that those events result in. This is used to manage the required JVM heap size. Maximum heap memory usage for indexing scales with maxRowsInMemory * (2 + maxPendingPersists).|no (default == 500000)|
|windowPeriod|ISO 8601 Period String|The amount of lag time to allow events. This is configured with a 10 minute window, meaning that any event more than 10 minutes ago will be thrown away and not included in the segment generated by the realtime server.|no (default == PT10m)|
|intermediatePersistPeriod|ISO8601 Period String|The period that determines the rate at which intermediate persists occur. These persists determine how often commits happen against the incoming realtime stream. If the realtime data loading process is interrupted at time T, it should be restarted to re-read data that arrived at T minus this period.|no (default == PT10m)|
|basePersistDirectory|String|The directory to put things that need persistence. The plumber is responsible for the actual intermediate persists and this tells it where to store those persists.|no (default == java tmp dir)|
8 changes: 8 additions & 0 deletions docs/content/operations/metrics.md
@@ -156,6 +156,14 @@ These metrics are only available if the JVMMonitor module is included.
|`jvm/gc/count`|Garbage collection count.|gcName.|< 100|
|`jvm/gc/time`|Garbage collection time.|gcName.|< 1s|

### EventReceiverFirehose

The following metric is only available if the EventReceiverFirehoseMonitor module is included.

|Metric|Description|Dimensions|Normal Value|
|------|-----------|----------|------------|
|`ingest/events/buffered`|Number of events queued in the EventReceiverFirehose's buffer|serviceName, bufferCapacity.|Equal to current # of events in the buffer queue.|

## Sys

These metrics are only available if the SysMonitor module is included.