diff --git a/docs/multi-stage-query/concepts.md b/docs/multi-stage-query/concepts.md
index 2969ba9722a9..a7e59caf19c8 100644
--- a/docs/multi-stage-query/concepts.md
+++ b/docs/multi-stage-query/concepts.md
@@ -192,18 +192,13 @@ To perform ingestion with rollup:
2. Set [`finalizeAggregations: false`](reference.md#context-parameters) in your context. This causes aggregation
functions to write their internal state to the generated segments, instead of the finalized end result, and enables
further aggregation at query time.
-3. Wrap all multi-value strings in `MV_TO_ARRAY(...)` and set [`groupByEnableMultiValueUnnesting:
- false`](reference.md#context-parameters) in your context. This ensures that multi-value strings are left alone and
- remain lists, instead of being [automatically unnested](../querying/sql-data-types.md#multi-value-strings) by the
- `GROUP BY` operator.
+3. See [ARRAY types](../querying/arrays.md#sql-based-ingestion-with-rollup) for information about ingesting `ARRAY` columns.
+4. See [multi-value dimensions](../querying/multi-value-dimensions.md#sql-based-ingestion-with-rollup) for information about ingesting multi-value VARCHAR columns.
When you do all of these things, Druid understands that you intend to do an ingestion with rollup, and it writes
rollup-related metadata into the generated segments. Other applications can then use [`segmentMetadata`
queries](../querying/segmentmetadataquery.md) to retrieve rollup-related information.
-If you see the error "Encountered multi-value dimension `x` that cannot be processed with
-groupByEnableMultiValueUnnesting set to false", then wrap that column in `MV_TO_ARRAY(x) AS x`.
-
The following [aggregation functions](../querying/sql-aggregations.md) are supported for rollup at ingestion time:
`COUNT` (but switch to `SUM` at query time), `SUM`, `MIN`, `MAX`, `EARLIEST` and `EARLIEST_BY` ([string only](known-issues.md#select-statement)),
`LATEST` and `LATEST_BY` ([string only](known-issues.md#select-statement)), `APPROX_COUNT_DISTINCT`, `APPROX_COUNT_DISTINCT_BUILTIN`,
diff --git a/docs/multi-stage-query/examples.md b/docs/multi-stage-query/examples.md
index 51a645448daf..14914cab1158 100644
--- a/docs/multi-stage-query/examples.md
+++ b/docs/multi-stage-query/examples.md
@@ -79,7 +79,7 @@ CLUSTERED BY channel
## INSERT with rollup
-This example inserts data into a table named `kttm_data` and performs data rollup. This example implements the recommendations described in [Rollup](./concepts.md#rollup).
+This example inserts data into a table named `kttm_rollup` and performs data rollup. This example implements the recommendations described in [Rollup](./concepts.md#rollup).
Show the query
@@ -91,7 +91,7 @@ SELECT * FROM TABLE(
EXTERN(
'{"type":"http","uris":["https://static.imply.io/example-data/kttm-v2/kttm-v2-2019-08-25.json.gz"]}',
'{"type":"json"}',
- '[{"name":"timestamp","type":"string"},{"name":"agent_category","type":"string"},{"name":"agent_type","type":"string"},{"name":"browser","type":"string"},{"name":"browser_version","type":"string"},{"name":"city","type":"string"},{"name":"continent","type":"string"},{"name":"country","type":"string"},{"name":"version","type":"string"},{"name":"event_type","type":"string"},{"name":"event_subtype","type":"string"},{"name":"loaded_image","type":"string"},{"name":"adblock_list","type":"string"},{"name":"forwarded_for","type":"string"},{"name":"language","type":"string"},{"name":"number","type":"long"},{"name":"os","type":"string"},{"name":"path","type":"string"},{"name":"platform","type":"string"},{"name":"referrer","type":"string"},{"name":"referrer_host","type":"string"},{"name":"region","type":"string"},{"name":"remote_address","type":"string"},{"name":"screen","type":"string"},{"name":"session","type":"string"},{"name":"session_length","type":"long"},{"name":"timezone","type":"string"},{"name":"timezone_offset","type":"long"},{"name":"window","type":"string"}]'
+ '[{"name":"timestamp","type":"string"},{"name":"agent_category","type":"string"},{"name":"agent_type","type":"string"},{"name":"browser","type":"string"},{"name":"browser_version","type":"string"},{"name":"city","type":"string"},{"name":"continent","type":"string"},{"name":"country","type":"string"},{"name":"version","type":"string"},{"name":"event_type","type":"string"},{"name":"event_subtype","type":"string"},{"name":"loaded_image","type":"string"},{"name":"adblock_list","type":"string"},{"name":"forwarded_for","type":"string"},{"name":"number","type":"long"},{"name":"os","type":"string"},{"name":"path","type":"string"},{"name":"platform","type":"string"},{"name":"referrer","type":"string"},{"name":"referrer_host","type":"string"},{"name":"region","type":"string"},{"name":"remote_address","type":"string"},{"name":"screen","type":"string"},{"name":"session","type":"string"},{"name":"session_length","type":"long"},{"name":"timezone","type":"string"},{"name":"timezone_offset","type":"long"},{"name":"window","type":"string"}]'
)
))
@@ -101,8 +101,7 @@ SELECT
agent_category,
agent_type,
browser,
- browser_version,
- MV_TO_ARRAY("language") AS "language", -- Multi-value string dimension
+ browser_version,
os,
city,
country,
@@ -113,11 +112,10 @@ SELECT
APPROX_COUNT_DISTINCT_DS_HLL(event_type) AS unique_event_types
FROM kttm_data
WHERE os = 'iOS'
-GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
+GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
PARTITIONED BY HOUR
CLUSTERED BY browser, session
```
-
## INSERT for reindexing an existing datasource
diff --git a/docs/multi-stage-query/reference.md b/docs/multi-stage-query/reference.md
index a497afa3a71a..d71c58abbd1b 100644
--- a/docs/multi-stage-query/reference.md
+++ b/docs/multi-stage-query/reference.md
@@ -232,23 +232,25 @@ If you're using the web console, you can specify the context parameters through
The following table lists the context parameters for the MSQ task engine:
-| Parameter | Description | Default value |
-|---|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---|
-| `maxNumTasks` | SELECT, INSERT, REPLACE
The maximum total number of tasks to launch, including the controller task. The lowest possible value for this setting is 2: one controller and one worker. All tasks must be able to launch simultaneously. If they cannot, the query returns a `TaskStartTimeout` error code after approximately 10 minutes.
May also be provided as `numTasks`. If both are present, `maxNumTasks` takes priority. | 2 |
-| `taskAssignment` | SELECT, INSERT, REPLACE
Determines how many tasks to use. Possible values include:
- `max`: Uses as many tasks as possible, up to `maxNumTasks`.
- `auto`: When file sizes can be determined through directory listing (for example: local files, S3, GCS, HDFS) uses as few tasks as possible without exceeding 512 MiB or 10,000 files per task, unless exceeding these limits is necessary to stay within `maxNumTasks`. When calculating the size of files, the weighted size is used, which considers the file format and compression format used if any. When file sizes cannot be determined through directory listing (for example: http), behaves the same as `max`.
| `max` |
-| `finalizeAggregations` | SELECT, INSERT, REPLACE
Determines the type of aggregation to return. If true, Druid finalizes the results of complex aggregations that directly appear in query results. If false, Druid returns the aggregation's intermediate type rather than finalized type. This parameter is useful during ingestion, where it enables storing sketches directly in Druid tables. For more information about aggregations, see [SQL aggregation functions](../querying/sql-aggregations.md). | true |
-| `sqlJoinAlgorithm` | SELECT, INSERT, REPLACE
Algorithm to use for JOIN. Use `broadcast` (the default) for broadcast hash join or `sortMerge` for sort-merge join. Affects all JOIN operations in the query. This is a hint to the MSQ engine and the actual joins in the query may proceed in a different way than specified. See [Joins](#joins) for more details. | `broadcast` |
-| `rowsInMemory` | INSERT or REPLACE
Maximum number of rows to store in memory at once before flushing to disk during the segment generation process. Ignored for non-INSERT queries. In most cases, use the default value. You may need to override the default if you run into one of the [known issues](./known-issues.md) around memory usage. | 100,000 |
+| Parameter | Description | Default value |
+|---|---|---|
+| `maxNumTasks` | SELECT, INSERT, REPLACE
The maximum total number of tasks to launch, including the controller task. The lowest possible value for this setting is 2: one controller and one worker. All tasks must be able to launch simultaneously. If they cannot, the query returns a `TaskStartTimeout` error code after approximately 10 minutes.
May also be provided as `numTasks`. If both are present, `maxNumTasks` takes priority. | 2 |
+| `taskAssignment` | SELECT, INSERT, REPLACE
Determines how many tasks to use. Possible values include:
- `max`: Uses as many tasks as possible, up to `maxNumTasks`.
- `auto`: When file sizes can be determined through directory listing (for example: local files, S3, GCS, HDFS) uses as few tasks as possible without exceeding 512 MiB or 10,000 files per task, unless exceeding these limits is necessary to stay within `maxNumTasks`. When calculating the size of files, the weighted size is used, which considers the file format and compression format used if any. When file sizes cannot be determined through directory listing (for example: http), behaves the same as `max`.
| `max` |
+| `finalizeAggregations` | SELECT, INSERT, REPLACE
Determines the type of aggregation to return. If true, Druid finalizes the results of complex aggregations that directly appear in query results. If false, Druid returns the aggregation's intermediate type rather than finalized type. This parameter is useful during ingestion, where it enables storing sketches directly in Druid tables. For more information about aggregations, see [SQL aggregation functions](../querying/sql-aggregations.md). | true |
+| `arrayIngestMode` | INSERT, REPLACE
Controls how ARRAY type values are stored in Druid segments. When set to `array` (recommended for SQL compliance), Druid stores all ARRAY typed values in [ARRAY typed columns](../querying/arrays.md) and supports both VARCHAR and numeric typed arrays. When set to `mvd` (the default, for backwards compatibility), Druid supports only VARCHAR typed arrays and stores them as [multi-value string columns](../querying/multi-value-dimensions.md). When set to `none`, Druid throws an exception when you attempt to store any type of array. `none` is most useful when set in the system default query context (`druid.query.default.context.arrayIngestMode=none`): it helps operators migrate from `mvd` mode to `array` mode by forcing query writers to make an explicit choice between ARRAY and multi-value VARCHAR typed columns. | `mvd` (for backwards compatibility; `array` is recommended for SQL compliance) |
+| `sqlJoinAlgorithm` | SELECT, INSERT, REPLACE
Algorithm to use for JOIN. Use `broadcast` (the default) for broadcast hash join or `sortMerge` for sort-merge join. Affects all JOIN operations in the query. This is a hint to the MSQ engine and the actual joins in the query may proceed in a different way than specified. See [Joins](#joins) for more details. | `broadcast` |
+| `rowsInMemory` | INSERT or REPLACE
Maximum number of rows to store in memory at once before flushing to disk during the segment generation process. Ignored for non-INSERT queries. In most cases, use the default value. You may need to override the default if you run into one of the [known issues](./known-issues.md) around memory usage. | 100,000 |
| `segmentSortOrder` | INSERT or REPLACE
Normally, Druid sorts rows in individual segments using `__time` first, followed by the [CLUSTERED BY](#clustered-by) clause. When you set `segmentSortOrder`, Druid sorts rows in segments using this column list first, followed by the CLUSTERED BY order.
You provide the column list as comma-separated values or as a JSON array in string form. If your query includes `__time`, then this list must begin with `__time`. For example, consider an INSERT query that uses `CLUSTERED BY country` and has `segmentSortOrder` set to `__time,city`. Within each time chunk, Druid assigns rows to segments based on `country`, and then within each of those segments, Druid sorts those rows by `__time` first, then `city`, then `country`. | empty list |
-| `maxParseExceptions`| SELECT, INSERT, REPLACE
Maximum number of parse exceptions that are ignored while executing the query before it stops with `TooManyWarningsFault`. To ignore all the parse exceptions, set the value to -1. | 0 |
-| `rowsPerSegment` | INSERT or REPLACE
The number of rows per segment to target. The actual number of rows per segment may be somewhat higher or lower than this number. In most cases, use the default. For general information about sizing rows per segment, see [Segment Size Optimization](../operations/segment-optimization.md). | 3,000,000 |
-| `indexSpec` | INSERT or REPLACE
An [`indexSpec`](../ingestion/ingestion-spec.md#indexspec) to use when generating segments. May be a JSON string or object. See [Front coding](../ingestion/ingestion-spec.md#front-coding) for details on configuring an `indexSpec` with front coding. | See [`indexSpec`](../ingestion/ingestion-spec.md#indexspec). |
-| `durableShuffleStorage` | SELECT, INSERT, REPLACE
Whether to use durable storage for shuffle mesh. To use this feature, configure the durable storage at the server level using `druid.msq.intermediate.storage.enable=true`). If these properties are not configured, any query with the context variable `durableShuffleStorage=true` fails with a configuration error.
| `false` |
-| `faultTolerance` | SELECT, INSERT, REPLACE
Whether to turn on fault tolerance mode or not. Failed workers are retried based on [Limits](#limits). Cannot be used when `durableShuffleStorage` is explicitly set to false. | `false` |
-| `selectDestination` | SELECT
Controls where the final result of the select query is written.
Use `taskReport`(the default) to write select results to the task report. This is not scalable since task reports size explodes for large results
Use `durableStorage` to write results to durable storage location. For large results sets, its recommended to use `durableStorage` . To configure durable storage see [`this`](#durable-storage) section. | `taskReport` |
-| `waitUntilSegmentsLoad` | INSERT, REPLACE
If set, the ingest query waits for the generated segment to be loaded before exiting, else the ingest query exits without waiting. The task and live reports contain the information about the status of loading segments if this flag is set. This will ensure that any future queries made after the ingestion exits will include results from the ingestion. The drawback is that the controller task will stall until the segments are loaded. | `false` |
-| `includeSegmentSource` | SELECT, INSERT, REPLACE
Controls the sources, which will be queried for results in addition to the segments present on deep storage. Can be `NONE` or `REALTIME`. If this value is `NONE`, only non-realtime (published and used) segments will be downloaded from deep storage. If this value is `REALTIME`, results will also be included from realtime tasks. | `NONE` |
-| `rowsPerPage` | SELECT
The number of rows per page to target. The actual number of rows per page may be somewhat higher or lower than this number. In most cases, use the default.
This property comes into effect only when `selectDestination` is set to `durableStorage` | 100000 |
+| `maxParseExceptions`| SELECT, INSERT, REPLACE
Maximum number of parse exceptions that are ignored while executing the query before it stops with `TooManyWarningsFault`. To ignore all the parse exceptions, set the value to -1. | 0 |
+| `rowsPerSegment` | INSERT or REPLACE
The number of rows per segment to target. The actual number of rows per segment may be somewhat higher or lower than this number. In most cases, use the default. For general information about sizing rows per segment, see [Segment Size Optimization](../operations/segment-optimization.md). | 3,000,000 |
+| `indexSpec` | INSERT or REPLACE
An [`indexSpec`](../ingestion/ingestion-spec.md#indexspec) to use when generating segments. May be a JSON string or object. See [Front coding](../ingestion/ingestion-spec.md#front-coding) for details on configuring an `indexSpec` with front coding. | See [`indexSpec`](../ingestion/ingestion-spec.md#indexspec). |
+| `durableShuffleStorage` | SELECT, INSERT, REPLACE
Whether to use durable storage for shuffle mesh. To use this feature, configure durable storage at the server level by setting `druid.msq.intermediate.storage.enable=true`. If this property is not configured, any query with the context variable `durableShuffleStorage=true` fails with a configuration error.
| `false` |
+| `faultTolerance` | SELECT, INSERT, REPLACE
Whether to turn on fault tolerance mode or not. Failed workers are retried based on [Limits](#limits). Cannot be used when `durableShuffleStorage` is explicitly set to false. | `false` |
+| `selectDestination` | SELECT
Controls where the final result of the select query is written.
Use `taskReport` (the default) to write select results to the task report. This is not scalable, since the task report size grows very large for large result sets.
Use `durableStorage` to write results to a durable storage location. For large result sets, `durableStorage` is recommended. To configure durable storage, see the [durable storage](#durable-storage) section. | `taskReport` |
+| `waitUntilSegmentsLoad` | INSERT, REPLACE
If set, the ingest query waits for the generated segments to be loaded before exiting; otherwise, the ingest query exits without waiting. If this flag is set, the task and live reports contain information about the status of segment loading. This ensures that any queries made after the ingestion exits include results from the ingestion. The drawback is that the controller task stalls until the segments are loaded. | `false` |
+| `includeSegmentSource` | SELECT, INSERT, REPLACE
Controls the sources that are queried for results, in addition to the segments present on deep storage. Can be `NONE` or `REALTIME`. If this value is `NONE`, only non-realtime (published and used) segments are downloaded from deep storage. If this value is `REALTIME`, results also include data from realtime tasks. | `NONE` |
+| `rowsPerPage` | SELECT
The number of rows per page to target. The actual number of rows per page may be somewhat higher or lower than this number. In most cases, use the default.
This property takes effect only when `selectDestination` is set to `durableStorage`. | 100000 |
+
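+As an illustration, the following sketch shows how a few of these parameters might be passed in the `context` of a request to the SQL task API (`/druid/v2/sql/task`). The query and the parameter values here are illustrative only, not recommendations:
+
+```json
+{
+  "query": "INSERT INTO target SELECT * FROM source PARTITIONED BY DAY",
+  "context": {
+    "maxNumTasks": 5,
+    "finalizeAggregations": false,
+    "arrayIngestMode": "array"
+  }
+}
+```
+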
## Joins
diff --git a/docs/querying/arrays.md b/docs/querying/arrays.md
new file mode 100644
index 000000000000..904802c2b1fc
--- /dev/null
+++ b/docs/querying/arrays.md
@@ -0,0 +1,253 @@
+---
+id: arrays
+title: "Arrays"
+---
+
+
+
+
+Apache Druid supports SQL standard `ARRAY` typed columns for `VARCHAR`, `BIGINT`, and `DOUBLE` types (native types `ARRAY<STRING>`, `ARRAY<LONG>`, and `ARRAY<DOUBLE>`). Other more complicated ARRAY types must be stored in [nested columns](nested-columns.md). Druid ARRAY types are distinct from [multi-value dimensions](multi-value-dimensions.md), which have significantly different behavior than standard arrays.
+
+This document describes inserting, filtering, and grouping behavior for `ARRAY` typed columns.
+Refer to the [Druid SQL data type documentation](sql-data-types.md#arrays) and [SQL array function reference](sql-array-functions.md) for additional details
+about the functions available to use with ARRAY columns and types in SQL.
+
+The following sections describe inserting, filtering, and grouping behavior based on the following example data, which includes 3 array typed columns:
+
+```json lines
+{"timestamp": "2023-01-01T00:00:00", "label": "row1", "arrayString": ["a", "b"], "arrayLong":[1, null,3], "arrayDouble":[1.1, 2.2, null]}
+{"timestamp": "2023-01-01T00:00:00", "label": "row2", "arrayString": [null, "b"], "arrayLong":null, "arrayDouble":[999, null, 5.5]}
+{"timestamp": "2023-01-01T00:00:00", "label": "row3", "arrayString": [], "arrayLong":[1, 2, 3], "arrayDouble":[null, 2.2, 1.1]}
+{"timestamp": "2023-01-01T00:00:00", "label": "row4", "arrayString": ["a", "b"], "arrayLong":[1, 2, 3], "arrayDouble":[]}
+{"timestamp": "2023-01-01T00:00:00", "label": "row5", "arrayString": null, "arrayLong":[], "arrayDouble":null}
+```
+
+## Ingesting arrays
+
+### Native batch and streaming ingestion
+When using native [batch](../ingestion/native-batch.md) or streaming ingestion such as with [Apache Kafka](../development/extensions-core/kafka-ingestion.md), arrays can be ingested using the [`"auto"`](../ingestion/ingestion-spec.md#dimension-objects) type dimension schema which is shared with [type-aware schema discovery](../ingestion/schema-design.md#type-aware-schema-discovery).
+
+When ingesting from TSV or CSV data, you can specify the array delimiters using the `listDelimiter` field in the `inputFormat`. JSON data must be formatted as a JSON array to be ingested as an array type. JSON data does not require `inputFormat` configuration.
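+
+For example, a minimal sketch of a `tsv` `inputFormat` with a custom list delimiter, assuming the example data above arrived as tab-separated rows whose array fields are pipe-delimited (the delimiter choice is illustrative):
+
+```json
+{
+  "type": "tsv",
+  "findColumnsFromHeader": true,
+  "listDelimiter": "|"
+}
+```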
+
+The following shows an example `dimensionsSpec` for native ingestion of the data used in this document:
+
+```
+"dimensions": [
+ {
+ "type": "auto",
+ "name": "label"
+ },
+ {
+ "type": "auto",
+ "name": "arrayString"
+ },
+ {
+ "type": "auto",
+ "name": "arrayLong"
+ },
+ {
+ "type": "auto",
+ "name": "arrayDouble"
+ }
+],
+```
+
+### SQL-based ingestion
+
+Arrays can also be inserted with [SQL-based ingestion](../multi-stage-query/index.md) when you include a query context parameter [`"arrayIngestMode":"array"`](../multi-stage-query/reference.md#context-parameters).
+
+For example, to insert the data used in this document:
+```sql
+REPLACE INTO "array_example" OVERWRITE ALL
+WITH "ext" AS (
+ SELECT *
+ FROM TABLE(
+ EXTERN(
+ '{"type":"inline","data":"{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row1\", \"arrayString\": [\"a\", \"b\"], \"arrayLong\":[1, null,3], \"arrayDouble\":[1.1, 2.2, null]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row2\", \"arrayString\": [null, \"b\"], \"arrayLong\":null, \"arrayDouble\":[999, null, 5.5]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row3\", \"arrayString\": [], \"arrayLong\":[1, 2, 3], \"arrayDouble\":[null, 2.2, 1.1]} \n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row4\", \"arrayString\": [\"a\", \"b\"], \"arrayLong\":[1, 2, 3], \"arrayDouble\":[]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row5\", \"arrayString\": null, \"arrayLong\":[], \"arrayDouble\":null}"}',
+ '{"type":"json"}',
+ '[{"name":"timestamp", "type":"STRING"},{"name":"label", "type":"STRING"},{"name":"arrayString", "type":"ARRAY"},{"name":"arrayLong", "type":"ARRAY"},{"name":"arrayDouble", "type":"ARRAY"}]'
+ )
+ )
+)
+SELECT
+ TIME_PARSE("timestamp") AS "__time",
+ "label",
+ "arrayString",
+ "arrayLong",
+ "arrayDouble"
+FROM "ext"
+PARTITIONED BY DAY
+```
+
+### SQL-based ingestion with rollup
+These input arrays can also be grouped for rollup:
+
+```sql
+REPLACE INTO "array_example_rollup" OVERWRITE ALL
+WITH "ext" AS (
+ SELECT *
+ FROM TABLE(
+ EXTERN(
+ '{"type":"inline","data":"{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row1\", \"arrayString\": [\"a\", \"b\"], \"arrayLong\":[1, null,3], \"arrayDouble\":[1.1, 2.2, null]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row2\", \"arrayString\": [null, \"b\"], \"arrayLong\":null, \"arrayDouble\":[999, null, 5.5]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row3\", \"arrayString\": [], \"arrayLong\":[1, 2, 3], \"arrayDouble\":[null, 2.2, 1.1]} \n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row4\", \"arrayString\": [\"a\", \"b\"], \"arrayLong\":[1, 2, 3], \"arrayDouble\":[]}\n{\"timestamp\": \"2023-01-01T00:00:00\", \"label\": \"row5\", \"arrayString\": null, \"arrayLong\":[], \"arrayDouble\":null}"}',
+ '{"type":"json"}',
+ '[{"name":"timestamp", "type":"STRING"},{"name":"label", "type":"STRING"},{"name":"arrayString", "type":"ARRAY"},{"name":"arrayLong", "type":"ARRAY"},{"name":"arrayDouble", "type":"ARRAY"}]'
+ )
+ )
+)
+SELECT
+ TIME_PARSE("timestamp") AS "__time",
+ "label",
+ "arrayString",
+ "arrayLong",
+ "arrayDouble",
+ COUNT(*) as "count"
+FROM "ext"
+GROUP BY 1,2,3,4,5
+PARTITIONED BY DAY
+```
+
+
+## Querying arrays
+
+### Filtering
+
+All query types, as well as [filtered aggregators](aggregations.md#filtered-aggregator), can filter on array typed columns. Filters follow these rules for array types:
+
+- All filters match against the entire array value for the row
+- Native value filters like [equality](filters.md#equality-filter) and [range](filters.md#range-filter) match on entire array values, as do SQL constructs that plan into these native filters
+- The [`IS NULL`](filters.md#null-filter) filter will match rows where the entire array value is null
+- [Array specific functions](sql-array-functions.md) like `ARRAY_CONTAINS` and `ARRAY_OVERLAP` follow the behavior specified by those functions
+- All other filters do not directly support ARRAY types and will result in a query error
+
+#### Example: equality
+```sql
+SELECT *
+FROM "array_example"
+WHERE arrayLong = ARRAY[1,2,3]
+```
+
+```json lines
+{"__time":"2023-01-01T00:00:00.000Z","label":"row3","arrayString":"[]","arrayLong":"[1,2,3]","arrayDouble":"[null,2.2,1.1]"}
+{"__time":"2023-01-01T00:00:00.000Z","label":"row4","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,2,3]","arrayDouble":"[]"}
+```
+
+#### Example: null
+```sql
+SELECT *
+FROM "array_example"
+WHERE arrayLong IS NULL
+```
+
+```json lines
+{"__time":"2023-01-01T00:00:00.000Z","label":"row2","arrayString":"[null,\"b\"]","arrayLong":null,"arrayDouble":"[999.0,null,5.5]"}
+```
+
+#### Example: range
+```sql
+SELECT *
+FROM "array_example"
+WHERE arrayString >= ARRAY['a','b']
+```
+
+```json lines
+{"__time":"2023-01-01T00:00:00.000Z","label":"row1","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,null,3]","arrayDouble":"[1.1,2.2,null]"}
+{"__time":"2023-01-01T00:00:00.000Z","label":"row4","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,2,3]","arrayDouble":"[]"}
+```
+
+#### Example: ARRAY_CONTAINS
+```sql
+SELECT *
+FROM "array_example"
+WHERE ARRAY_CONTAINS(arrayString, 'a')
+```
+
+```json lines
+{"__time":"2023-01-01T00:00:00.000Z","label":"row1","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,null,3]","arrayDouble":"[1.1,2.2,null]"}
+{"__time":"2023-01-01T00:00:00.000Z","label":"row4","arrayString":"[\"a\",\"b\"]","arrayLong":"[1,2,3]","arrayDouble":"[]"}
+```
+
+### Grouping
+
+When grouping on an array with SQL or a native [groupBy query](groupbyquery.md), grouping follows standard SQL behavior and groups on the entire array as a single value. The [`UNNEST`](sql.md#unnest) function allows grouping on the individual array elements.
+
+#### Example: SQL grouping query with no filtering
+```sql
+SELECT label, arrayString
+FROM "array_example"
+GROUP BY 1,2
+```
+results in:
+```json lines
+{"label":"row1","arrayString":"[\"a\",\"b\"]"}
+{"label":"row2","arrayString":"[null,\"b\"]"}
+{"label":"row3","arrayString":"[]"}
+{"label":"row4","arrayString":"[\"a\",\"b\"]"}
+{"label":"row5","arrayString":null}
+```
+
+#### Example: SQL grouping query with a filter
+```sql
+SELECT label, arrayString
+FROM "array_example"
+WHERE arrayLong = ARRAY[1,2,3]
+GROUP BY 1,2
+```
+
+results:
+```json lines
+{"label":"row3","arrayString":"[]"}
+{"label":"row4","arrayString":"[\"a\",\"b\"]"}
+```
+
+#### Example: UNNEST
+```sql
+SELECT label, strings
+FROM "array_example" CROSS JOIN UNNEST(arrayString) as u(strings)
+GROUP BY 1,2
+```
+
+results:
+```json lines
+{"label":"row1","strings":"a"}
+{"label":"row1","strings":"b"}
+{"label":"row2","strings":null}
+{"label":"row2","strings":"b"}
+{"label":"row4","strings":"a"}
+{"label":"row4","strings":"b"}
+```
+
+## Differences between arrays and multi-value dimensions
+Avoid confusing string arrays with [multi-value dimensions](multi-value-dimensions.md). Arrays and multi-value dimensions are stored in different column types, and query behavior is different. You can use the functions `MV_TO_ARRAY` and `ARRAY_TO_MV` to convert between the two if needed. In general, we recommend using arrays whenever possible, since they are a newer and more powerful feature and have SQL compliant behavior.
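+
+For example, a brief sketch of converting between the two types at query time, using the example table from this document (the output aliases are illustrative):
+
+```sql
+SELECT
+  ARRAY_TO_MV("arrayString") AS "stringsAsMvd",             -- ARRAY to multi-value VARCHAR
+  MV_TO_ARRAY(ARRAY_TO_MV("arrayString")) AS "backToArray"  -- and back to ARRAY
+FROM "array_example"
+```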
+
+Use care during ingestion to ensure you get the type you want.
+
+To get arrays when performing an ingestion using JSON ingestion specs, such as [native batch](../ingestion/native-batch.md) or streaming ingestion such as with [Apache Kafka](../development/extensions-core/kafka-ingestion.md), use dimension type `auto` or enable `useSchemaDiscovery`. When performing a [SQL-based ingestion](../multi-stage-query/index.md), write a query that generates arrays and set the context parameter `"arrayIngestMode": "array"`. Arrays may contain strings or numbers.
+
+To get multi-value dimensions when performing an ingestion using JSON ingestion specs, use dimension type `string` and do not enable `useSchemaDiscovery`. When performing a [SQL-based ingestion](../multi-stage-query/index.md), wrap arrays in [`ARRAY_TO_MV`](multi-value-dimensions.md#sql-based-ingestion), which ensures you get multi-value dimensions in any `arrayIngestMode`. Multi-value dimensions can only contain strings.
+
+You can tell which type you have by checking the `INFORMATION_SCHEMA.COLUMNS` table, using a query like:
+
+```sql
+SELECT COLUMN_NAME, DATA_TYPE
+FROM INFORMATION_SCHEMA.COLUMNS
+WHERE TABLE_NAME = 'mytable'
+```
+
+Arrays are type `ARRAY`; multi-value strings are type `VARCHAR`.
\ No newline at end of file
diff --git a/docs/querying/multi-value-dimensions.md b/docs/querying/multi-value-dimensions.md
index f1081d3f4323..9680d5603974 100644
--- a/docs/querying/multi-value-dimensions.md
+++ b/docs/querying/multi-value-dimensions.md
@@ -30,21 +30,37 @@ array of values instead of a single value, such as the `tags` values in the foll
{"timestamp": "2011-01-12T00:00:00.000Z", "tags": ["t1","t2","t3"]}
```
-This document describes filtering and grouping behavior for multi-value dimensions. For information about the internal representation of multi-value dimensions, see
+It is important to be aware that multi-value dimensions are distinct from [array types](arrays.md). While array types behave like standard SQL arrays, multi-value dimensions do not. The [SQL data type documentation](sql-data-types.md#multi-value-strings-behavior) covers these differences in more detail.
+
+This document describes inserting, filtering, and grouping behavior for multi-value dimensions. For information about the internal representation of multi-value dimensions, see
[segments documentation](../design/segments.md#multi-value-columns). Examples in this document
-are in the form of [native Druid queries](querying.md). Refer to the [Druid SQL documentation](sql-multivalue-string-functions.md) for details
-about using multi-value string dimensions in SQL.
+are in the form of both [SQL](sql.md) and [native Druid queries](querying.md). Refer to the [Druid SQL documentation](sql-multivalue-string-functions.md) for details
+about the functions available for using multi-value string dimensions in SQL.
+
+The following sections describe inserting, filtering, and grouping behavior based on the following example data, which includes a multi-value dimension, `tags`.
+
+```json lines
+{"timestamp": "2011-01-12T00:00:00.000Z", "label": "row1", "tags": ["t1","t2","t3"]}
+{"timestamp": "2011-01-13T00:00:00.000Z", "label": "row2", "tags": ["t3","t4","t5"]}
+{"timestamp": "2011-01-14T00:00:00.000Z", "label": "row3", "tags": ["t5","t6","t7"]}
+{"timestamp": "2011-01-14T00:00:00.000Z", "label": "row4", "tags": []}
+```
-## Overview
+## Ingestion
-At ingestion time, Druid can detect multi-value dimensions and configure the `dimensionsSpec` accordingly. It detects JSON arrays or CSV/TSV fields as multi-value dimensions.
+### Native batch and streaming ingestion
+When using native [batch](../ingestion/native-batch.md) or streaming ingestion such as with [Apache Kafka](../development/extensions-core/kafka-ingestion.md), the Druid web console data loader can detect multi-value dimensions and configure the `dimensionsSpec` accordingly.
-For TSV or CSV data, you can specify the multi-value delimiters using the `listDelimiter` field in the `parseSpec`. JSON data must be formatted as a JSON array to be ingested as a multi-value dimension. JSON data does not require `parseSpec` configuration.
+For TSV or CSV data, you can specify the multi-value delimiters using the `listDelimiter` field in the `inputFormat`. JSON data must be formatted as a JSON array to be ingested as a multi-value dimension. JSON data does not require `inputFormat` configuration.
-The following shows an example multi-value dimension named `tags` in a `dimensionsSpec`:
+The following shows an example `dimensionsSpec` for native ingestion of the data used in this document:
```
"dimensions": [
+ {
+ "type": "string",
+ "name": "label"
+ },
{
"type": "string",
"name": "tags",
@@ -61,20 +77,81 @@ By default, Druid sorts values in multi-value dimensions. This behavior is contr
See [Dimension Objects](../ingestion/ingestion-spec.md#dimension-objects) for information on configuring multi-value handling.
+### SQL-based ingestion
+Multi-value dimensions can also be inserted with [SQL-based ingestion](../multi-stage-query/index.md). The functions `MV_TO_ARRAY` and `ARRAY_TO_MV` can assist in converting `VARCHAR` to `VARCHAR ARRAY` and `VARCHAR ARRAY` to `VARCHAR`, respectively. `multiValueHandling` is not available when using the multi-stage query engine to insert data.
+
+For example, to insert the data used in this document:
+```sql
+REPLACE INTO "mvd_example" OVERWRITE ALL
+WITH "ext" AS (
+ SELECT *
+ FROM TABLE(
+ EXTERN(
+ '{"type":"inline","data":"{\"timestamp\": \"2011-01-12T00:00:00.000Z\", \"label\": \"row1\", \"tags\": [\"t1\",\"t2\",\"t3\"]}\n{\"timestamp\": \"2011-01-13T00:00:00.000Z\", \"label\": \"row2\", \"tags\": [\"t3\",\"t4\",\"t5\"]}\n{\"timestamp\": \"2011-01-14T00:00:00.000Z\", \"label\": \"row3\", \"tags\": [\"t5\",\"t6\",\"t7\"]}\n{\"timestamp\": \"2011-01-14T00:00:00.000Z\", \"label\": \"row4\", \"tags\": []}"}',
+ '{"type":"json"}',
+ '[{"name":"timestamp", "type":"STRING"},{"name":"label", "type":"STRING"},{"name":"tags", "type":"ARRAY"}]'
+ )
+ )
+)
+SELECT
+ TIME_PARSE("timestamp") AS "__time",
+ "label",
+ ARRAY_TO_MV("tags") AS "tags"
+FROM "ext"
+PARTITIONED BY DAY
+```
-## Querying multi-value dimensions
-
-The following sections describe filtering and grouping behavior based on the following example data, which includes a multi-value dimension, `tags`.
-
+### SQL-based ingestion with rollup
+These input arrays can also be grouped before being converted into a multi-value dimension:
+```sql
+REPLACE INTO "mvd_example_rollup" OVERWRITE ALL
+WITH "ext" AS (
+ SELECT *
+ FROM TABLE(
+ EXTERN(
+ '{"type":"inline","data":"{\"timestamp\": \"2011-01-12T00:00:00.000Z\", \"label\": \"row1\", \"tags\": [\"t1\",\"t2\",\"t3\"]}\n{\"timestamp\": \"2011-01-13T00:00:00.000Z\", \"label\": \"row2\", \"tags\": [\"t3\",\"t4\",\"t5\"]}\n{\"timestamp\": \"2011-01-14T00:00:00.000Z\", \"label\": \"row3\", \"tags\": [\"t5\",\"t6\",\"t7\"]}\n{\"timestamp\": \"2011-01-14T00:00:00.000Z\", \"label\": \"row4\", \"tags\": []}"}',
+ '{"type":"json"}',
+ '[{"name":"timestamp", "type":"STRING"},{"name":"label", "type":"STRING"},{"name":"tags", "type":"ARRAY"}]'
+ )
+ )
+)
+SELECT
+ TIME_PARSE("timestamp") AS "__time",
+ "label",
+ ARRAY_TO_MV("tags") AS "tags",
+ COUNT(*) AS "count"
+FROM "ext"
+GROUP BY 1, 2, "tags"
+PARTITIONED BY DAY
```
-{"timestamp": "2011-01-12T00:00:00.000Z", "tags": ["t1","t2","t3"]} #row1
-{"timestamp": "2011-01-13T00:00:00.000Z", "tags": ["t3","t4","t5"]} #row2
-{"timestamp": "2011-01-14T00:00:00.000Z", "tags": ["t5","t6","t7"]} #row3
-{"timestamp": "2011-01-14T00:00:00.000Z", "tags": []} #row4
+
+Notice that `ARRAY_TO_MV` is not present in the `GROUP BY` clause since we only wish to coerce the type _after_ grouping.
+
+
+The `EXTERN` function can also declare the `tags` input type as `VARCHAR`, which is how a query on a Druid table containing a multi-value dimension reports the type of the `tags` column. In that case, you must use `MV_TO_ARRAY`, since the multi-stage query engine only supports grouping on multi-value dimensions as arrays, so they must be coerced first. The arrays must then be coerced back into `VARCHAR` in the `SELECT` part of the statement with `ARRAY_TO_MV`.
+
+```sql
+REPLACE INTO "mvd_example_rollup" OVERWRITE ALL
+WITH "ext" AS (
+ SELECT *
+ FROM TABLE(
+ EXTERN(
+ '{"type":"inline","data":"{\"timestamp\": \"2011-01-12T00:00:00.000Z\", \"label\": \"row1\", \"tags\": [\"t1\",\"t2\",\"t3\"]}\n{\"timestamp\": \"2011-01-13T00:00:00.000Z\", \"label\": \"row2\", \"tags\": [\"t3\",\"t4\",\"t5\"]}\n{\"timestamp\": \"2011-01-14T00:00:00.000Z\", \"label\": \"row3\", \"tags\": [\"t5\",\"t6\",\"t7\"]}\n{\"timestamp\": \"2011-01-14T00:00:00.000Z\", \"label\": \"row4\", \"tags\": []}"}',
+ '{"type":"json"}'
+ )
+ ) EXTEND ("timestamp" VARCHAR, "label" VARCHAR, "tags" VARCHAR)
+)
+SELECT
+ TIME_PARSE("timestamp") AS "__time",
+ "label",
+ ARRAY_TO_MV(MV_TO_ARRAY("tags")) AS "tags",
+ COUNT(*) AS "count"
+FROM "ext"
+GROUP BY 1, 2, MV_TO_ARRAY("tags")
+PARTITIONED BY DAY
```
-:::info
- Be sure to remove the comments before trying out the sample data.
-:::
+
+## Querying multi-value dimensions
### Filtering
@@ -88,28 +165,22 @@ dimensions. Filters follow these rules on multi-value dimensions:
- Logical expression filters behave the same way they do on single-value dimensions: "and" matches a row if all
underlying filters match that row; "or" matches a row if any underlying filters match that row; "not" matches a row
if the underlying filter does not match the row.
-
+
The following example illustrates these rules. This query applies an "or" filter to match row1 and row2 of the dataset above, but not row3:
+```sql
+SELECT *
+FROM "mvd_example_rollup"
+WHERE tags = 't1' OR tags = 't3'
```
-{
- "type": "or",
- "fields": [
- {
- "type": "selector",
- "dimension": "tags",
- "value": "t1"
- },
- {
- "type": "selector",
- "dimension": "tags",
- "value": "t3"
- }
- ]
-}
+
+returns:
+```json lines
+{"__time":"2011-01-12T00:00:00.000Z","label":"row1","tags":"[\"t1\",\"t2\",\"t3\"]","count":1}
+{"__time":"2011-01-13T00:00:00.000Z","label":"row2","tags":"[\"t3\",\"t4\",\"t5\"]","count":1}
```
-This "and" filter would match only row1 of the dataset above:
+Native queries can also perform filtering that would be considered a "contradiction" in SQL, such as this "and" filter, which matches only row1 of the dataset above:
```
{
@@ -129,26 +200,73 @@ This "and" filter would match only row1 of the dataset above:
}
```
-This "selector" filter would match row4 of the dataset above:
+which returns:
+```json lines
+{"__time":"2011-01-12T00:00:00.000Z","label":"row1","tags":"[\"t1\",\"t2\",\"t3\"]","count":1}
+```
+Multi-value dimensions also treat an empty array as `null`. Consider:
+```sql
+SELECT *
+FROM "mvd_example_rollup"
+WHERE tags is null
```
-{
- "type": "selector",
- "dimension": "tags",
- "value": null
-}
+
+which results in:
+```json lines
+{"__time":"2011-01-14T00:00:00.000Z","label":"row4","tags":null,"count":1}
```
### Grouping
-topN and groupBy queries can group on multi-value dimensions. When grouping on a multi-value dimension, _all_ values
+When grouping on a multi-value dimension with SQL or a native [topN](topnquery.md) or [groupBy](groupbyquery.md) query, _all_ values
from matching rows will be used to generate one group per value. This behaves similarly to an implicit SQL `UNNEST`
operation. This means it's possible for a query to return more groups than there are rows. For example, a topN on the
dimension `tags` with filter `"t1" AND "t3"` would match only row1, and generate a result with three groups:
-`t1`, `t2`, and `t3`. If you only need to include values that match your filter, you can use a
-[filtered dimensionSpec](dimensionspecs.md#filtered-dimensionspecs). This can also improve performance.
+`t1`, `t2`, and `t3`.
+
+If you only need to include values that match your filter, you can use the SQL functions [`MV_FILTER_ONLY`/`MV_FILTER_NONE`](sql-multivalue-string-functions.md),
+a [filtered virtual column](virtual-columns.md#list-filtered-virtual-column), or a [filtered dimensionSpec](dimensionspecs.md#filtered-dimensionspecs). This can also improve performance.
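+
+For example, a sketch that uses `MV_FILTER_ONLY` against the example table so that only the filtered value survives the implicit unnesting:
+
+```sql
+SELECT label, MV_FILTER_ONLY(tags, ARRAY['t3']) AS tags
+FROM "mvd_example_rollup"
+WHERE tags = 't3'
+GROUP BY 1, 2
+```
+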
-## Example: GroupBy query with no filtering
+#### Example: SQL grouping query with no filtering
+```sql
+SELECT label, tags
+FROM "mvd_example_rollup"
+GROUP BY 1,2
+```
+results in:
+```json lines
+{"label":"row1","tags":"t1"}
+{"label":"row1","tags":"t2"}
+{"label":"row1","tags":"t3"}
+{"label":"row2","tags":"t3"}
+{"label":"row2","tags":"t4"}
+{"label":"row2","tags":"t5"}
+{"label":"row3","tags":"t5"}
+{"label":"row3","tags":"t6"}
+{"label":"row3","tags":"t7"}
+{"label":"row4","tags":null}
+```
+
+#### Example: SQL grouping query with a filter
+```sql
+SELECT label, tags
+FROM "mvd_example_rollup"
+WHERE tags = 't3'
+GROUP BY 1,2
+```
+
+results:
+```json lines
+{"label":"row1","tags":"t1"}
+{"label":"row1","tags":"t2"}
+{"label":"row1","tags":"t3"}
+{"label":"row2","tags":"t3"}
+{"label":"row2","tags":"t4"}
+{"label":"row2","tags":"t5"}
+```
+
+#### Example: native GroupBy query with no filtering
See [GroupBy querying](groupbyquery.md) for details.
@@ -236,7 +354,7 @@ This query returns the following result:
Notice that original rows are "exploded" into multiple rows and merged.
-## Example: GroupBy query with a selector query filter
+#### Example: native GroupBy query with a selector query filter
See [query filters](filters.md) for details of selector query filter.
@@ -314,11 +432,11 @@ This query returns the following result:
```
You might be surprised to see "t1", "t2", "t4" and "t5" included in the results. This is because the query filter is
-applied on the row before explosion. For multi-value dimensions, a selector filter for "t3" would match row1 and row2,
+applied on the row before explosion. For multi-value dimensions, a filter for value "t3" would match row1 and row2,
after which exploding is done. For multi-value dimensions, a query filter matches a row if any individual value inside
the multiple values matches the query filter.
-## Example: GroupBy query with selector query and dimension filters
+#### Example: native GroupBy query with selector query and dimension filters
To solve the problem above and to get only rows for "t3", use a "filtered dimension spec", as in the query below.
@@ -379,7 +497,26 @@ Having specs are applied at the outermost level of groupBy query processing.
## Disable GroupBy on multi-value columns
-You can disable the implicit unnesting behavior for groupBy by setting groupByEnableMultiValueUnnesting: false in your
-query context. In this mode, the groupBy engine will return an error instead of completing the query. This is a safety
+You can disable the implicit unnesting behavior for groupBy by setting `groupByEnableMultiValueUnnesting: false` in your
+[query context](query-context.md). In this mode, the groupBy engine will return an error instead of completing the query. This is a safety
feature for situations where you believe that all dimensions are singly-valued and want the engine to reject any
-multi-valued dimensions that were inadvertently included.
\ No newline at end of file
+multi-valued dimensions that were inadvertently included.
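+
+For example, a minimal sketch of this safety setting in the query context (illustrative; a groupBy on a multi-value dimension then fails with an error instead of silently unnesting):
+
+```json
+{
+  "groupByEnableMultiValueUnnesting": false
+}
+```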
+
+## Differences between arrays and multi-value dimensions
+Avoid confusing [string arrays](arrays.md) with multi-value dimensions. Arrays and multi-value dimensions are stored in different column types, and query behavior is different. You can use the functions `MV_TO_ARRAY` and `ARRAY_TO_MV` to convert between the two if needed. In general, we recommend using arrays whenever possible, since they are a newer and more powerful feature and have SQL compliant behavior.
+
+Use care during ingestion to ensure you get the type you want.
+
+To get arrays when performing an ingestion using JSON ingestion specs, such as [native batch](../ingestion/native-batch.md) or streaming ingestion such as with [Apache Kafka](../development/extensions-core/kafka-ingestion.md), use dimension type `auto` or enable `useSchemaDiscovery`. When performing a [SQL-based ingestion](../multi-stage-query/index.md), write a query that generates arrays and set the context parameter `"arrayIngestMode": "array"`. Arrays may contain strings or numbers.
+
+To get multi-value dimensions when performing an ingestion using JSON ingestion specs, use dimension type `string` and do not enable `useSchemaDiscovery`. When performing a [SQL-based ingestion](../multi-stage-query/index.md), wrap arrays in [`ARRAY_TO_MV`](multi-value-dimensions.md#sql-based-ingestion), which ensures you get multi-value dimensions in any `arrayIngestMode`. Multi-value dimensions can only contain strings.
+
+You can tell which type you have by checking the `INFORMATION_SCHEMA.COLUMNS` table, using a query like:
+
+```sql
+SELECT COLUMN_NAME, DATA_TYPE
+FROM INFORMATION_SCHEMA.COLUMNS
+WHERE TABLE_NAME = 'mytable'
+```
+
+Arrays are type `ARRAY`; multi-value strings are type `VARCHAR`.
\ No newline at end of file
diff --git a/docs/querying/post-aggregations.md b/docs/querying/post-aggregations.md
index 74c23065e748..169ab9d4bc50 100644
--- a/docs/querying/post-aggregations.md
+++ b/docs/querying/post-aggregations.md
@@ -38,49 +38,54 @@ There are several post-aggregators available.
The arithmetic post-aggregator applies the provided function to the given
fields from left to right. The fields can be aggregators or other post aggregators.
-Supported functions are `+`, `-`, `*`, `/`, `pow` and `quotient`.
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be `"arithmetic"`. | Yes |
+| `name` | Output name of the post-aggregation | Yes |
+| `fn`| Supported functions are `+`, `-`, `*`, `/`, `pow` and `quotient` | Yes |
+| `fields` | List of post-aggregator specs which define inputs to the `fn` | Yes |
+| `ordering` | If no ordering (or `null`) is specified, the default floating point ordering is used. `numericFirst` ordering always returns finite values first, followed by `NaN`, and infinite values last. | No |
-**Note**:
+**Note**:
* `/` division always returns `0` if dividing by`0`, regardless of the numerator.
* `quotient` division behaves like regular floating point division
* Arithmetic post-aggregators always use floating point arithmetic.
-Arithmetic post-aggregators may also specify an `ordering`, which defines the order
-of resulting values when sorting results (this can be useful for topN queries for instance):
-
-- If no ordering (or `null`) is specified, the default floating point ordering is used.
-- `numericFirst` ordering always returns finite values first, followed by `NaN`, and infinite values last.
-
-The grammar for an arithmetic post aggregation is:
+Example:
```json
-postAggregation : {
+{
"type" : "arithmetic",
- "name" : ,
- "fn" : ,
- "fields": [, , ...],
- "ordering" :
+ "name" : "mult",
+ "fn" : "*",
+ "fields": [
+ {"type": "fieldAccess", "fieldName": "someAgg"},
+ {"type": "fieldAccess", "fieldName": "someOtherAgg"}
+ ]
}
```
### Field accessor post-aggregators
-These post-aggregators return the value produced by the specified [aggregator](../querying/aggregations.md).
+These post-aggregators return the value produced by the specified [dimension](../querying/dimensionspecs.md) or [aggregator](../querying/aggregations.md).
-`fieldName` refers to the output name of the aggregator given in the [aggregations](../querying/aggregations.md) portion of the query.
-For complex aggregators, like "cardinality" and "hyperUnique", the `type` of the post-aggregator determines what
-the post-aggregator will return. Use type "fieldAccess" to return the raw aggregation object, or use type
-"finalizingFieldAccess" to return a finalized value, such as an estimated cardinality.
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be `"fieldAccess"` or `"finalizingFieldAccess"`. Use type `"fieldAccess"` to return the raw aggregation object, or use type `"finalizingFieldAccess"` to return a finalized value, such as an estimated cardinality. | Yes |
+| `name` | Output name of the post-aggregation | Yes if defined as a standalone post-aggregation, but may be omitted if used inline in some other post-aggregator's `fields` list |
+| `fieldName` | The output name of the dimension or aggregator to reference | Yes |
+
+Example:
```json
-{ "type" : "fieldAccess", "name": , "fieldName" : }
+{ "type" : "fieldAccess", "name": "someField", "fieldName" : "someAggregator" }
```
or
```json
-{ "type" : "finalizingFieldAccess", "name": , "fieldName" : }
+{ "type" : "finalizingFieldAccess", "name": "someFinalizedField", "fieldName" : "someAggregator" }
```
@@ -88,29 +93,52 @@ or
The constant post-aggregator always returns the specified value.
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be `"constant"` | Yes |
+| `name` | Output name of the post-aggregation | Yes |
+| `value` | The constant value | Yes |
+
+Example:
+
```json
-{ "type" : "constant", "name" : , "value" : }
+{ "type" : "constant", "name" : "someConstant", "value" : 1234 }
```
### Expression post-aggregator
The expression post-aggregator is defined using a Druid [expression](math-expr.md).
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be `"expression"` | Yes |
+| `name` | Output name of the post-aggregation | Yes |
+| `expression` | Native Druid [expression](math-expr.md) to compute, may refer to any dimension or aggregator output names | Yes |
+| `ordering` | If no ordering (or `null`) is specified, the "natural" ordering is used. `numericFirst` ordering always returns finite values first, followed by `NaN`, and infinite values last. If the expression produces array or complex types, specify `ordering` as null and use `outputType` instead so that the correct type-native ordering is used. | No |
+| `outputType` | Output type is optional, and can be any native Druid type: `LONG`, `FLOAT`, `DOUBLE`, `STRING`, `ARRAY` types (e.g. `ARRAY<LONG>`), or `COMPLEX` types (e.g. `COMPLEX<json>`). If not specified, the output type is inferred from the `expression`. If specified and `ordering` is null, the type-native ordering is used for sorting values. If the expression produces array or complex types, this value must be non-null to ensure the correct ordering is used. If `outputType` does not match the actual output type of the `expression`, Druid attempts to coerce the value to the specified type, possibly failing if coercion is not possible. | No |
+
+Example:
```json
{
"type": "expression",
- "name": ,
- "expression": ,
- "ordering" :
+ "name": "someExpression",
+ "expression": "someAgg + someOtherAgg",
+ "ordering": null,
+ "outputType": "LONG"
}
```
-
### Greatest / Least post-aggregators
`doubleGreatest` and `longGreatest` computes the maximum of all fields and Double.NEGATIVE_INFINITY.
`doubleLeast` and `longLeast` computes the minimum of all fields and Double.POSITIVE_INFINITY.
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be `"doubleGreatest"`, `"doubleLeast"`, `"longGreatest"`, or `"longLeast"`. | Yes |
+| `name` | Output name of the post-aggregation | Yes |
+| `fields` | List of post-aggregator specs which define inputs to the greatest or least function | Yes |
+
The difference between the `doubleMax` aggregator and the `doubleGreatest` post-aggregator is that `doubleMax` returns the highest value of
all rows for one specific column while `doubleGreatest` returns the highest value of multiple columns in one row. These are similar to the
SQL `MAX` and `GREATEST` functions.
@@ -120,8 +148,11 @@ Example:
```json
{
"type" : "doubleGreatest",
- "name" : ,
- "fields": [, , ...]
+ "name" : "theGreatest",
+ "fields": [
+ { "type": "fieldAccess", "fieldName": "someAgg" },
+ { "type": "fieldAccess", "fieldName": "someOtherAgg" }
+ ]
}
```
@@ -129,23 +160,20 @@ Example:
Applies the provided JavaScript function to the given fields. Fields are passed as arguments to the JavaScript function in the given order.
-```json
-postAggregation : {
- "type": "javascript",
- "name": ,
- "fieldNames" : [, , ...],
- "function":
-}
-```
-
-Example JavaScript aggregator:
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be `"javascript"` | Yes |
+| `name` | Output name of the post-aggregation | Yes |
+| `fieldNames` | List of input dimension or aggregator output names | Yes |
+| `function` | String JavaScript function which accepts `fieldNames` as arguments | Yes |
+
+Example:
```json
{
"type": "javascript",
- "name": "absPercent",
- "fieldNames": ["delta", "total"],
- "function": "function(delta, total) { return 100 * Math.abs(delta) / total; }"
+ "name": "someJavascript",
+ "fieldNames" : ["someAgg", "someOtherAgg"],
+ "function": "function(someAgg, someOtherAgg) { return 100 * Math.abs(someAgg) / someOtherAgg;"
}
```
@@ -157,17 +185,25 @@ Example JavaScript aggregator:
The hyperUniqueCardinality post aggregator is used to wrap a hyperUnique object such that it can be used in post aggregations.
+| Property | Description | Required |
+| --- | --- | --- |
+| `type` | Must be `"hyperUniqueCardinality"` | Yes |
+| `name` | Output name of the post-aggregation | Yes |
+| `fieldName` | The output name of a [`hyperUnique` aggregator](aggregations.md#cardinality-hyperunique) | Yes |
+
```json
{
"type" : "hyperUniqueCardinality",
- "name":