The query context is used for various query configuration parameters. Query context parameters can be specified in the following ways:
- For Druid SQL, context parameters are provided either as a JSON object named `context` to the HTTP POST API, or as properties to the JDBC connection.
- For native queries, context parameters are provided as a JSON object named `context` at the top level of the query JSON.
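As a minimal sketch of the two placements (the SQL statement, datasource, and interval are hypothetical; no request is actually sent), the same context object can be attached to either style of query payload:

```python
import json

# Query context shared by both examples: a 60-second timeout and a priority bump.
context = {"timeout": 60000, "priority": 100}

# Druid SQL: the context travels as a top-level "context" object in the JSON body
# POSTed to the SQL endpoint.
sql_payload = {
    "query": "SELECT COUNT(*) FROM my_datasource",  # hypothetical datasource
    "context": context,
}

# Native query: the context is likewise a top-level "context" member of the query JSON.
native_payload = {
    "queryType": "timeseries",
    "dataSource": "my_datasource",  # hypothetical datasource
    "granularity": "day",
    "intervals": ["2023-01-01/2023-02-01"],
    "aggregations": [{"type": "count", "name": "rows"}],
    "context": context,
}

print(json.dumps(sql_payload, indent=2))
```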
These parameters apply to all query types.
|Parameter|Description|
|---------|-----------|
|`timeout`|Query timeout in millis, beyond which unfinished queries will be cancelled. A timeout of 0 means no timeout.|
|`priority`|Query priority. Queries with higher priority get precedence for computational resources.|
|`lane`|Query lane, used to control usage limits on classes of queries. See Broker configuration for more details.|
|`queryId`|Unique identifier given to this query; auto-generated if not set. If a query ID is set or known, it can be used to cancel the query.|
|`useCache`|Flag indicating whether to leverage the query cache for this query. When set to `false`, disables reading from the query cache for this query. When set to `true`, Druid uses `druid.broker.cache.useCache` or `druid.historical.cache.useCache` to determine whether or not to read from the query cache.|
|`populateCache`|Flag indicating whether to save the results of the query to the query cache. Primarily used for debugging. When set to `false`, disables saving the results of this query to the query cache. When set to `true`, Druid uses `druid.broker.cache.populateCache` or `druid.historical.cache.populateCache` to determine whether or not to save the results of this query to the query cache.|
|`useResultLevelCache`|Flag indicating whether to leverage the result-level cache for this query. When set to `false`, disables reading from the result-level cache for this query. When set to `true`, Druid uses `druid.broker.cache.useResultLevelCache` to determine whether or not to read from the result-level cache.|
|`populateResultLevelCache`|Flag indicating whether to save the results of the query to the result-level cache. Primarily used for debugging. When set to `false`, disables saving the results of this query to the result-level cache. When set to `true`, Druid uses `druid.broker.cache.populateResultLevelCache` to determine whether or not to save the results of this query to the result-level cache.|
|`bySegment`|Return "by segment" results. Primarily used for debugging; setting it to `true` returns results associated with the data segment they came from.|
|`finalize`|Flag indicating whether to "finalize" aggregation results. Primarily used for debugging. For instance, the `hyperUnique` aggregator returns the full HyperLogLog sketch instead of the estimated cardinality when this flag is set to `false`.|
|`maxScatterGatherBytes`|Maximum number of bytes gathered from data processes such as Historicals and realtime processes to execute a query. This parameter can be used to further reduce the `maxScatterGatherBytes` limit at query time. See Broker configuration for more details.|
|`maxQueuedBytes`|Maximum number of bytes queued per query before exerting backpressure on the channel to the data server. Similar to `maxScatterGatherBytes`, except unlike that configuration, this one triggers backpressure rather than query failure.|
|`serializeDateTimeAsLong`|If `true`, DateTime is serialized as a long in the result returned by the Broker and in the data transport between the Broker and compute processes.|
|`serializeDateTimeAsLongInner`|If `true`, DateTime is serialized as a long in the data transport between the Broker and compute processes.|
|`enableParallelMerge`|Enable parallel result merging on the Broker. Note that `druid.processing.merge.useParallelMergePool` must be enabled for this setting to take effect. See Broker configuration for more details.|
|`parallelMergeParallelism`|Maximum number of parallel threads to use for parallel result merging on the Broker. See Broker configuration for more details.|
|`parallelMergeInitialYieldRows`|Number of rows to yield per ForkJoinPool merge task for parallel result merging on the Broker, before forking off a new task to continue merging sequences. See Broker configuration for more details.|
|`parallelMergeSmallBatchRows`|Size of result batches to operate on in ForkJoinPool merge tasks for parallel result merging on the Broker. See Broker configuration for more details.|
|`useFilterCNF`|If `true`, Druid will attempt to convert the query filter to Conjunctive Normal Form (CNF). During query processing, columns can be pre-filtered by intersecting the bitmap indexes of all values that match the eligible filters, often greatly reducing the raw number of rows which need to be scanned. However, this effect only happens for the top-level filter, or for individual clauses of a top-level "and" filter, so filters in CNF potentially have a higher chance of utilizing bitmap indexes on string columns during pre-filtering. Use this setting with great caution: it can sometimes hurt performance, and in some cases the act of computing the CNF of a filter is itself expensive. We recommend hand-tuning your filters to produce an optimal form if possible, or at least verifying through experimentation that this parameter actually improves your query performance with no ill effects.|
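As a sketch of how several of the general parameters above combine, a context for a one-off debugging run might bypass both cache tiers, raise the timeout, and set an explicit query ID so the query can be cancelled by ID (all values are illustrative):

```python
import json

# Illustrative context for a one-off debugging run: skip both cache tiers so the
# query is always recomputed, and give it more time to finish.
debug_context = {
    "useCache": False,
    "populateCache": False,
    "useResultLevelCache": False,
    "populateResultLevelCache": False,
    "timeout": 300000,            # 5 minutes, in millis
    "queryId": "debug-run-001",   # hypothetical ID, usable for cancellation
}

print(json.dumps(debug_context, indent=2))
```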
In addition, some query types offer context parameters specific to that query type.
TopN queries:

|Parameter|Description|
|---------|-----------|
|`minTopNThreshold`|The top `minTopNThreshold` local results from each segment are returned for merging to determine the global topN.|

Timeseries queries:

|Parameter|Description|
|---------|-----------|
|`skipEmptyBuckets`|Disable timeseries zero-filling behavior, so only buckets with results will be returned.|
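For instance (datasource, dimension, and interval names are hypothetical), a topN query could raise `minTopNThreshold` to improve merge accuracy, while a timeseries query could enable `skipEmptyBuckets` to drop zero-filled buckets:

```python
# topN: ask each segment for more local results before the global merge.
topn_query = {
    "queryType": "topN",
    "dataSource": "my_datasource",   # hypothetical
    "dimension": "country",          # hypothetical
    "metric": "rows",
    "threshold": 10,
    "granularity": "all",
    "intervals": ["2023-01-01/2023-02-01"],
    "aggregations": [{"type": "count", "name": "rows"}],
    "context": {"minTopNThreshold": 2000},
}

# timeseries: return only buckets that actually contain results.
timeseries_query = {
    "queryType": "timeseries",
    "dataSource": "my_datasource",   # hypothetical
    "granularity": "hour",
    "intervals": ["2023-01-01/2023-01-02"],
    "aggregations": [{"type": "count", "name": "rows"}],
    "context": {"skipEmptyBuckets": True},
}
```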
See the list of GroupBy query context parameters available on the groupBy query page.
The GroupBy and Timeseries query types can run in vectorized mode, which speeds up query execution by processing batches of rows at a time. Not all queries can be vectorized. In particular, vectorization currently has the following requirements:
- All query-level filters must either be able to run on bitmap indexes or must offer vectorized row-matchers. These include "selector", "bound", "in", "like", "regex", "search", "and", "or", and "not".
- All filters in filtered aggregators must offer vectorized row-matchers.
- All aggregators must offer vectorized implementations. These include "count", "doubleSum", "floatSum", "longSum", "hyperUnique", and "filtered".
- No virtual columns.
- For GroupBy: All dimension specs must be "default" (no extraction functions or filtered dimension specs).
- For GroupBy: No multi-value dimensions.
- For Timeseries: No "descending" order.
- Only immutable segments (not real-time).
- Only table datasources (not joins, subqueries, lookups, or inline datasources).
Other query types (like TopN, Scan, Select, and Search) ignore the "vectorize" parameter and will execute without
vectorization. These query types will ignore the "vectorize" parameter even if it is set to `"force"`.
|Parameter|Description|
|---------|-----------|
|`vectorize`|Enables or disables vectorized query execution. Possible values are `false` (disabled), `true` (enabled if possible, disabled otherwise), and `force` (enabled, and fail if the query cannot be vectorized).|
|`vectorSize`|Sets the row batching size for a particular query. This will override the default vector size configured on the server, if set.|
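One way to check whether a particular groupBy or timeseries query satisfies the vectorization requirements above is to run it with `vectorize` set to `"force"`, which fails fast instead of silently falling back. A sketch of such a context (the batch size shown is illustrative):

```python
import json

# Illustrative context: fail fast if this query cannot be vectorized, and use
# an explicit row batch size (a power of two is a common choice).
context = {
    "vectorize": "force",
    "vectorSize": 512,
}

print(json.dumps(context))
```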