Imply Enterprise and Hybrid release notes
Imply releases include Imply Manager, Pivot, Clarity, and Imply's distribution of Apache Druid®. Imply delivers improvements more quickly than open source because Imply's distribution of Apache Druid uses the primary branch of Apache Druid. This means that it isn't an exact match to any specific open source release. Any open source version numbers mentioned in the Imply documentation don't pertain to Imply's distribution of Apache Druid.
The following release notes provide information on features, improvements, and bug fixes up to Imply STS release 2026.04. Read all release notes carefully, especially the Upgrade and downgrade notes, before upgrading. Additionally, review the deprecations page regularly to see if any features you use are impacted.
For information on the LTS release, see the LTS release notes.
If you are upgrading by more than one version, read the intermediate release notes too.
The following end-of-support dates apply in 2025:
- On January 26, 2025, Imply 2023.01 LTS reaches EOL. This means that the 2023.01 LTS release line will no longer receive any patches, including security updates. Imply recommends that you upgrade to the latest LTS or STS release.
- On January 31, 2025, Imply 2024.01 LTS ends general support status and will be eligible only for security support.
For more information, see Lifecycle Policy.
See Previous versions for information on older releases.
Imply evaluation
New to Imply? Get started with an Imply Hybrid (formerly Imply Cloud) Free Trial or start a self-hosted trial at Get started with Imply!
With Imply Hybrid, the Imply team manages your clusters in AWS, while you control the infrastructure and own the data. With self-hosted Imply, you can run Imply on *NIX systems in your own environment or cloud provider.
Imply Enterprise
If you run Imply Enterprise, see Imply product releases & downloads to access the Imply Enterprise distribution. When prompted, log on to Zendesk with your Imply customer credentials.
For information about the 2025 releases, see 2025 STS release notes.
2026.04
April 24, 2026
Druid highlights
Cascading reindexing (alpha)
Using cascading reindexing, you can now define age-based rules to automatically apply different compaction configurations based on the age of your data. While standard auto-compaction applies a single flat configuration across an entire datasource, cascading reindexing lets you tailor your compaction settings to the characteristics of your data.
For example, you can keep recent data in hourly segments while automatically rolling up to daily segments after 90 days to reduce segment count. You can also layer on age-based row deletion (such as dropping bot traffic from older data), change compression settings, or shift to rollup with coarser query granularity as data ages. Rules are defined inline in the supervisor spec.
You must use compaction supervisors with the MSQ task engine to use cascading reindexing.
Incremental cache
Incremental segment metadata cache (useIncrementalCache) is now generally available and defaults to ifSynced. Druid blocks reads from the cache until it has synced with the metadata store at least once after becoming leader.
Dynamic default query context
You can now add default query context parameters as a dynamic configuration to the Broker. This allows you to override static defaults set in your runtime properties without restarting your deployment or having to update multiple queries individually. Druid applies query context parameters based on the following priority:
- The query context included with the query
- The query context set as a dynamic configuration on the Broker
- The query context parameters set in the runtime properties
- The defaults that ship with Druid
Note that like other Broker dynamic configuration, this is best-effort. Settings may not be applied in certain cases, such as when a Broker has recently started and hasn't received the configuration yet, or if the Broker can't contact the Coordinator. If a query context parameter is critical for all your queries, set it in the runtime properties.
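The priority order above can be sketched as a simple layered lookup. This is an illustrative model only (the function and layer names are hypothetical, not Druid APIs): the first layer that sets a parameter wins.

```python
# Hypothetical sketch of the documented priority order for a query
# context parameter: query context > Broker dynamic config > runtime
# properties > built-in defaults.
def resolve_context(key, query_ctx, dynamic_ctx, runtime_ctx, defaults):
    """Return the value from the highest-priority layer that sets the key."""
    for layer in (query_ctx, dynamic_ctx, runtime_ctx, defaults):
        if key in layer:
            return layer[key]
    return None

# Example: the query's own context wins over every other layer.
value = resolve_context(
    "priority",
    query_ctx={"priority": 10},
    dynamic_ctx={"priority": 5, "timeout": 30000},
    runtime_ctx={"timeout": 60000},
    defaults={"priority": 0, "timeout": 300000},
)
```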
sys.queries table (alpha)
The new system queries table provides information about currently running and recently completed queries that use the Dart engine. This table is off by default. To enable the table, set the following:
druid.sql.planner.enableSysQueriesTable = true
As part of this change, the /druid/v2/sql/queries API now supports an includeComplete parameter that shows recently completed queries.
Improved Kubernetes support
Added a new WebClientOptions pass-through for the Vert.x HTTP client in the kubernetes-overlord-extensions. Operators can now configure any property on the underlying Vert.x WebClientOptions object by using Druid runtime properties. Some of the options you can configure include connection pool size, keep-alive timeouts, and idle timeouts. This is particularly useful for environments with intermediate load balancers that close idle connections. Most Druid deployments won't need this configuration.
(id: 72147) #19071
Minor compaction for Overlord-based compaction (alpha)
You can now configure minor compaction to compact only newly ingested segments while upgrading existing compacted segments. When Druid upgrades segments, it updates the metadata instead of using resources to compact them again. You can use the native compaction engine or the MSQ task engine.
Use the mostFragmentedFirst compaction policy and set either a percentage of rows-based or byte-based threshold for minor compaction.
(id: 71756) #19059 #19205 #19016
Durable storage retention period
The durable storage cleaner now supports configurable time-based retention for MSQ query results. Previously, query results were retained for all known tasks, which was unreliable for completed tasks. With this change, query results are retained for a configurable time period based on the task creation time.
The new configuration property druid.msq.intermediate.storage.cleaner.durationToRetain controls the retention period for query results. The default retention period is 6 hours.
(id: 71361) #19074
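The retention rule can be sketched as follows. This is an illustrative model, not the cleaner's actual code: results become eligible for cleanup once their task's creation time is older than the configured retention period.

```python
from datetime import datetime, timedelta

# druid.msq.intermediate.storage.cleaner.durationToRetain default (6 hours).
DURATION_TO_RETAIN = timedelta(hours=6)

def eligible_for_cleanup(task_created_at, now, retain=DURATION_TO_RETAIN):
    """Illustrative sketch: retain results until the task creation time
    is older than the retention period."""
    return now - task_created_at > retain
```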
Auto-compaction with compaction supervisors
Auto-compaction using compaction supervisors has been improved, is now generally available, and is the recommended default. Automatic compaction tasks are now prefixed with auto instead of coordinator-issued.
As part of this improvement, compaction states are now stored in a central location: a new indexingStates table. Individual segments only need to store a unique reference (indexing_state_fingerprint) to their full compaction state.
Since many segments in a single datasource share the same underlying compaction state, this greatly reduces metadata storage requirements for automatic compaction.
For backwards compatibility, Druid continues to persist the detailed compaction state in each segment. This functionality will be removed in a future release.
You can stop storing detailed compaction state by setting storeCompactionStatePerSegment to false in the cluster compaction config. If you turn it off and need to downgrade, Druid needs to re-compact any segments that have been compacted since you changed the config.
This change has upgrade impacts for metadata storage and metadata caching. For more information, see the Metadata storage for auto-compaction with compaction supervisors upgrade note.
(id: 70230) #19113 #18844 #19252
Query blocklist
You can now use the Broker API (/druid/coordinator/v1/config/broker) to create a query blocklist that dynamically blocks queries by datasource, query type, or query context. The blocklist takes effect without restarting Druid. Block rules use AND logic, which means all criteria must match.
The following example blocks all groupBy queries on the wikipedia datasource with a query context parameter of priority equal to 0:
POST /druid/coordinator/v1/config/broker
{
  "queryBlocklist": [
    {
      "ruleName": "block-wikipedia-groupbys",
      "dataSources": ["wikipedia"],
      "queryTypes": ["groupBy"],
      "contextMatches": {"priority": "0"}
    }
  ]
}
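As a hedged illustration of the AND semantics described above, the following sketch matches a query against a block rule. The field names mirror the example rule; the matching function itself is illustrative, not Druid's implementation.

```python
# Illustrative sketch: a query is blocked only if it matches EVERY
# criterion the rule specifies (AND logic). Not actual Druid code.
def rule_blocks(rule, datasource, query_type, context):
    if rule.get("dataSources") and datasource not in rule["dataSources"]:
        return False
    if rule.get("queryTypes") and query_type not in rule["queryTypes"]:
        return False
    for key, value in rule.get("contextMatches", {}).items():
        # Context values are compared against the string form in the rule.
        if str(context.get(key)) != value:
            return False
    return True

rule = {
    "ruleName": "block-wikipedia-groupbys",
    "dataSources": ["wikipedia"],
    "queryTypes": ["groupBy"],
    "contextMatches": {"priority": "0"},
}
```

A timeseries query on wikipedia, or a groupBy with a different priority, would not be blocked because not all criteria match.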
Manual Broker routing in the web console
You can now configure which Broker the Router uses for queries issued from the web console. You may want to do this if there are Brokers that don't have visibility into certain data tiers, and you know you're querying data available only on a certain tier.
To specify a Broker, add the following config to web-console/console-config.js:
consoleBrokerService: 'druid/BROKER_NAME'
Truncate string columns
Use the StringColumnFormatSpec config to set the maximum length for string dimension columns you ingest:
- For a specific dimension: dimensionSchema.columnFormatSpec.maxStringLength
- For a specific job: indexSpec.columnFormatSpec.maxStringLength
- Cluster-wide: druid.indexing.formats.maxStringLength
Druid truncates any string longer than the specified length. The default is to not truncate string values.
#19146 #19258 #19198
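The truncation behavior can be sketched in a few lines. This is an illustrative model of the semantics stated above, not Druid's implementation: values longer than the limit are cut to the limit, and no limit means no truncation.

```python
# Illustrative sketch of maxStringLength semantics. A limit of None
# models the default: no truncation.
def apply_max_string_length(value, max_len=None):
    if max_len is None or len(value) <= max_len:
        return value
    return value[:max_len]
```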
Ingestion
- Added workerDesc to WorkerStats, which makes it easier to identify where a worker is running #19171
- Added support for StorageMonitor so that MSQ task engine tasks always emit taskId and groupId #19048
- Fixed an inconsistent metadata issue when publishing segments from a realtime task (id: 71879) #19034
- Improved worker cancellation #18931
- Improved exception handling #19234
- Sped up task scheduling on the Overlord #19199
Querying
groupBy query configuration
Added a new groupBy query configuration property druid.query.groupBy.maxSpillFileCount to limit the maximum number of spill files created per query. When the limit is exceeded, the query fails with a clear error message instead of causing Historical nodes to run out of memory during spill file merging. The limit can also be overridden per query using the query context parameter maxSpillFileCount.
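The fail-fast behavior can be sketched as a counter guard. This is an illustrative model only (class and method names are hypothetical): the query fails with a clear error once it exceeds the spill-file limit, instead of exhausting memory during spill-file merging.

```python
# Illustrative sketch of the maxSpillFileCount guard; not Druid internals.
class SpillFileLimitExceeded(Exception):
    pass

class SpillTracker:
    def __init__(self, max_spill_file_count):
        self.max_spill_file_count = max_spill_file_count
        self.count = 0

    def register_spill_file(self):
        # Fail fast with a clear error instead of running out of memory later.
        self.count += 1
        if self.count > self.max_spill_file_count:
            raise SpillFileLimitExceeded(
                f"query created {self.count} spill files, "
                f"limit is {self.max_spill_file_count}")
```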
Improved handling of nested aggregates
Druid can now merge two aggregates with a projection between them. For example, the following query:
SELECT
hr,
UPPER(t1.x) x,
SUM(t1.cnt) cnt,
MIN(t1.mn) mn,
MAX(t1.mx) mx
FROM (
SELECT
floor(__time to hour) hr,
dim2 x,
COUNT(*) cnt,
MIN(m1 * 5) mn,
MAX(m1 + m2) mx
FROM druid.foo
WHERE dim2 IN ('abc', 'def', 'a', 'b', '')
GROUP BY 1, 2
) t1
WHERE t1.x IN ('abc', 'foo', 'bar', 'a', '')
GROUP BY 1, 2
can be simplified to the following:
SELECT
FLOOR(__time TO hour) hr,
UPPER(dim2) x,
COUNT(*) cnt,
MIN(m1 * 5) mn,
MAX(m1 + m2) mx
FROM druid.foo
WHERE dim2 IN ('abc', 'a', '')
GROUP BY 1, 2
Other querying
- Added durationMs to Dart query reports #19169
- Improved error handling so that row signature column order is preserved when column analysis encounters an error #19162
- Improved GROUP BY performance #18952
- Improved expression filters to take advantage of specialized virtual columns when possible, resulting in better performance for the query #18965
Cluster management
New Broker tier selection strategies
Operators can now configure two new Broker TierSelectorStrategy implementations:
- strict: Only selects servers whose priorities match the configured list. Example configuration: druid.broker.select.tier=strict and druid.broker.select.tier.strict.priorities=[1].
- pooled: Pools servers across the configured priorities and selects among them, allowing queries to use multiple priority tiers for improved availability. Example configuration: druid.broker.select.tier=pooled and druid.broker.select.tier.pooled.priorities=[2,1].
You can also use druid.broker.realtime.select.tier to configure these strategies for realtime servers.
Cost-based autoscaler algorithm
The algorithm for cost-based autoscaling has been changed:
- Scale up more aggressively when per-partition lag is meaningful
- Relax the partitions-per-task increase limit based on lag severity and headroom
- Keep behavior conservative near taskCountMax and avoid negative headroom effects
Other cluster management
- Added a ReadOnly authorizer that allows all READ operations but denies any other operation, such as WRITE #19243
- Added a /status/ready endpoint for service health so that external load balancers can handle a graceful shutdown better #19148
- Added a configurable option to scale down during task run time for the cost-based autoscaler #18958
- Added storage_size to sys.servers to facilitate retrieving disk cache size for Historicals when using the virtual storage fabric #18979
- Added a log for new task count computation for the cost-based autoscaler #18929
- Changed how scaling is calculated from a square root-based formula to a logarithmic formula that provides better emergency recovery at low task counts and millions of lag #18976
- Fixed an issue for compaction jobs that use the native compaction engine. Tasks using range partitioning and a filter that filters out all rows in an interval would not create tombstones to overshadow existing segments when dropExisting=true is set in the IOConfig. This led to intervals getting stuck in an infinite loop of running compaction because the underlying segments weren't properly overshadowed and marked as unused (id: 71345) #18938
- Improved the load speed of cached segments during Historical startup #18489
- Improved Broker startup time by parallelizing buffer initialization #19025
- Improved the stack trace for MSQ task engine worker failures so that they're preserved #19049
- Improved the performance of the cost-based autoscaler during loaded lag conditions #18991
Data management
Per-segment timeout configuration
You can now set a timeout for the segments in a specific datasource using a dynamic configuration:
POST /druid/coordinator/v1/config/broker
{
  "perSegmentTimeoutConfig": {
    "my_large_datasource": { "perSegmentTimeoutMs": 5000, "monitorOnly": false },
    "my_new_datasource": { "perSegmentTimeoutMs": 3000, "monitorOnly": true }
  }
}
This is useful when different datasources have different performance characteristics. For example, you can allow longer timeouts for larger datasets.
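The monitorOnly flag in the example config can be sketched as follows. This is an illustrative model (the function name and return values are hypothetical): when monitorOnly is true, an overrun is only recorded rather than enforced.

```python
# Illustrative sketch of per-segment timeout handling; not Druid internals.
def handle_segment_timing(elapsed_ms, config):
    """Return 'ok', 'monitored', or 'timed_out' for one segment's processing."""
    if elapsed_ms <= config["perSegmentTimeoutMs"]:
        return "ok"
    # monitorOnly=true records the overrun (e.g. as a metric) without
    # failing the segment; monitorOnly=false enforces the timeout.
    return "monitored" if config["monitorOnly"] else "timed_out"
```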
Other data management
- Added the druid.storage.transfer.asyncHttpClientType config that specifies which async HTTP client to use for S3 transfers: crt for Amazon CRT or netty for Netty NIO #19249
- Added a mechanism to automatically clean up intermediary files on HDFS storage #19187
Metrics and monitoring
BuildRevision field
All Druid metrics now include a buildRevision field to help identify the Git build revision of the Druid server emitting a metric. You can use this information to verify that all nodes in a cluster are running the intended revision.
Monitoring supervisor state
Added a new supervisor/count metric when SupervisorStatsMonitor is enabled in druid.monitoring.monitors. The metric reports each supervisor’s state, such as RUNNING or SUSPENDED, for Prometheus, StatsD, and other metric systems.
Improved groupBy metrics
GroupByStatsMonitor now provides the following metrics:
- mergeBuffer/bytesUsed
- mergeBuffer/maxBytesUsed
- mergeBuffer/maxAcquisitionTimeNs
- groupBy/maxSpilledBytes
- groupBy/maxMergeDictionarySize
Filtering metrics
Operators can set druid.emitter.logging.shouldFilterMetrics=true to limit which metrics the logging emitter writes. Optionally, they can set druid.emitter.logging.allowedMetricsPath to a JSON object file where the keys are metric names. A missing custom file results in a warning and use of the bundled defaultMetrics.json. Alerts and other non-metric events are always logged.
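The filtering rule can be sketched as a predicate. This is an illustrative model of the behavior described above (the function and field names are assumptions, not the emitter's actual API): metrics are emitted only if allowed, while alerts and other non-metric events always pass.

```python
# Illustrative sketch of logging-emitter metric filtering; not Druid code.
def should_log(event, should_filter_metrics, allowed_metric_names):
    if event.get("feed") != "metrics":
        return True  # alerts and other non-metric events are always logged
    if not should_filter_metrics:
        return True  # filtering disabled: log every metric
    return event.get("metric") in allowed_metric_names
```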
New Broker metrics
Added segment/schemaCache/rowSignature/changed and segment/schemaCache/rowSignature/column/count metrics to expose events when the Broker initializes and updates the row signature in the segment metadata cache for each datasource.
Other metrics and monitoring improvements
- Added the following metrics to the defaults for Prometheus: mergeBuffer/bytesUsed and mergeBuffer/maxBytesUsed #19110
- Added compaction mode to the compact/task/count metric #19151
- Added support for logging and emitting SQL dynamic parameter values #19067
- Added ingest/rows/published, which all task types emit to denote the total row count of successfully published segments #19177
- Added queries and totalQueries counters, which reflect queries made to realtime servers to retrieve realtime data
- Added the tier/storage/capacity metric for the Coordinator. This metric is guaranteed to reflect the total storage location size configured across all Historicals in a tier #18962
- Added new metrics for the virtual storage fabric to the MSQ task engine ChannelCounters: loadBytes, loadTime, loadWait, and loadFiles #18971
- Added storage/virtual/hit/bytes, storage/virtual/hold/count, and storage/virtual/hold/bytes metrics to StorageMonitor #18895 #19217
- Added the supervisorId dimension for streaming tasks to TaskCountStatsMonitor #18920
- Added support for StorageMonitor so that MSQ task engine tasks always emit taskId and groupId. Additionally, changed StorageMonitor to always be on #19048 (id: 72020)
- Improved the metrics for autoscalers so that they all emit the same metrics: supervisorId, dataSource, and stream #19097
Web console
- Added the Dart unique execution ID (dartQueryId) and the sqlQueryId to the Details pane in the web console #19185
- Added support for showing completed Dart queries in the web console #18940
- Added a detail dialog to the Services page #18960
- Added support for Dart reports #18897
- Changed the criteria for active workers: any nonzero rows, files, bytes, frames, or wall time is enough to consider a worker active #19183
- Changed the Cancel query option to show only if a query is in an accepted or running state #19182
- Changed the ordering of the current Dart queries panel to show queries in the following order: RUNNING, ACCEPTED, and then COMPLETED. RUNNING and ACCEPTED queries are ordered by the most recent first (based on timestamp). COMPLETED queries are sorted by finish time #19237
- The following improvements have been made to how storage columns are displayed in the compaction config view of the web console:
  - Renamed Current size to Assigned size.
  - Renamed Max size to Effective size. It now displays the smaller value between max_size and storage_size. The max size is still shown as a tooltip.
  - Changed usage calculation to use effective_size #19007
Extensions
HDFS storage
Added support for lz4 compression. As part of this change, the following metrics are now available:
- hdfs/pull/size
- hdfs/pull/duration
- hdfs/push/size
- hdfs/push/duration
Kubernetes
- Fixed task labeling in the Kubernetes task runner (id: 71454) #18981
- Reduced log noise for k8s NodeRoleWatcher (id: 72191) #19077
- Improved support for Kubernetes readiness and liveness probes (id: 28729) #19148
Dependencies
Dependency updates
The following dependencies have been updated:
Show the dependency updates
- Added software.amazon.awssdk to support WebIdentityTokenProvider #19178
- org.apache.iceberg from 1.6.1 to 1.7.2 #19172
- diff node module from 4.0.1 to 4.0.4 #18933
- org.apache.avro from 1.11.4 to 1.11.5 #19103
- bytebuddy from 1.17.7 to 1.18.3 #19000
- slf4j from 2.0.16 to 2.0.17 #18990
- Apache Commons Codec from 1.16.1 to 1.17.1 #18990
- jacoco from 0.8.12 to 0.8.14 #18990
- docker-java-bom from 3.6.0 to 3.7.0 #18990
- assertj-core from 3.24.2 to 3.27.7 #18994
- maven-surefire-plugin from 3.2.5 to 3.5.4 #18847
- guice from 5.1.0 to 6.0.0 #18986
- JDK compiler from 11 to 17 #18977
- vertx from 4.5.14 to 4.5.24 #18947
- fabric8 from 7.4.0 to 7.5.2 #18947
- mockito from 5.14.2 to 5.23 #19145
- easymock from 5.2.0 to 5.6.0 #19145
- equalsverifier from 3.15.8 to 4.4.1 #19145
- bytebuddy from 1.18.3 to 1.18.5 #19145
- Added objenesis 3.5 #19145
- org.apache.zookeeper from 3.8.4 to 3.8.6 #19135
- com.lmax.disruptor from 3.3.6 to 3.4.4 #19122
- org.junit.junit-bom from 5.13.3 to 5.14.3 #19122
- io.fabric8:kubernetes-client from 7.5.2 to 7.6.0 #19071
- io.kubernetes:client-java from 19.0.0 to 25.0.0-legacy #19071
- com.squareup.okhttp3:okhttp from 4.12.0 to 5.3.2 #19071
- org.jetbrains.kotlin:kotlin-stdlib from 1.9.25 to 2.2.21 #19071
- commons-codec:commons-codec from 1.17.1 to 1.20.0 #19071
- org.apache.commons:commons-lang3 from 3.19.0 to 3.20.0 #19071
- com.google.code.gson:gson from 2.12.0 to 2.13.2 #19071
- com.amazonaws:aws-java-sdk from 1.12.784 to 1.12.793 #19071
- caffeine from 2.8.0 to 2.9.3 #19208
- commons-io from 2.17.0 to 2.21.0 #19208
- commons-collections4 from 4.2 to 4.5.0 #19208
- commons-compress from 1.27.0 to 1.28.0 #19208
- zstd-jni from 1.5.2-3 to 1.5.7-7 #19208
- scala-library from 2.12.7 to 2.13.16 #19208
- iceberg from 1.7.2 to 1.10.0 #19232
- parquet from 1.15.2 to 1.16.0 #19232
- avro from 1.11.5 to 1.12.0 #19232
- jackson from 2.19.2 to 2.20.2 #19248
- netty4 from 4.2.6.Final to 4.2.12.Final #19248
- errorprone from 2.35.1 to 2.41.0 #19248
- bcprov-jdk18on/bcpkix-jdk18on from 1.81 to 1.82 #19248
- RoaringBitmap from 0.9.49 to 1.6.13 #19248
- jedis from 5.1.2 to 7.0.0 #19248
- snakeyaml from 2.4 to 2.5 #19248
- aircompressor from 0.21 to 2.0.2 #19248
- reflections from 0.9.12 to 0.10.2 #19248
- httpclient5 from 5.5 to 5.5.1 #19248
- jakarta.activation from 1.2.2 to 2.0.1 #19248
- netty-tcnative-boringssl-static from 2.0.73.Final to 2.0.75.Final #19248
- maven-compiler-plugin from 3.11.0 to 3.14.1 #19248
Pivot changes
- Added custom measure PIVOT_QUERY_INTERVAL. It returns the ISO 8601 interval string that covers the evaluated time filter for a query (id: 71221)
2026.01.4
April 10, 2026
Druid changes
- Fixed an issue with queries that used expression filters or expression-based aggregators when there was a multi-value dimension (id: 72776) #19245
- Fixed an issue where Kinesis task live reports didn't work (id: 72777) #19246
- Fixed an issue where append queries returned an authorization error even if you had authorization for both datasources (id: 72403) #19147
2026.01.3
March 19, 2026
Imply Manager updates
- Security updates
2026.01.2
March 10, 2026
Druid changes
- Fixed an issue where compaction fails when a dimension is in the ordering dimension list but doesn't have a corresponding column (id: 72228)
- Fixed a parsing issue when there are empty fields in a nested column. (id: 72196) #19072
- Fixed a metric reporting bug where successful MSQ task requests (POST /druid/v2/sql/task) caused the Router to erroneously emit a query/time metric with a failed status instead of success (id: 72195) #19066
2026.01.1
February 19, 2026
Imply Manager updates
- Security updates
2026.01
February 3, 2026
Druid highlights
Java 21 support
Druid now supports Java 21 in addition to Java 17. Support for Java 11 ended with the 2025.10 STS release.
(id: 71114) (id: 69553)
Query reports for Dart
Dart now supports query reports for running and recently completed queries. The reports can be fetched from the /druid/v2/sql/queries/<sqlQueryId>/reports endpoint.
The format of the response is a JSON object with two keys, query and report. The query key is the same info that is available from the existing /druid/v2/sql/queries endpoint. The report key is a report map including an MSQ report.
You can control the retention behavior for reports using the following configs:
- druid.msq.dart.controller.maxRetainedReportCount: Max number of reports that are retained. The default is 0, meaning no reports are retained
- druid.msq.dart.controller.maxRetainedReportDuration: How long reports are retained, in ISO 8601 duration format. The default is PT0S, meaning time-based expiration is turned off
#18886 (id: 71070)
Segment format
The new version 10 segment format improves upon version 9. Version 10 supports partial segment downloads, a feature provided by the experimental virtual storage fabric feature. To streamline partial fetches, the contents of the base segment get combined into a single file named druid.segment.
As part of this new segment format, you can use the bin/dump-segment tool to view segment metadata. The tool outputs serialized JSON.
Set druid.indexer.task.buildV10=true to have Druid create segments using the new version.
Note that prior versions of Imply don't support the new segment format. If you enabled the new segment format and need to downgrade from 2026.01 STS to a prior release, you must first reindex any version 10 segments to version 9. After you reindex, you can proceed with the downgrade.
statsd metrics
The following metrics have been added to the default list for statsd:
- task/action/run/time
- task/status/queue/count
- task/status/updated/count
- ingest/handoff/time
#18846 (id: 70763)
Cost-based autoscaling for streaming ingestion
Druid now supports cost-based autoscaling for streaming ingestion that optimizes task count by balancing lag reduction against resource efficiency. This autoscaling strategy uses the following formula:
totalCost = lagWeight × lagRecoveryTime + idleWeight × idlenessCost
which accounts for the time to clear the backlog and compute time:
lagRecoveryTime = aggregateLag / (taskCount × avgProcessingRate) — time to clear backlog
idlenessCost = taskCount × taskDuration × predictedIdleRatio — wasted compute time
#18819 (id: 70789) (id: 70629)
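The formula above transcribes directly into code. The function below is a sketch using the equations as given; the weights and input values are illustrative, not Druid defaults.

```python
# Direct transcription of the documented cost formula:
# totalCost = lagWeight * lagRecoveryTime + idleWeight * idlenessCost
def total_cost(aggregate_lag, task_count, avg_processing_rate,
               task_duration, predicted_idle_ratio,
               lag_weight=1.0, idle_weight=1.0):
    # Time to clear the backlog at the current task count.
    lag_recovery_time = aggregate_lag / (task_count * avg_processing_rate)
    # Wasted compute time from idle tasks.
    idleness_cost = task_count * task_duration * predicted_idle_ratio
    return lag_weight * lag_recovery_time + idle_weight * idleness_cost
```

More tasks clear lag faster but waste more idle compute; conceptually, the autoscaler picks the task count that minimizes this trade-off.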
Record offset and partition
You can now ingest the record offset (offsetColumnName) and partition (partitionColumnName) using the KafkaInputFormat. Their default names are kafka.offset and kafka.partition, respectively.
#18757 (id: 70372)
Additional ingestion configurations
You can now use the following configs to control how your data gets ingested and stored:
- maxInputFilesPerWorker: Controls the maximum number of input files or segments per worker.
- maxPartitions: Controls the maximum number of output partitions for any single stage, which affects how many segments are generated during ingestion.
#18826 (id: 70654)
Numeric fields in nested columns
You can now choose between full dictionary-based indexing and nulls-only indexing for long/double fields in nested columns. Set NestedCommonFormatColumnFormatSpec to either LongFieldBitmapIndexEncoding or DoubleFieldBitmapIndexEncoding.
#18722 (id: 70192)
Improved indexSpec
You can now specify a format specification for each JSON column individually, which will override the indexSpec defined in the ingestion job. Additionally, a system-wide default indexSpec can be set using the druid.indexing.formats.indexSpec property.
#17762 #18638 (id: 69629) (id: 69305) (id: 69304)
Jetty 12
Druid now uses Jetty 12. Your deployment may be impacted, specifically with regard to URI compliance and SNI host checks.
For more information, see the upgrade note for Jetty.
Dimension schemas
At ingestion time, dimension schemas in dimensionsSpec are now strictly validated against allowed types. Previously, an invalid type would fall back to string dimension. Now, such values are rejected. Users must specify a type that's one of the allowed types. Omitting type still defaults to string, preserving backward compatibility.
#18565 (id: 69260)
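The validation change can be sketched as follows. This is an illustrative model only: the allowed-type set and function name here are assumptions for the example, not the exact set Druid enforces.

```python
# Hedged sketch of strict dimension type validation: unknown types are
# rejected instead of silently falling back to string; an omitted type
# still defaults to string.
ALLOWED_TYPES = {"string", "long", "float", "double", "json"}  # illustrative set

def validate_dimension(schema):
    dim_type = schema.get("type", "string")  # omitting type defaults to string
    if dim_type not in ALLOWED_TYPES:
        raise ValueError(f"invalid dimension type: {dim_type!r}")
    return dim_type
```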
cgroup v2 support
cgroup v2 is now supported, and all cgroup metrics now emit the cgroup version to identify which version is being used.
The following monitors automatically switch to v2 if v2 is detected: CgroupCpuMonitor, CgroupCpuSetMonitor, CgroupDiskMonitor, and MemoryMonitor. CpuAcctDeltaMonitor fails gracefully if v2 is detected.
Additionally, CgroupV2CpuMonitor now also emits cgroup/cpu/shares and cgroup/cpu/cores_quota.
Pivot highlights
Authentication token selection now prioritizes datasource access
Authentication token selection now filters tokens by datasource access before applying priority ranking. Previously, Pivot always selected the highest-priority token regardless of whether it included the requested datasource, which could block access to datasources you had valid permissions for.
If no token matches your requested datasources, the system falls back to selecting your highest-priority token.
(id: 70502)
Pivot changes
- You can now add names and descriptions to Pivot API tokens (id: 70265)
- You can now add banner messages to data cubes and dashboards (id: 69940)
- Added tooltips to dimensions and measures in data cube view (id: 70271)
Druid changes
- Added retries for HTTP 401 issues (#18771) (id: 70431)
- Added query/bytes logging for failed queries (id: 70749)
- Added maxRowsInMemory to replace rowsInMemory. rowsInMemory now functions as an alternate way to provide that config and is ignored if maxRowsInMemory is specified. Previously, only rowsInMemory existed #18832 (id: 70711)
- Added a fingerprinting mechanism to track compaction states in a more efficient manner (id: 70754)
- Added the supervisorId dimension with streaming task metrics (id: 70552)
- Added the mostFragmentedFirst compaction policy to prioritize fragmented intervals (#18802) (id: 70553)
- Added support for full parallelism in localSort for the MSQ task engine (id: 70403)
- Security fixes (id: 71109) (id: 69542)
- Fixed CVE-2026-23906
- Changed the response of the /handoff API to no body instead of an empty JSON response (#18884) (id: 70875)
- Changed metrics behavior so that task metrics get emitted on all task completions (id: 70417)
- Fixed a logic error in policy application for compaction using the MSQ task engine (id: 70362)
- Fixed an issue where a projection fails to match when the aggregator has a filter (id: 68534)
- Fixed an issue with Coordinator-based compaction (#18812) (id: 70609)
- Fixed an issue where segments weren't getting dropped (#18782) (id: 70498)
- Fixed an issue where a limit to segments per chunk was enforced incorrectly (#18777) (id: 70445)
- Fixed an issue where changing the query detaches it from the currently running execution (#18776) (id: 70443)
- Fixed an issue where an Overlord that is giving up leadership erroneously kills indexing tasks (#18772) (id: 70433)
- Fixed how task slots for MSQ compaction task are calculated (#18756) (id: 70371)
- Fixed an issue with how the task action retry count is calculated (#18755) (id: 70369)
- Fixed an issue where MSQ compaction tasks can fail if a policy enforcer is enabled (#18741) (id: 70291)
- Fixed an issue in the SeekableStream supervisor autoscaler where scale-down operations would create duplicate supervisor history entries. The autoscaler now correctly waits for tasks to complete before attempting subsequent scale operations (#18715) (id: 70137)
- Fixed an issue with SQL planning for json_value returning a Boolean so that it plans as long type output (#18698) (id: 70005)
- Improved the web console:
- Lookup values now use the default engine (id: 70854)
- System table queries now explicitly use the 'native' engine (id: 70820)
- Improved explore max time cancellation (id: 70701)
- Fixed areas where supervisor_id and datasource were conflated (id: 70691)
- Fixed inactive worker counting (id: 70571)
- Improved ISO date parsing (#18724) (id: 70195)
- Improved supervisors so that they can't kill tasks while the supervisor is stopping (#18767) (id: 70419)
- Improved the lag-based autoscaler for streaming ingest (#18745) (id: 70402)
- Improved compaction so that it identifies multi-value dimensions for dimension schemas that can produce them #18760 (id: 70381)
- Improved lag-based autoscaler config persistence (#18745) (id: 70147)
- Improved JSON ingestion so that Druid can compute JSON values directly from dictionary or index structures, allowing ingestion to skip persisting raw JSON data entirely. This reduces on-disk storage size #18589 (id: 69394)
- Improved performance for the timeseries aggregator (id: 69170)
- Updated ZooKeeper to 3.8.5 (id: 69186)
Imply Enterprise
- Fixed an issue with upgrades for Imply Enterprise deployments running on ARM (id: 70401)
- Imply Enterprise and Hybrid now support cgroup v2 (id: 68187)
Upgrade and downgrade notes
In addition to the upgrade and downgrade notes, review the deprecations page regularly to see if any features you use are impacted.
Minimum supported version for rolling upgrade
See Supported upgrade paths in the Lifecycle Policy documentation.
Hadoop-based ingestion
Starting in 2026.04 STS, Hadoop-based ingestion has been removed. Migrate to SQL-based ingestion.
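As a rough sketch of the migration, a Hadoop batch job that loads JSON files can typically be replaced with an MSQ REPLACE statement. The datasource name, URI, and columns below are illustrative placeholders, not values from any real deployment:

```sql
-- Example SQL-based ingestion replacing a Hadoop batch job.
-- The datasource, URI, and column names are placeholders.
REPLACE INTO "wikipedia" OVERWRITE ALL
SELECT
  TIME_PARSE("timestamp") AS "__time",
  "page",
  "user"
FROM TABLE(EXTERN(
  '{"type":"http","uris":["https://example.com/wikipedia.json.gz"]}',
  '{"type":"json"}'
)) EXTEND ("timestamp" VARCHAR, "page" VARCHAR, "user" VARCHAR)
PARTITIONED BY DAY
```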
AWS SDK v2
Starting in 2026.04 STS, Druid uses version 2.40.0 of the AWS SDK, since v1 of the SDK has reached end of life.
Segment metadata cache on by default
Starting in 2026.04 STS, the segment metadata cache is on by default. This feature allows the Broker to cache segment metadata polled from the Coordinator, rather than having to fetch metadata for every query against the sys.segments table. This improves performance but increases memory usage on Brokers.
The druid.sql.planner.metadataSegmentCacheEnable config controls this feature.
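For example, to retain the pre-2026.04 behavior of fetching segment metadata on demand, you can set the config explicitly. This is a minimal runtime.properties fragment, not a complete Broker configuration:

```properties
# Broker runtime.properties (fragment)
# Set to false to disable the segment metadata cache and retain the
# previous behavior of polling the Coordinator per query.
druid.sql.planner.metadataSegmentCacheEnable=false
```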
Parser changes
Streaming ingestion parser
Starting in 2026.04 STS, support for the deprecated parser has been removed for streaming ingest tasks such as Kafka and Kinesis. Operators must now specify inputSource/inputFormat on the ioConfig of the supervisor spec, and the dataSchema must not specify a parser. Do this before upgrading to Druid 37 or newer.
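As an illustrative sketch, a migrated Kafka supervisor spec places the inputFormat in the ioConfig and omits any parser from the dataSchema. The topic, broker address, and column names here are placeholders:

```json
{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "type": "kafka",
      "topic": "metrics",
      "consumerProperties": { "bootstrap.servers": "localhost:9092" },
      "inputFormat": { "type": "json" }
    },
    "dataSchema": {
      "dataSource": "metrics",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": { "useSchemaDiscovery": true }
    }
  }
}
```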
Removed ParseSpec and deprecated parsers
The Parser for native batch tasks and streaming ingestion indexing services has been removed. Where possible, use the input format instead. Note that JavascriptParseSpec and JSONLowercaseParseSpec have no InputFormat equivalents.
Druid supports custom text data formats and can use the Regex input format to parse them. However, parsing data this way is less efficient than writing a native Java InputFormat extension or using an external stream processor. We welcome contributions of new input formats.
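For example, a Regex input format for a simple pipe-delimited text format might look like the following. The pattern and column names are illustrative:

```json
{
  "type": "regex",
  "pattern": "^([^|]*)\\|([^|]*)\\|([^|]*)$",
  "columns": ["timestamp", "page", "user"]
}
```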
Metadata storage for auto-compaction with compaction supervisors
Automatic compaction with supervisors requires incremental segment metadata caching on the Overlord and a new metadata store table; no action is required if you are using the default settings for the following configs:
- druid.manager.segments.useIncrementalCache
- druid.metadata.storage.connector.createTables
If druid.manager.segments.useIncrementalCache is set to never, update it to ifSynced or always. For more information about the config, see Segment metadata cache.
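For example, this runtime.properties fragment sets the cache to a value compatible with compaction supervisors:

```properties
# Overlord runtime.properties (fragment)
# "ifSynced" or "always" enables the incremental segment metadata
# cache required by compaction supervisors; "never" does not.
druid.manager.segments.useIncrementalCache=ifSynced
```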
If you set the druid.metadata.storage.connector.createTables config to false, you need to manually alter the segments table and create the new indexing states table. The following Postgres DDL is provided as a guide:
-- create the indexing states lookup table and associated indices
CREATE TABLE druid_indexingStates (
created_date VARCHAR(255) NOT NULL,
datasource VARCHAR(255) NOT NULL,
fingerprint VARCHAR(255) NOT NULL,
payload BYTEA NOT NULL,
used BOOLEAN NOT NULL,
pending BOOLEAN NOT NULL,
used_status_last_updated VARCHAR(255) NOT NULL,
PRIMARY KEY (fingerprint)
);
CREATE INDEX idx_druid_indexingStates_used ON druid_indexingStates(used, used_status_last_updated);
-- modify druid_segments table to have a column for storing compaction state fingerprints
ALTER TABLE druid_segments ADD COLUMN indexing_state_fingerprint VARCHAR(255);
You may have to adapt the syntax to fit your table naming prefix and metadata store backend.
Segment locking
Segment locking and NumberedOverwriteShardSpec are deprecated and will be removed in a future release. Use time chunk locking instead. You can make sure only time chunk locking is used by setting druid.indexer.tasklock.forceTimeChunkLock to true, which is the default.
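The following runtime.properties fragment makes the time chunk locking requirement explicit; since true is the default, this only matters if you previously overrode it:

```properties
# runtime.properties (fragment)
# Ensure only time chunk locking is used (true is the default).
druid.indexer.tasklock.forceTimeChunkLock=true
```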
Segment formats
Starting in 2026.01 STS, Imply supports a new segment format, version 10. Prior versions of Imply don't support the new segment format. If you want to downgrade from 2026.01 STS to a prior release and have enabled the new segment format, you must first reindex any version 10 segments to version 9. After you reindex the data, you can proceed with the downgrade.
MSQ tasks during rolling upgrades
MSQ query_controller tasks can fail during a rolling upgrade due to the addition of new counters that are not backward compatible with older versions. You can either retry any failed queries after the upgrade completes, or set includeAllCounters to false in the query context for any MSQ jobs that need to run during the rolling upgrade (#18761) (id: 70389).
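For example, the flag can be passed through the query context of an MSQ job submitted as a SQL task. The query text and datasource names are placeholders:

```json
{
  "query": "INSERT INTO \"example\" SELECT \"__time\", \"page\" FROM \"wikipedia\" PARTITIONED BY DAY",
  "context": {
    "includeAllCounters": false
  }
}
```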
Jetty 12 SNI host checks
Jetty 12 enforces the RFC3986 URI format strictly by default, which is a change from Jetty 9. As part of this update, a new server configuration option has been added: druid.server.http.uriCompliance. To avoid potential breaking changes in existing Druid deployments, this config defaults to LEGACY, which uses the more permissive URI format enforcement that Jetty 9 used. If the cluster you operate does not require legacy compatibility, we recommend you use the upstream Jetty default of RFC3986 in your Druid deployment. See the Jetty documentation for more information.
Jetty 12 servers perform strict SNI host checks when TLS is enabled. If the hostname your client uses to connect to the server does not match what is in the keystore, even if there is only one certificate in that keystore, the server returns a 400 response. This could impact some use cases, such as clients connecting over localhost. If this change would break your deployment, you can opt out by setting druid.server.http.enforceStrictSNIHostChecking to false in the runtime.properties for some or all of your Druid services. Where possible, it is recommended that you modify your client behavior to accommodate this change in Jetty instead of overriding the config.
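The two Jetty-related configs can be set together in a runtime.properties fragment like the following; treat the values as a sketch to adapt, not a recommendation for every cluster:

```properties
# runtime.properties (fragment)
# Opt in to strict RFC3986 URI enforcement (recommended where compatible).
druid.server.http.uriCompliance=RFC3986
# Opt out of strict SNI host checks only if clients cannot be updated.
druid.server.http.enforceStrictSNIHostChecking=false
```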
Async download extension
If you load the imply-sql-async extension, you must remove this extension before you upgrade. This extension was used for the old async download. Support for that feature was dropped in the 2025.01 release.
Incompatible changes
Removed defaultProcessingRate config
This config allowed scaling actions to begin prior to the first metrics becoming available.
Front-coding format
Druid now defaults to version 1 of the front-coded dictionary format instead of version 0 when front coding is enabled. Version 1 was introduced in Druid 26. Downgrading to or upgrading from a Druid version earlier than 26 may require reindexing if you have front coding enabled with version 0.
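If you need segments to stay readable by Druid versions earlier than 26, you can pin the format version explicitly in the indexSpec of your tuningConfig. This is a fragment, and the bucketSize shown is illustrative:

```json
"indexSpec": {
  "stringDictionaryEncoding": {
    "type": "frontCoded",
    "bucketSize": 4,
    "formatVersion": 0
  }
}
```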
Deprecation notices
For a more complete list of deprecations and their planned removal dates, see Deprecations.
Some segment loading configs deprecated
The following segment-related configs are now deprecated and will be removed in future releases:
- replicationThrottleLimit
- useRoundRobinSegmentAssignment
- maxNonPrimaryReplicantsToLoad
- decommissioningMaxPercentOfMaxSegmentsToMove
Use smartSegmentLoading mode instead, which calculates values for these variables automatically.
End of support
ZooKeeper-based task discovery
Use HTTP-based task discovery instead, which has been the default since 2022.
ioConfig.inputSource.type.azure storage schema
Update your ingestion specs to use the azureStorage storage schema, which provides more capabilities.
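A migrated ioConfig fragment might look like the following sketch; the storage account, container, and object path are placeholders:

```json
"ioConfig": {
  "inputSource": {
    "type": "azureStorage",
    "uris": ["azureStorage://storageAccount/container/prefix/file.json"]
  },
  "inputFormat": { "type": "json" }
}
```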