Imply Enterprise and Hybrid release notes

Imply releases include Imply Manager, Pivot, Clarity, and Imply's distribution of Apache Druid®. Imply delivers improvements more quickly than open source because Imply's distribution of Apache Druid uses the primary branch of Apache Druid. This means that it isn't an exact match to any specific open source release. Any open source version numbers mentioned in the Imply documentation don't pertain to Imply's distribution of Apache Druid.

The following release notes provide information on features, improvements, and bug fixes up to Imply STS release 2026.01.3. Read all release notes carefully, especially the Upgrade and downgrade notes, before upgrading. Additionally, review the deprecations page regularly to see if any features you use are impacted.

For information on the LTS release, see the LTS release notes.

If you are upgrading by more than one version, read the intermediate release notes too.

The following end-of-support dates apply in 2025:

  • On January 26, 2025, Imply 2023.01 LTS reaches EOL. This means that the 2023.01 LTS release line will no longer receive any patches, including security updates. Imply recommends that you upgrade to the latest LTS or STS release.
  • On January 31, 2025, Imply 2024.01 LTS ends general support status and will be eligible only for security support.

For more information, see Lifecycle Policy.

See Previous versions for information on older releases.

Imply evaluation

New to Imply? Get started with an Imply Hybrid (formerly Imply Cloud) Free Trial or start a self-hosted trial at Get started with Imply!

With Imply Hybrid, the Imply team manages your clusters in AWS, while you control the infrastructure and own the data. With self-hosted Imply, you can run Imply on *NIX systems in your own environment or cloud provider.

Imply Enterprise

If you run Imply Enterprise, see Imply product releases & downloads to access the Imply Enterprise distribution. When prompted, log on to Zendesk with your Imply customer credentials.

For information about the 2025 releases, see 2025 STS release notes.

2026.01.3

March 19, 2026

Imply Manager updates

  • Security updates

2026.01.2

March 10, 2026

Druid changes

  • Fixed an issue where compaction fails when a dimension is in the ordering dimension list but doesn't have a corresponding column. (id: 72228)
  • Fixed a parsing issue when there are empty fields in a nested column. (id: 72196) #19072
  • Fixed a metric reporting bug where successful MSQ task requests (POST /druid/v2/sql/task) caused the Router to erroneously emit a query/time metric with a failed status instead of success (id: 72195) #19066

2026.01.1

February 19, 2026

Imply Manager updates

  • Security updates

2026.01

February 3, 2026

Druid highlights

Java 21 support

Druid now supports Java 21 in addition to Java 17. Support for Java 11 ended with the 2025.10 STS release.

(id: 71114) (id: 69553)

Query reports for Dart

Dart now supports query reports for running and recently completed queries. The reports can be fetched from the /druid/v2/sql/queries/<sqlQueryId>/reports endpoint.

The response is a JSON object with two keys: query and report. The query key contains the same information that's available from the existing /druid/v2/sql/queries endpoint. The report key is a report map including an MSQ report.

You can control the retention behavior for reports using the following configs:

  • druid.msq.dart.controller.maxRetainedReportCount: The maximum number of reports that are retained. The default is 0, meaning no reports are retained.
  • druid.msq.dart.controller.maxRetainedReportDuration: How long reports are retained, in ISO 8601 duration format. The default is PT0S, meaning time-based expiration is turned off.
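
For example, to retain reports, you might set both configs in the controller runtime properties (the values here are illustrative, not recommendations):

```properties
# Illustrative values: retain up to 100 Dart query reports, for at most 1 hour each
druid.msq.dart.controller.maxRetainedReportCount=100
druid.msq.dart.controller.maxRetainedReportDuration=PT1H
```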

#18886 (id: 71070)

Segment format

The new version 10 segment format improves upon version 9. Version 10 supports partial segment downloads, a capability provided by the experimental virtual storage fabric. To streamline partial fetches, the base segment contents are combined into a single file named druid.segment.

As part of this new segment format, you can use the bin/dump-segment tool to view segment metadata. The tool outputs serialized JSON.

Set druid.indexer.task.buildV10=true to have Druid create segments using the new version.
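
As a sketch, the opt-in is a single property; confirm where task properties belong in your deployment before applying it:

```properties
# Opt in to writing version 10 segments for new ingestion tasks
druid.indexer.task.buildV10=true
```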

Note that prior versions of Imply don't support the new segment format. If you downgrade from 2026.01 STS to a prior release and have enabled the new segment format, you must first reindex any version 10 segments to version 9. After you reindex, you can proceed with the downgrade.

#18880 #18901 (id: 70870)

statsd metrics

The following metrics have been added to the default list for statsd:

  • task/action/run/time
  • task/status/queue/count
  • task/status/updated/count
  • ingest/handoff/time

#18846 (id: 70763)

Cost-based autoscaling for streaming ingestion

Druid now supports cost-based autoscaling for streaming ingestion that optimizes task count by balancing lag reduction against resource efficiency. This autoscaling strategy uses the following formula:

totalCost = lagWeight × lagRecoveryTime + idleWeight × idlenessCost

which balances the time to clear the backlog against wasted compute time:

lagRecoveryTime = aggregateLag / (taskCount × avgProcessingRate), the time to clear the backlog
idlenessCost = taskCount × taskDuration × predictedIdleRatio, the wasted compute time
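
As an illustration only (not Druid's implementation), the cost formula can be sketched in Python; the names mirror the variables above:

```python
def total_cost(aggregate_lag, task_count, avg_processing_rate,
               task_duration, predicted_idle_ratio,
               lag_weight=1.0, idle_weight=1.0):
    """Sketch of the autoscaling cost formula above, for illustration only."""
    # Time to clear the backlog at the current task count
    lag_recovery_time = aggregate_lag / (task_count * avg_processing_rate)
    # Wasted compute time from idle task capacity
    idleness_cost = task_count * task_duration * predicted_idle_ratio
    return lag_weight * lag_recovery_time + idle_weight * idleness_cost
```

Adding tasks shrinks the lag term but grows the idleness term, so the strategy favors the task count that minimizes the total.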

#18819 (id: 70789) (id: 70629)

Record offset and partition

You can now ingest the record offset (offsetColumnName) and partition (partitionColumnName) using the KafkaInputFormat. Their default names are kafka.offset and kafka.partition, respectively.
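
For example, an input format spec might set these column names explicitly (a sketch; the surrounding valueFormat field is an assumption based on the existing kafka input format):

```json
{
  "type": "kafka",
  "valueFormat": { "type": "json" },
  "offsetColumnName": "kafka.offset",
  "partitionColumnName": "kafka.partition"
}
```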

#18757 (id: 70372)

Additional ingestion configurations

You can now use the following configs to control how your data gets ingested and stored:

  • maxInputFilesPerWorker: Controls the maximum number of input files or segments per worker.
  • maxPartitions: Controls the maximum number of output partitions for any single stage, which affects how many segments are generated during ingestion.
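
Assuming these behave like other ingestion tuning controls and are supplied in the query context (an assumption, not confirmed above), a request might include:

```json
{
  "context": {
    "maxInputFilesPerWorker": 1000,
    "maxPartitions": 5000
  }
}
```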

#18826 (id: 70654)

Numeric fields in nested columns

You can now choose between full dictionary-based indexing and nulls-only indexing for long/double fields in nested columns. Set NestedCommonFormatColumnFormatSpec to either LongFieldBitmapIndexEncoding or DoubleFieldBitmapIndexEncoding.

#18722 (id: 70192)

Improved indexSpec

You can now specify a format specification for each JSON column individually, which will override the indexSpec defined in the ingestion job. Additionally, a system-wide default indexSpec can be set using the druid.indexing.formats.indexSpec property.
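
For example, a system-wide default might look like the following JSON-valued property (illustrative values; confirm the exact indexSpec fields your version supports):

```properties
# Illustrative cluster-wide default indexSpec
druid.indexing.formats.indexSpec={"bitmap":{"type":"roaring"},"dimensionCompression":"lz4","metricCompression":"lz4"}
```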

#17762 #18638 (id: 69629) (id: 69305) (id: 69304)

Jetty 12

Druid now uses Jetty 12. Your deployment may be affected, specifically with regard to URI compliance and SNI host checks.

For more information, see the upgrade note for Jetty.

Dimension schemas

At ingestion time, dimension schemas in dimensionsSpec are now strictly validated against the allowed types. Previously, an invalid type fell back to a string dimension; now such values are rejected, and you must specify one of the allowed types. Omitting type still defaults to string, preserving backward compatibility.
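
For example, the following dimensionsSpec uses only valid forms; a misspelled type value such as "strng" would now be rejected rather than silently treated as a string dimension:

```json
{
  "dimensionsSpec": {
    "dimensions": [
      "country",
      { "type": "string", "name": "page" },
      { "type": "long", "name": "sessionLength" }
    ]
  }
}
```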

#18565 (id: 69260)

cgroup v2 support

cgroup v2 is now supported, and all cgroup metrics now emit cgroupversion to identify which version is being used.

The following monitors automatically switch to v2 if v2 is detected: CgroupCpuMonitor, CgroupCpuSetMonitor, CgroupDiskMonitor, MemoryMonitor. CpuAcctDeltaMonitor fails gracefully if v2 is detected.

Additionally, CgroupV2CpuMonitor now also emits cgroup/cpu/shares and cgroup/cpu/cores_quota.

#18705

Pivot highlights

Authentication token selection now prioritizes datasource access

Authentication token selection now filters tokens by datasource access before applying priority ranking. Previously, Pivot always selected the highest-priority token regardless of whether it included the requested datasource, which could block access to datasources you had valid permissions for.

If no token matches your requested datasources, the system falls back to selecting your highest-priority token.

(id: 70502)

Pivot changes

  • You can now add names and descriptions to Pivot API tokens (id: 70265)
  • You can now add banner messages to data cubes and dashboards (id: 69940)
  • Added tooltips to dimensions and measures in data cube view (id: 70271)

Druid changes

  • Added retries for HTTP 401 issues (#18771)(id: 70431)
  • Added query/bytes logging for failed queries (id: 70749)
  • Added maxRowsInMemory to replace rowsInMemory. rowsInMemory now functions as an alternate way to provide that config and is ignored if maxRowsInMemory is specified. Previously, only rowsInMemory existed #18832 (id: 70711)
  • Added a fingerprinting mechanism to track compaction states in a more efficient manner (id: 70754)
  • Added the supervisorId dimension with streaming task metrics (id: 70552)
  • Added the mostFragmentedFirst compaction policy to prioritize fragmented intervals (#18802)(id: 70553)
  • Added support for full parallelism in localSort for the MSQ task engine (id: 70403)
  • Security fixes (id: 71109) (id: 69542)
    • Fixed CVE-2026-23906
  • Changed the response of the /handoff API to no body instead of an empty JSON response (#18884) (id: 70875)
  • Changed metrics behavior so that task metrics get emitted on all task completions (id: 70417)
  • Fixed a logic error in policy application for compaction using the MSQ task engine (id: 70362)
  • Fixed an issue where a projection fails to match when the aggregator has a filter (id: 68534)
  • Fixed an issue with Coordinator-based compaction (#18812) (id: 70609)
  • Fixed an issue where segments weren't getting dropped (#18782) (id: 70498)
  • Fixed an issue where a limit to segments per chunk was enforced incorrectly (#18777) (id: 70445)
  • Fixed an issue where changing the query detaches it from the currently running execution (#18776) (id: 70443)
  • Fixed an issue where an Overlord that is giving up leadership erroneously kills indexing tasks (#18772) (id: 70433)
  • Fixed how task slots for MSQ compaction task are calculated (#18756) (id: 70371)
  • Fixed an issue with how the task action retry count is calculated (#18755) (id: 70369)
  • Fixed an issue where MSQ compaction tasks can fail if a policy enforcer is enabled (#18741) (id: 70291)
  • Fixed an issue in the SeekableStream supervisor autoscaler where scale-down operations would create duplicate supervisor history entries. The autoscaler now correctly waits for tasks to complete before attempting subsequent scale operations (#18715) (id: 70137)
  • Fixed an issue with SQL planning so that json_value returning a Boolean plans as long type output (#18698) (id: 70005)
  • Fixed an issue where a query returns an empty result set with virtual columns in projection filter (id: 69020)
  • Improved the web console:
    • Lookup values now use the default engine (id: 70854)
    • System table queries now explicitly use the 'native' engine (id: 70820)
    • Improved explore max time cancellation (id: 70701)
    • Fixed areas where supervisor_id and datasource were conflated (id: 70691)
    • Fixed inactive worker counting (id: 70571)
    • Improved ISO date parsing (#18724) (id: 70195)
  • Improved supervisors so that they can't kill tasks while the supervisor is stopping (#18767) (id: 70419)
  • Improved the lag-based autoscaler for streaming ingest (#18745) (id: 70402)
  • Improved compaction so that it identifies multi-value dimensions for dimension schemas that can produce them #18760 (id: 70381)
  • Improved lag-based autoscaler config persistence (#18745) (id: 70147)
  • Improved JSON ingestion so that Druid can compute JSON values directly from dictionary or index structures, allowing ingestion to skip persisting raw JSON data entirely. This reduces on-disk storage size #18589 (id: 69394)
  • Improved performance for the timeseries aggregator (id: 69170)
  • Updated ZooKeeper to 3.8.5 (id: 69186)

Imply Enterprise

  • Fixed an issue with upgrades for Imply Enterprise deployments running on ARM (id: 70401)
  • Imply Enterprise and Hybrid now support cgroup v2 (id: 68187)

Upgrade and downgrade notes

In addition to the upgrade and downgrade notes, review the deprecations page regularly to see if any features you use are impacted.

Minimum supported version for rolling upgrade

See Supported upgrade paths in the Lifecycle Policy documentation.

Segment formats

Starting in 2026.01 STS, Imply supports a new segment format, version 10. Prior versions of Imply don't support the new segment format. If you want to downgrade from 2026.01 STS to a prior release and have enabled the new segment format, you must first reindex any version 10 segments to version 9. After you reindex the data, you can proceed with the downgrade.

#18880

MSQ tasks during rolling upgrades

MSQ query_controller tasks can fail during a rolling update due to the addition of new counters that are not backward compatible with older versions. You can either retry any failed queries after the update completes, or set includeAllCounters to false in the query context for any MSQ jobs that must run during the rolling update.
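
For example, a job that must run during the rolling update can carry the flag in its query context:

```json
{
  "context": {
    "includeAllCounters": false
  }
}
```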

(#18761) (id: 70389)

Jetty 12 SNI host checks

By default, Jetty 12 strictly enforces the RFC 3986 URI format, a change from Jetty 9. As part of this update, a new server configuration option has been added: druid.server.http.uriCompliance. To avoid potential breaking changes in existing Druid deployments, this config defaults to LEGACY, which uses the more permissive URI format enforcement that Jetty 9 used. If your cluster does not require legacy compatibility, we recommend using the upstream Jetty default of RFC3986 in your Druid deployment. See the Jetty documentation for more information.

Jetty 12 servers perform strict SNI host checks when TLS is enabled. If the host name your client uses to connect does not match what is in the keystore, the server returns a 400 response, even if there is only one certificate in that keystore. This can affect some use cases, such as clients connecting over localhost. If this change breaks your deployment, you can opt out by setting druid.server.http.enforceStrictSNIHostChecking to false in the runtime.properties for some or all of your Druid services. Where possible, we recommend updating your client behavior to accommodate this change in Jetty rather than overriding the config.
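
As a runtime.properties sketch, the two Jetty-related options look like this (set each independently as your deployment requires):

```properties
# Use the upstream Jetty 12 default instead of LEGACY URI handling
druid.server.http.uriCompliance=RFC3986
# Opt out of strict SNI host checks (prefer fixing client behavior instead)
druid.server.http.enforceStrictSNIHostChecking=false
```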

#18424 #18623 (id: 69549)

Async download extension

If you load the imply-sql-async extension, you must remove this extension before you upgrade. This extension was used for the old async download. Support for that feature was dropped in the 2025.01 release.

Deprecation notices

For a more complete list of deprecations and their planned removal dates, see Deprecations.

Hadoop-based ingestion

Hadoop-based ingestion is scheduled for removal in 2026. Migrate to SQL-based ingestion.

As part of the deprecation, you must now explicitly opt-in to using the deprecated index_hadoop task type. To opt-in, set druid.indexer.task.allowHadoopTaskExecution to true in your common.runtime.properties file. For more information, see #18239.
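
The opt-in is a single property in common.runtime.properties:

```properties
# Explicit opt-in required for the deprecated index_hadoop task type
druid.indexer.task.allowHadoopTaskExecution=true
```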

Some segment loading configs deprecated

The following segment related configs are now deprecated and will be removed in future releases:

  • replicationThrottleLimit
  • useRoundRobinSegmentAssignment
  • maxNonPrimaryReplicantsToLoad
  • decommissioningMaxPercentOfMaxSegmentsToMove

Use smartSegmentLoading mode instead, which calculates values for these variables automatically.

End of support

ZooKeeper-based task discovery

Use HTTP-based task discovery instead, which has been the default since 2022.

ioConfig.inputSource.type.azure storage schema

Update your ingestion specs to use the azureStorage storage schema, which provides more capabilities.