The Imply release notes provide information on features, improvements, and bug fixes in each release. Be sure to read these release notes carefully before upgrading to the 2021.01.2 release.
New to Imply? Get started with an Imply Cloud Free Trial or start a self-hosted trial at Get started with Imply!
With Imply Cloud, the Imply team manages your clusters in AWS, while you control the infrastructure and own the data. With self-hosted Imply, you can run Imply on *NIX systems in your own environment or cloud provider.
Highlights of this release
About this release
2021.01 is a Long Term Support release. As of this release, Imply releases fall into one of two categories: Long Term Support (LTS) and Short Term Support (STS).
Imply Product Long Term Support (LTS) releases are complete, stable versions of our products. They are subject to general bug fixes for one year and to security bug fixes for two years after release, but do not receive feature enhancements. They are best suited to production environments that require stable operation and do not require the latest features.
Releases that are not subject to long term support — that is, Short Term Support (STS) releases — include the latest features, including experimental features. STS releases occur on a regular basis. They are not subject to bug fixes, but may receive critical security patches. In general, to receive the latest bug fixes, you should upgrade to the latest monthly release.
Dynamic coordinator configuration to limit the number of segments considered for segment balancing: You can set `percentOfSegmentsToConsiderPerMove` to limit the number of segments considered when picking a candidate segment to move. The candidates are searched up to `maxSegmentsToMove * 2` times. This new configuration keeps Druid from iterating through all available segments, speeding up the segment balancing process, especially if your cluster has a large number of available segments. See Druid Changes for other Druid updates.
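As an illustration, a coordinator dynamic configuration using this setting might look like the following sketch (the values shown are placeholders, not recommendations); it can be submitted through the Coordinator dynamic configuration API (`POST /druid/coordinator/v1/config`):

```json
{
  "maxSegmentsToMove": 5,
  "percentOfSegmentsToConsiderPerMove": 25
}
```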
`status` and `selfDiscovered` endpoints for Indexers: The Indexer now supports `status` and `selfDiscovered` endpoints. See Processor information APIs for details.
Improved handling for missing arguments: Expression processing can now be vectorized when inputs are missing, for example a non-existent column. When an argument is missing in an expression, Druid can now infer the proper result type based on the non-null arguments. For instance, in `longColumn + nonExistentColumn`, `nonExistentColumn` is treated as `(long) 0` instead of `(double) 0.0`. Finally, in default null handling mode, math functions produce proper output by treating missing arguments as zeros.
Zero period for `TIMESTAMPADD`: The `TIMESTAMPADD` function now allows a zero period. This functionality is required by some BI tools such as Tableau.
Support for legacy Kafka versions: Druid now supports Apache Kafka versions older than 0.11. To read from an old version of Kafka, set `isolation.level` to `read_uncommitted` in `consumerProperties`. Only version 0.10.2.1 has been tested as of this release. See Kafka supervisor configurations for details.
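As a sketch, the relevant portion of a Kafka supervisor spec might look like the following (topic name and broker address are placeholders):

```json
{
  "type": "kafka",
  "ioConfig": {
    "topic": "example-topic",
    "consumerProperties": {
      "bootstrap.servers": "kafka01.example.com:9092",
      "isolation.level": "read_uncommitted"
    }
  }
}
```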
Native re-ingestion is less memory intensive: Parallel tasks now sort segments by ID before assigning them to subtasks. This sorting minimizes the number of time chunks for each subtask to handle. As a result, each subtask is expected to use less memory, especially when a single Parallel task is issued to re-ingest segments covering a long time period.
Partitioning information in the web console: The web console now shows datasource partitioning information on the new Segment granularity and Partitioning columns.
Column order in the Schema table now matches the `dimensionsSpec`.
Query timeout metric: A new metric reports the number of timed-out queries. Previously, timed-out queries were treated as interrupted and included in `query/interrupted/count`. See Changed HTTP status codes for query errors for more details.
`query/timeout/count`: the number of timed-out queries during the emission period
Additional changes in 2021.01 LTS
- Upgrade node version to 14
- Fix no results when searching users list breaks the page
- Fix segment selection popup is not dismissed when entering edit mode on a dashboard
- Add a long-typed column to the `sys.servers` table to indicate whether or not the server is the leader (#10680)
- Add support for the HTTPS protocol to
- Add a `DynamicConfigProvider` to make Kafka consumer properties extensible (#10309)
- Fix Druid web console can't handle a large long value (#10741)
- Fix potential deadlock in batch ingestion (#10736)
- Fix a JSON display bug in the web console (#10710)
- Exception in Coordinator if upgrading to Druid packages with
- Fixes and tests related to the Indexer process (#10631)
- Add a `maxColumnsToMerge` ingestion parameter (#10689)
- Fix the Lookup management UI in the unified console not giving the correct config example (#10629)
- Fix subtotal queries give incorrect results if query has a limit spec (#10743)
- Fix post aggregators do not work when query has subtotals (#10653)
- Add shuffle metrics for batch ingestion, providing shuffle statistics for MiddleManagers and Indexers (#10359)
- Add Coordinator duty runtime metrics
- Security fix for CVE-2021-25646
Upgrading from previous releases
When upgrading from earlier versions, see the release notes for all relevant intermediate versions.
Also note the following considerations.
- Improved HTTP status codes for query errors: Before this release, Druid returned "internal error (500)" for most query errors. Now Druid returns different error codes based on their cause. The following table lists the errors whose corresponding codes have changed:
| Description | Old code | New code |
| --- | --- | --- |
| Query planning failed | 500 | 400 |
| Query execution didn't finish within the timeout | 500 | 504 |
| Query requested more resources than the configured threshold | 500 | 400 |
| Query failed to schedule because merge buffers were unavailable at submission time | 500 | 429 |
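A minimal client-side sketch of how the changed codes might be handled; the category names and the retry guidance in the comments are illustrative assumptions, not part of the release:

```python
# Sketch: classify the Druid query error status codes from the table above.
# Category names and the handling notes in comments are illustrative.

def classify_query_error(status: int) -> str:
    """Return a coarse handling category for an HTTP status from a Druid query."""
    if 200 <= status < 300:
        return "ok"
    if status == 400:
        return "bad-request"   # planning failure or resource limit exceeded: fix the query
    if status == 429:
        return "capacity"      # no merge buffers available: back off and resubmit
    if status == 504:
        return "timeout"       # query did not finish within the timeout
    if status == 500:
        return "internal"      # remaining internal errors
    return "other"
```

A client written against the old behavior that retried every 500 indiscriminately can now distinguish unrecoverable 400s from retryable 429s and 504s.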
Query interrupted metric: `query/interrupted/count` no longer counts queries that timed out. These queries are now counted by `query/timeout/count`.
Context dimension in query metrics: `context` is now a default dimension emitted for all query metrics. `context` is a JSON-formatted string containing the query context for the query that the emitted metric refers to. The addition of this dimension may alter some metrics emitted by Druid. You should plan to handle the new `context` dimension in your metrics pipeline. Since the dimension is a JSON-formatted string, a common solution is to parse the dimension and either flatten it or extract the bits you want, discarding the full JSON blob.
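A minimal sketch of the flattening approach described above, assuming the metric event arrives as a dict whose `context` field is the JSON-formatted string (the event shape and field names here are illustrative):

```python
import json

def flatten_context(event: dict, prefix: str = "context.") -> dict:
    """Parse the JSON-formatted 'context' dimension of a Druid metric event
    and flatten its top-level keys into prefixed dimensions, dropping the blob."""
    out = dict(event)
    raw = out.pop("context", None)
    if raw:
        try:
            for key, value in json.loads(raw).items():
                out[prefix + key] = value
        except (json.JSONDecodeError, AttributeError):
            out["context"] = raw  # keep the original value if it isn't a JSON object
    return out

# Illustrative metric event with a JSON-formatted context string
event = {
    "metric": "query/time",
    "value": 120,
    "context": "{\"queryId\": \"abc\", \"priority\": 1}",
}
flat = flatten_context(event)
```

The same idea applies if you prefer to extract only specific keys (such as `queryId`) instead of flattening everything.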
Consistent serialization format and column naming convention for the `sys.segments` table: All columns in the `sys.segments` table are now serialized in JSON format to make them consistent with other system tables. Column names now use the same "snake case" convention.
Docker deployment not supported
Imply no longer recommends nor supports Docker-based deployments for new installations. Existing Docker-based deployments are supported through July 1, 2021. If you are currently using a Docker-based deployment, you should migrate to one of the following deployment modes before that date:
Changes in 2021.01-1
- Corrects a compatibility issue for on-prem Imply deployments whose Pivot state store databases do not support the recommended TLS 1.2 by re-enabling support for lower TLS versions.
Changes in 2021.01-2
- Fix Postgres being broken as a session or state store due to an incompatibility introduced by a library upgrade
Changes in 2021.01.1
- Fix filters are not respected when compare or multi-range time selections are used in Pivot SQL
- Fix alert setup form always defines "previous period" as "previous day"
- Fix alert conditions with percent delta cannot be inputted correctly
- Fix default values on global filters are not applied when loading a dashboard
- Fix Pivot SQL timeout after 60000ms regardless of configured timeout
- Fix global dashboard filters are not updated correctly
- Fix error shown when adding a data cube to favorites
- Fix error shown when attempting to change user profile properties
- Fix reset password link does not include
- Fix CSV and Excel exports on time-series visualizations have incorrect column headers
- Fix lookups with empty strings to de-serialize correctly
- Fix an issue where the Broker starts before SQL metadata view is fully initialized even when
- Improve query execution to retain order of AND, OR filter children
- Fix cardinality estimation to calculate numShards independently of partition dimensions
- Better handling of Kinesis errors and message gap metric for Kinesis ingestion
- Fix `'other' must be a different instance from 'this'` error for some vectorized queries using
Changes in LTS-2021.01.2
- Fix public users API not supporting the `clientUrl` property on the request payload
- Fix crosstab API routes should be properly gated by license and feature flags
- Fix wrong query created when both filter by measure and calculation of change applied
- Fix switching from heatmap to line chart causes an error when both splits are numeric
- Fix data exports can have incorrect column headers when transformed measures are present
- Fix alert comparison period is not correctly preserved by link from alert occurrence to data cube view
- Fix CSV/TSV exports incorrectly show some column values as "undefined"
- Security updates
- Upgrade jetty to latest version
- Fix CompactionTask should throw exception on conflicting segmentGranularity
- Auto compaction can fail to find segments for compaction when segment versions are mixed in the same time chunk based on new segment granularity (#11000)
- Fix runtime error when IndexedTableJoinMatcher matches long selector to unique string index. (#10942)
- Granularity: Introduce primitive-typed bucketStart, increment methods. (#10904)
- Web console: remove namespace prop that does not exist from JDBC lookup (#10888)
- CsvInputFormat: Create a parser per InputEntityReader. (#10923)
- Web console: fix service view actions when grouping (#10898)
- Fix Kafka ingestion failing and halting if the Kafka topic encounters empty/null rows (#10962)
- Fix `java.lang.RuntimeException: Error while applying rule DruidQueryRule(AGGREGATE)` after upgrading from Imply 3.2.9 to Imply 4.0.4 (#10950)
- Fix adding new types of security Resources can break backwards compatibility (#10896)
- Fix maxBytesInMemory (for heap overhead of all sinks and hydrants) check is done on the next persist (#10891)
- Add Druid JDBC handler config for minimum number of rows per frame (#10880)
- Avoid deletion of load/drop entry from CuratorLoadQueuePeon in case of load timeout (#10213)
- Block protocols outside of HTTP and HTTPS for inputSource (#10830)
- Fix OvershadowableManager inefficiently handles large numbers of segments in a single time chunk (#10892)
- Fix Kinesis resharding causes EOS messages to be logged as "Events thrown away" in metrics (#10976)
- Fix Historicals can use more disk than configured for the segment cache (#10884)