
Imply Enterprise and Hybrid release notes

Read all release notes carefully, especially the Upgrade and downgrade notes, before upgrading. The following release notes provide information on features, improvements, and bug fixes up to Imply STS release 2024.02.

For information on the LTS release, see the LTS release notes.

If you are upgrading by more than one version, read the intermediate release notes too.

The following end-of-support dates apply in 2023:

  • On January 26, 2023, Imply 2021.01 LTS reached EOL. This means that the 2021.01 LTS release line will no longer receive any patches, including security updates. Imply recommends that you upgrade to the latest LTS or STS release.
  • On January 31, 2023, Imply 2022.01 LTS ended general support status and is eligible only for security support.

For more information, see Lifecycle Policy.

See Previous versions for information on older releases.

Imply evaluation

New to Imply? Get started with an Imply Hybrid (formerly Imply Cloud) Free Trial or start a self-hosted trial at Get started with Imply!

With Imply Hybrid, the Imply team manages your clusters in AWS, while you control the infrastructure and own the data. With self-hosted Imply, you can run Imply on *NIX systems in your own environment or cloud provider.

Imply Enterprise

If you run Imply Enterprise, see Imply product releases & downloads to access the Imply Enterprise distribution. When prompted, log on to Zendesk with your Imply customer credentials.

Changes in 2024.02

Pivot highlights

New overall visualization (beta)

A new overall visualization includes a trend line and an updated properties panel.

You can enable this beta feature through the SDK based visualizations feature flag. Once enabled, the beta overall visualization replaces the standard overall visualization. See Visualizations reference for more information. (ids: 40562, 41090)

Druid highlights

Improved concurrent append and replace

You no longer need to manually specify the task lock type for concurrent append and replace using the taskLockType context parameter. Instead, Druid can determine it for you. You can either use a context parameter or a cluster-wide config:

  • Use the context parameter "useConcurrentLocks": true for specific JSON-based batch or streaming ingestion tasks and for datasources; see the sketch below. Datasources need the parameter in situations such as when you want to be able to append data to a datasource while compaction is running on it.
  • Set useConcurrentLocks to true in the cluster-wide config druid.indexer.task.default.context.

(#1568) (id: 41083)
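
For example, a minimal sketch of a JSON-based batch ingestion task payload with the context parameter set (the spec contents are placeholders, not part of the release):

{
  "type": "index_parallel",
  "spec": { ... },
  "context": {
    "useConcurrentLocks": true
  }
}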

Range support for window functions

Window functions now support ranges where both endpoints are unbounded or are the current row. Ranges work in strict mode, which means that Druid will fail queries that aren't supported. You can turn off strict mode for ranges by setting the context parameter windowingStrictValidation to false.

The following example shows a window expression with RANGE frame specifications:

(ORDER BY c)
(ORDER BY c RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
(ORDER BY c RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
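
For instance, a complete query using one of these frames might look like the following sketch (the wikipedia datasource and its columns are illustrative; window functions may also require the enableWindowing context parameter in this release):

SELECT
  channel,
  __time,
  SUM(added) OVER (
    PARTITION BY channel
    ORDER BY __time
    RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
  ) AS running_added
FROM wikipedia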

(#15746) (#15365) (id: 41623)

Ingest from multiple Azure accounts

Azure as an ingestion source now supports ingesting data from multiple storage accounts that are specified in druid.azure.account. To do this, use the new azureStorage schema instead of the previous azure schema. For example:

    "ioConfig": {
"type": "index_parallel",
"inputSource": {
"type": "azureStorage",
"objectGlob": "**.json",
"uris": ["azureStorage://storageAccount/container/prefix1/file.json", "azureStorage://storageAccount/container/prefix2/file2.json"]
},
"inputFormat": {
"type": "json"
},
...
},
...

(#15630) (id: 41428)

Improved performance for real-time queries

If the query context parameter bySegment is set to false for real-time queries, the way in which results are merged has been improved to be more efficient. There's now only a single layer of merging, just like for Historical processes. As part of this change, segment metrics, like query/segment/time, are now per-FireHydrant instead of per-Sink.

If you set bySegment to true, the old two-layer merging behavior is preserved.

(#15757) (id: 41406)

Pivot changes

  • Added maxNumDownloadTasks to the Pivot server configuration file to optionally set the maximum number of tasks to assign to async downloads. See Pivot server config for more information (id: 41092)
  • Added an option to "Go to URL" for URL dimensions in the flat table visualization (id: 41283)
  • Fixed an error that appeared when duplicating a dashboard from the header bar (id: 41537)
  • Fixed a problem with filtering on a dimension with Set/String type that contains nulls (id: 41459)
  • Fixed an issue where async downloads didn't include filters by measure (id: 41435)
  • Fixed records table visualization crashing when scrolling to the bottom in a dashboard tile (id: 41165)
  • Fixed an issue with the records visualization not supporting async download (id: 41289)
  • Fixed dimensions with IDs that contain periods showing as "undefined" in records table visualization (id: 41009)
  • Fixed Pivot 2 visualizations crashing on data cubes with no dimensions (id: 40998)
  • Fixed inability to set "Greater than 0" measure filter in flat table visualization (id: 40985)
  • Fixed a problem with visualization URLs not updating after a measure is deleted from a data cube (id: 40565)
  • Fixed "overall" values rendering incorrectly in line chart visualization when they should be hidden (id: 40501)
  • Fixed incorrect time bucket label for America/Mexico_City timezone in DST (id: 39749)
  • Fixed inability to scroll pinned dimensions list (id: 39647)
  • Fixed discrepancies when applying custom UI colors (id: 40266)
  • Improved handling of time filters in dashboard tiles (id: 41171)
  • Improved measures in tables visualization to show nulls if they contain no data (id: 40665)
  • Improved the display of comparison values in visualizations by adding the ability to sort by delta and percentage (id: 38539)

Druid changes

  • Added QueryLifecycle#authorize for the gRPC query extension (#15816) (id: 41725)
  • Added nested array index support and fixed some related issues (#15752) (id: 41724)
  • Added support for array types in the web console ingestion wizards (#15588) (id: 41613)
  • Added SQUARE_ROOT function to the timeseries extension: MAP_TIMESERIES(timeseries, 'sqrt(value)') (id: 41516)
  • Added null value index wiring for nested columns (#15687) (id: 41475)
  • Added support to the web console for sorting the segment table on start and end when grouped (#15720) (id: 41438)
  • Added a tile to the web console for the new Azure input source (id: 41317)
  • Added ImmutableLookupMap for static lookups (#15675) (id: 41268)
  • Added caching of value selectors in RowBasedColumnSelectorFactory (#15615) (id: 41265)
  • Added faster k-way merging using tournament trees and 8-byte key strides (#15661) (id: 40987)
  • Added CONCAT flattening and filter decomposition (#15634) (id: 40986)
  • Added partition boosting for INSERT with GROUP BY queries to deal with skewed partitions (#15474) (id: 15015)
  • Added SQL compatibility for numeric first and last column types. The web console also provides an option for first and last aggregation (#15607) (id: 40615)
  • Added differentiation between null and empty strings in SerializablePairStringLong serde (id: 40401)
  • Changed IncrementalIndex#add so that it is no longer thread-safe, which improves performance (#15697) (id: 41260)
  • Fixed KafkaInputFormat parsing incoming JSON as newline-delimited (as if it were a batch ingest) rather than as a whole entity (as is typical for streaming ingest) (#15692) (id: 41261)
  • Improved segment locking behavior so that the RetrieveSegmentsToReplaceAction is no longer needed (#15699) (id: 41484)
  • Disabled eager initialization for non-query connection requests (#15751) (id: 41407)
  • Enabled ArrayListRowsAndColumns to StorageAdapter conversion (#15735) (id: 41616)
  • Enabled query request queuing by default when total laning is turned on (#15440) (id: 40807)
  • Fixed web console forcing waitUntilSegmentLoad to true even if the user sets it to false (#15781) (id: 41614)
  • Fixed CVEs (#15814) (id: 41612)
  • Fixed interpolated exception message in InvalidNullByteFault (#15804) (id: 41546)
  • Fixed extractionFns on number-wrapping dimension selectors (#15761) (id: 41443)
  • Fixed the summary iterator in the grouping engine (#15658) (id: 41264)
  • Fixed incorrect scale when reading decimal from parquet (#15715) (id: 41263)
  • Fixed a rendering issue for disabled workers in the web console (#15712) (id: 41259)
  • Fixed issues so that the Kafka emitter now runs all scheduled callables. The emitter intelligently provisions threads to make sure there are no wasted threads and all callables can run (#15719) (id: 41258)
  • Fixed MSQ task engine intermediate files not being immediately cleaned up in Azure (id: 41243)
  • Fixed audit log entries not appearing for "Mark as used all segments" actions (id: 41080)
  • Fixed an NPE that could occur if the estimator passed to StandardDeviationPostAggregator is null (postAggregations.estimator: null) (#15660) (id: 41003)
  • Fixed reverse pull-up lookups in the SQL planner (#15626) (id: 41002)
  • Fixed compaction getting stuck on intervals with tombstones (#15676) (id: 41001)
  • Fixed the result cache causing an exception when a sketch is stored in the cache (#15654) (id: 40885)
  • Fixed concurrent append and replace options in the web console (#15649) (id: 40868)
  • Fixed an issue that blocked queries issued from the small Run buttons (from inside larger queries) from being modified from the table actions (#15779) (id: 41515)
  • Improved segment killing performance for Azure (#15770) (id: 38567)
  • Improved the performance of the druid-basic-security extension (#15648) (id: 40884)
  • Improved lookups to register first lookup immediately, regardless of the cache status (#15598) (id: 40863)
  • Improved numerical first and last aggregators so that they work for SQL-based ingestion too (id: 40996)
  • Improved parsing speed for list-based input rows (#15681) (id: 41262)
  • Improved error messages for DATE_TRUNC operators (#15759) (id: 41471)
  • Improved the web console to support using file inputs instead of text inputs for the Load query detail archive dialog (#15632) (id: 40941)
  • Changed the web console to use the new azureStorage input type instead of the azure storage type for ingesting from Azure (#15820) (id: 41723)
  • Changed the cryptographic salt size that Druid uses to 128 bits so that it is FIPS compliant (#15758) (id: 41405)

Changes in 2024.01.3

Druid changes

  • Fixed an issue where DataSketches HLL sketches would erroneously be considered empty. For details, see the following Imply Knowledge Base article (id: 41916)

Changes in 2024.01.2

Druid changes

  • Fixed an issue where an exception occurs when queries use filters on TIME_FLOOR (#15778)

Changes in 2024.01.1

Druid changes

  • Fixed an issue with the default value for the inSubQueryThreshold parameter, which resulted in slower than expected queries. The default value for it is now 2147483647 (up from 20) (#15688) (id: 40814)

Changes in 2024.01

Pivot highlights

Pivot now runs natively on macOS ARM systems

We encourage on-prem customers to opt in to an updated distribution format for Pivot by setting an environment variable on your Pivot nodes: IMPLY_PIVOT_NOPKG=1. This format will become the default later in 2024.

This distribution format enables Pivot to target current and future LTS versions of Node.js and provides a compatibility option for customers who are unable to upgrade from legacy Linux distributions such as RHEL 7, CentOS 7, and Ubuntu 18.04. (id: 40447)

Druid highlights

SQL PIVOT and UNPIVOT (beta)

You can now use the SQL PIVOT and UNPIVOT operators to turn rows into columns and column values into rows respectively. (id: 37598)

The PIVOT operator carries out an aggregation and transforms rows into columns in the output. The following is the general syntax for the PIVOT operator:

PIVOT (aggregation_function(column_to_aggregate)
FOR column_with_values_to_pivot
IN (pivoted_column1 [, pivoted_column2 ...])
)
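
For example, a sketch that pivots a hypothetical sales datasource so each country's summed revenue becomes its own column (the table, column, and alias names are illustrative):

SELECT *
FROM sales
PIVOT (SUM(revenue)
  FOR country
  IN ('US' AS revenue_us, 'UK' AS revenue_uk)
)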

The UNPIVOT operator transforms existing column values into rows. The following is the general syntax for the UNPIVOT operator:

UNPIVOT (values_column 
FOR names_column
IN (unpivoted_column1 [, unpivoted_column2 ... ])
)
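
Continuing the hypothetical example above, a sketch that turns those per-country columns back into rows:

SELECT *
FROM pivoted_sales
UNPIVOT (revenue
  FOR country
  IN (revenue_us, revenue_uk)
)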

New JSON_QUERY_ARRAY function

The JSON_QUERY_ARRAY function is similar to JSON_QUERY except the return type is always ARRAY<COMPLEX<json>> instead of COMPLEX<json>. Essentially, this function allows extracting arrays of objects from nested data and performing operations such as UNNEST, ARRAY_LENGTH, ARRAY_SLICE, or any other available ARRAY operations. (#15521) (id: 40335)
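
For example, a sketch that unnests an array of objects extracted from a nested column (the events datasource and its payload column are hypothetical):

SELECT t.item
FROM events
CROSS JOIN UNNEST(JSON_QUERY_ARRAY(payload, '$.items')) AS t(item)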

Changes to native equals filter

Native query equals filter on mixed type 'auto' columns that contain arrays must now be filtered as their presenting type. So if any rows are arrays (the segment metadata and information_schema reports the type as some array type), then the native queries must also filter as if they are some array type. This does not impact SQL, which already has this limitation due to how the type presents itself. This only impacts mixed type 'auto' columns, which contain both scalars and arrays. (#15503) (id: 40328)

Support for GCS for SQL-based ingestion

You can now use Google Cloud Storage (GCS) as durable storage for SQL-based ingestion and queries from deep storage. (#15398) (id: 35053)

Improved INNER joins

Druid now supports arbitrary join conditions for INNER joins. For INNER joins, Druid looks at the join condition, and any sub-conditions that cannot be evaluated efficiently as part of the join are converted to a post-join filter. With this feature, you can do inequality joins that were not possible before. (#15302) (id: 37564)
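
For example, in the following sketch the equality sub-condition is evaluated as part of the join and the inequality becomes a post-join filter (the orders and price_history datasources and their columns are hypothetical):

SELECT o.order_id, p.price
FROM orders o
INNER JOIN price_history p
  ON o.product_id = p.product_id
  AND o.__time >= p.effective_from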

Pivot changes

  • Added Pivot server configuration property forceNoRedirect which forces the Pivot UI to always render the splash page without automatic redirection (id: 38986)
  • Added the ability to sort a data cube by the first column, by clicking the column header (id: 31363)
  • Fixed percent of root causing downloads from deep storage to fail (id: 40673)
  • Fixed incorrect sort order in deep storage downloads (id: 40374)
  • Fixed flat table visualization with absolute time filter using "Latest day" when accessed with link (id: 40339)
  • Fixed functional and display issues in the overall visualization (id: 40271)
  • Fixed back button not working correctly in async downloads dialog (id: 40265)
  • Improved query generation in Pivot and Plywood to use the 2-value IS NOT TRUE version of the NOT operator (id: 40638)
  • Improved data cube measure preview by providing a manual override prompt when the preview fails (id: 38763)
  • Updated the names of the async downloads feature flags to Async Downloads (Deprecated) and Async Downloads, New Engine, 2023 (Alpha) (id: 40525)

Druid changes

  • Added experimental support for first/last data types for double/float/long during native and SQL-based ingestion (#14462) (id: 37231)
  • Added a new config druid.audit.manager.type, which can take the values log or sql (default). This allows audited events to either be logged or persisted in the metadata store (default behavior) (#15480) (id: 37696)
  • Added a new config druid.audit.manager.logLevel, which sets the log level of audit events and can take the values DEBUG, INFO (default), or WARN (#15480) (id: 37696)
  • Added array column type support to EXTEND operator (#15458) (id: 40286)
  • Changed what happens when query scheduler threads are less than server HTTP threads. When that happens, total laning is enforced, and some HTTP threads are reserved for non-query requests, such as health checks. Previously, any request that exceeded lane capacity was rejected. Now, excess requests are queued with a timeout equal to MIN(Integer.MAX_VALUE, druid.server.http.maxQueryTimeout). If the value is negative, requests are queued forever. (#15440) (id: 40776)
  • Changed the ARRAY_TO_MV function to support expression inputs (#15528) (id: 40358)
  • Changed the auto column indexer so that columns that contain only empty arrays or null-containing arrays are stored as ARRAY<LONG> instead of COMPLEX<json> (#15505) (id: 40313)
  • Fixed an issue where null and empty strings were treated equally, and the return value was always null (#15525) (id: 40401)
  • Fixed an issue where lookups fail with an error related to failing to construct FilteredAggregatorFactory (#15526) (id: 40296)
  • Fixed issues related to null handling and vector expression processors (#15587) (id: 40545)
  • Fixed a bug in the web console's ingestion spec to SQL-based ingestion query converter (#15627) (id: 40795)
  • Fixed redundant expansion in SearchOperatorConversion (#15625) (id: 40768)
  • Fixed an issue where some ARRAY types were incorrectly treated as COMPLEX types (#15543) (id: 40514)
  • Fixed an NPE with virtual expressions and UNNEST (#15513) (id: 40348)
  • Fixed an issue where the window function minimum aggregated nulls as 0 (#15371) (id: 40327)
  • Fixed an issue where null filters on datasources with range partitioning could lead to excessive segment pruning, leading to missed results (#15500) (id: 40288)
  • Fixed an issue with window functions where a string cannot be cast when creating HLL sketches (#15465) (id: 39859)
  • Fixed a bug in segment allocation that can potentially cause loss of appended data when running interleaved append and replace tasks. (#15459) (id: 39718)
  • Improved filtering performance by adding support for using underlying column index for ExpressionVirtualColumn (#15585) (#15633) (id: 39668) (id: 40794)
  • Improved how three-valued logic is handled (#15629) (id: 40797)
  • Improved the Broker to be able to use catalog for datasource schemas for SQL queries (#15469) (id: 40796)
  • Improved the Druid audit system to log when a supervisor is created or updated (#15636) (id: 40774)
  • Improved the connection between Brokers and Coordinators with Historical and real-time processes (#15596) (id: 40763)
  • Improved how segment granularity is handled when there is a conflict and the requested segment granularity can't be allocated. Day granularity is now considered after month. Previously, week was used, but weeks do not align with months perfectly. You can still explicitly request week granularity. (#15589) (id: 40701)
  • Improved polling in segment allocation queue to improve efficiency and prevent race conditions (#15590) (id: 40690)
  • Improved the web console to detect EXPLAIN PLAN queries and be able to run them individually (#15570) (id: 40508)
  • Improved the efficiency of queries by reducing the number of expression objects created during evaluation (#15552) (id: 40495)
  • Improved the error message you get if you try to use INSERT INTO and OVERWRITE syntax (id: 37790)
  • Improved the JDBC lookup dialog in the web console to include Jitter seconds, Load timeout seconds, and Max heap percentage options (#15472) (id: 40246)
  • Improved compaction so that it skips datasources with partial eternity segments, which could result in memory pressure on the Coordinator (#15542) (id: 40075)
  • Improved Kinesis integration so that only checkpoints for partitions with unavailable sequence numbers are reset (#15338) (id: 29788)
  • Improved the performance of the following:
    • how Druid generates queries from Calcite plans
    • the internal SEARCH operator used by other functions
    • the COALESCE function (#15609) (id: 40672) (#15623) (id: 40691)
  • Removed the 'auto' strategy from search queries. Specifying 'auto' is now equivalent to specifying useIndexes (#15550) (id: 40460)

Clarity changes

  • Updated subsetFormula for server cube to accept null values (id: 40254)

Platform changes

  • Added support for JVM memory metrics in GKE ZooKeeper deployments (id: 38855)

Upgrade and downgrade notes

Minimum supported version for rolling upgrade

See "Supported upgrade paths" in the Lifecycle Policy documentation.

Segment metrics for real-time queries

Starting in 2024.02 STS, segment metrics for real-time queries (such as query/segment/time) are per-FireHydrant instead of per-Sink when the context parameter bySegment is set to false, which is common for most use cases.

GroupBy queries that use the MSQ task engine during upgrades

Beginning in 2024.02 STS, the performance and behavior of segment partitioning have been improved. GroupBy queries may fail during an upgrade if some workers are on an older version and others are on a more recent version.

Changes to native equals filter

Beginning in 2024.01 STS, the native query equals filter on mixed type 'auto' columns that contain arrays must now be filtered as their presenting type. So if any rows are arrays (the segment metadata and information_schema reports the type as some array type), then the native queries must also filter as if they are some array type. This does not impact SQL, which already has this limitation due to how the type presents itself. This only impacts mixed type 'auto' columns, which contain both scalars and arrays.

Imply Hybrid MySQL upgrade

Imply Hybrid previously used MySQL 5.7 by default. New clusters use MySQL 8 by default. If you have an existing cluster, you need to upgrade the MySQL version, since Amazon RDS support for MySQL 5.7 is scheduled to end on February 29, 2024. Although you can opt for extended support from Amazon, you can instead use Imply Hybrid Manager to upgrade your MySQL instance to MySQL 8.

The upgrade should have little to no impact on your queries, but it does require a reconnection to the database. The process can take an hour, and services reconnect to the database during the upgrade.

In preparation for the upgrade, you need to grant certain permissions to the Cloud Manager IAM role by applying the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "rds:CreateBlueGreenDeployment",
        "rds:PromoteReadReplica"
      ],
      "Resource": [
        "arn:aws:rds:*:*:pg:*",
        "arn:aws:rds:*:*:deployment:*",
        "arn:aws:rds:*:*:*:imply-*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "rds:AddTagsToResource",
        "rds:CreateDBInstanceReadReplica",
        "rds:DeleteBlueGreenDeployment",
        "rds:DescribeBlueGreenDeployments",
        "rds:SwitchoverBlueGreenDeployment"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}

After you grant the permissions, click Apply changes for Amazon RDS MySQL Update on the Overview page of Imply Hybrid Manager.

Three-valued logic

caution

The legacy two-valued logic and the corresponding properties that support it will be removed in the December 2024 STS and January 2025 LTS. The SQL compatible three-valued logic will be the only option.

Update your queries and downstream apps prior to these releases.

SQL standard three-valued logic introduced in 2023.11 primarily affects filters using the logical NOT operation on columns with NULL values. This applies to both query and ingestion time filtering.

The following example illustrates the old and new behavior. Consider the filter x <> 'some value', which selects rows where x is not equal to 'some value'. Previously, Druid included all rows not matching x = 'some value', including null values. The new behavior follows the SQL standard and matches only rows that have a value and whose value is not equal to 'some value'. Null values are excluded from the results.
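
As a sketch, assuming a hypothetical table t with a nullable string column x:

SELECT COUNT(*)
FROM t
WHERE x <> 'some value'
-- 2023.11 STS and later: rows where x IS NULL are excluded.
-- To keep matching nulls, rewrite the filter as:
--   x <> 'some value' OR x IS NULL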


Three-valued logic is only enabled if you accept the following default values:

druid.generic.useDefaultValueForNull=false
druid.expressions.useStrictBooleans=true
druid.generic.useThreeValueLogicForNativeFilters=true

SQL compatibility

caution

The legacy behavior that is not compatible with standard ANSI SQL and the corresponding properties will be removed in the December 2024 STS and January 2025 LTS releases. The SQL compatible behavior introduced in the 2023.09 STS will be the only behavior available.

Update your queries and any downstream apps prior to these releases.

Starting with 2023.09 STS, the default way Druid treats nulls and booleans has changed.

For nulls, Druid now differentiates between an empty string ('') and a record with no data as well as between an empty numerical record and 0.

You can revert to the previous behavior by setting druid.generic.useDefaultValueForNull to true. This property affects both storage and querying, and must be set on all Druid service types to be available at both ingestion time and query time. Reverting this setting to the old value restores the previous behavior without reingestion.

For booleans, Druid now strictly uses 1 (true) or 0 (false). Previously, true and false could be represented either as true and false or as 1 and 0, respectively. In addition, Druid now returns a null value for Boolean comparisons like true && null.
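
For example, under the new defaults, a constant expression sketch (Druid SQL allows SELECT without FROM):

SELECT TRUE AND NULL AS result
-- Returns NULL under three-valued logic; the legacy behavior returned false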

druid.expressions.useStrictBooleans primarily affects querying; however, it also affects JSON columns and type-aware schema discovery for ingestion. You can set druid.expressions.useStrictBooleans to false to configure Druid to ingest booleans in 'auto' and 'json' columns as VARCHAR (native STRING) typed columns that use the string values 'true' and 'false' instead of BIGINT (native LONG). It must be set on all Druid service types to be available at both ingestion time and query time.

The following table illustrates some example scenarios and the impact of the changes:

Query | 2023.08 STS and earlier | 2023.09 STS and later
Query empty string | Empty string ('') or null | Empty string ('')
Query null string | Null or empty | Null
COUNT(*) | All rows, including nulls | All rows, including nulls
COUNT(column) | All rows excluding empty strings | All rows including empty strings but excluding nulls
Expression 100 && 11 | 11 | 1
Expression 100 || 11 | 100 | 1
Null FLOAT/DOUBLE column | 0.0 | Null
Null LONG column | 0 | Null
Null __time column | 0, meaning 1970-01-01 00:00:00 UTC | 1970-01-01 00:00:00 UTC
Null MVD column | '' | Null
ARRAY | Null | Null
COMPLEX | none | Null
Update your queries

Before you upgrade from a version prior to 2023.09 to 2023.09 or later, update your queries to account for the changed behavior:

NULL filters

If your queries use NULL in the filter condition to match both nulls and empty strings, you should add an explicit filter clause for empty strings. For example, update s IS NULL to s IS NULL OR s = ''.

COUNT functions

COUNT(column) now counts empty strings. If you want to continue excluding empty strings from the count, replace COUNT(column) with COUNT(column) FILTER(WHERE column <> '').

GroupBy queries

GroupBy queries on columns containing null values can now have additional entries as nulls can co-exist with empty strings.

Avatica JDBC driver upgrade

info

The Avatica JDBC driver is not packaged with Druid. Its upgrade is separate from any upgrades to Imply.

If you notice intermittent query failures after upgrading your Avatica JDBC driver to version 1.21.0 or later, you may need to set the transparent_reconnection property.
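
For example, a JDBC connection string sketch (the host and port are placeholders):

jdbc:avatica:remote:url=https://your-broker:8888/druid/v2/sql/avatica/;transparent_reconnection=true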

Parameter execution changes for Kafka

When using the built-in FileConfigProvider for Kafka, interpolations are now intercepted by the JsonConfigurator instead of being passed down to the Kafka provider. This breaks existing deployments.

For more information, see KIP-297 and #13023.

Deprecation notices

azure ingestion source parameter

Starting in 2024.02, the azure value for ioConfig.inputSource.type is deprecated. Use the new azureStorage input source type instead. The new type supports ingesting from multiple accounts.

Two-valued logic

Druid's legacy two-valued logic for native filters and the properties for maintaining that behavior are deprecated and will be removed in the December 2024 STS and January 2025 LTS releases.

The ANSI-SQL compliant three-valued logic will be the only supported behavior after these releases. This SQL compatible behavior became the default for deployments that use Imply 2023.11 STS and January 2024 LTS releases.

Update your queries and downstream apps prior to these releases.

For more information, see three-valued logic.

Properties for legacy Druid SQL behavior

Druid's legacy behavior for Booleans and NULLs and the corresponding properties are deprecated and will be removed in the December 2024 STS and January 2025 LTS releases.

The ANSI-SQL compliant treatment of Booleans and null values will be the only supported behavior after these releases. This SQL compatible behavior became the default for Imply 2023.11 STS and January 2024 LTS.

Update your queries and downstream apps prior to these releases.

For more information, see SQL compatibility.

Some segment loading configs deprecated

Starting with 2023.08 STS, the following segment-related configs are deprecated and will be removed in future releases:

  • maxSegmentsInNodeLoadingQueue
  • maxSegmentsToMove
  • replicationThrottleLimit
  • useRoundRobinSegmentAssignment
  • replicantLifetime
  • maxNonPrimaryReplicantsToLoad
  • decommissioningMaxPercentOfMaxSegmentsToMove

Use smartSegmentLoading mode instead, which calculates values for these variables automatically.
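
For example, a sketch of a Coordinator dynamic configuration with smart segment loading enabled (submitted to the /druid/coordinator/v1/config API):

{
  "smartSegmentLoading": true
}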

SysMonitor support deprecated

Starting with 2023.08 STS, SysMonitor is deprecated and will be removed in future releases. Switch to OshiSysMonitor instead.

Asynchronous SQL download deprecated

The async downloads feature is deprecated and will be removed in future releases. Instead, consider using Query from deep storage.

End of support

CrossTab view is deprecated

The CrossTab view feature is no longer supported. Use Pivot 2.0 instead, which incorporates the capabilities of CrossTab view.