Pivot SQL is an alpha feature that should be considered experimental and subject to change or removal at any time. Alpha features are provided "as is," and are not subject to Imply SLAs.
Measures and Managing data cubes describe how to use Plywood to create custom filters and measures in Pivot data cubes. While Plywood is a powerful and flexible framework for defining data interactions, it can present a learning curve to those who are not familiar with it.
Pivot SQL, an alternative to Plywood, gives users a well-known, standards-based syntax for creating dimensions and measures in Pivot.
Any SQL function supported in Druid SQL is available for use in Pivot SQL expressions. Note, however, the limitations listed in Limitations.
Enabling Pivot SQL
By default, the Pivot SQL feature is disabled. You can enable Pivot SQL from the Advanced tab in Pivot settings by enabling experimental features.
Working with Pivot SQL
Data cubes must be configured as SQL data cubes to use Pivot SQL features.
Creating SQL data cubes
With experimental features enabled, Pivot users can choose to make a new data cube a SQL data cube at creation time:
SQL data cubes offer auto-fill and introspection from within SQL expression composition fields. As a fallback, Plywood expressions are available in SQL data cubes as well.
Converting existing data cubes to SQL
In certain circumstances, you can convert an existing, non-SQL data cube into a SQL data cube.
Conversion to SQL is not possible for data cubes that use advanced options like altDataSource, or for data cubes with individual dimensions or measures that can't be converted to SQL for any reason. If you attempt to convert such a data cube, an error dialog appears that indicates why the data cube can't be converted, including the expression or option that prevents conversion.
Note that converting a data cube to SQL can't be reversed and may have unexpected consequences. As a backup measure, duplicate the data cube you intend to convert before performing the conversion on either the original or the copy.
To convert a data cube to SQL, follow these steps:
1. Open the data cube settings by clicking the information icon in the data cube view and then clicking Edit.
2. Click the Advanced tab to open advanced settings for the data cube.
3. Click the Convert to SQL button at the bottom of the page.
4. Confirm the operation by clicking Convert:
You can now use SQL expressions to compose dimensions and formulas, as described next.
Composing SQL expressions
Once a data cube is enabled for Pivot SQL, you can use SQL expressions to define custom dimension and measure formulas, as shown in the Formula field in the following figure:
The syntax for querying columns follows the t."columnName" format. Inline help in the data cube UI provides troubleshooting information in the event of syntax errors.
See Druid SQL for a complete list of supported SQL functions.
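For instance, a custom measure formula combining standard Druid SQL aggregation functions with the t."columnName" column syntax might look like the following sketch (the revenue and orderId columns are hypothetical):

```sql
-- Hypothetical measure: average revenue per distinct order.
-- Columns are referenced using the t."columnName" format.
SUM(t."revenue") / COUNT(DISTINCT t."orderId")
```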
Note the following:
- Pivot SQL supports subset filters, filter tokens, and PII masks.
- Subset filters can be expressed as SQL; SQL support for filter tokens will be available in the future.
- Data cube APIs support both SQL and Plywood on SQL data cubes. SQL data cubes can be created using the property queryMode: "sql".
- All existing security constraints on user data access apply to SQL queries as well.
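As an example of a subset filter expressed as SQL, a boolean expression restricting a data cube to rows for a single country could be written as follows (the country column is hypothetical):

```sql
-- Hypothetical subset filter: only rows for one country are visible.
t."country" = 'United States'
```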
Limitations

The following limitations apply to the use of SQL in Pivot data cubes:
- altDataSource (also known as native Druid UNION queries) is not currently supported
- Support for lookups is provided only via the LOOKUP Druid SQL function
- The MilliSecondsInInterval magic constant for rate calculations is not currently supported
- Resplit measures (e.g., daily active users) must still be expressed using Plywood
- Druid functions requiring extensions aren't available unless the extension is loaded. For example, using STDDEV requires the stats extension to be loaded.
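Because lookups are available only through the LOOKUP Druid SQL function, a dimension that maps a code column through a registered lookup could be sketched as follows (the column and lookup names are hypothetical):

```sql
-- LOOKUP(expression, lookupName) maps each value through a registered lookup.
LOOKUP(t."countryIsoCode", 'country_name_lookup')
```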