2021.03

Pivot SQL (alpha)

Pivot SQL is an alpha feature that should be considered experimental and subject to change or removal at any time. Alpha features are provided "as is," and are not subject to Imply SLAs.

Measures and Managing data cubes describe how to use Plywood to create custom filters and measures in Pivot data cubes. While Plywood is a powerful and flexible framework for defining data interactions, it can present a learning curve to those who are not familiar with it.

Pivot SQL, an alternative to Plywood, gives users a well-known, standards-based syntax for creating dimensions and measures in Pivot.

Any SQL function supported in Druid SQL is available for use in Pivot SQL expressions, subject to the limitations listed below.
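For instance, a custom measure could use a Druid SQL aggregation directly. The column name below is hypothetical, for illustration only:

```sql
-- Approximate distinct count of users, using a Druid SQL aggregation.
-- "user_id" is a hypothetical column name.
APPROX_COUNT_DISTINCT(t."user_id")
```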

Enabling Pivot SQL

By default, the Pivot SQL feature is disabled. You can enable Pivot SQL from the Advanced tab in Pivot settings by enabling experimental features.

Working with Pivot SQL

Data cubes must be configured as SQL data cubes to use Pivot SQL features.

Creating SQL data cubes

With experimental features enabled, Pivot users can choose to make a new data cube a SQL data cube at creation time:

(Screenshot: SQL data cube option in the data cube creation dialog)

SQL data cubes offer auto-fill and introspection from within SQL expression composition fields. As a fallback, Plywood expressions are available in SQL data cubes as well.

Converting existing data cubes to SQL

In certain circumstances, you can convert an existing, non-SQL data cube into a SQL data cube.

Conversion to SQL is not possible for data cubes that use advanced options like altDataSource or data cubes that have individual dimensions or measures that can’t be converted to SQL for any reason. If you attempt to convert data cubes in such cases, an error dialog appears that indicates why the data cube can't be converted, including the expression or option that prevents conversion.

Note that converting a data cube to SQL can't be reversed and may have unexpected consequences. Before converting, duplicate the data cube so that you retain an unconverted copy; you can then run the conversion on the original or the duplicate.

To convert a data cube to SQL, follow these steps:

  1. Open the data cube settings by clicking the information icon in the data cube view and then clicking Edit.

  2. Click the Advanced tab to open advanced settings for the data cube.

  3. Click the Convert to SQL button at the bottom of the page.

  4. Confirm the operation by clicking Convert:

    (Screenshot: Convert data cube confirmation dialog)

You can now use SQL expressions to compose dimensions and formulas, as described next.

Composing SQL expressions

Once a data cube is enabled for Pivot SQL, you can use SQL expressions to define custom dimension and measure formulas, as shown in the Formula field in the following figure:

(Figure: a SQL expression in the Formula field of the dimension or measure editor)

The syntax for referencing columns follows the t."columnName" format. Inline help in the data cube UI provides troubleshooting information in the event of syntax errors.
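For example, hypothetical custom formulas using this column syntax might look like the following (the column names are illustrative, not from this document):

```sql
-- Dimension formula: normalize a country column to upper case.
UPPER(t."country")

-- Measure formula: total revenue from completed orders only.
SUM(CASE WHEN t."status" = 'completed' THEN t."revenue" ELSE 0 END)
```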

See Druid SQL for a complete list of supported SQL functions.

Note the following:

  • Pivot SQL supports subset filters, filter tokens, and PII masks.
  • Subset filters can be expressed as SQL; SQL support for filter tokens will be available in the future.
  • Data cube APIs support both SQL and Plywood on SQL data cubes. SQL data cubes can be created through the API with the property queryMode: "sql".
  • All existing security constraints on user data access apply to SQL queries as well.
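As a sketch of the API point above, a data cube definition could set the property as follows; all fields other than queryMode are illustrative assumptions, not documented values:

```json
{
  "name": "wikipedia-sql",
  "title": "Wikipedia (SQL)",
  "queryMode": "sql"
}
```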

Also see the following limitations.

Limitations

The following limitations apply to the use of SQL in Pivot data cubes:

  • altDataSource (used for native Druid UNION queries) is not currently supported.
  • Support for lookups is provided only via the LOOKUP Druid SQL function.
  • The MilliSecondsInInterval magic constant for rate calculations is not currently supported.
  • Resplit measures (for example, daily active users) must still be expressed using Plywood.
  • Druid functions requiring extensions aren't available unless the extension is loaded. For example, using STDDEV requires the stats extension to be loaded.
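To illustrate the lookup limitation above, a SQL data cube resolves lookups through the LOOKUP function rather than through Plywood's lookup support. For example (the lookup name and column name are hypothetical):

```sql
-- Dimension formula mapping an ISO country code to a country name
-- via a registered Druid lookup named "country_name" (hypothetical).
LOOKUP(t."country_iso", 'country_name')
```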
Copyright © 2021 Imply Data, Inc