Connect from external business intelligence applications

While Imply Polaris includes built-in visualization capabilities, you can integrate Polaris with your existing business intelligence tools to explore, query, and share your data.

This topic describes how to connect Polaris to the following applications:

  • Tableau Desktop
  • Looker
  • Apache Superset

Prerequisites

To integrate Polaris with business intelligence tools, you need an API key with the AccessQueries permission. See Authenticate with API keys to learn how to obtain one, and see Permissions reference for more information on permissions.
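
Before you configure any of the tools below, you can confirm that your API key works by sending a query to the Polaris SQL endpoint. The following Python sketch is a minimal check, not part of any BI setup: it assumes the requests package, a Druid-style {"query": ...} request body, and the same HTTP Basic credentials (username apikey, password set to your API key) that the Superset connection string later in this topic uses. The placeholder organization, region, and cloud provider values are hypothetical.

    # Minimal API key check against the Polaris SQL endpoint (assumed format).
    import requests

    POLARIS_API_KEY = "pok_..."  # API key with the AccessQueries permission
    ENDPOINT = "https://ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io/v1/query/sql"

    # Username "apikey" with the API key as the password, mirroring the
    # Superset connection string shown later in this topic.
    response = requests.post(
        ENDPOINT,
        json={"query": "SELECT 1"},
        auth=("apikey", POLARIS_API_KEY),
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())

If the request returns an authentication error, generate a new API key and confirm that it has the AccessQueries permission before moving on to the tool-specific steps.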

Tableau Desktop

Tableau provides a platform for analyzing, visualizing, and building reports from data. You can access your Polaris data from Tableau by establishing a Java Database Connectivity (JDBC) connection between Polaris and Tableau Desktop.

To connect Tableau Desktop to Polaris, follow these steps:

  1. Download the Avatica JDBC driver available from Imply. We recommend version 1.17.0 or later.

  2. Place the JAR file in the appropriate folder for your operating system. For example, on a Mac, place the file in ~/Library/Tableau/Drivers. See the Tableau documentation for more details.

  3. Start Tableau Desktop. On the Connect pane, under To a Server, select Other Databases (JDBC).

  4. Set the following options:

    • URL: Specify the following JDBC connection string: jdbc:avatica:remote:url=https://ORGANIZATION_NAME.jdbc.REGION.CLOUD_PROVIDER.api.imply.io.
      Set ORGANIZATION_NAME to your Polaris organization name.
      Replace REGION with the cloud region of your Polaris project and CLOUD_PROVIDER with the cloud service for your Polaris infrastructure.
    • Dialect: Select SQL92.
    • Username: Leave this field blank.
    • Password: Pass in your API key.

    The following screen capture shows the configuration in Tableau Desktop version 2023.1.0:

    Tableau Desktop configuration

  5. Click Sign In. If the button is grayed out, verify that you entered the correct JDBC connection URL. Tableau directs you to the data source page when it successfully makes the connection. To test the connection string and API key outside Tableau, see the sketch after these steps.

  6. On the left pane, under Database, select druid.

  7. For Schema, select druid.

  8. Tableau displays the tables available in Polaris:

    Tableau Desktop connect a database
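
If Tableau cannot connect, it can help to test the same connection string and API key outside Tableau. The following Python sketch is one way to do that under a few assumptions: it uses the jaydebeapi package (which requires a local JVM) to load the same Avatica driver JAR you downloaded in step 1, and the driver path, organization, region, and cloud provider values are hypothetical placeholders.

    # Test the Tableau JDBC settings outside Tableau via the Avatica driver.
    import jaydebeapi

    AVATICA_JAR = "/path/to/avatica-1.17.0.jar"  # the driver JAR from step 1
    JDBC_URL = (
        "jdbc:avatica:remote:url="
        "https://ORGANIZATION_NAME.jdbc.REGION.CLOUD_PROVIDER.api.imply.io"
    )
    POLARIS_API_KEY = "pok_..."  # API key with the AccessQueries permission

    # Blank username and the API key as the password, matching step 4.
    conn = jaydebeapi.connect(
        "org.apache.calcite.avatica.remote.Driver",
        JDBC_URL,
        {"user": "", "password": POLARIS_API_KEY},
        AVATICA_JAR,
    )
    curs = conn.cursor()
    curs.execute("SELECT 1")
    print(curs.fetchall())
    curs.close()
    conn.close()

If this query succeeds, the same URL and API key should work in the Tableau dialog.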

Looker

Looker is a business intelligence and data analytics platform. Connecting Looker to Polaris enables you to query your data directly from Looker.

To connect Looker to Polaris over JDBC, follow these steps:

  1. Download the Avatica JDBC driver available from Imply. We recommend version 1.17.0 or later.

  2. In the Admin section of Looker, navigate to Database > Connections > Add Connection.

  3. Set the following options:

    • Name: Assign a label to your connection.
    • Dialect: Select Apache Druid 0.18+.
    • Host: Specify the hostname in the format ORGANIZATION_NAME.jdbc.REGION.CLOUD_PROVIDER.api.imply.io.
      Set ORGANIZATION_NAME to your Polaris organization name.
      Replace REGION with the cloud region of your Polaris project and CLOUD_PROVIDER with the cloud service for your Polaris infrastructure.
    • Port: Enter port number 443.
    • Database: Specify database name druid.
    • Username: Assign any string.
    • Password: Pass in your API key.

    The following screen capture shows the configuration in Looker version 23.6.66:

    Looker connect a database

    Specify the other fields as desired, or leave them at default values.

  4. Click Test to verify the connection. If you get the UnknownHostException error, check that you have the correct organization name in the hostname. To check the hostname outside Looker, see the connectivity sketch after these steps.

  5. Click Connect to save these settings.

    Looker connections

  6. Test the connection in SQL Runner. Navigate to SQL Runner, and select your connection and the druid schema. You should see your Polaris tables in the Tables menu.

    If you get the Couldn't Load Tables error due to authentication failure, verify that you have a valid API key in the Password field of the Looker connection.
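
If Looker reports UnknownHostException or cannot reach the endpoint, a quick connectivity check outside Looker can narrow down the cause. The following Python sketch uses only the standard library; the hostname is a hypothetical placeholder that you replace with the host you entered in step 3.

    # Check DNS resolution and TLS reachability for the Looker host settings.
    import socket
    import ssl

    HOST = "ORGANIZATION_NAME.jdbc.REGION.CLOUD_PROVIDER.api.imply.io"
    PORT = 443

    # DNS lookup: fails here if the organization name in the hostname is wrong,
    # which is the usual cause of UnknownHostException in Looker.
    address = socket.gethostbyname(HOST)
    print(f"{HOST} resolves to {address}")

    # TLS handshake on port 443: confirms the endpoint is reachable.
    context = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print(f"Connected using {tls.version()}")

A DNS failure points to a typo in the hostname, while a successful connection followed by an authentication error in Looker points to the API key in the Password field.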

Apache Superset

Apache Superset is a business intelligence web application that helps you explore and visualize data. You connect your Superset instance to Polaris using the SQLAlchemy connector.

To use Polaris in Superset, follow these steps:

  1. In Superset, go to Data > Connect a database and select Apache Druid from the list of supported databases.

  2. Enter the following SQLAlchemy connection string in the Add Database dialog:

    druid+https://apikey:POLARIS_API_KEY@ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io:443/v1/query/sql
    

    In the connection string, replace POLARIS_API_KEY with your API key.
    Set ORGANIZATION_NAME to your Polaris organization name.
    Replace REGION with the cloud region of your Polaris project and CLOUD_PROVIDER with the cloud service for your Polaris infrastructure.

    For example:

    druid+https://apikey:pok_baQlrmvprWGAL...W3Lbmih@example_organization.us-east-1.aws.api.imply.io:443/v1/query/sql
    

    The following screen capture shows the configuration in Superset version 2.0:

    Superset configuration

  3. Click Connect. Once connected, you should be able to retrieve and query Polaris tables from Superset. To test the same connection string outside Superset, see the SQLAlchemy sketch after these steps.

    Superset connect a database
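
Because Superset builds on SQLAlchemy, you can also try the connection string outside Superset before entering it in the Add Database dialog. The following Python sketch is a minimal check under a few assumptions: it uses the pydruid package, which provides the druid+https SQLAlchemy dialect that Superset relies on, and the API key, organization, region, and cloud provider values are hypothetical placeholders.

    # Try the Superset SQLAlchemy connection string directly.
    # Assumes: pip install "pydruid[sqlalchemy]" sqlalchemy
    from sqlalchemy import create_engine, text

    POLARIS_API_KEY = "pok_..."  # API key with the AccessQueries permission
    CONNECTION_STRING = (
        f"druid+https://apikey:{POLARIS_API_KEY}"
        "@ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io:443/v1/query/sql"
    )

    engine = create_engine(CONNECTION_STRING)
    with engine.connect() as connection:
        # List the Polaris tables visible through this connection.
        rows = connection.execute(text("SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES"))
        for row in rows:
            print(row)

If this query lists your Polaris tables, the same string should work in the Superset dialog.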

Learn more

See the following topics for more information:

  • Connect over JDBC to establish a connection to Polaris using JDBC.
  • Tableau documentation to start a generic JDBC connection in Tableau.
  • Looker documentation to configure a database connection from Looker.
  • Superset documentation to connect to a database in the DB Connection UI.