Known limitations

This topic describes known limitations of Imply Polaris.

Parsing and ingestion

  • Polaris does not support rollup for nested data.

  • Polaris accepts any event payload and only checks payload syntax when it processes events for ingestion. Acceptance of a pushed streaming event therefore does not mean the event was successfully added to the table. See Event payload requirements for the requirements on incoming events, and the push example after this list for one way to treat a push response accordingly.

  • For connections to Amazon Kinesis, the Kinesis stream must have data to successfully test and ingest from the connection. Take into account the retention period for the data in your Kinesis Data Stream as well as the Event payload requirements for streaming ingestion in Polaris.

  • Fields that fail to parse are stored as null in the table row. A parsing failure in a single column does not cause the whole event to be rejected, unless that column is the __time column.

  • Double-check the time column that the UI automatically selects before you ingest data. For best results, always explicitly set the __time column.

  • Ingesting an empty file, or a file in which every row fails to parse, does not prevent an ingestion job from reporting success.

  • In some cases, Polaris may return a Succeeded status even though there is an ingestion error. For batch ingestion, click the job in the Ingestion Jobs list to check for any errors. For streaming ingestion, click ... More Options for the Push API Endpoint and select View details to see streaming errors.

  • The maximum supported file size for file upload is 2 GB. If you have a larger file, split it into multiple files smaller than 2 GB, as in the file-splitting sketch after this list.

  • If you drop all data from a table that previously had ongoing streaming ingestion, you can't push new data to that table without some intervention from Imply. Contact Polaris Support.
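
Because a push response only confirms that Polaris accepted the payload, delivery needs to be confirmed separately, for example by querying the table or by checking streaming errors in the console. The following minimal sketch illustrates this with the Python standard library; the organization URL, connection name, and authorization scheme are placeholders, so check Push event data and Authenticate with API keys for the exact endpoint and header format.

    import json
    import urllib.request

    # Placeholder values: substitute your organization, connection, and API key.
    ENDPOINT = "https://example.api.imply.io/v1/events/my-connection"
    API_KEY = "<your API key>"

    def push_event(event: dict) -> None:
        """Push one event. A 2xx response means the payload was accepted,
        not that the row was added to the table."""
        req = urllib.request.Request(
            ENDPOINT,
            data=json.dumps(event).encode("utf-8"),
            headers={
                "Authorization": f"Basic {API_KEY}",  # assumed scheme; see the API key docs
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            # Acceptance only: verify ingestion by querying the table
            # or by viewing streaming errors in the console.
            print("accepted with status", resp.status)

    push_event({"__time": "2023-01-01T00:00:00Z", "value": 1})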
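
To stay under the 2 GB upload limit, a newline-delimited file can be split on row boundaries so that no part exceeds the limit. A minimal sketch in pure Python (no Polaris API involved; the 1.9 GB threshold just leaves a safety margin):

    LIMIT = 1_900_000_000  # bytes; stays safely under the 2 GB upload limit

    def split_ndjson(path: str) -> list[str]:
        """Split a newline-delimited file into parts smaller than LIMIT,
        never breaking a row across two parts."""
        parts, size, out = [], 0, None
        with open(path, "rb") as src:
            for line in src:
                if out is None or size + len(line) > LIMIT:
                    if out:
                        out.close()
                    name = f"{path}.part{len(parts) + 1:03d}"
                    out = open(name, "wb")
                    parts.append(name)
                    size = 0
                out.write(line)
                size += len(line)
        if out:
            out.close()
        return parts

    print(split_ndjson("events.ndjson"))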

Streaming downloads

  • Rows whose measures have a null value, for example because no events exist for that row, show up as null when they should be suppressed instead.

  • The Include metadata option sometimes causes the download to fail.

System-defined limits

  • You can only ingest data with timestamps between 10 years in the past and 5 years in the future; see the timestamp check after this list.

  • A project within Polaris can support up to 50 data tables at a time.

  • Each table supports a maximum of 400 columns.

  • Each organization supports a maximum of 50 concurrently running jobs for all job types. This includes batch and streaming ingestion as well as data deletion and table deletion jobs. Additional jobs beyond this limit are rejected rather than queued.

  • Each organization can have up to 50 total push connections, and up to 10 push connections that are not used in any ingestion job.

  • The maximum rate of push streaming requests for all users in an organization is 500 requests per second; the batching sketch after this list shows one way to stay under it.

  • The maximum total size of all files uploaded for an organization is 10 TB.

  • Downloading source data files is not supported. Once you upload a file for batch ingestion, you cannot re-download it from the file staging area in Polaris.
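
Given the ingestion time window above, rows with out-of-range timestamps are worth filtering out before you create an ingestion job. A minimal sketch, assuming ISO 8601 timestamps in a __time field (the approximate day counts stand in for 10 and 5 calendar years):

    from datetime import datetime, timedelta, timezone

    now = datetime.now(timezone.utc)
    EARLIEST = now - timedelta(days=3650)  # roughly 10 years back
    LATEST = now + timedelta(days=1825)    # roughly 5 years ahead

    def in_ingestion_window(event: dict) -> bool:
        """Return True if the event's __time falls inside the window."""
        ts = datetime.fromisoformat(event["__time"].replace("Z", "+00:00"))
        return EARLIEST <= ts <= LATEST

    events = [
        {"__time": "2023-06-01T00:00:00Z"},  # inside the window
        {"__time": "1999-01-01T00:00:00Z"},  # too far in the past
    ]
    print([e for e in events if in_ingestion_window(e)])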
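
One way to stay under the shared 500 requests per second limit is to batch multiple events into each push request rather than sending one event per request, and to throttle on the client side. A minimal sketch; send_batch is a hypothetical stand-in for the POST call in the push example earlier, and the exact payload limits for push endpoints are in Push event data:

    import json
    import time

    BATCH_SIZE = 100          # events per request
    MAX_REQUESTS_PER_SEC = 5  # this client's share, well under the 500 req/s org limit

    def send_batch(payload: str) -> None:
        # Hypothetical stand-in for the POST in the push example above.
        print(f"sending a batch of {payload.count(chr(10)) + 1} events")

    def push_all(events: list[dict]) -> None:
        """Send events in newline-delimited batches, throttled so this
        client never exceeds MAX_REQUESTS_PER_SEC."""
        interval = 1.0 / MAX_REQUESTS_PER_SEC
        for i in range(0, len(events), BATCH_SIZE):
            batch = events[i : i + BATCH_SIZE]
            send_batch("\n".join(json.dumps(e) for e in batch))
            time.sleep(interval)  # crude throttle; a token bucket is smoother

    push_all([{"__time": "2023-01-01T00:00:00Z", "n": i} for i in range(250)])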

Time delay

  • It might take a few seconds for a file to become available for ingestion after you upload it.