Known limitations

This topic describes known limitations of Imply Polaris.

Parsing and ingestion

General

  • For an aggregate table created with flexible schema mode, the UI displays the auto-discovered columns as dimensions even when they are measures. Declared measure columns still show up as measures.

  • Fields that fail to parse are stored in the table row as null values. A parsing failure in a single column does not cause the whole event to be rejected unless that column is the __time column.

  • Double-check the time column that the UI automatically selects before you ingest data. For best results, always explicitly set the __time column.

  • In some cases, Polaris may return a Succeeded status even though an ingestion error occurred. Click the job on the Jobs page to check for errors. For information on troubleshooting ingestion, see Troubleshoot data ingestion. A sketch of checking a job programmatically follows this list.
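
If you automate ingestion, don't rely on the job status alone. The following is a minimal sketch of checking a job's error details; the endpoint path, response fields, and auth header are assumptions modeled on the Polaris REST API, so verify them against the API reference:

    # Sketch: inspect a job's error details even when its status is "Succeeded".
    # API_BASE, PROJECT_ID, the endpoint path, and the response fields are
    # assumptions; consult the Polaris API reference for the exact contract.
    import requests

    API_BASE = "https://example.us-east-1.aws.api.imply.io/v1"  # placeholder
    API_KEY = "POLARIS_API_KEY"                                 # placeholder
    PROJECT_ID = "my-project"                                   # placeholder

    def check_job(job_id: str) -> None:
        resp = requests.get(
            f"{API_BASE}/projects/{PROJECT_ID}/jobs/{job_id}",
            headers={"Authorization": f"Basic {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        job = resp.json()
        # A "Succeeded" status alone is not proof of a clean run: also
        # inspect any error fields the job report exposes.
        print("status:", job.get("status"))
        print("errors:", job.get("errors", []))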

Batch ingestion

  • Batch ingestion jobs fail if they run for longer than 2 days. If your job fails due to the time limit, resubmit the ingestion job with a smaller number of files, as sketched after this list.

  • Ingesting an empty file, or a file in which every row fails to parse, does not prevent an ingestion job from succeeding.

  • The maximum supported file size for file upload is 2 GB. If you have a file larger than 2 GB, split it into multiple files smaller than 2 GB, as shown in the second sketch after this list.
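
One way to keep each batch job under the two-day limit is to split the source file list across several smaller jobs. A minimal sketch follows; submit_batch_job is a hypothetical wrapper around the Polaris ingestion job API, and the file names are examples:

    # Sketch: split one large batch job into several smaller jobs so each
    # finishes well under the two-day limit. submit_batch_job is a
    # hypothetical wrapper around the Polaris ingestion job API.
    def chunked(items, size):
        for i in range(0, len(items), size):
            yield items[i:i + size]

    uploaded_files = [f"events-{n:04d}.json.gz" for n in range(400)]  # example names

    for batch in chunked(uploaded_files, 50):  # 50 files per job; tune as needed
        job_id = submit_batch_job(table="events", files=batch)
        print(f"submitted job {job_id} for {len(batch)} files")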
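
For files over the upload limit, splitting on line boundaries keeps every row intact. A sketch for newline-delimited data such as JSONL, keeping each part safely under 2 GB (for CSV with a header row, also copy the header into each part):

    # Sketch: split a newline-delimited file into parts below the 2 GB
    # upload limit. Splitting on line boundaries keeps every row intact.
    MAX_PART = int(1.9 * 1024**3)  # stay safely under 2 GB

    def split_file(path: str) -> None:
        part, written = 0, 0
        out = open(f"{path}.part{part}", "wb")
        with open(path, "rb") as src:
            for line in src:
                if written > 0 and written + len(line) > MAX_PART:
                    out.close()
                    part, written = part + 1, 0
                    out = open(f"{path}.part{part}", "wb")
                out.write(line)
                written += len(line)
        out.close()

    split_file("events.jsonl")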

Streaming ingestion

  • Polaris accepts any event payload and checks payload syntax only when it processes events for ingestion. Acceptance of a pushed streaming event payload does not indicate that the event was successfully added to the table; see the first sketch after this list. See Streaming use cases to verify the requirements for incoming events.

  • For connections to Amazon Kinesis, the Kinesis stream must contain data before Polaris can test the connection or ingest from it. Kinesis stores data in the stream only temporarily, based on the retention period configured for your Kinesis data stream and the Polaris requirements. See Streaming use cases. To seed an empty stream with a test record, see the second sketch after this list.

  • If you drop all data from a table that previously had ongoing streaming ingestion, you can't push new data to that table without some intervention from Imply. Contact Polaris Support.
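
The sketch below illustrates the first point: a successful push response confirms only that the payload was accepted, not that it reached the table. The endpoint URL and auth header are assumptions modeled on the Polaris Events API; verify them against the documentation:

    # Sketch: push one JSON event. The endpoint URL and auth header are
    # assumptions; check the Polaris Events API docs for the exact format.
    import json
    import requests

    EVENTS_URL = "https://example.us-east-1.aws.api.imply.io/v1/events/my-connection"
    API_KEY = "POLARIS_API_KEY"  # placeholder

    event = {"__time": "2024-01-01T00:00:00Z", "user": "alice", "clicks": 3}
    resp = requests.post(
        EVENTS_URL,
        headers={"Authorization": f"Basic {API_KEY}",
                 "Content-Type": "application/json"},
        data=json.dumps(event),
        timeout=30,
    )
    # A 2xx response means the payload was accepted and syntactically valid.
    # It does not confirm ingestion into the table; verify by querying the
    # table or checking the ingestion job.
    print(resp.status_code)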
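
If a Kinesis connection test fails because the stream is empty, you can seed it with a test record first. A minimal sketch using boto3; the stream name and region are placeholders:

    # Sketch: put a single test record into a Kinesis stream so Polaris has
    # data to read when testing the connection.
    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")  # placeholder region
    kinesis.put_record(
        StreamName="my-stream",  # placeholder stream name
        Data=json.dumps({"__time": "2024-01-01T00:00:00Z", "test": True}).encode(),
        PartitionKey="test",
    )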

Streaming downloads

  • Rows with measures that have a null value show up as null when they should be suppressed instead, for example, when no matching events exist.

  • The Include metadata option sometimes results in failure.

System-defined limits

  • The maximum number of projects per region is 25.

  • A project within Polaris can support up to 1000 data tables at a time.

  • The maximum number of columns per table is 1000.

  • Each organization supports a maximum of 50 concurrently running jobs across all job types, including batch and streaming ingestion as well as data deletion and table deletion jobs. Additional jobs beyond this limit are rejected rather than queued; see the first sketch after this list for a retry pattern.

  • Each organization can have up to 20 API keys.

  • Each organization can have up to 50 total push connections and up to 10 push connections that are unused in ingestion jobs.

  • The maximum number of push streaming requests for all users in an organization is 500 requests per second. Batching multiple events per request, as in the second sketch after this list, helps stay under this limit.

  • The maximum size for all files uploaded for an organization is 10 TB.

  • Downloading source data files is not supported. Once you upload a file for batch ingestion, you cannot re-download it from the file staging area in Polaris.

  • The maximum number of active alerts in a project is 100.

  • The maximum number of rows that you can download in an embedded link is 50,000.
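
Because jobs beyond the 50-concurrent-job limit are rejected rather than queued, automated pipelines should resubmit with backoff. A sketch follows; submit_job is a hypothetical wrapper around the Polaris job API, and treating any error response as a retryable rejection is a simplification:

    # Sketch: retry a job submission with exponential backoff when Polaris
    # rejects it, for example for exceeding the concurrent-job limit.
    # submit_job is a hypothetical wrapper that returns an HTTP response.
    import time

    def submit_with_backoff(spec, max_attempts=8):
        delay = 5.0
        for _ in range(max_attempts):
            resp = submit_job(spec)
            if resp.status_code < 400:
                return resp.json()       # job accepted
            time.sleep(delay)            # rejected: wait, then resubmit
            delay = min(delay * 2, 300)  # cap the backoff at 5 minutes
        raise RuntimeError("job submission still rejected after retries")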
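
To stay under the 500-requests-per-second push limit, send many events per request instead of one request per event. The sketch below assumes the push endpoint accepts newline-delimited JSON; confirm this, along with the URL and auth header, against the Events API documentation:

    # Sketch: batch events as newline-delimited JSON so many events share a
    # single request. The endpoint URL, auth header, and newline-delimited
    # format are assumptions; verify against the Polaris Events API docs.
    import json
    import requests

    EVENTS_URL = "https://example.us-east-1.aws.api.imply.io/v1/events/my-connection"
    API_KEY = "POLARIS_API_KEY"  # placeholder

    def push_batch(events):
        body = "\n".join(json.dumps(e) for e in events)
        resp = requests.post(
            EVENTS_URL,
            headers={"Authorization": f"Basic {API_KEY}",
                     "Content-Type": "application/json"},
            data=body,
            timeout=30,
        )
        resp.raise_for_status()

    push_batch([{"__time": "2024-01-01T00:00:00Z", "clicks": n} for n in range(100)])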

Time delay

  • It might take a few seconds for a file to become available for ingestion after you upload it.