Known limitations
This topic describes known limitations of Imply Polaris.
Parsing and ingestion
General
For tables created in flexible mode, the UI labels undeclared columns as `Auto` rather than labeling them as dimensions or measures.
Fields that fail to parse are populated in a table row as nulls. A parsing failure in a single column does not cause the whole event to be rejected unless that column is the `__time` column.
Double-check the time column that the UI automatically selects before you ingest data. For best results, always explicitly set the `__time` column, as in the sketch at the end of this list.
If the timestamp field in your source data is named `__time`, it must be in units of milliseconds from the Unix epoch. You can't transform `__time` during ingestion. For more information, see Mapping for an input field named `__time`.
In some cases, Polaris may return a `Succeeded` status even though there is an ingestion error. Click the job on the Jobs page to check for any errors. For information on troubleshooting ingestion, see Troubleshoot data ingestion.
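As an illustration of explicitly setting the time column, here is a minimal sketch of a batch ingestion job spec submitted over HTTP. The job schema shown (the `source`, `mappings`, and endpoint shapes), the field names, and the credentials are assumptions for illustration; see the Polaris Jobs API reference for the exact schema.

```python
# Minimal sketch: explicitly set the __time column in a batch ingestion
# job spec instead of relying on the UI's automatic selection.
# Endpoint, table name, field names, and credentials are placeholders.
import requests

job_spec = {
    "type": "batch",
    "target": {"type": "table", "tableName": "example_table"},
    "source": {
        "type": "uploaded",
        "fileList": ["events.json"],
        "inputSchema": [{"name": "event_ts", "dataType": "string"}],
        "formatSettings": {"format": "nd-json"},
    },
    # Parse the source timestamp into __time explicitly. If the source
    # field were itself named __time, it would have to contain
    # milliseconds since the Unix epoch and couldn't be transformed here.
    "mappings": [
        {"columnName": "__time", "expression": 'TIME_PARSE("event_ts")'}
    ],
}

resp = requests.post(
    "https://ORGANIZATION.api.imply.io/v1/projects/PROJECT_ID/jobs",
    json=job_spec,
    headers={"Authorization": "Basic API_KEY"},  # placeholder auth
)
resp.raise_for_status()
print(resp.json())
```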
Batch ingestion
Batch ingestion jobs will fail if they run for longer than 2 days. If your job fails due to the time limit, try resubmitting your ingestion job with a smaller number of files.
Ingesting an empty file or ingesting a file where all the rows fail to parse does not prevent an ingestion job from succeeding.
The maximum supported file size for file upload is 2 GB. If you have a file larger than 2 GB, split it into multiple files that are each smaller than 2 GB, as in the splitting sketch after this list.
If a data replacement ingestion job temporarily exceeds your cluster capacity, there may be a brief period during which the new data isn't queryable while Polaris offloads old data and loads new data.
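One way to stay under the 2 GB upload limit is to split a newline-delimited file on line boundaries before uploading. This is a generic sketch, not a Polaris API; the 1.9 GB threshold is an arbitrary safety margin.

```python
# Sketch: split a newline-delimited file into parts that stay under the
# 2 GB upload limit. Splits only on line boundaries so records stay intact.
CHUNK_BYTES = int(1.9 * 1024**3)  # safety margin below the 2 GB limit

def split_file(path: str) -> None:
    part, written, out = 0, 0, None
    with open(path, "rb") as src:
        for line in src:
            # Start a new part when the next line would exceed the limit.
            if out is None or written + len(line) > CHUNK_BYTES:
                if out is not None:
                    out.close()
                part += 1
                written = 0
                out = open(f"{path}.part{part:03d}", "wb")
            out.write(line)
            written += len(line)
    if out is not None:
        out.close()

split_file("events.json")
```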
Streaming ingestion
Polaris accepts any event payload and checks only payload syntax when processing events for ingestion. Acceptance of a pushed streaming event payload does not indicate that the event was successfully added to the table; see the push sketch at the end of this list. See Streaming use cases to verify the requirements for incoming events.
For connections to Amazon Kinesis, the Kinesis stream must contain data for Polaris to test the connection and to ingest data from it. Kinesis stores data in the stream only temporarily, based on the retention period configured for your Kinesis data stream and on the Polaris requirements. See Streaming use cases.
Polaris on Azure doesn't support ingestion from Amazon Kinesis.
If you drop all data from a table that previously had ongoing streaming ingestion, you can't push new data to that table without some intervention from Imply. Contact Polaris Support.
If a data replacement ingestion job temporarily exceeds your cluster capacity, there may be a brief period during which the new data isn't queryable while Polaris offloads old data and loads new data.
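For example, a successful response to a push request confirms only that the payload parsed, not that the events landed in the table. The endpoint shape, connection name, and credentials below are assumptions; check the push streaming documentation for the exact URL.

```python
# Sketch: push one event to a Polaris push streaming connection.
# The URL, connection name, and auth header are placeholders.
import json
import requests

event = {"event_ts": "2024-01-01T00:00:00Z", "user": "alice", "clicks": 1}

resp = requests.post(
    "https://ORGANIZATION.api.imply.io/v1/events/CONNECTION_NAME",
    data=json.dumps(event),
    headers={
        "Authorization": "Basic API_KEY",  # placeholder auth
        "Content-Type": "application/json",
    },
)
resp.raise_for_status()

# A 2xx status only means the payload passed the syntax check. It does
# not confirm the event was added to the table; verify ingestion
# separately, for example by querying the table afterward.
```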
Streaming downloads
Rows with measures that have a null value show up as `null` when they should be suppressed instead, for example, when events don't exist.
The Include metadata option sometimes results in failure.
Analytics
There's a 5 MB limit on request payloads for changes to data cubes and dashboards. If you see the error `request entity too large` when trying to update a data cube or dashboard, try using the Data cubes v1 API or Dashboards v1 API instead, where the payload limit is 5 MB. A client-side size check is sketched below.
Polaris can't process null values in columns with numeric data types. For example, null values in a Numeric type dimension produce the data cube error `Cannot read properties of null (reading 'start')`.
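As a client-side guard against the limit, you can measure the serialized payload before submitting it so that oversized updates fail fast with a clear message. This sketch assumes a generic HTTP update endpoint; the URL, method, and auth are placeholders.

```python
# Sketch: reject a data cube or dashboard update locally if the payload
# exceeds the 5 MB request limit. Endpoint and method are placeholders.
import json
import requests

MAX_PAYLOAD_BYTES = 5 * 1024 * 1024  # 5 MB request payload limit

def update_definition(url: str, payload: dict, api_key: str) -> None:
    body = json.dumps(payload).encode("utf-8")
    if len(body) > MAX_PAYLOAD_BYTES:
        raise ValueError(
            f"Payload is {len(body)} bytes, over the 5 MB limit; "
            "trim the definition or split it into smaller updates."
        )
    resp = requests.put(
        url,
        data=body,
        headers={
            "Authorization": f"Basic {api_key}",  # placeholder auth
            "Content-Type": "application/json",
        },
    )
    resp.raise_for_status()
```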
System-defined limits
A project within Polaris can support up to 2000 data tables at a time.
The maximum number of columns per table is 1000.
Each organization supports a maximum of 50 concurrently running jobs across all job types, including batch and streaming ingestion as well as data deletion and table deletion jobs. Jobs submitted beyond this limit are rejected rather than queued; see the retry sketch after this list.
Each organization can have up to 20 API keys.
Each organization can have up to 50 push connections in total and up to 10 push connections that aren't used in any ingestion job.
The maximum number of push streaming requests for all users in an organization is 500 requests per second.
The maximum size for all files uploaded for an organization is 10 TB.
Downloading source data files is not supported. Once you upload a file for batch ingestion, you cannot re-download it from the file staging area in Polaris.
The maximum number of active alerts in a project is 100.
The maximum number of entries per IP allowlist is 20.
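Since jobs over the concurrency limit are rejected rather than queued, clients that submit many jobs need their own retry logic. A minimal sketch follows; the assumption that a rejection surfaces as HTTP 429 is illustrative, so check the Jobs API error responses for the real behavior.

```python
# Sketch: retry job submission with exponential backoff when Polaris
# rejects a job at the 50-concurrent-job limit. The 429 status code
# is an assumption; adjust to the actual rejection response.
import time
import requests

def submit_job_with_retry(url: str, job_spec: dict, headers: dict,
                          max_attempts: int = 8) -> dict:
    delay = 5.0  # initial backoff in seconds
    for attempt in range(1, max_attempts + 1):
        resp = requests.post(url, json=job_spec, headers=headers)
        if resp.ok:
            return resp.json()
        if resp.status_code != 429 or attempt == max_attempts:
            resp.raise_for_status()  # not a capacity rejection, or out of retries
        time.sleep(delay)
        delay = min(delay * 2, 300)  # cap backoff at 5 minutes
    raise RuntimeError("unreachable")
```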
Time delay
- It can take a few seconds for a file to become available for ingestion after you upload it; see the polling sketch after this list.
- Network policy updates made using the IP allowlist UI or the Network policy API can take up to one minute to take effect.
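Because of the upload delay, automation that uploads a file and immediately references it in an ingestion job can fail. A short polling loop works around this; the files endpoint and response shape below are assumptions, so consult the Files API reference for the real ones.

```python
# Sketch: poll until an uploaded file is visible in the staging area
# before submitting a job that references it. Endpoint and response
# shape are assumptions.
import time
import requests

def wait_for_file(base_url: str, filename: str, headers: dict,
                  timeout_s: float = 60.0, interval_s: float = 2.0) -> None:
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(f"{base_url}/v1/files", headers=headers)
        resp.raise_for_status()
        names = {f.get("name") for f in resp.json().get("files", [])}
        if filename in names:
            return
        time.sleep(interval_s)
    raise TimeoutError(f"{filename} not visible after {timeout_s} seconds")
```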