Data partitioning

Partitioning is a method of organizing a large dataset into smaller units, called partitions, to simplify data management and improve query performance in Imply Polaris.

By distributing data across multiple partitions, you decrease the amount of data that needs to be scanned at query time, which reduces the overall query response time.

For example, if you always filter your data by country, you can use the country dimension to partition your data. This improves query performance because Polaris scans only the rows that match the country filter.
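
As a sketch, the following query filters a hypothetical events table by country; the table name and filter value are illustrative only:

  SELECT COUNT(*) AS "event_count"
  FROM "events"
  WHERE "country" = 'Japan'
    AND "__time" >= TIMESTAMP '2023-07-01'
    AND "__time" < TIMESTAMP '2023-07-08'

With day time partitioning and clustering on country, Polaris reads only the daily partitions in the time range, and within each one, primarily the rows stored together for the filtered country.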

Time partitioning

Polaris partitions datasets by timestamp based on the time partitioning granularity you select. By default, time partitioning is set to day, which is sufficient for most applications.

You can partition your data by the following time periods:

  • Hour
  • Day
  • Month
  • Year
  • All (group all data into a single bucket)

Depending on your use case and the size of your dataset, you may benefit from a finer or a coarser setting. For highly aggregated datasets, where a single day contains fewer than one million rows, coarser time partitioning may be appropriate. Finer time partitioning may suit datasets with fine-grained timestamps, where queries often cover short intervals within a single day.

To change the time partitioning on a table, go to the table view and click Manage > Edit table. Click Partitioning in the menu bar to display the partitioning pane.

You also have the option to set the partitioning:

  • In the Map source to table step of creating an ingestion job in the UI
  • Through partitioningGranularity when you create a table using the API
  • Through partitionedBy when you create a batch or streaming ingestion job using the API
  • Through PARTITIONED BY in SQL-based ingestion

The time partitioning defined in an ingestion job overrides the table property.
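
For example, in a SQL-based ingestion job, PARTITIONED BY sets the time partitioning for the ingested data. The following is a minimal sketch; the table name and the EXTERN source definition are placeholders, not a real source:

  INSERT INTO "example_table"
  SELECT
    TIME_PARSE("timestamp") AS "__time",
    "country",
    "city"
  FROM TABLE(
    EXTERN(
      '{"type": "http", "uris": ["https://example.com/data.json"]}',
      '{"type": "json"}',
      '[{"name": "timestamp", "type": "string"}, {"name": "country", "type": "string"}, {"name": "city", "type": "string"}]'
    )
  )
  PARTITIONED BY DAY

To use a different time period, replace DAY with HOUR, MONTH, YEAR, or ALL.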

Relation to rollup

When using partitioning with rollup, partitioning time granularity must be coarser than or equal to the rollup granularity.
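
To illustrate the constraint, the following SQL-based ingestion sketch (hypothetical table and source) rolls data up to hour granularity while partitioning by day. Hour rollup under day partitioning satisfies the rule; the reverse would not:

  INSERT INTO "example_rollup_table"
  SELECT
    TIME_FLOOR(TIME_PARSE("timestamp"), 'PT1H') AS "__time",
    "country",
    COUNT(*) AS "event_count"
  FROM TABLE(
    EXTERN(
      '{"type": "http", "uris": ["https://example.com/data.json"]}',
      '{"type": "json"}',
      '[{"name": "timestamp", "type": "string"}, {"name": "country", "type": "string"}]'
    )
  )
  GROUP BY 1, 2
  PARTITIONED BY DAY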

Generally, fine-tuning clustering and rollup has a greater impact on performance than time partitioning alone.

Relation to replacing data

The table's time partitioning determines the granularity of data replacement. The replacement time interval must be coarser than or equal to the time partitioning. If you set the time partitioning to all, any data replacement job must replace all the data in the table.
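
For example, on a table with day time partitioning, a replacement job can overwrite a single day, since that interval aligns with the partitioning, but not a half-day interval. A sketch in SQL-based ingestion, using a hypothetical staging table as the source:

  REPLACE INTO "example_table"
  OVERWRITE WHERE "__time" >= TIMESTAMP '2023-07-01' AND "__time" < TIMESTAMP '2023-07-02'
  SELECT "__time", "country", "city"
  FROM "example_table_staging"
  PARTITIONED BY DAY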

Clustering

In addition to partitioning by time, you can partition further using other columns. This is often referred to as clustering or secondary partitioning. You can cluster by any existing column.

To achieve the best performance and the smallest overall memory footprint, we recommend choosing the columns you most frequently filter on. Select the column you filter on the most as your first dimension. Doing so decreases access time and improves data locality, the practice of storing similar data together. The order of the clustering columns determines how Polaris sorts rows within each partition, which often improves data compression. Note that Polaris always sorts the rows within a partition by timestamp first.

To change the clustering columns on a table, go to the table view and click Manage > Edit table. Click Partitioning in the menu bar to display the partitioning pane, which contains settings for time partitioning and clustering.

You also have the option to set clustering columns:

  • In the Map source to table step of creating an ingestion job in the UI
  • Through clusteringColumns when you create a table using the API
  • Through clusteringColumns when you create a batch or streaming ingestion job using the API
  • Through CLUSTERED BY in SQL-based ingestion

The clustering defined in an ingestion job overrides the table property.
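
For example, in SQL-based ingestion, CLUSTERED BY follows PARTITIONED BY. The sketch below uses hypothetical names and mirrors the configuration shown in the next section: day time partitioning with clustering on continent, then country:

  INSERT INTO "example_table"
  SELECT
    TIME_PARSE("timestamp") AS "__time",
    "continent",
    "country",
    "city"
  FROM TABLE(
    EXTERN(
      '{"type": "http", "uris": ["https://example.com/data.json"]}',
      '{"type": "json"}',
      '[{"name": "timestamp", "type": "string"}, {"name": "continent", "type": "string"}, {"name": "country", "type": "string"}, {"name": "city", "type": "string"}]'
    )
  )
  PARTITIONED BY DAY
  CLUSTERED BY "continent", "country"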

Example

The following screenshot shows a table with time partitioning set to day and clustering configured on continent and country, in that order.

[Screenshot: Polaris clustering columns]

Segment generation

Partitioning controls how data is stored in files known as segments. The partitioning granularity translates to the interval by which segments are generated and stored. For example, if your data spans one week, and your partitioning granularity is set to day, there are seven one-day intervals that can each contain segments.

A given interval may contain zero segments if no data exists for that period, or multiple segments if the interval holds a large amount of data.

Polaris optimizes the size of each segment for best performance, meaning that Polaris may create multiple segments for a given interval. For example, with all time partitioning, Polaris groups all data into a single bucket. The bucket may contain more than one segment if there's more data than fits in a single segment.

Each segment is identified by a version that is the UTC timestamp for when Polaris created the segment. Keep the following in mind when working with versions:

  • An INSERT data job doesn't necessarily create a new segment, so a segment may contain data ingested after the version timestamp. In these cases, cross-reference the segment version with the approximate creation time of the INSERT job.
  • REPLACE and compaction jobs create new segments.

In most cases, you don't interact directly with segment files in Polaris. One exception is when you restore data: you restore the corresponding interval and, optionally, the version containing the segments.

For more information, see Restore or permanently delete data (UI) or Restore data (API).