2023.01

SQL-based ingestion

SQL-based ingestion using the multi-stage query task engine is a preview feature available starting in Imply Enterprise and Imply Hybrid 2022.06. It is not available in Polaris yet. Preview features enable early adopters to benefit from new functionality while providing ongoing feedback to help shape and evolve the feature. All functionality documented on this page is subject to change or removal in future releases. Preview features are provided "as is" and are not subject to Imply SLAs.

Apache Druid supports SQL-based ingestion using the bundled druid-multi-stage-query extension. This extension adds a multi-stage query task engine for SQL that allows running SQL INSERT and REPLACE statements as batch tasks. As an experimental feature, the task engine also supports running SELECT queries as batch tasks.
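For example, a minimal INSERT using the EXTERN table function to read an external file might look like the following sketch. The input URI, table name, and column list are placeholders for illustration, not a real dataset:

```sql
-- Ingest an external JSON file into a datasource named "wikipedia".
-- EXTERN takes three JSON strings: an input source, an input format,
-- and a row signature. The URI below is a placeholder.
INSERT INTO "wikipedia"
SELECT
  TIME_PARSE("timestamp") AS "__time",
  "page",
  "user"
FROM TABLE(
  EXTERN(
    '{"type": "http", "uris": ["https://example.com/wikiticker.json.gz"]}',
    '{"type": "json"}',
    '[{"name": "timestamp", "type": "string"}, {"name": "page", "type": "string"}, {"name": "user", "type": "string"}]'
  )
)
PARTITIONED BY DAY
```

INSERT and REPLACE statements run by the MSQ task engine require a PARTITIONED BY clause, which determines the time chunking of the resulting segments.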

Nearly all SELECT capabilities are available in the multi-stage query (MSQ) task engine, with certain exceptions listed on the Known issues page. This provides great flexibility to apply transformations, filters, JOINs, aggregations, and so on as part of INSERT ... SELECT and REPLACE ... SELECT statements. It also enables in-database transformation: creating new tables based on queries of other tables.
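As a sketch of in-database transformation, the following REPLACE builds an hourly summary table from an existing datasource (the table and column names are hypothetical):

```sql
-- Roll up an existing "wikipedia" datasource into an hourly summary.
-- OVERWRITE ALL replaces the entire contents of the target table.
REPLACE INTO "wikipedia_hourly" OVERWRITE ALL
SELECT
  FLOOR("__time" TO HOUR) AS "__time",
  "page",
  COUNT(*) AS "edit_count"
FROM "wikipedia"
GROUP BY 1, 2
PARTITIONED BY DAY
```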

Vocabulary

  • Controller: An indexing service task of type query_controller that manages the execution of a query. There is one controller task per query.

  • Worker: An indexing service task of type query_worker that executes a query. There can be multiple worker tasks per query. Internally, the tasks process items in parallel using their processing pools (up to druid.processing.numThreads of execution parallelism within a worker task).

  • Stage: A stage of query execution that is parallelized across worker tasks. Workers exchange data with each other between stages.

  • Partition: A slice of data output by worker tasks. In INSERT or REPLACE queries, the partitions of the final stage become Druid segments.

  • Shuffle: Workers exchange data between themselves on a per-partition basis in a process called shuffling. During a shuffle, each output partition is sorted by a clustering key.
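The clustering key mentioned above comes from the CLUSTERED BY clause of an INSERT or REPLACE statement. A minimal sketch, with hypothetical table and column names:

```sql
-- Partitions of the final stage become Druid segments. PARTITIONED BY
-- sets the segment time chunking; CLUSTERED BY "channel" sets the
-- clustering key that each output partition is sorted by during shuffles.
REPLACE INTO "pageviews" OVERWRITE ALL
SELECT "__time", "channel", "page", "views"
FROM "pageviews_raw"
PARTITIONED BY DAY
CLUSTERED BY "channel"
```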

Next steps

  • Read about key concepts to learn more about how SQL-based ingestion and multi-stage queries work.
  • Enable the MSQ task engine by loading the extension.
  • Check out the examples to see SQL-based ingestion in action.
  • Explore the Query view to get started in the web console.
Last updated on 12/19/2022
Copyright © 2023 Imply Data, Inc