Overview

Imply's distribution of Apache Druid® delivers highly performant real-time analytics at scale. Within Druid, your data resides in table datasources, which are similar to tables in relational database management systems (RDBMS).

In addition to tables, Druid supports other types of datasources, including lookups and views. See Lookups and View manager.

Loading data into a datasource

The goal when loading data into a Druid datasource is to optimize the schema and layout so that Druid can deliver fast ad-hoc analytics for your end users.

To load data into Druid, you can use the Druid console (batch and streaming), the SQL-based ingestion API (batch), or the Supervisor API (streaming). Each method defines the tasks that perform the stages of ingestion, including:

  • Connecting to the system hosting your original data.
  • Parsing the original data.
  • Performing ingestion-time data transformation, including filtering, concatenation, string processing, and other data manipulation functions.
  • Creating the schema for a datasource if it doesn’t already exist.
  • Adding data to the datasource, including appending new segments to existing segment sets.
  • Organizing the data layout in segments on disk.

Ingestion tasks replace CREATE TABLE, INSERT INTO ... SELECT, SELECT ... INTO, and similar commands used to load data into a relational database.
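As a minimal sketch of how one SQL-based ingestion statement can cover several of these stages, consider the following: EXTERN connects to and parses the original data, the SELECT list and WHERE clause transform and filter it, INSERT INTO creates the datasource if needed, and PARTITIONED BY organizes the segment layout. The "pageviews" datasource, the source URI, and the column names here are hypothetical.

```sql
-- Sketch of a SQL-based batch ingestion task.
-- The datasource name, source URI, and columns are hypothetical.
INSERT INTO "pageviews"
SELECT
  TIME_PARSE("timestamp") AS "__time",              -- parse the primary timestamp
  "page",
  CONCAT("city", ', ', "country") AS "location",    -- ingestion-time concatenation
  "views"
FROM TABLE(
  EXTERN(
    '{"type":"http","uris":["https://example.com/pageviews.json.gz"]}',  -- connect to the source system
    '{"type":"json"}',                                                   -- parse the original data
    '[{"name":"timestamp","type":"string"},{"name":"page","type":"string"},{"name":"city","type":"string"},{"name":"country","type":"string"},{"name":"views","type":"long"}]'
  )
)
WHERE "views" > 0          -- ingestion-time filtering
PARTITIONED BY DAY         -- organize segments into daily time chunks
```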

During ingestion, Druid transforms your original data into time-chunked files called segments, which reside in deep storage. Druid data retrieval services called Historicals load segment files from deep storage to make them available for low-latency querying. You can also keep data only in deep storage and query it from there, trading performance for lower cost.

Efficient organization of your data into segments on disk can improve query performance within Druid.
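As a sketch of what reorganizing the data layout can look like in practice, the following statement rewrites a hypothetical "pageviews" datasource with daily time partitioning and secondary partitioning on "city", so that queries filtering on time or city read fewer segment files:

```sql
-- Sketch: rewrite an existing (hypothetical) datasource with a new segment layout.
REPLACE INTO "pageviews"
OVERWRITE ALL
SELECT * FROM "pageviews"
PARTITIONED BY DAY       -- primary partitioning: daily time chunks
CLUSTERED BY "city"      -- secondary partitioning within each time chunk
```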

Data layout, schema design, and query performance

Queries perform best when Druid can distribute the compute load for a response across many Historicals. In general, queries run more quickly when Historicals:

  • Can exclude segment files from processing based on a query filter (see the example query after this list).
  • Have less data to process when responding to queries that target only a few rows and columns.
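For instance, in a hypothetical query like the following, the filter on __time lets Druid exclude segment files outside the week being queried, and the narrow SELECT list means Historicals read only two columns:

```sql
-- Sketch: the __time filter lets Druid skip segment files outside the range,
-- and only the "page" and "views" columns are read. Names are hypothetical.
SELECT
  "page",
  SUM("views") AS total_views
FROM "pageviews"
WHERE "__time" >= TIMESTAMP '2024-06-01'
  AND "__time" <  TIMESTAMP '2024-06-08'
GROUP BY "page"
ORDER BY total_views DESC
LIMIT 10
```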

Before you ingest data, identify your performance goals, then use those goals to shape your ingestion strategy so that Druid can optimize your schema and segment layout.

To learn more about Historicals and other Druid services and their role in data processing, see Design.

Differences between Druid and other database systems

Even though Druid shares conceptual similarities with traditional relational databases and data warehousing tools, there are key differences in terms of loading data. For example:

  • In some data warehouses, you load all your data up front and address performance later, at query time. Druid relies on the data layout on disk to deliver exceptional performance, so queries perform better if you plan your schema design first and define your ingestion tasks accordingly.

  • Relational databases can be highly normalized, joining many tables together to return the results for a single query. Druid performs better with a flat data model where all the results from a query come from a single datasource.

    To learn more about the differences between Druid and other database models, see Schema design tips.

Ingestion methods

Druid supports streaming ingestion directly from Kafka and Kinesis, and can make data from stream sources available for querying with very low latency.

Use the following guidance when choosing an ingestion method:

  • For streaming ingestion, launch a supervisor.
  • For any kind of batch loading of data, such as loading data from CSV or Parquet files, use SQL-based ingestion. A sketch of a CSV batch load appears after this list.
  • If you have Hadoop infrastructure, you can use Hadoop-based batch ingestion with Hadoop 3, but we recommend SQL-based ingestion instead, which provides a more fully featured experience.
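For example, a batch load of CSV files with SQL-based ingestion might look like the following sketch; the base directory, file pattern, and columns are hypothetical:

```sql
-- Sketch: batch-load CSV files with SQL-based ingestion.
-- The base directory, filter, and columns are hypothetical.
INSERT INTO "sales"
SELECT
  TIME_PARSE("sale_time") AS "__time",
  "product",
  CAST("amount" AS DOUBLE) AS "amount"
FROM TABLE(
  EXTERN(
    '{"type":"local","baseDir":"/data/sales","filter":"*.csv"}',
    '{"type":"csv","findColumnsFromHeader":true}',
    '[{"name":"sale_time","type":"string"},{"name":"product","type":"string"},{"name":"amount","type":"string"}]'
  )
)
PARTITIONED BY MONTH
```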

Hybrid batch/streaming

You can combine batch and streaming methods in a hybrid batch/streaming architecture, sometimes called a lambda architecture. In a hybrid architecture, you use streaming ingestion initially, and then periodically ingest finalized data in batch mode, typically every few hours or nightly.

Hybrid architectures are simple with Druid, because batch-loaded data for a particular time range automatically replaces streaming-loaded data for that same time range. All Druid queries seamlessly access historical data together with real-time data. We recommend this kind of architecture if you need real-time analytics but also need the ability to reprocess historical data. Common reasons for reprocessing historical data include:

  • Most streaming ingestion methods currently supported by Druid can drop or duplicate messages in certain failure scenarios. Batch re-ingestion eliminates this potential source of error for historical data.

  • You can re-ingest your data in batch mode if necessary, for example if you missed some data the first time around or need to revise it. Because Druid's batch ingestion operates on specific slices of time, you can run a historical batch load and a real-time streaming load simultaneously (see the sketch after this list).
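As a sketch of batch reprocessing for a specific time slice, the following statement overwrites one day of a hypothetical "events" datasource with re-ingested data, leaving streaming-loaded data outside that range untouched. The datasource name, source URI, and columns are assumptions.

```sql
-- Sketch: overwrite one day of (hypothetical) streaming-loaded data with a batch load.
REPLACE INTO "events"
OVERWRITE WHERE "__time" >= TIMESTAMP '2024-06-01' AND "__time" < TIMESTAMP '2024-06-02'
SELECT
  TIME_PARSE("ts") AS "__time",
  "user",
  "action"
FROM TABLE(
  EXTERN(
    '{"type":"http","uris":["https://example.com/events-2024-06-01.json.gz"]}',
    '{"type":"json"}',
    '[{"name":"ts","type":"string"},{"name":"user","type":"string"},{"name":"action","type":"string"}]'
  )
)
WHERE TIME_PARSE("ts") >= TIMESTAMP '2024-06-01' AND TIME_PARSE("ts") < TIMESTAMP '2024-06-02'
PARTITIONED BY DAY
```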

With the Kafka indexing service, you can also reprocess historical data in a pure streaming architecture by migrating to a new stream-based datasource whenever you need to reprocess. This is sometimes called a kappa architecture.

Learn more

See the following topics for more information: