Loading data (files)

Getting started

To load batch data from files, there are two supported methods:

  1. Built-in ingestion with the "index" task. This performs the ingestion work on your Druid nodes. Each task processes data in a single thread, but you can parallelize ingestion by submitting multiple tasks.

  2. Hadoop-based ingestion with the "index_hadoop" task. This performs the ingestion work on a YARN cluster using Hadoop Map/Reduce, where it is automatically parallelized.

If you've never loaded data files into Druid before, we recommend trying out the quickstart first and then coming back to this page.

Built-in ingestion

Druid can load files using built-in ingestion with the "index" task. Each indexing task you submit will run single-threaded. To parallelize the data loading process, you can partition your data by time (e.g. hour, day, or some other time bucketing) and then submit an indexing task for each time partition. Indexing tasks for different intervals can run simultaneously.
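
As a concrete illustration, here is a minimal sketch of an "index" task spec covering a single day of data. The datasource name, file path, and column names are hypothetical placeholders, and details such as the parser configuration vary with your data format and Druid version:

{
  "type" : "index",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "pageviews",
      "parser" : {
        "type" : "string",
        "parseSpec" : {
          "format" : "json",
          "timestampSpec" : { "column" : "time", "format" : "auto" },
          "dimensionsSpec" : { "dimensions" : ["url", "user"] }
        }
      },
      "metricsSpec" : [
        { "type" : "count", "name" : "count" }
      ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "day",
        "queryGranularity" : "none",
        "intervals" : ["2017-01-01/2017-01-02"]
      }
    },
    "ioConfig" : {
      "type" : "index",
      "firehose" : {
        "type" : "local",
        "baseDir" : "/path/to/data",
        "filter" : "pageviews-2017-01-01.json"
      }
    }
  }
}

To load the next day in parallel, you would submit a second task with that day's interval and file. Tasks are submitted to the Overlord's task endpoint (druid/indexer/v1/task).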

For an example of using built-in ingestion, see the Imply quickstart. For reference documentation, see the Druid documentation for ingestion tasks.

Hadoop-based ingestion

Druid can leverage Hadoop Map/Reduce to scale out ingestion, allowing it to load data from files on HDFS, S3, or other filesystems via parallelized YARN jobs. These jobs will scan through your raw data and produce optimized Druid data segments in your configured deep storage. The data will then be loaded by Druid Historical Nodes. Once loading is complete, Hadoop and YARN are not involved in the query path of Druid in any way.

The main advantages of loading data using Hadoop are that it automatically parallelizes the batch data loading process, and that it uses YARN resources rather than your Druid machines, leaving your Druid machines free to handle queries.
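
For illustration, here is a sketch of the skeleton of an "index_hadoop" task spec reading from HDFS; the namenode address and paths are placeholders, and the dataSchema (elided here) takes the same form as in the "index" example above:

{
  "type" : "index_hadoop",
  "spec" : {
    "dataSchema" : { ... },
    "ioConfig" : {
      "type" : "hadoop",
      "inputSpec" : {
        "type" : "static",
        "paths" : "hdfs://your-namenode:9000/path/to/data/"
      }
    },
    "tuningConfig" : {
      "type" : "hadoop",
      "jobProperties" : {}
    }
  }
}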

For an example of using Hadoop-based ingestion, see the Tutorial: Load from Hadoop. For reference documentation, see the Druid documentation for Hadoop ingestion.

Configuration

To configure Druid for running ingestion tasks on a Hadoop cluster, place your Hadoop configuration files (core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml) on the classpath of your Druid nodes, for example by copying them into the common configuration directory shared by all Druid processes.

S3 setup

If your data is stored in S3, you can load it using Elastic MapReduce (EMR) or your own Hadoop cluster. To do this, use the following job properties in your Druid "index_hadoop" task:

"jobProperties" : {
   "fs.s3.awsAccessKeyId" : "YOUR_ACCESS_KEY",
   "fs.s3.awsSecretAccessKey" : "YOUR_SECRET_KEY",
   "fs.s3.impl" : "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
   "fs.s3n.awsAccessKeyId" : "YOUR_ACCESS_KEY",
   "fs.s3n.awsSecretAccessKey" : "YOUR_SECRET_KEY",
   "fs.s3n.impl" : "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
   "io.compression.codecs" : "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec"
}
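
These properties belong inside the task's tuningConfig, alongside an ioConfig whose input paths point at your bucket. As a sketch of how the pieces fit together (the bucket name and path are placeholders):

"ioConfig" : {
  "type" : "hadoop",
  "inputSpec" : {
    "type" : "static",
    "paths" : "s3n://your-bucket/path/to/data/"
  }
},
"tuningConfig" : {
  "type" : "hadoop",
  "jobProperties" : { ... }
}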

If you don't already have a Hadoop cluster, you can create one using Amazon EMR; refer to the Amazon EMR documentation for the cluster creation steps.

This method uses Hadoop's built-in S3 file system rather than Amazon's EMRFS, and is not compatible with Amazon-specific features such as S3 encryption and consistent views. If you need to use those features, you will need to make the Amazon EMR Hadoop JARs available to Druid through one of the mechanisms described in the Using other Hadoop distributions section.

Using other Hadoop distributions

Druid works out of the box with many Hadoop distributions. If you are having dependency conflicts between Druid and your version of Hadoop, you can try reading the Druid Different Hadoop Versions documentation, searching for a solution in the Druid user groups, or contacting us for help.

Loading additional data

When you load additional data into Druid using subsequent indexing tasks, the behavior depends on the intervals of the subsequent tasks. Batch loads in Druid act in a replace-by-interval manner, so if you submit two tasks for the same interval, only data from the later task will be visible. If you submit two tasks for different intervals, both sets of data will be visible.

This behavior makes it easy to reload data that you have corrected or amended in some way: just resubmit an indexing task for the same interval, but pointing at the new data. The replacement occurs atomically.

If you want to append to existing data for a given interval rather than replace it, you can do this in one of two ways:

  1. With built-in ingestion through the "index" task, use the parameter "appendToExisting" as described in the Druid documentation for ingestion tasks.
  2. With Hadoop-based ingestion through the "index_hadoop" task, use delta ingestion as described in the Druid documentation for updating existing data.
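
For example, with the "index" task, "appendToExisting" goes inside the task's ioConfig. A minimal sketch, with placeholder paths (check your Druid version's documentation to confirm this flag is supported):

"ioConfig" : {
  "type" : "index",
  "firehose" : {
    "type" : "local",
    "baseDir" : "/path/to/data",
    "filter" : "*.json"
  },
  "appendToExisting" : true
}

With this flag set, new segments are added alongside the existing segments for the interval rather than replacing them.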