There are two supported methods for loading batch data files into Druid: built-in ingestion using Druid's own indexing tasks, and Hadoop-based ingestion. If you've never loaded data files into Druid before, we recommend trying out the quickstart first and then coming back to this page.
Druid can load files using built-in ingestion with the "index" task. Each indexing task you submit will run single-threaded. To parallelize the data loading process, you can partition your data by time (e.g. hour, day, or some other time bucketing) and then submit an indexing task for each time partition. Indexing tasks for different intervals can run simultaneously.
For an example of using built-in ingestion, see the Imply quickstart. For reference documentation, see the Druid documentation for ingestion tasks.
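As a minimal sketch rather than a complete reference (the datasource name, columns, and file paths here are placeholders), an "index" task spec covering a single day might look like:

{
  "type" : "index",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "pageviews",
      "parser" : {
        "type" : "string",
        "parseSpec" : {
          "format" : "json",
          "timestampSpec" : { "column" : "time", "format" : "auto" },
          "dimensionsSpec" : { "dimensions" : ["url", "user"] }
        }
      },
      "metricsSpec" : [ { "type" : "count", "name" : "views" } ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "day",
        "queryGranularity" : "none",
        "intervals" : ["2018-01-01/2018-01-02"]
      }
    },
    "ioConfig" : {
      "type" : "index",
      "firehose" : {
        "type" : "local",
        "baseDir" : "/data",
        "filter" : "pageviews-2018-01-01.json"
      }
    },
    "tuningConfig" : { "type" : "index" }
  }
}

To parallelize by time as described above, you would submit one such task per time partition, varying only the intervals and the input file filter; tasks for different intervals can run at the same time.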
Druid can leverage Hadoop Map/Reduce to scale out ingestion, allowing it to load data from files on HDFS, S3, or other filesystems via parallelized YARN jobs. These jobs will scan through your raw data and produce optimized Druid data segments in your configured deep storage. The data will then be loaded by Druid Historical Nodes. Once loading is complete, Hadoop and YARN are not involved in the query path of Druid in any way.
The main advantages of loading data using Hadoop are that it automatically parallelizes the batch data loading process, and that it uses YARN resources instead of your Druid machines (leaving your Druid machines free to handle queries).
For an example of using Hadoop-based ingestion, see the Tutorial: Load from Hadoop. For reference documentation, see the Druid documentation for Hadoop ingestion.
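With either method, you submit the task spec as JSON to the Overlord's task endpoint (by default, an HTTP POST to http://OVERLORD_IP:8090/druid/indexer/v1/task with a Content-Type of application/json, where OVERLORD_IP is a placeholder for your Overlord's address), and you can then monitor the task's progress from the Overlord console.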
To configure Druid for running ingestion tasks on a Hadoop cluster:
1. Update druid.indexer.task.hadoopWorkingPath in conf/druid/middleManager/runtime.properties to a path on HDFS that you'd like to use for temporary files required during the indexing process. druid.indexer.task.hadoopWorkingPath=/tmp/druid-indexing is a common choice.
2. Place your Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) on the classpath of your Druid nodes. You can do this by copying them into conf/druid/_common/.
3. Ensure that you have configured a distributed deep storage. Note that while you do need a distributed deep storage in order to load data with Hadoop, it doesn't need to be HDFS. For example, if your cluster is running on Amazon Web Services, we recommend using S3 for deep storage even if you are loading data using Hadoop or Elastic MapReduce. A sketch of an S3 deep storage configuration appears just after this list.
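For example, a minimal sketch of an S3 deep storage configuration in conf/druid/_common/common.runtime.properties (the bucket name, base key, and credentials here are placeholders) might look like:

druid.extensions.loadList=["druid-s3-extensions"]
druid.storage.type=s3
druid.storage.bucket=your-druid-bucket
druid.storage.baseKey=druid/segments
druid.s3.accessKey=YOUR_ACCESS_KEY
druid.s3.secretKey=YOUR_SECRET_KEY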
Hadoop-based Druid ingestion task specs use a different format from built-in ingestion task specs. For an example, see the Tutorial: Load from Hadoop.
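As a rough sketch of the shape of an "index_hadoop" spec (the paths and datasource are placeholders, and the dataSchema contents are elided for brevity):

{
  "type" : "index_hadoop",
  "spec" : {
    "dataSchema" : { ...same structure as in the built-in example above... },
    "ioConfig" : {
      "type" : "hadoop",
      "inputSpec" : {
        "type" : "static",
        "paths" : "hdfs://NAMENODE_IP:9000/data/pageviews/2018-01-01/"
      }
    },
    "tuningConfig" : { "type" : "hadoop" }
  }
}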
If your data is stored in S3, you can load it using Elastic MapReduce (EMR) or your own Hadoop cluster. To do this, use the following job properties in your Druid "index_hadoop" task:
"jobProperties" : {
"fs.s3.awsAccessKeyId" : "YOUR_ACCESS_KEY",
"fs.s3.awsSecretAccessKey" : "YOUR_SECRET_KEY",
"fs.s3.impl" : "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
"fs.s3n.awsAccessKeyId" : "YOUR_ACCESS_KEY",
"fs.s3n.awsSecretAccessKey" : "YOUR_SECRET_KEY",
"fs.s3n.impl" : "org.apache.hadoop.fs.s3native.NativeS3FileSystem",
"io.compression.codecs" : "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec"
}
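These jobProperties belong inside the tuningConfig section of the task spec, for example:

"tuningConfig" : {
  "type" : "hadoop",
  "jobProperties" : {
    "fs.s3.awsAccessKeyId" : "YOUR_ACCESS_KEY",
    ...
  }
}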
If you don't already have a Hadoop cluster, you can use Amazon EMR to create one. When creating your EMR cluster, enter the following into EMR's "Edit software settings" field, which sizes the Map/Reduce containers and JVMs appropriately for indexing:
classification=yarn-site,properties=[mapreduce.reduce.memory.mb=6144,mapreduce.reduce.java.opts=-server -Xms2g -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps,mapreduce.map.memory.mb=758,mapreduce.map.java.opts=-server -Xms512m -Xmx512m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps,mapreduce.task.timeout=1800000]
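If you script cluster creation rather than using the console, the same settings can be expressed as an EMR configurations JSON file (a sketch, assuming you pass it to the AWS CLI's --configurations option; the property values are taken from the line above):

[
  {
    "Classification": "yarn-site",
    "Properties": {
      "mapreduce.reduce.memory.mb": "6144",
      "mapreduce.reduce.java.opts": "-server -Xms2g -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps",
      "mapreduce.map.memory.mb": "758",
      "mapreduce.map.java.opts": "-server -Xms512m -Xmx512m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps",
      "mapreduce.task.timeout": "1800000"
    }
  }
]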
This method uses Hadoop's built-in S3 file system rather than Amazon's EMRFS, and is not compatible with Amazon-specific features such as S3 encryption and consistent views. If you need to use those features, you will need to make the Amazon EMR Hadoop JARs available to Druid through one of the mechanisms described in the Using other Hadoop distributions section.
Druid works out of the box with many Hadoop distributions. If you are having dependency conflicts between Druid and your version of Hadoop, you can try reading the Druid Different Hadoop Versions documentation, searching for a solution in the Druid user groups, or contacting us for help.
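One knob described in the Different Hadoop Versions documentation is the task-level hadoopDependencyCoordinates field, which tells Druid which Hadoop client libraries to load for the task. A sketch, assuming your cluster is compatible with the Hadoop 2.7.3 client (the version shown is a placeholder):

{
  "type" : "index_hadoop",
  "hadoopDependencyCoordinates" : ["org.apache.hadoop:hadoop-client:2.7.3"],
  "spec" : { ... }
}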
When you load additional data into Druid using subsequent indexing tasks, the behavior depends on the intervals of the subsequent tasks. Batch loads in Druid act in a replace-by-interval manner, so if you submit two tasks for the same interval, only data from the later task will be visible. If you submit two tasks for different intervals, both sets of data will be visible.
This behavior makes it easy to reload data that you have corrected or amended in some way: just resubmit an indexing task for the same interval, but pointing at the new data. The replacement occurs atomically.
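For example, if the day of data loaded by the sketch earlier on this page turned out to contain bad records, you could resubmit the same task with the firehose pointed at a corrected file; because the intervals are identical, the new segments atomically replace the old ones:

"granularitySpec" : {
  ...,
  "intervals" : ["2018-01-01/2018-01-02"]
},
"ioConfig" : {
  "type" : "index",
  "firehose" : {
    "type" : "local",
    "baseDir" : "/data/corrected",
    "filter" : "pageviews-2018-01-01.json"
  }
}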
If you want to append to existing data for a given interval rather than replace it, you can do this in one of two ways: