Cached Lookup Module

Please note that this is an experimental module and that its development and testing are still at an early stage. Feel free to try it and give us your feedback.

Description

This Apache Druid module provides a per-lookup caching mechanism for JDBC data sources. The main goal of this cache is to speed up access to high-latency lookup sources and to provide caching isolation for every lookup source. Users can therefore define a different caching strategy or implementation per lookup, even when the underlying source is the same. This module can be used side by side with other lookup modules, such as the global cached lookup module.

To use this extension, please make sure to include druid-lookups-cached-single as an extension.

If using JDBC, you will need to add your database's client JAR files to the extension's directory. For Postgres, the connector JAR is already included. For MySQL, you can get it from https://dev.mysql.com/downloads/connector/j/. Copy or symlink the downloaded file to extensions/druid-lookups-cached-single under the distribution root directory.
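For example, with the standard distribution layout this amounts to adding the extension name to the load list in common.runtime.properties (the exact file location depends on your deployment, and any extensions you already load should stay in the list):

druid.extensions.loadList=["druid-lookups-cached-single"]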

Architecture

Generally speaking, this module can be divided into two main components: the data fetcher layer and the caching layer.

Data Fetcher layer

The first component is the data fetcher layer, defined by the DataFetcher API, which exposes a set of fetch methods for retrieving data from the actual lookup dimension source. For instance, JdbcDataFetcher provides an implementation of DataFetcher that fetches key/value pairs from an RDBMS via a JDBC driver. If you need a new type of data fetcher, all you need to do is implement the DataFetcher interface and load it via another Druid module.
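
For reference, the jdbcDataFetcher spec used by the examples later on this page looks like the following; the connection string, table name, and column names are placeholders that you would replace with your own values:

{
  "type": "jdbcDataFetcher",
  "connectorConfig": "jdbc:mysql://localhost:3306/my_data_base",
  "table": "lookup_table_name",
  "keyColumn": "key_column_name",
  "valueColumn": "value_column_name"
}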

Caching layer

This extension comes with two different caching strategies: the first is poll-based and the second is load-based.

Poll lookup cache

The poll cache strategy periodically fetches and swaps in all of the key/value pairs from the lookup source. Hence, users should make sure that the cache can fit all of the data. The current implementation provides two types of poll cache: the first is on-heap (using an immutable map), while the second uses a MapDB-based off-heap map. Users can also implement a different lookup polling cache by implementing the PollingCacheFactory and PollingCache interfaces.

Loading lookup

The loading cache strategy loads a key/value pair on request for the key itself; the general algorithm is load-if-absent. Once a key/value pair is loaded, eviction occurs according to the cache eviction policy. This module comes with two loading lookup implementations: the first is on-heap, backed by a Guava cache, while the second is an off-heap MapDB implementation. Both implementations offer various eviction strategies. As with the polling cache, developers can implement a new type of loading cache by implementing the LookupLoadingCache interface.

Configuration and Operation

Polling Lookup

Note that the current implementation of offHeapPolling and onHeapPolling creates two caches: one to look up the value for a given key, and another to reverse look up the key for a given value.

Field | Type | Description | Required | Default
----- | ---- | ----------- | -------- | -------
dataFetcher | JSON object | Specifies the lookup data fetcher type to use in order to fetch data | yes | null
cacheFactory | JSON object | Cache factory implementation | no | onHeapPolling
pollPeriod | Period | Polling period | no | null (poll once)

Example of Polling On-heap Lookup

This example demonstrates a polling cache that updates its on-heap cache every 10 minutes.

{
  "type": "pollingLookup",
  "pollPeriod": "PT10M",
  "dataFetcher": {
    "type": "jdbcDataFetcher",
    "connectorConfig": "jdbc:mysql://localhost:3306/my_data_base",
    "table": "lookup_table_name",
    "keyColumn": "key_column_name",
    "valueColumn": "value_column_name"
  },
  "cacheFactory": { "type": "onHeapPolling" }
}

Example Polling Off-heap Lookup

This example demonstrates an off-heap lookup that is cached once and never swapped (pollPeriod is null).

{
  "type": "pollingLookup",
  "dataFetcher": {
    "type": "jdbcDataFetcher",
    "connectorConfig": "jdbc:mysql://localhost:3306/my_data_base",
    "table": "lookup_table_name",
    "keyColumn": "key_column_name",
    "valueColumn": "value_column_name"
  },
  "cacheFactory": { "type": "offHeapPolling" }
}
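
A polling (or loading) spec such as the ones above is not registered on its own: it becomes the lookupExtractorFactory of a lookup submitted through the Coordinator lookup configuration API described in the Lookups documentation (POST /druid/coordinator/v1/lookups/config/{tier}/{lookupName}). The sketch below assumes the default __default tier and an arbitrary lookup name in the URL, and shows only the request payload:

{
  "version": "v1",
  "lookupExtractorFactory": {
    "type": "pollingLookup",
    "pollPeriod": "PT10M",
    "dataFetcher": {
      "type": "jdbcDataFetcher",
      "connectorConfig": "jdbc:mysql://localhost:3306/my_data_base",
      "table": "lookup_table_name",
      "keyColumn": "key_column_name",
      "valueColumn": "value_column_name"
    },
    "cacheFactory": { "type": "onHeapPolling" }
  }
}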

Loading lookup

Field | Type | Description | Required | Default
----- | ---- | ----------- | -------- | -------
dataFetcher | JSON object | Specifies the lookup data fetcher type to use in order to fetch data | yes | null
loadingCacheSpec | JSON object | Lookup cache spec implementation | yes | null
reverseLoadingCacheSpec | JSON object | Reverse lookup cache implementation | yes | null

Example Loading On-heap Guava

Guava cache configuration spec.

Field | Type | Description | Required | Default
----- | ---- | ----------- | -------- | -------
concurrencyLevel | int | Allowed concurrency among update operations | no | 4
initialCapacity | int | Initial capacity size | no | null
maximumSize | long | Specifies the maximum number of entries the cache may contain | no | null (infinite capacity)
expireAfterAccess | long | Specifies the eviction time after last read, in milliseconds | no | null (no read-time-based eviction)
expireAfterWrite | long | Specifies the eviction time after last write, in milliseconds | no | null (no write-time-based eviction)

{
  "type": "loadingLookup",
  "dataFetcher": {
    "type": "jdbcDataFetcher",
    "connectorConfig": "jdbc:mysql://localhost:3306/my_data_base",
    "table": "lookup_table_name",
    "keyColumn": "key_column_name",
    "valueColumn": "value_column_name"
  },
  "loadingCacheSpec": { "type": "guava" },
  "reverseLoadingCacheSpec": { "type": "guava", "maximumSize": 500000, "expireAfterAccess": 100000, "expireAfterWrite": 10000 }
}

Example Loading Off-heap MapDB

The off-heap cache is backed by a MapDB implementation. MapDB uses direct memory as its memory pool, so please take that into account when setting the JVM direct memory limit.
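
For example, the direct memory pool is capped by the standard JVM option -XX:MaxDirectMemorySize in the jvm.config of the process hosting the lookup; the value below is purely illustrative and must be sized together with your heap, page cache, and other off-heap usage:

-XX:MaxDirectMemorySize=4g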

Field | Type | Description | Required | Default
----- | ---- | ----------- | -------- | -------
maxStoreSize | double | Maximum size of the store in GB; if the store grows larger, entries will start expiring | no | 0
maxEntriesSize | long | Specifies the maximum number of entries the cache may contain | no | 0 (infinite capacity)
expireAfterAccess | long | Specifies the eviction time after last read, in milliseconds | no | 0 (no read-time-based eviction)
expireAfterWrite | long | Specifies the eviction time after last write, in milliseconds | no | 0 (no write-time-based eviction)

{
  "type": "loadingLookup",
  "dataFetcher": {
    "type": "jdbcDataFetcher",
    "connectorConfig": "jdbc:mysql://localhost:3306/my_data_base",
    "table": "lookup_table_name",
    "keyColumn": "key_column_name",
    "valueColumn": "value_column_name"
  },
  "loadingCacheSpec": { "type": "mapDb", "maxEntriesSize": 100000 },
  "reverseLoadingCacheSpec": { "type": "mapDb", "maxStoreSize": 5, "expireAfterAccess": 100000, "expireAfterWrite": 10000 }
}