Bloom Filter

This Apache Druid extension adds the ability to construct Bloom filters from query results and to filter query results by testing against a Bloom filter. Make sure to include druid-bloom-filter as an extension.

A Bloom filter is a probabilistic data structure for performing set membership checks. A Bloom filter is a good candidate to use with Druid for cases where an explicit filter is impractical, e.g. filtering a query against a set of millions of values.

Following are some characteristics of Bloom filters:

  • Bloom filters are highly space-efficient compared to a HashSet.
  • Because of their probabilistic nature, false positives are possible: test() may return true for an element that was never inserted during construction.
  • False negatives are not possible: if an element was inserted, test() will never return false.
  • The false positive probability of this implementation is currently fixed at 5%. Sizing the filter for more entries than will actually be inserted lowers the effective false positive rate, at the cost of a larger filter.
  • Bloom filters are sensitive to the number of inserted elements: the expected number of entries must be specified when the filter is created, and if the number of insertions exceeds that capacity, the false positive probability increases accordingly.

This extension is currently based on org.apache.hive.common.util.BloomKFilter from hive-storage-api. Internally, this implementation uses Murmur3 as the hash algorithm.

To construct a BloomKFilter externally in Java for use as a filter in a Druid query:

import java.io.ByteArrayOutputStream;
import org.apache.commons.codec.binary.Base64;
import org.apache.hive.common.util.BloomKFilter;

// Size the filter for the expected number of entries.
BloomKFilter bloomFilter = new BloomKFilter(1500);
bloomFilter.addString("value 1");
bloomFilter.addString("value 2");
bloomFilter.addString("value 3");
// Serialize the filter and Base64-encode the bytes for use in a query.
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
BloomKFilter.serialize(byteArrayOutputStream, bloomFilter);
String base64Serialized = Base64.encodeBase64String(byteArrayOutputStream.toByteArray());

This Base64 string can then be used in a native or SQL Druid query.

Filtering queries with a Bloom Filter

JSON Specification of Bloom Filter

{
  "type" : "bloom",
  "dimension" : <dimension_name>,
  "bloomKFilter" : <serialized_bytes_for_BloomKFilter>,
  "extractionFn" : <extraction_fn>
}
| Property | Description | Required? |
|----------|-------------|-----------|
| type | Filter type. Should always be bloom. | yes |
| dimension | The dimension to filter over. | yes |
| bloomKFilter | Base64-encoded binary representation of org.apache.hive.common.util.BloomKFilter. | yes |
| extractionFn | Extraction function to apply to the dimension values. | no |

Serialized Format for BloomKFilter

Serialized BloomKFilter format:

  • 1 byte for the number of hash functions.
  • 1 big-endian int for the number of longs in the bitset (big-endian is the byte order used by Java's DataOutputStream).
  • Big-endian longs making up the BloomKFilter bitset.

Note: org.apache.hive.common.util.BloomKFilter provides a serialize method that writes Bloom filters to an OutputStream in this format.
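
As a minimal sketch, the layout can be read back with a DataInputStream, which uses the same big-endian byte order (this assumes the base64Serialized string produced by the Java example above):

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import org.apache.commons.codec.binary.Base64;

// Minimal sketch: walk the serialized layout described above.
static void describeFilter(String base64Serialized) throws IOException
{
  byte[] bytes = Base64.decodeBase64(base64Serialized);
  DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
  int numHashFunctions = in.readUnsignedByte();  // 1 byte: number of hash functions
  int numLongs = in.readInt();                   // big-endian int: longs in the bitset
  long[] bitset = new long[numLongs];
  for (int i = 0; i < numLongs; i++) {
    bitset[i] = in.readLong();                   // big-endian longs: the bitset itself
  }
  System.out.printf("%d hash functions, %d-long bitset%n", numHashFunctions, numLongs);
}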

Filtering SQL Queries

Bloom filters can be used in SQL WHERE clauses via the bloom_filter_test operator:

SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(<expr>, '<serialized_bytes_for_BloomKFilter>')
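
For example, a hedged sketch of issuing this query from a Java client over Druid's Avatica JDBC endpoint (the broker URL and the dim1 column are illustrative; base64Serialized is the string built earlier):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Requires the Avatica JDBC driver on the classpath.
String url = "jdbc:avatica:remote:url=http://localhost:8082/druid/v2/sql/avatica/";
try (Connection conn = DriverManager.getConnection(url);
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(
         "SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(dim1, '" + base64Serialized + "')")) {
  while (rs.next()) {
    System.out.println(rs.getLong(1));  // number of matching rows
  }
}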

Expression and Virtual Column Support

The Bloom filter extension also adds a bloom_filter_test Druid expression, which shares syntax with the SQL operator:

bloom_filter_test(<expr>, '<serialized_bytes_for_BloomKFilter>')

Bloom Filter Query Aggregator

Input for a BloomKFilter can also be created from a Druid query with the bloom aggregator. Note that it is very important to set a reasonable value for the maxNumEntries parameter, which is the maximum number of distinct entries that the Bloom filter can represent without increasing the false positive rate. It may be worth performing a query using one of the unique count sketches to calculate an appropriate value for this parameter, in order to build a Bloom filter suitable for the query.
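
For instance, a hedged sketch of estimating a suitable maxNumEntries with Druid SQL's APPROX_COUNT_DISTINCT (the datasource and column are taken from the example below; the exact query is illustrative):

SELECT APPROX_COUNT_DISTINCT("user") FROM wikiticker WHERE __time >= TIMESTAMP '2015-09-12 00:00:00'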

JSON Specification of Bloom Filter Aggregator

{
  "type": "bloom",
  "name": <output_field_name>,
  "maxNumEntries": <maximum_number_of_elements_for_BloomKFilter>,
  "field": <dimension_spec>
}
| Property | Description | Required? |
|----------|-------------|-----------|
| type | Aggregator type. Should always be bloom. | yes |
| name | Output field name. | yes |
| field | DimensionSpec to add to org.apache.hive.common.util.BloomKFilter. | yes |
| maxNumEntries | Maximum number of distinct values supported by org.apache.hive.common.util.BloomKFilter. Default 1500. | no |

Example

{
  "queryType": "timeseries",
  "dataSource": "wikiticker",
  "intervals": [ "2015-09-12T00:00:00.000/2015-09-13T00:00:00.000" ],
  "granularity": "day",
  "aggregations": [
    {
      "type": "bloom",
      "name": "userBloom",
      "maxNumEntries": 100000,
      "field": {
        "type":"default",
        "dimension":"user",
        "outputType": "STRING"
      }
    }
  ]
}

Response:

[{"timestamp":"2015-09-12T00:00:00.000Z","result":{"userBloom":"BAAAJhAAAA..."}}]

These values can then be set in the filter specification described above.
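
As a hedged sketch, an aggregator result can also be decoded client-side, assuming hive-storage-api's BloomKFilter.deserialize and commons-codec on the classpath (the value name matches the example response):

import java.io.ByteArrayInputStream;
import java.io.IOException;
import org.apache.commons.codec.binary.Base64;
import org.apache.hive.common.util.BloomKFilter;

// Decode the "userBloom" value from the response and test membership locally.
static boolean mightContain(String userBloomBase64, String value) throws IOException
{
  byte[] bytes = Base64.decodeBase64(userBloomBase64);
  BloomKFilter filter = BloomKFilter.deserialize(new ByteArrayInputStream(bytes));
  return filter.testString(value);  // true may be a false positive; false is definitive
}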

Ordering results by a bloom filter aggregator, for example in a TopN query, will perform a comparatively expensive linear scan of the filter itself to count the number of set bits as a means of approximating how many items have been added to the set. As such, ordering by an alternate aggregation is recommended if possible.

SQL Bloom Filter Aggregator

Bloom filters can be computed in SQL expressions with the bloom_filter aggregator:

SELECT BLOOM_FILTER(<expression>, <max number of entries>) FROM druid.foo WHERE dim2 = 'abc'

This aggregator requires druid.sql.planner.serializeComplexValues to be set to true. Bloom filter results in a SQL response are serialized into a Base64 string, which can then be used as a filter in subsequent queries.
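
Putting the two together, a hedged sketch of the round trip (the datasource, column names, and entry count are illustrative): first compute a filter,

SELECT BLOOM_FILTER(dim1, 1000) FROM druid.foo WHERE dim2 = 'abc'

then paste the Base64 string from that response into a follow-up query:

SELECT COUNT(*) FROM druid.foo WHERE bloom_filter_test(dim1, '<base64_string_from_first_query>')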
