
dump-segment tool

The DumpSegment tool can be used to dump the metadata or contents of an Apache Druid segment for debugging purposes. Note that the dump is not necessarily a full-fidelity translation of the segment. In particular, not all metadata is included, and complex metric values may not be complete.

To run the tool, point it at a segment directory and provide a file for writing output:

java -classpath "/my/druid/lib/*" -Ddruid.extensions.loadList="[]" org.apache.druid.cli.Main \
  tools dump-segment \
  --directory /home/druid/path/to/segment/ \
  --out /home/druid/output.txt

Output format

Data dumps

By default, or with --dump rows, this tool dumps rows of the segment as newline-separated JSON objects, one object per line, using the default serialization for each column. Normally all columns are included, but if you like, you can limit the dump to specific columns with --column name.

For example, one line might look like this when pretty-printed:

{
  "__time": 1442018818771,
  "added": 36,
  "channel": "#en.wikipedia",
  "cityName": null,
  "comment": "added project",
  "count": 1,
  "countryIsoCode": null,
  "countryName": null,
  "deleted": 0,
  "delta": 36,
  "isAnonymous": "false",
  "isMinor": "false",
  "isNew": "false",
  "isRobot": "false",
  "isUnpatrolled": "false",
  "iuser": "00001553",
  "metroCode": null,
  "namespace": "Talk",
  "page": "Talk:Oswald Tilghman",
  "regionIsoCode": null,
  "regionName": null,
  "user": "GELongstreet"
}
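
If you only need a few columns, pass --column once per column. For example, a minimal sketch that dumps just the __time, page, and user columns seen above (the segment and output paths are illustrative placeholders):

# illustrative paths; adjust the segment directory and output file for your setup
java -classpath "/my/druid/lib/*" -Ddruid.extensions.loadList="[]" org.apache.druid.cli.Main \
  tools dump-segment \
  --directory /home/druid/path/to/segment/ \
  --column __time \
  --column page \
  --column user \
  --out /home/druid/rows.txt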

Metadata dumps

With --dump metadata, this tool dumps metadata instead of rows. Metadata dumps generated by this tool are in the same format as returned by the SegmentMetadata query.
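
For example, a minimal invocation that writes the segment metadata to a file might look like the following sketch (the paths are illustrative placeholders):

# illustrative paths; adjust the segment directory and output file for your setup
java -classpath "/my/druid/lib/*" -Ddruid.extensions.loadList="[]" org.apache.druid.cli.Main \
  tools dump-segment \
  --directory /home/druid/path/to/segment/ \
  --dump metadata \
  --out /home/druid/metadata.json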

Bitmap dumps

With --dump bitmaps, this tool dumps bitmap indexes instead of rows. Bitmap dumps generated by this tool include dictionary-encoded string columns only. The output contains a field "bitmapSerdeFactory" describing the type of bitmaps used in the segment, and a field "bitmaps" containing the bitmaps for each value of each column. The bitmaps are base64-encoded by default, but you can also dump them as lists of row numbers with --decompress-bitmaps.

Normally all columns are included, but if you like, you can limit the dump to specific columns with --column name.

Sample output:

{
  "bitmapSerdeFactory": {
    "type": "roaring",
    "compressRunOnSerialization": true
  },
  "bitmaps": {
    "isRobot": {
      "false": "//aExfu+Nv3X...",
      "true": "gAl7OoRByQ..."
    }
  }
}
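
To inspect the bitmap for a single column as plain row numbers rather than base64 strings, you can combine --column with --decompress-bitmaps, as in this sketch (the paths are illustrative placeholders; the isRobot column comes from the sample above):

# illustrative paths; adjust the segment directory and output file for your setup
java -classpath "/my/druid/lib/*" -Ddruid.extensions.loadList="[]" org.apache.druid.cli.Main \
  tools dump-segment \
  --directory /home/druid/path/to/segment/ \
  --dump bitmaps \
  --column isRobot \
  --decompress-bitmaps \
  --out /home/druid/bitmaps.json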

Command line arguments

argument | description | required?
--directory file | Directory containing segment data. This could be generated by unzipping an "index.zip" from deep storage. | yes
--out file | File to write to, or omit to write to stdout. | no
--dump TYPE | Dump either 'rows' (default), 'metadata', or 'bitmaps'. | no
--column columnName | Column to include. Specify multiple times for multiple columns, or omit to include all columns. | no
--filter json | JSON-encoded query filter. Omit to include all rows. Only used if dumping rows. | no
--time-iso8601 | Format the __time column as ISO 8601 rather than a long timestamp. Only used if dumping rows. | no
--decompress-bitmaps | Dump bitmaps as arrays of row numbers rather than base64-encoded compressed bitmaps. Only used if dumping bitmaps. | no
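
These arguments can be combined. As a final sketch, the following dumps only the rows for a single channel, with __time rendered as ISO 8601 (the selector filter value is taken from the sample row above; the paths are illustrative placeholders):

# illustrative paths and filter value; adjust for your segment and query
java -classpath "/my/druid/lib/*" -Ddruid.extensions.loadList="[]" org.apache.druid.cli.Main \
  tools dump-segment \
  --directory /home/druid/path/to/segment/ \
  --filter '{"type":"selector","dimension":"channel","value":"#en.wikipedia"}' \
  --time-iso8601 \
  --out /home/druid/filtered-rows.txt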