Alerts

Druid generates alerts when it encounters unexpected situations.

Alerts are emitted as JSON objects to a runtime log file or over HTTP (to a service such as Apache Kafka). Alert emission is disabled by default.
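As a sketch of how emission is enabled, the emitter is selected in common.runtime.properties; the property names below reflect Druid's standard logging and HTTP emitters, but check the Configuration reference for your version, and note that the endpoint URL here is purely hypothetical:

    # Illustrative only -- verify property names against the Configuration reference.
    # Emit alerts (and metrics) to the runtime log:
    druid.emitter=logging
    druid.emitter.logging.logLevel=info

    # Or emit over HTTP to an external collector (hypothetical endpoint):
    # druid.emitter=http
    # druid.emitter.http.recipientBaseUrl=http://collector.example.com:8080/druid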

All Druid alerts share a common set of fields:

  • timestamp - the time the alert was created
  • service - the name of the service that emitted the alert
  • host - the name of the host that emitted the alert
  • severity - the severity of the alert, e.g. anomaly, component-failure, service-failure
  • description - a description of the alert
  • data - if the alert was triggered by an exception, a JSON object with the fields exceptionType, exceptionMessage, and exceptionStackTrace
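For illustration, an alert emitted after a task failure might look like the following; all values shown are invented:

    {
      "timestamp": "2022-06-01T12:34:56.789Z",
      "service": "druid/middleManager",
      "host": "druid-mm-1.example.com:8091",
      "severity": "component-failure",
      "description": "Exception while running task",
      "data": {
        "exceptionType": "java.lang.RuntimeException",
        "exceptionMessage": "Task failed",
        "exceptionStackTrace": "java.lang.RuntimeException: Task failed\n\tat ..."
      }
    }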