Monitoring

To monitor an Imply cluster, you can use Clarity and the Druid status APIs.

Clarity

On Imply Cloud, you can access Clarity from the Manager console. To do so, from the cluster overview page, click Monitor from the left menu.

Navigating the Clarity UI

By default, Clarity opens in the Visuals pane.

From the Clarity home page, you can navigate among the various Clarity views.

Clarity alerts

It's a good practice to open the Clarity UI regularly to inspect the performance of your Imply cluster. In addition, by configuring alerts, you can have Clarity notify you when a condition is met. You can configure conditions to evaluate
query times, exception counts, and more.

You can configure alerts from the Alerts tab.

Clarity emitter configurations

You can control the way Druid emits metrics by adding the following properties to the common.runtime.properties file on the cluster that emits metrics.

Add druid.emitter.clarity. as a prefix to the field names shown, for example, druid.emitter.clarity.topic and druid.emitter.clarity.producer.bootstrap.servers.

| Field | Type | Description | Default | Required |
|---|---|---|---|---|
| topic | String | HTTP endpoint to which events are posted, e.g. http://&lt;clarity collector host&gt;:&lt;port&gt;/d/&lt;username&gt; | [required] | yes |
| producer.bootstrap.servers | String | Kafka "bootstrap.servers" configuration (a list of brokers) | [required] | yes |
| producer.* | String | Can be used to specify any other Kafka producer property. | empty | no |
| clusterName | String | Cluster name used to tag events | null | no |
| anonymous | Boolean | Whether hostnames are scrubbed from events | false | no |
| maxBufferSize | Integer | Maximum size of the event buffer | min(250 MB, 10% of heap) | no |
| samplingRate | Integer | For sampled metrics, the percentage of metrics that will be emitted | 100 | no |
| sampledMetrics | List | Which event types are sampled | ["query/wait/time", "query/segment/time", "query/segmentAndCache/time"] | no |
| sampledNodeTypes | List | Which node types are sampled | ["druid/historical", "druid/peon", "druid/realtime"] | no |
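
As an illustration, a common.runtime.properties fragment using these properties might look like the following. The collector endpoint, Kafka brokers, and cluster name are placeholders for your own values, and depending on your distribution you may also need to load the Clarity emitter extension:

```properties
# Placeholder collector endpoint and Kafka brokers -- substitute your own
druid.emitter.clarity.topic=http://clarity-collector.example.com:9997/d/example-user
druid.emitter.clarity.producer.bootstrap.servers=kafka1:9092,kafka2:9092

# Optional settings from the table above
druid.emitter.clarity.clusterName=prod-cluster
druid.emitter.clarity.anonymous=false
```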

Optional metricsCluster connection parameters

The Clarity configuration steps above describe the basic, required settings for connecting Clarity to the metrics collection cluster.

The following connection settings are optional, or are required only as necessitated by your metrics collection Druid configuration.

These settings are equivalent to those in the Pivot configuration; however, Pivot's settings are separate from Clarity's.

| Field | Description |
|---|---|
| timeout | The timeout for metric queries, in milliseconds. Default is 40000. |
| protocol | The connection protocol: plain (the default), tls-loose, or tls. If tls, specify ca, cert, key, and passphrase. |
| ca | If connecting via TLS, a trusted certificate of the certificate authority when using self-signed certificates. Should be PEM-formatted text. |
| cert | If connecting via TLS, the client-side certificate to present. Should be PEM-formatted text. |
| key | If connecting via TLS, the private key. Should be PEM-formatted text. |
| passphrase | If connecting via TLS, a passphrase for the private key, if needed. |
| defaultDbAuthToken | If Druid authentication is enabled, the default token used to authenticate against this connection. |
| socksHost | If Clarity needs to connect to Druid via a SOCKS5 proxy, the hostname of the proxy host. |
| socksUsername | The user for the SOCKS proxy, if needed. |
| socksPassword | The password for proxy authentication, if needed. |
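
As an illustration only, a metricsCluster block using some of these optional settings might look like the following. The host value and certificate contents are placeholders, and the exact layout of your configuration file may differ:

```yaml
metricsCluster:
  host: https://metrics-broker.example.com:8282   # placeholder broker address
  timeout: 40000        # metric query timeout in milliseconds
  protocol: tls         # requires ca, cert, key, and (optionally) passphrase
  ca: |
    -----BEGIN CERTIFICATE-----
    (PEM-formatted CA certificate)
    -----END CERTIFICATE-----
```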

Status APIs

Druid includes status APIs that return metrics that help you gauge the health of the system. The following APIs are especially useful for monitoring.

  1. Unavailable segments: On the Coordinator, check /druid/coordinator/v1/loadstatus?simple and verify each datasource registers "0". This is the number of unavailable segments. It may briefly be non-zero when new segments are added, but if this value is high for a prolonged period of time, it indicates a problem with segment availability in your cluster. In this case, check your data nodes to confirm they are healthy, have spare disk space to load data, and have access to your S3 bucket where data is stored.

  2. Data freshness: Run a "dataSourceMetadata" query to get the "maxIngestedEventTime" and verify that it's recent enough for your needs. For example, alert if it's more than a few minutes old. This is an inexpensive Druid query, since it only hits the most recent segments and it only looks at the last row of data. In addition to verifying ingestion time, this also verifies that Druid is responsive to queries. If this value is staler than you expect, it can indicate that real-time data is not being loaded properly. In this case, use the Imply Manager to verify that your data ingestion is healthy, that there have not been any errors loading data, and that you have enough capacity to load the amount of data that you're trying to load.
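
A minimal sketch of both checks in Python, using only the standard library. The Coordinator address is a hypothetical placeholder; the helper functions operate on the JSON responses described above:

```python
import json
from datetime import datetime, timedelta, timezone
from urllib.request import urlopen

def unavailable_datasources(loadstatus):
    """Given the /druid/coordinator/v1/loadstatus?simple response
    (a mapping of datasource name to unavailable-segment count),
    return the datasources whose count is not 0."""
    return sorted(ds for ds, count in loadstatus.items() if count != 0)

def is_stale(max_ingested_event_time, threshold=timedelta(minutes=5)):
    """Given maxIngestedEventTime from a dataSourceMetadata query
    (an ISO-8601 timestamp string), report whether it is older
    than the freshness threshold."""
    ts = datetime.fromisoformat(max_ingested_event_time.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - ts > threshold

if __name__ == "__main__":
    # Hypothetical Coordinator address -- substitute your own.
    url = "http://coordinator.example.com:8081/druid/coordinator/v1/loadstatus?simple"
    with urlopen(url) as resp:
        stuck = unavailable_datasources(json.load(resp))
    if stuck:
        print("Unavailable segments in:", stuck)
```

A check like this can run on a schedule and feed whatever alerting system you already use.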

See Druid API reference for more information.
