Use dashboards to monitor Polaris

Imply Polaris provides built-in dashboards and data cubes for monitoring query performance and event stream ingestion. You can use these tools to drill into detailed metrics and evaluate the performance of your Polaris project. To access the monitoring dashboards and data cubes, go to the Monitoring section in the left sidebar.

This topic provides an overview of the monitoring capabilities available in the Polaris UI. For information on how to import metrics into third-party monitoring systems, see Monitor performance metrics.

User queries

The User Queries view provides a single-page dashboard for monitoring query performance.

In this dashboard, you can analyze the following:

  • User activity: Track the number of distinct query users and the top query users. You can filter by user to investigate performance issues for specific users.
  • Query performance: Track the 98th percentile of query execution times, average query latency, total number of queries executed, and total number of failed queries.
  • Query processing: Evaluate the average and 98th percentile of query wait times to determine whether to scale up your project in response to high concurrent load issues.
  • Segment scanning: Assess the number of segments scanned and the segment scan times. High segment scan times suggest that your segment files are too large, which you can address with data partitioning. A high number of scans suggests that your data is overly fragmented; in that case, you may benefit from configuring data rollup.
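If you export raw per-query metrics to an external system, you can reproduce the dashboard's aggregate figures yourself. The following sketch computes the 98th percentile and average of query times from a list of durations; the sample values are hypothetical, and in practice you would pull these from your metrics export:

```python
# Sketch: compute p98 and average query time from per-query durations (ms).
# The sample data below is hypothetical placeholder data.
import statistics

durations_ms = [12, 18, 25, 40, 55, 90, 120, 300, 450, 2100]

def p98(values):
    """Return the 98th percentile using linear interpolation."""
    # quantiles(n=100) returns 99 cut points; index 97 is the 98th percentile.
    return statistics.quantiles(values, n=100, method="inclusive")[97]

print(f"p98 query time: {p98(durations_ms):.1f} ms")
print(f"average latency: {statistics.mean(durations_ms):.1f} ms")
```

Note how a single slow outlier (2100 ms) dominates the p98 while the average stays much lower, which is why the dashboard surfaces both figures.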

Streaming

The Streaming view provides a dashboard to monitor streaming ingestion.

This dashboard displays the following:

  • Volume of incoming events and latency to ingest those events
  • Issues from streaming ingestion, including unparseable events and expired records rejected by Polaris
  • Number of rows output from processed events
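The dashboard surfaces these signals visually; if you also export the underlying counters, a simple health check can flag the same problems programmatically. This sketch is illustrative only: the parameter names and thresholds are hypothetical placeholders, not Polaris metric names.

```python
# Sketch of a streaming-ingestion health check. All names and thresholds
# here are hypothetical; map them to the counters you actually export.

def streaming_health(events_in, rows_out, unparseable, lag_seconds,
                     max_unparseable_ratio=0.01, max_lag_seconds=60):
    """Flag common streaming-ingestion problems shown on the dashboard."""
    problems = []
    if events_in and unparseable / events_in > max_unparseable_ratio:
        problems.append("high unparseable-event ratio")
    if lag_seconds > max_lag_seconds:
        problems.append("ingestion latency above threshold")
    if events_in and rows_out == 0:
        problems.append("events arriving but no rows produced")
    return problems

print(streaming_health(events_in=10_000, rows_out=9_800,
                       unparseable=250, lag_seconds=15))
```

A nonzero unparseable-event ratio usually points at malformed records in the stream, while rising latency suggests ingestion is falling behind the incoming event volume.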

Detailed metrics

The Detailed Metrics view provides a data cube where you can investigate specific metrics with the option to filter by table, query type, and query ID.
