Quickstart

In this guide, we'll spin up an Imply cluster, load some example data, and visualize it.

Prerequisites

You will need an Imply Cloud account at https://implycloud.com/. Sign up for a free account if you do not have one.

The configuration used in this quickstart is intended to minimize resource usage and is not meant for load testing large production data sets.

Launch a cluster

After you log into Imply Cloud, you will be presented with the main menu. Select "Manager" from the list of options. You will be taken to the Clusters view.

Clusters View

In this view, click the "New cluster" button in the top-right corner.

New Cluster

Choose a name for your cluster, and use the default values for the version and the instance role.

Let's spin up a basic cluster that uses one data server (used for storing and aggregating data), one query server (used for merging partial results from data servers), and one master server (used for cluster coordination). We will only use t2.small instances.

The cluster we are creating in this quickstart is not highly available. A highly available cluster requires, at a minimum, 2 data servers, 2 query servers, and 3 master servers.

Click "Create cluster" to launch a cluster in your AWS VPC.

Load data file

In this tutorial we will fetch and load data representing Wikipedia edits from June 27, 2016, hosted on a public web server. If a firewall or other connectivity restriction prevents you from making outbound requests to fetch the file, you can load the sample manually using the instructions below.

1. Open Pivot. To access Pivot, go to http://localhost:9095. You should see a page similar to the following screenshot. If you see a connection refused error, it may mean your Druid cluster is not yet online. Try waiting a few seconds and refreshing the page.

quickstart 1

2. Open the Druid console. Click the Load data button at the top right to open the data loader in the Druid console.

The data loader lets you ingest from a number of static and streaming sources, as shown below:

quickstart 2

3. Start the data loader. Select the HTTP(s) option, since we'll be loading data from an online location, then click Connect data to start the flow.

You can get more information about any of the settings or steps in the following instructions by clicking the information icon next to a setting in the UI, or by following the link to the Druid documentation in the description panel at the top right.

4. Sample the data. Enter https://static.imply.io/data/wikipedia.json.gz in the URIs input and click Apply to preview the data. Once you see the screen below, you are ready to move to the next step by clicking Next: Parse data.

quickstart 3
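Behind the scenes, this step fills in the ioConfig portion of the ingestion spec that you will review in a later step. As a rough sketch, and assuming the parallel batch task type, the loader records something like the following for this URI (the exact field names vary by Druid version; newer versions use an inputSource and inputFormat instead of a firehose):

"ioConfig": {
  "type": "index_parallel",
  "firehose": {
    "type": "http",
    "uris": ["https://static.imply.io/data/wikipedia.json.gz"]
  }
}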

The data loader cannot load every type of data supported by Druid. You may find that for your particular dataset, you need to load data by submitting a task directly, rather than going through the data loader. For an example of how to do this, see the section Loading sample Wikipedia data offline.

5. Configure the parser. The data loader automatically detects the parser type for the data and presents a preview of the parsed output. In this case, it should have suggested the json parser, as is appropriate for this dataset. Proceed to the next step by clicking Next: Parse time.

quickstart 4

6. Configure the time column parsing. Druid uses a timestamp column to partition your data. This page allows you to identify which column should be used as the primary time column and how the timestamp is formatted. In this case, the loader should have automatically detected the timestamp column and chosen the iso format. Click Next: Transform to continue.

quickstart 5
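Together, the parser and timestamp choices from these two steps correspond to the parseSpec section of the ingestion spec. A minimal sketch, assuming the detected column is named timestamp and omitting the dimensionsSpec that the next step fills in:

"parser": {
  "type": "string",
  "parseSpec": {
    "format": "json",
    "timestampSpec": {
      "column": "timestamp",
      "format": "iso"
    }
  }
}

In newer Druid versions the same information may instead appear as an inputFormat plus a timestampSpec directly under dataSchema.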

7. Configure the schema. Click Next a few more times to skip the Transform and Filter steps. The Transform step lets you modify columns at ingestion time or create new derived columns, while the Filter step lets you exclude unwanted rows.

The Configure schema step presents a preview of how the data will look in Druid after ingestion. Druid can index data using an ingestion-time, first-level aggregation known as "roll-up". Roll-up causes similar events to be aggregated during indexing, which can result in reduced disk usage and faster queries for certain types of data. The Druid Concepts page provides an introduction on how roll-up works. For this quickstart, click on the Rollup toggle to turn rollup off and click Next: Partition.

quickstart 6

8. Configure the partition. Partitioning defines the time chunk granularity of the ingested data. Each time chunk contains one or more segments holding the rows whose timestamps fall within that chunk. Choose DAY as the Segment granularity for our data and click Next: Tune.

quickstart 8
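The rollup toggle from the previous step and the segment granularity chosen here both land in the spec's granularitySpec. With rollup turned off and DAY segments, that section looks roughly like this (type uniform is the default; queryGranularity NONE stores row timestamps without truncation):

"granularitySpec": {
  "type": "uniform",
  "segmentGranularity": "DAY",
  "queryGranularity": "NONE",
  "rollup": false
}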

9. Examine the final spec. Click Next until you get to the Edit JSON spec step, accepting the defaults in the Tune and Publish panes. Note that the Publish step is where you specify a name for the datasource. This name identifies the datasource in the Datasources list, among other places, so a descriptive name is helpful.

You have constructed an ingestion spec. You can edit it if needed prior to submitting it. Click Submit to submit the ingestion task.

quickstart 7
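The spec shown in this step assembles the fragments from the earlier steps into a single task definition. Stripped down to a skeleton, a native batch spec has roughly this shape; the elided sections are the ones sketched above, and the dataSource value is the name chosen in the Publish pane. The spec generated by the loader will contain additional detail, such as the dimensionsSpec and tuning parameters.

{
  "type": "index_parallel",
  "spec": {
    "dataSchema": {
      "dataSource": "wikipedia",
      "parser": { ... },
      "metricsSpec": [],
      "granularitySpec": { ... }
    },
    "ioConfig": { ... },
    "tuningConfig": { "type": "index_parallel" }
  }
}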

10. Wait for the data to finish loading. You will be taken to the task screen, with your newly submitted task selected. Once the loader has indicated that the data has been indexed, you can move on to the next section to define a data cube and begin visualizing the data.

This section showed you how to load data from files, but Druid also supports streaming ingestion. Druid's streaming ingestion can load data with virtually no delay between events occurring and being available for queries. For more information, see Loading data.

quickstart 9

Create a data cube

Go back to Pivot and make sure that your newly ingested datasource appears in the list (it might take a few seconds for it to show up).

quickstart 10

Switch to the Visualize section of Pivot by clicking on the Visuals button on the top bar. From here, you can create data cubes to model your data, explore these cubes, and organize views into dashboards. Start by clicking + Create new data cube.

quickstart 11

In the dialog that comes up, make sure that wikipedia is the selected Source and that Auto-fill dimensions and measures is selected. Continue by clicking Next: Create data cube.

From here you can configure the various aspects of your data cube, including defining and customizing the cube's dimensions and measures. The data cube creation flow can intelligently inspect the columns in your data source and determine possible dimensions and measures automatically. We enabled this when we selected Auto-fill dimensions and measures on the previous screen and you can see that the cube's settings have been largely pre-populated. In our case, the suggestions are appropriate so we can continue by clicking on the Save button in the top-right corner.

Pivot's data cubes are highly configurable and give you the flexibility to represent your dataset, as well as derived and custom columns, in many different ways. The documentation on dimensions and measures is a good starting point for learning how to configure a data cube.

Visualize a data cube

After clicking Save, the data cube view for this new data cube is automatically loaded. In the future, this view can also be loaded by clicking on the name of the data cube (in this example, 'Wikipedia') from the Visualize screen.

quickstart 12

Here, you can explore a dataset by filtering and splitting it across any dimension. For each filtered split of your data, you will see the aggregate value of your selected measures. For example, on the wikipedia dataset, you can see the most frequently edited pages by splitting on Page. Drag Page to the Show bar, and keep the default sort, by Number of Events. You should see a screen like the following:

quickstart 13
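Pivot issues Druid queries on your behalf, but it can be useful to see roughly what such a query looks like. A split on Page sorted by Number of Events corresponds approximately to a native topN query like the sketch below, which you could POST to the Broker's /druid/v2 endpoint; the aggregator name Edits and the threshold of 10 are arbitrary illustrative choices:

{
  "queryType": "topN",
  "dataSource": "wikipedia",
  "intervals": ["2016-06-27/2016-06-28"],
  "granularity": "all",
  "dimension": "page",
  "metric": "Edits",
  "threshold": 10,
  "aggregations": [
    { "type": "count", "name": "Edits" }
  ]
}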

The data cube view suggests different visualizations based on how you split your data. If you split on a string column, your data is initially presented as a table. If you split on time, the data cube view recommends a time series plot, and if you split on a numeric column you will get a bar chart. Try replacing the Page dimension with Time in the Show bar. Your visualization switches to a time series chart, like the following:

quickstart 14
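Similarly, a split on Time corresponds roughly to a native timeseries query, bucketed by whatever granularity the view chooses; hour is used here purely for illustration:

{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "intervals": ["2016-06-27/2016-06-28"],
  "granularity": "hour",
  "aggregations": [
    { "type": "count", "name": "Edits" }
  ]
}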

You can also change the visualization manually by choosing your preferred visualization from the dropdown. If the shown dimensions are not appropriate for a particular visualization, the data cube view will recommend alternative dimensions.

For more information on visualizing data, refer to the Data cubes section.

Run SQL

Imply includes an easy-to-use interface for issuing Druid SQL queries. To access the SQL editor, go to the Run SQL section. If you are in the visualization view, you can navigate to this screen by selecting SQL from the hamburger menu in the top-left corner of the page. Once there, try running the following query, which returns the five most-edited Wikipedia pages:

SELECT page, COUNT(*) AS Edits
FROM wikipedia
WHERE "__time" BETWEEN TIMESTAMP '2016-06-27 00:00:00' AND TIMESTAMP '2016-06-28 00:00:00'
GROUP BY page
ORDER BY Edits DESC
LIMIT 5

You should see results like the following:

quickstart 15
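The SQL editor is the most convenient way to run ad-hoc queries, but the same query can also be submitted programmatically. Druid's SQL API accepts a JSON payload POSTed to /druid/v2/sql/ on the Broker, along the lines of this sketch:

{
  "query": "SELECT page, COUNT(*) AS Edits FROM wikipedia GROUP BY page ORDER BY Edits DESC LIMIT 5"
}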

For more details on making SQL queries with Druid, see the Druid SQL documentation.

Next steps

Congratulations! You have now launched an Imply cluster, loaded a sample dataset into Druid, defined a data cube, explored some simple visualizations, and executed queries using Druid SQL.

Next, you can explore the other sections of this documentation:

Overview

Administer

Manage Data

Query Data

Visualize

Configure

Misc