This guide introduces you to Imply by taking you through the steps to deploy a new cluster, load sample data, and then query and create visualizations of your dataset.
There are several ways to get started with Imply. The easiest is to sign up for the Imply Cloud (AWS) Free Tier. Imply Cloud is a managed service that deploys and manages scalable Imply clusters directly in your AWS account. If you are unable to use Imply Cloud, you can run the Kubernetes-based Imply Manager locally for evaluation purposes.
To follow the steps for one of these methods, continue with the relevant section below: Start Imply Cloud or Start Imply Private on Kubernetes.
The configuration used in this quickstart is intended to minimize resource usage and is not meant for load testing large production data sets. For production-ready installations, see Production-ready installation instructions.
Start Imply Cloud
To follow these steps, you will need an Imply Cloud account. Sign up for a free account if you do not have one.
When you log into Imply Cloud, you will start at the Clusters view:
In this view, click the "New cluster" button in the top-right corner.
Choose a name for your cluster, and use the default values for the remainder of the settings.
Click "Create cluster" to launch a cluster in your AWS VPC. Note that clusters can take between 20–30 minutes to launch.
The cluster we are creating in this quickstart is not highly available. If planning a highly available deployment, consider as a starting point the recommended cluster topology for the Imply Free Tier in AWS, which consists of: three m5.large instances for master servers, two c5.large instances for query servers, and three i3.xlarge instances for data servers.
Start Imply Private on Kubernetes
The following steps take you through installing Imply on a single machine using Minikube and an Imply-maintained Helm chart. Minikube is a small-scale, single-node Kubernetes engine suitable for exploratory purposes. If you are new to Imply and Kubernetes, this is a good way to get started.
We'll set up a local cluster consisting of:
- Imply Manager
- An Imply cluster with one master node, one query node, and two data nodes
- A MySQL server (metadata storage for both the manager and the Imply cluster)
- A single-node ZooKeeper cluster
- A single-node MinIO server (as deep storage)
The commands assume that you are using macOS, but they can be easily adapted for Windows or Linux. The Kubernetes cluster will use 3 CPUs and 6 GB of RAM, so your target machine must have at least those resources available.
We will be using the Homebrew package manager to install Minikube and Helm. If you do not already have Homebrew, install it from https://brew.sh.
Follow these steps to install Imply locally:
If you already have a Kubernetes environment available to use, you can skip to the next step. Otherwise, set up Minikube on your machine:
Using Homebrew, run the following command:
brew install minikube hyperkit
For complete installation instructions, including instructions for other operating systems, see Minikube installation instructions in the Kubernetes documentation.
Start a local Kubernetes cluster with the required resources:
minikube start --cpus 3 --memory 6144 --disk-size 16g
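Once Minikube finishes starting, you can optionally confirm that the single-node cluster is up and ready:
kubectl get nodes
The node should report a Ready status before you continue.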
Set up Helm:
- If you already have Helm installed, ensure that it is v3 or later. Otherwise, install Helm with the following command:
brew install helm
- Add the Imply repository to Helm by running:
helm repo add imply https://static.imply.io/onprem/helm
helm repo update
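To confirm that the repository was added, you can list the charts it provides; you should see the imply/imply chart used below:
helm search repo imply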
By default, a new installation includes a 30-day trial license. If you have a longer-term license that you want to apply, follow these steps:
- Create a file named IMPLY_MANAGER_LICENSE_KEY and paste your license key as the content of the file.
- Create a Kubernetes secret named imply-secrets by running:
kubectl create secret generic imply-secrets --from-file=IMPLY_MANAGER_LICENSE_KEY
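You can confirm that the secret exists before proceeding:
kubectl get secret imply-secrets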
Now deploy the Imply chart to install Imply:
helm install imply imply/imply
The chart will take a minute or two to deploy, after which you will be presented with information on how to access your cluster.
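While the chart deploys, you can watch the pods come up; all pods should eventually reach the Running state:
kubectl --namespace default get pods --watch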
Your cluster is now running! As noted in the Helm output, set up port forwarding to access the following services:
Imply Manager
kubectl --namespace default port-forward svc/imply-manager-int 9097
Pivot and Druid console
kubectl --namespace default port-forward svc/imply-query 8888 9095
You can now access the Imply UIs from your web browser at the following addresses:
- Imply Manager: http://localhost:9097
- Druid console: http://localhost:8888
- Pivot: http://localhost:9095
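If a page fails to load, you can optionally confirm that the forwarded query service is responding. Druid processes expose a /status endpoint, so (assuming the port-forward commands above are still running) the following should return a small JSON status document:
curl http://localhost:8888/status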
If you're new to Imply, continue on to load some sample data.
If you want to learn more about the parameters available in the Helm chart, see Adapting the Imply Manager Helm Chart.
Load data file
In this tutorial we will fetch and load a file of Wikipedia edits from June 27, 2016, hosted on a public web server. To complete this tutorial, your cluster must be able to reach static.imply.io.
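If you want to confirm that the file is reachable before you begin, you can optionally request its headers with curl from a machine whose network access is comparable to your cluster's:
curl -I https://static.imply.io/data/wikipedia.json.gz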
1. Open Pivot. To access Pivot:
- In Imply Cloud: Click the Open button from the cluster list or cluster overview page.
- In a self-hosted Imply Manager: After setting up kubectl port forwarding, go to http://localhost:9095. You should see a page similar to the following. If you get a connection refused error, your Imply cluster may not yet be online. Try waiting a few seconds and refreshing the page.
2. Open the Druid Console. Click on Load data to open the data loader in the Druid console.
This data loader allows you to ingest from a number of static and streaming sources:
3. Start the data loader. Select the HTTP(s) option, since we'll be loading data from an online location, and click Connect data to start the flow.
4. Sample the data. Enter https://static.imply.io/data/wikipedia.json.gz in the URIs input and click Apply to preview the data. Once you see the screen below, you are ready to move to the next step by clicking Next: Parse data.
5. Configure the parser. The data loader automatically detects the parser type for the data and presents a preview of the parsed output. In this case, it should have suggested the json parser, which is appropriate for this dataset. Proceed to the next step by clicking Next: Parse time.
6. Configure the time column parsing. Druid uses a timestamp column to partition your data. This page lets you identify which column should be used as the primary time column and how the timestamp is formatted. In this case, the loader should have automatically detected the timestamp column and selected an appropriate format for it. Click Next: Transform to continue.
7. Skip the transform and filter steps. Transforms let you modify columns at ingestion time and create new derived columns. Filters allow you to exclude unwanted rows from the ingested data. These are not required for the quickstart, so click Next: Filter, and then Next: Configure schema.
8. Configure the schema. This step presents a preview of how the data will look in Druid after ingestion. Druid can index data using an ingestion-time, first-level aggregation known as "roll-up". Roll-up causes similar events to be aggregated during indexing, which results in reduced disk usage and faster queries for certain types of data. The roll-up tutorial provides an introduction to how roll-up works, and a rough SQL analogy appears after these steps. For this quickstart, click the Rollup toggle to turn roll-up off and click Next: Partition.
9. Configure the partition. This section determines how the data will be partitioned. Data in Imply is always primarily partitioned by time, and here you can choose the granularity of the time intervals. Choose DAY as the Segment granularity for our data and click Next: Tune.
10. Skip the tune and publish steps. The next sections of the data loader allow you to modify tuning and publishing parameters for the ingestion job. The defaults here are appropriate, so click Next: Publish, and then Next: Edit spec. For subsequent jobs, note that the Publish section is where you specify the name of the datasource which is used when managing or querying your data.
11. Examine the final spec. The last page of the data loader provides an overview of the ingestion spec that will be submitted. Here, advanced users can make manual adjustments to the spec to configure functionality not available through the data loader. When you are ready, click Submit to begin the ingestion task.
12. Wait for the data to finish loading. You will be taken to the task screen, and should see your task begin to run. Once the task status changes to SUCCESS, you can move on to the next section to define a data cube and begin visualizing the data.
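As a rough illustration of the roll-up concept from step 8: with roll-up enabled at, say, HOUR granularity, ingestion behaves conceptually like a GROUP BY over the truncated timestamp and the dimension columns, storing one row per unique combination along with a count. The SQL below is only an analogy (raw_events is a hypothetical raw input table), not something Druid executes at ingestion time:
SELECT TIME_FLOOR("__time", 'PT1H') AS "__time", channel, page, COUNT(*) AS "count"
FROM raw_events
GROUP BY 1, 2, 3
Once the task reports SUCCESS, you can also optionally verify the load with a Druid SQL query against the router's SQL endpoint. This assumes the kubectl port-forward to port 8888 from the installation steps is still active; in Imply Cloud, run the same query from the Druid console instead:
curl -X POST -H 'Content-Type: application/json' -d '{"query":"SELECT COUNT(*) FROM wikipedia"}' http://localhost:8888/druid/v2/sql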
Create a data cube
Go back to Pivot and make sure that your newly ingested datasource appears in the list (it might take a few seconds for it to show up).
Switch to the Visualize section of Pivot by clicking on the Visuals button on the top bar. From here, you can create data cubes to model your data, explore these cubes, and organize views into dashboards. Start by clicking + Create new data cube.
In the dialog that comes up, make sure that wikipedia is the selected Source and that Auto-fill dimensions and measures is selected. Continue by clicking Next: Create data cube.
From here you can configure the various aspects of your data cube, including defining and customizing the cube's dimensions and measures. The data cube creation flow can intelligently inspect the columns in your data source and determine possible dimensions and measures automatically. We enabled this when we selected Auto-fill dimensions and measures on the previous screen and you can see that the cube's settings have been largely pre-populated. In our case, the suggestions are appropriate so we can continue by clicking on the Save button in the top-right corner.
Pivot's data cubes are highly configurable and give you the flexibility to represent your dataset, as well as derived and custom columns, in many different ways. The documentation on dimensions and measures is a good starting point for learning how to configure a data cube.
Visualize a data cube
After clicking Save, the data cube view for this new data cube is automatically loaded. In the future, this view can also be loaded by clicking on the name of the data cube (in this example, 'Wikipedia') from the Visuals screen.
Here, you can explore a dataset by filtering and splitting it across any dimension. For each filtered split of your data, you will see the aggregate value of your selected measures. For example, on the wikipedia dataset, you can see the most frequently edited pages by splitting on Page. Drag Page to the Show bar, and keep the default sort, by Number of Events. You should see a screen like the following:
The data cube view suggests different visualizations based on how you split your data. If you split on a string column, your data is initially presented as a table. If you split on time, the data cube view recommends a time series plot, and if you split on a numeric column you will get a bar chart. Try replacing the Page dimension with Time in the Show bar. Your visualization switches to a time series chart, like the following:
You can also change the visualization manually by choosing your preferred visualization from the dropdown. If the shown dimensions are not appropriate for a particular visualization, the data cube view will recommend alternative dimensions.
For more information on visualizing data, refer to the Data cubes section.
Query data with SQL
Imply includes an easy-to-use interface for issuing Druid SQL queries. To access the SQL editor, go to the SQL section. If you are in the visualization view, you can navigate to this screen by selecting SQL from the hamburger menu in the top-left corner of the page. Once there, try running the following query, which returns the most edited Wikipedia pages:
SELECT page, COUNT(*) AS Edits
FROM wikipedia
WHERE "__time" BETWEEN TIMESTAMP '2016-06-27 00:00:00' AND TIMESTAMP '2016-06-28 00:00:00'
GROUP BY page
ORDER BY Edits DESC
LIMIT 5
You should see results like the following:
For more details on making SQL queries with Druid, see the Druid SQL documentation.
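As a further exercise, you could bucket the edits by hour using TIME_FLOOR, a standard Druid SQL time function:
SELECT TIME_FLOOR("__time", 'PT1H') AS "Hour", COUNT(*) AS Edits
FROM wikipedia
GROUP BY 1
ORDER BY 1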
Congratulations! You have now deployed a simple Imply cluster, loaded a sample dataset into Imply, defined a data cube, explored some simple visualizations, and executed queries using Druid SQL.
Next, you can:
- Configure a data cube to customize dimensions and measures for your data cube.
- Create a dashboard with your favorite views and share it.
- Read more about supported query methods including visualization, SQL, and API.
Production-ready installation instructions
As previously mentioned, the configuration described in this quickstart is intended for evaluation and learning, not for production use. To learn more about production-ready installations, refer to the following guides: