Migrate to Imply
A common path to using Imply is to start with open source Apache Druid® and then move to Imply to take advantage of its analytics and operational features. This topic describes how to migrate from Apache Druid to Imply.
Best practices for migrating from Apache Druid to Imply
- Before migrating to an Imply STS release, update to the latest Apache Druid release.
- Before migrating to an Imply LTS release, update to the latest Apache Druid version available.
- Review the Imply release notes for changes in the newest version.
- Upgrade using full cluster restarts. Rolling upgrades are possible but not preferred.
- Use the same configuration parameters for the services as in your Apache Druid configuration, after reviewing the release notes for parameter changes.
- If you changed the location of the Apache Druid segment cache, copy the segment cache from the existing Apache Druid cluster before restarting services using the new Imply version. Update the segment metadata in the `druid_segments` table to reflect the new location.
- To start a cluster with a high segment count in Imply, follow these steps:
  1. Start all historical processes and wait for the lifecycle to start.
  2. Start all master services (Coordinator and Overlord) and wait for the lifecycle to start.
  3. Start all query services (Broker and Router) and wait for the lifecycle to start.
  4. Start the Middle Managers and wait for the lifecycle to start.
  5. Resume supervisors and tasks.
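The startup sequence above can be scripted. A minimal sketch, assuming each service exposes Druid's `/status/health` endpoint; the `wait_for` helper, hosts, and ports below are hypothetical placeholders:

```shell
# Hypothetical helper: block until a health check succeeds, then continue.
wait_for() {
  name="$1"; shift
  tries=0
  until "$@" > /dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge 60 ]; then echo "$name did not start" >&2; return 1; fi
    sleep 5
  done
  echo "$name is up"
}

# Example ordering (hosts and ports are placeholders; each Druid service serves /status/health):
# wait_for historicals curl -sf http://historical-host:8083/status/health
# wait_for coordinator curl -sf http://master-host:8081/status/health
# wait_for broker      curl -sf http://query-host:8082/status/health
wait_for demo true   # stub check so the sketch runs standalone; prints "demo is up"
```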
Migrate from Apache Druid to Imply Enterprise
If you are satisfied with your current Druid configuration, you can deploy Imply's distribution of Apache Druid in the same configuration with minimal changes:
1. Ensure that your current Apache Druid version is no more recent than the version of Druid included in Imply's distribution of Apache Druid.
2. Replace or augment your Druid Broker processes with Query servers. Query servers run Druid Routers, Druid Brokers, and Pivot.
3. Update all your other Druid nodes to run Imply's distribution of Apache Druid, continuing to use your existing configurations.
4. (Optional) To combine your existing Druid Coordinators and Druid Overlords into Master servers:
   1. Configure a Master server using your existing configurations.
   2. Deploy new Master servers with the following command:
      ```
      bin/supervise -c conf/supervise/master-without-zk.conf
      ```
      The new Master servers connect to your existing ZooKeeper and metadata storage. A best practice is to run two Master servers to support failover.
      If you are using unmanaged Imply, instead run:
      ```
      bin/supervise -c conf/supervise/master-without-zk.conf
      ```
   3. Stop your old Druid Coordinators and Druid Overlords.
Migrate to a new Imply Hybrid cluster
The following instructions are organized based on how the new cluster accesses data.
- The new cluster uses the same deep storage and metadata storage servers as the old cluster
- The new cluster has no data but can access the old cluster's deep storage
- The new cluster has data and can access the old cluster's deep storage
- The new cluster has no data and cannot access the old cluster's deep storage
- The new cluster has data and cannot access the old cluster's deep storage
The new cluster uses the same deep storage and metadata storage servers
In this scenario, you must reboot your Druid services:
- Stop the old cluster. To prevent data corruption, ensure the old cluster is completely down before starting your new cluster.
- Start the new cluster.
The new cluster has no data and can access the old cluster's deep storage
This scenario applies if you are using Imply Hybrid and:
- The new Imply cluster can access the old cluster's deep storage.
- The old and the new metadata storage are both MySQL.
- The new deep storage is clean with no data.
- The new metadata storage is brand new and clean with no data.
In this scenario:

1. Copy the `druid_config`, `druid_dataSource`, `druid_supervisors`, and `druid_segments` tables from the old metadata storage to the new metadata storage:
   1. Run `mysqldump` on the old metadata storage (and Pivot, if used). The output file has a `.sql` extension because it contains SQL commands. The `-p` option prompts for a password:
      ```
      mysqldump -h <host_name> -u <user_name> -p --single-transaction --skip-add-drop-table --no-create-info --no-create-db <db_name> druid_config druid_dataSource druid_supervisors druid_segments > output_file.sql
      ```
   2. Import the `mysqldump` output into the new metadata storage:
      ```
      mysql -h <host_name> -u <user_name> -p <db_name> < output_file.sql
      ```
2. Start the new cluster. The Coordinator automatically starts reading the old segments' metadata in the new metadata storage, and then historical nodes load them from the old deep storage. The data in old deep storage remains intact. The old cluster continues writing to the old metadata storage and the old deep storage. The new cluster writes to the new metadata storage and the new deep storage.
After this migration, the old and the new cluster share the same data segment files in deep storage for any data ingested before the migration. Data ingested after the migration goes to different files. Avoid running permanent data deletion tasks on datasources that share segments between the two clusters, because the clusters then delete each other's data.
If the new Druid cluster shares the same ZooKeeper quorum as the old one, use a different base znode path by setting `druid.zk.paths.base` in Druid's `common.runtime.properties` to a different name, such as `/druid-newcluster`. The default value is `/druid`. The new cluster must also use a different `druid.discovery.curator.path`.
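For example, the new cluster's `common.runtime.properties` might contain the following (the `/druid-newcluster` names follow the example above and are placeholders):

```properties
# Separate this cluster's znodes from the old cluster's (default is /druid)
druid.zk.paths.base=/druid-newcluster
# Service discovery must also use a distinct path (default is /druid/discovery)
druid.discovery.curator.path=/druid-newcluster/discovery
```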
The new cluster has data and can access the old cluster's deep storage
This scenario applies if you are using Imply Hybrid and:
- The new Imply cluster can access the old cluster's deep storage.
- The old and the new metadata storage are both MySQL.
- The new cluster's deep storage has some data in it.
- The new cluster's metadata storage has some data in it.
In this scenario:

1. Ensure there are no collisions in the paths between the old deep storage and the new deep storage. If there are collisions:
   1. Change the path of the old deep storage.
   2. Apply the same path changes when you modify the `mysqldump` file of the old metadata storage in the steps below.
2. Copy the data from the old deep storage to the new deep storage.
3. Configure the new cluster with a different deep storage path and database server address.
4. Run `mysqldump` on the old cluster's metadata storage, excluding the DDL. Do not overwrite the target metadata storage:
   ```
   mysqldump -h <host_name> -u <user_name> -p --single-transaction --skip-add-drop-table --no-create-info --no-create-db <db_name> druid_config druid_dataSource druid_supervisors druid_segments > /dir/source_output_file.sql
   ```
5. Change the location of the segments in the `druid_segments` table in the `mysqldump` file from the previous step to point to the new deep storage location:
   ```
   sed -i .bak 's/\\"bucket\\":\\"<old_bucket_name>\\"/\\"bucket\\":\\"<new_bucket_name>\\"/' /dir/source_output_file.sql
   ```
6. Copy the `druid_config`, `druid_dataSource`, `druid_supervisors`, and `druid_segments` tables from the old metadata storage to the new metadata storage by importing the modified `mysqldump` file into the new target metadata storage. The old cluster keeps writing to the old metadata storage and the old deep storage; the new cluster writes to the new metadata storage and the new deep storage:
   ```
   mysql -h <host_name> -u <user_name> -p <db_name> < /dir/source_output_file.sql
   ```
The new cluster has no data and cannot access the old cluster's deep storage
This scenario applies if you are using Imply Hybrid and:
- The old and the new clusters use different deep storage and metadata storage servers.
- The new cluster cannot access the old cluster's deep storage.
- The old and the new metadata storage are both MySQL.
- The new deep storage has no data in it.
- The new metadata storage has no data in it.
In this scenario:

1. Copy the data from the old deep storage to the new deep storage. Consider using a staging area as an intermediate location.
2. Configure the new cluster with a different deep storage path and database server address.
3. Run the `mysqldump` command on the old metadata storage:
   ```
   mysqldump -h <host_name> -u <user_name> -p --single-transaction --skip-add-drop-table --no-create-info --no-create-db <db_name> druid_config druid_dataSource druid_supervisors druid_segments > /dir/output_file.sql
   ```
4. Change the location of the segments in the `druid_segments` table in the `mysqldump` file from the previous step to point to the new deep storage location:
   ```
   sed -i .bak 's/\\"bucket\\":\\"<old_bucket_name>\\"/\\"bucket\\":\\"<new_bucket_name>\\"/' /dir/output_file.sql
   ```
5. Copy the `druid_config`, `druid_dataSource`, `druid_supervisors`, `druid_rules`, and `druid_segments` tables from the old metadata storage to the new metadata storage by importing the `mysqldump` file into the new target metadata storage:
   ```
   mysql -h <host_name> -u <user_name> -p <db_name> < /dir/output_file.sql
   ```
6. Delete the `druid_rules` table from the target MySQL database. Starting the cluster recreates this table.
7. Start the new cluster.
The Coordinator automatically starts reading the old segments' metadata in the new metadata storage, and then historical nodes load them from the new deep storage. The data in old deep storage remains. The old cluster keeps writing to the old database and old deep storage. The new cluster writes to the new metadata storage and new deep storage.
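Once the Coordinator starts reading the migrated metadata, you can watch segment loading through its `loadstatus` API. A sketch against a canned response; the coordinator host is a placeholder, and the response shape (datasource name mapped to load percentage) follows Druid's documented `/druid/coordinator/v1/loadstatus` endpoint:

```shell
# In practice: curl -s http://<coordinator_host>:8081/druid/coordinator/v1/loadstatus > /tmp/loadstatus.json
echo '{"wikipedia": 100.0, "sales": 87.5}' > /tmp/loadstatus.json

# List datasources not yet fully loaded from deep storage (uses python3 for JSON parsing).
python3 -c 'import json; d = json.load(open("/tmp/loadstatus.json")); print(sorted(k for k, v in d.items() if v < 100.0))'
```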
The new cluster has data and cannot access the old cluster's deep storage
This scenario applies if you are using Imply Hybrid and:
- The old and the new clusters use different deep storage and metadata storage servers.
- The new cluster cannot access the old cluster's deep storage.
- The old and the new metadata storage are both MySQL.
- The new deep storage has data in it.
- The new metadata storage has data in it.
In this scenario:

1. Ensure there are no collisions in the paths between the old deep storage and the new deep storage. If there are collisions, change the path of the old deep storage and apply the same path changes when you modify the `mysqldump` file of the old metadata storage in the steps below.
2. Copy the data from the old deep storage to the new deep storage. You can use a staging area as an intermediate location.
3. Configure the new cluster with a different deep storage path and metadata storage server address.
4. Run the `mysqldump` command on the old metadata storage, excluding the DDL. Do not overwrite the target metadata storage:
   ```
   mysqldump -h <host_name> -u <user_name> -p --skip-add-drop-table --no-create-info --no-create-db <db_name> druid_config druid_dataSource druid_supervisors druid_segments > /dir/source_output_file.sql
   ```
5. Change the location of the segments in the `druid_segments` table in the `mysqldump` file from the previous step to point to the new deep storage location:
   ```
   sed -i .bak 's/\\"bucket\\":\\"<old_bucket_name>\\"/\\"bucket\\":\\"<new_bucket_name>\\"/' /dir/source_output_file.sql
   ```
6. Copy the `druid_config`, `druid_dataSource`, `druid_supervisors`, and `druid_segments` tables from the old metadata storage to the new metadata storage by importing the modified `mysqldump` file into the new target metadata storage:
   ```
   mysql -h <host_name> -u <user_name> -p <db_name> < /dir/source_output_file.sql
   ```
7. Start the new cluster.
The Coordinator automatically starts reading the old segments' metadata in the new metadata storage, and then historical nodes load them from the new deep storage. The data in old deep storage remains. The old cluster keeps writing to the old metadata storage and the old deep storage. The new cluster writes to the new database and the new deep storage.
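The collision check at the start of this scenario can be done mechanically by comparing path listings from both stores. The file names and paths below are made up; real listings might come from, for example, `aws s3 ls --recursive` or `gsutil ls -r`:

```shell
# Fabricated path listings for the old and new deep storage.
printf 'wiki/seg1.zip\nwiki/seg2.zip\n' > /tmp/old_paths.txt
printf 'sales/seg1.zip\nwiki/seg2.zip\n' > /tmp/new_paths.txt

# comm -12 prints only lines present in BOTH sorted listings, i.e. colliding paths.
comm -12 <(sort /tmp/old_paths.txt) <(sort /tmp/new_paths.txt)
```

An empty result means the two stores have no overlapping paths and no renaming is needed.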