Query data by API
Project-less regional API resources have been deprecated and will be removed by the end of September 2024.
You must include the project ID in the URL for all regional API calls in projects created after September 29, 2023.
For example: https://ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io/v1/projects/PROJECT_ID
Projects created before September 29, 2023 can continue to use project-less URLs until the end of September 2024. We strongly recommend updating your regional API calls to include the project ID prior to September 2024. See the API migration guide for more information.
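For example, the following Python sketch assembles a project-scoped base URL from hypothetical environment variables. The variable names are illustrative only and are not part of the Polaris API:

import os

# Hypothetical environment variables for the URL placeholders; substitute your own values.
org = os.getenv("POLARIS_ORG", "ORGANIZATION_NAME")
region = os.getenv("POLARIS_REGION", "REGION")
cloud = os.getenv("POLARIS_CLOUD", "CLOUD_PROVIDER")
project_id = os.getenv("POLARIS_PROJECT_ID", "PROJECT_ID")

# Project-scoped base URL for regional API calls.
base_url = f"https://{org}.{region}.{cloud}.api.imply.io/v1/projects/{project_id}"
print(base_url)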
After you have created one or more tables and ingested data, you can use the Query API to run queries against your data. For information on how to write SQL queries in Imply Polaris, see Querying data.
This topic shows how to query a table in Polaris using Druid SQL.
Prerequisites
This topic assumes you have an API key with the AccessQueries permission. In the examples below, the key value is stored in the variable named POLARIS_API_KEY.
For information about how to obtain an API key and assign permissions, see API key authentication. For more information on permissions, see Permissions reference.
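If you keep the key in an environment variable as the examples do, a quick check like the following sketch can confirm the variable is set before you send any requests:

import os

# Fail fast if the API key environment variable is missing.
apikey = os.getenv("POLARIS_API_KEY")
if not apikey:
    raise RuntimeError("Set the POLARIS_API_KEY environment variable before running the examples.")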
Query using Druid SQL
To submit a Druid SQL query via the API, send a POST request along with a JSON body containing your query. Change the SQL statement in the query field to modify the query.
Sample request
The following example shows how to submit a query using the API:
- cURL
- Python
curl --location --request POST "https://ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io/v1/projects/PROJECT_ID/query/sql" \
--user ${POLARIS_API_KEY}: \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data-raw '{
"query": "SELECT continent, COUNT(*) AS counts FROM \"Koalas to the Max\" GROUP BY 1 ORDER BY counts DESC"
}'
import os
import requests
import json

url = "https://ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io/v1/projects/PROJECT_ID/query/sql"
apikey = os.getenv("POLARIS_API_KEY")

# Druid SQL query to run against the table.
payload = json.dumps({
    "query": "SELECT continent, COUNT(*) AS counts FROM \"Koalas to the Max\" GROUP BY 1 ORDER BY counts DESC"
})

headers = {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
    'Authorization': f'Basic {apikey}'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
Sample response
The following example shows a successful response containing the query result:
[
{
"continent": "North America",
"counts": 241671
},
{
"continent": "Europe",
"counts": 169433
},
{
"continent": "Oceania",
"counts": 35905
},
{
"continent": "Asia",
"counts": 29902
},
{
"continent": "South America",
"counts": 25336
},
{
"continent": "Africa",
"counts": 2263
},
{
"continent": "",
"counts": 922
}
]
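As shown in the sample response, the default result is a JSON array of row objects, so you can parse it directly. A minimal sketch, reusing the response object from the Python example above:

# Parse the JSON array of row objects and print one line per row.
rows = response.json()
for row in rows:
    print(f"{row['continent'] or '(unknown)'}: {row['counts']}")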
Obtain results in CSV format
Specify resultFormat in the JSON object to set the format of the query results. You can return results as JSON objects, as JSON arrays, or in CSV format.
For additional fields you can define for Query API request payloads, see Query API.
Sample request
The following example shows how to submit a query and return CSV-formatted results:
- cURL
- Python
curl --location --request POST "https://ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io/v1/projects/PROJECT_ID/query/sql" \
--user ${POLARIS_API_KEY}: \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data-raw '{
"query": "SELECT continent, COUNT(*) AS counts FROM \"Koalas to the Max\" GROUP BY 1 ORDER BY counts DESC",
"resultFormat": "csv"
}'
import os
import requests
import json

url = "https://ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io/v1/projects/PROJECT_ID/query/sql"
apikey = os.getenv("POLARIS_API_KEY")

# Request CSV-formatted results by setting resultFormat.
payload = json.dumps({
    "query": "SELECT continent, COUNT(*) AS counts FROM \"Koalas to the Max\" GROUP BY 1 ORDER BY counts DESC",
    "resultFormat": "csv"
})

headers = {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
    'Authorization': f'Basic {apikey}'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
Sample response
The following example shows a successful response containing the query result in CSV format:
North America,241671
Europe,169433
Oceania,35905
Asia,29902
South America,25336
Africa,2263
,922
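You can parse CSV results with Python's built-in csv module. A minimal sketch, reusing the response object from the Python example above; note that the results for this query contain no header row:

import csv
import io

# Each row contains the columns selected by the query, in order.
reader = csv.reader(io.StringIO(response.text))
for row in reader:
    if not row:
        continue  # skip any blank lines
    continent, counts = row
    print(continent or "(unknown)", counts)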
Download results to a file
To save the query results to a file, use the curl -o, --output option or the file handling capabilities of your application's programming language.
Sample request
The following example shows how to submit a query and save the results to a file named output.txt:
- cURL
- Python
curl --location --request POST "https://ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io/v1/projects/PROJECT_ID/query/sql" \
--user ${POLARIS_API_KEY}: \
--header "Accept: application/json" \
--header "Content-Type: application/json" \
--data-raw '{
"query": "SELECT continent, COUNT(*) AS counts FROM \"Koalas to the Max\" GROUP BY 1 ORDER BY counts DESC"
}' \
--output output.txt
import os
import requests
import json

url = "https://ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io/v1/projects/PROJECT_ID/query/sql"
apikey = os.getenv("POLARIS_API_KEY")

payload = json.dumps({
    "query": "SELECT continent, COUNT(*) AS counts FROM \"Koalas to the Max\" GROUP BY 1 ORDER BY counts DESC"
})

headers = {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
    'Authorization': f'Basic {apikey}'
}

response = requests.request("POST", url, headers=headers, data=payload)

# Write the query results to a local file.
with open("output.txt", "w") as f:
    f.write(response.text)
Sample response
The file output.txt contains the query results:
[
{
"continent": "North America",
"counts": 241671
},
{
"continent": "Europe",
"counts": 169433
},
{
"continent": "Oceania",
"counts": 35905
},
{
"continent": "Asia",
"counts": 29902
},
{
"continent": "South America",
"counts": 25336
},
{
"continent": "Africa",
"counts": 2263
},
{
"continent": "",
"counts": 922
}
]
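For large result sets, you may prefer to stream the response body to disk instead of reading it into memory first. The following sketch uses the requests library's streaming mode with the same placeholder URL and API key variable as the examples above; the CSV result format and output file name are illustrative:

import os
import requests
import json

url = "https://ORGANIZATION_NAME.REGION.CLOUD_PROVIDER.api.imply.io/v1/projects/PROJECT_ID/query/sql"
apikey = os.getenv("POLARIS_API_KEY")

payload = json.dumps({
    "query": "SELECT continent, COUNT(*) AS counts FROM \"Koalas to the Max\" GROUP BY 1 ORDER BY counts DESC",
    "resultFormat": "csv"
})

headers = {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
    'Authorization': f'Basic {apikey}'
}

# Stream the response and write it to disk in chunks.
with requests.post(url, headers=headers, data=payload, stream=True) as response:
    response.raise_for_status()
    with open("output.csv", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)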
Learn more
See the following topics for more information:
- Query API for details on the Polaris Query API.
- Druid SQL documentation for reference on Druid SQL queries.
- Query using JDBC for querying data in Polaris using JDBC.
- Time series functions for reference on time series functions.