Time series functions

beta

Time series functions are an Imply Polaris beta feature. You should only enable a beta feature after testing in a staging environment. For more information, see Preview features.

The bucketMillis parameter is being removed

The bucketMillis parameter has been deprecated and will be removed at the end of March 2024. Instead of the bucketMillis parameter, use the timeProperties object. It provides more detailed information about the time properties, including bucket period, timezone, and origin.

Time series functions operate on data recorded over a period of time. You use time series functions to analyze time series data, identify trends and seasonality, interpolate values, and load extra time periods to fill in boundary values.

You can include time series functions in queries in the SQL workbench or in the query property of a request to the Query API.

Time series aggregation

The following table lists the time series aggregation functions that you can use to generate a binary representation of a time series. You can use any time series aggregation function that operates only on raw data, rather than on existing time series objects, to ingest time series data.

DOWNSAMPLED_SUM_TIMESERIES(timeColumn, dataColumn, window, timeProperties [, maxEntries])
Creates a time series by downsampling recordings based on the bucket period specified in timeProperties. Data points are downsampled by addition. For each bucket, the timestamp corresponds to the start time of the bucket and the data point represents the summed values.
  • timeColumn: Name of the column that contains the timestamps corresponding to the time series being created. timeColumn can be a numeric or a timestamp column.
  • dataColumn: Name of the column that contains the data points corresponding to the time series being created.
  • window: ISO 8601 interval that restricts the time range over which the time series is created.
  • timeProperties: List of time properties for the aggregation, separated by semicolons. The first property is an ISO 8601 period that divides the recordings into time buckets. The second property, timeZone, is the timezone of the timestamps in the time series. The third property, origin, is the starting time of the bucketing. Both timeZone and origin are optional. The following example shows all three properties: P1Y;IST;1999-12-31T01:00:00.000.
  • maxEntries: Maximum number of entries allowed in the time series at any point in time. maxEntries defaults to the number of buckets possible given the window parameter and the bucket period.

DOWNSAMPLED_SUM_TIMESERIES(timeseriesColumn, window, timeProperties [, maxEntries])
Creates a time series by downsampling a column of time series objects based on the bucket period specified in timeProperties. Data points are downsampled by addition. For each time bucket, the timestamp corresponds to the start time of the bucket and the data point represents the summed values.
  • timeseriesColumn: Name of the column that contains the time series objects.
  • window: ISO 8601 interval that restricts the time range over which the time series is created.
  • timeProperties: List of time properties for the aggregation, separated by semicolons. The first property is an ISO 8601 period that divides the recordings into time buckets. The second property, timeZone, is the timezone of the timestamps in the time series. The third property, origin, is the starting time of the bucketing. Both timeZone and origin are optional. The following example shows all three properties: P1Y;IST;1999-12-31T01:00:00.000.
  • maxEntries: Maximum number of entries allowed in the time series at any point in time. maxEntries defaults to the number of buckets possible given the window parameter and the bucket period.

INGEST_TIMESERIES(TIME_PARSE(timeColumn), dataColumn [, maxEntries])
Creates a time series measure from raw data at ingestion time. Similar to TIMESERIES; use the one that best fits your use case.
  • timeColumn: Name of the column that contains the timestamps corresponding to the time series being created. timeColumn can be a numeric or a timestamp column.
  • dataColumn: Name of the column that contains the data points corresponding to the time series being created.
  • maxEntries: Maximum number of entries allowed in the time series at any point in time. maxEntries defaults to 300,000.

LATEST_TIMESERIES(timeColumn, dataColumn, versionColumn, window, timeBucket [, maxEntries])
Creates a time series with bucketed timestamps and the latest data point for each bucket, using the ISO 8601 period specified in timeBucket. For each bucket, Druid selects the data point from dataColumn with the highest value in versionColumn.
  • timeColumn: Name of the column that contains the timestamps corresponding to the time series being created. timeColumn can be a numeric or a timestamp column.
  • dataColumn: Name of the column that contains the data points corresponding to the time series being created.
  • versionColumn: Determines which data point to keep when multiple points fall into the same bucket. If two or more points occur in the same time bucket, Druid uses the value in versionColumn to identify the latest version of the data point.
  • window: ISO 8601 interval that restricts the time range over which the time series is created.
  • timeBucket: ISO 8601 period that specifies how to separate timestamps into buckets. The bucket period should be smaller than the window interval.
  • maxEntries: Maximum number of entries allowed in the time series at any point in time. maxEntries defaults to 7200.

LATEST_TIMESERIES_TO_TIMESERIES(latestTimeseries)
Takes a time series generated by LATEST_TIMESERIES and converts it into a standard time series format for further processing.

TIMESERIES(timeColumn, dataColumn, window [, maxEntries])
Creates a time series from raw data.
  • timeColumn: Name of the column that contains the timestamps corresponding to the time series being created. timeColumn can be a numeric or a timestamp column.
  • dataColumn: Name of the column that contains the data points corresponding to the time series being created.
  • window: ISO 8601 interval that restricts the time range over which the time series is created.
  • maxEntries: Maximum number of entries allowed in the time series at any point in time. maxEntries defaults to 7200.

TIMESERIES(timeseriesColumn, window [, maxEntries])
Creates a time series by merging a column of time series objects.
  • timeseriesColumn: Name of the column that contains the time series objects.
  • window: ISO 8601 interval that restricts the time range over which the time series is created.
  • maxEntries: Maximum number of entries allowed in the time series at any point in time. maxEntries defaults to 7200.

SUM_TIMESERIES(timeseriesColumn, window [, maxEntries])
Creates a time series from a column of time series objects by adding the data points that share the same timestamp. The time series objects must have identical timestamps and window periods.
  • timeseriesColumn: Name of the column that contains the time series objects.
  • window: ISO 8601 interval that restricts the time range over which the time series is created.
  • maxEntries: Maximum number of entries allowed in the time series at any point in time. maxEntries defaults to 260,000.

To convert the binary output to JSON format, wrap the outermost function with the TIMESERIES_TO_JSON function. The output is a time series object with the following properties:

  • window: ISO 8601 time interval for which the time series is created.
  • timestamps: List of sorted timestamps.
  • dataPoints: List of data points associated with the timestamps.
  • timeProperties: Time properties object that contains ISO period, timeZone, and origin parameters.
  • bucketMillis: Deprecated. Time interval in milliseconds that determines how to segment the data.
  • bounds: If the overall query time interval is greater than the aggregator window, the bounds object contains the closest data points to the window on each end.

Examples

DOWNSAMPLED_SUM_TIMESERIES

The following query creates a time series by downsampling recordings into one-minute (PT1M) buckets over the window 2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z. The resulting output is in JSON format.

SELECT TIMESERIES_TO_JSON(DOWNSAMPLED_SUM_TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 'PT1M')) as downsampled_timeseries
FROM "Koalas to the Max"
Results:
{
"downsampled_timeseries":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172800000,1566172860000,1566172920000,1566172980000,1566173040000,1566173100000,1566173160000,1566173220000,1566173280000,1566173340000,1566173400000,1566173460000,1566173520000,1566173580000,1566173640000,1566173700000,1566173760000,1566173820000,
...
1566258000000,1566258060000,1566258120000,1566258180000,1566258240000,1566258300000,1566258360000,1566258420000,1566258480000,1566258540000,1566258600000,1566258660000,1566258720000,1566258780000,1566258840000,1566258900000,1566258960000,1566259020000,1566259080000,1566259140000
],
"dataPoints":[
4342102.0,6799755.0,1.5951734E7,8491739.0,9520073.0,8127726.0,6219452.0,4783010.0,7.5452827E7,9363897.0,6134945.0,6603625.0,6723617.0,8909703.0,6024551.0,4110716.0,4771525.0,2430190.0,1999305.0,1.327893E7,1.3758501E7,9693550.0,8530676.0,1.15453733E8,1.67724155E8,1.3165745E8,
...
2.6223013E7,3.1531803E7,6.7241171E7,2.0335366E7,4.0052009E7,1.9367057E7,2.126301E7,2.4467079E7,2.4874429E7,5.0194879E7,5.654094E7,2.8103319E7,6.3475976E7,4.6137679E7,9.7408696E7,5.0078246E7,7.585225E7,3.1495826E7,2.3159649E7,2.7507422E7,2.5954539E7,3.6406193E7,2.4787654E7
],
"timeProperties":{
"period":"PT1M",
"origin":"1970-01-01T00:00:00.000Z",
"timeZone":"UTC"
},
"bucketMillis":60000,
"bounds":{
"start":{
"data":null,
"timestamp":null
},
"end":{
"data":596.0,
"timestamp":1566259199877
}
}
}
}

INGEST_TIMESERIES

info

You can also use any of the time series aggregation functions that don't require a timeseries input for ingestion. Use the one that best fits your use case.

When you ingest batch or streaming data, you can use time series aggregation functions to create a measure for time series data. For example, you can use the INGEST_TIMESERIES function:

INGEST_TIMESERIES(TIME_PARSE(timeColumn), dataColumn [, maxEntries])

For a dataset with entries that resemble the following:

{"id": "id7120664", "vendorId": 2, "pickupDate": "2024-09-25T09:40:52.565995", "dropoffDate": "2024-09-25T10:49:52.566003", "passengerCount": 2, "pickupLongitude": "-73.87303925", "pickupLatitude": "40.77410507"}

To create a column named timeseries_col that contains a time series with dropoffDate as the timeColumn and passengerCount as the dataColumn, use the following for ingestion:

{
  "type": "batch",
  ...
  "mappings": [
    {
      "columnName": "__time",
      "expression": "TIME_PARSE(\"dropoffDate\")"
    },
    ...
    {
      "columnName": "timeseries_col",
      "expression": "INGEST_TIMESERIES(TIME_PARSE(\"dropoffDate\"), \"passengerCount\", 1000)"
    },
    ...
  ]
}
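
After ingestion, you can query the timeseries_col measure with the merging form of TIMESERIES described above. The following is a minimal sketch; the table name example_trips and the window are illustrative placeholders rather than values from this dataset:

SELECT TIMESERIES_TO_JSON(TIMESERIES("timeseries_col", '2024-09-25T00:00:00.000Z/2024-09-26T00:00:00.000Z')) as merged_timeseries
FROM "example_trips"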

LATEST_TIMESERIES_TO_TIMESERIES

The following query creates a time series with bucketed timestamps and the latest data point for each bucket. It then converts the result into a standard time series object. The resulting output is in JSON format.

SELECT TIMESERIES_TO_JSON(LATEST_TIMESERIES_TO_TIMESERIES(LATEST_TIMESERIES("__time", "session_length", "number", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 'PT1S', 100000))) 
AS "latest_timeseries"
FROM "Koalas to the Max"
Results:
{ 
"latest_timeseries":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172800000,1566172801000,1566172802000,1566172803000,1566172804000,1566172806000,1566172807000,1566172808000,1566172809000,1566172810000,1566172811000,1566172812000,1566172813000,1566172814000,1566172816000,1566172819000,1566172820000,1566172822000,1566172824000,1566172825000,1566172826000,1566172828000,1566172829000,1566172830000,1566172831000,1566172833000,1566172835000,1566172836000,1566172837000,1566172839000,1566172841000,1566172842000,
...
1566259172000,1566259173000,1566259174000,1566259175000,1566259176000,1566259177000,1566259178000,1566259179000,1566259180000,1566259181000,1566259182000,1566259183000,1566259184000,1566259185000,1566259186000,1566259187000,1566259188000,1566259189000,1566259190000,1566259191000,1566259192000,1566259193000,1566259194000,1566259195000,1566259196000,1566259197000,1566259198000,1566259199000
],
"dataPoints":[
104299.0,19160.0,20745.0,21937.0,22257.0,24265.0,310.0,54115.0,610.0,25109.0,53974.0,54457.0,28420.0,56605.0,58492.0,65947.0,63330.0,114584.0,174546.0,40607.0,68808.0,74171.0,75215.0,72494.0,77837.0,23252.0,4978.0,128980.0,52693.0,53978.0,133557.0,57156.0,86310.0,59413.0,765.0,61225.0,62879.0,95316.0,43290.0,8624.0,67033.0,145262.0,9658.0,48375.0,49576.0,50778.0,
...
132623.0,347675.0,149694.0,148012.0,604621.0,97816.0,147565.0,155028.0,41645.0,20054.0,51099.0,44627.0,156399.0,47021.0,155118.0,634826.0,149187.0,95755.0,291126.0,16934.0,36694.0,19266.0,33330.0,21696.0,629267.0,299101.0,59587.0,160392.0,61953.0,40521.0,201279.0,39283.0,30235.0,102234.0,182021.0,1464480.0,656406.0,51069.0,22928.0,37221.0,211217.0,25256.0,40199.0,75910.0,52349.0,138193.0,44461.0,79445.0
],
"timeProperties":{
"period":"PT1S",
"origin":null,
"timeZone":"UTC"
},
"bucketMillis":1000,
"bounds":{
"start":{
"data":null,
"timestamp":null
},
"end":{
"data":null,
"timestamp":null
}
}
}
}
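
SUM_TIMESERIES

SUM_TIMESERIES operates on a column of time series objects rather than on raw recordings, so no result from the Koalas to the Max table is shown. The following is a minimal sketch that assumes a hypothetical table named session_rollup with a time series column timeseries_col, such as one created with INGEST_TIMESERIES, whose objects share identical timestamps and windows:

SELECT TIMESERIES_TO_JSON(SUM_TIMESERIES("timeseries_col", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z')) as summed_timeseries
FROM "session_rollup"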

TIMESERIES

The following query creates a time series using the raw data values stored in the table named Koalas to the Max. The resulting output is in JSON format.

SELECT count(*) as total_count,
TIMESERIES_TO_JSON(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)) as timeseries_json
FROM "Koalas to the Max"
Results:
{
"total_count":505431,
"timeseries_json":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172800528,1566172800673,1566172801161,1566172802190,1566172802746,1566172802898,1566172803363,1566172803950,1566172804257,1566172804277,1566172806439,1566172806973,1566172807599,1566172808060,1566172808639,1566172808743,1566172809378,1566172810073,1566172810258,1566172810583,1566172810620,1566172810628,1566172811236,1566172811608,
...
1566259194247,1566259194530,1566259194578,1566259194770,1566259195158,1566259195245,1566259195692,1566259195993,1566259196571,1566259196616,1566259196692,1566259197087,1566259197604,1566259197757,1566259197932,1566259198509,1566259198510,1566259198822,1566259198956,1566259199198,1566259199218
],
"dataPoints":[
104299.0,426.0,19160.0,44628.0,20745.0,12621.0,30449.0,21937.0,31354.0,22257.0,24265.0,34045.0,310.0,54115.0,1743.0,1752.0,610.0,25109.0,19976.0,209.0,3149.0,3158.0,4357.0,53974.0,54457.0,1686.0,1714.0,28420.0,56605.0,58492.0,65947.0,297716.0,63330.0,114037.0,114584.0,174546.0,40607.0,41207.0,68808.0,74171.0,120331.0,75215.0,
...
22640.0,392121.0,4156.0,4164.0,108702.0,150.0,80517.0,95.0,211217.0,107111.0,24639.0,37388.0,53994.0,1186.0,163499.0,1192.0,2930.0,2937.0,5937.0,25256.0,40199.0,70814.0,2833.0,26741.0,30111.0,75477.0,75910.0,27301.0,18108.0,265741.0,52349.0,58323.0,138193.0,70892.0,32893.0,6763.0,44461.0,30794.0,8311.0,40725.0,79445.0
],
"timeProperties":null,
"bucketMillis":null,
"bounds":{
"start":{
"data":null,
"timestamp":null
},
"end":{
"data":596.0,
"timestamp":1566259199877
}
}
}
}

Time series processing functions

The following table lists the time series processing functions that you can use to transform or augment a time series. You can apply these functions to a time series repeatedly and in any suitable order.

ADD_TIMESERIES(timeseries1, timeseries2 [, shouldNullPoison])
Adds the data points in the two time series objects and returns a new time series. The two time series must have identical timestamps. shouldNullPoison sets the output to null if either of the input time series is null. shouldNullPoison defaults to false.

DELTA_TIMESERIES(timeseries [, bucketPeriod])
Returns a new time series that contains the delta differences between the data points in the input time series. You can use the DELTA_TIMESERIES function to analyze counter data (an always increasing numeric metric) and determine the rate of increase of the counter value over time. bucketPeriod is an ISO 8601 period over which the deltas are added up (or folded) within a time bucket. bucketPeriod defaults to 1 millisecond.

DIVIDE_TIMESERIES(timeseries1, timeseries2 [, shouldNullPoison])
Divides the data points in the two time series objects and returns a new time series. The two time series must have identical timestamps. shouldNullPoison sets the output to null if either of the input time series is null. shouldNullPoison defaults to false. The function returns a non-null time series if the first time series is non-null and the second time series is null. If the denominator data point in any division is 0, the new data point for the divided time series is a NaN value.

FILTER_TIMESERIES(timeseries, filterExpr)
Filters the input time series by filterExpr and returns a new time series. filterExpr is a Druid SQL expression in string format. In filterExpr, you can reference the time series data by using timestamp for time values and value for data points.

FIRST_IN_TIMESERIES(timeseries)
Returns the first data point value in the input time series.

LAST_IN_TIMESERIES(timeseries)
Returns the last data point value in the input time series.

MAP_TIMESERIES(timeseries, mapExpr)
Returns a new time series by mapping the data points of the input time series through mapExpr. mapExpr takes a Druid SQL expression as a string argument and applies it to each value of the time series. You can reference the (time, data point) tuple in a time series using the (timestamp, value) variables in mapExpr. For example, if you call MAP_TIMESERIES(timeseries, 'pow(value, 2) + timestamp'), Druid changes every data point in the time series to pow(value, 2) + timestamp, where value is the current data point value and timestamp is the millisecond time of that entry's recording.

MAX_OVER_TIMESERIES(timeseries)
Returns the largest data point value in the input time series.

MIN_OVER_TIMESERIES(timeseries)
Returns the smallest data point value in the input time series.

MULTIPLY_TIMESERIES(timeseries1, timeseries2)
Multiplies the data points in the two time series objects and returns a new time series. The two time series must have identical timestamps.

QUANTILE_OVER_TIMESERIES(timeseries, percentile)
Computes a quantile value over the data points in the input time series. percentile is a value between 0 and 1 that determines the quantile to compute.

SUBTRACT_TIMESERIES(timeseries1, timeseries2 [, shouldNullPoison])
Subtracts the data points in the two time series objects and returns a new time series. The two time series must have identical timestamps. shouldNullPoison sets the output to null if either of the input time series is null. shouldNullPoison defaults to false. The function returns a non-null time series if the first time series is non-null and the second time series is null.

SUM_OVER_TIMESERIES(timeseries)
Returns a double value equal to the sum over the data points of the input time series.

AVG_OVER_TIMESERIES(timeseries)
Returns a double value equal to the average over the data points of the input time series.

TIMESERIES_ATTACH_META(timeseries, metadataKey, metadataValue)
Merges the provided metadata property into a time series. metadataKey accepts period as an ISO period, timeZone as a string, and origin as a datetime.

TIMESERIES_CLEAR_META(timeseries [, metadataKey])
Clears the provided metadata property from a time series. metadataKey accepts period as an ISO period, timeZone as a string, and origin as a datetime. If no metadataKey is provided, all three metadata properties are cleared from the time series object.

TIMESERIES_TO_JSON(timeseries)
Outputs time series data in JSON format.

TIMESERIES_SIZE(timeseries)
Computes the size of the input time series.

Interpolation

Interpolation functions let you estimate the values of missing data points at regular intervals in the time series.

You define the regular time intervals by segmenting the time since Unix epoch (00:00:00 UTC on January 1, 1970) using the bucketPeriod parameter. The bucketPeriod parameter is an ISO 8601 period that determines the segment period in which time is chunked for calculating interpolation points.

The supported interpolation methods include:

  • Linear: Uses linear interpolation to fill the data points for missing timestamps.
  • Padding: Carries forward the last seen value in the time series to the missing timestamps.
  • Backfill: Carries backwards the last seen value in the time series to the missing timestamps.

The following table lists the interpolation functions that you can apply to the time series object. The boundary version of a function retains only the entries at time bucket boundaries, discarding the rest; the interpolation version preserves all the original entries.

LINEAR_INTERPOLATION(timeseries, bucketPeriod)
Creates a time series from the input time series object with linear interpolation for missing data points. Retains all the original entries of the input time series.

LINEAR_BOUNDARY(timeseries, bucketPeriod)
Creates a time series from the input time series object with linear interpolation for missing data points. The resulting time series contains recordings at time bucket boundaries, discarding the rest of the entries.

PADDING_INTERPOLATION(timeseries, bucketPeriod)
Creates a time series from the input time series object with padding interpolation for missing data points. Retains all the original entries of the input time series.

PADDED_BOUNDARY(timeseries, bucketPeriod)
Creates a time series from the input time series object with padding interpolation for missing data points. The resulting time series contains recordings at time bucket boundaries, discarding the rest of the entries.

BACKFILL_INTERPOLATION(timeseries, bucketPeriod)
Creates a time series from the input time series object with backfill interpolation for missing data points. Retains all the original entries of the input time series.

BACKFILL_BOUNDARY(timeseries, bucketPeriod)
Creates a time series from the input time series object with backfill interpolation for missing data points. The resulting time series contains recordings at time bucket boundaries, discarding the rest of the entries.

Time-weighted average

You can calculate time-weighted averages of data points within each regular time interval. You define the regular time intervals by segmenting time since Unix epoch (00:00:00 UTC on January 1, 1970) using the bucketPeriod parameter. Time intervals that have no data points associated with them are not included in the output.

TIME_WEIGHTED_AVERAGE(timeseries, interpolator, bucketPeriod)
Creates a time series from the given time series object with the time-weighted average for each bucket point.
  • interpolator: Interpolation strategy to fill missing edge points for time intervals. interpolator accepts the following values: linear, padding, or backfill. See Interpolation for additional details.
  • bucketPeriod: ISO 8601 period that determines the segment period in which time is chunked for calculating the time-weighted average.

Examples

This section contains examples of time series processing functions.
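
ADD_TIMESERIES

The following query is a minimal sketch that adds a time series to itself, doubling every data point. Building both arguments from the same TIMESERIES call satisfies the requirement that the two inputs have identical timestamps; in practice you would typically aggregate a second data column.

SELECT TIMESERIES_TO_JSON(ADD_TIMESERIES(
  TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000),
  TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)
)) as added_timeseries
FROM "Koalas to the Max"

BACKFILL_INTERPOLATION

Analogous to the LINEAR_INTERPOLATION and PADDING_INTERPOLATION examples below, the following sketch carries the last seen value backwards to fill missing data points at equal intervals of one hour:

SELECT count(*) as total_count,
TIMESERIES_TO_JSON(BACKFILL_INTERPOLATION(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000), 'PT1H')) as timeseries
FROM "Koalas to the Max"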

DELTA_TIMESERIES

The following query creates a time series that contains delta differences between the data points. The resulting output is in JSON format.

SELECT count(*) as total_count,
TIMESERIES_TO_JSON(DELTA_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000))) as delta
FROM "Koalas to the Max"
Results:
{
"total_count":505431,
"delta":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172800528,1566172801161,1566172802190,1566172803363,1566172804257,1566172806439,1566172806973,1566172808060,1566172808743,1566172810073,1566172810620,1566172810628,1566172811236,1566172811608,1566172812020,1566172812108,1566172813384,1566172814168,1566172816126,1566172819890,1566172820757,1566172822345,1566172822478,1566172824205,
...
1566259192032,1566259192297,1566259192312,1566259192490,1566259192673,1566259192806,1566259192963,1566259193583,1566259193674,1566259193808,1566259193949,1566259194247,1566259194530,1566259194770,1566259195158,1566259195245,1566259195692,1566259196616,1566259197087,1566259197604,1566259198510,1566259199198
],
"dataPoints":[
18734.0,25468.0,17828.0,9417.0,2008.0,9780.0,53805.0,9.0,24499.0,2940.0,9.0,1199.0,49617.0,483.0,28.0,26706.0,28185.0,1887.0,7455.0,231769.0,50707.0,547.0,59962.0,600.0,27601.0,5363.0,46160.0,122878.0,77548.0,14.0,124002.0,24028.0,22070.0,1285.0,23833.0,99065.0,25268.0,86077.0,61100.0,3091.0,59578.0,2830.0,89169.0,37125.0,7.0,3619.0,58409.0,138749.0,38717.0,
...
1445426.0,59580.0,93485.0,34391.0,612478.0,9.0,18199.0,47.0,39868.0,208198.0,22718.0,23654.0,22422.0,7380.0,6713.0,369481.0,8.0,104538.0,80367.0,211122.0,12749.0,16606.0,162313.0,1738.0,7.0,3000.0,19319.0,14943.0,30615.0,23908.0,3370.0,45366.0,433.0,247633.0,5974.0,79870.0,37698.0,32414.0,38720.0
],
"timeProperties":{
"period":"PT0.001S",
"origin":null,
"timeZone":"UTC"
},
"bucketMillis":1,
"bounds":{
"start":{
"data":null,
"timestamp":null
},
"end":{
"data":null,
"timestamp":null
}
}
}
}
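
DIVIDE_TIMESERIES

As a minimal sketch of the division semantics, the following query divides a time series by itself. Every data point with a nonzero value becomes 1.0, and any point whose denominator is 0 becomes NaN, as described in the function table above.

SELECT TIMESERIES_TO_JSON(DIVIDE_TIMESERIES(
  TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000),
  TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)
)) as ratio_timeseries
FROM "Koalas to the Max"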

FILTER_TIMESERIES

The following query filters the input time series, returning data points less than 100:

SELECT
TIMESERIES_TO_JSON(FILTER_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000), 'value < 100')) as filter_timeseries
FROM "Koalas to the Max"
Results:
{
"filter_timeseries":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172885539,1566172928927,1566173071079,1566173075315,1566173133402,1566173203777,1566173208260,1566173217798,1566173266136,1566173557037,1566173570284,1566173738653,1566173955946,1566174130776,1566174241576,
...
1566258924202,1566258930025,1566258931231,1566258935433,1566258961051,1566259001367,1566259010929,1566259011250,1566259064649,1566259141470,1566259146937,1566259159204,1566259161960,1566259179331,1566259192458
],
"dataPoints":[
70.0,98.0,56.0,95.0,53.0,88.0,57.0,70.0,38.0,89.0,82.0,98.0,48.0,86.0,82.0,48.0,45.0,44.0,65.0,98.0,74.0,94.0,46.0,98.0,53.0,56.0,96.0,88.0,73.0,97.0,98.0,36.0,89.0,15.0,51.0,18.0,93.0,77.0,68.0,97.0,66.0,29.0,43.0,92.0,45.0,97.0,50.0,38.0,
...
70.0,56.0,52.0,60.0,55.0,78.0,62.0,73.0,55.0,57.0,63.0,94.0,53.0,66.0,75.0,49.0,58.0,46.0,61.0,74.0,47.0,58.0,76.0,48.0,56.0,41.0,55.0,41.0,54.0,43.0,64.0,75.0,72.0,29.0,43.0,93.0,85.0,43.0,90.0,99.0,95.0,96.0,62.0,99.0,52.0,95.0
],
"timeProperties":null,
"bucketMillis":null,
"bounds":{
"start":{
"data":null,
"timestamp":null
},
"end":{
"data":596.0,
"timestamp":1566259199877
}
}
}
}

FIRST_IN_TIMESERIES and LAST_IN_TIMESERIES

The following query returns the first and last data point values in the time series:

SELECT
FIRST_IN_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)) as first_data_point,
LAST_IN_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)) as last_data_point
FROM "Koalas to the Max"
Results:
{
"first_data_point":15079,
"last_data_point":79445
}

LINEAR_INTERPOLATION

The following query creates a time series with linear interpolation for missing data points at equal intervals of one hour. The resulting output is in JSON format.

SELECT count(*) as total_count,
TIMESERIES_TO_JSON(LINEAR_INTERPOLATION(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000), 'PT1H')) as timeseries
FROM "Koalas to the Max"
Results:
{
"total_count":505431,
"timeseries":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172800528,1566172800673,1566172801161,1566172802190,1566172802746,1566172802898,1566172803363,1566172803950,1566172804257,1566172804277,1566172806439,1566172806973,1566172807599,1566172808060,1566172808639,1566172808743,1566172809378,1566172810073,1566172810258,1566172810583,1566172810620,1566172810628,1566172811236,1566172811608,
...
1566259194578,1566259194770,1566259195158,1566259195245,1566259195692,1566259195993,1566259196571,1566259196616,1566259196692,1566259197087,1566259197604,1566259197757,1566259197932,1566259198509,1566259198510,1566259198822,1566259198956,1566259199198,1566259199218
],
"dataPoints":[
104299.0,426.0,19160.0,44628.0,20745.0,12621.0,30449.0,21937.0,31354.0,22257.0,24265.0,34045.0,310.0,54115.0,1743.0,1752.0,610.0,25109.0,19976.0,209.0,3149.0,3158.0,4357.0,53974.0,54457.0,1686.0,1714.0,28420.0,56605.0,58492.0,65947.0,297716.0,63330.0,114037.0,114584.0,174546.0,40607.0,41207.0,68808.0,74171.0,120331.0,75215.0,
...
22640.0,392121.0,4156.0,4164.0,108702.0,150.0,80517.0,95.0,211217.0,107111.0,24639.0,37388.0,53994.0,1186.0,163499.0,1192.0,2930.0,2937.0,5937.0,25256.0,40199.0,70814.0,2833.0,26741.0,30111.0,75477.0,75910.0,27301.0,18108.0,265741.0,52349.0,58323.0,138193.0,70892.0,32893.0,6763.0,44461.0,30794.0,8311.0,40725.0,79445.0
],
"timeProperties":{
"period":"PT0.001S",
"origin":null,
"timeZone":"UTC"
},
"bucketMillis":1,
"bounds":{
"start":{
"data":null,
"timestamp":null
},
"end":{
"data":596.0,
"timestamp":1566259199877
}
}
}
}

MAP_TIMESERIES

The following query returns a new time series by mapping the sqrt (square root) function over the data points in the input time series:

SELECT count(*) as total_count,
TIMESERIES_TO_JSON(MAP_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000), 'sqrt(value)')) as sqrt_timeseries
FROM "Koalas to the Max"
Results:
{
"total_count":505431,
"sqrt_timeseries":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172800528,1566172800673,1566172801161,1566172802190,1566172802746,1566172802898,1566172803363,1566172803950,1566172804257,1566172804277,1566172806439,1566172806973,1566172807599,1566172808060,1566172808639
...
1566259195692,1566259195993,1566259196571,1566259196616,1566259196692,1566259197087,1566259197604,1566259197757,1566259197932,1566259198509,1566259198510,1566259198822,1566259198956,1566259199198,1566259199218
],
"dataPoints":[
322.95355703258633,20.639767440550294,138.4196517839862,211.25340233946528,144.03124660989366,112.34322409473569,174.49641830135081,148.11144452742334,177.0706073858674,149.18780110987626,155.77226967596,184.51287217969374,17.60681686165901,
...
165.23014252853503,134.56596895203484,515.5007274485653,228.79903845951802,241.501552790039,371.74319092620914,266.2555163747786,181.36427432104702,82.23746105030237,210.85777196963835,175.4821928287882,91.16468614545876,201.80436070610565,281.8598942737331
],
"timeProperties":null,
"bucketMillis":null,
"bounds":{
"start":{
"data":null,
"timestamp":null
},
"end":{
"data":596.0,
"timestamp":1566259199877
}
}
}
}

MAX_OVER_TIMESERIES and MIN_OVER_TIMESERIES

The following query returns the largest and the smallest data point values in the time series:

SELECT
MAX_OVER_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)) as largest_data_point,
MIN_OVER_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)) as smallest_data_point
FROM "Koalas to the Max"
Results:
{
"largest_data_point":5790908842,
"smallest_data_point":-3550727
}
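
MULTIPLY_TIMESERIES

As a minimal sketch, the following query multiplies a time series by itself, squaring every data point. The two arguments come from the same TIMESERIES call, so the identical-timestamps requirement holds.

SELECT TIMESERIES_TO_JSON(MULTIPLY_TIMESERIES(
  TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000),
  TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)
)) as squared_timeseries
FROM "Koalas to the Max"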

PADDING_INTERPOLATION

The following query creates a time series with padding interpolation for missing data points at equal intervals of one hour. The resulting output is in JSON format.

SELECT count(*) as total_count,
TIMESERIES_TO_JSON(PADDING_INTERPOLATION(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000), 'PT1H')) as timeseries
FROM "Koalas to the Max"
Results:
{
"total_count":505431,
"timeseries":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172800528,1566172800673,1566172801161,1566172802190,1566172802746,1566172802898,1566172803363,1566172803950,1566172804257,1566172804277,1566172806439,1566172806973,1566172807599,1566172808060,1566172808639,1566172808743,1566172809378,1566172810073,
...
1566259194770,1566259195158,1566259195245,1566259195692,1566259195993,1566259196571,1566259196616,1566259196692,1566259197087,1566259197604,1566259197757,1566259197932,1566259198509,1566259198510,1566259198822,1566259198956,1566259199198,1566259199218
],
"dataPoints":[
104299.0,426.0,19160.0,44628.0,20745.0,12621.0,30449.0,21937.0,31354.0,22257.0,24265.0,34045.0,310.0,54115.0,1743.0,1752.0,610.0,25109.0,19976.0,209.0,3149.0,3158.0,4357.0,53974.0,54457.0,1686.0,1714.0,28420.0,56605.0,58492.0,65947.0,297716.0,63330.0,114037.0,114584.0,174546.0,
...
95.0,211217.0,107111.0,24639.0,37388.0,53994.0,1186.0,163499.0,1192.0,2930.0,2937.0,5937.0,25256.0,40199.0,70814.0,2833.0,26741.0,30111.0,75477.0,75910.0,27301.0,18108.0,265741.0,52349.0,58323.0,138193.0,70892.0,32893.0,6763.0,44461.0,30794.0,8311.0,40725.0,79445.0
],
"timeProperties":{
"period":"PT0.001S",
"origin":null,
"timeZone":"UTC"
},
"bucketMillis":1,
"bounds":{
"start":{
"data":null,
"timestamp":null
},
"end":{
"data":596.0,
"timestamp":1566259199877
}
}
}
}

QUANTILE_OVER_TIMESERIES

The following query calculates the 50th percentile value over the data points in the time series:

SELECT count(*) as total_count,
QUANTILE_OVER_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000), 0.5) as quantile
FROM "Koalas to the Max"
Results:
{
"total_count":505431,
"quantile":32822
}

SUM_OVER_TIMESERIES and AVG_OVER_TIMESERIES

The following query returns the sum and average over all the data point values in the time series:

SELECT
SUM_OVER_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)) as sum_over_timeseries,
AVG_OVER_TIMESERIES(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)) as avg_over_timeseries
FROM "Koalas to the Max"
Results:
{
"sum_over_timeseries":115864398163,
"avg_over_timeseries":571151.6662295858
}

TIMESERIES_ATTACH_META

The following query merges the provided timeZone property into the time series. The resulting output shows the timeZone property set to Asia/Kolkata instead of the default UTC.

SELECT TIMESERIES_TO_JSON(TIMESERIES_ATTACH_META(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000), 'timeZone', 'IST')) as timeseries_meta
FROM "Koalas to the Max"
Results:
{
"timeseries_meta":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172800528,1566172800673,1566172801161,1566172802190,1566172802746,1566172802898,1566172803363,1566172803950,1566172804257,1566172804277,1566172806439,1566172806973,1566172807599,1566172808060,1566172808639,1566172808743,1566172809378,1566172810073,
...
1566259196571,1566259196616,1566259196692,1566259197087,1566259197604,1566259197757,1566259197932,1566259198509,1566259198510,1566259198822,1566259198956,1566259199198,1566259199218
],
"dataPoints":[
104299.0,426.0,19160.0,44628.0,20745.0,12621.0,30449.0,21937.0,31354.0,22257.0,24265.0,34045.0,310.0,54115.0,1743.0,1752.0,610.0,25109.0,19976.0,209.0,3149.0,3158.0,4357.0,53974.0,54457.0,1686.0,1714.0,28420.0,56605.0,58492.0,65947.0,297716.0,63330.0,
...
2937.0,5937.0,25256.0,40199.0,70814.0,2833.0,26741.0,30111.0,75477.0,75910.0,27301.0,18108.0,265741.0,52349.0,58323.0,138193.0,70892.0,32893.0,6763.0,44461.0,30794.0,8311.0,40725.0,79445.0
],
"timeProperties":{
"period":null,
"origin":null,
"timeZone":"Asia/Kolkata"
},
"bucketMillis":null,
"bounds":{
"start":{
"data":null,
"timestamp":null
},
"end":{
"data":596.0,
"timestamp":1566259199877
}
}
}
}
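
TIMESERIES_CLEAR_META

The following sketch reverses the TIMESERIES_ATTACH_META example: it attaches the timeZone property and then clears it again, so the returned time series carries no timeZone metadata.

SELECT TIMESERIES_TO_JSON(TIMESERIES_CLEAR_META(TIMESERIES_ATTACH_META(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000), 'timeZone', 'IST'), 'timeZone')) as timeseries_meta_cleared
FROM "Koalas to the Max"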

TIMESERIES_SIZE

The following query computes the size of the time series:

SELECT count(*) as total_count,
TIMESERIES_SIZE(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000)) as time_series_size
FROM "Koalas to the Max"
Results:
{
"total_count":505431,
"time_series_size":202860
}

TIME_WEIGHTED_AVERAGE

The following query creates a time series with the time-weighted average for each bucket point. The resulting output is in JSON format.

SELECT count(*) as total_count,
TIMESERIES_TO_JSON(TIME_WEIGHTED_AVERAGE(TIMESERIES("__time", "session_length", '2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z', 203000), 'backfill', 'PT1M')) as timeseries
FROM "Koalas to the Max"
Results:
{
"total_count":505431,
"timeseries":{
"window":"2019-08-19T00:00:00.234Z/2019-08-19T23:59:59.877Z",
"timestamps":[
1566172860000,1566172920000,1566172980000,1566173040000,1566173100000,1566173160000,1566173220000,1566173280000,1566173340000,1566173400000,1566173460000,1566173520000,1566173580000,1566173640000,1566173700000,1566173760000,1566173820000,1566173880000,
...
1566258420000,1566258480000,1566258540000,1566258600000,1566258660000,1566258720000,1566258780000,1566258840000,1566258900000,1566258960000,1566259020000,1566259080000,1566259140000],
"dataPoints":[
60661.208183333336,256822.28623333335,78493.2697,84797.42005,83089.56221666667,93242.89868333333,55477.63733333333,678886.6576333334,84828.26483333333,62037.23965,89680.16408333334,50320.28748333333,69950.59883333334
...
46773333334,147223.4491,174239.39256666668,83126.1091,153152.88816666667,63640.22588333333,288866.9824166667,179143.40398333332,260116.72173333334,131835.1912,62802.97078333333,82252.3607,94091.53451666667,118883.66955,63952.626033333334
],
"timeProperties":{
"period":"PT1M",
"origin":null,
"timeZone":"UTC"
},
"bucketMillis":60000,
"bounds":{
"start":{
"data":49297.86866666667,
"timestamp":1566172800000
},
"end":{
"data":null,
"timestamp":null
}
}
}
}

Learn more

See the following topics for more information: