Send events to Lumi

In the observability landscape, events are the building blocks that help you understand the state of your system at any given moment. Sending events is an integral step in the event lifecycle, after which you can parse and transform your data and explore it with various queries.

Imply Lumi is an observability warehouse that can integrate with your existing tooling and workflow. Whether you use Lumi as a standalone product or alongside your current setup, consider the following aspects when you send events to Lumi:

  • The forwarding agent and communication protocol that fit your infrastructure
  • The types of events and data structures you have
  • Any transformations or filtering needed for incoming events

Ingestion integrations

You can send events to Lumi through various ingestion integrations. The integration you select depends on your application, performance, and security requirements.

Integrations page

Ingestion protocol strategy

The specific integration you use to send events to Lumi depends on your use case and event forwarding infrastructure. This section presents high-level guidance on which protocol may be suitable for your observability application.

To learn about Lumi as it relates to the Splunk ecosystem, see Lumi concepts for Splunk users.

The following diagram summarizes these strategies:

Ingestion integration diagram
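
As a simple illustration of one such strategy, the following Python sketch sends a single event to a HEC-compatible endpoint over HTTPS. The endpoint URL, the Splunk-style authorization header, and the payload shape are assumptions based on standard HEC conventions; substitute the values from your own Lumi integration settings.

```python
import requests

# Hypothetical endpoint and token: replace with the values from your
# Lumi HEC integration settings.
LUMI_HEC_URL = "https://lumi.example.com/services/collector/event"
LUMI_HEC_TOKEN = "<your-IAM-key>"

# A single event in HEC format: the event body plus an optional timestamp.
payload = {
    "event": "user login succeeded for account 12345",
    "time": 1728567336,
}

# Post the event and fail loudly if the endpoint rejects it.
response = requests.post(
    LUMI_HEC_URL,
    json=payload,
    headers={"Authorization": f"Splunk {LUMI_HEC_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
```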

Event format

The type of data you send to Lumi depends on your observability use case. You may choose to instrument an application to generate telemetry about its state, performance, requests, and interactions with other services. In other scenarios, you may rely on event logs that another system already generates, such as Windows logs for file system changes or network activity.
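
If you instrument an application yourself, emitting events as structured JSON makes them straightforward to parse downstream. The following Python sketch is only an illustration; the logger setup and field names are hypothetical, not a required schema.

```python
import json
import logging
import time

# Minimal sketch of application instrumentation: emit structured JSON
# events that are easy to parse into searchable attributes later.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("checkout-service")

def log_event(name: str, **fields) -> None:
    event = {"event": name, "timestamp": time.time(), **fields}
    logger.info(json.dumps(event))

log_event("order_placed", order_id="A-1042", amount_usd=29.99, latency_ms=87)
# {"event": "order_placed", "timestamp": 1728567336.12, "order_id": "A-1042", ...}
```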

Lumi provides tooling to parse and structure your data from specific event formats. You can use pipelines to process the events before you store and search them. Pipelines make it easy for you to automatically extract log data into searchable attributes. You can then explore events by filtering on those attributes.

Lumi provides a library of predefined pipelines, which contain a set of standard processors to parse and transform events with a specific data structure or format. You can also define your own pipeline to transform any kind of data you send to Lumi. To learn more about pipelines, see Transform events with pipelines.

Overview of pipeline processing
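
To see the kind of work a parsing processor does, consider extracting attributes from a web access log line. The Python sketch below is not Lumi pipeline syntax; it only mimics locally what a regex or grok processor would do inside a pipeline.

```python
import re

# A raw access log line as it might arrive in Lumi.
raw_event = '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /api/items HTTP/1.1" 200 512'

# Named groups play the role of the searchable attributes that a
# pipeline's regex or grok processor would extract.
pattern = re.compile(
    r'(?P<client_ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

match = pattern.match(raw_event)
attributes = match.groupdict() if match else {}
print(attributes)
# {'client_ip': '203.0.113.7', 'timestamp': '10/Oct/2024:13:55:36 +0000',
#  'method': 'GET', 'path': '/api/items', 'status': '200', 'bytes': '512'}
```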

In addition to pipelines, you can control some event enrichment and parsing through IAM key settings. These settings apply only to specific integrations. For example, Lumi doesn't apply the HEC attributes source, sourcetype, and index to events that arrive through the OTLP integration, even if you use the same IAM key for HEC and OTLP. You can still add or modify these attributes using a pipeline.
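
For instance, a HEC payload can carry these attributes directly, as in the hypothetical Python snippet below; the same event sent over OTLP would not pick them up, and a pipeline would have to add them instead.

```python
# Hypothetical HEC payload: over the HEC integration, these fields (plus
# any IAM key defaults) set source, sourcetype, and index on the event.
# Over OTLP they are not applied, even with the same IAM key; a pipeline
# can still add or modify them after ingestion.
hec_payload = {
    "event": {"message": "disk usage at 91%", "host": "web-03"},
    "source": "node-metrics",
    "sourcetype": "infra:disk",
    "index": "prod_infra",
}
```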

Note that pipelines and integrations are fundamentally separate concepts: pipelines process your data, whereas integrations send it to Lumi. The two overlap only in event enrichment and event parsing:

  • For events that arrive through the HEC receiver, the IAM key can store default values for select user attributes. Regardless of integration, you can assign or modify these values in a pipeline.
  • For events that arrive through Splunk-to-Splunk, the IAM key stores parsing rules based on select settings from your Splunk configuration. Regardless of integration, you can parse events in a pipeline using regex and grok expressions.

For more information on IAM key settings, see Attributes on an IAM key.

To learn about how Lumi prioritizes and assigns user attributes, see Event model.

Learn more

For tutorials on the send methods, see the Quickstart. For details on uploading files to Lumi, see the File upload reference.