AWS Open Distro for OpenTelemetry
Getting Started with the OTLP Exporters
An exporter is a component in the OpenTelemetry Collector configured to send data to different systems/back-ends. Each exporter converts OpenTelemetry Protocol (OTLP) formatted data into its back-end's predefined format and exports it so it can be interpreted by that back-end or system.
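For orientation, here is a minimal sketch of a Collector configuration that wires a receiver and an exporter together in a traces pipeline; the backend address below is a placeholder, not a real destination:

# Minimal sketch: an OTLP receiver feeding an OTLP exporter through a traces pipeline.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp:
    # Placeholder endpoint; replace with the address of your backend.
    endpoint: my-backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]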
OTLP Protocol
The OpenTelemetry Protocol (OTLP) defines the encoding, transport, and delivery mechanism of telemetry data between telemetry sources, intermediate processes such as collectors, and telemetry backends. OTLP is a request/response protocol: the client sends requests and the server replies with responses. OTLP is currently implemented over two transports, gRPC and HTTP, both of which use a Protocol Buffers (protobuf) schema for the payloads. The protobuf schema of the messages is the same for OTLP/HTTP and OTLP/gRPC.
OTLP/HTTP
The OTLP implementation over HTTP uses protobuf payloads encoded either in binary format or in JSON. OTLP/HTTP uses HTTP POST requests to send telemetry data from clients to servers. Implementations may use HTTP/1.1 or HTTP/2 transports; if an HTTP/2 connection cannot be established, they should fall back to HTTP/1.1.
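As an illustration, the Collector's otlphttp exporter uses this transport. A minimal sketch with a placeholder endpoint might look like the following; by default the exporter appends the per-signal paths (/v1/traces, /v1/metrics, /v1/logs) to the base URL:

exporters:
  otlphttp:
    # Placeholder base URL; 4318 is the conventional OTLP/HTTP port.
    # Per-signal paths (/v1/traces, /v1/metrics, /v1/logs) are appended automatically.
    endpoint: https://otlp-backend.example.com:4318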
OTLP/gRPC
OTLP/gRPC sends telemetry data with unary requests: ExportTraceServiceRequest for traces, ExportMetricsServiceRequest for metrics, and ExportLogsServiceRequest for logs. The language-independent interface types for this pipeline data can be found here. The client continuously sends sequences of requests to the server and expects to receive a response to each request. You can learn more about the OTLP protocol here.
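For comparison with the OTLP/HTTP example above, a minimal sketch of the Collector's otlp exporter, which uses the gRPC transport, might look like this (the endpoint is again a placeholder):

exporters:
  otlp:
    # Placeholder endpoint; 4317 is the conventional OTLP/gRPC port.
    endpoint: otlp-backend.example.com:4317
    tls:
      # Keep TLS enabled for real backends; set insecure to true only for local testing.
      insecure: false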
Setting up a Monitoring Backend
OpenTelemetry can export traces, logs, and metrics to various backends for analysis, in order to understand an application's performance and behavior. Multiple monitoring backends (also known as endpoints) support OpenTelemetry via the OTLP protocol.
In this section, we share getting-started configurations for the Collector to export telemetry data to the AppDynamics, Grafana, Honeycomb, Lightstep, New Relic, OpenSearch, and Sumo Logic endpoints.
Prerequisites
To use any of the backends supported by OpenTelemetry, make sure you have set up the Collector.
AppDynamics
AppDynamics supports OpenTelemetry by ingesting OTLP directly, so users of the AWS Distro for OpenTelemetry (ADOT) can send tracing data directly to AppDynamics without the need for additional plugins or non-OTLP exporters.
Requirements
Before you can use the AWS Distro for OpenTelemetry with the AppDynamics endpoint, you need:
- AppDynamics SaaS Controller >= v21.2.0.
- Admission to the AppDynamics OpenTelemetry private beta program.
Configuration (Collector)
The configuration takes place in the OTLP exporter in the Collector config YAML file.
- Set the OTLP endpoint through the OTLP HTTP Exporter. To configure your AppDynamics Controller to work with the ADOT Collector, edit your otel-config.yml configuration file.
- Set the AppDynamics API key <x-api-key>. Your AppDynamics API key must be defined as an HTTP header; to obtain your unique x-api-key, work with your AppDynamics account team.
- Use resource attributes to add your AppDynamics account information:
  - appdynamics.controller.account: your AppDynamics Controller account name.
  - appdynamics.controller.host: your AppDynamics Controller host name.
  - appdynamics.controller.port: your AppDynamics Controller port number.
  - service.name: your AppDynamics tier name. Set the corresponding service.name trace resource attribute for every service being monitored.
  - service.namespace: your AppDynamics application name. Set the corresponding service.namespace trace resource attribute for every service being monitored.
For custom attributes, see Ingest OpenTelemetry Trace Data.
Example
processors:
  resource:
    attributes:
      - key: appdynamics.controller.account
        action: upsert
        value: "acme"
      - key: appdynamics.controller.host
        action: upsert
        value: "acme.saas.appdynamics.com"
      - key: appdynamics.controller.port
        action: upsert
        value: 443
  batch:
    timeout: 30s
    send_batch_size: 8192

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317
      http:
        endpoint: localhost:4318

exporters:
  otlphttp:
    endpoint: "https://pdx-sls-agent-api.saas.appdynamics.com"
    headers: {"x-api-key": "****************"}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [resource, batch]
      exporters: [otlphttp]
Grafana Labs
Grafana Cloud Agent
The Grafana Cloud Agent is an all-in-one agent for collecting metrics, logs, and traces, built on top of production-ready open source software. Its batteries-included nature removes the need to install more than one piece of software by including common monitoring integrations out of the box. Native integrations with OpenTelemetry, Prometheus, and OpenMetrics ensure full compatibility with the CNCF ecosystem while adding extra functionality for scaling collection.
Requirements
Dependencies:
- OpenTelemetry-Collector
- Prometheus
- Cortex
- Loki
Runtime requirements:
- ETCD or Consul for scaling functionality
Configuration
Create a YAML configuration file with the following contents to configure the Agent to collect metrics from a Prometheus- or OpenMetrics-compatible endpoint:
prometheus:
  # Configures where scraped metric data will be stored. Data read from this
  # directory will be forwarded using the configured remote_write settings.
  wal_directory: /tmp/agent
  configs:
    # An individual config here identifies a single Prometheus instance. Each
    # instance holds its own set of scrape jobs and remote_write settings. The
    # name specified here must be unique.
    - name: primary
      scrape_configs:
        # A job is responsible for discovering a set of targets to scrape.
        # static_configs specifies a static list of host:port pairs to use
        # as targets. Replace "host:port" below with the hostname and port
        # number of the OpenMetrics-compatible endpoint to collect metrics
        # from.
        - job_name: collect
          static_configs:
            - targets: ['host:port']
      # remote_write configures where to send metrics using the Prometheus
      # Remote Write format. If authentication is used, replace USERNAME and
      # PASSWORD accordingly. Otherwise, omit the basic_auth block.
      remote_write:
        - url: REMOTE_WRITE_URL
          basic_auth:
            username: USERNAME
            password: PASSWORD
Integrations may be enabled to also collect metrics from common systems. Add the following block to your configuration file:
integrations:
  # Enable the "node_exporter" integration, which runs
  # https://github.com/prometheus/node_exporter in-process and scrapes metrics
  # from it.
  node_exporter:
    enabled: true
  # Configured identically to remote_write from the previous section. This
  # section must exist if integrations are used.
  prometheus_remote_write:
    - url: REMOTE_WRITE_URL
      basic_auth:
        username: USERNAME
        password: PASSWORD
Log support may be added with a loki block. Use the following code block to collect all log files from /var/log:
loki:
  positions:
    # Configures where to store byte offsets of recently read files.
    filename: /tmp/positions.yaml
  clients:
    # Configures the location to send logs using the Loki write API.
    # If authentication is not needed, omit the basic_auth block.
    - url: LOKI_URL
      basic_auth:
        username: USERNAME
        password: PASSWORD
  scrape_configs:
    # Configures a scrape job to find log files to collect from. Targets
    # must be set to localhost.
    #
    # __path__ may be set to any glob-patterned filepath where log files are
    # stored.
    - job_name: varlogs
      static_configs:
        - targets:
            - localhost
          labels:
            __path__: /var/log/*log
Support for collecting traces may be added with a tempo block. Use the following code block to collect spans and forward them to an OTLP-compatible endpoint:
tempo:
  receivers:
    # Configure jaeger support. grpc supports spans over port
    # 14250, thrift_binary over 6832, thrift_compact over 6831,
    # and thrift_http over 14268. Specific port numbers may be
    # customized within the config for the protocol.
    jaeger:
      protocols:
        grpc:
        thrift_binary:
        thrift_compact:
        thrift_http:
    # Configure opencensus support. Spans can be sent over port 55678
    # by default.
    opencensus:
    # Configure otlp support. Spans can be sent to port 4317 by
    # default.
    otlp:
      protocols:
        grpc:
        http:
    # Configure zipkin support. Spans can be sent to port 9411 by
    # default.
    zipkin:

  # Configures where to send collected spans and traces. Outgoing spans are sent
  # in the OTLP format. Replace OTLP_ENDPOINT with the host:port of the target
  # OTLP-compatible host. If the OTLP endpoint uses authentication, configure
  # USERNAME and PASSWORD accordingly. Otherwise, omit the basic_auth section.
  push_config:
    endpoint: OTLP_ENDPOINT
    basic_auth:
      username: USERNAME
      password: PASSWORD
A full configuration reference is located in the Grafana Cloud Agent code repository.
Honeycomb
Honeycomb supports OpenTelemetry by ingesting OTLP directly, so users of the AWS Distro for OpenTelemetry (ADOT) can send tracing data directly to Honeycomb without the need for additional plugins or non-OTLP exporters.
Requirements
Before you can use the AWS Distro for OpenTelemetry with the Honeycomb endpoint, you need:
- A Honeycomb account. If you don't currently have one, you can sign up here.
- An API key for the Honeycomb Environment you're sending data to.
Configuration (Collector)
The configuration will take place in the OTLP exporter in the Collector config YAML file.
- Set the OTLP endpoint to api.honeycomb.io:443
- Add your Honeycomb API key as an OTLP header (you can find your API key under Environment settings)
- If you're sending metrics, add the name of a metrics dataset as an additional OTLP header
Example
To send trace data, all you need is the API key for your Environment:
# Honeycomb Collector configuration
exporters:
  otlp:
    endpoint: api.honeycomb.io:443
    headers:
      # You can find your Honeycomb API key under Environment settings
      "x-honeycomb-team": "<YOUR_API_KEY>"
To send metrics data, you also need to specify the dataset for metrics data:
# Honeycomb Collector configuration
exporters:
  otlp:
    endpoint: api.honeycomb.io:443
    headers:
      # You can find your Honeycomb API key under Environment settings
      "x-honeycomb-team": "<YOUR_API_KEY>"
      "x-honeycomb-dataset": "<YOUR_METRICS_DATASET>"
See Honeycomb's OpenTelemetry Collector docs to learn about additional configuration options.
Support
If you have any trouble using the AWS Distro for OpenTelemetry with Honeycomb, you can reach out to the ADOT support team or directly to the Honeycomb support page.
Lightstep
Lightstep supports OpenTelemetry natively, via OTLP. If you're already set up with AWS Distro for OpenTelemetry, then getting data into Lightstep only requires an edit to the Collector's YAML config file.
Requirements
Before you can use the AWS Distro for OpenTelemetry with Lightstep, you need:
- A Lightstep account. If you don't already have one, you can create a free account here.
- An access token for your Lightstep project. This can be found in project settings (the gear icon in the sidebar).
Configuration (Collector)
The configuration will take place in the OTLP exporter in the Collector config YAML file.
- Configure the Collector to export OTLP.
- Set the OTLP endpoint to point to Lightstep.
- Public endpoint: ingest.lightstep.com:443
- Private satellites: the address of your satellite load balancer.
- Add your Lightstep access token as an OTLP header.
- Header name: "lightstep-access-token"
Example
# Lightstep Collector configuration
exporters:
  otlp:
    # NOTE: if you are using private satellites, replace this public
    # endpoint with the address of your satellite load balancer.
    endpoint: ingest.lightstep.com:443
    # Your access token can be found in the project settings page
    headers: {"lightstep-access-token": "<YOUR_ACCESS_TOKEN>"}
New Relic
New Relic supports OpenTelemetry natively, via OTLP. If you're already set up with the AWS Distro for OpenTelemetry, then sending data to New Relic can be accomplished with a simple change to the Collector's YAML config file.
Requirements
Before you can use the AWS Distro for OpenTelemetry with New Relic, you will need:
- A New Relic account. If you don't already have one, you can sign up for a free account.
- An Ingest-License Key for your account. Select "Create a Key" and "Ingest-License Key" for the type.
Configuration (Collector)
The configuration will take place in the OTLP exporter section of the Collector config YAML file.
- Set the OTLP endpoint to otlp.nr-data.net:4317
- Add your New Relic Ingest-License key as an OTLP header.
- Header name: "api-key"
Example
# New Relic OTLP Collector configuration
exporters:
  otlp:
    endpoint: otlp.nr-data.net:4317
    headers:
      api-key: <YOUR_NEW_RELIC_LICENSE_KEY>
OpenSearch
OpenSearch supports ingesting enriched trace data via Data Prepper, a standalone application that converts OTLP-formatted data for use in OpenSearch. Data Prepper supports receiving trace data from OpenTelemetry natively via OTLP. Once you've set up a Data Prepper instance, completing the data pipeline is as simple as editing your YAML config file for the Collector and getting started.
Requirements
Before you can use the AWS Distro for OpenTelemetry with OpenSearch, you need:
- A Data Prepper instance, configured to write to your OpenSearch cluster. Configuration documentation can be found here.
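For reference, a simplified Data Prepper pipeline that receives OTLP trace data and writes it to OpenSearch might look roughly like the sketch below. The otel_trace_source and opensearch plugin names follow the Data Prepper documentation, but the hosts and credentials are placeholders, and real trace-analytics pipelines typically include additional processing stages, so treat the linked configuration documentation as authoritative:

# Illustrative Data Prepper pipelines.yaml sketch (placeholders throughout).
otel-trace-pipeline:
  source:
    # Data Prepper's OTLP trace source; receives ExportTraceServiceRequest data.
    otel_trace_source:
      ssl: false
  sink:
    - opensearch:
        # Replace with your OpenSearch endpoint and credentials.
        hosts: ["https://YOUR_OPENSEARCH_ENDPOINT:9200"]
        username: "YOUR_USERNAME"
        password: "YOUR_PASSWORD"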
Configuration (Collector)
The configuration will take place in the OTLP exporter in the Collector config YAML file.
- Configure the Collector to export OTLP.
- Set the OTLP endpoint to that of your Data Prepper instance or cluster.
Example
# Data Prepper Collector configuration
exporters:
  otlp/data-prepper:
    # Port 21890 is the default port exposed by Data Prepper.
    endpoint: <YOUR_DATA_PREPPER_ADDRESS>:21890

service:
  pipelines:
    traces:
      exporters: [otlp/data-prepper]
Sumo Logic
Sumo Logic supports the tracing telemetry signal from OpenTelemetry natively via OTLP. If you're already set up with AWS Distro for OpenTelemetry, then exporting data to a Sumo Logic backend is as simple as editing your YAML config file for the Collector and getting started.
Requirements
Before you can use the AWS Distro for OpenTelemetry with Sumo Logic, you need:
- A Sumo Logic account. If you don't already have one, you can create an account here.
- An HTTP Traces endpoint URL. Instructions on how to get one are available here.
Configuration (Collector)
The configuration will take place in the batch processor and the OTLP/HTTP exporter in the Collector config YAML file.
Example
# SumoLogic Collector configuration
processors:
  batch/traces:
    timeout: 5s
    send_batch_size: 512

exporters:
  otlphttp:
    traces_endpoint: https://YOUR_SUMOLOGIC_HTTP_TRACES_ENDPOINT_URL

service:
  pipelines:
    traces:
      exporters: [otlphttp]
      processors: [batch/traces]
If you are instrumenting your application using OpenTelemetry JavaScript, Java, Python, Go, Ruby, or .NET, you can use the Sumo Logic documentation to set up your application and obtain telemetry data.
Support
If you have any trouble using the AWS Distro for OpenTelemetry with Sumo Logic, you can reach out to the ADOT support team or directly to the Sumo Logic support page.
Questions, Feedback?
We would love to hear from you about more common configuration scenarios or improvements to this documentation! Please submit an issue on the aws-otel community page to let us know.