Merge pull request #2536 from splunk/urbiz-OD6209-deprecate-sapm
[OD6209]: Deprecate SAPM
aurbiztondo-splunk authored Jan 10, 2025
2 parents 9b49b34 + 9742be3 commit b3b16ad
Showing 20 changed files with 90 additions and 128 deletions.
4 changes: 2 additions & 2 deletions _includes/collector-config-ootb.rst
@@ -165,15 +165,15 @@ The following diagram shows the default traces pipeline:

subgraph Exporters
direction LR
traces/sapm:::exporter
traces/otlphttp:::exporter
traces/signalfx/out:::exporter
end

%% Connections beyond categories are added later
traces/jaeger --> traces/memory_limiter
traces/otlp --> traces/memory_limiter
traces/zipkin --> traces/memory_limiter
traces/resourcedetection --> traces/sapm
traces/resourcedetection --> traces/otlphttp
traces/resourcedetection --> traces/signalfx/out

Learn more about these receivers:
15 changes: 3 additions & 12 deletions apm/apm-spans-traces/span-formats.rst
@@ -16,13 +16,14 @@ For more information on the ingest API endpoints, see :new-page:`Send APM traces
Span formats compatible with the OpenTelemetry Collector
================================================================

The Splunk Distribution of the OpenTelemetry Collector can collect spans in the following format:
The Splunk Distribution of the OpenTelemetry Collector can collect spans in the following formats:

- Jaeger: gRPC and Thrift
- Zipkin v1, v2 JSON
- Splunk APM Protocol (SAPM)
- OpenTelemetry Protocol (OTLP)

.. note:: Splunk APM Protocol (SAPM) components are deprecated. Use the OTLP format instead.

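If you're migrating from a deprecated ``sapm`` receiver, a minimal replacement sketch uses the ``otlp`` receiver (the endpoint values shown are the OTLP defaults; adjust them to your environment):

.. code-block:: yaml

   receivers:
     otlp:
       protocols:
         grpc:
           endpoint: 0.0.0.0:4317
         http:
           endpoint: 0.0.0.0:4318
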
The following examples show how to configure receivers in the collector configuration file. You can use multiple receivers according to your needs.

.. tabs::
@@ -51,14 +51,6 @@ The following examples show how to configure receivers in the collector configur
zipkin:
endpoint: 0.0.0.0:9411

.. code-tab:: yaml SAPM

# To receive spans in SAPM format

receivers:
sapm:
endpoint: 0.0.0.0:7276

.. code-tab:: yaml OTLP

# To receive spans in OTLP format
@@ -85,14 +78,12 @@ The ingest endpoint for Splunk Observability Cloud at ``https://ingest.<realm>.s
* OTLP at ``/v2/trace/otlp`` with ``Content-Type:application/x-protobuf``
* Jaeger Thrift with ``Content-Type:application/x-thrift``
* Zipkin v1, v2 with ``Content-Type:application/json``
* SAPM with ``Content-Type:application/x-protobuf``

In addition, the following endpoints are available:

* OTLP at ``/v2/trace/otlp`` with ``Content-Type:application/x-protobuf``
* Jaeger Thrift at ``/v2/trace/jaegerthrift`` with ``Content-Type:application/x-thrift``
* Zipkin v1, v2 at ``/v2/trace/signalfxv1`` with ``Content-Type:application/json``
* SAPM at ``/v2/trace/sapm`` with ``Content-Type:application/x-protobuf``

For more information on the ingest API endpoints, see :new-page:`Send APM traces <https://dev.splunk.com/observability/docs/apm/send_traces/>`.
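
As a sketch of how these endpoints and content types fit together (the realm, token, and payload file name are placeholders, not values from this document), a protobuf-encoded OTLP trace payload could be posted with curl:

.. code-block:: shell

   # Hypothetical example: trace.pb is an OTLP ExportTraceServiceRequest
   # serialized as protobuf; <realm> and <access_token> are placeholders.
   curl -X POST "https://ingest.<realm>.signalfx.com/v2/trace/otlp" \
     -H "Content-Type: application/x-protobuf" \
     -H "X-SF-Token: <access_token>" \
     --data-binary @trace.pb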

@@ -69,9 +69,9 @@ Traces don't appear in Splunk Observability Cloud
If traces from your instrumented application or service are not available in Splunk Observability Cloud, verify the OpenTelemetry Collector configuration:

* Make sure that the Splunk Distribution of OpenTelemetry Collector is running.
* Make sure that a ``zipkin`` receiver and a ``sapm`` exporter are configured.
* Make sure that a ``zipkin`` receiver and an ``otlp`` exporter are configured.
* Make sure that the ``access_token`` and ``endpoint`` fields are configured.
* Check that the traces pipeline is configured to use the ``zipkin`` receiver and ``sapm`` exporter.
* Check that the traces pipeline is configured to use the ``zipkin`` receiver and ``otlp`` exporter.
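
The checklist above corresponds to a configuration shaped roughly like the following sketch (the exporter endpoint is illustrative; in the Splunk distribution the access token and ingest endpoint are typically supplied through environment variables):

.. code-block:: yaml

   receivers:
     zipkin:
       endpoint: 0.0.0.0:9411

   exporters:
     otlp:
       endpoint: gateway:4317   # hypothetical target endpoint

   service:
     pipelines:
       traces:
         receivers: [zipkin]
         exporters: [otlp]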

Metrics don't appear in Splunk Observability Cloud
==================================================================
2 changes: 1 addition & 1 deletion gdi/monitors-cloud/heroku.rst
@@ -9,7 +9,7 @@ Heroku
The Splunk OpenTelemetry Connector for Heroku is a buildpack for the Splunk Distribution of the OpenTelemetry Collector. The buildpack installs
and runs the Splunk OpenTelemetry Connector on a Dyno to receive, process and export metric and trace data for Splunk Observability Cloud:

- Splunk APM through the ``sapm`` exporter. The ``otlphttp`` exporter can be used with a custom configuration.
- Splunk APM through the ``otlphttp`` exporter.
- Splunk Infrastructure Monitoring through the ``signalfx`` exporter.

See :ref:`otel-intro` to learn more.
4 changes: 2 additions & 2 deletions gdi/opentelemetry/collector-addon/collector-addon-install.rst
@@ -45,7 +45,7 @@ Follow these steps to install the Splunk Add-on for OpenTelemetry Collector to a

#. In Splunk_TA_otel/local, create or open the access_token file, and replace the existing contents with the token value you copied from Splunk Observability Cloud. Save the updated file.

#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and sapm-endpoint files in your local folder reflect the value shown in Splunk Observability Cloud. Save any changes you make in the local files.
#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and ingest endpoint files in your local folder matches the value shown in Splunk Observability Cloud. Save any changes you make in the local files.

#. Restart Splunkd. Your Add-on solution is now deployed.

@@ -75,7 +75,7 @@ Follow these steps to install the Splunk Add-on for the OpenTelemetry Collector

#. In Splunk_TA_otel/local, create or open the access_token file, and replace the existing contents with the token value you copied from Splunk Observability Cloud. Save the updated file.

#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and sapm-endpoint files in your local folder match the value shown in Splunk Observability Cloud. Save any changes you make in the local files.
#. In :strong:`Splunk Observability Cloud`, select your name, then select the Organization tab to verify that the realm value in the realm and ingest endpoint files in your local folder matches the value shown in Splunk Observability Cloud. Save any changes you make in the local files.

#. In :strong:`Splunk Web`, select :guilabel:`Settings > Forwarder Management` to access your deployment server.

@@ -50,17 +50,17 @@ For example:

.. code-block::
2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "sapm", "error": "server responded with 429", "interval": "4.4850027s"}
2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "sapm", "dropped_items": 1348}
2021-11-12T00:22:32.172Z info exporterhelper/queued_retry.go:325 Exporting failed. Will retry the request after interval. {"kind": "exporter", "name": "otlphttp", "error": "server responded with 429", "interval": "4.4850027s"}
2021-11-12T00:22:38.087Z error exporterhelper/queued_retry.go:190 Dropping data because sending_queue is full. Try increasing queue_size. {"kind": "exporter", "name": "otlphttp", "dropped_items": 1348}
If you can't fix throttling by bumping limits on the backend or reducing amount of data sent through the Collector, you can avoid OOMs by reducing the sending queue of the failing exporter. For example, you can reduce ``sending_queue`` for the ``sapm`` exporter:
If you can't fix throttling by increasing limits on the backend or reducing the amount of data sent through the Collector, you can avoid out-of-memory errors by reducing the sending queue of the failing exporter. For example, you can reduce ``sending_queue`` for the ``otlphttp`` exporter:

.. code-block:: yaml
agent:
config:
exporters:
sapm:
otlphttp:
sending_queue:
queue_size: 512
2 changes: 1 addition & 1 deletion gdi/opentelemetry/components/attributes-processor.rst
@@ -83,7 +83,7 @@ You can then add the attributes processors to any compatible pipeline. For examp
- memory_limiter
- batch
- resourcedetection
exporters: [sapm, signalfx]
exporters: [otlphttp, signalfx]
metrics:
receivers: [hostmetrics, otlp, signalfx]
processors:
2 changes: 1 addition & 1 deletion gdi/opentelemetry/components/filter-processor.rst
@@ -86,7 +86,7 @@ You can then add the filter processors to any compatible pipeline. For example:
- memory_limiter
- batch
- resourcedetection
exporters: [sapm, signalfx]
exporters: [otlphttp, signalfx]
metrics:
receivers: [hostmetrics, otlp, signalfx]
processors:
2 changes: 1 addition & 1 deletion gdi/opentelemetry/components/groupbyattrs-processor.rst
@@ -70,7 +70,7 @@ Use the processor to perform the following actions:
* :ref:`Compact multiple records <groupbyattrs-processor-compact>` that share the same ``resource`` and ``InstrumentationLibrary`` attributes but are under multiple ``ResourceSpans`` or ``ResourceMetrics`` or ``ResourceLogs`` into a single ``ResourceSpans`` or ``ResourceMetrics`` or ``ResourceLogs``, when an empty list of keys is provided.

* This happens, for example, when you use the ``groupbytrace`` processor, or when data comes in multiple requests.
* If you compact data it takes less memory, it's more efficiently processed and serialized, and the number of export requests is reduced, for example if you use the ``sapm`` exporter. See more at :ref:`splunk-apm-exporter`.
* Compacting data reduces memory usage, makes processing and serialization more efficient, and reduces the number of export requests.

.. tip:: Use the ``groupbyattrs`` processor together with the ``batch`` processor, as a consecutive step. Grouping records under a matching resource and/or InstrumentationLibrary reduces the fragmentation of data.

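A pipeline fragment following this tip might look like the following sketch (an empty ``keys`` list triggers the compaction behavior described above; the rest of the pipeline is illustrative):

.. code-block:: yaml

   processors:
     groupbyattrs:
       keys: []   # empty list: compact records that share the same resource
     batch:

   service:
     pipelines:
       traces:
         processors: [groupbyattrs, batch]   # batch as the consecutive step
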
2 changes: 1 addition & 1 deletion gdi/opentelemetry/components/jaeger-receiver.rst
@@ -94,7 +94,7 @@ The Jaeger receiver uses helper files for additional capabilities:
Remote sampling
-----------------------------------------------

Since version 0.61.0, remote sampling is no longer supported. Instead, since version 0.59.0, use the ``jaegerremotesapmpling`` extension for remote sampling.
Since version 0.61.0, remote sampling is no longer supported. Instead, since version 0.59.0, use the ``jaegerremotesampling`` extension for remote sampling.
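
As a sketch (field names and the endpoint value are assumptions based on the upstream extension, check its documentation for the exact schema), the extension can serve sampling strategies fetched from a remote source:

.. code-block:: yaml

   extensions:
     jaegerremotesampling:
       source:
         remote:
           endpoint: jaeger-collector:14250   # hypothetical strategy source

   service:
     extensions: [jaegerremotesampling]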

.. _jaeger-receiver-settings:

2 changes: 1 addition & 1 deletion gdi/opentelemetry/components/logging-exporter.rst
@@ -45,7 +45,7 @@ To activate the logging exporter, add it to any pipeline you want to diagnose. F
- memory_limiter
- batch
- resourcedetection
exporters: [sapm, signalfx, logging]
exporters: [otlphttp, signalfx, logging]
metrics:
receivers: [hostmetrics, otlp, signalfx]
processors: [memory_limiter, batch, resourcedetection]
102 changes: 43 additions & 59 deletions gdi/opentelemetry/components/otlphttp-exporter.rst
@@ -7,24 +7,26 @@ OTLP/HTTP exporter
.. meta::
:description: The OTLP/HTTP exporter allows the OpenTelemetry Collector to send metrics, traces, and logs via HTTP using the OTLP format. Read on to learn how to configure the component.

The OTLP/HTTP exporter sends metrics, traces, and logs through HTTP using the OTLP format. The supported pipeline types are ``traces``, ``metrics``, and ``logs``. See :ref:`otel-data-processing` for more information.

You can also use the OTLP exporter for advanced options to send data using the OTLP format. See more at :ref:`otlp-exporter`.
.. note:: Use the OTLP/HTTP exporter as the default method to send traces to Splunk Observability Cloud.

If you need to bypass the Collector and send data in the OTLP format directly to Splunk Observability Cloud:
The OTLP/HTTP exporter sends metrics, traces, and logs through HTTP using the OTLP format. The supported pipeline types are ``traces``, ``metrics``, and ``logs``. See :ref:`otel-data-processing` for more information.

* To send metrics, use the OTLP endpoint. Find out more in the developer portal at :new-page:`Sending data points <https://dev.splunk.com/observability/docs/datamodel/ingest>`. Note that this option only accepts protobuf payloads.

* To send traces, use the gRPC endpoint. For more information, see :ref:`grpc-data-ingest`.
You can also use the OTLP exporter for advanced options to send data using the gRPC protocol. See more at :ref:`otlp-exporter`.

Read more about the OTLP format at the OTel repo :new-page:`OpenTelemetry Protocol Specification <https://github.com/open-telemetry/opentelemetry-proto/blob/main/docs/specification.md>`.

Get started
======================

.. note::

This component is included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector to send traces to Splunk Observability Cloud when deploying in host monitoring (agent) mode. See :ref:`otel-deployment-mode` for more information.

For details about the default configuration, see :ref:`otel-kubernetes-config`, :ref:`linux-config-ootb`, or :ref:`windows-config-ootb`. You can customize your configuration any time as explained in this document.

Follow these steps to configure and activate the component:

1. Deploy the Splunk Distribution of OpenTelemetry Collector to your host or container platform:
1. Deploy the Splunk Distribution of the OpenTelemetry Collector to your host or container platform:

- :ref:`otel-install-linux`
- :ref:`otel-install-windows`
@@ -33,63 +35,54 @@ Follow these steps to configure and activate the component:
2. Configure the exporter as described in the next section.
3. Restart the Collector.

The OTLP/HTTP exporter is not included in the default configuration of the Splunk Distribution of the OpenTelemetry Collector. If you want to add it, the following settings are required:
Configuration options
--------------------------------

* ``endpoint``. The target base URL to send data to, for example ``https://example.com:4318``. No default value.
The following settings are required:

* Each type of signal is added to this base URL. For example, for traces, ``https://example.com:4318/v1/traces``.
* ``traces_endpoint``. The target URL to send trace data to. Use ``https://ingest.<realm>.signalfx.com/v2/trace/otlp`` for Splunk Observability Cloud.

The following settings are optional:
The following settings are optional and can be added to the configuration for more advanced use cases:

* ``logs_endpoint``. The target URL to send log data to.

* For example, ``https://example.com:4318/v1/logs``.
* If this setting is present, the endpoint setting is ignored for logs.
* ``logs_endpoint``. The target URL to send log data to. For example, ``https://example.com:4318/v1/logs``.

* ``metrics_endpoint``. The target URL to send metric data to.

* For example, ``https://example.com:4318/v1/metrics``.
* If this setting is present, the endpoint setting is ignored for metrics.
* ``metrics_endpoint``. The target URL to send metric data to. For example, ``https://ingest.us0.signalfx.com/v2/datapoint/otlp`` to send metrics to Splunk Observability Cloud.

* ``traces_endpoint``. The target URL to send trace data to.

* For example, ``https://example.com:4318/v1/traces``.
* If this setting is present, the endpoint setting is ignored for traces.

* ``tls``. See :ref:`TLS Configuration Settings <otlphttp-exporter-settings>` in this document for the full set of available options.
* ``tls``. See :ref:`TLS Configuration Settings <otlphttp-exporter-settings>` in this document for the full set of available options. Only applicable for sending data to a custom endpoint.

* ``timeout``. ``30s`` by default. HTTP request time limit. For details, see :new-page:`https://golang.org/pkg/net/http/#Client`.

* ``read_buffer_size``. ``0`` by default. ReadBufferSize for HTTP client.

* ``write_buffer_size``. ``512 * 1024`` by default. WriteBufferSize for the HTTP client.
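
Putting the optional settings above together, a configuration sketch might look like this (the values shown are the documented defaults or illustrative placeholders, not recommendations):

.. code-block:: yaml

   exporters:
     otlphttp:
       traces_endpoint: https://ingest.<realm>.signalfx.com/v2/trace/otlp
       timeout: 30s              # HTTP request time limit (default)
       read_buffer_size: 0       # default
       write_buffer_size: 524288 # default, 512 * 1024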

Sample configurations
Sample configuration
--------------------------------

To send traces and metrics to Splunk Observability Cloud using OTLP over HTTP, configure the ``metrics_endpoint`` and ``traces_endpoint`` settings to the REST API ingest endpoints. For example:

.. code-block:: yaml
exporters:
otlphttp:
metrics_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/datapoint/otlp"
traces_endpoint: "https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp"
compression: gzip
headers:
"X-SF-Token": "${SPLUNK_ACCESS_TOKEN}"
To complete the configuration, include the receiver in the required pipeline of the ``service`` section of your
exporters:
otlphttp:
# The target URL to send trace data to. By default it's set to ``https://ingest.${SPLUNK_REALM}.signalfx.com/v2/trace/otlp``.
traces_endpoint: https://ingest.<realm>.signalfx.com/v2/trace/otlp
# Set of HTTP headers added to every request.
headers:
# X-SF-Token is the authentication token provided by Splunk Observability Cloud.
X-SF-Token: <access_token>
To complete the configuration, include the exporter in the required pipeline of the ``service`` section of your
configuration file. For example:

.. code:: yaml
service:
pipelines:
metrics:
exporters: [otlphttp]
traces:
exporters: [otlphttp]
service:
pipelines:
metrics:
exporters: [otlphttp]
traces:
exporters: [otlphttp]
Configuration examples
--------------------------------
@@ -98,13 +91,11 @@ This is a detailed configuration example:

.. code-block:: yaml
endpoint: "https://1.2.3.4:1234"
tls:
ca_file: /var/lib/mycert.pem
cert_file: certfile
key_file: keyfile
insecure: true
traces_endpoint: https://ingest.us0.signalfx.com/v2/trace/otlp
metrics_endpoint: https://ingest.us0.signalfx.com/v2/datapoint/otlp
headers:
X-SF-Token: <access_token>
timeout: 10s
read_buffer_size: 123
write_buffer_size: 345
@@ -119,20 +110,15 @@ This is a detailed configuration example:
multiplier: 1.3
max_interval: 60s
max_elapsed_time: 10m
headers:
"can you have a . here?": "F0000000-0000-0000-0000-000000000000"
header1: 234
another: "somevalue"
compression: gzip
Configure gzip compression
--------------------------------

By default, gzip compression is turned on. To turn it off, use the following configuration:
By default, gzip compression is turned on. To turn it off, use the following configuration:

.. code-block:: yaml
exporters:
otlphttp:
...
Expand All @@ -147,23 +133,21 @@ The following table shows the configuration options for the OTLP/HTTP exporter:

.. raw:: html

<div class="metrics-standard" category="included" url="https://raw.githubusercontent.com/splunk/collector-config-tools/main/cfg-metadata/exporter/otlphttp.yaml"></div>
<div class="metrics-standard" category="included" url="https://raw.githubusercontent.com/splunk/collector-config-tools/main/cfg-metadata/exporter/otlphttp.yaml"></div>


Troubleshooting
======================



.. raw:: html

<div class="include-start" id="troubleshooting-components.rst"></div>
<div class="include-start" id="troubleshooting-components.rst"></div>

.. include:: /_includes/troubleshooting-components.rst

.. raw:: html

<div class="include-stop" id="troubleshooting-components.rst"></div>
<div class="include-stop" id="troubleshooting-components.rst"></div>



10 changes: 4 additions & 6 deletions gdi/opentelemetry/components/sapm-receiver.rst
@@ -1,13 +1,11 @@
.. _sapm-receiver:

****************************
SAPM receiver
****************************
********************************************
Splunk APM (SAPM) receiver (deprecated)
********************************************

.. meta::
:description: Receives traces from other collectors or from the SignalFx Smart Agent.

The Splunk Distribution of the OpenTelemetry Collector supports the SAPM receiver. Documentation is planned for a future release.

To find information about this component in the meantime, see :new-page:`SAPM receiver <https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/sapmreceiver>` on GitHub.
.. caution:: The SAPM receiver is deprecated and will be removed in April 2025. To receive traces from other Collector instances, use the :ref:`otlp-receiver` instead.
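
A minimal migration sketch, replacing a ``sapm`` receiver with the ``otlp`` receiver in the traces pipeline (the endpoint values are the OTLP defaults; the rest of the pipeline is omitted for brevity):

.. code-block:: yaml

   receivers:
     otlp:
       protocols:
         grpc:
           endpoint: 0.0.0.0:4317
         http:
           endpoint: 0.0.0.0:4318

   service:
     pipelines:
       traces:
         receivers: [otlp]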
