In a Kubernetes pod container, there are two different dotnet processes: one for the main application and one for monitoring it.
I have configured the ENTRYPOINT as shown below in both Dockerfiles.
ENTRYPOINT sh -c "dotnet WebApplication1.dll & exec dotnet ./arc_dotnet/ApplicareDotnetTrace.dll"
I am using the OpenTelemetry Collector and Jaeger to view the application's trace details. However, traces from the other processes running on the system are also displayed in Jaeger, and I need to prevent these unwanted traces from being shown.
I tried adding the following environment variables to the application's YAML, but the data continues to be displayed:
- name: OTEL_DOTNET_AUTO_EXCLUDE_PROCESS_ARGS
  value: "*ApplicareDotnetTrace.dll"
- name: OTEL_DOTNET_AUTO_EXCLUDE_PROCESSES
  value: "ApplicareDotnetTrace.dll"
I have also attached my collector YAML configuration.
collector.yaml:
# 1. Namespace for the collector
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    name: monitoring
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
  namespace: monitoring
data:
  collector-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      batch: {}
      resource/standard_attributes:
        attributes:
          - key: k8s.namespace.name
            from_attribute: k8s.namespace.name
            action: upsert
          - key: k8s.pod.name
            from_attribute: k8s.pod.name
            action: upsert
      metricstransform:
        transforms:
          - include: arc_metrics_.*
            match_type: regexp
            action: update
            operations:
              - action: add_label
                new_label: metric_source
                new_value: otel_collector
      filter/drop_internal_metrics:
        metrics:
          exclude:
            match_type: regexp
            metric_names:
              - ^prometheus_.*
              - ^process_.*
              - ^go_.*
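      # Intended to drop spans coming from the ApplicareDotnetTrace process,
      # matching either the exact command line or the span name: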
      filter/exclude_arc_process:
        traces:
          span:
            - 'resource.attributes["process.command_line"] == "dotnet ./arc_dotnet/ApplicareDotnetTrace.dll"'
            - 'name == "ApplicareDotnetTrace"'
    exporters:
      prometheusremotewrite:
        endpoint: "http://20.246.110.231:9090/api/v1/write"
        namespace: "arc_metrics"
        resource_to_telemetry_conversion:
          enabled: true
      debug:
        verbosity: detailed
      otlphttp:
        endpoint: "http://172.200.169.98:4318"
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: [resource/standard_attributes, metricstransform, filter/drop_internal_metrics, batch]
          exporters: [prometheusremotewrite]
        traces:
          receivers: [otlp]
          processors: [filter/exclude_arc_process, batch]
          exporters: [otlphttp, debug] # Keep debug temporarily
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: monitoring
spec:
  selector:
    app: otel-collector
  ports:
    - name: prometheus
      protocol: TCP
      port: 8889
      targetPort: 8889
    - name: otlp-grpc
      protocol: TCP
      port: 4317
      targetPort: 4317
    - name: otlp-http
      protocol: TCP
      port: 4318
      targetPort: 4318
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:latest
          command:
            - "/otelcol-contrib"
            - "--config=/conf/collector-config.yaml"
          ports:
            - containerPort: 8889
            - containerPort: 4317
            - containerPort: 4318
          volumeMounts:
            - name: config
              mountPath: /conf
      volumes:
        - name: config
          configMap:
            name: otel-collector-config
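In case it is relevant: I understand the filter condition above only drops spans whose process.command_line equals that exact string. A variant that matches on a substring instead, using OTTL's IsMatch converter, would look like the sketch below (this assumes the process.command_line resource attribute is actually populated on the incoming spans):

    processors:
      filter/exclude_arc_process:
        error_mode: ignore
        traces:
          span:
            # Regex match instead of exact string equality on the resource attribute
            - 'IsMatch(resource.attributes["process.command_line"], ".*ApplicareDotnetTrace.*")'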
What I want: I do not want to receive trace data from the ApplicareDotnetTrace.dll application or process running in the Kubernetes pod container.