metric
Emit custom metrics by extracting values from messages.
```yaml
# Config fields, showing default values
label: ""
metric:
  type: ""
  name: ""
  labels: {}
  value: ""
```
This processor works by evaluating an interpolated field value for each message and updating an emitted metric according to the type.
Custom metrics such as these are emitted alongside Benthos internal metrics. You can customize where metrics are sent, choose which metric names are emitted, and rename them as appropriate; for more information, check out the metrics docs.
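As a minimal sketch, a metrics mapping might rename a custom metric and drop all other series (the metric names here are illustrative):

```yaml
metrics:
  mapping: |
    # Rename our custom metric, delete all other series
    root = if this == "Foos" { "foos_total" } else { deleted() }
  prometheus: {}
```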
Fields
type
The metric type to create.
Type: string
Default: ""
Options: counter, counter_by, gauge, timing.
name
The name of the metric to create. This must be unique across all Benthos components; otherwise it will overwrite those other metrics.
Type: string
Default: ""
labels
A map of label names and values that can be used to enrich metrics. Labels are not supported by some metric destinations, in which case the metric series are combined. This field supports interpolation functions.
Type: object
Default: {}
```yaml
# Examples

labels:
  topic: ${! meta("kafka_topic") }
  type: ${! json("doc.type") }
```
value
For some metric types this specifies a value to set, or to increment by. This field supports interpolation functions.
Type: string
Default: ""
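As a minimal sketch, a value could be extracted from message metadata (the metric and metadata names here are illustrative):

```yaml
pipeline:
  processors:
    - metric:
        type: gauge
        name: BatchSize
        value: ${! meta("batch_size") }
```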
Examples
In this example we emit a counter metric called Foos, which increments for every message processed, and we label the metric with some metadata about where the message came from along with a field from the document stating its type. We also configure our metrics to be sent to CloudWatch, and explicitly allow only our custom metric and some internal Benthos metrics to be emitted.
```yaml
pipeline:
  processors:
    - metric:
        name: Foos
        type: counter
        labels:
          topic: ${! meta("kafka_topic") }
          partition: ${! meta("kafka_partition") }
          type: ${! json("document.type").or("unknown") }

metrics:
  mapping: |
    root = if !["Foos","input_received","output_sent"].contains(this) { deleted() }
  aws_cloudwatch:
    namespace: ProdConsumer
```
In this example we emit a gauge metric called FooSize, with a value extracted from JSON messages at the path foo.size, and we label the metric with some metadata. We also configure our Prometheus metrics exporter to emit only this custom metric and nothing else.
```yaml
pipeline:
  processors:
    - metric:
        name: FooSize
        type: gauge
        labels:
          topic: ${! meta("kafka_topic") }
        value: ${! json("foo.size") }

metrics:
  mapping: 'if this != "FooSize" { deleted() }'
  prometheus: {}
```
Types
counter
Increments a counter by exactly 1; the contents of value are ignored by this type.
counter_by
If the contents of value can be parsed as a positive integer then the counter is incremented by that value.
For example, the following configuration will increment the value of the CountCustomField metric by the contents of field.some.value:
```yaml
pipeline:
  processors:
    - metric:
        type: counter_by
        name: CountCustomField
        value: ${!json("field.some.value")}
```
gauge
If the contents of value can be parsed as a positive integer then the gauge is set to that value.
For example, the following configuration will set the value of the GaugeCustomField metric to the contents of field.some.value:
```yaml
pipeline:
  processors:
    - metric:
        type: gauge
        name: GaugeCustomField
        value: ${!json("field.some.value")}
```
timing
Equivalent to gauge, except the metric is recorded as a timing.
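As a minimal sketch, a timing could be set from a duration field of the message (the metric and field names are illustrative, and the value is assumed to be a duration in nanoseconds):

```yaml
pipeline:
  processors:
    - metric:
        type: timing
        name: RequestLatency
        value: ${! json("request.latency_ns") }
```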