From a05e40f00b0d1d615ab452f68cc5f3ce369d975c Mon Sep 17 00:00:00 2001
From: Cyril Tovena
Date: Wed, 6 Jan 2021 10:03:22 +0100
Subject: [PATCH 1/2] Fixes LogQL documentation ref links.

Signed-off-by: Cyril Tovena
---
 docs/sources/logql/_index.md | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/docs/sources/logql/_index.md b/docs/sources/logql/_index.md
index a23e79fd3511..2e5ec2b62c68 100644
--- a/docs/sources/logql/_index.md
+++ b/docs/sources/logql/_index.md
@@ -82,14 +82,14 @@ Some expressions can mutate the log content and respective labels (e.g `| line_f

 A log pipeline can be composed of:

-- [Line Filter Expression](#Line-Filter-Expression).
-- [Parser Expression](#Parser-Expression)
-- [Label Filter Expression](#Label-Filter-Expression)
-- [Line Format Expression](#Line-Format-Expression)
-- [Labels Format Expression](#Labels-Format-Expression)
-- [Unwrap Expression](#Unwrap-Expression)
+- [Line Filter Expression](#line-filter-expression).
+- [Parser Expression](#parser-expression)
+- [Label Filter Expression](#label-filter-expression)
+- [Line Format Expression](#line-format-expression)
+- [Labels Format Expression](#labels-format-expression)
+- [Unwrap Expression](#unwrapped-range-aggregations)

-The [unwrap Expression](#Unwrap-Expression) is a special expression that should only be used within metric queries.
+The [unwrap Expression](#unwrapped-range-aggregations) is a special expression that should only be used within metric queries.

 #### Line Filter Expression

@@ -126,7 +126,7 @@ For example, while the result will be the same, the following query `{job="mysql

 #### Parser Expression

-Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering with [label filter expressions](#Label-Filter-Expression) or for [metric aggregations](#Metric-Queries).
+Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering with [label filter expressions](#label-filter-expression) or for [metric aggregations](#metric-queries).

 Extracted label keys are automatically sanitized by all parsers to follow the Prometheus metric name convention. (They can only contain ASCII letters and digits, as well as underscores and colons. They cannot start with a digit.)

@@ -141,7 +141,7 @@ For instance, the pipeline `| json` will produce the following mapping:

 In case of errors, for instance if the line is not in the expected format, the log line won't be filtered but instead will get a new `__error__` label added.

-If an extracted label key name already exists in the original log stream, the extracted label key will be suffixed with the `_extracted` keyword to make the distinction between the two labels. You can forcefully override the original label using a [label formatter expression](#Labels-Format-Expression). However, if an extracted key appears twice, only the latest label value will be kept.
+If an extracted label key name already exists in the original log stream, the extracted label key will be suffixed with the `_extracted` keyword to make the distinction between the two labels. You can forcefully override the original label using a [label formatter expression](#labels-format-expression). However, if an extracted key appears twice, only the latest label value will be kept.

 We currently support json, logfmt and regexp parsers.
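(Editor's illustration, not part of the patch.) The parser and filter stages touched by the hunks above compose into a single pipeline. A sketch, assuming a `job="mysql"` stream whose lines are logfmt-formatted and yield hypothetical `duration` and `status_code` labels:

```logql
{job="mysql"}
  |= "error" != "timeout"
  | logfmt
  | duration > 10s and status_code != 200
```

The line filters run first to narrow the stream cheaply, then `logfmt` extracts labels that the final label filter compares as a duration and a number.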
@@ -219,7 +219,7 @@ those labels:
 "duration" => "1.5s"
 ```

-It's easier to use the predefined parsers like `json` and `logfmt` when you can, falling back to `regexp` when the log lines have unusual structure. Multiple parsers can be used in the same log pipeline, which is useful when you want to parse complex logs. ([see examples](#Multiple-parsers))
+It's easier to use the predefined parsers like `json` and `logfmt` when you can, falling back to `regexp` when the log lines have unusual structure. Multiple parsers can be used in the same log pipeline, which is useful when you want to parse complex logs. ([see examples](#multiple-parsers))

 #### Label Filter Expression

@@ -236,7 +236,7 @@ We support multiple **value** types which are automatically inferred from the qu
 - **Number** is a floating-point number (64 bits), such as `250`, `89.923`.
 - **Bytes** is a sequence of decimal numbers, each with an optional fraction and a unit suffix, such as "42MB", "1.5Kib" or "20b". Valid bytes units are "b", "kib", "kb", "mib", "mb", "gib", "gb", "tib", "tb", "pib", "pb", "eib", "eb".

-The string type works exactly like the Prometheus label matchers used in the [log stream selector](#Log-Stream-Selector). This means you can use the same operations (`=`, `!=`, `=~`, `!~`).
+The string type works exactly like the Prometheus label matchers used in the [log stream selector](#log-stream-selector). This means you can use the same operations (`=`, `!=`, `=~`, `!~`).

 > The string type is the only one that can filter out a log line with a label `__error__`.

@@ -249,7 +249,7 @@ Using Duration, Number and Bytes will convert the label value prior to comparisi

 For instance, `logfmt | duration > 1m and bytes_consumed > 20MB`

-If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors](#Pipeline-Errors) section.
+If the conversion of the label value fails, the log line is not filtered and an `__error__` label is added. To filter those errors, see the [pipeline errors](#pipeline-errors) section.

 You can chain multiple predicates using `and` and `or`, which respectively express the `and` and `or` binary operations. `and` can be equivalently expressed by a comma, a space or another pipe. Label filters can be placed anywhere in a log pipeline.

@@ -278,7 +278,7 @@ It will evaluate first `duration >= 20ms or method="GET"`. To evaluate first `me
 | duration >= 20ms or (method="GET" and size <= 20KB)
 ```

-> Label filter expressions are the only expression allowed after the [unwrap expression](#Unwrap-Expression). This is mainly to allow filtering errors from the metric extraction (see [errors](#Pipeline-Errors)).
+> Label filter expressions are the only expression allowed after the [unwrap expression](#unwrapped-range-aggregations). This is mainly to allow filtering errors from the metric extraction (see [errors](#pipeline-errors)).

 #### Line Format Expression

@@ -311,7 +311,6 @@ The renaming form `dst=src` will _drop_ the `src` label after remapping it to th
 > A single label name can only appear once per expression. This means `| label_format foo=bar,foo="new"` is not allowed but you can use two expressions for the desired effect: `| label_format foo=bar | label_format foo="new"`
-
 ### Log Queries Examples

 #### Multiple filtering

@@ -373,7 +372,7 @@ LogQL also supports wrapping a log query with functions that allow for creating

 Metric queries can be used to calculate things such as the rate of error messages, or the top N log sources with the most amount of logs over the last 3 hours.

-Combined with log [parsers](#Parser-Expression), metric queries can also be used to calculate metrics from a sample value within the log line, such as latency or request size.
+Combined with log [parsers](#parser-expression), metric queries can also be used to calculate metrics from a sample value within the log line, such as latency or request size.
 Furthermore, all labels, including extracted ones, will be available for aggregations and generation of new series.

 ### Range Vector aggregation

@@ -410,7 +409,7 @@ It returns the per-second rate of all non-timeout errors within the last minutes

 #### Unwrapped Range Aggregations

-Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](#Pipeline-Errors).
+Unwrapped ranges use extracted labels as sample values instead of log lines. However, to select which label will be used within the aggregation, the log query must end with an unwrap expression and optionally a label filter expression to discard [errors](#pipeline-errors).

 The unwrap expression is noted `| unwrap label_identifier` where the label identifier is the label name to use for extracting sample values.

From 52760b9ee64124b331a3b04023d1f3265c99fcc1 Mon Sep 17 00:00:00 2001
From: Cyril Tovena
Date: Wed, 6 Jan 2021 10:08:01 +0100
Subject: [PATCH 2/2] Fix old link.
Signed-off-by: Cyril Tovena --- docs/sources/clients/aws/eks/_index.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/sources/clients/aws/eks/_index.md b/docs/sources/clients/aws/eks/_index.md index 143805f7e13a..efbf9bbbd296 100644 --- a/docs/sources/clients/aws/eks/_index.md +++ b/docs/sources/clients/aws/eks/_index.md @@ -10,12 +10,12 @@ After this tutorial you will able to query all your logs in one place using Graf - [Sending logs from EKS with Promtail](#sending-logs-from-eks-with-promtail) - - [Requirements](#requirements) - - [Setting up the cluster](#setting-up-the-cluster) - - [Adding Promtail DaemonSet](#adding-promtail-daemonset) - - [Fetching kubelet logs with systemd](#fetching-kubelet-logs-with-systemd) - - [Adding Kubernetes events](#adding-kubernetes-events) - - [Conclusion](#conclusion) + - [Requirements](#requirements) + - [Setting up the cluster](#setting-up-the-cluster) + - [Adding Promtail DaemonSet](#adding-promtail-daemonset) + - [Fetching kubelet logs with systemd](#fetching-kubelet-logs-with-systemd) + - [Adding Kubernetes events](#adding-kubernetes-events) + - [Conclusion](#conclusion) @@ -248,7 +248,7 @@ If you want to push this further you can check out [Joe's blog post][blog annota [grafana logs namespace]: namespace-grafana.png [relabel_configs]:https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config [syslog]: ../../../installation/helm#run-promtail-with-syslog-support -[Filters]: https://grafana.com/docs/loki/latest/logql/#filter-expression +[Filters]: https://grafana.com/docs/loki/latest/logql/#line-filter-expression [kubelet]: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#:~:text=The%20kubelet%20works%20in%20terms,PodSpecs%20are%20running%20and%20healthy. [LogQL]: https://grafana.com/docs/loki/latest/logql/ [blog events]: https://grafana.com/blog/2019/08/21/how-grafana-labs-effectively-pairs-loki-and-kubernetes-events/
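(Editor's illustrations, not part of either patch.) The retargeted `[Filters]` reference now points at the line filter expression section of the LogQL docs; a minimal example of such a filter, assuming Promtail's Kubernetes service discovery attaches `namespace` and `container` labels, would be:

```logql
{namespace="default", container="my-app"} |= "error" != "timeout"
```

Likewise, the unwrapped range aggregation section that several links above now target can be sketched as follows, assuming JSON access logs with a numeric `request_time` field and a `host` label; `__error__=""` drops lines that failed to parse before they are unwrapped:

```logql
quantile_over_time(0.99, {job="nginx"} | json | __error__="" | unwrap request_time [1m]) by (host)
```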