
Doc: Vague link text. (#2698)
* Fixes #2680: remove vague link text.

* Fix remaining 'docs' links.

* Accept suggestion.

Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>

* Change HTTP API to Ruler API.

Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>

* Correct typo.

Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>

* Capitalize Github.

Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>

* Capitalize Promtail.

Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>

* Add copy-edit suggestions from @achatterjee-grafana.

* Incorporate @owen-d's changes.

Co-authored-by: Diana Payton <52059945+oddlittlebird@users.noreply.github.com>
bemasher and oddlittlebird authored Oct 12, 2020
1 parent 9a3592c commit e644095
Showing 17 changed files with 32 additions and 36 deletions.
12 changes: 6 additions & 6 deletions docs/sources/alerting/_index.md
@@ -26,7 +26,7 @@ ruler:

## Prometheus Compatible

-When running the Ruler (which runs by default in the single binary), Loki accepts rules files and then schedules them for continual evaluation. These are _Prometheus compatible_! This means the rules file has the same structure as in [Prometheus](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/), with the exception that the rules specified are in LogQL.
+When running the Ruler (which runs by default in the single binary), Loki accepts rules files and then schedules them for continual evaluation. These are _Prometheus compatible_! This means the rules file has the same structure as in [Prometheus' Alerting Rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/), except that the rules specified are in LogQL.

Let's see what that looks like:

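A minimal sketch of such a rules file (the alert name, selectors, and threshold here are illustrative, modeled on the Prometheus rule format with a LogQL expression):

```yaml
groups:
  - name: should_fire
    rules:
      - alert: HighPercentageError
        expr: |
          sum(rate({app="foo", env="production"} |= "error" [5m])) by (job)
            /
          sum(rate({app="foo", env="production"}[5m])) by (job)
            > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: High error percentage in production logs
```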
@@ -63,7 +63,7 @@ rules:

### `<rule>`

-The syntax for alerting rules is (see the LogQL [docs](https://grafana.com/docs/loki/latest/logql/#metric-queries) for more details):
+The syntax for alerting rules is (see the LogQL [Metric Queries](https://grafana.com/docs/loki/latest/logql/#metric-queries) documentation for more details):

```yaml
# The name of the alert. Must be a valid label value.
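alert: <string>

# (The fields below are a sketch reconstructed from the standard Prometheus
# alerting-rule syntax referenced above; consult that reference for the
# authoritative listing.)

# The LogQL expression to evaluate.
expr: <string>

# Alerts are considered firing once they have been returned for this long.
# Alerts which have not yet fired for long enough are considered pending.
[ for: <duration> | default = 0s ]

# Labels to add or overwrite for each alert.
labels:
  [ <labelname>: <tmpl_string> ]

# Annotations to add to each alert.
annotations:
  [ <labelname>: <tmpl_string> ]
```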
@@ -137,7 +137,7 @@ Many nascent projects, apps, or even companies may not have a metrics backend ye

We don't always control the source code of applications we run. Think load balancers and the myriad components (both open source and closed third-party) that support our applications; it's a common problem that these don't expose a metric you want (or any metrics at all). How, then, can we bring them into our observability stack in order to monitor them effectively? Alerting based on logs is a great answer for these problems.

-For a sneak peek of how to combine this with the upcoming LogQL v2 functionality, take a look at Ward Bekker's [video](https://www.youtube.com/watch?v=RwQlR3D4Km4) which builds a robust nginx monitoring dashboard entirely from nginx logs.
+For a sneak peek of how to combine this with the upcoming LogQL v2 functionality, take a look at Ward Bekker's video [Grafana Loki sneak peek: Generate Ad-hoc metrics from your NGINX Logs](https://www.youtube.com/watch?v=RwQlR3D4Km4), which builds a robust NGINX monitoring dashboard entirely from NGINX logs.

### Event alerting

@@ -231,7 +231,7 @@ jobs:

One option to scale the Ruler is by scaling it horizontally. However, with multiple Ruler instances running, they will need to coordinate to determine which instance will evaluate which rule. Similar to the ingesters, the Rulers establish a hash ring to divide up the responsibilities of evaluating rules.

-The possible configurations are listed fully in the configuration [docs](https://grafana.com/docs/loki/latest/configuration/), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-experimental.Ruler.enable-api`) or config file parameter. Secondly, the Ruler requires it's own ring be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring.
+The possible configurations are listed fully in the [configuration documentation](https://grafana.com/docs/loki/latest/configuration/), but in order to shard rules across multiple Rulers, the rules API must be enabled via flag (`-experimental.Ruler.enable-api`) or config file parameter. Secondly, the Ruler requires its own ring to be configured. From there the Rulers will shard and handle the division of rules automatically. Unlike ingesters, Rulers do not hand over responsibility: all rules are re-sharded randomly every time a Ruler is added to or removed from the ring.

A full sharding-enabled Ruler example is:

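A sketch of what that can look like, assuming a Consul-backed ring and GCS rule storage (the Consul address, bucket name, and Alertmanager URL are placeholders):

```yaml
ruler:
  alertmanager_url: http://alertmanager:9093
  enable_api: true
  enable_sharding: true
  ring:
    kvstore:
      store: consul
      consul:
        host: consul:8500
  rule_path: /tmp/loki/rules-scratch
  storage:
    type: gcs
    gcs:
      bucket_name: loki-rules
```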
@@ -256,7 +256,7 @@ ruler:

The Ruler supports six kinds of storage: configdb, azure, gcs, s3, swift, and local. Most kinds of storage work with the sharded Ruler configuration in an obvious way, i.e. configure all Rulers to use the same backend.

The local implementation reads the rule files off of the local filesystem. This is a read only backend that does not support the creation and deletion of rules through [the API](https://grafana.com/docs/loki/latest/api/#Ruler). Despite the fact that it reads the local filesystem this method can still be used in a sharded Ruler configuration if the operator takes care to load the same rules to every Ruler. For instance this could be accomplished by mounting a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) onto every Ruler pod.
The local implementation reads the rule files off of the local filesystem. This is a read-only backend that does not support the creation and deletion of rules through the [Ruler API](https://grafana.com/docs/loki/latest/api/#Ruler). Despite the fact that it reads the local filesystem this method can still be used in a sharded Ruler configuration if the operator takes care to load the same rules to every Ruler. For instance, this could be accomplished by mounting a [Kubernetes ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) onto every Ruler pod.

A typical local configuration might look something like:
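(A sketch; key names assumed from the Ruler's storage configuration reference.)

```yaml
ruler:
  storage:
    type: local
    local:
      directory: /tmp/loki/rules
```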
@@ -269,7 +269,7 @@
With the above configuration, the Ruler would expect the following layout:
```
/tmp/loki/rules/<tenant id>/rules1.yaml
                           /rules2.yaml
```
-Yaml files are expected to be in the [Prometheus format](#Prometheus_Compatible) but include LogQL expressions as specified in the beginning of this doc.
+YAML files are expected to be [Prometheus compatible](#Prometheus_Compatible) but include LogQL expressions, as specified at the beginning of this document.

## Future improvements

4 changes: 2 additions & 2 deletions docs/sources/best-practices/current-best-practices.md
@@ -37,7 +37,7 @@ Loki has several client options: [Promtail](https://github.com/grafana/loki/tree

Each of these comes with ways to configure what labels are applied to create log streams. But be aware of what dynamic labels might be applied.
Use the Loki series API to get an idea of what your log streams look like and see if there might be ways to reduce streams and cardinality.
-Details of the Series API can be found [here](https://grafana.com/docs/loki/latest/api/#series), or you can use [logcli](https://grafana.com/docs/loki/latest/getting-started/logcli/) to query Loki for series information.
+Series information can be queried through the [Series API](https://grafana.com/docs/loki/latest/api/#series), or you can use [logcli](https://grafana.com/docs/loki/latest/getting-started/logcli/).

In Loki 1.6.0 and newer, the `logcli series` command has an `--analyze-labels` flag specifically for debugging high-cardinality labels:

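A sketch of such an invocation (the empty matcher is assumed; any label matcher works):

```bash
logcli series '{}' --analyze-labels
```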
@@ -105,7 +105,7 @@ It's also worth noting that the batching nature of the Loki push API can lead to

## 7. Use `chunk_target_size`

-This was added earlier this year when we [released v1.3.0 of Loki](https://grafana.com/blog/2020/01/22/loki-1.3.0-released/), and we've been experimenting with it for several months. We have `chunk_target_size: 1536000` in all our environments now. This instructs Loki to try to fill all chunks to a target _compressed_ size of 1.5MB. These larger chunks are more efficient for Loki to process.
+This was added in the [Loki v1.3.0](https://grafana.com/blog/2020/01/22/loki-1.3.0-released/) release, and we've been experimenting with it for several months. We have `chunk_target_size: 1536000` in all our environments now. This instructs Loki to try to fill all chunks to a target _compressed_ size of 1.5MB. These larger chunks are more efficient for Loki to process.

A couple other config variables affect how full a chunk can get. Loki has a default `max_chunk_age` of 1h and `chunk_idle_period` of 30m to limit the amount of memory used as well as the exposure of lost logs if the process crashes.

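A sketch of where these settings live (placement under `ingester` assumed from the configuration reference):

```yaml
ingester:
  chunk_target_size: 1536000  # target compressed chunk size, ~1.5MB
  max_chunk_age: 1h           # flush chunks older than this
  chunk_idle_period: 30m      # flush chunks that stop receiving logs
```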
6 changes: 3 additions & 3 deletions docs/sources/clients/aws/_index.md
@@ -4,6 +4,6 @@ title: AWS

Sending logs from AWS services to Loki is a little different depending on what AWS service you are using:

-- [EC2](ec2/)
-- [ECS](ecs/)
-- [EKS](eks/)
+* [Elastic Compute Cloud (EC2)](ec2/)
+* [Elastic Container Service (ECS)](ecs/)
+* [Elastic Kubernetes Service (EKS)](eks/)
10 changes: 5 additions & 5 deletions docs/sources/clients/aws/ec2/_index.md
@@ -26,7 +26,7 @@ Before we start you'll need:
- The [AWS CLI][aws cli] configured (run `aws configure`).
- A Grafana instance with a Loki data source already configured; you can use the [GrafanaCloud][GrafanaCloud] free trial.

-For the sake of simplicity we'll use a GrafanaCloud Loki and Grafana instances, you can get an free account for this tutorial on our [website][GrafanaCloud], but all the steps are the same if you're running your own Open Source version of Loki and Grafana instances.
+For the sake of simplicity, we'll use Grafana Cloud Loki and Grafana instances; you can get a free account for this tutorial at [Grafana Cloud][GrafanaCloud], but all the steps are the same if you're running your own open-source Loki and Grafana instances.

To make it easy to learn, all the following instructions are manual; however, in a real setup we recommend you use provisioning tools such as [Terraform][terraform], [CloudFormation][cloud formation], [Ansible][ansible], or [Chef][chef].

@@ -97,7 +97,7 @@
```bash
chmod a+x "promtail-linux-amd64"
```

Now we're going to download the [Promtail configuration](../../promtail/) file below and edit it; don't worry, we will explain what those settings mean.
-The file is also available on [github][config gist].
+The file is also available as a gist at [cyriltovena/promtail-ec2.yaml][config gist].

```bash
curl https://github.com/raw/grafana/loki/master/docs/sources/clients/aws/ec2/promtail-ec2.yaml > ec2-promtail.yaml
```
@@ -147,11 +147,11 @@ The **clients** section allow you to target your loki instance, if you're using

Since we're running on AWS EC2, we want to use EC2 service discovery; this allows us to scrape metadata about the current instance (and even your custom tags) and attach it to our logs. This way, managing and querying logs will be much easier.

-Make sure to replace accordingly you current `region`, `access_key` and `secret_key`, alternatively you can use an [AWS Role][role] ARN, for more information about this, see the `ec2_sd_config` [documentation][ec2_sd_config].
+Make sure to replace your current `region`, `access_key`, and `secret_key` accordingly; alternatively, you can use an [AWS Role][role] ARN. For more information, see the documentation for [`ec2_sd_config`][ec2_sd_config].

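A sketch of the discovery section of that file (the region and credentials are placeholders):

```yaml
scrape_configs:
  - job_name: ec2-logs
    ec2_sd_configs:
      - region: us-east-1     # placeholder: your region
        access_key: REDACTED  # or omit and use an AWS role
        secret_key: REDACTED
```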
Finally, the [`relabeling_configs`][relabel] section has three purposes (a configuration sketch follows the list below):

-1. Selecting the labels discovered you want to attach to your targets. In our case here, we're keeping `instance_id` as instance, the tag `Name` as name and the `zone` of the instance. Make sure to check out the Prometheus [documentation][ec2_sd_config] for the full list of available labels.
+1. Selecting which discovered labels you want to attach to your targets. In our case, we're keeping `instance_id` as instance, the tag `Name` as name, and the `zone` of the instance. Make sure to check out the Prometheus [`ec2_sd_config`][ec2_sd_config] documentation for the full list of available labels.

2. Choosing where Promtail should find log files to tail; in our example, we want to include all log files that exist in `/var/log` using the glob `/var/log/**.log`. If you need to use multiple globs, you can simply add another job in your `scrape_configs`.

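A sketch of such a relabel section (meta label names follow `ec2_sd_config`; the `__path__` value mirrors the glob above):

```yaml
relabel_configs:
  - source_labels: [__meta_ec2_instance_id]
    target_label: instance
  - source_labels: [__meta_ec2_tag_Name]
    target_label: name
  - source_labels: [__meta_ec2_availability_zone]
    target_label: zone
  - replacement: /var/log/**.log
    target_label: __path__
```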
@@ -259,7 +259,7 @@ We will edit our previous config (`vi ec2-promtail.yaml`) and add the following

Note that you can use [relabeling][relabeling] to convert systemd labels to match what you want. Finally, make sure that the path of journald logs is correct; it might be different on some systems.

-> You can download the final config example in [our repository][final config].
+> You can download the final config example from our [GitHub repository][final config].

That's it! Save the config, and you can `reboot` the machine (or simply restart the service with `systemctl restart promtail.service`).

4 changes: 2 additions & 2 deletions docs/sources/clients/aws/ecs/_index.md
@@ -83,7 +83,7 @@ aws iam attach-role-policy --role-name ecsTaskExecutionRole --policy-arn "arn:aw

Amazon [Firelens][Firelens] is a log router (usually `fluentd` or `fluentbit`) that you run in the same task definition, next to your application containers, to route their logs to Loki.

-In this example we will use [fluentbit][fluentbit] (with the [Loki plugin][fluentbit loki] installed) but if you prefer [fluentd][fluentd] make sure to check the [documentation][fluentd loki].
+In this example we will use [fluentbit][fluentbit] (with the [Loki plugin][fluentbit loki] installed), but if you prefer [fluentd][fluentd], make sure to check the [fluentd output plugin][fluentd loki] documentation.

> We recommend [fluentbit][fluentbit], as it consumes fewer resources than [fluentd][fluentd].
@@ -163,7 +163,7 @@ All `options` of the `logConfiguration` will be automatically translated into [f
```
    LineFormat key_value
```

-This `OUTPUT` config will forward logs to [GrafanaCloud][GrafanaCloud] Loki, to learn more about those options make sure to read the [documentation of the Loki output][fluentbit loki].
+This `OUTPUT` config will forward logs to [GrafanaCloud][GrafanaCloud] Loki; to learn more about those options, make sure to read the [fluentbit output plugin][fluentbit loki] documentation.
We've kept some interesting and useful labels such as `container_name`, `ecs_task_definition`, `source`, and `ecs_cluster`, but you can statically add more via the `Labels` option.

> If you want to run multiple containers in your task, each of them needs a `logConfiguration` section; this gives you the opportunity to add different labels depending on the container.
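For context, a sketch of a complete generated `OUTPUT` block (the plugin name, URL variable, and key lists are assumptions based on the Loki fluentbit plugin):

```
[OUTPUT]
    Name grafana-loki
    Match *
    Url ${LOKI_URL}
    Labels {job="firelens"}
    LabelKeys container_name,ecs_task_definition,source,ecs_cluster
    LineFormat key_value
    RemoveKeys container_id,ecs_task_arn
```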
2 changes: 1 addition & 1 deletion docs/sources/clients/aws/eks/_index.md
@@ -206,7 +206,7 @@ pipelineStages:
```yaml
namespace: ""
```
-> Pipeline stages are great ways to parse log content and create labels (which are [indexed][labels post]), if you want to configure more of them, check out the [documentation][pipeline].
+> Pipeline stages are great ways to parse log content and create labels (which are [indexed][labels post]); if you want to configure more of them, check out the [pipeline][pipeline] documentation.
Now update Promtail again:
2 changes: 1 addition & 1 deletion docs/sources/clients/docker-driver/_index.md
@@ -8,7 +8,7 @@ containers and ship them to Loki. The plugin can be configured to send the logs
to a private Loki instance or [Grafana Cloud](https://grafana.com/oss/loki).

> Docker plugins are not yet supported on Windows; see the
-> [Docker docs](https://docs.docker.com/engine/extend) for more information.
+> [Docker Engine managed plugin system](https://docs.docker.com/engine/extend) documentation for more information.
Documentation on configuring the Loki Docker Driver can be found on the
[configuration page](./configuration).
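A usage sketch for installing the driver on a supported platform (the image name assumed from the plugin's Docker Hub listing):

```bash
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions
```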
2 changes: 1 addition & 1 deletion docs/sources/clients/lambda-promtail/_index.md
@@ -1,6 +1,6 @@
# Lambda Promtail

-Loki includes an [AWS SAM](https://aws.amazon.com/serverless/sam/) package template for shipping Cloudwatch logs to Loki via a set of promtails [here](https://github.com/grafana/loki/tree/master/tools/lambda-promtail). This is done via an intermediary [lambda function](https://aws.amazon.com/lambda/) which processes cloudwatch events and propagates them to a promtail instance (or set of instances behind a load balancer) via the push-api [scrape config](../promtail/configuration#loki_push_api_config).
+Loki includes an [AWS SAM](https://aws.amazon.com/serverless/sam/) package template for shipping Cloudwatch logs to Loki via a [set of promtails](https://github.com/grafana/loki/tree/master/tools/lambda-promtail). This is done via an intermediary [lambda function](https://aws.amazon.com/lambda/) which processes cloudwatch events and propagates them to a promtail instance (or set of instances behind a load balancer) via the push-api [scrape config](../promtail/configuration#loki_push_api_config).

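A sketch of the receiving promtail's scrape config (the port and static label are illustrative):

```yaml
scrape_configs:
  - job_name: push
    loki_push_api:
      server:
        http_listen_port: 3500
      labels:
        source: lambda-promtail  # illustrative static label
```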
## Uses

2 changes: 0 additions & 2 deletions docs/sources/clients/promtail/configuration.md
@@ -338,8 +338,6 @@ kubernetes_sd_configs:

[Pipeline](../pipelines/) stages are used to transform log entries and their labels. The pipeline is executed after the discovery process finishes. The `pipeline_stages` object consists of a list of stages which correspond to the items listed below.

-Stages serve several purposes, more detail can be found [here](../pipelines/).
-
In most cases, you extract data from logs with `regex` or `json` stages. The extracted data is transformed into a temporary map object. The data can then be used by Promtail, e.g. as values for `labels` or as an `output`. Additionally, any other stage aside from `docker` and `cri` can access the extracted data.

```yaml
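# A minimal sketch (illustrative stage names; the full per-stage reference
# follows in the original document): extract `level` from JSON log lines
# and promote it to a label.
pipeline_stages:
  - json:
      expressions:
        level: level
  - labels:
      level:
```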
2 changes: 1 addition & 1 deletion docs/sources/clients/promtail/scraping.md
@@ -51,7 +51,7 @@ There are different types of labels present in Promtail:
for the full list of Kubernetes meta labels.

- The `__path__` label is a special label which Promtail uses after discovery to
-figure out where the file to read is located. Wildcards are allowed, for example `/var/log/*.log` to get all files with a `log` extension in the specified directory, and `/var/log/**/*.log` for matching files and directories recursively. For a full list of options check out the [docs for the library promtail uses.](https://github.com/bmatcuk/doublestar)
+figure out where the file to read is located. Wildcards are allowed, for example `/var/log/*.log` to get all files with a `log` extension in the specified directory, and `/var/log/**/*.log` for matching files and directories recursively. For a full list of options, check out the docs for the [library](https://github.com/bmatcuk/doublestar) Promtail uses.

- The label `filename` is added for every file found in `__path__` to ensure the
uniqueness of the streams. It is set to the absolute path of the file the line
2 changes: 1 addition & 1 deletion docs/sources/community/getting-in-touch.md
@@ -7,7 +7,7 @@ If you have any questions or feedback regarding Loki:

- Ask a question on the Loki Slack channel. To invite yourself to the Grafana Slack, visit [http://slack.raintank.io/](http://slack.raintank.io/) and join the #loki channel.
- [File a GitHub issue](https://github.com/grafana/loki/issues/new) for bugs, issues and feature suggestions.
-- Send an email to [lokiproject@googlegroups.com](mailto:lokiproject@googlegroups.com), or use the [web interface](https://groups.google.com/forum/#!forum/lokiproject).
+- Send an email to [lokiproject@googlegroups.com](mailto:lokiproject@googlegroups.com), or visit the [Google Groups](https://groups.google.com/forum/#!forum/lokiproject) page.

Please file UI issues directly to the [Grafana repository](https://github.com/grafana/grafana/issues/new).

2 changes: 1 addition & 1 deletion docs/sources/getting-started/labels.md
@@ -133,7 +133,7 @@ The two previous examples use statically defined labels with a single value; how
```yaml
      __path__: /var/log/apache.log
```
-This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that allows using it for several purposes during the processing of that log line (at which point that temp data is discarded). Much more detail about this can be found [here](../../clients/promtail/pipelines/).
+This regex matches every component of the log line and extracts the value of each component into a capture group. Inside the pipeline code, this data is placed in a temporary data structure that allows using it for several purposes during the processing of that log line (at which point that temp data is discarded). Much more detail about this can be found in the [Promtail pipelines](../../clients/promtail/pipelines/) documentation.
From that regex, we will be using two of the capture groups to dynamically set two labels based on content from the log line itself:
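A sketch of the corresponding stage (assuming the regex above captures `action` and `status_code` groups):

```yaml
- labels:
    action:
    status_code:
```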
3 changes: 1 addition & 2 deletions docs/sources/operations/authentication.md
@@ -10,8 +10,7 @@ as NGINX using basic auth or an OAuth2 proxy.
Note that when using Loki in multi-tenant mode, Loki requires the HTTP header
`X-Scope-OrgID` to be set to a string identifying the tenant; the responsibility
of populating this value should be handled by the authenticating reverse proxy.
-For more information on multi-tenancy please read its
-[documentation](../multi-tenancy/).
+Read the [multi-tenancy](../multi-tenancy/) documentation for more information.

For information on authenticating Promtail, please see the docs for [how to
configure Promtail](../../clients/promtail/configuration/).
