docs: use repetitive numbering (grafana#2699)
Signed-off-by: Cyril Tovena <cyril.tovena@gmail.com>
sandangel authored and cyriltovena committed Oct 21, 2020
1 parent 35f7a3d commit 3dced22
Showing 16 changed files with 73 additions and 73 deletions.
30 changes: 15 additions & 15 deletions docs/sources/architecture/_index.md
@@ -50,7 +50,7 @@ processes with the following limitations:
monolithic mode with more than one replica, as each replica must be able to
access the same storage backend, and local storage is not safe for concurrent
access.
-2. Individual components cannot be scaled independently, so it is not possible
+1. Individual components cannot be scaled independently, so it is not possible
to have more read components than write components.

## Components
@@ -117,17 +117,17 @@ the hash ring. Each ingester has a state of either `PENDING`, `JOINING`,
1. `PENDING` is an Ingester's state when it is waiting for a handoff from
another ingester that is `LEAVING`.

-2. `JOINING` is an Ingester's state when it is currently inserting its tokens
+1. `JOINING` is an Ingester's state when it is currently inserting its tokens
into the ring and initializing itself. It may receive write requests for
tokens it owns.

-3. `ACTIVE` is an Ingester's state when it is fully initialized. It may receive
+1. `ACTIVE` is an Ingester's state when it is fully initialized. It may receive
both write and read requests for tokens it owns.

-4. `LEAVING` is an Ingester's state when it is shutting down. It may receive
+1. `LEAVING` is an Ingester's state when it is shutting down. It may receive
read requests for data it still has in memory.

-5. `UNHEALTHY` is an Ingester's state when it has failed to heartbeat to
+1. `UNHEALTHY` is an Ingester's state when it has failed to heartbeat to
Consul. `UNHEALTHY` is set by the distributor when it periodically checks the ring.
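The five states above boil down to a small table of which operations each state may serve. A hedged sketch in Python (illustrative only; Loki's actual ring code is written in Go, and these names are assumptions, not its real identifiers):

```python
# Which requests each ingester state accepts, per the list above.
ACCEPTS = {
    "PENDING":   {"read": False, "write": False},  # waiting for a handoff
    "JOINING":   {"read": False, "write": True},   # writes for tokens it owns
    "ACTIVE":    {"read": True,  "write": True},
    "LEAVING":   {"read": True,  "write": False},  # still serves in-memory reads
    "UNHEALTHY": {"read": False, "write": False},  # failed to heartbeat to Consul
}

def can_handle(state: str, op: str) -> bool:
    """Return True if an ingester in `state` may serve `op` ('read' or 'write')."""
    return ACCEPTS[state][op]
```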

Each log stream that an ingester receives is built up into a set of many
@@ -137,8 +137,8 @@ interval.
Chunks are compressed and marked as read-only when:

1. The current chunk has reached capacity (a configurable value).
-2. Too much time has passed without the current chunk being updated
-3. A flush occurs.
+1. Too much time has passed without the current chunk being updated
+1. A flush occurs.

Whenever a chunk is compressed and marked as read-only, a writable chunk takes
its place.
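The three cut conditions can be sketched as a single predicate. This is an illustrative Python sketch of the logic described above; the parameter names are assumptions, not Loki's configuration keys:

```python
def should_cut(chunk_bytes, capacity, last_update, max_idle, flush_requested, now):
    """Decide whether the current chunk should be compressed and marked read-only."""
    return (
        chunk_bytes >= capacity          # 1. chunk reached its configured capacity
        or now - last_update > max_idle  # 2. too long without an update
        or flush_requested               # 3. a flush occurs
    )
```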
@@ -320,12 +320,12 @@ writes and improve query performance.
To summarize, the read path works as follows:

1. The querier receives an HTTP/1 request for data.
-2. The querier passes the query to all ingesters for in-memory data.
-3. The ingesters receive the read request and return data matching the query, if
+1. The querier passes the query to all ingesters for in-memory data.
+1. The ingesters receive the read request and return data matching the query, if
any.
-4. The querier lazily loads data from the backing store and runs the query
+1. The querier lazily loads data from the backing store and runs the query
against it if no ingesters returned data.
-5. The querier iterates over all received data and deduplicates, returning a
+1. The querier iterates over all received data and deduplicates, returning a
final set of data over the HTTP/1 connection.
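The read-path steps above can be sketched in a few lines, modeling entries as `(timestamp, line)` tuples and ingesters and the store as callables. A sketch of the flow only, not Loki's querier code:

```python
def read_path(query, ingesters, store):
    # 2-3. pass the query to every ingester and gather in-memory matches
    results = [entry for ingester in ingesters for entry in ingester(query)]
    # 4. lazily query the backing store only if no ingester returned data
    if not results:
        results = store(query)
    # 5. iterate over everything received and deduplicate before returning
    deduped, seen = [], set()
    for entry in results:
        if entry not in seen:
            seen.add(entry)
            deduped.append(entry)
    return deduped
```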

## Write Path
@@ -335,9 +335,9 @@ To summarize, the read path works as follows:
To summarize, the write path works as follows:

1. The distributor receives an HTTP/1 request to store data for streams.
-2. Each stream is hashed using the hash ring.
-3. The distributor sends each stream to the appropriate ingesters and their
+1. Each stream is hashed using the hash ring.
+1. The distributor sends each stream to the appropriate ingesters and their
replicas (based on the configured replication factor).
-4. Each ingester will create a chunk or append to an existing chunk for the
+1. Each ingester will create a chunk or append to an existing chunk for the
stream's data. A chunk is unique per tenant and per labelset.
-5. The distributor responds with a success code over the HTTP/1 connection.
+1. The distributor responds with a success code over the HTTP/1 connection.
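Steps 2-3 of the write path can be sketched with the ring modeled as a plain list and SHA-256 as a stand-in hash. This is an assumption-laden illustration; Loki's real ring uses its own token scheme, not this function:

```python
import hashlib

def pick_ingesters(tenant, labelset, ring, replication_factor=3):
    # 2. hash the stream (tenant + labelset) to a position on the ring
    key = f"{tenant}/{labelset}".encode()
    position = int(hashlib.sha256(key).hexdigest(), 16) % len(ring)
    # 3. the owning ingester plus its successors receive the replicas
    return [ring[(position + i) % len(ring)] for i in range(replication_factor)]
```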
14 changes: 7 additions & 7 deletions docs/sources/clients/promtail/pipelines.md
@@ -14,14 +14,14 @@ stages:

1. **Parsing stages** parse the current log line and extract data out of it. The
extracted data is then available for use by other stages.
-2. **Transform stages** transform extracted data from previous stages.
-3. **Action stages** take extracted data from previous stages and do something
+1. **Transform stages** transform extracted data from previous stages.
+1. **Action stages** take extracted data from previous stages and do something
with them. Actions can:
1. Add or modify existing labels to the log line
-2. Change the timestamp of the log line
-3. Change the content of the log line
-4. Create a metric based on the extracted data
-4. **Filtering stages** optionally apply a subset of stages or drop entries based on some
+1. Change the timestamp of the log line
+1. Change the content of the log line
+1. Create a metric based on the extracted data
+1. **Filtering stages** optionally apply a subset of stages or drop entries based on some
condition.
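The four stage kinds compose into one ordered pass over each log entry. A hedged sketch of that control flow (not Promtail's actual stage interface, where stages operate on richer entries than plain strings):

```python
def run_pipeline(entry, stages):
    """Run an entry through parsing/transform/action/filtering stages in order."""
    for stage in stages:
        entry = stage(entry)
        if entry is None:  # a filtering stage dropped the entry
            return None
    return entry
```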

Typical pipelines will start with a parsing stage (such as a
@@ -37,7 +37,7 @@ Note that pipelines can not currently be used to deduplicate logs; Loki will
receive the same log line multiple times if, for example:

1. Two scrape configs read from the same file
-2. Duplicate log lines in a file are sent through a pipeline. Deduplication is
+1. Duplicate log lines in a file are sent through a pipeline. Deduplication is
not done.

However, Loki will perform some deduplication at query time for logs that have
4 changes: 2 additions & 2 deletions docs/sources/clients/promtail/stages/cri.md
@@ -16,8 +16,8 @@ supports the specific CRI log format. CRI specifies log lines as
space-delimited values with the following components:

1. `time`: The timestamp string of the log
-2. `stream`: Either stdout or stderr
-3. `log`: The contents of the log line
+1. `stream`: Either stdout or stderr
+1. `log`: The contents of the log line
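Splitting on the first two spaces recovers the three components. An illustrative Python sketch of that parse (the `cri` stage itself lives in Promtail and is not shown here):

```python
def parse_cri(line):
    """Split a CRI log line into its three space-delimited components."""
    time_, stream, log = line.split(" ", 2)
    return {"time": time_, "stream": stream, "log": log}
```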

No whitespace is permitted between the components. In the following example,
only the first log line can be properly formatted using the `cri` stage:
4 changes: 2 additions & 2 deletions docs/sources/clients/promtail/stages/docker.md
@@ -17,8 +17,8 @@ only supports the specific Docker log format. Each log line from Docker is
written as JSON with the following keys:

1. `log`: The content of log line
-2. `stream`: Either `stdout` or `stderr`
-3. `time`: The timestamp string of the log line
+1. `stream`: Either `stdout` or `stderr`
+1. `time`: The timestamp string of the log line
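Since each Docker log line is a JSON object with those three keys, decoding is a single `json.loads` call. An illustrative sketch, not the `docker` stage's actual implementation:

```python
import json

def parse_docker(line):
    """Decode a Docker JSON log line into its log, stream, and time keys."""
    record = json.loads(line)
    return {"log": record["log"], "stream": record["stream"], "time": record["time"]}
```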

## Examples

4 changes: 2 additions & 2 deletions docs/sources/clients/promtail/stages/tenant.md
@@ -77,7 +77,7 @@ Given the following log line:
The pipeline would:

1. Decode the JSON log
-2. Set the label `app="api"`
-3. Process the `match` stage checking if the `{app="api"}` selector matches
+1. Set the label `app="api"`
+1. Process the `match` stage checking if the `{app="api"}` selector matches
and - whenever it matches - run the sub stages. The `tenant` sub stage
would override the tenant with the value `"team-api"`.
8 changes: 4 additions & 4 deletions docs/sources/clients/promtail/troubleshooting.md
@@ -71,10 +71,10 @@ cat my.log | promtail --config.file promtail.yaml
Given the following order of events:

1. `promtail` is tailing `/app.log`
-2. `promtail` current position for `/app.log` is `100` (byte offset)
-3. `promtail` is stopped
-4. `/app.log` is truncated and new logs are appended to it
-5. `promtail` is restarted
+1. `promtail` current position for `/app.log` is `100` (byte offset)
+1. `promtail` is stopped
+1. `/app.log` is truncated and new logs are appended to it
+1. `promtail` is restarted

When `promtail` is restarted, it reads the previous position (`100`) from the
positions file. Two scenarios are then possible:
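Truncation detection of this kind typically compares the saved position against the file's current size. A hedged sketch of such a check (illustrative only, not Promtail's actual code):

```python
import os

def resume_offset(path, saved_position):
    """Pick the offset to resume tailing from after a restart.

    If the file is now smaller than the saved position, it was truncated,
    so start again from the beginning; otherwise resume where we left off.
    """
    size = os.path.getsize(path)
    return 0 if size < saved_position else saved_position
```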
4 changes: 2 additions & 2 deletions docs/sources/community/_index.md
@@ -5,5 +5,5 @@ weight: 1100
# Community

1. [Governance](governance/)
-2. [Getting in Touch](getting-in-touch/)
-3. [Contributing](contributing/)
+1. [Getting in Touch](getting-in-touch/)
+1. [Contributing](contributing/)
10 changes: 5 additions & 5 deletions docs/sources/design-documents/2020-02-Promtail-Push-API.md
@@ -56,7 +56,7 @@ rejected pushes. Users are recommended to do one of the following:

1. Have a dedicated Promtail instance for receiving pushes. This also applies to
using the syslog target.
-2. Have a separated k8s service that always resolves to the same Promtail pod,
+1. Have a separated k8s service that always resolves to the same Promtail pod,
bypassing the load balancing issue.

## Implementation
@@ -100,10 +100,10 @@ Loki uses. There are some concerns with this approach:

1. The gRPC Gateway reverse proxy will need to play nice with the existing HTTP
mux used in Promtail.
-2. We couldn't control the HTTP and Protobuf formats separately as Loki can.
-3. Log lines will be double-encoded thanks to the reverse proxy.
-4. A small overhead of using a reverse proxy in-process will be introduced.
-5. This breaks our normal pattern of writing our own shim functions; may add
+1. We couldn't control the HTTP and Protobuf formats separately as Loki can.
+1. Log lines will be double-encoded thanks to the reverse proxy.
+1. A small overhead of using a reverse proxy in-process will be introduced.
+1. This breaks our normal pattern of writing our own shim functions; may add
some cognitive overhead of having to deal with the gRPC gateway as an outlier
in the code.

6 changes: 3 additions & 3 deletions docs/sources/getting-started/_index.md
@@ -5,7 +5,7 @@ weight: 300
# Getting started with Loki

1. [Grafana](grafana/)
-2. [LogCLI](logcli/)
-3. [Labels](labels/)
-4. [Troubleshooting](troubleshooting/)
+1. [LogCLI](logcli/)
+1. [Labels](labels/)
+1. [Troubleshooting](troubleshooting/)

6 changes: 3 additions & 3 deletions docs/sources/getting-started/get-logs-into-loki.md
@@ -17,7 +17,7 @@ The following instructions should help you get started.
wget https://github.com/raw/grafana/loki/master/cmd/promtail/promtail-local-config.yaml
```

-2. Open the config file in the text editor of your choice. It should look similar to this:
+1. Open the config file in the text editor of your choice. It should look similar to this:

```
server:
@@ -42,7 +42,7 @@ scrape_configs:

The seven lines under `scrape_configs` are what send the logs that Loki generates to Loki, which then outputs them in the command line and http://localhost:3100/metrics.

-3. Copy the seven lines under `scrape_configs`, and then paste them under the original job (you can also just edit the original seven lines).
+1. Copy the seven lines under `scrape_configs`, and then paste them under the original job (you can also just edit the original seven lines).

Below is an example that sends logs from a default Grafana installation to Loki. We updated the following fields:
- job_name - This differentiates the logs collected from other log groups.
@@ -60,7 +60,7 @@ scrape_configs:
__path__: "C:/Program Files/GrafanaLabs/grafana/data/log/grafana.log"
```

-4. Enter the following command to run Promtail. Examples below assume you have put the config file in the same directory as the binary.
+1. Enter the following command to run Promtail. Examples below assume you have put the config file in the same directory as the binary.

**Windows**

2 changes: 1 addition & 1 deletion docs/sources/maintaining/_index.md
@@ -7,4 +7,4 @@ weight: 1200
This section details information for maintainers of Loki.

1. [Releasing Loki](release/)
-2. [Releasing `loki-build-image`](release-loki-build-image/)
+1. [Releasing `loki-build-image`](release-loki-build-image/)
2 changes: 1 addition & 1 deletion docs/sources/maintaining/release-loki-build-image.md
@@ -13,4 +13,4 @@ The [`loki-build-image`](https://github.com/grafana/loki/tree/master/loki-build-
1. .circleci/config.yml
1. Run `make drone` to rebuild the drone yml file with the new image version (the image version in the Makefile is used)
1. Commit your changes (else you will get a WIP tag)
-2. Run `make build-image-push`
+1. Run `make build-image-push`
34 changes: 17 additions & 17 deletions docs/sources/maintaining/release.md
@@ -20,9 +20,9 @@ page](https://github.com/settings/keys). If the GPG key for the email address
used to commit with Loki is not present, follow these instructions to add it:

1. Run `gpg --armor --export <your email address>`
-2. Copy the output.
-3. In the settings page linked above, click "New GPG Key".
-4. Copy and paste the PGP public key block.
+1. Copy the output.
+1. In the settings page linked above, click "New GPG Key".
+1. Copy and paste the PGP public key block.

#### Signing Commits and Tags by Default

@@ -50,22 +50,22 @@ export GPG_TTY=$(tty)

1. Create a new branch to update `CHANGELOG.md` and references to version
numbers across the entire repository (e.g. README.md in the project root).
-2. Modify `CHANGELOG.md` with the new version number and its release date.
-3. List all the merged PRs since the previous release. This command is helpful
+1. Modify `CHANGELOG.md` with the new version number and its release date.
+1. List all the merged PRs since the previous release. This command is helpful
for generating the list (modifying the date to the date of the previous release): `curl https://github.com/gitapi/search/issues?q=repo:grafana/loki+is:pr+"merged:>=2019-08-02" | jq -r ' .items[] | "* [" + (.number|tostring) + "](" + .html_url + ") **" + .user.login + "**: " + .title'`
-4. Go through `docs/` and find references to the previous release version and
+1. Go through `docs/` and find references to the previous release version and
update them to reference the new version.
-5. *Without creating a tag*, create a commit based on your changes and open a PR
+1. *Without creating a tag*, create a commit based on your changes and open a PR
for updating the release notes.
1. Until [852](https://github.com/grafana/loki/issues/852) is fixed, updating
Helm and Ksonnet configs needs to be done in a separate commit following
the release tag so that Helm tests pass.
-6. Merge the changelog PR.
-7. Create a new tag for the release.
+1. Merge the changelog PR.
+1. Create a new tag for the release.
1. Once this step is done, the CI will be triggered to create release
artifacts and publish them to a draft release. The tag will be made
publicly available immediately.
-2. Run the following to create the tag:
+1. Run the following to create the tag:

```bash
RELEASE=v1.2.3 # UPDATE ME to reference new release
@@ -74,28 +74,28 @@ export GPG_TTY=$(tty)
git tag -s $RELEASE -m "tagging release $RELEASE"
git push origin $RELEASE
```
-8. Watch CircleCI and wait for all the jobs to finish running.
+1. Watch CircleCI and wait for all the jobs to finish running.

## Updating Helm and Ksonnet configs

These steps should be executed after the previous section, once CircleCI has
finished running all the release jobs.

1. Run `bash ./tools/release_prepare.sh`
-2. When prompted for the release version, enter the latest tag.
-3. When prompted for new Helm version numbers, the defaults should suffice (a
+1. When prompted for the release version, enter the latest tag.
+1. When prompted for new Helm version numbers, the defaults should suffice (a
minor version bump).
-4. Commit the changes to a new branch, push, make a PR, and get it merged.
+1. Commit the changes to a new branch, push, make a PR, and get it merged.

## Publishing the Release Draft

Once the previous two steps are completed, you can publish your draft!

1. Go to the [GitHub releases page](https://github.com/grafana/loki/releases)
and find the drafted release.
-2. Edit the drafted release, copying and pasting *notable changes* from the
+1. Edit the drafted release, copying and pasting *notable changes* from the
CHANGELOG. Add a link to the CHANGELOG, noting that the full list of changes
can be found there. Refer to other releases for help with formatting this.
-3. Optionally, have other team members review the release draft so you feel
+1. Optionally, have other team members review the release draft so you feel
comfortable with it.
-4. Publish the release!
+1. Publish the release!
14 changes: 7 additions & 7 deletions docs/sources/operations/_index.md
@@ -5,11 +5,11 @@ weight: 800
# Operating Loki

1. [Upgrading](upgrade/)
-2. [Authentication](authentication/)
-3. [Observability](observability/)
-4. [Scalability](scalability/)
-5. [Storage](storage/)
+1. [Authentication](authentication/)
+1. [Observability](observability/)
+1. [Scalability](scalability/)
+1. [Storage](storage/)
1. [Table Manager](storage/table-manager/)
-2. [Retention](storage/retention/)
-6. [Multi-tenancy](multi-tenancy/)
-7. [Loki Canary](loki-canary/)
+1. [Retention](storage/retention/)
+1. [Multi-tenancy](multi-tenancy/)
+1. [Loki Canary](loki-canary/)
2 changes: 1 addition & 1 deletion docs/sources/operations/storage/_index.md
@@ -19,7 +19,7 @@ how to configure the storage and the index.
For more information:

1. [Table Manager](table-manager/)
-2. [Retention](retention/)
+1. [Retention](retention/)

## Supported Stores

2 changes: 1 addition & 1 deletion docs/sources/operations/storage/table-manager.md
@@ -193,7 +193,7 @@ to `0`.
The Table Manager can be executed in two ways:

1. Implicitly executed when Loki runs in monolithic mode (single process)
-2. Explicitly executed when Loki runs in microservices mode
+1. Explicitly executed when Loki runs in microservices mode


### Monolithic mode
