26.0.0: upgrade the branch to docusaurus2 (#14417)
* convert img tags to markdown syntax

(cherry picked from commit 4041032)
(cherry picked from commit 40365dc)

* docu2 format redirect

(cherry picked from commit 9bfd1b1)

* docu2 sidebar

(cherry picked from commit 1ffb8a9)

* doc: escape tags in markdown in preparation for docusaurus2

(cherry picked from commit 94b4ea3)
(cherry picked from commit 589c90e)

* upgrade to docusaurus2

* put old site in website_old

* fix top nav and add apache license to stub files

* delete hidden from sidebar

* add node version

* fix spellchecker

(cherry picked from commit fadb7f3)

* update readme

* docusaurus2 test branch for 26 (#17)

* delete old website folder

* add apache license

* add apache license to css file

* update code tab syntax

(cherry picked from commit 37b725b)

* docs: fix links (#14504)

(cherry picked from commit ae85890)

* fix link color

(cherry picked from commit 1cef26d)

---------

Co-authored-by: Victoria Lim <vtlim@users.noreply.github.com>
317brian and vtlim authored Jul 24, 2023
1 parent ae85890 commit 76e01d8
Showing 38 changed files with 12,189 additions and 25,042 deletions.
1 change: 0 additions & 1 deletion .github/workflows/static-checks.yml
@@ -150,7 +150,6 @@ jobs:
run: |
(cd website && npm install)
cd website
npm run link-lint
npm run spellcheck
- name: web console
2 changes: 2 additions & 0 deletions .gitignore
@@ -35,6 +35,8 @@ integration-tests/gen-scripts/
**/.local/
**/druidapi.egg-info/
examples/quickstart/jupyter-notebooks/docker-jupyter/notebooks
website/.docusaurus/


# ignore NetBeans IDE specific files
nbproject
8 changes: 6 additions & 2 deletions README.md
@@ -84,9 +84,13 @@ Use the built-in query workbench to prototype [DruidSQL](https://druid.apache.or

### Documentation

See the [latest documentation](https://druid.apache.org/docs/latest/) for the documentation for the current official release. If you need information on a previous release, you can browse [previous releases documentation](https://druid.apache.org/docs/).

Make documentation and tutorials updates in [`/docs`](https://github.com/apache/druid/tree/master/docs) using [MarkDown](https://www.markdownguide.org/) and contribute them using a pull request.
Make documentation and tutorial updates in [`/docs`](https://github.com/apache/druid/tree/master/docs) using [Markdown](https://www.markdownguide.org/) or extended Markdown [(MDX)](https://mdxjs.com/). Then, open a pull request.

To build the site locally, you need Node 16.14 or higher. Install Docusaurus 2 with `npm install` or `yarn install` in the `website` directory, then run `npm start` or `yarn start` to launch a local build of the docs.
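The Node version gate above can be expressed as a quick script. This is a hypothetical sketch: the `meets_minimum` helper and the sample version strings are made up for illustration and are not part of the repository's build tooling.

```python
# Hypothetical check for the Node 16.14 minimum mentioned above; the helper
# name and sample versions are illustrative, not part of the repo.
def meets_minimum(version: str, minimum: str = "16.14.0") -> bool:
    """Compare dotted version strings numerically, part by part."""
    parse = lambda v: tuple(int(p) for p in v.lstrip("v").split("."))
    return parse(version) >= parse(minimum)

print(meets_minimum("v18.17.0"))  # True: new enough for Docusaurus 2
print(meets_minimum("v16.13.2"))  # False: below the 16.14 minimum
```

The lexicographic tuple comparison avoids the classic pitfall of comparing version strings directly, where `"16.9" > "16.14"`.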

If you're looking to update non-doc pages like Use Cases, those files are in the [`druid-website-src`](https://github.com/apache/druid-website-src/tree/master) repo.

### Community

4 changes: 2 additions & 2 deletions docs/design/architecture.md
@@ -29,7 +29,7 @@ Druid has a distributed architecture that is designed to be cloud-friendly and e

The following diagram shows the services that make up the Druid architecture, how they are typically organized into servers, and how queries and data flow through this architecture.

<img src="../assets/druid-architecture.png" width="800"/>
![](../assets/druid-architecture.png)

The following sections describe the components of this architecture.

@@ -107,7 +107,7 @@ example, a single day, if your datasource is partitioned by day). Within a chunk
[_segments_](../design/segments.md). Each segment is a single file, typically comprising up to a few million rows of data. Since segments are
organized into time chunks, it's sometimes helpful to think of segments as living on a timeline like the following:

<img src="../assets/druid-timeline.png" width="800" />
![](../assets/druid-timeline.png)
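The time-chunk layout described above can be sketched in a few lines. This is a hypothetical illustration of day-granularity partitioning; the rows, timestamps, and field names are made up, and real segments are binary files, not Python dicts.

```python
from datetime import datetime

# Hypothetical sketch of time-chunk partitioning at day granularity, as
# described above; rows and field names are made up for illustration.
rows = [
    {"__time": "2023-07-24T01:00:00", "value": 1},
    {"__time": "2023-07-24T13:30:00", "value": 2},
    {"__time": "2023-07-25T09:15:00", "value": 3},
]

chunks = {}
for row in rows:
    # Truncate each timestamp to its day to find the row's time chunk.
    day = datetime.fromisoformat(row["__time"]).date().isoformat()
    chunks.setdefault(day, []).append(row)

# Each key is one day-long time chunk; each chunk would hold one or more segments.
print(sorted(chunks))             # ['2023-07-24', '2023-07-25']
print(len(chunks["2023-07-24"]))  # 2
```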

A datasource may have anywhere from just a few segments, up to hundreds of thousands and even millions of segments. Each
segment is created by a MiddleManager as _mutable_ and _uncommitted_. Data is queryable as soon as it is added to
2 changes: 1 addition & 1 deletion docs/design/processes.md
@@ -43,7 +43,7 @@ Druid processes can be deployed any way you like, but for ease of deployment we
* **Query**
* **Data**

<img src="../assets/druid-architecture.png" width="800"/>
![](../assets/druid-architecture.png)

This section describes the Druid processes and the suggested Master/Query/Data server organization, as shown in the architecture diagram above.

2 changes: 1 addition & 1 deletion docs/ingestion/ingestion-spec.md
@@ -237,7 +237,7 @@ Dimension objects can have the following components:

| Field | Description | Default |
|-------|-------------|---------|
| type | Either `auto`, `string`, `long`, `float`, `double`, or `json`. For the `auto` type, Druid determines the most appropriate type for the dimension and assigns one of the following: STRING, ARRAY<STRING>, LONG, ARRAY<LONG>, DOUBLE, ARRAY<DOUBLE>, or COMPLEX<json> columns, all sharing a common 'nested' format. When Druid infers the schema with schema auto-discovery, the type is `auto`. | `string` |
| type | Either `auto`, `string`, `long`, `float`, `double`, or `json`. For the `auto` type, Druid determines the most appropriate type for the dimension and assigns one of the following: STRING, ARRAY<STRING\>, LONG, ARRAY<LONG\>, DOUBLE, ARRAY<DOUBLE\>, or COMPLEX<json\> columns, all sharing a common 'nested' format. When Druid infers the schema with schema auto-discovery, the type is `auto`. | `string` |
| name | The name of the dimension. This will be used as the field name to read from input records, as well as the column name stored in generated segments.<br /><br />Note that you can use a [`transformSpec`](#transformspec) if you want to rename columns during ingestion time. | none (required) |
| createBitmapIndex | For `string` typed dimensions, whether or not bitmap indexes should be created for the column in generated segments. Creating a bitmap index requires more storage, but speeds up certain kinds of filtering (especially equality and prefix filtering). Only supported for `string` typed dimensions. | `true` |
| multiValueHandling | Specify the type of handling for [multi-value fields](../querying/multi-value-dimensions.md). Possible values are `sorted_array`, `sorted_set`, and `array`. `sorted_array` and `sorted_set` order the array upon ingestion. `sorted_set` removes duplicates. `array` ingests data as-is | `sorted_array` |
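The fields in the table above combine into entries of a `dimensions` list. This is a hypothetical fragment: the column names are made up, and fields omitted here take the listed defaults (for example, `createBitmapIndex` defaults to `true` for `string` dimensions).

```python
import json

# Hypothetical `dimensions` list illustrating the fields in the table above;
# the column names are invented, not from a real ingestion spec.
dimensions = [
    {"type": "string", "name": "country"},   # bitmap-indexed by default
    {"type": "long", "name": "user_id"},
    {
        "type": "string",
        "name": "tags",
        "multiValueHandling": "sorted_set",  # ordered, de-duplicated arrays
    },
    {"type": "auto", "name": "payload"},     # Druid infers the column type
]

print(json.dumps({"dimensions": dimensions}, indent=2))
```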
80 changes: 59 additions & 21 deletions docs/multi-stage-query/api.md
@@ -3,6 +3,8 @@ id: api
title: SQL-based ingestion and multi-stage query task API
sidebar_label: API
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<!--
~ Licensed to the Apache Software Foundation (ASF) under one
@@ -52,9 +54,10 @@ As an experimental feature, this endpoint also accepts SELECT queries. SELECT qu
by the controller, and written into the [task report](#get-the-report-for-a-query-task) as an array of arrays. The
behavior and result format of plain SELECT queries (without INSERT or REPLACE) is subject to change.

<!--DOCUSAURUS_CODE_TABS-->
<Tabs>

<TabItem value="1" label="HTTP">

<!--HTTP-->

```
POST /druid/v2/sql/task
@@ -69,7 +72,10 @@ POST /druid/v2/sql/task
}
```

<!--curl-->
</TabItem>

<TabItem value="2" label="curl">


```bash
# Make sure you replace `username`, `password`, `your-instance`, and `port` with the values for your deployment.
@@ -83,7 +89,10 @@ curl --location --request POST 'https://<username>:<password>@<your-instance>:<p
}'
```

<!--Python-->
</TabItem>

<TabItem value="3" label="Python">


```python
import json
@@ -108,7 +117,9 @@ print(response.text)

```

<!--END_DOCUSAURUS_CODE_TABS-->
</TabItem>

</Tabs>

#### Response

@@ -132,22 +143,29 @@ You can retrieve status of a query to see if it is still running, completed succ

#### Request

<!--DOCUSAURUS_CODE_TABS-->
<Tabs>

<TabItem value="4" label="HTTP">

<!--HTTP-->

```
GET /druid/indexer/v1/task/<taskId>/status
```

<!--curl-->
</TabItem>

<TabItem value="5" label="curl">


```bash
# Make sure you replace `username`, `password`, `your-instance`, `port`, and `taskId` with the values for your deployment.
curl --location --request GET 'https://<username>:<password>@<your-instance>:<port>/druid/indexer/v1/task/<taskId>/status'
```

<!--Python-->
</TabItem>

<TabItem value="6" label="Python">


```python
import requests
@@ -163,7 +181,9 @@ response = requests.get(url, headers=headers, data=payload, auth=('USER', 'PASSW
print(response.text)
```

<!--END_DOCUSAURUS_CODE_TABS-->
</TabItem>

</Tabs>

#### Response

@@ -208,22 +228,29 @@ For an explanation of the fields in a report, see [Report response fields](#repo

#### Request

<!--DOCUSAURUS_CODE_TABS-->
<Tabs>

<TabItem value="7" label="HTTP">

<!--HTTP-->

```
GET /druid/indexer/v1/task/<taskId>/reports
```

<!--curl-->
</TabItem>

<TabItem value="8" label="curl">


```bash
# Make sure you replace `username`, `password`, `your-instance`, `port`, and `taskId` with the values for your deployment.
curl --location --request GET 'https://<username>:<password>@<your-instance>:<port>/druid/indexer/v1/task/<taskId>/reports'
```

<!--Python-->
</TabItem>

<TabItem value="9" label="Python">


```python
import requests
@@ -236,7 +263,9 @@ response = requests.get(url, headers=headers, auth=('USER', 'PASSWORD'))
print(response.text)
```

<!--END_DOCUSAURUS_CODE_TABS-->
</TabItem>

</Tabs>

#### Response

@@ -511,7 +540,7 @@ The response shows an example report for a query.
"0": 1,
"1": 1,
"2": 1
},
},
"totalMergersForUltimateLevel": 1,
"progressDigest": 1
}
@@ -587,22 +616,29 @@ The following table describes the response fields when you retrieve a report for

#### Request

<!--DOCUSAURUS_CODE_TABS-->
<Tabs>

<TabItem value="10" label="HTTP">

<!--HTTP-->

```
POST /druid/indexer/v1/task/<taskId>/shutdown
```

<!--curl-->
</TabItem>

<TabItem value="11" label="curl">


```bash
# Make sure you replace `username`, `password`, `your-instance`, `port`, and `taskId` with the values for your deployment.
curl --location --request POST 'https://<username>:<password>@<your-instance>:<port>/druid/indexer/v1/task/<taskId>/shutdown'
```

<!--Python-->
</TabItem>

<TabItem value="12" label="Python">


```python
import requests
@@ -618,7 +654,9 @@ response = requests.post(url, headers=headers, data=payload, auth=('USER', 'PASS
print(response.text)
```

<!--END_DOCUSAURUS_CODE_TABS-->
</TabItem>

</Tabs>

#### Response

