Commit 64bc2ab

Update docs

Signed-off-by: Tamal Saha <tamal@appscode.com>
tamalsaha committed Sep 1, 2024
1 parent dc0b639 commit 64bc2ab
Showing 27 changed files with 901 additions and 231 deletions.
14 changes: 3 additions & 11 deletions content/docs/v2024.8.21/guides/druid/quickstart/overview/index.md
@@ -135,9 +135,9 @@ You can also use options like **Amazon S3**, **Google Cloud Storage**, **Azure B

Druid uses the metadata store to house various metadata about the system, but not to store the actual data. The metadata store retains all metadata essential for a Druid cluster to work. **Apache Derby** is the default metadata store for Druid; however, it is not suitable for production. **MySQL** and **PostgreSQL** are more production-suitable metadata stores.

Luckily, **PostgreSQL** and **MySQL** are both readily available in KubeDB as CRDs, and the **KubeDB** operator will automatically create a **MySQL** cluster with a database named `druid` in it by default.

If you choose to use **PostgreSQL** as the metadata storage, you can simply specify that in `spec.metadataStorage.type` of the `Druid` CR, and the KubeDB operator will deploy a `PostgreSQL` cluster for Druid to use.
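
For illustration, a minimal sketch of a `Druid` object using **PostgreSQL** as metadata storage is shown below. Only `spec.metadataStorage.type` is taken from the discussion above; the object name, namespace, version, and deletion policy are placeholder assumptions and may differ from the actual quickstart manifest.

```yaml
apiVersion: kubedb.com/v1alpha2   # assumed API version for the Druid CR
kind: Druid
metadata:
  name: druid-quickstart          # hypothetical object name
  namespace: demo
spec:
  version: 28.0.1                 # placeholder Druid version
  metadataStorage:
    type: PostgreSQL              # asks KubeDB to provision PostgreSQL instead of the default MySQL
  deletionPolicy: Delete          # placeholder
```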

[//]: # (In this tutorial, we will use a **MySQL** named `mysql-demo` in the `demo` namespace and create a database named `druid` inside it using [initialization script]&#40;/docs/guides/mysql/initialization/#prepare-initialization-scripts&#41;.)

@@ -146,8 +146,6 @@ If you choose to use **PostgreSQL** as metadata storage, you can simply mention
[//]: # ()
[//]: # (```bash)

[//]: # ($ kubectl create configmap -n demo my-init-script \)

[//]: # (--from-literal=init.sql="$&#40;curl -fsSL https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/quickstart/mysql-init-script.sql&#41;")

[//]: # (configmap/my-init-script created)
@@ -163,13 +161,7 @@ If you choose to use **PostgreSQL** as metadata storage, you can simply mention

Apache Druid uses [Apache ZooKeeper](https://zookeeper.apache.org/) (ZK) for management of the current cluster state, i.e. internal service discovery, coordination, and leader election.

Fortunately, KubeDB also has support for **ZooKeeper**, which can easily be deployed using the guide [here](/docs/v2024.8.21/guides/zookeeper/quickstart/quickstart).

In this tutorial, we will create a ZooKeeper named `zk-demo` in the `demo` namespace.
```bash
$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/druid/quickstart/zk-demo.yaml
zookeeper.kubedb.com/zk-demo created
```
Fortunately, KubeDB also has support for **ZooKeeper**, and the **KubeDB** operator will automatically create a **ZooKeeper** cluster for Druid to use.

## Create a Druid Cluster

@@ -344,13 +344,16 @@ Currently supported node types are -
data:
maxUnavailable: 1
replicas: 3
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
podTemplate:
spec:
containers:
- name: "elasticsearch"
resources:
requests:
cpu: "500m"
limits:
cpu: "600m"
memory: "1.5Gi"
storage:
accessModes:
- ReadWriteOnce
@@ -362,13 +365,16 @@
ingest:
maxUnavailable: 1
replicas: 3
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
podTemplate:
spec:
containers:
- name: "elasticsearch"
resources:
requests:
cpu: "500m"
limits:
cpu: "600m"
memory: "1.5Gi"
storage:
accessModes:
- ReadWriteOnce
@@ -380,13 +386,17 @@
master:
maxUnavailable: 1
replicas: 2
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
podTemplate:
spec:
containers:
- name: "elasticsearch"
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
storage:
accessModes:
- ReadWriteOnce
@@ -729,9 +739,9 @@ KubeDB accept following fields to set in `spec.podTemplate:`
- annotations (petset's annotation)
- labels (petset's labels)
- spec:
- args
- env
- resources
- containers
- volumes
- podPlacementPolicy
- initContainers
- imagePullSecrets
- nodeSelector
@@ -746,24 +756,24 @@ KubeDB accept following fields to set in `spec.podTemplate:`
- readinessProbe
- lifecycle

You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/ea366935d5bad69d7643906c7556923271592513/api/v1/types.go#L42-L259). Uses of some fields of `spec.podTemplate` are described below,
You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/master/api/v2/types.go#L26C1-L279C1).

Uses of some fields of `spec.podTemplate` are described below,

#### spec.podTemplate.spec.env

`spec.podTemplate.spec.env` is an `optional` field that specifies the environment variables to pass to the Elasticsearch Docker image.

You are not allowed to pass the following `env`:
- `node.name`
- `node.ingest`
- `node.master`
- `node.data`
#### spec.podTemplate.spec.tolerations

The `spec.podTemplate.spec.tolerations` is an optional field. This can be used to specify the pod's tolerations.
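
A minimal sketch of how tolerations might be set; the taint key, value, and effect below are placeholder assumptions.

```yaml
spec:
  podTemplate:
    spec:
      tolerations:
      - key: "dedicated"          # hypothetical taint key
        operator: "Equal"
        value: "elasticsearch"
        effect: "NoSchedule"      # lets the pod schedule onto nodes carrying this taint
```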

#### spec.podTemplate.spec.volumes

The `spec.podTemplate.spec.volumes` is an optional field. This can be used to provide the list of volumes that can be mounted by containers belonging to the pod.
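
A hedged sketch follows; a volume declared here is typically paired with a `volumeMounts` entry in one of the containers, and the ConfigMap name and mount path below are assumptions.

```yaml
spec:
  podTemplate:
    spec:
      volumes:
      - name: extra-config
        configMap:
          name: es-extra-config                              # hypothetical ConfigMap
      containers:
      - name: "elasticsearch"
        volumeMounts:
        - name: extra-config
          mountPath: /usr/share/elasticsearch/extra-config   # hypothetical mount path
```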

#### spec.podTemplate.spec.podPlacementPolicy

`spec.podTemplate.spec.podPlacementPolicy` is an optional field. This can be used to provide the reference of the podPlacementPolicy. This will be used by our PetSet controller to place the DB pods across regions, zones & nodes according to the policy. It utilizes the Kubernetes affinity & podTopologySpreadConstraints features to do so.
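
A sketch of referencing a placement policy, assuming the field takes a simple object reference by name (as in the offshoot-api types linked above); the policy name below is a placeholder.

```yaml
spec:
  podTemplate:
    spec:
      podPlacementPolicy:
        name: default              # assumed name of an existing placement policy object
```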


#### spec.podTemplate.spec.imagePullSecrets

@@ -790,23 +800,52 @@ spec:
serviceAccountName: es
```

#### spec.podTemplate.spec.resources
#### spec.podTemplate.spec.containers

The `spec.podTemplate.spec.containers` field can be used to provide the list of containers and their configurations for the database pod. Some of the fields are described below,

##### spec.podTemplate.spec.containers[].name
The `spec.podTemplate.spec.containers[].name` field is used to specify the name of the container; it must be a valid DNS_LABEL. Each container in a pod must have a unique name. This field cannot be updated.

##### spec.podTemplate.spec.containers[].args
`spec.podTemplate.spec.containers[].args` is an optional field. This can be used to pass additional arguments to the database container.
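
For example, extra command-line settings might be passed as shown below; the specific argument is only an assumed illustration.

```yaml
spec:
  podTemplate:
    spec:
      containers:
      - name: "elasticsearch"
        args:
        - "-Ecluster.routing.allocation.awareness.attributes=zone"   # hypothetical extra argument
```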

##### spec.podTemplate.spec.containers[].env

`spec.podTemplate.spec.containers[].env` is an `optional` field that specifies the environment variables to pass to the Elasticsearch containers.

You are not allowed to pass the following `env`:
- `node.name`
- `node.ingest`
- `node.master`
- `node.data`


```ini
Error from server (Forbidden): error when creating "./elasticsearch.yaml": admission webhook "elasticsearch.validators.kubedb.com" denied the request: environment variable node.name is forbidden to use in Elasticsearch spec
```
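
Other environment variables can be set in the usual Kubernetes way. A hedged sketch, where the variable shown is only an assumed example and not part of the forbidden list above:

```yaml
spec:
  podTemplate:
    spec:
      containers:
      - name: "elasticsearch"
        env:
        - name: ES_JAVA_OPTS             # assumed example; not one of the forbidden variables
          value: "-Xms512m -Xmx512m"
```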

##### spec.podTemplate.spec.containers[].resources

`spec.podTemplate.spec.resources` is an `optional` field. If the `spec.topology` field is not set, then it can be used to request or limit computational resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
`spec.podTemplate.spec.containers[].resources` is an `optional` field. It can be used to request or limit computational resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).

```yaml
spec:
podTemplate:
spec:
resources:
limits:
cpu: "1"
memory: 1Gi
requests:
cpu: 500m
memory: 512Mi
containers:
- name: "elasticsearch"
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
```



### spec.serviceTemplates

`spec.serviceTemplates` is an `optional` field that contains a list of serviceTemplates. The templates are identified by their `alias`. For Elasticsearch, the configurable service aliases are `primary` and `stats`.
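
A hedged sketch of overriding the `primary` service; the annotation and the NodePort service type are placeholder choices, not operator defaults.

```yaml
spec:
  serviceTemplates:
  - alias: primary
    metadata:
      annotations:
        passMe: ToService              # hypothetical annotation
    spec:
      type: NodePort                   # assumed override of the default service type
```
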
92 changes: 61 additions & 31 deletions content/docs/v2024.8.21/guides/kafka/concepts/kafka.md
@@ -34,7 +34,7 @@ info:
As with all other Kubernetes objects, a Kafka needs `apiVersion`, `kind`, and `metadata` fields. It also needs a `.spec` section. Below is an example Kafka object.

```yaml
apiVersion: kubedb.com/v1alpha2
apiVersion: kubedb.com/v1
kind: Kafka
metadata:
name: kafka
@@ -76,37 +76,37 @@ spec:
name: kafka-ca-issuer
topology:
broker:
replicas: 3
resources:
limits:
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
podTemplate:
spec:
containers:
- name: kafka
resources:
requests:
cpu: 500m
memory: 1024Mi
limits:
cpu: 700m
memory: 2Gi
storage:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storage: 10Gi
storageClassName: standard
suffix: broker
controller:
replicas: 3
resources:
limits:
memory: 1Gi
requests:
cpu: 500m
memory: 1Gi
storage:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard
suffix: controller
replicas: 1
podTemplate:
spec:
containers:
- name: kafka
resources:
requests:
cpu: 500m
memory: 1024Mi
limits:
cpu: 700m
memory: 2Gi
monitor:
agent: prometheus.io/operator
prometheus:
@@ -328,7 +328,9 @@ KubeDB accept following fields to set in `spec.podTemplate:`
- annotations (petset's annotation)
- labels (petset's labels)
- spec:
- resources
- containers
- volumes
- podPlacementPolicy
- initContainers
- containers
- imagePullSecrets
@@ -344,17 +346,26 @@ KubeDB accept following fields to set in `spec.podTemplate:`
- readinessProbe
- lifecycle

You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/39bf8b2/api/v2/types.go#L44-L279). Uses of some fields of `spec.podTemplate` are described below,
You can check out the full list [here](https://github.com/kmodules/offshoot-api/blob/master/api/v2/types.go#L26C1-L279C1).
Uses of some fields of `spec.podTemplate` are described below,

NB. If `spec.topology` is set, then `spec.podTemplate` needs to be empty. Instead, use `spec.topology.<controller/broker>.podTemplate`.

#### spec.podTemplate.spec.nodeSelector
#### spec.podTemplate.spec.tolerations

`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) .
The `spec.podTemplate.spec.tolerations` is an optional field. This can be used to specify the pod's tolerations.

#### spec.podTemplate.spec.volumes

The `spec.podTemplate.spec.volumes` is an optional field. This can be used to provide the list of volumes that can be mounted by containers belonging to the pod.

#### spec.podTemplate.spec.podPlacementPolicy

#### spec.podTemplate.spec.resources
`spec.podTemplate.spec.podPlacementPolicy` is an optional field. This can be used to provide the reference of the podPlacementPolicy. This will be used by our PetSet controller to place the DB pods across regions, zones & nodes according to the policy. It utilizes the Kubernetes affinity & podTopologySpreadConstraints features to do so.

`spec.podTemplate.spec.resources` is an optional field. This can be used to request compute resources required by the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
#### spec.podTemplate.spec.nodeSelector

`spec.podTemplate.spec.nodeSelector` is an optional field that specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). To learn more, see [here](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) .
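
A minimal sketch, assuming the cluster nodes are labeled `disktype: ssd`:

```yaml
spec:
  podTemplate:
    spec:
      nodeSelector:
        disktype: ssd                  # pods schedule only on nodes carrying this label
```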

### spec.serviceTemplates

@@ -379,6 +390,25 @@ KubeDB allows following fields to set in `spec.serviceTemplates`:

See [here](https://github.com/kmodules/offshoot-api/blob/kubernetes-1.21.1/api/v1/types.go#L237) to understand these fields in detail.


#### spec.podTemplate.spec.containers

The `spec.podTemplate.spec.containers` field can be used to provide the list of containers and their configurations for the database pod. Some of the fields are described below,

##### spec.podTemplate.spec.containers[].name
The `spec.podTemplate.spec.containers[].name` field is used to specify the name of the container; it must be a valid DNS_LABEL. Each container in a pod must have a unique name. This field cannot be updated.

##### spec.podTemplate.spec.containers[].args
`spec.podTemplate.spec.containers[].args` is an optional field. This can be used to pass additional arguments to the database container.

##### spec.podTemplate.spec.containers[].env

`spec.podTemplate.spec.containers[].env` is an optional field that specifies the environment variables to pass to the Kafka containers.

##### spec.podTemplate.spec.containers[].resources

`spec.podTemplate.spec.containers[].resources` is an optional field. This can be used to request compute resources required by containers of the database pods. To learn more, visit [here](http://kubernetes.io/docs/user-guide/compute-resources/).
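
A minimal sketch is shown below; note that when `spec.topology` is set, the same block would instead go under `spec.topology.<controller/broker>.podTemplate`, as in the example near the top of this file.

```yaml
spec:
  podTemplate:
    spec:
      containers:
      - name: kafka
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 700m
            memory: 2Gi
```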

### spec.deletionPolicy

`deletionPolicy` gives flexibility to decide whether to `nullify` (reject) the delete operation of the `Kafka` CR, or which resources KubeDB should keep or delete when you delete the `Kafka` CR. KubeDB provides the following four deletion policies: