An Ansible Runner project to deploy operators and their related Custom Resources to OpenShift.
The OpenShift Spices role allows you to install and configure the following software components using operators:

- Red Hat OpenShift Serverless (both Serving and Eventing)
- Red Hat OpenShift Pipelines
- Argo CD
- Red Hat OpenShift Service Mesh
- Strimzi Kafka
- Apache Camel K
- Eclipse Che
NOTE: Based on your configuration, you might need other tools as well.
Clone the repository and set `REPO_HOME`:

```shell
git clone https://github.com/openshift-spice-runner
export REPO_HOME=$(pwd)/openshift-spice-runner
```
All configuration is done using `$REPO_HOME/.cluster/.env`:
| Variable | Description | Default |
| --- | --- | --- |
| `RUNNER_PLAYBOOK` | The cluster configuration playbook. This file is searched for in the `$REPO_HOME/project` folder, so the filename alone suffices. | `playbook.yml` |
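As a sketch, the `.env` file could select the playbook like this (only `RUNNER_PLAYBOOK` is documented above; the value shown is the default):

```shell
# .env -- RUNNER_PLAYBOOK picks the playbook from $REPO_HOME/project
# (filename only, no directory prefix)
RUNNER_PLAYBOOK=playbook.yml
```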
The role variables can be passed using the `$REPO_HOME/cluster/env/extravars` file.
Copy the example file and update it as needed:

```shell
cp $REPO_HOME/cluster/env/extravars.example $REPO_HOME/cluster/env/extravars
```
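The extravars file is plain YAML of key/value pairs passed to the play. The variable names below are purely illustrative assumptions; consult the role documentation for the real parameter names:

```yaml
# Hypothetical extra variables -- check kameshsampath.openshift_app_spices
# for the actual configuration parameters
deploy_serverless: true
deploy_pipelines: false
```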
Check `kameshsampath.openshift_app_spices` for a list of configuration parameters.
The inventory file `$REPO_HOME/hosts` allows the play to be run across OpenShift clusters:
```ini
; Example Google Cloud
gcp ansible_host=localhost kubeconfig=/runner/inventory/gcp.kubeconfig cloud_profile=gcp

; Example AWS
aws ansible_host=localhost kubeconfig=/runner/inventory/aws.kubeconfig cloud_profile=aws

; Example Azure
azr ansible_host=localhost kubeconfig=/runner/inventory/azr.kubeconfig cloud_profile=azr
```
The example above shows sample configurations for three clouds: GCP, AWS, and Azure. Each cloud is configured using the format:

```
<cloud-alias> ansible_host=localhost kubeconfig=/runner/inventory/<kubeconfig file> cloud_profile=<gcp|aws|azr>
```
- `cloud-alias`: the host alias for the cloud, used and logged by Ansible
- `ansible_host`: always set to `localhost`, as the play runs within the Docker container and connects to the cluster using the API
- `kubeconfig`: the cloud-specific `kubeconfig` file path. Just update the file name as needed; `$REPO_HOME/inventory` is mounted as `/runner/inventory`, which makes the `/runner/inventory` path prefix the same for all clouds
Copy the example file and update it as needed:

```shell
cp $REPO_HOME/cluster/inventory/hosts.example $REPO_HOME/cluster/inventory/hosts
```
The makefile provides the following targets:

- `provision` - creates a minikube cluster with the profile name
- `configure` - runs the configured playbook against the cluster
- `unprovision` - deletes the created minikube cluster
To provision a cluster with OpenShift Service Mesh, run:

```shell
cd $REPO_HOME
cp $REPO_HOME/cluster/examples/servicemesh.yml $REPO_HOME/cluster/project/playbook.yml
make configure
```
To provision a cluster with OpenShift Serverless, run:

```shell
cd $REPO_HOME
cp $REPO_HOME/cluster/examples/serverless.yml $REPO_HOME/cluster/project/playbook.yml
make configure
```
To provision a cluster with OpenShift Pipelines, OpenShift Serverless, and Argo CD, run:

```shell
cd $REPO_HOME
cp $REPO_HOME/cluster/examples/serverless_pipelines_argocd.yml $REPO_HOME/cluster/project/playbook.yml
make configure
```
Based on the installation, the following tutorials might be of help:
Any Python module that you need to install can be added to `$REPO_HOME/requirements.txt`; this file will be processed automatically before the play runs.
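For example, a `requirements.txt` could pull in the Python clients that Ansible's Kubernetes modules rely on (the package choices below are an illustrative assumption, not mandated by this project):

```
# Python dependencies installed before the play runs (illustrative)
openshift
kubernetes
```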
Any roles or collections that need to be installed can be added to `$REPO_HOME/requirements.yml`; this file will be processed automatically before the play runs.
Please check Galaxy for the structure of the `requirements.yml` file.
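As a sketch, a `requirements.yml` pulling the role mentioned earlier might look like this (the collection entry is an illustrative assumption):

```yaml
# Roles installed from Ansible Galaxy before the play runs
roles:
  - name: kameshsampath.openshift_app_spices

# Collections, if any, go under this key (illustrative entry)
collections:
  - name: community.kubernetes
```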