Merge pull request #23 from devsecops-workshop/master
Current Version
nexus-Six committed Jul 6, 2023
2 parents 7f91a6e + 79729ac commit 6369f0d
Showing 22 changed files with 9,809 additions and 549 deletions.
74 changes: 61 additions & 13 deletions content/1-intro/_index.md
@@ -28,29 +28,44 @@ We try to balance guided workshop steps and challenging you to use your knowledg

## Workshop Environment

To run this workshop you need a fresh, empty OpenShift 4.10 cluster with cluster-admin access. In addition you will be asked to use the `oc` command line client for some tasks.

### As Part of a Red Hat Workshop

#### For Attendees

As part of the workshop you will be provided with freshly installed OpenShift 4.10 clusters. Depending on attendee numbers we might ask you to gather in teams. Some workshop tasks must be done only once for the cluster (e.g. installing Operators), others like deploying and securing the application can be done by every team member separately in their own Project. This will be mentioned in the guide.

You'll get all access details for your lab cluster from the facilitators. This includes the URL to the OpenShift console and information about how to SSH into your bastion host to run `oc` if asked to.

### On Your Own
#### For Facilitators

The easiest way to provide this environment is through the Red Hat Demo System. Provision catalog item **Red Hat OpenShift Container Platform 4 Demo** for the attendees.

### Self Hosted

While the workshop is designed to be run on the Red Hat Demo System, you should be able to run it on any 4.10 cluster of your own.

Just make sure:

- You have cluster-admin privileges
- Sizing:
  - 3 Master Nodes (similar to AWS m5.2xlarge)
  - 2 Worker Nodes (similar to AWS m5.4xlarge)
- htpasswd authentication is enabled
- For the ACM chapter you will need AWS credentials to automatically deploy a Single Node OpenShift cluster
- Some names in the workshop may need to be customized for your environment (e.g. storage naming)
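
If your cluster doesn't have an htpasswd identity provider yet, the OAuth configuration follows this pattern. This is a minimal sketch: the secret name `htpass-secret` and the provider name `htpasswd` are placeholders for your environment.

``` yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: htpasswd          # display name shown on the login page
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpass-secret # secret in openshift-config created from an htpasswd file
```

The referenced secret can be created from an htpasswd file with `oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config`.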

This workshop was tested with these versions:

- Red Hat OpenShift: 4.12.12
- Red Hat Advanced Cluster Security for Kubernetes: 3.74.1
- Red Hat OpenShift Dev Spaces: 3.6.0
- Red Hat OpenShift Pipelines: 1.10.3
- Red Hat OpenShift GitOps: 1.8.3
- Red Hat Quay: 3.8.8
- Red Hat Quay Bridge Operator: 3.7.11
- Red Hat Data Foundation: 4.12.03
- Gitea Operator: 1.3.0
- Web Terminal: 1.7.0

## Workshop Flow

@@ -75,9 +90,42 @@ Click the generated link once to apply it to the current guide.
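
The link generator below replaces any existing `domain` query parameter instead of appending a duplicate. The same logic, sketched in Python for illustration (this is not the site's actual code), looks like this:

``` python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def replace_domain_param(url: str, domain: str) -> str:
    """Set or overwrite the 'domain' query parameter of a URL."""
    parts = urlsplit(url)
    # Drop any existing 'domain' parameter, keep everything else.
    params = [(k, v) for k, v in parse_qsl(parts.query) if k != "domain"]
    params.append(("domain", domain))
    return urlunsplit(parts._replace(query=urlencode(params)))

print(replace_domain_param("https://example.com/guide/?domain=old", "apps.cluster-x.example.com"))
# → https://example.com/guide/?domain=apps.cluster-x.example.com
```
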
{{< rawhtml >}}

<script>

function replaceURLParameter(url, parameter) {
//prefer to use l.search if you have a location/link object

console.log("ReplaceURLParameter in -> " + url + " " + parameter);

var urlparts = url.split('?');
if (urlparts.length >= 2) {

var prefix = encodeURIComponent("domain") + '=';
var pars = urlparts[1].split(/[&;]/g);

//reverse iteration as may be destructive
for (var i = pars.length; i-- > 0;) {
//idiom for string.startsWith
if (pars[i].lastIndexOf(prefix, 0) !== -1) {
pars.splice(i, 1);

}
}
pars.push("domain=" + parameter)

return urlparts[0] + (pars.length > 0 ? '?' + pars.join('&') : '');
}
else
{
url = url + "?domain=" + parameter;
}
console.log("Returning -> " + url);
return url;
}
function get_domain() {
var domainVal = document.getElementById("domain").value;
var url = replaceURLParameter(window.location.href, domainVal);

var a = document.createElement('a');
var linkText = document.createTextNode(url);
a.appendChild(linkText);
130 changes: 91 additions & 39 deletions content/10-rhacs-setup/_index.md
@@ -5,7 +5,7 @@ weight = 15

During the workshop you went through the OpenShift developer experience starting from software development using Quarkus and `odo`, moving on to automating build and deployment using Tekton pipelines and finally using GitOps for production deployments.

Now it's time to add another extremely important piece to the setup: enhancing application security in a containerized world. Using **Red Hat Advanced Cluster Security for Kubernetes**, of course!

## Install RHACS

@@ -33,8 +33,51 @@ You must install the ACS Central instance in its own project and not in the **rh
- Select **Project: rhacs-operator → Create project**
- Create a new project called **stackrox** (Red Hat recommends using **stackrox** as the project name.)
- In the Operator view under **Provided APIs** on the tile **Central** click **Create Instance**
- Accept the name **stackrox-central-services**
- Switch to the YAML view
- Replace the YAML content with the following:

``` yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
name: stackrox-central-services
namespace: stackrox
spec:
central:
db:
isEnabled: Default
persistence:
persistentVolumeClaim:
claimName: central-db
resources:
limits:
cpu: 2
memory: 6Gi
requests:
cpu: 500m
memory: 1Gi
exposure:
loadBalancer:
enabled: false
port: 443
nodePort:
enabled: false
route:
enabled: true
persistence:
persistentVolumeClaim:
claimName: stackrox-db
egress:
connectivityPolicy: Online
scanner:
analyzer:
scaling:
autoScaling: Disabled
maxReplicas: 2
minReplicas: 1
replicas: 1
scannerComponent: Enabled
```
- Click **Create**

After deployment has finished (**Status** `Conditions: Deployed, Initialized` in the Operator view on the tab **Central**) it can take some time until the application is completely up and running. One easy way to check the state is to switch to the **Developer** console view at the upper left. Then make sure you are in the **stackrox** project and open the **Topology** map. You'll see the three deployments of a **Central** instance:
@@ -70,45 +113,56 @@ To actually do and see anything you need to add a **SecuredCluster** (be it the

This is because you don't have a monitored and secured OpenShift cluster yet.

### Prepare to add Secured Clusters

Now we'll add your OpenShift cluster as a **Secured Cluster** to ACS.

First you have to generate an init bundle. It contains certificates and is used to authenticate a **SecuredCluster** to the **Central** instance, regardless of whether it's the same cluster the Central instance runs on or a remote/other cluster.

In this workshop we use the API to create the init bundle, because we work in the Web Terminal and can't upload a downloaded file to it. For the steps to create the init bundle in the ACS Portal, see the appendix.

Let's create the init bundle using the ACS **API** on the command line.

Go to your Web Terminal (if it timed out just start it again), then paste, edit and execute the following lines:

- Set the ACS API endpoint, replace `<central_url>` with the URL of your ACS portal
``` bash
export ROX_ENDPOINT=<central_url>:443
```
- Set the admin password (same as for the portal, look up the secrets again)
``` bash
export PASSWORD=<password>
```
- Give the init bundle a name
``` bash
export DATA='{"name":"my-init-bundle"}'
```
- Finally run the `curl` command against the API to create the init bundle using the variables set above
``` bash
curl -k -o bundle.json -X POST -u "admin:$PASSWORD" -H "Content-Type: application/json" --data "$DATA" https://${ROX_ENDPOINT}/v1/cluster-init/init-bundles
```
- Convert it to the needed format
``` bash
jq -r '.kubectlBundle' bundle.json | base64 -d > kube-secrets.bundle
```
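
The conversion step above can also be expressed in a few lines of Python, which makes the data flow explicit. This is a sketch assuming only the documented `kubectlBundle` field of the API response; the sample data is synthetic, not a real bundle.

``` python
import base64
import json

def extract_kube_secrets(bundle_json: str) -> str:
    """Equivalent of `jq -r '.kubectlBundle' | base64 -d`: pull the
    base64-encoded Kubernetes secrets manifest out of the API response."""
    data = json.loads(bundle_json)
    return base64.b64decode(data["kubectlBundle"]).decode("utf-8")

# Synthetic stand-in for bundle.json as returned by
# POST /v1/cluster-init/init-bundles
manifest = b"apiVersion: v1\nkind: Secret\n"
response = json.dumps({"kubectlBundle": base64.b64encode(manifest).decode("ascii")})
print(extract_kube_secrets(response))
```
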

You should now have two files in your Web Terminal session: `bundle.json` and `kube-secrets.bundle`.

The init bundle needs to be applied on all OpenShift clusters you want to secure & monitor.

{{% notice info %}}
As said, you can create an init bundle in the ACS Portal, download it and apply it from any terminal where you can run `oc` against your cluster. We did it the API way to show you how to do it and to enable you to use the Web Terminal.
{{% /notice %}}

### Prepare the Secured Cluster

For this workshop we run **Central** and **SecuredCluster** on one OpenShift cluster, i.e. we monitor and secure the same cluster the central services live on.

**Apply the init bundle**

Again in the Web Terminal:

- Run `oc create -f kube-secrets.bundle -n stackrox`, pointing to the init bundle you created via the API above.
- This will create a number of secrets; the output should be:

```
secret/collector-tls created
```

@@ -135,21 +189,19 @@ Now go to your **ACS Portal** again, after a couple of minutes you should see yo

## Configure Quay Integrations in ACS

### Create an integration to scan the Quay registry

To enable scanning of images in your Quay registry, you'll have to configure an **Integration** with valid credentials, so this is what you'll do.

Now create a new Integration:

- Access the **RHACS Portal** and go to **Platform Configuration -> Integrations -> Generic Docker Registry**.
- Click the **New integration** button
- **Integration name**: Quay local
- **Endpoint**: `https://quay-quay-quay.apps.<DOMAIN>` (replace domain if required)
- **Username**: quayadmin
- **Password**: quayadmin
- Press the **Test** button to validate the connection and press **Save** when the test is successful.

## Architecture recap

8 changes: 5 additions & 3 deletions content/12-create-policy/_index.md
@@ -21,14 +21,16 @@ These are the steps you will go through:

## Create a Custom System Policy

First create a new policy category and the system policy. In the **ACS Portal** do the following:

- **Platform Configuration->Policy Management->Policy categories tab->Create category**
  - Enter `Workshop` as **Category name**
  - Click **Create**
- **Platform Configuration->Policy Management->Policies tab->Create policy**
- **Policy Details**
  - **Name:** Workshop RHSA-2021:4904
  - **Severity:** Critical
  - **Categories:** Workshop
- Click **Next**
- **Policy Behaviour**
- **Lifecycle Stages:** Build, Deploy
Expand Down