
Document how Capsule integrates in an ArgoCD GitOps based environment #527

Open
bsctl opened this issue Mar 14, 2022 · 13 comments

@bsctl
Member

bsctl commented Mar 14, 2022

Describe the feature

Document how Capsule integrates in an ArgoCD GitOps based environment.

What would the new user story look like?

The cluster admin can learn how to configure an ArgoCD GitOps-based environment with Capsule.

Expected behaviour

A detailed user guide is provided in the documentation.

@bsctl bsctl added the blocked-needs-validation Issue need triage and validation label Mar 14, 2022
@ptx96 ptx96 self-assigned this Mar 16, 2022
@ptx96 ptx96 added documentation Improvements or additions to documentation and removed blocked-needs-validation Issue need triage and validation labels Mar 16, 2022
@MaxFedotov
Collaborator

What we do in order to configure ArgoCD to work with capsule and capsule-proxy:

  1. We have a dedicated cluster (we call it the management cluster) in each region where we install ArgoCD and connect all Kubernetes clusters located in that region
  2. We add system:serviceaccounts:argocd-system to userGroups in the CapsuleConfiguration CRD in each cluster (a sketch follows below)
  3. We add the following to the owners list in each Tenant CRD:
  - kind: ServiceAccount
    name: system:serviceaccount:argocd-system:argocd-manager
    proxySettings:
    - kind: IngressClasses
      operations:
      - List
    - kind: Nodes
      operations:
      - List
      - Update
    - kind: StorageClasses
      operations:
      - List
  4. When we add a new cluster to ArgoCD we use the following command:
argocd cluster add --kubeconfig=/Users/m_fedotov/argo/ed-ks2t-kubeconfig.yaml --system-namespace argocd-system ed-ks2t

where we specify that the ArgoCD service account should be created in the argocd-system namespace.

And that's all :)
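For reference, the CapsuleConfiguration change from step 2 would look roughly like this (a sketch only; everything except userGroups follows the defaults shown later in this thread):

apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  name: default
spec:
  userGroups:
    - capsule.clastix.io
    - system:serviceaccounts:argocd-system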

@prometherion
Member

Am I wrong, or does assigning the ArgoCD ServiceAccount as a Tenant Owner mean that the ArgoCD instance would be able to create Namespace resources only if they are assigned to a Tenant?

With that setup, if I understood correctly, ArgoCD would deploy only the Tenant namespaces, and the other components would have to be managed by a different instance due to the owner.namespace.capsule.clastix.io webhook, wouldn't they?

@MaxFedotov
Collaborator

Yes and no :) We have a single instance of ArgoCD which manages both user applications and cluster components (they are located in different ArgoCD projects).
The thing is that in our case every node or namespace is located in some tenant. For cluster components we use a special tenant called system, and for components installed before Capsule we create the namespaces from YAML files in which we manually add all the annotations and labels that would otherwise be added by Capsule (a sketch of such a manifest is below).
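Such a pre-created namespace manifest might look roughly like the following (a sketch; the namespace name is just an example, and any additional labels or annotations the tenant enforces would have to be copied by hand as well):

apiVersion: v1
kind: Namespace
metadata:
  # example name for a cluster-component namespace created before Capsule
  name: monitoring
  labels:
    capsule.clastix.io/tenant: system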

@rumstead

rumstead commented Apr 12, 2022

What we do in order to configure ArgoCD to work with capsule and capsule-proxy: […]

  4. When we add a new cluster to ArgoCD we use the following command:
argocd cluster add --kubeconfig=/Users/m_fedotov/argo/ed-ks2t-kubeconfig.yaml --system-namespace argocd-system ed-ks2t

where we specify that the ArgoCD service account should be created in the argocd-system namespace.

Isn't this a chicken-and-egg problem? How do you create the argocd-system namespace before Argo CD has access to deploy to the cluster?

$ argocd cluster add  --system-namespace argocd-system docker-desktop                                                                                                               
FATA[0002] Failed to create service account "argocd-manager" in namespace "argocd-system": namespaces "argocd-system" not found 

@krugerm-4c

Hi all.
We are investigating Capsule for multi-tenancy and are using ArgoCD to achieve a GitOps pattern.
In our case we deploy ArgoCD within the cluster and it manages that cluster.
What I am struggling to achieve is to declaratively create a Tenant namespace using the label defined in the Capsule documentation.

My understanding is that if I create a namespace as any user and apply the capsule.clastix.io/tenant label, it should still be picked up as a namespace under the Tenant, but from what I can see the namespace does not link back to the Tenant.

See example of YAML below:

---
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: dev
spec:
  owners:
  - name: system:serviceaccount:argocd:argocd-server
    kind: ServiceAccount
---
kind: Namespace
apiVersion: v1
metadata:
  name: dev-application
  labels:
    capsule.clastix.io/tenant: dev

Hoping someone can clarify how to link the namespace resource to the specific Tenant.

@oliverbaehler
Collaborator

Hi @krugerm-4c

Have you added system:serviceaccount:argocd:argocd-server to the default CapsuleConfiguration?

@krugerm-4c
Copy link

Hi @krugerm-4c

Have you added system:serviceaccount:argocd:argocd-server to the default CapsuleConfiguration?

I added that under the userGroups directive in the default CapsuleConfiguration, but it didn't work.

The big difference I can see is that a namespace created by a tenant owner via kubectl has ownerReferences under metadata linking it to Capsule and the specific Tenant.

If I add the below metadata to the namespace YAML manually it links up correctly:

...
metadata:
  ownerReferences:
    - apiVersion: capsule.clastix.io/v1beta1
      kind: Tenant
      name: dev
      uid: 37873e71-f302-4416-bcdf-3a653d470a28
      controller: true
      blockOwnerDeletion: true
...

But this seems like a very dirty way, as I would need to pull the uid from the cluster resource somehow, compared to just working with the documented label.

Is there maybe something else I missed?

@prometherion
Member

@krugerm-4c no, the ownerReference is set by the Capsule mutating webhook that intercepts the Namespace creation calls.

These calls are intercepted only if the user issuing them is part of the Capsule groups, so I'd say something there is not working properly due to a misconfiguration. I know that @MaxFedotov is using this without any problem: it would be great if you could share the CapsuleConfiguration content, along with an ArgoCD Application example.
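Something along these lines should dump the current content (assuming the default resource name from the Helm chart):

kubectl get capsuleconfiguration default -o yaml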

@krugerm-4c

@krugerm-4c no, the ownerReference is set by the Capsule mutating webhook that intercepts the Namespace creation calls. […]

@prometherion

See below configuration I work with:

---
# capsule-tenant.yaml
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: dev
spec:
  owners:
  - name: system:serviceaccount:argocd:argocd-server
    kind: ServiceAccount
---
# capsule-default-configuration.yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  annotations:
    capsule.clastix.io/enable-tls-configuration: 'true'
    capsule.clastix.io/mutating-webhook-configuration-name: capsule-mutating-webhook-configuration
    capsule.clastix.io/tls-secret-name: capsule-tls
    capsule.clastix.io/validating-webhook-configuration-name: capsule-validating-webhook-configuration
    meta.helm.sh/release-name: capsule
    meta.helm.sh/release-namespace: capsule-system
  name: default
spec:
  forceTenantPrefix: false
  protectedNamespaceRegex: ''
  userGroups:
    - capsule.clastix.io
    - system:serviceaccount:argocd:argocd-server
---
# argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: capsule-namespaces
  namespace: argocd
spec:
  destination:
    server: https://kubernetes.default.svc
  project: default
  source:
    path: ./
    repoURL: https://bitbucket.org/krugerm4C/capsule-poc
    targetRevision: HEAD
---

The git repository is a public one I am using for the creation of a PoC.

For context, it has a single YAML file to be synced:

---
# tenant-namespaces.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: dev-namespace-1
  labels:
    capsule.clastix.io/tenant: dev
---
kind: Namespace
apiVersion: v1
metadata:
  name: dev-namespace-2
  labels:
    capsule.clastix.io/tenant: dev
---

@prometherion
Member

There's a typo in the CapsuleConfiguration; please notice the difference.

apiVersion: capsule.clastix.io/v1alpha1
kind: CapsuleConfiguration
metadata:
  annotations:
    capsule.clastix.io/enable-tls-configuration: 'true'
    capsule.clastix.io/mutating-webhook-configuration-name: capsule-mutating-webhook-configuration
    capsule.clastix.io/tls-secret-name: capsule-tls
    capsule.clastix.io/validating-webhook-configuration-name: capsule-validating-webhook-configuration
    meta.helm.sh/release-name: capsule
    meta.helm.sh/release-namespace: capsule-system
  name: default
spec:
  forceTenantPrefix: false
  protectedNamespaceRegex: ''
  userGroups:
    - capsule.clastix.io
-    - system:serviceaccount:argocd:argocd-server
+    - system:serviceaccounts:argocd

Since we're talking about groups, you must specify the group, not the user.

Let me know if this works for you.
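A quick way to check that the webhook now matches the group, before syncing through ArgoCD, is to impersonate the service account together with its groups (a sketch, assuming you run it as a user allowed to impersonate; the namespace name is just an example):

kubectl create namespace dev-check \
  --as system:serviceaccount:argocd:argocd-server \
  --as-group system:serviceaccounts:argocd \
  --as-group system:serviceaccounts \
  --as-group system:authenticated

If the configuration is right, the created namespace should come back with the ownerReference pointing at the dev Tenant.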

@krugerm-4c

There's a typo in the CapsuleConfiguration; please notice the difference. […]

Since we're talking about groups, you must specify the group, not the user.

That worked!

The namespaces synced by ArgoCD are showing up on the Tenant custom resource.

Thanks @prometherion. I looked back at the documentation and didn't see this part about the default CapsuleConfiguration userGroups in relation to namespaces and service accounts.

@meetdpv

meetdpv commented Oct 19, 2023

One challenge that we are facing with ArgoCD is that it does not support the pre-delete hook used in Capsule. Is there an alternative for this?

@prometherion
Member

@meetdpv unfortunately it seems missing on the ArgoCD side: argoproj/argo-cd#7575.

I don't see this issue as a blocking one, since Capsule isn't meant to be installed and uninstalled repeatedly.
And even if that is required, the leftovers are essentially the self-signed certificate Secret (which can be ignored if you're running with cert-manager) and the ClusterRoles and ClusterRoleBindings named capsule-namespace-deleter and capsule-namespace-provisioner.

Upon a Helm re-installation, these will be created at runtime by Capsule.
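If you do want to remove those leftovers by hand, something along these lines should do it (a sketch; the Secret name capsule-tls and the capsule-system namespace are taken from the default configuration shown earlier in this thread, and the bindings are assumed to share the ClusterRole names):

kubectl delete clusterrole capsule-namespace-deleter capsule-namespace-provisioner
kubectl delete clusterrolebinding capsule-namespace-deleter capsule-namespace-provisioner
kubectl delete secret capsule-tls -n capsule-system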

Unless there's a specific situation, I don't see any problem with this preventing the usage of Capsule.

Is there an alternative for this?

FluxCD is widely used along with Capsule, AFAIK.
