
K8s Vault docker can't start pod log says : Error initializing listener of type tcp: error loading TLS cert: open : no such file or directory #450

Open
meiry opened this issue Jan 21, 2021 · 3 comments
Labels
bug Something isn't working

Comments

@meiry

meiry commented Jan 21, 2021

Describe the bug
I created a k8s Secret and followed the manual on how to create TLS certificates.

First, I create the Secret in k8s, then I deploy Vault with an HA configuration and Raft storage, but the pods fail with an error that looks like this:

Error initializing listener of type tcp: error loading TLS cert: open : no such file or directory
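
For reference, a Secret like the one dumped below is typically created from the three PEM files before installing the chart, roughly like this (the local ./vault.* file names are assumptions; the key names match the secret data shown further down):

# sketch only; actual local file names and paths may differ
$ kubectl create namespace vault-foo
$ kubectl create secret generic vault-server-tls -n vault-foo \
    --from-file=vault.key=./vault.key \
    --from-file=vault.crt=./vault.crt \
    --from-file=vault.ca=./vault.ca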

Steps to reproduce the behavior:

  1. Install chart (an example install command is sketched below)
  2. Run vault command
  3. See error (vault logs, etc.)
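
For step 1, the install was something along these lines (the release name matches the helm get values output below; the chart repo and values file name are assumptions):

# sketch only; assumes the standard hashicorp Helm repo and a local values.yaml
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install vault hashicorp/vault -n vault-foo -f values.yaml
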
$ kubectl get secret vault-server-tls -n vault-foo -o yaml
apiVersion: v1
data:
  vault.ca: LS0t.....GSUNBVEUtLS0tLQo=
  vault.crt: LS0tLS1CRUdJ....LS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  vault.key: LS0tLS1CR....S0tDQo=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |     {"apiVersion":"v1","data":{"vault.ca":"LS0tLS1CR...FWS0tLS0tDQo="},"kind":"Secret","metadata":{"annotations":{},"creationTimestamp":"2021-01-21T09:34:31Z","managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:data":{".":{},"f:vault.ca":{},"f:vault.crt":{},"f:vault.key":{}},"f:type":{}},"manager":"kubectl.exe","operation":"Update","time":"2021-01-21T09:34:31Z"}],"name":"vault-server-tls","namespace":"vault-foo","selfLink":"/api/v1/namespaces/vault-foo/secrets/vault-server-tls","uid":"845b856e-d934-46dd-b094-ca75084542cd"},"type":"Opaque"}
  creationTimestamp: "2021-01-21T09:34:31Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:vault.ca: {}
        f:vault.crt: {}
        f:vault.key: {}
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:type: {}
    manager: kubectl.exe
    operation: Update
    time: "2021-01-21T09:39:10Z"
  name: vault-server-tls
  namespace: vault-foo
  resourceVersion: "62302347"
  selfLink: /api/v1/namespaces/vault-foo/secrets/vault-server-tls
  uid: 845b856e-d934-46dd-b094-ca75084542cd
type: Opaque
$ kubectl describe statefulset vault -n vault-foo
Name:               vault
Namespace:          vault-foo
CreationTimestamp:  Thu, 21 Jan 2021 16:42:16 +0200
Selector:           app.kubernetes.io/instance=vault,app.kubernetes.io/name=vault,component=server
Labels:             app.kubernetes.io/instance=vault
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=vault
Annotations:        meta.helm.sh/release-name: vault
                    meta.helm.sh/release-namespace: vault-foo
Replicas:           3 desired | 3 total
Update Strategy:    OnDelete
Pods Status:        3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/instance=vault
                    app.kubernetes.io/name=vault
                    component=server
                    helm.sh/chart=vault-0.9.0
  Service Account:  vault
  Containers:
   vault:
    Image:       hashicorp/vault-enterprise:1.5.0_ent
    Ports:       8200/TCP, 8201/TCP, 8202/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      /bin/sh
      -ec
    Args:
      cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
      [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
      [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
      [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
      [ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
      [ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
      [ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
      /usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl

    Limits:
      cpu:     2
      memory:  16Gi
    Requests:
      cpu:      2
      memory:   8Gi
    Liveness:   http-get https://:8200/v1/sys/health%3Fstandbyok=true delay=60s timeout=3s period=5s #success=1 #failure=2
    Readiness:  http-get https://:8200/v1/sys/health%3Fstandbyok=true&sealedcode=204&uninitcode=204 delay=5s timeout=3s period=5s #success=1 #failure=2
    Environment:
      HOST_IP:               (v1:status.hostIP)
      POD_IP:                (v1:status.podIP)
      VAULT_K8S_POD_NAME:    (v1:metadata.name)
      VAULT_K8S_NAMESPACE:   (v1:metadata.namespace)
      VAULT_ADDR:           https://127.0.0.1:8200
      VAULT_API_ADDR:       https://$(POD_IP):8200
      SKIP_CHOWN:           true
      SKIP_SETCAP:          true
      HOSTNAME:              (v1:metadata.name)
      VAULT_CLUSTER_ADDR:   https://$(HOSTNAME).vault-internal:8201
      VAULT_RAFT_NODE_ID:    (v1:metadata.name)
      HOME:                 /home/vault
      VAULT_CACERT:         /vault/userconfig/vault-server-tls/vault.crt
    Mounts:
      /home/vault from home (rw)
      /vault/audit from audit (rw)
      /vault/config from config (rw)
      /vault/data from data (rw)
      /vault/userconfig/vault-server-tls from userconfig-vault-server-tls (ro)
  Volumes:
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vault-config
    Optional:  false
   userconfig-vault-server-tls:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  vault-server-tls
    Optional:    false
   home:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
Volume Claims:
  Name:          data
  StorageClass:
  Labels:        <none>
  Annotations:   <none>
  Capacity:      10Gi
  Access Modes:  [ReadWriteOnce]
  Name:          audit
  StorageClass:
  Labels:        <none>
  Annotations:   <none>
  Capacity:      10Gi
  Access Modes:  [ReadWriteOnce]
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  40m   statefulset-controller  create Pod vault-0 in StatefulSet vault successful
  Normal  SuccessfulCreate  40m   statefulset-controller  create Pod vault-1 in StatefulSet vault successful
  Normal  SuccessfulCreate  40m   statefulset-controller  create Pod vault-2 in StatefulSet vault successful

Environment

  • Kubernetes version: AWS EKS
  • vault-helm version: v3.2.2

Chart values:

$ helm get values vault -n vault-foo
USER-SUPPLIED VALUES:
global:
  enabled: true
  tlsDisable: false
injector:
  enabled: true
  image:
    repository: hashicorp/vault-k8s
    tag: latest
  resources:
    limits:
      cpu: 250m
      memory: 256Mi
    requests:
      cpu: 250m
      memory: 256Mi
server:
  auditStorage:
    enabled: true
  extraEnvironmentVars:
    VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.crt
  extraVolumes:
  - name: vault-server-tls
    type: secret
  ha:
    enabled: true
    raft:
      config: |
        ui = true
        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          #tls_disable = 1
        }

        storage "raft" {
          path = "/vault/data"
            retry_join {
            leader_api_addr = "http://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
          retry_join {
            leader_api_addr = "http://vault-1.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_cert_file = "/vault/userconfig/vault-server-tlsr/vault.crt"
            leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
          retry_join {
            leader_api_addr = "http://vault-2.vault-internal:8200"
            leader_ca_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
            leader_client_key_file = "/vault/userconfig/vault-server-tls/vault.key"
          }
        }

        service_registration "kubernetes" {}
      enabled: true
      setNodeId: true
    replicas: 3
  image:
    repository: hashicorp/vault-enterprise
    tag: 1.5.0_ent
  livenessProbe:
    enabled: true
    initialDelaySeconds: 60
    path: /v1/sys/health?standbyok=true
  readinessProbe:
    enabled: true
    path: /v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204
  resources:
    limits:
      cpu: 2000m
      memory: 16Gi
    requests:
      cpu: 2000m
      memory: 8Gi
  standalone:
    enabled: false
ui:
  enabled: true
  externalPort: 8200
  serviceNodePort: null
  serviceType: LoadBalancer

The containers do not start, as I'm getting:

$ kubectl exec -it vault-0 -n vault-foo -- /bin/sh
Unable to use a TTY - input is not a terminal or the right kind of file
error: unable to upgrade connection: container not found ("vault")

$ kubectl exec -i -t vault-0 -n vault-foo -- ls /vault/userconfig/vault-server-tls
Unable to use a TTY - input is not a terminal or the right kind of file
error: unable to upgrade connection: container not found ("vault")

but the pods are created:

$ kubectl get pods -n vault-foo
NAME                                   READY   STATUS             RESTARTS   AGE
vault-0                                0/1     CrashLoopBackOff   5          3m36s
vault-1                                0/1     CrashLoopBackOff   5          3m36s
vault-2                                0/1     CrashLoopBackOff   5          3m36s
vault-agent-injector-d54bdc675-79ll7   1/1     Running            0          3m36s
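
The error in the title was taken from the pod logs, which are still readable even though exec fails, e.g.:

# standard kubectl; --previous shows the last crashed container's output
$ kubectl logs vault-0 -n vault-foo
$ kubectl logs vault-0 -n vault-foo --previous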

What am I missing here?

@meiry meiry added the bug Something isn't working label Jan 21, 2021
@in0rdr

in0rdr commented May 5, 2021

Hi @meiry, I really feel your pain; I just had the same error.

The issue is as follows: you specify leader_ca_cert_file (plus the client cert and key files) in the retry_join blocks, but the TCP listener has TLS enabled by default (tls_disable = false), because you commented out the #tls_disable = 1 line where you probably intended to disable TLS.

Now, the error message is actually correct: open : no such file or directory

The path for the TLS cert file is empty (open :) because it is not specified.

To fix the issue, I had to add the following lines:

        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          #tls_disable = 1
          tls_cert_file = "/vault/tls/vault.crt"
          tls_key_file  = "/vault/tls/vault.key"
          tls_client_ca_file = "/vault/tls/ca.crt"
        }

This is also explained here:

Hope this helps.

By the way, a recent change deprecated the extraVolumes configuration.

So you could simply specify the secret volume like this now 🚀:

  extraEnvironmentVars:
    VAULT_CACERT: /vault/tls/vault.ca
#  extraVolumes:
#    - type: secret
#      name: vault-server-tls
  volumes:
    - name: tls 
      secret:
        secretName: vault-server-tls 
  volumeMounts:
    - name: tls 
      mountPath: /vault/tls
      readOnly: true
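
With that mount in place, the listener stanza would then point at the /vault/tls paths, roughly like this (a sketch; file names follow the secret keys shown above):

        listener "tcp" {
          address = "[::]:8200"
          cluster_address = "[::]:8201"
          # paths below assume the /vault/tls mount and the secret key names from above
          tls_cert_file      = "/vault/tls/vault.crt"
          tls_key_file       = "/vault/tls/vault.key"
          tls_client_ca_file = "/vault/tls/vault.ca"
        }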

Happy Helming! 😬

@in0rdr

in0rdr commented May 5, 2021

Oh yes, and don't forget to change the leader_api_addr in the retry_join blocks to the https:// address once you have the TLS listener working.
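
For example, one adjusted retry_join block would look roughly like this (a sketch; paths again assume the /vault/tls mount from the previous comment):

          retry_join {
            # https now that the listener serves TLS; cert paths are assumptions
            leader_api_addr = "https://vault-0.vault-internal:8200"
            leader_ca_cert_file = "/vault/tls/vault.ca"
            leader_client_cert_file = "/vault/tls/vault.crt"
            leader_client_key_file = "/vault/tls/vault.key"
          }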

@in0rdr

in0rdr commented Jun 14, 2024

@meiry did you succeed with the listener configuration, or is this issue really still open as of today?
