Unable to configure ingress for K8s deployment with Helm (WSL 2) #8110

Closed
2 tasks done
bhoo-git opened this issue Jul 2, 2024 · 1 comment
Labels
need info (Need more information to investigate the issue)

bhoo-git commented Jul 2, 2024

Actions before raising this issue

  • I searched the existing issues and did not find anything similar.
  • I read/searched the docs

Steps to Reproduce

I cannot configure the ingress to reach the CVAT frontend/backend. I followed the documentation for deploying CVAT on Kubernetes with Helm. Here are the relevant logs from configuring the cluster.

helm upgrade -n cvat test -i --create-namespace ./helm-chart -f ./helm-chart/values.yaml -f ./helm-chart/values.override.yaml
Release "test" does not exist. Installing it now.
walk.go:74: found symbolic link in path: /.../helm-chart/analytics resolves to /.../components/analytics. Contents of linked file included and used
NAME: test
LAST DEPLOYED: Tue Jul  2 13:50:33 2024
NAMESPACE: cvat
STATUS: deployed
REVISION: 1
kubectl get svc -n cvat
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                        AGE
opa                        ClusterIP      10.107.35.24     <none>         8181/TCP                                       51s
test-backend-service       ClusterIP      10.103.188.162   <none>         8080/TCP                                       51s
test-clickhouse            ClusterIP      10.96.184.146    <none>         8123/TCP,9000/TCP,9004/TCP,9005/TCP,9009/TCP   51s
test-clickhouse-headless   ClusterIP      None             <none>         8123/TCP,9000/TCP,9004/TCP,9005/TCP,9009/TCP   51s
test-frontend-service      ClusterIP      10.97.202.102    <none>         80/TCP                                         51s
test-grafana               ClusterIP      10.109.141.13    <none>         80/TCP                                         51s
test-kvrocks               ClusterIP      10.97.57.10      <none>         6666/TCP                                       51s
test-postgresql            ClusterIP      10.105.62.138    <none>         5432/TCP                                       51s
test-postgresql-hl         ClusterIP      None             <none>         5432/TCP                                       51s
test-redis-headless        ClusterIP      None             <none>         6379/TCP                                       51s
test-redis-master          ClusterIP      10.106.7.207     <none>         6379/TCP                                       51s
test-traefik               LoadBalancer   10.106.226.109   192.168.49.2   80:30098/TCP,443:31231/TCP                     51s
test-vector                ClusterIP      10.109.75.97     <none>         80/TCP                                         51s
test-vector-headless       ClusterIP      None             <none>         80/TCP                                         51s
kubectl get ingress -n cvat
NAME                  CLASS          HOSTS        ADDRESS   PORTS   AGE
test-cvat             test-traefik   cvat.local             80      67s
test-cvat-analytics   test-traefik   cvat.local             80      67s
kubectl describe ingress test-cvat -n cvat
Name:             test-cvat
Labels:           app.kubernetes.io/instance=test
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=cvat
                  app.kubernetes.io/version=latest
                  helm.sh/chart=cvat
Namespace:        cvat
Address:          
Ingress Class:    test-traefik
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  cvat.local  
              /api         test-backend-service:8080 (10.244.2.109:8080)
              /admin       test-backend-service:8080 (10.244.2.109:8080)
              /static      test-backend-service:8080 (10.244.2.109:8080)
              /django-rq   test-backend-service:8080 (10.244.2.109:8080)
              /profiler    test-backend-service:8080 (10.244.2.109:8080)
              /            test-frontend-service:80 (10.244.2.110:80)
Annotations:  meta.helm.sh/release-name: test
              meta.helm.sh/release-namespace: cvat
Events:       <none>
ping cvat.local
PING cvat.local (192.168.49.2) 56(84) bytes of data.
^C
--- cvat.local ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3094ms

Expected Behavior

I expect cvat.local to resolve and respond when I send requests to it via ping or curl.
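
For example, a request along these lines (illustrative only; the exact endpoint doesn't matter) should return an HTTP response through the ingress:

# expect the frontend HTML from the / route
curl -v http://cvat.local/
# expect a JSON response from the backend /api route
curl -v http://cvat.local/api/server/about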

Possible Solution

No response

Context

This is the values.override.yaml file I used for context:

traefik:
  enabled: true
  service:
    externalIPs:
      - "192.168.49.2" #add minikube ip when testing locally.
ingress:
  enabled: true

The chart's default values.yaml file looks like this:

# Default values for cvat.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.


imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

cvat:
  backend:
    labels: {}
    annotations: {}
    resources: {}
    affinity: {}
    tolerations: []
    additionalEnv: []
    additionalVolumes: []
    additionalVolumeMounts: []
    # -- The service account the backend pods will use to interact with the Kubernetes API
    serviceAccount:
      name: default

    initializer:
      labels: {}
      annotations: {}
      resources: {}
      affinity: {}
      tolerations: []
      additionalEnv: []
      additionalVolumes: []
      additionalVolumeMounts: []
    server:
      replicas: 1
      labels: {}
      annotations: {}
      resources: {}
      affinity: {}
      tolerations: []
      envs:
        ALLOWED_HOSTS: "*"
      additionalEnv: []
      additionalVolumes: []
      additionalVolumeMounts: []
    worker:
      export:
        replicas: 2
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      import:
        replicas: 2
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      annotation:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      webhooks:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      qualityreports:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      analyticsreports:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
    utils:
      replicas: 1
      labels: {}
      annotations: {}
      resources: {}
      affinity: {}
      tolerations: []
      additionalEnv: []
      additionalVolumes: []
      additionalVolumeMounts: []
    replicas: 1
    image: cvat/server
    tag: dev
    imagePullPolicy: Always
    permissionFix:
      enabled: true
    service:
      annotations:
        traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
      spec:
        type: ClusterIP
        ports:
          - port: 8080
            targetPort: 8080
            protocol: TCP
            name: http
    defaultStorage:
        enabled: true
#        storageClassName: default
#        accessModes:
#         - ReadWriteMany
        size: 20Gi
    disableDistinctCachePerService: false
  frontend:
    replicas: 1
    image: cvat/ui
    tag: dev
    imagePullPolicy: Always
    labels: {}
    #  test: test
    annotations: {}
    # test.io/test: test
    resources: {}
    affinity: {}
    tolerations: []
    # nodeAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #     nodeSelectorTerms:
    #     - matchExpressions:
    #       - key: kubernetes.io/e2e-az-name
    #         operator: In
    #         values:
    #         - e2e-az1
    #         - e2e-az2
    additionalEnv: []
    # Example:
    #  - name: volume-from-secret
    # - name: TEST
    #  value: "test"
    additionalVolumes: []
    # Example(assumes that pvc was already created):
    # - name: tmp
    #   persistentVolumeClaim:
    #       claimName: tmp
    additionalVolumeMounts: []
    # Example:
    # -   mountPath: /tmp
    #     name: tmp
    #     subPath: test
    service:
      type: ClusterIP
      ports:
        - port: 80
          targetPort: 80
          protocol: TCP
          name: http
  opa:
    replicas: 1
    image: openpolicyagent/opa
    tag: 0.63.0
    imagePullPolicy: IfNotPresent
    labels: {}
    #  test: test
    annotations: {}
    # test.io/test: test
    resources: {}
    affinity: {}
    tolerations: []
    # nodeAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #     nodeSelectorTerms:
    #     - matchExpressions:
    #       - key: kubernetes.io/e2e-az-name
    #         operator: In
    #         values:
    #         - e2e-az1
    #         - e2e-az2
    additionalEnv: []
    # Example:
    #  - name: volume-from-secret
    # - name: TEST
    #  value: "test"
    additionalVolumes: []
    # Example(assumes that pvc was already created):
    # - name: tmp
    #   persistentVolumeClaim:
    #       claimName: tmp
    additionalVolumeMounts: []
    # Example:
    # -   mountPath: /tmp
    #     name: tmp
    #     subPath: test
    composeCompatibleServiceName: true # Sets service name to opa in order to be compatible with Docker Compose. Necessary because changing IAM_OPA_DATA_URL via environment variables in current images. Hinders multiple deployment due to duplicate name
    service:
      type: ClusterIP
      ports:
        - port: 8181
          targetPort: 8181
          protocol: TCP
          name: http

  kvrocks:
    enabled: true
    external:
      host: kvrocks-external.localdomain
    existingSecret: "cvat-kvrocks-secret"
    secret:
      create: true
      name: cvat-kvrocks-secret
      password: cvat_kvrocks
    image: apache/kvrocks
    tag: 2.7.0
    imagePullPolicy: IfNotPresent
    labels: {}
    #  test: test
    annotations: {}
    # test.io/test: test
    resources: {}
    affinity: {}
    tolerations: []
    nodeAffinity: {}
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #     nodeSelectorTerms:
    #     - matchExpressions:
    #       - key: kubernetes.io/e2e-az-name
    #         operator: In
    #         values:
    #         - e2e-az1
    #         - e2e-az2
    additionalEnv: []
    # Example:
    # - name: TEST
    #   value: "test"
    additionalVolumes: []
    # Example(assumes that pvc was already created):
    # - name: tmp
    #   persistentVolumeClaim:
    #       claimName: tmp
    additionalVolumeMounts: []
    # Example:
    # -   mountPath: /tmp
    #     name: tmp
    #     subPath: test
    defaultStorage:
      enabled: true
#     storageClassName: default
#     accessModes:
#       - ReadWriteOnce
      size: 100Gi

postgresql:
  #See https://github.com/bitnami/charts/blob/master/bitnami/postgresql/ for more info
  enabled: true # false for external db
  external:
    # Ignored if an empty value is set
    host: ""
    # Ignored if an empty value is set
    port: ""
  # If not external following config will be applied by default
  auth:
    existingSecret: "{{ .Release.Name }}-postgres-secret"
    username: cvat
    database: cvat
  service:
    ports:
      postgresql: 5432
  secret:
    create: true
    name: "{{ .Release.Name }}-postgres-secret"
    password: cvat_postgresql
    postgres_password: cvat_postgresql_postgres
    replication_password: cvat_postgresql_replica

# https://artifacthub.io/packages/helm/bitnami/redis
redis:
  enabled: true
  external:
    host: 127.0.0.1
  architecture: standalone
  auth:
    existingSecret: "cvat-redis-secret"
    existingSecretPasswordKey: password
  secret:
    create: true
    name: cvat-redis-secret
    password: cvat_redis
  # TODO: persistence options

nuclio:
  enabled: false
# See https://github.com/nuclio/nuclio/blob/master/hack/k8s/helm/nuclio/values.yaml for more info
#  registry:
#    loginUrl: someurl
#    credentials:
#      username: someuser
#      password: somepass

analytics:
  # Set clickhouse.enabled to false if you disable analytics or use an external database
  enabled: true
  clickhouseDb: cvat
  clickhouseUser: user
  clickhousePassword: user
  clickhouseHost: "{{ .Release.Name }}-clickhouse"
  clickhousePort: 8123

vector:
  envFrom:
    - secretRef:
        name: cvat-analytics-secret
  existingConfigMaps:
    - cvat-vector-config
  dataDir: "/vector-data-dir"
  containerPorts:
    - name: http
      containerPort: 80
      protocol: TCP
  service:
    ports:
      - name: http
        port: 80
        protocol: TCP
  image:
    tag: "0.26.0-alpine"

clickhouse:
  # Set to false in case of external db usage
  enabled: true
  shards: 1
  replicaCount: 1
  extraEnvVarsSecret: cvat-analytics-secret
  initdbScriptsSecret: cvat-clickhouse-init
  auth:
    username: user
    existingSecret: cvat-analytics-secret
    existingSecretKey: CLICKHOUSE_PASSWORD
  # Consider enabling zookeeper if a distributed configuration is used
  zookeeper:
    enabled: false

grafana:
  envFromSecret: cvat-analytics-secret
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
      - name: 'ClickHouse'
        type: 'grafana-clickhouse-datasource'
        isDefault: true
        jsonData:
          defaultDatabase: ${CLICKHOUSE_DB}
          port: ${CLICKHOUSE_PORT}
          server: ${CLICKHOUSE_HOST}
          username: ${CLICKHOUSE_USER}
          tlsSkipVerify: false
          protocol: http
        secureJsonData:
          password: ${CLICKHOUSE_PASSWORD}
        editable: false
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards
  dashboardsConfigMaps:
    default: "cvat-grafana-dashboards"
  plugins:
    - grafana-clickhouse-datasource
  grafana.ini:
    server:
      root_url: https://cvat.local/analytics
    dashboards:
      default_home_dashboard_path: /var/lib/grafana/dashboards/default/all_events.json
    users:
      viewers_can_edit: true
    auth:
      disable_login_form: true
      disable_signout_menu: true
    auth.anonymous:
      enabled: true
      org_role: Admin
    auth.basic:
      enabled: false

ingress:
  ## @param ingress.enabled Enable ingress resource generation for CVAT
  ##
  enabled: false
  ## @param ingress.hostname Host for the ingress resource
  ##
  hostname: cvat.local
  ## @param ingress.annotations Additional annotations for the Ingress resource.
  ##
  ## e.g:
  ## annotations:
  ##   kubernetes.io/ingress.class: nginx
  ##
  annotations: {}
  ## @param ingress.className IngressClass that will be used to implement the Ingress (Kubernetes 1.18+)
  ## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster
  ## ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
  ##
  className: ""
  ## @param ingress.tls Enable TLS configuration for the host defined at `ingress.hostname` parameter
  ## TLS certificates will be retrieved from a TLS secret defined in tlsSecretName parameter
  ##
  tls: false
  ## @param ingress.tlsSecretName Specifies the name of the secret containing TLS certificates. Ignored if ingress.tls is false
  ##
  tlsSecretName: ingress-tls-cvat

traefik:
  enabled: false
  logs:
    general:
      format: json
    access:
      enabled: true
      format: json
      fields:
        general:
          defaultmode: drop
          names:
            ClientHost: keep
            DownstreamContentSize: keep
            DownstreamStatus: keep
            Duration: keep
            RequestHost: keep
            RequestMethod: keep
            RequestPath: keep
            RequestPort: keep
            RequestProtocol: keep
            RouterName: keep
            StartUTC: keep
  providers:
    kubernetesIngress:
      allowEmptyServices: true

smokescreen:
  opts: ''

I have verified that the external IP for the k8s cluster and the DNS entry are configured correctly. Here are the logs:

minikube ip
192.168.49.2
kubectl config current-context
minikube

The /etc/hosts file (auto-generated by WSL) contains:
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1       localhost
127.0.1.1       mylaptop

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.49.2 cvat.local

Environment

OS: Windows, specifically WSL 2 (WSL version 2.2.4.0)
Minikube Version: v1.33.1
Docker version:

Server: Docker Desktop
 Engine:
  Version:          24.0.6
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.7
  Git commit:       1a79695
  Built:            Mon Sep  4 12:32:16 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.22
  GitCommit:        8165feabfdfe38c65b599c4993d227328c231fca
 runc:
  Version:          1.1.8
  GitCommit:        v1.1.8-0-g82f18fe
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
bhoo-git added the bug (Something isn't working) label on Jul 2, 2024
azhavoro removed the bug (Something isn't working) label on Jul 5, 2024
azhavoro (Contributor) commented Jul 5, 2024

@bhoo-git Why would you expect to get an ICMP ping response?
What is the curl error?
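
As an illustration (host and IP taken from the logs above), an HTTP request like this would show the actual error instead of relying on ICMP:

curl -v http://cvat.local/
# or bypass /etc/hosts name resolution entirely:
curl -v --resolve cvat.local:80:192.168.49.2 http://cvat.local/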

azhavoro added the need info (Need more information to investigate the issue) label on Jul 5, 2024