CVAT Helm chart on Kubernetes v2.14.4 not working with S3 cloud storage #8108

Closed
2 tasks done
Enzawdo opened this issue Jul 2, 2024 · 5 comments
Labels
bug Something isn't working

Comments

Enzawdo commented Jul 2, 2024

Actions before raising this issue

  • I searched the existing issues and did not find anything similar.
  • I read/searched the docs

Steps to Reproduce

I recently updated CVAT from 2.10.2 to 2.14.4. The update has gone well in general, with all the pods up and running. But when I try to attach an on-prem S3 storage to CVAT using cloud storage, it gives me this error: "Failed to connect to proxy URL: http://localhost:4750". I don't know why it's trying to connect to a proxy; I have double-checked all the secret and access keys, which worked fine before the update. I don't know if it's an issue with the new image or something.

Expected Behavior

I expect it to attach the S3 storage as it did on v2.10.2.

Possible Solution

No response

Context

No response

Environment

CVAT version 2.14.4
Kubernetes
On-prem S3 storage
Enzawdo added the bug label on Jul 2, 2024
azhavoro (Contributor) commented Jul 5, 2024

Could you post the output of the describe command for one of the cvat-local-backend-server-xxx-yyyy pods?

Enzawdo (Author) commented Jul 5, 2024

Name: cvat-utv-backend-server-6c76b859f5-5bmdw
Namespace: utv
Priority: 0
Service Account: cvat
Node: utv1-zf274-isolated-worker-small-ckkz4/10.26.7.35
Start Time: Mon, 01 Jul 2024 14:16:13 +0200
Labels: app=cvat-app
        app.kubernetes.io/instance=cvat-utv
        app.kubernetes.io/managed-by=Helm
        app.kubernetes.io/name=cvat
        app.kubernetes.io/version=latest
        component=server
        helm.sh/chart=cvat
        pod-template-hash=6c76b859f5
        tier=backend
Annotations: k8s.ovn.org/pod-networks:
               {"default":{"ip_addresses":["10.128.27.217/23"],"mac_address":"0a:58:0a:80:1b:d9","gateway_ips":["10.128.26.1"],"ip_address":"10.128.27.21...
             k8s.v1.cni.cncf.io/network-status:
               [{
                   "name": "ovn-kubernetes",
                   "interface": "eth0",
                   "ips": [
                       "10.128.27.217"
                   ],
                   "mac": "0a:58:0a:80:1b:d9",
                   "default": true,
                   "dns": {}
               }]
             openshift.io/scc: anyuid
Status: Running
IP: 10.128.27.217
IPs:
  IP: 10.128.27.217
Controlled By: ReplicaSet/utv-backend-server-6c76b859f5
Containers:
  cvat-backend:
    Container ID: cri-o://70b183d8e08a25e6a64a6fd1441623beb1092859c592d9fece4623219424ef65fgh
    Image: cvat/server:v2.14.4
    Image ID: sha256:7c40c9673bfec7e3832855730d5de04177d6b590f3d3c9cd639492483398f8ae
    Port: 8080/TCP
    Host Port: 0/TCP
    Args:
      run
      server
    State: Running
      Started: Mon, 01 Jul 2024 14:16:16 +0200
    Ready: True
    Restart Count: 0
    Environment:
      ALLOWED_HOSTS: *
      DJANGO_MODWSGI_EXTRA_ARGS:
      IAM_OPA_BUNDLE: 1
      CVAT_REDIS_INMEM_HOST: utv-redis-master
      CVAT_REDIS_INMEM_PORT: 6379
      CVAT_REDIS_INMEM_PASSWORD: <set to the key 'password' in secret 'cvat-redis-secret'> Optional: false
      CVAT_REDIS_ONDISK_HOST: kvrocks
      CVAT_REDIS_ONDISK_PORT: 6666
      CVAT_REDIS_ONDISK_PASSWORD: <set to the key 'password' in secret 'cvat-kvrocks-secret'> Optional: false
      CVAT_POSTGRES_HOST: postgresql
      CVAT_POSTGRES_PORT: 5432
      CVAT_POSTGRES_USER: <set to the key 'username' in secret 'postgres-secret'> Optional: false
      CVAT_POSTGRES_DBNAME: <set to the key 'database' in secret 'postgres-secret'> Optional: false
      CVAT_POSTGRES_PASSWORD: <set to the key 'password' in secret 'postgres-secret'> Optional: false
      SMOKESCREEN_OPTS:
      REQUESTS_CA_BUNDLE: /etc/config/s3a/ca-bundle.crt
    Mounts:
      /etc/config/s3a from s3a-volume (rw)
      /home/django/data from cvat-backend-data (rw,path="data")
      /home/django/data/cache from cvat-backend-per-service-cache (rw)
      /home/django/keys from cvat-backend-data (rw,path="keys")
      /home/django/logs from cvat-backend-data (rw,path="logs")
      /home/django/models from cvat-backend-data (rw,path="models")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-548t5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cvat-backend-data:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: analysverket-cvat-utv-backend-data
    ReadOnly: false
  cvat-backend-per-service-cache:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:
  s3a-volume:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: trafikverket-local-ca
    Optional: false
  kube-api-access-548t5:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI: true
    ConfigMapName: openshift-service-ca.crt
    ConfigMapOptional:
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:

azhavoro (Contributor) commented Jul 5, 2024

You need to set SMOKESCREEN_OPTS to add an allow rule for your S3 CIDR,
or disable it completely by overriding the configuration file: https://github.com/cvat-ai/cvat/blob/develop/cvat/settings/base.py#L709

Please see the PR description for more details
#6362
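
For the Helm deployment above, a minimal sketch of such an override could look like the following, assuming the chart exposes an additionalEnv list for the backend (the exact values key depends on the chart version, and the CIDR is a placeholder for whatever range your on-prem S3 endpoint resolves to):

cvat:
  backend:
    additionalEnv:
      # Allow Smokescreen egress to the on-prem S3 address range.
      # Placeholder CIDR -- replace with your storage network.
      - name: SMOKESCREEN_OPTS
        value: "--allow-range=10.26.0.0/16"

After redeploying, the Environment section of the pod's describe output should show this value instead of the empty SMOKESCREEN_OPTS seen above.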

alexyao2015 commented

It seems like the linked PR was merged back in 2.5, whereas I had no issues connecting in 2.12.1, so it seems unlikely that it could be the cause of the issues mentioned.

azhavoro (Contributor) commented Jul 6, 2024

I didn't say the PR mentioned was causing this behavior; the description at the link gives the solution. But if you need to know, the PR that causes this for on-prem S3 is f234693.
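
Since the on-prem endpoint sits on a private address, another variant sometimes used is to allow private ranges wholesale rather than a single CIDR; the flag and values key below are assumptions to verify against the Smokescreen build and chart version you are running:

cvat:
  backend:
    additionalEnv:
      # Let the proxy reach private (RFC 1918) addresses, which
      # Smokescreen blocks by default as SSRF protection.
      - name: SMOKESCREEN_OPTS
        value: "--unsafe-allow-private-ranges"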
