
Umbrella Issue for KubeEdge tests enhancement #5562

Open
4 tasks
Shelley-BaoYue opened this issue Apr 24, 2024 · 12 comments
Labels: kind/feature (Categorizes issue or PR as related to a new feature.)
@Shelley-BaoYue (Collaborator)

What would you like to be added/modified:

As KubeEdge undergoes version iterations, the testing work should be updated and enhanced in step with the new requirements. This includes the following tasks:

@Shelley-BaoYue Shelley-BaoYue added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 24, 2024
@1Shubham7 (Contributor)

Hey @Shelley-BaoYue,
This is Shubham. I am a contributor at CNCF Kyverno and have previously contributed to CNCF ORAS. I currently work as a technical writing intern at a startup called GeeksforGeeks, where I write articles about DevOps and Kubernetes. I want to apply for LFX term 2 with KubeEdge and am really interested in this issue. I have some experience writing E2E tests, and I am learning more while working on some E2E tests for KubeEdge (#5043). I have decent knowledge of Golang and have been contributing to CNCF projects for eight months now. I will keep exploring the project, work on improving the test coverage, and apply for LFX with this issue.

Thanks

@Stazz0 commented May 13, 2024

Hello @Shelley-BaoYue,
My name is Ashish Bhawel, and I'm a final-year Information Technology Engineering student at JEC, Jabalpur, India. I'm writing to express my interest in contributing to the KubeEdge project on improving test coverage (#5562) as part of the LFX Term 2 program.

While I'm a little new to KubeEdge itself, I have some experience in writing E2E tests. I have started working on some E2E tests for KubeEdge (#5043) to deepen my understanding of the project. Additionally, I have decent knowledge of Golang, Kubernetes, and KubeEdge.

I'm highly motivated to participate in LFX Term 2 and believe my skills align well with this project's requirements. I'm eager to learn more about the specific testing needs and how I can best contribute.

I've also applied for Google Summer of Code (GSoC) this year, demonstrating my commitment to open-source contributions.

I'll continue to explore the project and work on improving the test coverage. I plan to formally submit my LFX application focusing on this issue shortly.

Thank you
Sincerely,
Ashish Bhawel

@Shelley-BaoYue (Collaborator, Author)

Hi @1Shubham7, thanks for your contributions these days. I've chosen you as the LFX term 2 test cases enhancement mentee. How about getting started on this work? Personally, I think it will be easier to start with unit tests 😄

@1Shubham7 (Contributor)

I am truly grateful to you, @Shelley-BaoYue, for selecting me as an LFX mentee. This opportunity means a lot to me, and I will work hard to exceed your expectations. Thanks for acknowledging my contributions as well. Please keep guiding me during the mentorship. I will get started with writing unit tests today; I have an exam today, so I will finish that up first and then begin.

Once again, thanks a lot Shelley, it really means a lot to me :)

@Shelley-BaoYue (Collaborator, Author)

Haha, let's get started! Feel free to communicate here if you have any questions.

@1Shubham7 (Contributor)

/assign

@1Shubham7 (Contributor) commented Jul 10, 2024

Hey @Shelley-BaoYue,
I think I have written a good number of unit tests covering multiple packages, and I wanted to start on E2E tests now. I was working on one and trying to run the E2E suite when I ran into this issue: whenever I run make e2e (even without changing the current code), it creates a cluster, installs the CRDs, and so on, and then this keeps running forever:

+ kubectl get nodes
+ grep edge-node
+ grep -q -w Ready
+ true
+ sleep 3

Here is the complete output. Please advise on how to proceed.

make e2e
tests/scripts/execute.sh 
/workspaces/44/kubeedge
++ dirname /workspaces/44/kubeedge/tests/scripts/compile.sh
+ cd /workspaces/44/kubeedge/tests/scripts
++ pwd
+ workdir=/workspaces/44/kubeedge/tests/scripts
+ cd /workspaces/44/kubeedge/tests/scripts
+ cd ../
+ echo /workspaces/44/kubeedge/tests
/workspaces/44/kubeedge/tests
+ compilemodule=
+ '[' 0 -eq 0 ']'
+ echo 'compiling all tests !!'
compiling all tests !!
+ ginkgo build e2e
Compiled e2e.test
+ ginkgo build e2e_keadm
Compiled e2e_keadm.test
+ ginkgo build e2e_edgesite
Compiled e2e_edgesite.test
+++ dirname /workspaces/44/kubeedge/hack/local-up-kubeedge.sh
++ cd /workspaces/44/kubeedge/hack
++ pwd
+ KUBEEDGE_ROOT=/workspaces/44/kubeedge/hack/..
+ ENABLE_DAEMON=true
+ LOG_DIR=/tmp
+ LOG_LEVEL=2
+ TIMEOUT=60s
+ PROTOCOL=WebSocket
+ CONTAINER_RUNTIME=containerd
+ KIND_IMAGE=kindest/node:v1.26.0
+ echo -e 'The installation of the cni plugin will overwrite the cni config file. Use export CNI_CONF_OVERWRITE=false to disable it.'
The installation of the cni plugin will overwrite the cni config file. Use export CNI_CONF_OVERWRITE=false to disable it.
+ [[ x == \x ]]
+ CLUSTER_NAME=test
+ export 'CLUSTER_CONTEXT=--name test'
+ CLUSTER_CONTEXT='--name test'
+ [[ true = false ]]
+ trap cleanup ERR
+ trap cleanup INT
+ cleanup
+ echo 'Cleaning up...'
Cleaning up...
+ uninstall_kubeedge
+ [[ -n '' ]]
+ [[ -n '' ]]
+ sudo rm -rf /tmp/etc/kubeedge /tmp/var/lib/kubeedge
+ echo 'Running kind: [kind delete cluster --name test]'
Running kind: [kind delete cluster --name test]
+ kind delete cluster --name test
Deleting cluster "test" ...
+ source /workspaces/44/kubeedge/hack/../hack/lib/golang.sh
++ YES=y
++ NO=n
++ ALL_BINARIES_AND_TARGETS=(cloudcore:cloud/cmd/cloudcore admission:cloud/cmd/admission keadm:keadm/cmd/keadm edgecore:edge/cmd/edgecore edgesite-agent:edgesite/cmd/edgesite-agent edgesite-server:edgesite/cmd/edgesite-server csidriver:cloud/cmd/csidriver iptablesmanager:cloud/cmd/iptablesmanager edgemark:edge/cmd/edgemark controllermanager:cloud/cmd/controllermanager)
++ IFS=' '
++ read -ra KUBEEDGE_ALL_TARGETS
+++ kubeedge::golang::get_all_targets
+++ local -a targets
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ targets+=("${bt##*:}")
+++ echo cloud/cmd/cloudcore cloud/cmd/admission keadm/cmd/keadm edge/cmd/edgecore edgesite/cmd/edgesite-agent edgesite/cmd/edgesite-server cloud/cmd/csidriver cloud/cmd/iptablesmanager edge/cmd/edgemark cloud/cmd/controllermanager
++ IFS=' '
++ read -ra KUBEEDGE_ALL_BINARIES
+++ kubeedge::golang::get_all_binaries
+++ local -a binaries
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ for bt in "${ALL_BINARIES_AND_TARGETS[@]}"
+++ binaries+=("${bt%%:*}")
+++ echo cloudcore admission keadm edgecore edgesite-agent edgesite-server csidriver iptablesmanager edgemark controllermanager
++ KUBEEDGE_ALL_CROSS_GOARMS=(8 7)
++ KUBEEDGE_ALL_SMALL_BINARIES=(edgecore)
++ read -ra KUBEEDGE_CLOUD_TESTCASES
+++ kubeedge::golang::get_cloud_test_dirs
+++ local findDirs
+++ local -a dirArray
+++ cd /workspaces/44/kubeedge/hack/..
++++ find -L ./cloud -not '(' '(' -path './cloud/test/integration/*' ')' -prune ')' -name '*_test.go' -print0
++++ xargs -0n1 dirname
++++ LC_ALL=C
++++ sort -u
+++ findDirs='./cloud/cmd/cloudcore/app
./cloud/pkg/admissioncontroller
./cloud/pkg/cloudhub/common/model
./cloud/pkg/cloudhub/dispatcher
./cloud/pkg/cloudhub/servers/httpserver
./cloud/pkg/cloudhub/session
./cloud/pkg/cloudstream
./cloud/pkg/common/messagelayer
./cloud/pkg/common/util
./cloud/pkg/controllermanager/nodegroup
./cloud/pkg/devicecontroller/manager
./cloud/pkg/dynamiccontroller/application
./cloud/pkg/edgecontroller/manager
./cloud/pkg/policycontroller
./cloud/pkg/policycontroller/manager
./cloud/pkg/router/messagelayer
./cloud/pkg/router/provider/eventbus
./cloud/pkg/router/utils
./cloud/pkg/synccontroller
./cloud/pkg/synccontroller/config
./cloud/pkg/taskmanager/util'
+++ dirArray=(${findDirs// /})
+++ echo ./cloud/cmd/cloudcore/app ./cloud/pkg/admissioncontroller ./cloud/pkg/cloudhub/common/model ./cloud/pkg/cloudhub/dispatcher ./cloud/pkg/cloudhub/servers/httpserver ./cloud/pkg/cloudhub/session ./cloud/pkg/cloudstream ./cloud/pkg/common/messagelayer ./cloud/pkg/common/util ./cloud/pkg/controllermanager/nodegroup ./cloud/pkg/devicecontroller/manager ./cloud/pkg/dynamiccontroller/application ./cloud/pkg/edgecontroller/manager ./cloud/pkg/policycontroller ./cloud/pkg/policycontroller/manager ./cloud/pkg/router/messagelayer ./cloud/pkg/router/provider/eventbus ./cloud/pkg/router/utils ./cloud/pkg/synccontroller ./cloud/pkg/synccontroller/config ./cloud/pkg/taskmanager/util
++ read -ra KUBEEDGE_EDGE_TESTCASES
+++ kubeedge::golang::get_edge_test_dirs
+++ local findDirs
+++ dirArray=()
+++ local -a dirArray
+++ cd /workspaces/44/kubeedge/hack/..
++++ find ./edge/pkg -name '*_test.go'
++++ xargs '-I{}' dirname '{}'
++++ uniq
+++ findDirs='./edge/pkg/devicetwin/dtclient
./edge/pkg/devicetwin/dtcontext
./edge/pkg/devicetwin/dtmanager
./edge/pkg/devicetwin
./edge/pkg/devicetwin/dttype
./edge/pkg/devicetwin/dtmodule
./edge/pkg/devicetwin/dtcommon
./edge/pkg/devicetwin
./edge/pkg/eventbus/dao
./edge/pkg/eventbus/mqtt
./edge/pkg/eventbus/common/util
./edge/pkg/edgehub
./edge/pkg/edgehub/clients/quicclient
./edge/pkg/edgehub/clients/wsclient
./edge/pkg/edgehub/certificate
./edge/pkg/edgehub
./edge/pkg/edgehub/config
./edge/pkg/edgehub/common/http
./edge/pkg/edgehub/common/certutil
./edge/pkg/metamanager/dao
./edge/pkg/metamanager
./edge/pkg/metamanager/metaserver/auth
./edge/pkg/metamanager/metaserver/agent
./edge/pkg/metamanager/metaserver/kubernetes/storage
./edge/pkg/servicebus/dao
./edge/pkg/servicebus/util'
+++ dirArray=(${findDirs// /})
+++ echo ./edge/pkg/devicetwin/dtclient ./edge/pkg/devicetwin/dtcontext ./edge/pkg/devicetwin/dtmanager ./edge/pkg/devicetwin ./edge/pkg/devicetwin/dttype ./edge/pkg/devicetwin/dtmodule ./edge/pkg/devicetwin/dtcommon ./edge/pkg/devicetwin ./edge/pkg/eventbus/dao ./edge/pkg/eventbus/mqtt ./edge/pkg/eventbus/common/util ./edge/pkg/edgehub ./edge/pkg/edgehub/clients/quicclient ./edge/pkg/edgehub/clients/wsclient ./edge/pkg/edgehub/certificate ./edge/pkg/edgehub ./edge/pkg/edgehub/config ./edge/pkg/edgehub/common/http ./edge/pkg/edgehub/common/certutil ./edge/pkg/metamanager/dao ./edge/pkg/metamanager ./edge/pkg/metamanager/metaserver/auth ./edge/pkg/metamanager/metaserver/agent ./edge/pkg/metamanager/metaserver/kubernetes/storage ./edge/pkg/servicebus/dao ./edge/pkg/servicebus/util
++ read -ra KUBEEDGE_KEADM_TESTCASES
+++ kubeedge::golang::get_keadm_test_dirs
+++ cd /workspaces/44/kubeedge/hack/..
++++ find -L ./keadm -name '*_test.go' -print
++++ xargs -n1 dirname
++++ uniq
+++ findDirs='./keadm/cmd/keadm/app/cmd/ctl/get
./keadm/cmd/keadm/app/cmd/ctl/util
./keadm/cmd/keadm/app/cmd/ctl/restart
./keadm/cmd/keadm/app/cmd/beta
./keadm/cmd/keadm/app/cmd/util
./keadm/cmd/keadm/app/cmd/cloud
./keadm/cmd/keadm/app/cmd/debug
./keadm/cmd/keadm/app/cmd/helm
./keadm/cmd/keadm/app/cmd/common'
+++ dirArray=(${findDirs// /})
+++ echo ./keadm/cmd/keadm/app/cmd/ctl/get ./keadm/cmd/keadm/app/cmd/ctl/util ./keadm/cmd/keadm/app/cmd/ctl/restart ./keadm/cmd/keadm/app/cmd/beta ./keadm/cmd/keadm/app/cmd/util ./keadm/cmd/keadm/app/cmd/cloud ./keadm/cmd/keadm/app/cmd/debug ./keadm/cmd/keadm/app/cmd/helm ./keadm/cmd/keadm/app/cmd/common
++ read -ra KUBEEDGE_PKG_TESTCASES
+++ kubeedge::golang::get_pkg_test_dirs
+++ cd /workspaces/44/kubeedge/hack/..
++++ find -L ./pkg -name '*_test.go' -print
++++ uniq
++++ xargs -n1 dirname
+++ findDirs='./pkg/util/pass-through
./pkg/util
./pkg/util/validation
./pkg/apis/componentconfig/cloudcore/v1alpha1/validation
./pkg/apis/componentconfig/edgecore/v1alpha2/validation
./pkg/apis/componentconfig/edgecore/v1alpha1/validation
./pkg/image
./pkg/metaserver
./pkg/metaserver/util'
+++ dirArray=(${findDirs// /})
+++ echo ./pkg/util/pass-through ./pkg/util ./pkg/util/validation ./pkg/apis/componentconfig/cloudcore/v1alpha1/validation ./pkg/apis/componentconfig/edgecore/v1alpha2/validation ./pkg/apis/componentconfig/edgecore/v1alpha1/validation ./pkg/image ./pkg/metaserver ./pkg/metaserver/util
++ KUBEEDGE_ALL_TESTCASES=(${KUBEEDGE_CLOUD_TESTCASES[@]} ${KUBEEDGE_EDGE_TESTCASES[@]} ${KUBEEDGE_KEADM_TESTCASES[@]} ${KUBEEDGE_PKG_TESTCASES[@]})
++ readonly KUBEEDGE_ALL_TESTCASES
++ ALL_COMPONENTS_AND_GETTESTDIRS_FUNCTIONS=(cloud::::kubeedge::golang::get_cloud_test_dirs edge::::kubeedge::golang::get_edge_test_dirs keadm::::kubeedge::golang::get_keadm_test_dirs pkg::::kubeedge::golang::get_pkg_test_dirs)
+ source /workspaces/44/kubeedge/hack/../hack/lib/install.sh
+ install_cr
+ attempt_num=0
+ max_attempts=5
+ '[' 0 -lt 5 ']'
+ [[ containerd = \d\o\c\k\e\r ]]
+ [[ containerd = \c\r\i\-\o ]]
+ [[ containerd = \i\s\u\l\a\d ]]
+ echo 'No need to download container runtime'
No need to download container runtime
+ break
+ '[' 0 -eq 5 ']'
+ check_prerequisites
+ kubeedge::golang::verify_golang_version
++ go version
+ echo 'go detail version: go version go1.22.3 linux/amd64'
go detail version: go version go1.22.3 linux/amd64
++ go version
++ sed s/go//g
++ awk -F ' ' '{printf $3}'
+ goversion=1.22.3
+ echo 'go version: 1.22.3'
go version: 1.22.3
++ echo 1.22.3
++ awk -F . '{printf $1}'
+ X=1
++ echo 1.22.3
++ awk -F . '{printf $2}'
+ Y=22
+ '[' 1 -lt 1 ']'
+ '[' 22 -lt 20 ']'
+ check_kubectl
+ echo 'checking kubectl'
checking kubectl
+ command -v kubectl
+ [[ 0 -ne 0 ]]
+ echo -n 'found kubectl, '
found kubectl, + kubectl version --client
Client Version: v1.30.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
+ check_kind
+ echo 'checking kind'
checking kind
+ command -v kind
+ [[ 0 -ne 0 ]]
+ echo -n 'found kind, version: '
found kind, version: + kind version
kind v0.21.0 go1.22.3 linux/amd64
+ [[ containerd = \d\o\c\k\e\r ]]
+ [[ containerd = \c\r\i\-\o ]]
+ [[ containerd = \i\s\u\l\a\d ]]
+ [[ containerd = \c\o\n\t\a\i\n\e\r\d ]]
+ verify_containerd_installed
+ command -v containerd
+ set -eE
+ build_cloudcore
+ echo 'building the cloudcore...'
building the cloudcore...
+ make -C /workspaces/44/kubeedge/hack/.. WHAT=cloudcore
make[1]: Entering directory '/workspaces/44/kubeedge'
hack/make-rules/build_with_container.sh hack/make-rules/build.sh cloudcore
start building inside container
Unable to find image 'kubeedge/build-tools:1.21.11-ke1' locally
1.21.11-ke1: Pulling from kubeedge/build-tools
41af1b5f0f51: Pull complete 
109b019e7259: Pull complete 
72f6490ea5db: Pull complete 
200222cf14c0: Pull complete 
47ac1fe4e0cf: Pull complete 
Digest: sha256:4838f686771b613c8e47a0064672b7bf677a58abe84b079a64682a700f8c98a3
Status: Downloaded newer image for kubeedge/build-tools:1.21.11-ke1
go detail version: go version go1.21.11 linux/amd64
go version: 1.21.11
building github.com/kubeedge/kubeedge/cloud/cmd/cloudcore
+ go build -o /kubeedge/_output/local/bin/cloudcore -gcflags= -ldflags '-s -w -buildid= -X github.com/kubeedge/kubeedge/pkg/version.buildDate=2024-07-10T12:00:15Z -X github.com/kubeedge/kubeedge/pkg/version.gitCommit=7b71e0fe80fcd917d769ba94a56642b2f6e3a91f -X github.com/kubeedge/kubeedge/pkg/version.gitTreeState=dirty -X github.com/kubeedge/kubeedge/pkg/version.gitVersion=v1.17.0-beta.0.87+7b71e0fe80fcd9-dirty -X github.com/kubeedge/kubeedge/pkg/version.gitMajor=1 -X github.com/kubeedge/kubeedge/pkg/version.gitMinor=17+' github.com/kubeedge/kubeedge/cloud/cmd/cloudcore
+ set +x
make[1]: Leaving directory '/workspaces/44/kubeedge'
+ build_edgecore
+ echo 'building the edgecore...'
building the edgecore...
+ make -C /workspaces/44/kubeedge/hack/.. WHAT=edgecore
make[1]: Entering directory '/workspaces/44/kubeedge'
hack/make-rules/build_with_container.sh hack/make-rules/build.sh edgecore
start building inside container
go detail version: go version go1.21.11 linux/amd64
go version: 1.21.11
building github.com/kubeedge/kubeedge/edge/cmd/edgecore
+ go build -o /kubeedge/_output/local/bin/edgecore -gcflags= -ldflags '-s -w -buildid= -X github.com/kubeedge/kubeedge/pkg/version.buildDate=2024-07-10T12:03:36Z -X github.com/kubeedge/kubeedge/pkg/version.gitCommit=7b71e0fe80fcd917d769ba94a56642b2f6e3a91f -X github.com/kubeedge/kubeedge/pkg/version.gitTreeState=dirty -X github.com/kubeedge/kubeedge/pkg/version.gitVersion=v1.17.0-beta.0.87+7b71e0fe80fcd9-dirty -X github.com/kubeedge/kubeedge/pkg/version.gitMajor=1 -X github.com/kubeedge/kubeedge/pkg/version.gitMinor=17+' github.com/kubeedge/kubeedge/edge/cmd/edgecore
+ set +x
make[1]: Leaving directory '/workspaces/44/kubeedge'
+ kind_up_cluster
+ echo 'Running kind: [kind create cluster --name test]'
Running kind: [kind create cluster --name test]
+ kind create cluster --name test --image kindest/node:v1.26.0
Creating cluster "test" ...
 ✓ Ensuring node image (kindest/node:v1.26.0) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-test"
You can now use your cluster with:

kubectl cluster-info --context kind-test

Thanks for using kind! 😊
+ export KUBECONFIG=/home/codespace/.kube/config
+ KUBECONFIG=/home/codespace/.kube/config
+ check_control_plane_ready
+ echo 'wait the control-plane ready...'
wait the control-plane ready...
+ kubectl wait --for=condition=Ready node/test-control-plane --timeout=60s
node/test-control-plane condition met
+ kubectl delete daemonset kindnet -nkube-system
daemonset.apps "kindnet" deleted
+ kubectl create ns kubeedge
namespace/kubeedge created
+ create_device_crd
+ echo 'creating the device crd...'
creating the device crd...
+ kubectl apply -f /workspaces/44/kubeedge/hack/../build/crds/devices/devices_v1beta1_device.yaml
customresourcedefinition.apiextensions.k8s.io/devices.devices.kubeedge.io created
+ kubectl apply -f /workspaces/44/kubeedge/hack/../build/crds/devices/devices_v1beta1_devicemodel.yaml
customresourcedefinition.apiextensions.k8s.io/devicemodels.devices.kubeedge.io created
+ create_objectsync_crd
+ echo 'creating the objectsync crd...'
creating the objectsync crd...
+ kubectl apply -f /workspaces/44/kubeedge/hack/../build/crds/reliablesyncs/cluster_objectsync_v1alpha1.yaml
customresourcedefinition.apiextensions.k8s.io/clusterobjectsyncs.reliablesyncs.kubeedge.io created
+ kubectl apply -f /workspaces/44/kubeedge/hack/../build/crds/reliablesyncs/objectsync_v1alpha1.yaml
customresourcedefinition.apiextensions.k8s.io/objectsyncs.reliablesyncs.kubeedge.io created
+ create_rule_crd
+ echo 'creating the rule crd...'
creating the rule crd...
+ kubectl apply -f /workspaces/44/kubeedge/hack/../build/crds/router/router_v1_rule.yaml
customresourcedefinition.apiextensions.k8s.io/rules.rules.kubeedge.io created
+ kubectl apply -f /workspaces/44/kubeedge/hack/../build/crds/router/router_v1_ruleEndpoint.yaml
customresourcedefinition.apiextensions.k8s.io/ruleendpoints.rules.kubeedge.io created
+ create_operation_crd
+ echo 'creating the operation crd...'
creating the operation crd...
+ kubectl apply -f /workspaces/44/kubeedge/hack/../build/crds/operations/operations_v1alpha1_nodeupgradejob.yaml
customresourcedefinition.apiextensions.k8s.io/nodeupgradejobs.operations.kubeedge.io created
+ kubectl apply -f /workspaces/44/kubeedge/hack/../build/crds/operations/operations_v1alpha1_imageprepulljob.yaml
customresourcedefinition.apiextensions.k8s.io/imageprepulljobs.operations.kubeedge.io created
+ create_serviceaccountaccess_crd
+ echo 'creating the saaccess crd...'
creating the saaccess crd...
+ kubectl apply -f /workspaces/44/kubeedge/hack/../build/crds/policy/policy_v1alpha1_serviceaccountaccess.yaml
customresourcedefinition.apiextensions.k8s.io/serviceaccountaccesses.policy.kubeedge.io created
+ generate_streamserver_cert
+ CA_PATH=/tmp/etc/kubeedge/ca
+ CERT_PATH=/tmp/etc/kubeedge/certs
+ STREAM_KEY_FILE=/tmp/etc/kubeedge/certs/stream.key
+ STREAM_CSR_FILE=/tmp/etc/kubeedge/certs/stream.csr
+ STREAM_CRT_FILE=/tmp/etc/kubeedge/certs/stream.crt
+ K8SCA_FILE=/tmp/etc/kubernetes/pki/ca.crt
+ K8SCA_KEY_FILE=/tmp/etc/kubernetes/pki/ca.key
+ streamsubject=/C=CN/ST=Zhejiang/L=Hangzhou/O=KubeEdge
+ [[ ! -d /tmp/etc/kubernetes/pki ]]
+ mkdir -p /tmp/etc/kubernetes/pki
+ [[ ! -d /tmp/etc/kubeedge/ca ]]
+ mkdir -p /tmp/etc/kubeedge/ca
+ [[ ! -d /tmp/etc/kubeedge/certs ]]
+ mkdir -p /tmp/etc/kubeedge/certs
+ docker cp test-control-plane:/etc/kubernetes/pki/ca.crt /tmp/etc/kubernetes/pki/ca.crt
Successfully copied 3.07kB to /tmp/etc/kubernetes/pki/ca.crt
+ docker cp test-control-plane:/etc/kubernetes/pki/ca.key /tmp/etc/kubernetes/pki/ca.key
Successfully copied 3.58kB to /tmp/etc/kubernetes/pki/ca.key
+ cp /tmp/etc/kubernetes/pki/ca.crt /tmp/etc/kubeedge/ca/streamCA.crt
+ SUBJECTALTNAME='subjectAltName = IP.1:127.0.0.1'
+ echo subjectAltName = IP.1:127.0.0.1
+ touch /home/codespace/.rnd
+ openssl genrsa -out /tmp/etc/kubeedge/certs/stream.key 2048
+ openssl req -new -key /tmp/etc/kubeedge/certs/stream.key -subj /C=CN/ST=Zhejiang/L=Hangzhou/O=KubeEdge -out /tmp/etc/kubeedge/certs/stream.csr
+ openssl x509 -req -in /tmp/etc/kubeedge/certs/stream.csr -CA /tmp/etc/kubernetes/pki/ca.crt -CAkey /tmp/etc/kubernetes/pki/ca.key -CAcreateserial -out /tmp/etc/kubeedge/certs/stream.crt -days 5000 -sha256 -extfile /tmp/server-extfile.cnf
Certificate request self-signature ok
subject=C = CN, ST = Zhejiang, L = Hangzhou, O = KubeEdge
+ start_cloudcore
+ CLOUD_CONFIGFILE=/workspaces/44/kubeedge/hack/../_output/local/bin/cloudcore.yaml
+ CLOUD_BIN=/workspaces/44/kubeedge/hack/../_output/local/bin/cloudcore
+ /workspaces/44/kubeedge/hack/../_output/local/bin/cloudcore --defaultconfig
+ sed -i '/cloudStream:/{n;s/false/true/;}' /workspaces/44/kubeedge/hack/../_output/local/bin/cloudcore.yaml
+ [[ WebSocket = \Q\U\I\C ]]
+ sed -i '/dynamicController:/{n;s/false/true/;}' /workspaces/44/kubeedge/hack/../_output/local/bin/cloudcore.yaml
+ sed -i -e 's|kubeConfig: .*|kubeConfig: /home/codespace/.kube/config|g' -e 's|/var/lib/kubeedge/|/tmp&|g' -e 's|tlsCAFile: .*|tlsCAFile: /etc/kubeedge/ca/cloudhub/rootCA.crt|g' -e 's|tlsCAKeyFile: .*|tlsCAKeyFile: /etc/kubeedge/ca/cloudhub/rootCA.key|g' -e 's|tlsCertFile: .*|tlsCertFile: /etc/kubeedge/certs/cloudhub/server.crt|g' -e 's|tlsPrivateKeyFile: .*|tlsPrivateKeyFile: /etc/kubeedge/certs/cloudhub/server.key|g' -e 's|/etc/|/tmp/etc/|g' -e '/router:/{n;N;N;N;N;s/false/true/}' /workspaces/44/kubeedge/hack/../_output/local/bin/cloudcore.yaml
+ CLOUDCORE_LOG=/tmp/cloudcore.log
+ echo 'start cloudcore...'
start cloudcore...
+ CLOUDCORE_PID=39442
+ true
+ sleep 3
+ nohup sudo /workspaces/44/kubeedge/hack/../_output/local/bin/cloudcore --config=/workspaces/44/kubeedge/hack/../_output/local/bin/cloudcore.yaml --v=2
+ kubectl get secret -nkubeedge
+ grep -q tokensecret
No resources found in kubeedge namespace.
+ true
+ sleep 3
+ grep -q tokensecret
+ kubectl get secret -nkubeedge
No resources found in kubeedge namespace.
+ true
+ sleep 3
+ kubectl get secret -nkubeedge
+ grep -q tokensecret
+ break
+ [[ containerd = \c\o\n\t\a\i\n\e\r\d ]]
+ install_cni_plugins
+ CNI_DOWNLOAD_ADDR=https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
+ CNI_PKG=cni-plugins-linux-amd64-v1.1.1.tgz
+ CNI_CONF_OVERWRITE=true
+ echo -e 'The installation of the cni plugin will overwrite the cni config file. Use export CNI_CONF_OVERWRITE=false to disable it.'
The installation of the cni plugin will overwrite the cni config file. Use export CNI_CONF_OVERWRITE=false to disable it.
+ '[' '!' -f /opt/cni/bin/loopback ']'
+ echo -e 'start installing CNI plugins...'
start installing CNI plugins...
+ sudo mkdir -p /opt/cni/bin
+ wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
--2024-07-10 12:09:02--  https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
Resolving github.com (github.com)... 20.207.73.82
Connecting to github.com (github.com)|20.207.73.82|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://objects.githubusercontent.com/github-production-release-asset-2e65be/84575398/34412816-cbca-47a1-a428-9e738f2451d8?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20240710%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240710T120902Z&X-Amz-Expires=300&X-Amz-Signature=fc616929042a30bb588574faaa4df9d026653755fceabcb4dac96d076ce2118b&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=84575398&response-content-disposition=attachment%3B%20filename%3Dcni-plugins-linux-amd64-v1.1.1.tgz&response-content-type=application%2Foctet-stream [following]
--2024-07-10 12:09:02--  https://objects.githubusercontent.com/github-production-release-asset-2e65be/84575398/34412816-cbca-47a1-a428-9e738f2451d8?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=releaseassetproduction%2F20240710%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240710T120902Z&X-Amz-Expires=300&X-Amz-Signature=fc616929042a30bb588574faaa4df9d026653755fceabcb4dac96d076ce2118b&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=84575398&response-content-disposition=attachment%3B%20filename%3Dcni-plugins-linux-amd64-v1.1.1.tgz&response-content-type=application%2Foctet-stream
Resolving objects.githubusercontent.com (objects.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.108.133, ...
Connecting to objects.githubusercontent.com (objects.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 36336160 (35M) [application/octet-stream]
Saving to: ‘cni-plugins-linux-amd64-v1.1.1.tgz’

cni-plugins-linux-amd64-v1.1.1.tgz      100%[==============================================================================>]  34.65M  37.2MB/s    in 0.9s    

2024-07-10 12:09:05 (37.2 MB/s) - ‘cni-plugins-linux-amd64-v1.1.1.tgz’ saved [36336160/36336160]

+ '[' '!' -f cni-plugins-linux-amd64-v1.1.1.tgz ']'
+ sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
./
./macvlan
./static
./vlan
./portmap
./host-local
./vrf
./bridge
./tuning
./firewall
./host-device
./sbr
./loopback
./dhcp
./ptp
./ipvlan
./bandwidth
+ sudo rm -rf cni-plugins-linux-amd64-v1.1.1.tgz
+ '[' '!' -f /opt/cni/bin/loopback ']'
+ CNI_CONFIG_FILE=/etc/cni/net.d/10-containerd-net.conflist
+ '[' -f /etc/cni/net.d/10-containerd-net.conflist ']'
+ sudo mkdir -p /etc/cni/net.d/
+ sudo sh -c 'cat > /etc/cni/net.d/10-containerd-net.conflist <<EOF
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{
            "subnet": "10.88.0.0/16"
          }],
          [{
            "subnet": "2001:db8:4860::/64"
          }]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
EOF'
+ [[ containerd = \d\o\c\k\e\r ]]
+ [[ containerd = \c\o\n\t\a\i\n\e\r\d ]]
+ sudo systemctl restart containerd

"systemd" is not running in this container due to its overhead.
Use the "service" command to start services instead. e.g.: 

service --status-all
+ sleep 2
+ sleep 2
+ start_edgecore
+ EDGE_CONFIGFILE=/workspaces/44/kubeedge/hack/../_output/local/bin/edgecore.yaml
+ EDGE_BIN=/workspaces/44/kubeedge/hack/../_output/local/bin/edgecore
+ /workspaces/44/kubeedge/hack/../_output/local/bin/edgecore --defaultconfig
+ sed -i '/edgeStream:/{n;s/false/true/;}' /workspaces/44/kubeedge/hack/../_output/local/bin/edgecore.yaml
+ sed -i '/metaServer:/{n;s/false/true/;}' /workspaces/44/kubeedge/hack/../_output/local/bin/edgecore.yaml
+ [[ WebSocket = \Q\U\I\C ]]
+ [[ containerd = \d\o\c\k\e\r ]]
+ [[ containerd = \c\r\i\-\o ]]
+ [[ containerd = \i\s\u\l\a\d ]]
++ kubectl get secret -nkubeedge tokensecret '-o=jsonpath={.data.tokendata}'
++ base64 -d
+ token=04995c0b1ed0232d7e85ea22d10a51d12209e7ffcf97ec7d91d6d99be29c6d52.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE3MjA2OTk3Mzl9.BafYC3uzC3PZoWOkm1eWWs4GIGcPEU4w4dpjWYnLYgk
+ sed -i -e 's|token: .*|token: 04995c0b1ed0232d7e85ea22d10a51d12209e7ffcf97ec7d91d6d99be29c6d52.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE3MjA2OTk3Mzl9.BafYC3uzC3PZoWOkm1eWWs4GIGcPEU4w4dpjWYnLYgk|g' -e 's|hostnameOverride: .*|hostnameOverride: edge-node|g' -e 's|/etc/|/tmp/etc/|g' -e 's|/var/lib/kubeedge/|/tmp&|g' -e 's|mqttMode: .*|mqttMode: 0|g' -e '/serviceBus:/{n;s/false/true/;}' /workspaces/44/kubeedge/hack/../_output/local/bin/edgecore.yaml
+ sed -i -e 's|/tmp/etc/resolv|/etc/resolv|g' /workspaces/44/kubeedge/hack/../_output/local/bin/edgecore.yaml
+ EDGECORE_LOG=/tmp/edgecore.log
+ echo 'start edgecore...'
start edgecore...
+ export CHECK_EDGECORE_ENVIRONMENT=false
+ CHECK_EDGECORE_ENVIRONMENT=false
+ EDGECORE_PID=39803
+ [[ true = false ]]
+ echo 'Local KubeEdge cluster is running. Use "kill 15502" to shut it down.'
Local KubeEdge cluster is running. Use "kill 15502" to shut it down.
+ echo 'Logs:
  /tmp/cloudcore.log
  /tmp/edgecore.log

To start using your kubeedge, you can run:

  export PATH=/usr/local/rvm/gems/ruby-3.2.4/bin:/usr/local/rvm/gems/ruby-3.2.4@global/bin:/usr/local/rvm/rubies/ruby-3.2.4/bin:/vscode/bin/linux-x64/ea1445cc7016315d0f5728f8e8b12a45dc0a7286/bin/remote-cli:/home/codespace/.local/bin:/home/codespace/.dotnet:/home/codespace/nvm/current/bin:/home/codespace/.php/current/bin:/home/codespace/.python/current/bin:/home/codespace/java/current/bin:/home/codespace/.ruby/current/bin:/home/codespace/.local/bin:/usr/local/python/current/bin:/usr/local/py-utils/bin:/usr/local/oryx:/usr/local/go/bin:/go/bin:/usr/local/sdkman/bin:/usr/local/sdkman/candidates/java/current/bin:/usr/local/sdkman/candidates/gradle/current/bin:/usr/local/sdkman/candidates/maven/current/bin:/usr/local/sdkman/candidates/ant/current/bin:/usr/local/rvm/gems/default/bin:/usr/local/rvm/gems/default@global/bin:/usr/local/rvm/rubies/default/bin:/usr/local/share/rbenv/bin:/usr/local/php/current/bin:/opt/conda/bin:/usr/local/nvs:/usr/local/share/nvm/versions/node/v20.14.0/bin:/usr/local/hugo/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/share/dotnet:/home/codespace/.dotnet/tools:/usr/local/rvm/bin:/go/bin
  export KUBECONFIG=/home/codespace/.kube/config
  kubectl get nodes
'
+ nohup sudo -E /workspaces/44/kubeedge/hack/../_output/local/bin/edgecore --config=/workspaces/44/kubeedge/hack/../_output/local/bin/edgecore.yaml --v=2
+ [[ true = false ]]
+ true
+ sleep 3
+ grep edge-node
+ kubectl get nodes
+ grep -q -w Ready
+ true
+ sleep 3
+ kubectl get nodes
+ grep -q -w Ready
+ grep edge-node
[... the same readiness check (kubectl get nodes | grep edge-node | grep -q -w Ready, followed by sleep 3) repeats for dozens more iterations; the edge node never reports Ready ...]
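The trace above is the script's readiness loop: it re-runs the same node check every three seconds until the edge node reports Ready. A minimal, self-contained sketch of that pattern; `check_ready` greps a local status file as a stand-in for `kubectl get nodes`, and the timeout handling is an assumption added for illustration, not part of the original script:

```shell
#!/bin/sh
# Sketch of the readiness-polling pattern in the trace above.
# Stand-in for: kubectl get nodes | grep edge-node | grep -q -w Ready
check_ready() {
  grep -q -w Ready "$1" 2>/dev/null
}

# Poll until ready or until the (assumed) timeout expires.
wait_for_ready() {
  timeout=$1
  status_file=$2
  elapsed=0
  until check_ready "$status_file"; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for edge-node to become Ready" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "edge-node is Ready"
}

# Demo: the fabricated status file already reports Ready,
# so the loop exits on its first check.
echo "edge-node   Ready    agent,edge   1m" > /tmp/node-status.txt
wait_for_ready 10 /tmp/node-status.txt
```

In the failing run above, the real loop never exits because the check never succeeds, which is why the script appears to hang.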

@Shelley-BaoYue
Collaborator Author

@1Shubham7 It means that edgecore is not running properly. You can run make e2e in your local environment and check the edgecore log.

BTW, I hope we can continue doing some UT work in the future. Currently, the coverage can be viewed at https://app.codecov.io/gh/kubeedge/kubeedge. We need to improve the coverage of the cloud and edge directories as much as possible. : )

@1Shubham7
Contributor

1Shubham7 commented Jul 11, 2024

OK Shelley, checking it. BTW, I didn't change any code in my local environment and it still showed that. Also, if UT is the priority for the project, I will focus on that as well. As we discussed earlier, I am also working on migrating the current tests from the standard library to testify/assert. Please also give me feedback on how I have been doing so far in the mentorship; I have been putting in 10-12 hours almost every day, and I will keep improving. I hope I am doing well enough for a positive mid-term evaluation :)

Also, Shelley, I will have my exams for a week in August, so I will not be very active during that time (but I will still put in at least 5-6 hours a day), and I will make sure to catch up afterwards. :)

@Shelley-BaoYue
Collaborator Author

@1Shubham7 If the local make e2e execution fails, you can use journalctl -f -u edgecore.service to find the cause in the edgecore logs.
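If edgecore was started directly by the script (as in the trace above) rather than as a systemd service, journalctl may have nothing for it; in that case the same triage can be done on /tmp/edgecore.log. A sketch with a fabricated sample log; the file name and log lines below are made up for illustration:

```shell
#!/bin/sh
# Illustrative only: write a fabricated edgecore-style (klog) log,
# then scan it for error lines as you would scan the real /tmp/edgecore.log.
LOG=/tmp/edgecore-sample.log
cat > "$LOG" <<'EOF'
I0719 10:00:01.000000   39803 edged.go:120] starting edged module
E0719 10:00:05.000000   39803 websocket.go:88] failed to connect to cloudcore: connection refused
I0719 10:00:08.000000   39803 edged.go:130] retrying connection
EOF
# klog error lines start with "E"; they usually explain
# why the node never turns Ready.
grep '^E' "$LOG"
```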

As for the timeline, I think you can arrange it freely. You've been doing very well so far. 👍 Take it easy and participate in the LFX program based on your own schedule. I would prefer that you keep participating in the community's work and also gain something from the community. 😄

In terms of the work content, the community's target for UT is an 80% coverage rate (this is not a requirement for you; feel free to supplement as much as possible). You can also learn about the e2e use cases related to using device plugins at the edge (you can refer to the Kubernetes e2e tests). I hope you will first learn and use KubeEdge, and during this process identify areas that need optimization, including but not limited to code standards, documentation, and test cases.
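As a sketch of the table-driven style that much of this UT work follows (the function and cases here are hypothetical, not from the KubeEdge codebase; real tests would live in a _test.go file with t.Run, while this standalone version uses main so it runs directly):

```go
package main

import "fmt"

// clampQPS is a hypothetical function under test: it bounds a
// requested QPS value to the range [0, max].
func clampQPS(qps, max int) int {
	if qps > max {
		return max
	}
	if qps < 0 {
		return 0
	}
	return qps
}

func main() {
	// Table-driven cases, the same shape a _test.go file would
	// iterate with t.Run(c.name, ...).
	cases := []struct {
		name           string
		qps, max, want int
	}{
		{"within limit", 5, 10, 5},
		{"over limit", 20, 10, 10},
		{"negative", -1, 10, 0},
	}
	for _, c := range cases {
		got := clampQPS(c.qps, c.max)
		if got != c.want {
			fmt.Printf("FAIL %s: got %d, want %d\n", c.name, got, c.want)
			return
		}
	}
	fmt.Println("all cases pass")
}
```

Adding a new case is then a one-line change to the table, which is why this style tends to push coverage up cheaply.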

@1Shubham7
Contributor

1Shubham7 commented Jul 19, 2024

Hey @Shelley-BaoYue, since we last talked I have written some more UTs. I was also stuck on writing an e2e test, which now works; one small mistake cost me hours :) I am still unable to run make e2e locally, so I will discuss that with you and the other maintainers after trying once more.

Here's the PR; please also share your thoughts on my comment there about changes we could make to the code standards. As you suggested, I will continue working on UTs, the e2e tests we discussed, and learning more about and using KubeEdge. 😄

@Shelley-BaoYue
Collaborator Author

Hi @1Shubham7, thanks for your contributions to the KubeEdge community. You can submit an issue at https://github.com/kubeedge/community/issues to become a KubeEdge member 😄
