
[Bug] unexpected removing all kafka resources when upgrade using helm3 #3877

Closed
gricuk opened this issue Oct 26, 2020 · 33 comments
@gricuk

gricuk commented Oct 26, 2020

Describe the bug
I'm using Strimzi operator v0.19.0 and tried to upgrade to 0.20.0. When I ran the helm upgrade procedure, all my resources (users, topics, clusters) were removed.
I tried to reproduce the problem with a freshly installed cluster, and the situation occurred again.

To Reproduce
Steps to reproduce the behavior:

1. helm install strimzi-kafka strimzi/strimzi-kafka-operator --namespace kafka --set watchNamespaces="{kafka,test-kafka}" --version=0.19.0
2. create cluster, users and topics from manifests (apiVersion: v1beta1)
3. helm upgrade strimzi-kafka strimzi/strimzi-kafka-operator --namespace kafka --set watchNamespaces="{kafka,test-kafka}"
kubectl get crd | grep kafka | wc -l
       0

After the steps above, my cluster and users/topics were removed. The operator pod tried to start and crashed with the following error:

2020-10-26 14:35:47 WARN  WatchConnectionManager:198 - Exec Failure: HTTP 404, Status: 404 - 404 page not found

java.net.ProtocolException: Expected HTTP 101 response but was '404 Not Found'
	at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229) [com.squareup.okhttp3.okhttp-3.12.6.jar:?]
	at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196) [com.squareup.okhttp3.okhttp-3.12.6.jar:?]
	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203) [com.squareup.okhttp3.okhttp-3.12.6.jar:?]
	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [com.squareup.okhttp3.okhttp-3.12.6.jar:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
2020-10-26 14:35:47 WARN  WatchConnectionManager:198 - Exec Failure: HTTP 404, Status: 404 - 404 page not found

java.net.ProtocolException: Expected HTTP 101 response but was '404 Not Found'
	at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229) [com.squareup.okhttp3.okhttp-3.12.6.jar:?]
	at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196) [com.squareup.okhttp3.okhttp-3.12.6.jar:?]
	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203) [com.squareup.okhttp3.okhttp-3.12.6.jar:?]
	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32) [com.squareup.okhttp3.okhttp-3.12.6.jar:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]

Expected behavior
The operator should be updated without removing resources.

Environment (please complete the following information):

  • Strimzi version: 0.19.0
  • Installation method: Helm chart
  • Kubernetes cluster: v1.18.8
  • Infrastructure: Rancher2 on Amazon EC2 instances
@gricuk gricuk added the bug label Oct 26, 2020
@scholzj
Member

scholzj commented Oct 26, 2020

The error (404 Not Found) suggests that your CRDs are removed. The Helm Charts are mostly contributed by users, so I'm not sure what the problem is.

@gricuk
Author

gricuk commented Oct 26, 2020

The error (404 Not Found) suggests that your CRDs are removed. The Helm Charts are mostly contributed by users, so I'm not sure what the problem is.

I’m using strimzi helm repo without any changes from my side.

@mmazek

mmazek commented Oct 26, 2020

I just encountered the same situation. I'm running AWS EKS with Kubernetes 1.18, and when running an upgrade using helm, all the Strimzi CRDs were removed and not reinstalled.

@brokenjacobs
Contributor

brokenjacobs commented Oct 26, 2020

Same here, using the chart from https://strimzi.io/charts/, version 0.20.0.

Looks related to these:
helm/helm#8163
helm/helm#7279

I don't see a resolution to this that doesn't involve a kafka outage. Perhaps dropping helm for this install is the best approach for me.

@scholzj
Member

scholzj commented Oct 26, 2020

Well, in 0.19.0 everyone complained that the Helm Chart index had the v1 version there next to v2. So we removed the v1 from it and only v2 is left. So that sounds like the issues you linked. But TBH, as a non-Helm user, it is not really clear to me from all the issues and PRs linked further down what the actual problem and solution are. Is there something we should do in Strimzi? Or is this just a Helm issue?

BTW: The Helm2 chart (v1) is still available in the release page for download. Just not in the index.

@aneagoe

aneagoe commented Oct 27, 2020

I've just hit the same issue on a k3s single-node deployment. The CRDs seem to be managed by the helm chart, so I guess there's something off there. Below are the upgrade logs from the helm-operator:

ts=2020-10-27T13:32:19.36621305Z caller=helm.go:69 component=helm version=v3 info="performing update for strimzi-kafka-operator" targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:32:19.614669544Z caller=helm.go:69 component=helm version=v3 info="dry run for strimzi-kafka-operator" targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:32:20.324093381Z caller=helm.go:69 component=helm version=v3 info="performing update for cert-manager" targetNamespace=cert-manager release=cert-manager
ts=2020-10-27T13:32:20.649044703Z caller=helm.go:69 component=helm version=v3 info="dry run for cert-manager" targetNamespace=cert-manager release=cert-manager
ts=2020-10-27T13:32:22.166658114Z caller=release.go:311 component=release release=strimzi-kafka-operator targetNamespace=strimzi resource=strimzi:helmrelease/strimzi-kafka-operator helmVersion=v3 info="no changes" phase=dry-run-compare
ts=2020-10-27T13:32:27.105220407Z caller=release.go:311 component=release release=cert-manager targetNamespace=cert-manager resource=cert-manager:helmrelease/cert-manager helmVersion=v3 info="no changes" phase=dry-run-compare
ts=2020-10-27T13:32:28.273798311Z caller=release.go:311 component=release release=prometheus-operator targetNamespace=monitoring resource=monitoring:helmrelease/prometheus-operator helmVersion=v3 info="no changes" phase=dry-run-compare
ts=2020-10-27T13:35:08.512402419Z caller=release.go:79 component=release release=strimzi-kafka-operator targetNamespace=strimzi resource=strimzi:helmrelease/strimzi-kafka-operator helmVersion=v3 info="starting sync run"
ts=2020-10-27T13:35:11.184987118Z caller=release.go:353 component=release release=strimzi-kafka-operator targetNamespace=strimzi resource=strimzi:helmrelease/strimzi-kafka-operator helmVersion=v3 info="running upgrade" action=upgrade
ts=2020-10-27T13:35:11.218170987Z caller=helm.go:69 component=helm version=v3 info="preparing upgrade for strimzi-kafka-operator" targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:11.265187003Z caller=helm.go:69 component=helm version=v3 info="resetting values to the chart's original version" targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:11.549234831Z caller=helm.go:69 component=helm version=v3 info="performing update for strimzi-kafka-operator" targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:11.677293827Z caller=helm.go:69 component=helm version=v3 info="creating upgraded release for strimzi-kafka-operator" targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:11.873954568Z caller=helm.go:69 component=helm version=v3 info="checking 13 resources for changes" targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:11.889156124Z caller=helm.go:69 component=helm version=v3 info="Created a new ConfigMap called \"strimzi-cluster-operator\" in strimzi\n" targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:11.970217468Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkas.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.076143097Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkaconnects.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.118525008Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkaconnects2is.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.138900256Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkatopics.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.144809518Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkausers.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.152097065Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkamirrormakers.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.16910867Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkabridges.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.189448458Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkaconnectors.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.19472487Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkamirrormaker2s.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.216209334Z caller=helm.go:69 component=helm version=v3 info="Deleting \"kafkarebalances.kafka.strimzi.io\" in ..." targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.425014421Z caller=helm.go:69 component=helm version=v3 info="updating status for upgraded release for strimzi-kafka-operator" targetNamespace=strimzi release=strimzi-kafka-operator
ts=2020-10-27T13:35:12.606613657Z caller=release.go:364 component=release release=strimzi-kafka-operator targetNamespace=strimzi resource=strimzi:helmrelease/strimzi-kafka-operator helmVersion=v3 info="upgrade succeeded" revision=0.20.0

Afterward, there are no CRDs to be found, but it's weird that helm doesn't throw any errors and straight away starts removing all kafka components. So I think this is one for the helm chart maintainers more than anything else, and these upgrades have to be thoroughly tested, as no one wants to inadvertently kill their entire kafka cluster when upgrading the operator.

@aneagoe

aneagoe commented Oct 27, 2020

Deleting and re-creating the HelmRelease object helps with this, but the biggest problem here is that upgrades are not just broken but actually cause all kafka objects to be deleted, which has quite an impact.
The problem can be easily reproduced by installing 0.19.0 and then attempting to upgrade to 0.20.0. In our environment we use flux, so it was straightforward to recover, and luckily this was just a test cluster (resources get automatically re-created by flux):

[andrei@andrei-nb:~]$ kb3 get hr -n strimzi
NAME                     RELEASE                  PHASE       STATUS     MESSAGE                                                                          AGE
strimzi-kafka-operator   strimzi-kafka-operator   Succeeded   deployed   Release was successful for Helm release 'strimzi-kafka-operator' in 'strimzi'.   5d
[andrei@andrei-nb:~]$ kb3 delete hr strimzi-kafka-operator -n strimzi
helmrelease.helm.fluxcd.io "strimzi-kafka-operator" deleted
[andrei@andrei-nb:~]$ kb3 get all -n strimzi
No resources found in strimzi namespace.
...
[andrei@andrei-nb:~]$ kubectl get hr -n strimzi
NAME                     RELEASE                  PHASE       STATUS     MESSAGE                                                                          AGE
strimzi-kafka-operator   strimzi-kafka-operator   Succeeded   deployed   Release was successful for Helm release 'strimzi-kafka-operator' in 'strimzi'.   3m58s
[andrei@andrei-nb:~]$ kubectl get all -n strimzi
NAME                                            READY   STATUS    RESTARTS   AGE
pod/strimzi-cluster-operator-7bb9fd9f7b-hsmfj   1/1     Running   0          2m44s

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/strimzi-cluster-operator   1/1     1            1           2m45s

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/strimzi-cluster-operator-7bb9fd9f7b   1         1         1       2m46s
[andrei@andrei-nb:~]$ kubectl get crd|grep -i kafka
kafkaconnects.kafka.strimzi.io          2020-10-27T14:15:47Z
kafkaconnects2is.kafka.strimzi.io       2020-10-27T14:15:47Z
kafkatopics.kafka.strimzi.io            2020-10-27T14:15:47Z
kafkausers.kafka.strimzi.io             2020-10-27T14:15:47Z
kafkamirrormakers.kafka.strimzi.io      2020-10-27T14:15:47Z
kafkabridges.kafka.strimzi.io           2020-10-27T14:15:47Z
kafkaconnectors.kafka.strimzi.io        2020-10-27T14:15:47Z
kafkamirrormaker2s.kafka.strimzi.io     2020-10-27T14:15:47Z
kafkarebalances.kafka.strimzi.io        2020-10-27T14:15:47Z
kafkas.kafka.strimzi.io                 2020-10-27T14:15:47Z

@brokenjacobs
Contributor

brokenjacobs commented Oct 27, 2020

flux user here as well. My solution was to stop using the helmrelease operator and switch to kustomize with the Strimzi release yaml. The only pain is that you have to patch 5 different ClusterRoleBinding / RoleBinding resources to use a non-default namespace.

This could be made a bit easier by providing a set of kustomize manifests to users, but the overall process wasn't that bad.

You WILL take an outage doing this, but at least the change can be pushed with flux to multiple clusters. Not so with removing the helmrelease and then re-adding it: you have to wait for every cluster to quiesce. That's a big outage depending on how large your setup is / how many clusters you have.

This isn't really a Strimzi issue, but a Helm one. How hard would it be to change helm3 to do the CRD check AFTER removing the templated resources from the previous install? I don't understand why this can't be done, and I've read a ton of helm tickets.

@gricuk
Author

gricuk commented Oct 29, 2020

As I understand from the related issues, there is no way to upgrade using helm3?

@scholzj
Member

scholzj commented Oct 29, 2020

I assume you can use the Helm2 Chart from https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.20.0 which probably doesn't change the CRDs?

@gricuk
Author

gricuk commented Oct 30, 2020

I assume you can use the Helm2 Chart from https://github.com/strimzi/strimzi-kafka-operator/releases/tag/0.20.0 which probably doesn't change the CRDs?

But Helm2 is marked as deprecated; what will happen when we try to migrate to helm3 or upgrade to Strimzi 0.2x in the future?

@scholzj
Member

scholzj commented Oct 30, 2020

Well, I assume that is what happened now. I do not know how Helm plans or does not plan to solve it TBH.

@aneagoe

aneagoe commented Oct 30, 2020

It's not Helm that has to address this, it's the Strimzi helm3 implementation. While I understand that most of the chart is contributed by users, it's important to first acknowledge where the problem lies. So far the evidence points to the Strimzi helm3 charts and not helm itself. Also, until this is fixed, I don't see how anyone can really run it via helm in production, knowing that an upgrade might erase their entire cluster. It should be treated as high priority or adoption will slow down. Sure, one can manage the deployment differently, but that defeats the purpose a little bit.

@scholzj
Member

scholzj commented Oct 30, 2020

@aneagoe I think the previous comments and linked issues suggested something else. But if this is a bug in the chart as you say, can you explain what the fix is, or open a PR?

@aneagoe

aneagoe commented Oct 30, 2020

I've reviewed the linked issues, and the behavior observed does seem to be in line with helm functionality. It was an informed decision to move the CRD definitions from templates/ to crds/ in the Strimzi helm3 chart, but unfortunately this had quite an adverse effect on people upgrading. However, contrary to my initial assumption, this is a one-off incident that won't happen going forward. The ideal scenario would be one where an upgrade from versions that have CRDs defined in templates/ to versions that have them defined in crds/ would fail hard and require explicit consent, pointing out that all resources would be wiped. Or maybe simply prevent upgrades from 0.19 or earlier to 0.20. My helm3 chart experience is minimal, so I can't help with a PR, I'm afraid.
Again, the real danger is now for people who are running 0.19 under helm3 and try to upgrade. Their clusters will be removed without any indication when an upgrade is attempted.

@brokenjacobs
Contributor

There is no hands-off way to do this, unfortunately, and I really feel this is a helm3 issue to resolve. Surely Strimzi is not the only helm chart to manage CRDs in this way and require migration?

@gris-gris

Any updates?
We're experiencing this issue too.
Stack: helmfile + helm

@gris-gris

A quick workaround we found with our team:

  1. Back up and delete all secrets related to the Strimzi helm chart (sh.helm.release.v1.strimzi)
  2. Redeploy with version 0.20.0; the CRDs will not be deleted and the version upgrade completes successfully
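A minimal shell sketch of the filtering in step 1. The release name `strimzi` and the stubbed secret names are assumptions for illustration; in a real cluster you would pipe `kubectl` output instead:

```shell
# Sample secret names, stubbed in place of:
#   kubectl get secrets -n <namespace> -o name
secrets='secret/sh.helm.release.v1.strimzi.v1
secret/sh.helm.release.v1.strimzi.v2
secret/some-other-secret'

# Select only the Helm release secrets belonging to the strimzi release.
to_delete=$(printf '%s\n' "$secrets" | grep 'sh\.helm\.release\.v1\.strimzi')

printf '%s\n' "$to_delete"
# For each matching secret you would then back it up (kubectl get ... -o yaml)
# before deleting it with kubectl delete.
```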

@driosalido

driosalido commented Nov 11, 2020

A quick workaround we found with our team:

  1. Backup and delete all secrets related to Strimzi helm chart sh.helm.release.v1.strimzi
  2. Redeploy with version 0.20.0, CRD's will not be deleted and version upgrade completes successfully

There is also the option to edit the data in the helm secret instead of deleting it:

kubectl get secrets -n NAMESPACE sh.helm.release.v1.DEPLOYNAME -o json | jq .data.release -r | base64 --decode | base64 --decode | gunzip - > /var/tmp/manifest.json

Then remove the CRD data inside the templates and manifest sections and upload the secret again:

DATA=`cat /var/tmp/manifest.json | gzip -c | base64 | base64`
kubectl patch secret -n NAMESPACE sh.helm.release.v1.DEPLOYNAME --type='json' -p="[ {\"op\":\"replace\",\"path\":\"/data/release\",\"value\":\"$DATA\"}]"

Then the upgrade to 0.20.0 will leave the CRDs alone.
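The double decode works because of nested encodings: Helm stores the release payload gzip-compressed and base64-encoded, and Kubernetes base64-encodes Secret data once more. A minimal local round-trip to illustrate (the JSON payload here is just a stand-in for the real release manifest):

```shell
# Stand-in for the Helm release payload (the real one is much larger).
payload='{"name":"strimzi-kafka-operator","version":1}'

# Encode the way it ends up stored in the Secret:
# gzip + base64 (Helm), then base64 again (Kubernetes Secret data).
encoded=$(printf '%s' "$payload" | gzip -c | base64 | base64)

# Decode in reverse order, mirroring the kubectl/jq pipeline above.
decoded=$(printf '%s' "$encoded" | base64 --decode | base64 --decode | gunzip)

echo "$decoded"
```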

@gris-gris

A quick workaround we found with our team:

  1. Backup and delete all secrets related to Strimzi helm chart sh.helm.release.v1.strimzi
  2. Redeploy with version 0.20.0, CRD's will not be deleted and version upgrade completes successfully

There is also the option to edit the data in the helm secret instead of deleting it

kubectl get secrets -n NAMESPACE sh.helm.release.v1.DEPLOYNAME -o json | jq .data.release -r | base64 --decode | base64 --decode | gunzip - > /var/tmp/manifest.json

Then remove the CRD data inside the templates and manifest sections and upload the secret again:

DATA=`cat /var/tmp/manifest.json | gzip -c | base64 | base64`
kubectl patch secret -n NAMESPACE sh.helm.release.v1.DEPLOYNAME --type='json' -p="[ {\"op\":\"replace\",\"path\":\"/data/release\",\"value\":\"$DATA\"}]"

Then the upgrade to 0.20.0 will leave the CRDs alone..

That's more precise, thanks!

@IlyaNakhaichuk

After the update, I get an unsupported cluster and operator: since I use the new listener, the old CRDs from version 0.19 remain. I tried updating with the 2to3 plugin, but nothing came of it. Link
Maybe someone has faced the same problem?

@driosalido

After the update, I get an unsupported cluster and operator: since I use the new listener, the old CRDs from version 0.19 remain. I tried updating with the 2to3 plugin, but nothing came of it. Link
Maybe someone has faced the same problem?

Just replace the old CRDs with the new ones. They should be compatible:

for i in install/cluster-operator/*Crd*; do
  kubectl replace -f "$i"
done

@IlyaNakhaichuk

@driosalido Thanks, this helped to solve the problem.
I noticed that when the operator version is updated, the CRDs are not updated.
First I update from version 0.19 -> 0.20 (manually updating the CRDs) -> 0.21 (the CRDs are not updated, they remain the same as in 0.20) -> 0.22 (CRDs as in 0.20).
At first I thought the problem was that I was switching from 0.19 (helm2) to 0.20 (helm3), but as a test I tried a clean 0.20 install and an update to 0.21 (the CRDs remained the same as in 0.20).
So, do I need to manually update the CRDs with every release?

@driosalido

Yes, that's the problem with removing the CRDs from the yaml manifest. Helm no longer controls what to do with them.

@nutzhub

nutzhub commented Apr 22, 2021

Should Strimzi provide a backward-compatible helm chart between helm2 and helm3? In https://www.infracloud.io/blogs/helm-2-3-migration/ they just add an extra crds.yaml in templates/ and keep creating the CRDs rather than doing it manually.

@scholzj
Member

scholzj commented Apr 22, 2021

The CRDs are still part of the Helm Chart: https://github.com/strimzi/strimzi-kafka-operator/tree/main/helm-charts/helm3/strimzi-kafka-operator/crds

They should not need to be installed separately; at least in the case of a clean install, it is not needed.

@IlyaNakhaichuk

@scholzj But I tried with a clean install of 0.20, updating the operator from 0.20 -> 0.21 -> 0.22, and the output of kubectl get crd kafkas.kafka.strimzi.io -o yaml has not changed relative to version 0.20.

@nutzhub

nutzhub commented Apr 28, 2021

The CRDs are still part of the Helm Chart: https://github.com/strimzi/strimzi-kafka-operator/tree/main/helm-charts/helm3/strimzi-kafka-operator/crds

They should not be needed to be installed separately and at least in case of clean install it is not needed.

But helm3 treats CRDs as external resources: https://helm.sh/docs/chart_best_practices/custom_resource_definitions/. I think the helm chart needs to follow these methods?

@scholzj
Member

scholzj commented Apr 28, 2021

I guess that explains why the upgrade does not upgrade the CRDs (and makes me wonder even more than before why people are using Helm). But I'm not sure what it expects us to do - it sounds like all we can do is update the docs and add a note that people should update the CRDs manually?

@nutzhub

nutzhub commented Apr 28, 2021

I guess that explains why upgrade does not upgrade the CRDs (and makes me wonder even more than before why are people using Helm). But not sure what does it expect us to do - it sounds like all we can do is update the docs and add a note that people should update the CRDs manually?

Probably it should be documented in the Helm3 README.

@vutkin

vutkin commented May 3, 2021

Then remove the CRD data inside the templates and manifest sections and upload the secret again:

Have you removed it manually? It's still not clear to me.

@vutkin

vutkin commented May 5, 2021

hotfix snippet:

set +x

NAMESPACE=${1:-test}

now=$(date +"%Y%m%d-%H%M")
velero backup create "strimzi-backup-$now" --include-namespaces "$NAMESPACE" \
  || echo "no velero installed"
read -p "Are you sure? " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
  rm -f sh.helm.release.v1.strimzi.*
  kubectl get secret -n "$NAMESPACE" --no-headers | awk '{print $1}' | \
    grep sh.helm.release.v1.strimzi | \
    xargs -I{} sh -c "kubectl get secret -n $NAMESPACE -o yaml \"{}\" > \"{}.yaml\""
  kubectl get secret -n "$NAMESPACE" --no-headers | awk '{print $1}' | \
    grep sh.helm.release.v1.strimzi | \
    xargs -I{} kubectl delete "secret/{}" -n "$NAMESPACE"
fi

Then just upgrade the helm release 0.19 -> 0.20.1.

scholzj added a commit to scholzj/strimzi-kafka-operator that referenced this issue May 7, 2021
Signed-off-by: Jakub Scholz <www@scholzj.com>
scholzj added a commit that referenced this issue May 11, 2021
* Update Helm upgrade docs - closes #3877

Signed-off-by: Jakub Scholz <www@scholzj.com>

* Apply suggestions from code review

Signed-off-by: Jakub Scholz <www@scholzj.com>

Co-authored-by: PaulRMellor <47596553+PaulRMellor@users.noreply.github.com>

* Review comments

Signed-off-by: Jakub Scholz <www@scholzj.com>

Co-authored-by: PaulRMellor <47596553+PaulRMellor@users.noreply.github.com>