
Consecutive "destroy" fails with 'variable "XX" is nil, but no error was reported' #474

Closed
JordanP opened this issue May 19, 2019 · 4 comments

Comments

@JordanP
Contributor

JordanP commented May 19, 2019

Hi,
When I try to use the module and issue a terraform destroy twice, the second time it fails with:

Error: Error applying plan:

4 error(s) occurred:

* module.google-cloud-jordan.module.workers.output.instance_group: variable "workers" is nil, but no error was reported
* module.google-cloud-jordan.module.bootkube.output.kubeconfig-admin: variable "kubeconfig-admin" is nil, but no error was reported
* module.google-cloud-jordan.module.bootkube.output.kubeconfig-kubelet: variable "kubeconfig-kubelet" is nil, but no error was reported
* module.google-cloud-jordan.module.workers.output.target_pool: variable "workers" is nil, but no error was reported

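For reference, the failing sequence is nothing more than two consecutive destroy runs against the same state (shown with -auto-approve for brevity; an interactive destroy behaves the same):

    terraform destroy -auto-approve   # first run: resources are deleted
    terraform destroy -auto-approve   # second run: fails with the nil-variable errors above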
It looks like a common TF issue (hashicorp/terraform#17862), but if there were a workaround, that would be great.

@dghubble
Member

I'm not really sure what you're running into or how we'd address your situation here.

I will say that, in practice, I never seem to need terraform destroy. In pretty much every scenario, it is better and clearer to destroy resources by removing the cluster declarations from the Terraform configuration and then running terraform plan and terraform apply as usual. This keeps the same plan/apply flow as during creation, aligns well with keeping declarations in version control, and is compatible with automation if you choose to automate applies. That's why the docs on removing clusters recommend this too.
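As a rough sketch of that flow (the module name is just taken from the error output above; the actual declaration will differ):

    # 1. Delete (or comment out) the cluster declaration, e.g. the
    #    module "google-cloud-jordan" { ... } block, from the *.tf files.
    # 2. Review the deletions Terraform now plans to make:
    terraform plan
    # 3. Apply them, exactly as during creation:
    terraform apply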

But I don't have much more help to offer.

@JordanP
Contributor Author

JordanP commented May 20, 2019

I am toying with Typhoon K8s and I also create a whole lot of other resources. At the end of the day, I like to terraform destroy everything to cut costs. The Typhoon resources are deleted, but I sometimes have to run "terraform destroy" twice for other reasons, and I get the error message above on the second run.

Anyway, I think it's a bug in TF, and I might suggest a workaround here if I find one that isn't too ugly.

@dghubble
Member

Yeah, currently I know GCP and Azure clusters require two apply runs to remove (or destroy, I suppose). I keep track of known upstream issues like this for each cloud in the errata. For GCP it's just related to the cleanup timeout for the network resource (the provider doesn't allow setting this), whereas the Azure / azurerm provider is less reliable and sometimes just requires deleting the resource group out of band (it's alpha).
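In practice that looks roughly like this (the resource group name is only a placeholder for whatever azurerm created for the cluster):

    # GCP: a second apply/destroy run usually finishes removing the network resource
    terraform apply
    # Azure: if azurerm gets stuck, delete the cluster's resource group out of band
    az group delete --name <cluster-resource-group> --yes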

@dghubble
Member

Just going to keep this tracked in the errata. There are a bunch of changes going on with the providers over these few months, and I'm hoping things get better once we all get over the v0.12 hill.
