
Add option to allow no results when querying aws_instances Data Source #5055

Conversation

@coop182 (Contributor) commented Jul 3, 2018

Changes proposed in this pull request:

  • Add allow_no_results option to aws_instances Data Source

Output from acceptance testing:

$ make testacc TESTARGS='-run=TestAccAWSInstancesDataSource_noResults'
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./... -v -run=TestAccAWSInstancesDataSource_noResults -timeout 120m
?   	github.com/terraform-providers/terraform-provider-aws	[no test files]
=== RUN   TestAccAWSInstancesDataSource_noResults
--- PASS: TestAccAWSInstancesDataSource_noResults (32.55s)
PASS
ok  	github.com/terraform-providers/terraform-provider-aws/aws	32.596s

@ghost ghost added the size/M Managed by automation to categorize the size of a PR. label Jul 3, 2018
@bflad bflad added enhancement Requests to existing resources that expand the functionality or scope. service/ec2 Issues and PRs that pertain to the ec2 service. labels Jul 3, 2018
@catsby (Contributor) commented Jul 3, 2018

Hey there @coop182 ! Could you elaborate some on the possible use cases for this? The docs included mention "querying the count of ephemeral instances", how would you imagine this count is later used? Thanks 😄

@bflad bflad added the waiting-response Maintainers are waiting on response from community or contributor. label Jul 3, 2018
@coop182 (Contributor, Author) commented Jul 3, 2018

Hey @catsby,

This would be really useful when doing rolling ASG deploys as outlined here - which implements the strategy outlined by @phinze here.

Effectively being able to query the aws_instances data source without it throwing an error when no instances are found would allow it to be used to dynamically size the new ASG to the same size as the current ASG.

For example, if I have a min_size of 1, a max_size of 20, and a desired_capacity that moves between those numbers constantly via auto scaling policies, at the moment I can see no way to set the new ASG to the exact desired capacity required at the point in time of the deploy. This leads to either over- or under-provisioning the number of instances on a rolling ASG deploy.

Maybe I have missed a better way to do or think about this?

Example code:

# Get most recent Ubuntu AMI
data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }

  owners = ["099720109477"] # Canonical
}

# Create Launch Configuration from latest AMI
resource "aws_launch_configuration" "lc" {
  image_id                    = "${data.aws_ami.ubuntu.id}"
  instance_type               = "t2.small"
  key_name                    = "${var.key_name}"
  security_groups             = ["${var.security_groups}"]

  lifecycle {
    create_before_destroy = true
  }
}

# Get the instances currently in the Autoscaling group based on the Launch configuration name
data "aws_instances" "asg" {
  filter {
    name   = "instance-state-name"
    values = ["running"]
  }

  filter {
    name   = "tag-value"
    values = ["${aws_launch_configuration.lc.id}"]
  }

  allow_no_results = true
}

# Create an autoscaling group - initially with a desired capacity of one but on further updates the desired capacity will be maintained when ASG is replaced.
resource "aws_autoscaling_group" "asg" {
  health_check_grace_period = 500
  health_check_type         = "ELB"
  launch_configuration      = "${aws_launch_configuration.lc.id}"
  max_size                  = 20
  min_size                  = 1
  desired_capacity          = "${length(data.aws_instances.asg.ids) == 0 ? 1 : length(data.aws_instances.asg.ids)}"
  min_elb_capacity          = "${length(data.aws_instances.asg.ids) == 0 ? 1 : length(data.aws_instances.asg.ids)}"
  name                      = "${aws_launch_configuration.lc.id}"
  vpc_zone_identifier       = ["${var.network_subnets}"]
  target_group_arns         = ["${aws_lb_target_group.main.arn}"]

  lifecycle {
    create_before_destroy = true
  }
}
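As an aside, on Terraform 0.12 and later the ternary in desired_capacity and min_elb_capacity can be written more directly with first-class expressions and the built-in max() function. A minimal sketch, assuming the same data source and resource names as above:

```hcl
# Size the replacement ASG to match the instances found for the current one,
# falling back to 1 when the data source returns no results
# (which is the case this PR's allow_no_results option is meant to permit).
desired_capacity = max(1, length(data.aws_instances.asg.ids))
min_elb_capacity = max(1, length(data.aws_instances.asg.ids))
```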

@bflad bflad removed the waiting-response Maintainers are waiting on response from community or contributor. label Jul 4, 2018
@coop182 force-pushed the feature/data_instances_allow_no_results branch from b3f61a5 to 9c81e9d on July 4, 2018 23:25
@ghost ghost added the size/M Managed by automation to categorize the size of a PR. label Jul 4, 2018
@coop182 (Contributor, Author) commented Sep 5, 2018

Any more feedback @catsby or @bflad?

@aeschright aeschright requested a review from a team June 25, 2019 21:31
Base automatically changed from master to main January 23, 2021 00:55
@breathingdust breathingdust requested a review from a team as a code owner January 23, 2021 00:55
@zhelding (Contributor) commented

Pull request #21306 has significantly refactored the AWS Provider codebase. As a result, most PRs opened prior to the refactor now have merge conflicts that must be resolved before proceeding.

Specifically, PR #21306 relocated the code for all AWS resources and data sources from a single aws directory to a large number of separate directories in internal/service, each corresponding to a particular AWS service. This separation of code has also allowed for us to simplify the names of underlying functions -- while still avoiding namespace collisions.

We recognize that many pull requests have been open for some time without yet being addressed by our maintainers. Therefore, we want to make it clear that resolving these conflicts in no way affects the prioritization of a particular pull request. Once a pull request has been prioritized for review, the necessary changes will be made by a maintainer -- either directly or in collaboration with the pull request author.

For a more complete description of this refactor, including examples of how old filepaths and function names correspond to their new counterparts: please refer to issue #20000.

For a quick guide on how to amend your pull request to resolve the merge conflicts resulting from this refactor and bring it in line with our new code patterns: please refer to our Service Package Refactor Pull Request Guide.

@ewbankkit ewbankkit added this to the v4.0.0 milestone Jan 2, 2022
@ewbankkit (Contributor) commented

@coop182 Thanks for the contribution 🎉 👏.
I have rolled your changes into #21219.

@ewbankkit ewbankkit closed this Jan 25, 2022
@github-actions (bot) commented

This functionality has been released in v4.0.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
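Picking up the released behaviour means constraining the provider to the 4.x series. A minimal sketch of such a version constraint (the standard required_providers block, not part of this PR):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
```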

@github-actions (bot) commented

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 20, 2022