Switching between Day 1 Architecture and The New Architecture of Hollowverse


Instructions on this page assume you have write access to the repositories in the Hollowverse organization on GitHub as well as administrator access in AWS.

IMPORTANT: Make sure every step is complete before moving on to the next one. Do not perform any step if the AWS resources from the previous step are still being updated, created, or removed.

Going from Day 1 Architecture to The New Architecture

  1. Deploy the following CloudFormation stacks using the Serverless framework:

    a. hollowverse/process-image using NODE_ENV=production yarn sls deploy --stage production.

    b. hollowverse/track-performance using NODE_ENV=production yarn sls deploy --stage production.

    Alternatively, you can trigger the deployments from the respective Travis CI pages (assuming you have write access to these repos).
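
    For reference, a full deployment of one of these stacks from a fresh clone might look like the sketch below. It assumes Yarn is installed and AWS credentials are available in your shell:

    ```sh
    # Sketch: deploying process-image from a fresh clone.
    git clone https://github.com/hollowverse/process-image.git
    cd process-image
    yarn install
    NODE_ENV=production yarn sls deploy --stage production
    ```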

  2. In the AWS web console, go to Route 53 and temporarily change the DNS records of both hollowverse.com and static.hollowverse.com to point to the IP of live.hollowverse.com (the IP should be in the DNS record list). This is to avoid any downtime.
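
    If you prefer the AWS CLI to the web console, the same change can be made with change-resource-record-sets. This is only a sketch: the hosted zone ID and the IP address are placeholders you need to look up first, and the same change must be repeated for static.hollowverse.com:

    ```sh
    # Sketch: point hollowverse.com at the IP of live.hollowverse.com.
    # Z123EXAMPLE and 203.0.113.10 are placeholders.
    aws route53 change-resource-record-sets \
      --hosted-zone-id Z123EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "hollowverse.com",
            "Type": "A",
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}]
          }
        }]
      }'
    ```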

  3. Now go to CloudFront and remove all the aliases, except static.legacy.hollowverse.com, from the CloudFront distribution whose origin is live.hollowverse.com. These aliases need to be removed before performing the next step.
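
    This can also be scripted. The sketch below assumes jq is installed and uses a placeholder distribution ID (find the real one with aws cloudfront list-distributions):

    ```sh
    # Sketch: keep only static.legacy.hollowverse.com as an alias.
    aws cloudfront get-distribution-config --id E1EXAMPLE > dist.json
    ETAG=$(jq -r '.ETag' dist.json)
    jq '.DistributionConfig
        | .Aliases = {Quantity: 1, Items: ["static.legacy.hollowverse.com"]}' \
      dist.json > dist-config.json
    aws cloudfront update-distribution --id E1EXAMPLE \
      --distribution-config file://dist-config.json --if-match "$ETAG"
    ```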

  4. Deploy hollowverse/route-request using NODE_ENV=production yarn sls deploy --stage production or by triggering a new build from Travis.

  5. Now clone hollowverse/infrastructure and follow these steps (a consolidated sketch follows the list):

    1. Revert this commit
    2. Run terraform init
    3. When asked for the bucket name, use hollowverse-terraform-state-production.
    4. Run terraform apply.
    5. When asked for variable values, use the following:
      • for stage, use production
      • for public_ssh_key, use the contents of id_rsa.pub stored in our secrets repository on GitHub.
      • for db_password, use the value stored in our secrets repository too
    6. Review the changes and type yes to start applying the changes.
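
    The whole of step 5 condensed into a shell session (a sketch; <commit-sha> stands for the commit referenced above, and the secret values come from the secrets repository):

    ```sh
    # Sketch of step 5.
    git clone https://github.com/hollowverse/infrastructure.git
    cd infrastructure
    git revert <commit-sha>  # the commit referenced above
    terraform init           # bucket name: hollowverse-terraform-state-production
    terraform apply \
      -var 'stage=production' \
      -var "public_ssh_key=$(cat id_rsa.pub)" \
      -var "db_password=$DB_PASSWORD"
    # Review the plan and type "yes" to apply the changes.
    ```
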
  6. After the new infrastructure is up, clone the API repository, hollowverse/api, and perform the following steps (a sketch follows the list):

    • In serverless.yml, update the value of custom.vpnConfig.production.securityGroupIds to be the security group ID that is returned from running terraform output database_access_security_group in the hollowverse/infrastructure repository.
    • Run NODE_ENV=production yarn sls deploy --stage production.
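
    A sketch of this step, assuming hollowverse/infrastructure and hollowverse/api are cloned side by side:

    ```sh
    # Sketch: look up the security group ID...
    cd infrastructure
    terraform output database_access_security_group
    # ...paste it into custom.vpnConfig.production.securityGroupIds in
    # hollowverse/api's serverless.yml, then deploy:
    cd ../api
    NODE_ENV=production yarn sls deploy --stage production
    ```
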
  7. Update or add the DNS record for api.hollowverse.com to be an alias for api-apigw-production.hollowverse.com.

  8. After the API is up, revert this commit and deploy hollowverse/hollowverse (both master and beta branches):

    a. Check out the master branch, revert the commit, and run BRANCH=master NODE_ENV=production yarn sls deploy --stage master.

    b. Check out the beta branch, revert the commit, and run BRANCH=beta NODE_ENV=production yarn sls deploy --stage beta.

    Alternatively, you can trigger the deployments from the respective Travis CI pages (assuming you have write access to these repos).
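
    Both branch deployments follow the same pattern, so a small loop can cover them (a sketch; <commit-sha> again stands for the commit referenced above):

    ```sh
    # Sketch: revert the commit and deploy on both branches.
    for branch in master beta; do
      git checkout "$branch"
      git revert <commit-sha>  # the commit referenced above
      BRANCH="$branch" NODE_ENV=production yarn sls deploy --stage "$branch"
    done
    ```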

  9. Point the DNS records of hollowverse.com and static.hollowverse.com to the new CloudFront distribution created by route-request, whose origin is static.legacy.hollowverse.com.

  10. Copy the images in https://github.com/hollowverse-archive/scraper/tree/master/output/images to a new folder named notable-people in the S3 bucket named hollowverse-photos-unprocessed-production. Upload the files in batches of ~100 images and wait a few minutes between batches to avoid hitting the rate limits of Cloudinary, the service we use to crop the photos when they are uploaded.
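
    The batched upload can be scripted. The sketch below assumes the scraper repository's output/images folder is available locally and that filenames contain no spaces; it pauses five minutes between batches of 100:

    ```sh
    # Sketch: upload images in batches of ~100 to stay under Cloudinary's rate limits.
    ls output/images | split -l 100 - batch_
    for batch in batch_*; do
      while read -r file; do
        aws s3 cp "output/images/$file" \
          "s3://hollowverse-photos-unprocessed-production/notable-people/$file"
      done < "$batch"
      sleep 300  # wait a few minutes between batches
    done
    rm batch_*
    ```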

Going from The New Architecture to Day 1 Architecture

  1. In Route 53, temporarily change the DNS records of both hollowverse.com and static.hollowverse.com to point to the IP of live.hollowverse.com (the IP should be in the DNS record list). This is to avoid any downtime.

  2. Go to CloudFront and choose the CloudFront distribution created by route-request whose origin is static.legacy.hollowverse.com.

    • Remove all aliases of this distribution
    • Disable logging
    • In the Behaviors tab, choose the default behavior and remove all three Lambda@Edge functions associated with that behavior.
  3. Now go back to the CloudFront main dashboard and choose the CloudFront distribution whose origin is live.hollowverse.com. Add the following aliases:

    • hollowverse.com
    • static.hollowverse.com
    • www.hollowverse.com
    • www.thehollowverse.com
    • thehollowverse.com
  4. Copy the CloudFront domain (e.g. dryr01pq3kykt.cloudfront.net) of that distribution and go to Route 53. Update the DNS record for hollowverse.com to be an alias for that domain.

  5. Go to S3 and empty (but do not delete) the following buckets (a CLI sketch follows the list):

    • hollowverse-logs-production
    • hollowverse-photos-processed-production
    • hollowverse-photos-unprocessed-production
    • track-performance-production
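
    Emptying the buckets from the CLI (a sketch; aws s3 rm --recursive deletes the objects but leaves the buckets in place):

    ```sh
    # Sketch: delete all objects but keep the buckets themselves.
    for bucket in hollowverse-logs-production \
                  hollowverse-photos-processed-production \
                  hollowverse-photos-unprocessed-production \
                  track-performance-production; do
      aws s3 rm "s3://$bucket" --recursive
    done
    ```
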
  6. Clone hollowverse/infrastructure:

    1. Re-apply this commit.
    2. Run terraform init
    3. When asked for the bucket name, use hollowverse-terraform-state-production.
    4. Run terraform apply.
    5. When asked for the value of the stage variable, use production

    This will destroy almost all of the infrastructure, with the exception of the VPC, which must be kept because the legacy website instance lives in it.

  7. Clone hollowverse/track-performance, hollowverse/route-request, hollowverse/process-image, and hollowverse/api, then run yarn sls remove --stage production in each repository.
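
    The four removals can be scripted (a sketch, assuming fresh clones and that dependencies install cleanly):

    ```sh
    # Sketch: remove each service's stack in turn.
    for repo in track-performance route-request process-image api; do
      git clone "https://github.com/hollowverse/$repo.git"
      (cd "$repo" && yarn install && yarn sls remove --stage production)
    done
    ```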

  8. Clone hollowverse/hollowverse and run yarn sls remove --stage master and yarn sls remove --stage beta.

  9. Remove the DNS record for api.hollowverse.com in Route 53.

  10. Because changes to CloudFront distributions usually take a considerable amount of time to propagate, removing route-request could fail. Check the CloudFormation dashboard after about half an hour and, if the stack is still there, delete the route-request stack manually.
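
    If the automatic removal fails, the stack can be deleted directly (a sketch; the stack name route-request-production assumes the usual Serverless <service>-<stage> naming convention):

    ```sh
    # Sketch: manual deletion of the route-request stack.
    aws cloudformation delete-stack --stack-name route-request-production
    aws cloudformation wait stack-delete-complete --stack-name route-request-production
    ```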