GCP Pentesting Guide


Hi. A few months back, I was part of a cloud penetration testing engagement where the company used Google’s Cloud Platform (GCP) to host their infrastructure. During the engagement I searched online for tools and templates specific to GCP. While many businesses use GCP, not many resources exist specifically for pentesting it.

So, the following post is my attempt to gather some of the resources and techniques I picked up during the engagement. It is aimed at helping security professionals working with GCP to better understand the infrastructure, enumerate resources, and exploit common misconfigurations and low-hanging fruit.

The following ‘guide’ does not, by any means, cover everything or demonstrate new exploitation techniques. Most of the techniques covered have been previously reported and explained in various blog posts. All the resources used for my research can be found at the end of this post.


A cloud penetration test is a type of security assessment designed to identify vulnerabilities in cloud-based systems and applications. This type of test simulates a real-world attack by attempting to exploit weaknesses in the system in order to gain unauthorised access or steal sensitive data.
The penetration test can be conducted on various types of cloud environments, such as public, private, and hybrid clouds (GCP is a public cloud in our case). The goal of a cloud penetration test is to provide us with a clear understanding of our cloud security posture and to identify any areas where improvements can be made to better protect against potential cyber threats.

The following document outlines possible approaches to performing a penetration test against Google’s Cloud Platform (GCP); the same TTPs can be applied to any cloud provider. Pre-testing stages, such as legal documents and permissions, have been omitted for brevity.
Just as a quick reminder: most cloud providers do not explicitly prohibit penetration testing of your own assets, but because in most cases the infrastructure is shared between customers, denial-of-service attacks are strictly prohibited. As a general rule, you can follow the diagram shown below.

A few words about gcloud

We will be using gcloud heavily throughout this guide so let’s quickly define how gcloud works.
The Google Cloud CLI (gcloud) is a set of tools to create and manage Google Cloud resources. You can use these tools to perform many common platform tasks from the command line or through scripts and other automation.
For example, we can use gcloud to list the buckets in a project. (sufficient permissions required)

gcloud storage ls

The gcloud command is really just a way of automating Google Cloud API calls; you can also perform them manually.
You can see the raw HTTP API call behind any individual gcloud command simply by appending --log-http to the command.

Understanding the API endpoints and functionality can be very helpful when you’re operating with a very specific set of permissions, and trying to work out exactly what you can do.
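As a quick sketch of what this looks like in practice: the `--log-http` output contains a `uri:` line for each request, so you can capture the endpoint and replay the call yourself. The log excerpt below is a hypothetical sample of what `gcloud storage ls --log-http` might print; only the parsing step actually runs here.

```shell
# Hypothetical excerpt of `gcloud storage ls --log-http` output;
# the real output also includes headers and the response body.
sample_log='== request start ==
uri: https://storage.googleapis.com/storage/v1/b?project=demo-project
method: GET'

# Pull out the raw API endpoint so it can be replayed manually, e.g. with:
#   curl -H "Authorization: Bearer $(gcloud auth print-access-token)" "$uri"
uri=$(printf '%s\n' "$sample_log" | awk '/^uri:/ {print $2}')
echo "$uri"
```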


As in every penetration test, our first step is OSINT. In our case, the engagement is performed as an assumed-breach / white-box test, so the majority of the required information is already known to us.
However, if we had to go through every step of the OSINT process, the following resources can prove very useful.

Social Media


User discovery, position within the company, and potential information about the company’s technology stack.

Oftentimes, we can gather information related to the technology stack used by the company by visiting the jobs page.


Identify users’ interests, potential technical issues, and/or personal equipment. This information can later be used for social engineering.

  1. Twitonomy
  2. Social Bearing
  3. Sleeping Time
  4. Tweetbeaver : Ties a screen name to a Twitter ID.
  5. Spoonbill : Displays every change a user has ever made to their bio.
  6. Tweetdeck : Displays multiple decks (columns) of categories that might be of interest; can help with investigations.
  7. Codeofaninja : Get user ID.
  8. Imaginn : Download posts.

Developers, sys admins and IT technicians might often post information associated with the company’s infrastructure, especially if they are facing issues.

Discovering e-mail addresses
Hunting Breached Credentials
  1. Breach-Parse : Custom-built tool by Heath Adams; requires local databases.
  2. Dehashed : Check through multiple data points. If you find a user’s password in Dehashed, check the hash to verify whether the same password is used in other places as well.
Hunting Businesses (info)
  1. Open Corporates : Corporate records for various companies.
  2. AI Hit
Discover Website Technologies
  1. CentralOps : Various info about a website (whois records, location, etc.).
  2. BuiltWith : Identify website technologies, programming languages, etc.
  3. SpyOnWeb : Uses tracker/tag information to find related sites.
  4. VirusTotal : By using the “URL” option of VT we can get the UA (Google Tag Manager) ID; we can use this info and try to discover more by combining it with SpyOnWeb.
  5. VisualPing : Tracks changes on a website and informs you by e-mail.
  6. BackLink : Checks for backlinks to a site; can find a lot of information. Some might be outdated but can still contain interesting info, such as contacts.

OSINT Tools

Phoneinfoga : Tries to gather information about a given telephone number.

phoneinfoga scan -n <telephone_number>

Sherlock : Find usernames on various websites/platforms

sherlock <username>

H8mail : Check for breached emails

h8mail -t <email>

theHarvester : Finds subdomains, users, IPs, emails and more. API keys from various sources can be added to refine the search.

theHarvester -d tesla.com -b all -l 500

Vulnerability Scanning (Tenable) – White Box – Authenticated Scan

Moving forward, we will talk about ‘authenticated’ vulnerability scanning.
If authenticated access to the platform is provided (or assumed), we can use vulnerability scanners (Tenable.cs is used in our case) to try to identify potential vulnerabilities and misconfigurations.
Authorization and authentication between Tenable and GCP are performed via service accounts.

Potential vulnerabilities might help us identify highly privileged accounts or misconfigurations that we could exploit to gain access or elevate privileges.

Highly Privileged account identified

Discover & Enumeration

Enumeration refers to the process of actively gathering information about a cloud infrastructure or application to identify potential vulnerabilities or weaknesses that can be exploited by attackers.
This involves using various tools and techniques to identify assets, services, open ports, user accounts, configurations, and other relevant details that can be used to gain unauthorised access or perform malicious activities.
Enumeration is a critical component of cloud penetration testing as it helps us identify potential attack vectors and inform the subsequent stages of the penetration testing process.

At this point, we will assume that the pentest’s scope has been defined. With our scope set, we can move into enumeration/discovery. This can be done by using the asset manager that GCP (or any platform) offers, or by querying the GCP APIs for specific services.
We will be using a custom-built script that scans a specific project and returns the following findings.

  • Public / Private IPs
  • Storage Buckets
  • Compute Instances
  • IAM
  • Clusters
  • Subnets
  • Cloud Functions
  • Firewalls
  • Peering Information

Enumeration Script

The script is just a compilation of gcloud commands and some formatting; its purpose is to help us get a rough idea of the cloud environment we are working with.
Just copy the script (below), change its permissions, and execute it.

enumeration.sh (Script)

#!/bin/bash

# Color variables
green='\033[0;32m'
nocolor='\033[0m'

gcloud init
echo -e ""

read -p "Please enter Output Directory Name : " directory
mkdir ./$directory
mkdir ./$directory/networking
echo -e ""

# Org / Folders
gcloud organizations list > $directory/organization.txt
echo -e "" >> $directory/organization.txt
org=$(gcloud organizations list | sed -n 2p | awk '{print $2}')
gcloud resource-manager folders list --organization=$org >> $directory/organization.txt

# Clusters - GKE
gcloud container clusters list > $directory/clusters.txt
echo -e "$green Found GKE clusters, check $directory/clusters.txt for additional information $nocolor"
echo -e ""

# Compute IPs
gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list | uniq | grep "\S" > $directory/networking/public_ips.txt
gcloud compute instances list | awk '{print $5}' | grep ^1 | uniq > $directory/networking/private_ips.txt
pubips=$(cat $directory/networking/public_ips.txt | wc -l)
privips=$(cat $directory/networking/private_ips.txt | wc -l)
echo -e "$green $pubips Public IPs $nocolor"
echo -e "$green $privips Private IPs $nocolor"

# Storage Buckets (bucket names only, one per line)
gcloud storage ls | cut -f3 -d "/" > $directory/buckets.txt
buckets=$(cat $directory/buckets.txt | wc -l)
echo -e "$green $buckets Storage Buckets $nocolor"

# Compute Instances
gcloud compute instances list > $directory/compute_instances.txt
ci=$(cat $directory/compute_instances.txt | wc -l)
echo -e "$green $ci Compute Instances $nocolor"

# Subnets
gcloud compute networks subnets list > $directory/networking/subnets.txt
subnets=$(cat $directory/networking/subnets.txt | wc -l)
echo -e "$green $subnets Subnets $nocolor"

# Firewall Rules
gcloud compute firewall-rules list > $directory/firewalls.txt
firewalls=$(cat $directory/firewalls.txt | wc -l)
echo -e "$green $firewalls Firewall Rules $nocolor"

# IAM (user's permissions)
project=$(gcloud config get-value project)
gcloud projects get-iam-policy $project > $directory/iam.txt

# Cloud Functions
gcloud functions list > $directory/functions.txt
functions=$(cat $directory/functions.txt | wc -l)
echo -e "$green $functions Cloud Functions $nocolor"

# Pub/Sub
gcloud pubsub subscriptions list > $directory/pubsub.txt
echo -e "$green Found pubsub subscriptions, please check $directory/pubsub.txt for additional information. $nocolor"

echo -e ""

# Peering Networks
gcloud compute networks peerings list > $directory/networking/peering.txt
peering=$(gcloud compute networks peerings list | wc -l)
echo -e "$green $project is interconnected (peered) to $peering networks $nocolor"
echo -e ""

echo -e "$green Results saved under $directory directory. Exiting! $nocolor"

The script uses gcloud to query the GCP APIs and is non-invasive; it requires an active user/service account, and the results may vary depending on the account’s rights.
Once the script runs the created folders / files should look like the following image.

Script output files

Public / Private IPs

Discovered IPs can be scanned using ‘traditional’ means (port/service scanners and/or vulnerability scanners) to discover services and potential attack vectors.

Public & Private IPs found in the project

Storage Buckets

Bucket enumeration can be performed as either an authenticated or unauthenticated user. In both scenarios we will be using a tool called GCPBucketBrute from Rhino Security Labs.
GCPBucketBrute is a script used to enumerate Google Storage buckets, determine what access you have to them, and determine if they can be privilege escalated.

GCPBucketBrute can be used either by providing a keyword (which will be used to create a wordlist)

  ./gcpbucketbrute.py -k <keyword>

Or by providing a custom bucket list* (requires authentication)

*In grey/white box engagements we can use the enumeration script to generate a bucket list.

./gcpbucketbrute.py --check-list <path_to_list>
Bucket list generated from the enumeration script.

The scans can be performed while authenticated via an access token / service account, or completely unauthenticated (which can be used to check for unauthenticated access).
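As a rough mental model (this mapping is a simplification, not part of GCPBucketBrute itself), the HTTP status code returned by an unauthenticated listing request against a bucket tells you its exposure. The helper below just encodes that mapping:

```shell
# Map the HTTP status of an unauthenticated bucket-listing request
# to what it implies about the bucket (standard HTTP semantics).
interpret_status() {
  case "$1" in
    200) echo "public - objects listable without authentication" ;;
    401|403) echo "bucket exists but access is denied" ;;
    404) echo "bucket does not exist" ;;
    *) echo "unexpected status: $1" ;;
  esac
}

interpret_status 200
interpret_status 403
```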

Unauthenticated access to a bucket

Compute Instances

Having a clear picture of the compute instances residing in a project can help us identify the architecture (e.g. a containerized environment if Kubernetes is present) as well as the role of the specific project within the company. If we find VMs named ‘finance’ or ‘dev’, it could help us further understand the targeted project and adjust our attack methodology; for example, we could connect users to the project and use this information for social engineering attacks, or use usernames found during our OSINT to brute-force login portals within the project.

Another interesting piece of information we can obtain from within a GCP VM is its access scope.
The authorization provided to applications hosted on a Compute Engine instance is limited by two separate configurations: the roles granted to the attached service account, and the access scopes set on the instance. Both of these configurations must allow access before an application running on the instance can access a resource.

The access scope is set when creating the VM.

We can query the metadata server to retrieve the current access scope of a VM using the following command.

curl http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes -H 'Metadata-Flavor: Google'

A VM with default access scope assigned will return the following.

Keep in mind that this shows which APIs we are allowed to authenticate to; the actions we are allowed to perform are based on the rights assigned by the IAM policies.
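For instance, a quick grep over the scopes response tells us whether a given API is even reachable before we start testing IAM permissions. The sample response below is an abbreviated, illustrative stand-in for what the metadata server returns on a default-scope VM; verify on the target with the curl command above.

```shell
# Sample metadata-server scopes response (abbreviated / illustrative).
scopes='https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.write
https://www.googleapis.com/auth/monitoring.write'

# Can this VM talk to the Storage API at all?
if printf '%s\n' "$scopes" | grep -q 'devstorage'; then
  echo "storage API in scope"
fi
```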


Now, before we move on to the IAM enumeration, it’s important to clarify the main points of some different (but often interconnected) terms.

  • Authentication

    Authentication is the process by which your identity is confirmed through the use of some kind of credential. Authentication is about proving that you are who you say you are. Google provides many APIs and services, which require authentication to access. Google also provides a number of services that host applications written by its customers; these applications also need to determine the identity of their users.
    Authentication methods include, but are not limited to, the following credentials:

    – User Accounts
    – Service Accounts
    – OAuth Tokens
    – API keys 
  • Authorization

    Authorization is the process of determining whether the principal or application attempting to access a resource has been authorised for that level of access.

So, in general, an authentication credential will allow you to authenticate against, let’s say, an API and the authorization -that is granted mainly through IAM- will define the actions that you are allowed to perform. 

Another very important term worth mentioning is ‘Permissions’.

  • Permissions

    Permissions in GCP allow access to a specific type of resource, and a role is a group of such permissions; e.g. the Editor role has all the permissions of the Viewer role plus additional ones allowing it to manage networking, instances, etc.
    ‘compute.instances.create’ is a permission allowing the creation of an instance; ‘roles/editor’ is a role containing this permission. Assigning a role grants a user specific permissions on specific resources.
    Default service accounts in every project are granted the Editor role.

With that out of the way, let’s move to enumerating GCP IAM.
For this task we will be using the very handy scripts created by Rhino Security Labs. Please keep in mind that the scripts require a GCP access token to authenticate.

To fetch an access token for a gcloud CLI-Authenticated user, you can run the following command

gcloud auth print-access-token

Once we obtain our access token, we can use the scripts to:

  1. Enumerate the member permissions
python3 enumerate_member_permissions.py
  2. Check for privilege escalation paths based on the assigned permissions
python3 check_for_privesc.py

The output of the scripts is saved in a file named 'all_org_folder_proj_sa_permissions.json'; reviewing this file will point out the privesc paths we can follow/exploit.
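To get a feel for reviewing that JSON output, here is a minimal sketch using a hand-written, hypothetical permissions file (the real file produced by the Rhino scripts is far larger) to pull out IAM permissions that are classic privesc candidates:

```shell
# Hypothetical, minimal stand-in for 'all_org_folder_proj_sa_permissions.json'.
cat > all_org_folder_proj_sa_permissions.json <<'EOF'
{"projects": {"demo-project": {"user:alice@example.com":
  ["iam.serviceAccounts.getAccessToken", "compute.instances.list"]}}}
EOF

# Flag permissions that commonly enable privilege escalation,
# e.g. minting tokens for other service accounts.
grep -oE 'iam\.serviceAccounts\.[A-Za-z]+' all_org_folder_proj_sa_permissions.json
```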

A detailed walkthrough on how to exploit each permission, along with the usage of the scripts, can be found in a two-part blog: part1, part2

In addition to the script provided by Rhino Labs we can also use our custom enumeration script to obtain information about the roles (both default and custom), groups, service accounts etc. that exist in a project.

The use of the aforementioned scripts will allow us to gain information about the project’s custom roles, assigned permissions, escalation paths and further assist us in identifying valuable user/service accounts.
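For example, since default service accounts are often left with the broad Editor role, a quick pass over the iam.txt file produced by the enumeration script highlights who holds it. The policy excerpt below is a hypothetical sample of `gcloud projects get-iam-policy` output:

```shell
# Hypothetical excerpt of `gcloud projects get-iam-policy` output (iam.txt).
cat > iam.txt <<'EOF'
bindings:
- members:
  - serviceAccount:123456-compute@developer.gserviceaccount.com
  role: roles/editor
- members:
  - user:alice@example.com
  role: roles/viewer
EOF

# Members holding the Editor role are prime targets.
grep -B2 'role: roles/editor' iam.txt | grep -oE '(user|serviceAccount):[^ ]+'
```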

Identifying custom role


Clusters

Moving on to cluster enumeration: with the help of our custom-built script, we can gather information about the existing clusters. This includes the master IP, which can reveal additional information if misconfigured; the node version, which can point out an outdated cluster; and the number of nodes, which allows us to figure out the size of the cluster.

Cluster information


Subnets

Our trusty script also returns information about the networks/subnets that exist in the project, solidifying our scope definition and helping us identify additional targets.

Networking information

Additionally, the peering file contains information about interconnected networks/projects. Interconnected projects pose a higher risk, since an attack could allow us to pivot to another network.

Cloud Functions

Google Cloud Functions allow you to host code that is executed when an event is triggered. Secrets/passwords can potentially be stored or hard-coded in a function’s configuration.
The script outputs the names of the functions, which can then be used to get additional information about each cloud function.

Once we have the cloud function’s name, we can use the following command to review the configuration. 

gcloud functions describe [FUNCTION NAME]
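As a sketch of what to look for in the describe output, the YAML below imitates a fragment of a function’s configuration (the names and values are made up); a simple keyword grep surfaces candidate secrets in the environment variables:

```shell
# Hypothetical fragment of `gcloud functions describe` output.
cat > function.yaml <<'EOF'
name: projects/demo-project/locations/us-central1/functions/billing-sync
environmentVariables:
  DB_PASSWORD: 'S3cr3t!'
  LOG_LEVEL: info
EOF

# Surface likely hard-coded credentials by keyword.
grep -iE 'password|secret|token|api[_-]?key' function.yaml
```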


Pub/Sub

Google Cloud Pub/Sub is a service that allows independent applications to send messages back and forth. Our script lists all the available subscriptions in a project.

Using the pull command and the subscription name obtained from the pubsub.txt file, we can try to view messages that have not yet been acknowledged as delivered. The command returns one or more messages from the specified Cloud Pub/Sub subscription, if there are any messages enqueued.

gcloud pubsub subscriptions pull <subscription_name>

By default, this command returns only one message from the subscription. Use the --limit flag to specify the maximum number of messages to return.

Quick Wins

VM Metadata

Every virtual machine (VM) instance stores its metadata on a metadata server. Your VM automatically has access to the metadata server API without any additional authorization. Metadata is stored as key:value pairs.
Every Compute Instance has access to a dedicated metadata server via the IP address 169.254.169.254 (also resolvable as metadata.google.internal).
You can identify it as a hosts file entry like the one below.

The metadata server available to a given instance will provide any user/process on that instance with an OAuth token that is automatically used as the default credentials when communicating with Google APIs via the gcloud command.

You can retrieve and inspect the token with the following curl command

curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
-H "Metadata-Flavor: Google"
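The response is a small JSON document. The sample below is a made-up token of the same shape (real tokens are much longer), parsed here to isolate the bearer token for use in an `Authorization: Bearer` header against Google APIs:

```shell
# Hypothetical metadata-server token response (illustrative only).
response='{"access_token":"ya29.EXAMPLE-TOKEN","expires_in":3599,"token_type":"Bearer"}'

# Extract the token itself; it can then be passed to Google APIs, e.g.:
#   curl -H "Authorization: Bearer $token" <api_endpoint>
token=$(printf '%s' "$response" | grep -oE '"access_token":"[^"]+"' | cut -d'"' -f4)
echo "$token"
```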

One more quick win, if IAM permissions allow, is to generate a new user along with SSH keys (this creates a new user directory on the server). Using the following command, we can create the new SSH key and directly connect to the VM.

gcloud compute ssh <VM_name>

Make sure to set the --zone parameter if your default zone does not match the VM’s zone; otherwise the command will fail.

Successful SSH creation and login. 
Unsuccessful ssh creation & login, blocked by IAM policies. 

The end

That’s all for now. As I mentioned before, none of the above is new or groundbreaking; it’s just a guide and a way to gather all of my resources in one place. If you find any mistakes or want to add something to the list, please reach out.
I will keep updating the post with new findings.
Take care.



Resources

TCM Academy – Open-Source Intelligence (OSINT Fundamentals)
Bucket Brute – Rhino Security Labs
GCP PrivEsc – Rhino Security Labs
Authentication & Authorization – GCP
PrivEsc & Exploitation on GCP
About VM metadata
Bearer Tokens
Compute Engine – Service Accounts
So You Think You Can Secure Your Cloud : Red Team Engagements in GCP | Pen Test HackFest Summit 2021
Cloud Penetration Testing Workshop | SANS Pen Test HackFest Summit 2020