Installation Instructions and Folder Setup for gcpwn on Kali Linux

Docker Install

If you want to use Docker to run the tool, you can use the existing Dockerfile to create a container with the tool and all dependencies installed.

On startup, the container will drop you into a venv, allowing you to run “python3 main.py”. Note that because it is Docker, your data will be wiped when you exit the container unless you mount volumes with -v (see the example after the build/run commands below).

# From gcpwn base directory
docker build -t gcpwn .
docker run -it gcpwn
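
To persist output across container runs, you can mount host directories as volumes. A minimal sketch, assuming the tool lives in /app inside the container (check the Dockerfile's WORKDIR for the real path):

# Hypothetical volume mounts so GatheredData/LoggedActions survive the container exiting
docker run -it \
  -v "$(pwd)/GatheredData:/app/GatheredData" \
  -v "$(pwd)/LoggedActions:/app/LoggedActions" \
  gcpwn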

Local Install

Note: I cannot guarantee support for other OS types or deviations from the instructions below, but feel free to file issues if any major problems arise.

Supported OS: Kali Linux 6.6.9

Python Version: Python 3.11.8

Installation Instructions:

  1. Set up a virtual environment as shown below. This is not required, but I ran into fewer dependency issues this way.
  2. Clone the code from the official NetSPI GitHub organization; maybe check out some other cool repositories while you're there 😉
  3. Run the setup script. If you don't want to run the setup script and would rather do the same steps manually, you just need to install the gcloud CLI tool and pip install all the libraries in the requirements.txt file.
  4. Start the tool via python3 main.py. If this is your first time, the tool will ask you to create a workspace.
    • A workspace is a purely logical container; you can pass in whatever name you want. See the subsequent wiki sections on adding authentication and running modules.
# Setup a virtual environment
python3 -m venv ./myenv
source myenv/bin/activate

# Clone the tool
git clone https://github.com/NetSPI/gcpwn.git

# Run setup.sh; this installs the gcloud CLI tool and pip3 installs requirements.txt (do those steps manually if you prefer)
chmod +x setup.sh; ./setup.sh
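
# Manual alternative to setup.sh: install the gcloud CLI per Google's official
# install docs (https://cloud.google.com/sdk/docs/install), then install the Python deps
pip3 install -r requirements.txt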

# Launch the tool after all items installed & create first workspace
python3 main.py
[*] No workspaces were detected.
New workspace name: my_workspace
[*] Workspace 'my_workspace' created.

Welcome to your workspace! Type 'help' or '?' to see available commands.

[*] Listing existing credentials...

Submit the name or index of an existing credential from above, or add NEW credentials via Application Default 
Credentials (adc - google.auth.default()), a file pointing to adc credentials, a standalone OAuth2 Token, 
or Service credentials. See wiki for details on each. To proceed with no credentials just hit ENTER and submit 
an empty string. 
 [1] *adc      <credential_name> [tokeninfo]                    (ex. adc mydefaultcreds [tokeninfo]) 
 [2] *adc-file <credential_name> <filepath> [tokeninfo]         (ex. adc-file mydefaultcreds /tmp/name2.json)
 [3] *oauth2   <credential_name> <token_value> [tokeninfo]      (ex. oauth2 mydefaultcreds ya[TRUNCATED]i3jJK)  
 [4] service   <credential_name> <filepath_to_service_creds>    (ex. service mydefaultcreds /tmp/name2.json)

*To get scope and/or email info for OAuth2 tokens (options 1-3), include a third argument of 
"tokeninfo" to send the token to Google's official oauth2 endpoint and get back its scope. 
tokeninfo will set the credential name for oauth2; otherwise the credential name you supply will be used.
Advised for best results. See https://cloud.google.com/docs/authentication/token-types#access-contents.
Using tokeninfo will add scope/email to your references if not auto-picked up.

Input:  
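
For example, to register a downloaded service account key under the name my_sa_creds (the credential name and key path here are purely illustrative), you would submit:

Input: service my_sa_creds /tmp/sa-key.json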

Auto-Generated Folder Descriptions

Two folders, “GatheredData” and “LoggedActions”, are auto-created and populated as you run the tool:

  1. GatheredData/[workspace_id]/* – This folder contains all data downloaded by modules, along with any IAM analysis reports for permissions/roles.
    • For example, running modules run enum_buckets --download will try to download blobs to this folder, and running modules run process_iam_bindings will write its summary reports here if --csv or --txt is specified (see the example commands after this list).
  2. LoggedActions/[workspace_id]/* – This folder timestamps when modules start/end so you can better compare the actions taken against your own logs during blue team exercises.
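
For example, inside the tool (the commands and flags here are taken from the descriptions above):

# Downloads blobs into GatheredData/[workspace_id]/
modules run enum_buckets --download

# Writes IAM summary reports into GatheredData/[workspace_id]/
modules run process_iam_bindings --csv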

Internal databases store this information. You don't need to know the details below, but if you're interested:

  1. databases/* – Three databases are created in the databases folder:
    1. workspaces.db – The smallest database; it just contains the workspace name plus the integer ID assigned to it.
    2. session.db – Contains your session credentials in JSON-serialized form, including individual permission actions in a JSON data structure for the given credname.
    3. service_info.db – Contains all the object attributes as you enumerate data. When querying data from the tool, this is the database you are interacting with.
      • If you want to get resource information manually via sqlite3, this is the database to point to (see the sketch after this list).
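
A minimal sketch of querying that database manually; the path is relative to the gcpwn base directory, and the exact table names may vary, so list them first:

# Open the enumeration database and inspect its contents
sqlite3 databases/service_info.db
sqlite> .tables
sqlite> .schema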