
ShadowClone : Unleash The Power Of Cloud

ShadowClone is designed to delegate time-consuming tasks to the cloud by distributing the input data across multiple serverless functions (AWS Lambda, Azure Functions, etc.) and running the tasks in parallel, resulting in a huge performance boost!

ShadowClone uses IBM's awesome Lithops library, which is at the core of this tool, to distribute the workloads to serverless functions. Effectively, it is a proof-of-concept script showcasing the power of cloud computing for performing our regular pentesting tasks.
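
To give a feel for what Lithops does, here is a minimal sketch (essentially the canonical Lithops map example, not ShadowClone's actual code): each element of the list is processed by its own serverless invocation on whichever backend is configured.

import lithops

def double(i):
    # Each call runs in its own worker (e.g. one Lambda invocation)
    return i * 2

fexec = lithops.FunctionExecutor()  # backend/storage come from ~/.lithops/config
fexec.map(double, [1, 2, 3, 4])
print(fexec.get_result())           # [2, 4, 6, 8]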

Use Cases

  • DNS bruteforce using a very large wordlist within seconds (see the example invocation after this list)
  • Fuzz through a huge wordlist using ffuf on a single host
  • Fuzz a list of URLs on a single path all from different IP addresses
  • Port scan thousands of IPs in seconds
  • Run a nuclei template on a list of hosts
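
For a quick taste (full usage is covered below), a DNS bruteforce run might look like the following. The {INPUT} placeholder, which ShadowClone replaces with each wordlist chunk, and the puredns binary baked into the runtime image are assumptions based on the project's examples:

python shadowclone.py -i subdomains-top1m.txt -s 1000 -o resolved.txt -c "puredns bruteforce {INPUT} example.com"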

Get Started

Prerequisites

  • AWS/GCP/Azure/IBM cloud Account
  • Docker installed on your local machine (required for the initial setup only)
  • Python 3.8+

Configuration

There are two parts to the configuration: cloud and local.

Although the final script is cloud agnostic and should work with any supported platform, I have only tested it on AWS so far. Instructions for setting up GCP, Azure and IBM cloud environments will be added soon.

Cloud

  • Log in to your AWS account and get API credentials (access key & secret)
  • Go to IAM in AWS console and create a new policy with the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "lambda:*",
                "ec2:*",
                "ecr:*",
                "sts:GetCallerIdentity"
            ],
            "Resource": "*"
        }
    ]
}

  • Create a new role with the "Lambda" use case and attach the above policy to it.
  • Keep a note of the ARN of this role; you will need it later.
  • Go to S3 and create two buckets in the same region where your lambda is going to be executed.
    • One bucket is used for storing logs, runtime information, etc., and the other will be used for storing uploaded files (see the example commands below)
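
If you prefer the CLI over the console, the two buckets can be created like this (the bucket names here are placeholders; S3 bucket names must be globally unique):

aws s3 mb s3://shadowclone-logs --region us-east-1
aws s3 mb s3://shadowclone-files --region us-east-1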

If you are using AWS and would like to control costs and stay within the free-tier budget, I highly recommend following this article and setting up some budgets and alerts.

Local machine

  • Ensure docker is installed on your local machine. This is required for the initial setup only.
  • Clone the repo and install the Python dependencies:

git clone https://github.com/fyoorer/ShadowClone.git
cd ShadowClone
python -m venv env
source env/bin/activate
pip install -r requirements.txt

All the magic happens in the Lithops library, which is installed by the previous command.

  • Verify that the lithops command-line utility is installed by running:

lithops test

⚡ lithops test
2022-01-18 08:08:45,832 [INFO] lithops.config -- Lithops v2.5.8
2022-01-18 08:08:45,833 [INFO] lithops.storage.backends.localhost.localhost -- Localhost storage client created
2022-01-18 08:08:45,833 [INFO] lithops.localhost.localhost -- Localhost compute client created
2022-01-18 08:08:45,833 [INFO] lithops.invokers -- ExecutorID b9419a-0 | JobID A000 - Selected Runtime: python
2022-01-18 08:08:45,833 [INFO] lithops.invokers -- Runtime python is not yet installed
2022-01-18 08:08:45,833 [INFO] lithops.localhost.localhost -- Extracting preinstalled Python modules from python
2022-01-18 08:08:46,110 [INFO] lithops.invokers -- ExecutorID b9419a-0 | JobID A000 - Starting function invocation: hello() - Total: 1 activations
2022-01-18 08:08:46,111 [INFO] lithops.invokers -- ExecutorID b9419a-0 | JobID A000 - View execution logs at /tmp/lithops/logs/b9419a-0-A000.log
2022-01-18 08:08:46,111 [INFO] lithops.wait -- ExecutorID b9419a-0 - Getting results from functions
100%|████████████████████████████████████████████████████████████| 1/1
2022-01-18 08:08:48,125 [INFO] lithops.executors -- ExecutorID b9419a-0 - Cleaning temporary data
Hello fyoorer! Lithops is working as expected 🙂

If you see this, Lithops is installed and working as intended.

  • Now, to make Lithops work with your cloud provider, create a configuration file at ~/.lithops/config and copy the following content into it:

vi ~/.lithops/config

lithops:
    backend: aws_lambda
    storage: aws_s3

aws:
    access_key_id: AKIA[REDACTED] #changeme
    secret_access_key: xxxx[REDACTED]xxxx #changeme
    #account_id:  # Optional

aws_lambda:
    execution_role: arn:aws:iam::123123123123:role/lithops-execution-role #changeme
    region_name: us-east-1
    runtime_memory: 512
    runtime_timeout: 330

aws_s3:
    storage_bucket: mybucket #changeme
    region_name: us-east-1

The lines marked with #changeme need to be updated with the values you noted earlier:

  • access_key_id & secret_access_key – Your account’s API credentials
  • execution_role – Enter the IAM Role ARN noted above
  • storage_bucket – Enter the name of the bucket you wish to use for storing logs

Ensure that the config file is placed at ~/.lithops/config

Build

Now we need to build a container image with all our tools baked in; this image will be used by the serverless function.

Build the image using the lithops runtime build command:

lithops runtime build sc-runtime -f Dockerfile

Next, register the runtime in your cloud environment with the following command:

lithops runtime create sc-runtime --memory 512 --timeout 300

Check that the runtime was registered successfully:

lithops runtime list

Copy the runtime name displayed in the output. We will need it in the next step.

Finally, update config.py with the name of your runtime and the bucket:

LITHOPS_RUNTIME="lithops_v2-5-8_ke73/sc-runtime" # runtime name obtained from above
STORAGE_BUCKET="mytestbucket" # name of the 2nd bucket created above

Run

Finally, we are ready to run some lambdas!

Usage

python shadowclone.py -h
usage: cloudcli.py [-h] -i INPUT [-s SPLITNUM] [-o OUTPUT] -c COMMAND

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
  -s SPLITNUM, --split SPLITNUM
                        Number of lines per chunk of file
  -o OUTPUT, --output OUTPUT
  -c COMMAND, --command COMMAND
                        command to execute
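
Putting the flags together, a hypothetical run fuzzing a single host with ffuf, splitting a large wordlist into 500-line chunks, might look like this (again assuming the {INPUT} chunk placeholder and an ffuf binary present in the runtime image):

python shadowclone.py -i raft-large-words.txt -s 500 -o ffuf-results.txt -c "ffuf -w {INPUT} -u https://target.example/FUZZ"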

How it works

We create a container image during the initial setup and register it as a runtime for our function in AWS/GCP/Azure. When you execute ShadowClone on your computer, instances of that container are activated automatically and stay active only for the duration of the execution. The number of instances to activate is decided dynamically at runtime, based on the size of the input file and the split factor. The input is then split into chunks and distributed equally among all the instances to execute in parallel. For example, if your input file has 10,000 lines and you set the split factor to 100 lines, it will be split into 100 chunks of 100 lines each, and 100 instances will run in parallel!
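
A rough sketch of that split-and-distribute idea, using hypothetical helper names rather than ShadowClone's actual internals:

import lithops

def run_chunk(lines):
    # In ShadowClone, the user-supplied command runs against one chunk
    # inside the container runtime; counting lines stands in for that here.
    return len(lines)

def split_into_chunks(path, chunk_size):
    # Split the input file into lists of chunk_size lines each
    with open(path) as f:
        lines = [line.rstrip('\n') for line in f]
    return [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]

fexec = lithops.FunctionExecutor()  # uses ~/.lithops/config
fexec.map(run_chunk, split_into_chunks('input.txt', 100))  # one invocation per chunk
print(fexec.get_result())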

Features

  • Extremely fast
  • No need to maintain a VPS (or a fleet of them :))
  • Costs almost nothing per month
    • Compatible with free tiers of most cloud services
  • Cloud agnostic
    • Same script works with AWS, GCP, Azure etc.
  • Supports up to 1000 parallel invocations
  • Dynamically decides the number of invocations
  • Run any tool in parallel on the cloud
  • Pipe output to other tools, as shown below
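
For instance, feeding ShadowClone's output into nuclei might look like this (hypothetical commands, with the same assumed {INPUT} placeholder as above):

python shadowclone.py -i hosts.txt -s 50 -c "httpx -l {INPUT}" | nuclei -t cves/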

Comparison

This tool was inspired by the awesome Axiom and Fleex projects, and it goes beyond their VPS-based approach by running the tools in serverless functions and containers.

Feature                  Axiom/Fleex                ShadowClone
Instances                10-100s*                   1000s
Cost                     Per instance, per minute   Mostly free**
Startup Time             4-5 minutes                2-3 seconds
Max Execution Time       Unlimited                  15 minutes
Idle Cost                $++                        Free
On-Demand Scalability    No                         Yes

*Most cloud providers do not allow spinning up many instances by default, so you are limited to around 10-15 instances at most and have to file a support request to raise the limit.

** AWS & Azure allow 1 million invocations per month for free. Google allows 2 million invocations per month for free. You will be charged only if you go above these limits
