ShadowClone is designed to delegate time-consuming tasks to the cloud by distributing the input data across multiple serverless functions (AWS Lambda, Azure Functions, etc.) and running the tasks in parallel, resulting in a huge performance boost!
ShadowClone uses IBM's awesome Lithops library, which is at the core of this tool, to distribute the workloads to serverless functions. Effectively, it is a proof-of-concept script showcasing the power of cloud computing for performing our regular pentesting tasks.
Configuration
There are two parts to the configuration: cloud and local.
Although the final script is cloud-agnostic and should work with any supported platform, I have only tested it on AWS so far. Instructions for setting up GCP, Azure, and IBM Cloud environments will be added soon. For AWS, attach the following IAM policy to the identity that ShadowClone will use:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "lambda:*",
                "ec2:*",
                "ecr:*",
                "sts:GetCallerIdentity"
            ],
            "Resource": "*"
        }
    ]
}
```
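Markdown rendering can silently eat the `*` wildcards in a policy like the one above, so it is worth sanity-checking the JSON locally before pasting it into the IAM console. A minimal sketch in plain Python (no AWS calls involved):

```python
import json

# The IAM policy from above, kept as a raw string so the wildcards survive.
policy = """
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": ["s3:*", "lambda:*", "ec2:*", "ecr:*", "sts:GetCallerIdentity"],
            "Resource": "*"
        }
    ]
}
"""

doc = json.loads(policy)  # raises an error if the JSON is malformed
actions = doc["Statement"][0]["Action"]
# Every service ShadowClone touches must be covered.
assert {"s3:*", "lambda:*", "ec2:*", "ecr:*"} <= set(actions)
print("policy OK:", actions)
```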
If you are using AWS and would like to keep costs within the free tier, I highly recommend following this article and setting up some budgets and alerts.
```
git clone https://github.com/fyoorer/ShadowClone.git
cd ShadowClone
python -m venv env
source env/bin/activate
pip install -r requirements.txt
```
All the magic happens in the Lithops library, which is installed along with the other requirements. Verify that the `lithops` command-line utility works by running `lithops test`:
```
⚡ lithops test
2022-01-18 08:08:45,832 [INFO] lithops.config -- Lithops v2.5.8
2022-01-18 08:08:45,833 [INFO] lithops.storage.backends.localhost.localhost -- Localhost storage client created
2022-01-18 08:08:45,833 [INFO] lithops.localhost.localhost -- Localhost compute client created
2022-01-18 08:08:45,833 [INFO] lithops.invokers -- ExecutorID b9419a-0 | JobID A000 - Selected Runtime: python
2022-01-18 08:08:45,833 [INFO] lithops.invokers -- Runtime python is not yet installed
2022-01-18 08:08:45,833 [INFO] lithops.localhost.localhost -- Extracting preinstalled Python modules from python
2022-01-18 08:08:46,110 [INFO] lithops.invokers -- ExecutorID b9419a-0 | JobID A000 - Starting function invocation: hello() - Total: 1 activations
2022-01-18 08:08:46,111 [INFO] lithops.invokers -- ExecutorID b9419a-0 | JobID A000 - View execution logs at /tmp/lithops/logs/b9419a-0-A000.log
2022-01-18 08:08:46,111 [INFO] lithops.wait -- ExecutorID b9419a-0 - Getting results from functions
100%|████████████████████████████████████████████████████████████| 1/1
2022-01-18 08:08:48,125 [INFO] lithops.executors -- ExecutorID b9419a-0 - Cleaning temporary data
Hello fyoorer! Lithops is working as expected 🙂
```
If you see this, that means Lithops is installed and working as intended.
Next, create the Lithops configuration file at `~/.lithops/config` and copy the following content into it:

```
vi ~/.lithops/config
```
```yaml
lithops:
    backend: aws_lambda
    storage: aws_s3

aws:
    access_key_id: AKIA[REDACTED]  #changeme
    secret_access_key: xxxx[REDACTED]xxxx  #changeme
    #account_id:  # Optional

aws_lambda:
    execution_role: arn:aws:iam::123123123123:role/lithops-execution-role  #changeme
    region_name: us-east-1
    runtime_memory: 512
    runtime_timeout: 330

aws_s3:
    storage_bucket: mybucket  #changeme
    region_name: us-east-1
```
The lines marked with `#changeme` need to be updated with the values noted above:

- `access_key_id` & `secret_access_key`: your account's API credentials
- `execution_role`: the IAM role ARN noted above
- `storage_bucket`: the name of the bucket you wish to use for storing logs

Ensure that the config file is placed at `~/.lithops/config`.
Now we need to build a container image with all our tools baked in which will be used by the serverless function.
Build the image using the `lithops runtime build` command:

```
lithops runtime build sc-runtime -f Dockerfile
```
Next, register the runtime in your cloud environment with the following command:
```
lithops runtime create sc-runtime --memory 512 --timeout 300
```
Check that the runtime was registered successfully:

```
lithops runtime list
```
Copy the runtime name displayed in the output. We will need it in the next step.
Finally, update `config.py` with the name of your runtime and the bucket:

```python
LITHOPS_RUNTIME="lithops_v2-5-8_ke73/sc-runtime"  # runtime name obtained from above
STORAGE_BUCKET="mytestbucket"  # name of the 2nd bucket created above
```
Now we are ready to run some lambdas!
Usage
```
python shadowclone.py -h
usage: cloudcli.py [-h] -i INPUT [-s SPLITNUM] [-o OUTPUT] -c COMMAND

optional arguments:
  -h, --help            show this help message and exit
  -i INPUT, --input INPUT
  -s SPLITNUM, --split SPLITNUM
                        Number of lines per chunk of file
  -o OUTPUT, --output OUTPUT
  -c COMMAND, --command COMMAND
                        command to execute
```
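To illustrate how the `-c` command fans out across workers, here is a plain-Python sketch. The `{INPUT}` placeholder and the chunk paths are assumptions for illustration, not ShadowClone's exact internals:

```python
# Hypothetical illustration: expand a command template once per input chunk.
template = "httpx -l {INPUT}"          # a command as it might be passed via -c
chunks = ["/tmp/chunk_000.txt",        # paths a splitter might produce
          "/tmp/chunk_001.txt",
          "/tmp/chunk_002.txt"]

# Each serverless worker would receive one fully expanded command line.
commands = [template.replace("{INPUT}", path) for path in chunks]
for cmd in commands:
    print(cmd)
# -> httpx -l /tmp/chunk_000.txt  (and so on, one line per chunk)
```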
We create a container image during the initial setup and register it as a runtime for our function in AWS, GCP, or Azure. When you execute ShadowClone on your computer, instances of that container are activated automatically and remain active only for the duration of the execution. The number of instances to activate is decided dynamically at runtime, depending on the size of the input file and the split factor. The input is then split into chunks and distributed equally among the instances, which execute in parallel. For example, if your input file has 10,000 lines and you set the split factor to 100 lines, it will be split into 100 chunks of 100 lines each, and 100 instances will run in parallel!
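The chunking arithmetic above can be sketched in a few lines of plain Python (a simplified model, not ShadowClone's actual splitter):

```python
import math

def plan_chunks(total_lines: int, lines_per_chunk: int) -> int:
    """Return how many serverless workers a given input would fan out to."""
    return math.ceil(total_lines / lines_per_chunk)

# The example from the text: 10,000 lines at 100 lines per chunk.
print(plan_chunks(10_000, 100))   # -> 100 chunks, i.e. 100 parallel instances
# Inputs that don't divide evenly get one extra, smaller chunk.
print(plan_chunks(10_050, 100))   # -> 101
```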
This tool was inspired by the awesome Axiom and Fleex projects and goes beyond the concept of VPS for running the tools by using serverless functions and containers.
| Features | Axiom/Fleex | ShadowClone |
|---|---|---|
| Instances | 10-100s* | 1000s |
| Cost | Per instance/per minute | Mostly free** |
| Startup time | 4-5 minutes | 2-3 seconds |
| Max execution time | Unlimited | 15 minutes |
| Idle cost | $++ | Free |
| On-demand scalability | No | ∞ |
\* Most cloud providers do not allow spinning up too many instances by default, so you are limited to around 10-15 instances at most. You have to ask support to increase this number.

\*\* AWS and Azure allow 1 million invocations per month for free; Google allows 2 million invocations per month for free. You are charged only if you exceed these limits.
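To get a feel for those limits, here is a quick back-of-the-envelope calculation in plain Python. The scan sizes are made-up examples, and real billing also depends on memory allocation and execution duration, not just invocation count:

```python
FREE_INVOCATIONS_AWS = 1_000_000   # per month, from the text
FREE_INVOCATIONS_GCP = 2_000_000

def invocations_per_scan(input_lines: int, lines_per_chunk: int) -> int:
    """One Lambda invocation per chunk of the input file."""
    return -(-input_lines // lines_per_chunk)  # ceiling division

# Hypothetical workload: a 50,000-line wordlist at 100 lines per chunk.
per_scan = invocations_per_scan(50_000, 100)
scans_in_free_tier = FREE_INVOCATIONS_AWS // per_scan
print(per_scan, scans_in_free_tier)   # -> 500 2000
```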