Peirates, a Kubernetes penetration tool, enables an attacker to escalate privilege and pivot through a Kubernetes cluster. It automates known techniques to steal and collect service accounts, obtain further code execution, and gain control of the cluster.
You run Peirates from a container running on Kubernetes.
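One hedged way to stage the tool inside a cluster is to run a throwaway pod you control and copy the release binary into it. The pod name, image, and paths below are illustrative assumptions, not part of Peirates itself:

```yaml
# Sketch of a minimal pod to host the peirates binary.
# The pod name and the alpine:3.19 image are arbitrary choices.
apiVersion: v1
kind: Pod
metadata:
  name: peirates-demo
spec:
  containers:
  - name: shell
    image: alpine:3.19
    command: ["sleep", "infinity"]
```

With a pod like this running, something along the lines of `kubectl cp peirates peirates-demo:/tmp/peirates` followed by `kubectl exec -it peirates-demo -- /tmp/peirates` would drop you into the tool's interactive menu.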
Does Peirates Attack A Kubernetes Cluster?
Yes, it absolutely does. Talk to your lawyer and the cluster owners before using this tool in a Kubernetes cluster.
InGuardians’ CTO Jay Beale first conceived of Peirates and put together a group of InGuardians developers to create it with him, including Faith Alderson, Adam Crompton, and Dave Mayer. Faith convinced us all to learn Golang so she could implement the tool’s use of the kubectl library from the Kubernetes project. Adam persuaded the group to use a highly interactive user interface. Dave brought contagious enthusiasm. Together, these four developers implemented attacks and began releasing this tool that we use on our penetration tests.
Getting Started
If you just want the peirates binary to start attacking things, grab the latest release from the releases page.
However, if you want to build from source, read on!
Get peirates
go get -v "github.com/inguardians/peirates"
Get library sources if you haven’t already (warning: this will take almost a gigabyte of space, because it needs the whole Kubernetes repository)
go get -v "k8s.io/kubectl/pkg/cmd" "github.com/aws/aws-sdk-go"
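Under the pre-module GOPATH layout these commands assume, `go get` places each repository under `$GOPATH/src/<import path>`. A quick sketch of where the Peirates sources land (the default GOPATH of `$HOME/go` is an assumption that holds when the variable is unset):

```shell
# Where `go get` drops the peirates sources under the classic GOPATH layout.
# Falls back to the Go default of $HOME/go if GOPATH is unset.
GOPATH="${GOPATH:-$HOME/go}"
echo "$GOPATH/src/github.com/inguardians/peirates"
```

This is the directory the build step below changes into.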
Build the executable
cd $GOPATH/src/github.com/inguardians/peirates
./build.sh
This will generate an executable file named peirates in the same directory.