Twayback – Downloading Deleted Tweets From The Wayback Machine, Made Easy

Sad News July 2023 – Due to recent changes to the Twitter platform, this tool is unable to work as intended.

Login is now required even to view Tweets, to say nothing of the rate limiting. I'll leave this repo up in case anyone wants to learn from it, but I likely will not be updating the tool.

Thank you to everyone who contributed to and found some utility in this project.

Finding and downloading deleted Tweets takes a lot of time. Thankfully, with this tool, it becomes a piece of cake!

Twayback is a portmanteau of Twitter and the Wayback Machine. Enter your desired Twitter username, and let Twayback do the rest!


  • Can download some or all of a user’s archived deleted Tweets.
  • Lets you extract Tweet text to a text file (yes, even quote retweets!)
  • Has the ability to screenshot deleted Tweets.
  • Allows custom time range to narrow search for deleted Tweets archived between two dates.
  • Differentiates between accounts that are active, suspended, or don’t/no longer exist.
  • Lets you know if a target handle’s archived Tweets have been excluded from the Wayback Machine.
  • Saves a log of the deleted tweet URLs in case you want to view on the Wayback Machine.
  • Ability to rotate through a list of proxy servers to avoid 429 errors. You will need to do this for data sets larger than about 800 tweets.
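The proxy-rotation idea behind --proxy-file can be sketched in Python. This is a minimal illustration of the described behavior (one url:port per line, a fresh proxy picked at random per batch), not Twayback's actual implementation:

```python
import random

def load_proxies(path):
    """Read one proxy (url:port) per line, skipping blank lines."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def pick_proxy(proxies):
    """Pick a fresh proxy at random, as described, after each batch."""
    return random.choice(proxies)
```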


-u, --username                                        Specify target user's Twitter handle

--batch-size                                          Specify how many URLs you would like to 
                                                      examine at a time. Expecting an integer between
                                                      1 and 100. A larger number will give you a speed
                                                      boost but at the risk of errors. Default = 100

--semaphore-size                                      Specify how many URLs from --batch-size you would
                                                      like to query asynchronously at once. Expecting an integer
                                                      between 1 and 50. A larger number will give you a speed
                                                      boost but at the risk of errors. Default = 50

-from, --fromdate                                     Narrow search for deleted Tweets *archived*
                                                      on and after this date
                                                      (can be combined with -to)
                                                      (format YYYY-MM-DD or YYYY/MM/DD
                                                      or YYYYMMDD, doesn't matter)
-to, --todate                                         Narrow search for deleted Tweets *archived*
                                                      on and before this date
                                                      (can be combined with -from)
                                                      (format YYYY-MM-DD or YYYY/MM/DD
                                                      or YYYYMMDD, doesn't matter)

--proxy-file                                          Provide a list of proxies to use. You'll need this for checking large groups of tweets
                                                      Each line should contain one url:port to use
                                                      The script will pick a new proxy from the list at random after each --batch-size       

Logs                                                  After checking a user's tweets but before you
                                                      make a download selection, a folder will be created
                                                      with that username. That folder will contain a log of:
                                                      <deleted-twitter-url>:<deleted-wayback-url> in case you needed them
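The relationship between --batch-size and --semaphore-size can be sketched with asyncio: URLs are taken in batches, and a semaphore caps how many are queried concurrently within each batch. The function names and the stubbed fetch are illustrative, not Twayback's code:

```python
import asyncio

async def check_url(url, semaphore):
    """Check one URL; the semaphore caps concurrent queries."""
    async with semaphore:
        # A real implementation would issue an HTTP request here.
        await asyncio.sleep(0)
        return url

async def check_all(urls, batch_size=100, semaphore_size=50):
    """Process URLs batch by batch, semaphore_size at a time within a batch."""
    results = []
    semaphore = asyncio.Semaphore(semaphore_size)
    for i in range(0, len(urls), batch_size):
        batch = urls[i:i + batch_size]
        results += await asyncio.gather(
            *(check_url(u, semaphore) for u in batch))
    return results
```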

twayback -u taylorswift13                             Downloads all of @taylorswift13's
                                                      deleted Tweets

twayback -u jack -from 2022-01-05                     Downloads all of @jack's
                                                      deleted Tweets
                                                      *archived* since January 5,
                                                      2022 until now

twayback -u drake -to 2022/02/09                      Downloads all of @drake's
                                                      deleted Tweets *archived*
                                                      since the beginning until
                                                      February 9, 2022

twayback -u EA -from 2020-08-30 -to 2020-09-15        Downloads all of @EA's
                                                      deleted Tweets *archived*
                                                      between August 30, 2020 and
                                                      September 15, 2020


git clone
cd twayback
pip3 install -r requirements.txt

or possibly

pip install -r requirements.txt

Run the command:

python3 -u USERNAME

(Replace USERNAME with your target handle).

For more information, check out the Usage section above.


Screenshots are done using Playwright. To successfully take screenshots, please follow these steps:

  1. Open a terminal window.
  2. Run: playwright install.


The default speed settings for --semaphore-size and --batch-size are set for the fastest possible execution. Reduce these numbers to slow down execution and reduce the chance of errors. For checking large numbers of tweets (more than about 800), you'll need to use web proxies and the --proxy-file flag.

Things To Keep In Mind

  • Quality of the HTML files depends on how the Wayback Machine saved them. Some are better than others.
  • This tool is best for text. You might have some luck with photos. You cannot download videos.
  • By definition, if an account is suspended or no longer exists, all their Tweets would be considered deleted.
  • Custom date range is not about when Tweets were made, but rather when they were archived. For example, a Tweet from 2011 may have been archived today.
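The archive-date filtering maps naturally onto the Wayback Machine's CDX API, which accepts from= and to= parameters in YYYYMMDD form. This sketch only builds the query URL; it is an assumption about how such a tool would query the archive, not Twayback's exact code:

```python
from urllib.parse import urlencode

def cdx_query(username, from_date=None, to_date=None):
    """Build a Wayback CDX API URL listing archived status URLs for a user.

    Dates may be YYYY-MM-DD, YYYY/MM/DD, or YYYYMMDD; separators are stripped.
    """
    params = {
        "url": f"twitter.com/{username}/status/*",
        "output": "json",
    }
    if from_date:
        params["from"] = from_date.replace("-", "").replace("/", "")
    if to_date:
        params["to"] = to_date.replace("-", "").replace("/", "")
    return "https://web.archive.org/cdx/search/cdx?" + urlencode(params)
```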

Agentic Security – Enhancing LLM Resilience With Open-Source Vulnerability Scanning

In an era where large language models (LLMs) are integral to technological advancements, ensuring their security is paramount.

Agentic Security offers a pioneering open-source vulnerability scanner designed to robustly test and enhance the resilience of LLMs.

This tool not only integrates seamlessly but also provides customizable attack simulations to safeguard against emerging threats.


  • Customizable Rule Sets or Agent based attacks
  • Comprehensive fuzzing for any LLMs
  • LLM API integration and stress testing
  • Wide range of fuzzing and attack techniques
  • Custom Hugging Face datasets (e.g. markush1/LLM-Jailbreak-Classifier)
  • Local CSV datasets

Note: Please be aware that Agentic Security is designed as a safety scanner tool and not a foolproof solution. It cannot guarantee complete protection against all possible threats.


To get started with Agentic Security, simply install the package using pip:

pip install agentic_security

Quick Start


python -m agentic_security
# or
agentic_security --help

agentic_security --port=PORT --host=HOST

2024-04-13 13:21:31.157 | INFO     | - Found 1 CSV files
2024-04-13 13:21:31.157 | INFO     | - CSV files: ['prompts.csv']
INFO:     Started server process [18524]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on (Press CTRL+C to quit)

LLM kwargs

Agentic Security uses plain text HTTP spec like:

Authorization: Bearer sk-xxxxxxxxx
Content-Type: application/json

{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "<<PROMPT>>"}],
     "temperature": 0.7
}

Where <<PROMPT>> will be replaced with the actual attack vector during the scan. Insert your app credentials as the Bearer value of the Authorization header.
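The placeholder mechanics can be sketched as string substitution into the request body template before each attack is sent. This is an illustration of the idea, not Agentic Security's internal code:

```python
import json

# Template mirroring the HTTP spec above; <<PROMPT>> is the placeholder.
REQUEST_TEMPLATE = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "<<PROMPT>>"}],
    "temperature": 0.7,
}

def render_request(template, attack_vector):
    """Replace the quoted <<PROMPT>> token with a concrete attack string."""
    body = json.dumps(template)
    # Replace the whole quoted token so the attack string is JSON-escaped.
    return json.loads(body.replace('"<<PROMPT>>"', json.dumps(attack_vector)))
```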

Adding LLM Integration Templates



For more information click here.

Collector – The Ultimate OSINT Toolkit For Digital Sleuthing

Collector is an OSINT and information-gathering tool. I built it for fun, and you can use it to perform OSINT.

For GitHub and Instagram accounts, you can look up information by username.


git clone
cd collector 
pip install -r requirements.txt


# Help 
python3 -h

# Phone number
python3 -n <phone number>

# Github account
python3 -g <target username>

# Ip address
python3 -i <ip address>

# Instagram account
python3 -ig <target username>

# Check update
python3 --update

# Login instagram account
python3 --login -u <YOUR USERNAME> -p <YOUR PASSWORD>

# Change instagram account
python3 --change -u <YOUR USERNAME> -p <YOUR PASSWORD>


Information Gathering Phone Numbers

  • Country code
  • National number
  • International format
  • National format
  • Time zone
  • ISP
  • Location
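A minimal sketch of this kind of phone-number breakdown, using a tiny hand-rolled calling-code table. Real tools typically rely on a full library such as phonenumbers; the table and field names here are assumptions for illustration:

```python
# Tiny illustrative calling-code table; a real tool would use a full dataset.
CALLING_CODES = {"1": "US/CA", "44": "GB", "62": "ID", "91": "IN"}

def split_e164(number):
    """Split an E.164 number (+<code><national>) into its parts."""
    digits = number.lstrip("+")
    # Try longer calling codes first so "44" wins over a hypothetical "4".
    for code in sorted(CALLING_CODES, key=len, reverse=True):
        if digits.startswith(code):
            national = digits[len(code):]
            return {
                "country_code": code,
                "region": CALLING_CODES[code],
                "national_number": national,
                "international_format": f"+{code} {national}",
            }
    return None
```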

Information Gathering Github Account

  • Login
  • Id
  • Node id
  • Avatar url
  • Gravatar url
  • Url
  • Html url
  • Followers url
  • Following url
  • Gists url
  • Starred url
  • Subscriptions url
  • Organizations url
  • Repos url
  • Events url
  • Received events url
  • Type
  • Site admin
  • Name
  • Company
  • Blog
  • Location
  • Email
  • Hireable
  • Bio
  • Twitter username
  • Public repos
  • Public gists
  • Followers
  • Following
  • Created at
  • Updated at
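These fields correspond one-to-one with the JSON returned by GitHub's public REST endpoint GET https://api.github.com/users/<username>, which is almost certainly what Collector queries. A sketch of building the request and flattening a response (the sample payload used in testing is illustrative):

```python
from urllib.request import Request

def github_user_request(username):
    """Build the request for GitHub's public user endpoint."""
    return Request(
        f"https://api.github.com/users/{username}",
        headers={"Accept": "application/vnd.github+json"},
    )

def summarize_user(payload):
    """Pick out a few of the fields Collector reports from the JSON payload."""
    keys = ("login", "id", "name", "followers", "following", "created_at")
    return {k: payload.get(k) for k in keys}
```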

Information Gathering IP Address

  • Ip
  • Version
  • City
  • Region
  • Region code
  • Country
  • Country name
  • Country code
  • Country code iso3
  • Country capital
  • Country tld
  • Continent code
  • In EU
  • Postal
  • Latitude
  • Longitude
  • Timezone
  • Utc offset
  • Country calling code
  • Currency
  • Currency name
  • Languages
  • Country area
  • Country population
  • Asn
  • Org


This cannot be used to track someone, and it cannot guarantee an accurate public IP.

Information Gathering Instagram Account

  • Username
  • Fullname
  • User id
  • Number of posts
  • Followers
  • Following
  • Bio
  • Is business account
  • Business type
  • External url
  • Is private
  • Highlights
  • Likes
  • Stories
  • Download posts & profile picture

Darvester GEN2 – Revolutionizing Discord Data Harvesting With Enhanced OSINT Capabilities

The next-generation tool in Discord user and guild information harvesting. Built on Dart/Flutter, this platform offers a comprehensive suite of OSINT capabilities, ensuring compliance with rate limits while enhancing automated processing.

Dive into the world of efficient data gathering with Darvester’s advanced features and a user-friendly Material 3 interface.

PoC Discord User And Guild Information Harvester

Darvester aims to provide safe Discord OSINT harvesting, abiding by sane rate limiting and providing automated processing – now written in Dart/Flutter

Repo Notice

Currently, there is no activity and much of this code is likely outdated. The main task at hand is making nyxx-self feature complete, comparable to the discord.py-self library.

Aside from that, the ported harvester loop in the gen2 branch is incomplete and is waiting for nyxx-self.

Although there is much to work on, the frontend supports importing an SQLite database populated with the Python branch.
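Inspecting such a database from Python is straightforward with the stdlib sqlite3 module. The table and column names below are hypothetical, for illustration only; check the actual schema produced by the Python branch before relying on them:

```python
import sqlite3

def list_users(db_path):
    """Return (id, name) rows from a hypothetical 'users' table,
    ordered by when each user was first seen."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT id, name FROM users ORDER BY first_seen").fetchall()
    finally:
        con.close()
```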


  • Rate-limit/soft ban avoidance
  • Automated processing
  • Flexible configuration
  • Utilization of the Git version control system to provide chronological data
  • Detailed logging
  • and more

Data Logged For Each User

  • Profile created date, and first seen date
  • Username and discriminator
  • User ID (or Snowflake)
  • Bio/about me
  • Connected accounts (reddit, YouTube, Facebook, etc.)
  • Public Discord flags (Discord Staff, Early Bot Developer, Certified Mod, etc.)
  • Avatar URL
  • Status/Activity (“Playing”, “Listening to”, etc.)
  • Nitro tier

Data Logged For Each Guild

  • Name
  • Icon URL
  • Owner name and ID
  • Splash URL
  • Member count
  • Description
  • Features (thread length, community, etc.)
  • Nitro tier


By using this tool, you agree not to hold the contributors and developers accountable for any damages that may occur. This tool violates the Discord Terms of Service and may result in your access to Discord services being terminated.

What Does Darvester Do?

Darvester is meant to be the all-in-one solution for open-source intelligence on the Discord platform.

With the recent GEN2 releases, Darvester now provides an easy-to-use frontend UI along with new features such as multiple harvesting instances, or Isolates, for individual tokens, a refreshed Material 3 UI, and an all-in-one packaged app – no more Python dependencies and virtual environments!

AVOSINT – Harnessing Aviation Intelligence From Open Sources

AVOSINT is a cutting-edge tool designed to search for, extract, and analyze aviation-related intelligence from public sources.

It utilizes powerful OSINT techniques to monitor aircraft movements, gather historical data, and retrieve detailed aircraft information.

This article explores how AVOSINT can be deployed and its various capabilities in aviation intelligence gathering.


Launch parsr docker image (for pdf-file stored registers)

docker run -p 3001:3001 axarev/parsr

Launch Avosint

./ [--action ACTION] [--tail-number TAIL-NUMBER] [--icao ICAO]

With ACTION being either ICAO, tail, convert, or monitor.

tail – Gather info starting from a tail number. Option --tail-number is required.

convert – Convert USA hex to ICAO. Option --icao is required.

monitor – Gathers positional information from OSINT sources and detects hovering patterns. Option --icao is required.
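The hovering detection behind the monitor action can be sketched as a pure geometry check: if every recent position stays within a small radius of the track's centroid, the aircraft is loitering. The radius threshold here is an arbitrary illustrative value, not AVOSINT's actual parameter:

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def is_hovering(track, radius_km=5.0):
    """True if every point in the track stays near the track's centroid."""
    lat = sum(p[0] for p in track) / len(track)
    lon = sum(p[1] for p in track) / len(track)
    return all(haversine_km((lat, lon), p) <= radius_km for p in track)
```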

Returns the following information when possible:

  • Owner of the aircraft
  • User of the aircraft
  • Aircraft transponder id
  • Aircraft manufacturer serial number
  • Aircraft model
  • Aircraft picture links
  • Aircraft incident history

The following display is then presented:

Current Status: [Done]
Last action: tail
Current tail: {tail_n}
✈️ Aircraft infos:

        Manufacturer: {}
        Manufacturer Serial Number: {}
        Tail Number: {}
        Call Sign: {}
        Last known position: {}
        Last known altitude: {}
Owner infos

        Name: {} 
        Street: {}   
        City: {} 
        ZIP: {}
        Country: {}
New Action [ICAO, tail, convert, monitor, exit, quit] (None):


Install Python Requirements

pip install -r requirements.txt

This tool also uses the OpenSkyApi available. Install it using:

git clone 
pip install -e /path/to/repository/python

Install Parsr Docker Image

docker run -p 3001:3001 axarev/parsr


As some registers are in the form of a PDF file, AVOSINT uses parsr. Due to a bug in the current version of the parsr library (axa-group/Parsr#565 (comment)), it is necessary to apply the following fix in the parsr-client Python library:

return {
- 'file': file,
- 'config': config,
+ 'file': file_path,
+ 'config': config_path,
  'status_code': r.status_code,
  'server_response': r.text
}

Mr.Holmes – A Comprehensive Guide To Installing And Using The OSINT Tool

Mr.Holmes is an information-gathering (OSINT) tool. Its main purpose is to gather information about domains, usernames, and phone numbers with the help of public sources available on the internet; it also uses Google dork queries for specific research.

It also uses proxies to make your requests completely anonymous, and a WhoIs API to get more information about a domain.
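The Google-dork idea can be sketched as simple query construction: wrap operators such as intext:, inurl:, and site: around the target term and URL-encode the result. The operator set chosen here is illustrative, not Mr.Holmes's exact dork list:

```python
from urllib.parse import quote_plus

def build_dorks(target):
    """Return a few Google dork search URLs for a username or domain."""
    queries = [
        f'intext:"{target}"',
        f'inurl:"{target}"',
        f'site:pastebin.com "{target}"',
    ]
    return ["https://www.google.com/search?q=" + quote_plus(q)
            for q in queries]
```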


This tool is not 100% accurate, so it can fail sometimes. It is made for educational and research purposes only; I do not assume any responsibility for any improper use of this tool.


git clone
cd Mr.Holmes
sudo apt-get update
sudo chmod +x
sudo bash


If you encounter errors during the Python library installation, use this method:

git clone
sudo apt-get update
cd Mr.Holmes
python3 -m venv .lib_venv
sudo chmod +x
sudo bash
source .lib_venv/bin/activate
pip3 install -r requirements.txt


If you have Git installed on your Windows machine, you can run the following commands:

git clone
cd Mr.Holmes

For more information click here.

Infoooze – Your Comprehensive Guide To OSINT Tools

Infoooze is a powerful and user-friendly OSINT (Open-Source Intelligence) tool that allows you to quickly and easily gather information about a specific target.

With Infoooze, you can easily search for information about websites, IP addresses, usernames, and more, all from the convenience of a simple command-line interface.

One of the key features of Infoooze is its ability to work as a global package, allowing you to use it from any directory on your computer.

It also has the ability to automatically save the results of your searches to a text file. This means that you can easily access and refer to the information you have gathered at a later time.

Infoooze is easy to install and use, making it an ideal tool for anyone looking to gather information quickly and efficiently.


  1. InstaGram Recon
  2. Subdomain Scanner
  3. Ports Scan
  4. User Recon
  5. Mail finder
  6. URL Scanner
  7. Exif metadata
  8. Whois Lookup
  9. IP Lookup
  10. Header Info
  11. Website Age
  12. DNS Lookup
  13. UserAgent Lookup
  14. Git Recon
  15. URL Expander
  16. Youtube Lookup
  17. Instagram DP Viewer
  18. Save Results to file

Getting Started


You need NodeJs 12 or later to run this tool.
To install Node.js, follow the instructions for your operating system:

  • Linux
sudo apt-get install nodejs
  • On many distros NodeJs is installed by default.
  • Termux
pkg install nodejs-lts 
  • Windows
    • Download the latest LTS version from NodeJs.
    • Run the installer.
    • Follow the prompts in the installer (Accept the license agreement, click the NEXT button a bunch of times and accept the default installation settings).
    • Restart your computer. You won’t be able to run Node.js until you restart your computer.

For more information click here.

OSINT Template Engine – Revolutionizing Cybersecurity With Customizable Data Collection Templates

OSINT Template Engine is a research-grade tool for OSINT Information gathering & Attack Surface Mapping which uses customizable templates to collect data from sources.

It allows for new template creation and modification of existing ones which gives it a competitive advantage over other tools of the same category. For more information see the documentation.

There are thousands of sources that provide OSINT information about various targets through their API Endpoints.

OSINT Template Engine combines these capabilities into templates (a single template for a single OSINT source), giving you access to their entire set of available API endpoints.

Unlike other available tools, OSINT Template Engine allows you to easily modify API endpoint interactions by customizing the endpoint's links and parameters, which helps prevent dead links in cases of API endpoint updates and upgrades by the OSINT sources.
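The core template idea — one editable definition per OSINT source, with the endpoint URL and parameters kept as data rather than code — can be sketched like this. The template structure below is a guess for illustration; see the project's documentation for the real format:

```python
# Hypothetical template: the endpoint is kept as editable data so it can be
# updated when the upstream API changes, without touching tool code.
TEMPLATE = {
    "source": "example-osint-source",
    "endpoint": "https://api.example.com/v2/lookup?domain={target}&key={api_key}",
}

def render_endpoint(template, target, api_key):
    """Fill the template's endpoint URL with a target and credentials."""
    return template["endpoint"].format(target=target, api_key=api_key)
```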

Go Defender – Advanced Techniques To Shield Go Applications From Debugging And Virtualization Attacks

This Go package provides functionality to detect and defend against various forms of debugging tools and virtualization environments. By the way, for quick setup, run install.bat.


  • Triage Detection: Detects if the system is running in a triage or analysis environment.
  • Monitor Metrics: Monitors system metrics to identify abnormal behavior indicative of virtualization.
  • VirtualBox Detection: Detects the presence of Oracle VirtualBox.
  • VMware Detection: Detects the presence of VMware virtualization software.
  • KVM Check: Checks for Kernel-based Virtual Machine (KVM) hypervisor.
  • Username Check: Verifies if the current user is a default virtualization user.
  • Recent User Activity: Checks user activity; if there are fewer than 20 files, it exits.
  • USB Mount: Checks if a USB was ever plugged into the computer before.


This module includes functions to detect and prevent debugging and analysis of the running process.

  • IsDebuggerPresent: Checks if a debugger is currently attached to the process.
  • Remote Debugger: Detects if a remote debugger is connected to the process.
  • PC Uptime: Monitors system uptime to detect debugging attempts based on system restarts.
  • Check Blacklisted Windows Names: Verifies if the process name matches any blacklisted names commonly used by debuggers.
  • Running Processes: Retrieves a list of running processes and identifies potential malicious ones.
  • Parent Anti-Debug: Detects if the parent process is attempting to debug the current process.
  • Kill Bad Processes: Terminates known malicious processes detected on the system.
  • Detects Usermode AntiAntiDebuggers: Detects user-mode anti-anti-debuggers like ScyllaHide (BASIC).
  • Internet Connection Check: Checks if an internet connection is present.


This module focuses on critical processes that should be monitored or protected.

  • Critical Process: Implements functionality to manage critical processes essential for system operation.
  • SetDebugPrivilege: Grants better permissions.

Quick Nutshell

  • Detects most anti-anti-debugging hooking methods on common anti-debugging functions by checking for bad instructions on function addresses (most effective on x64). It also detects user-mode anti-anti-debuggers like ScyllaHide and can detect some sandboxes that use hooking to monitor application behavior/activity (like


  • Inspired me to start making this package. Without him, it wouldn’t be here. Check out his GitHub.
  • Provided ideas and much more. Check out his GitHub.
  • I made this because I noticed someone was trying to crack or analyze my other Go programs. Previously, I had many lines of anti-debugging code (I coded lazily and put everything into one), so I wanted to create something quick and reliable that would make a reverse engineer’s life harder. Thus, I made GoDefender.

TODO (V1.0.6 Plans):

  • Check Disk / RAM (If disk size is less than 100GB, exit; and if RAM size is less than 6GB, exit).
  • Flags and artifacts.
  • Execution time is lame, but I guess it can be added as well.
  • Hiding threads through (NtSetInformationThread).
  • There's probably more, but I can't think of any right now.
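The execution-time idea from the TODO list is one of the simplest anti-debug checks: time a trivial operation and treat a suspiciously long elapsed time as evidence of single-stepping. Here is the concept sketched in Python for clarity (GoDefender itself is written in Go); the threshold is an arbitrary illustrative value:

```python
import time

def timing_check(threshold_s=0.5):
    """Return True if trivial work took suspiciously long (debugger hint)."""
    start = time.perf_counter()
    sum(range(10_000))  # trivial work a single-stepping debugger would slow down
    elapsed = time.perf_counter() - start
    return elapsed > threshold_s
```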

X-Recon – Mastering XSS Vulnerability Scanning And Web Reconnaissance

A sophisticated tool designed for web application security enthusiasts.

This utility specializes in identifying web page inputs and performing comprehensive XSS scanning. Whether you’re looking to uncover subdomains, analyze forms, or test for XSS vulnerabilities, X-Recon provides all the necessary functionalities to enhance your security testing efforts.


  • Subdomain Discovery:
    • Retrieves relevant subdomains for the target website and consolidates them into a whitelist. These subdomains can be utilized during the scraping process.
  • Site-wide Link Discovery:
    • Collects all links throughout the website based on the provided whitelist and the specified max_depth.
  • Form and Input Extraction:
    • Identifies all forms and inputs found within the extracted links, generating a JSON output. This JSON output serves as a foundation for leveraging the XSS scanning capability of the tool.
  • XSS Scanning:
    • Once the start recon option returns a custom JSON containing the extracted entries, the X-Recon tool can initiate the XSS vulnerability testing process and furnish you with the desired results!
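The form-and-input extraction step can be sketched with the stdlib html.parser: walk the page, record each form's action and method, and collect its input names into JSON-ready dicts. This is a minimal illustration of the idea, not X-Recon's implementation:

```python
from html.parser import HTMLParser

class FormExtractor(HTMLParser):
    """Collect forms and their input names from an HTML page."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.forms.append({"action": a.get("action", ""),
                               "method": a.get("method", "get"),
                               "inputs": []})
        elif tag == "input" and self.forms:
            self.forms[-1]["inputs"].append(a.get("name", ""))

def extract_forms(html):
    """Parse HTML and return a JSON-ready list of form descriptions."""
    parser = FormExtractor()
    parser.feed(html)
    return parser.forms
```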


The scanning functionality is currently inactive on SPA (Single Page Application) web applications, and we have only tested it on websites developed with PHP, yielding remarkable results. In the future, we plan to incorporate these features into the tool.


$ git clone
$ cd X-Recon
$ python3 -m pip install -r requirements.txt
$ python3

Target For Test:

You can use this address in the Get URL section