Introducing Uscrapper 2.0, a powerful OSINT web scraper that allows users to extract various pieces of personal information from a website.
It leverages web scraping techniques and regular expressions to extract email addresses, social media links, author names, geolocations, phone numbers, and usernames from both hyperlinked and non-hyperlinked sources on a webpage, and it supports multithreading to speed up the process. Uscrapper 2.0 is also equipped with advanced anti-web-scraping bypass modules and supports web crawling, so it can scrape data from sublinks within the same domain.
The tool also provides an option to generate a report containing the extracted details.
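To make the core idea concrete, here is a minimal sketch in Python, an illustration only and not Uscrapper's actual code: it fetches a page with the requests library and applies simple regular expressions to pull out email addresses and phone-number-like strings. The URL, function name, and patterns are placeholders chosen for the example.

import re
import requests

def extract_contacts(url: str) -> dict:
    # Fetch the raw HTML of the target page.
    html = requests.get(url, timeout=10).text
    # Simple, permissive patterns; a real scraper would need stricter ones.
    emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
    phones = set(re.findall(r"\+?\d[\d\s().-]{7,}\d", html))
    return {"emails": sorted(emails), "phones": sorted(phones)}

if __name__ == "__main__":
    # Placeholder target; swap in a site you are authorized to scan.
    print(extract_contacts("https://example.com"))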
Uscrapper extracts the following details from the provided website: email addresses, social media links, author names, geolocations, phone numbers, and usernames.

To install Uscrapper, clone the repository and run the installer script:
git clone https://github.com/z0m31en7/Uscrapper.git
cd Uscrapper/install/
chmod +x ./install.sh && ./install.sh #For Unix/Linux systems
To run Uscrapper, use the following command-line syntax:
python Uscrapper-v2.0.py [-h] [-u URL] [-c (INT)] [-t THREADS] [-O] [-ns]
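As an illustration (not taken from the official documentation), a run against a placeholder target could look like the following, assuming -u takes the target URL, -c the number of sublinks to crawl, -t the thread count, and -O enables report generation, as the syntax above suggests:

python Uscrapper-v2.0.py -u https://example.com -c 10 -t 4 -O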