CrossLinked is a LinkedIn enumeration tool that uses search engine scraping to collect valid employee names from an organization. This technique provides accurate results without API keys, credentials, or direct access to LinkedIn!
Install the latest stable release from PyPI:
pip3 install crosslinked
Or, install the most recent code from GitHub:
git clone https://github.com/m8sec/crosslinked
cd crosslinked
python3 setup.py install
CrossLinked assumes the organization’s account naming convention has already been identified. This is required for execution and should be added to the CMD args based on your expected output. See the Naming Format and Example Usage sections below:
{f}.{last} = j.smith
{first}.{last} = john.smith
CMP\{first}{l} = CMP\johns
{f}{last}@company.com = jsmith@company.com
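The token expansion above can be sketched in a few lines of Python. This is a hypothetical illustration of the naming convention, not CrossLinked's internal code; the `format_name` helper and its argument names are assumptions:

```python
# Hypothetical sketch of how CrossLinked-style format tokens expand.
# Token names mirror the README: {first}, {last}, {f}, {l}.
def format_name(fmt: str, first: str, last: str) -> str:
    tokens = {
        "{first}": first.lower(),
        "{last}": last.lower(),
        "{f}": first[:1].lower(),
        "{l}": last[:1].lower(),
    }
    for token, value in tokens.items():
        fmt = fmt.replace(token, value)
    return fmt

print(format_name("{f}.{last}", "John", "Smith"))                  # j.smith
print(format_name("{first}.{last}@company.com", "John", "Smith"))  # john.smith@company.com
```

Note that `{first}` is replaced before `{f}` so the shorter token cannot clobber the longer one.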
By default, CrossLinked will use the Google and Bing search engines to identify employees of the target organization. After execution, two files (names.txt & names.csv) will appear in the current directory, unless modified in the CMD args.
See the Parse section below for more.
python3 crosslinked.py -f '{first}.{last}@domain.com' company_name
python3 crosslinked.py -f 'domain{f}{last}' -t 15 -j 2 company_name
Account naming convention changed after execution, and now you're hitting CAPTCHA requests? No problem!
CrossLinked v0.2.0 now includes a names.csv output file, which stores all scraping data including: first name, last name, job title, and url. This can be ingested and parsed to reformat user accounts as needed.
python3 crosslinked.py -f '{f}{last}@domain.com' names.csv
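Re-parsing names.csv yourself is straightforward with the standard library. This is a minimal sketch, assuming the CSV exposes "first" and "last" column headers (the actual header names in CrossLinked's output may differ):

```python
import csv

# Hypothetical sketch: re-derive account names from a CrossLinked names.csv.
# Assumes "first" and "last" column headers, per the fields listed above.
def reformat(csv_path: str, fmt: str) -> list:
    accounts = []
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            name = fmt.replace("{first}", row["first"].lower())
            name = name.replace("{last}", row["last"].lower())
            name = name.replace("{f}", row["first"][:1].lower())
            name = name.replace("{l}", row["last"][:1].lower())
            accounts.append(name)
    return accounts
```

This mirrors the CLI usage above: swap the format string without re-scraping.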
The latest version of CrossLinked provides proxy support to rotate source addresses. Users can input a single proxy with --proxy 127.0.0.1:8080 or use multiple via --proxy-file proxies.txt.
cat proxies.txt
127.0.0.1:8080
socks4://111.111.111.111
socks5://222.222.222.222
python3 crosslinked.py --proxy-file proxies.txt -f '{first}.{last}@company.com' -t 10 "Company"
positional arguments:
company_name Target company name
optional arguments:
-h, --help show help message and exit
-t TIMEOUT Max timeout per search (Default=15)
-j JITTER Jitter between requests (Default=1)
Search arguments:
--search ENGINE Search Engine (Default='google,bing')
Output arguments:
-f NFORMAT Format names, ex: 'domain{f}{last}', '{first}.{last}@domain.com'
-o OUTFILE Change name of output file (omit extension)
Proxy arguments:
--proxy PROXY Proxy requests (IP:Port)
--proxy-file PROXY Load proxies from file for rotation