CrossLinked: Mastering LinkedIn Enumeration with Search Engine Scraping

CrossLinked is a LinkedIn enumeration tool that uses search engine scraping to collect valid employee names from an organization. This technique provides accurate results without the use of API keys, credentials, or accessing LinkedIn directly!

Sponsors

Scrape public LinkedIn profile data at scale with Proxycurl APIs.

• Scraping public profiles is battle-tested in court (hiQ vs. LinkedIn)
• GDPR, CCPA, and SOC 2 compliant
• High rate limit: 300 requests/minute
• Fast: APIs respond in ~2s
• Fresh data: 88% of data is scraped in real time; the remaining 12% is no older than 29 days
• High accuracy
• Tons of data points returned per profile

Built for developers, by developers.

Install

PyPi

Install the last stable release from PyPi:

pip3 install crosslinked

Poetry

Install and run the latest code using Poetry:

git clone https://github.com/m8sec/crosslinked
cd crosslinked
poetry install
poetry run crosslinked -h

Python

Install the most recent code from GitHub:

git clone https://github.com/m8sec/crosslinked
cd crosslinked
pip3 install .

Prerequisites

CrossLinked assumes the organization’s account naming convention has already been identified. This is required for execution and should be supplied via the command-line arguments in your expected output format. See the Naming Format and Example Usage sections below:

Naming Format

{first}.{last}          = john.smith
CMP\{first}{l}          = CMP\johns
{f}{last}@company.com   = jsmith@company.com
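
The templates above can be illustrated with a short sketch. Note that `format_account` is a hypothetical helper written for this example, not part of CrossLinked itself:

```python
# Hypothetical sketch of how a naming-format template maps a scraped
# full name onto an account name; not CrossLinked's actual code.
def format_account(fmt: str, full_name: str) -> str:
    parts = full_name.lower().split()
    first, last = parts[0], parts[-1]  # default: first and last tokens
    return fmt.format(first=first, last=last, f=first[0], l=last[0])

print(format_account('{first}.{last}@company.com', 'John Smith'))  # john.smith@company.com
print(format_account('{f}{last}@company.com', 'John Smith'))       # jsmith@company.com
```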

Advanced Formatting

New Feature

To be compatible with alternate naming conventions, CrossLinked allows users to control the index position of the name extracted from the search text. If the name is not long enough, or errors are encountered parsing the search string, CrossLinked reverts to its default format.

Note: the search string array starts at 0. Negative numbers can also be used to count backwards from the last value.

# Default output
python3 crosslinked.py -f '{first}.{last}@company.com' Company
John David Smith = john.smith@company.com

# Use the second-to-last name as "last"
python3 crosslinked.py -f '{0:first}.{-2:last}@company.com' Company
John David Smith    = john.david@company.com
Jane Doe            = jane.doe@company.com

# Use the second item in the array as "last"
python3 crosslinked.py -f '{first}.{1:last}@company.com' Company
John David Smith    = john.david@company.com
Jane Doe            = jane.doe@company.com
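
One possible interpretation of the indexed syntax and its fallback behavior, reproducing the example outputs above. This is a sketch written for illustration, with `format_indexed` and the exact fallback rule as assumptions, not the tool's real parser:

```python
import re

# Sketch of '{index:field}' selection with fallback to the default
# position when the index does not fit the name; an assumption based
# on the examples above, not CrossLinked's actual implementation.
def format_indexed(fmt: str, full_name: str) -> str:
    parts = full_name.lower().split()
    base = {'first': 0, 'last': -1, 'f': 0, 'l': -1}  # default positions

    def sub(m):
        idx, field = m.group('idx'), m.group('field')
        i = int(idx) if idx is not None else base[field]
        if not -len(parts) < i < len(parts):
            i = base[field]  # name too short: revert to default position
        name = parts[i]
        return name[0] if field in ('f', 'l') else name

    return re.sub(r'\{(?:(?P<idx>-?\d+):)?(?P<field>first|last|f|l)\}', sub, fmt)

print(format_indexed('{0:first}.{-2:last}@company.com', 'John David Smith'))  # john.david@company.com
print(format_indexed('{0:first}.{-2:last}@company.com', 'Jane Doe'))          # jane.doe@company.com
```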

Search

By default, CrossLinked uses the Google and Bing search engines to identify employees of the target organization. After execution, two files (names.txt & names.csv) will appear in the current directory, unless modified in the CMD args.

  • names.txt – List of unique user accounts in the specified format.
  • names.csv – Raw search data. See the Parse section below for more.
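
The underlying technique is a search-engine dork scoped to LinkedIn profile pages. A minimal sketch of the kind of query URLs such a scraper builds; the exact dorks and parameters CrossLinked uses may differ:

```python
from urllib.parse import quote_plus

# Illustrative only: build Google/Bing query URLs scoped to
# linkedin.com/in profile pages mentioning the target company.
def build_queries(company: str) -> list[str]:
    dork = f'site:linkedin.com/in "{company}"'
    return [
        f'https://www.google.com/search?q={quote_plus(dork)}&num=100',
        f'https://www.bing.com/search?q={quote_plus(dork)}&count=50',
    ]

for url in build_queries('Company'):
    print(url)
```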

Example Usage

python3 crosslinked.py -f '{first}.{last}@domain.com' company_name
python3 crosslinked.py -f 'domain\{f}{last}' -t 15 -j 2 company_name

Parse

Account naming convention changed after execution, and now you're hitting CAPTCHA requests? No problem!

CrossLinked writes a names.csv output file, which stores all raw scraping data, including: name, job title, and url. This can be ingested and parsed to reformat user accounts as needed.

Example Usage

python3 crosslinked.py -f '{f}{last}@domain.com' names.csv
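
Because names.csv is plain CSV, it can also be re-parsed offline with a few lines of Python. A sketch assuming a `name` column header as described above; `reformat` is a hypothetical helper, not a CrossLinked function:

```python
import csv

# Sketch: re-derive account names from a previously scraped names.csv
# without touching the search engines again. Column names are an
# assumption based on the fields described above (name, job title, url).
def reformat(csv_path: str, fmt: str) -> list[str]:
    accounts = set()
    with open(csv_path, newline='', encoding='utf-8') as fh:
        for row in csv.DictReader(fh):
            parts = row['name'].lower().split()
            if len(parts) < 2:
                continue  # skip entries without a first and last name
            first, last = parts[0], parts[-1]
            accounts.add(fmt.format(first=first, last=last, f=first[0], l=last[0]))
    return sorted(accounts)
```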

Additional Options

Proxy Rotation

The latest version of CrossLinked provides proxy support to rotate source addresses. Users can input a single proxy with --proxy 127.0.0.1:8080 or use multiple via --proxy-file proxies.txt.

> cat proxies.txt
127.0.0.1:8080
socks4://111.111.111.111
socks5://222.222.222.222

> python3 crosslinked.py --proxy-file proxies.txt -f '{first}.{last}@company.com' -t 10 "Company"
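
Rotation of this kind typically cycles through the proxy list round-robin, one proxy per request. A minimal sketch assuming the proxies.txt format shown above; CrossLinked's internals may differ:

```python
import itertools

# Sketch: load proxies.txt and rotate source addresses round-robin.
def load_proxies(path: str):
    with open(path, encoding='utf-8') as fh:
        proxies = [line.strip() for line in fh if line.strip()]
    return itertools.cycle(proxies)  # endless round-robin iterator

# Each outgoing request would then pull the next proxy, e.g. with the
# requests library: requests.get(url, proxies={'http': nxt, 'https': nxt})
```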

Command-Line Arguments

positional arguments:
  company_name        Target company name

optional arguments:
  -h, --help          show help message and exit
  -t TIMEOUT          Max timeout per search (Default=15)
  -j JITTER           Jitter between requests (Default=1)

Search arguments:
  --search ENGINE     Search Engine (Default='google,bing')

Output arguments:
  -f NFORMAT          Format names, ex: 'domain\{f}{last}', '{first}.{last}@domain.com'
  -o OUTFILE          Change name of output file (omit extension)

Proxy arguments:
  --proxy PROXY       Proxy requests (IP:Port)
  --proxy-file PROXY  Load proxies from file for rotation

Contribute

Contribute to the project by:

  • Like and share the tool!
  • Create an issue to report any problems or, better yet, initiate a PR.
  • Reach out with any potential features or improvements @m8sec.