Hakrawler is a Go web crawler designed for easy, quick discovery of endpoints and assets within a web application. It can be used to discover URLs, form actions, subdomains, JavaScript files, robots.txt and sitemap.xml entries, and Wayback Machine URLs.
The tool is designed to be easily chained with other tools, such as subdomain enumeration tools and vulnerability scanners, for example:
assetfinder target.com | hakrawler | some-xss-scanner
Features
Installation
go get github.com/hakluke/hakrawler
If the hakrawler binary is not already on your $PATH, run it directly from your Go bin directory:
~/go/bin/hakrawler
Note that if you need to do this, you probably want to add your Go bin directory to your $PATH to make things easier!
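For example, assuming the default Go install location of ~/go/bin, adding a line like the following to your shell profile would put the binary on your $PATH:
export PATH="$PATH:$HOME/go/bin"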
Usage
Note: multiple domains can be crawled by piping them into hakrawler from stdin. If only a single domain is being crawled, it can be supplied with the -url flag.
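For example (domains.txt and example.com are placeholders):
cat domains.txt | hakrawler
hakrawler -url example.com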
$ hakrawler -h
Usage of hakrawler:
-all
Include everything in output - this is the default, so this option is superfluous (default true)
-auth string
The value of this will be included as an Authorization header
-cookie string
The value of this will be included as a Cookie header
-depth int
Maximum depth to crawl, the default is 1. Anything above 1 will include URLs from robots, sitemap, waybackurls and the initial crawler as a seed. Higher numbers take longer but yield more results. (default 1)
-forms
Include form actions in output
-js
Include links to utilised JavaScript files
-linkfinder
Run linkfinder on javascript files.
-outdir string
Directory to save discovered raw HTTP requests
-plain
Don't use colours or print the banners to allow for easier parsing
-robots
Include robots.txt entries in output
-scope string
Scope to include:
strict = specified domain only
subs = specified domain and subdomains
fuzzy = anything containing the supplied domain
yolo = everything (default "subs")
-sitemap
Include sitemap.xml entries in output
-subs
Include subdomains in output
-url string
The url that you wish to crawl, e.g. google.com or https://example.com. Scheme defaults to http
-urls
Include URLs in output
-usewayback
Query wayback machine for URLs and add them as seeds for the crawler
-v Display version and exit
-wayback
Include wayback machine entries in output
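To illustrate how these flags combine, here are a few sample invocations (example.com and the cookie value are placeholders, and exact behaviour may vary between hakrawler versions):
Crawl one level deeper, seeding from robots.txt, sitemap.xml and the Wayback Machine, with plain output for piping into other tools:
hakrawler -url example.com -depth 2 -plain
Restrict results to the exact target domain and include links to JavaScript files:
hakrawler -url example.com -scope strict -js
Supply a session cookie to crawl pages behind authentication:
hakrawler -url example.com -cookie "session=0123456789abcdef"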