! Version 1.5
! Auto-activate JS during the scan if the website is fully JS (web 2.0)
! Added Dockerfile


  • URL fuzzing and dir/file detection
  • Test for backup/old files on every file found (index.php.bak, index.php~ …)
  • Check header information
  • Check DNS information
  • Check whois information
  • Random or custom User-Agent
  • Extract files
  • Keep a trace of the scan
  • Find e-mail addresses on the website and check whether they have leaked
  • CMS detection + version and vulnerabilities
  • Subdomain checker
  • Resume system (if the script is stopped, it picks up again at the same place)
  • WAF detection
  • Add a custom prefix
  • Auto-update script
  • Automatic or custom scan output (scan.txt)
  • Check GitHub
  • Recursive dir/file scan
  • Scan with an authentication cookie
  • Option --profil to skip profile pages during the scan
  • HTML report
  • Works with both Python 2 and Python 3
  • Rate-limit option if the app is unstable (--timesleep)
  • Check the Wayback Machine
  • Handle WAF error responses
  • Check whether a firebaseio database exists and is accessible
  • Automatic thread count based on the website's responses (reconfigured if a WAF is detected too many times). Max: 30
  • Search for S3 buckets in page source code
  • Test WAF bypasses if one is detected
  • Test whether scanning with a "localhost" Host header is possible
  • Dockerfile
  • Activate JavaScript on web-2.0 (fully JS) websites
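The backup/old-file check listed above (index.php.bak, index.php~ …) can be sketched as follows. This is a minimal illustration, not the tool's actual code: the function name `make_backup_candidates` and the exact suffix list are assumptions.

```python
# Hypothetical sketch: derive common backup-file variants for each
# discovered path, so they can be probed with extra requests.
BACKUP_SUFFIXES = [".bak", ".old", ".save", "~", ".swp", ".orig"]

def make_backup_candidates(path: str) -> list[str]:
    """Return likely backup-file names for a discovered path,
    e.g. "index.php" -> "index.php.bak", "index.php~", ..."""
    candidates = [path + s for s in BACKUP_SUFFIXES]
    # Also try the name without its extension ("index.bak") and a
    # leading-dot editor swap file (".index.php.swp"), both seen in the wild.
    stem, dot, _ext = path.rpartition(".")
    if dot:
        candidates += [stem + s for s in BACKUP_SUFFIXES]
    candidates.append("." + path + ".swp")
    return candidates
```

Each candidate would then be requested like any other fuzzed path, flagging any that return a 200.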


P1 is the most important.

  • JS parsing and analysis [P1]
  • Analyse the webpage's HTML code [P1]
  • On-the-fly report writing [P1]
  • Check HTTP headers/SSL security [P2]
  • Fuzz amazonaws S3 buckets [P2]
  • Anonymous routing through proxies (HTTP/S proxy list) [P2]
  • Check pastebin [P2]
  • Access token [P2]
  • Check source code on GitHub for leaked or sensitive data [P2]
  • Check phpMyAdmin version [P3]
  • Scan API endpoints/information leaks [ASAP]
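The "JS parsing and analysis" item could start as simply as pulling URLs and API-looking paths out of fetched JavaScript. A minimal sketch, assuming regex-based extraction (the patterns and the `extract_endpoints` name are illustrative, not the planned implementation):

```python
import re

# Quoted absolute URLs, e.g. "https://example.com/data.json"
URL_RE = re.compile(r"""["'](https?://[^"'\s]+)["']""")
# Quoted relative paths that look like API endpoints, e.g. "/api/users/1"
PATH_RE = re.compile(r"""["'](/(?:api|v\d+)/[A-Za-z0-9_\-/{}.]+)["']""")

def extract_endpoints(js_source: str) -> set[str]:
    """Collect candidate endpoints from a JavaScript source string."""
    found = set(URL_RE.findall(js_source))
    found |= set(PATH_RE.findall(js_source))
    return found
```

A real pass would need to handle string concatenation, template literals, and minified bundles, but this captures the common case of hard-coded endpoints.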


pip(3) install -r requirements.txt
If pip3 causes problems:
sudo python3 -m pip install -r requirements.txt

usage: [-h] [-u URL] [-w WORDLIST] [-s SUBDOMAINS] [-t THREAD] [-a USER_AGENT] [--redirect] [-r]

Optional Arguments:
-h, --help show this help message and exit
-u URL URL to scan [required]
-w WORDLIST Wordlist used for URL fuzzing. Default: dico.txt
-s SUBDOMAINS Subdomain tester
-t THREAD Number of threads to use for URL fuzzing. Default: 20
-a USER_AGENT Choose the User-Agent
--redirect Scan with redirect responses (301/302)
-r Recursive dir/file scan
-p PREFIX Add a prefix to wordlist entries during the scan
-o OUTPUT Output to site_scan.txt (default in the website directory)
-b Add a backup-file scan like '…' but longer
--exclude EXCLUDE Define a page or response status code to exclude during the scan
--timesleep TS Define a sleep time/rate limit if the app is unstable during the scan
--auto Automatic thread count based on the website's responses. Max: 30
--update Automatic update
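The options above map naturally onto Python's argparse. The sketch below mirrors the documented flags and defaults; it is an illustration of the CLI shape, not the tool's actual parser (destination names are assumptions).

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Build a parser matching the help text above (hypothetical)."""
    p = argparse.ArgumentParser(description="URL fuzzing and site scanner")
    p.add_argument("-u", dest="url", required=True, help="URL to scan")
    p.add_argument("-w", dest="wordlist", default="dico.txt",
                   help="Wordlist used for URL fuzzing")
    p.add_argument("-s", dest="subdomains", help="Subdomain tester")
    p.add_argument("-t", dest="thread", type=int, default=20,
                   help="Number of threads for URL fuzzing")
    p.add_argument("-a", dest="user_agent", help="Choose the User-Agent")
    p.add_argument("--redirect", action="store_true",
                   help="Scan with redirect responses (301/302)")
    p.add_argument("-r", dest="recursive", action="store_true",
                   help="Recursive dir/file scan")
    p.add_argument("-p", dest="prefix", help="Prefix added to wordlist entries")
    p.add_argument("-o", dest="output", help="Output file (site_scan.txt)")
    p.add_argument("-b", dest="backup", action="store_true",
                   help="Extended backup-file scan")
    p.add_argument("--exclude", help="Page or status code to exclude")
    p.add_argument("--timesleep", type=float,
                   help="Sleep time/rate limit between requests")
    p.add_argument("--auto", action="store_true",
                   help="Automatic thread count (max 30)")
    p.add_argument("--update", action="store_true", help="Automatic update")
    return p
```

With this shape, `parse_args(["-u", "https://example.com", "-t", "5", "--redirect"])` yields an object whose `thread` is 5 and whose unset options keep their documented defaults.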


python -u <url> -w dico_extra.txt
//With redirect
python -u <url> -w dico_extra.txt -t 5 --redirect
//With backup-file scan
python -u <url> -w dico_extra.txt -t 5 -b
//With an excluded page
python -u <url> -w dico_extra.txt -t 5 --exclude <page>
//With an excluded response code
python -u <url> -w dico_extra.txt -t 5 --exclude 403

Credit: Layno & Sanguinarius & Cyber_Ph4ntoM