Evine : Interactive CLI Web Crawler

Evine is a simple, fast, and interactive web crawler and web scraper written in Go. It is useful for a wide range of purposes, such as metadata and data extraction, data mining, reconnaissance, and testing.

Install

From Binary

Pre-built binary releases are also available.

From source

go get github.com/saeeddhqan/evine
"$GOPATH/bin/evine" -h

From GitHub

git clone https://github.com/saeeddhqan/evine.git
cd evine
go build .
mv evine /usr/local/bin
evine --help

Note: Go 1.13.x is required.

Commands & Usage

Keybinding   Description
Enter        Run crawler (from the URL view)
Enter        Display response (from the Keys and Regex views)
Tab          Next view
Ctrl+Space   Run crawler
Ctrl+S       Save response
Ctrl+Z       Quit
Ctrl+R       Restore default values (from the Options and Headers views)
Ctrl+Q       Close the response save view (from the Save view)

evine -h

It displays help for the tool:

-url string
    URL to crawl. Example: evine -url toscrape.com
-url-exclude string
    Exclude URLs matching this regex (default ".*"). Example: evine -url-exclude ?id=
-domain-exclude string
    Exclude in-scope domains from the crawl, comma-separated (default: the root domain). Example: evine -domain-exclude host1.tld,host2.tld
-code-exclude string
    Exclude HTTP status codes, separated with '|' (default ".*"). Example: evine -code-exclude 200,201
-delay int
    Sleep between each request, in milliseconds. Example: evine -delay 300
-depth int
    Scraper depth search level (default 1). Example: evine -depth 2
-thread int
    Number of concurrent goroutines for resolving (default 5). Example: evine -thread 10
-header string
    HTTP headers for each request, with fields separated by \n. Example: evine -header "KEY: VALUE\nKEY1: VALUE1"
-proxy string
    Proxy as scheme://ip:port. Example: evine -proxy http://1.1.1.1:8080
-scheme string
    Scheme for the requests (default "https"). Example: evine -scheme http
-timeout int
    Seconds to wait before timing out (default 10). Example: evine -timeout 15
-keys string
    Data to extract: email, url, query_urls, all_urls, phone, media, css, script, cdn, comment, dns, network, all, or a file extension. Example: evine -keys urls,pdf,txt
-regex string
    Regular expression to search for in the page contents. Example: evine -regex 'User.+'
-max-regex int
    Maximum number of results for the regex search (default 1000). Example: evine -max-regex -1
-robots
    Scrape robots.txt for URLs and use them as seeds. Example: evine -robots
-sitemap
    Scrape sitemap.xml for URLs and use them as seeds. Example: evine -sitemap
-wayback
    Scrape the Wayback Machine (web.archive.org) for URLs and use them as seeds. Example: evine -wayback
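Flags can be combined in a single run. A sketch of one possible invocation, using only flags documented above (the target and values are illustrative placeholders):

```shell
# Crawl toscrape.com two levels deep with 10 workers,
# seed the crawl from robots.txt, and extract emails and URLs.
evine -url toscrape.com \
      -depth 2 \
      -thread 10 \
      -robots \
      -keys email,url
```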

VIEWS

  • URL: Enter the target URL here.
  • Options: Set the crawler options.
  • Headers: Set the HTTP headers.
  • Keys: Used after crawling to extract data (documents, URLs, etc.) from the crawled web pages.
  • Regex: Search regular expressions in the crawled web pages; write your regex here and press Enter.
  • Response: All results are written to this view.
  • Search: Search regular expressions within the Response content.
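To illustrate what the Regex view does conceptually, here is a minimal Go sketch (not evine's actual implementation) that applies a user-supplied pattern to fetched page content, with a result cap analogous to the -max-regex flag; the function name `findMatches` and the sample content are hypothetical:

```go
package main

import (
	"fmt"
	"regexp"
)

// findMatches applies a user-supplied pattern to page content and
// collects matches, capped at max results (max < 0 means unlimited,
// mirroring the -max-regex flag's -1 example above).
func findMatches(content, pattern string, max int) ([]string, error) {
	re, err := regexp.Compile(pattern)
	if err != nil {
		return nil, err
	}
	return re.FindAllString(content, max), nil
}

func main() {
	page := "User: alice\nUser: bob\nAdmin: carol"
	matches, err := findMatches(page, `User.+`, -1)
	if err != nil {
		panic(err)
	}
	for _, m := range matches {
		fmt.Println(m) // prints each matching line
	}
}
```

Because `.` does not match newlines by default, the pattern `User.+` matches each "User: ..." line individually, which is the behavior the -regex example above relies on.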