Ache – Web Crawler For Domain-Specific Search

ACHE is a focused web crawler. It collects web pages that satisfy some specific criteria, e.g., pages that belong to a given domain or that contain a user-specified pattern.

ACHE differs from generic crawlers in the sense that it uses page classifiers to distinguish between relevant and irrelevant pages in a given domain.

A page classifier can range from a simple regular expression (for example, one that matches every page containing a specific word) to a machine-learning-based classification model.

ACHE can also automatically learn how to prioritize links in order to efficiently locate relevant content while avoiding the retrieval of irrelevant content.

ACHE supports many features, such as:

  • Regular crawling of a fixed list of web sites
  • Discovery and crawling of new relevant web sites through automatic link prioritization
  • Configuration of different types of page classifiers (machine learning, regex, etc.)
  • Continuous re-crawling of sitemaps to discover new pages
  • Indexing of crawled pages using Elasticsearch
  • Web interface for searching crawled pages in real-time
  • REST API and web-based user interface for crawler monitoring
  • Crawling of hidden services using TOR proxies


Ache Installation

You can either build ACHE from the source code, download the executable binary using conda, or use Docker to build an image and run ACHE in a container.

Build from source with Gradle

Prerequisite: You will need to install a recent version of Java (JDK 8 or later).

To build ACHE from source, you can run the following commands in your terminal:

git clone https://github.com/ViDA-NYU/ache.git
cd ache
./gradlew installDist

which will generate an installation package under ache/build/install/. You can then make the ache command available in the terminal by adding the ACHE binaries to the PATH environment variable:

export ACHE_HOME="{path-to-cloned-ache-repository}/build/install/ache"
export PATH="$ACHE_HOME/bin:$PATH"
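
As a quick sanity check (assuming the build succeeded and your shell has picked up the new PATH), you can ask ACHE to list its available commands; if the help command is not available in your version, running ache with no arguments also prints usage information:

ache help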

Running using Docker

Prerequisite: You will need to install a recent version of Docker. See https://docs.docker.com/engine/installation/ for details on how to install Docker for your platform.

We publish pre-built docker images on Docker Hub for each released version. You can run the latest image using:

docker run -p 8080:8080 vidanyu/ache:latest

Alternatively, you can build the image yourself and run it:

git clone https://github.com/ViDA-NYU/ache.git
cd ache
docker build -t ache .
docker run -p 8080:8080 ache

The Dockerfile exposes two data volumes so that you can mount a directory with your configuration files (at /config) and preserve the crawler stored data (at /data) after the container stops.
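
For example, a run that mounts a local directory holding your ache.yml, seed file, and model, and keeps the crawl output on the host, might look like the sketch below. The names my-config, my-data, my.seeds, and model/ are placeholders, and this assumes the image's entrypoint is the ache launcher, so the startCrawl arguments are the same ones described under Running ACHE below:

docker run -p 8080:8080 \
  -v "$PWD/my-config:/config" \
  -v "$PWD/my-data:/data" \
  vidanyu/ache startCrawl -c /config/ -s /config/my.seeds -o /data/ -m /config/model/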

Download with Conda

Prerequisite: You need to have the Conda package manager installed on your system.

If you use Conda, you can install ache from Anaconda Cloud by running:

conda install -c vida-nyu ache

NOTE: Only tagged release versions are published to Anaconda Cloud, so the version available through Conda may not be up to date. If you want to try the most recent version, please clone the repository and build from source, or use the Docker version.

Running ACHE

Before starting a crawl, you need to create a configuration file named ache.yml. We provide some configuration samples in the repository’s config directory that can help you to get started.

You will also need a page classifier configuration file named pageclassifier.yml. For details on how to configure a page classifier, refer to the page classifiers documentation.
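
For illustration only, a minimal regex-based classifier might look like the sketch below; the title_regex type and its regular_expression parameter are taken from the page classifiers documentation, and my_model/ is just a placeholder directory name:

mkdir -p my_model
cat > my_model/pageclassifier.yml <<'EOF'
# Classify a page as relevant if its title matches this pattern
type: title_regex
parameters:
  regular_expression: ".*(security|crawler).*"
EOF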

After you have configured a classifier, the last thing you will need is a seed file, i.e., a plain text file containing one URL per line. The crawler will use these URLs to bootstrap the crawl.
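
For example, a seed file is just a list of start URLs (the URLs below are placeholders; use pages relevant to your domain):

cat > my.seeds <<'EOF'
http://example.com/
http://example.org/blog/
EOF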

Finally, you can start the crawler using the following command:

ache startCrawl -o <data-output-path> -c <config-path> -s <seed-file> -m <model-path>

where,

  • <config-path> is the path to the config directory that contains ache.yml.
  • <seed-file> is the seed file that contains the seed URLs.
  • <model-path> is the path to the model directory that contains the file pageclassifier.yml.
  • <data-output-path> is the path to the data output directory.

Example of running ACHE using the sample pre-trained page classifier model and the sample seeds file available in the repository:

ache startCrawl -o output -c config/sample_config -s config/sample.seeds -m config/sample_model

The crawler will run and print its logs to the console. Hit Ctrl+C at any time to stop it (it may take some time to shut down). For long crawls, you should run ACHE in the background using a tool like nohup, as shown below.
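
For example, the sample crawl above could be kept running after you log out like this (the output directory and log file name are just examples):

nohup ache startCrawl -o output -c config/sample_config -s config/sample.seeds -m config/sample_model > crawl.log 2>&1 &
tail -f crawl.log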

Data Formats

ACHE can output data in multiple formats. The data formats currently available are:

  • FILES (default) – raw content and metadata are stored in rolling compressed files of fixed size.
  • ELASTICSEARCH – raw content and metadata are indexed in an Elasticsearch index.
  • KAFKA – pushes raw content and metadata to an Apache Kafka topic.
  • WARC – stores data using the standard WARC format used by web archives and Common Crawl.
  • FILESYSTEM_HTML – only raw page content is stored in plain text files.
  • FILESYSTEM_JSON – raw content and metadata are stored in JSON files.
  • FILESYSTEM_CBOR – raw content and some metadata are stored in CBOR files.
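
The output format is selected in ache.yml. As a sketch (the key name below follows the ACHE sample configurations; verify it against config/sample_config/ache.yml for your version):

# ache.yml: store crawled pages in WARC files instead of the default FILES format
target_storage.data_format.type: WARC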

Credit: aecio.santos@nyu.edu & kien.pham@nyu.edu
