ZIP File Raider – Burp Extension for ZIP File Payload Testing

ZIP File Raider is a Burp Suite extension for attacking web applications that have ZIP file upload functionality. It lets you inject Burp Scanner/Repeater payloads into the ZIP content of HTTP requests, which is not feasible in Burp by default, and it automates the extraction and re-compression steps.
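
Conceptually, the manual work it replaces looks like the following standalone Python sketch using the standard zipfile module (the function and variable names are illustrative, not the extension's actual code): unpack the archive from the request body, swap one entry's content for a payload, and repack it before resending.

import io
import zipfile

def inject_payload(zip_bytes, target_name, payload):
    # Rebuild the archive, swapping in the payload for one entry (illustrative sketch)
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as src, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == target_name:
                data = payload  # the injected Scanner/Repeater payload (bytes)
            dst.writestr(item, data)
    return out.getvalue()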

ZIP File Raider Installation

  1. Set the Jython standalone JAR in Extender > Options > Python Environment > “Select file…”.
  2. Add the ZIP File Raider extension in Extender > Extensions > Add > CompressedPayloads.py (Extension type: Python).


How to use?

Send the HTTP request with a compressed file to the ZIP File Raider

First, right-click the HTTP request that carries a compressed file in its body, then select “Send request to ZIP File Raider extender Repeater” or Scanner.

Repeater

The Repeater tab lets you edit the contents of the compressed file and promptly resend the request to the server.

Descriptions for ZIP File Raider – Repeater tab:

  1. Files and folders pane – list of files and folders in the compressed file which is sent from the previous step (Send request to …), select a file to edit its content.
  2. Edit pane – edit the content of selected file in text or hex mode (press “Save” after editing one file if you want to edit multiple files in a ZIP file).
  3. Request/Response pane – The HTTP request/response will be shown in this pane after clicking on the “Compress & Go” button.

Scanner

This Scanner tab is used for setting the §insertion point§ in the content of the ZIP file before sending it to Burp Scanner.

Descriptions for ZIP File Raider – Scanner tab:

  1. Files and folders pane – list of files and folders in the compressed file which is sent from the previous step (Send request to …), select a file that you want to set the §insertion points§.
  2. Set insertion point pane – set insertion point in the content of the selected file by clicking on the “Set insertion point” button. (The insertion point will be enclosed with a pair of § symbol)
  3. Config/Status pane – config the scanner and show the scanner status (Not Running/Running).

Credit: Natsasit Jirathammanuwat


NodeJsScan – Static Security Code Scanner For Node.js Applications

NodeJsScan is a static security code scanner (SAST) for Node.js applications.

Configure & Run

Install Postgres and configure SQLALCHEMY_DATABASE_URI in core/settings.py
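
The URI follows the standard SQLAlchemy connection string format for Postgres; for example (user, password and database name are placeholders):

SQLALCHEMY_DATABASE_URI = 'postgresql://username:password@127.0.0.1:5432/nodejsscan'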

pip3 install -r requirements.txt
python3 migrate.py # Run once to create database entries required
python3 app.py # Testing Environment
gunicorn -b 0.0.0.0:9090 app:app --workers 3 --timeout 10000 # Production Environment

This will run it on http://0.0.0.0:9090

If you need to debug, set DEBUG = True in core/settings.py


NodeJsScan CLI

The command line interface (CLI) allows you to integrate NodeJsScan into DevSecOps CI/CD pipelines. Results are produced in JSON format. When you use the CLI, the results are never stored in the backend.

virtualenv venv -p python3
source venv/bin/activate
(venv)$ pip install nodejsscan
(venv)$ nodejsscan
usage: nodejsscan [-h] [-f FILE [FILE ...]] [-d DIRECTORY [DIRECTORY ...]]
                  [-o OUTPUT] [-v]

optional arguments:
  -h, --help            show this help message and exit
  -f FILE [FILE ...], --file FILE [FILE ...]
                        Node.js file(s) to scan
  -d DIRECTORY [DIRECTORY ...], --directory DIRECTORY [DIRECTORY ...]
                        Node.js source code directory/directories to scan
  -o OUTPUT, --output OUTPUT
                        Output file to save JSON report
  -v, --version         Show nodejsscan version
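
For example, a CI pipeline step might scan the checked-out source tree and save a JSON report (the project path is a placeholder):

nodejsscan -d /path/to/node/project -o report.json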

Python API

import core.scanner as njsscan

# Scan one or more source directories; returns a list of findings
res_dir = njsscan.scan_dirs(['/Code/Node.Js-Security-Course'])

# Scan individual file(s); also takes a list of paths
res_file = njsscan.scan_file(['/Code/Node.Js-Security-Course/deserialization.js'])
print(res_file)

[{'title': 'Deserialization Remote Code Injection', 'description': "User controlled data in 'unserialize()' or 'deserialize()' function can result in Object Injection or Remote Code Injection.", 'tag': 'rci', 'line': 11, 'lines': 'app.use(cookieParser())\n\napp.get(\'/\', function(req, res) {\n            if (req.cookies.profile) {\n                var str = new Buffer(req.cookies.profile, \'base64\').toString();\n                var obj = serialize.unserialize(str);\n                if (obj.username) {\n                    res.send("Hello " + escape(obj.username));\n                }\n            } else {', 'filename': 'deserialization.js', 'path': '/Users/ajin/Code/Node.Js-Security-Course/deserialization.js', 'sha2': '06f3f0ff3deed27aeb95955a17abc7722895d3538c14648af97789d8777cee50'}]

Docker

docker build -t nodejsscan .
docker run -it -p 9090:9090 nodejsscan

DockerHub

docker pull opensecurity/nodejsscan
docker run -it -p 9090:9090 opensecurity/nodejsscan:latest


Vba2Graph – Generate Call Graphs From VBA Code For Easier Analysis Of Malicious Documents

Vba2Graph is a tool for security researchers who waste their time analyzing malicious Office macros. It generates a VBA call graph with potentially malicious keywords highlighted.

It allows for quick analysis of malicious macros and easy understanding of the execution flow.

Vba2Graph Features

  • Keyword highlighting
  • VBA Properties support
  • External function declaration support
  • Tricky macros with “_Change” execution triggers
  • Fancy color schemes!

Pros

  • Pretty fast
  • Works well on most malicious macros observed in the wild

Cons

  • Static (dynamically resolved calls will not be recognized)


Installation

Install oletools:

https://github.com/decalage2/oletools/wiki/Install

Install Python Requirements

pip2 install -r requirements.txt

Install Graphviz

Windows

Install Graphviz msi:

https://graphviz.gitlab.io/_pages/Download/Download_windows.html

Add “dot.exe” to PATH env variable or just:

set PATH=%PATH%;C:\Program Files (x86)\Graphviz2.38\bin

Mac

brew install graphviz

Ubuntu

sudo apt-get install graphviz

Arch

sudo pacman -S graphviz

Usage

usage: vba2graph.py [-h] [-o OUTPUT] [-c {0,1,2,3}] (-i INPUT | -f FILE)

optional arguments:
  -h, --help            show this help message and exit
  -o OUTPUT, --output OUTPUT
                        output folder (default: "output")
  -c {0,1,2,3}, --colors {0,1,2,3}
                        color scheme number [0, 1, 2, 3] (default: 0 - B&W)
  -i INPUT, --input INPUT
                        olevba generated file or .bas file
  -f FILE, --file FILE  Office file with macros

Usage Examples (All Platforms)

Only Python 2 is supported:

# Generate call graph directly from an Office file with macros [tnx @doomedraven]
python2 vba2graph.py -f malicious.doc -c 2    

# Generate vba code using olevba then pipe it to vba2graph
olevba malicious.doc | python2 vba2graph.py -c 1

# Generate call graph from VBA code
python2 vba2graph.py -i vba_code.bas -o output_folder

Output

You’ll get 4 folders in your output folder:

  • png: the actual graph image you are looking for
  • svg: same graph image, just in vector graphics
  • dot: the dot file which was used to create the graph image
  • bas: the VBA functions code that was recognized by the script (for debugging)

Examples

Example 1:

Trickbot downloader – utilizes object Resize event as initial trigger, followed by TextBox_Change triggers.


Credit: @MalwareCantFly


Ache – Web Crawler For Domain-Specific Search

ACHE is a focused web crawler. It collects web pages that satisfy some specific criteria, e.g., pages that belong to a given domain or that contain a user-specified pattern.

ACHE differs from generic crawlers in the sense that it uses page classifiers to distinguish between relevant and irrelevant pages in a given domain.

A page classifier can range from a simple regular expression (one that matches every page containing a specific word, for example) to a machine-learning-based classification model.
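
As a concept-only sketch (this is not ACHE's configuration syntax; classifiers are configured via the pageclassifier.yml file covered below), a regex page classifier boils down to something like:

import re

# A page is relevant if its content matches a user-specified pattern
def is_relevant(page_content, pattern=r"specific word"):
    return re.search(pattern, page_content, re.IGNORECASE) is not None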

ACHE can also automatically learn how to prioritize links in order to efficiently locate relevant content while avoiding the retrieval of irrelevant content.

ACHE supports many features, such as:

  • Regular crawling of a fixed list of web sites
  • Discovery and crawling of new relevant web sites through automatic link prioritization
  • Configuration of different types of page classifiers (machine-learning, regex, etc.)
  • Continuous re-crawling of sitemaps to discover new pages
  • Indexing of crawled pages using Elasticsearch
  • Web interface for searching crawled pages in real-time
  • REST API and web-based user interface for crawler monitoring
  • Crawling of hidden services using TOR proxies


Ache Installation

You can either build ACHE from the source code, download the executable binary using conda, or use Docker to build an image and run ACHE in a container.

Build from source with Gradle

Prerequisite: You will need to install a recent version of Java (JDK 8 or later).

To build ACHE from source, you can run the following commands in your terminal:

git clone https://github.com/ViDA-NYU/ache.git
cd ache
./gradlew installDist

which will generate an installation package under ache/build/install/. You can then make the ache command available in the terminal by adding the ACHE binaries to the PATH environment variable:

export ACHE_HOME="{path-to-cloned-ache-repository}/build/install/ache"
export PATH="$ACHE_HOME/bin:$PATH"

Running using Docker

Prerequisite: You will need to install a recent version of Docker. See https://docs.docker.com/engine/installation/ for details on how to install Docker for your platform.

We publish pre-built docker images on Docker Hub for each released version. You can run the latest image using:

docker run -p 8080:8080 vidanyu/ache:latest

Alternatively, you can build the image yourself and run it:

git clone https://github.com/ViDA-NYU/ache.git
cd ache
docker build -t ache .
docker run -p 8080:8080 ache

The Dockerfile exposes two data volumes so that you can mount a directory with your configuration files (at /config) and preserve the crawler stored data (at /data) after the container stops.
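
For example, you might start the container with both volumes mounted so the crawler reads your configuration from the host and persists its data there (host paths are placeholders):

docker run -v $(pwd)/config:/config -v $(pwd)/data:/data -p 8080:8080 vidanyu/ache:latest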

Download with Conda

Prerequisite: You need to have Conda package manager installed in your system.

If you use Conda, you can install ache from Anaconda Cloud by running:

conda install -c vida-nyu ache

NOTE: Only released tagged versions are published to Anaconda Cloud, so the version available through Conda may not be up-to-date. If you want to try the most recent version, please clone the repository and build from source or use the Docker version.

Running ACHE

Before starting a crawl, you need to create a configuration file named ache.yml. We provide some configuration samples in the repository’s config directory that can help you to get started.

You will also need a page classifier configuration file named pageclassifier.yml. For details on how configure a page classifier, refer to the page classifiers documentation.

After you have configured a classifier, the last thing you will need is a seed file, i.e., a plain text file containing one URL per line. The crawler will use these URLs to bootstrap the crawl.
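
A minimal seed file is just a list of URLs, for example:

http://example.com/
http://example.org/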

Finally, you can start the crawler using the following command:

ache startCrawl -o <data-output-path> -c <config-path> -s <seed-file> -m <model-path>

where,

  • <config-path> is the path to the config directory that contains ache.yml.
  • <seed-file> is the seed file that contains the seed URLs.
  • <model-path> is the path to the model directory that contains the file pageclassifier.yml.
  • <data-output-path> is the path to the data output directory.

Example of running ACHE using the sample pre-trained page classifier model and the sample seeds file available in the repository:

ache startCrawl -o output -c config/sample_config -s config/sample.seeds -m config/sample_model

The crawler will run and print the logs to the console. Hit Ctrl+C at any time to stop it (it may take some time). For long crawls, you should run ACHE in the background using a tool like nohup.
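
For example, the sample crawl above could be left running in the background with its output captured to a log file of your choice:

nohup ache startCrawl -o output -c config/sample_config -s config/sample.seeds -m config/sample_model > crawl.log 2>&1 &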

Data Formats

ACHE can output data in multiple formats. The data formats currently available are:

  • FILES (default) – raw content and metadata is stored in rolling compressed files of fixed size.
  • ELASTICSEARCH – raw content and metadata is indexed in an Elasticsearch index.
  • KAFKA – pushes raw content and metadata to an Apache Kafka topic.
  • WARC – stores data using the standard format used by the Web Archive and Common Crawl.
  • FILESYSTEM_HTML – only raw page content is stored in plain text files.
  • FILESYSTEM_JSON – raw content and metadata is stored using JSON format in files.
  • FILESYSTEM_CBOR – raw content and some metadata is stored using CBOR format in files.

Credit: aecio.santos@nyu.edu & kien.pham@nyu.edu


SSH Auditor – Scan For Weak SSH Passwords On Your Network

SSH Auditor is the best way to scan for weak ssh passwords on your network. SSH Auditor will automatically:

  • Re-check all known hosts as new credentials are added. It will only check the new credentials.
  • Queue a full credential scan on any new host discovered.
  • Queue a full credential scan on any known host whose ssh version or key fingerprint changes.
  • Attempt command execution as well as attempt to tunnel a TCP connection.
  • Re-check each credential using a per credential scan_interval – default 14 days.

It’s designed so that you can run ssh-auditor discover + ssh-auditor scan from cron every hour to perform a constant audit.
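
A crontab along those lines might look like this (the binary path and target range are placeholders; the discover and scan commands are shown below):

0 * * * * /usr/local/bin/ssh-auditor discover -p 22 -p 2222 192.168.1.0/24
30 * * * * /usr/local/bin/ssh-auditor scan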


SSH Auditor Installation

$ brew install go # or however you want to install the go compiler
$ go get github.com/ncsa/ssh-auditor

Or build from a git clone:

$ go build

Build A Static Binary Including SQLite

$ make static

Ensure you can use enough file descriptors

$ ulimit -n 4096

Create initial database and discover ssh servers

$ ./ssh-auditor discover -p 22 -p 2222 192.168.1.0/24 10.0.0.1/24

Add credential pairs to check

$ ./ssh-auditor addcredential root root
$ ./ssh-auditor addcredential admin admin
$ ./ssh-auditor addcredential guest guest --scan-interval 1 #check this once per day

Try credentials against discovered hosts in a batch of 20000

$ ./ssh-auditor scan

Output a report on what credentials worked

$ ./ssh-auditor vuln

Re-check credentials that worked

$ ./ssh-auditor rescan

Output a report on duplicate key usage

$ ./ssh-auditor dupes

Video Demos

Earlier demo showing all of the features

Demo showing improved log output


Hassh : Tool Used To Identify Specific Client & Server SSH Implementations

HASSH is a network fingerprinting standard which can be used to identify specific Client and Server SSH implementations. The fingerprints can be easily stored, searched and shared in the form of a small MD5 fingerprint.


What can HASSH help with?

  • Use in highly controlled, well understood environments, where any fingerprints outside of a known good set are alertable.
  • It is possible to detect, control and investigate brute force or credential stuffing password attempts at a higher level of granularity than IP source, which may be impacted by NAT or botnet-like behaviour. The hassh will be a feature of the specific client software implementation being used, even if the IP is NATed such that it is shared by many other SSH clients.
  • Detect covert exfiltration of data within the components of the client algorithm sets. In this case, a specially coded SSH client can send data outbound from a trusted to a less trusted environment within a series of SSH_MSG_KEXINIT packets. In a scenario similar to the better-known exfiltration via DNS, data could be sent as a series of attempted but incomplete and unlogged connections to an SSH server controlled by bad actors, who can then record, decode and reconstitute these pieces of data into their original form. Until now such attempts – much less the contents of the clear-text packets – have not been logged even by mature packet analyzers or on endpoint systems. Detection of this style of exfiltration can now be performed easily by using anomaly detection or alerting on SSH clients with multiple different hassh values.
  • Use in conjunction with other contextual indicators, for example detect Network discovery and Lateral movement attempts by unusual hassh such as those used by Paramiko, Powershell, Ruby, Meterpreter, Empire.
  • Share malicious hassh as Indicators of Compromise.
  • Create an additional level of Client application control, for example one could block all Clients from connecting to an SSH server that are outside of an approved known set of hassh values.
  • Contribute to Non Repudiation in a Forensic context – at a higher level of abstraction than IPSource – which may be impacted by NAT, or where multiple IP Sources are used.
  • Detect deceptive applications, e.g. a hasshServer value known to belong to the Cowrie/Kippo SSH honeypot, which purports to be a common OpenSSH server in its server string.
  • Detect devices having a hassh known to belong to IoT embedded systems. Examples may include cameras, mics, keyloggers and wiretaps that could easily be hidden from view, communicating quietly over encrypted channels back to a control server.

How does it work?

“hassh” and “hasshServer” are MD5 hashes constructed from a specific set of algorithms that are supported by various SSH Client and Server Applications. These algorithms are exchanged after the initial TCP three-way handshake as clear-text packets known as “SSH_MSG_KEXINIT” messages, and are an integral part of the setup of the final encrypted SSH channel. The existence and ordering of these algorithms is unique enough such that it can be used as a fingerprint to help identify the underlying Client and Server application or unique implementation, regardless of higher level ostensible identifiers such as “Client” or “Server” strings.
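
A minimal sketch of that construction, assuming the published hassh recipe (an MD5 over the semicolon-joined key exchange, encryption, MAC and compression algorithm lists announced by the client; hasshServer is built the same way from the server's lists). The algorithm values below are illustrative:

import hashlib

# Client-to-server algorithm lists as announced in an SSH_MSG_KEXINIT packet (illustrative values)
kex_algos = "curve25519-sha256,ecdh-sha2-nistp256,diffie-hellman-group14-sha256"
enc_algos = "chacha20-poly1305@openssh.com,aes128-ctr"
mac_algos = "umac-64-etm@openssh.com,hmac-sha2-256"
cmp_algos = "none,zlib@openssh.com"

# hassh: MD5 over the semicolon-joined lists
hassh = hashlib.md5(";".join([kex_algos, enc_algos, mac_algos, cmp_algos]).encode()).hexdigest()
print(hassh)  # a small, shareable fingerprint of the client implementation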

Credits

hassh and hasshServer were conceived and developed by Ben Reardon within the Detection Cloud Team at Salesforce, with inspiration and contributions from Adel Karimi and the JA3 crew: John B. Althouse, Jeff Atkinson and Josh Atkins.


Pastego – Scrape/Parse Pastebin Using GO & Expression Grammar

Pastego scrapes and parses Pastebin using Go and a parsing expression grammar (PEG).

Pastego Installation

$ go get -u github.com/edoz90/pastego


Usage

Search keywords are case sensitive

pastego -s "password,keygen,PASSWORD"

You can use boolean operators to reduce false positives:

pastego -s "quake && ~earthquake, password && ~(php || sudo || Linux || '<body>')"

This command will search for bins containing quake but not earthquake, and for bins containing password but not php, sudo, Linux, or <body>.

usage: pastego [<flags>]

Flags:
      --help              Show context-sensitive help (also try --help-long and --help-man).
  -s, --search="pass"     Strings to search, i.e: "password,ssh"
  -o, --output="results"  Folder to save the bins
  -i, --insensitive       Search for case-insensitive strings

Supported expression/operators:

`&&` - and

`||` - or

`~` - not

`'string with space'`

`(myexpression && 'with operators')`

Keybindings

q, ctrl+c: quit pastego

k, ↑: show previous bin

j, ↓: show next bin

n: jump forward by 15 bins

p: jump backward by 15 bins

N: move to the next block of findings (in alphabet order)

P: move to the previous block of findings (in alphabet order)

d: delete file from file system

HOME: go to top

Requirements

goquery

go get -u "github.com/PuerkitoBio/goquery"

kingpin

go get -u "gopkg.in/alecthomas/kingpin.v2"

gocui

go get -u "github.com/jroimartin/gocui"

To generate the parser code from the PEG, use pigeon:

go get -u github.com/mna/pigeon


CloudBunny – CloudBunny Is A Tool To Capture The Real IP Of The Server

CloudBunny is a tool to capture the real IP of a server that uses a WAF as a proxy or protection.

How CloudBunny Works

In this tool we used three search engines to search for domain information: Shodan, Censys and ZoomEye. To use the tool you need API keys, which you can obtain from the following links:

Shodan - https://account.shodan.io/
Censys - https://censys.io/account/api
ZoomEye - https://www.zoomeye.org/profile

NOTE: ZoomEye requires your login and password; it issues a dynamic API key, and CloudBunny does this work for you. Just enter your login and password.

After that you need to put the credentials in the api.conf file.

Install the requirements:

$ sudo pip install -r requirements.txt


Usage

By default the tool searches on all the search engines (you can change this via arguments), but you need to provide the credentials as stated above. After you have loaded the credentials and installed the requirements, execute:

$ python cloudbunny.py -u securityattack.com.br

Check our help area:

$ python cloudbunny.py -h

Replace securityattack.com.br with the domain of your choice.

Example

$ python cloudbunny.py -u site_example.com.br

	            /|      __  
	           / |   ,-~ /  
	          Y :|  //  /    
	          | jj /( .^  
	          >-"~"-v"  
	         /       Y    
	        jo  o    |  
	       ( ~T~     j   
	        >._-' _./   
	       /   "~"  |    
	      Y     _,  |      
	     /| ;-"~ _  l    
	    / l/ ,-"~    \  
	    \//\/      .- \  
	     Y        /    Y*  
	     l       I     ! 
	     ]\      _\    /"\ 
	    (" ~----( ~   Y.  )   
	~~~~~~~~~~~~~~~~~~~~~~~~~~    
CloudBunny - Bypass WAF with Search Engines 
Author: Eddy Oliveira (@Warflop)
https://github.com/Warflop 
    
[+] Looking for target on Shodan...
[+] Looking for target on Censys...
[+] Looking for certificates on Censys...
[+] Looking for target on ZoomEye...
[-] Just more some seconds...


+---------------+------------+-----------+----------------------------+
|   IP Address  |    ISP     |   Ports   |        Last Update         |
+---------------+------------+-----------+----------------------------+
|  55.14.232.4  | Amazon.com | [80, 443] | 2018-11-02T16:02:51.074543 |
| 54.222.146.40 | Amazon.com |    [80]   | 2018-11-02T10:16:38.166829 |
| 18.235.52.237 | Amazon.com | [443, 80] | 2018-11-08T01:22:11.323980 |
| 54.237.93.127 | Amazon.com | [443, 80] | 2018-11-05T15:54:40.248599 |
| 53.222.94.157 | Amazon.com | [443, 80] | 2018-11-06T08:46:03.377082 |
+---------------+------------+-----------+----------------------------+
    We may have some false positives :)


Osmedeus – Automatic Reconnaissance and Scanning in Penetration Testing

Osmedeus automates reconnaissance and scanning in penetration testing. It lets you do the boring pentesting work automatically, performing reconnaissance and scanning of the target by running a collection of awesome tools.

Osmedeus Installation

git clone https://github.com/j3ssie/Osmedeus
cd Osmedeus
./install.sh

The install script currently only targets Kali Linux.

How to use

If you have no idea what you are doing, just type the command below:

./osmedeus.py -t example.com

List all modules:

./osmedeus.py -M

Update

./osmedeus.py --update


Video Demo

Video Tutorial

https://www.youtube.com/watch?v=SnGPedyJvig

Credit: @j3ssiejjj


BabySploit – Beginner Pentesting Toolkit/Framework Written in Python

BabySploit is a penetration testing toolkit aimed at making it easy to learn how to use bigger, more complicated frameworks like Metasploit. With a very easy-to-use UI and toolkit, anybody at any experience level will get use out of BabySploit.

BabySploit Installation

BabySploit is best run out of the home directory, so clone it there:

git clone git://github.com/M4cs/BabySploit ~/BabySploit
cd ~/BabySploit

After cloning, you must install some prerequisites. If you are on Kali you should already have all of these installed, but it doesn’t hurt to run the steps anyway just in case. If you are not on Kali, you need to add the Kali repository to your APT sources list first. Then run the following:

!- From Within The BabySploit Directory -!
sudo apt-get update
sudo apt-get upgrade
sudo python3 install.py
virtualenv babysploit
source babysploit/bin/activate
pip3 install -r requirements.txt
python3 start.py

!- To Leave The Virtual Environment -!

deactivate


Getting Started

Setting Configuration Values

BabySploit uses ConfigParser in order to write and read configuration. Your config file is automatically generated and located at ./babysploit/config/config.cfg. You can manually change configuration settings by opening up the file and editing with a text editor or you can use the set command to set a new value for a key. Use the set command like so:

set rhost
>> Enter Value For rhost: 10
>> Config Key Saved!

If the rhost key had a value of 80 before running this command, it now has a value of 10. You can also add new configuration variables by using the set command with a new key, like so:

set newkey
>> Enter Value For newkey: hello
>> Config Key Saved!

Before running this there was no key named “newkey”. After running this you will have a key named “newkey” in your config until you use the reset command, which resets the saved configuration.
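
Because config.cfg is standard ConfigParser format, you can also inspect it programmatically; a minimal sketch:

import configparser

cfg = configparser.ConfigParser()
cfg.read('babysploit/config/config.cfg')

# Dump every saved key/value pair
for section in cfg.sections():
    for key, value in cfg[section].items():
        print(section, key, value)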

Running A Tool

In order to run a tool, all you have to do is enter the name of the tool into BabySploit. You can use the tools command to display a menu of all the currently available tools along with a description of each. To run a tool, simply enter the tool name. Ex: ftpbruteforce – runs the ftpbruteforce tool.

