WAF-A-MoLE : A Guided Mutation-Based Fuzzer For ML-based Web Application Firewalls

WAF-A-MoLE is a guided mutation-based fuzzer for ML-based Web Application Firewalls, inspired by AFL and based on the FuzzingBook by Andreas Zeller et al.

Given an input SQL injection query, it tries to produce a semantically equivalent query that bypasses the target WAF. You can use this tool to assess the robustness of your product by letting WAF-A-MoLE explore the solution space and find dangerous “blind spots” left uncovered by the target classifier.

Architecture

WAF-A-MoLE takes an initial payload and inserts it in the payload Pool, which manages a priority queue ordered by the WAF confidence score over each payload.

During each iteration, the head of the payload Pool is passed to the Fuzzer, where it is randomly mutated by applying one of the available mutation operators.
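
The following is a minimal sketch of this guided loop (illustrative only; the Pool handling, the waf.classify method, and the mutation_operators list are hypothetical stand-ins, not the tool's actual API):

    import heapq
    import random

    def fuzz(initial_payload, waf, mutation_operators, max_rounds=1000, threshold=0.5):
        # Priority queue ordered by the WAF confidence score:
        # payloads the WAF is least sure about sit at the head and are mutated first.
        pool = [(waf.classify(initial_payload), initial_payload)]
        for _ in range(max_rounds):
            confidence, payload = heapq.heappop(pool)      # head of the payload Pool
            if confidence < threshold:
                return payload                             # classified as benign: evasion achieved
            mutated = random.choice(mutation_operators)(payload)  # apply a random mutation operator
            heapq.heappush(pool, (waf.classify(mutated), mutated))
            heapq.heappush(pool, (confidence, payload))    # keep the parent for further rounds
        return None                                        # budget exhausted without evading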

Mutation Operators

Mutation operators are all semantics-preserving and leverage the high expressive power of the SQL language (in this version, MySQL).

Below are the mutation operators available in the current version of WAF-A-MoLE.

| Mutation | Example |
| --- | --- |
| Case Swapping | admin' OR 1=1# → admin' oR 1=1# |
| Whitespace Substitution | admin' OR 1=1# → admin'\t\rOR\n1=1# |
| Comment Injection | admin' OR 1=1# → admin'/**/OR 1=1# |
| Comment Rewriting | admin'/**/OR 1=1# → admin'/*xyz*/OR 1=1#abc |
| Integer Encoding | admin' OR 1=1# → admin' OR 0x1=(SELECT 1)# |
| Operator Swapping | admin' OR 1=1# → admin' OR 1 LIKE 1# |
| Logical Invariant | admin' OR 1=1# → admin' OR 1=1 AND 0<1# |
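
To make the idea concrete, here is a minimal sketch of two such operators, whitespace substitution and comment injection (simplified illustrations, not the library's own implementations):

    import random

    def whitespace_substitution(payload: str) -> str:
        # Replace one space with another whitespace character; MySQL treats them the same.
        spaces = [i for i, c in enumerate(payload) if c == " "]
        if not spaces:
            return payload
        i = random.choice(spaces)
        return payload[:i] + random.choice(["\t", "\n", "\r"]) + payload[i + 1:]

    def comment_injection(payload: str) -> str:
        # Replace one space with an empty inline comment, which MySQL also treats as whitespace.
        spaces = [i for i, c in enumerate(payload) if c == " "]
        if not spaces:
            return payload
        i = random.choice(spaces)
        return payload[:i] + "/**/" + payload[i + 1:]

    print(comment_injection("admin' OR 1=1#"))  # e.g. admin'/**/OR 1=1#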

How To Cite Us

WAF-A-MoLE implements the methodology presented in “WAF-A-MoLE: Evading Web Application Firewalls through Adversarial Machine Learning”.

If you want to cite us, please use the following (BibTeX) reference:

@inproceedings{demetrio20wafamole,
  title={WAF-A-MoLE: evading web application firewalls through adversarial machine learning},
  author={Demetrio, Luca and Valenza, Andrea and Costa, Gabriele and Lagorio, Giovanni},
  booktitle={Proceedings of the 35th Annual ACM Symposium on Applied Computing},
  pages={1745--1752},
  year={2020}
}

Running WAF-A-MoLE

Prerequisites

Setup

pip install -r requirements.txt

Sample Usage

You can evaluate the robustness of your own WAF, or try WAF-A-MoLE against some example classifiers. In the first case, have a look at the Model class. Your custom model needs to implement this class in order to be evaluated by WAF-A-MoLE. We already provide wrappers for scikit-learn and Keras classifiers that can be extended to fit your feature extraction phase (if any).

Help

wafamole --help

Usage: wafamole [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  evade  Launch WAF-A-MoLE against a target classifier.

wafamole evade --help

Usage: wafamole evade [OPTIONS] MODEL_PATH PAYLOAD

  Launch WAF-A-MoLE against a target classifier.

Options:
  -T, --model-type TEXT     Type of classifier to load
  -t, --timeout INTEGER     Timeout when evading the model
  -r, --max-rounds INTEGER  Maximum number of fuzzing rounds
  -s, --round-size INTEGER  Fuzzing step size for each round (parallel fuzzing
                            steps)
  --threshold FLOAT         Classification threshold of the target WAF [0.5]
  --random-engine TEXT      Use random transformations instead of the evolution
                            engine. Set the number of trials
  --output-path TEXT        Location where to save the results of the random
                            engine. NOT USED WITH REGULAR EVOLUTION ENGINE
  --help                    Show this message and exit.

Evading example models

We provide some pre-trained models you can have fun with, located in wafamole/models/custom/example_models. The classifiers we used are listed in the table below.

| Classifier name | Algorithm |
| --- | --- |
| WafBrain | Recurrent Neural Network |
| Token-based | Naive Bayes |
| Token-based | Random Forest |
| Token-based | Linear SVM |
| Token-based | Gaussian SVM |
| SQLiGoT – Directed Proportional | Gaussian SVM |
| SQLiGoT – Directed Unproportional | Gaussian SVM |
| SQLiGoT – Undirected Proportional | Gaussian SVM |
| SQLiGoT – Undirected Unproportional | Gaussian SVM |

WAF-BRAIN – Recurrent Neural Network

Bypass the pre-trained WAF-Brain classifier using an admin' OR 1=1# equivalent.

wafamole evade --model-type waf-brain wafamole/models/custom/example_models/waf-brain.h5 "admin' OR 1=1#"

Token-based – Naive Bayes

Bypass the pre-trained token-based Naive Bayes classifier using an admin' OR 1=1# equivalent.

wafamole evade --model-type token wafamole/models/custom/example_models/naive_bayes_trained.dump "admin' OR 1=1#"

Token-based – Random Forest

Bypass the pre-trained token-based Random Forest classifier using an admin' OR 1=1# equivalent.

wafamole evade --model-type token wafamole/models/custom/example_models/random_forest_trained.dump "admin' OR 1=1#"

Token-based – Linear SVM

Bypass the pre-trained token-based Linear SVM classifier using an admin' OR 1=1# equivalent.

wafamole evade --model-type token wafamole/models/custom/example_models/lin_svm_trained.dump "admin' OR 1=1#"

Token-based – Gaussian SVM

Bypass the pre-trained token-based Gaussian SVM classifier using an admin' OR 1=1# equivalent.

wafamole evade --model-type token wafamole/models/custom/example_models/gauss_svm_trained.dump "admin' OR 1=1#"

SQLiGoT

Bypass the pre-trained SQLiGoT classifier using an admin' OR 1=1# equivalent. Use DP, UP, DU, or UU for (respectively) Directed Proportional, Undirected Proportional, Directed Unproportional and Undirected Unproportional.

wafamole evade --model-type DP wafamole/models/custom/example_models/graph_directed_proportional_sqligot "admin' OR 1=1#"

BEFORE LAUNCHING EVALUATION ON SQLiGoT

These classifiers are more robust than the others, as the feature extraction phase produces vectors with a more complex structure, and all pre-trained classifiers have been strongly regularized. It may take hours for some variants to produce a payload that achieves evasion (see Benchmark section).

Custom Adapters

First, create a custom Model class that implements the extract_features and classify methods.

    from wafamole.models import Model  # base class provided by WAF-A-MoLE; import path may vary by version

    class YourCustomModel(Model):
        def extract_features(self, value: str):
            # Turn the raw payload into the feature vector your classifier expects
            feature_vector = your_custom_feature_function(value)
            return feature_vector

        def classify(self, value):
            # Return the confidence that the payload is malicious
            confidence = your_confidence_eval(value)
            return confidence

Then, create an object from the model and instantiate an engine object that uses your model class.

    from wafamole.evasion import EvasionEngine  # import path may vary by version

    model = YourCustomModel()  # your init
    engine = EvasionEngine(model)
    result = engine.evaluate(payload, max_rounds, round_size, timeout, threshold)
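
For example, an evasion run with explicit parameter values could look like the following; the numbers are illustrative choices, not the engine's defaults:

    payload = "admin' OR 1=1#"  # initial SQL injection to mutate
    result = engine.evaluate(
        payload,
        1000,   # max_rounds: maximum number of fuzzing rounds
        20,     # round_size: mutations tried per round
        3600,   # timeout: overall time budget in seconds
        0.5,    # threshold: the target WAF's classification threshold
    )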

Benchmark

We evaluated WAF-A-MoLE against all our example models.

The benchmark plot in the project repository shows the time it took for WAF-A-MoLE to mutate the admin' OR 1=1# payload until each classifier accepted it as benign.

On the x axis we have time (in seconds, logarithmic scale). On the y axis we have the confidence value, i.e., how sure a classifier is that a given payload is a SQL injection (in percentage).

Notice that being “50% sure” that a payload is a SQL injection is equivalent to flipping a coin. This is the usual classification threshold: if the confidence is lower, the payload is classified as benign.

Experiments were performed on DigitalOcean Standard Droplets.
