garak checks if an LLM can be made to fail in a way we don't want. garak probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses. If you know nmap or msf / Metasploit Framework, garak does somewhat similar things to them, but for LLMs.
garak focuses on ways of making an LLM or dialog system fail. It combines static, dynamic, and adaptive probes to explore this.
garak is a free tool. We love developing it and are always interested in adding functionality to support applications. It currently supports a range of generator backends, including Hugging Face Hub models, OpenAI API models, and REST endpoints.
garak is a command-line tool. It's developed in Linux and OSX.
Standard install with pip
Just grab it from PyPI and you should be good to go:
python -m pip install -U garak
Install development version with pip
The standard pip version of garak is updated periodically. To get a fresher version from GitHub, try:
python -m pip install -U git+https://github.com/NVIDIA/garak.git@main
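Once installed, a scan is a single command. As a minimal sketch, the invocation below uses the `--list_probes`, `--model_type`, `--model_name`, and `--probes` options from garak's command-line interface to run the DAN jailbreak probes against GPT-2 from the Hugging Face Hub (the model and probe choices here are just illustrative):

```shell
# See which probes are available before picking one
python -m garak --list_probes

# Probe a Hugging Face Hub model (gpt2) with the DAN jailbreak probes;
# garak writes a report and hit log as the run progresses
python -m garak --model_type huggingface --model_name gpt2 --probes dan
```

Each probe sends a batch of adversarial prompts to the target, and detectors score the responses to flag failures.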
For more information, see the garak documentation.