garak checks if an LLM can be made to fail in a way we don't want. It probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses. If you know nmap or msf / Metasploit Framework, garak does somewhat similar things to them, but for LLMs.
garak focuses on ways of making an LLM or dialog system fail. It combines static, dynamic, and adaptive probes to explore this.
garak's a free tool. We love developing it and are always interested in adding functionality to support applications.
garak is a command-line tool. It's developed on Linux and OSX.
To install, just grab it from PyPI with pip and you should be good to go:
python -m pip install -U garak
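Once installed, you can sanity-check the setup from the command line. A minimal sketch, assuming garak is on your PATH; `--list_probes` is one of garak's enumeration flags:

```shell
# Confirm garak is installed and reachable
garak --version

# Enumerate the probes available in this build
garak --list_probes
```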
The standard pip version of garak is updated periodically. To get a fresher version from GitHub, try:
python -m pip install -U git+https://github.com/NVIDIA/garak.git@main
For more information, see the documentation in the garak GitHub repository.
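To give a feel for a basic scan, here is a hedged sketch of one invocation. The model name and API key are placeholders; `--model_type`, `--model_name`, and `--probes` are garak CLI options, and `encoding` selects the encoding-based injection probe family:

```shell
# Probe an OpenAI-hosted model with the encoding-based injection probes.
# The key and model name below are examples, not recommendations.
export OPENAI_API_KEY="YOUR_KEY_HERE"
garak --model_type openai --model_name gpt-3.5-turbo --probes encoding
```

garak writes its findings to a report log as the run progresses, so longer scans can be left unattended and reviewed afterwards.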