garak
checks if an LLM can be made to fail in a way we don’t want. garak
probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses. If you know nmap
or msf
/ Metasploit Framework, garak does somewhat similar things to them, but for LLMs.
garak
focuses on ways of making an LLM or dialog system fail. It combines static, dynamic, and adaptive probes to explore this.
garak
‘s a free tool. We love developing it and are always interested in adding functionality to support applications.
currently supports:
garak
is a command-line tool. It’s developed in Linux and OSX.
pip
Just grab it from PyPI and you should be good to go:
python -m pip install -U garak
pip
The standard pip version of garak
is updated periodically. To get a fresher version from GitHub, try:
python -m pip install -U git+https://github.com/NVIDIA/garak.git@main
For more information click here.
Pystinger is a Python-based tool that enables SOCKS4 proxying and port mapping through webshells. It…
Introduction When it comes to cybersecurity, speed and privacy are critical. Public vulnerability databases like…
Introduction When it comes to cybersecurity, speed and privacy are critical. Public vulnerability databases like…
If you are working with Linux or writing bash scripts, one of the most common…
What is a bash case statement? A bash case statement is a way to control…
Why Do We Check Files in Bash? When writing a Bash script, you often work…