garak checks if an LLM can be made to fail in a way we don’t want. garak probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses. If you know nmap or msf / Metasploit Framework, garak does somewhat similar things to them, but for LLMs.

garak focuses on ways of making an LLM or dialog system fail. It combines static, dynamic, and adaptive probes to explore this.

garak's a free tool. We love developing it and are always interested in adding functionality to support applications.

Get Started

> See our user guide! docs.garak.ai

> Join our Discord!

> Project links & home: garak.ai

> Twitter: @garak_llm

> DEF CON slides!

LLM Support

garak currently supports the following model types, with an example run shown after the list:

  • hugging face hub generative models
  • replicate text models
  • openai api chat & continuation models
  • litellm
  • pretty much anything accessible via REST
  • gguf models like llama.cpp version >= 1046
  • .. and many more LLMs!
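
To sketch what a run looks like (hedged: the flag names below reflect garak's command-line interface as described in the user guide, and gpt2 is only a placeholder target), probing a Hugging Face Hub model with the encoding probes might be invoked as:

python -m garak --model_type huggingface --model_name gpt2 --probes encoding

The full set of available probes can be listed with python -m garak --list_probes.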

Install:

garak is a command-line tool. It's developed on Linux and OSX.

Standard Install With pip

Just grab it from PyPI and you should be good to go:

python -m pip install -U garak
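
As a quick sanity check that the install succeeded (this just asks the installed module for its usage text), run:

python -m garak --help

If garak prints its command-line options, it's installed and on your path.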

Install Development Version With pip

The standard pip version of garak is updated periodically. To get a fresher version from GitHub, try:

python -m pip install -U git+https://github.com/NVIDIA/garak.git@main
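
To confirm which version was actually installed (this relies only on pip's own package metadata, nothing garak-specific):

python -m pip show garak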

For more information, see the user guide at docs.garak.ai or the GitHub repository.
