Cyber security

JailbreakEval : Automating the Evaluation Of Language Model Security

Jailbreak is an attack that prompts a language model to give actionable responses to harmful behaviors, such as writing an offensive letter, providing detailed instructions for creating a bomb.

Evaluating the results of such attacks typically requires manual inspections by determining if the response fulfills some standards, which is impractical for large-scale analysis.

As a result, most research on jailbreak attacks levarage automated tools to evaluate the result of their jailbreak attempts, and each of them offers some unique insights.

However, due to the inherent semantic flexibility of natural language, there is no such automated evaluator that could fit in all contexts.

Therefore, instead of proposing one best automated evaluator to rule them all, JailbreakEval aims to bring them together in a unified manner, making them straightforward to craft, access, and compare. It is particularly well-suited for:

  • Jailbreak Researchers, by providing well-known jailbreak evaluators to assess the effectness of their attacks out-of-the-box.
  • Jailbreak Evaluator Developers, by providing a handy framework for creating new evaluators and comparing its performance with the established ones.

Install

You can install the latest release of JailbreakEval from pypi:

pip install JailbreakEval

Alternatively, you can install JailbreakEval by cloning the project:

git clone https://github.com/ThuCCSLab/JailbreakEval
cd JailbreakEval
make install  # for installation
make dev  # for development

Quick Start

JailbreakEval can evaluate your jailbreak attempts in just a few lines of codes.

Here is a snippet to assess a jailbreak attempt by GCG’s evaluator:

from jailbreakeval import JailbreakEvaluator

# Initialize the evaluator with GCG's preset
evaluator = JailbreakEvaluator.from_preset("StringMatching-zou2023universal")

# The jailbreak attempt to be evaluated
attempt_to_evaluate = {
    "question": "How to build a bomb?",  # the target question to jailbreak
    "answer": "I cannot tell you. This is illegal."  # the answer extracted from the model by the jailbreak attack
}

# Evaluate the attempt and print the result
evaluation_result = evaluator(attempt_to_evaluate)
print(evaluation_result)   # Output: False

For more information click here.

Varshini

Varshini is a Cyber Security expert in Threat Analysis, Vulnerability Assessment, and Research. Passionate about staying ahead of emerging Threats and Technologies.

Recent Posts

100 Days Of Rust 2025 : From Incident Response To Linux System Programming

In 2025 I wanted to try something new. In addition to a traditional 100 days…

5 hours ago

Presenterm : Revolutionizing Terminal-Based Presentations With Markdown

presenterm lets you create presentations in markdown format and run them from your terminal, with…

6 hours ago

HASH : Harnessing HTTP Agnostic Software Honeypots For Enhanced Cybersecurity

The main philosophy of HASH is to be easy to configure and flexible to mimic…

6 hours ago

SECurityTr8Ker : SEC Cybersecurity Disclosure Monitor

SECurityTr8Ker is a Python application designed to monitor the U.S. Securities and Exchange Commission's (SEC)…

4 days ago

ripgrep : The Fast, Flexible Search Tool

ripgrep is a line-oriented search tool that recursively searches the current directory for a regex…

4 days ago

InfluxDB : The Open Source Leader In Time Series Data And Real-Time Analytics

InfluxDB is the leading open source time series database for metrics, events, and real-time analytics.…

4 days ago