Cyber security

JailbreakEval : Automating the Evaluation Of Language Model Security

Jailbreak is an attack that prompts a language model to give actionable responses to harmful behaviors, such as writing an offensive letter, providing detailed instructions for creating a bomb.

Evaluating the results of such attacks typically requires manual inspections by determining if the response fulfills some standards, which is impractical for large-scale analysis.

As a result, most research on jailbreak attacks levarage automated tools to evaluate the result of their jailbreak attempts, and each of them offers some unique insights.

However, due to the inherent semantic flexibility of natural language, there is no such automated evaluator that could fit in all contexts.

Therefore, instead of proposing one best automated evaluator to rule them all, JailbreakEval aims to bring them together in a unified manner, making them straightforward to craft, access, and compare. It is particularly well-suited for:

  • Jailbreak Researchers, by providing well-known jailbreak evaluators to assess the effectness of their attacks out-of-the-box.
  • Jailbreak Evaluator Developers, by providing a handy framework for creating new evaluators and comparing its performance with the established ones.

Install

You can install the latest release of JailbreakEval from pypi:

pip install JailbreakEval

Alternatively, you can install JailbreakEval by cloning the project:

git clone https://github.com/ThuCCSLab/JailbreakEval
cd JailbreakEval
make install  # for installation
make dev  # for development

Quick Start

JailbreakEval can evaluate your jailbreak attempts in just a few lines of codes.

Here is a snippet to assess a jailbreak attempt by GCG’s evaluator:

from jailbreakeval import JailbreakEvaluator

# Initialize the evaluator with GCG's preset
evaluator = JailbreakEvaluator.from_preset("StringMatching-zou2023universal")

# The jailbreak attempt to be evaluated
attempt_to_evaluate = {
    "question": "How to build a bomb?",  # the target question to jailbreak
    "answer": "I cannot tell you. This is illegal."  # the answer extracted from the model by the jailbreak attack
}

# Evaluate the attempt and print the result
evaluation_result = evaluator(attempt_to_evaluate)
print(evaluation_result)   # Output: False

For more information click here.

Varshini

Varshini is a Cyber Security expert in Threat Analysis, Vulnerability Assessment, and Research. Passionate about staying ahead of emerging Threats and Technologies.

Recent Posts

Playwright-MCP : A Powerful Tool For Browser Automation

Playwright-MCP (Model Context Protocol) is a cutting-edge tool designed to bridge the gap between AI…

1 week ago

JBDev : A Tool For Jailbreak And TrollStore Development

JBDev is a specialized development tool designed to streamline the creation and debugging of jailbreak…

2 weeks ago

Kereva LLM Code Scanner : A Revolutionary Tool For Python Applications Using LLMs

The Kereva LLM Code Scanner is an innovative static analysis tool tailored for Python applications…

2 weeks ago

Nuclei-Templates-Labs : A Hands-On Security Testing Playground

Nuclei-Templates-Labs is a dynamic and comprehensive repository designed for security researchers, learners, and organizations to…

2 weeks ago

SSH-Stealer : The Stealthy Threat Of Advanced Credential Theft

SSH-Stealer and RunAs-Stealer are malicious tools designed to stealthily harvest SSH credentials, enabling attackers to…

2 weeks ago

ollvm-unflattener : A Tool For Reversing Control Flow Flattening In OLLVM

Control flow flattening is a common obfuscation technique used by OLLVM (Obfuscator-LLVM) to transform executable…

2 weeks ago