LLM Lies: Hallucinations Are Not Bugs, But Features As Adversarial Examples

LLMs (e.g., GPT-3.5, LLaMA, and PaLM) suffer from hallucination: they fabricate non-existent facts that can deceive users without their awareness. Why hallucinations exist, and why they are so pervasive, remains unclear.

We demonstrate that nonsensical Out-of-Distribution (OoD) prompts composed of random tokens can also elicit hallucinated responses from LLMs.

This phenomenon forces us to revisit hallucination: it may be another view of adversarial examples, sharing similar properties with conventional adversarial examples as a basic characteristic of LLMs.

Therefore, we formalize an automatic hallucination-triggering method, the hallucination attack, in an adversarial manner. Below is a fake-news example generated by the hallucination attack.

The Pipeline Of Hallucination Attack

We substitute tokens via a gradient-based token-replacement strategy: at each step, we replace a prompt token with a candidate that achieves a smaller negative log-likelihood loss on the target response, thereby inducing the LLM to hallucinate.
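
As a rough illustration of this kind of gradient-based token replacement, here is a minimal sketch of a single replacement step in the spirit of HotFlip/GCG-style attacks. It is not the authors' exact implementation; the model choice (gpt2), the toy OoD prompt, and the toy target string are assumptions.

```python
# Minimal sketch of one gradient-based token-replacement step.
# Assumptions: gpt2 as a stand-in model, a toy OoD prompt, and a toy
# target response; the real attack iterates steps like this many times.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt_ids = tok.encode("asdf qwer zxcv uiop", return_tensors="pt")[0]                  # OoD prompt (assumed)
target_ids = tok.encode(" The moon is made of green cheese.", return_tensors="pt")[0]   # hallucination to induce (assumed)
embed_matrix = model.get_input_embeddings().weight                                      # (vocab, dim)

def token_replacement_step(prompt_ids, target_ids):
    # One-hot prompt representation so we can take gradients w.r.t. token choices.
    one_hot = torch.zeros(len(prompt_ids), embed_matrix.size(0))
    one_hot.scatter_(1, prompt_ids.unsqueeze(1), 1.0)
    one_hot.requires_grad_(True)

    inputs_embeds = torch.cat([one_hot @ embed_matrix, embed_matrix[target_ids]]).unsqueeze(0)
    logits = model(inputs_embeds=inputs_embeds).logits[0]

    # Negative log-likelihood of the target tokens given the current prompt.
    target_logits = logits[len(prompt_ids) - 1 : -1]
    loss = torch.nn.functional.cross_entropy(target_logits, target_ids)
    loss.backward()

    # First-order estimate: a more negative gradient entry means swapping in
    # that vocabulary token should lower the target NLL.
    scores = one_hot.grad
    pos = scores.min(dim=1).values.argmin().item()   # position with the best single swap
    new_tok = scores[pos].argmin().item()            # token with the smallest estimated loss
    new_prompt = prompt_ids.clone()
    new_prompt[pos] = new_tok
    return new_prompt, loss.item()

new_prompt, nll = token_replacement_step(prompt_ids, target_ids)
print(f"target NLL {nll:.3f}; new prompt: {tok.decode(new_prompt)!r}")
```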

Quick Start

Setup

You can configure your own base models and their hyper-parameters in config.py. Then you can attack the models or run our demo cases.
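
The actual schema of config.py is defined by the repository; purely as a hypothetical illustration (none of these field names are confirmed to match the real file), a base-model and hyper-parameter config for this kind of attack might look like:

```python
# Hypothetical illustration only -- these field names are assumptions,
# not the repository's actual config.py schema; adjust to the real file.
config = {
    "model_name": "meta-llama/Llama-2-7b-hf",  # base model to attack (assumed)
    "device": "cuda",                          # where to run the model (assumed)
    "num_steps": 500,                          # token-replacement iterations (assumed)
    "topk": 256,                               # candidate tokens per position (assumed)
    "prompt_length": 20,                       # length of the attacked prompt (assumed)
}
```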

Demo

Clone this repo and run the code.

$ cd Hallucination-Attack

Install the requirements.

$ pip install -r requirements.txt

Refer to the original repository for more information.
