LLMs (e.g., GPT-3.5, LLaMA, and PaLM) suffer from hallucination: they fabricate non-existent facts and present them to users without any indication that the content is false, and the reasons why hallucinations arise and are so pervasive remain unclear.
We demonstrate that nonsensical Out-of-Distribution (OoD) prompts composed of random tokens can also elicit hallucinated responses from LLMs.
This phenomenon suggests that hallucination may be another view of adversarial examples: it shares similar features with conventional adversarial examples and appears to be a basic property of LLMs.
Therefore, we formalize an automatic, adversarial method for triggering hallucinations, which we call the hallucination attack. Following is a fake news example generated by the hallucination attack.
We substitute tokens via a gradient-based token replacing strategy, swapping in candidate tokens that achieve a smaller negative log-likelihood loss, thereby inducing the LLM to hallucinate.
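For concreteness, below is a minimal sketch of a single gradient-based token replacement step in the spirit of this strategy. The function name `token_replacement_step` and all variable names are assumptions for illustration, not the repository's actual implementation; it only shows how gradients w.r.t. one-hot token indicators can score candidate substitutions that lower the negative log-likelihood of a target (hallucinated) response.

```python
import torch
import torch.nn.functional as F


def token_replacement_step(model, adv_ids, target_ids, top_k=256):
    """One greedy token-substitution step (sketch, not the repo's code).

    adv_ids:    1-D LongTensor of adversarial prompt token ids.
    target_ids: 1-D LongTensor of the target (hallucinated) response token ids.
    """
    embed_weights = model.get_input_embeddings().weight          # (vocab, dim)

    # One-hot encode the adversarial tokens so we can differentiate w.r.t. them.
    one_hot = torch.zeros(adv_ids.size(0), embed_weights.size(0),
                          device=embed_weights.device, dtype=embed_weights.dtype)
    one_hot.scatter_(1, adv_ids.unsqueeze(1), 1.0)
    one_hot.requires_grad_(True)

    adv_embeds = one_hot @ embed_weights                          # (adv_len, dim)
    target_embeds = embed_weights[target_ids]                     # (tgt_len, dim)
    inputs_embeds = torch.cat([adv_embeds, target_embeds], dim=0).unsqueeze(0)

    # Negative log-likelihood of the target continuation given the adversarial prompt.
    logits = model(inputs_embeds=inputs_embeds).logits[0]
    target_logits = logits[adv_ids.size(0) - 1:-1]                # positions predicting target tokens
    loss = F.cross_entropy(target_logits, target_ids)
    loss.backward()

    # Large negative gradient on a one-hot entry suggests that swapping in that
    # vocabulary token should lower the loss; keep the top-k candidates per position.
    candidates = (-one_hot.grad).topk(top_k, dim=1).indices       # (adv_len, top_k)

    # Greedily evaluate candidate substitutions and keep the best one found.
    best_loss, best_ids = loss.item(), adv_ids.clone()
    for pos in range(adv_ids.size(0)):
        for tok in candidates[pos]:
            trial = adv_ids.clone()
            trial[pos] = tok
            with torch.no_grad():
                out = model(torch.cat([trial, target_ids]).unsqueeze(0)).logits[0]
                trial_loss = F.cross_entropy(out[adv_ids.size(0) - 1:-1], target_ids).item()
            if trial_loss < best_loss:
                best_loss, best_ids = trial_loss, trial
    return best_ids, best_loss
```

Repeating such a step over many iterations gradually shapes the random OoD prompt into one that drives the model toward the target hallucination.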
You may configure your own base models and their hyper-parameters in config.py. Then, you can attack the models or run our demo cases.
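For reference, a sketch of the kind of settings such a config.py might expose is shown below; all names and values here are assumptions for illustration, not the repository's actual configuration keys.

```python
# Hypothetical config.py sketch (assumed names/values, check the repo's real file).
MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"   # assumed base model checkpoint
DEVICE = "cuda:0"                               # device to load the model on
ADV_PROMPT_LENGTH = 20                          # number of adversarial tokens to optimize
NUM_STEPS = 500                                 # token-replacement iterations
TOP_K = 256                                     # candidate replacement tokens per position
```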
Clone this repo and run the code.
$ cd Hallucination-Attack
Install the requirements.
$ pip install -r requirements.txt
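After installation, an attack run might then look like the following; the script name attack.py is a placeholder assumption, so check the repository for the actual entry point.

$ python attack.py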