LLMs (e.g., GPT-3.5, LLaMA, and PaLM) suffer from hallucination: they fabricate non-existent facts and mislead users without the users noticing. The reasons why hallucinations exist and are so pervasive remain unclear.
We demonstrate that nonsensical Out-of-Distribution (OoD) prompts composed of random tokens can also elicit hallucinated responses from LLMs.
This phenomenon forces us to revisit hallucination: it may be another view of adversarial examples, sharing similar features with conventional adversarial examples as a basic property of LLMs.
Therefore, we formalize an automatic hallucination-triggering method, the hallucination attack, in an adversarial manner. For instance, the hallucination attack can induce an LLM to generate fake news from a nonsense prompt.
We substitute tokens via a gradient-based token-replacing strategy, choosing replacements that yield a smaller negative log-likelihood loss on a chosen target, and thereby induce the LLM to hallucinate.
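The sketch below illustrates the general idea of such a gradient-based token replacement (HotFlip-style first-order scoring), not this repository's exact implementation; the model name, function names, and hyper-parameters are illustrative assumptions.

```python
# Minimal sketch of gradient-based token replacement toward a target continuation.
# Assumptions: any HuggingFace causal LM ("gpt2" here as a stand-in), a fixed
# target string, and a single greedy swap per step scored by a first-order
# estimate of the NLL change. The real attack may search multiple candidates.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: stand-in for the attacked base model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the one-hot prompt needs gradients

@torch.enable_grad()
def replace_one_token(prompt_ids: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """Swap the single prompt token whose replacement most lowers the
    negative log-likelihood of the target continuation (first-order estimate)."""
    embed = model.get_input_embeddings().weight                  # (V, d)
    full_ids = torch.cat([prompt_ids, target_ids])               # (P + T,)
    one_hot = F.one_hot(full_ids, num_classes=embed.size(0)).float()
    one_hot.requires_grad_(True)
    inputs_embeds = (one_hot @ embed).unsqueeze(0)               # (1, P+T, d)

    logits = model(inputs_embeds=inputs_embeds).logits[0]        # (P+T, V)
    # Each target token is predicted from the position just before it.
    tgt_logits = logits[len(prompt_ids) - 1 : -1]                # (T, V)
    loss = F.cross_entropy(tgt_logits, target_ids)               # NLL of the target
    loss.backward()

    grad = one_hot.grad[: len(prompt_ids)]                       # (P, V)
    # First-order loss change if position i takes token v: grad[i, v] - grad[i, cur_i]
    delta = grad - grad.gather(1, prompt_ids.unsqueeze(1))
    pos, new_tok = divmod(delta.argmin().item(), delta.size(1))  # most negative = biggest predicted drop

    new_prompt = prompt_ids.clone()
    new_prompt[pos] = new_tok
    return new_prompt

# Usage sketch: iteratively mutate a random OoD prompt toward a pre-chosen target.
prompt = torch.randint(0, len(tokenizer), (20,))                 # nonsense OoD prompt
target = tokenizer("some pre-chosen hallucinated statement", return_tensors="pt").input_ids[0]
for _ in range(100):
    prompt = replace_one_token(prompt, target)
```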
You may configure your own base models and their hyper-parameters in config.py. Then you can attack the models or run our demo cases.
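As a rough orientation, a config.py entry might look like the hypothetical sketch below; the actual field names and defaults in this repository may differ.

```python
# Hypothetical config sketch -- field names and values are assumptions,
# not the repository's actual config.py schema.
config = {
    "model_name_or_path": "lmsys/vicuna-7b-v1.3",  # assumption: any HF causal LM
    "device": "cuda:0",
    "num_iterations": 500,   # number of token-replacement steps
    "prompt_length": 20,     # length of the random OoD prompt
    "top_k": 256,            # candidate replacements considered per position
}
```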
Clone this repo and run the code.
$ cd Hallucination-Attack
Install the requirements.
$ pip install -r requirements.txt