LLMs (e.g., GPT-3.5, LLaMA, and PaLM) suffer from hallucination: they fabricate non-existent facts and mislead users without the users noticing. Why hallucinations exist, and why they are so pervasive, remains unclear.
We demonstrate that nonsensical Out-of-Distribution (OoD) prompts composed of random tokens can also elicit hallucinated responses from LLMs.
This phenomenon forces us to revisit hallucination: it may be another face of adversarial examples, sharing similar features with conventional adversarial examples as a basic property of LLMs.
Therefore, we formalize an automatic hallucination-triggering method, the hallucination attack, in an adversarial manner. The following is a fake-news example generated by the hallucination attack.
We substitute tokens via a gradient-based token-replacement strategy, keeping replacements that yield a smaller negative log-likelihood loss on the target response, and thereby induce the LLM to hallucinate.
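To make the strategy concrete, here is a minimal sketch of one way such a gradient-based replacement loop can be written with PyTorch and HuggingFace Transformers. It is not this repo's implementation: GPT-2 stands in for the attacked model, and the prompt, target sentence, candidate count, and step budget are illustrative assumptions.

```python
# Minimal HotFlip/GCG-style sketch of gradient-based token replacement.
# NOT the repo's code: GPT-2, the prompt, the target text, and all
# hyper-parameters below are stand-in assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()
model.requires_grad_(False)                      # only the prompt needs gradients
embed = model.get_input_embeddings().weight      # (vocab_size, hidden_dim)

prompt_ids = tok.encode("colorless green ideas sleep furiously", return_tensors="pt").to(device)
target_ids = tok.encode(" The moon is made of green cheese.", return_tensors="pt").to(device)

def masked_labels(p_ids):
    # Score only the target continuation, not the adversarial prompt itself.
    labels = torch.cat([p_ids, target_ids], dim=1).clone()
    labels[:, : p_ids.shape[1]] = -100
    return labels

@torch.no_grad()
def target_nll(p_ids):
    # Negative log-likelihood of the target hallucination given the current prompt.
    input_ids = torch.cat([p_ids, target_ids], dim=1)
    return model(input_ids, labels=masked_labels(p_ids)).loss.item()

for step in range(50):
    # Differentiable pass: feed the prompt as one-hot vectors so one backward pass
    # yields the loss gradient with respect to every possible substitution.
    one_hot = torch.nn.functional.one_hot(prompt_ids[0], num_classes=embed.shape[0]).to(embed.dtype)
    one_hot.requires_grad_(True)
    prompt_embeds = (one_hot @ embed).unsqueeze(0)             # (1, prompt_len, dim)
    target_embeds = model.get_input_embeddings()(target_ids)   # (1, target_len, dim)
    inputs_embeds = torch.cat([prompt_embeds, target_embeds], dim=1)
    loss = model(inputs_embeds=inputs_embeds, labels=masked_labels(prompt_ids)).loss
    loss.backward()

    # First-order scores: larger values mean swapping in that token should lower the loss.
    scores = -one_hot.grad                                     # (prompt_len, vocab_size)
    pos = torch.randint(prompt_ids.shape[1], (1,)).item()      # pick one position to edit
    candidates = scores[pos].topk(8).indices                   # a few promising substitutions

    # Greedy check: keep a substitution only if it truly reduces the target NLL.
    best_ids, best_loss = prompt_ids, target_nll(prompt_ids)
    for cand in candidates:
        trial = prompt_ids.clone()
        trial[0, pos] = cand
        trial_loss = target_nll(trial)
        if trial_loss < best_loss:
            best_ids, best_loss = trial, trial_loss
    prompt_ids = best_ids
    print(f"step {step}: nll={best_loss:.3f}  prompt={tok.decode(prompt_ids[0])!r}")
```

Each step spends one backward pass to score every candidate substitution at once, then keeps a swap only if it genuinely lowers the negative log-likelihood of the target continuation.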
You may configure your own base models and their hyper-parameters in config.py. Then you can attack the models or run our demo cases.
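For orientation only, a configuration for this kind of attack usually names the base model and the attack hyper-parameters; the field names below are hypothetical, so follow the actual config.py in this repo.

```python
# Hypothetical illustration of the kind of settings config.py may expose;
# the real field names and defaults live in the repo's config.py.
config = {
    "model_name_or_path": "meta-llama/Llama-2-7b-hf",  # assumed example base model
    "device": "cuda",
    "num_steps": 500,      # attack iterations (assumed)
    "topk": 256,           # candidate substitutions per step (assumed)
    "batch_size": 8,       # trials evaluated per step (assumed)
}
```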
Clone this repo and run the code.
$ cd Hallucination-Attack
Install the requirements.
$ pip install -r requirements.txt