A comprehensive guide exploring the nuances of GPT jailbreaks, prompt injections, and AI security.
This article unpacks an arsenal of resources for both attack and defense strategies in the evolving landscape of large language models (LLMs).
Whether you’re a developer, security expert, or AI enthusiast, prepare to advance your knowledge with insights into prompt engineering and adversarial machine learning.
What Will You Find In Here:
- ChatGPT Jailbreaks
- GPT Assistants Prompt Leaks
- GPTs Prompt Injection
- LLM Prompt Security
- Super Prompts
- Prompt Hack
- Prompt Security
- Ai Prompt Engineering
- Adversarial Machine Learning
Legend:
- 🌟: Legendary!
- 🔥: Hot Stuff
Jailbreaks
- 🌟 | elder-plinius/L1B3RT45
- 🌟 | yueliu1999/Awesome-Jailbreak-on-LLMs
- 🔥 | verazuo/jailbreak_llms
- 🔥 | brayene/tr-ChatGPT-Jailbreak-Prompts
- 0xk1h0/ChatGPT_DAN
- tg12/gpt_jailbreak_status
- Cyberlion-Technologies/ChatGPT_DAN
- yes133/ChatGPT-Prompts-Jailbreaks-And-More
- GabryB03/ChatGPT-Jailbreaks
- jzzjackz/chatgptjailbreaks
- jackhhao/jailbreak-classification
- rubend18/ChatGPT-Jailbreak-Prompts
- deadbits/vigil-jailbreak-ada-002
GPT Agents System Prompt Leaks
- 🌟 | 0xeb/TheBigPromptLibrary
- 🔥 | LouisShark/chatgpt_system_prompt
- gogooing/Awesome-GPTs
- tjadamlee/GPTs-prompts
- linexjlin/GPTs
- B3o/GPTS-Prompt-Collection
- 1003715231/gptstore-prompts
- friuns2/Leaked-GPTs
- adamidarrha/TopGptPrompts
- friuns2/BlackFriday-GPTs-Prompts
- parmarjh/Leaked-GPTs
- lxfater/Awesome-GPTs
- Superdev0909/Awesome-AI-GPTs-main
- SuperShinyDev/ChatGPTApplication
For more information click here.