Cyber security

Agentic Security – Enhancing LLM Resilience With Open-Source Vulnerability Scanning

In an era where large language models (LLMs) are integral to technological advancements, ensuring their security is paramount.

Agentic Security offers a pioneering open-source vulnerability scanner designed to robustly test and enhance the resilience of LLMs.

This tool not only integrates seamlessly but also provides customizable attack simulations to safeguard against emerging threats.

Features

  • Customizable Rule Sets or Agent based attacks
  • Comprehensive fuzzing for any LLMs
  • LLM API integration and stress testing
  • Wide range of fuzzing and attack techniques
ToolSourceIntegrated
Garakleondz/garak
InspectAIUKGovernmentBEIS/inspect_ai
llm-adaptive-attackstml-epfl/llm-adaptive-attacks
Custom Huggingface Datasetsmarkush1/LLM-Jailbreak-Classifier
Local CSV Datasets

Note: Please be aware that Agentic Security is designed as a safety scanner tool and not a foolproof solution. It cannot guarantee complete protection against all possible threats.

Installation

To get started with Agentic Security, simply install the package using pip:

pip install agentic_security

Quick Start

agentic_security

2024-04-13 13:21:31.157 | INFO     | agentic_security.probe_data.data:load_local_csv:273 - Found 1 CSV files
2024-04-13 13:21:31.157 | INFO     | agentic_security.probe_data.data:load_local_csv:274 - CSV files: ['prompts.csv']
INFO:     Started server process [18524]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8718 (Press CTRL+C to quit)
python -m agentic_security
# or
agentic_security --help


agentic_security --port=PORT --host=HOST

LLM kwargs

Agentic Security uses plain text HTTP spec like:

POST https://api.openai.com/v1/chat/completions
Authorization: Bearer sk-xxxxxxxxx
Content-Type: application/json

{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "<<PROMPT>>"}],
     "temperature": 0.7
}

Where <<PROMPT>> will be replaced with the actual attack vector during the scan, insert the Bearer XXXXX header value with your app credentials.

Adding LLM Integration Templates

TBD

....

For more information click here.

Varshini

Varshini is a Cyber Security expert in Threat Analysis, Vulnerability Assessment, and Research. Passionate about staying ahead of emerging Threats and Technologies.

Recent Posts

Starship : Revolutionizing Terminal Experiences Across Shells

Starship is a powerful, minimal, and highly customizable cross-shell prompt designed to enhance the terminal…

1 day ago

Lemmy : A Decentralized Link Aggregator And Forum For The Fediverse

Lemmy is an innovative, open-source platform designed for link aggregation and discussion, providing a decentralized…

1 day ago

Massive UX Improvements, Custom Disassemblers, And MSVC Support In ImHex v1.37.0

The latest release of ImHex v1.37.0 introduces a host of exciting features and improvements, enhancing…

1 day ago

Ghauri : A Powerful SQL Injection Detection And Exploitation Tool

Ghauri is a cutting-edge, cross-platform tool designed to automate the detection and exploitation of SQL…

1 day ago

Writing Tools : Revolutionizing The Art Of Writing

Writing tools have become indispensable for individuals looking to enhance their writing efficiency, accuracy, and…

1 day ago

PatchWerk : A Tool For Cleaning NTDLL Syscall Stubs

PatchWerk is a proof-of-concept (PoC) tool designed to clean NTDLL syscall stubs by patching syscall…

2 days ago