Cyber security

Agentic Security – Enhancing LLM Resilience With Open-Source Vulnerability Scanning

In an era where large language models (LLMs) are integral to technological advancements, ensuring their security is paramount.

Agentic Security offers a pioneering open-source vulnerability scanner designed to robustly test and enhance the resilience of LLMs.

This tool not only integrates seamlessly but also provides customizable attack simulations to safeguard against emerging threats.

Features

  • Customizable Rule Sets or Agent based attacks
  • Comprehensive fuzzing for any LLMs
  • LLM API integration and stress testing
  • Wide range of fuzzing and attack techniques
ToolSourceIntegrated
Garakleondz/garak
InspectAIUKGovernmentBEIS/inspect_ai
llm-adaptive-attackstml-epfl/llm-adaptive-attacks
Custom Huggingface Datasetsmarkush1/LLM-Jailbreak-Classifier
Local CSV Datasets

Note: Please be aware that Agentic Security is designed as a safety scanner tool and not a foolproof solution. It cannot guarantee complete protection against all possible threats.

Installation

To get started with Agentic Security, simply install the package using pip:

pip install agentic_security

Quick Start

agentic_security

2024-04-13 13:21:31.157 | INFO     | agentic_security.probe_data.data:load_local_csv:273 - Found 1 CSV files
2024-04-13 13:21:31.157 | INFO     | agentic_security.probe_data.data:load_local_csv:274 - CSV files: ['prompts.csv']
INFO:     Started server process [18524]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8718 (Press CTRL+C to quit)
python -m agentic_security
# or
agentic_security --help


agentic_security --port=PORT --host=HOST

LLM kwargs

Agentic Security uses plain text HTTP spec like:

POST https://api.openai.com/v1/chat/completions
Authorization: Bearer sk-xxxxxxxxx
Content-Type: application/json

{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "<<PROMPT>>"}],
     "temperature": 0.7
}

Where <<PROMPT>> will be replaced with the actual attack vector during the scan, insert the Bearer XXXXX header value with your app credentials.

Adding LLM Integration Templates

TBD

....

For more information click here.

Varshini

Varshini is a Cyber Security expert in Threat Analysis, Vulnerability Assessment, and Research. Passionate about staying ahead of emerging Threats and Technologies.

Recent Posts

Pystinger : Bypass Firewall For Traffic Forwarding Using Webshell

Pystinger is a Python-based tool that enables SOCKS4 proxying and port mapping through webshells. It…

6 days ago

CVE-Search : A Tool To Perform Local Searches For Known Vulnerabilities

Introduction When it comes to cybersecurity, speed and privacy are critical. Public vulnerability databases like…

6 days ago

CVE-Search : A Tool To Perform Local Searches For Known Vulnerabilities

Introduction When it comes to cybersecurity, speed and privacy are critical. Public vulnerability databases like…

7 days ago

How to Bash Append to File: A Simple Guide for Beginners

If you are working with Linux or writing bash scripts, one of the most common…

7 days ago

Mastering the Bash Case Statement with Simple Examples

What is a bash case statement? A bash case statement is a way to control…

7 days ago

How to Check if a File Exists in Bash – Simply Explained

Why Do We Check Files in Bash? When writing a Bash script, you often work…

1 week ago