Cyber security

Agentic Security – Enhancing LLM Resilience With Open-Source Vulnerability Scanning

In an era where large language models (LLMs) are integral to technological advancements, ensuring their security is paramount.

Agentic Security offers a pioneering open-source vulnerability scanner designed to robustly test and enhance the resilience of LLMs.

This tool not only integrates seamlessly but also provides customizable attack simulations to safeguard against emerging threats.

Features

  • Customizable Rule Sets or Agent based attacks
  • Comprehensive fuzzing for any LLMs
  • LLM API integration and stress testing
  • Wide range of fuzzing and attack techniques
ToolSourceIntegrated
Garakleondz/garak
InspectAIUKGovernmentBEIS/inspect_ai
llm-adaptive-attackstml-epfl/llm-adaptive-attacks
Custom Huggingface Datasetsmarkush1/LLM-Jailbreak-Classifier
Local CSV Datasets

Note: Please be aware that Agentic Security is designed as a safety scanner tool and not a foolproof solution. It cannot guarantee complete protection against all possible threats.

Installation

To get started with Agentic Security, simply install the package using pip:

pip install agentic_security

Quick Start

agentic_security

2024-04-13 13:21:31.157 | INFO     | agentic_security.probe_data.data:load_local_csv:273 - Found 1 CSV files
2024-04-13 13:21:31.157 | INFO     | agentic_security.probe_data.data:load_local_csv:274 - CSV files: ['prompts.csv']
INFO:     Started server process [18524]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8718 (Press CTRL+C to quit)
python -m agentic_security
# or
agentic_security --help


agentic_security --port=PORT --host=HOST

LLM kwargs

Agentic Security uses plain text HTTP spec like:

POST https://api.openai.com/v1/chat/completions
Authorization: Bearer sk-xxxxxxxxx
Content-Type: application/json

{
     "model": "gpt-3.5-turbo",
     "messages": [{"role": "user", "content": "<<PROMPT>>"}],
     "temperature": 0.7
}

Where <<PROMPT>> will be replaced with the actual attack vector during the scan, insert the Bearer XXXXX header value with your app credentials.

Adding LLM Integration Templates

TBD

....

For more information click here.

Varshini

Varshini is a Cyber Security expert in Threat Analysis, Vulnerability Assessment, and Research. Passionate about staying ahead of emerging Threats and Technologies.

Recent Posts

How AI Puts Data Security at Risk

Artificial Intelligence (AI) is changing how industries operate, automating processes, and driving new innovations. However,…

8 hours ago

The Evolution of Cloud Technology: Where We Started and Where We’re Headed

Image credit:pexels.com If you think back to the early days of personal computing, you probably…

4 days ago

The Evolution of Online Finance Tools In a Tech-Driven World

In an era defined by technological innovation, the way people handle and understand money has…

4 days ago

A Complete Guide to Lenso.ai and Its Reverse Image Search Capabilities

The online world becomes more visually driven with every passing year. Images spread across websites,…

5 days ago

How Web Application Firewalls (WAFs) Work

General Working of a Web Application Firewall (WAF) A Web Application Firewall (WAF) acts as…

1 month ago

How to Send POST Requests Using curl in Linux

How to Send POST Requests Using curl in Linux If you work with APIs, servers,…

1 month ago