KageXKageX
Open Source

Find AI vulnerabilities automatically

AgentBreaker is a free, open-source tool that tests your AI system for security issues. Point it at your AI, run it, and see what breaks.

Learns how your AI works before testing it
Finds vulnerabilities a manual review would miss
Gives you a clear report of what to fix

Clone and install in under a minute. A pip-installable package is on the roadmap.

terminal
$ git clone https://github.com/kagexai/agentbreaker.git
$ cd agentbreaker
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install -e .
$ agentbreaker validate --check-env
$ agentbreaker run <system-id> --loop
$ agentbreaker serve --port 1337

What makes it different

Finds real vulnerabilities

Tests for prompt injection, data leakage, jailbreaks, and more — the same attacks hackers use in the real world.

Scores what it finds

Every finding gets a severity score so you know what to fix first. Compare results over time to track improvement.

Gets smarter as it goes

Uses AI to plan its next attack based on what it's already discovered. Not just running the same tests over and over.

Maps to industry standards

Results are organized using OWASP and MITRE frameworks — the same ones auditors and compliance teams look for.

Works from your terminal

Install it, run a command, get results. No web UI to set up, no accounts to create. Just works.

Works with any AI system

Supports OpenAI, Anthropic, open-source models, and custom setups. If your AI has an API, AgentBreaker can test it.

Attack Taxonomy

14 attack strategies across 7 categories, each mapped to OWASP LLM Top 10 and MITRE ATLAS. Every finding comes with a compliance-ready classification.

LLM01

Prompt Injection

3 strategies

LLM02

Jailbreak

4 strategies

LLM05

Guardrail Bypass

3 strategies

LLM07

Prompt Extraction

4 strategies

LLM06

Tool Misuse

2 strategies

LLM08

Data Exfiltration

3 strategies

LLM01

Multimodal Injection

3 strategies

Run Your First Scan

AgentBreaker is open source and free. Clone the repo and start probing your AI systems.