Open Source
Find AI vulnerabilities automatically
AgentBreaker is a free, open-source tool that tests your AI system for security issues. Point it at your AI, run it, and see what breaks.
Learns how your AI works before testing it
Finds vulnerabilities a manual review would miss
Gives you a clear report of what to fix
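To make "a clear report of what to fix" concrete, here is a sketch of what a single finding in such a report could look like. The field names and values are illustrative only, not AgentBreaker's actual output schema:

```python
# Hypothetical shape of one finding in a scan report (illustrative, not
# AgentBreaker's real schema). Category and OWASP mapping come from the
# attack taxonomy described on this page.
finding = {
    "id": "AB-0001",                 # illustrative finding identifier
    "category": "Prompt Injection",  # taxonomy category that produced it
    "owasp": "LLM01",                # OWASP LLM Top 10 classification
    "severity": "high",
    "evidence": "Model followed an instruction embedded in untrusted input.",
    "remediation": "Keep untrusted input separate from system instructions.",
}

print(finding["owasp"])  # → LLM01
```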
Clone and install in under a minute. A pip-installable package is on the roadmap.
$ git clone https://github.com/kagexai/agentbreaker.git
$ cd agentbreaker
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install -e .
$ agentbreaker validate --check-env
$ agentbreaker run <system-id> --loop
$ agentbreaker serve --port 1337

What makes it different
Attack Taxonomy
22 attack strategies across 7 categories, each mapped to the OWASP LLM Top 10 and MITRE ATLAS. Every finding comes with a compliance-ready classification.
LLM01  Prompt Injection (3 strategies)
LLM02  Jailbreak (4 strategies)
LLM05  Guardrail Bypass (3 strategies)
LLM07  Prompt Extraction (4 strategies)
LLM06  Tool Misuse (2 strategies)
LLM08  Data Exfiltration (3 strategies)
LLM01  Multimodal Injection (3 strategies)
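The taxonomy above can be summarized programmatically. A minimal sketch, using only the categories, OWASP IDs, and strategy counts listed on this page (the variable name is illustrative, not part of AgentBreaker's API):

```python
# Summary of AgentBreaker's attack taxonomy as listed above.
# Values are the OWASP LLM Top 10 ID and the number of strategies.
TAXONOMY = {
    "Prompt Injection":     {"owasp": "LLM01", "strategies": 3},
    "Jailbreak":            {"owasp": "LLM02", "strategies": 4},
    "Guardrail Bypass":     {"owasp": "LLM05", "strategies": 3},
    "Prompt Extraction":    {"owasp": "LLM07", "strategies": 4},
    "Tool Misuse":          {"owasp": "LLM06", "strategies": 2},
    "Data Exfiltration":    {"owasp": "LLM08", "strategies": 3},
    "Multimodal Injection": {"owasp": "LLM01", "strategies": 3},
}

total = sum(entry["strategies"] for entry in TAXONOMY.values())
print(len(TAXONOMY), total)  # → 7 22
```

Summing the per-category counts is a quick sanity check that the headline numbers match the list.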
Run Your First Scan
AgentBreaker is open source and free. Clone the repo and start probing your AI systems.