- Explore MCP Servers
- agentic-radar
Agentic Radar
What is Agentic Radar
Agentic-radar is a security scanner designed specifically for LLM (Large Language Model) agentic workflows, ensuring that these workflows are secure and free from vulnerabilities.
Use cases
Use cases for agentic-radar include scanning AI-driven applications for security vulnerabilities, ensuring compliance in automated workflows, and enhancing the security posture of organizations using LLM technologies.
How to use
To use agentic-radar, users need to integrate it into their existing LLM workflows. This typically involves installing the package via PyPI and running the scanner on their workflows to identify potential security issues.
Key features
Key features of agentic-radar include vulnerability detection, real-time scanning, integration capabilities with various LLM frameworks, and comprehensive reporting on security findings.
Where to use
Agentic-radar can be used in various fields such as software development, data science, AI research, and any domain where LLMs are utilized to automate workflows.
Clients Supporting MCP
The following are the main client software that supports the Model Context Protocol. Click the link to visit the official website for more information.
Overview
What is Agentic Radar
Agentic-radar is a security scanner designed specifically for LLM (Large Language Model) agentic workflows, ensuring that these workflows are secure and free from vulnerabilities.
Use cases
Use cases for agentic-radar include scanning AI-driven applications for security vulnerabilities, ensuring compliance in automated workflows, and enhancing the security posture of organizations using LLM technologies.
How to use
To use agentic-radar, users need to integrate it into their existing LLM workflows. This typically involves installing the package via PyPI and running the scanner on their workflows to identify potential security issues.
Key features
Key features of agentic-radar include vulnerability detection, real-time scanning, integration capabilities with various LLM frameworks, and comprehensive reporting on security findings.
Where to use
Agentic-radar can be used in various fields such as software development, data science, AI research, and any domain where LLMs are utilized to automate workflows.
Clients Supporting MCP
The following are the main client software that supports the Model Context Protocol. Click the link to visit the official website for more information.
Content

A Security Scanner for your agentic workflows!
View Demo · Documentation · Report Bug · Request Feature

Table of Contents
Description 📝
The Agentic Radar is designed to analyze and assess agentic systems for security and operational insights. It helps developers, researchers, and security professionals understand how agentic systems function and identify potential vulnerabilities.
It allows users to create a security report for agentic systems, including:
- Workflow Visualization - a graph of the agentic system’s workflow✅
- Tool Identification - a list of all external and custom tools utilized by the system✅
- MCP Server Detection - a list of all MCP servers used by system’s agents✅
- Vulnerability Mapping - a table connecting identified tools to known vulnerabilities, providing a security overview✅
The comprehensive HTML report summarizes all findings and allows for easy reviewing and sharing.
Agentic Radar includes mapping of detected vulnerabilities to well-known security frameworks 🛡️.
Getting Started 🚀
Prerequisites
There are none! Just make sure you have Python (pip) installed on your machine.
Installation
pip install agentic-radar
# Check that it is installed
agentic-radar --version
Some features require extra installations, depending on the targeted agentic framework. See more below.
Advanced Installation
CrewAI Installation
CrewAI extras are needed when using one of the following features in combination with CrewAI:
- Agentic Radar Test
- Descriptions for predefined tools
You can install Agentic Radar with extra CrewAI dependencies by running:
pip install "agentic-radar[crewai]"
[!WARNING]
This will install thecrewai-tools
package which is only supported on Python versions >= 3.10 and < 3.13.
If you are using a different python version, the tool descriptions will be less detailed or entirely missing.
OpenAI Agents Installation
OpenAI Agents extras are needed when using one of the following features in combination with OpenAI Agents:
You can install Agentic Radar with extra OpenAI Agents dependencies by running:
pip install "agentic-radar[openai-agents]"
Usage
Agentic Radar now supports two main commands:
1. scan
Scan code for agentic workflows and generate a report.
agentic-radar scan [OPTIONS] FRAMEWORK:{langgraph|crewai|n8n|openai-agents|autogen}
Example:
agentic-radar scan langgraph -i path/to/langgraph/example/folder -o report.html
2. test
Test agents in an agentic workflow for various vulnerabilities.
Requires OPENAI_API_KEY set as environment variable.
agentic-radar test [OPTIONS] FRAMEWORK:{openai-agents} ENTRYPOINT_SCRIPT_WITH_ARGS
Example:
agentic-radar test openai-agents "path/to/openai-agents/example.py"
See more about this feature here.
Advanced Features ✨
Agentic Prompt Hardening
Agentic Prompt Hardening automatically improves detected system prompts in your agentic workflow and displays them in the report. It transforms simple agent instructions into high-quality structured system prompts which follow best prompt engineering practices.
[!NOTE]
Currently supported frameworks (with more to come): OpenAI Agents, CrewAI, Autogen
It is quite straightforward to use:
-
Set your OPENAI_API_KEY environment variable by running
export OPENAI_API_KEY=<api_key>
. -
Run Agentic Radar with the
--harden-prompts
flag, for example:
agentic-radar scan openai-agents --harden-prompts -i examples/openai-agents/ basic/lifecycle_example -o report.html
- Inspect hardened system prompts in the generated report:
🔍 Test for Vulnerabilities in Agentic Workflows
Agentic Radar now supports testing your agent workflows at runtime to identify critical vulnerabilities through simulated adversarial inputs.
This includes automated testing for:
- Prompt Injection
- PII Leakage
- Harmful Content Generation
- Fake News Generation
Currently supported for:
- OpenAI Agents ✅ (more frameworks coming soon)
🛠 How It Works
The test command launches your agentic workflow with a test suite designed to simulate malicious or adversarial inputs. These tests are designed based on real-world attack scenarios aligned with the OWASP LLM Top 10.
[!NOTE]
This feature requires OPENAI_API_KEY or AZURE_OPENAI_API_KEY set as an environment variable. You can set it via command line or inside a .env file.
Test is run like:
agentic-radar test <framework> "<path/to/the/workflow/main.py any-necessary-args>"
For example:
agentic-radar test openai-agents "examples/openai-agents/basic/lifecycle_example.py"
The tool injects itself into the agentic workflow provided by user, detects necessary information and runs the prepared tests.
Configurable Tests
You can pass custom tests to Agentic Radar by creating a YAML file with the following structure:
include_default_tests: false
tests:
- name: TestA
input: Mention A.
success_condition: Attack succeeded if A is mentioned.
- name: TestB
input: Mention B.
success_condition: Attack succeeded if B is mentioned.
name
is the name of the test, input
is the input text to be passed to the agent, and success_condition
is a description of what constitutes a successful attack (it will be used by an oracle LLM to determine if the test passed or failed).
You can then run Agentic Radar test with the --config
option pointing to your YAML file:
agentic-radar test openai-agents --config custom_tests.yaml "examples/openai-agents/basic/lifecycle_example.py"
By default, Agentic Radar will also include the built-in tests. You can disable them by setting include_default_tests
to false
in your YAML file.
📊 Rich Test Results
All test results are printed in a visually rich table format directly in the terminal.
Each row shows:
- Agent name
- Type of test
- Injected input
- Agent output
- ✅ Whether the test passed or failed
- 🛑 A short explanation of the result
This makes it easy to spot vulnerabilities at a glance—especially in multi-agent systems.

Roadmap 📈
This matrix shows which agentic frameworks support all the Agentic Radar features. With time we will strive towards covering all current frameworks with all existing features, as well as introducing new frameworks to the mix.
Feature | Scan | MCP Detection | Prompt Hardening | Agentic Test |
---|---|---|---|---|
OpenAI Agents | ✅ | ✅ | ✅ | ✅ |
CrewAI | ✅ | ❌ | ✅ | ❌ |
n8n | ✅ | ❌ | ❌ | ❌ |
LangGraph | ✅ | ✅ | ❌ | ❌ |
Autogen | ✅ | ❌ | ✅ | ❌ |
Are there some features you would like to see happen first? Vote anonymously here or open a GitHub Issue.
Blogs and Tutorials 💡
Community 🤝
We welcome contributions from the AI and security community! Join our Discord community or Slack community to connect with other developers, discuss features, get support and contribute to Agentic Radar 🚀
If you like what you see, give us a star! It keeps us inspired to improve and innovate and helps others discover the project 🌟
Frequently Asked Questions ❓
Q: Is my source code being shared or is everything running locally?
A: The main features (static workflow analysis and vulnerability mapping) are run completely locally and therefore your code is not shared anywhere. For optional advanced features, LLM’s might be used. Eg. when using Prompt Hardening, detected system prompts can get sent to LLM for analysis.
Contributing 💻
Code Of Conduct 📜
License ⚖️
DevTools Supporting MCP
The following are the main code editors that support the Model Context Protocol. Click the link to visit the official website for more information.