MCP ExplorerExplorer

Agentnull

@jaschadubon 17 days ago
1 MIT
FreeCommunity
AI Systems
#agent#agentic-ai#agentic-workflow#ai#blueteam#hacks#llm#llmops#poc#proof-of-concept#redteam#research#threat-intelligence#threat-modeling#mcp#mcp-security#mcp-server#mcp-servers
Collection of PoC for using Agents and MCP in bad ways

Overview

What is Agentnull

AgentNull is a collection of proof-of-concept (PoC) attack vectors aimed at exploiting vulnerabilities in autonomous AI agents, particularly those related to MCP (Multi-Channel Processing) systems.

Use cases

Use cases for AgentNull include testing the resilience of AI systems against various attack vectors, training security professionals on potential threats, and developing defensive strategies to protect against exploitation.

How to use

To use AgentNull, navigate to the specific attack vector folder within the ‘pocs/’ directory and follow the README instructions to replicate the attack scenario.

Key features

Key features of AgentNull include a comprehensive threat catalog, structured data formats for SOC/SIEM integration, and a variety of attack vectors ranging from full-schema poisoning to semantic DoS attacks.

Where to use

AgentNull can be utilized in cybersecurity research, red teaming exercises, and educational contexts to understand and mitigate risks associated with AI agents and MCP systems.

Content

🧠 AgentNull: AI System Security Threat Catalog + Proof-of-Concepts

This repository contains a red team-oriented catalog of attack vectors targeting AI systems including autonomous agents (MCP, LangGraph, AutoGPT), RAG pipelines, vector databases, and embedding-based retrieval systems, along with individual proof-of-concepts (PoCs) for each.

📘 Structure

  • catalog/AgentNull_Catalog.md — Human-readable threat catalog
  • catalog/AgentNull_Catalog.json — Structured version for SOC/SIEM ingestion
  • pocs/ — One directory per attack vector, each with its own README, code, and sample input/output

⚠️ Disclaimer

This repository is for educational and internal security research purposes only. Do not deploy any techniques or code herein in production or against systems you do not own or have explicit authorization to test.

🔧 Usage

Navigate into each pocs/<attack_name>/ folder and follow the README to replicate the attack scenario.

🤖 Testing with Local LLMs (Recommended)

For enhanced PoC demonstrations without API costs, use Ollama with local models:

Install Ollama

# Linux/macOS
curl -fsSL https://ollama.ai/install.sh | sh

# Or download from https://ollama.ai/download

Setup Local Model

# Pull a lightweight model (recommended for testing)
ollama pull gemma3

# Or use a more capable model
ollama pull deepseek-r1
ollama pull qwen3

Run PoCs with Local LLM

# Advanced Tool Poisoning with real LLM
cd pocs/AdvancedToolPoisoning
python3 advanced_tool_poisoning_agent.py local

# Other PoCs work with simulation mode
cd pocs/ContextPackingAttacks
python3 context_packing_agent.py

Ollama Configuration

  • Default endpoint: http://localhost:11434
  • Model selection: Edit the model name in PoC files if needed
  • Performance: Llama2 (~4GB RAM), Mistral (~4GB RAM), CodeLlama (~4GB RAM)

🧩 Attack Vectors Covered

🤖 MCP & Agent Systems

🧠 Memory & Context Systems

🔍 RAG & Vector Systems

💻 Code & File Systems

⚡ Resource & Performance

📚 Related Research & Attribution

Novel Attack Vectors (⭐)

The attack vectors marked with ⭐ represent novel concepts primarily developed within the AgentNull project, extending beyond existing documented attack patterns.

Known Attack Patterns with Research Links

Sponsored by ThirdKey

Tools

No tools

Comments