Agent Atlassian
What is Agent Atlassian
Agent-atlassian is an AI agent designed for Atlassian products like Jira and Confluence. It utilizes the MCP Server along with OpenAPI Codegen, LangGraph, and LangChain MCP Adapters to provide intelligent assistance.
Use cases
Use cases for agent-atlassian include automating task management in Jira, providing contextual assistance in Confluence, and enhancing team collaboration through intelligent insights.
How to use
To use agent-atlassian, integrate it with your Atlassian products via supported protocols such as AGNTCY ACP, Google A2A, or directly through the MCP Server. Follow the setup instructions in the README for configuration details.
Key features
Key features of agent-atlassian include LLM-powered capabilities, support for multiple agent transport protocols, and integration with popular Atlassian tools, enhancing productivity and collaboration.
Where to use
Agent-atlassian is primarily used in project management and collaboration environments, particularly where Atlassian products like Jira and Confluence are deployed.
🚀 Atlassian AI Agent
🧪 Evaluation Badges
Evaluation badges are published for Claude, Gemini, OpenAI, and Llama.
- 🤖 Atlassian Agent is an LLM-powered agent built using the LangGraph ReAct Agent workflow and MCP tools.
- 🌐 Protocol Support: Compatible with A2A protocol for integration with external user clients.
- 🛡️ Secure by Design: Enforces Atlassian API token-based RBAC and supports external authentication for strong access control.
- 🔌 Integrated Communication: Uses langchain-mcp-adapters to connect with the Atlassian MCP server within the LangGraph ReAct Agent workflow.
- 🏭 First-Party MCP Server: The MCP server is generated by our first-party openapi-mcp-codegen utility, ensuring version/API compatibility and software supply chain integrity.
🚦 Getting Started
1️⃣ Configure Environment
- Ensure your `.env` file is set up as described in the cnoe-agent-utils usage guide for your LLM provider.
- Refer to `.env.example` as an example.
Example `.env` file:

```
LLM_PROVIDER=
AGENT_NAME=atlassian
ATLASSIAN_TOKEN=
ATLASSIAN_EMAIL=
ATLASSIAN_API_URL=
ATLASSIAN_VERIFY_SSL=

########### LLM Configuration ###########
# Refer to: https://github.com/cnoe-io/cnoe-agent-utils#-usage
```
Use the following link to get your own Atlassian API Token:
https://id.atlassian.com/manage-profile/security/api-tokens
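With the token in place, Atlassian Cloud REST APIs accept it via HTTP Basic authentication as `email:api_token`. A minimal sketch of building that header from the `.env` values above (`atlassian_auth_header` is an illustrative helper, not part of the agent):

```python
# Sketch: Atlassian Cloud REST APIs use HTTP Basic auth with
# "email:api_token" as the credential pair. Environment variable names
# mirror the example .env above.
import base64
import os

def atlassian_auth_header(email: str, token: str) -> dict:
    """Build the Authorization header Atlassian Cloud expects."""
    creds = base64.b64encode(f"{email}:{token}".encode()).decode()
    return {"Authorization": f"Basic {creds}"}

if __name__ == "__main__":
    email = os.environ.get("ATLASSIAN_EMAIL", "user@example.com")
    token = os.environ.get("ATLASSIAN_TOKEN", "api-token")
    print(atlassian_auth_header(email, token))
```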
2️⃣ Start the Agent (A2A Mode)
Run the agent in a Docker container using your .env file:
```
docker run -p 0.0.0.0:8000:8000 -it \
  -v "$(pwd)/.env:/app/.env" \
  ghcr.io/cnoe-io/agent-atlassian:a2a-stable
```
3️⃣ Run the Client
Use the agent-chat-cli to interact with the agent:
```
uvx https://github.com/cnoe-io/agent-chat-cli.git a2a
```
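You can also sanity-check the running agent from Python. A minimal sketch, assuming the container from the previous step is listening on `localhost:8000` and serves its A2A agent card at the conventional `/.well-known/agent.json` path (`card_url` and `fetch_agent_card` are illustrative helpers, not part of the CLI):

```python
# Sketch: fetch the agent's A2A card to confirm the server is up.
# Assumes the agent is reachable at http://localhost:8000 and exposes the
# standard A2A well-known agent card path.
import json
import urllib.request

def card_url(base_url: str) -> str:
    """Location of the A2A agent card for a given base URL."""
    return f"{base_url.rstrip('/')}/.well-known/agent.json"

def fetch_agent_card(base_url: str = "http://localhost:8000") -> dict:
    with urllib.request.urlopen(card_url(base_url)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    card = fetch_agent_card()
    print(card.get("name"), card.get("description"))
```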
🏗️ Architecture
```mermaid
flowchart TD
  subgraph Client Layer
    A[User Client A2A]
  end
  subgraph Agent Transport Layer
    B[Google A2A]
  end
  subgraph Agent Graph Layer
    C[LangGraph ReAct Agent]
  end
  subgraph Tools/MCP Layer
    D[LangGraph MCP Adapter]
    E[Atlassian MCP Server]
    F[Atlassian API Server]
  end
  A --> B --> C
  C --> D
  D -.-> C
  D --> E --> F --> E
```
✨ Features
- 🤖 LangGraph + LangChain MCP Adapter for agent orchestration
- 🧠 Azure OpenAI GPT-4o as the LLM backend
- 🔗 Connects to Atlassian via a dedicated Atlassian MCP agent
🧪 Usage
▶️ Test with Atlassian Server
🏃 Quick Start: Run Atlassian Locally with Minikube
If you don’t have an existing Atlassian server, you can quickly spin one up using Minikube:
- Start Minikube:

  ```
  minikube start
  ```

- Install Atlassian in the `atlassian` namespace:

  ```
  kubectl create namespace atlassian
  kubectl apply -n atlassian -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
  ```

- Expose the Atlassian API server:

  ```
  kubectl port-forward svc/atlassian-server -n atlassian 8080:443
  ```

  The API will be available at https://localhost:8080.

- Get the Atlassian admin password:

  ```
  kubectl -n atlassian get secret atlassian-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
  ```

- (Optional) Install the Atlassian CLI:

  ```
  brew install atlassian
  # or see https://argo-cd.readthedocs.io/en/stable/cli_installation/
  ```
For more details, see the official getting started guide.
2️⃣ Run the A2A Client
To interact with the agent in A2A mode:
```
make run-a2a-client
```
Sample Streaming Output
When running in A2A mode, you’ll see streaming responses like:
```
============================================================
RUNNING STREAMING TEST
============================================================

--- Single Turn Streaming Request ---
--- Streaming Chunk ---
The current version of Atlassian is **v2.13.3+a25c8a0**. Here are some additional details:

- **Build Date:** 2025-01-03
- **Git Commit:** a25c8a0eef7830be0c2c9074c92dbea8ff23a962
- **Git Tree State:** clean
- **Go Version:** go1.23.1
- **Compiler:** gc
- **Platform:** linux/amd64
- **Kustomize Version:** v5.4.3
- **Helm Version:** v3.15.4+gfa9efb0
- **Kubectl Version:** v0.31.0
- **Jsonnet Version:** v0.20.0
```
🧬 Internals
- 🛠️ Uses `create_react_agent` for tool-calling
- 🔌 Tools loaded from the Atlassian MCP server (submodule)
- ⚡ MCP server launched via `uv run` with `stdio` transport
- 🕸️ Single-node LangGraph for inference and action routing
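For context on the `stdio` transport above: the adapter spawns the MCP server as a subprocess and exchanges newline-delimited JSON-RPC messages over its stdin/stdout. A toy sketch of that message framing (the real handshake and framing are handled by langchain-mcp-adapters; `frame` and `unframe` are illustrative only):

```python
# Sketch of MCP stdio framing: each JSON-RPC message travels as one
# newline-terminated JSON object on the subprocess's stdin/stdout.
import json

def frame(msg: dict) -> bytes:
    """Serialize one JSON-RPC message for a newline-delimited stdio stream."""
    return (json.dumps(msg) + "\n").encode()

def unframe(line: bytes) -> dict:
    """Parse one newline-delimited JSON-RPC message."""
    return json.loads(line.decode())

if __name__ == "__main__":
    req = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
    print(unframe(frame(req)) == req)  # True
```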
📁 Project Structure
```
agent_atlassian/
│
├── agent.py        # LLM + MCP client orchestration
├── langgraph.py    # LangGraph graph definition
├── __main__.py     # CLI entrypoint
├── state.py        # Pydantic state models
└── atlassian_mcp/  # Git submodule: Atlassian MCP server
```
🧩 MCP Submodule (Atlassian Tools)
This project uses a first-party MCP module generated from the Atlassian OpenAPI specification using our openapi-mcp-codegen utility. The generated MCP server is included as a git submodule in atlassian_mcp/.
All Atlassian-related LangChain tools are defined by this MCP server implementation, ensuring up-to-date API compatibility and supply chain integrity.
🔌 MCP Integration
The agent uses MultiServerMCPClient to communicate with MCP-compliant services.
Example (stdio transport):
```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

async with MultiServerMCPClient(
    {
        "atlassian": {
            "command": "uv",
            "args": ["run", "/abs/path/to/atlassian_mcp/server.py"],
            "env": {
                "ATLASSIAN_TOKEN": atlassian_token,
                "ATLASSIAN_API_URL": atlassian_api_url,
                "ATLASSIAN_VERIFY_SSL": "false"
            },
            "transport": "stdio",
        }
    }
) as client:
    agent = create_react_agent(model, client.get_tools())
```
Example (SSE transport):
```python
async with MultiServerMCPClient(
    {
        "atlassian": {
            "transport": "sse",
            "url": "http://localhost:8000"
        }
    }
) as client:
    ...
```
Evals
Running Evals
This evaluation uses agentevals to perform strict trajectory match evaluation of the agent’s behavior. To run the evaluation suite:
```
make evals
```
This will:
- Set up and activate the Python virtual environment
- Install evaluation dependencies (`agentevals`, `tabulate`, `pytest`)
- Run strict trajectory matching tests against the agent
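Conceptually, strict trajectory matching checks that the agent's visited node sequence equals one of the reference trajectories exactly. A toy sketch of the idea (agentevals implements the real, richer version; `strict_match` here is illustrative only):

```python
# Sketch of strict trajectory matching: the actual node sequence must
# equal one reference trajectory exactly -- no extra, missing, or
# reordered steps are tolerated.
def strict_match(actual: list[str], references: list[list[str]]) -> dict:
    """Return {'score': True} iff `actual` exactly matches a reference."""
    return {"score": any(actual == ref for ref in references)}

if __name__ == "__main__":
    refs = [["__start__", "agent_atlassian"]]
    print(strict_match(["__start__", "agent_atlassian"], refs))  # {'score': True}
    print(strict_match(["__start__", "other_node"], refs))       # {'score': False}
```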
Example Output
```
=======================================
Setting up the Virtual Environment
=======================================
Virtual environment already exists.
=======================================
Activating virtual environment
=======================================
To activate venv manually, run: source .venv/bin/activate
. .venv/bin/activate
Running Agent Strict Trajectory Matching evals...
Installing agentevals with Poetry...
. .venv/bin/activate && uv add agentevals tabulate pytest
...
set -a && . .env && set +a && uv run evals/strict_match/test_strict_match.py
...
Test ID: atlassian_agent_1
Prompt: show atlassian version
Reference Trajectories: [['__start__', 'agent_atlassian']]
Note: Shows the version of the Atlassian Server Version.
...
Results: {'score': True}
...
```
Evaluation Results
Latest Strict Match Eval Results
📜 License
Apache 2.0 (see LICENSE)
👥 Maintainers
See MAINTAINERS.md
- Contributions welcome via PR or issue!
🙏 Acknowledgements
- LangGraph and LangChain for agent orchestration frameworks.
- langchain-mcp-adapters for MCP integration.
- AGNTCY Agent Gateway Protocol (AGP)
- AGNTCY Workflow Server Manager (WFSM) for deployment and orchestration.
- Model Context Protocol (MCP) for the protocol specification.
- Google A2A
- The open source community for ongoing support and contributions.