Axiom
What is Axiom
Axiom is an AI Agent designed to assist users in creating various projects using natural language. It leverages modern AI frameworks, libraries, and tools, including LangGraph, MCP Docs, Chainlit, and Gemini.
Use cases
Use cases for Axiom include building AI Agents, creating retrieval-augmented generation (RAG) systems, developing chatbots, implementing authentication mechanisms, and generating production-ready code using images and graphs.
How to use
To use Axiom, clone the repository, set up a virtual environment, install dependencies, configure environment variables with your API keys, and run the MCP Doc server before launching the application.
Key features
Key features of Axiom include an interactive chat interface, access to multiple documentation sources, support for various Gemini models, image processing capabilities, customizable model settings, and Docker support for deployment.
Where to use
Axiom can be used in fields such as AI development, chatbot creation, full-stack development, and any project requiring natural language processing and AI integration.
Content
Axiom - A Docs Expert Agent
Axiom is an AI Agent specialized in modern AI frameworks, libraries, and tools. It can assist in creating AI Agents, RAG systems, chatbots, authentication mechanisms, and even full-stack applications. Built with LangGraph, MCP Docs Server, Chainlit, and Gemini models, it is designed to help users create different projects using natural language instructions.

Features
- 🤖 Interactive chat interface
- 📚 Access to multiple documentation sources
- 🦾 Support for multiple Gemini models
- 🎨 Support for image processing and analysis
- 📈 Use images and graphs to create production-ready code
- 🛠️ Customizable model settings (temperature, model version)
- 🌐 Docker support for containerized deployment
Documentation Sources
Axiom uses the llms.txt file of each documentation source and fetches content from the URLs listed in it.
The agent has access to the following documentation:
- LangGraph Python
- CrewAI
- Model Context Protocol (MCP)
- Chainlit
- FastHTML
- Supabase
- Pinecone
- Composio
- Mem0
- Zep
- Stripe
- Resend
- Upstash
- Netlify
- Clerk Auth
- Stack Auth
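The llms.txt convention lists a site's documentation pages as markdown links, which is what lets the agent discover fetchable URLs. A minimal sketch of that parsing step (illustrative only; the function name and regex are assumptions, not Axiom's actual code):

```python
import re

def parse_llms_txt(text: str) -> dict[str, str]:
    """Extract (title -> URL) pairs from an llms.txt document.

    llms.txt files list documentation pages as markdown links,
    e.g. "- [Quickstart](https://example.com/docs/quickstart.md)".
    """
    links = {}
    for title, url in re.findall(r"\[([^\]]+)\]\((https?://[^)]+)\)", text):
        links[title] = url
    return links

sample = """# Example Docs
- [Quickstart](https://example.com/docs/quickstart.md): getting started
- [API Reference](https://example.com/docs/api.md)
"""
print(parse_llms_txt(sample))
```

In Axiom this discovery-and-fetch work is handled by the MCP Doc server rather than by the agent itself.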
Prerequisites
- UV package manager
- Python 3.11+
- Google Gemini API Key
- Docker (optional): required only if you intend to use the Dockerfile.
Installation
- Clone the repository:
git clone https://github.com/aasherkamal216/Axiom.git
cd Axiom
- Create and Activate Virtual Environment:
uv venv
.venv\Scripts\activate # For Windows
source .venv/bin/activate # For macOS/Linux
- Install dependencies:
uv sync
- Set up environment variables:
cp .env.example .env
Add your API keys and other credentials in .env file.
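The repository's .env.example defines the exact variable names to fill in. As an illustration, the file will contain at least a Gemini API key (the variable name below is an assumption; copy the real names from .env.example):

```shell
# .env — illustrative; verify variable names against .env.example
GEMINI_API_KEY=your-gemini-api-key
```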
> [!NOTE]
> If you want to disable authentication, remove the `chainlit.yaml` file and the OAuth callback function from `src/axiom/app.py`.
Usage
Run the application:
- First run the MCP Doc server:
uv run mcpdoc --yaml docs_config.yaml --transport sse --port 8082 --host localhost
- Then run the Chainlit interface:
uv run chainlit run app.py -w
The application will be available at http://localhost:8000.
Building the Docker image (Optional)
Alternatively, you can use Docker to run the application:
docker build -t axiom .
docker run -p 7860:7860 -p 8082:8082 axiom
Adding More Docs
You can add more documentation sources in the docs_config.yaml file. Any documentation that publishes an llms.txt file can be added to the list.
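As a sketch, an entry pairs a display name with the documentation's llms.txt URL (field names follow mcpdoc's config format, but verify them against the entries already in docs_config.yaml):

```yaml
# docs_config.yaml — illustrative entry; check existing entries for the exact schema
- name: ExampleDocs
  llms_txt: https://example.com/docs/llms.txt
```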