Turbular
What is Turbular
Turbular is an open-source Model Context Protocol (MCP) server that facilitates seamless connectivity between Language Models (LLMs) and various databases, providing a unified API for interaction.
Use cases
Use cases for Turbular include generating dynamic queries for data retrieval in chatbots, normalizing data for machine learning models, and integrating various databases into a single AI application.
How to use
To use Turbular, deploy it via Docker or Docker Compose, configure your database connections, and utilize the provided API to interact with your data through LLMs.
Key features
Key features include multi-database support, schema normalization for LLM compatibility, secure connections with SSL, high performance for optimized queries, query transformation capabilities, and easy deployment with Docker.
Where to use
Turbular can be used in AI applications that require data retrieval from multiple databases, such as chatbots, data analysis tools, and machine learning models.
Turbular
Turbular is an open-source Model Context Protocol (MCP) server that enables seamless database connectivity for Language Models (LLMs). It provides a unified API interface to interact with various database types, making it perfect for AI applications that need to work with multiple data sources.
✨ Features
- 🔌 Multi-Database Support: Connect to various database types through a single API
- 🔄 Schema Normalization: Automatically normalize database schemas to correct naming conventions for LLM compatibility
- 🔒 Secure Connections: Support for SSL and various authentication methods
- 🚀 High Performance: Optimizes your LLM-generated queries
- 📝 Query Transformation: Lets the LLM generate queries against normalized layouts and transforms them into their unnormalized form
- 🐳 Docker Support: Easy deployment with Docker and Docker Compose
- 🔧 Easy to Extend: Adding new database providers can be easily done by extending the BaseDBConnector interface
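The BaseDBConnector interface itself is not documented in this README, so the following is only an illustrative sketch of the extension pattern with invented method names; consult the real BaseDBConnector in the Turbular source before writing an actual provider.
import abc

# Illustrative sketch only: these class and method names are invented to show
# the subclassing pattern; they are not the real BaseDBConnector API.
class ConnectorSketch(abc.ABC):
    @abc.abstractmethod
    def connect(self) -> None: ...

    @abc.abstractmethod
    def run_query(self, query: str, max_rows: int) -> list[dict]: ...

class DuckDBConnectorSketch(ConnectorSketch):
    def connect(self) -> None:
        # open a connection to the new database type here
        pass

    def run_query(self, query: str, max_rows: int) -> list[dict]:
        # execute the query and return rows as dictionaries
        return []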
🗄️ Supported Databases
Database Type | Status |
---|---|
PostgreSQL | ✅ |
MySQL | ✅ |
SQLite | ✅ |
BigQuery | ✅ |
Oracle | ✅ |
MS SQL | ✅ |
Redshift | ✅ |
🚀 Quick Start
Using Docker (Recommended)
- Clone the repository:
  git clone https://github.com/raeudigerRaeffi/turbular.git
  cd turbular
- Start the development environment:
  docker-compose -f docker-compose.dev.yml up --build
- Test the connection:
  ./scripts/test_connection.py
Manual Installation
- Install Python 3.11 or higher
- Install dependencies:
  pip install -r requirements.txt
- Run the server:
  uvicorn app.main:app --reload
🔌 API Reference
Database Operations
Get Database Schema
POST /get_schema
Retrieve the schema of a connected database for your LLM agent.
Parameters:
- db_info: Database connection arguments
- return_normalize_schema (optional): Return schema in LLM-friendly format
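The README does not spell out the exact request shape, so the snippet below is a minimal sketch that assumes the parameters above travel as a JSON body to the local dev server on port 8000; the authoritative request model is in the Swagger UI at /docs.
import requests

# Minimal sketch: JSON body and local port 8000 are assumptions, not the
# documented contract; verify the request model in the Swagger UI at /docs.
db_info = {
    "database_type": "PostgreSQL",
    "username": "user",
    "password": "password",
    "host": "localhost",
    "port": 5432,
    "database_name": "mydb",
    "ssl": False,
}

response = requests.post(
    "http://localhost:8000/get_schema",
    json={"db_info": db_info, "return_normalize_schema": True},
)
response.raise_for_status()
print(response.json())  # schema description to hand to the LLM agent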
Execute Query
POST /execute_query
Optimizes and then executes SQL queries on the connected database.
Parameters:
- db_info: Database connection arguments
- query: SQL query string
- normalized_query: Boolean indicating if the query is normalized
- max_rows: Maximum number of rows to return
- autocommit: Boolean for autocommit mode
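As with /get_schema, the request shape below is an assumption (JSON body, local server, and the db_info dictionary from the previous sketch); check /docs for the real model.
import requests

# Sketch under the same assumptions as the /get_schema example above;
# db_info is the connection dictionary defined there.
payload = {
    "db_info": db_info,
    "query": "SELECT id, name FROM users LIMIT 10",  # written against the normalized schema
    "normalized_query": True,
    "max_rows": 100,
    "autocommit": False,
}

response = requests.post("http://localhost:8000/execute_query", json=payload)
response.raise_for_status()
print(response.json())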
File Management
Upload BigQuery Key
POST /upload-bigquery-key
Upload a BigQuery service account key file.
Parameters:
- project_id: BigQuery project ID
- key_file: JSON key file
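A minimal sketch of the upload, assuming project_id is sent as a form field and the key as a multipart file; the exact field handling may differ, so confirm it against /docs.
import requests

# Assumed multipart upload; confirm the exact field names in the Swagger UI.
with open("/path/to/credentials.json", "rb") as key:
    response = requests.post(
        "http://localhost:8000/upload-bigquery-key",
        data={"project_id": "my-project"},
        files={"key_file": ("credentials.json", key, "application/json")},
    )
response.raise_for_status()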
Upload SQLite Database
POST /upload-sqlite-file
Upload a SQLite database file.
Parameters:
- database_name: Name to identify the database
- db_file: SQLite database file (.db or .sqlite)
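Same caveat as the BigQuery upload above: the multipart layout in this sketch is an assumption.
import requests

# Assumed multipart upload of a local SQLite file; verify field names in /docs.
with open("my_database.db", "rb") as db:
    response = requests.post(
        "http://localhost:8000/upload-sqlite-file",
        data={"database_name": "my_database"},
        files={"db_file": ("my_database.db", db, "application/octet-stream")},
    )
response.raise_for_status()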
Utility Endpoints
Health Check
GET /health
Check that the API is running.
List Supported Databases
GET /supported-databases
Get a list of all supported database types.
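Both utility endpoints are plain GET requests; the sketch below assumes the local dev server from the Quick Start is running on port 8000 and returns JSON.
import requests

# Assumes the dev server on port 8000; responses are assumed to be JSON.
print(requests.get("http://localhost:8000/health").json())
print(requests.get("http://localhost:8000/supported-databases").json())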
🔧 Development Setup
- Fork and clone the repository
- Create a development environment:
  docker-compose -f docker-compose.dev.yml up --build
- The development server includes:
  - FastAPI server with hot reload
  - PostgreSQL test database
  - Pre-configured test data
- Access the API documentation:
  - Swagger UI: http://localhost:8000/docs
  - ReDoc: http://localhost:8000/redoc
🤝 Contributing
We welcome contributions! Here’s how you can help:
- Check out our contribution guidelines
- Look for open issues
- Submit pull requests with improvements
- Help with documentation
- Share your feedback
Development Guidelines
- Follow PEP 8 style guide
- Write tests for new features
- Update documentation as needed
- Use meaningful commit messages
Roadmap
- Add more testing, formatting and commit hooks
- Add SSH support for database connections
- Add APIs as datasources using steampipe
- Enable local schema saving for databases to which the server has already connected
- Add more datasources (Snowflake, MongoDB, Excel, etc.)
- Add authentication protection to routes
🧪 Testing
Run the test suite:
pytest
For development tests with the included PostgreSQL:
./scripts/test_connection.py
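If you want a starting point for your own tests, the sketch below is a hypothetical example (not part of the existing suite) that exercises the /health endpoint with FastAPI's TestClient, assuming the app is importable as app.main:app as in the run command above.
# tests/test_health.py -- hypothetical example, not part of the existing suite.
from fastapi.testclient import TestClient

from app.main import app  # same app object used by `uvicorn app.main:app`

client = TestClient(app)

def test_health_endpoint_responds():
    # /health is documented above; a 200 response is assumed to mean "running"
    response = client.get("/health")
    assert response.status_code == 200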
📚 Documentation
📝 Connection Examples
PostgreSQL
connection_info = {
"database_type": "PostgreSQL",
"username": "user",
"password": "password",
"host": "localhost",
"port": 5432,
"database_name": "mydb",
"ssl": False
}
BigQuery
connection_info = {
"database_type": "BigQuery",
"path_cred": "/path/to/credentials.json",
"project_id": "my-project",
"dataset_id": "my_dataset"
}
SQLite
connection_info = {
"type": "SQLite",
"database_name": "my_database"
}
📜 License
This project is licensed under the MIT License - see the LICENSE file for details.
🙏 Acknowledgments
- FastAPI for the amazing framework
- SQLAlchemy for database support
- Henry Albert Jupiter Hommel (@henryclickclack) as co-developer ❤️
- All our contributors and users
📞 Support
- Create an issue
- Email: [email protected]