MCP Explorer

Turbular

@raeudigerRaeffion 24 days ago
91 MIT
Free · Community
AI Systems
#agents#api#database#llm#llmops#mcp-server#open-source#openai#query#sql
An MCP server allowing LLM agents to easily connect to and retrieve data from any database

Overview

What is Turbular

Turbular is an open-source Model Context Protocol (MCP) server that connects large language models (LLMs) to a wide range of databases through a single, unified API.

Use cases

Use cases for Turbular include generating dynamic queries for data retrieval in chatbots, normalizing data for machine learning models, and integrating various databases into a single AI application.

How to use

To use Turbular, deploy it via Docker or Docker Compose, configure your database connections, and utilize the provided API to interact with your data through LLMs.

Key features

Key features include multi-database support, schema normalization for LLM compatibility, secure connections with SSL, high performance for optimized queries, query transformation capabilities, and easy deployment with Docker.

Where to use

Turbular can be used in AI applications that require data retrieval from multiple databases, such as chatbots, data analysis tools, and machine learning models.

Content

Turbular

FastAPI · Python · License

Turbular is an open-source Model Context Protocol (MCP) server that enables seamless database connectivity for large language models (LLMs). It provides a unified API for interacting with many database types, making it well suited to AI applications that need to work with multiple data sources.

✨ Features

  • 🔌 Multi-Database Support: Connect to various database types through a single API
  • 🔄 Schema Normalization: Automatically normalize database schemas into LLM-friendly naming conventions
  • 🔒 Secure Connections: Support for SSL and various authentication methods
  • 🚀 High Performance: Optimizes your LLM-generated queries before execution
  • 📝 Query Transformation: Let LLMs generate queries against the normalized schema and transform them back into the database's original, unnormalized form
  • 🐳 Docker Support: Easy deployment with Docker and Docker Compose
  • 🔧 Easy to Extend: Add new database providers by extending the BaseDBConnector interface
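The last feature above can be sketched in a few lines. Note that BaseDBConnector is named in the repository, but the method names and signatures below are assumptions for illustration, not Turbular's actual interface:

```python
from abc import ABC, abstractmethod

# Minimal stand-in for Turbular's BaseDBConnector; the real interface
# lives in the repository and its method names may differ.
class BaseDBConnector(ABC):
    @abstractmethod
    def connect(self, connection_info: dict) -> None: ...

    @abstractmethod
    def get_schema(self) -> dict: ...

    @abstractmethod
    def execute_query(self, query: str, max_rows: int = 100) -> list: ...


# A hypothetical new provider, added by implementing the interface.
class DuckDBConnector(BaseDBConnector):
    def connect(self, connection_info: dict) -> None:
        self._db = connection_info["database_name"]

    def get_schema(self) -> dict:
        # A real connector would introspect the database here.
        return {"tables": {}}

    def execute_query(self, query: str, max_rows: int = 100) -> list:
        # A real connector would run the query and return up to max_rows rows.
        return []
```

Registering a new backend is then a matter of implementing each abstract method for the target database driver.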

🗄️ Supported Databases

  • PostgreSQL
  • MySQL
  • SQLite
  • BigQuery
  • Oracle
  • MS SQL
  • Redshift

🚀 Quick Start

Using Docker (Recommended)

  1. Clone the repository:

    git clone https://github.com/raeudigerRaeffi/turbular.git
    cd turbular
    
  2. Start the development environment:

    docker-compose -f docker-compose.dev.yml up --build
    
  3. Test the connection:

    ./scripts/test_connection.py
    

Manual Installation

  1. Install Python 3.11 or higher

  2. Install dependencies:

    pip install -r requirements.txt
    
  3. Run the server:

    uvicorn app.main:app --reload
    

🔌 API Reference

Database Operations

Get Database Schema

POST /get_schema

Retrieve the schema of a connected database for your LLM agent.

Parameters:

  • db_info: Database connection arguments
  • return_normalize_schema (optional): Return schema in LLM-friendly format
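A minimal client-side sketch of calling this endpoint, using only the standard library. The payload field names come from the parameters documented above; the base URL and port (uvicorn's default) are assumptions:

```python
import json
from urllib import request


def build_schema_request(db_info: dict, normalize: bool = True) -> dict:
    # Field names taken from the documented parameters; everything
    # else (base URL, port) is an assumption, not the project's contract.
    return {"db_info": db_info, "return_normalize_schema": normalize}


def post_json(url: str, payload: dict) -> dict:
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    db_info = {
        "database_type": "PostgreSQL",
        "username": "user",
        "password": "password",
        "host": "localhost",
        "port": 5432,
        "database_name": "mydb",
        "ssl": False,
    }
    schema = post_json("http://localhost:8000/get_schema",
                       build_schema_request(db_info))
    print(schema)
```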

Execute Query

POST /execute_query

Optimizes the query and then executes it on the connected database.

Parameters:

  • db_info: Database connection arguments
  • query: SQL query string
  • normalized_query: Boolean indicating if query is normalized
  • max_rows: Maximum number of rows to return
  • autocommit: Boolean for autocommit mode
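Assembling the request body for this endpoint can be sketched as follows. The field names mirror the documented parameters; the default values chosen here are illustrative, not the server's actual defaults:

```python
def build_query_request(db_info: dict, query: str, *,
                        normalized_query: bool = True,
                        max_rows: int = 100,
                        autocommit: bool = False) -> dict:
    # Mirrors the documented /execute_query parameters; defaults are
    # illustrative assumptions, not Turbular's own defaults.
    return {
        "db_info": db_info,
        "query": query,
        "normalized_query": normalized_query,
        "max_rows": max_rows,
        "autocommit": autocommit,
    }


if __name__ == "__main__":
    payload = build_query_request(
        {"database_type": "PostgreSQL", "database_name": "mydb"},
        "SELECT * FROM users",
        max_rows=50,
    )
    print(payload)
```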

File Management

Upload BigQuery Key

POST /upload-bigquery-key

Upload a BigQuery service account key file.

Parameters:

  • project_id: BigQuery project ID
  • key_file: JSON key file

Upload SQLite Database

POST /upload-sqlite-file

Upload a SQLite database file.

Parameters:

  • database_name: Name to identify the database
  • db_file: SQLite database file (.db or .sqlite)

Utility Endpoints

Health Check

GET /health

Verify that the API is running.

List Supported Databases

GET /supported-databases

Get a list of all supported database types.
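Both utility endpoints are plain GETs, so a quick check can be done with the standard library alone. The base URL below assumes a local uvicorn instance on the default port:

```python
import json
from urllib import request

# Assumed local development address (uvicorn's default port).
BASE_URL = "http://localhost:8000"


def get_json(path: str) -> object:
    """Fetch a JSON response from one of the utility endpoints."""
    with request.urlopen(BASE_URL + path) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(get_json("/health"))
    print(get_json("/supported-databases"))
```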

🔧 Development Setup

  1. Fork and clone the repository

  2. Create a development environment:

    docker-compose -f docker-compose.dev.yml up --build
    
  3. The development server includes:

    • FastAPI server with hot reload
    • PostgreSQL test database
    • Pre-configured test data
  4. Access the interactive API documentation (FastAPI serves Swagger UI at /docs by default)

🤝 Contributing

We welcome contributions! Here’s how you can help:

  1. Check out our contribution guidelines
  2. Look for open issues
  3. Submit pull requests with improvements
  4. Help with documentation
  5. Share your feedback

Development Guidelines

  • Follow PEP 8 style guide
  • Write tests for new features
  • Update documentation as needed
  • Use meaningful commit messages

Roadmap

  1. Add more testing, formatting and commit hooks
  2. Add SSH support for database connection
  3. Add APIs as datasources using Steampipe
  4. Enable local schema saving for databases to which the server has already connected
  5. Add more datasources (Snowflake, MongoDB, Excel, etc.)
  6. Add authentication protection to routes

🧪 Testing

Run the test suite:

pytest

For development tests with the included PostgreSQL:

./scripts/test_connection.py

📚 Documentation

📝 Connection Examples

PostgreSQL

connection_info = {
    "database_type": "PostgreSQL",
    "username": "user",
    "password": "password",
    "host": "localhost",
    "port": 5432,
    "database_name": "mydb",
    "ssl": False
}

BigQuery

connection_info = {
    "database_type": "BigQuery",
    "path_cred": "/path/to/credentials.json",
    "project_id": "my-project",
    "dataset_id": "my_dataset"
}

SQLite

connection_info = {
    "type": "SQLite",
    "database_name": "my_database"
}
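The three examples above each require a different set of keys (note that the SQLite example uses `type` where the others use `database_type`). A small client-side sanity check can catch a malformed dict before it reaches the server; the key sets below are inferred from the examples and are illustrative, not the server's validation rules:

```python
# Required keys per backend, inferred from the connection examples
# above; treat these sets as illustrative, not Turbular's validation.
REQUIRED_KEYS = {
    "PostgreSQL": {"database_type", "username", "password",
                   "host", "port", "database_name"},
    "BigQuery": {"database_type", "path_cred", "project_id", "dataset_id"},
    "SQLite": {"type", "database_name"},
}


def missing_keys(connection_info: dict) -> set:
    """Return the required keys absent from a connection dict."""
    db_type = connection_info.get("database_type") or connection_info.get("type")
    required = REQUIRED_KEYS.get(db_type, set())
    return required - connection_info.keys()
```

For example, a SQLite dict lacking `database_name` would be flagged before any request is sent.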

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

📞 Support


Made with ❤️ by the Turbular Team

Tools

No tools
