Tempo LLM CLI
Overview
What is Tempo LLM CLI
Tempo LLM CLI is an interactive tool for exploring Tempo tracing data using Claude and the Tempo MCP Server. It lets users ask about tracing metrics and data in natural language, streamlining the analysis of trace data.
Use cases
Tempo LLM CLI can be used to quickly identify issues with service performance, such as determining which endpoints are slow, discovering error sources, and visualizing response time metrics. This tool enables developers and operations teams to gain insights into their applications and optimize performance effectively.
How to use
To use Tempo LLM CLI, start by running a local instance of Tempo and setting your Claude API key. Once the environment is ready, you can ask the CLI questions about tracing data and metrics for efficient troubleshooting and performance analysis.
Key features
Key features of Tempo LLM CLI include integration with Claude for conversational queries, automatic fetching of available attributes and metrics, and the ability to visualize data in various formats. Users can ask specific questions and receive detailed, structured responses that help in diagnosing performance issues.
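For a sense of what "automatic fetching of available attributes" involves under the hood, here is a minimal Go sketch that queries a local Tempo instance's tags API directly. This is an illustration, not the CLI's actual code; the `/api/v2/search/tags` path and response shape follow Tempo 2.x API docs and may differ on other versions.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// tagsResponse mirrors the shape of Tempo's v2 tags API response.
type tagsResponse struct {
	Scopes []struct {
		Name string   `json:"name"`
		Tags []string `json:"tags"`
	} `json:"scopes"`
}

func main() {
	// Ask the local Tempo instance for span-scoped attribute names
	// (assumed endpoint, per Tempo 2.x API docs).
	resp, err := http.Get("http://localhost:3200/api/v2/search/tags?scope=span")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var tr tagsResponse
	if err := json.NewDecoder(resp.Body).Decode(&tr); err != nil {
		log.Fatal(err)
	}
	for _, scope := range tr.Scopes {
		fmt.Printf("%s (%d attributes)\n", scope.Name, len(scope.Tags))
		for _, tag := range scope.Tags {
			fmt.Println("  " + tag)
		}
	}
}
```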
Where to use
Tempo LLM CLI is suitable for developers, DevOps engineers, and system administrators who require real-time insights into application performance and tracing data. It can be utilized in development and production environments where monitoring and troubleshooting are essential.
Content
Tempo LLM CLI
LLM Agent designed to interact with Claude and the Tempo MCP Server. Use natural language to ask questions about your tracing data in Tempo.
This is primarily a demo project to investigate the creation of LLM agents and interactions with MCP servers. However, feel free to use, extend,
and PR improvements if you find this work interesting.
Quick Start
- Run `docker-compose up` using the local example in the Tempo main repo. If this PR is not merged you’ll need to use the PR branch.
- This starts Tempo exposed at http://localhost:3200 and pushes synthetic data into it.
- Set your Claude API key:

```sh
# Set your Claude API key
export ANTHROPIC_API_KEY=your_key_here
```
- Go!
```sh
# Build and run
go run ./
```
Now you can ask Claude things like:
- “What’s broken right now?”
- “Show me the slowest traces from the last hour”
- “Which service is causing errors?”
- “Draw me a chart of response times”
What You Need
- Go 1.24+
- Claude API key from Anthropic
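If you want to sanity-check your Claude API key before starting the CLI, here is a hedged Go sketch that sends a one-line prompt to the Anthropic Messages API using plain net/http. The model name is only an example, and the CLI itself may use an SDK instead.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Minimal Messages API request; the model name is just an example.
	body := `{"model": "claude-3-5-sonnet-20241022", "max_tokens": 32,
	          "messages": [{"role": "user", "content": "ping"}]}`

	req, err := http.NewRequest(http.MethodPost,
		"https://api.anthropic.com/v1/messages", strings.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("x-api-key", os.Getenv("ANTHROPIC_API_KEY"))
	req.Header.Set("anthropic-version", "2023-06-01")
	req.Header.Set("content-type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	// 200 means the key works; 401 means it does not.
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```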
Sample Chat
👱 You: which endpoint is slowest right now?

🤖 Assistant: I'll help you find the slowest endpoint. Let me first check what attributes are available for endpoints and then query for the slowest one using metrics.

🔧 Tool Use: get-attribute-names map[scope:span]

📊 Tool Result (🏷️ Attribute Names):
span (47 attributes): article.count http.flavor http.method http.request.header.accept http.request_content_length http.response.header.content-type http.response_content_length http.scheme http.status_code http.target http.url k6.0S5X7D9SjTA6B4N k6.1ciqVF24QgtjigU k6.1th67BzXcPKTngn k6.38qEKgBDP0EmRAz k6.3uHnIdM5hNzIEoW k6.4WHhMaOIX3uQyXi k6.8EsxHPjYAoFoq84 k6.9BQAWXEWC4WygVU k6.A0SUinRkUc8Exza
... (truncated after 20 attributes)

🤖 Assistant: Let me check for endpoint-related attributes and see what endpoint values are available:

🔧 Tool Use: get-attribute-values map[name:span.http.target]

📊 Tool Result (🏷️ Attribute Values):
string values (10): /place-articles /set-billing /delete-product /update-payment /list-articles /authenticate /set-stock /payment /article-to-cart /get-article

🤖 Assistant: Now let me get the TraceQL metrics documentation to understand how to query for the slowest endpoint:

🔧 Tool Use: docs-traceql-metrics map[]

📊 Tool Result (📚 Documentation): TraceQL supports `rate`, `count_over_time`, `sum_over_time`, `min_over_time`, `avg_over_time`, `quantile_over_time`, `histogra...

🤖 Assistant: Now I'll query for the average duration by endpoint to find the slowest one:

🔧 Tool Use: traceql-metrics-instant map[query:{ span.http.target != nil } | avg_over_time(duration) by (span.http.target)]

📊 Tool Result (📊 Metrics Instant):
Metric{span.http.target=/article-to-cart} : 0.7930
Metric{span.http.target=/authenticate} : 0.1995
Metric{span.http.target=/create-shipping} : 0.6470
Metric{span.http.target=/delete-product} : 0.6538
Metric{span.http.target=/get-article} : 0.3949
Metric{span.http.target=/list-articles} : 0.4300
Metric{span.http.target=/payment} : 0.1577
Metric{span.http.target=/place-articles} : 0.3846
Metric{span.http.target=/send-payment} : 0.6581
Metric{span.http.target=/set-billing} : 0.6455
... (showing first 10 of 12 series)

Search Metrics:
Inspected Traces: 0
Inspected Bytes: 406686
Inspected Spans: 2589
Total Blocks: 2
Completed Jobs: 3/2
Total Block Bytes: 1562937

🤖 Assistant: Based on the metrics results, **`/article-to-cart`** is currently the slowest endpoint with an average duration of **0.793 seconds** (793 milliseconds).

Here's the ranking of endpoints by average response time:

1. **`/article-to-cart`** - 793ms (slowest)
2. **`/set-stock`** - 692ms
3. **`/update-payment`** - 676ms
4. **`/send-payment`** - 658ms
5. **`/delete-product`** - 654ms
6. **`/set-billing`** - 646ms
7. **`/create-shipping`** - 647ms
8. **`/list-articles`** - 430ms
9. **`/get-article`** - 395ms
10. **`/place-articles`** - 385ms
11. **`/authenticate`** - 200ms
12. **`/payment`** - 158ms (fastest)

The `/article-to-cart` endpoint is significantly slower than most others, taking nearly 800ms on average compared to the fastest endpoint (`/payment`) at 158ms.
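The decisive step in the transcript is the `traceql-metrics-instant` tool call. As a rough sketch of what that query looks like outside the agent, the following Go program runs the same TraceQL metrics expression against the local Tempo instance. The `/api/metrics/query` instant endpoint is an assumption based on Tempo's TraceQL metrics API docs, not code from this repo.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
)

func main() {
	// Same TraceQL metrics expression the assistant used above:
	// average span duration grouped by HTTP target.
	q := `{ span.http.target != nil } | avg_over_time(duration) by (span.http.target)`

	// Assumed instant-query endpoint; check your Tempo version's API docs.
	endpoint := "http://localhost:3200/api/metrics/query?q=" + url.QueryEscape(q)

	resp, err := http.Get(endpoint)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Each returned series carries the span.http.target label and the
	// averaged duration in seconds, matching the chat output above.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```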