
Arize Phoenix MCP Server

@Arize-aion 9 months ago
Phoenix MCP Server is an implementation of the Model Context Protocol that provides a unified interface to the capabilities of the Arize Phoenix platform, enabling prompt management, dataset exploration and synthesis, and experiment visualization with the help of a language model.

Overview

What is Arize Phoenix MCP Server

Phoenix is an open-source AI observability platform designed for experimentation, evaluation, and troubleshooting of machine learning models, particularly focusing on large language models (LLMs). It supports tracing, evaluation, and dataset management, making it a comprehensive tool for developers and data scientists to analyze and optimize their AI applications.

Use cases

Phoenix is utilized for a variety of AI-related tasks including tracing LLM applications to monitor their performance in real-time, evaluating model outputs through response and retrieval assessments, managing datasets for fine-tuning and experimentation, and conducting structured experiments to compare model performance. Users can also optimize prompts in a controlled environment.

How to use

To install Phoenix, you can use package managers such as pip or conda. For pip, run pip install arize-phoenix. Alternatively, Phoenix can be deployed using Docker containers or Kubernetes. Once installed, you can access various features such as tracing, evaluation, and prompt management through provided APIs and integrated frameworks.

Key features

Key features of Phoenix include OpenTelemetry-based tracing for monitoring and debugging, evaluation tools for assessing model performance, version-controlled datasets for experimentation, systematic management of prompts, and integrations with popular frameworks. It is also vendor and language agnostic, allowing broad applicability across various development environments.

Where to use

Phoenix can be employed in diverse environments such as local setups, Jupyter notebooks, or cloud-based applications. It is compatible with containerized deployments using Docker or Kubernetes, making it suitable for both individual developers and enterprise-level integrations.

Content


Phoenix is an open-source AI observability platform designed for experimentation, evaluation, and troubleshooting. It provides:

  • Tracing - Trace your LLM application’s runtime using OpenTelemetry-based instrumentation.
  • Evaluation - Leverage LLMs to benchmark your application’s performance using response and retrieval evals.
  • Datasets - Create versioned datasets of examples for experimentation, evaluation, and fine-tuning.
  • Experiments - Track and evaluate changes to prompts, LLMs, and retrieval.
  • Playground - Optimize prompts, compare models, adjust parameters, and replay traced LLM calls.
  • Prompt Management - Manage and test prompt changes systematically using version control, tagging, and experimentation.

Phoenix is vendor and language agnostic with out-of-the-box support for popular frameworks (🦙LlamaIndex, 🦜⛓LangChain, Haystack, 🧩DSPy, 🤗smolagents) and LLM providers (OpenAI, Bedrock, MistralAI, VertexAI, LiteLLM, Google GenAI and more). For details on auto-instrumentation, check out the OpenInference project.

Phoenix runs practically anywhere, including your local machine, a Jupyter notebook, a containerized deployment, or in the cloud.

Installation

Install Phoenix via pip or conda

pip install arize-phoenix

Phoenix container images are available via Docker Hub and can be deployed using Docker or Kubernetes.

Packages

The arize-phoenix package includes the entire Phoenix platform. However, if you have already deployed the Phoenix platform, there are lightweight Python sub-packages and TypeScript packages that can be used in conjunction with the platform.

Subpackages

| Package | Language | Description |
| --- | --- | --- |
| arize-phoenix-otel | Python | Provides a lightweight wrapper around OpenTelemetry primitives with Phoenix-aware defaults |
| arize-phoenix-client | Python | Lightweight client for interacting with the Phoenix server via its OpenAPI REST interface |
| arize-phoenix-evals | Python | Tooling to evaluate LLM applications, including RAG relevance, answer relevance, and more |
| @arizeai/phoenix-client | JavaScript | Client for the Arize Phoenix API |
| @arizeai/phoenix-mcp | JavaScript | MCP server implementation for Arize Phoenix providing a unified interface to Phoenix's capabilities |

Tracing Integrations

Phoenix is built on top of OpenTelemetry and is vendor, language, and framework agnostic. For details about tracing integrations and example applications, see the OpenInference project.

Python Integrations

| Integration | Package |
| --- | --- |
| OpenAI | openinference-instrumentation-openai |
| OpenAI Agents | openinference-instrumentation-openai-agents |
| LlamaIndex | openinference-instrumentation-llama-index |
| DSPy | openinference-instrumentation-dspy |
| AWS Bedrock | openinference-instrumentation-bedrock |
| LangChain | openinference-instrumentation-langchain |
| MistralAI | openinference-instrumentation-mistralai |
| Google GenAI | openinference-instrumentation-google-genai |
| Google ADK | openinference-instrumentation-google-adk |
| Guardrails | openinference-instrumentation-guardrails |
| VertexAI | openinference-instrumentation-vertexai |
| CrewAI | openinference-instrumentation-crewai |
| Haystack | openinference-instrumentation-haystack |
| LiteLLM | openinference-instrumentation-litellm |
| Groq | openinference-instrumentation-groq |
| Instructor | openinference-instrumentation-instructor |
| Anthropic | openinference-instrumentation-anthropic |
| Smolagents | openinference-instrumentation-smolagents |
| Agno | openinference-instrumentation-agno |
| MCP | openinference-instrumentation-mcp |
| Pydantic AI | openinference-instrumentation-pydantic-ai |
| Autogen AgentChat | openinference-instrumentation-autogen-agentchat |
| Portkey | openinference-instrumentation-portkey |

JavaScript Integrations

| Integration | Package |
| --- | --- |
| OpenAI | @arizeai/openinference-instrumentation-openai |
| LangChain.js | @arizeai/openinference-instrumentation-langchain |
| Vercel AI SDK | @arizeai/openinference-vercel |
| BeeAI | @arizeai/openinference-instrumentation-beeai |
| Mastra | @arizeai/openinference-mastra |

Platforms

Phoenix has native integrations with LangFlow, LiteLLM Proxy, and BeeAI.

Community

Join our community to connect with thousands of AI builders.

Breaking Changes

See the migration guide for a list of breaking changes.

Copyright, Patent, and License

Copyright 2025 Arize AI, Inc. All Rights Reserved.

Portions of this code are patent protected by one or more U.S. Patents. See the IP_NOTICE.

This software is licensed under the terms of the Elastic License 2.0 (ELv2). See LICENSE.

Tools

list-prompts
Get a list of all the prompts. Prompts (also called templates or prompt templates) are versioned templates for input messages to an LLM. Each prompt includes not only the input messages but also the model and invocation parameters to use when generating outputs. Returns a list of prompt objects with their IDs, names, and descriptions. Example usage: List all available prompts Expected return: Array of prompt objects with metadata. Example: [{ "name": "article-summarizer", "description": "Summarizes an article into concise bullet points", "source_prompt_id": null, "id": "promptid1234" }]
get-latest-prompt
Get the latest version of a prompt. Returns the prompt version with its template, model configuration, and invocation parameters. Example usage: Get the latest version of a prompt named 'article-summarizer' Expected return: Prompt version object with template and configuration. Example: { "description": "Initial version", "model_provider": "OPENAI", "model_name": "gpt-3.5-turbo", "template": { "type": "chat", "messages": [ { "role": "system", "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information." }, { "role": "user", "content": "Please summarize the following {{topic}} article: {{article}}" } ] }, "template_type": "CHAT", "template_format": "MUSTACHE", "invocation_parameters": { "type": "openai", "openai": {} }, "id": "promptversionid1234" }
get-prompt-by-identifier
Get a prompt's latest version by its identifier (name or ID). Returns the prompt version with its template, model configuration, and invocation parameters. Example usage: Get the latest version of a prompt with name 'article-summarizer' Expected return: Prompt version object with template and configuration. Example: { "description": "Initial version", "model_provider": "OPENAI", "model_name": "gpt-3.5-turbo", "template": { "type": "chat", "messages": [ { "role": "system", "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information." }, { "role": "user", "content": "Please summarize the following {{topic}} article: {{article}}" } ] }, "template_type": "CHAT", "template_format": "MUSTACHE", "invocation_parameters": { "type": "openai", "openai": {} }, "id": "promptversionid1234" }
get-prompt-version
Get a specific version of a prompt using its version ID. Returns the prompt version with its template, model configuration, and invocation parameters. Example usage: Get a specific prompt version with ID 'promptversionid1234' Expected return: Prompt version object with template and configuration. Example: { "description": "Initial version", "model_provider": "OPENAI", "model_name": "gpt-3.5-turbo", "template": { "type": "chat", "messages": [ { "role": "system", "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information." }, { "role": "user", "content": "Please summarize the following {{topic}} article: {{article}}" } ] }, "template_type": "CHAT", "template_format": "MUSTACHE", "invocation_parameters": { "type": "openai", "openai": {} }, "id": "promptversionid1234" }
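The `template_format` of `MUSTACHE` in the examples above indicates `{{variable}}` placeholders in the message content. A dependency-free sketch of how such a template might be filled in; `render_mustache` is an illustrative helper, not part of the Phoenix API, and real Mustache supports more syntax (sections, escaping) than shown:

```python
import re


def render_mustache(template: str, variables: dict[str, str]) -> str:
    """Substitute {{name}} placeholders; unknown names are left as-is."""
    def repl(match: re.Match) -> str:
        key = match.group(1).strip()
        return variables.get(key, match.group(0))
    return re.sub(r"\{\{(.*?)\}\}", repl, template)


user_msg = "Please summarize the following {{topic}} article: {{article}}"
print(render_mustache(user_msg, {"topic": "finance", "article": "Markets rose."}))
# Please summarize the following finance article: Markets rose.
```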
upsert-prompt
Create or update a prompt with its template and configuration. Creates a new prompt and its initial version with specified model settings. Example usage: Create a new prompt named 'email_generator' with a template for generating emails Expected return: A confirmation message of successful prompt creation
list-prompt-versions
Get a list of all versions for a specific prompt. Returns versions with pagination support. Example usage: List all versions of a prompt named 'article-summarizer' Expected return: Array of prompt version objects with IDs and configuration. Example: [ { "description": "Initial version", "model_provider": "OPENAI", "model_name": "gpt-3.5-turbo", "template": { "type": "chat", "messages": [ { "role": "system", "content": "You are an expert summarizer. Create clear, concise bullet points highlighting the key information." }, { "role": "user", "content": "Please summarize the following {{topic}} article: {{article}}" } ] }, "template_type": "CHAT", "template_format": "MUSTACHE", "invocation_parameters": { "type": "openai", "openai": {} }, "id": "promptversionid1234" } ]
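To call these tools from an MCP-compatible client, the @arizeai/phoenix-mcp package can typically be launched via npx. A sketch of a client configuration, assuming the server accepts `--baseUrl` and `--apiKey` flags; check the package's documentation for the exact flag names and your client's config location:

```json
{
  "mcpServers": {
    "phoenix": {
      "command": "npx",
      "args": [
        "-y",
        "@arizeai/phoenix-mcp@latest",
        "--baseUrl", "http://localhost:6006",
        "--apiKey", "your-api-key"
      ]
    }
  }
}
```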