While Anthropic’s Model Context Protocol (MCP) tutorial provides an excellent foundation for building MCP clients, it’s designed specifically for Claude’s API. But what if you want to leverage the growing ecosystem of OpenAI-compatible models from providers like OpenAI, DeepSeek, or local inference servers? This article walks through adapting the official MCP client to work with OpenAI’s API format, dramatically expanding your model options.
The Challenge: Beyond Claude
Anthropic’s official MCP client tutorial demonstrates how to build a client that connects to MCP servers and uses Claude for processing. While this works beautifully with Claude, it leaves out the vast ecosystem of models that implement OpenAI-compatible APIs.
Many leading AI providers now offer OpenAI-compatible endpoints. The key insight is that the MCP protocol itself stays the same; only the client's AI model interaction layer needs to change to speak a different API format.
Understanding the Official Tutorial
Before diving into modifications, let’s understand what the official MCP client does:
Official Client Architecture
The original client follows this flow (the connection step is sketched just after the list):
- Connection: Connects to MCP servers via stdio transport
- Tool Discovery: Lists available tools from connected servers
- AI Processing: Sends user queries to Claude with available tools
- Tool Execution: Executes tools based on Claude’s decisions
- Response: Returns processed results to the user
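The Connection and Tool Discovery steps map directly onto the Python MCP SDK. Here is a minimal sketch of that part in the official tutorial's style (launching the server with python and the connect_to_server name follow the tutorial's conventions; treat the details as illustrative):

async def connect_to_server(self, server_script_path: str):
    """Launch an MCP server over stdio and discover its tools"""
    server_params = StdioServerParameters(
        command="python",
        args=[server_script_path]
    )
    # stdio_client yields a (read, write) stream pair for the transport
    read, write = await self.exit_stack.enter_async_context(
        stdio_client(server_params)
    )
    self.session = await self.exit_stack.enter_async_context(
        ClientSession(read, write)
    )
    await self.session.initialize()

    # Tool discovery: ask the server what it can do
    response = await self.session.list_tools()
    print("Connected with tools:", [tool.name for tool in response.tools])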
Key Components of the Original Client
from typing import Optional
from contextlib import AsyncExitStack

from anthropic import Anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.anthropic = Anthropic()  # Claude API client
The original implementation uses Anthropic’s message format and tool calling conventions, which differ from OpenAI’s approach.
API Differences: Anthropic vs OpenAI
Understanding the key differences between these APIs is crucial for successful adaptation:
1. Tool Format Differences
Anthropic Format:
available_tools = [{
    "name": tool.name,
    "description": tool.description,
    "input_schema": tool.inputSchema
} for tool in response.tools]
OpenAI Format:
available_tools = [{
    "type": "function",
    "function": {
        "name": tool.name,
        "description": tool.description,
        "parameters": tool.inputSchema
    }
} for tool in response.tools]
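The tool schemas themselves carry over unchanged; OpenAI just wraps each tool in a {"type": "function", ...} envelope and calls the schema field parameters instead of input_schema.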
2. API Call Structure
Anthropic:
response = self.anthropic.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1000,
    messages=messages,
    tools=available_tools
)
OpenAI:
response = self.openai.chat.completions.create(
    model="gpt-4o",
    max_tokens=1000,
    messages=messages,
    tools=available_tools
)
3. Tool Call Handling
Anthropic embeds tool calls within message content blocks, while OpenAI uses a separate tool_calls attribute on assistant messages.
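Concretely, the two assistant turns differ in shape. Below is a schematic comparison as Python literals, abbreviated; the get_forecast tool and its city argument are invented for illustration:

# Anthropic: the tool request is one content block inside the message
{"role": "assistant", "content": [
    {"type": "text", "text": "Let me check the forecast."},
    {"type": "tool_use", "id": "toolu_...", "name": "get_forecast",
     "input": {"city": "Paris"}},
]}

# OpenAI: tool calls sit in a dedicated field, with arguments as a JSON string
{"role": "assistant", "content": None, "tool_calls": [
    {"id": "call_...", "type": "function",
     "function": {"name": "get_forecast", "arguments": '{"city": "Paris"}'}},
]}

Note that OpenAI delivers arguments as a JSON-encoded string, which is why the client below parses it with json.loads.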
Building the OpenAI-Compatible Client
Let’s walk through the key modifications needed to support OpenAI’s API format:
1. Initial Setup and Dependencies
First, we replace the Anthropic client with OpenAI:
import asyncio
from typing import Optional
from contextlib import AsyncExitStack
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from openai import OpenAI  # Changed from Anthropic
from dotenv import load_dotenv

load_dotenv()

class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.openai = OpenAI()  # OpenAI client instead of Anthropic
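With no arguments, OpenAI() reads OPENAI_API_KEY (and, if set, OPENAI_BASE_URL) from the environment, which is why load_dotenv() runs before the client is constructed.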
2. Tool Format Conversion
The most critical change is converting MCP tool schemas to OpenAI’s expected format:
async def process_query(self, query: str) -> str:
    """Process a query using OpenAI and available tools"""
    messages = [{"role": "user", "content": query}]
    final_text = []  # Collects the text we return at the end

    response = await self.session.list_tools()

    # Convert MCP tools to OpenAI format
    available_tools = [{
        "type": "function",  # OpenAI requires this wrapper
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": tool.inputSchema  # Direct mapping
        }
    } for tool in response.tools]
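For a hypothetical get_forecast tool exposed by a weather server, the converted entry would look roughly like this (name, description, and schema are illustrative):

{
    "type": "function",
    "function": {
        "name": "get_forecast",
        "description": "Get the weather forecast for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
        }
    }
}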
3. API Call Adaptation
The request shape is similar, but the entry point and the response differ: OpenAI uses chat.completions.create, and the reply comes back as choices containing a message rather than a list of content blocks:
# Initial OpenAI API call
response = self.openai.chat.completions.create(
    model="gpt-4o",  # Can use any OpenAI-compatible model
    max_tokens=1000,
    messages=messages,
    tools=available_tools
)

assistant_message = response.choices[0].message
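One caveat: chat.completions.create here is a blocking call inside an async method, which stalls the event loop while the model responds. That is acceptable for a demo; if you want the client fully non-blocking, the SDK's async variant is a near drop-in swap. A sketch (the AsyncChat class and ask helper are invented for illustration):

from openai import AsyncOpenAI

class AsyncChat:
    def __init__(self):
        self.openai = AsyncOpenAI()  # Async counterpart of OpenAI()

    async def ask(self, messages, tools):
        # The call can now be awaited instead of blocking the event loop
        response = await self.openai.chat.completions.create(
            model="gpt-4o",
            max_tokens=1000,
            messages=messages,
            tools=tools
        )
        return response.choices[0].message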
4. Tool Call Processing
OpenAI keeps tool calls separate from text content, which changes both how we detect them and how we feed results back:
# Check if there are tool calls
if assistant_message.tool_calls:
    # Add assistant message to conversation
    messages.append({
        "role": "assistant",
        "content": assistant_message.content,
        "tool_calls": assistant_message.tool_calls
    })

    # Process each tool call
    for tool_call in assistant_message.tool_calls:
        tool_name = tool_call.function.name
        try:
            tool_args = json.loads(tool_call.function.arguments)
        except json.JSONDecodeError:
            tool_args = {}

        print(f"Calling tool {tool_name} with args {tool_args}")

        # Execute tool call via MCP
        result = await self.session.call_tool(tool_name, tool_args)

        # Add tool result to messages
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": str(result.content[0].text if result.content else "No result")
        })
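Note the tool_call_id field: OpenAI requires every role: "tool" message to reference the id of the call it answers, and the follow-up request fails if the ids don't line up with the preceding assistant message.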
5. Final Response Handling
After tool execution, we get the final response:
# Get final response from OpenAI after tool execution
final_response = self.openai.chat.completions.create(
    model="gpt-4o",
    max_tokens=1000,
    messages=messages,
    tools=available_tools
)

final_assistant_message = final_response.choices[0].message
if final_assistant_message.content:
    final_text.append(final_assistant_message.content)

return "\n".join(final_text)
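This covers a single round of tool use. A model that chains tools (call one, read the result, then call another) would need the create/execute steps wrapped in a loop that repeats until the response contains no tool_calls.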
Complete Implementation Comparison
Here’s a side-by-side comparison of the key differences:
Original (Anthropic) Tool Processing:
# Anthropic's approach - tools in content blocks
for content in response.content:
    if content.type == 'tool_use':
        tool_name = content.name
        tool_args = content.input
        result = await self.session.call_tool(tool_name, tool_args)
Modified (OpenAI) Tool Processing:
# OpenAI's approach - structured tool_calls
if assistant_message.tool_calls:
    for tool_call in assistant_message.tool_calls:
        tool_name = tool_call.function.name
        tool_args = json.loads(tool_call.function.arguments)
        result = await self.session.call_tool(tool_name, tool_args)
Testing the OpenAI Client
Environment Setup
Create a .env file with your OpenAI API key:
OPENAI_API_KEY=your_openai_api_key_here
For DeepSeek or other providers, you might need:
OPENAI_API_KEY=your_deepseek_api_key
OPENAI_BASE_URL=https://api.deepseek.com
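This works because the SDK reads OPENAI_BASE_URL from the environment automatically. You can also pin the endpoint in code; a minimal sketch using the standard constructor parameters:

import os
from openai import OpenAI

# Point the same client at any OpenAI-compatible endpoint
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
)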
Running the Client
python client-openai.py weather_server.py
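Here client-openai.py takes the server script as its first argument. The entry point can mirror the official tutorial's; a sketch, assuming the connect_to_server method sketched earlier and a chat_loop read-eval-print helper like the original client's:

import sys

async def main():
    if len(sys.argv) < 2:
        print("Usage: python client-openai.py <path_to_server_script>")
        sys.exit(1)

    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        await client.chat_loop()  # Prompt for queries, call process_query, print results
    finally:
        await client.exit_stack.aclose()  # Tear down the stdio transport

if __name__ == "__main__":
    asyncio.run(main())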
Testing Results
I tested the modified client with various OpenAI-compatible models:
✅ Successful Tests:
- OpenAI GPT-4o: Excellent tool usage and reasoning
- DeepSeek Chat: Great performance with tool calling
⚠️ Considerations:
- Some models have varying quality in tool call decision-making
- Local models may need specific prompting for optimal tool usage
- Token limits vary across providers
Conclusion
Adapting Anthropic’s MCP client tutorial to work with OpenAI-compatible APIs demonstrates the flexibility and power of the MCP protocol. By understanding the key differences in API formats and tool calling conventions, we can leverage a much broader ecosystem of AI models while maintaining the same MCP server infrastructure.
The main changes required are:
- Tool format conversion from MCP to OpenAI schema
- API call structure adaptation for OpenAI’s chat completions format
- Tool call handling using OpenAI’s structured approach
- Message flow management with OpenAI’s conversation format
This modification not only works with OpenAI’s models but also with the growing ecosystem of OpenAI-compatible providers, giving developers much more flexibility in their AI model choices.
The future of MCP isn’t just about connecting tools to AI models—it’s about creating a unified interface that works across the entire AI ecosystem. This OpenAI-compatible client is a step toward that vision, making MCP more accessible and valuable for developers working with diverse AI platforms.
Whether you’re building with cutting-edge cloud models, cost-effective alternatives, or privacy-focused local deployments, the MCP protocol provides the standardized foundation you need. The key is adapting the client layer to speak each model’s language while keeping the powerful MCP server ecosystem intact.