Integration Guide

OpenAI Agents SDK + MCP Tools Integration: The Missing Bridge

OpenAI's Agents SDK gives you a clean framework for building tool-using agents. The Model Context Protocol gives you access to hundreds of tools. But the SDK does not speak MCP natively. This guide shows how to bridge the gap: fetch MCP tools as OpenAI function definitions, pass them to the Agents SDK, and execute tool calls through a single gateway.

April 15, 2026 · 10 min read · ToolRoute Team

The Problem: Two Ecosystems, No Native Bridge

OpenAI's Agents SDK is built around function calling. You define tools as JSON Schema objects, the model decides when to call them, and your code executes the call. It is a clean, well-documented pattern that works for any tool you define manually.

The Model Context Protocol (MCP) is a different ecosystem entirely. MCP servers expose tools through a JSON-RPC transport. Anthropic's Claude clients speak MCP natively. So do Cursor, Windsurf, and a growing list of IDE integrations. But OpenAI's SDK does not.

This means if you are building agents with the OpenAI Agents SDK, you are locked out of the MCP tool ecosystem by default. You cannot run a Semgrep scan, query Context7 documentation, send an email through Resend, or manage DNS records through GoDaddy's MCP server. Not without writing your own adapter for each one.

The bridge is a translation layer: something that takes MCP tools and presents them as OpenAI function definitions, then routes the SDK's tool calls back to the MCP servers that own them.

How the Translation Works

An MCP tool definition and an OpenAI function definition are structurally similar. Both have a name, a description, and a JSON Schema for the input parameters. The difference is the transport: MCP uses JSON-RPC, OpenAI uses REST with a specific function-calling format.

MCP Tool Definition
{
  "name": "tavily_search",
  "description": "Search the web",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string" }
    },
    "required": ["query"]
  }
}
OpenAI Function Definition
{
  "type": "function",
  "function": {
    "name": "tavily_search",
    "description": "Search the web",
    "parameters": {
      "type": "object",
      "properties": {
        "query": { "type": "string" }
      },
      "required": ["query"]
    }
  }
}

The mapping is mechanical: wrap the MCP definition in an outer { type: "function", function: {...} } envelope and rename inputSchema to parameters. ToolRoute's /api/v1/tools?format=openai endpoint does this automatically for every tool in the registry.
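The wrapping is small enough to sketch client-side. Here is a minimal converter (the helper name `mcp_to_openai` is hypothetical; the gateway's endpoint does the equivalent server-side):

```python
def mcp_to_openai(mcp_tool: dict) -> dict:
    """Wrap an MCP tool definition in OpenAI's function-calling envelope."""
    return {
        "type": "function",
        "function": {
            "name": mcp_tool["name"],
            "description": mcp_tool.get("description", ""),
            # Same JSON Schema, different key name
            "parameters": mcp_tool["inputSchema"],
        },
    }

mcp_tool = {
    "name": "tavily_search",
    "description": "Search the web",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

openai_tool = mcp_to_openai(mcp_tool)
print(openai_tool["function"]["parameters"]["required"])  # ['query']
```

The schema itself passes through untouched; only the envelope and the `inputSchema` → `parameters` key change.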

Step 1: Fetch MCP Tools as OpenAI Function Definitions

One API call gives you every MCP tool in the gateway, already formatted for OpenAI's function calling interface. No manual schema writing. No per-tool adapters.

python
import requests
import os

TOOLROUTE_KEY = os.environ["TOOLROUTE_KEY"]
BASE = "https://toolroute.ai/api/v1"

# Fetch all MCP tools formatted as OpenAI function definitions
tools = requests.get(f"{BASE}/tools?format=openai").json()

print(f"Loaded {len(tools)} tools as OpenAI function definitions")
# Loaded 87 tools as OpenAI function definitions

# Each tool is already in the right shape:
# { "type": "function", "function": { "name": "...", "parameters": {...} } }
print(tools[0]["function"]["name"])
# e.g. "tavily_search"

You can also filter by category to give your agent a focused toolset instead of all 87:

python
# Only fetch security tools
security_tools = requests.get(
    f"{BASE}/tools?format=openai&category=security"
).json()

# Only fetch research and communication tools
research_tools = requests.get(
    f"{BASE}/tools?format=openai&category=research,communication"
).json()

Step 2: Pass Tool Definitions to the OpenAI Agents SDK

The Agents SDK expects tool definitions in the tools parameter of a chat completion call. Since the gateway already returns the correct format, you pass them directly:

python
from openai import OpenAI
import json

client = OpenAI()

# Fetch MCP tools in OpenAI format
tools = requests.get(f"{BASE}/tools?format=openai").json()

# Create the initial completion with MCP tools available
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a research assistant with access to web search, "
                       "code scanning, email, DNS management, and 80+ other tools."
        },
        {
            "role": "user",
            "content": "Search for the latest MCP protocol updates and email "
                       "me a summary at team@example.com"
        }
    ],
    tools=tools,
    tool_choice="auto"
)

# The model returns tool_calls for the tools it wants to use
# (tool_calls is None when it answers directly instead)
message = response.choices[0].message
if message.tool_calls:
    print(f"Model wants to call: {[tc.function.name for tc in message.tool_calls]}")
# Model wants to call: ['tavily_search']

The model sees the full catalog of MCP tools as if they were native OpenAI functions. It picks the right tool based on the user's request, generates the arguments, and returns a tool_calls array.

Step 3: Execute Tool Calls Through the Gateway

When the model produces a tool call, your code needs to actually execute it. Instead of connecting to each MCP server individually, route every call through ToolRoute's POST /api/v1/execute endpoint. One key, one endpoint, any tool.

python
HEADERS = {
    "Authorization": f"Bearer {TOOLROUTE_KEY}",
    "Content-Type": "application/json"
}

def execute_tool_call(tool_call) -> str:
    """Route an OpenAI tool call through the ToolRoute gateway."""
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments)

    # Every MCP tool executes through the same endpoint
    resp = requests.post(f"{BASE}/execute", headers=HEADERS, json={
        "tool": name,
        "input": args
    })
    resp.raise_for_status()
    return json.dumps(resp.json())

# Execute each tool call the model requested
tool_results = []
for tc in message.tool_calls:
    result = execute_tool_call(tc)
    tool_results.append({
        "role": "tool",
        "tool_call_id": tc.id,
        "content": result
    })

print(f"Executed {len(tool_results)} tool calls through the gateway")

The gateway handles the MCP transport, authentication with the upstream provider, and response normalization. Your code never touches JSON-RPC, never manages per-provider API keys, and never parses provider-specific error formats. For a deeper comparison of how MCP differs from traditional REST APIs in agent architectures, see MCP vs REST APIs for AI Agents.

Step 4: Close the Loop — Full Agent Cycle

A real agent does not stop after one tool call. It feeds the tool results back into the model, which may call more tools or produce a final answer. Here is the complete loop:

python
from openai import OpenAI
import requests, json, os

client = OpenAI()
TOOLROUTE_KEY = os.environ["TOOLROUTE_KEY"]
BASE = "https://toolroute.ai/api/v1"
HEADERS = {
    "Authorization": f"Bearer {TOOLROUTE_KEY}",
    "Content-Type": "application/json"
}

def execute_tool_call(tool_call) -> str:
    """Route any tool call through the ToolRoute gateway."""
    resp = requests.post(f"{BASE}/execute", headers=HEADERS, json={
        "tool": tool_call.function.name,
        "input": json.loads(tool_call.function.arguments)
    })
    resp.raise_for_status()
    return json.dumps(resp.json())

def run_agent(user_message: str, max_turns: int = 5) -> str:
    """Run a full agent loop with MCP tools via OpenAI Agents SDK."""

    # 1. Fetch MCP tools as OpenAI function definitions
    tools = requests.get(f"{BASE}/tools?format=openai").json()

    messages = [
        {"role": "system", "content": (
            "You are an assistant with access to web search, code scanning, "
            "email, DNS management, database queries, and 80+ other tools. "
            "Use them to complete the user's request."
        )},
        {"role": "user", "content": user_message}
    ]

    for turn in range(max_turns):
        # 2. Call the model with MCP tools available
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools,
            tool_choice="auto"
        )

        message = response.choices[0].message
        messages.append(message)

        # 3. If no tool calls, we have our final answer
        if not message.tool_calls:
            return message.content

        # 4. Execute each tool call through the gateway
        for tc in message.tool_calls:
            result = execute_tool_call(tc)
            messages.append({
                "role": "tool",
                "tool_call_id": tc.id,
                "content": result
            })

        print(f"Turn {turn + 1}: executed "
              f"{len(message.tool_calls)} tool calls")

    # If the loop exhausts max_turns, the last message is a tool result,
    # not an assistant answer, so report that instead of indexing into it
    return "Max turns reached without a final answer"

# Run it
answer = run_agent(
    "Search for the latest OpenAI Agents SDK release notes, "
    "then check if toolroute.ai has valid DNS records"
)
print(answer)

That is about 70 lines for a fully functional agent that can use any of the 87 MCP tools in the gateway. No MCP client library. No per-provider SDKs. No credential management beyond one environment variable.

Filtering Tools for Focused Agents

Giving an agent 87 tools can dilute its decision-making. A security auditor does not need email tools. A content writer does not need DNS management. Filter the catalog to match the agent's role:

python
# Security auditor — only gets security + code analysis tools
security_agent_tools = requests.get(
    f"{BASE}/tools?format=openai&category=security"
).json()

# Research assistant — search + documentation tools
research_agent_tools = requests.get(
    f"{BASE}/tools?format=openai&category=research,documentation"
).json()

# DevOps agent — infrastructure + monitoring tools
devops_agent_tools = requests.get(
    f"{BASE}/tools?format=openai&category=infrastructure,monitoring"
).json()

# Each agent gets a focused toolset
print(f"Security agent: {len(security_agent_tools)} tools")
print(f"Research agent: {len(research_agent_tools)} tools")
print(f"DevOps agent: {len(devops_agent_tools)} tools")

Fewer tools means faster model decisions, fewer hallucinated tool calls, and lower token costs (since tool definitions count as input tokens).

Error Handling and Retries

Production agents need to handle failures gracefully. The gateway normalizes error responses across all providers, so you write one error handler that works for every tool:

python
import time

def execute_tool_call_safe(tool_call, retries: int = 1) -> str:
    """Execute with retry logic and standardized error handling."""
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments)

    for attempt in range(retries + 1):
        try:
            resp = requests.post(f"{BASE}/execute", headers=HEADERS, json={
                "tool": name,
                "input": args
            })
            resp.raise_for_status()
            return json.dumps(resp.json())

        except requests.HTTPError as e:
            error = e.response.json().get("error", {})

            # Gateway returns uniform error shape across all 47 adapters:
            # { "error": { "code": "rate_limited", "message": "...", "retry_after": 30 } }
            if error.get("code") == "rate_limited" and attempt < retries:
                wait = error.get("retry_after", 10)
                time.sleep(wait)
                continue

            # Return error as tool result — let the model adapt
            return json.dumps({
                "error": True,
                "code": error.get("code", "unknown"),
                "message": error.get("message", str(e))
            })

    return json.dumps({"error": True, "message": "Max retries exceeded"})

Returning the error as a tool result instead of raising an exception lets the model recover. It might retry with different arguments, choose an alternative tool, or explain the failure to the user.
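Concretely, a failure surfaced this way is just another tool message in the conversation. A sketch of the round trip (the error payload mirrors `execute_tool_call_safe`'s output above; the `tool_call_id` is hypothetical):

```python
import json

# A failed result, shaped like execute_tool_call_safe's error output
error_result = json.dumps({
    "error": True,
    "code": "rate_limited",
    "message": "Upstream provider throttled the request",
})

# Appended like any successful tool result; the model sees the
# failure in context and can retry, switch tools, or explain it
tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # hypothetical id from the tool call
    "content": error_result,
}

parsed = json.loads(tool_message["content"])
print(parsed["code"])  # rate_limited
```

Swapping `execute_tool_call_safe` in for `execute_tool_call` inside the agent loop is the only change needed to get this behavior end to end.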

Why Not Just Build MCP Adapters Yourself?

You can. Connect to each MCP server directly, translate the schemas manually, manage the transport, handle auth per provider. Here is what that looks like at scale:

| Concern | DIY (per tool) | Via Gateway |
|---|---|---|
| Schema translation | Write MCP-to-OpenAI mapper | ?format=openai |
| Authentication | Manage N API keys | One ToolRoute key |
| Transport | JSON-RPC client per server | REST POST to /execute |
| Error normalization | Parse each provider's format | Uniform error shape |
| New tools | Write adapter + deploy | Available on next fetch |
| Billing | Track per provider | Prepaid credits or BYOK |

For one or two tools, the DIY approach is fine. For ten or more, the engineering overhead of maintaining adapters, rotating keys, and normalizing errors across providers eclipses the time spent on the actual agent logic.

What About Claude and Other MCP-Native Clients?

If you are using Claude, Cursor, or another MCP-native client, you do not need this translation layer at all. Those clients speak MCP directly and can connect to the gateway as an MCP server:

json
// .mcp.json — MCP-native clients connect directly
{
  "toolroute": {
    "type": "http",
    "url": "https://toolroute.ai/mcp",
    "headers": {
      "Authorization": "Bearer tr_live_abc123..."
    }
  }
}

The OpenAI-compatible endpoint exists specifically to bridge the gap for frameworks and SDKs that speak function calling but not MCP. The gateway serves both protocols from the same tool catalog, so an MCP-native client and an OpenAI Agents SDK agent see the exact same tools.

The Bottom Line

OpenAI's Agents SDK is a solid foundation for building tool-using agents. MCP is the fastest-growing ecosystem of AI tools. The two do not talk to each other natively, but the bridge is straightforward: a translation endpoint that serves MCP tools as OpenAI function definitions, and an execution endpoint that routes tool calls back through the MCP infrastructure.

With ToolRoute, the integration is four steps: fetch the tools, pass them to the SDK, execute calls through the gateway, feed results back. Your agent gets access to 87 MCP tools without importing a single MCP library.

ToolRoute bridges MCP tools into any framework that speaks OpenAI function calling. Browse the tool catalog, read the API docs, or get started with a free key.