How to Use MCP Tools Without Managing Servers

The standard way to use an MCP tool involves cloning a repo, installing dependencies, running a server process, and configuring your AI client to connect. There is a faster way.

April 16, 2026 · 6 min read · ToolRoute Team

The Problem: MCP Server Management at Scale

The Model Context Protocol made it possible for AI agents to call external tools through a standard interface. But "standard interface" does not mean "zero infrastructure." To use a single MCP server, you typically need to:

  1. Clone the server repository
  2. Install Node.js, Python, or Rust dependencies
  3. Configure environment variables (API keys, secrets)
  4. Start the server process (stdio or HTTP transport)
  5. Point your AI client to the server endpoint
  6. Handle process management (restarts, port conflicts, memory)

For one tool, this is manageable. For ten tools, you are running ten server processes. For fifty tools, you have a distributed systems problem that has nothing to do with the AI work you are trying to accomplish.

We learned this firsthand. Running 17 companies with AI agents that collectively use 50+ tools, we hit a wall: 50 MCP servers running via stdio transport spawned one process per session. Eight concurrent sessions meant 400+ Node.js processes consuming 128 GB of RAM. The machine crashed.
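The numbers are easy to sanity-check. A back-of-envelope sketch (figures taken from the incident above; the per-process memory is an average, not a measurement):

```python
# Back-of-envelope check on the stdio-transport blowup described above.
tools = 50          # MCP servers in use
sessions = 8        # concurrent agent sessions
processes = tools * sessions  # stdio spawns one process per tool per session

ram_gb = 128
mb_per_process = ram_gb * 1024 / processes  # average footprint, not measured

print(f"{processes} processes, ~{mb_per_process:.0f} MB each")
# → 400 processes, ~328 MB each
```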

The Solution: Call Tools Through an API Gateway

An MCP gateway runs the tools on its side and exposes them through a unified API. You do not install, configure, or run anything. You make HTTP requests.

Step 1: Get an API Key

Sign up and generate a key. This single key authenticates against every tool in the gateway.

```bash
# Your single API key for all tools
export TOOLROUTE_KEY="tr_live_abc123..."
```

Step 2: Find the Tool You Need

The gateway exposes a catalog of every available tool, its operations, and its cost.

```bash
# Browse all tools
curl https://toolroute.ai/api/v1/tools

# Search for a specific capability
curl "https://toolroute.ai/api/v1/tools?q=web+search"

# Get OpenAI function-calling format
curl "https://toolroute.ai/api/v1/tools?format=openai"
```
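The same catalog queries can be scripted. A minimal sketch in Python (the query-string conventions mirror the curl examples above; the `catalog_url` helper name is ours, not part of the API):

```python
import requests

BASE_URL = "https://toolroute.ai/api/v1/tools"

def catalog_url(query=None, fmt=None):
    """Build a catalog URL with optional search (?q=) and format (?format=) params."""
    params = []
    if query:
        params.append("q=" + query.replace(" ", "+"))
    if fmt:
        params.append("format=" + fmt)
    return BASE_URL + ("?" + "&".join(params) if params else "")

# Usage (requires network access):
# tools = requests.get(catalog_url(query="web search")).json()
```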

Step 3: Execute via One Endpoint

Every tool call goes through the same endpoint. The gateway handles routing, authentication with the upstream provider, and response normalization.

```bash
# Search the web via Tavily
curl -X POST https://toolroute.ai/api/v1/execute \
  -H "Authorization: Bearer $TOOLROUTE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "tavily",
    "operation": "search",
    "input": { "query": "MCP server security 2026" }
  }'

# Send an email via Resend
curl -X POST https://toolroute.ai/api/v1/execute \
  -H "Authorization: Bearer $TOOLROUTE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "resend",
    "operation": "send_email",
    "input": {
      "to": "user@example.com",
      "subject": "Report ready",
      "html": "<p>Your report is attached.</p>"
    }
  }'
```
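In application code, the same calls are usually wrapped in a small helper. A sketch using Python's `requests` (the `build_payload` and `execute` names are ours, not part of the gateway API):

```python
import os
import requests

EXECUTE_URL = "https://toolroute.ai/api/v1/execute"

def build_payload(tool, operation, input_data):
    """Assemble the request body the execute endpoint expects."""
    return {"tool": tool, "operation": operation, "input": input_data}

def execute(tool, operation, input_data, key=None):
    """POST one tool call through the gateway and return the parsed response."""
    key = key or os.environ["TOOLROUTE_KEY"]
    resp = requests.post(
        EXECUTE_URL,
        headers={"Authorization": f"Bearer {key}"},
        json=build_payload(tool, operation, input_data),
        timeout=30,
    )
    resp.raise_for_status()  # surface 4xx/5xx errors instead of silently parsing them
    return resp.json()

# Usage (requires a valid key and network access):
# execute("tavily", "search", {"query": "MCP server security 2026"})
```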

Notice what is missing: no Tavily API key, no Resend API key, no server processes, no dependency management. The gateway holds the upstream credentials and routes the request.

Step 4: Add Credits or Bring Your Own Key

Free tools (Context7, Playwright, Semgrep) cost nothing through the gateway. Paid tools deduct from your prepaid credit balance. If you already have an account with a specific provider, you can register your key via BYOK and skip the credit charge for that tool.

Free Tools

No credits needed. Context7, Playwright, Semgrep, GitHub MCP, and 30+ others.

Credits (Pay-As-You-Go)

Buy credits up front. Each call deducts credits based on the tool's listed cost. Auto-top-up is available.

BYOK (Your Own Key)

Register your existing API key. Calls go direct to the provider at their rates.

Connecting Your AI Framework

The gateway speaks multiple protocols. Use whichever your framework supports:

Claude Code / MCP Clients

Add the gateway as an MCP server in your configuration. Every tool becomes available as an MCP tool call.

```json
// .mcp.json
{
  "toolroute": {
    "type": "http",
    "url": "https://toolroute.ai/mcp",
    "headers": {
      "Authorization": "Bearer tr_live_abc123..."
    }
  }
}
```

OpenAI Function Calling

Fetch the tool catalog in OpenAI format and pass it as function definitions.

```python
import requests
from openai import OpenAI

client = OpenAI()

# Get tools as OpenAI function definitions
tools = requests.get(
    "https://toolroute.ai/api/v1/tools?format=openai"
).json()

# Pass to OpenAI / any compatible LLM
response = client.chat.completions.create(
    model="gpt-4",
    messages=[...],
    tools=tools
)
```
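Wiring the model's tool calls back to the gateway takes one dispatch step. A sketch, assuming each catalog function name encodes tool and operation as `tool__operation` (an illustrative convention, not documented gateway behavior):

```python
import json
import os
import requests

def parse_tool_name(name):
    """Split an illustrative "tool__operation" function name into its parts."""
    tool, operation = name.split("__", 1)
    return tool, operation

def dispatch(tool_call):
    """Execute one model-requested tool call via the gateway's execute endpoint."""
    tool, operation = parse_tool_name(tool_call.function.name)
    args = json.loads(tool_call.function.arguments)
    resp = requests.post(
        "https://toolroute.ai/api/v1/execute",
        headers={"Authorization": f"Bearer {os.environ['TOOLROUTE_KEY']}"},
        json={"tool": tool, "operation": operation, "input": args},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Usage, after the chat completion above (requires a key and network access):
# for call in response.choices[0].message.tool_calls or []:
#     result = dispatch(call)
```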

Direct REST

For custom integrations, the REST endpoint works from any language that can make HTTP requests.

What You Give Up (and What You Gain)

| Factor | Self-Hosted MCP | Gateway |
| --- | --- | --- |
| Setup time per tool | 5-30 min | 0 (already running) |
| API keys to manage | One per tool | One total |
| Server processes | One per tool | Zero |
| Latency | Direct (lowest) | +50-100ms (gateway hop) |
| Customization | Full control | Standard operations only |
| Cost at scale | Infrastructure + API fees | Credits only |

The tradeoff is clear: you gain zero infrastructure management and unified billing. You give up direct control and accept a small latency penalty. For most agent workflows where tool calls take seconds anyway, the 50-100ms gateway overhead is negligible.

When to Self-Host Instead

A gateway is not always the right answer. Self-host when:

  • You need sub-10ms latency (real-time voice, trading)
  • You require custom tool modifications not available through the standard API
  • Compliance requires that data never leaves your infrastructure
  • You use only 1-2 tools and the setup cost is minimal

For everything else, a gateway eliminates infrastructure complexity so you can focus on building the agent, not managing its dependencies.

ToolRoute provides 87 tools through one endpoint. Read the quickstart, try the playground, or see pricing.