
Understanding Model Context Protocol (MCP) for AI Agents

Vaibhav Solanki

Table of Contents

  1. What is MCP?
  2. Why MCP Matters
  3. Core Concepts & Architecture
  4. MCP in Action: A Step‑by‑Step Walkthrough
  5. Building Your Own MCP‑Compliant Tool
  6. Security & Governance
  7. Common Pitfalls & How to Avoid Them
  8. Future of MCP
  9. Conclusion
  10. Further Reading & Resources

What is MCP?

The Model Context Protocol (MCP) is a lightweight, JSON‑based specification that defines how a language model (or any AI “agent”) can request, receive, and manipulate context from external systems. Think of it as a standardized API contract between your AI and the world outside the model’s training data.

Key goals of MCP:

| Goal | Description |
| --- | --- |
| Interoperability | One protocol, many tools. |
| Extensibility | Add new “tool types” without breaking existing agents. |
| Security | Fine‑grained permissioning via scopes and tokens. |
| Observability | Structured logs and metrics for every call. |

Why MCP Matters

1. The “Black‑Box” Problem

Traditional LLMs generate responses purely from pre‑trained weights. When you need up‑to‑date data or domain‑specific operations (e.g., querying a CRM, running a spreadsheet calculation), you either:

  • Embed the logic in the prompt – brittle, hard to maintain.
  • Wrap the LLM in a monolithic service – no reuse, hard to audit.

MCP decouples what the model wants from how it gets it.

2. One Protocol for All

  • Tool developers can expose functionality once (e.g., a weather API, a database connector, a code‑execution sandbox).
  • Agent builders can consume any MCP‑compliant tool without writing custom adapters.

3. Auditable Interactions

Every MCP call is a JSON payload that can be logged, replayed, or re‑run in sandbox mode, enabling compliance with regulations like GDPR or HIPAA.


Core Concepts & Architecture

1. MCP Message Types

| Type | Purpose | Example |
| --- | --- | --- |
| tool_request | Agent asks for a tool to run. | { "type": "tool_request", "tool": "calc_sum", "params": {"a": 5, "b": 7} } |
| tool_response | Tool returns a result. | { "type": "tool_response", "tool": "calc_sum", "result": 12 } |
| context_update | Tool updates shared context (e.g., a cache). | { "type": "context_update", "key": "latest_weather", "value": "sunny" } |
| error | Tool or protocol error. | { "type": "error", "code": "403", "message": "Access denied" } |
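
For readers who prefer typed objects, here is a minimal sketch of these four message types as Python dataclasses. The field names mirror the JSON payloads in the table above; the classes themselves are an illustration, not part of the spec.

from dataclasses import dataclass
from typing import Any

@dataclass
class ToolRequest:
    tool: str                  # name of the tool to invoke, e.g. "calc_sum"
    params: dict[str, Any]     # arguments, validated against params_schema
    type: str = "tool_request"

@dataclass
class ToolResponse:
    tool: str
    result: Any
    type: str = "tool_response"

@dataclass
class ContextUpdate:
    key: str                   # context-store key, e.g. "latest_weather"
    value: Any
    type: str = "context_update"

@dataclass
class Error:
    code: str                  # string error code, e.g. "403"
    message: str
    type: str = "error"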

2. Tool Registry

A central catalog (often a JSON file or HTTP endpoint) that lists available tools, their schemas, and permissions. Example:

{
  "tools": [
    {
      "name": "calc_sum",
      "description": "Adds two numbers",
      "params_schema": {
        "type":"object",
        "properties": {
          "a": {"type":"number"},
          "b": {"type":"number"}
        },
        "required": ["a","b"]
      },
      "output_schema": {
        "type":"number"
      }
    }
  ]
}
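
To enforce these schemas at runtime, the execution environment can validate each incoming request against the registry before dispatch. A minimal sketch using the third‑party jsonschema package (the registry.json filename is an assumption):

import json
from jsonschema import validate  # pip install jsonschema

def validate_request(request, registry):
    # Look up the requested tool in the registry by name.
    tools = {tool["name"]: tool for tool in registry["tools"]}
    entry = tools.get(request["tool"])
    if entry is None:
        raise ValueError(f"Unknown tool: {request['tool']}")
    # Raises jsonschema.ValidationError if params don't match the schema.
    validate(instance=request["params"], schema=entry["params_schema"])

with open("registry.json") as f:  # the catalog shown above
    registry = json.load(f)

validate_request(
    {"type": "tool_request", "tool": "calc_sum", "params": {"a": 5, "b": 7}},
    registry,
)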

3. Execution Environment

The runtime that receives tool_request messages, validates them, invokes the underlying code (Python, Node, Docker, etc.), and returns tool_response. The environment also enforces:

  • Rate limits
  • Resource quotas (CPU, memory)
  • Security sandboxing (e.g., no network access unless whitelisted)
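
A toy dispatcher illustrates the flow: receive a tool_request, route it to a handler, and wrap the outcome in a tool_response or error message. This sketch runs handlers in‑process; a real environment would also apply the rate limits, quotas, and sandboxing listed above.

HANDLERS = {
    # Hypothetical handler table: tool name -> callable.
    "calc_sum": lambda params: params["a"] + params["b"],
}

def dispatch(message):
    if message.get("type") != "tool_request":
        return {"type": "error", "code": "400", "message": "Expected tool_request"}
    handler = HANDLERS.get(message.get("tool"))
    if handler is None:
        return {"type": "error", "code": "404", "message": "Unknown tool"}
    try:
        result = handler(message.get("params", {}))
        return {"type": "tool_response", "tool": message["tool"], "result": result}
    except Exception as exc:
        # Never let a tool failure crash the runtime; report it instead.
        return {"type": "error", "code": "500", "message": str(exc)}

print(dispatch({"type": "tool_request", "tool": "calc_sum", "params": {"a": 5, "b": 7}}))
# {'type': 'tool_response', 'tool': 'calc_sum', 'result': 12}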

4. Context Store

A key‑value store shared between the agent and tools. Agents can read from it (e.g., via a tool_request that passes a context_key), and tools can write back via context_update messages. This enables stateful conversations.
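
A minimal in‑memory version of the context store might look like the sketch below; a production deployment would typically back this with Redis or a database.

class ContextStore:
    """Shared key-value state between the agent and its tools."""

    def __init__(self):
        self._data = {}

    def apply(self, message):
        # Tools write back by sending context_update messages.
        if message.get("type") == "context_update":
            self._data[message["key"]] = message["value"]

    def get(self, key, default=None):
        # Agents read stored context by key.
        return self._data.get(key, default)

store = ContextStore()
store.apply({"type": "context_update", "key": "latest_weather", "value": "sunny"})
print(store.get("latest_weather"))  # sunny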


MCP in Action: A Step‑by‑Step Walkthrough

Let’s walk through a typical scenario: an agent answering a user’s request to “calculate the sum of 12 and 30” while also fetching the current temperature from a weather API.

User:  What is 12 plus 30? Also, what's the temperature in New York?

Agent (LLM) → MCP: tool_request(calc_sum, {a:12,b:30})
Agent → MCP: tool_request(get_weather, {city:"New York"})

Tool (calc_sum) → MCP: tool_response(calc_sum, 42)
Tool (get_weather) → MCP: tool_response(get_weather, {"temp": 68,"unit":"F"})

Agent → User: 12 + 30 = 42. The temperature in New York is 68°F.

Key points:

  1. The agent’s only interaction with the world is via MCP messages.
  2. The agent remains stateless; all state lives in the context store if needed.
  3. Each tool is isolated and auditable.
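
In code, the agent side of this exchange reduces to a few message round‑trips. The send() function below is a hypothetical stand‑in for whatever transport delivers MCP messages to the execution environment:

def answer_user(send):
    # send() delivers an MCP message and returns the tool's reply.
    calc = send({"type": "tool_request", "tool": "calc_sum",
                 "params": {"a": 12, "b": 30}})
    weather = send({"type": "tool_request", "tool": "get_weather",
                    "params": {"city": "New York"}})
    temp = weather["result"]
    return (f"12 + 30 = {calc['result']}. "
            f"The temperature in New York is {temp['temp']}°{temp['unit']}.")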

Building Your Own MCP‑Compliant Tool

Below is a minimal Python example that implements a calc_sum tool.

# calc_sum.py
import json
import sys

def run(params):
    # Core tool logic: add the two numbers supplied by the agent.
    return params["a"] + params["b"]

if __name__ == "__main__":
    # Read a single MCP request from stdin.
    request = json.load(sys.stdin)

    if request.get("type") != "tool_request" or request.get("tool") != "calc_sum":
        # Reply with an MCP error message rather than crashing, so the
        # calling agent can retry or fall back.
        json.dump({"type": "error", "code": "400", "message": "Invalid request"},
                  sys.stdout)
        sys.exit(1)

    result = run(request["params"])
    response = {
        "type": "tool_response",
        "tool": "calc_sum",
        "result": result
    }
    json.dump(response, sys.stdout)
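
To try the tool locally, pipe a request into it over stdin. A small test harness, assuming calc_sum.py sits in the current directory:

# test_calc_sum.py
import json
import subprocess

request = {"type": "tool_request", "tool": "calc_sum", "params": {"a": 12, "b": 30}}
proc = subprocess.run(
    ["python", "calc_sum.py"],
    input=json.dumps(request),  # written to the tool's stdin
    capture_output=True,
    text=True,
    check=True,
)
print(json.loads(proc.stdout))
# {'type': 'tool_response', 'tool': 'calc_sum', 'result': 42}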

Deploying the tool

  1. Add the tool to the registry (as shown in the Tool Registry section).
  2. Configure the execution environment (e.g., Docker container) to run calc_sum.py.
  3. Expose the environment behind a secure endpoint or as a local service.

Security & Governance

| Aspect | Recommendation |
| --- | --- |
| Authentication | Use OAuth2 or API keys scoped to specific tools. |
| Authorization | MCP messages include a scopes array; the environment checks it against the user’s permissions. |
| Input Validation | Enforce JSON Schema for params_schema. Reject malformed requests. |
| Sandboxing | Run each tool in a minimal container with no network access unless explicitly allowed. |
| Audit Logging | Persist every request/response pair with timestamps and user IDs. |
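
As one concrete example, the authorization row above might translate into a check like this (the scope names are illustrative, not mandated by the protocol):

def authorize(message, granted_scopes):
    # Every scope the message requests must be among the user's grants.
    requested = set(message.get("scopes", []))
    return requested <= granted_scopes

assert authorize(
    {"type": "tool_request", "tool": "calc_sum", "scopes": ["calc:run"]},
    {"calc:run", "weather:read"},
)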

Common Pitfalls & How to Avoid Them

| Pitfall | Symptom | Fix |
| --- | --- | --- |
| Overloading the agent | The LLM tries to execute too many tools in parallel. | Implement a queue or rate limiter in the execution environment. |
| Circular dependencies | Tool A updates context that Tool B depends on, leading to race conditions. | Use deterministic ordering or transactional context updates. |
| Unvalidated input | Tools crash or produce incorrect results. | Strictly enforce params_schema before invoking the tool. |
| Missing error handling | Agent stalls when a tool fails. | Always return an error message; the agent can then retry or fall back. |
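
For the last pitfall, a simple retry wrapper on the agent side keeps a failing tool from stalling the conversation. This sketch reuses the hypothetical send() transport from the walkthrough:

import time

def call_with_retry(send, request, retries=3, base_delay=1.0):
    for attempt in range(retries):
        response = send(request)
        if response.get("type") != "error":
            return response
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    # Fall back to an explicit error the agent can surface to the user.
    return {"type": "error", "code": "503", "message": "Tool unavailable after retries"}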

Future of MCP

  1. Dynamic Tool Discovery – Agents can query the registry on the fly and decide which tools to load.
  2. Multi‑Modal Context – Incorporate images, audio, or video into the context store.
  3. Federated MCP – Share context across multiple agents securely (e.g., for collaborative workflows).
  4. Standardization Efforts – Ongoing work with the AI Safety & Ethics Council to formalize MCP as an open standard.

Conclusion

MCP transforms the way AI agents interact with the world. By providing a clear, auditable contract between models and external systems, it:

  • Eliminates brittle prompt‑engineering hacks.
  • Enables modular, reusable tools.
  • Gives developers and compliance teams full visibility into every AI‑tool interaction.

Whether you’re building a customer‑support bot, an automated data‑pipeline, or a research prototype, MCP is the glue that turns a raw LLM into a productive, secure, and maintainable agent.


Further Reading & Resources

| Resource | Description |
| --- | --- |
| MCP Specification v1.0 | Official JSON schema and protocol docs. |
| StitchGrid MCP SDK | Python & JavaScript libraries for building tools. |
| MCP Playground | Interactive sandbox to test tool requests. |
| OpenAI Tool Integration Guide | How to integrate MCP with OpenAI’s new tool‑use API. |

If you’d like to contribute, open an issue or PR on the official MCP repo.
