MCP and Tool Use: The Protocol Layer That Makes Agents Useful

The Model Context Protocol is quietly becoming the USB-C of AI integrations. Here's what it actually does, why it matters, and the patterns emerging around it.

[Figure: architecture diagram showing the MCP protocol connecting an LLM to multiple external services]

The integration problem

An LLM on its own can reason, summarize, and generate text. But to do anything useful in the real world — query a database, send an email, check an inventory system — it needs to call external tools. And for the past two years, every team building agent systems has been writing their own bespoke integration layer.

The result: fragmented tooling, duplicated effort, and security models that range from "carefully designed" to "we'll fix it later."

The Model Context Protocol (MCP) is an attempt to standardize this layer. Think of it as a common interface between LLMs and the external world — a shared contract for how models discover, invoke, and receive results from tools.

What MCP actually is

At its core, MCP defines three things:

  1. Tool discovery — A standard way for a model to learn what tools are available, what parameters they accept, and what they return
  2. Invocation — A standard format for the model to request a tool call and for the host to execute it
  3. Context management — A way to pass relevant context (documents, data, state) to the model alongside tool definitions

The protocol is transport-agnostic — it works over HTTP, stdio, WebSockets, or any other channel. This flexibility matters because agent architectures vary widely: some run tools in the same process, others delegate to remote services.
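Under the hood, MCP messages are JSON-RPC 2.0. The shapes below follow the spec's `tools/list` and `tools/call` methods; the `query_inventory` tool and its schema are hypothetical, for illustration only.

```python
import json

# Hypothetical tools/list response: the server advertises one tool,
# describing its parameters with JSON Schema.
discovery_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_inventory",  # hypothetical tool name
                "description": "Look up stock levels for a SKU",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sku": {"type": "string"}},
                    "required": ["sku"],
                },
            }
        ]
    },
}

# The tools/call request the host sends once the model picks that tool.
invocation_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "query_inventory", "arguments": {"sku": "ABC-123"}},
}

print(json.dumps(invocation_request, indent=2))
```

Because these are plain JSON-RPC payloads, any transport that can carry JSON works, which is what makes the stdio and HTTP variants interchangeable from the model's point of view.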

Why standardization changes the economics

Before MCP, integrating a new tool into an agent system required writing custom code for schema translation, authentication, error handling, and result parsing — for every tool, for every agent framework.

With a standard protocol, you write the integration once (as an MCP server) and any compliant agent can use it. The cost of supporting a tool drops from per-framework custom code to a one-time server implementation, and that saving compounds as the number of agents and tools grows.

The composability unlock

The most interesting MCP deployments we've seen involve agents that discover tools at runtime — assembling custom workflows from available capabilities without any hardcoded tool list.
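A minimal sketch of that runtime-discovery loop, assuming a client helper that exposes each server's advertised tools (the `StubServer` class, tool names, and the keyword-matching heuristic are all hypothetical):

```python
class StubServer:
    """Stand-in for an MCP client connection; list_tools is an assumed helper."""
    def __init__(self, tools):
        self._tools = tools

    def list_tools(self):
        return self._tools


def discover_tools(servers):
    """Collect tool descriptors from every connected server — no hardcoded list."""
    tools = []
    for server in servers:
        tools.extend(server.list_tools())
    return tools


def select_tools(tools, task_keywords):
    """Naive relevance filter: keep tools whose description mentions the task."""
    return [
        t for t in tools
        if any(kw in t["description"].lower() for kw in task_keywords)
    ]


crm = StubServer([{"name": "crm_lookup", "description": "Read CRM contact records"}])
mail = StubServer([{"name": "send_email", "description": "Send an email to a recipient"}])

available = discover_tools([crm, mail])
relevant = select_tools(available, ["email"])  # → only send_email survives the filter
```

In production the relevance filter would typically be the model itself choosing from the discovered tool list, but the structure — enumerate, filter, invoke — is the same.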

Patterns emerging in production

Pattern 1 — Tiered tool access

Not all tools carry the same risk. Reading data from a CRM is very different from sending an email on behalf of a user. Production MCP deployments typically implement tiered access: for example, read-only tools are approved automatically, state-changing tools require per-call user confirmation, and destructive tools are disabled outright.

This maps naturally to how permissions work in traditional software and is easy for users to understand.
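A sketch of one way to implement tiers as a default-deny policy table (the tool names and tier assignments are hypothetical; a real deployment would load them from config):

```python
from enum import Enum


class Tier(Enum):
    READ = "read"      # auto-approved
    WRITE = "write"    # requires user confirmation
    DENIED = "denied"  # never callable in this deployment


# Hypothetical policy table mapping tool names to risk tiers.
TOOL_TIERS = {
    "crm_lookup": Tier.READ,
    "send_email": Tier.WRITE,
    "delete_records": Tier.DENIED,
}


def authorize(tool_name, user_confirmed=False):
    """Default-deny: tools missing from the policy table are never callable."""
    tier = TOOL_TIERS.get(tool_name, Tier.DENIED)
    if tier is Tier.READ:
        return True
    if tier is Tier.WRITE:
        return user_confirmed
    return False
```

The default-deny on unknown tools matters most: with runtime discovery, a newly advertised tool should gain no privileges until someone explicitly classifies it.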

Pattern 2 — Tool result summarization

Raw tool outputs often contain far more information than the model needs. A database query might return 500 rows when the agent only needs a count, or an API response might include metadata that's irrelevant to the task.

A common pattern is to insert a summarization step between the raw tool result and the model's context — reducing token consumption and helping the model focus on what matters.
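A toy version of that step, using the 500-row example above (the `needed` switch stands in for a real summarization policy, which might itself be a cheap LLM call):

```python
def summarize_rows(rows, needed="count"):
    """Reduce a raw query result to what the task needs before it enters context."""
    if needed == "count":
        return {"row_count": len(rows)}
    # Otherwise keep a small sample so context stays bounded.
    return {"row_count": len(rows), "sample": rows[:3]}


rows = [{"id": i} for i in range(500)]  # raw tool output: 500 rows
summary = summarize_rows(rows)          # → {'row_count': 500}
```

The agent's context receives `summary`, not `rows` — a handful of tokens instead of thousands.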

Pattern 3 — Fallback chains

When a primary tool is unavailable (rate limited, down, or returning errors), agents need fallback strategies. MCP's standardized interface makes it natural to define tool equivalence classes — groups of tools that serve the same purpose — and automatically route to an alternative when the primary fails.
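A sketch of an equivalence class in code: tools tried in order until one succeeds. The tool callables and the `ToolUnavailable` error type are hypothetical stand-ins for whatever failure signal your client surfaces.

```python
class ToolUnavailable(Exception):
    """Raised when a tool is down, rate limited, or erroring."""


def call_with_fallback(equivalents, *args):
    """Try each tool in the equivalence class; re-raise only if all fail."""
    last_error = None
    for tool in equivalents:
        try:
            return tool(*args)
        except ToolUnavailable as exc:
            last_error = exc  # primary unavailable; route to the next alternative
    raise last_error


def primary_search(query):
    raise ToolUnavailable("rate limited")


def backup_search(query):
    return f"results for {query!r} from backup"


result = call_with_fallback([primary_search, backup_search], "mcp")
```

Because MCP gives every tool the same call shape, grouping tools into equivalence classes needs no per-tool adapter code — the fallback loop treats them interchangeably.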

Security considerations

Tool use is where AI safety meets traditional application security. A few hard-won lessons:

The prompt injection surface

Tool results are untrusted input. If a tool returns content that contains instructions ("ignore previous instructions and..."), the model might follow them. Sanitize tool results or use architectural isolation to prevent this.
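A heuristic sketch of the sanitization side — and only a sketch: pattern matching cannot fully prevent injection, so treat it as one layer alongside architectural isolation. The pattern and delimiter format are illustrative, not a vetted defense.

```python
import re

# Illustrative pattern only; real injections will not always match a regex.
SUSPICIOUS = re.compile(r"ignore (all |any )?previous instructions", re.IGNORECASE)


def wrap_tool_result(text):
    """Flag instruction-like content and clearly delimit tool output as data."""
    flagged = bool(SUSPICIOUS.search(text))
    body = "[content flagged for review]" if flagged else text
    return f'<tool_result trusted="false">\n{body}\n</tool_result>'
```

The delimiters signal to the model (and to any downstream prompt template) that everything inside is data, not instructions; the flagging step surfaces obvious injection attempts for human review.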

What MCP doesn't solve (yet)

MCP standardizes the interface but leaves several hard problems to the implementer — among them authentication and authorization, rate limiting, audit logging, and tool versioning.

Where this is heading

The trajectory is clear: MCP or something very much like it will become the standard way AI systems interact with the external world. The ecosystem is growing rapidly — major SaaS providers are publishing MCP servers, and framework support is becoming table stakes.

For teams building agent systems today, investing in MCP compatibility is a low-risk bet with high option value. The protocol may evolve, but the core abstraction — standardized tool discovery, invocation, and context management — is here to stay.
