Understand the difference between the Model Context Protocol (MCP) and tool calling, and watch a live agent decide which tool to invoke, send the call, receive the result, and form its final answer.
Tool calling is the agent's ability to invoke a function at runtime. Instead of answering directly, the LLM
emits a structured tool_use block: a JSON object naming the tool and its input arguments.
The host framework intercepts that block, executes the tool, and injects the tool_result back into the LLM's context.
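Concretely, the loop looks something like this. The block shape below follows the Anthropic-style tool_use/tool_result convention; the calendar tool is a stub invented for illustration:

```python
import json

# Tool registry mapping names to plain Python functions.
# gcal_list_events is a stand-in for a real Google Calendar lookup.
def gcal_list_events(date: str) -> list[dict]:
    return [{"title": "Design review", "start": f"{date}T10:00"}]

TOOLS = {"gcal_list_events": gcal_list_events}

# What the LLM emits instead of a text answer: a structured block
# naming the tool and its input arguments (field names vary by provider).
tool_use = {
    "type": "tool_use",
    "id": "call_1",
    "name": "gcal_list_events",
    "input": {"date": "2025-06-02"},
}

def execute(block: dict) -> dict:
    """Host-framework step: intercept the tool_use block, run the tool,
    and wrap the output as a tool_result for the LLM's context."""
    result = TOOLS[block["name"]](**block["input"])
    return {
        "type": "tool_result",
        "tool_use_id": block["id"],
        "content": json.dumps(result),
    }

print(execute(tool_use)["type"])  # tool_result
```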
Model Context Protocol is a universal adapter layer for exposing tools to any agent. Before MCP, every framework (LangGraph, AutoGen, CrewAI) had its own custom tool format. MCP standardises how tools are defined, discovered, and called — write once, use everywhere.
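"Write once, use everywhere" rests on a shared tool-definition shape. A sketch of what an MCP server advertises for each tool — a name, a description, and a JSON Schema for its inputs (the tool itself is an illustrative example, not a real server's):

```python
# Shape of one entry in an MCP server's tool listing: the name the
# agent calls, a description the LLM reads, and a JSON Schema
# describing the expected input arguments.
tool_definition = {
    "name": "gcal_list_events",
    "description": "List calendar events for a given date.",
    "inputSchema": {
        "type": "object",
        "properties": {"date": {"type": "string", "description": "ISO date"}},
        "required": ["date"],
    },
}

# Any framework that speaks MCP reads this same definition,
# so the tool is defined once and discovered everywhere.
print(sorted(tool_definition))  # ['description', 'inputSchema', 'name']
```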
An MCP server is a process that exposes a set of tools over MCP. For example, a Google Calendar
MCP server exposes gcal_list_events and gcal_create_event, while
a Gmail MCP server exposes gmail_search_messages. The agent connects to
one or many servers.
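A toy, protocol-shaped sketch of such a server — it registers tools and answers list and call requests. This is stdlib-only for illustration; a real server would be built on the official MCP SDK and speak JSON-RPC over stdio or HTTP, and the calendar data here is stubbed:

```python
import json

class ToyMCPServer:
    """Minimal sketch of an MCP-style server: a tool registry plus
    handlers for tools/list and tools/call requests."""

    def __init__(self, name: str):
        self.name = name
        self._tools = {}

    def tool(self, name, description, schema):
        """Decorator that registers a function as a named tool."""
        def register(fn):
            self._tools[name] = {"fn": fn, "description": description,
                                 "inputSchema": schema}
            return fn
        return register

    def handle(self, request: dict) -> dict:
        if request["method"] == "tools/list":
            # Advertise every tool's name, description, and input schema.
            return {"tools": [{"name": n, "description": t["description"],
                               "inputSchema": t["inputSchema"]}
                              for n, t in self._tools.items()]}
        if request["method"] == "tools/call":
            # Look up the tool and invoke it with the supplied arguments.
            tool = self._tools[request["params"]["name"]]
            result = tool["fn"](**request["params"]["arguments"])
            return {"content": [{"type": "text", "text": json.dumps(result)}]}
        raise ValueError(f"unknown method: {request['method']}")

server = ToyMCPServer("google-calendar")

@server.tool("gcal_list_events", "List events for a date.",
             {"type": "object",
              "properties": {"date": {"type": "string"}},
              "required": ["date"]})
def gcal_list_events(date):
    return [{"title": "Design review", "start": f"{date}T10:00"}]  # stubbed

listing = server.handle({"method": "tools/list"})
print([t["name"] for t in listing["tools"]])  # ['gcal_list_events']
```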
The MCP client lives inside the agent framework. It receives tool_use blocks from the LLM,
looks up which MCP server owns that tool, forwards the call, waits for the result, and
returns a tool_result block back to the LLM context.
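The routing step can be sketched as a table from tool name to owning server. Here the connected servers are plain callables standing in for live MCP connections, and the tools are invented examples; a real client would discover each server's tools via its listing endpoint:

```python
import json

class ToolRouter:
    """Client-side sketch: given a tool_use block, find the server
    that owns the tool, forward the call, and wrap the reply as a
    tool_result block for the LLM's context."""

    def __init__(self):
        self._route = {}  # tool name -> handler from the owning server

    def connect(self, server_name: str, tools: dict):
        # `tools` maps tool names to callables exposed by that server.
        for name, fn in tools.items():
            self._route[name] = fn

    def dispatch(self, tool_use: dict) -> dict:
        fn = self._route[tool_use["name"]]   # which server owns this tool?
        result = fn(**tool_use["input"])     # forward the call, await result
        return {"type": "tool_result",
                "tool_use_id": tool_use["id"],
                "content": json.dumps(result)}

router = ToolRouter()
router.connect("gcal", {"gcal_list_events": lambda date: [{"title": "1:1"}]})
router.connect("gmail", {"gmail_search_messages": lambda query: []})

block = {"type": "tool_use", "id": "call_7",
         "name": "gcal_list_events", "input": {"date": "2025-06-02"}}
print(router.dispatch(block)["tool_use_id"])  # call_7
```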
Steps 2–4 are invisible to the user. From their perspective, the agent just "knew" what meetings they had. But under the hood, the agent never had that data — it fetched it in real time via tool calling over MCP.
For your consulting work, this is why an MCP server is a reusable asset: you write
gcal_list_events once as an MCP server, and any agent (LangGraph, AutoGen, the Claude API)
can call it the same way, with no per-framework rewrite of the integration.
Pick a preset message or type your own. Watch the right panel trace every step: LLM reasoning → tool call emitted → MCP routing → server result → final answer.