MCP Series Episode 2: Deeper in the protocol
Introduction
Dear reader, welcome back!
This is episode 2 of the MCP series; if you missed the previous episode, you can find them all here.
In the previous episode we established the need for a protocol that addresses the M x N
complexity problem faced by AI models and external tools.
Let’s now dig a bit deeper into the protocol itself. We will dissect the roles of the core actors and describe the communication via the JSON-RPC protocol.
Actors and resources
The MCP protocol, at a glance, can be described as a server that makes available some resources (prompts, tools and resources) that may be used by the Client on behalf of the Host (the end user).
To better clarify the responsibilities of each actor, we can describe them as below:
Entity | Description | Example |
---|---|---|
Host | The user-facing application that initiates the connection to the MCP server and orchestrates the user requests, the LLM and the external tools | ChatGPT |
Client | Defined within the host, it manages the communication with a specific server instance over a 1:1 bidirectional channel | Any piece of code implementing the MCP protocol client interface |
Server | A separate process that runs and waits to provide services to a client instance, implementing the MCP protocol | Any piece of code implementing the MCP protocol server interface |
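The 1:1 relationship in the table above can be sketched in a few lines of TypeScript. Note that these names (`Host`, `McpClient`, `McpServerHandle`) are illustrative, not types from the official MCP SDK: the point is only that a Host owns one Client per Server it connects to.

```typescript
// Illustrative sketch, not the official SDK: a Host owns one Client
// per Server, and each Client is bound to exactly one Server.

interface McpServerHandle {
  name: string;
  listPrompts(): string[]; // simplified stand-in for the server's JSON-RPC surface
}

class McpClient {
  // Each client instance talks to exactly one server (1:1).
  constructor(private server: McpServerHandle) {}
  prompts(): string[] {
    return this.server.listPrompts();
  }
}

class Host {
  private clients: McpClient[] = [];
  connect(server: McpServerHandle): McpClient {
    const client = new McpClient(server); // a fresh client per server
    this.clients.push(client);
    return client;
  }
  clientCount(): number {
    return this.clients.length;
  }
}
```

Connecting to a second server would simply create a second client inside the same host.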
As we said previously, the server can make available some resources; a brief description of each is given below:
Resource | Description | Example |
---|---|---|
Prompt | The catalyst of action, made available to the host via pre-defined templates | Template asking the LLM to review a snippet of code and provide the output of the analysis |
Resource | Read-only data source whose role is to provide more context to the LLM | Function that accesses a dump of files containing scientific resources |
Tool | Executable function that an LLM can invoke to perform an action or retrieve some data | Function that returns the weather for a given location |
Sampling | For complex operations, the server initiates a request for the Client/Host to perform an LLM interaction | The server poses a question that (after human review and confirmation) is sent to an LLM; the answer is returned to the server. |
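To make the Tool row of the table above concrete, here is a minimal sketch of the weather example. The `ToolDefinition` shape and `get_weather` handler are simplifications I made up for illustration, not the official SDK types: a tool is essentially a named, described, executable function.

```typescript
// Illustrative only: a tool pairs a name and description (so the LLM
// can decide when to call it) with an executable handler.

interface ToolDefinition {
  name: string;
  description: string;
  handler: (args: Record<string, unknown>) => unknown;
}

const weatherTool: ToolDefinition = {
  name: "get_weather",
  description: "Return the weather for a given location",
  handler: (args) => {
    // A real tool would call an external weather API here;
    // we return canned data to keep the sketch self-contained.
    return { location: args.location, forecast: "sunny" };
  },
};
```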
The underlying protocol
As already mentioned, the client-server communication is performed using the JSON-RPC protocol. Key features:
- Lightweight (simple text-based requests and responses)
- Agnostic about the transport method (HTTP, socket, other)
- Request-response style (the client performs a request, the server provides a response)
- Supports multiple requests (batching) and notifications, distinguished via the `id` attribute: requests carry an `id` and expect a response, notifications omit it
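The "lightweight and transport-agnostic" points above boil down to this: a JSON-RPC message is just a JSON string, so any byte transport can carry it. A minimal sketch (helper names are my own, not from any library):

```typescript
// A JSON-RPC message is plain JSON text; HTTP, sockets, or stdio can
// all carry the same string unchanged.

function buildRequest(id: number, method: string, params?: object): string {
  // Requests carry an `id` so the server's response can be correlated.
  return JSON.stringify({ jsonrpc: "2.0", id, method, ...(params ? { params } : {}) });
}

function buildNotification(method: string, params?: object): string {
  // Notifications omit the `id`: the sender expects no response.
  return JSON.stringify({ jsonrpc: "2.0", method, ...(params ? { params } : {}) });
}
```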
In the JSON-RPC protocol there are three main types of objects that can be used in the client/server communication:
// Common base: every JSON-RPC 2.0 message carries the protocol version
export interface JsonRpcBase {
  jsonrpc: "2.0";
}
// Error object carried by a failed response
export interface JsonRpcError {
  code: number;
  message: string;
  data?: any;
}
// JSON-RPC Request Object
export interface JsonRpcRequest<T = any> extends JsonRpcBase {
  id: string | number; //used by the server to correlate its response with this request
  method: string; //the routine to invoke on the server (e.g. `prompts/get`)
  params?: T;
}
// JSON-RPC Response Object (either result or error)
export interface JsonRpcResponse<T = any> extends JsonRpcBase {
  id: string | number | null;
  result?: T;
  error?: JsonRpcError;
}
// JSON-RPC Notification Object (no id)
export interface JsonRpcNotification<T = any> extends JsonRpcBase {
  method: string;
  params?: T;
}
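Since the three object types differ only in which fields they carry, a receiver can tell them apart structurally. These type guards are an illustrative sketch of that idea, not part of the specification:

```typescript
// Illustrative type guards: a request has `method` and `id`, a
// notification has `method` but no `id`, and a response has a
// `result` or `error` instead of a `method`.

type Message = {
  jsonrpc: "2.0";
  id?: string | number | null;
  method?: string;
  result?: unknown;
  error?: unknown;
};

function isRequest(m: Message): boolean {
  return m.method !== undefined && m.id !== undefined;
}

function isNotification(m: Message): boolean {
  return m.method !== undefined && m.id === undefined;
}

function isResponse(m: Message): boolean {
  return m.method === undefined && (m.result !== undefined || m.error !== undefined);
}
```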
Bringing it all together
Let’s imagine a client that wants to receive the list of prompts available from the server. The communication flow will look like the following:
//From client to server
{
"jsonrpc": "2.0",
"id": 1,
"method": "prompts/list",
"params": {
"cursor": "optional-cursor-value"
}
}
//From server to client
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"prompts": [
{
"name": "code_review",
"title": "Request Code Review",
"description": "Asks the LLM to analyze code quality and suggest improvements",
"arguments": [
{
"name": "code",
"description": "The code to review",
"required": true
}
]
}
],
"nextCursor": "next-page-cursor"
}
}
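On the client side, matching responses like the one above back to the request that produced them is a matter of bookkeeping on the `id`. The class below is an illustrative sketch of that pattern (not the official SDK): each outgoing request registers a pending promise, resolved when a response with the matching `id` arrives.

```typescript
// Illustrative sketch of id-based correlation in a JSON-RPC client.

class PendingRequests {
  private nextId = 1;
  private pending = new Map<number, (result: unknown) => void>();

  // Register an outgoing request; a real client would also write the
  // serialized request (with this id) to the transport here.
  send(method: string): { id: number; promise: Promise<unknown> } {
    const id = this.nextId++;
    const promise = new Promise<unknown>((resolve) => this.pending.set(id, resolve));
    return { id, promise };
  }

  // Called when a response arrives: look up the pending request by id
  // and hand the result back to the original caller.
  onResponse(id: number, result: unknown): void {
    const resolve = this.pending.get(id);
    if (resolve) {
      this.pending.delete(id);
      resolve(result);
    }
  }
}
```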
Note: since the client expects an answer from the server, it sends the `id` attribute, which the server uses to match its response to the specific client request.
For those who are eager to get their hands dirty, I suggest following this tutorial or this nice training.
If you prefer watching over reading, I recommend this YouTube workshop, which I found very helpful.
For those who prefer a more laid-back approach, you can wait until the next episode, where we’ll focus on the MCP authorization and transport specifications.
A.
References
https://modelcontextprotocol.io/specification/2025-06-18/basic
https://huggingface.co/learn/mcp-course/en/unit1/architectural-components
© 2025 Andrea Zaccaro. All rights reserved.