

We have all heard of the Model Context Protocol (MCP) in the context of artificial intelligence. In this article, we will look at what MCP is and why it is becoming more important every day. When APIs are already available, why do we need MCP? And although MCP's popularity has surged, does this new protocol have staying power? In the first section, we'll look at the parallels between APIs and MCP and then begin to explore what makes them different.
A single isolated computer can access only a limited amount of data, which directly limits its usefulness. APIs were created to allow data transfer between systems. Similarly, the Model Context Protocol (MCP) is a communication protocol that connects AI agents powered by large language models (LLMs) to external systems. APIs are written primarily for developers, while MCP servers are built for AI agents (Johnson, 2025).
Anthropic introduced MCP on November 25, 2024 as an open-source standard to enable communication between AI assistants and external data sources. AI agents are limited by the fragmentation of data across isolated systems (Anthropic, 2024). The protocol defines how agents can interact with external systems, obtain user information, and take automated actions.
At its core, MCP uses a client-server model, and both clients and servers expose three main features.
To keep this article concise, we will focus on the most important feature on each side. For MCP servers, tools are the primary way to perform complex tasks; for clients, elicitation enables two-way communication between the agent and the user.
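Concretely, MCP messages are exchanged as JSON-RPC 2.0. The sketch below shows, as plain Python dicts, roughly what a server's response to a tool-listing request might contain; the `search_flights` tool and its schema are hypothetical examples, not part of the specification.

```python
# Sketch of an MCP-style tool listing (JSON-RPC 2.0), as Python dicts.
# The "search_flights" tool and its schema are hypothetical examples.
import json

tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_flights",
                "description": "Search available flights between two cities.",
                "inputSchema": {  # JSON Schema describing the parameters
                    "type": "object",
                    "properties": {
                        "origin": {"type": "string"},
                        "destination": {"type": "string"},
                        "date": {"type": "string", "format": "date"},
                    },
                    "required": ["origin", "destination", "date"],
                },
            }
        ]
    },
}

# An agent reads this listing to learn which tools exist and what they need.
print(json.dumps(tools_list_response["result"]["tools"], indent=2))
```

The agent never sees the server's implementation, only this declarative listing; the `inputSchema` is what tells it which parameters it must still obtain from the user.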
Instead of explicitly calling APIs, agents select and invoke the appropriate tools (functions) based on the user's message. If a tool requires parameters the user has not provided, the agent can elicit them from the user. This allows a more responsive workflow in which two-way communication between the LLM and the user is possible.
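That flow can be sketched in a few lines. This is an illustration under stated assumptions, not the MCP wire format: the function names are hypothetical, and a plain dict simulates the user's replies in place of a real elicitation round-trip.

```python
# Illustrative sketch: an agent fills in a tool's required parameters,
# asking the user (elicitation) for anything missing. Names are hypothetical.

def elicit(question, answer_source):
    """Stand-in for asking the user; answer_source simulates their replies."""
    return answer_source[question]

def prepare_call(required_params, known, answer_source):
    """Collect all required parameters, eliciting the missing ones."""
    args = dict(known)
    for param in required_params:
        if param not in args:
            args[param] = elicit(param, answer_source)
    return args

# The user's message mentioned a destination, but not an origin or a date.
simulated_user = {"origin": "BOS", "date": "2025-06-01"}
args = prepare_call(["origin", "destination", "date"],
                    {"destination": "SFO"}, simulated_user)
print(args)  # all three required parameters are now filled in
```

The point of the sketch is the control flow: the tool call is only assembled once every required parameter is present, with the user consulted mid-workflow rather than up front.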
A fair question: if APIs already exist, why is MCP necessary? APIs already connect fragmented data systems, and SaaS applications already support two-way communication with users. So why do we need MCP now?
The main reason is that the consumer of external data has changed from developers to AI agents. A developer typically programs an application against APIs that behave deterministically, while an AI agent interprets the user's message and makes autonomous decisions to execute the request. By nature, the execution of a workflow by an AI agent is non-deterministic.
APIs are machine-executable contracts that act deterministically; they work when their users know what action to take next (Posta, 2025). AI agents run on probabilistic LLMs that do not consistently deliver repeatable results across tasks (Atil, 2024). Variation in an LLM's responses is expected, and this poses a problem for autonomous execution.
MCP addresses this variation by providing a high-level abstraction built around functionality rather than API endpoints. Tools allow LLMs to perform actions such as searching for a flight, booking a calendar event, and more (Understanding MCP Servers, 2026).
A common misconception about tools is that they are just abstractions of existing API calls. Tools are designed to abstract functionality, not individual API calls. Exposing many APIs directly as tools inflates the size and cost of the agent's context, which is not ideal (Johnson, 2025).
A tool can include multiple API calls in its implementation to achieve the desired result. An agent reviews the list of available tools, automatically selects the most suitable ones, and determines the appropriate execution order.
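A hedged sketch of that idea: one `book_trip` tool that wraps two hypothetical API calls, plus a toy keyword selector standing in for the LLM's tool choice. Every name here is invented for illustration; a real agent would choose from the tool descriptions, not a keyword match.

```python
# Sketch: one tool composed of several underlying API calls, plus a toy
# selector standing in for the LLM's tool choice. All names are hypothetical.

def search_flights_api(origin, destination):
    # Stand-in for a real flight-search API call.
    return {"flight": f"{origin}-{destination}-001", "price": 199}

def reserve_seat_api(flight_id):
    # Stand-in for a real booking API call.
    return {"confirmation": f"CONF-{flight_id}"}

def book_trip(origin, destination):
    """One tool, multiple API calls: search first, then reserve."""
    flight = search_flights_api(origin, destination)
    return reserve_seat_api(flight["flight"])

TOOLS = {"book_trip": book_trip}

def select_tool(user_message):
    # Toy heuristic; in practice the LLM chooses from the tool descriptions.
    return "book_trip" if "book" in user_message.lower() else None

tool_name = select_tool("Please book me a trip from BOS to SFO")
result = TOOLS[tool_name]("BOS", "SFO")
print(result)  # {'confirmation': 'CONF-BOS-SFO-001'}
```

Exposing `book_trip` as a single tool, rather than the two underlying APIs, keeps the agent's context small and moves the search-then-reserve ordering out of the LLM's hands and into deterministic code.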
Since its launch in 2024, MCP has seen a steady rise in popularity. The Google Trends chart below shows the relative interest in MCP since its launch.
Many companies have launched their own MCP servers to facilitate the creation of autonomous agents. As of February 2026, the official MCP registry already listed more than 6,400 MCP servers, and that number is expected to keep growing. The official registry is still in preview, yet the ecosystem has grown tremendously in less than a year.
Other major market players have adopted MCP and added support for their customers. OpenAI added MCP support to ChatGPT in March 2025, and Google followed a few weeks later in April. This rapid pace of adoption speaks to the protocol's staying power.
MCP is still in the early stages of widespread adoption; many applications have yet to mature and reach production. Leonardo Pineryo of Pento AI summed it up best: “MCP’s first year transformed the way AI systems connect with the world. Its second year will transform what they can achieve” (2025).
Safety guardrails around tools are an area that will see further development, as trust is one of the biggest concerns with AI agents. With better guardrails on its tools, an AI agent can be granted more autonomy. Over the next year, MCP is sure to see continued growth, both in the sophistication of its capabilities and in the breadth of its adoption.