In November, Anthropic released a standard called the Model Context Protocol (MCP). I didn't realise at the time how valuable MCP servers are for agentic AI, so here is what they are and why they matter for the enterprise:
An MCP server is an API gateway that sits on top of your business APIs, data sources (structured and unstructured) and prompts. Inside the MCP server you can construct tools that LLMs or agents can use to assist with their tasks. The server acts as a 'man in the middle' between your agents or LLMs and the data and tools they need.
For instance, if you wanted an agent to be able to fetch current orders and create new orders for a customer, these could be two MCP tools, 'list_orders(customer_id)' and 'create_order(customer_id)', which run specific SQL queries against your databases and have suitable validation in place. A sketch of what this might look like is below.
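As a rough illustration, here is a minimal sketch of such a server using the FastMCP helper from the official MCP Python SDK. The 'orders_db' module and its functions are hypothetical stand-ins for your own database layer:

from mcp.server.fastmcp import FastMCP

import orders_db  # hypothetical wrapper around your parameterised SQL queries

mcp = FastMCP("orders")

@mcp.tool()
def list_orders(customer_id: str) -> list[dict]:
    """Return the current orders for a customer."""
    return orders_db.fetch_orders(customer_id)

@mcp.tool()
def create_order(customer_id: str, items: list[str]) -> dict:
    """Create a new order for a customer, with basic validation."""
    if not items:
        raise ValueError("An order must contain at least one item")
    return orders_db.insert_order(customer_id, items)

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio by default

Once this server is registered with a client (Claude Desktop, or your own agent runtime), the model sees 'list_orders' and 'create_order' as callable tools without ever touching the database directly.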
Anthropic have released SDKs and a number of reference implementations of the MCP standard, which are available in their GitHub organisation:
https://github.com/modelcontextprotocol
What I see as the benefits of adopting a framework like this are:
By having a single point of access, any resources created for LLMs become usable by multiple different agents, and all resources are visible in a single place through Anthropic's desktop client.
Business APIs are often very messy. By consciously constructing an API gateway with agents and LLMs in mind, you can tailor how agents connect to and interact with your business logic.
If all of your resources and tools are created to be generally accessible, you can use platforms like AWS Bedrock and Azure AI Studio to spin up 'off the shelf' agents as and when they are required.
You can limit access to tools and resources depending on the user ID passed in, so that your agent doesn't have to worry about permissions. This also means you can store memory about previous user actions in one place for all agents to access (see the sketch below).
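As a rough sketch of that last point, assuming a hypothetical 'permissions' store alongside the same FastMCP setup as above, a tool can check the caller's user ID before touching the database and log the action centrally:

from mcp.server.fastmcp import FastMCP

import orders_db     # hypothetical database layer
import permissions   # hypothetical permissions / action-history store

mcp = FastMCP("orders")

@mcp.tool()
def create_order(user_id: str, customer_id: str, items: list[str]) -> dict:
    """Create an order only if the calling user is allowed to."""
    if not permissions.can_create_orders(user_id):
        raise PermissionError(f"User {user_id} may not create orders")
    order = orders_db.insert_order(customer_id, items)
    # Record the action once, in one place, so every agent shares the same memory
    permissions.record_action(user_id, "create_order", order["id"])
    return order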