What is an MCP server (and how to connect one to ButterCMS APIs)


Data living in silos has been a problem since the early days of software. Over time, APIs, inter-process communication (IPC) mechanisms, and composable architectures have helped fix some connectivity challenges, but significant gaps still remain in how modern systems share and process information dynamically. 

Take LLMs, for example. We all love using them. They're fast, flexible, and useful in almost every industry. But they come with real limitations: their training data cuts off at a certain date, and they lack inherent access to specific, private datasets.

If you ask an LLM a question, it can’t pull in the latest version of your internal docs, customer support logs, or live service data—unless you build that bridge yourself. Say you want your LLM to read from your CMS before it answers a customer. Or maybe you want it to pull in fresh data from your cloud database. Without a clean and reusable way to connect the model to those tools, you would be stuck building one-off integrations that are hard to manage and scale.

This is where Model Context Protocol (MCP) can help. Created by Anthropic, MCP is an open protocol that standardizes how LLMs connect to outside tools and data. In this post, we’ll explain what MCP is, how it works, and how you can use it to connect an LLM to a headless CMS.

Table of contents

  • What is MCP?

  • What is an MCP server?

  • What are the main components of MCP?

  • What’s capability negotiation in MCP?

  • Why developers are talking about MCP servers

  • How to build an MCP server to connect Claude Desktop with ButterCMS

  • Conclusion

What is MCP?

Model Context Protocol is an open-source specification that defines how an LLM can securely connect with external tools and data sources to get the context it needs to respond more accurately. 

Instead of training an LLM on every possible data source, MCP lets the model request live context from trusted tools. It’s kind of like plugging in a USB device and instantly getting access to what’s on it.

What can you do with MCP?

Here are some cool things you can achieve with MCP:

  • Create agents that run specific tasks based on external data (e.g., status updates, issue summaries, etc.)

  • Convert simple text instructions into structured API calls that your headless CMS understands.

  • Check facts against authoritative sources to ensure accuracy and reduce hallucinations.

  • Generate content that incorporates the latest information from your knowledge base. 

What is an MCP server?

An MCP server is the practical implementation of the MCP specification. It's a dedicated service or application component that acts as the “plug” between the model and whatever data or tool you want it to use.

To make this concrete, let’s consider an example. Suppose a user asks your private, MCP-enabled LLM to create a new landing page in ButterCMS with the text and structure they provide. Here’s how that could work (a rough sketch of the underlying messages follows the list):

  1. The LLM client recognizes that the task requires it to integrate with an external service. It also identifies the specific action needed. 

  2. The client sends a request to the MCP server connected to ButterCMS. 

  3. The MCP server takes the structured request from the LLM client, formats it into a REST API call, and sends it to ButterCMS.

  4. The CMS processes the request and creates the landing page.

  5. The server sends a success message (or error, if something failed) back to the client.

  6. Now the client can say, “The landing page has been created,” and even include the page link.
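
To make steps 2 and 3 more tangible, here is a rough Python sketch of what could happen on the wire. The tool name (create_landing_page), the ButterCMS endpoint, the payload fields, and the BUTTER_WRITE_TOKEN environment variable are illustrative assumptions, not the exact API; check the MCP specification and the ButterCMS Write API docs for the real shapes.

import os

import requests

# Step 2: the MCP client wraps the user's intent in a JSON-RPC "tools/call" request.
# "create_landing_page" is a hypothetical tool name for this example.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_landing_page",
        "arguments": {
            "title": "Spring Sale",
            "body": "<p>Save 20% this week only.</p>",
        },
    },
}

# Step 3: the MCP server translates that structured request into a REST call to ButterCMS.
def handle_create_landing_page(arguments: dict) -> dict:
    response = requests.post(
        "https://api.buttercms.com/v2/pages/",  # illustrative endpoint; see the Write API docs
        headers={"Authorization": f"Token {os.environ['BUTTER_WRITE_TOKEN']}"},
        json={"title": arguments["title"], "fields": {"body": arguments["body"]}},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # steps 5-6: the result flows back to the client as the tool result

In a real server, create_landing_page would be registered as an MCP tool and the protocol library would route the tools/call request to the handler, but the translation step is the same.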

What are the main components of MCP?

MCP follows a client-host-server architecture designed for security, modularity, and scalability. Let’s talk about its main components (a short client-side sketch follows the list):

  • Host: The host is the main application that brings everything together. It controls the session, manages security, and ensures that the right data goes to the right place. Hosts can run multiple MCP clients.

  • Client: An MCP client is the piece that connects a host to a single MCP server. In other words, there’s a one-to-one connection between a client and a server. Clients manage session state, handle capability exchange, and keep communication smooth between the host and the server. 

  • Server: An MCP server is the part of the system that does the actual work of connecting to tools or data sources. It's a small, focused service that performs specific tasks, like creating new entries in a headless CMS or fetching records from a private database. Servers only receive the exact context they need for a task and never have access to the full conversation.

  • Resource: A resource is the tool or service that MCP servers connect to. This could be anything from a content repository to a local folder, data lake, or cloud API. 
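
To see how these roles map to code, here is a minimal host-side sketch, assuming the official MCP Python SDK’s client helpers (ClientSession, StdioServerParameters, and stdio_client). The host launches a server as a subprocess, opens a single client connection to it, and asks what tools it offers; the command and path mirror the config example later in this post and are placeholders.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# The host decides which server to launch; each client wraps exactly one server connection.
server_params = StdioServerParameters(
    command="uv",
    args=["--directory", "/path/to/butter", "run", "sample.py"],
)

async def main() -> None:
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            # initialize() performs the capability negotiation described in the next section.
            await session.initialize()
            tools = await session.list_tools()
            print("Server exposes:", [tool.name for tool in tools.tools])

if __name__ == "__main__":
    asyncio.run(main())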

What’s capability negotiation in MCP?

Before communication can begin, clients and servers must exchange details about what they can and cannot do. This is called capability negotiation. Both sides must stick to what they’ve agreed on for the session.

If needed, additional capabilities can also be added later through renegotiation or extensions to the protocol.
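
For a sense of what that exchange looks like, here is a sketch of the initialize handshake written as Python dictionaries that mirror the underlying JSON-RPC messages. The field names follow the MCP specification, but the version string and capability values are illustrative.

# Client -> server: declare the protocol version and the features the client supports.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"sampling": {}},
        "clientInfo": {"name": "claude-desktop", "version": "1.0.0"},
    },
}

# Server -> client: answer with the capabilities it will provide for this session.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"tools": {"listChanged": True}},
        "serverInfo": {"name": "butter", "version": "0.1.0"},
    },
}

From then on, both sides stick to what was agreed: a client shouldn’t send tools/call requests to a server that never advertised a tools capability, for example.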

Why developers are talking about MCP servers

Developers are very excited about MCP servers, and for good reason:

  • Interoperability, without any integration headaches. One of the biggest wins with MCP is how different tools can talk to LLMs without needing custom APIs or deep wiring. Whether it's a CMS, a database, or a third-party API, MCP lets you plug it in and go. This means faster development, fewer bugs, and reduced time to market. 

  • Real-time data, fewer hallucinations. LLMs are known to generate plausible but incorrect information when working with outdated knowledge. MCP servers help solve this problem by allowing models to verify facts against authoritative sources before responding.

  • Clean, composable architecture. MCP servers are small, focused, and easy to manage. You don’t have to build one giant app that does everything. Instead, you can plug in separate MCP servers for each task or data source. This makes it easy to scale, debug, and update parts of your system without breaking the rest.

  • Works across LLM vendors. MCP doesn’t lock you into one LLM provider. Whether you're using Claude, ChatGPT, or anything else, MCP stays the same. You can swap out the model without having to rebuild your server logic or integrations. 

  • Easy to build, easy to extend. You don’t need a massive team to build an MCP server. The protocol is simple by design, and most servers can be built with just a few hundred lines of code. Plus, if you need more features later, you can add them through negotiated capabilities; no need to rewrite the whole thing.

How to build an MCP server to connect Claude Desktop with ButterCMS

Next, we’ll walk through how to build an MCP server and use it to connect a client (Claude Desktop) to a headless CMS (ButterCMS).

  1. First, we will create a simple Python project and install fastmcp, a minimal framework for building MCP servers. fastmcp handles the core protocol logic, so you can just focus on the actual functionality, like talking to the ButterCMS API.

Your server will expose basic MCP primitives (like tools or resources), and implement the logic needed to create a blog post, fetch content, or do anything else you want to automate.
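
Here’s a minimal sketch of what sample.py could look like, assuming the fastmcp package’s FastMCP class and the requests library for HTTP. The ButterCMS endpoint, payload fields, and the BUTTER_WRITE_TOKEN environment variable are illustrative assumptions; check the ButterCMS Write API docs and your write-enabled token before relying on anything like this.

# sample.py -- a minimal MCP server that exposes one ButterCMS tool.
import os

import requests
from fastmcp import FastMCP

mcp = FastMCP("butter")

@mcp.tool()
def create_blog_post(title: str, body_html: str) -> str:
    """Create a draft blog post in ButterCMS and return a confirmation message."""
    response = requests.post(
        "https://api.buttercms.com/v2/posts/",  # illustrative endpoint
        headers={"Authorization": f"Token {os.environ['BUTTER_WRITE_TOKEN']}"},
        json={"title": title, "body": body_html, "status": "draft"},
        timeout=10,
    )
    response.raise_for_status()
    return f"Draft post '{title}' created in ButterCMS."

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which is what Claude Desktop expects

The @mcp.tool() decorator registers create_blog_post so the client can discover it and call it with structured arguments.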

  2. Once your server is running, you need to tell your client where to find it. For Claude Desktop, open the config file (~/Library/Application Support/Claude/claude_desktop_config.json) and add an entry to the mcpServers object. For example:

{
    "mcpServers": {
        "butter": {
            "command": "uv",
            "args": [
                "--directory",
                "/path/to/butter",
                "run",
                "sample.py"
            ]
        }
    }
}
  3. After saving the config file, restart Claude Desktop so it can read the new settings and try to connect to your server.

  4. To confirm that Claude has successfully connected to your MCP server, perform these steps:

    1. Open Claude Desktop and click the hammer icon on the bottom right of the chat box.

    2. You should be able to see the new tools in the Available MCP tools window.

    3. You can also check your server logs for connection messages originating from Claude.

  5. Now you can test the integration by asking Claude to create a new post. For example, you could say:

“Create a new blog post in ButterCMS about "The Benefits of Headless CMS for AI Integration". Include an introduction, three main benefits, and a conclusion.”

Claude should generate the content and then use the required tool to submit it to your ButterCMS instance.

Conclusion

With MCP, many of the roadblocks around using LLMs in real apps become easier to handle. Whether you want to give your model access to the latest documents, build context-aware features, or connect AI to tools like ButterCMS, MCP offers a simple and clean way to do so.

Author

Maab is an experienced software engineer who specializes in explaining technical topics to a wider audience.