MCP API Documentation: The Complete Guide

The Model Context Protocol (MCP) creates a standardized way for large language models to interact with external tools and data sources. Think of it like a USB port for AI: it allows different AI systems to connect with various tools using a consistent interface.

In a typical MCP setup, there are three key components: a host application running the LLM with an embedded MCP client, the MCP server, and the underlying API or data source:

[LLM + MCP Client] <---> [MCP Server] <---> [API / Data Source]

This guide explains how MCP API documentation works, how it differs from traditional documentation, and how to implement it for your own APIs.

Why use MCP API documentation

Traditional API documentation is designed for human developers. It includes explanations, examples, and references that people read to understand how to use an API. MCP documentation, on the other hand, is structured for AI consumption.

When an LLM needs to use an API, it doesn't read paragraphs of text; it needs structured information about endpoints, parameters, and response formats. MCP provides this structure through standardized JSON schemas.
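
For example, a single MCP tool definition is just a name, a description, and a JSON Schema for its inputs. A minimal sketch (the tool itself is hypothetical):

// A hypothetical MCP tool definition: a name, a description the model
// reads, and a JSON Schema describing the accepted input
const getUserTool = {
  name: 'get_user',
  description: 'Fetch a user record by its ID',
  inputSchema: {
    type: 'object',
    properties: {
      id: { type: 'string', description: 'The user ID' }
    },
    required: ['id']
  }
};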

The benefits of using MCP documentation include:

  • Direct AI integration: LLMs can understand and use your API without human assistance

  • Standardized format: Works across different AI platforms that support the MCP protocol

  • Reduced development effort: Can be generated from existing OpenAPI specifications

  • Future compatibility: Prepares your API for emerging AI use cases

MCP documentation makes your API accessible to AI tools like Claude, ChatGPT, and other systems that support the Model Context Protocol.

How to create MCP documentation from OpenAPI specs

Converting OpenAPI specifications to MCP documentation involves transforming the structure to match what MCP expects. Here's how to approach this process:

Combining parameter types

OpenAPI separates parameters into different locations: path, query, header, and body. MCP tools need all these combined into a single schema.

For example, an endpoint that accepts both query parameters and a JSON body would need these merged into one schema that represents all possible inputs.

// Combine parameters from different locations into one MCP input schema.
// `queryParams` and `requestBody` here are JSON Schema fragments already
// extracted from the OpenAPI operation's parameters and request body.
const combinedSchema = {
  type: "object",
  properties: {
    ...queryParams.properties,
    ...requestBody.properties
  },
  // Fall back to empty arrays in case either source has no required fields
  required: [...(queryParams.required ?? []), ...(requestBody.required ?? [])]
};

This merging process often reveals naming conflicts that need resolution. For instance, if both the query parameters and request body have a field called "id", you'll need to rename one or establish a convention.
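
One possible convention is to prefix colliding body fields, as in the sketch below; the "body_" prefix and the field names are illustrative, not a standard:

// Merge query and body properties, prefixing body fields whose names
// collide with a query parameter (the prefix choice is arbitrary)
function mergeProperties(queryProps, bodyProps) {
  const merged = { ...queryProps };
  for (const [name, schema] of Object.entries(bodyProps)) {
    const key = name in merged ? `body_${name}` : name;
    merged[key] = schema;
  }
  return merged;
}

// mergeProperties({ id: { type: 'string' } }, { id: { type: 'integer' } })
// => { id: { type: 'string' }, body_id: { type: 'integer' } }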

Resolving $ref references

OpenAPI specs frequently use $ref to point to reusable components. MCP tools need fully self-contained schemas without external references.

This means you need to "dereference" the schema - replacing each reference with its actual content. For simple cases, this is straightforward, but it gets complicated with:

  • Circular references (schemas that refer to themselves)

  • Deep nesting of references

  • References to external files or URLs

Tools like json-schema-ref-parser can help resolve these references automatically:

// The parser is now published as @apidevtools/json-schema-ref-parser
const $RefParser = require('@apidevtools/json-schema-ref-parser');
const schema = require('./openapi.json');

async function createMCPSchema() {
  // Replaces every $ref with the content it points to, including
  // refs into external files and URLs
  const resolved = await $RefParser.dereference(schema);
  // Now use the resolved schema for your MCP tool
  return resolved;
}
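
One caveat: dereferencing a schema with circular references produces circular object references in memory, which can no longer be serialized back to JSON. For those cases, $RefParser.bundle() keeps internal $ref pointers intact, or you can cap the recursion depth when inlining.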

Handling schema size limitations

Different MCP clients have varying limitations on schema size and complexity. Claude Desktop handles larger schemas well, but other clients might have stricter limits.

For large APIs, consider:

  1. Breaking down complex schemas into simpler components

  2. Exposing only the most relevant endpoints as MCP tools (see the sketch after this list)

  3. Creating multiple specialized MCP servers instead of one large one
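
As a sketch of point 2, you might filter the OpenAPI spec to an allowlist of operations before converting anything; the operation IDs here are hypothetical:

// Keep only an allowlist of operations before generating MCP tools
const EXPOSED = new Set(['listUsers', 'createUser']);

function filterSpec(spec) {
  const paths = {};
  for (const [route, operations] of Object.entries(spec.paths)) {
    // Keep only HTTP methods whose operationId is on the allowlist
    const kept = Object.fromEntries(
      Object.entries(operations).filter(([, op]) => EXPOSED.has(op?.operationId))
    );
    if (Object.keys(kept).length > 0) paths[route] = kept;
  }
  return { ...spec, paths };
}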

Testing with multiple clients helps identify compatibility issues early in the development process.

Setting up an MCP server

Once you've prepared your API documentation in MCP format, you need to set up a server that exposes these tools to LLMs. Here's a simplified process:

  1. Choose an MCP server implementation that fits your technology stack

  2. Configure the server with your transformed schemas

  3. Set up authentication if your API requires it

  4. Deploy the server where it can be accessed by MCP clients

A basic MCP server configuration might look like this:

// Illustrative only: `mcp-server` stands in for whatever MCP server
// framework you use (the official SDK is @modelcontextprotocol/sdk)
const { createServer } = require('mcp-server');
const apiSchemas = require('./mcp-schemas.json');

const server = createServer({
  tools: apiSchemas,
  auth: {
    type: 'apiKey',
    headerName: 'Authorization'
  }
});

// Expose the tools over HTTP so MCP clients can connect
server.listen(3000, () => {
  console.log('MCP server running on port 3000');
});

After setting up your server, validate it by connecting with an MCP client like Claude Desktop and testing various tool invocations.
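
For a quick smoke test without a full client, you can also send the server the JSON-RPC message an MCP client would use to enumerate tools (sent after the usual initialize handshake); the message shape comes from the MCP spec:

// The JSON-RPC 2.0 request an MCP client sends to list a server's tools
const listToolsRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/list'
};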

Best practices for MCP API documentation

Based on our experience converting complex OpenAPI specs to MCP servers at Stainless, here are some recommended practices:

  • Start small: Begin with a subset of endpoints before attempting to convert your entire API

  • Test thoroughly: Validate with multiple MCP clients to ensure compatibility

  • Maintain simplicity: Simpler schemas generally work better across different clients

  • Document limitations: Note any features of your API that don't translate well to MCP

  • Update regularly: Keep your MCP documentation in sync with API changes

For large APIs, consider creating specialized MCP servers for different API sections rather than trying to expose everything through one server.

Tools and resources for MCP documentation

Several tools can help with creating and managing MCP documentation:

  • Model Context Protocol (modelcontextprotocol.io) - Official MCP documentation

  • JSON Schema validators such as Ajv for testing schema validity (see the example after this list)

  • OpenAPI-to-MCP converters (like what we've built at Stainless)

  • MCP client simulators for testing without connecting to actual LLMs

These resources make it easier to create, validate, and maintain MCP documentation for your APIs.
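
For example, a validator like Ajv can check sample tool inputs against a generated schema before you wire up a server; this sketch assumes Ajv is installed and uses an inline schema:

// Compile the tool's input schema and validate example arguments
const Ajv = require('ajv');

const ajv = new Ajv();
const validate = ajv.compile({
  type: 'object',
  properties: { id: { type: 'string' } },
  required: ['id']
});

console.log(validate({ id: 'user_123' })); // true
console.log(validate({}));                 // false: missing required "id"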

The future of MCP documentation

The MCP ecosystem is still evolving. Current limitations around schema size and tool loading will likely improve as the protocol matures. Future developments may include:

  • Dynamic tool loading based on context

  • Improved schema compression techniques

  • Standardized patterns for common API types

  • Better handling of authentication flows

Staying current with MCP developments helps ensure your API documentation remains compatible with the latest clients and practices.

At Stainless, we're working on tools to automate the conversion of OpenAPI specifications to MCP-compatible formats, making it easier to expose your APIs as tools for LLMs. Our approach handles the complexities of schema transformation, reference resolution, and client compatibility testing.

FAQs about MCP API documentation

What is the difference between MCP documentation and traditional API documentation?

MCP documentation uses structured JSON schemas designed for AI consumption, while traditional documentation uses human-readable text, examples, and explanations. MCP enables direct API interaction by LLMs without human interpretation.

How can I convert my existing OpenAPI specifications to MCP format?

You can convert OpenAPI specs to MCP format by combining parameter types, resolving $ref references, and ensuring schema compatibility with MCP clients. Tools like those from Stainless can automate this process.

What should I do if my API has hundreds of endpoints?

For large APIs, consider grouping related endpoints into separate MCP servers, exposing only the most commonly used endpoints, or implementing dynamic loading mechanisms that present relevant tools based on context.

Can I test my MCP documentation without connecting to an actual LLM?

Yes, you can use MCP client simulators or testing tools to validate your documentation structure, schema validity, and server responses without connecting to production LLM services.
