The Model Context Protocol (MCP) is an open standard that lets AI models interact with your API, while a traditional Software Development Kit (SDK) is a library for human developers. An SDK provides language-specific methods for developers to call your API directly from their applications. An MCP server exposes a language-agnostic set of AI-discoverable "tools" that let language models use your API on a person's behalf.
This guide breaks down the architectural differences, authentication trade-offs, and schema limitations you'll encounter when building both interfaces. You'll learn when to prioritize an SDK versus an MCP server, how to handle the unique constraints of AI clients, and why most mature APIs need both to serve their human developers and AI users effectively.
What is MCP and how does it differ from traditional SDKs
The core difference between MCP and a traditional SDK lies in the intended user. A traditional SDK is built for a human developer to integrate your API into their application using a specific programming language like Python or TypeScript. An MCP server, on the other hand, is built for an AI agent to understand and interact with your API.
Think of it this way: your SDK provides methods like `client.users.create()`. Your MCP server exposes a `create_user` tool with a description and a JSON schema for its inputs. The developer calls the SDK method directly; the LLM reasons about the tool's description to decide when and how to call it.
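For concreteness, here's roughly what such a tool looks like when the server advertises it. The `name`/`description`/`inputSchema` shape follows MCP's `tools/list` response; the specific fields of the `create_user` tool are a hypothetical example:

```typescript
// What the LLM sees when it lists tools: a name, a natural-language
// description, and a JSON Schema for the inputs -- no language-specific
// types involved. (create_user's fields are a hypothetical example.)
const createUserTool = {
  name: "create_user",
  description: "Create a new user account. Requires a unique email address.",
  inputSchema: {
    type: "object",
    properties: {
      email: { type: "string", description: "The new user's email address" },
      name: { type: "string", description: "Optional display name" },
    },
    required: ["email"],
  },
};
```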
This distinction drives different design choices. SDKs prioritize idiomatic code, strong typing, and developer ergonomics. MCP servers prioritize clear tool descriptions, simple schemas, and managing the LLM's context window. We believe most modern APIs need both, which is why you can generate a production-ready SDK and an MCP server from a single OpenAPI spec.
MCP architecture vs traditional SDK patterns
The architectural patterns for SDKs and MCP servers reflect their different consumers. An SDK call is a direct HTTP request from the client's application to your API. An MCP interaction involves an extra layer of communication between the LLM's client and your MCP server.
This extra server layer in MCP enables powerful features like dynamic tool discovery, but it also introduces new considerations. You have to think about the performance of the MCP server itself and the size of the tool schemas you send to the LLM.
| Aspect | Traditional SDK | MCP Server |
|---|---|---|
| Request Flow | Client application → your API (direct HTTP) | LLM client → MCP server → your API |
| Interface | Language-specific methods and types | Language-agnostic JSON-RPC tools |
| Deployment | Published as a package (e.g., npm) | Deployed as a binary or container |
| State | Stateless HTTP requests | Potentially stateful, long-lived connection |
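The "language-agnostic JSON-RPC tools" row is worth making concrete: every MCP tool invocation ultimately travels as a JSON-RPC 2.0 message. A representative `tools/call` request looks like this (the tool name and arguments are illustrative):

```typescript
// The JSON-RPC 2.0 message an MCP client sends to invoke a tool,
// regardless of transport. (Tool name and arguments are hypothetical.)
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "create_task",
    arguments: { name: "Buy milk" },
  },
};
```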
MCP client implementation compared to SDK client usage
For a developer, using a traditional SDK is straightforward. You import the package, instantiate a client with your auth token, and start calling methods. Setting up an MCP client is a bit more involved because it requires configuring a transport, which defines how the client and server processes talk to each other.
Here's a conceptual comparison of the client-side code:
```typescript
// Traditional SDK usage (with a Stainless-generated SDK)
import { MyApi } from "@my-org/my-api";

const client = new MyApi({
  // Suppose the user has an auth token in an env variable
  authToken: process.env.MY_API_KEY,
});

const task = await client.tasks.create({ name: "Buy milk" });
```
```typescript
// MCP client usage (with the Vercel AI SDK)
// Replace "@my-org/my-api-mcp" with your actual MCP package name.
import { experimental_createMCPClient, generateText } from "ai";
import { Experimental_StdioMCPTransport } from "ai/mcp-stdio";
import { openai } from "@ai-sdk/openai";

// Configure the transport to your Stainless-generated MCP server
const mcpClient = await experimental_createMCPClient({
  transport: new Experimental_StdioMCPTransport({
    command: "npx",
    args: ["-y", "@my-org/my-api-mcp"],
  }),
});

// The LLM reasons over your MCP tools to fulfill the prompt
const { text } = await generateText({
  model: openai("gpt-4-turbo"),
  tools: await mcpClient.tools(),
  prompt: "Create a task to buy milk",
});
```
As the example shows, the MCP client can't talk to the server until a transport is configured. MCP supports several transport options, each suited to a different deployment model.
Server-sent events transport
Server-Sent Events (SSE) is a common transport for remote MCP servers. It uses a standard HTTP connection to stream messages, which is firewall-friendly and works well for web-based AI clients connecting to a deployed server.
Standard I/O transport
The standard input/output (stdio) transport is ideal for local development. It allows an MCP client, like one running in your IDE, to communicate with an MCP server running as a local command-line process.
Custom transport options
The MCP specification is flexible, allowing for other transports like WebSockets or even in-process connections for testing. This flexibility ensures MCP can adapt to various communication needs as the ecosystem evolves.
Schema limits in MCP
A significant practical challenge with MCP is that different AI models and clients have varying, and often undocumented, support for the full JSON Schema specification. A schema that works perfectly for Claude might fail with an OpenAI agent.
Common limitations include:
- Root unions: Some clients, like OpenAI's agents, don't support `anyOf` at the root of a schema.
- `$ref` pointers: Support for `$ref` pointers to definitions within the schema can be inconsistent.
- Complex types: Nested objects and recursive schemas can confuse some models.
In a traditional SDK, we can generate precise, strongly-typed models that reflect your OpenAPI spec perfectly. For MCP, schemas often need to be simplified or transformed to ensure broad compatibility. You can generate schemas that adapt to specific client limitations, for example, by automatically inlining `$ref` pointers or splitting a tool with a root union into multiple, simpler tools.
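As a rough illustration of the `$ref` case, here's a minimal sketch of an inlining pass. It assumes only local `#/...` pointers and non-recursive schemas; a production version must also handle cycles, external refs, and JSON Pointer escaping:

```typescript
// Recursively replace local "#/..." $ref pointers with the schemas
// they point to, so the output contains no $ref at all.
// Sketch only: assumes local, non-recursive refs and unescaped keys.
type Schema = { [key: string]: any };

function inlineRefs(schema: any, root: Schema): any {
  if (Array.isArray(schema)) return schema.map((item) => inlineRefs(item, root));
  if (typeof schema !== "object" || schema === null) return schema;

  if (typeof schema.$ref === "string" && schema.$ref.startsWith("#/")) {
    // Walk the pointer path, e.g. "#/definitions/User" -> root.definitions.User
    const target = schema.$ref
      .slice(2)
      .split("/")
      .reduce((node: any, key: string) => node[key], root);
    return inlineRefs(target, root);
  }

  const result: Schema = {};
  for (const [key, value] of Object.entries(schema)) {
    result[key] = inlineRefs(value, root);
  }
  return result;
}
```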
For very large APIs, a dynamic tools mode can expose meta-tools like `list_api_endpoints` and `invoke_api_endpoint`, allowing the LLM to discover and use your API on demand without loading the entire schema at once. The lessons learned from converting complex OpenAPI specs to MCP servers show this approach helps manage schema size limitations effectively.
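Described to the LLM, those meta-tools might look something like this (the shapes below are hypothetical, following the MCP tool-definition format):

```typescript
// Hypothetical definitions for the two meta-tools in a dynamic tools
// mode: one to search the API surface, one to call a chosen endpoint.
const dynamicTools = [
  {
    name: "list_api_endpoints",
    description:
      "Search the API's endpoints by keyword; returns matching endpoint names and summaries.",
    inputSchema: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search term, e.g. 'tasks'" },
      },
    },
  },
  {
    name: "invoke_api_endpoint",
    description:
      "Invoke an endpoint returned by list_api_endpoints with a JSON object of arguments.",
    inputSchema: {
      type: "object",
      properties: {
        endpoint: { type: "string", description: "Endpoint name from list_api_endpoints" },
        args: { type: "object", description: "Arguments for the endpoint" },
      },
      required: ["endpoint"],
    },
  },
];
```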
Authentication trade-offs between OAuth and API keys
Authentication is another area with key differences. SDKs typically use a single, long-lived API key provided by the developer via an environment variable. This is simple and effective for server-to-server communication.
Remote MCP servers, especially those used by third-party web applications, cannot rely on this model. They require a user-delegated authentication flow, which is where OAuth 2.0 comes in.
- API keys: Best for SDKs and local MCP servers where the developer controls the environment. A simple, direct authentication method.
- OAuth 2.0: Essential for remote MCP servers. The user is redirected to a consent screen where they grant the AI application permission to access their data via your API on their behalf.
Implementing an OAuth flow can be complex. To simplify this, you can use a platform that generates a deployable application, like a Cloudflare Worker, with the OAuth logic pre-built. This allows you to support remote MCP use cases without building the entire authentication and token management flow from scratch.
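To make the division of labor concrete, here's a minimal sketch of a remote MCP server gating every request on a user-delegated access token, Cloudflare Worker style. `verifyAccessToken` and `handleMcpRequest` are hypothetical helpers standing in for your identity provider's token validation and your actual MCP request handler:

```typescript
// Sketch: a remote MCP server that requires a user-delegated OAuth
// access token on every request. A 401 response signals the MCP
// client to run the OAuth flow and retry with a fresh token.
declare function verifyAccessToken(token: string): Promise<boolean>; // hypothetical
declare function handleMcpRequest(req: Request, token: string): Promise<Response>; // hypothetical

export default {
  async fetch(request: Request): Promise<Response> {
    const auth = request.headers.get("Authorization") ?? "";
    const token = auth.startsWith("Bearer ") ? auth.slice("Bearer ".length) : null;

    if (!token || !(await verifyAccessToken(token))) {
      return new Response("Unauthorized", {
        status: 401,
        headers: { "WWW-Authenticate": 'Bearer error="invalid_token"' },
      });
    }

    // Token is valid: serve the MCP request on the user's behalf
    return handleMcpRequest(request, token);
  },
};
```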
When should API builders choose MCP over traditional SDKs
The choice isn't MCP or an SDK; for a mature API, the answer is often both. Each serves a different audience and purpose. Your human developers need a great SDK, and your AI users need a well-designed MCP server.
Here’s a simple guide to help you prioritize:
- Start with a traditional SDK: If you are building a new API, a high-quality, idiomatic SDK is fundamental for developer adoption and reduces your support load; your API isn't finished until the SDK ships.
- Add an MCP server: When you want to enable AI-native experiences or integrate with agentic workflows, an MCP server is the next logical step.
- Build them together: Using a generator like Stainless that produces both from your OpenAPI spec ensures your SDK and MCP server stay in sync as your API evolves.
Start by providing a first-class SDK for your developers. Once that's in place, layer on an MCP server to open up your product to the growing world of AI agents.
Frequently asked questions about MCP for API builders
What is the difference between MCP and an AI SDK?
MCP is a protocol standard for how AIs interact with tools. An AI SDK, like Vercel's, is a library that helps developers build AI applications, including clients that can speak the MCP protocol.
Does OpenAI support MCP?
OpenAI's function calling is conceptually similar to MCP tools but not directly compliant with the protocol. You can use a generator that applies the necessary schema transformations to make your tools compatible with OpenAI agents.
Can I use MCP and traditional SDKs together?
Yes, and it's the recommended approach. Your SDK serves human developers, and your MCP server serves AI agents. Using a unified generation tool ensures they are both derived from the same API definition.
Ready to build a first-class developer experience for both humans and AI? Get started for free.