Generate an MCP server from an OpenAPI spec
MCP Server generation is currently experimental, so we may not yet handle every type of OpenAPI spec. Contact support if you have any questions or feedback.
The Model Context Protocol (MCP) standardizes how applications provide context and actions to LLMs.
You can use Stainless to automatically generate an MCP Server that wraps your API so that AIs like Claude can retrieve data and take actions.
Enabling MCP Server generation
If you don't have a Stainless project yet, create one. MCP Servers are generated from TypeScript, so choose TypeScript as your first language.
You can use our legacy `node` target instead of the `typescript` target if you haven't yet migrated.
Once you have a project, click "Add SDKs" and choose "MCP Server". This adds an `mcp_server` section to the TypeScript target in your Stainless config:
targets:
  typescript: # or node
    package_name: my-org-name
    production_repo: null
    publish:
      npm: false
    options:
      mcp_server:
        package_name: my-org-name-mcp # this is the default
        enable_all_resources: true
The project will generate a subpackage within your TypeScript SDK at `packages/mcp-server` that can be independently published and imported by users.
Deploying
To deploy the MCP Server, publish your SDK by setting up a production repo and making a release.
The MCP server is published at the same time as your main TypeScript SDK and with the same version, but in a separate NPM package.
By default, the package name is `<my-npm-package>-mcp`, but you can customize it in the target options.
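Once the release is out, users can run the server directly from npm. As a minimal sketch, assuming the default package name from the config above:
# Run the published MCP server locally (assumes the default package name shown above)
npx -y my-org-name-mcp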
Installation via Claude Desktop
Consult the `README.md` file within the `packages/mcp-server` directory of your TypeScript SDK for more specific instructions for your project.
See the Claude Desktop user guide for setup.
Once it's set up, find your `claude_desktop_config.json` file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Add the following value to your `mcpServers` section. Make sure to provide any necessary environment variables (like API keys) as well.
{
  "mcpServers": {
    "my_org_api": {
      "command": "npx",
      "args": ["-y", "my-org-mcp"],
      "env": {
        "MY_API_KEY": "123e4567-e89b-12d3-a456-426614174000"
      }
    }
  }
}
Customization
Choosing endpoints to expose
By default, all endpoints will be generated, but if some endpoints don't make sense for MCP, you can disable specific resources or endpoints.
Additionally, you can set `enable_all_resources` to `false` to opt in only certain resources or endpoints. Note: end-users can also filter which endpoints they import into their MCP client.
Example with specific resources and endpoints opted in:
resources:
  my_resource:
    mcp: true # enable MCP generation for all endpoints in this resource
    methods: ...
  another_resource:
    methods:
      create:
        mcp: true # enable this endpoint for MCP generation
        endpoint: post /v1/create
      update: ...
Fine-tuning tool names and descriptions
By default, tool names and descriptions are generated from your OpenAPI spec and Stainless config. However, if you want to provide additional context to LLMs for certain tools, you can override these values:
resources:
  my_resource:
    methods:
      create:
        endpoint: post /v1/create
        mcp:
          tool_name: my_custom_tool_name
          description: |
            This is an LLM-specific tool description
End-users filtering the set of tools that are imported
Sometimes with a large API, it doesn't make sense to import every tool into an MCP client at once (it can be confusing to have too many for an LLM to choose from, and context windows are limited). Therefore, end-users can flexibly filter which tools they want to import.
They can do this by providing additional arguments to your MCP server:
- `--tool` to include a specific tool by name
- `--resource` to include all tools under a specific resource; it can have wildcards, e.g. `my.resource*`
- `--operation` to include just read (get/list) or just write operations
- `--tag` to include endpoints that have a custom tag associated with them
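For example (a sketch using the flags above, with the package name from the earlier Claude Desktop config):
# Import only read (get/list) operations
npx -y my-org-mcp --operation=read
# Import all tools under one resource (wildcards supported)
npx -y my-org-mcp --resource=my.resource*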
Configuring custom tags
There are no tags on tools by default, but if you want to provide a custom grouping of tools for end users to filter on, you can tag resources or endpoints:
resources:
  my_resource:
    mcp:
      tags:
        - my_custom_resource_tag
    methods:
      create:
        mcp:
          tags:
            - my_custom_endpoint_tag # also inherits my_custom_resource_tag
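End-users can then filter on these tags with the `--tag` argument described above, for example:
# Import only the tools tagged my_custom_resource_tag
npx -y my-org-mcp --tag=my_custom_resource_tag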
Enabling Dynamic Tools for Larger APIs
Large APIs with many endpoints can be difficult for LLMs to work with effectively when all endpoints are exposed as individual tools. The MCP server provides a "dynamic tools" mode to address this issue.
When you specify `--tools=dynamic` to the MCP server, instead of exposing one tool per API endpoint, it exposes just three powerful meta-tools:
- `list_api_endpoints` - Discovers available endpoints, with optional filtering by search query
- `get_api_endpoint_schema` - Gets detailed schema information for a specific endpoint
- `invoke_api_endpoint` - Executes any endpoint with the appropriate parameters
This approach allows the LLM to dynamically discover, learn about, and invoke endpoints as needed, without requiring the entire API schema to be loaded into its context window at once. The LLM will use these tools together to search for, look up, and call endpoints on demand.
For larger APIs (with more than 50 endpoints), dynamic tools are enabled by default. You can override this by specifying `--tools=all` or using other filtering options.
# Enable dynamic tools mode
npx -y my-org-mcp --tools=dynamic
# Use both dynamic tools and specific endpoints
npx -y my-org-mcp --tools=dynamic --resource=payments
Note that due to the indirect nature of dynamic tools, the LLM might struggle a bit more to provide the correct properties than when endpoints are exposed as individual tools. You can enable both approaches simultaneously by explicitly including tools with `--tool` or other filters, or by specifying `--tools=all` to include all tools.
Client Capabilities
Different MCP clients (and the LLMs behind them) have varying levels of support for complex JSON schemas. The MCP server can automatically adjust tool schemas to work around these limitations.
You can specify the client you're using with the `--client` argument:
# Configure for Claude AI
npx -y my-org-mcp --client=claude
Valid client values include:
- `openai-agents` - OpenAI's agents platform
- `claude` - Claude AI web interface
- `claude-code` - Claude Code CLI
- `cursor` - Cursor editor
For other clients or to fine-tune capabilities, you can manually enable or disable specific capabilities:
# Disable support for $ref pointers and set maximum tool name length
npx -y my-org-mcp --no-capability=refs --capability=tool-name-length=40
Available capabilities include:
- `top-level-unions` - Support for top-level union types in schemas
- `valid-json` - Support for correctly parsing JSON string arguments
- `refs` - Support for $ref pointers in schemas
- `unions` - Support for union types (anyOf) in schemas
- `formats` - Support for format validations in schemas
- `tool-name-length=N` - Maximum length for tool names
For more detailed information about capabilities, run:
npx -y my-org-mcp --describe-capabilities
Deploy a Remote MCP Server
In the examples above, your MCP Server is a local program that accepts environment variables to handle authorization. This works great for developers using local clients like Claude Desktop or Cursor. However, this doesn't work well for end-users who might want to use MCP from a web-app (like claude.ai).
Deployment choices
Remote MCP servers that require authentication must support OAuth instead of API keys. You have a few options to support a remote MCP server based on your use-case:
1. Direct API token collection - Collect necessary API tokens during the OAuth redirect flow. Users provide their own API keys through a form during authorization.
   - Best for: APIs where users already have their own API keys
   - Example: Users enter their OpenAI API key during OAuth consent
2. Public MCP server - Bake a special API key into your server's secrets and publish a "public" MCP server that users can use without individual authentication.
   - Best for: Read-only APIs or when you want to provide free access to certain endpoints
   - Example: A weather API where you pay for all requests on behalf of users
3. OAuth provider integration - Support an existing OAuth provider (like Google, GitHub) and exchange that auth token for your API keys.
   - Best for: When your API already supports OAuth or when you want to leverage existing identity providers
   - Example: Users sign in with GitHub, and you use their GitHub token to authenticate API requests
4. Custom authentication flow - Implement a custom redirect page that handles your specific authentication needs.
   - Best for: Complex authentication scenarios or when integrating with enterprise SSO systems
   - Example: Redirect to your own login page that handles multi-factor authentication
Cloudflare supports all of these use-cases - we generate option #1 by default (direct API token collection), but you can customize the worker to support other OAuth schemes or your own custom authentication flow using custom code.
Setup with Cloudflare
Start by adding `generate_cloudflare_worker: true` to your MCP options:
options:
  mcp_server:
    enable_all_resources: true
    generate_cloudflare_worker: true
This generates a `packages/mcp-server/cloudflare-worker` directory, which contains a Cloudflare worker repo that can be deployed.
It implements OAuth and collects the API keys needed to initialize the SDK client during the redirect flow.
You will need to release your SDK and MCP packages before deploying the Cloudflare worker so it can import the dependencies from npm.
Consult the generated README file within the `cloudflare-worker` directory for more info on how to run and test locally.
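As a rough sketch of that local workflow (the generated README is authoritative; this assumes the standard Wrangler CLI used for Cloudflare workers):
# Run the worker locally (a sketch; exact commands may differ from your generated README)
cd packages/mcp-server/cloudflare-worker
npm install
npx wrangler dev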
Once you have published your SDK and MCP packages, you can deploy to Cloudflare using the "Deploy to Cloudflare" button in the README.
The deploy button provides one-click deployment directly from your GitHub repository:
- It automatically configures your Cloudflare worker from the template
- It copies the repo into your GitHub org for further modification
Protocol Support
The Cloudflare worker supports two protocols for MCP communication:
- Streaming HTTP (`/mcp` endpoint) - The newer, recommended protocol
- Server-Sent Events (SSE) (`/sse` endpoint) - Legacy protocol for backward compatibility
Most modern MCP clients will automatically use the appropriate protocol.
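If you want to try the remote server from a stdio-only client, one common approach (an assumption on our part, not something the generated worker requires) is to bridge through the `mcp-remote` package, pointed at your deployed worker's URL:
# Bridge a stdio-only MCP client to the remote /mcp endpoint (hypothetical worker URL)
npx -y mcp-remote https://my-worker.example.workers.dev/mcp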
Configuring ServerConfig
A default `ServerConfig` is generated in `src/index.ts` and contains the properties needed to initialize the client SDK. These properties are collected from users during the OAuth consent screen. You can tweak these as desired if you want to add custom logic when initializing the MCP server.
/**
 * The information displayed on the OAuth consent screen
 */
const serverConfig: ServerConfig = {
  orgName: 'MyPackageMcp',
  instructionsUrl: undefined, // Set a url for where you show users how to get an API key
  logoUrl: undefined, // Set a custom logo url to appear during the OAuth flow
  clientProperties: [
    {
      key: 'authToken',
      label: 'Auth Token',
      description: 'The token to use for authentication',
      required: true,
      default: undefined,
      placeholder: '123e4567-e89b-12d3-a456-426614174000',
      type: 'string',
    },
    {
      key: 'orgId',
      label: 'Org ID',
      description: 'The organization ID context',
      required: true,
      default: undefined,
      placeholder: 'my_org',
      type: 'string',
    },
  ],
};
Additionally, you can optionally set custom instructions and a custom logo:
- `instructionsUrl` is a URL that points to instructions for getting an API key, if you have them.
- `logoUrl` is a URL that points to a logo image to display during the consent screen.
Advanced Field Types
The `clientProperties` array supports various input types for the OAuth consent screen:
- `string` - Text input field (default)
- `number` - Numeric input field
- `password` - Password input field (masked)
- `select` - Dropdown menu with predefined options
These will be collected during the OAuth consent screen and persisted into `props`, which is available during the `init` function in your `McpAgent` class.
OAuth Consent Screen Features
The generated OAuth consent screen automatically includes:
- Client capability detection based on the selected MCP client
- Read-only mode toggle that filters out write operations
- Dynamic tools option for large APIs
- Help tooltips for each configuration field
Customizing
You can customize which endpoints get served from the Cloudflare worker by importing Stainless' generated tools directly and filtering, modifying, or adding to them.
import { McpOptions, initMcpServer, server, ClientOptions, endpoints } from '[my-package-mcp]/server';
import { Endpoint } from '[my-package-mcp]/tools.mjs';
import StainlessStore from '[my-package]';

export class MyMCP extends McpAgent<Env, unknown, MCPProps> {
  server = server;

  async init() {
    const newEndpoints: Endpoint[] = [
      {
        metadata: {
          resource: 'user',
          operation: 'read',
          tags: ['user'],
        },
        tool: {
          name: 'get_user',
          description: 'Get a user by ID',
          inputSchema: {
            type: 'object',
            properties: {
              id: {
                type: 'string',
              },
            },
          },
        },
        handler: async (client: MyMcpClient, args: any) => {
          const user = await client.get(`/users/${args.id}`);
          return {
            content: [
              {
                type: 'text',
                text: JSON.stringify(user),
              },
            ],
          };
        },
      },
    ];

    initMcpServer({
      server: this.server,
      clientOptions: this.props.clientProps,
      mcpOptions: this.props.clientConfig,
      endpoints: [...endpoints, ...newEndpoints],
    });
  }
}
Static Assets and Styling
The Cloudflare worker template includes support for serving static assets from the `./static/` directory:
- `static/home.md` - The landing page content (rendered as Markdown)
- Custom CSS and JavaScript files
- Images and other assets
You can customize the appearance and branding by:
- Modifying the Tailwind CSS configuration in the worker
- Adding custom static assets to the `./static/` directory
- Updating the home page content to match your brand
The worker automatically serves these files and renders Markdown content with syntax highlighting and responsive design.
Next Steps
MCP Server support is still experimental, so please reach out to us if you have any feedback or ideas for how we can make it work better for you!