Generate an MCP server from an OpenAPI spec
MCP server generation is currently experimental, so we may not handle all types of OpenAPI spec yet. Contact support if you have any questions or feedback.
The Model Context Protocol is an open, JSON-RPC-based standard that defines how applications provide context and actions to large language models (LLMs). It acts as a universal adapter to external data (like Google Drive, Git repos, or databases) and capabilities (like sending emails or running SQL queries).
Stainless automatically generates MCP servers from your OpenAPI spec, enabling your API to work seamlessly with Claude Desktop, Cursor, and other MCP clients.
Enable MCP server generation
If you don't have a Stainless project yet, create one. The MCP server will be a subpackage of your TypeScript SDK, so choose TypeScript as your first language.
You can use our legacy node target instead of the typescript target if you haven't yet migrated.
Once you have a project, click "Add SDKs" and choose "MCP Server". This will update your Stainless config like so:
targets:
  typescript: # or node
    package_name: my-org-name
    production_repo: null
    publish:
      npm: false
    options:
      mcp_server:
        package_name: my-org-name-mcp # this is the default
        enable_all_resources: true
The project will generate a subpackage within your TypeScript SDK at packages/mcp-server.
Publish your server
Once your server is ready to use, you can publish it in several places for end users to find.
Publish to GitHub
For every release, we'll create an MCPB file, which allows your users to install your MCP server locally with one click. Provide a link to this file or include it with your installer.
Publish to NPM
To publish your MCP server to NPM, set up a production repo and publish your SDK by creating a release.
The MCP server is published at the same time as your main TypeScript SDK and with the same version, but in a separate NPM package.
By default, the package name is <your-npm-package>-mcp, but you can customize this name in the target options.
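For example, to publish the MCP server under a different package name, set package_name under the mcp_server options (the name shown here is illustrative):

targets:
  typescript:
    options:
      mcp_server:
        package_name: my-custom-mcp-name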
Once your NPM package is published, it can be run from the command line using npx:
# Pull and run the latest version
npx -y my-org-name-mcp@latest
# Run a specific version
npx -y my-org-name-mcp@1.2.3
# With environment variables
export MY_API_KEY=your-api-key
npx -y my-org-name-mcp@latest
Publish Docker images
For easier distribution and deployment, you can enable Docker image publishing for your MCP server. This creates a containerized version of the server that users can pull and run with Docker.
Set up Docker MCP in your Stainless config
Under the TypeScript target in your Stainless config, add the mcp_server.publish.docker option with your Docker image name:
targets:
  typescript:
    options:
      mcp_server:
        enable_all_resources: true
        publish:
          docker: myorg/my-api-mcp # Your Docker image name
Your Docker image will be published to Docker Hub by default, but you can specify a different registry if you wish:
targets:
  typescript:
    options:
      mcp_server:
        enable_all_resources: true
        publish:
          docker:
            image_name: myorg/my-api-mcp # Your Docker image name
            registry: ghcr.io # Custom registry (defaults to docker.io)
Create a Docker Hub Access Token
You can create either a Personal Access Token or an Organization Access Token depending on your setup:
Personal Access Token (Individual accounts)
Use this for individual Docker Hub accounts or when you don't have a Docker Team/Business subscription:
- Sign in to your Docker Hub account at hub.docker.com
- Navigate to Account Settings > Security > Personal Access Tokens
- Click Generate new token
- Provide a descriptive token name (e.g., "MCP Server GitHub Actions Publishing")
- Set access permissions to Read & Write (required for pushing images)
- Click Generate and immediately copy the token
Treat access tokens like passwords and keep them secret. Store the token securely as you won't be able to see it again.
Organization Access Token (Team/Business accounts)
Use this if you have a Docker Team or Business subscription and want centralized token management:
- Sign in to Docker Hub and navigate to the Admin Console
- Select your organization
- Go to Access tokens > Generate token
- Configure:
  - Label and description: "MCP Server GitHub Actions Publishing"
  - Expiration date: Set an appropriate expiration
  - Repository access: Select the repositories this token can access
  - Access permissions: Enable Read & Write
- Generate and immediately copy the token
Treat access tokens like passwords and keep them secret. Store the token securely as you won't be able to see it again.
Organization Access Tokens require a Docker Team or Business subscription and provide better security and management features, but are incompatible with Docker Desktop.
For more detailed information, see the Docker documentation.
Add your Docker Hub credentials to your production repo
- Open your production repo in GitHub
- Navigate to Settings > Secrets and variables > Actions > New repository secret
- Add a DOCKERHUB_TOKEN secret with your access token as the value
- Navigate to Settings > Secrets and variables > Actions > Variables tab > New repository variable
- Add a DOCKERHUB_USERNAME variable with the value set to one of the following:
  - If you're using a personal access token, use your Docker Hub username
  - If you're using an organization access token, use your organization name
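If you prefer the GitHub CLI over the web UI, a rough equivalent is the following (the repository and username values are placeholders):

# Store the Docker Hub access token as a repository secret (you'll be prompted for the value)
gh secret set DOCKERHUB_TOKEN --repo my-org/my-sdk-repo

# Store the Docker Hub username (or organization name) as a repository variable
gh variable set DOCKERHUB_USERNAME --repo my-org/my-sdk-repo --body "my-dockerhub-username"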
Publish to Docker Hub
After Docker publishing is set up in your Stainless config, the next build will generate the necessary Docker assets (Dockerfile, .dockerignore, a bin/docker-tags script, and a GitHub Actions workflow) in your SDK.
Versioning follows the same strategy as npm publishing, with exact version tags, latest for stable releases, and pre-release tags like alpha or beta when appropriate. See versioning and releases for details.
Your Docker image will be published automatically when a release PR is merged. You can also run the GitHub workflow manually if needed.
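For a manual run, you can use the Actions tab in GitHub or the GitHub CLI. The workflow file name below is illustrative; check .github/workflows/ in your SDK repo for the actual name:

gh workflow run publish-docker.yml --repo my-org/my-sdk-repo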
Use your published Docker images
Once published, users can run your MCP server using Docker:
# Pull and run the latest version
docker run --rm -i myorg/my-api-mcp
# Run a specific version
docker run --rm -i myorg/my-api-mcp:1.2.3
# With environment variables
export MY_API_KEY=your-api-key
docker run --rm -i -e MY_API_KEY myorg/my-api-mcp
Installation to desktop clients
Consult the README.md file within the packages/mcp-server directory of your TypeScript SDK to get more specific instructions for your project.
Different desktop clients enable MCP servers in different ways. Some clients allow installation via MCPB files, which enable quick installation of the server.
Other clients use the mcpServers.json format. If your client does, find the mcpServers.json file and add the following value to your mcpServers section. If the server requires authorization, such as API keys, provide that information using additional environment variables.
{
  "mcpServers": {
    "my_org_api": {
      "command": "npx",
      "args": ["-y", "my-org-mcp"],
      "env": {
        "MY_API_KEY": "your-api-key"
      }
    }
  }
}
The args can also be extended to filter the tools exposed to the LLM, and to use dynamic tools or code execution tool modes. For example:
{
  "mcpServers": {
    "my_org_api": {
      "command": "npx",
      "args": ["-y", "my-org-mcp", "--resource=example", "--tools=dynamic"],
      "env": {
        "MY_API_KEY": "your-api-key"
      }
    }
  }
}
Customization
Choose methods to expose
By default, all of your resources will be exposed as MCP tools. If you'd prefer not to expose some of your SDK's methods, set mcp: false at the resource or method level to opt them out.
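For example, a minimal sketch of opting out (the resource and method names are illustrative):

resources:
  internal_resource:
    mcp: false # exclude every method in this resource from MCP
  my_resource:
    methods:
      delete:
        mcp: false # exclude just this method from MCP
        endpoint: delete /v1/delete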
Alternatively, you can switch to an opt-in model: set enable_all_resources to false in your TypeScript target options, then set mcp: true on the resources you want to expose as MCP tools.
Example resources and methods opted in
resources:
  my_resource:
    mcp: true # enable MCP generation for all methods in this resource
    methods: ...
  another_resource:
    methods:
      create:
        mcp: true # enable this method for MCP generation
        endpoint: post /v1/create
      update: ...
Fine-tune tool names and descriptions
By default, tool names and descriptions are generated from your OpenAPI spec and Stainless config. However, if you want to provide additional context to LLMs for certain tools, or change what's been generated, you can override these values:
resources:
  my_resource:
    methods:
      create:
        endpoint: post /v1/create
        mcp:
          tool_name: my_custom_tool_name
          description: |
            This is an LLM-specific tool description
Filter the set of tools that are imported at runtime
Sometimes with a large API, it's not desirable to import every tool into an MCP client at once (it can be confusing to have too many for an LLM to choose from, and context windows are limited). Therefore, end-users can flexibly filter which tools they want to import.
They can do this by providing additional arguments to your MCP server:
- --tool to include a specific tool by name
- --resource to include all tools under a specific resource; wildcards are supported, e.g. my.resource*
- --operation to include just read (get/list) or just write operations
- --tag to include methods that have a custom tag associated with them
For example, when using Claude Desktop, end-users might specify the following to include only read-only tools:
{
  "mcpServers": {
    "my_org_api": {
      "command": "npx",
      "args": ["-y", "my-org-mcp", "--operation=read"],
      "env": {
        "MY_API_KEY": "your-api-key"
      }
    }
  }
}
Or the following to include only tools in the my-resource resource:
{
  "mcpServers": {
    "my_org_api": {
      "command": "npx",
      "args": ["-y", "my-org-mcp", "--resource=my-resource"],
      "env": {
        "MY_API_KEY": "your-api-key"
      }
    }
  }
}
Configuring custom tags
There are no tags on tools by default, but if you want to provide a custom grouping of tools for end users to filter on, you can tag resources or methods:
resources:
  my_resource:
    mcp:
      tags:
        - my_custom_resource_tag
    methods:
      create:
        mcp:
          tags:
            - my_custom_method_tag # also inherits my_custom_resource_tag
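End users can then filter on these tags at launch, for example (the package name is illustrative):

npx -y my-org-mcp --tag=my_custom_resource_tag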
Automatic filtering of API Responses
To keep irrelevant data from using a lot of space in context windows, your MCP server automatically provides a jq_filter parameter for each tool call. MCP clients can use this filter to transform API responses with the jq utility and omit anything the LLM doesn't need, which reduces token usage and improves performance.
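For instance, a client might pass a jq_filter value alongside a tool's normal arguments to keep only a couple of fields. The tool name and fields below are illustrative, not part of your actual API:

{
  "name": "list_users",
  "arguments": {
    "limit": 50,
    "jq_filter": ".data[] | {id, email}"
  }
}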
You can also disable jq filtering in options if desired:
options:
  mcp_server:
    enable_jq_filter: false
Dynamic tools
Large APIs with many methods can be difficult for LLMs to work with if every method is exposed as an individual tool. Your MCP server provides a dynamic tools mode to address this issue.
When you specify --tools=dynamic, your individual methods won't be exposed as individual tools. Instead, three meta-tools will be provided:
- list_api_endpoints - Discovers available methods, with optional filtering by search query
- get_api_endpoint_schema - Gets detailed schema information for a specific method
- invoke_api_endpoint - Executes any method with the appropriate parameters
This approach allows the LLM to dynamically discover, learn about, and invoke methods as needed, without loading the entire API schema into its context window at once.
For larger APIs (more than 50 methods), dynamic tools are enabled by default. You can override this by specifying --tools=all at launch, or by using other filtering options.
# Enable dynamic tools mode
npx -y my-org-mcp --tools=dynamic
# Use both dynamic tools and specific methods
npx -y my-org-mcp --tools=dynamic --resource=payments
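As a rough sketch of the flow, the LLM might first call list_api_endpoints to discover a method and then invoke it through invoke_api_endpoint. The argument names below are illustrative rather than the exact schema your server generates:

{
  "name": "invoke_api_endpoint",
  "arguments": {
    "endpoint_name": "my_resource_create",
    "args": { "name": "example" }
  }
}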
Note that due to the indirect nature of dynamic tools, the LLM might struggle a bit more with providing the correct properties compared to when methods are exposed as individual tools. You can enable both approaches simultaneously by explicitly including tools with --tool or other filters, or specify --tools=all to include all tools.
Code execution tool
This tool is experimental and subject to change.
To help deal with large APIs and give a smoother experience, the MCP server offers code execution tool functionality. When code execution is enabled, your MCP server will expose a single tool which accepts TypeScript code, instead of exposing one tool per method. The code provided by the LLM will be run against your Stainless SDK in a Deno sandbox. Code execution will not work if Deno is not installed.
A single tool takes up less space in an LLM's context window, and multiple operations can be performed in a single call with arbitrary complexity. This often makes code execution faster, more accurate, and more token-efficient than exposing one tool per method.
The code execution tool is strictly opt-in. To use the code execution tool, start the server with the argument --tools=code:
npx -y my-org-mcp --tools=code
For an example setup in mcpServers.json format:
{
  "mcpServers": {
    "my-org-mcp": {
      "command": "npx",
      "args": ["-y", "my-org-mcp", "--tools=code"],
      "env": {
        "MY_API_KEY": "your-api-key"
      }
    }
  }
}
The code execution tool is currently only available in local MCP servers, but support for remote servers is coming soon.
You can disable generating the code execution tool altogether:
options:
  mcp_server:
    enable_code_tool: false
Docs search tool
This tool is experimental and subject to change.
The MCP server also provides a docs search tool to look up documentation for how to use your Stainless-generated SDKs. LLMs can use this tool to answer questions about SDK usage, and AI agents can also use this tool to write code to interact with your API.
The Stainless API serves the documentation in a Markdown format optimized for LLM consumption, based on your project's latest OpenAPI spec and Stainless config. This is a public API endpoint enabled for all projects by default. You can change this setting by going to your project in the SDK Studio, then navigating to Release > Setup OpenAPI publishing > Expose experimental SDK docs search API.
By default, the docs search tool is always included, even when filtering tools, using dynamic tools, or using the code execution tool. You can exclude it at runtime with --no-tools=docs. You can also disable generating the docs search tool altogether:
options:
  mcp_server:
    enable_docs_tool: false
Client capabilities
MCP is a new standard, and not all clients support the full specification. For example, some clients do not support complex JSON schemas, and others have bounds on the length of tool names. Your MCP server can automatically adjust tool schemas to work around these limitations. By default, your MCP server will attempt to detect what client is calling into it and adjust schemas accordingly. This can also be specified directly with the --client argument:
# Configure for Claude AI
npx -y my-org-mcp --client=claude
# Configure for Cursor
npx -y my-org-mcp --client=cursor
Valid client values include:
- infer - Best-effort guess as to which client is being used (default)
- openai-agents - OpenAI's agents platform
- claude - Claude AI web or desktop interface
- claude-code - Claude Code CLI
- cursor - Cursor editor
If you're using a client not listed above, you can manually enable or disable specific capabilities when running your MCP server to match your client's capabilities:
# Disable support for $ref pointers and set maximum tool name length
npx -y my-org-mcp --no-capability=refs --capability=tool-name-length=40
Available capabilities include:
- top-level-unions - Support for top-level union types in schemas
- valid-json - Support for correctly parsing JSON string arguments
- refs - Support for $ref pointers in schemas
- unions - Support for union types (anyOf) in schemas
- formats - Support for format validations in schemas
- tool-name-length=N - Maximum length for tool names
For more detailed information about capabilities, run:
npx -y my-org-mcp --describe-capabilities
Deploy a remote MCP server
In the examples above, the MCP server is a program that runs on the user's computer and accepts environment variables to handle authorization. This gives developers a lot of flexibility, and works natively with desktop clients like Claude Desktop or Cursor. However, it doesn't work for end users who are using a web app like Claude.ai, or agentic flows like Langchain. To support these cases, you might want to deploy a remote MCP server.
The default Stainless-generated remote MCP server should work well for public servers, servers intended strictly for agentic workflows or developers, or servers over an API that already supports OAuth. For servers that are intended for general use that require authentication but don't support OAuth, consider the Cloudflare worker.
Your MCP server can be run as a remote server by passing the flag --transport=http:
npx -y my-org-mcp --transport=http
# if you publish your server via Docker Hub, this can be run from the Docker image
docker run --rm -i myorg/my-api-mcp --transport=http
The --port flag can be used to specify which port the server will listen on.
Test the remote server locally
You can test that your remote server is working as intended by running a local instance of it. The following command starts an instance of your MCP server locally:
npx -y my-org-mcp --transport=http --port=3000
After you run the command above, connect your MCP client to your local testing server at http://localhost:3000.
One helpful client for testing is MCP Inspector, which runs in your browser. You can start MCP Inspector using this command:
npx -y @modelcontextprotocol/inspector@latest
Then, select Streamable HTTP as the transport type and http://localhost:3000 as the URL.
Deployment
The Docker image or NPM package can be used to host the server in your own infrastructure.
If we don't support your use case, let us know so we can help you get up and running.
Public setup
If your API does not require authentication, there's no need to provide any additional configuration. Clients will be able to connect to your server as-is without providing credentials.
If your API does require authentication, but you want to expose some functionality to the public without requiring clients to authenticate, set the API key you want to use for public requests as an environment variable when starting your server as shown below.
This will allow anyone who can access your server to perform actions as if they were authenticated, so use it only for information you want to expose publicly. We strongly recommend limiting this to a small subset of your endpoints, or to read-only endpoints. In the command below, for example, --operation=read is used to filter to only read-only routes.
export MY_API_KEY=your-api-key
npx -y my-org-mcp --transport=http --operation=read
Header-based auth
Your MCP server will use the same authentication approach your TypeScript SDK is configured to use, such as the Authorization header with Bearer or Basic auth schemes, as well as a set of generated headers based on the names of your SDK's client options. The exact set of headers that can be used to authenticate is listed in your MCP server's README.md.
This auth flow can be used with most desktop clients and agentic use cases, but some web-based clients may not support setting custom headers.
OAuth
If your API supports OAuth, you can configure your server to use it for auth:
targets:
  typescript:
    options:
      mcp_server:
        oauth_resource_metadata:
          authorization_servers: [https://auth.example.com] # your OAuth auth server URL
With this configuration, MCP clients will automatically redirect to the specified authorization server when credentials are required.
If your API supports dynamic client registration, you can deploy your remote MCP server anywhere.
If your API doesn't support dynamic client registration, consider using a Cloudflare worker with its own OAuth server that can be used with any authentication scheme, including your own OAuth implementation.
Using a deployed server
Once a server is deployed at a URL, many clients will be able to connect to it by just specifying that URL.
Some desktop clients, like Claude Desktop and Cursor, use the mcpServers.json format to configure MCP servers. For clients like this, the MCP server can be set up as follows:
{
  "mcpServers": {
    "my-org-mcp": {
      "url": "https://mcp.example.com/",
      "headers": { // Only necessary if using header-based auth
        "Authorization": "my-token-here"
      }
    }
  }
}
The headers entry is only necessary if using header-based auth, and is not required otherwise.
Some desktop clients only officially support local (stdio) servers. For these clients, mcp-remote allows you to connect to a remote server as if it were a local server:
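For example, a minimal mcpServers.json entry using mcp-remote might look like this (the server URL is a placeholder):

{
  "mcpServers": {
    "my-org-mcp": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.example.com/"]
    }
  }
}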
End-user configurability
End users can provide query parameters to configure the tools presented in the remote MCP server, similar to filtering tools at runtime. For example, a URL of https://mcp.example.com/?operation=read will filter to only read (get/list) operations, and https://mcp.example.com/?tools=dynamic will use dynamic tools.
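For clients configured with a URL, the query parameters go directly in that URL. A sketch with a placeholder server URL:

{
  "mcpServers": {
    "my-org-mcp": {
      "url": "https://mcp.example.com/?operation=read"
    }
  }
}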
Remote MCP server on Cloudflare workers
For servers that do not support OAuth or do not support dynamic client registration, we offer an alternate server that acts as its own OAuth auth server. It accepts tokens from the client directly through a web UI, and stores them locally, while giving the MCP client a token specific to the server. This may also be a good option for anyone already deploying servers on Cloudflare.
Add generate_cloudflare_worker: true to your MCP options:
options:
  mcp_server:
    enable_all_resources: true
    generate_cloudflare_worker: true
This generates a packages/mcp-server/cloudflare-worker directory, which contains a Cloudflare worker repo that can be deployed.
It implements OAuth and collects the API keys needed to initialize the SDK client during the redirect flow.
You will need to release your SDK and MCP packages before deploying the Cloudflare worker so it can import the dependencies from npm.
Consult the generated README.md file within the cloudflare-worker directory for more info on how to run and test locally.
Once you have published your SDK and MCP packages, you can deploy to Cloudflare using the "Deploy to Cloudflare" button in the README.md.
The deploy button provides one-click deployment directly from your GitHub repository:
- It automatically configures your Cloudflare worker from the template
- It copies the repo into your GitHub org for further modification
Configuring ServerConfig
A default ServerConfig is generated in src/index.ts and contains the properties needed to initialize the client SDK. These properties are collected from users during the OAuth consent screen. This can be tweaked using custom code.
/**
 * The information displayed on the OAuth consent screen
 */
const serverConfig: ServerConfig = {
  orgName: 'MyPackageMcp',
  instructionsUrl: undefined, // Set a url for where you show users how to get an API key
  logoUrl: undefined, // Set a custom logo url to appear during the OAuth flow
  clientProperties: [
    {
      key: 'authToken',
      label: 'Auth Token',
      description: 'The token to use for authentication',
      required: true,
      default: undefined,
      placeholder: '123e4567-e89b-12d3-a456-426614174000',
      type: 'string',
    },
    {
      key: 'orgId',
      label: 'Org ID',
      description: 'The organization ID context',
      required: true,
      default: undefined,
      placeholder: 'my_org',
      type: 'string',
    },
  ],
};
You can also set custom instructions and a custom logo:
- instructionsUrl is a URL that points to instructions for getting an API key, if you have them.
- logoUrl is a URL that points to a logo image to display during the consent screen.
Advanced Field Types
The clientProperties array supports various input types for the OAuth consent screen:
- string - Text input field (default)
- number - Numeric input field
- password - Password input field (masked)
- select - Dropdown menu with predefined options
These will be collected during the OAuth consent screen and persisted into props, which is available during the init function in your McpAgent class.
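For example, a masked input could be added to clientProperties by following the same shape as the generated example above (the key and label here are illustrative):

clientProperties: [
  // ...existing properties...
  {
    key: 'apiSecret',
    label: 'API Secret',
    description: 'Secret used to authenticate requests',
    required: true,
    default: undefined,
    placeholder: '********',
    type: 'password',
  },
],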
OAuth Consent Screen Features
The generated OAuth consent screen automatically includes:
- Client capability detection based on the selected MCP client
- Read-only mode toggle that filters out write operations
- Dynamic tools option for large APIs
- Help tooltips for each configuration field
Customizing
You can customize which methods get served from the Cloudflare worker by importing Stainless's generated tools directly and filtering, modifying, or adding to them.
import { McpOptions, initMcpServer, server, ClientOptions, endpoints } from '[my-package-mcp]/server';
import { Endpoint } from '[my-package-mcp]/tools.mjs';
import StainlessStore from '[my-package]';

export class MyMCP extends McpAgent<Env, unknown, MCPProps> {
  server = server;

  async init() {
    const newEndpoints: Endpoint[] = [
      {
        metadata: {
          resource: 'user',
          operation: 'read',
          tags: ['user'],
        },
        tool: {
          name: 'get_user',
          description: 'Get a user by ID',
          inputSchema: {
            type: 'object',
            properties: {
              id: {
                type: 'string',
              },
            },
          },
        },
        handler: async (client: MyMcpClient, args: any) => {
          const user = await client.get(`/users/${args.id}`);
          return {
            content: [
              {
                type: 'text',
                text: JSON.stringify(user),
              },
            ],
          };
        },
      },
    ];

    initMcpServer({
      server: this.server,
      clientOptions: this.props.clientProps,
      mcpOptions: this.props.clientConfig,
      endpoints: [...endpoints, ...newEndpoints],
    });
  }
}
Static Assets and Styling
The Cloudflare worker template includes support for serving static assets from the ./static/ directory:
- static/home.md - Landing page content (rendered as Markdown)
- Custom CSS and JavaScript files
- Images and other assets
You can customize the appearance and branding by:
- Modifying the Tailwind CSS configuration in the worker
- Adding custom static assets to the ./static/ directory
- Updating the home page content to match your brand
The worker automatically serves these files and renders Markdown content with syntax highlighting and responsive design.
Next Steps
MCP server support is still experimental, so please reach out to us if you have any feedback or ideas for how we can make it work better for you! If you want to learn more, check out our detailed OpenAPI spec to MCP server blog post.