MCP servers allow language models like Claude and GPT to interact with external systems in a standardized way. The protocol defines how servers expose capabilities and how clients discover and use them.
This guide explains MCP server prompting - what it is, how it works, and how to implement it effectively.
Understanding MCP server prompts
MCP server prompts are predefined message templates that servers expose to clients. They contain instructions written in natural language and can include variables (called arguments) that clients fill in before sending the prompt to a language model.
Unlike tools, which are function definitions an AI can call directly, prompts are conversation starters or request templates that help users interact with tools. They're designed to be discovered and selected by users during interaction.
For example, a prompt might provide a template for analyzing code, with an argument for the code snippet to analyze:
{ "name": "analyze_code", "description": "Analyzes code quality and suggests improvements", "arguments": [ { "name": "code", "description": "The code to analyze", "required": true } ] }
When a user selects this prompt in Claude Desktop or VS Code Copilot, they'll be asked to provide the code, and then the prompt will be sent to the language model.
Key points about MCP prompts:
User-controlled: Prompts are explicitly selected by users, not automatically invoked
Templated: They can include placeholders for user-provided values
Discoverable: Clients can list available prompts from the server
Structured: They follow a standard format defined in the MCP specification
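The select-and-fill workflow above can be sketched in plain Python. This is a simplified illustration, not the official SDK: the `resolve_prompt` helper and the template string are hypothetical, standing in for whatever templating a real server uses.

```python
import string

# Hypothetical in-memory definition mirroring the analyze_code example above
ANALYZE_CODE = {
    "name": "analyze_code",
    "description": "Analyzes code quality and suggests improvements",
    "arguments": [
        {"name": "code", "description": "The code to analyze", "required": True},
    ],
    # Assumed template text; the spec does not mandate a template format
    "template": "Review the following code for quality issues and suggest improvements:\n\n$code",
}

def resolve_prompt(prompt: dict, args: dict) -> str:
    """Check required arguments, then fill the template with user-supplied values."""
    for arg in prompt["arguments"]:
        if arg["required"] and arg["name"] not in args:
            raise ValueError(f"missing required argument: {arg['name']}")
    return string.Template(prompt["template"]).substitute(args)

message = resolve_prompt(ANALYZE_CODE, {"code": "def add(a, b): return a + b"})
```

The client never sees the template itself; it only supplies argument values and receives the resolved message back.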
How MCP server prompts work
When a client connects to an MCP server, it first checks what capabilities the server supports. Servers that support prompts declare this during initialization:
{ "capabilities": { "prompts": { "listChanged": true } } }
The client can then request a list of available prompts using the `prompts/list` method. Each prompt includes a name, a description, and any required arguments.
When a user selects a prompt, the client sends a `prompts/get` request with the argument values. The server responds with the fully resolved prompt message, which the client then presents to the language model.
This workflow allows servers to provide guidance on how to use their tools without hardcoding instructions into the client.
Creating effective MCP server prompts
Good MCP prompts are clear, specific, and help users accomplish tasks with your tools. Here are some tips for writing effective prompts:
Keep instructions clear and concise
Include examples where helpful
Provide context about what the prompt does
Make argument descriptions detailed enough for users to provide correct values
Test prompts with different clients to ensure compatibility
Here's a simple example of a prompt that helps users generate SQL queries:
{ "name": "generate_sql", "description": "Creates a SQL query based on a natural language description", "arguments": [ { "name": "database_schema", "description": "Description of the database tables and columns", "required": true }, { "name": "query_description", "description": "What you want the SQL query to do", "required": true } ] }
When resolved, this might produce a prompt like:
```
Based on the following database schema:
[database_schema]

Generate a SQL query that:
[query_description]
```
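Per the MCP specification, the server returns the resolved text inside a `prompts/get` result: a `messages` list where each entry has a `role` and a `content` object. A sketch of building that result for the SQL prompt (the helper name is hypothetical):

```python
def prompt_get_result(resolved_text: str, description: str = "") -> dict:
    """Shape of a prompts/get result: resolved text wrapped in a messages list."""
    return {
        "description": description,
        "messages": [
            {"role": "user", "content": {"type": "text", "text": resolved_text}},
        ],
    }

schema = "users(id, name, created_at)"
goal = "counts users created in the last 7 days"
result = prompt_get_result(
    f"Based on the following database schema:\n{schema}\n\nGenerate a SQL query that:\n{goal}",
    description="Creates a SQL query based on a natural language description",
)
```

The client forwards the `messages` entries to the language model as the start of a conversation.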
Converting OpenAPI specs to MCP prompts
When converting an OpenAPI specification to MCP format, we need to create prompts that help users interact with the tools we've defined. This involves:
Identifying common use cases for the API
Creating prompt templates that guide users through these use cases
Defining arguments that map to tool parameters
For example, if we have a weather API with an endpoint to get the forecast, we might create a prompt like:
{ "name": "get_weather_forecast", "description": "Gets the weather forecast for a location", "arguments": [ { "name": "location", "description": "City name or coordinates", "required": true }, { "name": "days", "description": "Number of days to forecast", "required": false } ] }
This prompt would help users interact with the underlying weather forecast tool without needing to know the exact parameter structure.
Handling complex OpenAPI specs
OpenAPI specs often contain complex schemas with nested objects, arrays, and references. When creating MCP prompts for these APIs, we need to simplify the interaction model.
Consider breaking down complex operations into multiple prompts that focus on specific tasks. For example, instead of one prompt that exposes all options of a complex API endpoint, create several prompts for common use cases.
For APIs with many endpoints, create category-specific prompts that help users discover relevant tools:
A "data_analysis_help" prompt that guides users through data analysis tools
A "user_management_guide" prompt for user-related operations
A "report_generation_assistant" prompt for reporting functions
This approach helps users navigate large APIs without overwhelming them with options.
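MCP prompt definitions have no built-in category field, so a server has to encode categories in its own metadata or naming convention. A sketch using a hypothetical `category` key on each definition:

```python
from collections import defaultdict

def group_by_category(prompts: list[dict]) -> dict[str, list[str]]:
    """Group prompt names by a server-defined 'category' field for browsing."""
    groups: dict[str, list[str]] = defaultdict(list)
    for p in prompts:
        groups[p.get("category", "general")].append(p["name"])
    return dict(groups)

catalog = [
    {"name": "data_analysis_help", "category": "analysis"},
    {"name": "user_management_guide", "category": "users"},
    {"name": "report_generation_assistant", "category": "reporting"},
]
grouped = group_by_category(catalog)
```

The server can then expose one browsing prompt per category instead of a flat list of everything.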
Security best practices
MCP prompts can introduce security risks if not properly implemented. Since prompts often include user-provided values, they can be vulnerable to prompt injection attacks.
To secure your MCP server prompts:
Validate all user inputs before including them in prompts
Avoid including sensitive information in prompt templates
Use parameter constraints to limit what users can input
Sanitize outputs to prevent information leakage
For example, if a prompt includes user input that will be processed by an LLM, validate and sanitize that input to prevent injection attacks:
```python
import re

def sanitize_input(user_input):
    # Remove common prompt-injection phrases (case-insensitive).
    # Note: simple substring removal is easy to bypass; treat it as a
    # first line of defense, not a complete mitigation.
    patterns = ["ignore previous instructions", "system:", "assistant:"]
    sanitized = user_input
    for pattern in patterns:
        sanitized = re.sub(re.escape(pattern), "", sanitized, flags=re.IGNORECASE)
    return sanitized
```
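Because blocklists are easy to bypass with rephrasing, it is worth pairing them with structural constraints that reject bad input outright rather than rewriting it. A sketch enforcing a length cap and an allowed-character pattern; the limits and the character class are illustrative, to be tuned per argument:

```python
import re

MAX_LEN = 2000
# Illustrative allowlist: printable ASCII plus common whitespace
ALLOWED = re.compile(r"[\x20-\x7e\t\r\n]*")

def validate_argument(value: str) -> str:
    """Reject oversized or out-of-charset argument values instead of rewriting them."""
    if len(value) > MAX_LEN:
        raise ValueError(f"argument exceeds {MAX_LEN} characters")
    if ALLOWED.fullmatch(value) is None:
        raise ValueError("argument contains disallowed characters")
    return value
```

Rejection with a clear error gives the user a chance to fix their input, and avoids silently sending altered text to the model.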
Debugging MCP server prompts
When prompts don't work as expected, check these common issues:
Missing required arguments in the prompt definition
Schema validation errors in argument values
Prompt templates that are too complex for the client
Transport or connection issues between client and server
Tools like the official MCP Inspector can help diagnose issues by logging requests and responses between clients and servers.
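The first two issues in the checklist above can be caught locally before reaching for a debugger, by validating a `prompts/get` request against the prompt definition. A sketch (the checker function and its error strings are illustrative):

```python
def check_request(prompt: dict, request_args: dict) -> list[str]:
    """Return a list of problems with a prompts/get request; empty means OK."""
    problems = []
    declared = {a["name"] for a in prompt["arguments"]}
    # Required arguments the client failed to supply
    for a in prompt["arguments"]:
        if a.get("required") and a["name"] not in request_args:
            problems.append(f"missing required argument: {a['name']}")
    # Arguments the client sent that the prompt never declared
    for name in request_args:
        if name not in declared:
            problems.append(f"unknown argument: {name}")
    return problems

prompt = {"name": "generate_sql", "arguments": [
    {"name": "database_schema", "required": True},
    {"name": "query_description", "required": True},
]}
issues = check_request(prompt, {"database_schema": "users(id)"})
```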
Future of MCP server prompting
The MCP specification continues to evolve, with improvements to prompt handling in recent updates. Future developments may include:
Better support for multi-modal prompts (including images and audio)
Improved prompt discovery and categorization
More standardized prompt rendering across clients
Enhanced security features for prompt validation
As language models become more capable, MCP server prompting will likely become a standard way for systems to expose functionality to AI assistants.
FAQs about MCP server prompting
What is the difference between MCP tools and MCP prompts?
MCP tools are function definitions that AI models can call directly with specific parameters, while MCP prompts are message templates that guide users or AI models through interactions, often helping them use tools effectively.
How do I convert my existing API to use MCP server prompts?
Start by identifying common use cases for your API, then create prompt templates that guide users through these workflows. Map prompt arguments to the underlying API parameters, and test with different MCP clients to ensure compatibility.
Can MCP server prompts work with any language model?
MCP server prompts work with language models that support the MCP specification. Currently, this includes models like Claude (via Claude Desktop) and some configurations of GPT models through compatible clients like VS Code Copilot.
What are the best practices for organizing prompts in a large API?
Group related prompts by function or user task, use clear naming conventions, and provide detailed descriptions. Consider implementing a hierarchical discovery system where users can browse categories of prompts rather than seeing all options at once.