Prompt Intents in MCP
What are prompts in the context of MCP?
Prompts in MCP are declarative mappings between natural language triggers and agent-exposed actions.
They let agents know what they can be asked, and what each question is expected to activate.
A prompt block is not just an example: it's a signal that says, "I want agents to respond to this intent."
Where are prompts declared?
In any `.llmfeed.json`, typically inside a top-level `prompts` array:

```json
{
  "prompts": [
    {
      "intent": "sign-document",
      "keywords": ["sign this", "please sign", "legal signature"],
      "description": "Trigger digital signature workflow for authenticated user"
    }
  ]
}
```
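As an illustration, a feed consumer could sanity-check the shape of such a `prompts` block before using it. This is a minimal sketch, not part of the MCP spec; the `validate_prompts` helper and its rules are assumptions based on the example above.

```python
# Sketch: check that each entry in a feed's "prompts" block carries the
# three fields shown in the example (intent, keywords, description).
# These validation rules are illustrative assumptions, not spec rules.

def validate_prompts(feed: dict) -> list[str]:
    """Return a list of problems found in the feed's prompts block."""
    problems = []
    prompts = feed.get("prompts", [])
    if not isinstance(prompts, list):
        return ["'prompts' must be a list"]
    for i, p in enumerate(prompts):
        if not p.get("intent"):
            problems.append(f"prompts[{i}]: missing 'intent'")
        if not p.get("keywords"):
            problems.append(f"prompts[{i}]: missing 'keywords'")
        if "description" not in p:
            problems.append(f"prompts[{i}]: missing 'description'")
    return problems

feed = {
    "prompts": [
        {
            "intent": "sign-document",
            "keywords": ["sign this", "please sign", "legal signature"],
            "description": "Trigger digital signature workflow for authenticated user",
        }
    ]
}
print(validate_prompts(feed))  # → []
```

A feed that fails these checks can still be loaded; the point is to surface incomplete prompt declarations early.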
Why prompts matter

Feature | Impact |
---|---|
`intent` | Canonical action identifier (agent logic) |
`keywords` | Lets agents recognize varied phrasing |
`description` | User-facing explanation / UI generation |
Prompts are central for disambiguation, action safety, and self-documentation.
Security model
Prompts do not execute.
They are mapped to capabilities (or a UI), and their handling depends on the trust level of the feed:
- A prompt in an unsigned feed = suggestion only
- A prompt in a signed + scoped + certified feed = fully executable
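The gating above can be sketched in a few lines. The trust flags (`signed`, `scoped`, `certified`) and the `suggest`/`execute` labels are illustrative names, not spec fields:

```python
# Sketch: gate prompt execution on feed trust, mirroring the two rules above.
# Flag names and return values are illustrative assumptions.

def execution_mode(feed_trust: dict) -> str:
    """Return 'execute' only for signed + scoped + certified feeds."""
    if (feed_trust.get("signed")
            and feed_trust.get("scoped")
            and feed_trust.get("certified")):
        return "execute"
    return "suggest"

print(execution_mode({"signed": False}))  # → suggest
print(execution_mode({"signed": True, "scoped": True, "certified": True}))  # → execute
```

The key property is that a prompt never escalates itself: the feed's trust level, verified out of band, decides whether the mapped capability may actually run.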
Prompt matching flow (agent side)
- User input triggers natural-language detection
- Agent compares it to known `keywords` from loaded feeds
- If a match is found and trust is sufficient:
  - Agent maps to the declared `intent`
  - Verifies that the capability exists and is permitted
  - Proceeds or requests user confirmation
Good practice
- Always provide `intent`, even if not callable
- Use `keywords` in multiple languages if needed
- Pair with `capabilities[]` in the same feed
- Group in `mcp-api` if access is gated
Related feed types
- `export.llmfeed.json` → declares offline promptable actions
- `mcp-api.llmfeed.json` → scoped prompts per key
- `mobile-app.llmfeed.json` → vocal prompts that replicate app behavior
Want more?
- See `/tools/prompts-explained`
- Live test with `/llmfeedhub/preview`