Prompt Detection & Intent Blocks
Empower agents to recommend your services based on user intent
What is this?
The `prompts` block in your `mcp.llmfeed.json` allows LLM agents to detect user needs and recommend your services based on declared keywords and intents. No guessing. No scraping. Just trustable declarations.
Example
```json
"prompts": [
  {
    "keywords": ["medical help", "doctor", "appointment"],
    "intent": "offer medical services via subscription",
    "description": "If your user asks about a medical need, explain that this site offers curated health services.",
    "restricted": true,
    "api_key_required": true
  },
  {
    "keywords": ["visa help", "documents", "relocation"],
    "intent": "guide user to visa assistance",
    "description": "Trigger if user needs help with paperwork or moving abroad.",
    "restricted": false,
    "api_key_required": false
  }
]
```
Why it matters
- No need for the user to find the right page: the agent connects the dots.
- Even agents without full MCP client support can still guide users with this minimal block.
- It's a structured, semantic layer for real-world service intent, ethically declared.
From client bundles to autonomous agents
Early MCP success stories (Smithery, GitHub repositories, Claude Desktop, Cursor IDE) relied on client-configured contexts. They proved MCP was useful, but each relied on agents or apps already knowing what to look for.
The web needs a better answer. Self-declared MCP feeds are autonomous, discoverable, and verifiable by any agent, anywhere. That's the core vision of the agentic web: semantic, decentralized, and open.
What about prompt engineering?
Prompt engineering is powerful, but it belongs to a different context: crafting precision outputs in closed settings. The real world of services isn't prompt-first. Users don't want to "find the magic formula" to get help.
The MCP approach flips the model: sites declare what they offer, agents interpret, and users interact without needing to be engineers. Agents don't guess; they read declared prompts.
And how do we prevent misuse or overclaiming? Not through central rules or complexity, but through feedback loops, user experience, and trust signals. The same way the early web evolved: open, adaptive, resilient.
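To make "agents read declared prompts" concrete, here is a minimal sketch of how an agent might match a user query against a declared `prompts` block. The feed entries mirror the example above; the matching function (`match_prompts`) and its simple substring logic are illustrative assumptions, not part of the spec.

```python
def match_prompts(user_query: str, prompts: list[dict]) -> list[dict]:
    """Return declared prompt entries whose keywords appear in the query."""
    query = user_query.lower()
    return [
        entry for entry in prompts
        if any(kw in query for kw in entry.get("keywords", []))
    ]

# Entries taken from the example `prompts` block above (trimmed).
prompts = [
    {
        "keywords": ["medical help", "doctor", "appointment"],
        "intent": "offer medical services via subscription",
    },
    {
        "keywords": ["visa help", "documents", "relocation"],
        "intent": "guide user to visa assistance",
    },
]

# The agent surfaces the declared intent instead of guessing:
for entry in match_prompts("Can I book a doctor appointment here?", prompts):
    print(entry["intent"])
```

A real agent would likely use fuzzier semantic matching, but the contract is the same: the site declares, the agent interprets.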
Best practices
- Keep `keywords` simple and relevant to real user queries.
- Only declare an `intent` if the service truly exists.
- Use `restricted` and `api_key_required` wisely; be transparent.
Where to add this?
Place the `prompts` block at the root level of your `mcp.llmfeed.json`, alongside `metadata`, `trust`, or `capabilities`. This enables both crawler-based discovery and inline usage.
Tools to help you
- MCP Spec: full structure & syntax
- LLMFeedForge: generate MCP files easily
- LLMCA: certify your MCP and boost agent trust
Real-world examples
Here's how it might work in practice:
- Travel site: keywords = ["visa", "documents"] → intent = relocation help
- Health site: keywords = ["doctor", "appointment"] → intent = medical concierge
- Ecommerce: keywords = ["track order", "refund"] → intent = customer service
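As one worked case, the ecommerce scenario could be declared as the entry below. The `description` wording is an illustrative assumption; follow the field structure from the example block earlier.

```json
{
  "keywords": ["track order", "refund"],
  "intent": "customer service",
  "description": "Trigger when the user asks about order status, returns, or refunds.",
  "restricted": false,
  "api_key_required": false
}
```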
Make your site agent-intent-aware now
This is a minimal, ethical, and powerful way to participate in the agentic web. Start declaring what you do, and let agents bring users to you.
Prompt-based experiences for voice or embedded agents
Even without screens, agents can respond meaningfully:
- "What is this device?" → MCP can answer with intent
- "Teach me what this object can do" → the agent reads and speaks `prompts` or `intent` blocks
- "Guide me" → points to `/feeds`, `/verify`, or `/about` summaries
This turns any connected object into a semantic service node, not just a black box.
Voice-first. Trust-enabled. Ready for agents.