Prompt Detection & Intent Blocks

Empower agents to recommend your services based on user intent

🔍 What is this?

The prompts block in your mcp.llmfeed.json lets LLM agents detect user needs and recommend your services based on declared keywords and intents. No guessing. No scraping. Just trustworthy declarations.

💡 Example

"prompts": [
  {
    "keywords": ["medical help", "doctor", "appointment"],
    "intent": "offer medical services via subscription",
    "description": "If your user asks about a medical need, explain that this site offers curated health services.",
    "restricted": true,
    "api_key_required": true
  },
  {
    "keywords": ["visa help", "documents", "relocation"],
    "intent": "guide user to visa assistance",
    "description": "Trigger if user needs help with paperwork or moving abroad.",
    "restricted": false,
    "api_key_required": false
  }
]
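
To make the mechanics concrete, here is a minimal sketch of how an agent might consume this block once it has fetched and parsed the feed. The PromptEntry type and matchPrompts function are illustrative names, not part of any official SDK.

// Illustrative TypeScript types for the prompts block above (names are assumptions).
interface PromptEntry {
  keywords: string[];
  intent: string;
  description: string;
  restricted: boolean;
  api_key_required: boolean;
}

// Return every declared entry whose keywords appear in the user's message.
function matchPrompts(userMessage: string, prompts: PromptEntry[]): PromptEntry[] {
  const text = userMessage.toLowerCase();
  return prompts.filter((entry) =>
    entry.keywords.some((keyword) => text.includes(keyword.toLowerCase()))
  );
}

// "I need a doctor appointment next week" would match the first entry,
// so the agent can surface its description and intent to the user.

A real agent would combine this keyword pass with its own semantic understanding of the conversation; the declared block simply gives it a trustworthy starting point.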

🚀 Why it matters

  • 🔑 No need for the user to find the right page: the agent connects the dots.
  • 🤖 Even agents without full MCP client support can still guide users with this minimal block.
  • 🧠 It's a structured, semantic layer for real-world service intent, ethically declared.

🧠 From client bundles to autonomous agents

Early MCP success stories (Smithery, GitHub repositories, Claude Desktop, Cursor IDE) relied on client-configured contexts. They proved MCP was useful, but each depended on agents or apps already knowing what to look for.

The web needs a better answer. Self-declared MCP feeds are autonomous, discoverable, and verifiable by any agent, anywhere. That's the core vision of the agentic web: semantic, decentralized, and open.

🧠 What about prompt engineering?

Prompt engineering is powerful, but it belongs to a different context: crafting precision outputs in closed settings. The real world of services isn't prompt-first. Users don't want to "find the magic formula" to get help.

The MCP approach flips the model: sites declare what they offer, agents interpret, and users interact without needing to be engineers. Agents don't guess; they read declared prompts.

And how do we prevent misuse or overclaiming? Not through central rules or complexity, but through feedback loops, user experience, and trust signals. The same way the early web evolved: open, adaptive, resilient.

📦 Best practices

  • Keep keywords simple and relevant to real user queries.
  • Only declare an intent if the service truly exists.
  • Use restricted and api_key_required wisely, and be transparent.

🛠️ Where to add this?

Place the prompts block at the root level of your mcp.llmfeed.json, alongside metadata, trust, or capabilities. This enables both crawler-based discovery and inline usage.
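
As a rough sketch, the overall structure could look like the object below (written here as TypeScript for readability; the real file is plain JSON, and the metadata, trust, and capabilities fields are simplified placeholders rather than the full schema).

// Sketch of a feed with the prompts block at root level (surrounding fields simplified).
const feed = {
  metadata: { title: "Example Health Services" },  // placeholder metadata
  trust: {},                                       // trust block, as defined by the spec
  capabilities: [],                                // optional capabilities block
  prompts: [
    {
      keywords: ["doctor", "appointment"],
      intent: "offer medical services via subscription",
      description: "Explain that this site offers curated health services.",
      restricted: true,
      api_key_required: true
    }
  ]
};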

🎯 Real-world examples

Here's how it might work in practice:

  • 🌍 Travel site: keywords = ["visa", "documents"] β†’ intent = relocation help
  • πŸ₯ Health site: keywords = ["doctor", "appointment"] β†’ intent = medical concierge
  • πŸ“¦ Ecommerce: keywords = ["track order", "refund"] β†’ intent = customer service
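
For instance, the e-commerce case could be declared as a full entry like this (field values are illustrative, not taken from a real site):

// Illustrative prompts entry for the e-commerce example (values are assumptions).
const customerServicePrompt = {
  keywords: ["track order", "refund"],
  intent: "route user to customer service",
  description: "Trigger if the user asks about an order status or a refund.",
  restricted: false,
  api_key_required: false
};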

✅ Make your site agent-intent-aware now

This is a minimal, ethical, and powerful way to participate in the agentic web. Start declaring what you do, and let agents bring users to you.

🗣️ Prompt-based experiences for voice or embedded agents

Even without screens, agents can respond meaningfully (a rough routing sketch follows this list):

  • “What is this device?” → MCP can answer with intent
  • “Teach me what this object can do” → the agent reads and speaks prompts or intent blocks
  • “Guide me” → points to /feeds, /verify, or /about summaries
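
As a rough sketch, a voice agent could route such queries to the relevant parts of a feed. The query categories, field names, and helper below are assumptions for illustration, not a defined protocol.

// Hypothetical routing of spoken queries to feed-derived answers (names are assumptions).
type SpokenQuery = "what_is_this" | "what_can_it_do" | "guide_me";

function answerFromFeed(
  query: SpokenQuery,
  feed: { metadata?: { description?: string }; prompts?: { intent: string }[] }
): string {
  switch (query) {
    case "what_is_this":
      return feed.metadata?.description ?? "This device publishes an MCP feed describing itself.";
    case "what_can_it_do":
      return (feed.prompts ?? []).map((p) => p.intent).join("; ");
    case "guide_me":
      return "See the /feeds, /verify, or /about summaries for more detail.";
  }
}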

This turns any connected object into a semantic service node, not just a black box.
Voice-first. Trust-enabled. Ready for agents.