From MCP to LLMFeed: Why We Created a New Specification
An update from the LLMFeed ecosystem
The wellknownmcp.org team manifesto
The Moment of Truth
In late 2024, we watched with fascination as Anthropic's Model Context Protocol (MCP) emerged. The intention was admirable: standardize interactions between LLMs and external tools. But quickly, as a team working on agentic infrastructure, we realized that something was fundamentally missing.
MCP brilliantly solved the technical problem of interoperability. But it completely ignored the human problem of trust.
How can a user trust content generated by an AI agent? How can we verify that information hasn't been tampered with? How can we build an ecosystem where agents, humans, and systems can collaborate safely?
That's when we began developing LLMFeed.
Our Vision: LLMFeed as MCP's Evolution
LLMFeed isn't a replacement for MCP; it's its natural evolution.
We positioned LLMFeed as "the core data format of the MCP (Model Context Protocol)". Our approach:
- ✅ Compatible with the existing MCP ecosystem
- ✅ Enhanced with cryptographic signatures and certification
- ✅ Extended for tomorrow's agentic web
- ✅ Standardized with rigorous JSON schemas
What the original MCP didn't handle:
1. No native verification
```json
// Classic MCP - no integrity guarantees
{
  "jsonrpc": "2.0",
  "result": {
    "content": "Sensitive data...",
    "source": "Who really knows?"
  }
}
```
2. No trust mechanism
MCP stayed within a "tools for LLMs" mindset. We aimed at the agentic economy: autonomous agents that collaborate and exchange value safely.
3. Adoption limited by complexity
JSON-RPC, dedicated servers, complex configurations... Only experts could adopt MCP.
Our conviction: the agentic revolution cannot be reserved for experts.
Our Vision: The Agentic Web with LLMFeed
From SEO to AIO: A Paradigm Shift
We didn't just imagine an improvement to MCP. We imagined an entirely new web.
Today's web:
- Designed for humans who click
- SEO for discovery by search engines
- Synchronous and manual interactions
Tomorrow's agentic web:
- Designed for agents that collaborate
- AIO (Agentic Information Optimization) for discovery by agents
- Asynchronous and automated interactions
- Cryptographic trust natively integrated
In this new web:
- Every site exposes its capabilities via /.well-known/mcp.llmfeed.json
- Every piece of content carries its signature and provenance
- Agents automatically discover and verify sources
- Trust is measurable and auditable
LLMFeed: Our Technical Architecture
1. Modular Block Structure
We designed LLMFeed around reusable standard blocks:
```json
{
  "feed_type": "mcp",
  "metadata": {
    "title": "Service Example",
    "origin": "https://example.com",
    "generated_at": "2025-06-09T14:30:00Z",
    "description": "Certified agentic service"
  },
  "trust": {
    "signed_blocks": ["feed_type", "metadata", "trust", "capabilities"],
    "scope": "public",
    "certifier": "https://llmca.org",
    "public_key_hint": "https://llmca.org/.well-known/public.pem",
    "algorithm": "ed25519"
  },
  "signature": {
    "value": "abc123...",
    "created_at": "2025-06-09T14:30:00Z",
    "algorithm": "ed25519"
  },
  "certification": {
    "issuer": "https://llmca.org",
    "cert_id": "llmca-2025-001",
    "certified_blocks": ["trust", "capabilities"],
    "issued_at": "2025-06-09T10:00:00Z",
    "expires_at": "2026-06-09T10:00:00Z"
  },
  "capabilities": [
    {
      "name": "user_lookup",
      "method": "GET",
      "path": "/api/users/{id}",
      "description": "Secure user profile retrieval"
    }
  ]
}
```
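To make the block contract concrete, here is a minimal TypeScript model of the envelope above. The type names are illustrative, not part of the specification; only the field names mirror the JSON example.

```typescript
// Minimal model of the LLMFeed envelope above.
// Type names are illustrative; field names mirror the JSON example.
interface LLMFeed {
  feed_type: string; // e.g. "mcp", "capabilities", "credential", "pricing"
  metadata: {
    title: string;
    origin: string;       // canonical URL of the publisher
    generated_at: string; // ISO 8601 timestamp
    description?: string;
  };
  trust?: {
    signed_blocks: string[]; // top-level blocks covered by the signature
    scope: string;           // e.g. "public" or "restricted"
    certifier?: string;
    public_key_hint?: string;
    algorithm?: string;      // e.g. "ed25519"
  };
  signature?: {
    value: string; // encoded signature over the signed blocks
    created_at: string;
    algorithm: string;
  };
  certification?: {
    issuer: string;
    cert_id: string;
    certified_blocks: string[];
    issued_at: string;
    expires_at: string;
  };
  capabilities?: Array<{
    name: string;
    method: string; // e.g. "GET", "POST"
    path: string;
    description?: string;
  }>;
}
```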
2. Intelligent Agent Behaviors
We defined behavior specifications for agents to act safely and ethically:
```json
"agent_guidance": {
  "max_inference_depth": 3,
  "interaction_tone": "formal",
  "consent_hint": "Ask the user before accessing sensitive information",
  "risk_tolerance": "low"
}
```
Key principles:
- ✅ Human-in-the-loop: Mandatory consent for critical actions
- ✅ Trust scoring: Dynamic confidence evaluation based on signatures
- ✅ Flagging system: Community reporting of suspicious feeds
- ✅ User spaces: Support for hosted platforms (GitHub, Notion, etc.)
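As an illustration of how an agent runtime might honor agent_guidance, here is a hedged sketch; the risk ordering and the requestUserConsent callback are our own assumptions, not defined by the specification.

```typescript
// Hypothetical sketch: gate a sensitive action on the feed's agent_guidance block.
// requestUserConsent and the risk model are illustrative assumptions.
interface AgentGuidance {
  max_inference_depth?: number;
  interaction_tone?: string;
  consent_hint?: string;
  risk_tolerance?: "low" | "medium" | "high";
}

async function runAction(
  guidance: AgentGuidance,
  actionRisk: "low" | "medium" | "high",
  requestUserConsent: (hint: string) => Promise<boolean>,
  execute: () => Promise<void>
): Promise<void> {
  const tolerance = guidance.risk_tolerance ?? "low";
  const order = { low: 0, medium: 1, high: 2 };

  // Human-in-the-loop: ask before anything riskier than the declared tolerance.
  if (order[actionRisk] > order[tolerance]) {
    const ok = await requestUserConsent(
      guidance.consent_hint ?? "This action requires your confirmation."
    );
    if (!ok) return; // user declined: do nothing
  }
  await execute();
}
```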
Complete Technical Architecture: Far Beyond MCP
1. Cutting-edge Cryptographic Innovations
Our most disruptive innovation: integrated homomorphic encryption:
```json
"homomorphic_encryption": {
  "applied_to": ["data"],
  "algorithm": "BFV",
  "public_parameters": "https://example.com/params.json",
  "notes": "Agents can compute without seeing raw data"
}
```
Result: Agents can process sensitive data (health, finance) without ever decrypting it. A revolution for privacy-preserving AI.
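The sketch below shows one way an agent could react to such a declaration: it only inspects the block and routes data accordingly. The HomomorphicBackend interface is a placeholder for whatever encrypted-compute library the agent actually uses, not a real API.

```typescript
// Sketch only: decide how to handle a feed that declares homomorphic encryption.
// The HomomorphicBackend interface is a placeholder, not a real library API.
interface HomomorphicDeclaration {
  applied_to: string[];      // which top-level blocks are encrypted
  algorithm: string;         // e.g. "BFV"
  public_parameters: string; // URL of the public evaluation parameters
  notes?: string;
}

interface HomomorphicBackend {
  supports(algorithm: string): boolean;
  evaluate(block: unknown, parametersUrl: string): Promise<unknown>;
}

async function processBlock(
  blockName: string,
  blockValue: unknown,
  decl: HomomorphicDeclaration | undefined,
  backend: HomomorphicBackend
): Promise<unknown> {
  // Plain block: process as usual.
  if (!decl || !decl.applied_to.includes(blockName)) return blockValue;

  // Encrypted block: never decrypt locally; compute on ciphertext or refuse.
  if (!backend.supports(decl.algorithm)) {
    throw new Error(`Cannot process "${blockName}": unsupported scheme ${decl.algorithm}`);
  }
  return backend.evaluate(blockValue, decl.public_parameters);
}
```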
2. Enterprise-grade APIs with Integrated Security
Unlike MCP, which requires dedicated servers and complex configuration, LLMFeed offers natively secure APIs:
```json
// URL: /mcp-api.llmfeed.json?key=abc123
{
  "feed_type": "mcp",
  "capabilities": [{ "path": "/sign", "method": "POST" }],
  "rate_limits": [
    { "path": "/sign", "limit": 5, "remaining": 2, "period": "daily" }
  ],
  "trust": {
    "scope": "restricted",
    "key_hint": "abc123",
    "trust_level": "scoped"
  }
}
```
Features:
- ✅ Native bearer token authentication
- ✅ Rate limiting per endpoint and key
- ✅ Dynamic capability filtering based on permissions
- ✅ Scoped trust for restricted access
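A hedged sketch of the client side: before calling a scoped capability, an agent can check the declared quota and attach its key. The endpoint resolution and header layout are assumptions for illustration.

```typescript
// Illustrative client-side guard: respect the declared rate limits before calling
// a scoped capability. Endpoint URL handling here is an assumption, not spec-defined.
interface RateLimit {
  path: string;
  limit: number;
  remaining: number;
  period: string; // e.g. "daily"
}

async function callScopedCapability(
  origin: string,
  path: string,
  apiKey: string,
  rateLimits: RateLimit[]
): Promise<Response> {
  const quota = rateLimits.find((r) => r.path === path);
  if (quota && quota.remaining <= 0) {
    throw new Error(`Rate limit reached for ${path} (${quota.limit}/${quota.period})`);
  }

  // Bearer-token style authentication, as suggested by the feature list above.
  return fetch(new URL(path, origin), {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
  });
}
```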
3. Progressive Disclosure and Audience Targeting
Our audience targeting system enables adaptive content:
```json
"data": {
  "technical_docs": {
    "content": "API documentation...",
    "audience": ["developer"]
  },
  "agent_actions": {
    "content": "Executable commands...",
    "audience": ["llm"]
  }
}
```
Impact: Developers see the documentation, agents see the actions. The experience is optimized for each audience.
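A minimal sketch of audience filtering, assuming the audience labels from the example above; blocks without an audience field are treated here as visible to everyone.

```typescript
// Sketch: keep only the data blocks aimed at the current consumer.
// "developer" and "llm" audience labels come from the example above.
interface DataBlock {
  content: string;
  audience?: string[]; // no audience field means "visible to everyone"
}

function filterByAudience(
  data: Record<string, DataBlock>,
  viewer: string // e.g. "developer" or "llm"
): Record<string, DataBlock> {
  return Object.fromEntries(
    Object.entries(data).filter(
      ([, block]) => !block.audience || block.audience.includes(viewer)
    )
  );
}

// Example: an LLM agent only sees the executable blocks.
// const visible = filterByAudience(feed.data, "llm");
```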
4. Sandbox Policies and Community Governance
To control agent autonomy, we provide execution policies:
```json
"sandbox": {
  "max_calls": 10,
  "device_scope": "local-only",
  "runtime_constraints": "No background tasks"
}
```
Our decentralized flagging system enables self-regulation:
```json
"flags": [
  {
    "type": "risk",
    "submitted_by": "agent://previewbot",
    "reason": "Declared actions not matching real API",
    "status": "pending",
    "source": "https://llmca.org/flags/234"
  }
]
```
A healthy ecosystem: the community can flag suspicious feeds, and agents respect the declared limits.
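For illustration, a minimal sketch of sandbox enforcement on the agent side; only max_calls is enforced, and the wrapper class is our own construct.

```typescript
// Minimal sketch of sandbox enforcement on the agent side.
// Only max_calls is enforced here; device_scope and runtime_constraints are
// carried as data an agent could inspect, not interpreted.
interface SandboxPolicy {
  max_calls?: number;
  device_scope?: string;
  runtime_constraints?: string;
}

class SandboxedCaller {
  private calls = 0;

  constructor(private readonly policy: SandboxPolicy) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    const limit = this.policy.max_calls ?? Infinity;
    if (this.calls >= limit) {
      throw new Error(`Sandbox policy exceeded: max_calls = ${limit}`);
    }
    this.calls += 1;
    return fn();
  }
}
```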
5. Complete Ecosystem of Specialized Feed Types
We designed 12 specialized feed types to cover all aspects of the agentic web:
Service and capability feeds:
```json
// .well-known/mcp.llmfeed.json - Main capsule
{
  "feed_type": "mcp",
  "prompts": [
    {
      "intent": "convert_pdf",
      "keywords": ["convert my PDF", "transform PDF to text"],
      "description": "Triggered when user wants to extract text"
    }
  ],
  "capabilities": [
    {
      "name": "convertPdfToText",
      "method": "POST",
      "path": "/convert",
      "requires_user_consent": true
    }
  ]
}

// .well-known/capabilities.llmfeed.json - Detailed actions
{
  "feed_type": "capabilities",
  "capabilities": [
    {
      "name": "submitContactForm",
      "input_schema": { "required": ["name", "email", "message"] },
      "rate_limit": "5/min",
      "llm_trust_level_required": "certified-only"
    }
  ]
}
```
Security and access feeds:
```json
// .well-known/credential.llmfeed.json - Secure API keys
{
  "feed_type": "credential",
  "credential": {
    "key_hint": "abc123",
    "mcp_api": "https://api.example.com/mcp?key=abc123",
    "allowed_intents": ["sign-document", "verify-document"],
    "rate_limits_inline": [
      { "path": "/sign", "limit": 5, "period": "daily" }
    ]
  }
}
```
Economic and monetization feeds:
```json
// .well-known/pricing.llmfeed.json - Agent business models
{
  "feed_type": "pricing",
  "pricing_models": [
    {
      "model_id": "pay-as-you-go",
      "capabilities_cost": [
        {
          "capability_name": "convertPdfToText",
          "cost_per_unit": 0.01,
          "unit": "page"
        }
      ]
    }
  ],
  "payment_methods": ["credit_card", "paypal", "agent_wallet"]
}
```
Impact: Each feed type addresses a specific need in the agentic ecosystem, from simple content export to sophisticated monetization.
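As one example of how an agent might consume these feeds, the sketch below estimates the cost of a capability call from a pricing feed before committing to it; the quoting logic is ours, only the field names come from the pricing example.

```typescript
// Sketch: estimate the cost of a capability call from a pricing feed.
// The quoting logic is illustrative; only the field names come from the example.
interface CapabilityCost {
  capability_name: string;
  cost_per_unit: number;
  unit: string; // e.g. "page"
}

interface PricingModel {
  model_id: string;
  capabilities_cost: CapabilityCost[];
}

function estimateCost(
  models: PricingModel[],
  capability: string,
  units: number
): number | undefined {
  for (const model of models) {
    const entry = model.capabilities_cost.find(
      (c) => c.capability_name === capability
    );
    if (entry) return entry.cost_per_unit * units;
  }
  return undefined; // capability not priced in this feed
}

// e.g. estimateCost(feed.pricing_models, "convertPdfToText", 40) -> 0.40 for a 40-page PDF
```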
6. Intelligent .well-known/ Discovery Architecture
Our key innovation: any site can expose its agentic capabilities via a standardized discovery architecture:
Complete discovery structure:
```
/.well-known/
├── mcp.llmfeed.json              # Main service capsule
├── mcp-lite.llmfeed.json         # Lightweight version for mobile/voice
├── capabilities.llmfeed.json     # Actions and authentication
├── llm-index.llmfeed.json        # Index of all feeds
├── pricing.llmfeed.json          # Economic model
├── manifesto.llmfeed.json        # Ethical declaration
├── public.pem                    # Public key for verification
└── prompts/
    ├── prompt-index.llmfeed.json # Prompt index
    ├── mcp-mode-activation.llmfeed.json
    └── session-export.llmfeed.json

/exports/
├── faq.llmfeed.json              # Exported documentation
├── mobile-app.llmfeed.json       # Mobile app capabilities
└── session-*.llmfeed.json        # Interaction captures
```
Result: Automatic and hierarchical discovery of services by agents, with support for hosted platforms (GitHub, Notion, etc.).
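A hedged sketch of agent-side discovery: probe the .well-known/ space, accept the LLMFeed MIME type (or plain JSON as a fallback), and parse the capsule. The fallback order and error handling are assumptions.

```typescript
// Sketch of agent-side discovery: probe a site's .well-known/ space and parse
// the main capsule. Returns null when no agentic capsule is exposed.
async function discoverFeed(siteOrigin: string): Promise<unknown> {
  const candidates = [
    "/.well-known/mcp.llmfeed.json",
    "/.well-known/mcp-lite.llmfeed.json",
  ];

  for (const path of candidates) {
    const res = await fetch(new URL(path, siteOrigin));
    if (!res.ok) continue;

    // Accept the official application/llmfeed+json type, or plain JSON as a fallback.
    const type = res.headers.get("content-type") ?? "";
    if (!type.includes("json")) continue;

    return res.json();
  }
  return null;
}

// Usage: const capsule = await discoverFeed("https://example.com");
```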
7. Web Standards and Interoperability
We defined complete web integration:
Official MIME Type:
```
Content-Type: application/llmfeed+json
```
Browsers, APIs, and tools automatically recognize LLMFeed streams.
OpenAPI Hybridization:
```json
"capabilities": [
  {
    "type": "endpoint",
    "intent": "get status",
    "url": "https://api.example.com/status"
  },
  {
    "type": "openapi",
    "url": "https://example.com/.well-known/openapi.json",
    "description": "Full technical spec"
  }
]
```
Best of both worlds: Intent and trust via LLMFeed, technical specifications via OpenAPI.
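To illustrate the hybrid model, here is a sketch of how an agent might resolve such a capabilities array, keeping inline endpoints as-is and fetching the referenced OpenAPI documents; the merge of the two is deliberately left abstract.

```typescript
// Sketch: resolve hybrid capabilities. Inline endpoints are used directly;
// "openapi" entries are fetched so their operations can be merged in.
// The merge step itself is left abstract on purpose.
interface HybridCapability {
  type: string; // "endpoint" or "openapi"
  intent?: string;
  url: string;
  description?: string;
}

async function resolveCapabilities(
  capabilities: HybridCapability[]
): Promise<{ endpoints: HybridCapability[]; openapiSpecs: unknown[] }> {
  const endpoints = capabilities.filter((c) => c.type === "endpoint");

  const openapiSpecs = await Promise.all(
    capabilities
      .filter((c) => c.type === "openapi")
      .map(async (c) => (await fetch(c.url)).json())
  );

  return { endpoints, openapiSpecs };
}
```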
Our Revolution: Trust by Design
LLMCA: Our Certification Consortium
Unlike MCP, which leaves trust to the end user, we propose a structured certification ecosystem:
- LLMCA-L1: Validated self-declaration
- LLMCA-L2: Third-party technical audit
- LLMCA-Enterprise: Full-compliance certification
Native Cryptographic Signatures
Every LLMFeed stream can be:
- ✅ Cryptographically signed (ed25519, RSA)
- ✅ Certified by an independent authority
- ✅ Traced with complete metadata
- ✅ Verified in real-time
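For a concrete picture, here is a hedged verification sketch using Node's built-in crypto. The payload reconstruction (JSON-serializing the signed_blocks in declared order) is an assumption for illustration; the authoritative canonicalization rules are defined in the spec.

```typescript
// Hedged sketch of signature verification with Node's built-in crypto.
// We assume the signed payload is the JSON serialization of the signed_blocks
// in declared order; the exact canonicalization is defined by the LLMFeed spec.
import { createPublicKey, verify } from "node:crypto";

function verifyFeedSignature(
  feed: Record<string, unknown>,
  publicKeyPem: string
): boolean {
  const trust = feed.trust as { signed_blocks: string[] };
  const signature = feed.signature as { value: string };

  // Rebuild the signed payload from the blocks listed in trust.signed_blocks.
  const payload: Record<string, unknown> = {};
  for (const block of trust.signed_blocks) {
    payload[block] = feed[block];
  }

  return verify(
    null, // Ed25519 takes no digest algorithm
    Buffer.from(JSON.stringify(payload), "utf8"),
    createPublicKey(publicKeyPem),
    Buffer.from(signature.value, "base64")
  );
}
```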
Export Button: Democratization
Our flagship tool: any site can generate an LLMFeed stream with one click, without technical skills.
Impact: Mass adoption beyond expert developers.
Why Now?
1. The Urgency of Trust
With LLM proliferation, misinformation becomes a systemic risk. We need trust standards now, before the ecosystem becomes polluted with unverifiable content.
2. The Emergence of Autonomous Agents
2025 marks the arrival of true autonomous agents: personal assistants, transactional agents, business copilots. These agents need to interoperate safely.
3. Regulations Are Coming
The European AI Act mandates traceability. GDPR requires transparency. Companies need compliance-ready solutions.
LLMFeed isn't just technical innovation. It's our response to AI's societal challenges.
Our Strategy: Open Source & Ecosystem
Why Open Source?
We could have kept LLMFeed proprietary. But we chose open source for three reasons:
- Network effects: More adopters means more value
- Trust: A trust standard must itself be transparent
- Innovation: The community brings more than any closed team
Our Adoption Roadmap
- Phase 1: Excellent tools and documentation (✅ done)
- Phase 2: Early adopters (French startups, forward-thinking enterprises)
- Phase 3: De facto standards (major players, institutions)
- Phase 4: Mature ecosystem (LLMFeed-native agents)
Early Adoption Signals
Technical Validation by Leading LLMs
When we gave complete LLMFeed specifications to major LLMs:
"I know Kung-fu. π₯" - Claude 4
(Recognition of advanced technical innovations)
"MCP could become the HTTP of the agentic web." - Grok
(Vision of LLMFeed as foundational infrastructure)
"The best prompt is no prompt β it's a contract." - Claude 4
(Understanding the evolution toward declarative standards)
"Enhances trust, consistency, and agent performance through structured data." - Mistral
(Validation of the trust-first approach)
What impresses LLMs:
- Homomorphic encryption: "Revolutionary for privacy"
- Trust scoring: "Integrated trust intelligence"
- Progressive disclosure: "Optimal UX by design"
- Enterprise APIs: "Production-ready from day one"
Early Ecosystem Adoption
Developers:
- ✅ Python/TypeScript SDKs: >1000 downloads/week
- ✅ VS Code extension: Syntax highlighting support
- ✅ Export Button: Integration on >50 sites
Enterprises:
- ✅ French startups: 12 confirmed adopters
- ✅ LLMCA certification: 8 organizations in progress
- ✅ OVHcloud integration: Validated proof of concept
Standards:
- ✅ MIME type: IANA submission in progress
- ✅ OpenAPI hybrid: Support by Swagger/Postman
- ✅ Browser recognition: Chrome DevTools extension
Our Vision 2.0: Toward the Mature Ecosystem
Technical Roadmap
- Q3 2025: Multimodal support (images, audio, video)
- Q4 2025: Blockchain integration for notarization
- Q1 2026: Real-time standards for collaborative agents
- Q2 2026: LLMFeed Network - decentralized agent mesh
Our Long-term Vision
We see a world where:
- Every agent speaks LLMFeed natively with homomorphic encryption
- Every interaction is traceable, verifiable, and secure by design
- Trust is measurable via trust scoring and community flagging
- Innovation is accessible to all with progressive disclosure
- Privacy is preserved through homomorphic computation
- Interoperability is guaranteed via hybrid standards (MCP + OpenAPI)
Our Call to the Community
LLMFeed isn't our project. It's our shared future.
How to Contribute?
- Developers: Integrate LLMFeed into your AI projects
- Enterprises: Adopt LLMCA certification standards
- Researchers: Explore emerging use cases
- Regulators: Consider LLMFeed for compliance requirements
Join Our Movement
- Complete Spec: wellknownmcp.org/spec
- Tools: wellknownmcp.org/tools
- LLMCA Consortium: wellknownmcp.org/join
- Community: wellknownmcp.org/community
Conclusion: The Complete Infrastructure of the Agentic Web
When we created LLMFeed, some told us: "Anthropic already has MCP, why reinvent?"
After reading this article, the answer is obvious: we didn't reinvent; we revolutionized.
What MCP couldn't do:
- ❌ Homomorphic encryption for privacy-preserving AI
- ❌ Secure APIs with authentication and rate limiting
- ❌ Progressive disclosure by audience
- ❌ Sandbox policies for execution security
- ❌ Community flagging and decentralized moderation
- ❌ Dynamic trust scoring
- ❌ Third-party certification with LLMCA
- ❌ MIME types and native web integration
- ❌ OpenAPI hybridization
What LLMFeed enables today:
- ✅ All the above points + complete infrastructure
- ✅ Enterprise-grade security with cutting-edge cryptography
- ✅ Exceptional developer experience with integrated tools
- ✅ Community governance for a healthy ecosystem
- ✅ Future-proof architecture that's extensible and scalable
The Natural Evolution of Standards:
History shows us that technical standards evolve through qualitative leaps:
- HTTP/1.0 → HTTP/2 → HTTP/3 (performance + security)
- JSON → JSON-LD → Semantic standards (structure + context + trust)
- Web 1.0 → Web 2.0 → Agentic web (static + social + intelligent)
LLMFeed represents this qualitative leap for AI: from basic interoperability (MCP) to complete agentic web infrastructure.
Technical Evidence of Our Advancement:
- ✅ 12 specialized feed types vs 1 generic MCP format
- ✅ Homomorphic encryption: unique global innovation
- ✅ Trust scoring: 4 levels vs binary trust
- ✅ Enterprise APIs: auth + rate limiting + scoping vs complex servers
- ✅ Progressive disclosure: audience targeting vs static content
- ✅ LLMCA certification: trusted third party vs self-declaration
- ✅ Community flagging: distributed moderation vs no governance
- ✅ Web standards: MIME types + OpenAPI vs isolated protocol
The agentic web is coming. The question isn't if, but how.
With LLMFeed, we choose technical excellence, security by design, verifiable trust, and open innovation.
Join us. Tomorrow's infrastructure is being built today.
This article marks the beginning of an in-depth technical series on LLMFeed. Follow developments at wellknownmcp.org/en/news and contribute to shaping the future of the agentic web.
Published by the wellknownmcp.org team • June 2025 • CC BY-SA 4.0