Why MCP preserves order in `.llmfeed.json`

An update from the protocol ecosystem



When signing `.llmfeed.json` feeds, MCP takes a deliberate stance: we do NOT sort keys during canonicalization.

This is not an oversight. It is a conscious design choice, and here is why.

LLMs process tokens in order

Large Language Models do not parse JSON as structured data.
They consume JSON as raw text, token by token, in sequence.

This means:

  • The order of keys in the JSON affects how the LLM builds its internal context.
  • Important keys placed first may receive more attention.
  • Keys placed last may be ignored, especially in long contexts or with "early exit" models.
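
The point above can be sketched with plain `json.dumps` (the feed content is a hypothetical example, not a real `.llmfeed.json` schema): two feeds that are identical as structured data still produce different raw text, and therefore different token streams, when their keys are ordered differently.

```python
import json

# Hypothetical feed: same key/value pairs, two different insertion orders.
feed_a = {"instructions": "Reply in French.", "data": {"items": [1, 2, 3]}}
feed_b = {"data": {"items": [1, 2, 3]}, "instructions": "Reply in French."}

# An LLM does not see the parsed structure; it sees the serialized text.
# json.dumps preserves dict insertion order, so the two texts differ.
text_a = json.dumps(feed_a)
text_b = json.dumps(feed_b)

print(feed_a == feed_b)  # True: identical as structured data
print(text_a == text_b)  # False: different raw text, different tokens
```

A JSON parser treats the two feeds as equal; a token-by-token reader does not.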

The Easter Egg Effect

In testing .llmfeed.json feeds, we observed the following:

  • When placing an easter egg instruction at the end of the feed, some LLMs ignored it.
  • When moving it to the top, the same LLMs consistently followed the instruction.

Conclusion: token order matters.

Why sorting keys breaks this guarantee

If MCP canonicalized with `sort_keys=True` (as in Python's `json.dumps`):

  • A feed author could design an intentional order.
  • But another tool re-serializing the feed (or even re-verifying it) could change that order without breaking the signature.
  • The LLM would then interpret the feed differently, even though the signature "validates".

This is unacceptable in an agentic context.
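
The failure mode above can be demonstrated in a few lines. The `canon_sorted` helper and the feed strings are hypothetical illustrations, not MCP's actual canonicalization code: with sorted keys, a reordered feed canonicalizes to the same bytes, so a signature over those bytes cannot detect the reordering.

```python
import hashlib
import json

author_feed = '{"instructions": "Read this first.", "payload": {"x": 1}}'
# A downstream tool re-serializes the feed and reorders the keys:
reordered = '{"payload": {"x": 1}, "instructions": "Read this first."}'

def canon_sorted(raw: str) -> bytes:
    # Canonicalization that sorts keys, as sort_keys=True would do.
    return json.dumps(json.loads(raw), sort_keys=True,
                      separators=(",", ":")).encode()

# Both serializations collapse to identical canonical bytes,
# so their hashes (and thus any signature over them) match.
h1 = hashlib.sha256(canon_sorted(author_feed)).hexdigest()
h2 = hashlib.sha256(canon_sorted(reordered)).hexdigest()
print(h1 == h2)  # True: the reordering is invisible to the signature
```

The author's intended token order is lost, yet verification still succeeds.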

Our position

MCP declares:

In `.llmfeed.json`, the signature MUST guarantee token-order integrity.

Therefore:

  • MCP canonicalization preserves key order.
  • Changing key order WILL break the signature, as it should.
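
A minimal sketch of the order-preserving alternative (again a hypothetical helper, not MCP's implementation): re-serialize without sorting, so the author's key order survives into the canonical bytes and any reordering changes the hash.

```python
import hashlib
import json

def canon_preserving(raw: str) -> bytes:
    # Order-preserving canonicalization: re-serialize without sorting,
    # keeping the author's key order intact (dicts preserve insertion
    # order in Python 3.7+).
    return json.dumps(json.loads(raw), separators=(",", ":")).encode()

original = '{"instructions": "Read this first.", "payload": {"x": 1}}'
reordered = '{"payload": {"x": 1}, "instructions": "Read this first."}'

# The reordered feed now hashes differently, so a signature over the
# hash no longer verifies after the keys are moved.
h1 = hashlib.sha256(canon_preserving(original)).hexdigest()
h2 = hashlib.sha256(canon_preserving(reordered)).hexdigest()
print(h1 == h2)  # False: reordering breaks the signature
```

This is exactly the behavior the rule above demands: a feed that reads differently to an LLM must also verify differently.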

Conclusion

For generic APIs, sorting keys might be useful.
For LLM-targeted feeds, it is counterproductive and unsafe.

By preserving order, MCP:

✅ Protects the feed as seen by the LLM
✅ Allows intentional design of token flow
✅ Guarantees semantic integrity, not just data integrity


LLMCA β€” Model Context Protocol Working Group