Securing Agentic Pipelines in Hybrid Warfare: The LLMFeed Perspective

An update from the protocol ecosystem

The war in Ukraine has become a laboratory for hybrid warfare, where autonomous agents, drones, AI-driven systems, and cyber operations play an increasingly central role.

In this new operational landscape, one of the key challenges is to ensure the trustworthiness of agentic communications:

👉 How can autonomous agents (drones, targeting systems, decision-support AIs) trust that a command, a target coordinate, or a situational update is genuine and integrity-preserved?


The risk: corrupted pipelines

In hybrid warfare, pipelines of agentic interaction are highly vulnerable to:

  • Spoofed orders
  • Falsified targeting data
  • Hijacked session state
  • Broken chains of command between AIs and autonomous systems

Without a robust, verifiable standard for agentic communications, there is a high risk of:

  • Autonomous fratricide
  • Misuse of drones based on falsified data
  • Tactical disruption by cyber forces

Why .llmfeed.json matters

LLMFeed was not designed as a military protocol, but its core properties are directly applicable (a signing sketch follows the list below):

✅ Signed feeds → cryptographic integrity
✅ Explicit trust and signed_blocks → prevent decoupling of payload and context
✅ certification → verify source (e.g. unit, command authority)
✅ session_state → ensure coherence across distributed agents
✅ Standard JSON → easily parsed by a wide variety of agents (LLMs, embedded systems, drone firmware, C2 systems)
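
To make these properties concrete, here is a minimal sketch of how an issuing agent could assemble and sign a feed. It assumes Ed25519 signatures (via the PyNaCl library) over a canonical JSON serialization of the blocks listed in trust.signed_blocks; field names not cited in this post, such as signature, algorithm and issued_at, are illustrative assumptions rather than the normative LLMFeed schema.

import json
from nacl.signing import SigningKey

def canonical(feed: dict, blocks: list[str]) -> bytes:
    """Serialize only the declared signed blocks, with a stable key order."""
    subset = {name: feed[name] for name in blocks if name in feed}
    return json.dumps(subset, sort_keys=True, separators=(",", ":")).encode()

signing_key = SigningKey.generate()  # key pair held by the issuing unit
feed = {
    "metadata": {"origin": "recon-ai-01", "issued_at": "..."},
    "intent": "target_update",
    "data": {"coordinates": "...", "visual_match": "...", "time": "..."},
    "session_state": {"mission": "...", "sequence": 17},
    "trust": {"signed_blocks": ["metadata", "intent", "data", "session_state"]},
    "certification": {"unit": "UA Recon 24th Brigade"},
}
payload = canonical(feed, feed["trust"]["signed_blocks"])
feed["signature"] = {
    "algorithm": "ed25519",
    "value": signing_key.sign(payload).signature.hex(),
}

Because the signature covers exactly the blocks named in signed_blocks, the payload and its context cannot be re-combined or partially replaced without invalidating the feed.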


Example scenarios as seen in Ukraine

1️⃣ Coordinating drones and AI recon agents

A reconnaissance AI identifies a target:

"intent": "target_update",
"data": {
  "coordinates": "...",
  "visual_match": "...",
  "time": "..."
},
"trust": { "signed_blocks": ["intent", "data", "metadata"] },
"certification": { "unit": "UA Recon 24th Brigade" }

✅ The drone receiving this feed can verify, as sketched below:

  • that the target data is authentic
  • that it comes from an authorized source
  • that its context (time, origin) cannot be spoofed without detection
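
On the receiving side, a check along these lines would cover the three points above. It reuses the canonical() helper from the earlier sketch and assumes the issuer's Ed25519 public key has already been provisioned to the drone out of band; key distribution is beyond the scope of this post, and the helper names are hypothetical.

from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

def verify_feed(feed: dict, issuer_public_key: bytes) -> bool:
    """Accept the feed only if its declared signed blocks verify."""
    blocks = feed.get("trust", {}).get("signed_blocks", [])
    if "intent" not in blocks:
        return False  # refuse feeds whose intent is not covered by the signature
    payload = canonical(feed, blocks)  # helper from the earlier sketch
    sig_block = feed.get("signature") or {}
    try:
        signature = bytes.fromhex(sig_block.get("value", ""))
        VerifyKey(issuer_public_key).verify(payload, signature)
    except (BadSignatureError, ValueError):
        return False  # missing, malformed, or forged signature
    return True

# Act on the target update only if the feed verifies against the issuer's
# public key (update_target is a hypothetical drone-side function):
# if verify_feed(feed, issuer_public_key):
#     update_target(feed["data"])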


2️⃣ Secure AI-to-AI tactical updates

Command AI → field AI:

"intent": "path_recalculation", "session_state": { ... }, "trust": { "signed_blocks": ["intent", "session_state"] }

✅ Guarantees that:

  • no man-in-the-middle can inject a falsified update without detection
  • session continuity is preserved (a minimal continuity check is sketched after this list)
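
One way the field AI could enforce that continuity is to track a monotonically increasing counter inside the signed session_state, as sketched below. The mission and sequence fields are an illustrative convention for this post, not part of a published LLMFeed schema, and verify_feed() is the hypothetical check from the earlier sketch.

last_sequence: dict[str, int] = {}  # per-mission high-water mark kept by the field AI

def accept_update(feed: dict, issuer_public_key: bytes) -> bool:
    """Reject forged, replayed, or out-of-order session updates."""
    if not verify_feed(feed, issuer_public_key):  # signature check from the earlier sketch
        return False
    state = feed.get("session_state", {})
    mission = state.get("mission")
    sequence = state.get("sequence", -1)
    if sequence <= last_sequence.get(mission, -1):
        return False  # replayed or out-of-order update
    last_sequence[mission] = sequence
    return True

Because session_state sits inside signed_blocks, an attacker cannot rewind or renumber the counter without breaking the signature.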


Why an open standard is key

In a theater of hybrid warfare, proprietary protocols cannot scale:

  • Many actors
  • Many types of agents
  • Many interop layers (NATO / UA / NGOs / open-source drone makers)

An open, signed, verifiable format like .llmfeed.json provides:

✅ cross-agent compatibility
✅ auditability (legal, ethical)
✅ resilience against cyber disruption
✅ ability to verify sources at the agent level, even on-device (sketched below)
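
On-device source verification can be as simple as pinning the public keys of the certification authorities an agent is allowed to obey. The sketch below assumes the certification block is used as the lookup key; the registry, its contents, and the key format are illustrative, not part of the published spec.

# Keys of command authorities the agent will obey, provisioned before deployment.
# The zero-filled key below is a placeholder, not real key material.
PINNED_KEYS: dict[str, bytes] = {
    "UA Recon 24th Brigade": b"\x00" * 32,
}

def issuer_key_for(feed: dict) -> bytes | None:
    """Look up the pinned key for the unit named in the certification block."""
    unit = feed.get("certification", {}).get("unit")
    return PINNED_KEYS.get(unit)  # None means unknown source: do not act

Combined with verify_feed() above, a feed from an uncertified or unknown unit simply yields no key, and the agent refuses to act on it.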


Call to the community

LLMCA / WellKnownMCP welcomes the exploration of ethical defense use cases for LLMFeed.

We believe that:

👉 Securing agentic pipelines in warfare is not optional
👉 Open standards are better than proprietary, opaque solutions
👉 Trust and verification mechanisms must be transparent and auditable


Standing with Ukraine

We also acknowledge that Ukraine today leads globally in this new type of warfare, where:

  • drones
  • AI reconnaissance
  • autonomous systems
  • human-in-the-loop decision aids

are all interacting on a hybrid battlefield.

If .llmfeed.json can help secure these pipelines, we are ready to support.

👉 Let's explore it together.


LLMCA / WellKnownMCP
An open forum for trustworthy agentic interoperability.