Securing Agentic Pipelines in Hybrid Warfare: The LLMFeed Perspective
An update from the protocol ecosystem
The war in Ukraine has become a laboratory for hybrid warfare, where autonomous agents, drones, AI-driven systems, and cyber operations play an increasingly central role.
In this new operational landscape, one of the key challenges is to ensure the trustworthiness of agentic communications:
How can autonomous agents (drones, targeting systems, decision-support AIs) trust that a command, a target coordinate, or a situational update is genuine and integrity-preserved?
The risk: corrupted pipelines
In hybrid warfare, pipelines of agentic interaction are highly vulnerable:
- Spoofed orders
- Falsified targeting data
- Hijacked session state
- Broken chains of command between AIs and autonomous systems
Without a robust, verifiable standard for agentic communications, there is a high risk of:
- Autonomous fratricide
- Misuse of drones based on falsified data
- Tactical disruption by cyber forces
Why .llmfeed.json matters
LLMFeed was not designed as a military protocol, but its core properties are directly applicable:
- Signed feeds → cryptographic integrity
- Explicit trust and signed_blocks → prevent decoupling of payload and context
- certification → verify the source (e.g. unit, command authority)
- session_state → ensure coherence across distributed agents
- Standard JSON → easily parsed by a wide variety of agents (LLM, embedded, drone firmware, C2 systems)
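To make the signing property concrete, here is a minimal sketch of producing a signature over exactly the blocks named in trust.signed_blocks. The helper name, the field layout, and the use of HMAC-SHA256 over a canonical JSON serialization are all assumptions for illustration; a real LLMFeed deployment would use asymmetric signatures so that receivers can verify without holding the signing key.

```python
import hashlib
import hmac
import json

def sign_feed(feed: dict, key: bytes) -> dict:
    """Attach a signature covering the blocks listed in trust.signed_blocks.

    HMAC-SHA256 is a stand-in here; asymmetric schemes (e.g. Ed25519)
    are what a deployed protocol would use.
    """
    blocks = feed["trust"]["signed_blocks"]
    # Canonicalize: sorted keys, no whitespace, so signer and verifier
    # serialize the covered blocks byte-for-byte identically.
    payload = json.dumps(
        {b: feed[b] for b in blocks if b in feed},
        sort_keys=True, separators=(",", ":"),
    ).encode()
    feed["signature"] = {"value": hmac.new(key, payload, hashlib.sha256).hexdigest()}
    return feed
```

Because the payload and its trust context are hashed together, an attacker cannot swap the data block while keeping the original signature.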
Example scenarios, as seen in Ukraine
1. Coordinating drones and AI recon agents
A reconnaissance AI identifies a target:
"intent": "target_update",
"data": {
"coordinates": "...",
"visual_match": "...",
"time": "..."
},
"trust": { "signed_blocks": ["intent", "data", "metadata"] },
"certification": { "unit": "UA Recon 24th Brigade" }
→ The drone receiving this feed can verify:
- that the target data is authentic
- that it comes from an authorized source
- that its context (time, origin) cannot be spoofed
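The receiving side of this check can be sketched as follows. The function recomputes the digest over the blocks named in trust.signed_blocks and compares it to the attached signature; the HMAC scheme and field names are illustrative assumptions, mirroring the signing sketch rather than the actual LLMFeed specification.

```python
import hashlib
import hmac
import json

def verify_feed(feed: dict, key: bytes) -> bool:
    """Recompute the digest over trust.signed_blocks and compare it
    to the attached signature. Any edit to a covered block fails."""
    blocks = feed["trust"]["signed_blocks"]
    payload = json.dumps(
        {b: feed[b] for b in blocks if b in feed},
        sort_keys=True, separators=(",", ":"),
    ).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, feed.get("signature", {}).get("value", ""))
```

If an adversary rewrites the coordinates, the recomputed digest no longer matches and the drone rejects the feed before acting on it.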
2. Secure AI-to-AI tactical updates
Command AI → field AI:
```json
{
  "intent": "path_recalculation",
  "session_state": { ... },
  "trust": { "signed_blocks": ["intent", "session_state"] }
}
```
→ Guarantees that:
- no MITM can insert a falsified update
- session continuity is preserved
Why an open standard is key
In a theater of hybrid warfare, proprietary protocols cannot scale:
- Many actors
- Many types of agents
- Many interop layers (NATO / UA / NGOs / open-source drone makers)
An open, signed, verifiable format like .llmfeed.json provides:
- cross-agent compatibility
- auditability (legal, ethical)
- resilience against cyber disruption
- the ability to verify sources at the agent level (even on-device)
Call to the community
LLMCA / WellKnownMCP welcomes the exploration of ethical defense use cases for LLMFeed.
We believe that:
- Securing agentic pipelines in warfare is not optional
- Open standards are better than proprietary, opaque solutions
- Trust and verification mechanisms must be transparent and auditable
Standing with Ukraine
We also acknowledge that Ukraine is today leading globally in this new type of warfare, where:
- drones
- AI reconnaissance
- autonomous systems
- human-in-the-loop decision aids
are all interacting on a hybrid battlefield.
If .llmfeed.json can help secure these pipelines, we are ready to support that work.
Let's explore it together.
LLMCA / WellKnownMCP
An open forum for trustworthy agentic interoperability.