The A2A Protocol: Standardizing Handoffs Between Heterogeneous Agents
How the A2A standard allows multi-vendor agents to discover, negotiate, and delegate tasks safely.

Key Takeaways
- Single-vendor, monolithic agentic systems are giving way to diverse, multi-vendor “swarms” of specialized agents.
- Without a standardized communication layer, agents from different platforms cannot negotiate, verify credentials, or pass context safely to one another.
- The emerging Agent-to-Agent (A2A) protocol solves this by establishing a universal handshake for discovery, delegation, and state transfer across heterogeneous AI systems.
We have spent the last two years hyper-focusing on the interaction between humans and agents. We built conversational interfaces, prompt engineering frameworks, and elaborate human-in-the-loop approval dashboards. But as enterprise AI matures, the most critical communication pathway is no longer Human-to-Agent. It is Agent-to-Agent (A2A).
Imagine a scenario where your company uses a highly specialized financial auditing agent. Simultaneously, your supply chain vendor uses a logistics agent built on a completely different tech stack and a different foundational model. When your financial agent needs to verify a discrepancy in a shipping invoice, it should not alert a human to send an email. It should directly query the vendor’s logistics agent, negotiate the data exchange, verify the findings, and update your ledger automatically.
This seamless interaction is the holy grail of the autonomous enterprise. But until recently, it was impractical: every integration required brittle, custom API wrappers.
This is exactly why I previously wrote MCP: The End of the API Wrapper. But MCP only solves how an agent reads data. How does an agent talk to another autonomous agent that might be running a completely different reasoning loop?
This is where the A2A Protocol comes in.
The Problem with Heterogeneous Swarms
If you build an entire multi-agent system within a single framework (like LangGraph, AutoGen, or similar orchestration libraries), inter-agent communication is trivial. The framework acts as the central router, passing Python objects and state dictionaries between nodes.
But the real world is not a single framework. The real world is a messy, heterogeneous mix of vendors, models, and architectures.
When Agent A (a custom internal application) needs to delegate a task to Agent B (a SaaS product from a third-party vendor), it encounters three massive friction points:
- Discovery: How does Agent A even know Agent B exists, what its capabilities are, and what input format it requires?
- Context Transfer: How does Agent A pass its current reasoning state (the “scratchpad”) to Agent B without losing nuance or overflowing Agent B’s context window?
- Trust and Verifiability: As discussed in A2A Architectures: Tools are not just Functions, how does Agent A prove it has the authorization to request this action, and how does it verify the response came from the legitimate Agent B?
Without a protocol, you end up hardcoding these integrations. You build brittle systems that break the moment a vendor updates their API schema or alters their prompt template.
The Mechanics of the A2A Protocol
The A2A (Agent-to-Agent) protocol acts as the universal handshake for autonomous systems. It is not an execution framework; it is an open standard for negotiation, authentication, and data exchange.
Let us break down a typical A2A interaction flow using standard cloud primitives.
Suppose an internal HR Agent (running on a standard Kubernetes cluster) needs to provision a new laptop for a new hire. It must delegate the hardware procurement to an external IT Vendor’s Agent.
```mermaid
sequenceDiagram
    participant HR as Internal HR Agent (Client)
    participant Auth as Identity Provider (OIDC)
    participant IT as IT Vendor Agent (Server)
    Note over HR,IT: Phase 1: Discovery
    HR->>IT: HTTP OPTIONS /a2a/procurement
    IT-->>HR: 200 OK (Returns A2A Manifest JSON)
    Note over HR,IT: Phase 2: Trust & Authentication
    HR->>Auth: Request short-lived token for Vendor Scope
    Auth-->>HR: Returns OIDC Token
    Note over HR,IT: Phase 3: Contextual Delegation
    HR->>IT: POST /a2a/procurement (Payload + Breadcrumbs + Token)
    IT-->>HR: 202 Accepted (Returns Callback URL)
    Note over HR: Agent sleeps to save compute
    Note over IT: IT Agent performs physical procurement actions
    Note over HR,IT: Phase 4: Async Callback
    IT->>HR: POST /webhook/callback (Success payload)
    HR->>HR: Update internal state & proceed
```

Phase 1: The Handshake and Discovery
The HR Agent initiates a connection to the IT Agent’s public A2A endpoint. It does not send the prompt yet. First, it sends an OPTIONS request. The IT Agent responds with an A2A Manifest: a standardized JSON schema detailing its exact capabilities, its required inputs (e.g., employee ID, budget code, shipping address), and its expected output structure. This allows the HR Agent to format its request dynamically without prior hardcoding.
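As a sketch, the discovery step might look like the following. The manifest field names here are illustrative assumptions, not taken from any published A2A schema; the point is that the client assembles its request from the manifest rather than from hardcoded knowledge.

```python
import json

# Hypothetical A2A manifest, as the IT Agent might return it in response
# to the OPTIONS request. Field names are illustrative, not from a spec.
MANIFEST = json.loads("""
{
  "agent": "it-vendor-procurement",
  "capabilities": ["laptop_procurement"],
  "inputs": {
    "employee_id": "string",
    "budget_code": "string",
    "shipping_address": "string"
  },
  "output": {"order_id": "string", "eta": "string"}
}
""")

def build_request(manifest: dict, values: dict) -> dict:
    """Format a request to match the manifest, failing fast on gaps."""
    missing = set(manifest["inputs"]) - set(values)
    if missing:
        raise ValueError(f"missing required inputs: {sorted(missing)}")
    # Only send fields the manifest declares; silently drop anything extra.
    return {field: values[field] for field in manifest["inputs"]}

request = build_request(MANIFEST, {
    "employee_id": "E-1042",
    "budget_code": "IT-CAPEX",
    "shipping_address": "1 Main St, Springfield",
    "internal_note": "not in manifest, will be dropped",
})
```

Because the request shape is derived from the manifest at runtime, a vendor can add a new required field and the client fails loudly at formatting time instead of sending a malformed delegation.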
Phase 2: Authentication and Trust
The HR Agent must prove it is authorized to spend the company’s money. It uses standard OpenID Connect (OIDC) or a cloud-native identity federation mechanism to generate a short-lived cryptographic token. This token is passed in the A2A header, cryptographically proving to the IT Agent that this request legitimately originated from the verified HR system.
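To make the short-lived-token idea concrete, here is a deliberately simplified stand-in: an HMAC-signed claim set with an expiry. A real deployment would obtain a JWT from the identity provider’s token endpoint and verify it against the provider’s public keys; the shared secret and claim names below are assumptions for illustration only.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret; in real OIDC, trust is anchored in the
# identity provider's signing keys, not a secret shared by both agents.
SHARED_SECRET = b"demo-secret"

def mint_token(subject: str, scope: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived signed token: base64(claims) + '.' + signature."""
    claims = {"sub": subject, "scope": scope, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token: str) -> dict:
    """Check the signature and expiry; return the claims if valid."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("signature mismatch")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise PermissionError("token expired")
    return claims

token = mint_token("hr-agent", "vendor:procure")
claims = verify_token(token)
```

The essential properties carry over to the real thing: the token is scoped, it expires quickly, and the receiving agent can verify it without calling back to the sender.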
Phase 3: Contextual Delegation
The HR Agent formats the request exactly as the manifest requires. Crucially, it does not just send a raw string of text. It sends a structured payload that includes the specific Task, the Constraints (e.g., “Must be under $2000, must arrive by Tuesday”), and the Breadcrumbs.
Breadcrumbs are a critical feature of the A2A protocol. They are a highly compressed summary of the reasoning steps the HR Agent took to reach this point. This provides the IT Agent with context without forcing it to ingest the entire prior conversation history, saving context-window tokens and compute cycles.
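A Phase 3 payload might be assembled like this. The field names (`task`, `constraints`, `breadcrumbs`) and the trivial truncation-based compressor are illustrative assumptions; a production system would likely use an LLM-generated summary for the breadcrumbs.

```python
# Hypothetical reasoning trace accumulated by the HR Agent's internal loop.
reasoning_trace = [
    "Ticket HR-881 opened: new hire E-1042 starts Monday.",
    "New hire's role is backend engineer; needs a developer laptop.",
    "Policy POL-7 caps hardware spend at $2000 per hire.",
    "Preferred vendor is under contract; delegate procurement externally.",
]

def compress_breadcrumbs(trace: list[str], max_items: int = 3) -> list[str]:
    """Keep only the most recent steps, truncated, as compact breadcrumbs.
    (A naive stand-in for a proper summarization pass.)"""
    return [step[:120] for step in trace[-max_items:]]

payload = {
    "task": "procure_laptop",
    "inputs": {"employee_id": "E-1042", "budget_code": "IT-CAPEX"},
    "constraints": {"max_price_usd": 2000, "deliver_by": "Tuesday"},
    "breadcrumbs": compress_breadcrumbs(reasoning_trace),
}
```

Note that the constraints are typed data (`2000` as a number), not prose: the IT Agent can enforce the budget cap with a comparison instead of parsing a sentence.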
Phase 4: The Handoff and Callback
The IT Agent accepts the payload. If the procurement takes five minutes (perhaps it needs to check real-time inventory across multiple databases), it does not hold the HTTP connection open. That would be a massive waste of resources. It returns a 202 Accepted status with a callback webhook URL. The HR Agent goes to sleep. When the IT Agent secures the laptop, it fires a structured JSON response back to the HR Agent’s webhook, completing the loop.
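The 202-plus-callback pattern can be sketched with in-process stand-ins for the two HTTP services. The class names and result fields here are hypothetical; in reality both sides would expose real HTTP endpoints and the callback would be a signed POST to the webhook URL.

```python
import uuid

class ITVendorAgent:
    """Stand-in for the vendor's A2A server endpoint."""
    def __init__(self):
        self.jobs = {}

    def accept(self, payload: dict, callback) -> dict:
        """Accept the delegation and return immediately (the '202')."""
        job_id = str(uuid.uuid4())
        self.jobs[job_id] = (payload, callback)
        return {"status": 202, "job_id": job_id}

    def finish(self, job_id: str) -> None:
        """Later, when procurement completes, fire the callback."""
        payload, callback = self.jobs.pop(job_id)
        callback({"job_id": job_id, "order_id": "ORD-1", "status": "procured"})

class HRAgent:
    """Stand-in for the client; sleeps until its webhook fires."""
    def __init__(self):
        self.state = "waiting"

    def on_callback(self, result: dict) -> None:
        # In reality this is the HR Agent's /webhook/callback handler.
        self.state = f"done:{result['order_id']}"

hr, it = HRAgent(), ITVendorAgent()
response = it.accept({"task": "procure_laptop"}, hr.on_callback)
# ... minutes pass; the HR Agent holds no open connection ...
it.finish(response["job_id"])
```

The key design point survives the simplification: between `accept` and `finish`, neither agent consumes a connection or a compute slot waiting on the other.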
State Transfer vs Prompt Chaining
The critical innovation of the A2A protocol is that it moves the industry away from “prompt chaining.” In the early days of agents, if Agent A wanted Agent B to do something, it literally wrote an English paragraph and passed it as a string to Agent B.
This is incredibly inefficient. It relies on the receiving agent to perfectly parse the English text, which introduces hallucinations and non-deterministic behavior.
A2A mandates that state transfer happens via strict, typed JSON schemas. The agents might use frontier LLMs to reason internally, but when they talk to each other across the network boundary, they speak in rigid, validated data structures. This is the only way to achieve the reliability required for enterprise production.
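The difference is easy to see in code. Below is a minimal hand-rolled type check standing in for full JSON Schema validation (a production system would use a proper validation library); the schema fields are the illustrative ones from the procurement example.

```python
# A minimal schema: field name -> required Python type. This is a toy
# stand-in for JSON Schema, chosen to keep the example dependency-free.
SCHEMA = {
    "task": str,
    "constraints": dict,
    "breadcrumbs": list,
}

def validate(payload: dict, schema: dict) -> list[str]:
    """Return a list of validation errors; an empty list means it conforms."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

# A typed payload either validates deterministically or is rejected
# with a machine-readable error list...
good_errors = validate(
    {"task": "procure_laptop", "constraints": {}, "breadcrumbs": []}, SCHEMA
)
# ...whereas a malformed one fails before any LLM ever sees it.
bad_errors = validate({"task": 42, "constraints": {}}, SCHEMA)
```

Compare this with prompt chaining, where the receiving model must guess whether “should be under two grand” is a constraint, a preference, or flavor text. Validation either passes or fails; there is no interpretation step to hallucinate through.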
Why Standardization is Inevitable
We are witnessing the exact same evolution that occurred with microservices in the 2010s. Initially, microservices communicated using a chaotic mix of custom REST protocols, XML, and bespoke RPC calls. Eventually, the industry converged on standards like gRPC, OpenAPI, and GraphQL because the friction of custom integrations became unbearable.
The A2A protocol is the OpenAPI of the agentic era.
By defining a strict, predictable standard for how agents discover each other, authenticate, and exchange structured intent, we remove the engineering bottleneck of autonomous systems. We stop building monolithic “God Agents” that try to do everything, and we start building highly specialized, lean agents that know exactly how to ask for help from the wider ecosystem.
If your enterprise AI strategy relies on building one massive agent to rule your entire company, you are going to fail. The future belongs to the swarm, and the swarm speaks A2A.



