Architectural · 8 min read

ADK vs. LangChain: The Protocol-First Shift

Class-based chains are a legacy pattern. Discover why Google ADK and its open Agent Protocol are the future of interoperable, production-grade multi-agent systems.


In the early days of a new technology, we tend to build in silos. We create libraries that wrap functionality into neat, proprietary classes. We build “frameworks” that promise to handle everything for us, as long as we stay inside their specific walled garden.

In the world of AI agents, LangChain (v0.1 and v0.2) was the first of these gardens. It served a vital purpose: it provided the primitives for a community that was still trying to figure out how to chain a prompt to a database. But as we move from “cool demos” to “mission-critical infrastructure,” the limitations of a library-centric approach—even one as evolved as LangChain v0.3+—are becoming the center of architectural debate.

We are entering the era of Protocol-First AI. And the shift from LangChain “Chains” to the Google Agent Development Kit (ADK) is the perfect illustration of this transition.

The Problem with Library-Locked Intelligence

If you have spent any time building with LangChain, you know the “Abstraction Trap.” You start with a simple LLMChain. It works. Then you want to add memory, so you import BufferMemory. Then you want to add tools, so you use the AgentExecutor.

Before you know it, your business logic is buried under six layers of inheritance. You are no longer writing code; you are configuration-wrangling a library.

The issue is that LangChain—even with the powerful state management of LangGraph v0.2+—is primarily an SDK-driven ecosystem. To move an agent from one environment to another, or to have two agents talk to each other across different tech stacks, you are often tethered to the library’s specific message schemas and serialization logic. This is fine for a monolith, but it can become a friction point for a truly distributed, polyglot multi-agent future.

ADK: Intelligence as a Protocol

The Google Agent Development Kit (ADK) takes a fundamentally different approach. It does not try to be a monolithic library that “contains” your agent. Instead, it treats the agent as a Service that communicates via an open Agent Protocol.

In the ADK, an agent is defined by its ability to handle standardized JSON-RPC or HTTP messages. It does not matter if Agent A is written in TypeScript and Agent B is written in Python. As long as they both speak the same protocol, they can collaborate.
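A minimal sketch of what such a message might look like, in Python. The method name and params shape here are illustrative assumptions for the sake of the example, not the literal ADK or Agent Protocol schema:

```python
import json

# Hypothetical JSON-RPC 2.0 envelope for an agent-to-agent request.
# "tasks/create" and the params shape are illustrative, not normative.
def make_task_request(task_id: str, goal: str) -> str:
    envelope = {
        "jsonrpc": "2.0",
        "id": task_id,
        "method": "tasks/create",
        "params": {"goal": goal},
    }
    return json.dumps(envelope)

request = make_task_request("task-001", "Summarize the quarterly report")
parsed = json.loads(request)
```

Because the envelope is plain JSON over a standard transport, the receiving agent's implementation language is irrelevant: anything that can parse JSON can participate.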

This is the “REST moment” for AI. Just as we moved from proprietary RPC systems to the open, interoperable world of Web APIs, we are now moving from library-locked “chains” to protocol-governed “sessions.”

Sessions & Artifacts: Framework vs. Protocol

The architectural tradeoff becomes most apparent in how systems handle State.

In LangChain v0.3+, state management has reached a high level of maturity. Features like RunnableWithMessageHistory and LangGraph Threads provide robust ways to persist conversation history and manage complex agent states across turns. Similarly, the introduction of ToolMessage.artifact and support for the Model Context Protocol (MCP) allow LangChain agents to produce and consume structured outputs.

However, in the LangChain model, these are framework-level abstractions. The “Session” is a wrapper around a runnable; the “Artifact” is a property of a message object. They exist because the library provides the code to manage them.

ADK introduces the concept of Sessions & Artifacts as Protocol Primitives.

A Session in ADK is not a library object; it is a first-class data entity defined by the Agent Protocol (a JSON-RPC/REST standard). When an agent produces an Artifact—a structured record of work like a code snippet or a research finding—it is registering that data into the session at the network layer.
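To make the distinction concrete, here is a sketch of a Session as plain data with Artifacts registered into it. The field names are assumptions chosen for illustration; the real protocol defines its own schema:

```python
from dataclasses import dataclass, field

# Illustrative only: a Session as a plain data entity, not a library object.
@dataclass
class Artifact:
    artifact_type: str  # e.g. "FinancialTable"
    uri: str            # where the actual payload lives

@dataclass
class Session:
    session_id: str
    artifacts: list = field(default_factory=list)

    def register_artifact(self, artifact: Artifact) -> None:
        self.artifacts.append(artifact)

session = Session("sess-42")
session.register_artifact(Artifact("FinancialTable", "gcs://bucket/table.json"))
```

The point of the sketch: everything here serializes trivially to JSON, so any service that can read the session store can consume the artifacts, with or without an SDK.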

The difference is subtle in code but stark in consequence:

  • LangChain (SDK-Centric): You use the library to create a session. Interoperability depends on the “other side” speaking the same SDK dialect.
  • ADK (Protocol-Centric): The protocol defines the session. You can build a protocol-compliant agent in Rust, Go, or a legacy C++ service, and it will “just work” with the same session and artifacts without needing the ADK library installed.

This makes the ADK model arguably more resilient for “Async Agency.” An agent can perform a task, register an artifact, and completely shut down. A totally different service—perhaps one that doesn’t even use AI—can then “wake up,” read the artifact from the session via a standard API call, and continue the workflow.

Observability as a Foundation, Not an Afterthought

Finally, there is the issue of debugging. Debugging a complex LangChain AgentExecutor is a notoriously difficult task. You often find yourself digging through deep stack traces to understand why a specific tool was (or wasn’t) called.

ADK is built on top of the same observability principles we discussed in earlier posts. Because everything is protocol-based, tracing is built into the foundation. Every message, every turn, and every artifact generation is automatically logged with high-resolution telemetry.

You don’t “add” logging to ADK; the ADK is the log of the agent’s behavior. This makes it far easier to audit, evaluate, and optimize your agents as they scale.
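A minimal sketch of the idea, assuming a hypothetical `trace` helper: because agent I/O already consists of structured protocol envelopes, “observability” can be as simple as timestamping and persisting each one. This is not the ADK’s actual telemetry API, just an illustration of why protocol messages double as audit records:

```python
import time

# Illustrative: every protocol envelope becomes a telemetry record for free.
def trace(envelope: dict, log: list) -> dict:
    record = {"ts": time.time(), **envelope}
    log.append(record)
    return record

log = []
trace({"method": "steps/execute", "params": {"tool": "search"}}, log)
```

Contrast this with instrumenting a deep stack of library classes, where each layer must be wired up for logging separately.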

The Agent Protocol: Architecture of the Message

To truly move away from library-lock, we have to agree on a common language. The Agent Protocol (which ADK implements) is that language. It defines a set of standard endpoints and message formats that any agent, regardless of its internal logic, must adhere to.

A typical interaction in the Agent Protocol follows a request-response cycle that looks like this:

  1. Request: A “Task” is created. This is a high-level goal (e.g., “Analyze this quarterly report”).
  2. Steps: The agent executes a series of “Steps.” Each step is a discrete unit of work, like a tool call or a reasoning block.
  3. Artifacts: Each step can produce “Artifacts.” These are the tangible outputs of the work.
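The hierarchy above can be sketched as plain data. The field names are illustrative assumptions, not the normative Agent Protocol schema:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative Task -> Steps -> Artifacts hierarchy.
@dataclass
class Artifact:
    name: str
    uri: str

@dataclass
class Step:
    description: str
    artifacts: List[Artifact] = field(default_factory=list)

@dataclass
class Task:
    goal: str
    steps: List[Step] = field(default_factory=list)

task = Task("Analyze this quarterly report")
step = Step("Extract revenue figures")
step.artifacts.append(Artifact("revenue_table", "gcs://bucket/revenue.json"))
task.steps.append(step)
```

Nothing here requires an agent framework at all, which is exactly the point: the structure lives at the data layer, not in library classes.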

Because this structure is defined at the network layer, you can use standard web tools to monitor it. You can use a generic HTTP proxy to inspect the “thoughts” of your agent. You can use a standard database to store the “artifacts” without needing a specialized LangChain VectorStore wrapper. This simplicity is its greatest strength.

Artifact-Driven Workflows: Passing the Baton

In LangChain, if you want Agent A to pass data to Agent B, you often have to build a “Multi-Agent Chain” where the output of one is piped into the input of the other. This creates a hard coupling. If the format of Agent A’s output changes slightly, Agent B breaks.

ADK solves this through Artifact-Driven Workflows.

Agent A doesn’t “call” Agent B. Agent A simply finishes its task and registers an Artifact in the session (e.g., Type: FinancialTable, URI: gcs://bucket/table.json). Agent B is then triggered by the session manager. Agent B looks at the session, sees the FinancialTable artifact, and begins its work.

This is a Pub/Sub model for intelligence. It allows you to swap out Agent A (the “Producer”) with a newer, better model without ever touching the code for Agent B (the “Consumer”). As long as the artifact schema is maintained, the system remains decoupled and resilient.
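A toy sketch of the decoupling, assuming a hypothetical `SessionBus`: the consumer subscribes to an artifact type, never to a specific producer, so either side can be swapped independently:

```python
from collections import defaultdict

# Illustrative pub/sub hand-off keyed on artifact type.
class SessionBus:
    def __init__(self):
        self.subscribers = defaultdict(list)
        self.artifacts = []

    def subscribe(self, artifact_type, handler):
        self.subscribers[artifact_type].append(handler)

    def register(self, artifact_type, uri):
        self.artifacts.append((artifact_type, uri))
        for handler in self.subscribers[artifact_type]:
            handler(uri)

results = []
bus = SessionBus()
# Agent B (consumer): reacts to any FinancialTable, whoever produced it.
bus.subscribe("FinancialTable", lambda uri: results.append(f"analyzing {uri}"))
# Agent A (producer): registers its output and can shut down immediately.
bus.register("FinancialTable", "gcs://bucket/table.json")
```

Replacing Agent A with a better model changes nothing on Agent B’s side, as long as the `FinancialTable` schema is honored.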

Decision Matrix: When to Shift?

Should you rewrite your entire LangChain stack in ADK tomorrow? Not necessarily. Library-based frameworks are still excellent for “Low-Stakes Logic”—internal tools, simple chatbots, or experimental prototypes where interoperability isn’t a concern.

However, if your roadmap includes any of the following, the shift to a protocol-first architecture like ADK is mandatory:

  • Cross-Language Agents: You need a JAX-based Python agent to collaborate with a React-based frontend agent.
  • Human-in-the-loop Persistence: You need to pause an agentic task for 24 hours while a human approves an artifact.
  • Audit-Scale Telemetry: You need to store the reasoning traces of millions of transactions for regulatory compliance.
  • Agent Marketplaces: You want to use a 3rd-party “Research Agent” as a module in your own proprietary “Strategy Agent.”

Protocol Versioning: The Contract of the Future

One of the often-overlooked benefits of the protocol-first approach is Versioning. In a library-based system, a “breaking change” means hours of refactoring your internal class structures. In a protocol-first system, a breaking change is simply a new version of the message schema.

By using tools like Protocol Buffers (Protobuf) or gRPC inside the ADK framework, we can ensure forward and backward compatibility. This allows you to run a “Legacy Logic” agent alongside a “State-of-the-Art” agent, both fulfilling different parts of the same session without conflict. The session manager acts as the mediator, ensuring that each agent receives data in the format it expects. This level of decoupling is what allows an enterprise to evolve its AI stack piece-by-piece, rather than being forced into a high-risk “big-bang” migration every time a new model version is released.
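The “tolerant reader” pattern that makes this coexistence work can be sketched in a few lines. The field names and the v2 `checksum` addition are hypothetical, purely to illustrate the compatibility rule:

```python
# Illustrative: a v2 consumer that still accepts v1 messages, because
# every field added after v1 is optional with a sensible default.
def read_artifact(message: dict) -> dict:
    return {
        "type": message["type"],                       # required since v1
        "uri": message["uri"],                         # required since v1
        "schema_version": message.get("schema_version", 1),
        "checksum": message.get("checksum"),           # added in v2; optional
    }

v1 = read_artifact({"type": "FinancialTable", "uri": "gcs://b/t.json"})
v2 = read_artifact({"type": "FinancialTable", "uri": "gcs://b/t.json",
                    "schema_version": 2, "checksum": "abc123"})
```

Protobuf formalizes exactly this discipline with numbered, optional fields, which is why it pairs naturally with a protocol-first agent stack.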

Conclusion: Choosing the Foundation

LangChain served us well during the “Wild West” of early 2023. It showed us what was possible. But as we build the permanent infrastructure of the AI economy, we need foundations that are built on protocols, not just wrappers.

Google ADK represents the professionalization of the agentic stack. It is the move from “chains” to “sessions,” from “objects” to “protocols,” and from “magic” to “observability.”

In the long run, the libraries that win are the ones that disappear. They become the invisible plumbing of the network. By choosing ADK, you are betting on the protocol. And in the history of technology, the protocol always wins.
