Frameworks / Orchestration
Google ADK 2.0 Alpha Adds a Workflow Runtime and Task API
Google ADK 2.0 alpha introduces graph-based workflow orchestration and structured task delegation. Here is what it changes for AI agent builders.

News coverage
Agent News Watch for teams building and operating AI agents.
Google's ADK 2.0 alpha is one of the clearest framework moves in the current agent tooling cycle because it does not just add another helper or model integration. It changes the center of gravity of the framework. With a new workflow runtime and a Task API for structured delegation, Google is signaling that production agent teams want first-class orchestration primitives, not just a nicer interface around single-agent tool calling.
Announcement summary
The official v2.0.0a1 release note introduces two headline capabilities. The first is a workflow runtime: a graph-based execution engine for deterministic execution flows in agentic applications, with routing, fan-out and fan-in, loops, retry, state management, dynamic nodes, human-in-the-loop, and nested workflows. The second is a Task API: structured agent-to-agent delegation with multi-turn task mode, single-turn controlled output, mixed delegation patterns, human-in-the-loop, and task agents as workflow nodes.
The alpha README makes the release posture clear. This is an early preview with breaking changes to the agent API, event model, and session schema, and Google explicitly warns teams not to use ADK 1.x databases or sessions with the new version. The installation guidance also matters: you only get the alpha if you pin google-adk==2.0.0a1 directly, which is Google's way of telling builders to treat this as an intentional evaluation track.
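Following that guidance, opting in means pinning the alpha version explicitly rather than relying on the default resolver:

```shell
# Opt in to the alpha explicitly; a bare `pip install google-adk`
# keeps resolving to the stable 1.x line.
pip install google-adk==2.0.0a1
```

Pinning in a scratch virtual environment, separate from any 1.x project, is the safest way to honor the warning about incompatible databases and sessions.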
What the new runtime actually changes
The biggest practical change is that workflow control becomes a native object in the framework instead of something teams are expected to assemble on their own. The alpha README shows a Workflow object that connects agent nodes through edges. That may seem like a small API choice, but it changes how builders think about the system. A multi-step agent flow is no longer just a prompt loop with tools. It is a graph with explicit structure, state, and control points, which is exactly the shift our AI Agent Architecture and AI Agent Orchestration guides are meant to clarify.
That matters because many teams hit the same ceiling with first-generation agent frameworks. Single-agent tool use is easy to demo, but harder to scale into systems that require deterministic branching, retry, approvals, nested subflows, or controlled delegation. ADK 2.0 alpha is Google's answer to that ceiling. Instead of treating those needs as bolt-ons, it moves them into the runtime layer.
The Task API pushes in the same direction. Structured delegation means a parent agent or workflow can hand work to another agent with more explicit task semantics instead of relying on loosely framed internal prompts. If Google follows through on this model, teams may be able to reason about inter-agent work in a more inspectable and governable way than the usual agent-calls-another-agent pattern.
Which teams benefit first
The first beneficiaries are teams building multi-step internal workflows where predictability matters more than maximal autonomy. Think support triage pipelines, compliance review flows, developer-assist systems with approval gates, or research agents that need deterministic branching before synthesis. Those teams usually want the flexibility of LLM-driven reasoning, but they also want graph-level controls for retries, pauses, and human checkpoints.
Platform teams are another obvious audience. If your job is to create a shared agent runtime for multiple internal teams, a graph-based workflow engine plus structured delegation is more useful than a framework that only makes single-agent tool use easier. It gives you clearer surfaces for observability, policy, and failure handling. That also makes AI Agent Use Cases the right companion when the real question is which workflow has enough value and coordination complexity to justify a graph runtime in the first place.
The release could also interest teams that have been debating whether to keep orchestration outside their framework layer. ADK 2.0 alpha makes the argument that orchestration belongs inside the framework, at least for teams that want a more integrated stack.
Compatibility, migration, and lock-in considerations
The release note and alpha README are unusually explicit about the risks, which is good. This is not a soft point release. Google calls out breaking changes across the agent API, event model, and session schema, and warns that 1.x databases and sessions are incompatible. That makes the short-term recommendation straightforward: prototype, do not port.
There is also a strategic tradeoff to watch. ADK describes itself as an open-source, code-first toolkit for building, evaluating, and deploying sophisticated AI agents, and the main README says it is model-agnostic and deployment-agnostic. That is a positive positioning signal. But once a framework owns your workflow runtime and task delegation model, migration costs usually rise. Teams evaluating ADK 2.0 should test not only the feature set, but also how portable their flows and task semantics would be if they ever needed to swap the surrounding stack. That is a good moment to run the rollout questions in AI Agent Evaluation, not just a feature checklist.
Competitive context
Compared with LangGraph, whose repository description is literally "Build resilient language agents as graphs", Google is moving closer to graph-native orchestration as a core framework identity rather than a secondary concern. Compared with OpenAI Agents Python 0.13.0, which spends this release cycle on MCP capabilities, realtime defaults, and runtime stability fixes, ADK 2.0 alpha is a more architectural bet. Compared with CrewAI 1.11.1, which adds flow introspection and production fixes, Google's move is earlier in the stack: it is redefining the runtime rather than just improving how flows are inspected.
That makes ADK 2.0 alpha important even for teams that will not adopt it. It is a signal that the framework market increasingly competes on orchestration depth, not only on model integrations or agent syntax.
Builder bottom line
Google ADK 2.0 alpha is worth attention because it shows where framework expectations are moving: toward explicit graphs, structured delegation, and more governable runtime behavior. But the release is equally clear about what it is not. It is not a frictionless upgrade path from ADK 1.x, and it is not yet something most teams should roll into production without a dedicated evaluation cycle.
The best use of this release right now is to test whether the workflow runtime and Task API reduce custom orchestration code in your stack. If they do, ADK 2.0 becomes a framework to watch closely. If they do not, the release still tells you what the rest of the market will probably ship next.
Keep reading
Use AI Agent Use Cases to decide which workflow deserves this much runtime complexity, AI Agent Architecture to map where the runtime belongs, Multi-Agent Architecture to decide whether specialist roles or task handoffs are worth the extra coordination cost, AI Agent Frameworks to compare the surrounding stack choices, AI Agent Orchestration to evaluate whether the runtime actually reduces workflow glue code, and AI Agent Evaluation to pressure-test the upgrade path before you adopt a graph runtime.
For more context, read the weekly AI agent launch roundup and our A2A v1.0.0 protocol brief.
Sources
Google ADK v2.0.0a1 release note and alpha README
Google ADK repository README
LangGraph repository
OpenAI Agents Python repository
CrewAI 1.11.1 release
Turn this update into build work
Use these next reads to map the announcement into pilot fit, architecture, interoperability, or rollout controls.

- AI Agent Use Cases (Foundations / Implementation): Learn the best AI agent use cases for product, ops, engineering, and support teams, plus how to choose the right autonomy level, architecture, and rollout path.
- AI Agent Architecture (Architecture): Learn how AI agent architecture works across models, tools, memory, orchestration, guardrails, and multi-agent patterns with practical reference designs.
- Multi-Agent Architecture (Architecture): Learn when multi-agent architecture outperforms single-agent systems, which coordination patterns fit best, and how to manage context, reliability, security, and cost.
- AI Agent Frameworks (Frameworks): Compare AI agent frameworks, understand when you need one, and learn how to choose the right stack for workflows, coding agents, and multi-agent systems.
- AI Agent Orchestration (Implementation): Learn AI agent orchestration patterns for coordinating state, tools, retries, approvals, and multi-step workflows without overbuilding your stack.
- AI Agent Evaluation (Evaluation): Learn how to evaluate AI agents with task-based evals, regression checks, human review, and production metrics across tools, safety, latency, and cost.