LCEL chains are powerful but limited—they can’t loop, branch dynamically, or maintain complex state between steps. LangGraph solves this by modeling agent workflows as state machines: graphs where nodes are processing steps and edges define control flow. This explicit structure enables cycles, conditional routing, and persistent state that production agents require.
## From Chains to Graphs: Why the Shift Matters
The previous post showed how LCEL creates elegant linear pipelines. But consider what happens when an agent needs to:
- Call a tool, examine the result, and decide whether to call another tool
- Retry a failed operation with modified parameters
- Wait for human approval before proceeding
- Remember what it did in previous conversation turns
These patterns share a common structure—they require cycles (revisiting previous steps) and conditional branching (different paths based on runtime state). LCEL’s linear composition can’t express these directly.
LangGraph provides the missing abstraction: directed graphs where nodes represent processing steps and edges represent transitions. This isn’t just a different syntax—it’s a fundamentally different computational model.
```mermaid
graph LR
    subgraph LCEL["LCEL Chains"]
        A1[Prompt] --> B1[Model] --> C1[Parser]
    end
    subgraph LangGraph["LangGraph"]
        A2[Start] --> B2[Process]
        B2 --> C2{Condition}
        C2 -->|Yes| D2[Action A]
        C2 -->|No| E2[Action B]
        D2 --> F2[Evaluate]
        E2 --> F2
        F2 -->|Retry| B2
        F2 -->|Done| G2[End]
    end
    classDef blueClass fill:#4A90E2,stroke:#333,stroke-width:2px,color:#fff
    classDef orangeClass fill:#F39C12,stroke:#333,stroke-width:2px,color:#fff
    class A1,B1,C1 blueClass
    class A2,B2,C2,D2,E2,F2,G2 orangeClass
```
## State Machines in Computing
State machines are one of computer science’s oldest abstractions, dating to the 1950s. They model systems as:
- A set of states (configurations the system can be in)
- Transitions between states (triggered by events or conditions)
- Actions that occur during transitions or within states
LangGraph applies this model to LLM workflows. The “state” is the accumulated data (messages, tool results, intermediate computations). “Transitions” are edges between processing nodes. “Actions” are the node functions themselves.
This framing provides important guarantees. Every execution follows a well-defined path through the graph. State changes are explicit and traceable. Loops are bounded by the graph structure rather than hidden in recursive code.
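The model is simple enough to sketch in a few lines of plain Python — a toy finite-state machine, independent of LangGraph, with an invented transition table for illustration:

```python
# A toy finite-state machine: states, events, and a transition table.
# The table maps (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "idle",
    ("running", "finish"): "done",
}

def step(state: str, event: str) -> str:
    """Apply one transition; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "finish"]:
    state = step(state, event)
# Every execution follows a well-defined path: idle -> running -> done
```

LangGraph's contribution is applying exactly this discipline to LLM workflows, where the "state" is far richer than a single string.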
## The Three Pillars: State, Nodes, and Edges
LangGraph workflows rest on three concepts that map directly to state machine theory:
| Component | Purpose | State Machine Analog |
|---|---|---|
| State | Data container flowing through the graph | Machine’s memory/configuration |
| Nodes | Functions that process and transform state | State transition actions |
| Edges | Connections defining allowed transitions | Transition rules |
### State: The Memory of Your Agent
State in LangGraph is a typed data structure—typically a TypedDict or Pydantic model—that carries all information the agent needs. Unlike LCEL where data flows linearly from output to input, LangGraph state persists and accumulates across the entire execution.
```python
from typing import TypedDict, Annotated
from operator import add

class AgentState(TypedDict):
    messages: Annotated[list, add]  # reducer: new lists are concatenated
    query: str                      # no reducer: new values overwrite
```
The Annotated syntax with add (or any binary function) creates reducers—rules for how to combine old and new values. Without a reducer, new values simply overwrite old ones. With add, lists concatenate, enabling patterns like message history that grows with each turn.
This design choice has deep implications. Accumulating state means nodes don’t need to pass everything through return values. A tool execution node can add its results to state; a later summarization node can access those results directly.
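The merge semantics can be seen in a small pure-Python sketch. Note that `apply_update` is a hypothetical helper written to mimic the behavior described above, not LangGraph's internal code:

```python
from operator import add

def apply_update(state: dict, update: dict, reducers: dict) -> dict:
    """Merge a node's returned update into state.
    Keys with a reducer are combined; all others are overwritten."""
    merged = dict(state)
    for key, value in update.items():
        if key in reducers:
            merged[key] = reducers[key](merged.get(key, []), value)
        else:
            merged[key] = value
    return merged

state = {"messages": ["hi"], "query": "old"}
update = {"messages": ["there"], "query": "new"}
state = apply_update(state, update, {"messages": add})
# messages concatenated, query overwritten:
# {"messages": ["hi", "there"], "query": "new"}
```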
### Nodes: Processing Steps
Nodes are pure functions that receive the current state and return updates. They don’t modify state directly—they return dictionaries describing what should change.
```python
def process_query(state: AgentState) -> dict:
    # Read from the current state; never mutate it in place
    answer = f"Handling: {state['query']}"
    # Return only the keys that should change
    return {"messages": [answer]}
```
This immutable approach enables:
- Debugging: You can inspect state before and after each node
- Replay: Re-run nodes with identical inputs for testing
- Checkpointing: Save state at any point for resumption
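The replay property follows directly from purity: calling a node twice with identical state produces identical updates. A toy classifier node (names invented for illustration) makes this concrete:

```python
def classify(state: dict) -> dict:
    # Pure: depends only on its input, returns only the delta
    label = "technical" if "error" in state["query"].lower() else "general"
    return {"category": label}

state = {"query": "Error when compiling"}
first = classify(state)
second = classify(state)  # replay with identical input
# Deterministic: both runs produce the same update
```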
### Edges: Control Flow
Edges connect nodes and come in two varieties:
Fixed edges always transition to a specific next node:
```python
graph.add_edge("process", "respond")  # Always go from process to respond
```
Conditional edges evaluate state and choose among multiple targets:
```python
def should_continue(state: AgentState) -> str:
    # Inspect state and return a routing key
    if state.get("needs_retry"):
        return "retry"
    return "done"

graph.add_conditional_edges(
    "evaluate",                         # source node
    should_continue,                    # routing function
    {"retry": "process", "done": END},  # key -> target node mapping
)
```
The routing function receives current state and returns a string key. The mapping dictionary translates keys to target node names. This indirection allows renaming nodes without changing routing logic.
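The indirection is easy to see in miniature (plain Python; the routing function and mapping here are illustrative):

```python
def route(state: dict) -> str:
    # Returns a routing key, not a node name
    return "retry" if state["attempts"] < 3 else "done"

# The mapping translates keys to node names. Rename a node and
# only this dictionary changes; the routing function is untouched.
mapping = {"retry": "process", "done": "respond"}

target = mapping[route({"attempts": 1})]  # -> "process"
```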
## Building Your First Graph
Let’s construct a simple workflow that classifies queries and routes them to specialized handlers.
```mermaid
graph TD
    A[START] --> B[Classify]
    B --> C{Route}
    C -->|technical| D[Tech Handler]
    C -->|billing| E[Billing Handler]
    C -->|general| F[General Handler]
    D --> G[END]
    E --> G
    F --> G
    classDef blueClass fill:#4A90E2,stroke:#333,stroke-width:2px,color:#fff
    classDef orangeClass fill:#F39C12,stroke:#333,stroke-width:2px,color:#fff
    classDef greenClass fill:#27AE60,stroke:#333,stroke-width:2px,color:#fff
    class A,G greenClass
    class B,C orangeClass
    class D,E,F blueClass
```
```python
from langgraph.graph import StateGraph, START, END

graph = StateGraph(AgentState)
graph.add_node("classify", classify_query)
graph.add_node("technical", handle_technical)
graph.add_node("billing", handle_billing)
graph.add_node("general", handle_general)

graph.add_edge(START, "classify")
graph.add_conditional_edges(
    "classify",
    route_query,  # returns "technical", "billing", or "general"
    {"technical": "technical", "billing": "billing", "general": "general"},
)
for handler in ("technical", "billing", "general"):
    graph.add_edge(handler, END)

app = graph.compile()
```
The compile() step transforms the graph definition into an executable object with the standard Runnable interface. Compilation also validates the structure: every edge must point to a node that actually exists, and the graph must have an entrypoint reachable from START.
## The Tool-Calling Agent Loop
The most common LangGraph pattern implements the agent loop: call model → check for tools → execute tools → repeat.
```mermaid
graph TD
    A[START] --> B[Call Model]
    B --> C{Has Tool Calls?}
    C -->|Yes| D[Execute Tools]
    D --> B
    C -->|No| E[Finish]
    E --> F[END]
    classDef blueClass fill:#4A90E2,stroke:#333,stroke-width:2px,color:#fff
    classDef orangeClass fill:#F39C12,stroke:#333,stroke-width:2px,color:#fff
    classDef greenClass fill:#27AE60,stroke:#333,stroke-width:2px,color:#fff
    class A,F greenClass
    class B,D blueClass
    class C,E orangeClass
```
This graph has a cycle—execute_tools loops back to call_model. LCEL can’t express this directly, but LangGraph handles it naturally. The cycle continues until the conditional edge routes to finish instead of back to tools.
The key insight is that tool execution changes state, adding ToolMessage objects that the model sees on the next iteration. This accumulated context lets the model decide whether it needs more information or can answer.
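The loop's termination logic can be simulated without any LLM at all — a mock model that keeps requesting a tool until a tool result appears in the message list, then answers. All names and strings here are invented for illustration:

```python
def mock_model(messages: list) -> dict:
    # Requests a tool until a tool result is present, then answers
    if not any(m.startswith("tool:") for m in messages):
        return {"tool_calls": [{"name": "search", "args": "Apple HQ"}]}
    return {"content": "It's sunny in Cupertino."}

def run_agent(messages: list) -> list:
    while True:
        reply = mock_model(messages)
        if "tool_calls" in reply:                        # conditional edge: route to tools
            messages.append("tool: Apple HQ is in Cupertino")
            continue                                     # cycle back to the model
        messages.append(reply["content"])                # route to finish
        return messages

history = run_agent(["user: What's the weather at Apple HQ?"])
# Two model calls, one tool execution, then termination
```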
## State Accumulation in Action
Consider an agent answering “What’s the weather in the city where Apple HQ is located?”
Initial state:
```python
{"messages": [HumanMessage("What's the weather...")]}
```
After first model call:
```python
{
    "messages": [
        HumanMessage("What's the weather..."),
        AIMessage(content="", tool_calls=[
            {"name": "search", "args": {"query": "Apple headquarters location"}}
        ]),
    ]
}
```
After tool execution:
```python
{
    "messages": [
        HumanMessage("What's the weather..."),
        AIMessage(content="", tool_calls=[...]),
        ToolMessage("Apple is headquartered in Cupertino, California"),
    ]
}
```
After second model call:
```python
{
    "messages": [
        HumanMessage("What's the weather..."),
        AIMessage(content="", tool_calls=[...]),
        ToolMessage("Apple is headquartered in Cupertino, California"),
        AIMessage(content="", tool_calls=[
            {"name": "get_weather", "args": {"city": "Cupertino"}}
        ]),
    ]
}
```
Each iteration adds to state rather than replacing it. The model sees its full history, enabling coherent multi-step reasoning.
## Checkpointing: Persistence and Time Travel
Production agents need to survive restarts, support long-running conversations, and enable debugging. LangGraph’s checkpointing system addresses all three.
```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
agent = graph.compile(checkpointer=checkpointer)

# Each thread_id identifies a separate, resumable conversation
config = {"configurable": {"thread_id": "conversation-1"}}
agent.invoke({"messages": [HumanMessage("Hello")]}, config)
```
The MemorySaver stores state in memory (good for development). Production systems use SqliteSaver or Redis-backed implementations for durability.
### Time Travel Debugging
Checkpointing enables time travel—inspecting or resuming from any previous state:
```python
# Get full execution history (most recent checkpoint first)
history = list(agent.get_state_history(config))
for snapshot in history:
    print(snapshot.values, snapshot.next)

# Resume from any earlier checkpoint by invoking with its config
agent.invoke(None, history[2].config)
```
This capability transforms debugging. Instead of adding print statements and re-running, you can examine the exact state at any point and branch off with different inputs.
## MessagesState: The Common Case
Most agents are conversational, maintaining message history as their primary state. LangGraph provides MessagesState as a convenience:
```python
from langgraph.graph import MessagesState

class AgentState(MessagesState):
    # "messages" and its append-style reducer come for free;
    # add any extra keys your agent needs
    attempts: int
```
This pattern is so common that LangGraph pre-defines it. Your nodes receive state with a messages key, and any messages you return are appended automatically.
## Streaming: Real-Time Feedback
For interactive applications, waiting for the complete response is too slow. LangGraph supports streaming at multiple levels:
Values mode: Emit complete state after each node
```python
for state in agent.stream(
    {"messages": [HumanMessage("Tell me a story")]},
    stream_mode="values",
):
    print(state["messages"][-1])  # full state after each node
```
Updates mode: Emit only the changes from each node
```python
for update in agent.stream(
    {"messages": [HumanMessage("Tell me a story")]},
    stream_mode="updates",
):
    print(update)  # {node_name: {changed keys}} per step
```
For LLM token streaming (word-by-word output), you combine LangGraph streaming with LangChain’s model streaming—a pattern we’ll cover in production deployment.
## Visualization and Introspection
LangGraph graphs are inspectable. You can generate visual representations for documentation or debugging:
```python
# Get Mermaid diagram syntax
print(agent.get_graph().draw_mermaid())

# Or render a PNG directly
png_bytes = agent.get_graph().draw_mermaid_png()
```
This capability closes the loop on documentation. The graph you define in code is the documentation—there’s no separate diagram to keep in sync.
## Key Takeaways
- **Graphs extend what's possible**: Cycles, conditional branching, and complex state management require graph structures that LCEL can't express.
- **State is explicit and typed**: TypedDict or Pydantic schemas define exactly what data flows through your agent, with reducers controlling accumulation.
- **Nodes are pure functions**: They receive state and return updates. This immutability enables debugging, replay, and checkpointing.
- **Edges control flow explicitly**: Fixed edges for sequential steps, conditional edges for dynamic routing. The routing logic is separated from the processing logic.
- **Checkpointing enables production features**: Persistence, conversation resumption, and time-travel debugging come from treating state as first-class.
- **The tool loop is a graph pattern**: Call model → route on tool calls → execute tools → loop back. This cycle is the foundation of most agents.
Next: Connecting LangGraph Agents to APIs and Databases - We’ll integrate external systems, build SQL agents, and address security considerations for real-world deployments.