Single agents hit a ceiling. When tasks require diverse expertise—research, coding, analysis, writing—a single agent either becomes overloaded with tools or produces mediocre results across domains. Multi-agent systems solve this by decomposing work among specialized agents, each focused on what it does best.
The Specialization Principle
Human organizations discovered centuries ago that specialization improves outcomes. A team with a researcher, developer, and technical writer produces better documentation than three generalists working independently. Each person develops deep expertise in their domain and delivers higher quality work in less time.
The same principle applies to AI agents. A single agent with 20 tools must make increasingly difficult decisions about which tool to use. Its context window fills with tool descriptions, examples, and results from diverse domains. Performance degrades as cognitive load increases.
Multi-agent systems mirror organizational design. Instead of one overloaded generalist, you deploy specialists who excel at narrow tasks. A research agent becomes expert at information gathering. A coding agent focuses on implementation. A review agent concentrates on quality assessment. Each agent has a focused prompt, relevant tools, and domain-specific examples.
This isn’t just about division of labor—it’s about emergent capabilities. When agents can communicate and hand off work, the system can tackle tasks none could handle individually.
Why Multiple Agents?
Single-agent architectures face fundamental limits: the context window fills with tool descriptions and intermediate results, tool selection grows error-prone as the tool count rises, and a single prompt cannot be tuned for every domain at once.
Multi-agent systems also enable parallelism: while one agent researches, another can write.
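The parallelism point can be sketched with plain Python threads. The stub agents below stand in for real LLM-backed nodes; their names and canned return values are illustrative only:

```python
# Two stub agents run concurrently; in a real system each would call an LLM.
from concurrent.futures import ThreadPoolExecutor

def research_agent(topic: str) -> str:
    # Placeholder for an LLM-backed research step
    return f"findings about {topic}"

def writer_agent(section: str) -> str:
    # Placeholder for an LLM-backed writing step
    return f"draft of {section}"

with ThreadPoolExecutor(max_workers=2) as pool:
    research = pool.submit(research_agent, "vector databases")
    draft = pool.submit(writer_agent, "the introduction")

print(research.result())
print(draft.result())
```

Frameworks like LangGraph express the same fan-out as parallel branches in the graph rather than raw threads, but the benefit is identical: independent work overlaps in time.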
Multi-Agent Patterns
Choosing the right pattern depends on your coordination needs:
| Pattern | Coordination | Best For |
| --- | --- | --- |
| Orchestrator | Central planner delegates | Predictable workflows, clear task decomposition |
| Supervisor | Monitors quality, reassigns on failure | Quality-critical work, iterative refinement |
| Peer-to-Peer | Agents communicate directly | Creative collaboration, debate, synthesis |
Pattern 1: Orchestrator
The orchestrator pattern mirrors traditional project management. A central coordinator analyzes the task, creates a plan, and delegates to specialists:
```python
import json

def create_plan(state: OrchestratorState) -> dict:
    """Orchestrator breaks task into steps for specialists."""
    prompt = f"""Break this task into steps:

Task: {state['task']}

Specialists: researcher, coder, writer

Return JSON: [{{"agent": "...", "task": "..."}}]"""

    plan = json.loads(llm.invoke(prompt).content)
    return {"plan": plan, "current_step": 0}


def route_to_agent(state: OrchestratorState) -> str:
    """Route to next specialist or synthesize if done."""
    if state["current_step"] >= len(state["plan"]):
        return "synthesize"
    return state["plan"][state["current_step"]]["agent"]
```
The orchestrator’s strength is predictability—you can trace exactly which agent handled which part of the task.
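That traceability is easy to see in a standalone dry run. Here the plan is hard-coded to mimic the JSON `create_plan` would parse back from the LLM, and a simple loop plays the role of the graph's routing:

```python
# Hard-coded plan mimicking the JSON the orchestrator would parse from the LLM.
plan = [
    {"agent": "researcher", "task": "gather sources"},
    {"agent": "coder", "task": "build prototype"},
    {"agent": "writer", "task": "draft report"},
]

state = {"plan": plan, "current_step": 0}
visited = []

# Drive the delegation loop until every step has been handed to a specialist.
while state["current_step"] < len(state["plan"]):
    visited.append(state["plan"][state["current_step"]]["agent"])
    state["current_step"] += 1

print(visited)
```

Each step maps to exactly one specialist, so a failed output can always be traced back to the agent that produced it.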
Pattern 2: Supervisor with Workers
The supervisor monitors progress and can reassign or retry:
```python
def evaluate_work(state: SupervisorState) -> dict:
    """Supervisor scores the worker's latest output from 1-10."""
    score = int(llm.invoke(
        f"Score this work 1-10. Return only the number:\n"
        f"{state['messages'][-1].content}"
    ).content.strip())

    if score < 7:
        return {"messages": [AIMessage(content=f"Revision needed, score {score}/10")]}

    # Accept and move forward
    return {
        "messages": [AIMessage(content=f"Accepted with score {score}/10")],
        "quality_scores": {**state.get("quality_scores", {}), "final": score},
    }


def should_continue(state: SupervisorState) -> str:
    """Determine if we need more work or can finalize."""
    iterations = state.get("iterations", 0)
    scores = state.get("quality_scores", {})

    if scores.get("final", 0) >= 7:
        return "finalize"
    if iterations >= 3:
        return "finalize"  # Give up after 3 attempts
    return "revise"
```
```mermaid
graph TD
    A[Supervisor] --> B{Evaluate Quality}
    B -->|Score < 7| C[Request Revision]
    C --> D[Worker Revises]
    D --> A
    B -->|Score >= 7| E[Accept & Continue]
    B -->|Max Iterations| E
    E --> F[Next Task or Finalize]

    classDef blueClass fill:#4A90E2,stroke:#333,stroke-width:2px,color:#fff
    classDef orangeClass fill:#F39C12,stroke:#333,stroke-width:2px,color:#fff
    classDef greenClass fill:#27AE60,stroke:#333,stroke-width:2px,color:#fff
    class A blueClass
    class B,C,D orangeClass
    class E,F greenClass
```
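The revision loop in the diagram can be simulated end to end with canned quality scores standing in for LLM grading (the score sequence below is invented for illustration):

```python
def should_continue(state: dict) -> str:
    """Same termination logic as the supervisor: accept at 7+, give up after 3 tries."""
    if state.get("quality_scores", {}).get("final", 0) >= 7:
        return "finalize"
    if state.get("iterations", 0) >= 3:
        return "finalize"
    return "revise"

state = {"iterations": 0, "quality_scores": {}}
canned_scores = [4, 6, 8]  # pretend supervisor grades for successive drafts

while should_continue(state) == "revise":
    score = canned_scores[state["iterations"]]
    state["quality_scores"] = {"final": score}
    state["iterations"] += 1

print(state)
```

The loop exits on the third draft because its score of 8 clears the acceptance threshold; had every draft scored below 7, the iteration cap would have forced finalization instead.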
Pattern 3: Peer-to-Peer Collaboration
Agents communicate directly without a central coordinator:
```python
def researcher_agent(state: CollaborativeState) -> dict:
    """Research agent contributes facts and evidence."""
    context = "\n".join(
        f"{msg['agent']}: {msg['content']}"
        for msg in state.get("discussion", [])
    )

    prompt = f"""You are a research agent in a collaborative discussion.

Topic: {state['topic']}

Previous discussion:
{context or 'No discussion yet.'}

Provide factual research insights. Be specific and cite sources when possible.
Keep response under 150 words."""

    response = llm.invoke(prompt)
    return {"discussion": [{"agent": "researcher", "content": response.content}]}


def critic_agent(state: CollaborativeState) -> dict:
    """Critic agent challenges assumptions and identifies weaknesses."""
    context = "\n".join(
        f"{msg['agent']}: {msg['content']}"
        for msg in state.get("discussion", [])
    )

    prompt = f"""You are a critical analyst in a collaborative discussion.

Topic: {state['topic']}

Previous discussion:
{context}

Identify weaknesses, gaps, or counterarguments. Be constructive.
Keep response under 150 words."""

    response = llm.invoke(prompt)
    return {"discussion": [{"agent": "critic", "content": response.content}]}


def synthesizer_agent(state: CollaborativeState) -> dict:
    """Synthesizer combines insights into a coherent conclusion."""
    context = "\n".join(
        f"{msg['agent']}: {msg['content']}"
        for msg in state.get("discussion", [])
    )

    prompt = f"""You are synthesizing a collaborative discussion.

Topic: {state['topic']}

Discussion:
{context}

Create a balanced synthesis that addresses the key points and critiques.
Provide a clear conclusion."""

    response = llm.invoke(prompt)
    return {"discussion": [{"agent": "synthesizer", "content": response.content}]}
```
```python
# Approach 1: Shared State
# All agents read/write to common state
class SharedState(TypedDict):
    research_findings: list[str]
    code_artifacts: list[str]
    review_comments: list[str]
```
```python
def research_with_handoff(state: HandoffState) -> dict:
    """Research agent that hands off to coder."""
    # Do research work
    findings = "Found 3 relevant APIs for the task..."

    # Prepare handoff
    return create_handoff(
        from_agent="researcher",
        to_agent="coder",
        summary="Research complete. Found APIs: X, Y, Z. Recommend starting with X.",
        context={
            "findings": findings,
            "recommended_approach": "Use API X with pagination",
            "constraints": ["Rate limit: 100/min", "Auth required"],
        },
    )


# Inside the receiving coder agent, the handoff context drives the prompt:
prompt = f"""You are a coding agent. You received this handoff:

Findings: {context.get('findings', 'None')}
Approach: {context.get('recommended_approach', 'None')}
Constraints: {context.get('constraints', [])}

Implement the solution based on this context."""
```
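The `create_handoff` helper is used above but never defined. One plausible shape for it, assuming the graph routes on an `active_agent` key (that key name is an assumption, not from the source), is:

```python
def create_handoff(from_agent: str, to_agent: str, summary: str, context: dict) -> dict:
    """Package a structured handoff and point the graph at the next agent."""
    return {
        "handoff": {
            "from": from_agent,
            "to": to_agent,
            "summary": summary,
            "context": context,
        },
        "active_agent": to_agent,  # assumed routing key for the graph's edges
    }

h = create_handoff("researcher", "coder",
                   "Research complete.", {"findings": "Found 3 APIs"})
print(h["active_agent"])
```

The summary plus structured context means the coder never needs the researcher's full transcript, which keeps each agent's context window small.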
```python
class ScopedState(TypedDict):
    # Global - all agents can access
    task: str
    final_result: str

    # Agent-specific - only certain agents write
    research_data: dict  # Only researcher writes
    code_output: dict    # Only coder writes
    review_notes: dict   # Only reviewer writes


def make_scoped_node(agent_name: str, writable_keys: list[str]):
    """Create a node that can only write to specific state keys."""

    def scoped_node(state: ScopedState) -> dict:
        # Agent does its work
        result = do_agent_work(state, agent_name)

        # Filter to only writable keys
        filtered = {k: v for k, v in result.items() if k in writable_keys}
        return filtered

    return scoped_node
```
```python
# Each project maintains its own state
config = {"configurable": {"thread_id": f"project-{project_id}"}}
result = multi_agent_system.invoke(initial_state, config=config)
```
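The write-scoping guarantee is easy to verify in isolation. This variant of the factory takes the work function as a parameter so it runs without an LLM; the stub worker deliberately tries to write outside its allowed keys:

```python
def make_scoped_node(agent_name: str, writable_keys: list, work_fn):
    """Wrap work_fn so its writes are filtered to writable_keys."""
    def scoped_node(state: dict) -> dict:
        result = work_fn(state)
        return {k: v for k, v in result.items() if k in writable_keys}
    return scoped_node

def rogue_worker(state: dict) -> dict:
    # Tries to write final_result, which this agent does not own.
    return {"research_data": {"sources": 3}, "final_result": "hijacked"}

node = make_scoped_node("researcher", ["research_data"], rogue_worker)
print(node({}))
```

The filter silently drops the out-of-scope write, so even a misbehaving agent cannot clobber another agent's portion of the state.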
Building a Complete Multi-Agent System
Let’s build a research and writing system with three specialized agents:
```python
from langgraph.graph import StateGraph, START, END
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from typing import TypedDict, Annotated, Literal
from operator import add
```
```python
def research_agent(state: ResearchWritingState) -> dict:
    """Gather information on the topic."""
    prompt = f"""You are a research agent. Gather comprehensive information on:

Topic: {state['topic']}

Use available tools to find academic and news sources.
Compile your findings as detailed research notes."""

    response = llm_with_tools.invoke([HumanMessage(content=prompt)])

    # Process tool calls if any
    notes = [response.content] if response.content else []

    if response.tool_calls:
        for tool_call in response.tool_calls:
            tool_name = tool_call["name"]
            tool_fn = {t.name: t for t in research_tools}[tool_name]
            result = tool_fn.invoke(tool_call["args"])
            notes.append(f"[{tool_name}]: {result}")

    return {"research_notes": notes, "current_phase": "writing"}
```
```python
# Writer Agent
def writer_agent(state: ResearchWritingState) -> dict:
    """Create outline and draft based on research."""
    notes = "\n".join(state.get("research_notes", []))

    # First create outline
    outline_prompt = f"""Based on this research, create an article outline:

Research Notes:
{notes}

Create a structured outline with 4-6 main sections."""

    outline_response = llm.invoke([HumanMessage(content=outline_prompt)])

    # Then write draft
    draft_prompt = f"""Write a complete article based on this outline:

Outline: {outline_response.content}

Research: {notes}

Write an engaging, informative article of 500-800 words."""

    draft_response = llm.invoke([HumanMessage(content=draft_prompt)])
    return {"draft": draft_response.content, "current_phase": "editing"}
```
```python
# Editor Agent
def editor_agent(state: ResearchWritingState) -> dict:
    """Review and refine the draft."""
    review_prompt = f"""Review this article draft for:
1. Accuracy (based on research notes)
2. Clarity and flow
3. Engagement
4. Grammar and style

Draft: {state['draft']}

Research Notes: {chr(10).join(state.get('research_notes', [])[:3])}

Provide specific feedback and suggestions."""

    feedback = llm.invoke([HumanMessage(content=review_prompt)])

    # Apply edits
    edit_prompt = f"""Revise this article based on feedback:

Original Draft: {state['draft']}

Feedback: {feedback.content}

Produce the final polished article."""

    final = llm.invoke([HumanMessage(content=edit_prompt)])
    return {"final_article": final.content, "current_phase": "done"}
```
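Stripped of LLM calls, the three-agent pipeline reduces to sequential state transformations. The stub nodes below show only the data flow between phases; their outputs are canned stand-ins for real model responses:

```python
# Each stub node reads the shared state and adds its phase's output.
def research_agent(state: dict) -> dict:
    return {**state, "research_notes": [f"note about {state['topic']}"]}

def writer_agent(state: dict) -> dict:
    return {**state, "draft": "Draft using " + "; ".join(state["research_notes"])}

def editor_agent(state: dict) -> dict:
    return {**state, "final_article": state["draft"] + " (edited)"}

state = {"topic": "LLMs"}
for node in (research_agent, writer_agent, editor_agent):
    state = node(state)

print(state["final_article"])
```

In the real system, LangGraph's edges enforce this same ordering: each node consumes the keys the previous phase wrote and contributes its own.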
```mermaid
graph LR
    A[START] --> B[Research Agent]
    B --> C[Writer Agent]
    C --> D[Editor Agent]
    D --> E[END]

    subgraph Research["Research Phase"]
        B1[Search Academic] --> B
        B2[Search News] --> B
    end

    subgraph Writing["Writing Phase"]
        C --> C1[Create Outline]
        C1 --> C2[Write Draft]
    end

    subgraph Editing["Editing Phase"]
        D --> D1[Review]
        D1 --> D2[Polish]
    end

    classDef blueClass fill:#4A90E2,stroke:#333,stroke-width:2px,color:#fff
    classDef orangeClass fill:#F39C12,stroke:#333,stroke-width:2px,color:#fff
    classDef greenClass fill:#27AE60,stroke:#333,stroke-width:2px,color:#fff
    class A,E greenClass
    class B,B1,B2 blueClass
    class C,C1,C2 orangeClass
    class D,D1,D2 blueClass
```
Running the System
```python
# Execute the multi-agent workflow
result = research_writing_system.invoke({
    "topic": "The impact of large language models on software development",
    "research_notes": [],
    "outline": [],
    "draft": "",
    "review_feedback": "",
    "final_article": "",
    "current_phase": "researching",
})

print("=== Research Notes ===")
for note in result["research_notes"]:
    print(f"- {note[:100]}...")

print("\n=== Final Article ===")
print(result["final_article"])
```
Conflict Resolution
When agents disagree, you need resolution strategies:
```python
def collect_proposals(state: ConflictState) -> dict:
    """Each agent submits a proposal."""
    # Agents have already submitted via their nodes
    return {}


def voting_round(state: ConflictState) -> dict:
    """Agents vote on proposals (can't vote for own)."""
    proposals = state.get("proposals", {})
    votes = {}

    for agent in proposals:
        # Each agent evaluates and votes
        other_proposals = {k: v for k, v in proposals.items() if k != agent}

        if other_proposals:
            prompt = f"""You are {agent}. Vote for the best proposal:

{chr(10).join([f'{k}: {v}' for k, v in other_proposals.items()])}

Return only the name of the agent whose proposal you vote for."""
            votes[agent] = llm.invoke(prompt).content.strip()

    return {"votes": votes}


def resolve_conflict(state: ConflictState) -> dict:
    """Determine winner or synthesize if tied."""
    votes = state.get("votes", {})
    proposals = state.get("proposals", {})

    # Count votes
    vote_counts = {}
    for voted_for in votes.values():
        vote_counts[voted_for] = vote_counts.get(voted_for, 0) + 1

    if not vote_counts:
        # No votes - fall back to the first proposal if one exists
        fallback = list(proposals.values())[0] if proposals else "No consensus"
        return {"resolution": fallback}

    # Check for tie
    winner = max(vote_counts, key=vote_counts.get)
    max_votes = vote_counts[winner]
    tied = [k for k, v in vote_counts.items() if v == max_votes]

    if len(tied) > 1:
        # Tie - synthesize
        tied_proposals = {k: proposals.get(k, "") for k in tied}
        synthesis_prompt = f"""Synthesize these tied proposals:

{chr(10).join([f'{k}: {v}' for k, v in tied_proposals.items()])}

Create a combined solution that takes the best from each."""
        return {"resolution": llm.invoke(synthesis_prompt).content}

    return {"resolution": proposals[winner]}
```
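The vote-counting and tie-detection logic runs fine without any LLM, so it can be tested standalone (the agent names here are arbitrary placeholders):

```python
def tally(votes: dict) -> tuple:
    """Count votes and report the leader plus all agents tied with it."""
    counts = {}
    for voted_for in votes.values():
        counts[voted_for] = counts.get(voted_for, 0) + 1
    winner = max(counts, key=counts.get)
    tied = [k for k, v in counts.items() if v == counts[winner]]
    return winner, tied

# Clear majority: "b" wins with two of three votes.
winner, tied = tally({"a": "b", "b": "c", "c": "b"})
print(winner, tied)

# Two-way tie: the resolve step would fall through to synthesis.
_, tied = tally({"a": "b", "b": "a"})
print(tied)
```

Keeping this arithmetic separate from the LLM calls makes the resolution path deterministic and unit-testable; only proposal generation and synthesis require the model.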