Financial analysis often requires examining multiple dimensions simultaneously - market data, sentiment, risk factors, regulatory compliance. Rather than processing these sequentially, parallel workflows distribute work across multiple agents for speed and diverse perspectives. Combined with evaluator-optimizer patterns for quality assurance, these techniques form the backbone of production-grade financial AI systems.
The Power of Parallel Processing
Parallelization in agentic workflows means multiple agents work on different parts of a task - or even the same task - simultaneously. Think of it like a research team: instead of one analyst doing everything sequentially, multiple specialists tackle their domains in parallel, then synthesize findings.
```mermaid
flowchart TD
    I[Investment Research Request] --> D[Distributor]
    D --> F[Fundamental<br/>Analysis Agent]
    D --> T[Technical<br/>Analysis Agent]
    D --> S[Sentiment<br/>Analysis Agent]
    D --> R[Risk<br/>Analysis Agent]
    F --> A[Aggregator]
    T --> A
    S --> A
    R --> A
    A --> O[Comprehensive Report]

    classDef blueClass fill:#4A90E2,stroke:#333,stroke-width:2px,color:#fff
    classDef orangeClass fill:#F39C12,stroke:#333,stroke-width:2px,color:#fff
    class D blueClass
    class A orangeClass
```
This follows the scatter-gather pattern: a problem is scattered to multiple workers, and their individual findings are gathered into a final result.
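The scatter-gather step itself can be sketched in a few lines. This is a minimal skeleton, assuming each agent is represented as an async callable standing in for an LLM-backed analyst; the function and parameter names are illustrative:

```python
import asyncio
from typing import Awaitable, Callable

async def scatter_gather(request: str,
                         agents: dict[str, Callable[[str], Awaitable[str]]]) -> dict:
    """Scatter one request to every agent concurrently, then gather results."""
    names = list(agents)
    # Scatter: launch every agent on the same request at once
    findings = await asyncio.gather(*(agents[n](request) for n in names))
    # Gather: pair each agent's name with its finding for the aggregator
    return dict(zip(names, findings))
```

`asyncio.gather` preserves the order of its arguments, which is what lets us zip the results back to the agent names safely.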
The Independence Requirement
The golden rule for effective parallelization: subtasks must be largely independent. Agent A working on Subtask A shouldn’t need to wait for Agent B to finish Subtask B. If outputs depend on each other, sequential chaining is more appropriate.
```mermaid
flowchart TB
    subgraph Good["Good: Independent Tasks"]
        direction LR
        G1[Analyze Fundamentals] -.->|parallel| G2[Analyze Technicals]
        G2 -.->|parallel| G3[Analyze Sentiment]
    end
    subgraph Bad["Bad: Dependent Tasks"]
        direction LR
        B1[Get Price] --> B2[Calculate Returns]
        B2 --> B3[Assess Risk]
    end

    classDef greenClass fill:#27AE60,stroke:#333,stroke-width:2px,color:#fff
    classDef pinkClass fill:#E74C3C,stroke:#333,stroke-width:2px,color:#fff
    class Good greenClass
    class Bad pinkClass
```
Task Decomposition Strategies
Sectioning (Sharding)
For large, divisible inputs, split the data and process chunks in parallel:
```python
async def analyze_chunk(positions: List[dict]) -> dict:
    prompt = f"""
    Analyze these portfolio positions:
    {positions}

    For each position, assess:
    - Current value and P&L
    - Risk exposure
    - Sector allocation
    """
    return await llm.complete(prompt)
```
Financial use cases:
Analyzing large portfolios
Processing batch transactions
Reviewing multiple compliance reports
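A driver that feeds `analyze_chunk` needs to shard the input and fan the chunks out. Here is one possible sketch; the chunk size and the injected `analyze_chunk` callable are illustrative assumptions, not a prescribed API:

```python
import asyncio
from typing import Awaitable, Callable, List

def shard(items: List[dict], size: int) -> List[List[dict]]:
    """Split a large list into fixed-size chunks for parallel processing."""
    return [items[i:i + size] for i in range(0, len(items), size)]

async def analyze_portfolio(positions: List[dict],
                            analyze_chunk: Callable[[List[dict]], Awaitable[dict]],
                            chunk_size: int = 50) -> List[dict]:
    """Scatter chunks to parallel workers and gather their analyses in order."""
    chunks = shard(positions, chunk_size)
    return list(await asyncio.gather(*(analyze_chunk(c) for c in chunks)))
```

Chunk size is a real tuning knob here: too small and you pay per-call overhead on every chunk, too large and you lose parallelism and risk blowing the context window.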
Aspect-Based Decomposition
When analyzing a single subject from multiple angles:
```python
async def analyze_trade(order: dict, perspective: str) -> dict:
    prompt = f"""
    As a {perspective} risk analyst, evaluate this trade:
    {order}

    Provide recommendation: APPROVE or REJECT
    Include reasoning.
    """
    return await llm.complete(prompt)

async def robust_trade_decision(order: dict) -> dict:
    """Get multiple independent opinions on a trade"""
    # Run the same analysis with different prompts/temperatures
    opinions = await asyncio.gather(
        analyze_trade(order, perspective="conservative"),
        analyze_trade(order, perspective="moderate"),
        analyze_trade(order, perspective="aggressive"),
    )

    # Voting mechanism: approve only on majority agreement
    approvals = sum(1 for o in opinions if o["recommendation"] == "APPROVE")
    return {
        "decision": "APPROVE" if approvals >= 2 else "REJECT",
        "opinions": opinions,
    }
```
Financial use cases:
High-stakes trading decisions
Fraud detection (multiple detectors)
Regulatory interpretations
Aggregation Strategies
Once parallel tasks complete, their outputs must be combined:
Concatenation
Simply join outputs together:
```python
def concatenate_reports(reports: List[str]) -> str:
    """Combine section reports into full document"""
    return "\n\n---\n\n".join(reports)
```

Best for: Sectioned reports, independent document parts
Selection
Have a judge agent pick the strongest candidate output:

```python
async def select_best_analysis(analyses: List[dict]) -> dict:
    """Evaluate and select the best analysis"""
    evaluation_prompt = f"""
    Compare these stock analyses and select the best one:
    {json.dumps(analyses, indent=2)}

    Criteria:
    - Depth of analysis
    - Data accuracy
    - Actionable insights
    - Risk consideration

    Return the index (0-based) of the best analysis and explain why.
    """
    result = await llm.complete(evaluation_prompt)
    best_index = extract_index(result)
    return analyses[best_index]
```
Best for: Creative generation, strategy selection
Voting / Majority Rule
Consensus from multiple independent assessments:
```python
def majority_vote(decisions: List[str]) -> str:
    """Return most common decision"""
    from collections import Counter
    counts = Counter(decisions)
    return counts.most_common(1)[0][0]

def weighted_vote(decisions: List[dict]) -> str:
    """Vote weighted by confidence scores"""
    weighted_counts = {}
    for d in decisions:
        decision = d["decision"]
        confidence = d["confidence"]
        weighted_counts[decision] = weighted_counts.get(decision, 0) + confidence
    # Return the decision with the highest total confidence
    return max(weighted_counts, key=weighted_counts.get)
```

Best for: High-stakes decisions, fraud detection, compliance checks
Synthesis
Use an LLM to merge findings into a unified narrative:

```python
async def synthesize_research(analyses: dict) -> str:
    """Combine multiple analyses into unified report"""
    synthesis_prompt = f"""
    You are a senior investment strategist. Synthesize these analyses
    into a coherent investment recommendation:

    Fundamental Analysis: {analyses['fundamental']}
    Technical Analysis: {analyses['technical']}
    Sentiment Analysis: {analyses['sentiment']}
    Risk Analysis: {analyses['risk']}

    Create a unified recommendation that:
    - Weighs evidence from all sources
    - Addresses contradictions between analyses
    - Provides clear actionable guidance
    - Includes risk-adjusted position sizing
    """
    return await llm.complete(synthesis_prompt)
```
Best for: Complex multi-source analysis, research reports
The Evaluator-Optimizer Pattern
While parallelization handles distribution, the evaluator-optimizer pattern ensures quality through iterative refinement. This is critical for high-stakes financial outputs where accuracy and compliance are non-negotiable.
```mermaid
flowchart TD
    T[Task] --> O[Optimizer/Generator]
    O --> OUT[Output v1]
    OUT --> E[Evaluator]
    E --> D{Meets Criteria?}
    D -->|Yes| F[Final Output]
    D -->|No| FB[Feedback]
    FB --> O

    classDef orangeClass fill:#F39C12,stroke:#333,stroke-width:2px,color:#fff
    classDef greenClass fill:#27AE60,stroke:#333,stroke-width:2px,color:#fff
    class E orangeClass
    class D greenClass
```
Two Key Roles
Optimizer (Generator) Agent:
Takes initial task and generates output
Refines output based on evaluator feedback
Focuses on improving specific issues identified
Evaluator (Critique) Agent:
Acts as expert reviewer
Assesses output against predefined criteria
Provides specific, actionable feedback
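The loop tying the two roles together is simple to express. In this sketch, `generate` and `evaluate` stand in for the LLM-backed optimizer and evaluator agents, and `max_iterations` is an assumed stopping condition to prevent unbounded refinement:

```python
import asyncio
from typing import Awaitable, Callable, Tuple

async def refine_until_approved(
    task: str,
    generate: Callable[[str, str], Awaitable[str]],          # (task, feedback) -> output
    evaluate: Callable[[str], Awaitable[Tuple[bool, str]]],  # output -> (approved, feedback)
    max_iterations: int = 3,
) -> str:
    """Optimizer/evaluator loop: regenerate with feedback until criteria are met."""
    feedback = ""
    output = ""
    for _ in range(max_iterations):
        output = await generate(task, feedback)
        approved, feedback = await evaluate(output)
        if approved:
            return output
    # Stopping condition reached: return the best effort so far
    return output
```

The iteration cap matters in production: without it, an evaluator that can never be satisfied turns into an infinite (and expensive) loop.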
Essential Elements for Effective Loops
1. Clear Evaluation Criteria
Criteria must be specific, measurable, and unambiguous:
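As a sketch of what such criteria might look like for an investment recommendation (the specific items and wording here are illustrative assumptions, not a regulatory standard):

```python
# Illustrative evaluation criteria for a recommendation evaluator;
# each entry is phrased so a reviewer can check it pass/fail.
RECOMMENDATION_CRITERIA = {
    "thesis": "States a clear buy/hold/sell thesis with supporting evidence",
    "valuation": "Cites at least one valuation metric (e.g. P/E, DCF)",
    "risks": "Identifies at least two specific downside risks",
    "horizon": "Specifies an investment time horizon",
    "sizing": "Recommends a position size consistent with the risk profile",
}
```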
2. Actionable Feedback
Feedback must be specific enough to guide improvement:
```python
evaluator_prompt = """
Review this investment recommendation against our criteria:

Recommendation: {recommendation}
Criteria: {criteria}

For each criterion that is NOT met, provide:
1. What is missing or incorrect
2. Specific suggestion for how to fix it
3. Example of what a correct version would look like

Rate overall quality 1-10.
Respond APPROVED if score >= 8, otherwise provide detailed feedback.
"""
```
Handling Conflicting Analyses
Parallel analysts will sometimes disagree. Treat disagreement as signal, not failure:

```python
resolution_prompt = f"""
These analyses have conflicting conclusions:
{json.dumps(conflicts, indent=2)}

For each conflict:
1. Explain why the disagreement exists
2. Assess which perspective has stronger evidence
3. Provide a balanced view that acknowledges uncertainty

A conflict is NOT a reason to reject - it's information about uncertainty.
"""
```
Compliance Evaluation
Regulated outputs need an explicit check against fixed requirements:

```python
COMPLIANCE_CRITERIA = {
    "risk_disclosure": "Must include clear risk warnings",
    "suitability": "Must consider investor profile",
    "conflicts_of_interest": "Must disclose any conflicts",
    "past_performance": "Must include past performance disclaimer",
    "regulatory_status": "Must include regulatory information",
}

async def compliance_evaluation(recommendation: str) -> dict:
    """Check regulatory compliance"""
    prompt = f"""
    Review this investment recommendation for regulatory compliance:
    {recommendation}

    Check each requirement:
    {json.dumps(COMPLIANCE_CRITERIA, indent=2)}

    For any failed requirement, specify exactly what needs to be added.
    """
    return await llm.complete(prompt)
```
Key Takeaways
Parallelization requires independence - only parallelize tasks that don't depend on each other's outputs
Three decomposition strategies serve different needs: sectioning for large data, aspect-based for multi-angle analysis, and identical tasks for diversity/voting
Aggregation strategy matters - choose between concatenation, selection, voting, or synthesis based on the nature of outputs
Evaluator-optimizer loops ensure quality through iterative refinement with clear criteria, actionable feedback, and defined stopping conditions
Financial applications need special handling for conflicting analyses, compliance requirements, and audit trails
Combine patterns - parallelization for speed, evaluation-optimization for quality, creating robust pipelines for production use
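Putting the two patterns side by side, a combined pipeline might look like the sketch below. The agent, `generate`, and `evaluate` callables stand in for LLM-backed components, and the shape of the pipeline is one reasonable arrangement, not the only one:

```python
import asyncio

async def research_pipeline(ticker, agents, generate, evaluate, max_iterations=3):
    """Parallel scatter-gather for speed, then an evaluator loop for quality."""
    # Scatter: run the specialist agents concurrently on the same ticker
    names = list(agents)
    results = await asyncio.gather(*(agents[n](ticker) for n in names))
    findings = dict(zip(names, results))

    # Optimize: draft and refine the synthesized report until it passes review
    feedback = ""
    report = ""
    for _ in range(max_iterations):
        report = await generate(findings, feedback)
        approved, feedback = await evaluate(report)
        if approved:
            break
    return report
```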
This is the sixth post in my Applied Agentic AI for Finance series. Next: Orchestrating Financial Operations, where we'll explore the orchestrator-worker pattern for coordinating complex financial workflows.