AI & Machine Learning

Research Assistant

Andrius Putna
#multi-agent #research #langgraph #citations #synthesis

Single-agent “deep research” is useful but capped — one LLM working serially hits a wall on breadth. The 2026 answer is a multi-agent research system: a lead agent decomposes the question, dispatches specialist researchers in parallel, and synthesizes the output with citations. What took a senior analyst a day now runs in 20 minutes with sources you can check.

What it does

Decomposition. A lead agent reads the research question, identifies the sub-questions, and budgets how many parallel researchers to spawn. “Competitive landscape for X” becomes five researchers, each on a specific competitor; “state of the market for Y” becomes researchers for pricing, adoption, regulatory, and primary interviews.
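In production the decomposition is an LLM call made by the lead agent, but the contract it has to satisfy is simple: a bounded list of sub-questions, one per researcher. A minimal sketch, with the function name, example branches, and competitor names all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SubQuestion:
    topic: str   # what this researcher investigates
    focus: str   # the angle it should take

def decompose(question: str, max_researchers: int = 8) -> list[SubQuestion]:
    """Illustrative stand-in for the lead agent's LLM call: map a research
    question to a budgeted list of sub-questions, one per researcher."""
    q = question.lower()
    if "competitive landscape" in q:
        # Hypothetical: one researcher per known competitor.
        competitors = ["Competitor A", "Competitor B", "Competitor C",
                       "Competitor D", "Competitor E"]
        subs = [SubQuestion(c, "positioning, pricing, recent moves")
                for c in competitors]
    else:
        # Hypothetical: one researcher per market angle.
        angles = ["pricing", "adoption", "regulatory", "primary interviews"]
        subs = [SubQuestion(question, a) for a in angles]
    return subs[:max_researchers]  # budget: never spawn more than the cap

plan = decompose("Competitive landscape for X")
```

The cap matters: the lead agent decides the budget up front, so a broad question fans out wide and a narrow one stays cheap.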

Parallel researchers. Each researcher runs independently with its own web search, document access, and note-taking tool. They don’t block on each other. Typical run: 4-8 researchers, 15-20 minutes wall-clock.
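The fan-out itself is plain concurrency: researchers share no state, so they can run with `asyncio.gather`. A sketch with stubbed tool calls (the sleep stands in for real search latency; the note and source shapes are assumptions):

```python
import asyncio

async def run_researcher(sub_question: str) -> dict:
    """Stand-in for one researcher agent. In production this loops over web
    search and document tools; here it just returns structured notes."""
    await asyncio.sleep(0)  # placeholder for tool-call latency
    return {
        "question": sub_question,
        "notes": f"findings for {sub_question}",
        "sources": [f"https://example.com/{sub_question.replace(' ', '-')}"],
    }

async def fan_out(sub_questions: list[str]) -> list[dict]:
    # Researchers don't block on each other: gather runs them concurrently
    # and returns results in the same order as the inputs.
    return await asyncio.gather(*(run_researcher(q) for q in sub_questions))

notes = asyncio.run(fan_out(["pricing", "adoption", "regulatory"]))
```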

Synthesis with citations. A synthesizer reads each researcher’s notes, produces a structured report, and every claim links back to the source the researcher found it in. Nothing gets into the report that isn’t in somebody’s notes.
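The "nothing without a source" rule can be enforced mechanically, independent of the synthesizer's LLM call: drop any drafted claim whose citation doesn't appear in a researcher's notes. A hypothetical sketch of that gate:

```python
def enforce_citations(drafted_claims: list[dict],
                      all_notes: list[dict]) -> list[dict]:
    """Keep only claims whose citation exists in some researcher's notes.
    Illustrative: the claim/notes shapes here are assumptions."""
    allowed = {src for note in all_notes for src in note["sources"]}
    return [c for c in drafted_claims if c.get("citation") in allowed]

notes = [{"sources": ["https://example.com/a"]}]
claims = [
    {"text": "supported claim", "citation": "https://example.com/a"},
    {"text": "hallucinated claim", "citation": "https://example.com/zzz"},
]
report = enforce_citations(claims, notes)
```

Running the synthesizer's draft through a filter like this is what makes the citation trail trustworthy rather than decorative.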

Critic pass. A critic agent reads the synthesized report and flags: unsupported claims, missing counterarguments, stale sources. One more pass before the human sees it.
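The real critic is an LLM pass, but the categories of flags it raises are concrete. A sketch of the checks, with the claim fields, date cutoff, and flag wording all assumptions:

```python
from datetime import date

def critique(report: list[dict], today: date = date(2026, 1, 1)) -> list[str]:
    """Illustrative critic checks: unsupported claims, stale sources,
    missing counterarguments."""
    flags = []
    for i, claim in enumerate(report):
        if not claim.get("citation"):
            flags.append(f"claim {i}: unsupported (no citation)")
        pub = claim.get("published")
        if pub and (today - pub).days > 365:  # assumed staleness cutoff
            flags.append(f"claim {i}: stale source ({pub.isoformat()})")
    if not any(c.get("counterargument") for c in report):
        flags.append("report: no counterarguments surfaced")
    return flags

issues = critique([{"text": "X leads the market", "citation": None,
                    "published": date(2024, 3, 1)}])
```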

Tools it can use. WebSearch, WebFetch, your internal knowledge base (via DataConnect MCP), attached PDFs, interview transcripts, spreadsheet snapshots. Not just “whatever the LLM remembers.”

Where it fits

Competitive intelligence teams. Corp dev and M&A due diligence. Strategy and market research functions. Policy and legal research where the citation trail is the deliverable.

How it’s built

We use the research-agent pattern from Anthropic’s Claude Agent SDK (lead → parallel researchers → data analyst → report writer), adapted to your tool stack and research domain. LangGraph handles orchestration when the workflow needs persistent state; Claude Agent SDK subagents handle it when the workflow is a simple synchronous fan-out. Every agent and every tool call is traced, so you can audit where a specific claim came from.
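Stripped of any framework, the pipeline is a small state machine whose every transition gets recorded. A stdlib-only sketch of the lead → researchers → synthesizer → critic flow with tracing (in production LangGraph or SDK subagents hold this state; every name and payload below is illustrative):

```python
def run_pipeline(question: str) -> dict:
    """Minimal sketch of the orchestration loop with a per-step audit trace."""
    trace = []  # every agent step is recorded so claims can be audited later

    def step(name, fn, *args):
        out = fn(*args)
        trace.append({"agent": name, "output": out})
        return out

    # Lead agent decomposes (stubbed here with fixed sub-questions).
    subs = step("lead", lambda q: ["pricing", "adoption"], question)
    # One researcher step per sub-question (stubbed notes).
    notes = [step(f"researcher:{s}",
                  lambda s=s: {"topic": s, "sources": ["(traced source)"]})
             for s in subs]
    # Synthesizer folds notes into a report; critic reviews it.
    report = step("synthesizer", lambda n: {"claims": n}, notes)
    step("critic", lambda r: [], report)
    return {"report": report, "trace": trace}

result = run_pipeline("state of the market for Y")
```

The point of the sketch is the `trace` list: because every tool call and agent output is appended there, "where did this claim come from?" is a lookup, not an investigation.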

What “good” looks like

The deliverable is a markdown report with inline citations, structured front-matter metadata (scope, date, confidence), and an appendix of all sources consulted. We ship it with an eval suite of past research questions so you can see the model’s strengths and failure modes before relying on it for anything consequential.
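The shape of that deliverable can be shown with a toy renderer: front-matter block, cited claims, source appendix. The metadata keys mirror the ones named above; the function and everything else here is a hypothetical illustration:

```python
def render_report(metadata: dict, claims: list[dict],
                  sources: list[str]) -> str:
    """Illustrative renderer: front-matter, inline-cited body, appendix."""
    front_matter = "\n".join(f"{k}: {v}" for k, v in metadata.items())
    body = "\n".join(f"- {c['text']} [{c['citation']}]" for c in claims)
    appendix = "\n".join(f"- {s}" for s in sources)
    return f"---\n{front_matter}\n---\n\n{body}\n\n## Sources\n\n{appendix}\n"

doc = render_report(
    {"scope": "market for Y", "date": "2026-01-15", "confidence": "medium"},
    [{"text": "Adoption doubled in 2025",
      "citation": "https://example.com/report"}],
    ["https://example.com/report"],
)
```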

Pilots scope one research question type — the kind your team already runs weekly — and wrap in 2-3 weeks.

