## Workflow
- Receive research query
- Execute web searches via provider API
- Perform multi-step reasoning
- Generate comprehensive report with citations
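The four steps above can be sketched as a single pipeline. This is a minimal illustration only: the function names (`run_search`, `reason`, `write_report`, `deep_research`) are hypothetical stand-ins, not the framework's real API.

```python
def run_search(query):
    # Stand-in for the provider's built-in web search tool.
    return [{"title": f"Result for {query}", "url": "https://example.com"}]

def reason(query, sources):
    # Stand-in for the model's multi-step reasoning over gathered sources.
    return f"Findings for '{query}' based on {len(sources)} source(s)."

def write_report(findings, sources):
    # Assemble the final report with structured citations.
    citations = "\n".join(f"- {s['title']} ({s['url']})" for s in sources)
    return f"{findings}\n\nCitations:\n{citations}"

def deep_research(query):
    sources = run_search(query)             # 1. web search
    findings = reason(query, sources)       # 2. multi-step reasoning
    return write_report(findings, sources)  # 3. cited report
```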
## Setup
## Run — Python
## Run — CLI
## Run — agents.yaml
## Serve API
## OpenAI Deep Research
## Gemini Deep Research
## Features
### Multi-Provider
Supports OpenAI, Gemini, and LiteLLM providers.
### Real-time Streaming
See reasoning summaries and web searches as they happen.
### Structured Citations
Get citations with titles and URLs.
### Auto Detection
Provider automatically detected from model name.
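Detection from the model name might look like the sketch below, keyed to the models listed on this page. The exact heuristics the framework uses are an assumption; unrecognized names are assumed to fall through to LiteLLM routing.

```python
def detect_provider(model: str) -> str:
    """Infer the provider from a model name (illustrative heuristic)."""
    # OpenAI deep research models: o3-deep-research, o4-mini-deep-research
    if "deep-research" in model and model.startswith(("o3", "o4")):
        return "openai"
    # Gemini deep research model: deep-research-pro
    if model.startswith("deep-research") or "gemini" in model:
        return "gemini"
    # Everything else is routed through LiteLLM (assumed fallback).
    return "litellm"
```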
## Streaming Output
Streaming is enabled by default. You will see:
- 💭 Reasoning summaries
- 🔎 Web search queries
- Final report text
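Consuming the stream can be sketched as a loop over typed events. The event type names (`reasoning`, `web_search`, `text`) and field names here are illustrative assumptions, not the provider's actual event schema.

```python
def render(events):
    """Turn a stream of mock events into the console output shown above."""
    lines = []
    for ev in events:
        if ev["type"] == "reasoning":
            lines.append(f"💭 {ev['summary']}")   # reasoning summary
        elif ev["type"] == "web_search":
            lines.append(f"🔎 {ev['query']}")     # web search query
        elif ev["type"] == "text":
            lines.append(ev["delta"])             # final report text
    return "\n".join(lines)

# Mock stream standing in for real provider events.
mock_stream = [
    {"type": "reasoning", "summary": "Breaking the question into sub-topics"},
    {"type": "web_search", "query": "latest LLM benchmarks"},
    {"type": "text", "delta": "## Report\n..."},
]
```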
## Response Structure
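The response shape is not reproduced here, but based on the features on this page (report text, citations with titles and URLs, and a Gemini `interaction_id` for follow-ups), a plausible structure is sketched below. Field names are assumptions, not the framework's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Citation:
    title: str
    url: str

@dataclass
class ResearchResponse:
    content: str                                   # final report text
    citations: List[Citation] = field(default_factory=list)
    interaction_id: Optional[str] = None           # set by Gemini for follow-ups
```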
## Available Models
| Provider | Models |
|---|---|
| OpenAI | o3-deep-research, o4-mini-deep-research |
| Gemini | deep-research-pro |
## Configuration Options
### With Custom Instructions
## Accessing Citations
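Working with citations might look like the sketch below, which assumes the response carries a `citations` list of title/URL pairs as described under Features. The field names are illustrative, not the real response schema.

```python
def format_citations(response: dict) -> list:
    """Render each citation as 'Title: URL' (assumed field names)."""
    return [f"{c['title']}: {c['url']}" for c in response.get("citations", [])]

# Mock response standing in for a real deep research result.
response = {
    "content": "Report text...",
    "citations": [{"title": "Example Source", "url": "https://example.com"}],
}
```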
## Monitor / Verify
## Cleanup
## Features Demonstrated
| Feature | Implementation |
|---|---|
| Workflow | Multi-step reasoning with web search |
| Observability | `--verbose` flag, streaming output |
| Tools | Built-in web search via provider API |
| Resumability | `interaction_id` for Gemini follow-ups |
| Structured Output | Citations with titles and URLs |
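The resumability row above can be illustrated with a hypothetical follow-up helper: the `interaction_id` field name comes from this page, but the request shape and continuation mechanics are assumptions, not the real Gemini API.

```python
def follow_up(prev_response: dict, question: str) -> dict:
    """Build a follow-up request that resumes a prior research session."""
    request = {"input": question}
    # Carry the interaction_id forward so Gemini can continue the same session.
    if prev_response.get("interaction_id"):
        request["interaction_id"] = prev_response["interaction_id"]
    return request
```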
## Next Steps
- Research Agent for custom research workflows
- RAG for document-based research
- Memory for persistent research context

