A workflow that distributes tasks across multiple LLM calls simultaneously, aggregating results to handle complex or large-scale operations efficiently.
Documentation Index
Fetch the complete documentation index at: https://docs.praison.ai/llms.txt
Use this file to discover all available pages before exploring further.
Quick Start
Requirements
- Python 3.10 or higher
- An OpenAI API key (other model providers are also supported; see the models guide)
- Basic understanding of Python and async programming
Understanding Parallelisation
What is Parallelisation?
Parallelisation enables:
- Concurrent execution of multiple tasks
- Improved performance through parallel processing
- Efficient handling of independent operations
- Aggregation of parallel task results
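The ideas above can be sketched with plain asyncio. This is a minimal illustration, not the PraisonAI API: `fake_llm_call` is a hypothetical stand-in for a real model request.

```python
import asyncio

# Hypothetical stand-in for an LLM call; in practice this would be an
# async request to a model provider.
async def fake_llm_call(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"answer to: {prompt}"

async def run_parallel(prompts: list[str]) -> list[str]:
    # gather() runs all coroutines concurrently and returns their
    # results in the same order as the inputs
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))

results = asyncio.run(run_parallel(["task A", "task B", "task C"]))
print(results)
```

Because `gather` preserves input order, aggregating the results afterwards does not require tracking which task produced which output.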
Features
Parallel Execution
Run multiple tasks simultaneously for improved performance.
Async Support
Built-in support for asynchronous execution.
Result Aggregation
Combine results from parallel tasks efficiently.
Process Control
Monitor and manage parallel task execution.
Configuration Options
Troubleshooting
Execution Issues
If parallel execution fails:
- Check async configuration
- Verify task independence
- Monitor resource usage
Result Aggregation
If aggregation is incorrect:
- Review task outputs
- Check context connections
- Verify aggregator logic
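One practical way to verify aggregator logic is to test it in isolation with fixed inputs before wiring it to live parallel tasks. The `aggregate` function below is a hypothetical example, not part of the library.

```python
def aggregate(outputs: list[str]) -> str:
    # Hypothetical aggregator: drop empty results, join the rest
    return "; ".join(o for o in outputs if o)

# Check behaviour on known inputs, including edge cases
assert aggregate(["a", "", "b"]) == "a; b"
assert aggregate([]) == ""
print("aggregator ok")
```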
Next Steps
AutoAgents
Learn about automatically created and managed AI agents
Mini Agents
Explore lightweight, focused AI agents
For optimal results, ensure your parallel tasks are truly independent and your system has sufficient resources to handle concurrent execution.

