# LLM Module

The LLM module provides the core language model client and utilities for agent interactions, supporting multiple providers through LiteLLM.

## Installation
## Quick Start
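A minimal usage sketch. The import path (`praisonaiagents.llm`) is an assumption; adjust it to match your installation, and make sure an `OPENAI_API_KEY` is set in the environment.

```python
# Minimal sketch: the import path is assumed; adjust to your installation.
from praisonaiagents.llm import LLM

llm = LLM(model="gpt-4o-mini")  # uses OPENAI_API_KEY from the environment
response = llm.chat("Say hello in one sentence.")
print(response)
```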
## Classes

### LLM

Main LLM client class supporting multiple providers.

#### Constructor
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | `"gpt-4o-mini"` | Model identifier |
| `temperature` | `float` | `0.7` | Sampling temperature |
| `max_tokens` | `int` | `None` | Maximum tokens in the response |
| `api_key` | `str` | `None` | API key (uses env var if not set) |
| `base_url` | `str` | `None` | Custom API base URL |
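A sketch passing every constructor parameter explicitly (the import path is assumed and the values are illustrative):

```python
from praisonaiagents.llm import LLM  # import path assumed

llm = LLM(
    model="gpt-4o-mini",   # model identifier
    temperature=0.2,       # lower = more deterministic
    max_tokens=512,        # cap on response length
    api_key="sk-...",      # placeholder; falls back to the provider's env var if omitted
    base_url=None,         # set for self-hosted or proxy endpoints
)
```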
#### Methods
| Method | Description |
|---|---|
| `chat(prompt, **kwargs)` | Send a chat message |
| `chat_async(prompt, **kwargs)` | Send a chat message asynchronously |
| `stream(prompt, **kwargs)` | Stream the response chunk by chunk |
| `get_response(messages, **kwargs)` | Get a response from a list of messages |
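A hedged sketch of the message-list and async methods (import path assumed; the OpenAI-style message format is also an assumption):

```python
import asyncio

from praisonaiagents.llm import LLM  # import path assumed

llm = LLM(model="gpt-4o-mini")

# get_response takes a full message list rather than a single prompt string.
messages = [
    {"role": "system", "content": "You are concise."},
    {"role": "user", "content": "What is LiteLLM?"},
]
print(llm.get_response(messages))

# chat_async is the awaitable counterpart of chat.
async def main():
    reply = await llm.chat_async("Ping?")
    print(reply)

asyncio.run(main())
```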
### LLMContextLengthExceededException

Exception raised when the context length is exceeded.

### OpenAIClient

OpenAI-compatible client for direct API access.

### ModelRouter

Intelligent model routing based on task complexity.

### TaskComplexity
| Level | Description |
|---|---|
| `LOW` | Simple tasks, quick responses |
| `MEDIUM` | Moderate complexity |
| `HIGH` | Complex reasoning, long context |
### ModelProfile

Profile for a model's capabilities.

## Response Types
### ChatCompletion

### ChatCompletionMessage

### ToolCall

### CompletionUsage

## Utility Functions
### supports_structured_outputs

Check if a model supports structured outputs.

### supports_streaming_with_tools

Check if a model supports streaming with tool calls.

### process_stream_chunks

Process streaming response chunks.

### create_routing_agent

Create an agent with model routing.

## Usage Examples
### Basic Chat
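A basic single-turn chat, sketched under the assumed import path:

```python
from praisonaiagents.llm import LLM  # import path assumed

llm = LLM(model="gpt-4o-mini", temperature=0.7)
print(llm.chat("Summarize the plot of Hamlet in two sentences."))
```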
### Streaming Response
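A streaming sketch: `stream()` yields the response incrementally. The import path is assumed, as is the chunk shape (plain text fragments); check your version.

```python
from praisonaiagents.llm import LLM  # import path assumed

llm = LLM(model="gpt-4o-mini")
# Chunks are assumed to be plain text fragments; print them as they arrive.
for chunk in llm.stream("Write a haiku about rivers."):
    print(chunk, end="", flush=True)
print()
```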
### With Tools
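A tool-calling sketch. Passing plain callables via a `tools` keyword argument is an assumption about this API; the `get_weather` helper is hypothetical.

```python
from praisonaiagents.llm import LLM  # import path assumed

def get_weather(city: str) -> str:
    """Hypothetical tool: return a canned weather report for a city."""
    return f"It is sunny in {city}."

llm = LLM(model="gpt-4o-mini")
# The `tools` kwarg is an assumption; check your version's tool-calling API.
response = llm.chat("What's the weather in Paris?", tools=[get_weather])
print(response)
```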
### Model Routing
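A routing sketch using `create_routing_agent` and `TaskComplexity`. The import paths, the `models` mapping keyword, and the model names are all assumptions; consult the function's actual signature in your version.

```python
from praisonaiagents.llm import TaskComplexity, create_routing_agent  # paths assumed

# The `models` mapping is illustrative; the real signature may differ.
agent = create_routing_agent(
    models={
        TaskComplexity.LOW: "gpt-4o-mini",
        TaskComplexity.MEDIUM: "gpt-4o",
        TaskComplexity.HIGH: "o1-preview",
    },
)
# The router picks a model based on the estimated complexity of the task.
print(agent.chat("Prove that the square root of 2 is irrational."))
```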
### Different Providers
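Since routing goes through LiteLLM, other providers can be selected with `provider/model` identifiers. The model names below are illustrative; each provider needs its own API key environment variable (see the table below).

```python
from praisonaiagents.llm import LLM  # import path assumed

# LiteLLM-style "provider/model" identifiers (model names illustrative).
openai_llm = LLM(model="gpt-4o-mini")
anthropic_llm = LLM(model="anthropic/claude-3-5-sonnet-20241022")
gemini_llm = LLM(model="gemini/gemini-1.5-pro")
groq_llm = LLM(model="groq/llama-3.1-70b-versatile")
openrouter_llm = LLM(model="openrouter/meta-llama/llama-3.1-70b-instruct")
```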
## Environment Variables
| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `GOOGLE_API_KEY` | Google AI API key |
| `GROQ_API_KEY` | Groq API key |
| `OPENROUTER_API_KEY` | OpenRouter API key |
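These can be exported in the shell before running your agent (the key values are placeholders; set only the providers you use):

```shell
# Placeholders -- replace with your real keys; set only the providers you use.
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
export GROQ_API_KEY="gsk_..."
export OPENROUTER_API_KEY="sk-or-..."
```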

