Custom Provider Registry
The Custom Provider Registry allows you to register custom LLM providers that extend PraisonAI’s capabilities beyond the built-in providers (OpenAI, Anthropic, Google via LiteLLM).
Important: This registry is for custom provider extensions only. Built-in providers (OpenAI, Anthropic, Google, etc.) are handled automatically by LiteLLM in praisonaiagents. You only need this registry when integrating providers not supported by LiteLLM.
What Problem Does It Solve?
- Integrate proprietary or internal LLM APIs
- Add support for new providers before LiteLLM supports them
- Create mock providers for testing
- Build custom routing or caching layers
Quick Start
```python
from praisonai.llm import register_llm_provider, create_llm_provider

# Define a custom provider
class MyCustomProvider:
    provider_id = "my-custom"

    def __init__(self, model_id, config=None):
        self.model_id = model_id
        self.config = config or {}

    def generate(self, prompt):
        # Your implementation here
        return f"Response from {self.model_id}"

# Register the provider
register_llm_provider("my-custom", MyCustomProvider)

# Use it
provider = create_llm_provider("my-custom/my-model")
```
How Provider Resolution Works
The registry resolves providers in this order:
1. Explicit Provider Prefix

```python
# Explicitly specify provider
provider = create_llm_provider("cloudflare/workers-ai-model")
# Resolves to: provider_id="cloudflare", model_id="workers-ai-model"
```
2. Model Prefix Inference
When no provider is specified, the model name prefix determines the provider:
| Model Prefix | Inferred Provider |
|---|---|
| `gpt-*`, `o1*`, `o3*` | `openai` |
| `claude-*` | `anthropic` |
| `gemini-*` | `google` |
| Other | `openai` (default) |
```python
# These are equivalent:
create_llm_provider("gpt-4o-mini")
create_llm_provider("openai/gpt-4o-mini")

# These are equivalent:
create_llm_provider("claude-3-5-sonnet")
create_llm_provider("anthropic/claude-3-5-sonnet")
```
3. Custom Provider with Explicit Name
```python
# After registering "cloudflare"
create_llm_provider("cloudflare/workers-ai")
# Resolves to your registered CloudflareProvider
```
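The three-step resolution order above can be sketched in plain Python. This is a simplified illustration of the documented behavior, not the actual praisonai implementation; `resolve_provider` and `DEFAULT_PREFIXES` are hypothetical names:

```python
# Simplified sketch of the documented resolution order (illustrative only).
DEFAULT_PREFIXES = {
    "gpt-": "openai", "o1": "openai", "o3": "openai",
    "claude-": "anthropic", "gemini-": "google",
}

def resolve_provider(model_string):
    """Return (provider_id, model_id) for a model string."""
    if "/" in model_string:
        # Step 1: an explicit "provider/model" prefix always wins
        provider_id, model_id = model_string.split("/", 1)
        return provider_id, model_id
    # Step 2: infer the provider from the model-name prefix
    for prefix, provider_id in DEFAULT_PREFIXES.items():
        if model_string.startswith(prefix):
            return provider_id, model_string
    # Fallback: openai is the documented default
    return "openai", model_string

print(resolve_provider("cloudflare/workers-ai"))  # ('cloudflare', 'workers-ai')
print(resolve_provider("claude-3-5-sonnet"))      # ('anthropic', 'claude-3-5-sonnet')
print(resolve_provider("mystery-model"))          # ('openai', 'mystery-model')
```
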
Registering Custom Providers Safely
Unique Name Rules
Provider names must be unique. Attempting to register a duplicate name raises an error:
```python
register_llm_provider("my-provider", MyProvider)
register_llm_provider("my-provider", AnotherProvider)  # Raises ValueError!
```
Error message:

```
ValueError: Provider 'my-provider' is already registered. Use override=True to replace it.
```
Alias Rules
Aliases provide alternative names for a provider:
```python
register_llm_provider("cloudflare", CloudflareProvider, aliases=["cf", "workers-ai"])

# All of these now work:
create_llm_provider("cloudflare/model")
create_llm_provider("cf/model")
create_llm_provider("workers-ai/model")
```
Alias collision detection:
```python
register_llm_provider("provider-a", ProviderA, aliases=["shared"])
register_llm_provider("provider-b", ProviderB, aliases=["shared"])  # Raises ValueError!
```
Error message:

```
ValueError: Alias 'shared' is already registered (points to 'provider-a'). Use override=True to replace it.
```
Override Flag
Use override=True only when you intentionally want to replace an existing registration:
```python
# Replace existing provider
register_llm_provider("my-provider", NewProvider, override=True)
```

Avoid using `override=True` carelessly - it can break other code that depends on the original provider.
Avoiding Collisions
Recommended Naming Pattern
Use a namespace prefix to avoid collisions:
```python
# Good: namespaced provider names
register_llm_provider("mycompany-internal-llm", InternalProvider)
register_llm_provider("myproject-mock", MockProvider)

# Avoid: generic names that might conflict
register_llm_provider("custom", CustomProvider)  # Too generic
register_llm_provider("llm", LLMProvider)        # Too generic
```
Reserved Names
These provider names are reserved for model prefix inference and should not be used for custom providers:
- `openai` - Used for `gpt-*`, `o1*`, `o3*` models
- `anthropic` - Used for `claude-*` models
- `google` - Used for `gemini-*` models
You can register these names, but it will override the default inference behavior. Only do this if you’re intentionally replacing the built-in provider resolution.
Multi-Agent Guidance
Global Registry (Default)
The default global registry is shared across all code:
```python
from praisonai.llm import register_llm_provider

# This affects all code using the default registry
register_llm_provider("shared-provider", SharedProvider)
```
Isolated Registry (Per Agent/Run)
For multi-agent scenarios where you need isolation:
```python
from praisonai.llm import LLMProviderRegistry, create_llm_provider

# Create isolated registries
agent1_registry = LLMProviderRegistry()
agent2_registry = LLMProviderRegistry()

# Register different providers to each
agent1_registry.register("custom", Agent1Provider)
agent2_registry.register("custom", Agent2Provider)

# Use with create_llm_provider
provider1 = create_llm_provider("custom/model", registry=agent1_registry)
provider2 = create_llm_provider("custom/model", registry=agent2_registry)
# provider1 and provider2 are different instances from different classes
```
Benefits of isolated registries:
- No global state mutation
- Safe for concurrent/parallel agent runs
- Each agent can have its own provider configuration
- Testing isolation
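The isolation property can be demonstrated with a self-contained sketch, using a hypothetical `MiniRegistry` class as a stand-in for the real `LLMProviderRegistry`:

```python
# Hypothetical stand-in for LLMProviderRegistry, to show per-registry isolation.
class MiniRegistry:
    def __init__(self):
        self._providers = {}

    def register(self, name, provider_cls):
        self._providers[name] = provider_cls

    def resolve(self, name, model_id):
        # Each registry instance only sees its own registrations
        return self._providers[name](model_id)

class Agent1Provider:
    def __init__(self, model_id):
        self.model_id = model_id

class Agent2Provider:
    def __init__(self, model_id):
        self.model_id = model_id

r1, r2 = MiniRegistry(), MiniRegistry()
r1.register("custom", Agent1Provider)
r2.register("custom", Agent2Provider)

# The same "custom/model" string resolves differently per registry
p1 = r1.resolve("custom", "model")
p2 = r2.resolve("custom", "model")
print(type(p1).__name__, type(p2).__name__)  # Agent1Provider Agent2Provider
```
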
API Reference
register_llm_provider
```python
register_llm_provider(
    name: str,
    provider: ProviderClass | ProviderFactory,
    *,
    override: bool = False,
    aliases: list[str] | None = None
) -> None
```
create_llm_provider
```python
create_llm_provider(
    input_value: str | dict | ProviderInstance,
    *,
    registry: LLMProviderRegistry | None = None,
    config: dict | None = None
) -> ProviderInstance
```
LLMProviderRegistry
```python
class LLMProviderRegistry:
    def register(name, provider, *, override=False, aliases=None) -> None
    def unregister(name) -> bool
    def has(name) -> bool
    def list() -> list[str]
    def list_all() -> list[str]  # Includes aliases
    def resolve(name, model_id, config=None) -> ProviderInstance
    def get(name) -> ProviderClass | None
```
Utility Functions
```python
from praisonai.llm import (
    get_default_llm_registry,  # Get global registry
    has_llm_provider,          # Check if provider exists
    list_llm_providers,        # List registered providers
    unregister_llm_provider,   # Remove a provider
    parse_model_string,        # Parse "provider/model" strings
)
```
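A plausible sketch of what `parse_model_string` does, based only on the "provider/model" convention documented above - the real function's return shape may differ:

```python
def parse_model_string(value):
    """Split 'provider/model' into its two parts.

    Sketch only: assumes the documented "provider/model" convention.
    Returns (provider_or_none, model_id).
    """
    if "/" in value:
        provider, model = value.split("/", 1)
        return provider, model
    return None, value  # no explicit provider; prefix inference applies

print(parse_model_string("cloudflare/workers-ai"))  # ('cloudflare', 'workers-ai')
print(parse_model_string("gpt-4o-mini"))            # (None, 'gpt-4o-mini')
```
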
Minimal Example
```python
from praisonai.llm import register_llm_provider, create_llm_provider

class EchoProvider:
    """Simple provider that echoes input."""
    provider_id = "echo"

    def __init__(self, model_id, config=None):
        self.model_id = model_id
        self.config = config or {}

    def generate(self, prompt):
        return f"[{self.model_id}] Echo: {prompt}"

# Register
register_llm_provider("echo", EchoProvider)

# Use
provider = create_llm_provider("echo/v1")
print(provider.generate("Hello!"))
# Output: [v1] Echo: Hello!
```
Troubleshooting
Common Errors
“Provider ‘X’ is already registered”
```python
# Problem: Duplicate registration
register_llm_provider("my-provider", ProviderA)
register_llm_provider("my-provider", ProviderB)  # Error!

# Solution 1: Use a different name
register_llm_provider("my-provider-v2", ProviderB)

# Solution 2: Override intentionally
register_llm_provider("my-provider", ProviderB, override=True)
```
“Unknown provider: ‘X’”
```python
# Problem: Provider not registered
provider = create_llm_provider("unregistered/model")  # Error!

# Solution: Register first
register_llm_provider("unregistered", MyProvider)
provider = create_llm_provider("unregistered/model")
```
“Alias ‘X’ conflicts with existing provider”
```python
# Problem: Alias matches an existing provider name
register_llm_provider("provider-a", ProviderA)
register_llm_provider("provider-b", ProviderB, aliases=["provider-a"])  # Error!

# Solution: Use a unique alias
register_llm_provider("provider-b", ProviderB, aliases=["alias-b"])
```
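Code that may run more than once (notebooks, re-imported modules) can avoid duplicate-registration errors by checking before registering. This sketch assumes only the `has()`/`register()` interface documented above; `register_if_absent` and `StubRegistry` are hypothetical names, not part of the praisonai API:

```python
def register_if_absent(registry, name, provider_cls, *, aliases=None):
    """Register a provider only if the name is not already taken.

    `registry` is any object exposing the has()/register() interface
    documented above; this helper itself is illustrative.
    """
    if registry.has(name):
        return False  # already registered; leave the existing provider alone
    registry.register(name, provider_cls, aliases=aliases)
    return True

# Demo with a tiny stub that mimics the documented registry interface.
class StubRegistry:
    def __init__(self):
        self._providers = {}

    def has(self, name):
        return name in self._providers

    def register(self, name, provider_cls, *, override=False, aliases=None):
        if name in self._providers and not override:
            raise ValueError(f"Provider '{name}' is already registered.")
        self._providers[name] = provider_cls

class MyProvider: pass

reg = StubRegistry()
print(register_if_absent(reg, "my-provider", MyProvider))  # True  (registered)
print(register_if_absent(reg, "my-provider", MyProvider))  # False (no error raised)
```
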
Performance
- Lazy imports: The registry module only imports `typing` - no heavy dependencies
- Import time: ~800μs for `praisonai.llm.registry`
- O(1) operations: `has()`, `resolve()`, and `register()` are all O(1)
- No LiteLLM coupling: The registry is completely independent of LiteLLM
See Also