Providers
Access 530+ models from 24 providers via models.dev integration with simplified configuration.
models.dev Integration
llms.py uses the models.dev open provider and model catalogue—the same actively maintained registry used by OpenCode. This integration provides access to 530+ models from 24 providers out of the box.
Your local llms.json provider configuration is a superset of models.dev/api.json. Provider definitions are automatically merged, allowing you to enable providers with minimal configuration.
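Conceptually, the merge behaves like a shallow overlay where your local settings win over the models.dev defaults. A minimal sketch (hypothetical helper, not llms.py's actual implementation):

```python
def merge_provider(catalog_entry: dict, local_entry: dict) -> dict:
    """Overlay local llms.json settings onto a models.dev catalog entry."""
    merged = dict(catalog_entry)  # models.dev defaults (api, models, pricing, ...)
    merged.update(local_entry)    # local keys win (enabled, api_key overrides, ...)
    return merged

# models.dev supplies the endpoint; llms.json only needs "enabled": true
catalog = {"id": "groq", "api": "https://api.groq.com/openai/v1"}
local = {"enabled": True}
effective = merge_provider(catalog, local)
```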
Simplified Configuration
Enable providers by ID—all configuration is automatically inherited from models.dev:
```json
{
  "openai": { "enabled": true },
  "anthropic": { "enabled": true },
  "google": { "enabled": true }
}
```

That's it. The `base_url`, supported models, pricing, and capabilities are all pulled from models.dev automatically.
Automatic Updates
Provider definitions are automatically updated daily into your ~/.llms/providers.json. You can also trigger a manual update:
```sh
llms --update-providers
```

As an optimization, only providers referenced in your llms.json are saved locally, keeping your configuration lightweight.
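The pruning step can be sketched as filtering the catalog down to the providers your config mentions (hypothetical helper, not the actual implementation):

```python
def prune_catalog(catalog: dict, config: dict) -> dict:
    """Keep only the models.dev providers referenced in the local llms.json."""
    return {pid: defn for pid, defn in catalog.items() if pid in config}

catalog = {"openai": {"api": "..."}, "groq": {"api": "..."}, "xai": {"api": "..."}}
config = {"groq": {"enabled": True}}
saved = prune_catalog(catalog, config)  # only "groq" is written locally
```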
Supported Providers
Cloud Providers
| Provider | Models | Description |
|---|---|---|
| OpenAI | 20+ | GPT-4o, GPT-5, o1, o3 series |
| Anthropic | 10+ | Claude Opus 4.5, Sonnet 4, Haiku models |
| Google | 25+ | Gemini 2.5 Flash/Pro, experimental models |
| OpenRouter | 100+ | Aggregator with free and paid models |
| xAI | 5+ | Grok-4, Grok-4-fast models |
| Groq | 10+ | Fast inference, competitive pricing |
| Mistral | 15+ | Mistral Large, Codestral, open models |
| DeepSeek | 2 | DeepSeek V3, R1 reasoning model |
| Alibaba | 39 | Qwen series models |
| Fireworks AI | 12 | Fast inference platform |
| GitHub Copilot | 27 | Copilot-integrated models |
| GitHub Models | 55 | GitHub-hosted model catalog |
| Hugging Face | 14 | Open model inference |
| Nvidia | 24 | NIM inference models |
| Cerebras | 3 | Fast inference chips |
| Chutes | 56 | AI compute platform |
| MiniMax | 1 | Chinese AI provider |
| Moonshot AI | 5 | Kimi models |
| Zai | 6 | Z.ai models |
Local Providers
| Provider | Description |
|---|---|
| Ollama | Run open models locally |
| LMStudio | Desktop app for local models |
TIP
Raise an issue to add support for any missing providers from models.dev you would like to use.
Provider Configuration
Basic Configuration
Provider configuration lives in ~/.llms/llms.json. Most providers only need "enabled": true:
```json
{
  "google": { "enabled": true },
  "groq": { "enabled": true },
  "openrouter": { "enabled": true }
}
```

Overriding Defaults
You can override any inherited setting from models.dev:
```json
{
  "openai": {
    "enabled": true,
    "api_key": "$MY_CUSTOM_OPENAI_KEY"
  }
}
```

Full Configuration Example
For providers not in models.dev or when you need complete control:
```json
{
  "enabled": true,
  "id": "codestral",
  "npm": "codestral",
  "api": "https://codestral.mistral.ai/v1",
  "env": ["CODESTRAL_API_KEY"],
  "models": {
    "codestral-latest": {
      "id": "codestral-latest",
      "name": "Codestral",
      "attachment": false,
      "reasoning": false,
      "tool_call": true,
      "temperature": true,
      "knowledge": "2024-10",
      "release_date": "2024-05-29",
      "last_updated": "2025-01-04",
      "modalities": {
        "input": ["text"],
        "output": ["text"]
      },
      "open_weights": true,
      "cost": { "input": 0.0, "output": 0.0 },
      "limit": { "context": 256000, "output": 4096 }
    }
  },
  "check": {
    "messages": [
      {
        "role": "user",
        "content": [{ "type": "text", "text": "1+1=" }]
      }
    ],
    "stream": false
  }
}
```

Provider Fields
| Field | Description |
|---|---|
| `enabled` | Whether the provider is active |
| `id` | Provider identifier |
| `npm` | Provider implementation identifier (see Provider Types) |
| `api` | API endpoint URL |
| `env` | Array of environment variable names for API keys |
| `models` | Model definitions (object keyed by model ID) |
| `check` | Test message configuration for connectivity checks |
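The `check` block is an ordinary Chat Completions request body fragment; conceptually, a connectivity check sends it as the payload to the provider's endpoint. A minimal sketch of assembling that payload (hypothetical helper names, not llms.py's actual code):

```python
def build_check_request(provider: dict, model_id: str) -> dict:
    """Assemble the Chat Completions payload a connectivity check would send."""
    check = provider.get("check", {})
    return {
        "model": model_id,
        "messages": check.get("messages", []),
        "stream": check.get("stream", False),
    }

provider = {
    "api": "https://codestral.mistral.ai/v1",
    "check": {
        "messages": [{"role": "user", "content": [{"type": "text", "text": "1+1="}]}],
        "stream": False,
    },
}
payload = build_check_request(provider, "codestral-latest")
```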
Model Fields
Each model in the models object supports:
| Field | Description |
|---|---|
| `id` | Model identifier used in API calls |
| `name` | Human-readable display name |
| `attachment` | Whether the model supports file attachments |
| `reasoning` | Whether the model has reasoning/thinking capability |
| `tool_call` | Whether the model supports function/tool calling |
| `temperature` | Whether the model supports the temperature parameter |
| `knowledge` | Knowledge cutoff date (e.g., "2024-10") |
| `release_date` | Model release date |
| `last_updated` | When the model definition was last updated |
| `modalities.input` | Supported input types (text, image, audio, file) |
| `modalities.output` | Supported output types (text, image, audio) |
| `open_weights` | Whether model weights are publicly available |
| `cost.input` | Cost per 1M input tokens |
| `cost.output` | Cost per 1M output tokens |
| `limit.context` | Maximum context window size in tokens |
| `limit.output` | Maximum output tokens per request |
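Since `cost.input` and `cost.output` are rates per 1M tokens, a request's price can be estimated with simple arithmetic. A sketch (hypothetical helper; the rates shown are illustrative, not a real model's pricing):

```python
def estimate_cost(model: dict, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from per-1M-token rates."""
    cost = model["cost"]
    return (input_tokens * cost["input"] + output_tokens * cost["output"]) / 1_000_000

model = {"cost": {"input": 0.25, "output": 1.25}}
# 10k input tokens at $0.25/1M plus 2k output tokens at $1.25/1M:
price = estimate_cost(model, 10_000, 2_000)  # 0.0025 + 0.0025 = 0.005
```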
Environment Variables
Set API keys as environment variables:
```sh
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AIza..."
export OPENROUTER_API_KEY="sk-or-..."
export GROQ_API_KEY="gsk_..."
export XAI_API_KEY="xai-..."
export MISTRAL_API_KEY="..."
export DEEPSEEK_API_KEY="sk-..."
export DASHSCOPE_API_KEY="sk-..." # Alibaba/Qwen
export FIREWORKS_API_KEY="..."
export GITHUB_TOKEN="ghp_..." # GitHub Models
export HUGGINGFACE_API_KEY="hf_..."
export NVIDIA_API_KEY="nvapi-..."
export CHUTES_API_KEY="..."
```

Reference them in config with a `$` prefix:

```json
{
  "api_key": "$GROQ_API_KEY"
}
```

Managing Providers
Enable/Disable via CLI
```sh
# Enable providers
llms --enable google anthropic openai

# Disable providers
llms --disable ollama

# List all providers and their models
llms ls

# List models for a specific provider
llms ls google
```

Enable/Disable in UI
Toggle providers directly in the Model Selector:
- Click to enable/disable providers
- See available models per provider
- Changes persist to config file
Update Provider Definitions
```sh
# Manually update from models.dev
llms --update-providers
```

Custom Providers
Adding to providers-extra.json
For providers not included in models.dev, add them to ~/.llms/providers-extra.json:
```json
{
  "my_custom_api": {
    "id": "my_custom_api",
    "npm": "@ai-sdk/openai-compatible",
    "api": "https://my-api.example.com/v1",
    "env": ["MY_CUSTOM_API_KEY"],
    "models": {
      "custom-model-1": "model-id-1",
      "custom-model-2": "model-id-2"
    }
  }
}
```

These definitions are merged into your providers.json on every update.
Then enable in your llms.json:

```json
{
  "my_custom_api": { "enabled": true }
}
```

Extra Providers
The default providers-extra.json uses this mechanism for image generation providers that are not yet supported in models.dev, such as GLM-Image:
```json
{
  "zai": {
    "models": {
      "glm-image": {
        "name": "GLM-Image",
        "modalities": {
          "input": ["text"],
          "output": ["image"]
        },
        "cost": { "input": 0, "output": 0.015 }
      }
    }
  }
}
```

Provider Types
llms.py providers are implemented as classes extending the base OpenAiCompatible class for OpenAI-compatible APIs, and built-in implementations are included for several popular OpenAI Chat Completion providers.
| Type | npm | Description |
|---|---|---|
| OpenAiCompatible | @ai-sdk/openai-compatible | OpenAI-compatible APIs (most providers) |
| MistralProvider | @ai-sdk/mistral | Access Mistral models |
| GroqProvider | @ai-sdk/groq | Access models hosted on Groq |
| XaiProvider | @ai-sdk/xai | Access xAI models |
| CodestralProvider | codestral | Access Mistral's Codestral models |
| OllamaProvider | ollama | Access local models via Ollama |
| LMStudioProvider | lmstudio | Access local models via LM Studio |
| OpenAiLocalProvider | openai-local | Access generic OpenAI-compatible local endpoints |
Additional OpenAI-compatible providers are implemented and registered in the providers folder using the `ctx.add_provider()` API with a custom OpenAiCompatible subclass, e.g.:

```python
from llms.main import OpenAiCompatible

class AnthropicProvider(OpenAiCompatible):
    sdk = "@ai-sdk/anthropic"
    #...

ctx.add_provider(AnthropicProvider)
```

Custom Provider implementations
| Type | npm | Description |
|---|---|---|
| AnthropicProvider | @ai-sdk/anthropic | Access Claude models using Anthropic's messages API |
| CerebrasProvider | @ai-sdk/cerebras | Access Cerebras models |
| GoogleProvider | @ai-sdk/google | Google models using the Gemini API |
| OpenAiProvider | @ai-sdk/openai | Access OpenAI models using Chat Completions API |
Multi Modal Generation Providers
For providers that support multiple modalities (e.g., image generation), custom provider implementations should instead extend the GeneratorBase class, as done in providers/openrouter.py to add support for image generation in OpenRouter:
```python
def install(ctx):
    from llms.main import GeneratorBase

    # https://openrouter.ai/docs/guides/overview/multimodal/image-generation
    class OpenRouterGenerator(GeneratorBase):
        sdk = "openrouter/image"
        #...

    ctx.add_provider(OpenRouterGenerator)

__install__ = install
```

This new implementation can be used by registering it as the image modality whose npm matches the provider's sdk in llms.json, e.g.:
```json
{
  "openrouter": {
    "enabled": true,
    "id": "openrouter",
    "modalities": {
      "image": {
        "name": "OpenRouter Image",
        "npm": "openrouter/image"
      }
    }
  }
}
```

Existing Image Generation Providers implemented this way include:
| Type | npm | Description |
|---|---|---|
| ChutesImage | openai-local | Chutes image generation provider |
| NvidiaGenAi | nvidia/image | NVIDIA GenAI image generation provider |
| OpenAiGenerator | openai/image | OpenAI image generation provider |
| OpenRouterGenerator | openrouter/image | OpenRouter image generation provider |
| ZaiGenerator | zai/image | Zai image generation provider |
Ollama (Local)
If no `map_models` have been configured, the Ollama provider will use automatic model discovery to populate its models:

```json
{
  "ollama": {
    "enabled": false,
    "npm": "ollama",
    "api": "http://localhost:11434"
  }
}
```

LM Studio (Local)
Likewise for LM Studio, which can be enabled with minimal configuration:
```json
{
  "lmstudio": {
    "enabled": false,
    "npm": "lmstudio",
    "api": "http://127.0.0.1:1234/v1",
    "models": {}
  }
}
```

Checking Provider Status
Test provider connectivity:
```sh
# Check all models for a provider
llms --check google

# Check specific models
llms --check openai gpt-4o gpt-5
```

Shows:
- ✅ Working models with response times
- ❌ Failed models with error messages
Best Practices
Cost Optimization
- Free First: Enable free-tier providers (Groq, OpenRouter free models) first
- Local When Possible: Use Ollama for privacy and cost savings
- Monitor Costs: Use analytics to track spending
Reliability
- Multiple Providers: Enable multiple providers for automatic failover
- Test Regularly: Use `--check` to verify connectivity
- Keep Updated: Run `--update-providers` periodically
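The failover idea above can be sketched as trying each enabled provider in order and falling through on failure; this is a simplified illustration with stand-in callables, not llms.py's actual routing code:

```python
def complete_with_failover(providers, request):
    """Try each enabled provider in order; fall through to the next on failure."""
    last_error = None
    for provider in providers:
        try:
            return provider(request)
        except Exception as exc:  # timeout, rate limit, auth error, ...
            last_error = exc
    raise RuntimeError("all providers failed") from last_error

# Stand-in providers; real ones would issue HTTP requests to their endpoints
def flaky(request):
    raise TimeoutError("rate limited")

def stable(request):
    return "2"

answer = complete_with_failover([flaky, stable], {"prompt": "1+1="})
```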
Security
- Environment Variables: Never commit API keys to source control
- Minimal Permissions: Use least-privilege API keys
- Rotate Keys: Regularly rotate API keys