
Providers

Access 530+ models from 24 providers via models.dev integration with simplified configuration.

models.dev Integration

llms.py uses the models.dev open provider and model catalogue—the same actively maintained registry used by OpenCode. This integration provides access to 530+ models from 24 providers out of the box.

Your local llms.json provider configuration is a superset of models.dev/api.json. Provider definitions are automatically merged, allowing you to enable providers with minimal configuration.

Simplified Configuration

Enable providers by ID—all configuration is automatically inherited from models.dev:

{
  "openai": { "enabled": true },
  "anthropic": { "enabled": true },
  "google": { "enabled": true }
}

That's it. The base_url, supported models, pricing, and capabilities are all pulled from models.dev automatically.
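The inheritance can be pictured as a simple dictionary merge where local settings win. The sketch below is illustrative only (the catalogue subset is hand-written and llms.py's actual merge logic may differ):

```python
# Illustrative sketch of configuration inheritance: a minimal llms.json
# entry merged over models.dev defaults, with local settings winning.
# The catalogue subset below is hand-written, not the real api.json.
MODELS_DEV = {
    "groq": {
        "id": "groq",
        "api": "https://api.groq.com/openai/v1",
        "env": ["GROQ_API_KEY"],
    }
}

def merge_provider(provider_id: str, local: dict) -> dict:
    """Overlay local llms.json settings on the models.dev defaults."""
    merged = dict(MODELS_DEV.get(provider_id, {}))
    merged.update(local)
    return merged

provider = merge_provider("groq", {"enabled": True})
# provider now carries "enabled" plus the inherited "api" and "env" fields
```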

Automatic Updates

Provider definitions are automatically updated daily into your ~/.llms/providers.json. You can also trigger a manual update:

llms --update-providers

As an optimization, only providers referenced in your llms.json are saved locally, keeping your configuration lightweight.
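The optimization amounts to filtering the full catalogue down to the providers your llms.json references. A minimal sketch (not llms.py's actual implementation):

```python
# Sketch: keep only catalogue entries referenced in llms.json so the
# local providers.json stays small (illustrative, not llms.py's code).
def filter_referenced(catalogue: dict, llms_config: dict) -> dict:
    return {pid: pdef for pid, pdef in catalogue.items() if pid in llms_config}

catalogue = {"openai": {}, "groq": {}, "zai": {}}       # full models.dev catalogue
config = {"openai": {"enabled": True}, "groq": {"enabled": False}}
local_catalogue = filter_referenced(catalogue, config)  # only openai and groq kept
```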


Supported Providers

Cloud Providers

| Provider | Models | Description |
|---|---|---|
| OpenAI | 20+ | GPT-4o, GPT-5, o1, o3 series |
| Anthropic | 10+ | Claude Opus 4.5, Sonnet 4, Haiku models |
| Google | 25+ | Gemini 2.5 Flash/Pro, experimental models |
| OpenRouter | 100+ | Aggregator with free and paid models |
| xAI | 5+ | Grok-4, Grok-4-fast models |
| Groq | 10+ | Fast inference, competitive pricing |
| Mistral | 15+ | Mistral Large, Codestral, open models |
| DeepSeek | 2 | DeepSeek V3, R1 reasoning model |
| Alibaba | 39 | Qwen series models |
| Fireworks AI | 12 | Fast inference platform |
| GitHub Copilot | 27 | Copilot-integrated models |
| GitHub Models | 55 | GitHub-hosted model catalog |
| Hugging Face | 14 | Open model inference |
| Nvidia | 24 | NIM inference models |
| Cerebras | 3 | Fast inference chips |
| Chutes | 56 | AI compute platform |
| MiniMax | 1 | Chinese AI provider |
| Moonshot AI | 5 | Kimi models |
| Zai | 6 | Z.ai models |

Local Providers

| Provider | Description |
|---|---|
| Ollama | Run open models locally |
| LMStudio | Desktop app for local models |

TIP

Raise an issue to add support for any missing providers from models.dev you would like to use.


Provider Configuration

Basic Configuration

Provider configuration lives in ~/.llms/llms.json. Most providers only need "enabled": true:

{
  "google": { "enabled": true },
  "groq": { "enabled": true },
  "openrouter": { "enabled": true }
}

Overriding Defaults

You can override any inherited setting from models.dev:

{
  "openai": {
    "enabled": true,
    "api_key": "$MY_CUSTOM_OPENAI_KEY"
  }
}

Full Configuration Example

For providers not in models.dev or when you need complete control:

{
    "enabled": true,
    "id": "codestral",
    "npm": "codestral",
    "api": "https://codestral.mistral.ai/v1",
    "env": [
        "CODESTRAL_API_KEY"
    ],
    "models": {
        "codestral-latest": {
            "id": "codestral-latest",
            "name": "Codestral",
            "attachment": false,
            "reasoning": false,
            "tool_call": true,
            "temperature": true,
            "knowledge": "2024-10",
            "release_date": "2024-05-29",
            "last_updated": "2025-01-04",
            "modalities": {
                "input": [
                    "text"
                ],
                "output": [
                    "text"
                ]
            },
            "open_weights": true,
            "cost": {
                "input": 0.0,
                "output": 0.0
            },
            "limit": {
                "context": 256000,
                "output": 4096
            }
        }
    },
    "check": {
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "1+1="
                    }
                ]
            }
        ],
        "stream": false
    }
}

Provider Fields

| Field | Description |
|---|---|
| enabled | Whether the provider is active |
| id | Provider identifier |
| npm | Implementation identifier (matches a provider type's npm value) |
| api | API endpoint URL |
| env | Array of environment variable names for API keys |
| models | Model definitions (object keyed by model ID) |
| check | Test message configuration for connectivity checks |

Model Fields

Each model in the models object supports:

| Field | Description |
|---|---|
| id | Model identifier used in API calls |
| name | Human-readable display name |
| attachment | Whether the model supports file attachments |
| reasoning | Whether the model has reasoning/thinking capability |
| tool_call | Whether the model supports function/tool calling |
| temperature | Whether the model supports the temperature parameter |
| knowledge | Knowledge cutoff date (e.g., "2024-10") |
| release_date | Model release date |
| last_updated | When the model definition was last updated |
| modalities.input | Supported input types (text, image, audio, file) |
| modalities.output | Supported output types (text, image, audio) |
| open_weights | Whether model weights are publicly available |
| cost.input | Cost per 1M input tokens |
| cost.output | Cost per 1M output tokens |
| limit.context | Maximum context window size in tokens |
| limit.output | Maximum output tokens per request |
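Since cost.input and cost.output are priced per 1M tokens, a per-request estimate is a simple weighted sum. The helper and rates below are illustrative, not part of llms.py:

```python
# Illustrative cost estimate from a model definition's "cost" fields,
# which are priced per 1M tokens (this helper is not part of llms.py).
def estimate_cost(model: dict, input_tokens: int, output_tokens: int) -> float:
    cost = model["cost"]
    return (input_tokens * cost["input"] + output_tokens * cost["output"]) / 1_000_000

model = {"cost": {"input": 2.5, "output": 10.0}}  # hypothetical $/1M-token rates
total = estimate_cost(model, input_tokens=10_000, output_tokens=2_000)  # -> 0.045
```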

Environment Variables

Set API keys as environment variables:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AIza..."
export OPENROUTER_API_KEY="sk-or-..."
export GROQ_API_KEY="gsk_..."
export XAI_API_KEY="xai-..."
export MISTRAL_API_KEY="..."
export DEEPSEEK_API_KEY="sk-..."
export DASHSCOPE_API_KEY="sk-..."     # Alibaba/Qwen
export FIREWORKS_API_KEY="..."
export GITHUB_TOKEN="ghp_..."         # GitHub Models
export HUGGINGFACE_API_KEY="hf_..."
export NVIDIA_API_KEY="nvapi-..."
export CHUTES_API_KEY="..."

Reference in config with $ prefix:

{
  "api_key": "$GROQ_API_KEY"
}
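The $ prefix marks a value to be read from the environment at runtime. A minimal resolution sketch (llms.py's actual lookup logic may differ):

```python
# Sketch of "$VAR" resolution: values starting with "$" are looked up
# in the environment (llms.py's actual resolution logic may differ).
import os

def resolve_value(value):
    if isinstance(value, str) and value.startswith("$"):
        return os.environ.get(value[1:])
    return value

os.environ["GROQ_API_KEY"] = "gsk_example"  # set here for demonstration only
api_key = resolve_value("$GROQ_API_KEY")    # -> "gsk_example"
```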

Managing Providers

Enable/Disable via CLI

# Enable providers
llms --enable google anthropic openai

# Disable providers
llms --disable ollama

# List all providers and their models
llms ls

# List models for specific provider
llms ls google

Enable/Disable in UI

Toggle providers directly in the Model Selector:

  • Click to enable/disable providers
  • See available models per provider
  • Changes persist to config file

Update Provider Definitions

# Manually update from models.dev
llms --update-providers

Custom Providers

Adding to providers-extra.json

For providers not included in models.dev, add them to ~/.llms/providers-extra.json:

{
  "my_custom_api": {
    "id": "my_custom_api",
    "npm": "@ai-sdk/openai-compatible",
    "api": "https://my-api.example.com/v1",
    "env": [
      "MY_CUSTOM_API_KEY"
    ],
    "models": {
      "custom-model-1": "model-id-1",
      "custom-model-2": "model-id-2"
    }
  }
}

These definitions are merged into your providers.json on every update.

Then enable in your llms.json:

{
  "my_custom_api": { "enabled": true }
}

Extra Providers

This mechanism is used in the default providers-extra.json for image generation providers that are not yet supported in models.dev, such as GLM-Image, e.g.:

{
    "zai": {
        "models": {
            "glm-image": {
                "name": "GLM-Image",
                "modalities": {
                    "input": [
                        "text"
                    ],
                    "output": [
                        "image"
                    ]
                },
                "cost": {
                    "input": 0,
                    "output": 0.015
                }
            }
        }
    }
}
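A fragment like the one above is deep-merged with the base zai entry from the catalogue, so the extra model is added alongside the existing ones. A minimal merge sketch (model names here are illustrative, and llms.py's actual merge may differ):

```python
# Illustrative deep merge of a providers-extra.json fragment into the
# base catalogue entry (llms.py's actual merge logic may differ).
def deep_merge(base: dict, extra: dict) -> dict:
    out = dict(base)
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

base = {"zai": {"id": "zai", "models": {"glm-4.6": {"name": "GLM-4.6"}}}}
extra = {"zai": {"models": {"glm-image": {"name": "GLM-Image"}}}}
merged = deep_merge(base, extra)
# merged["zai"]["models"] now contains both glm-4.6 and glm-image
```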

Provider Types

llms.py providers are implemented as classes extending the base OpenAiCompatible class, which handles OpenAI-compatible APIs, and the library includes built-in implementations for several popular OpenAI Chat Completion providers.

| Type | npm | Description |
|---|---|---|
| OpenAiCompatible | @ai-sdk/openai-compatible | OpenAI-compatible APIs (most providers) |
| MistralProvider | @ai-sdk/mistral | Access Mistral models |
| GroqProvider | @ai-sdk/groq | Access models hosted on Groq |
| XaiProvider | @ai-sdk/xai | Access xAI models |
| CodestralProvider | codestral | Access Mistral's Codestral models |
| OllamaProvider | ollama | Access local models via Ollama |
| LMStudioProvider | lmstudio | Access local models via LM Studio |
| OpenAiLocalProvider | openai-local | Access generic OpenAI-compatible local endpoints |

Additional OpenAI-compatible providers are implemented and registered in the providers folder using the ctx.add_provider() API with a custom OpenAiCompatible subclass, e.g.:

from llms.main import OpenAiCompatible

class AnthropicProvider(OpenAiCompatible):
    sdk = "@ai-sdk/anthropic"
    #...

ctx.add_provider(AnthropicProvider)

Custom Provider implementations

| Type | npm | Description |
|---|---|---|
| AnthropicProvider | @ai-sdk/anthropic | Access Claude models using Anthropic's messages API |
| CerebrasProvider | @ai-sdk/cerebras | Access Cerebras models |
| GoogleProvider | @ai-sdk/google | Google models using the Gemini API |
| OpenAiProvider | @ai-sdk/openai | Access OpenAI models using Chat Completions API |

Multi Modal Generation Providers

For providers that support multiple modalities (e.g., image generation), custom provider implementations should instead extend the GeneratorBase class, as done in providers/openrouter.py to add support for image generation in OpenRouter.

def install(ctx):
    from llms.main import GeneratorBase

    # https://openrouter.ai/docs/guides/overview/multimodal/image-generation
    class OpenRouterGenerator(GeneratorBase):
        sdk = "openrouter/image"
        #...

    ctx.add_provider(OpenRouterGenerator)


__install__ = install

This new implementation can then be used by registering it as the image modality whose npm matches the provider's sdk in llms.json, e.g.:

{
    "openrouter": {
        "enabled": true,
        "id": "openrouter",
        "modalities": {
            "image": {
                "name": "OpenRouter Image",
                "npm": "openrouter/image"
            }
        }
    }
}

Existing Image Generation Providers implemented this way include:

| Type | npm | Description |
|---|---|---|
| ChutesImage | openai-local | Chutes image generation provider |
| NvidiaGenAi | nvidia/image | NVIDIA GenAI image generation provider |
| OpenAiGenerator | openai/image | OpenAI image generation provider |
| OpenRouterGenerator | openrouter/image | OpenRouter image generation provider |
| ZaiGenerator | zai/image | Zai image generation provider |

Ollama (Local)

If no map_models have been configured, the Ollama provider will use automatic model discovery to populate its models:

{
  "ollama": {
    "enabled": false,
    "npm": "ollama",
    "api": "http://localhost:11434"
  }
}
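Ollama lists its installed models at GET /api/tags, which is what automatic discovery can draw on. The sketch below parses a canned response so it runs without a live server (the discovery code in llms.py may differ):

```python
# Parse an Ollama /api/tags response (a real Ollama endpoint) to list
# installed model names; a canned payload stands in for a live server.
import json

def parse_ollama_tags(payload: str) -> list:
    """Return model names from an Ollama /api/tags response body."""
    return [m["name"] for m in json.loads(payload).get("models", [])]

sample = '{"models": [{"name": "llama3.2:latest"}, {"name": "qwen2.5:7b"}]}'
models = parse_ollama_tags(sample)  # -> ["llama3.2:latest", "qwen2.5:7b"]
```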

LM Studio (Local)

Likewise for LM Studio, which can be enabled with minimal configuration:

{
    "lmstudio": {
        "enabled": false,
        "npm": "lmstudio",
        "api": "http://127.0.0.1:1234/v1",
        "models": {}
    }
}

Checking Provider Status

Test provider connectivity:

# Check all models for a provider
llms --check google

# Check specific models
llms --check openai gpt-4o gpt-5

Shows:

  • ✅ Working models with response times
  • ❌ Failed models with error messages

Best Practices

Cost Optimization

  1. Free First: Enable free-tier providers (Groq, OpenRouter free models) first
  2. Local When Possible: Use Ollama for privacy and cost savings
  3. Monitor Costs: Use analytics to track spending

Reliability

  1. Multiple Providers: Enable multiple providers for automatic failover
  2. Test Regularly: Use --check to verify connectivity
  3. Keep Updated: Run --update-providers periodically

Security

  1. Environment Variables: Never commit API keys to source control
  2. Minimal Permissions: Use least-privilege API keys
  3. Rotate Keys: Regularly rotate API keys
