Providers
Connect to 18+ AI providers. Configure them in the extension or playground settings.
Providers are AI service adapters that connect WebLLM to different AI backends. You configure which providers to use, add your API keys, and set priorities.
When a website makes an AI request, WebLLM automatically selects the best provider based on your configuration and the task requirements.
Local AI
Run models directly on your device with zero external data sharing.
Aggregate Providers
Access hundreds of models through unified gateway APIs.
Model Providers
Direct access to AI model providers with your API keys.
OS and Browser Providers
Native inference engines from operating systems and browsers (expected soon).
Chrome Built-in AI
Coming soon
Windows Copilot Runtime
Coming soon
macOS Intelligence
Coming soon
Sponsored Gateways
Gateways provided by the developers as a convenience, so WebLLM works for all users, including mobile users and those who haven't set up the extension yet.
How to Configure Providers
Go to the Providers page in the extension or playground. Click on a provider and enter your API key from their website.
Drag providers to reorder them. Higher-priority providers are preferred when routing requests.
Toggle providers on/off. Disabled providers won't be used even if they have API keys configured.
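The settings above (API key, on/off toggle, priority order) can be pictured as a simple data shape. This is a minimal sketch with invented field names, not WebLLM's actual configuration schema:

```typescript
// Hypothetical provider configuration shape -- the field names here
// are illustrative, not WebLLM's actual schema.
interface ProviderConfig {
  id: string;          // provider identifier, e.g. "openai"
  apiKey?: string;     // key from the provider's website, if one is required
  enabled: boolean;    // disabled providers are never used, key or not
  priority: number;    // lower number = preferred earlier in routing
}

const providers: ProviderConfig[] = [
  { id: "local",  enabled: true,  priority: 0 },                    // no key needed
  { id: "openai", enabled: true,  priority: 1, apiKey: "sk-..." },
  { id: "groq",   enabled: false, priority: 2, apiKey: "gsk-..." }, // toggled off
];

// Only enabled providers, in priority order, are eligible for routing.
const eligible = providers
  .filter((p) => p.enabled)
  .sort((a, b) => a.priority - b.priority)
  .map((p) => p.id);
```

Note how the disabled provider drops out of `eligible` even though it has an API key configured, matching the toggle behavior described above.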
How WebLLM Selects Providers
WebLLM automatically selects the best provider for each request based on task requirements, model capabilities, and your priorities. The intelligent routing system scores providers across 16 criteria, including speed, quality, cost, and capabilities.
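As a rough illustration of score-based selection, the sketch below combines a few weighted criteria with the user-set priority. The criteria names, weights, and formula are invented for this example; they are not WebLLM's actual 16-criterion routing model:

```typescript
// Illustrative sketch of score-based provider selection.
// Criteria and weights are invented, not WebLLM's routing formula.
interface Candidate {
  id: string;
  enabled: boolean;
  priority: number;       // user-set ordering: lower is preferred
  speed: number;          // 0..1, higher is better
  quality: number;        // 0..1
  costEfficiency: number; // 0..1, higher means cheaper
}

function score(c: Candidate): number {
  // Weighted sum of criteria, plus a small bonus for user priority.
  const priorityBonus = 1 / (1 + c.priority);
  return 0.4 * c.quality + 0.3 * c.speed + 0.2 * c.costEfficiency + 0.1 * priorityBonus;
}

function selectProvider(candidates: Candidate[]): string | undefined {
  const enabled = candidates.filter((c) => c.enabled);
  if (enabled.length === 0) return undefined;
  return enabled.reduce((best, c) => (score(c) > score(best) ? c : best)).id;
}

const choice = selectProvider([
  { id: "local",  enabled: true,  priority: 0, speed: 0.9, quality: 0.5,  costEfficiency: 1.0 },
  { id: "openai", enabled: true,  priority: 1, speed: 0.6, quality: 0.95, costEfficiency: 0.4 },
  { id: "groq",   enabled: false, priority: 2, speed: 1.0, quality: 0.7,  costEfficiency: 0.8 },
]);
// choice → "local"
```

In this toy weighting the fast, free local provider edges out the higher-quality remote one; a request that demanded top quality would shift the weights and pick differently.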
Learn more about model routing →
Next Steps
Set up API keys and priorities in the playground
Go to Providers →
Understand how WebLLM selects models
View Routing Docs →