Privacy Policy
How WebLLM handles your data and interacts with third-party AI providers.
Last updated: January 2025
Overview
WebLLM is a browser-native protocol that enables websites to access AI capabilities through the navigator.llm API. This privacy policy explains how data flows through WebLLM and your responsibilities when using the service.
WebLLM itself is an orchestration layer. We do not train AI models or store your conversation content on our servers. However, when you use cloud AI providers through WebLLM, your data is processed according to each provider's privacy policy.
Chrome Extension Users: For detailed information about extension permissions, local data storage, and browser-specific privacy practices, see our Chrome Extension Privacy Policy.
How Your Data Flows
When using the browser extension, data is stored locally:
- API keys (encrypted in IndexedDB)
- Conversation history
- Permission grants per website
- Usage statistics
When using cloud AI providers, your prompts and data are sent to:
- OpenAI (GPT models)
- Anthropic (Claude models)
- Google (Gemini models)
- And other configured providers
Third-Party AI Providers
Important: Provider Data Policies Vary
WebLLM connects to various AI model providers. Each provider has different policies regarding:
Data Retention
Some providers retain prompts and responses for service improvement, while others delete them immediately. Review each provider's policy before use.
Training Usage
Some providers may use API data for model training (an opt-out is often available), while others never do. Check provider settings for data usage controls.
Geographic Processing
Data may be processed in various jurisdictions depending on the provider. Consider this for compliance requirements (GDPR, CCPA, etc.).
Logging & Monitoring
Providers typically log requests for abuse prevention and debugging. Logs may be retained for varying periods.
Recommendation: Review the privacy policy of each provider you configure in WebLLM. For sensitive data, consider using local models (Ollama, LM Studio), which process everything on your device.
WebLLM Gateway Service
If you use WebLLM's hosted gateway service (for websites without extension support), the following applies:
Authentication: We store the tokens and access credentials necessary to authenticate your requests.
Request Metadata: We log request timestamps, token counts, and rate-limiting data. We do not log prompt content.
Proxy Only: Gateway requests are forwarded to your configured provider. We do not store, read, or analyze the content of your AI conversations.
Contact
For privacy-related questions or data requests, please open an issue on our GitHub repository or reach out through our community channels.
Changes to This Policy
As WebLLM evolves from experimental preview to production, this privacy policy may be updated. Significant changes will be announced through our GitHub repository and website. Continued use of the service after changes constitutes acceptance of the updated policy.