The web platform has APIs for almost everything:
- navigator.geolocation - Location
- navigator.mediaDevices - Camera/microphone
- localStorage / IndexedDB - Storage
- navigator.bluetooth - Bluetooth
- navigator.clipboard - Clipboard
What's missing? AI.
This is a technical proposal for navigator.llm—a standardized, permission-gated API for AI capabilities in browsers.
The Gap in Web Platform APIs
What Exists
Location:
navigator.geolocation.getCurrentPosition(callback);
User grants permission, browser provides location. Works the same across browsers.
Camera:
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
User grants permission, browser provides camera access. Works the same across browsers.
Storage:
localStorage.setItem('key', 'value');
No permission prompt needed (within storage quotas); consistent API across browsers.
What's Missing
AI:
// This doesn't exist natively
navigator.llm.prompt('Summarize this page');
Currently, developers must:
- Set up backend infrastructure
- Manage API keys
- Handle multiple providers
- Build their own permission/consent UI
Every app reinvents this. Users have no control.
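To make the cost concrete, here is a hedged sketch of what each app hand-rolls today: a backend proxy call, provider selection, and its own consent tracking. The `/api/llm` route, provider names, and consent store are all hypothetical application code, not part of any standard.

```javascript
// What every app builds today without navigator.llm. The /api/llm proxy
// route and the consent store are hypothetical application code.
const consent = new Map(); // origin -> boolean, hand-rolled consent store

function hasConsent(origin) {
  return consent.get(origin) === true;
}

async function promptViaBackend(text, { provider = 'openai' } = {}) {
  if (!hasConsent('https://example.com')) {
    throw new Error('No consent recorded'); // every app invents its own UX
  }
  // The API key lives server-side; /api/llm is a hypothetical proxy route.
  const res = await fetch('/api/llm', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ provider, prompt: text })
  });
  if (!res.ok) throw new Error(`Provider error: ${res.status}`);
  return res.text();
}
```

All of this boilerplate disappears if the browser mediates permission and provider choice.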
Proposed API Surface
Basic Prompt
// Simple completion
if ('llm' in navigator) {
const response = await navigator.llm.prompt('What is 2+2?');
console.log(response); // "4"
}
Behavior:
- Checks user's AI configuration
- Triggers permission prompt if needed
- Routes to user's chosen provider
- Returns response
Streaming
// Real-time response streaming
const stream = navigator.llm.streamPrompt('Write a short story');
for await (const chunk of stream) {
output.textContent += chunk;
}
Behavior:
- Same permission model as non-streaming
- Yields chunks as they arrive
- Throws on error, can be cancelled
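The cancellation behavior above can be sketched with AbortController, assuming the proposed API accepts a `signal` option the way fetch does. A mock stands in for navigator.llm so the pattern is runnable:

```javascript
// Mock of the proposed streamPrompt, assuming it accepts a `signal` option.
const llm = {
  async *streamPrompt(text, { signal } = {}) {
    for (const chunk of ['Once ', 'upon ', 'a ', 'time.']) {
      if (signal?.aborted) {
        throw new DOMException('Stream cancelled', 'AbortError');
      }
      yield chunk;
    }
  }
};

// Consume the stream, cancelling after `limit` chunks.
async function readSome(limit) {
  const controller = new AbortController();
  let out = '';
  let seen = 0;
  try {
    for await (const chunk of llm.streamPrompt('Write a short story', {
      signal: controller.signal
    })) {
      out += chunk;
      if (++seen >= limit) controller.abort(); // stop after `limit` chunks
    }
  } catch (err) {
    if (err.name !== 'AbortError') throw err; // AbortError is expected here
  }
  return out;
}
```

This matches the error-handling convention later in the proposal: cancellation surfaces as an AbortError, not a silent stop.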
Sessions (Conversation Context)
// Maintain context across messages
const session = await navigator.llm.createSession({
system: 'You are a helpful coding assistant.'
});
const response1 = await session.prompt('What is a closure?');
const response2 = await session.prompt('Show me an example');
// Second response knows we're discussing closures
await session.close();
Behavior:
- Sessions maintain conversation history
- System message sets behavior
- Must be explicitly closed
- Optional: persist across page loads
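Because sessions must be explicitly closed, a try/finally wrapper is a natural pattern. A hedged sketch, with `createSession` passed in so the helper is testable (in a page it would be `navigator.llm.createSession`):

```javascript
// Guarantees a session is closed even when a prompt throws.
// `createSession` stands in for navigator.llm.createSession.
async function withSession(createSession, options, fn) {
  const session = await createSession(options);
  try {
    return await fn(session);
  } finally {
    await session.close(); // runs on success and on error alike
  }
}

// Usage (assuming the proposed API):
// const answer = await withSession(
//   opts => navigator.llm.createSession(opts),
//   { system: 'You are a helpful coding assistant.' },
//   session => session.prompt('What is a closure?')
// );
```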
Permission Management
// Query current state
const status = await navigator.permissions.query({ name: 'llm' });
console.log(status.state); // 'granted', 'denied', or 'prompt'
// Listen for changes
status.addEventListener('change', () => {
console.log('Permission changed:', status.state);
});
// Request permission explicitly
const permission = await navigator.llm.requestPermission();
Behavior:
- Integrates with Permissions API
- First use triggers prompt (if not pre-granted)
- Remembered per-origin
- Revocable via browser settings
Capability Detection
// Check for support
if ('llm' in navigator) {
// API available (extension or native)
}
// Query capabilities (future)
const caps = await navigator.llm.getCapabilities();
// {
// maxTokens: 4096,
// streaming: true,
// tools: true,
// vision: false
// }
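A site would branch on that capability object before choosing a feature path. A small sketch, where the object shape mirrors the hypothetical example above:

```javascript
// Pick a code path from the (future) capabilities object. The shape here
// is a stand-in for what getCapabilities() might return.
function pickStrategy(caps) {
  if (caps.vision) return 'describe-image';
  if (caps.streaming) return 'stream-text';
  return 'buffered-text';
}
```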
Error Handling
try {
const result = await navigator.llm.prompt(input);
} catch (error) {
switch (error.name) {
case 'NotAllowedError':
// User denied permission
break;
case 'NotSupportedError':
// No AI provider configured
break;
case 'AbortError':
// Request was cancelled
break;
case 'QuotaExceededError':
// Rate limited or quota exceeded
break;
case 'NetworkError':
// Provider unavailable
break;
}
}
Error types follow web platform conventions: the same DOMException names that fetch and other platform APIs use.
Options and Parameters
Prompt Options
const response = await navigator.llm.prompt(input, {
// Sampling parameters
temperature: 0.7, // 0-2, default varies by provider
maxTokens: 1000, // Max response length
// Provider hints (may be ignored)
preferLocal: true, // Prefer local model if available
timeout: 30000, // Timeout in ms
// Cancellation
signal: abortController.signal
});
Session Options
const session = await navigator.llm.createSession({
system: 'You are a helpful assistant.', // System message
temperature: 0.5, // Default for session
maxTokens: 2000, // Default for session
});
Design Principles
1. Permission-Gated (Like Geolocation)
┌─────────────────────────────────────────────────────┐
│ example.com wants to use AI │
│ │
│ This site is requesting AI assistance. │
│ Your AI provider is: Ollama (Local) │
│ │
│ [Allow] [Allow for this session] [Deny] │
│ │
│ ☑ Remember my decision for this site │
└─────────────────────────────────────────────────────┘
- Clear what's being requested
- User controls per-site
- Revocable anytime
2. Provider Agnostic
The API doesn't specify providers. User configures:
- OpenAI
- Anthropic
- Ollama (local)
- Mistral
- Any OpenAI-compatible endpoint
Website doesn't know or care which provider.
3. Progressive Enhancement
async function summarize(text) {
// Best: Browser AI
if ('llm' in navigator) {
return await navigator.llm.prompt(`Summarize: ${text}`);
}
// Fallback: Server-side
const res = await fetch('/api/summarize', { method: 'POST', body: text });
return res.text();
// Or: Feature unavailable
}
Apps should work without AI.
4. Minimal Surface
Start simple:
- prompt() - Basic completion
- streamPrompt() - Streaming
- createSession() - Conversations
Add later (if needed):
- generateImage() - Image generation
- embed() - Embeddings
- transcribe() - Speech to text
5. Web Platform Conventions
Follows existing patterns:
- Promise-based
- Async iterators for streaming
- Permissions API integration
- Standard error types
- AbortController support
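These conventions compose. AbortSignal.timeout() and AbortSignal.any() are existing platform primitives; only navigator.llm is new here. A hedged sketch of combining a caller's abort signal with a deadline:

```javascript
// Combine an optional caller signal with a deadline, using existing
// platform primitives (AbortSignal.timeout, AbortSignal.any).
function combinedSignal(signal, ms) {
  return signal
    ? AbortSignal.any([signal, AbortSignal.timeout(ms)])
    : AbortSignal.timeout(ms);
}

// Usage (assuming the proposed prompt options):
// const text = await navigator.llm.prompt(input, {
//   signal: combinedSignal(controller.signal, 30_000)
// });
```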
Comparison with Chrome's Approach
Chrome's Built-in AI
// Chrome-only API
const session = await ai.languageModel.create();
const result = await session.prompt("Hello");
Characteristics:
- Chrome-specific (not standardized)
- Gemini-only (no user choice)
- Different API shape
Proposed navigator.llm
// Standardizable API
const result = await navigator.llm.prompt("Hello");
Characteristics:
- Designed for standardization
- User chooses provider
- Follows web platform conventions
Implementation Path
Phase 1: Extension (Now)
WebLLM provides navigator.llm via extension:
- Validates API design
- Real-world usage feedback
- Cross-browser via extension
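A hedged sketch of how an extension content script could expose the API before browsers ship it natively: define navigator.llm and forward calls to the extension's background worker. The message shape and `sendToExtension` transport are hypothetical, and a native implementation always wins.

```javascript
// Install a navigator.llm polyfill unless the browser already provides one.
// `sendToExtension` is a hypothetical transport to the extension background.
function installPolyfill(nav, sendToExtension) {
  if ('llm' in nav) return; // native implementation wins
  Object.defineProperty(nav, 'llm', {
    configurable: true,
    value: {
      prompt: input => sendToExtension({ type: 'llm:prompt', input })
    }
  });
}
```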
Phase 2: Browser Experiments
- Chrome Origin Trial
- Firefox Nightly
- Safari Technology Preview
- Multiple implementations test interop
Phase 3: Standards Process
- W3C/WHATWG proposal
- Multi-stakeholder input
- Privacy and security review
- Formal specification
Phase 4: Native Shipping
- Chrome stable
- Firefox stable
- Safari stable
- Universal availability
Estimated timeline: 2-4 years (based on WebGPU precedent)
Security Considerations
Permission Model
- Per-origin grants
- User explicitly approves
- Revocable anytime
- Clear indicator when AI is active
Rate Limiting
- Browser may enforce limits
- Prevents abuse
- Provider-specific limits apply
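On the caller's side, rate limits surface as QuotaExceededError (see Error Handling above), which suggests exponential backoff. A hedged sketch, with `doPrompt` standing in for a call to navigator.llm.prompt:

```javascript
// Retry on QuotaExceededError with exponential backoff; rethrow anything
// else. `doPrompt` stands in for navigator.llm.prompt.
async function promptWithBackoff(doPrompt, input, retries = 3, baseMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doPrompt(input);
    } catch (err) {
      if (err.name !== 'QuotaExceededError' || attempt >= retries) throw err;
      const delay = baseMs * 2 ** attempt; // 500, 1000, 2000, ...
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```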
Privacy
- User controls provider
- Local options available
- No browser-level logging required
Prompt Injection
- This is an application concern
- API doesn't solve prompt injection
- Same as any AI integration
Privacy Considerations
User Control
User decides:
- Which provider (local, cloud, which cloud)
- Which sites get access
- When to revoke
Local Options
With local AI (Ollama):
- Data never leaves device
- Full privacy
- No network requests
Transparency
- User knows which sites use AI
- Can review in browser settings
- Clear permission UI
Example: Full Application
<!DOCTYPE html>
<html>
<head>
<title>AI Writing Assistant</title>
</head>
<body>
<textarea id="input" placeholder="Write something..."></textarea>
<button id="improve">Improve Writing</button>
<div id="output"></div>
<script>
const input = document.getElementById('input');
const output = document.getElementById('output');
const button = document.getElementById('improve');
button.addEventListener('click', async () => {
// Check for AI support
if (!('llm' in navigator)) {
output.textContent = 'Browser AI not available. Install WebLLM extension.';
return;
}
try {
// Show loading state
button.disabled = true;
output.textContent = 'Improving...';
// Stream the response
const stream = navigator.llm.streamPrompt(
`Improve this writing, making it clearer and more engaging:\n\n${input.value}`
);
output.textContent = '';
for await (const chunk of stream) {
output.textContent += chunk;
}
} catch (error) {
if (error.name === 'NotAllowedError') {
output.textContent = 'Please allow AI access for this site.';
} else {
output.textContent = `Error: ${error.message}`;
}
} finally {
button.disabled = false;
}
});
</script>
</body>
</html>
Conclusion
navigator.llm isn't revolutionary—it's evolutionary. It applies proven web platform patterns to AI:
- Permission model from geolocation
- Streaming from fetch
- Error handling from standard APIs
- Progressive enhancement from service workers
The web platform grows by absorbing new capabilities. AI is next.