Browser-Native AI vs AI-Native Browsers

You don't need a new browser for AI—you need a new API

WebLLM Team

The AI browser wars have begun.

In 2025, a new category of product is fighting for your attention: the "AI-native browser." OpenAI launched Atlas. The Browser Company shipped Dia. Opera evolved Aria into an agentic assistant. Fellou claims over a million users for its agentic browser.

The pitch is compelling: AI built into your browser. Chat with your tabs. Summarize pages instantly. Let AI book your flights, fill your shopping cart, research your competitors.

But there's another path—one that doesn't require abandoning your browser, fragmenting the web, or handing your browsing data to another company. It's called browser-native AI: bringing AI capabilities to the web platform through standard APIs, just like we got geolocation, camera access, and push notifications.

This is the case for browser-native AI over AI-native browsers—and why the choice matters for the future of the web.

The AI-Native Browser Landscape

Let's look at what's being built.

OpenAI Atlas

Launched in October 2025, Atlas is OpenAI's Chromium fork with ChatGPT baked in. It features an "Ask ChatGPT" sidebar, agent mode that can perform multi-step tasks (grocery shopping, research compilation), and deep integration with OpenAI's models.

The browser knows your tabs, your history, your context. It can summarize pages, edit text inline, and execute actions on your behalf.

Dia (The Browser Company)

After Arc reached "enthusiast" status but never mainstream adoption, The Browser Company pivoted to Dia—an "AI-first browser" where AI isn't a feature but the foundation. The browser helps with writing, learning, planning, and shopping through persistent context awareness.

"AI won't exist as an app. Or a button. We believe it'll be an entirely new environment—built on top of a web browser," their website declares. They've introduced a $20/month Pro tier for heavy AI users.

Opera Aria

Opera has been integrating AI since 2023, powered by their Composer AI engine (built on OpenAI and Google technologies). Aria can generate images, understand context, and command tabs through natural language.

Their 2025 "Browser Operator" represents what they call "agentic AI"—the browser can organize tabs, perform research, and automate browsing tasks on your behalf.

Fellou

Fellou bills itself as an "agentic AI browser for deep search and automation." It runs tasks in a "shadow workspace" (background processing), maintains persistent memory of your behavior, and coordinates actions across multiple web interfaces.

On the Online-Mind2web benchmark, Fellou claims 80% task completion, nearly double the rate of competing tools.

What These Browsers Promise

The value proposition across all AI-native browsers is similar:

For productivity:

  • Summarize any page
  • Chat with your open tabs
  • AI writing assistance in every text field
  • Research automation

For task automation:

  • Goal-driven browsing ("book me a flight under $400")
  • Multi-step task execution
  • Form filling and data entry
  • Cross-site workflows

For personalization:

  • Persistent context across sessions
  • Learning from your behavior
  • Proactive suggestions
  • Personal knowledge base

It sounds transformative. And for some users, it will be.

But there are costs that the marketing doesn't highlight.

The Hidden Costs of AI-Native Browsers

The Privacy Problem

In August 2025, researchers from UCL, UC Davis, and Mediterranea University published findings from the first large-scale analysis of AI browser assistants. What they found was alarming:

  • Several assistants transmitted full webpage content, including anything visible on the page, to their servers
  • One assistant (Merlin) captured form inputs including banking details and health data
  • Extensions like Sider and TinaMind shared user queries with Google Analytics, enabling cross-site tracking
  • Assistants could infer age, gender, income, and interests—and used this to personalize across sessions

Some assistants were found to violate HIPAA and FERPA by collecting protected health and educational information.

When OpenAI launched Atlas, privacy concerns emerged immediately. The Electronic Frontier Foundation found that Atlas memorized sensitive queries, including searches about reproductive health services and specific doctors' names.

A Pew survey from July 2025 found that 61% of U.S. adults believe AI browser assistants are "creepy"—yet only 18% have disabled them. The privacy paradox in action.

The Vendor Lock-In Problem

Each AI-native browser creates its own ecosystem:

  • Atlas requires OpenAI's models, OpenAI's account, OpenAI's pricing
  • Dia uses their AI infrastructure with their Pro tier
  • Aria is powered by Opera's Composer engine
  • Fellou has its own Eko framework and memory system

You don't choose your AI provider. The browser vendor does.

Imagine if every browser shipped with a search engine you couldn't change: Chrome only Google, Safari only DuckDuckGo, Firefox only Bing. The outcry would be immediate.

Yet for AI—arguably more personal than search—we're accepting vendor lock-in as the default.

The Fragmentation Problem

AI-native browsers are, almost exclusively, Chromium forks. Each adds proprietary AI features that don't work in other browsers. Each creates developer expectations that won't transfer.

We've been here before. The early 2000s had "best viewed in Internet Explorer." The 2010s had apps that only worked in Chrome. Each era of fragmentation hurt users and developers alike.

AI-native browsers are fragmenting again—not on rendering engines, but on AI capabilities.

The Developer Problem

If you're building a web app, AI-native browsers offer you... nothing.

These browsers add AI for users, not developers. There's no API for your app to detect if it's running in Dia or Atlas. No way to leverage the browser's AI programmatically. No standard capability you can build on.

Developers who want AI in their apps still need to:

  • Set up server infrastructure
  • Manage API keys
  • Pay for API calls
  • Handle provider integrations

The browser's AI is for the browser. Not for the web.

What Developers Actually Need

Here's what developers building AI-powered web apps want:

// A standard API that works across browsers
const response = await navigator.llm.prompt('Summarize this article');

// Permission-gated like camera and location
// User controls the provider
// Works offline with local models
// No API keys in code

Not this:

// Different code for each AI browser
if (window.AtlasChatGPT) {
  const response = await window.AtlasChatGPT.ask(prompt);
} else if (window.DiaAI) {
  const response = await window.DiaAI.complete(prompt);
} else if (window.AriaAPI) {
  const response = await window.AriaAPI.generate(prompt);
} else {
  // Fallback: expensive server infrastructure
  const response = await fetch('/api/ai', { method: 'POST', body: prompt });
}

The first approach is browser-native AI. The second is the fragmented future AI-native browsers are building toward.

The Web Platform Way

The web has absorbed seemingly impossible capabilities before—without requiring new browsers.

Geolocation (2009)

Before: "Real-time location in a browser? Users need a native app."

After: navigator.geolocation.getCurrentPosition(callback)

We didn't need a "location-native browser." We needed an API, a permission model, and browser implementation.
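
That trio still defines the pattern: the site feature-detects the API, the browser shows its own consent prompt, and the site handles denial gracefully. A minimal sketch using the standard Geolocation API:

// The site asks for a capability; the browser asks the user.
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(
    (position) => {
      console.log('Position:', position.coords.latitude, position.coords.longitude);
    },
    (error) => {
      // The user declined or the lookup failed; degrade gracefully.
      console.warn('No location available:', error.message);
    }
  );
}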

WebRTC (2011)

Before: "Video calls in a browser? Native apps only."

After: navigator.mediaDevices.getUserMedia({ video: true })

We didn't need a "video-native browser." We needed an API.

Push Notifications (2015)

Before: "Real-time alerts? That's what apps are for."

After: new Notification("Hello")

We didn't need a "notification-native browser."

WebGPU (2023)

Before: "GPU compute in a browser? Impossible."

After: navigator.gpu.requestAdapter()

We didn't need a "GPU-native browser."

The pattern is clear:

  1. Capability emerges (GPS, cameras, GPUs, AI)
  2. Someone proposes a browser API
  3. Browsers implement with a permission model
  4. Developers use the standard API
  5. Users control access per-site

AI shouldn't require a new browser. It requires a new API.

A Note on Chrome's Task-Specific APIs

Google has been experimenting with built-in AI APIs: Summarizer, Translator, Writer, Rewriter, Language Detector. The instinct is right—browsers should have AI capabilities. But the approach is already outdated.

Task-specific APIs assume browser vendors know which AI functions you need. Five pre-defined tasks, tied to Gemini Nano, no user choice in providers. It's the same vendor lock-in in standards clothing—Google decides the model, Google decides the capabilities, Google decides when to update.

Compare this to getUserMedia(). The browser doesn't bundle a "Google Camera™" that all websites must use. It provides access to whatever camera the user has. The API is the abstraction layer, not the hardware.
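
The same separation shows up in code: the page requests "a camera," the browser prompts the user, and whatever hardware the user actually has fulfills the request. A minimal sketch with the standard Media Capture APIs:

// The page requests a capability, not a particular vendor's camera.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });

// Whatever cameras the user owns are what the page gets to work with.
const devices = await navigator.mediaDevices.enumerateDevices();
const cameras = devices.filter((device) => device.kind === 'videoinput');
console.log(`Granted: ${cameras.length} camera(s) available to this page.`);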

AI needs the same design: the browser provides permission and interface, the user chooses the model. Not navigator.summarizer.summarize(), but navigator.llm.prompt()—general-purpose, extensible, provider-agnostic.

Things move fast in AI. By the time task-specific APIs ship broadly, they'll be solving yesterday's problems. The right foundation is extensibility and user choice—not five pre-baked functions for common cases.
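
To see the difference concretely, here is how those task-specific cases collapse into a single general-purpose call. This sketch uses the navigator.llm.prompt() API described in the next section; articleText and draftText are placeholder variables:

// The pre-baked tasks are just prompts...
const summary = await navigator.llm.prompt(`Summarize this: ${articleText}`);
const translation = await navigator.llm.prompt(`Translate this to French: ${articleText}`);
const rewrite = await navigator.llm.prompt(`Rewrite this more formally: ${draftText}`);

// ...and so is whatever the next use case turns out to be.
const critique = await navigator.llm.prompt(`List the weakest arguments in: ${draftText}`);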

Browser-Native AI: The WebLLM Approach

WebLLM implements what the web platform actually needs: a general-purpose AI capability where users choose their provider.

How It Works

  1. Extension installs → Adds navigator.llm API to all websites
  2. User configures providers → Local (Ollama), cloud (OpenAI, Anthropic), or on-device
  3. Websites request AI → await navigator.llm.prompt("Summarize this")
  4. Permission prompt appears → "example.com wants to use AI. Allow?"
  5. User's chosen provider responds → Data routes where user decides

// Works today via WebLLM extension
if ('llm' in navigator) {
  const summary = await navigator.llm.prompt(
    `Summarize this article: ${document.body.innerText}`
  );
  showSummary(summary);
}
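
Browser-native AI also works as progressive enhancement: try the user's configured provider first, and fall back to a server endpoint (like the /api/ai route in the earlier snippet) when the API is missing or the user declines. A sketch, assuming prompt() rejects when permission is denied:

async function summarize(text) {
  if ('llm' in navigator) {
    try {
      // The user's chosen provider (local or cloud) handles the request.
      return await navigator.llm.prompt(`Summarize this article: ${text}`);
    } catch (error) {
      // Permission denied or provider unavailable; fall back to the server.
      console.warn('Browser-native AI unavailable:', error);
    }
  }
  // Traditional fallback: the site's own server-side AI route.
  const response = await fetch('/api/ai', { method: 'POST', body: text });
  return response.text();
}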

Key Differences from AI-Native Browsers

Aspect                | AI-Native Browsers          | WebLLM
----------------------|-----------------------------|--------------------------------------------------
Provider choice       | Vendor decides              | User decides
Model flexibility     | Vendor's model only         | Any: Ollama, OpenAI, Anthropic, local, on-device
API for developers    | None                        | navigator.llm.prompt()
Local-first option    | Rarely                      | Yes—user configures
Data routing          | To browser vendor's servers | Wherever user chooses
Works in your browser | No (requires new browser)   | Yes (extension for any Chromium browser)
Open protocol         | No                          | Yes—designed for native adoption
Open source           | Rarely                      | Yes

Privacy by Design

With browser-native AI:

  • Choose local models → Data never leaves your device
  • Choose your cloud provider → Your account, your relationship, your terms
  • Per-site permissions → You control which sites access AI
  • Extension is auditable → Open source, verifiable

AI-native browsers claim "you're in control," but they still mediate every request. They still see what you're asking. Their business model still depends on AI usage.

Browser-native AI puts the user—not the browser vendor—in the decision seat.

The Stakes: Shaping Desirable Futures

The choice between AI-native browsers and browser-native AI isn't just technical. It's about what kind of web we want.

What AI-Native Browsers Build Toward

Vendor consolidation: A few companies control AI access on the web. Users must use their browser to get their AI. Switching costs increase. Competition decreases.

Data centralization: Every query, every context, every browsing pattern flows through AI browser vendors. Even with privacy toggles, they architect the system.

Developer lock-out: AI becomes a browser feature, not a web platform capability. Developers can't build on it. The web doesn't get smarter—specific browsers do.

History repeating: Flash vs. HTML5. Native apps vs. the web. Proprietary vs. open. We've seen this pattern before.

What Browser-Native AI Builds Toward

User agency: You choose your AI provider like you choose your search engine. Local, cloud, or hybrid—your call.

Privacy infrastructure: The browser mediates permission, not execution. Your data goes where you send it.

Developer capability: A standard API that works across browsers. Build once, works everywhere.

Open evolution: Proposals, feedback, multi-vendor implementation. The web platform grows for everyone.

The Reddit thread that inspired this article asked: "What would an AI-native web browser look like?"

One commenter responded: "This sounds like a deeply miserable experience."

Another: "So all of these functionalities can be achieved as a browser extension as well."

The skeptics have a point. AI-native browsers solve distribution problems for AI companies—not capability problems for users. Everything they do can be done through APIs and extensions, without requiring a new browser, without vendor lock-in, without fragmenting the web.

The Path Forward: An Open Protocol for Browsers

WebLLM isn't just an extension. It's a protocol—a design for how browsers should handle AI.

The protocol embeds user choice at its foundation, as the sketch after this list shows:

  • Users configure their providers (local, cloud, hybrid)
  • Websites request AI through a standard API
  • The browser manages permissions
  • Data flows where users direct it
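
From the website's point of view, those four steps map onto code like this (a sketch; steps 1, 3, and 4 happen in the extension and the browser, outside the page's control):

// 1. The user has already configured providers (local, cloud, or hybrid)
//    in the extension; the page never sees that choice and holds no keys.

// 2. The website requests AI through the standard API:
const answer = await navigator.llm.prompt('Compare these two laptops for me.');

// 3. The browser interposes its permission prompt for this origin
//    before anything is sent.

// 4. The request is routed to whichever provider the user configured,
//    so the data flows where the user directed it.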

This design is intentional. When browsers eventually ship native AI support, the question will be: whose protocol?

Why Browsers Will Adopt This

Browser vendors compete on features. When users start expecting AI capabilities—and demanding control over providers—browsers will need to deliver.

Today, that means extensions. Tomorrow, it means native implementation.

The WebLLM protocol gives browser vendors a path:

  1. Chrome, Firefox, Safari can implement navigator.llm natively
  2. User choice is baked in—no vendor gets to lock users into their model
  3. Developers already building on the API get native performance for free
  4. Competition happens on execution, not on locking users out

This is how the web has always worked. Browsers competed to implement geolocation better, not to have incompatible geolocation APIs. They competed on WebRTC performance, not on proprietary video calling protocols.

AI should be no different.

Creating Healthy Competition

When users can choose their AI provider at the browser level:

  • Model providers compete on quality, not on browser distribution deals
  • Browsers compete on integration, not on AI lock-in
  • Developers build once, not per-browser
  • Users win—better AI, more choice, actual privacy options

This is the opposite of what AI-native browsers offer. They compete by building walled gardens. The WebLLM protocol competes by being the garden everyone can plant in.

The Foundation We Need

The web needs a solid, public, open foundation for AI—not five different proprietary implementations from five different browser forks.

WebLLM is building that foundation:

  • Open source - Inspect, fork, contribute
  • Open protocol - Designed for browser adoption, not extension lock-in
  • User choice by design - Not an afterthought, not a toggle—the core architecture
  • Extensible - General-purpose prompt(), not task-specific functions

When browsers adopt this protocol natively, the extension becomes unnecessary. That's the goal. Not to be the permanent solution, but to prove the design and create demand for native implementation.

Conclusion

AI-native browsers are the shiny object. An open AI protocol is the infrastructure investment.

The companies building Chromium forks are solving their distribution problem—how to get users into their ecosystem, using their AI, generating their data, paying their subscriptions. They're building walled gardens with AI features as the walls.

The WebLLM protocol solves your problem: AI capabilities in any browser, with any provider, under your control, with APIs developers can build on—and a path to native browser support that preserves user choice.

The web platform has absorbed location, camera, notifications, GPU compute, and peer-to-peer video. Each time, someone could have built a "location-native browser" or a "video-native browser." Instead, we got standard APIs that all browsers implement, and users kept their freedom to choose.

AI is next.

The question isn't "what would an AI-native browser look like?" It's "how do we add AI to the web platform without breaking what makes the web great?"

The answer isn't a new browser. It's an open protocol, designed for native adoption, with user choice at its core.

That's what WebLLM is building. That's the future we're working toward.

