AI for Open Source: A Practical Guide

Your users want AI features. You can't afford the API bills. Here's what to do.

WebLLM Team

76% of developers use AI tools daily. Your users expect AI features. But you're an open source project running on donations and volunteer time.

The math doesn't work. Or does it?

This is a practical guide for open source maintainers who want to add AI features without destroying their budget.

The Open Source AI Dilemma

User expectations:

  • "Why can't I ask questions in natural language?"
  • "Add AI code completion"
  • "Smart suggestions would be nice"

Project reality:

  • Revenue: $500-5,000/month in donations (typical)
  • AI API costs: $0.01-0.10 per query
  • 10,000 queries/month = $100-1,000
  • That can easily eat 20-100% of the monthly budget for ONE feature

Current solutions suck:

  1. Add a paid tier: Fragments the community
  2. Ask for more donations: Already maxed out
  3. Don't add AI: Users go elsewhere
  4. Partner with AI company: Dependency risk

What Projects Have Tried

Obsidian: Plugin Ecosystem

Obsidian doesn't have built-in AI. Instead, its community has built 50+ AI plugins:

  • Obsidian Copilot
  • Text Generator
  • Smart Connections
  • ...and many more

Problems:

  • Users manage their own API keys
  • Inconsistent experience across plugins
  • No unified AI capability
  • Each plugin reinvents integration

tldraw: "Make Real" Feature

tldraw added an AI-powered "Make Real" feature that converts sketches into working UI. It uses GPT-4.

The question: Who pays?

For their hosted version, tldraw absorbs costs. For self-hosted, users bring their own key.

Problem: Self-hosted users need technical setup. Most don't bother.

Excalidraw: Text-to-Diagram

Excalidraw added AI diagram generation.

Reality: It's rate-limited and experimental. Full AI features would cost more than their infrastructure budget.

The User-Powered Model

Here's the shift: users bring their own AI.

The insight:

  • Many developers have API keys (OpenAI, Anthropic, etc.)
  • Many technical users run Ollama locally (free)
  • Many have employer-provided AI access
  • They already have AI capacity, but no easy way to use it across apps

Note: Web app subscriptions (like ChatGPT Plus) don't provide API access. Browser AI works with API keys or local models—which open source users (typically developers) often have.

The model:

┌─────────────────────────────────────────────┐
│ Check: Does user have AI configured?        │
│                    ↓                        │
│ Yes → Use their API/local model             │
│ No  → Graceful degradation (or paid option) │
│                    ↓                        │
│ Project pays: $0 per AI query               │
└─────────────────────────────────────────────┘

Implementation Pattern

Step 1: Detect Browser AI

export function hasAI() {
  return 'llm' in navigator;
}

export async function checkAIPermission() {
  if (!hasAI()) return 'unavailable';

  try {
    const permission = await navigator.permissions.query({ name: 'llm' });
    return permission.state; // 'granted', 'denied', or 'prompt'
  } catch {
    return 'prompt'; // Assume available if can't query
  }
}
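
For example, an app might run this check once at startup and only reveal AI affordances when something is available (initAI and enableAIFeatures are illustrative names, not part of any API):

async function initAI() {
  const state = await checkAIPermission();
  if (state === 'granted' || state === 'prompt') {
    enableAIFeatures(); // e.g. show AI buttons and menu items
  }
  // 'unavailable' or 'denied': the app simply keeps working without AI
}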

Step 2: Use AI When Available

export async function enhanceWithAI(content, task) {
  if (!hasAI()) {
    return null; // No AI, return null to signal fallback
  }

  try {
    const response = await navigator.llm.prompt(
      `Task: ${task}\n\nContent:\n${content}`
    );
    return response;
  } catch (error) {
    console.error('AI error:', error);
    return null;
  }
}

Step 3: Design Graceful Degradation

export async function smartSearch(query, items) {
  // Try AI-powered search
  const aiResult = await enhanceWithAI(
    JSON.stringify(items.slice(0, 100)),
    `Find items matching: "${query}". Return JSON array of matching item IDs.`
  );

  if (aiResult) {
    try {
      const matchingIds = JSON.parse(aiResult);
      return items.filter(item => matchingIds.includes(item.id));
    } catch {
      // AI returned bad format, fall through
    }
  }

  // Fallback: traditional search
  return items.filter(item =>
    item.title.toLowerCase().includes(query.toLowerCase())
  );
}

Step 4: Build AI-Enhanced Features

Example: Smart categorization in a note app

async function suggestCategory(noteContent) {
  const categories = ['Work', 'Personal', 'Ideas', 'Tasks', 'Reference'];

  const suggestion = await enhanceWithAI(
    noteContent,
    `Suggest the best category from: ${categories.join(', ')}. Reply with just the category name.`
  );

  if (suggestion && categories.includes(suggestion.trim())) {
    return suggestion.trim();
  }

  // Fallback: no suggestion
  return null;
}

// Usage in UI
const category = await suggestCategory(note.content);
if (category) {
  showSuggestion(`Suggested category: ${category}`);
} else {
  // Feature just doesn't show - graceful degradation
}

UI Patterns

Pattern 1: AI as Enhancement

AI features appear only when available:

function NoteEditor({ note }) {
  const hasAI = useHasAI();

  return (
    <div>
      <TextArea value={note.content} />

      {/* Only show if AI available */}
      {hasAI && (
        <button onClick={handleAISummarize}>
          ✨ Summarize
        </button>
      )}
    </div>
  );
}
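
The useHasAI hook used in these components isn't defined in this article; a minimal React sketch, wrapping the hasAI() helper from Step 1, could look like this:

import { useState, useEffect } from 'react';

function useHasAI() {
  // hasAI() from Step 1 is synchronous, so a lazy initial value is enough
  const [available, setAvailable] = useState(() => hasAI());

  useEffect(() => {
    // Re-check on mount in case an extension injects navigator.llm late
    // (whether that happens depends on the extension; this is an assumption)
    setAvailable(hasAI());
  }, []);

  return available;
}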

Pattern 2: AI with Fallback Message

When AI would help but isn't available:

function SearchResults({ query, results }) {
  const hasAI = useHasAI();

  return (
    <div>
      {results.map(result => <ResultItem key={result.id} {...result} />)}

      {!hasAI && results.length === 0 && (
        <div className="hint">
          💡 Install a browser AI extension for smarter search
        </div>
      )}
    </div>
  );
}

Pattern 3: Progressive Enhancement

Basic works for everyone, AI makes it better:

function AutoComplete({ value, onChange }) {
  const [suggestions, setSuggestions] = useState([]);
  const hasAI = useHasAI();

  useEffect(() => {
    if (hasAI && value.length > 10) {
      // AI-powered completion
      getAISuggestions(value).then(setSuggestions);
    } else {
      // Static suggestions based on history
      setSuggestions(getHistorySuggestions(value));
    }
  }, [value, hasAI]);

  // Same UI either way
  return <SuggestionList items={suggestions} onSelect={onChange} />;
}
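
getAISuggestions and getHistorySuggestions are left undefined above. A minimal sketch of the AI-powered one, reusing enhanceWithAI from Step 2 (the prompt wording and the empty-array fallback are assumptions):

async function getAISuggestions(value) {
  const result = await enhanceWithAI(
    value,
    'Suggest up to 5 short completions for this text. Return a JSON array of strings.'
  );

  if (!result) return []; // no AI available or the call failed

  try {
    const suggestions = JSON.parse(result);
    return Array.isArray(suggestions) ? suggestions.slice(0, 5) : [];
  } catch {
    return []; // model returned something that isn't valid JSON
  }
}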

Optional: Hosted Fallback

Some users won't have AI. You can optionally provide a hosted fallback:

async function getAIResponse(prompt) {
  // Priority 1: User's browser AI (free for you)
  if (hasAI()) {
    try {
      const result = await navigator.llm.prompt(prompt);
      if (result) return result;
    } catch {
      // Browser AI failed; fall through to the hosted fallback
    }
  }

  // Priority 2: Your hosted fallback (costs you money)
  // ('user' here is whatever session object your app already has)
  if (user.isPremium || user.hasCredits) {
    return await fetch('/api/ai', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt })
    }).then(r => r.json());
  }

  // Priority 3: No AI
  return null;
}

This lets you:

  • Offer AI to everyone with browser AI (free)
  • Offer AI to premium users (they pay subscription)
  • Offer limited AI to free users (credits system; see the server-side sketch below)
  • Never block the free tier from basic functionality
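
On the server side, the /api/ai endpoint is where those tiers get enforced. A minimal Express-style sketch (the user object, aiCredits field, and callHostedModel helper are all assumptions about your backend):

// Assumes app.use(express.json()) is configured
app.post('/api/ai', async (req, res) => {
  const user = req.user; // however your app authenticates requests

  // Free users draw from a small credit balance; premium users are unlimited
  if (!user.isPremium && user.aiCredits <= 0) {
    return res.status(402).json({ error: 'No AI credits remaining' });
  }

  const result = await callHostedModel(req.body.prompt); // your provider call

  if (!user.isPremium) {
    user.aiCredits -= 1; // persist this however you store users
  }

  res.json(result);
});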

Case Study: What Obsidian Could Do

Current state: 50+ fragmented AI plugins, each requiring separate setup.

With browser AI:

// Built into Obsidian core
class AICommands {
  async summarizeNote(note) {
    if (!hasAI()) {
      showNotice('Configure an AI provider to use this feature');
      return;
    }

    const summary = await navigator.llm.prompt(
      `Summarize this note concisely:\n\n${note.content}`
    );

    // Insert summary at top
    note.content = `## Summary\n${summary}\n\n---\n\n${note.content}`;
  }
}

Benefits:

  • Users configure AI once (in browser, not per-plugin)
  • Consistent experience across features
  • Obsidian pays nothing for AI
  • Works with any provider user chooses

Limitations and Tradeoffs

User-powered AI isn't perfect. Be aware of:

Setup friction

  • Users need to install extension or have API keys
  • Non-technical users may struggle with Ollama setup
  • This works best for technical audiences (which open source users often are)

Inconsistent experience

  • Different providers give different quality responses
  • You can't guarantee behavior across all providers
  • Testing is harder (you need to test against multiple providers; a mock provider, sketched below, helps)
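
One way to make testing tractable is to stub navigator.llm in your test setup, so AI features can be exercised without a real provider (a minimal sketch; the canned responses are whatever your tests need):

// test-setup.js: install a fake provider before tests run
function installMockAI(responses = {}) {
  Object.defineProperty(navigator, 'llm', {
    configurable: true,
    value: {
      async prompt(text) {
        // Return a canned reply for known prompt fragments, or a generic one
        for (const [needle, reply] of Object.entries(responses)) {
          if (text.includes(needle)) return reply;
        }
        return 'mock response';
      }
    }
  });
}

// Example: exercise smartSearch without a real model
installMockAI({ 'Find items matching': '["note-1", "note-3"]' });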

Not everyone has access

  • API keys cost money (even if it's the user's money)
  • Local models require decent hardware
  • This creates a "haves and have-nots" situation—though it's better than no AI at all

Limited to text (for now)

  • Browser AI standards are still emerging
  • Image generation, embeddings, etc. aren't standardized yet

Be honest with users about these limitations. User-powered AI is a tool with tradeoffs, not a magic solution.

Getting Started

For Maintainers

  1. Install WebLLM for testing: a browser extension that provides navigator.llm

  2. Identify enhancement points: Where would AI help without being required?

  3. Build with fallbacks: Every AI feature should degrade gracefully

  4. Document the option: Tell users they can enable AI features

For Users

  1. Install a browser AI extension (WebLLM, etc.)

  2. Configure your provider:

    • Ollama for local (free, private)
    • OpenAI/Anthropic for cloud (requires API key, not web subscription)
  3. Use apps that support browser AI

Conclusion

Open source projects can have AI features without:

  • Paying API bills
  • Adding paid tiers
  • Depending on AI company partnerships

The pattern: let users bring their own AI.

Users who want AI features often already pay for AI. Let them use what they have. Users who don't want AI can ignore the features.

Your project pays nothing. Everyone wins.


The patterns here are emerging. Feedback from real implementations helps everyone.
