WebLLM

Tools

Extend AI capabilities with tools for web search, web crawling, and custom functions

Tools allow LLMs to perform actions and access external data. WebLLM supports both custom tools you define and built-in stock tools like web search.

How Tools Work

1. You define tools with descriptions and schemas

The LLM uses the descriptions to decide when to call each tool.

2. The LLM generates structured input matching your schema

The AI extracts the parameters from the conversation context.

3. Your execute function runs with the input

It runs in the browser, so it can access the DOM, browser APIs, and local storage.

4. The tool result is sent back to the LLM

The AI can use the result to continue the conversation.
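
As a rough sketch of that loop: a tool is a plain object with a description, an input schema, and an execute function. The navigator.llm.chat call and its option names below are assumptions for illustration only; see the Browser Usage guide for the exact API.

```ts
// Step 1: define a tool with a description and a JSON Schema for its input.
const getPageTitle = {
  description: 'Read the title of the page the user is currently viewing',
  inputSchema: { type: 'object', properties: {}, required: [] },
  // Step 3: execute runs in the browser with the structured input from step 2.
  execute: async () => ({ title: document.title }),
};

// Steps 2 and 4 happen inside the model call: the LLM emits input matching the
// schema, the tool runs, and the result is handed back so the model can answer.
const llm = (navigator as any).llm; // assumed entry point; see Browser Usage guide
const reply = await llm.chat({
  messages: [{ role: 'user', content: 'Which page am I looking at?' }],
  tools: { getPageTitle },
});
```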

Building UI-Capable Agents

Because tools run in the browser, you can build AI companions that understand user intent and directly manipulate your UI. A sketch of a theme-switching assistant follows the capability list below.

DOM Access

Modify elements, toggle classes, and update styles in real time

Storage APIs

Read/write localStorage, sessionStorage, IndexedDB

Browser APIs

Notifications, clipboard, geolocation, media devices
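
A minimal sketch of the theme-switching assistant, assuming the tool shape described in the Tool Schema Reference below and a navigator.llm.chat entry point (the exact call is covered in the Browser Usage guide):

```ts
// Hypothetical theme-switching tool: the model decides when to call it, and the
// execute function touches the DOM and localStorage directly.
const setTheme = {
  description: 'Switch the site theme. Call this when the user asks for light or dark mode.',
  inputSchema: {
    type: 'object',
    properties: {
      theme: { type: 'string', enum: ['light', 'dark'], description: 'Theme to apply' },
    },
    required: ['theme'],
  },
  execute: async ({ theme }: { theme: 'light' | 'dark' }) => {
    document.documentElement.classList.toggle('dark', theme === 'dark'); // DOM access
    localStorage.setItem('theme', theme);                                // Storage API
    return { applied: theme };
  },
};

const llm = (navigator as any).llm; // assumed entry point
await llm.chat({
  messages: [{ role: 'user', content: 'This is too bright, switch to dark mode please' }],
  tools: { setTheme },
});
```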

Defining Custom Tools

Define tools using JSON Schema for input validation.
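
For example, a catalog-search tool might look like the sketch below. The searchProducts name and the /api/products endpoint are made up for illustration; the property names follow the Tool Schema Reference that comes next.

```ts
// Illustrative custom tool: the description tells the model when to call it,
// and the JSON Schema constrains the input the model must produce.
const searchProducts = {
  description: 'Search the product catalog. Use when the user asks about products or prices.',
  inputSchema: {
    type: 'object',
    properties: {
      query: { type: 'string', description: 'Free-text search query' },
      maxResults: { type: 'number', description: 'How many results to return (default 5)' },
    },
    required: ['query'],
  },
  execute: async ({ query, maxResults = 5 }: { query: string; maxResults?: number }) => {
    // Hypothetical endpoint: replace with your own data source.
    const res = await fetch(`/api/products?q=${encodeURIComponent(query)}&limit=${maxResults}`);
    return res.json();
  },
};
```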

Tool Schema Reference

Tool Properties

description: string — Tells the LLM when to use this tool

inputSchema: JSONSchema7 — JSON Schema defining input parameters

execute?: (input, options) => Promise<output> — Function to run when called

needsApproval?: boolean | ((input) => boolean) — Require user confirmation

inputSchema Structure

type: 'object' — Always 'object' for tool inputs

properties: { [name]: { type, description } } — Define each parameter

required: string[] — List of required parameter names
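
As a sketch of the optional properties, needsApproval can be a constant or a predicate over the input (how the approval prompt is surfaced to the user is covered in the API reference). The deleteNote tool here is made up for illustration:

```ts
const deleteNote = {
  description: 'Delete a saved note by id',
  inputSchema: {
    type: 'object',
    properties: { id: { type: 'string', description: 'Id of the note to delete' } },
    required: ['id'],
  },
  // Ask the user first for notes flagged as important; `true` would always ask.
  needsApproval: (input: { id: string }) => input.id.startsWith('important-'),
  execute: async ({ id }: { id: string }) => {
    localStorage.removeItem(`note:${id}`);
    return { deleted: id };
  },
};
```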

Stock Tools

WebLLM includes built-in tools that you can easily add to your requests. These tools are production-ready and work out of the box.

webSearch (built-in)

Search the web using multiple engines (DuckDuckGo, Wikipedia, Reddit). Returns results with titles, snippets, URLs, and optional citations.

Configuration Options

mode: 'auto' | 'on' | 'off'

returnCitations: boolean (default: true)

maxSearchResults: number (default: 5)

engines: ('duckduckgo' | 'wikipedia' | 'reddit')[]

Output

query: The search query executed

results: Array of {title, snippet, url, source}

citations: Formatted citation strings

totalResults: Number of results
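
A configuration sketch, assuming a factory-style export; the 'webllm' import path and the factory call are placeholders, while the option names match the list above:

```ts
import { webSearch } from 'webllm'; // placeholder import path

// Hypothetical factory call: restrict the engines and trim the result count.
const search = webSearch({
  mode: 'auto',                         // let the model decide when to search
  returnCitations: true,
  maxSearchResults: 3,
  engines: ['duckduckgo', 'wikipedia'], // skip reddit for this use case
});
```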

crawlWebsite (built-in)

Fetch and extract content from web pages. Extracts title, description, main text content, and structured data (JSON-LD, Open Graph).

Configuration Options

mode: 'auto' | 'on' | 'off'

extractStructuredData: boolean (default: true)

maxContentLength: number (default: 10000)

timeout: number (default: 10000ms)

allowedPatterns: string[] (regex)

blockedPatterns: string[] (regex)

Output

url: The URL that was crawled

title: Page title

content: Extracted text content

description: Meta description

structuredData: JSON-LD, Open Graph data
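
A configuration sketch along the same lines; the import path and factory call are placeholders, and the URL patterns are examples only:

```ts
import { crawlWebsite } from 'webllm'; // placeholder import path

// Hypothetical factory call: only crawl the docs site and skip PDFs.
const crawler = crawlWebsite({
  mode: 'auto',
  extractStructuredData: true,
  maxContentLength: 8000,
  timeout: 15000, // ms
  allowedPatterns: ['^https://docs\\.example\\.com/'],
  blockedPatterns: ['\\.pdf$'],
});
```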

Using Stock Tools

There are several ways to use the built-in tools:

1. Use Default Instances
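
Pass the ready-made instances straight into a request. The import path and the navigator.llm.chat call are assumptions in this sketch:

```ts
import { webSearch, crawlWebsite } from 'webllm'; // placeholder import path

const llm = (navigator as any).llm; // assumed entry point
const answer = await llm.chat({
  messages: [{ role: 'user', content: 'Summarize the latest WebGPU news' }],
  tools: { webSearch, crawlWebsite }, // defaults: no extra configuration needed
});
```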

2. Merge with Custom Tools
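
Stock and custom tools can sit side by side in the same tools map. Again, the import path and the chat call are placeholders:

```ts
import { webSearch, crawlWebsite } from 'webllm'; // placeholder import path

// A small custom tool defined inline for the example.
const getSelection = {
  description: 'Read the text the user currently has selected on the page',
  inputSchema: { type: 'object', properties: {}, required: [] },
  execute: async () => ({ text: window.getSelection()?.toString() ?? '' }),
};

const llm = (navigator as any).llm; // assumed entry point
await llm.chat({
  messages: [{ role: 'user', content: 'Search the web for more context on what I selected' }],
  tools: { webSearch, crawlWebsite, getSelection },
});
```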

Using Tools Directly

You can also use the search and crawl functions directly without going through an LLM. This is useful for building custom UIs or testing tools.
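
A sketch, assuming the stock tool instances expose the execute function described in the schema reference; whether there are also standalone search and crawl helpers is covered in the API reference:

```ts
import { webSearch } from 'webllm'; // placeholder import path

// Call the tool's execute function directly, with no model in the loop.
const output = await webSearch.execute({ query: 'WebGPU browser support' }, {});

// Output shape per the webSearch section above: query, results, citations, totalResults.
for (const r of output.results) {
  console.log(r.title, r.url);
}
```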

Controlling Tool Selection

Use toolChoice to control how the LLM uses tools.
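
A sketch of the common settings; the accepted values shown here ('auto', 'required', or naming a specific tool) mirror Vercel AI SDK conventions and are assumptions, as are the import path and the chat call:

```ts
import { webSearch } from 'webllm'; // placeholder import path

const llm = (navigator as any).llm; // assumed entry point
const messages = [{ role: 'user', content: 'Who won the 2024 Tour de France?' }];

// Let the model decide whether to call a tool (typical default).
await llm.chat({ messages, tools: { webSearch }, toolChoice: 'auto' });

// Require the model to call some tool before answering.
await llm.chat({ messages, tools: { webSearch }, toolChoice: 'required' });

// Force a specific tool by name.
await llm.chat({
  messages,
  tools: { webSearch },
  toolChoice: { type: 'tool', toolName: 'webSearch' },
});
```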

Next Steps

Vercel AI SDK

Use tools with the Vercel AI SDK provider

View Guide →

API Reference

Complete API documentation

View API Docs →

Browser Usage

Use tools with the navigator.llm API

View Guide →

Tool Playground

Test the webSearch and crawlWebsite tools interactively

Open Playground →