
wiki3-ai

Python wrapper for Chrome's built-in AI Prompt API, enabling direct access to browser-provided language models from Jupyter notebooks.

Features

  • 🤖 Direct access to Chrome's built-in AI via the Prompt API
  • 📓 Seamless Jupyter integration using AnyWidget
  • 🔄 Bidirectional communication via traitlets
  • 🎯 Type-safe Python API matching the Web IDL specification
  • 🌊 Support for streaming responses
  • 🛠️ Tool use capabilities
  • 🖼️ Multimodal input support (text, images, audio)
  • 📊 Session management and cloning
  • 🔒 Privacy-focused on-device processing

Installation

pip install wiki3-ai

Requirements

  • Python 3.10+
  • Chrome browser with Prompt API enabled
  • Running in a Jupyter environment (JupyterLab, Jupyter Notebook, etc.)

Quick Start

from wiki3_ai import LanguageModel

# Create a language model session
session = await LanguageModel.create()

# Simple prompt
result = await session.prompt("Write me a poem about Python.")
print(result)

# Streaming response
async for chunk in session.prompt_streaming("Write me a long story."):
    print(chunk, end="", flush=True)

Usage Examples

System Prompts

from wiki3_ai import LanguageModel, LanguageModelMessage, LanguageModelMessageRole

session = await LanguageModel.create({
    "initialPrompts": [
        {
            "role": "system",
            "content": "You are a helpful Python programming assistant."
        }
    ]
})

response = await session.prompt("How do I read a file in Python?")
print(response)

Checking Availability

from wiki3_ai import LanguageModel, Availability

# Check if the API is available
availability = await LanguageModel.availability()
print(f"Model availability: {availability}")

if availability == Availability.AVAILABLE:
    session = await LanguageModel.create()
    # Use the session...

Configuring Temperature and Top-K

# Get default parameters
params = await LanguageModel.params()
print(f"Default temperature: {params.default_temperature}")
print(f"Max temperature: {params.max_temperature}")

# Create session with custom parameters
session = await LanguageModel.create({
    "temperature": 0.8,
    "topK": 40
})
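When picking values, one useful pattern is to clamp a requested temperature into the range advertised by `LanguageModel.params()` before creating the session. A minimal sketch (the `clamp_temperature` helper is hypothetical, not part of the package; the field names follow the example above):

```python
def clamp_temperature(requested: float, max_temperature: float) -> float:
    """Clamp a requested temperature into the valid range [0, max_temperature]."""
    return max(0.0, min(requested, max_temperature))

# With the params object from `await LanguageModel.params()`, you might write:
#   temperature = clamp_temperature(1.3, params.max_temperature)
print(clamp_temperature(1.3, 2.0))  # 1.3 (already in range)
print(clamp_temperature(3.5, 2.0))  # 2.0 (clipped to the maximum)
```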

Session Management

# Create a session
session = await LanguageModel.create()

# Use the session
result1 = await session.prompt("Tell me about AI.")

# Clone for different conversation branches
session2 = await session.clone()
result2 = await session2.prompt("Now tell me about machine learning.")

# Destroy when done
await session.destroy()

Measuring Token Usage

session = await LanguageModel.create()

# Check current usage
print(f"Current usage: {session.input_usage}/{session.input_quota}")

# Measure potential usage before prompting
usage = await session.measure_input_usage("This is my prompt")
print(f"This prompt would use: {usage} tokens")

# Prompt if there's enough quota
if session.input_usage + usage < session.input_quota:
    result = await session.prompt("This is my prompt")
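The quota check above is plain arithmetic and can be factored into a small pure helper (hypothetical, not part of the package), which keeps the condition readable and easy to test:

```python
def fits_in_quota(current_usage: int, quota: int, needed: int) -> bool:
    """Return True if a prompt costing `needed` tokens still fits under the quota."""
    return current_usage + needed < quota

print(fits_in_quota(100, 1024, 50))   # True: 150 tokens used after prompting
print(fits_in_quota(1000, 1024, 50))  # False: would exceed the quota
```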

Structured Output with JSON Schema

schema = {
    "type": "object",
    "required": ["rating"],
    "properties": {
        "rating": {
            "type": "number",
            "minimum": 0,
            "maximum": 5
        }
    }
}

result = await session.prompt(
    "Rate this: The food was excellent!",
    {"responseConstraint": schema}
)

import json
data = json.loads(result)
print(f"Rating: {data['rating']}")
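Even with `responseConstraint`, it can be worth validating the parsed result defensively before using it. A minimal check for this particular schema, in plain Python with no extra dependencies (the `parse_rating` helper is illustrative, not part of the package):

```python
import json

def parse_rating(raw: str) -> float:
    """Parse a model response for the rating schema above and enforce its bounds."""
    data = json.loads(raw)
    rating = data["rating"]  # "rating" is in the schema's "required" list
    if not isinstance(rating, (int, float)) or not (0 <= rating <= 5):
        raise ValueError(f"rating out of range: {rating!r}")
    return float(rating)

print(parse_rating('{"rating": 4.5}'))  # 4.5
```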

API Reference

LanguageModel

Main class for interacting with the language model.

Class Methods

  • create(options=None) - Create a new language model session
  • availability(options=None) - Check model availability
  • params() - Get model parameters

Instance Methods

  • prompt(input, options=None) - Send a prompt and get the full response
  • prompt_streaming(input, options=None) - Send a prompt and stream the response in chunks
  • append(input, options=None) - Append messages to the session without triggering a response
  • measure_input_usage(input, options=None) - Measure the token usage of an input
  • clone(options=None) - Clone the session
  • destroy() - Destroy the session

Properties

  • input_usage - Current token usage
  • input_quota - Maximum token quota
  • top_k - Top-K sampling parameter
  • temperature - Temperature sampling parameter

Data Models

  • LanguageModelMessage - A message in the conversation
  • LanguageModelMessageContent - Content with type and value
  • LanguageModelMessageRole - Message role (system/user/assistant)
  • LanguageModelMessageType - Content type (text/image/audio)
  • LanguageModelCreateOptions - Options for creating sessions
  • LanguageModelPromptOptions - Options for prompting
  • LanguageModelParams - Model parameters
  • Availability - Availability status enum
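These models roughly mirror the Web IDL dictionaries of the Prompt API. As a self-contained illustration of the shapes involved (field layout assumed from the list above, not the package's exact definitions):

```python
from dataclasses import dataclass
from enum import Enum

class LanguageModelMessageRole(str, Enum):
    SYSTEM = "system"
    USER = "user"
    ASSISTANT = "assistant"

class LanguageModelMessageType(str, Enum):
    TEXT = "text"
    IMAGE = "image"
    AUDIO = "audio"

@dataclass
class LanguageModelMessageContent:
    type: LanguageModelMessageType
    value: object  # str for text; bytes or array data for image/audio

@dataclass
class LanguageModelMessage:
    role: LanguageModelMessageRole
    content: list[LanguageModelMessageContent]

msg = LanguageModelMessage(
    role=LanguageModelMessageRole.USER,
    content=[LanguageModelMessageContent(LanguageModelMessageType.TEXT, "Hi!")],
)
print(msg.role.value)  # user
```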

Architecture

This package uses:

  • AnyWidget for Jupyter integration
  • Traitlets for bidirectional Python ↔ JavaScript communication
  • Chrome Prompt API for accessing built-in language models

The communication flow:

  1. Python code calls methods on LanguageModel
  2. Requests are serialized via traitlets to JavaScript
  3. JavaScript calls Chrome's native Prompt API
  4. Results are sent back via traitlets to Python
  5. Python code receives async results
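The round-trip above can be pictured with a toy serializer: a pure-Python stand-in for the traitlets transport, not the widget's actual wire format (the message shape here is invented for illustration):

```python
import json

def serialize_request(method: str, request_id: int, args: dict) -> str:
    """Step 2: encode a Python-side method call as a JSON message for the JS side."""
    return json.dumps({"id": request_id, "method": method, "args": args})

def fake_js_side(message: str) -> str:
    """Steps 3-4: the real JS side would call Chrome's Prompt API; this stub echoes."""
    request = json.loads(message)
    return json.dumps({"id": request["id"], "result": f"stub response to {request['method']}"})

# Step 5: the Python side matches the reply to the pending request by id.
reply = json.loads(fake_js_side(serialize_request("prompt", 1, {"input": "Hi"})))
print(reply["id"], reply["result"])
```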

Specifications

This implementation follows the Prompt API's Web IDL specification.

License

Apache License 2.0 - See LICENSE file for details

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.



Download files

Download the file for your platform.

Source Distribution

wiki3_ai-0.1.0.tar.gz (90.7 kB)


Built Distribution


wiki3_ai-0.1.0-py3-none-any.whl (17.5 kB)


File details

Details for the file wiki3_ai-0.1.0.tar.gz.

File metadata

  • Download URL: wiki3_ai-0.1.0.tar.gz
  • Upload date:
  • Size: 90.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.2.1 CPython/3.14.0 Linux/6.11.0-1018-azure

File hashes

Hashes for wiki3_ai-0.1.0.tar.gz

  • SHA256: e81f77aaf7416eeab1e133539b684f8ecc31e2d2fcb87b5bc800a027f25b1de8
  • MD5: c9660af53d9b5b6f26281c20b759ef0d
  • BLAKE2b-256: 4066ed61bab3eb0345a4245db9449f692a913d90d250cfc7663611518df939ad


File details

Details for the file wiki3_ai-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: wiki3_ai-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 17.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.2.1 CPython/3.14.0 Linux/6.11.0-1018-azure

File hashes

Hashes for wiki3_ai-0.1.0-py3-none-any.whl

  • SHA256: 92deec88c9f42fae922d3f53e8f64cb9e1cf1b2fd148f1fd6889fc0ca2ed79fe
  • MD5: eda853e1b197b97887c0947df842f83f
  • BLAKE2b-256: f810aca032166954305dbf0cf1e45bf83ab5171bff1c193b5ba5016674ed3fa9

