AI Data Infrastructure: Declarative, Multimodal, and Incremental
Project description
An open-source Python library providing declarative data infrastructure for building multimodal AI applications, enabling incremental storage, transformation, indexing, retrieval, and orchestration of data.
Quick Start | Documentation | API Reference | Sample Apps | Discord Community
Installation
```bash
pip install pixeltable
```
Pixeltable replaces the complex multi-system architecture needed for AI applications with a single declarative table interface that natively handles multimodal data like images, videos, and documents.
Demo
https://github.com/user-attachments/assets/b50fd6df-5169-4881-9dbe-1b6e5d06cede
Quick Start
With Pixeltable, you define your entire data processing and AI workflow declaratively using computed columns on tables. Focus on your application logic, not the data plumbing.
```bash
# Installation
pip install -qU torch transformers openai pixeltable
```

```python
# Basic setup
import pixeltable as pxt

# Table with multimodal column types (Image, Video, Audio, Document)
t = pxt.create_table('images', {'input_image': pxt.Image})

# Computed columns: define transformation logic once, runs on all data
from pixeltable.functions import huggingface

# Object detection with automatic model management
t.add_computed_column(
    detections=huggingface.detr_for_object_detection(
        t.input_image,
        model_id='facebook/detr-resnet-50'
    )
)

# Extract specific fields from detection results
t.add_computed_column(detections_text=t.detections.label_text)

# OpenAI Vision API integration with built-in rate limiting and async management
from pixeltable.functions import openai

t.add_computed_column(
    vision=openai.vision(
        prompt="Describe what's in this image.",
        image=t.input_image,
        model='gpt-4o-mini'
    )
)

# Insert data directly from an external URL;
# this automatically triggers computation of all computed columns
t.insert(input_image='https://raw.github.com/pixeltable/pixeltable/release/docs/resources/images/000000000025.jpg')

# Query: all data, metadata, and computed results are persistently stored;
# structured and unstructured data are returned side by side
results = t.select(
    t.input_image,
    t.detections_text,
    t.vision
).collect()
```
What Pixeltable Handles
When you run the code above, Pixeltable automatically handles data storage, transformation, AI inference, vector indexing, incremental updates, and versioning. See Key Principles for details.
| You Write | Pixeltable Does |
|---|---|
| `pxt.Image`, `pxt.Video`, `pxt.Document` columns | Stores media, handles formats, caches from URLs |
| `add_computed_column(fn(...))` | Runs incrementally, caches results, retries failures |
| `add_embedding_index(column)` | Manages vector storage, keeps index in sync |
| `@pxt.udf` / `@pxt.query` | Creates reusable functions with dependency tracking |
| `table.insert(...)` | Triggers all dependent computations automatically |
| `table.select(...).collect()` | Returns structured + unstructured data together |
| (nothing; it's automatic) | Versions all data and schema changes for time-travel |
Deployment options: Pixeltable can serve as your full backend (managing media locally or syncing with S3/GCS/Azure, plus built-in vector search and orchestration) or as an orchestration layer alongside your existing infrastructure.
Where Did My Data Go?
Pixeltable workloads generate various outputs, including both structured outputs (such as bounding boxes for detected objects) and unstructured outputs (such as generated images or video). By default, everything resides in your Pixeltable user directory at ~/.pixeltable. Structured data is stored in a Postgres instance in ~/.pixeltable. Generated media (images, video, audio, documents) are stored outside the Postgres database, in separate flat files in ~/.pixeltable/media. Those media files are referenced by URL in the database, and Pixeltable provides the "glue" for a unified table interface over both structured and unstructured data.
In general, the user is not expected to interact directly with the data in ~/.pixeltable; the data store is fully managed by Pixeltable and is intended to be accessed through the Pixeltable Python SDK.
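In practice, you interact with stored data through table handles rather than files. A minimal sketch (reusing the `images` table created in the Quick Start):

```python
import pixeltable as pxt

# List the tables Pixeltable manages (logical paths, not filesystem locations)
print(pxt.list_tables())

# Re-open a table from an earlier session; rows and computed results persist
t = pxt.get_table('images')
print(t.count())
```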
See Working with External Files for details on loading data from URLs, S3, and local paths.
Key Principles
Store: Unified Multimodal Interface
pxt.Image, pxt.Video, pxt.Audio, pxt.Document, pxt.Json – manage diverse data consistently.
```python
t = pxt.create_table(
    'media',
    {
        'img': pxt.Image,
        'video': pxt.Video,
        'audio': pxt.Audio,
        'document': pxt.Document,
        'metadata': pxt.Json
    }
)
```
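Rows can then be inserted from local paths or URLs, and Pixeltable takes care of caching and format handling. A minimal sketch (the paths and URL below are placeholders):

```python
# Insert a single row; media values may be local paths or URLs (placeholders)
t.insert(
    img='photos/cat.jpg',
    video='https://example.com/clip.mp4',
    audio='recordings/meeting.mp3',
    document='reports/q3.pdf',
    metadata={'source': 'demo'}
)
```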
Orchestrate: Declarative Computed Columns
Define processing steps once; they run automatically on new/updated data. Supports API calls (OpenAI, Anthropic, Gemini), local inference (Hugging Face, YOLOX, Whisper), vision models, and any Python logic.
```python
from pixeltable.functions import huggingface, openai

# LLM API call
t.add_computed_column(
    summary=openai.chat_completions(
        messages=[{"role": "user", "content": t.text}], model='gpt-4o-mini'
    )
)

# Local model inference
t.add_computed_column(
    classification=huggingface.vit_for_image_classification(
        t.image, model_id='google/vit-base-patch16-224'
    )
)

# Vision analysis
t.add_computed_column(
    description=openai.vision(
        prompt="Describe this image", image=t.image, model='gpt-4o-mini'
    )
)
```
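Computed columns can also build on other computed columns, so downstream steps such as extracting the generated text from the raw API response are just another column (a sketch, assuming the `summary` column defined above):

```python
# Pull the message text out of the chat-completion response via a JSON path
t.add_computed_column(
    summary_text=t.summary.choices[0].message.content
)
```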
→ Computed Columns · AI Integrations · Sample App: Prompt Studio
Iterate: Explode & Process Media
Create views with iterators to explode one row into many (video→frames, doc→chunks, audio→segments).
```python
from pixeltable.iterators import DocumentSplitter, FrameIterator

# Document chunking with overlap & metadata
chunks = pxt.create_view(
    'chunks', docs,
    iterator=DocumentSplitter.create(
        document=docs.doc,
        separators='sentence,token_limit',
        overlap=50, limit=500
    )
)

# Video frame extraction
frames = pxt.create_view(
    'frames', videos,
    iterator=FrameIterator.create(video=videos.video, fps=0.5)
)
```
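Views are queried like ordinary tables and stay in sync as new base rows arrive. A quick look at the exploded rows (a sketch; it assumes `DocumentSplitter` exposes chunk text as a `text` column):

```python
# Each document row becomes many chunk rows in the view
chunks.select(chunks.text).limit(5).collect()
```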
→ Views · Iterators · RAG Pipeline
Index: Built-in Vector Search
Add embedding indexes and perform similarity searches directly on tables/views.
```python
from pixeltable.functions.huggingface import clip

t.add_embedding_index(
    'img',
    embedding=clip.using(model_id='openai/clip-vit-base-patch32')
)

sim = t.img.similarity(string="cat playing with yarn")
results = t.order_by(sim, asc=False).limit(10).collect()
```
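Text columns can be indexed the same way with any embedding function. A sketch using a sentence-transformer model (the `caption` column and model choice are illustrative):

```python
from pixeltable.functions.huggingface import sentence_transformer

# Hypothetical string column; substitute any text column on your table
t.add_embedding_index(
    'caption',
    embedding=sentence_transformer.using(model_id='sentence-transformers/all-MiniLM-L6-v2')
)
```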
Extend: Bring Your Own Code
Extend Pixeltable with UDFs, reusable queries, batch processing, and custom aggregators.
```python
@pxt.udf
def format_prompt(context: list, question: str) -> str:
    return f"Context: {context}\nQuestion: {question}"

@pxt.query
def search_by_topic(topic: str):
    return t.where(t.category == topic).select(t.title, t.summary)
```
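UDFs behave like built-in functions inside computed columns and queries. A minimal usage sketch (assuming the table has `context` and `question` columns):

```python
# Build the prompt for every row, incrementally, using the UDF above
t.add_computed_column(prompt=format_prompt(t.context, t.question))
```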
Agents & Tools: Tool Calling & MCP Integration
Register @pxt.udf, @pxt.query functions, or MCP servers as callable tools. LLMs decide which tool to invoke; Pixeltable executes and stores results.
```python
from pixeltable.functions.openai import invoke_tools

# Load tools from an MCP server, UDFs, and query functions
mcp_tools = pxt.mcp_udfs('http://localhost:8000/mcp')
tools = pxt.tools(get_weather_udf, search_context_query, *mcp_tools)

# The LLM decides which tool to call; Pixeltable executes it
t.add_computed_column(
    tool_output=invoke_tools(tools, t.llm_tool_choice)
)
```
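The `llm_tool_choice` column referenced above holds the model response that requests a tool call; one way to produce it is a chat-completion column that receives the same tool definitions (a sketch; the `prompt` column and model choice are illustrative):

```python
from pixeltable.functions import openai

# Ask the model which registered tool to invoke
t.add_computed_column(
    llm_tool_choice=openai.chat_completions(
        messages=[{'role': 'user', 'content': t.prompt}],
        model='gpt-4o-mini',
        tools=tools
    )
)
```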
→ Tool Calling Cookbook · Agents & MCP · Pixelbot · Pixelagent
Query & Experiment: SQL-like Python Querying
Familiar syntax combined with powerful AI capabilities. Test transformations before committing:
```python
# Query data
results = (
    t.where(t.score > 0.8)
    .order_by(t.timestamp)
    .select(t.image, score=t.score)
    .limit(10)
    .collect()
)

# Test a transformation on a sample BEFORE adding the column
# ('summarize' stands in for any UDF or built-in function)
t.select(t.text, summary=summarize(t.text)).head(3)   # Nothing stored
t.add_computed_column(summary=summarize(t.text))      # Now commit
```
Version: Data Persistence & Time Travel
All data is automatically stored and versioned. Query any prior version.
```python
t = pxt.get_table('my_table')                 # Get a handle to an existing table
t.revert()                                    # Undo the last modification
t.history()                                   # Display all prior versions
old_version = pxt.get_table('my_table:472')   # Query a specific version
```
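You can query a pinned version with the same API as the live table (a small sketch comparing it to the current state):

```python
# Row counts of the current table vs. the pinned version
print(t.count(), old_version.count())
```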
Import/Export: I/O & Integration
Import from any source and export to ML formats.
```python
# Import from files, URLs, S3, Hugging Face
t.insert(pxt.io.import_csv('data.csv'))
t.insert(pxt.io.import_huggingface_dataset(dataset))

# Export to analytics/ML formats
pxt.io.export_parquet(table, 'data.parquet')
pytorch_ds = table.to_pytorch_dataset('pt')   # → PyTorch DataLoader ready
coco_path = table.to_coco_dataset()           # → COCO annotations

# ML tool integrations
pxt.create_label_studio_project(table, label_config)   # Annotation
pxt.export_images_as_fo_dataset(table, table.image)    # FiftyOne
```
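The PyTorch export yields a dataset that plugs into a standard `DataLoader` (a sketch; `torch` must be installed and the batch size is arbitrary):

```python
from torch.utils.data import DataLoader

# Stream exported rows into a training or evaluation loop
loader = DataLoader(pytorch_ds, batch_size=32)
for batch in loader:
    ...  # train / evaluate
```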
→ Data Import · PyTorch Export · Label Studio · Data Wrangling for ML
Tutorials & Cookbooks
| Fundamentals | Cookbooks | Providers | Sample Apps |
|---|---|---|---|
| All → | All → | All → | All → |
External Storage and Pixeltable Cloud
Pixeltable can store computed media in external object stores, including Amazon S3, Google Cloud Storage, and Azure Blob Storage.
Store computed media using the destination parameter on columns, or set defaults globally via PIXELTABLE_OUTPUT_MEDIA_DEST and PIXELTABLE_INPUT_MEDIA_DEST. See Configuration.
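For example, a default output destination can be set through the environment before Pixeltable is initialized (a minimal sketch; the bucket URI is a placeholder, and exporting the variable in your shell works just as well):

```python
import os

# Assumption: must be set before Pixeltable reads its configuration
os.environ['PIXELTABLE_OUTPUT_MEDIA_DEST'] = 's3://my-bucket/pxt-media/'

import pixeltable as pxt  # imported after the variable is set
```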
Data Sharing: Publish datasets to Pixeltable Cloud for team collaboration or public sharing. Replicate public datasets instantly—no account needed for replication.
```python
import pixeltable as pxt

# Replicate a public dataset (no account required)
coco = pxt.replicate(
    remote_uri='pxt://pixeltable:fiftyone/coco_mini_2017',
    local_path='coco-copy'
)

# Publish your own dataset (requires free account)
pxt.publish(source='my-table', destination_uri='pxt://myorg/my-dataset')

# Store computed media in external cloud storage
t.add_computed_column(
    thumbnail=t.image.resize((256, 256)),
    destination='s3://my-bucket/thumbnails/'
)
```
Data Sharing Guide | Cloud Storage | Public Datasets
Built with Pixeltable
| Project | Description |
|---|---|
| Pixelbot | Multimodal Infinite Memory AI Agent — a complete E2E AI app powered by Pixeltable |
| Pixelagent | Lightweight agent framework with built-in memory and tool orchestration |
| Pixelmemory | Persistent memory layer for AI applications |
| MCP Server | Model Context Protocol server for Claude, Cursor, and other AI IDEs |
Contributing
We love contributions! Whether it's reporting bugs, suggesting features, improving documentation, or submitting code changes, please check out our Contributing Guide and join the Discussions or our Discord Server.
License
Pixeltable is licensed under the Apache 2.0 License.
File details
Details for the file pixeltable-0.5.16-py3-none-any.whl.
File metadata
- Download URL: pixeltable-0.5.16-py3-none-any.whl
- Upload date:
- Size: 616.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.9.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `27f2bd8225636538c17cab6ac5fae67ab4bcbcdd2c05e8e443b035b6e361fa79` |
| MD5 | `af7763465eb79265edbe2222cb6035ee` |
| BLAKE2b-256 | `545136561d5856f5d827b9a8bd1527b7c1a3527de001b3fa4655a7cec8b40149` |