AIR SDK
Python SDK for the AIR Backend API
The AIR SDK is a Python client for the AIR platform — a suite of AI assistant tools for scientific research. It gives you programmatic access to AI-assisted pipeline steps (idea generation, literature search, methods, data analysis, paper writing, peer review) as well as a set of standalone tools you can use independently.
Key capabilities:
- AI-assisted research pipeline — use AI assistants for idea generation, literature search, methods development, data analysis, paper writing, and review as a connected workflow or individual steps
- Local code execution — the `results`, `one_shot`, and `deep_research` methods coordinate an AI agent on the backend while all generated code runs locally on your machine in an isolated virtual environment
- VLM image review — optionally enable vision-language model feedback to iteratively improve generated plots
- File sync — push local data files to a project and pull generated outputs (plots, LaTeX, reports) back to disk after each step
- Standalone AI tools — extract keywords (UNESCO, AAS, AAAI), download and summarise arXiv papers, enhance text with paper context, run OCR on PDFs (local or remote), or get an AI review of a paper PDF
- Model flexibility — use cloud models (Gemini, GPT, Claude) or self-hosted open-source models via vLLM
Installation
```bash
pip install ai-research
```
Quick Start
```python
import os

import air

client = air.AIR(
    api_key=os.environ["AIR_API_KEY"],
    base_url=os.environ.get("AIR_BASE_URL", "http://localhost:8000"),
)

# Standalone tools
keywords = client.keywords("dark matter and lensing", n=5, kw_type="aas")
enhanced = client.enhance("My analysis references https://arxiv.org/abs/1706.03762")
ocr_result = client.ocr("paper.pdf")  # local or server-side PDF

# One-shot task (code runs locally)
result = client.one_shot(
    "Generate synthetic data and plot a ROC curve",
    agent="engineer",
    work_dir="./output",
)

# Deep research (multi-step planning + execution)
result = client.deep_research(
    "Fit a Cox model to survival data and write a data guide",
    max_plan_steps=2,
    plan_instructions="1. engineer, 2. researcher.",
    work_dir="./output",
)

# Full research pipeline
project = client.create_project("my-research", data_description="We study...")
idea = project.idea()
literature = project.literature()
methods = project.methods()
results = project.results(max_steps=2, work_dir="./output")
paper = project.paper(journal="AAS", add_citations=True)
review = project.review()
project.pull_files()
```
Configuration
Set these environment variables, or pass the values directly to the `AIR` constructor:
| Variable | Description | Default |
|---|---|---|
| `AIR_API_KEY` | API key (`air_k1_...`) | required |
| `AIR_BASE_URL` | Backend URL | `http://localhost:8000` |
| `AIR_LOCAL_DIR` | Local root for `pull_files` / auto-pull | `~/ai-scientist` |
| `AIR_WORK_DIR` | Local root for code execution (`results`, `one_shot`, `deep_research`) | `~/ai-scientist` |
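The table implies a resolution order: an explicit constructor argument wins over the environment variable, which wins over the default. A minimal sketch of that precedence (`resolve_setting` is a hypothetical helper for illustration, not part of the SDK):

```python
import os

def resolve_setting(explicit, env_var, default=None):
    """Constructor argument > environment variable > documented default."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

# With no explicit value and AIR_BASE_URL unset, the documented default applies.
base_url = resolve_setting(None, "AIR_BASE_URL", "http://localhost:8000")
```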
Available Models
Examples of models that can be used across the SDK:
| Model | Notes |
|---|---|
| `gemini-3.1-flash-lite-preview` | Fast, cheapest — good default |
| `gemini-3.1-pro-preview` | Stronger reasoning |
| `gemini-2.5-flash` | Good for paper writing |
| `gpt-5-nano` | Fast, good quality |
| `gpt-5.2` | Strongest OpenAI model |
| `claude-sonnet-4-6` | Strong all-round |
| `gpt-oss-120b` | Self-hosted via vLLM, free |
API Reference
AIR class
| Method | Description |
|---|---|
| `health()` | Check backend status |
| `keywords(text, n, kw_type)` | Extract keywords (UNESCO, AAS, or AAAI) |
| `arxiv(text)` | Download arXiv papers from URLs in text |
| `enhance(text)` | Enhance text with context from referenced papers |
| `ocr(file_path)` | OCR a PDF (local file or server path) |
| `review(file_path, ...)` | Standalone paper review with Skepthical (local or server PDF) |
| `one_shot(task, *, model, agent, enable_vlm_review, ...)` | One-shot task with local code execution |
| `deep_research(task, *, adaptive_planning, enable_vlm_review, ...)` | Multi-step research with planning and local code execution |
| `create_project(name, data_description)` | Create a project |
| `get_project(name)` | Get an existing project |
| `list_projects()` | List all projects |
| `delete_project(name)` | Delete a project |
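The project-management methods compose naturally into an idempotent "get or create" pattern. A sketch, assuming only the methods in the table above and that the objects returned by `list_projects()` expose a `.name` attribute (`get_or_create_project` is a hypothetical wrapper, not an SDK method):

```python
def get_or_create_project(client, name, data_description=""):
    """Return the project if it already exists, otherwise create it.

    Assumes list_projects() yields objects with a .name attribute.
    """
    if any(p.name == name for p in client.list_projects()):
        return client.get_project(name)
    return client.create_project(name, data_description=data_description)
```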
Project class
| Method | Description |
|---|---|
| `idea(default_model, critic_model, idea_iterations)` | Generate a research idea |
| `literature(timeout, max_iterations)` | Run a literature search |
| `methods(default_model, critic_model)` | Develop methods |
| `results(max_steps, max_attempts, engineer_model, researcher_model, ...)` | Run analysis with local code execution |
| `paper(journal, add_citations, default_model)` | Write the paper |
| `review(...)` | Run a review |
| `get_file(path)` | Read a project file |
| `list_files()` | List all project files |
| `write_file(path, content)` | Write a file |
| `pull_files(key_only, iteration)` | Download project files locally |
| `push_files(local_path, remote_path)` | Upload local files to the project |
| `delete()` | Delete the project |
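A typical round trip combines the file-sync and analysis methods: push local inputs, run a step, pull generated outputs back to disk. A sketch using only the methods and argument names from the table above (`run_and_sync` itself is a hypothetical helper):

```python
def run_and_sync(project, data_dir, work_dir="./output"):
    """Push local inputs, run the analysis step, then pull outputs back to disk."""
    project.push_files(local_path=data_dir, remote_path="data")
    project.results(max_steps=2, work_dir=work_dir)
    project.pull_files(key_only=True)
    return project.list_files()
```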
Examples
The `examples/` directory contains runnable scripts covering all features:

| Script | Description |
|---|---|
| `00_manage_projects.py` | List and delete projects |
| `01_health_check.py` | Backend health check |
| `02a-c_keywords_*.py` | Keyword extraction (UNESCO, AAS, AAAI) |
| `03a_ocr_arxiv.py` | OCR an arXiv paper |
| `03b_ocr_local.py` | OCR a local PDF |
| `04_enhance.py` | Enhance text with arXiv context |
| `05a-d_idea_*.py` | Idea generation (quantum, psychology, chemical eng., oncology) |
| `06_literature_coral.py` | Literature search (marine biology) |
| `07a-e_methods_*.py` | Methods development (quantum, zoology, psychology, chemical eng., oncology) |
| `08a_one_shot_engineer.py` | One-shot code generation |
| `08b_one_shot_researcher.py` | One-shot writing task |
| `08c_one_shot_vlm.py` | One-shot with VLM image review |
| `08d_one_shot_researcher_oss.py` | One-shot with self-hosted OSS model |
| `09_deep_research_biostatistics.py` | Deep research (biostatistics) |
| `10_pipeline_oscillator.py` | Full pipeline: idea → literature → methods → results |
| `11_paper_oscillator.py` | Paper writing from existing project |
| `12_review.py` | Standalone paper review |
Docs
```bash
pip install "ai-research[docs]"
mkdocs serve --livereload
```
Tests
```bash
pytest tests
pytest tests/ -m "not slow"  # skip slow tests
```