AIR SDK
Python SDK for the AIR Backend API
The AIR SDK is a Python client for the AIR platform — a suite of AI assistant tools for scientific research. It gives you programmatic access to AI-assisted pipeline steps (idea generation, literature search, methods, data analysis, paper writing, peer review) as well as a set of standalone tools you can use independently.
Key capabilities:
- AI-assisted research pipeline — use AI assistants for idea generation, literature search, methods development, data analysis, paper writing, and review as a connected workflow or individual steps
- Local code execution — the results, one_shot, and deep_research methods coordinate an AI agent on the backend while all generated code runs locally on your machine in an isolated virtual environment
- File sync — push local data files to a project and pull generated outputs (plots, LaTeX, reports) back to disk after each step
- Standalone AI tools — extract keywords, download and summarise arXiv papers, enhance text with paper context, run OCR, or get an AI review of a paper PDF independently of any project
Installation
Install the SDK from PyPI:
pip install air-research
Quick Start
import air
client = air.AIR(api_key="air_k1_...", base_url="http://localhost:8000", local_dir="~/my-research")
# Standalone tools
keywords = client.keywords("dark matter and lensing", n=5, kw_type="aas")
enhanced = client.enhance("My research with https://arxiv.org/abs/2301.12345")
# Full research workflow
project = client.create_project("my-research", data_description="We study...")
idea = project.idea()
literature = project.literature()
methods = project.methods()
results = project.results()
paper = project.paper(journal="AAS")
review = project.review()
# File access
print(project.get_file("Iteration0/input_files/idea.md"))
print(project.list_files())
# Sync files locally
project.pull_files() # download all project files
project.push_files("local/data.csv") # upload a local file
# One-shot and deep-research tasks
result = client.one_shot("Analyse the attached dataset and produce a summary plot")
result = client.deep_research("Investigate the impact of dark energy on large-scale structure")
Configuration
Set these environment variables, or pass the values directly to the constructor:

| Variable | Description |
|---|---|
| AIR_API_KEY | API key (air_k1_...) |
| AIR_BASE_URL | Backend URL (default: http://localhost:8000) |
| AIR_LOCAL_DIR | Local root for pull_files / auto-pull (default: ~/ai-scientist) |
| AIR_WORK_DIR | Local root for results, one_shot, deep_research code execution (default: ~/ai-scientist) |
client = air.AIR(
api_key="air_k1_...",
base_url="https://api.example.com",
local_dir="~/my-research", # where pull_files writes files
)
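Alternatively, export the variables before constructing the client. A minimal sketch, assuming the client reads these environment variables at construction time when the matching arguments are omitted:

```python
import os

# Set the same values via environment variables instead of constructor
# arguments; air.AIR() with no arguments is assumed to pick these up.
os.environ["AIR_API_KEY"] = "air_k1_..."
os.environ["AIR_BASE_URL"] = "http://localhost:8000"
os.environ["AIR_LOCAL_DIR"] = "~/my-research"

# client = air.AIR()  # reads AIR_API_KEY, AIR_BASE_URL, AIR_LOCAL_DIR
```

Explicit constructor arguments take precedence over the environment when both are set.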
API Reference
AIR class
| Method | Description |
|---|---|
| keywords(text, n=5, kw_type="unesco") | Extract keywords |
| arxiv(text) | Download arXiv papers from URLs in text |
| enhance(text, max_workers=2, max_depth=10) | Enhance text with arXiv context |
| ocr(file_path) | Process PDF with OCR (server path) |
| review(file_path, *, thoroughness, figures_review, verify_statements, review_maths, review_numerics, emails, timeout) | Standalone paper review with Skepthical |
| create_project(name, data_description, iteration, local_dir, auto_pull=True) | Create a project |
| get_project(name, auto_pull=True) | Get existing project |
| list_projects() | List all projects |
| delete_project(name) | Delete a project |
| one_shot(task, *, model, max_rounds, agent, work_dir, timeout, on_output, python_path, venv_path) | Run a one-shot task with local code execution |
| deep_research(task, *, engineer_model, researcher_model, planner_model, max_plan_steps, work_dir, timeout, on_output, ...) | Run a multi-step deep-research task with local code execution |
Project class
| Method | Description |
|---|---|
| idea(mode="fast", iteration=0, timeout=600) | Generate research idea |
| literature(iteration=0, timeout=600) | Run literature search |
| methods(mode="fast", iteration=0, timeout=600) | Develop methods |
| results(iteration=0, timeout=86400, max_attempts=10, max_steps=6, work_dir, on_output, python_path, venv_path) | Run analysis/experiments with local code execution; plots are auto-uploaded |
| paper(journal="NONE", iteration=0, timeout=900) | Write paper |
| review(iteration=0, timeout=600, **kwargs) | Run review (kwargs: review_engine, review_thoroughness, pdf_version, emails, etc.) |
| get_file(path) | Read a project file |
| list_files() | List all project files |
| write_file(path, content, encoding="utf-8") | Write a file |
| pull_files(local_dir, iteration=0, key_only=False) | Download project files to a local directory |
| push_files(local_path, remote_path=None) | Upload a local file or directory to the project |
| delete() | Delete the project |
auto_pull
When auto_pull=True (default), key project files are automatically downloaded to local_dir after each pipeline step (idea, methods, results, paper). Set auto_pull=False to disable this behaviour.
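A minimal sketch of opting out and pulling on demand, assuming the get_project and pull_files signatures from the tables above (the client calls are shown commented because they require a running backend):

```python
from pathlib import Path

# Files land under the expanded local_dir; with auto_pull enabled they
# are written here after each pipeline step.
local_dir = Path("~/my-research").expanduser()

# project = client.get_project("my-research", auto_pull=False)
# project.methods()                     # runs the step, writes nothing locally
# project.pull_files(str(local_dir))    # download explicitly when ready
```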
Standalone review() on AIR
result = client.review(
"/server/path/to/paper.pdf",
thoroughness="High", # "Standard" (default) or "High"
figures_review=True,
verify_statements=True,
review_maths=False,
review_numerics=False,
emails=["you@example.com"],
)
one_shot and deep_research
Both methods connect via WebSocket and execute generated code locally on your machine:
result = client.one_shot(
"Plot a histogram of the attached CSV",
model="gpt-4.1-2025-04-14",
work_dir="~/my-tasks",
)
print(result.output) # streamed text
print(result.files) # list of created local files
result = client.deep_research(
"Reproduce Figure 3 from arxiv:2301.12345",
max_plan_steps=5,
work_dir="~/my-tasks",
)
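Both methods also accept an on_output callback for handling the stream yourself. A minimal sketch, assuming the callback receives one text chunk per call (the exact signature is not documented above):

```python
lines = []

def on_output(chunk: str) -> None:
    # Collect each streamed chunk of agent output as it arrives.
    lines.append(chunk)

# Pass the callback to either method, e.g.:
# result = client.one_shot("Plot a histogram", on_output=on_output)

# The callback itself is plain Python and can be exercised directly:
on_output("step 1: loading data\n")
on_output("step 2: plotting\n")
print("".join(lines), end="")
```

Collecting chunks in a list keeps the full transcript available even if you only print a filtered view while the task runs.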
Docs
Install the documentation dependencies with:
pip install air-research[docs]
and serve the docs locally with
mkdocs serve --livereload
Tests
Run the test suite with pytest:
pytest tests
You can also skip slow tests (e.g. idea generation) with
pytest tests/ -m "not slow".