
Python SDK for the AIR Backend API


AIR SDK

The AIR SDK is a Python client for the AIR platform — a suite of AI assistant tools for scientific research. It gives you programmatic access to AI-assisted pipeline steps (idea generation, literature search, methods, data analysis, paper writing, peer review) as well as a set of standalone tools you can use independently.

Key capabilities:

  • AI-assisted research pipeline — use AI assistants for idea generation, literature search, methods development, data analysis, paper writing, and review as a connected workflow or individual steps
  • Local code execution — the results, one_shot, and deep_research methods coordinate an AI agent on the backend while all generated code runs locally on your machine in an isolated virtual environment
  • File sync — push local data files to a project and pull generated outputs (plots, LaTeX, reports) back to disk after each step
  • Standalone AI tools — extract keywords, download and summarise arXiv papers, enhance text with paper context, run OCR, or get an AI review of a paper PDF independently of any project

Installation

Install the SDK from PyPI:

pip install air-research

Quick Start

import air

client = air.AIR(api_key="air_k1_...", base_url="http://localhost:8000", local_dir="~/my-research")

# Standalone tools
keywords = client.keywords("dark matter and lensing", n=5, kw_type="aas")
enhanced = client.enhance("My research with https://arxiv.org/abs/2301.12345")

# Full research workflow
project = client.create_project("my-research", data_description="We study...")
idea = project.idea()
literature = project.literature()
methods = project.methods()
results = project.results()
paper = project.paper(journal="AAS")
review = project.review()

# File access
print(project.get_file("Iteration0/input_files/idea.md"))
print(project.list_files())

# Sync files locally
project.pull_files()                       # download all project files
project.push_files("local/data.csv")       # upload a local file

# One-shot and deep-research tasks
result = client.one_shot("Analyse the attached dataset and produce a summary plot")
result = client.deep_research("Investigate the impact of dark energy on large-scale structure")

Configuration

Set these environment variables, or pass the values directly to the constructor:

  • AIR_API_KEY: API key (air_k1_...)
  • AIR_BASE_URL: Backend URL (default: http://localhost:8000)
  • AIR_LOCAL_DIR: Local root for pull_files / auto-pull (default: ~/ai-scientist)
  • AIR_WORK_DIR: Local root for results, one_shot, and deep_research code execution (default: ~/ai-scientist)

client = air.AIR(
    api_key="air_k1_...",
    base_url="https://api.example.com",
    local_dir="~/my-research",   # where pull_files writes files
)
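Equivalently, the same settings can be supplied via the environment before constructing the client (the values below are placeholders):

```python
import os

# Placeholder values; air.AIR() constructed with no arguments
# picks these up, per the variable list above.
os.environ["AIR_API_KEY"] = "air_k1_example"
os.environ["AIR_BASE_URL"] = "http://localhost:8000"
os.environ["AIR_LOCAL_DIR"] = "~/my-research"
os.environ["AIR_WORK_DIR"] = "~/my-tasks"

# client = air.AIR()  # reads the variables set above
```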

API Reference

AIR class

  • keywords(text, n=5, kw_type="unesco"): Extract keywords
  • arxiv(text): Download arXiv papers from URLs in text
  • enhance(text, max_workers=2, max_depth=10): Enhance text with arXiv context
  • ocr(file_path): Process a PDF with OCR (server path)
  • review(file_path, *, thoroughness, figures_review, verify_statements, review_maths, review_numerics, emails, timeout): Standalone paper review with Skepthical
  • create_project(name, data_description, iteration, local_dir, auto_pull=True): Create a project
  • get_project(name, auto_pull=True): Get an existing project
  • list_projects(): List all projects
  • delete_project(name): Delete a project
  • one_shot(task, *, model, max_rounds, agent, work_dir, timeout, on_output, python_path, venv_path): Run a one-shot task with local code execution
  • deep_research(task, *, engineer_model, researcher_model, planner_model, max_plan_steps, work_dir, timeout, on_output, ...): Run a multi-step deep-research task with local code execution

Project class

  • idea(mode="fast", iteration=0, timeout=600): Generate a research idea
  • literature(iteration=0, timeout=600): Run a literature search
  • methods(mode="fast", iteration=0, timeout=600): Develop methods
  • results(iteration=0, timeout=86400, max_attempts=10, max_steps=6, work_dir, on_output, python_path, venv_path): Run analysis/experiments with local code execution; plots are auto-uploaded
  • paper(journal="NONE", iteration=0, timeout=900): Write the paper
  • review(iteration=0, timeout=600, **kwargs): Run review (kwargs: review_engine, review_thoroughness, pdf_version, emails, etc.)
  • get_file(path): Read a project file
  • list_files(): List all project files
  • write_file(path, content, encoding="utf-8"): Write a file
  • pull_files(local_dir, iteration=0, key_only=False): Download project files to a local directory
  • push_files(local_path, remote_path=None): Upload a local file or directory to the project
  • delete(): Delete the project

auto_pull

When auto_pull=True (default), key project files are automatically downloaded to local_dir after each pipeline step (idea, methods, results, paper). Set auto_pull=False to disable this behaviour.

Standalone review() on AIR

result = client.review(
    "/server/path/to/paper.pdf",
    thoroughness="High",       # "Standard" (default) or "High"
    figures_review=True,
    verify_statements=True,
    review_maths=False,
    review_numerics=False,
    emails=["you@example.com"],
)

one_shot and deep_research

Both methods connect via WebSocket and execute generated code locally on your machine:

result = client.one_shot(
    "Plot a histogram of the attached CSV",
    model="gpt-4.1-2025-04-14",
    work_dir="~/my-tasks",
)
print(result.output)   # streamed text
print(result.files)    # list of created local files
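Both methods also accept an on_output callback for handling streamed output as it arrives. A minimal sketch (the exact chunk type is an assumption; the SDK may pass structured events rather than plain strings):

```python
received = []

def on_output(chunk):
    # Collect each streamed chunk; a real handler might log or
    # display progress instead of just accumulating text.
    received.append(chunk)
    print(chunk, end="")

# client.one_shot("...", on_output=on_output)  # sketch; needs a running backend
on_output("step 1: loading data\n")   # simulate one streamed chunk
```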

result = client.deep_research(
    "Reproduce Figure 3 from arxiv:2301.12345",
    max_plan_steps=5,
    work_dir="~/my-tasks",
)
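Conceptually, local execution works like the sketch below (this is not the SDK's actual implementation, only an illustration of the pattern: the backend agent produces code, and the client writes it to a work directory and runs it with a local Python interpreter, capturing the output):

```python
import pathlib
import subprocess
import sys
import tempfile

# Stand-in for a snippet of code streamed from the backend agent.
generated_code = "print(2 + 2)"

# Write it into an isolated work directory and execute it locally.
work_dir = pathlib.Path(tempfile.mkdtemp())
script = work_dir / "step.py"
script.write_text(generated_code)
proc = subprocess.run(
    [sys.executable, str(script)],
    capture_output=True, text=True, cwd=work_dir,
)
print(proc.stdout.strip())   # → 4
```

In the real SDK, the interpreter and environment are controlled by the python_path, venv_path, and work_dir parameters, so generated code runs in an isolated virtual environment rather than your default interpreter.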

Docs

Install the documentation dependencies with:

pip install air-research[docs]

and serve the docs locally with:

mkdocs serve --livereload

Tests

Run the tests with pytest:

pytest tests

You can also skip slow tests (e.g. idea generation) with:

pytest tests/ -m "not slow"
