Python SDK for the AIR Backend API

AIR SDK

The AIR SDK is a Python client for the AIR platform — a suite of AI assistant tools for scientific research. It gives you programmatic access to AI-assisted pipeline steps (idea generation, literature search, methods, data analysis, paper writing, peer review) as well as a set of standalone tools you can use independently.

Key capabilities:

  • AI-assisted research pipeline — use AI assistants for idea generation, literature search, methods development, data analysis, paper writing, and review as a connected workflow or individual steps
  • Local code execution — the results, one_shot, and deep_research methods coordinate an AI agent on the backend while all generated code runs locally on your machine in an isolated virtual environment
  • VLM image review — optionally enable vision-language model feedback to iteratively improve generated plots
  • File sync — push local data files to a project and pull generated outputs (plots, LaTeX, reports) back to disk after each step
  • Standalone AI tools — extract keywords (UNESCO, AAS, AAAI), download and summarise arXiv papers, enhance text with paper context, run OCR on PDFs (local or remote), or get an AI review of a paper PDF
  • Model flexibility — use cloud models (Gemini, GPT, Claude) or self-hosted open-source models via vLLM
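The "local code execution" model above can be pictured roughly like this: a simplified sketch of running a generated script inside a fresh virtual environment. This is an illustration only, not the SDK's actual implementation (`run_in_isolated_env` is not part of the API):

```python
import subprocess
import sys
import venv
from pathlib import Path

def run_in_isolated_env(script: Path, env_dir: Path) -> subprocess.CompletedProcess:
    """Create a throwaway virtual environment and run a script with its interpreter."""
    venv.create(env_dir, with_pip=False)  # with_pip=False keeps creation fast
    bin_dir = "Scripts" if sys.platform == "win32" else "bin"
    python = env_dir / bin_dir / "python"
    return subprocess.run(
        [str(python), str(script)],
        capture_output=True,
        text=True,
        check=False,
    )
```

The point of the isolation is that packages installed for one task cannot leak into another; the SDK manages this for you during `results`, `one_shot`, and `deep_research`.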

Installation

pip install ai-research

Quick Start

import os
import air

client = air.AIR(
    api_key=os.environ["AIR_API_KEY"],
    base_url=os.environ.get("AIR_BASE_URL", "http://localhost:8000"),
)

# Standalone tools
keywords = client.keywords("dark matter and lensing", n=5, kw_type="aas")
enhanced = client.enhance("My analysis references https://arxiv.org/abs/1706.03762")
ocr_result = client.ocr("paper.pdf")  # local or server-side PDF

# One-shot task (code runs locally)
result = client.one_shot(
    "Generate synthetic data and plot a ROC curve",
    agent="engineer",
    work_dir="./output",
)

# Deep research (multi-step planning + execution)
result = client.deep_research(
    "Fit a Cox model to survival data and write a data guide",
    max_plan_steps=2,
    plan_instructions="1. engineer, 2. researcher.",
    work_dir="./output",
)

# Full research pipeline
project = client.create_project("my-research", data_description="We study...")
idea = project.idea()
literature = project.literature()
methods = project.methods()
results = project.results(max_steps=2, work_dir="./output")
paper = project.paper(journal="AAS", add_citations=True)
review = project.review()
project.pull_files()

Configuration

Configure the client via environment variables, or pass the values directly to the constructor:

| Variable | Description | Default |
|---|---|---|
| AIR_API_KEY | API key (air_k1_...) | required |
| AIR_BASE_URL | Backend URL | http://localhost:8000 |
| AIR_LOCAL_DIR | Local root for pull_files / auto-pull | ~/ai-scientist |
| AIR_WORK_DIR | Local root for code execution (results, one_shot, deep_research) | ~/ai-scientist |
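Constructor arguments take precedence over environment variables. The resolution order can be sketched like this (a hypothetical illustration of the precedence, not the SDK's actual code; `resolve_config` is not part of the API):

```python
import os
from pathlib import Path

def resolve_config(api_key=None, base_url=None, work_dir=None):
    """Explicit arguments win; environment variables are the fallback, then defaults."""
    api_key = api_key or os.environ.get("AIR_API_KEY")
    if api_key is None:
        raise ValueError("AIR_API_KEY is required (air_k1_...)")
    base_url = base_url or os.environ.get("AIR_BASE_URL", "http://localhost:8000")
    work_dir = Path(work_dir or os.environ.get("AIR_WORK_DIR", Path.home() / "ai-scientist"))
    return api_key, base_url, work_dir
```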

Available Models

Examples of models that can be used across the SDK:

| Model | Notes |
|---|---|
| gemini-3.1-flash-lite-preview | Fast, cheapest; good default |
| gemini-3.1-pro-preview | Stronger reasoning |
| gemini-2.5-flash | Good for paper writing |
| gpt-5-nano | Fast, good quality |
| gpt-5.2 | Strongest OpenAI model |
| claude-sonnet-4-6 | Strong all-round |
| gpt-oss-120b | Self-hosted via vLLM, free |

API Reference

AIR class

| Method | Description |
|---|---|
| health() | Check backend status |
| keywords(text, n, kw_type) | Extract keywords (UNESCO, AAS, or AAAI) |
| arxiv(text) | Download arXiv papers from URLs in text |
| enhance(text) | Enhance text with context from referenced papers |
| ocr(file_path) | OCR a PDF (local file or server path) |
| review(file_path, ...) | Standalone paper review with Skepthical (local or server PDF) |
| one_shot(task, *, model, agent, enable_vlm_review, ...) | One-shot task with local code execution |
| deep_research(task, *, adaptive_planning, enable_vlm_review, ...) | Multi-step research with planning and local code execution |
| create_project(name, data_description) | Create a project |
| get_project(name) | Get an existing project |
| list_projects() | List all projects |
| delete_project(name) | Delete a project |
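A common pattern is an idempotent get-or-create wrapper over the project methods. This sketch assumes `list_projects()` returns project names; check the actual return type before relying on it (the helper itself is not part of the SDK):

```python
def get_or_create_project(client, name, data_description=""):
    """Return an existing project by name, or create it if it doesn't exist yet."""
    if name in client.list_projects():  # assumption: a list of project names
        return client.get_project(name)
    return client.create_project(name, data_description=data_description)
```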

Project class

| Method | Description |
|---|---|
| idea(default_model, critic_model, idea_iterations) | Generate a research idea |
| literature(timeout, max_iterations) | Run literature search |
| methods(default_model, critic_model) | Develop methods |
| results(max_steps, max_attempts, engineer_model, researcher_model, ...) | Run analysis with local code execution |
| paper(journal, add_citations, default_model) | Write the paper |
| review(...) | Run review |
| get_file(path) | Read a project file |
| list_files() | List all project files |
| write_file(path, content) | Write a file |
| pull_files(key_only, iteration) | Download project files locally |
| push_files(local_path, remote_path) | Upload local files to the project |
| delete() | Delete the project |
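Before calling `push_files`, it can help to preview which local data files would be uploaded. A hypothetical helper (not part of the SDK) that collects files by glob pattern:

```python
from pathlib import Path

def files_to_push(local_root, patterns=("*.csv", "*.txt", "*.fits")):
    """List data files under local_root as relative paths, matching the given globs."""
    root = Path(local_root)
    return sorted(
        str(p.relative_to(root))
        for pattern in patterns
        for p in root.rglob(pattern)
        if p.is_file()
    )
```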

Examples

The examples/ directory contains runnable scripts covering all features:

| Script | Description |
|---|---|
| 00_manage_projects.py | List and delete projects |
| 01_health_check.py | Backend health check |
| 02a-c_keywords_*.py | Keyword extraction (UNESCO, AAS, AAAI) |
| 03a_ocr_arxiv.py | OCR an arXiv paper |
| 03b_ocr_local.py | OCR a local PDF |
| 04_enhance.py | Enhance text with arXiv context |
| 05a-d_idea_*.py | Idea generation (quantum, psychology, chemical eng., oncology) |
| 06_literature_coral.py | Literature search (marine biology) |
| 07a-e_methods_*.py | Methods development (quantum, zoology, psychology, chemical eng., oncology) |
| 08a_one_shot_engineer.py | One-shot code generation |
| 08b_one_shot_researcher.py | One-shot writing task |
| 08c_one_shot_vlm.py | One-shot with VLM image review |
| 08d_one_shot_researcher_oss.py | One-shot with self-hosted OSS model |
| 09_deep_research_biostatistics.py | Deep research (biostatistics) |
| 10_pipeline_oscillator.py | Full pipeline: idea → literature → methods → results |
| 11_paper_oscillator.py | Paper writing from existing project |
| 12_review.py | Standalone paper review |

Docs

pip install "ai-research[docs]"
mkdocs serve --livereload

Tests

pytest tests
pytest tests/ -m "not slow"  # skip slow tests

Download files

Source distribution: ai_research-0.1.37.tar.gz (729.8 kB)
Built distribution: ai_research-0.1.37-py3-none-any.whl (28.6 kB)

File details: ai_research-0.1.37.tar.gz

  • Size: 729.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | e8e64fce7bbfc2f36f5f169d45f1f315d85255829f1ef884aa361c2b227157c4 |
| MD5 | ddc5c7ab6c454cd87178628038672d05 |
| BLAKE2b-256 | e23b5a92c1dbb01fe7005f802a3916da0b3cd9fd5ee9593687603813168d3963 |

File details: ai_research-0.1.37-py3-none-any.whl

  • Size: 28.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | 2013fd1fbea92161b0ae4e74fd89b115b6497222704024fb3370cbfe92d2e54c |
| MD5 | e643cf322697b56acca4ec207a18f981 |
| BLAKE2b-256 | bf188171ee58fcaed701c49b4dbfad574b67a4d303d0e76cd82762c79bf1327c |
