
An OpenAI-like SDK for finetuning and batch inference


Info: v0.7 talks to a new Supabase backend. v0.6 will remain online until at least December 1st, 2025.


This repo is research code. Please use GitHub issues or contact me via email (niels dot warncke at gmail dot com) or Slack when you encounter issues.

OpenWeights

An OpenAI-like SDK with the flexibility of working on a local GPU: finetuning, inference, API deployments, and custom workloads on managed RunPod instances.

Installation

Run pip install openweights or install from source via pip install -e .


Quickstart

  1. Create an API key. You can create one via ow signup or using the dashboard.

  2. Start the cluster manager (skip this if you got an API key for a managed cluster). The cluster manager is the service that monitors the job queue and starts RunPod workers. You have several options for starting it:

ow cluster --env-file path/to/env   # Run locally
ow deploy --env-file path/to/env    # Run on a runpod cpu instance

# Or managed, if you trust us with your API keys (usually a bad idea, but okay if you know us personally)
ow env import path/to/env
ow manage start

In all cases, the env file needs at least all variables defined in .env.worker.example.
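As a rough sketch of what such a file looks like (the variable names below are illustrative guesses, not the authoritative list; copy .env.worker.example for the real one):

```shell
# Illustrative only -- see .env.worker.example for the real variable names
RUNPOD_API_KEY=...        # lets the cluster manager start GPU workers
HF_TOKEN=...              # pull gated models/datasets from Hugging Face
OPENWEIGHTS_API_KEY=...   # the key created in step 1
```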

  3. Submit a job
from openweights import OpenWeights

ow = OpenWeights()

training_file = ow.files.upload("data/train.jsonl", purpose="conversations")["id"]
job = ow.fine_tuning.create(
    model="unsloth/Qwen3-4B",
    training_file=training_file,
    loss="sft",
    epochs=1,
    learning_rate=1e-4,
    r=32,
)
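The training file is a JSONL of conversations. A minimal sketch of preparing one, assuming OpenAI-style chat messages (the exact schema expected by the `conversations` purpose may differ; check the cookbook):

```python
import json
import os

# Each line is one training conversation in OpenAI-style chat format
# (an assumption here, not a documented schema).
conversations = [
    {"messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "4"},
    ]},
]

os.makedirs("data", exist_ok=True)
with open("data/train.jsonl", "w") as f:
    for conv in conversations:
        f.write(json.dumps(conv) + "\n")
```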

For more examples, check out the cookbook.

Overview

openweights lets you submit jobs that run on managed RunPod instances. It supports a range of built-in jobs out of the box, but is designed for custom workloads.

Custom jobs

A custom job lets you run a script that you would normally run on one GPU as a job.

Example:

import json
from typing import Type

from pydantic import BaseModel

from openweights import OpenWeights, register, Jobs

ow = OpenWeights()

@register('my_custom_job')
class MyCustomJob(Jobs):
    mount = {
        'local/path/to/script.py': 'script.py',
        'local/path/to/dir/': 'dirname/'
    }
    params: Type[BaseModel] = MyParams  # Your Pydantic model for params, defined elsewhere
    requires_vram_gb: int = 24
    base_image: str = 'nielsrolf/ow-default'  # optional

    def get_entrypoint(self, validated_params: BaseModel) -> str:
        # Build the shell command that the worker runs for this job.
        return f'python script.py {json.dumps(validated_params.model_dump())}'
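The params model is a standard Pydantic model. A minimal sketch of what `MyParams` might look like (the field names here are hypothetical; use whatever your script expects):

```python
from pydantic import BaseModel, Field

class MyParams(BaseModel):
    # Hypothetical fields -- match these to what script.py actually reads
    model: str = "unsloth/Qwen3-4B"
    steps: int = Field(100, gt=0)

# The job serializes validated params to JSON for the entrypoint
params = MyParams(steps=500)
print(params.model_dump())
```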

More details

Built-in jobs

Inference

from openweights import OpenWeights
ow = OpenWeights()

file = ow.files.create(
  file=open("mydata.jsonl", "rb"),
  purpose="conversations"
)

model = 'unsloth/llama-3-8b-Instruct'  # any supported Hugging Face model id

job = ow.inference.create(
    model=model,
    input_file_id=file['id'],
    max_tokens=1000,
    temperature=1,
    min_tokens=600,
)

# Wait or poll until job is done, then:
if job.status == 'completed':
    output_file_id = job['outputs']['file']
    output = ow.files.content(output_file_id).decode('utf-8')
    print(output)
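The downloaded output is a JSONL string, one row per completed request. A sketch of parsing it (the per-row field names are assumptions, not a documented schema):

```python
import json

# Stand-in for the decoded content from ow.files.content(...);
# the "completion" field name is illustrative.
output = '{"messages": [{"role": "user", "content": "hi"}], "completion": "hello"}\n'

rows = [json.loads(line) for line in output.splitlines() if line.strip()]
for row in rows:
    print(row["completion"])
```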

More details

OpenAI-like vLLM API

from openweights import OpenWeights

ow = OpenWeights()

model = 'unsloth/llama-3-8b-Instruct'

# async with ow.api.deploy(model) also works
with ow.api.deploy(model):
    # entering the context manager is equivalent to temp_api = ow.api.deploy(model); temp_api.up()
    completion = ow.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "is 9.11 > 9.9?"}]
    )
    print(completion.choices[0].message)       # when this context manager exits, it calls api.down()

More details

Inspect-AI

from openweights import OpenWeights
ow = OpenWeights()

job = ow.inspect_ai.create(
    model='meta-llama/Llama-3.3-70B-Instruct',
    eval_name='inspect_evals/gpqa_diamond',
    options='--top-p 0.9', # Can be any options that `inspect eval` accepts - we simply pass them on without validation
)

if job.status == 'completed':
    job.download('output')

CLI

Use ow {cmd} --help for more help on the available commands:

 ow --help
usage: ow [-h] {ssh,exec,signup,cluster,worker,token,ls,cancel,logs,fetch,serve,deploy,env,manage} ...

OpenWeights CLI for remote GPU operations

positional arguments:
  {ssh,exec,signup,cluster,worker,token,ls,cancel,logs,fetch,serve,deploy,env,manage}
    ssh                 Start or attach to a remote shell with live file sync.
    exec                Execute a command on a remote GPU with file sync.
    signup              Create a new user, organization, and API key.
    cluster             Run the cluster manager locally with your own infrastructure.
    worker              Run a worker to execute jobs from the queue.
    token               Manage API tokens for organizations.
    ls                  List job IDs.
    cancel              Cancel jobs by ID.
    logs                Display logs for a job.
    fetch               Fetch file content by ID.
    serve               Start the dashboard backend server.
    deploy              Deploy a cluster instance on RunPod.
    env                 Manage organization secrets (environment variables).
    manage              Control managed cluster infrastructure.

options:
  -h, --help            show this help message and exit

For developing custom jobs, ow ssh is great: it starts a pod, connects via SSH, and live-syncs the local working directory to the remote. This lets you edit finetuning code locally and test it immediately.

General notes

Job and file IDs are content hashes

The job_id is derived from a hash of the job's params, which means that submitting the same job many times only runs it once. Resubmitting a failed or canceled job resets its status to pending.
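The idea can be illustrated with a canonical-JSON hash (the actual hashing scheme used by openweights may differ):

```python
import hashlib
import json

def job_id_from_params(params: dict) -> str:
    # Canonicalize: sorted keys + compact separators, so equal params
    # always serialize to identical bytes regardless of key order.
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return "job-" + hashlib.sha256(canonical.encode()).hexdigest()[:16]

a = job_id_from_params({"model": "unsloth/Qwen3-4B", "epochs": 1})
b = job_id_from_params({"epochs": 1, "model": "unsloth/Qwen3-4B"})
```

Because the id depends only on the content of the params, resubmitting identical params yields the same job id, and the queue can deduplicate it.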


Citation

Originally created by Niels Warncke (@nielsrolf).

If you find this repo useful for your research and want to cite it, you can do so via:

@misc{warncke_openweights_2025,
  author       = {Niels Warncke},
  title        = {OpenWeights},
  howpublished = {\url{https://github.com/longtermrisk/openweights}},
  note         = {Commit abcdefg • accessed DD Mon YYYY},
  year         = {2025}
}

