
biocompute

Wet lab automation as Python code. Maintained by London Biocompute.


biocompute is a framework that lets you write wet lab experiments as plain Python. Define your protocol with calls like well.fill(), well.mix(), and well.image(), then execute it on real lab hardware that handles the liquid dispensing, mixing, and imaging automatically. No drag-and-drop GUIs, no vendor lock-in, no manual pipetting.

If you know Python, you can run wet lab experiments.


Quick start

Create a virtual environment and install the biocompute package.

python -m venv .venv
source .venv/bin/activate
pip install biocompute

Create a file called super_simple_experiment.py and copy in the code snippet below.

from biocompute import wells, red_dye, green_dye, blue_dye

def experiment():
    for well in wells(count=3):
        well.fill(vol=80.0, reagent=red_dye)
        well.fill(vol=40.0, reagent=green_dye)
        well.fill(vol=20.0, reagent=blue_dye)
        well.mix()
        well.image()

Submit the experiment to the job server.

biocompute submit super_simple_experiment.py --follow

And that's it. Results stream back to your terminal as experiments finish executing on the physical hardware.


How it works

Your experiment function describes intent. The compiler takes this high-level declarative code and turns it into a fully scheduled, hardware-specific protocol. It handles:

  • Automatic parallelism — independent operations are identified and scheduled concurrently so protocols finish faster without any manual orchestration.
  • Plate layout — wells are assigned to physical plates based on thermal constraints. Multi-temperature experiments get split across plates automatically.
  • Operation collapsing — redundant per-well instructions (like 96 identical incubations) are collapsed into single plate-level commands.
  • Device mapping — every operation is matched to the right piece of hardware (pipette, camera, incubator, gripper) based on a capability model, so swapping equipment never means rewriting your protocol.
  • Multi-plate scaling — protocols that exceed a single plate are transparently distributed across as many plates as needed.

You describe what should happen. The compiler figures out how to make it fast.
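As a rough illustration of operation collapsing (a plain-Python sketch, not the actual compiler), runs of identical per-well instructions can be grouped into plate-level commands that carry a well count:

```python
from itertools import groupby

# Hypothetical sketch of operation collapsing -- not the real compiler.
# 96 identical per-well incubations followed by 96 identical imaging steps:
per_well_ops = [("incubate", "37C")] * 96 + [("image", None)] * 96

# Collapse each run of identical instructions into one plate-level command.
plate_ops = [(op, sum(1 for _ in run)) for op, run in groupby(per_well_ops)]
print(plate_ops)  # [(('incubate', '37C'), 96), (('image', None), 96)]
```

The 192 per-well instructions reduce to two plate-level commands, which is the kind of rewrite that makes large protocols cheap to execute.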

Operations

Method                    What it does
well.fill(vol, reagent)   Dispense vol µL of reagent
well.mix()                Mix well contents
well.image()              Capture an image

wells(count=n) yields n wells. Multiple calls produce non-overlapping wells.
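The non-overlap guarantee can be pictured with a minimal stand-in (assumed behaviour for illustration, not the library's implementation): a shared counter hands out fresh wells on every call.

```python
import itertools

# Minimal stand-in for wells() -- assumed behaviour, not the real library:
# a module-level counter guarantees successive calls never reuse a well.
_next_well = itertools.count()

def wells(count):
    return [f"well_{next(_next_well)}" for _ in range(count)]

first = wells(count=3)    # ['well_0', 'well_1', 'well_2']
second = wells(count=2)   # ['well_3', 'well_4']
assert not set(first) & set(second)  # non-overlapping
```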

Reagents

Import the built-in reagents you need.

from biocompute import red_dye, green_dye, blue_dye, water

Because it's just Python

Use numpy. Use scipy. Use whatever. The system only sees wells and operations.

Colour sweep

Sweep red dye volume across ten wells using numpy to generate the range.

import numpy as np
from biocompute import wells, red_dye, green_dye, blue_dye

def experiment():
    for well, r in zip(wells(count=10), np.linspace(10, 100, 10)):
        well.fill(vol=r, reagent=red_dye)
        well.fill(vol=50.0, reagent=green_dye)
        well.fill(vol=50.0, reagent=blue_dye)
        well.mix()
        well.image()

Closed-loop optimisation

Submit an experiment, read results, use them to parameterise the next one.

import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar
from biocompute import Client, wells, red_dye, green_dye

with Client() as client:
    volumes = np.linspace(10, 100, 8)

    def experiment_sweep():
        for well, v in zip(wells(count=8), volumes):
            well.fill(vol=v, reagent=red_dye)
            well.fill(vol=50.0, reagent=green_dye)
            well.mix()
            well.image()

    result = client.submit(experiment_sweep)

    model = interp1d(volumes, result.result_data["scores"], kind="cubic")
    optimum = minimize_scalar(model, bounds=(10, 100), method="bounded").x

    def experiment_refine():
        for well, v in zip(wells(count=5), np.linspace(optimum - 10, optimum + 10, 5)):
            well.fill(vol=v, reagent=red_dye)
            well.fill(vol=50.0, reagent=green_dye)
            well.mix()
            well.image()

    final = client.submit(experiment_refine)
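The fit-and-optimise step in the middle can be tried offline without any hardware. Here synthetic scores (a quadratic error minimised at 60 µL) stand in for the real result.result_data["scores"]:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar

# Offline sketch of the fit-then-optimise step. The synthetic scores
# (quadratic error, smallest at 60 uL) stand in for real result data.
volumes = np.linspace(10, 100, 8)
scores = (volumes - 60.0) ** 2

model = interp1d(volumes, scores, kind="cubic")
optimum = minimize_scalar(model, bounds=(10, 100), method="bounded").x
print(f"{optimum:.1f}")  # close to 60.0
```

Because the cubic interpolant reproduces the quadratic exactly, the bounded minimiser recovers the true optimum, which the refinement sweep would then sample around.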

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

biocompute-0.1.1.tar.gz (16.8 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

biocompute-0.1.1-py3-none-any.whl (18.9 kB)

Uploaded Python 3

File details

Details for the file biocompute-0.1.1.tar.gz.

File metadata

  • Download URL: biocompute-0.1.1.tar.gz
  • Upload date:
  • Size: 16.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.9

File hashes

Hashes for biocompute-0.1.1.tar.gz
Algorithm Hash digest
SHA256 b65cfb41b1e0dff28efee6412034e28294c30cbcebe96e16755ec47a219a6fc3
MD5 6e0ee6185bc29e3d05dd8a9e423bdc0d
BLAKE2b-256 8b9823dc3120bd22466bee146d4933dd9153b6a60a73c8b74cf1f45695476b42


File details

Details for the file biocompute-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: biocompute-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 18.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.9

File hashes

Hashes for biocompute-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 5bfb72806aab0ac332d459f8a41943fbf2a23923f849d9320f035fbc742506b0
MD5 b6a6b2997652fe812cdb4e8789f73fbb
BLAKE2b-256 c7fe57f1716e11f791847a2e82d2d786d31680464feb4347027bc71c5860010a

