biocompute

Wet lab automation as Python code. Maintained by london biocompute.

biocompute is a framework that lets you write wet lab experiments as plain Python. Define your protocol with calls like well.fill(), well.mix(), and well.image(), then execute it on real lab hardware that handles the liquid dispensing, mixing, and imaging automatically. No drag-and-drop GUIs, no vendor lock-in, no manual pipetting.

If you know Python, you can run wet lab experiments.


Quick start

Create a virtual environment and install the biocompute package.

python -m venv .venv
source .venv/bin/activate
pip install biocompute

Create a file called super_simple_experiment.py and paste in the code snippet below.

from biocompute import wells, red_dye, green_dye, blue_dye

def experiment():
    for well in wells(count=3):
        well.fill(vol=80.0, reagent=red_dye)
        well.fill(vol=40.0, reagent=green_dye)
        well.fill(vol=20.0, reagent=blue_dye)
        well.mix()
        well.image()

Submit the experiment to the job server.

biocompute submit super_simple_experiment.py --follow

And that's it. Results stream back to your terminal as experiments finish executing on the physical hardware.


How it works

Your experiment function describes intent. The compiler takes this high-level declarative code and turns it into a fully scheduled, hardware-specific protocol. It handles:

  • Automatic parallelism — independent operations are identified and scheduled concurrently so protocols finish faster without any manual orchestration.
  • Plate layout — wells are assigned to physical plates based on thermal constraints. Multi-temperature experiments get split across plates automatically.
  • Operation collapsing — redundant per-well instructions (like 96 identical incubations) are collapsed into single plate-level commands.
  • Device mapping — every operation is matched to the right piece of hardware (pipette, camera, incubator, gripper) based on a capability model, so swapping equipment never means rewriting your protocol.
  • Multi-plate scaling — protocols that exceed a single plate are transparently distributed across as many plates as needed.

You describe what should happen. The compiler figures out how to make it fast.
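The effect of operation collapsing can be pictured with a toy model. This is an illustration only, not the real compiler: identical per-well instructions reduce to a single plate-level command tagged with a repeat count.

```python
from itertools import groupby

# Toy model of operation collapsing (not the real compiler):
# 96 identical per-well incubation instructions collapse into
# one plate-level command plus a repeat count.
per_well_ops = [("incubate", 37.0, 3600)] * 96

collapsed = [(op, sum(1 for _ in group)) for op, group in groupby(per_well_ops)]

print(collapsed)  # [(('incubate', 37.0, 3600), 96)]
```

A run of non-identical operations would survive as separate entries, which is why the collapse is safe: it only merges instructions that are literally interchangeable.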

Operations

Method                     What it does
well.fill(vol, reagent)    Dispense vol µL of reagent
well.mix()                 Mix well contents
well.image()               Capture an image

wells(count=n) yields n wells. Multiple calls produce non-overlapping wells.
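One way to picture that contract, as a toy sketch rather than the real implementation: a shared counter hands out fresh well indices, so successive calls can never return the same well twice.

```python
import itertools

# Toy sketch of the wells() contract (not the real implementation):
# a shared counter hands out fresh indices on every call, so wells
# returned by separate calls never overlap.
_next_index = itertools.count()

def wells(count):
    return [f"well_{i}" for i in itertools.islice(_next_index, count)]

first = wells(count=3)    # ['well_0', 'well_1', 'well_2']
second = wells(count=2)   # ['well_3', 'well_4']
assert set(first).isdisjoint(second)
```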

Reagents

Import the built-in reagents you need.

from biocompute import red_dye, green_dye, blue_dye, water

Because it's just Python

Use numpy. Use scipy. Use whatever. The system only sees wells and operations.

Colour sweep

Sweep red dye volume across ten wells using numpy to generate the range.

import numpy as np
from biocompute import wells, red_dye, green_dye, blue_dye

def experiment():
    for well, r in zip(wells(count=10), np.linspace(10, 100, 10)):
        well.fill(vol=r, reagent=red_dye)
        well.fill(vol=50.0, reagent=green_dye)
        well.fill(vol=50.0, reagent=blue_dye)
        well.mix()
        well.image()
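Here np.linspace(10, 100, 10) supplies the ten red-dye volumes, evenly spaced in 10 µL steps:

```python
import numpy as np

# Ten evenly spaced volumes from 10 to 100 µL inclusive.
vols = np.linspace(10, 100, 10)
print(vols.tolist())
# [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
```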

Closed-loop optimisation

Submit an experiment, read results, use them to parameterise the next one.

import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar
from biocompute import Client, wells, red_dye, green_dye

with Client() as client:
    volumes = np.linspace(10, 100, 8)

    def experiment_sweep():
        for well, v in zip(wells(count=8), volumes):
            well.fill(vol=v, reagent=red_dye)
            well.fill(vol=50.0, reagent=green_dye)
            well.mix()
            well.image()

    result = client.submit(experiment_sweep)

    # Fit a smooth model to the measured scores, then find the
    # volume that minimises it within the swept range.
    model = interp1d(volumes, result.result_data["scores"], kind="cubic")
    optimum = minimize_scalar(model, bounds=(10, 100), method="bounded").x

    def experiment_refine():
        for well, v in zip(wells(count=5), np.linspace(optimum - 10, optimum + 10, 5)):
            well.fill(vol=v, reagent=red_dye)
            well.fill(vol=50.0, reagent=green_dye)
            well.mix()
            well.image()

    final = client.submit(experiment_refine)
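The interpolate-then-refine step can be exercised without any hardware. A minimal stand-in, using synthetic scores with a known minimum at 60 µL and a pure-Python parabola fit in place of scipy's interp1d/minimize_scalar, fits a curve through the best observed point and its neighbours and takes the vertex as the refined optimum:

```python
# Hardware-free sketch of the interpolate-then-refine idea.
# Synthetic scores with a known minimum at 60 µL stand in for
# result.result_data["scores"]; a parabola fit stands in for scipy.
volumes = [10, 23, 36, 49, 61, 74, 87, 100]
scores = [(v - 60) ** 2 for v in volumes]

i = scores.index(min(scores))  # best observed well
(x0, y0), (x1, y1), (x2, y2) = [(volumes[j], scores[j]) for j in (i - 1, i, i + 1)]

# Vertex of the parabola through the three points around the minimum.
denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
b = (x2 ** 2 * (y0 - y1) + x1 ** 2 * (y2 - y0) + x0 ** 2 * (y1 - y2)) / denom
optimum = -b / (2 * a)

print(round(optimum, 1))  # 60.0
```

The real loop does the same thing with measured scores: model the sweep, locate the model's optimum, then submit a tighter sweep around it.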
