Templated Abstract Polymorphic LIMS - A Laboratory Information Management System

Bloom

Bloom is the wet-lab and material-state authority for the stack. It models containers, specimens, derived materials, assay/workset flow, sequencing context, and the physical lineage that links operational lab work back to Atlas order context.

Bloom owns:

  • containers, placements, specimens, and derived materials
  • extraction, QC, library-prep, pool, and run objects
  • wet-lab queue membership and related operational state
  • lineage links between physical-material state and Atlas fulfillment context

Bloom does not own:

  • customer-portal truth and tenant administration
  • patient, clinician, shipment, TRF, or test authority
  • canonical artifact registry authority
  • analysis execution or result-return workflows

If you need to understand what physically exists in the lab, how it changed, and how those changes are linked together, Bloom is the authoritative repo.

Component View

flowchart LR
    UI["Bloom UI + API"] --> Domain["Bloom domain services"]
    Domain --> TapDB["TapDB persistence and template packs"]
    Domain --> Cognito["Cognito / daycog"]
    Domain --> Zebra["zebra_day label printing"]
    Domain --> Atlas["Atlas integration"]
    Domain --> Tracking["carrier tracking integration"]

Prerequisites

  • Python 3.12+
  • Conda for the supported BLOOM environment
  • local PostgreSQL/TapDB-compatible runtime for full local work
  • optional Cognito setup for auth-complete browser flows
  • optional printer and carrier-tracking configuration for the integration-heavy paths

Getting Started

Quickstart

source ./activate <deploy-name>
bloom db init
bloom db seed
bloom server start --port 8912

source ./activate <deploy-name> creates the deployment-scoped conda environment from the repo-root environment.yaml when it is missing, then activates it and installs only the Bloom repo in editable mode.

The supported local workflow is CLI-first and uses Bloom’s own environment/bootstrap path.

Delete-only teardown is also available:

bloom db nuke
bloom db nuke --force

Architecture

Technology

  • FastAPI + server-rendered GUI
  • Typer-based bloom CLI
  • TapDB for shared persistence/runtime lifecycle
  • Cognito-backed authentication
  • optional integrations for label printing and carrier tracking

Core Object Model

Bloom’s main concepts are:

  • templates that describe lab object types and allowed structure
  • instances representing containers, materials, assay artifacts, queues, and run context
  • lineage links that model parent/child and workflow relationships
  • audit trails and soft-delete history
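
The concepts above can be sketched in miniature. This is an illustrative stand-in, not Bloom's actual data model: the class and field names (LabObject, parent, audit, deleted_at) are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Hypothetical miniature of the template/instance/lineage/audit ideas;
# names here are invented for illustration, not Bloom's real schema.
@dataclass
class LabObject:
    template: str                            # e.g. "container.plate96"
    parent: Optional["LabObject"] = None     # lineage link to upstream material
    audit: list = field(default_factory=list)
    deleted_at: Optional[datetime] = None    # soft delete, never a hard DELETE

    def log(self, event: str) -> None:
        self.audit.append((datetime.now(timezone.utc), event))

    def soft_delete(self) -> None:
        self.deleted_at = datetime.now(timezone.utc)
        self.log("soft_delete")

plate = LabObject(template="container.plate96")
specimen = LabObject(template="material.specimen", parent=plate)
specimen.log("accessioned")
specimen.soft_delete()

print(specimen.parent.template)   # container.plate96
```

The point of the sketch is the shape, not the names: instances carry a template identifier, a lineage pointer, an append-only audit trail, and a soft-delete timestamp instead of row deletion.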

Bloom template definitions are authored as JSON packs under config/tapdb_templates/ and loaded through TapDB. Runtime code should not create generic_template rows directly.
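
For orientation, a template pack under config/tapdb_templates/ might look roughly like the following. Every key and value here is invented for illustration and does not reflect Bloom's actual pack schema:

```json
{
  "template": "container.plate96",
  "version": 1,
  "properties": {
    "rows": 8,
    "columns": 12
  },
  "allowed_children": ["material.specimen"]
}
```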

Runtime Shape

  • app entrypoint: main.py
  • app factory: bloom_lims.app:create_app
  • CLI: bloom
  • main CLI groups: server, db, config, info, integrations, quality, test, users
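
The CLI-group layout can be mimicked with the standard library for illustration. The real bloom CLI is Typer-based; this argparse sketch only mirrors the top-level group names listed above, with no subcommand behavior.

```python
import argparse

# Stdlib mock of the bloom CLI's top-level group layout; only dispatch
# to a named group is shown, the group bodies are placeholders.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="bloom")
    groups = parser.add_subparsers(dest="group", required=True)
    for name in ("server", "db", "config", "info",
                 "integrations", "quality", "test", "users"):
        groups.add_parser(name, help=f"{name} commands")
    return parser

args = build_parser().parse_args(["db"])
print(args.group)  # db
```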

Integration Boundaries

  • Atlas provides intake and fulfillment context
  • Dewey may register or resolve artifacts when enabled
  • Ursa consumes sequencing context downstream
  • Zebra Day supports label-print workflows

Visual Tour

Bloom is unusually UI-heavy for a service repo, so the README includes a few representative screenshots.

Graph And Metrics

[Screenshot: Bloom graph and metrics view]

Accessioning

[Screenshot: Bloom accessioning view]

Object Detail

[Screenshot: Bloom object detail view]

Cost Estimates

Approximate only.

  • Local development: workstation plus a local database.
  • Small shared environment: usually the cost of the Dayhoff-managed host/database footprint, not Bloom-specific code.
  • Integration-heavy environments: enabling printers, carrier tracking, TLS, and shared auth adds operator cost, but Bloom still tends to be one service inside a broader stack budget rather than a standalone large spend item.

Development Notes

  • Canonical local entry path: source ./activate <deploy-name>
  • Use bloom ... as the main operational interface
  • Use tapdb ... only for shared DB/runtime work that Bloom explicitly delegates
  • Use daycog ... only for shared Cognito work that Bloom explicitly delegates
  • bloom db reset rebuilds after deletion; bloom db nuke stops after the destructive schema reset

Useful checks:

source ./activate <deploy-name>
bloom --help
pytest -q

Sandboxing

  • Safe: docs work, code reading, tests, bloom --help, and local-only validation against disposable local runtimes
  • Local-stateful: bloom db init, bloom db seed, bloom db reset, and bloom db nuke
  • Requires extra care: Cognito lifecycle, external tracking integrations, printer integrations, and any Dayhoff-managed deployed environment flows
