
Templated Abstract Polymorphic LIMS - A Laboratory Information Management System


Bloom

Bloom is the wet-lab and material-state authority for the stack. It models containers, specimens, derived materials, assay/workset flow, sequencing context, and the physical lineage that links operational lab work back to Atlas order context.

Bloom owns:

  • containers, placements, specimens, and derived materials
  • extraction, QC, library-prep, pool, and run objects
  • wet-lab queue membership and related operational state
  • lineage links between physical-material state and Atlas fulfillment context

Bloom does not own:

  • customer-portal truth and tenant administration
  • patient, clinician, shipment, TRF, or test authority
  • canonical artifact registry authority
  • analysis execution or result-return workflows

If you need to understand what physically exists in the lab, how it changed, and how those changes are linked together, Bloom is the authoritative repo.

Component View

```mermaid
flowchart LR
    UI["Bloom UI + API"] --> Domain["Bloom domain services"]
    Domain --> TapDB["TapDB persistence and template packs"]
    Domain --> Cognito["Cognito / daycog"]
    Domain --> Zebra["zebra_day label printing"]
    Domain --> Atlas["Atlas integration"]
    Domain --> Tracking["carrier tracking integration"]
```

Prerequisites

  • Python 3.12+
  • Conda for the supported BLOOM environment
  • local PostgreSQL/TapDB-compatible runtime for full local work
  • optional Cognito setup for auth-complete browser flows
  • optional printer and carrier-tracking configuration for the integration-heavy paths

Getting Started

Quickstart

```shell
source ./activate <deploy-name>
bloom db init
bloom db seed
bloom server start --port 8912
```

The supported local workflow is CLI-first and uses Bloom’s own environment/bootstrap path.

Delete-only teardown is also available:

```shell
bloom db nuke
bloom db nuke --force
```

Architecture

Technology

  • FastAPI + server-rendered GUI
  • Typer-based bloom CLI
  • TapDB for shared persistence/runtime lifecycle
  • Cognito-backed authentication
  • optional integrations for label printing and carrier tracking

Core Object Model

Bloom’s main concepts are:

  • templates that describe lab object types and allowed structure
  • instances representing containers, materials, assay artifacts, queues, and run context
  • lineage links that model parent/child and workflow relationships
  • audit trails and soft-delete history
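
As a rough, stdlib-only sketch (not Bloom's actual API, and the object names are invented) of how parent/child lineage links can be recorded and walked:

```python
# Hypothetical illustration of lineage links between lab objects:
# each edge records a child, its parent, and the relationship type.
from collections import defaultdict

lineage = defaultdict(list)  # child id -> list of (parent id, relation)

def link(child, parent, relation):
    """Record that `child` was derived from `parent` via `relation`."""
    lineage[child].append((parent, relation))

def ancestry(obj):
    """Yield (child, relation, parent) for every ancestor, depth-first."""
    for parent, relation in lineage.get(obj, []):
        yield obj, relation, parent
        yield from ancestry(parent)

# Specimen -> extraction -> library -> pool, mirroring the wet-lab flow
link("extract-1", "specimen-1", "extracted_from")
link("library-1", "extract-1", "prepared_from")
link("pool-1", "library-1", "pooled_from")

print(list(ancestry("pool-1")))
```

Walking `ancestry("pool-1")` traces the pool back through its library and extraction to the original specimen.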

Bloom template definitions are authored as JSON packs under config/tapdb_templates/ and loaded through TapDB. Runtime code should not create generic_template rows directly.
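
A template pack entry might look roughly like the following. This is purely illustrative; the field names and structure here are assumptions, not Bloom's actual schema:

```json
{
  "template": "container.plate.96well",
  "version": "1.0",
  "fields": {
    "barcode": {"type": "string", "required": true},
    "well_count": {"type": "integer", "default": 96}
  },
  "allowed_children": ["specimen", "derived_material"]
}
```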

Runtime Shape

  • app entrypoint: main.py
  • app factory: bloom_lims.app:create_app
  • CLI: bloom
  • main CLI groups: server, db, config, info, integrations, quality, test, users
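
The factory entrypoint means servers construct the app on demand rather than importing a module-level instance. A stdlib-only toy of that shape (the real `bloom_lims.app:create_app` returns a FastAPI app; everything below is illustrative):

```python
# Toy sketch of the app-factory pattern behind bloom_lims.app:create_app:
# a callable that builds and returns the application object on demand
# (the same shape uvicorn's --factory mode expects).
def create_app(config=None):
    routes = {}

    def route(path):
        def register(fn):
            routes[path] = fn
            return fn
        return register

    @route("/health")
    def health():
        return {"status": "ok"}

    return {"config": config or {}, "routes": routes}

app = create_app({"port": 8912})
```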

Integration Boundaries

  • Atlas provides intake and fulfillment context
  • Dewey may register or resolve artifacts when enabled
  • Ursa consumes sequencing context downstream
  • Zebra Day supports label-print workflows

Visual Tour

Bloom is unusually UI-heavy for a service repo, so the README keeps a few representative screens.

Graph And Metrics

(Screenshot: Bloom graph and metrics view)

Accessioning

(Screenshot: accessioning workflow)

Object Detail

(Screenshot: object detail page)

Cost Estimates

These estimates are approximate.

  • Local development: workstation plus a local database.
  • Small shared environment: usually the cost of the Dayhoff-managed host/database footprint, not Bloom-specific code.
  • Integration-heavy environments: enabling printers, carrier tracking, TLS, and shared auth adds operator cost, but Bloom typically remains one service inside a broader stack budget rather than a standalone spend item.

Development Notes

  • Canonical local entry path: source ./activate <deploy-name>
  • Use bloom ... as the main operational interface
  • Use tapdb ... only for shared DB/runtime work Bloom explicitly delegates
  • Use daycog ... only for shared Cognito work Bloom explicitly delegates
  • bloom db reset rebuilds after deletion; bloom db nuke stops after the destructive schema reset

Useful checks:

```shell
source ./activate <deploy-name>
bloom --help
pytest -q
```

Sandboxing

  • Safe: docs work, code reading, tests, bloom --help, and local-only validation against disposable local runtimes
  • Local-stateful: bloom db init, bloom db seed, bloom db reset, and bloom db nuke
  • Requires extra care: Cognito lifecycle, external tracking integrations, printer integrations, and any Dayhoff-managed deployed environment flows

Current Docs

References

Download files


Source Distribution

bloom_lims-0.11.20.tar.gz (13.3 MB, source)

Built Distribution

bloom_lims-0.11.20-py3-none-any.whl (9.2 MB, Python 3 wheel)

File details

Details for the file bloom_lims-0.11.20.tar.gz.

File metadata

  • Download URL: bloom_lims-0.11.20.tar.gz
  • Size: 13.3 MB
  • Tags: Source
  • Trusted Publishing: No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

  • SHA256: 65a85e431847ab528634a30c4c13129f4f09736ccc297c45bd895bb6169e0a37
  • MD5: 44b040b86c1ccff7645b5e91d9d306ee
  • BLAKE2b-256: 40e05f58654806fe27a66b862b84c57e7e0663062629ca0df4f620d22237716b
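
To verify a downloaded artifact against a published SHA256 digest, a standard `hashlib` check is enough (the filename and digest below are the sdist values from this release):

```python
# Verify a downloaded file against a published SHA256 digest.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "65a85e431847ab528634a30c4c13129f4f09736ccc297c45bd895bb6169e0a37"
# assert sha256_of("bloom_lims-0.11.20.tar.gz") == expected
```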


File details

Details for the file bloom_lims-0.11.20-py3-none-any.whl.

File metadata

  • Download URL: bloom_lims-0.11.20-py3-none-any.whl
  • Size: 9.2 MB
  • Tags: Python 3
  • Trusted Publishing: No
  • Uploaded via: twine/6.2.0 CPython/3.14.2

File hashes

  • SHA256: 5288f851cc7ee2df85b6fff7efa7d920c968996c855610e48b042638d005d87e
  • MD5: 82a3f7d240d0cf9c1dd575ee6382d56c
  • BLAKE2b-256: 993702b1c7615034d97fd3c7a8ff3d3ae166d25477ccaae2c6a233e321faa0f5

