Templated Abstract Polymorphic LIMS - A Laboratory Information Management System

Bloom

Bloom is the wet-lab and material-state authority for the stack. It models containers, specimens, derived materials, assay/workset flow, sequencing context, and the physical lineage that links operational lab work back to Atlas order context.

Bloom owns:

  • containers, placements, specimens, and derived materials
  • extraction, QC, library-prep, pool, and run objects
  • wet-lab queue membership and related operational state
  • lineage links between physical-material state and Atlas fulfillment context

Bloom does not own:

  • customer-portal truth and tenant administration
  • patient, clinician, shipment, TRF, or test authority
  • canonical artifact registry authority
  • analysis execution or result-return workflows

If you need to understand what physically exists in the lab, how it changed, and how those changes are linked together, Bloom is the authoritative repo.

Component View

flowchart LR
    UI["Bloom UI + API"] --> Domain["Bloom domain services"]
    Domain --> TapDB["TapDB persistence and template packs"]
    Domain --> Cognito["Cognito / daycog"]
    Domain --> Zebra["zebra_day label printing"]
    Domain --> Atlas["Atlas integration"]
    Domain --> Tracking["carrier tracking integration"]

Prerequisites

  • Python 3.12+
  • Conda for the supported BLOOM environment
  • local PostgreSQL/TapDB-compatible runtime for full local work
  • optional Cognito setup for auth-complete browser flows
  • optional printer and carrier-tracking configuration for the integration-heavy paths
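The Python 3.12+ floor above can be checked with a tiny helper. This is a minimal sketch; `meets_minimum` is a hypothetical function for illustration, not part of the Bloom package.

```python
# Minimal sketch: compare a reported Python version against the 3.12+
# requirement listed above. `meets_minimum` is a hypothetical helper,
# not part of the Bloom codebase.
import sys

MINIMUM = (3, 12)

def meets_minimum(version: tuple, minimum: tuple = MINIMUM) -> bool:
    """Return True when `version` satisfies the required minimum."""
    return version >= minimum

if __name__ == "__main__":
    current = sys.version_info[:2]
    status = "ok" if meets_minimum(current) else "too old"
    print(f"Python {current[0]}.{current[1]}: {status}")
```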

Getting Started

Quickstart

source ./activate <deploy-name>
bloom db init
bloom db seed
bloom server start --port 8912

The supported local workflow is CLI-first and uses Bloom’s own environment/bootstrap path.

Delete-only teardown is also available:

bloom db nuke
bloom db nuke --force
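The `--force` variant follows the common CLI pattern of gating a destructive command behind an explicit flag. The sketch below illustrates that general pattern with `argparse`; it is not Bloom's actual implementation, and the real `bloom db nuke` behavior may differ.

```python
# Illustrative sketch of the usual --force guard around a destructive
# teardown command; Bloom's real CLI may implement this differently.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="nuke")
    parser.add_argument("--force", action="store_true",
                        help="skip the confirmation step")
    return parser

def nuke(argv: list) -> str:
    args = build_parser().parse_args(argv)
    if not args.force:
        # Without --force, a real CLI would prompt before proceeding.
        return "confirmation required"
    return "schema dropped"
```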

Architecture

Technology

  • FastAPI + server-rendered GUI
  • Typer-based bloom CLI
  • TapDB for shared persistence/runtime lifecycle
  • Cognito-backed authentication
  • optional integrations for label printing and carrier tracking

Core Object Model

Bloom’s main concepts are:

  • templates that describe lab object types and allowed structure
  • instances representing containers, materials, assay artifacts, queues, and run context
  • lineage links that model parent/child and workflow relationships
  • audit trails and soft-delete history
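The instance/lineage/soft-delete concepts above can be sketched in a few lines. None of these names come from the Bloom codebase; this is a hypothetical illustration of parent/child lineage links with a soft-delete flag.

```python
# Hypothetical sketch of lineage links between instances: parent/child
# edges plus a soft-delete flag, echoing the concepts listed above.
# These class and field names are illustrative, not Bloom's schema.
from dataclasses import dataclass

@dataclass
class Instance:
    uuid: str
    parent: "Instance | None" = None
    deleted: bool = False  # soft delete: hidden from queries, not erased

def lineage(instance: Instance) -> list:
    """Walk parent links from an instance back to its root."""
    chain = []
    node = instance
    while node is not None:
        chain.append(node.uuid)
        node = node.parent
    return chain

# A specimen yields an extract, which yields a library.
specimen = Instance("specimen-1")
extract = Instance("extract-1", parent=specimen)
library = Instance("library-1", parent=extract)
```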

Bloom template definitions are authored as JSON packs under config/tapdb_templates/ and loaded through TapDB. Runtime code should not create generic_template rows directly.
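A loader for such a pack might look like the sketch below. The pack contents and the validation keys are assumptions for illustration; Bloom's actual template-pack schema and TapDB loading API are defined in the repo, not here.

```python
# Sketch of loading a JSON template pack in the spirit described above
# (packs under config/tapdb_templates/, loaded through TapDB). The keys
# checked here are illustrative assumptions, not Bloom's real schema.
import json
import tempfile
from pathlib import Path

def load_pack(path: Path) -> dict:
    pack = json.loads(path.read_text())
    # Minimal sanity check before handing the pack to the loader.
    for key in ("name", "object_types"):
        if key not in pack:
            raise ValueError(f"template pack missing {key!r}")
    return pack

# Write and reload a tiny example pack to keep the sketch runnable.
example = {"name": "demo_pack", "object_types": ["container", "specimen"]}
with tempfile.TemporaryDirectory() as tmp:
    pack_file = Path(tmp) / "demo_pack.json"
    pack_file.write_text(json.dumps(example))
    loaded = load_pack(pack_file)
```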

Runtime Shape

  • app entrypoint: main.py
  • app factory: bloom_lims.app:create_app
  • CLI: bloom
  • main CLI groups: server, db, config, info, integrations, quality, test, users
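The app factory is given as `bloom_lims.app:create_app`, the common `module:attribute` entrypoint convention. The sketch below shows how such a string resolves, demonstrated against the standard library so it runs without Bloom installed.

```python
# Sketch of resolving a "module:attribute" factory string such as
# bloom_lims.app:create_app. Demonstrated against the stdlib so the
# snippet stands alone.
import importlib

def resolve(entrypoint: str):
    module_name, _, attr = entrypoint.partition(":")
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# "json:loads" resolves to the json.loads callable.
loads = resolve("json:loads")
parsed = loads('{"a": 1}')
```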

Integration Boundaries

  • Atlas provides intake and fulfillment context
  • Dewey may register or resolve artifacts when enabled
  • Ursa consumes sequencing context downstream
  • Zebra Day supports label-print workflows

Visual Tour

Bloom is unusually UI-heavy for a service repo, so the README keeps a few representative screenshots.

Graph And Metrics

(screenshot: Bloom graph)

Accessioning

(screenshot: Bloom accessioning)

Object Detail

(screenshot: Bloom object detail)

Cost Estimates

Approximate only.

  • Local development: workstation plus a local database.
  • Small shared environment: usually the cost of the Dayhoff-managed host/database footprint, not Bloom-specific code.
  • Integration-heavy environments increase operator cost when printers, tracking, TLS, and shared auth are enabled, but Bloom still tends to be a service inside a broader stack budget rather than a standalone large spend item.

Development Notes

  • Canonical local entry path: source ./activate <deploy-name>
  • Use bloom ... as the main operational interface
  • Use tapdb ... only for shared DB/runtime work Bloom explicitly delegates
  • Use daycog ... only for shared Cognito work Bloom explicitly delegates
  • bloom db reset rebuilds after deletion; bloom db nuke stops after the destructive schema reset

Useful checks:

source ./activate <deploy-name>
bloom --help
pytest -q

Sandboxing

  • Safe: docs work, code reading, tests, bloom --help, and local-only validation against disposable local runtimes
  • Local-stateful: bloom db init, bloom db seed, bloom db reset, and bloom db nuke
  • Requires extra care: Cognito lifecycle, external tracking integrations, printer integrations, and any Dayhoff-managed deployed environment flows
