Templated Abstract Polymorphic LIMS - A Laboratory Information Management System

Bloom

Bloom is the wet-lab and material-state authority for the stack. It models containers, specimens, derived materials, assay/workset flow, sequencing context, and the physical lineage that links operational lab work back to Atlas order context.

Bloom owns:

  • containers, placements, specimens, and derived materials
  • extraction, QC, library-prep, pool, and run objects
  • wet-lab queue membership and related operational state
  • lineage links between physical-material state and Atlas fulfillment context

Bloom does not own:

  • customer-portal truth and tenant administration
  • patient, clinician, shipment, TRF, or test authority
  • canonical artifact registry authority
  • analysis execution or result-return workflows

If you need to understand what physically exists in the lab, how it changed, and how those changes are linked together, Bloom is the authoritative repo.

Component View

flowchart LR
    UI["Bloom UI + API"] --> Domain["Bloom domain services"]
    Domain --> TapDB["TapDB persistence and template packs"]
    Domain --> Cognito["Cognito / daycog"]
    Domain --> Zebra["zebra_day label printing"]
    Domain --> Atlas["Atlas integration"]
    Domain --> Tracking["carrier tracking integration"]

Prerequisites

  • Python 3.12+
  • Conda for the supported BLOOM environment
  • local PostgreSQL/TapDB-compatible runtime for full local work
  • optional Cognito setup for auth-complete browser flows
  • optional printer and carrier-tracking configuration for the integration-heavy paths

Getting Started

Quickstart

source ./activate <deploy-name>
bloom db init
bloom db seed
bloom server start --port 8912

The supported local workflow is CLI-first and uses Bloom’s own environment/bootstrap path.

Delete-only teardown is also available:

bloom db nuke
bloom db nuke --force

Architecture

Technology

  • FastAPI + server-rendered GUI
  • Typer-based bloom CLI
  • TapDB for shared persistence/runtime lifecycle
  • Cognito-backed authentication
  • optional integrations for label printing and carrier tracking

Core Object Model

Bloom’s main concepts are:

  • templates that describe lab object types and allowed structure
  • instances representing containers, materials, assay artifacts, queues, and run context
  • lineage links that model parent/child and workflow relationships
  • audit trails and soft-delete history
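The concepts above can be sketched with a minimal, self-contained model. The class and field names here (`LabObject`, `LineageLink`, `relation`) are illustrative only, not Bloom's actual classes; the real objects are TapDB-backed rows:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabObject:
    """Illustrative stand-in for a Bloom instance (container, material, etc.)."""
    uid: str
    template: str                  # e.g. "container.plate.96well"
    deleted: bool = False          # soft delete: the row is hidden, never dropped
    audit: list = field(default_factory=list)

    def log(self, event: str) -> None:
        # Append an audit-trail entry with a UTC timestamp.
        self.audit.append((datetime.now(timezone.utc).isoformat(), event))

@dataclass
class LineageLink:
    """Parent/child relationship between two physical-material objects."""
    parent: LabObject
    child: LabObject
    relation: str                  # e.g. "derived_from", "placed_in"

def ancestors(obj: LabObject, links: list) -> list:
    """Walk parent links back to the root (e.g. library -> extract -> specimen)."""
    out, current = [], obj
    while True:
        parents = [l.parent for l in links if l.child is current]
        if not parents:
            return out
        current = parents[0]
        out.append(current)

specimen = LabObject("S1", "specimen.blood")
extract = LabObject("X1", "material.dna_extract")
library = LabObject("L1", "assay.library")
links = [LineageLink(specimen, extract, "derived_from"),
         LineageLink(extract, library, "derived_from")]

library.log("created")
extract.deleted = True             # soft delete keeps lineage history intact
print([a.uid for a in ancestors(library, links)])   # ['X1', 'S1']
```

Note that soft-deleting the extract does not break the library's lineage: the link rows survive, which is what makes "how it changed, and how those changes are linked" answerable after the fact.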

Bloom template definitions are authored as JSON packs under config/tapdb_templates/ and loaded through TapDB. Runtime code should not create generic_template rows directly.
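A template pack is plain JSON; the exact schema is owned by TapDB, but a load-and-validate pass looks roughly like the sketch below. The pack content and field names (`pack`, `templates`, `fields`) are hypothetical, for illustration only:

```python
import json

# Hypothetical pack content; real packs live under config/tapdb_templates/
# and are loaded through TapDB, never inserted as generic_template rows by hand.
pack_text = """
{
  "pack": "containers.v1",
  "templates": [
    {"name": "container.plate.96well", "fields": {"rows": 8, "cols": 12}},
    {"name": "container.tube.1_5ml",  "fields": {"volume_ul": 1500}}
  ]
}
"""

pack = json.loads(pack_text)

def validate_pack(pack: dict) -> list:
    """Return the template names in a pack, raising if required keys are missing."""
    names = []
    for tmpl in pack["templates"]:
        if "name" not in tmpl or "fields" not in tmpl:
            raise ValueError(f"malformed template entry: {tmpl}")
        names.append(tmpl["name"])
    return names

print(validate_pack(pack))   # ['container.plate.96well', 'container.tube.1_5ml']
```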

Runtime Shape

  • app entrypoint: main.py
  • app factory: bloom_lims.app:create_app
  • CLI: bloom
  • main CLI groups: server, db, config, info, integrations, quality, test, users
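Bloom's actual CLI is Typer-based; the group layout listed above can be mirrored with a stdlib `argparse` sketch just to show the shape. The group and command names come from this README, but the wiring (and which flags each command takes beyond `--port` and `--force`) is illustrative:

```python
import argparse

# Mirror the documented `bloom` CLI groups with argparse subparsers.
GROUPS = {
    "server": ["start"],
    "db": ["init", "seed", "reset", "nuke"],
    "config": [], "info": [], "integrations": [],
    "quality": [], "test": [], "users": [],
}

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="bloom")
    groups = parser.add_subparsers(dest="group", required=True)
    for group, commands in GROUPS.items():
        sub = groups.add_parser(group)
        if commands:
            cmds = sub.add_subparsers(dest="command", required=True)
            for name in commands:
                cmd = cmds.add_parser(name)
                if (group, name) == ("server", "start"):
                    cmd.add_argument("--port", type=int, default=8912)
                if (group, name) == ("db", "nuke"):
                    cmd.add_argument("--force", action="store_true")
    return parser

args = build_parser().parse_args(["server", "start", "--port", "8912"])
print(args.group, args.command, args.port)   # server start 8912
```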

Integration Boundaries

  • Atlas provides intake and fulfillment context
  • Dewey may register or resolve artifacts when enabled
  • Ursa consumes sequencing context downstream
  • Zebra Day supports label-print workflows

Visual Tour

Bloom is unusually UI-heavy for a service repo, so the README keeps a few representative screenshots.

Graph And Metrics

(screenshot: Bloom graph and metrics view)

Accessioning

(screenshot: Bloom accessioning workflow)

Object Detail

(screenshot: Bloom object detail view)

Cost Estimates

Approximate only.

  • Local development: workstation plus a local database.
  • Small shared environment: usually the cost of the Dayhoff-managed host/database footprint, not Bloom-specific code.
  • Integration-heavy environments increase operator cost when printers, tracking, TLS, and shared auth are enabled, but Bloom still tends to be a service inside a broader stack budget rather than a standalone large spend item.

Development Notes

  • Canonical local entry path: source ./activate <deploy-name>
  • Use bloom ... as the main operational interface
  • Use tapdb ... only for shared DB/runtime work Bloom explicitly delegates
  • Use daycog ... only for shared Cognito work Bloom explicitly delegates
  • bloom db reset tears down and then rebuilds the database; bloom db nuke performs only the destructive schema reset and stops there

Useful checks:

source ./activate <deploy-name>
bloom --help
pytest -q

Sandboxing

  • Safe: docs work, code reading, tests, bloom --help, and local-only validation against disposable local runtimes
  • Local-stateful: bloom db init, bloom db seed, bloom db reset, and bloom db nuke
  • Requires extra care: Cognito lifecycle, external tracking integrations, printer integrations, and any Dayhoff-managed deployed environment flows
