Templated Abstract Polymorphic LIMS - A Laboratory Information Management System
Bloom: Templated Abstract Polymorphic (and opinionated) LIMS
A conceptual gambit in collaboration with AI /// Pre-Production Release
Version: Dynamically fetched from GitHub Releases (current: v0.10.7)
Built from first principles and drawing on 30 years' experience scaling laboratory processes. Constructed with as few object-model shortcuts as I could manage (I believe these shortcuts are among the main reasons LIMS nearly universally disappoint). Supports both arbitrary and prescribed interacting objects. Intended for use by small- to factory-scale laboratories, in regulated environments, for both research and operations use cases. Bloom can handle multiple areas a LIS tends to touch: accessioning, lab processes, specimen/sample management, equipment, and regulatory & compliance.
Table of Contents
- Spoilers (Screenshots)
- Executive Summary
- Features
- Installation
- System Architecture
- Core Data Model
- Subjects vs Objects
- Database Schema
- Object Hierarchy
- Template System
- Workflow Engine
- Action System
- File Management
- API Layer
- Web Interface
- External Integrations
- Configuration
- Deployment
- Testing
- Regulatory & Compliance
- Design Principles
- Dev Tools
- Support, Authors & License
Spoilers
bloom early peeks
OAuth2 Authentication w/ All Major Social Providers
and flexible whitelisting, etc...
Graph Object View (add, remove, edit, take actions, explore)
Interactive, Dynamic Metrics
Accessioning Modalities
Nested Assay / Queue / Workset
Instantiate Objects From Available Templates
Object Detail
Specialized Object Detail Views
Labware (ie: a 96w plate)
Bloom will natively support arbitrarily defined labware; a 96w plate is just one example. Anything that nested arrays of arrays can describe can be configured as a type of labware with next to no effort!
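To make that idea concrete, here is a minimal sketch (illustrative only; `labware_addresses` is not part of Bloom's API) of deriving well positions from nothing more than a row/column shape:

```python
from string import ascii_uppercase

def labware_addresses(rows: int, cols: int) -> list[str]:
    """Generate row-major well addresses (A1 ... H12) for a rows x cols layout."""
    return [f"{ascii_uppercase[r]}{c + 1}" for r in range(rows) for c in range(cols)]

# A 96w plate is just the 8 x 12 case; a 384w plate would be labware_addresses(16, 24)
wells_96 = labware_addresses(8, 12)
```

Any shape expressible as nested arrays reduces to this kind of enumeration, which is why new labware types need so little configuration.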
Exhaustive & Comprehensive Audit Trails (+soft deletes only)
Bells And Whistles
Integrated Barcode Label Printing For All Objects
Workflows Available
Accessioning
Package receipt -> kit registration (multiple) -> specimen registration (multiple) -> requisition capture & association -> adding specimens to assay queues. FedEx tracking details are fetched; barcode printing is available.
Plasma Isolation -> DNA Extraction -> DNA Quant
managing all object relationships, tracking all details, printing labels, etc.
Executive Summary
BLOOM (Bioinformatics Laboratory Operations and Object Management) is a Laboratory Information Management System (LIMS) designed for managing laboratory workflows, sample tracking, and data management. The system is built on a flexible, template-driven architecture that allows laboratories to define custom object types, workflows, and actions without code changes.
Specification
Features
- Template-driven object creation: All laboratory objects (containers, samples, workflows) are created from JSON templates
- Hierarchical lineage tracking: Parent-child relationships between all objects with full audit trail
- Flexible workflow engine: Configurable multi-step workflows with queue management
- Action system: Extensible action framework for object state transitions and operations
- File management: S3-compatible file storage with metadata tracking
- Barcode/label printing: Integration with Zebra label printers via zebra_day
- FedEx tracking: Package tracking integration via fedex_tracking_day
- Multi-interface support: FastAPI REST API plus standard and admin web UIs
- Unified Search v2: Cross-record search (instance/template/lineage/audit) with JSON/TSV export (docs/SEARCH_V2.md)
Technology Stack
- Language: Python 3.12+
- Database Runtime: daylily-tapdb (local PostgreSQL for dev/test, Aurora PostgreSQL for prod)
- Web Frameworks: FastAPI (primary)
- Storage: AWS S3 / Supabase Storage
- Authentication: AWS Cognito via daylily-cognito/daycog
- Label Printing: zebra_day library
- Package Tracking: fedex_tracking_day library
- Validation: Pydantic v2 with pydantic-settings
- Schema Management: TapDB-managed schema lifecycle
Feature Roadmap
Completed ✅
Core Infrastructure
- Domain-Driven Architecture: Clean separation into 8 domain modules (`bloom_lims/domain/`)
- Database Lifecycle: TapDB-managed schema and runtime operations (`tapdb db ...`, `tapdb pg ...`)
- Pydantic Schema Validation: 10 schema modules for comprehensive input validation (`bloom_lims/schemas/`)
- Structured Exception Handling: Typed exception hierarchy (`bloom_lims/exceptions.py`, `bloom_lims/core/exceptions.py`)
- Session Management: Context managers, `_TransactionContext`, proper rollback in `BLOOMdb3`
- API Versioning: `/api/v1/` prefix structure with version negotiation (`bloom_lims/api/versioning.py`)
- Health Check Endpoints: Kubernetes-ready probes at `/health`, `/health/live`, `/health/ready`, `/health/metrics`
- Dynamic Version Management: Version pulled from GitHub releases (`bloom_lims/_version.py`)
Workflow Engine
- Template-Driven Object Creation: All objects created from JSON templates without code changes
- Hierarchical Lineage Tracking: Full parent-child relationships with comprehensive audit trail
- Multi-Step Workflow Engine: Configurable workflows with queue management
- Action System: Extensible framework for object state transitions and operations
- Operational Workflows: Accessioning → Plasma Isolation → DNA Extraction → Quant pipeline
Authentication & Security
- OAuth2/Supabase Authentication: Enterprise-grade auth with Google, GitHub, and social providers
- Domain Whitelisting: Flexible access control configuration
- JWT Token Validation: Secure API authentication
File Management
- S3-Compatible Storage: AWS S3 and Supabase Storage support
- File Sets: Grouping related files with metadata tracking
- Dewey File Manager: Organized file intake, storage, and retrieval system
External Integrations
- Zebra Label Printing: Full barcode printing via zebra_day library
- FedEx Tracking: Package tracking integration via fedex_tracking_day
- Graph Visualization: Cytoscape integration for complex relationship exploration
Developer Experience
- Cross-Platform CI/CD: GitHub Actions for macOS, Ubuntu, CentOS
- Comprehensive Logging: Structured logging with rotation
- CLI Tools: `bloom` command-line interface (TapDB-backed DB commands)
- Interactive Shell: `bloom_shell.py` for development
In Queue 📋
Performance & Scale
- Caching Layer Integration: Redis/memcached distributed caching backend (`bloom_lims/core/cache_backends.py`)
- Async Operations: Non-blocking operations for high-throughput automation
- Rate Limiting: API request limiting middleware
- Batch Operations: Bulk processing API endpoints
- Read Session Compatibility: TapDB-backed read/write session router compatibility (`bloom_lims/core/read_replicas.py`)
User Engagement
- Plugin Architecture: Custom extensions without core code changes
- Workflow Orchestration: Airflow/Prefect integration for automation
- Enhanced Reporting/Analytics: Built-in insights and dashboards
- Mobile/Tablet Optimization: Responsive lab-friendly interface
- GraphQL API: Flexible queries for complex many-to-many relationships
Infrastructure
- Multi-Tenancy Support: Schema-per-tenant isolation
- Secrets Management: HashiCorp Vault / AWS Secrets Manager integration
- Observability Stack: OpenTelemetry, Prometheus metrics, distributed tracing
- Development Containers: devcontainer configuration for consistent environments
Content
- Template Library Expansion: More out-of-box templates for common lab workflows
- User Documentation: Comprehensive guides and tutorials
- Contributor Guide: Documentation for community contributions
2. System Architecture
2.1 High-Level Architecture
```mermaid
%%{init: {
  "flowchart": {"defaultRenderer": "elk"}
}}%%
flowchart TB
    subgraph BLOOM["BLOOM LIMS"]
        subgraph Presentation["Presentation Layer"]
            FastAPI["FastAPI API<br/>(Port 8000)"]
            CLI["CLI Tools"]
        end
        subgraph Business["Business Logic Layer"]
            BloomObj["BloomObj<br/>(bobjs.py)"]
            BloomWF["BloomWorkflow<br/>Step"]
            BloomFile["BloomFile<br/>Set"]
            BloomEquip["BloomEquipment"]
        end
        subgraph DataAccess["Data Access Layer"]
            BLOOMdb3["BLOOMdb3 (db.py)<br/>- SQLAlchemy Session Management<br/>- Connection Pooling<br/>- Transaction Management"]
        end
        subgraph ORM["ORM Models (bdb.py)"]
            BloomObjModel["BloomObj Model"]
            GenericLineage["GenericLineage"]
            EquipmentInst["EquipmentInst"]
            DataLineage["DataLineage"]
        end
    end
    subgraph DB["PostgreSQL Database"]
        bloom_obj["bloom_obj"]
        generic_lineage["generic_instance_lineage"]
        equipment["equipment_instance"]
        data_lineage["data_lineage"]
    end
    FastAPI --> Business
    CLI --> Business
    Business --> DataAccess
    DataAccess --> ORM
    ORM --> DB
```
2.2 Module Organization
```
bloom_lims/
├── bdb.py       # SQLAlchemy ORM models and base classes
├── db.py        # Database connection and session management (BLOOMdb3)
├── bobjs.py     # Business logic classes (BloomObj, BloomWorkflow, etc.)
├── bfile.py     # File management (BloomFile, BloomFileSet)
├── bequip.py    # Equipment management (BloomEquipment)
├── env.py       # Environment configuration
├── config/      # Configuration files
│   ├── assay_config.yaml
│   └── fedex_config.yaml
└── templates/   # Jinja2 HTML templates for Flask UI
```
2.3 Entry Points
| Entry Point | File | Port | Purpose |
|---|---|---|---|
| Flask UI | `bloom_lims/bkend/bkend.py` | 5000 | Web-based user interface |
3. Core Data Model
3.1 BloomObj - The Universal Object
Every entity in BLOOM is a BloomObj. This includes:
- Templates: Blueprint definitions for creating instances
- Instances: Actual laboratory objects created from templates
- Containers: Tubes, plates, wells, boxes
- Content: Samples, specimens, reagents
- Workflows: Process definitions and instances
- Workflow Steps: Individual steps within workflows
- Equipment: Laboratory instruments and devices
- Files: Uploaded documents and data files
3.2 Object Classification Hierarchy
Objects are classified using a four-level hierarchy:
category / type / subtype / version
Examples:
- `container/tube/tube-generic-10ml/1.0`
- `content/sample/blood-plasma/1.0`
- `workflow/assay/rare-mendelian/1.0`
- `workflow_step/queue/accessioning/1.0`
- `equipment/instrument/sequencer/1.0`
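As a sketch of how such a code path splits into its four levels (a hypothetical helper, not Bloom's actual parser):

```python
from typing import NamedTuple

class ObjectCode(NamedTuple):
    category: str
    type: str
    subtype: str
    version: str

def parse_code(code: str) -> ObjectCode:
    """Split a 'category/type/subtype/version' path into its four levels."""
    parts = code.strip("/").split("/")
    if len(parts) != 4:
        raise ValueError(f"expected 4 segments, got {len(parts)}: {code!r}")
    return ObjectCode(*parts)

parse_code("container/tube/tube-generic-10ml/1.0")
```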
3.3 Template vs Instance
| Aspect | Template | Instance |
|---|---|---|
| `is_template` | `True` | `False` |
| `template_uuid` | NULL | Points to template |
| Purpose | Define structure | Represent real objects |
| `json_addl` | Contains `instantiation_layouts` | Contains properties, actions |
3.4 Subjects vs Objects
BLOOM distinguishes between Objects (facts) and Subjects (decision scopes):
Objects (Facts)
Objects represent concrete, immutable facts about the laboratory:
- A tube exists with EUID `CX123`
- A sample was collected on 2024-01-15
- A sequencing run produced file `run_001.fastq`
Objects are the physical or digital entities that exist in the laboratory.
Subjects (Decision Scopes)
Subjects are logical aggregates that decisions apply to. They span multiple objects and provide context for:
- Clinical decisions (e.g., "this accession is reportable")
- Workflow decisions (e.g., "this analysis bundle passed QC")
- Regulatory decisions (e.g., "this report was signed out")
Subject Types
| Subject Kind | Description | Example Use Case |
|---|---|---|
| `accession` | Decision scope for an accession/intake bundle | Clinical reporting decisions |
| `analysis_bundle` | Decision scope for analysis result bundles | QC pass/fail decisions |
| `report` | Decision scope for clinical reportable units | Sign-out decisions |
| `generic` | Fallback for custom use cases | Custom workflows |
Subject Relationships
```mermaid
graph LR
    S[Subject SX1] -->|subject_anchor| A[Accession CX123]
    S -->|subject_member| T1[Tube CX124]
    S -->|subject_member| T2[Tube CX125]
    S -->|subject_member| F[FileSet FX456]
```
- Anchor: The primary object that defines the subject (one per subject)
- Members: Additional objects associated with the subject (many per subject)
Using Subjects
```python
from bloom_lims.subjecting import create_subject, add_subject_members, list_subjects_for_object

# Create a subject with an anchor
subject_euid = create_subject(
    bob=bloom_obj,
    anchor_euid="CX123",
    subject_kind="accession",
)

# Add member objects
add_subject_members(bob, subject_euid, ["CX124", "CX125", "FX456"])

# Find all subjects containing an object
subjects = list_subjects_for_object(bob, "CX123")
```
Key Design Principles
- Idempotency: Creating a subject with the same anchor and kind returns the existing subject
- Stable Keys: Subject keys are deterministic: `{subject_kind}:{anchor_euid}`
- Separation of Concerns: Objects store facts; Subjects store decision context
- Audit Trail: All subject relationships are tracked via lineage records
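The stable-key rule can be illustrated with a one-line sketch (the real logic lives in `bloom_lims.subjecting`; this helper is only for illustration):

```python
def subject_key(subject_kind: str, anchor_euid: str) -> str:
    """Deterministic subject key: the same anchor + kind always yields the same key."""
    return f"{subject_kind}:{anchor_euid}"

# Re-creating a subject for the same anchor and kind resolves to the same key,
# which is what makes create_subject idempotent.
key = subject_key("accession", "CX123")  # "accession:CX123"
```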
4. Database Schema
4.1 Primary Tables
bloom_obj - Core Object Table
| Column | Type | Description |
|---|---|---|
| `uuid` | UUID | Primary key |
| `euid` | Text | Enterprise Unique Identifier (human-readable, variable length) |
| `name` | String(400) | Object name |
| `category` | String(100) | Top-level classification |
| `type` | String(100) | Object type |
| `subtype` | String(100) | Object subtype |
| `version` | String(100) | Version string |
| `is_template` | Boolean | True if this is a template |
| `is_singleton` | Boolean | True if only one instance allowed |
| `template_uuid` | UUID | Reference to template (for instances) |
| `json_addl` | JSONB | Flexible JSON storage for properties, actions, etc. |
| `bstatus` | String(100) | Object status (active, complete, destroyed, etc.) |
| `bstate` | String(100) | Object state |
| `is_deleted` | Boolean | Soft delete flag |
| `created_dt` | DateTime | Creation timestamp |
| `modified_dt` | DateTime | Last modification timestamp |
| `created_by` | String | Creator username |
| `modified_by` | String | Last modifier username |
| `audit_comment` | String | Audit trail comment |
| `polymorphic_discriminator` | String | For SQLAlchemy inheritance |
generic_instance_lineage - Object Relationships
| Column | Type | Description |
|---|---|---|
| `uuid` | UUID | Primary key |
| `parent_instance_uuid` | UUID | Parent object UUID |
| `child_instance_uuid` | UUID | Child object UUID |
| `relationship_type` | String | Type of relationship |
| `created_dt` | DateTime | Creation timestamp |
| `is_deleted` | Boolean | Soft delete flag |
| `polymorphic_discriminator` | String | For inheritance |
equipment_instance - Equipment Records
| Column | Type | Description |
|---|---|---|
| `uuid` | UUID | Primary key |
| `euid` | Text | Enterprise Unique Identifier (human-readable) |
| `name` | String(400) | Equipment name |
| `equipment_type` | String(100) | Type of equipment |
| `json_addl` | JSONB | Equipment properties |
| `bstatus` | String(100) | Equipment status |
| `is_deleted` | Boolean | Soft delete flag |
data_lineage - Data Provenance
| Column | Type | Description |
|---|---|---|
| `uuid` | UUID | Primary key |
| `parent_data_uuid` | UUID | Parent data UUID |
| `child_data_uuid` | UUID | Child data UUID |
| `relationship_type` | String | Type of data relationship |
4.2 EUID Format
The Enterprise Unique Identifier (EUID) is a human-readable identifier designed for laboratory operations:
Format: [PREFIX][SEQUENCE_NUMBER]
Examples: CX1, CX12, CX123, WX1000, CWX5, MRX42
Components:
- PREFIX: 2-3 uppercase letter code identifying object type
- SEQUENCE_NUMBER: Integer with NO leading zeros (critical LIMS design principle)
EUID Prefixes by Object Type:
| Prefix | Object Type | Description |
|---|---|---|
| `GT` | Template | Generic templates |
| `GL` | Lineage | Instance lineage records |
| `CX` | Container | Tubes, plates, racks, etc. |
| `CWX` | Well | Plate wells |
| `MX` | Content | Samples, specimens |
| `MRX` | Reagent | Reagent contents |
| `MCX` | Control | Control contents |
| `WX` | Workflow | Workflow instances |
| `WSX` | Workflow Step | Workflow step instances |
| `QX` | Queue | Queue instances |
| `TRX` | Test Requisition | Test requisitions |
| `EX` | Equipment | Equipment instances |
| `DX` | Data | Data instances |
| `AY` | Assay | Assay workflows |
| `FI` | File | File instances |
| `FS` | File Set | File set instances |
| `GX` | Generic | Generic instances |
Design Principles:
- EUIDs start with a prefix to make them human-readable for lab operations
- The numeric portion MUST NOT have leading zeros (e.g., `CX1` not `CX001`)
- Variable length - grows with sequence number
- Prefixes are defined in `bloom_lims/config/{category}/metadata.json`
- Generated by the PostgreSQL trigger function `set_generic_instance_euid()` in the TapDB schema (`tapdb_schema.sql`)
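The format rules above can be expressed as a small validation sketch (illustrative only; real EUIDs are generated server-side by the trigger function):

```python
import re

# 2-3 uppercase letters, then an integer with no leading zeros
EUID_RE = re.compile(r"^(?P<prefix>[A-Z]{2,3})(?P<seq>[1-9][0-9]*)$")

def parse_euid(euid: str) -> tuple[str, int]:
    """Split an EUID into (prefix, sequence number), rejecting leading zeros."""
    m = EUID_RE.match(euid)
    if not m:
        raise ValueError(f"not a valid EUID: {euid!r}")
    return m.group("prefix"), int(m.group("seq"))

parse_euid("CX123")   # ("CX", 123)
parse_euid("MRX42")   # ("MRX", 42)
# parse_euid("CX001") raises ValueError: leading zeros are not allowed
```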
5. Object Hierarchy
5.1 Categories
| Category | Description | Examples |
|---|---|---|
| `container` | Physical containers | tubes, plates, wells, boxes |
| `content` | Material contents | samples, specimens, reagents |
| `workflow` | Process definitions | assays, accessioning workflows |
| `workflow_step` | Workflow components | queues, processing steps |
| `equipment` | Laboratory equipment | sequencers, thermocyclers |
| `file` | Digital files | data files, reports |
| `file_set` | File collections | result sets, batch uploads |
| `data` | Data records | measurements, results |
| `control` | Control samples | positive/negative controls |
| `test_requisition` | Test orders | clinical test requests |
5.2 Container Types
```
container/
├── tube/
│   ├── tube-generic-10ml/1.0
│   ├── tube-cryovial/1.0
│   └── tube-blood-collection/1.0
├── plate/
│   ├── fixed-plate-24/1.0
│   ├── fixed-plate-96/1.0
│   └── fixed-plate-384/1.0
├── well/
│   └── well-standard/1.0
├── box/
│   ├── box-81-position/1.0
│   └── box-freezer/1.0
└── rack/
    └── rack-tube/1.0
```
5.3 Workflow Structure
```
workflow (assay instance)
├── workflow_step (queue: accessioning)
│   └── workset (batch of samples)
│       └── containers/samples
├── workflow_step (queue: extraction)
│   └── workset
│       └── containers/samples
├── workflow_step (queue: library-prep)
│   └── workset
│       └── containers/samples
└── workflow_step (queue: sequencing)
    └── workset
        └── containers/samples
```
6. Template System
6.1 Template JSON Structure
Templates are stored in `json_addl` with the following structure:

```json
{
  "properties": {
    "name": "Template Name",
    "description": "Template description",
    "lab_code": "LAB001"
  },
  "instantiation_layouts": [
    {
      "container/well/well-standard/1.0/": {
        "json_addl": {
          "cont_address": {
            "name": "A1",
            "row": "A",
            "col": "1"
          }
        }
      }
    }
  ],
  "actions": {},
  "action_groups": {}
}
```
6.2 Template Loading
Templates are loaded from JSON files in bloom_lims/templates/ directory:
```python
# Load template from file
bobj = BloomObj(BLOOMdb3())
template = bobj.create_template_from_json_file("path/to/template.json")

# Or create from code string
template = bobj.create_template_by_code("container/plate/fixed-plate-96/1.0")
```
6.3 Instance Creation
```python
# Create instance from template EUID
bobj = BloomObj(BLOOMdb3())
instances = bobj.create_instances(template_euid)
# Returns: [[parent_instance], [child_instances...]]
# For a plate: [[plate], [well1, well2, ..., well96]]

# Create instance by code path
instance = bobj.create_instance_by_code(
    "container/tube/tube-generic-10ml/1.0",
    {"json_addl": {"properties": {"name": "My Tube"}}}
)
```
7. Workflow Engine
7.1 Workflow Components
| Component | Description | Class |
|---|---|---|
| Workflow | Top-level process definition | BloomWorkflow |
| Workflow Step | Individual step/queue | BloomWorkflowStep |
| Workset | Batch of items in a queue | Part of workflow_step |
| Action | Operations on objects | Defined in json_addl |
7.2 Workflow Lifecycle
```mermaid
stateDiagram-v2
    [*] --> created
    created --> in_progress: Start Work
    in_progress --> complete: Finish Successfully
    in_progress --> abandoned: Cancel/Stop
    in_progress --> failed: Error Occurred
    complete --> [*]
    abandoned --> [*]
    failed --> [*]
```
7.3 Status Values
| Status | Description |
|---|---|
| `created` | Initial state after creation |
| `in_progress` | Work has started |
| `complete` | Successfully finished |
| `abandoned` | Cancelled/stopped |
| `failed` | Error occurred |
| `destroyed` | Object destroyed (containers) |
| `active` | Currently active |
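Taken together, the lifecycle diagram and status table imply a small transition map. A sketch of how it could be encoded (not Bloom's actual enforcement code):

```python
# Allowed transitions, transcribed from the workflow lifecycle diagram
TRANSITIONS: dict[str, set[str]] = {
    "created": {"in_progress"},
    "in_progress": {"complete", "abandoned", "failed"},
    "complete": set(),   # terminal
    "abandoned": set(),  # terminal
    "failed": set(),     # terminal
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the workflow status change is allowed by the lifecycle."""
    return target in TRANSITIONS.get(current, set())
```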
7.4 BloomWorkflow Class
```python
class BloomWorkflow(BloomObj):
    """Manages workflow instances and their lifecycle."""

    def create_empty_workflow(self, template_euid):
        """Create a new workflow instance from template."""
        return self.create_instances(template_euid)

    def do_action(self, wf_euid, action, action_group, action_ds={}):
        """Execute an action on a workflow."""
        # Supported actions:
        # - do_action_create_and_link_child
        # - do_action_create_package_and_first_workflow_step
        # - do_action_destroy_specimen_containers
```
7.5 BloomWorkflowStep Class
```python
class BloomWorkflowStep(BloomObj):
    """Manages individual workflow steps and queues."""

    def do_action(self, wfs_euid, action, action_group, action_ds={}):
        """Execute an action on a workflow step."""
        # Supported actions:
        # - do_action_create_and_link_child
        # - do_action_create_input
        # - do_action_create_child_container_and_link_child_workflow_step
        # - do_action_create_test_req_and_link_child_workflow_step
        # - do_action_add_container_to_assay_q
        # - do_action_fill_plate_undirected
        # - do_action_fill_plate_directed
        # - do_action_link_tubes_auto
        # - do_action_cfdna_quant
        # - do_action_stamp_copy_plate
        # - do_action_log_temperature
```
8. Action System
8.1 Action Structure in json_addl
Actions are defined in the json_addl field of objects:
```json
{
  "action_groups": {
    "status_actions": {
      "label": "Status Actions",
      "actions": {
        "set_in_progress": {
          "label": "Start Work",
          "action_enabled": "1",
          "method_name": "do_action_set_object_status",
          "captured_data": {
            "object_status": "in_progress"
          }
        },
        "set_complete": {
          "label": "Mark Complete",
          "action_enabled": "1",
          "method_name": "do_action_set_object_status",
          "captured_data": {
            "object_status": "complete"
          }
        }
      }
    }
  },
  "actions": {
    "print_label": {
      "label": "Print Barcode Label",
      "action_enabled": "1",
      "method_name": "do_action_print_barcode_label",
      "lab": "main_lab",
      "printer_name": "zebra_1",
      "label_style": "2x1_basic"
    }
  }
}
```
8.2 Available Action Methods
Global Actions (BloomObj)
| Method | Description |
|---|---|
| `do_action_set_object_status` | Change object status |
| `do_action_print_barcode_label` | Print barcode label |
| `do_action_destroy_specimen_containers` | Mark containers as destroyed |
| `do_action_create_package_and_first_workflow_step_assay` | Create package workflow |
| `do_action_move_workset_to_another_queue` | Move workset between queues |
| `do_stamp_plates_into_plate` | Stamp multiple plates into one |
| `do_action_download_file` | Download file from storage |
| `do_action_add_file_to_file_set` | Add file to file set |
| `do_action_remove_file_from_file_set` | Remove file from file set |
| `do_action_add_relationships` | Create lineage relationships |
Workflow Step Actions (BloomWorkflowStep)
| Method | Description |
|---|---|
| `do_action_create_and_link_child` | Create child object and link |
| `do_action_create_input` | Create input object |
| `do_action_create_child_container_and_link_child_workflow_step` | Create container with workflow step |
| `do_action_create_test_req_and_link_child_workflow_step` | Create test requisition |
| `do_action_add_container_to_assay_q` | Add container to assay queue |
| `do_action_fill_plate_undirected` | Fill plate without position mapping |
| `do_action_fill_plate_directed` | Fill plate with position mapping |
| `do_action_link_tubes_auto` | Auto-link tubes |
| `do_action_cfdna_quant` | cfDNA quantification action |
| `do_action_stamp_copy_plate` | Create plate copy |
| `do_action_log_temperature` | Log temperature reading |
8.3 Action Execution Flow
```python
# 1. Get object and action definition
bobj = BloomObj(BLOOMdb3())
obj = bobj.get_by_euid(euid)
action_ds = obj.json_addl["action_groups"][action_group]["actions"][action]

# 2. Add captured data from user input
action_ds["captured_data"] = user_input_data
action_ds["curr_user"] = current_user

# 3. Execute action
result = bobj.do_action(euid, action, action_group, action_ds)

# 4. Action records execution in json_addl["action_log"]
```
8.4 Action Logging
Every action execution is logged:
```json
{
  "action_log": [
    {
      "action": "set_in_progress",
      "action_group": "status_actions",
      "timestamp": "2024-01-15T10:30:00",
      "user": "lab_tech_1",
      "captured_data": {
        "object_status": "in_progress"
      }
    }
  ]
}
```
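A sketch of how such an entry could be appended (illustrative; Bloom's actual logging happens inside `do_action`, and `append_action_log` is a hypothetical helper):

```python
from datetime import datetime, timezone

def append_action_log(json_addl: dict, action: str, action_group: str,
                      user: str, captured_data: dict) -> dict:
    """Append an entry matching the action_log shape shown above."""
    json_addl.setdefault("action_log", []).append({
        "action": action,
        "action_group": action_group,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "captured_data": captured_data,
    })
    return json_addl

ja = append_action_log({}, "set_in_progress", "status_actions",
                       "lab_tech_1", {"object_status": "in_progress"})
```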
9. File Management
9.1 BloomFile Class
The BloomFile class (bfile.py) manages file uploads and downloads:
```python
class BloomFile(BloomObj):
    """Manages file objects in BLOOM."""

    def upload_file(self, file_path, bucket="bloom-files", metadata=None):
        """Upload file to S3/Supabase storage."""
        # Creates BloomObj record
        # Uploads to storage bucket
        # Returns file EUID

    def download_file(self, euid, save_path="./", include_metadata=False):
        """Download file from storage."""
        # Retrieves file from storage
        # Optionally includes metadata JSON

    def get_file_metadata(self, euid):
        """Get file metadata without downloading."""
```
9.2 BloomFileSet Class
Groups related files together:
```python
class BloomFileSet(BloomObj):
    """Manages collections of files."""

    def create_file_set(self, name, description=None):
        """Create a new file set."""

    def add_files_to_file_set(self, euid, file_euid):
        """Add files to an existing file set."""

    def remove_files_from_file_set(self, euid, file_euid):
        """Remove files from a file set."""

    def get_files_in_set(self, euid):
        """Get all files in a file set."""
```
9.3 Storage Configuration
Files are stored in S3-compatible storage (AWS S3 or Supabase Storage):
```python
import os

# Environment variables for storage
SUPABASE_URL = os.getenv("SUPABASE_URL")
SUPABASE_KEY = os.getenv("SUPABASE_KEY")
AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")
S3_BUCKET = os.getenv("S3_BUCKET", "bloom-files")
```
10. API Layer
10.1 FastAPI REST API (v1)
The FastAPI backend provides versioned REST API access at /api/v1/. All endpoints support pagination and filtering.
API modules are organized in bloom_lims/api/v1/:
Object Operations (/api/v1/objects)
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/objects/` | GET | List objects with filters (type, subtype, status, name_contains) |
| `/api/v1/objects/{euid}` | GET | Get object by EUID |
| `/api/v1/objects/` | POST | Create new object |
Container Operations (/api/v1/containers)
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/containers/` | GET | List containers (filter by type, subtype, status) |
| `/api/v1/containers/{euid}` | GET | Get container (optionally include_contents) |
| `/api/v1/containers/` | POST | Create container from template |
| `/api/v1/containers/{euid}/contents` | POST | Add content to container |
Content Operations (/api/v1/content)
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/content/` | GET | List content items |
| `/api/v1/content/{euid}` | GET | Get content by EUID |
Workflow Operations (/api/v1/workflows)
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/workflows/` | GET | List workflows (filter by status, workflow_type) |
| `/api/v1/workflows/{euid}` | GET | Get workflow details |
| `/api/v1/workflows/{euid}/advance` | POST | Advance workflow to next step |
File Operations (/api/v1/files)
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/files/` | GET | List files (filter by file_type, status) |
| `/api/v1/files/{euid}` | GET | Get file metadata |
| `/api/v1/files/` | POST | Create file record (with optional upload) |
| `/api/v1/files/{file_euid}/link/{parent_euid}` | POST | Link file to parent object |
Equipment Operations (/api/v1/equipment)
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/equipment/` | GET | List equipment |
| `/api/v1/equipment/{euid}` | GET | Get equipment details |
Authentication (/api/v1/auth)
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/auth/me` | GET | Get current user info |
| `/api/v1/user-tokens` | GET/POST | List/create personal Bloom API tokens |
| `/api/v1/user-tokens/{token_id}` | DELETE | Revoke personal Bloom API token |
| `/api/v1/user-tokens/{token_id}/usage` | GET | View usage for personal token |
| `/api/v1/admin/groups` | GET | Admin group list |
| `/api/v1/admin/groups/{group_code}/members` | GET/POST | Admin group membership management |
| `/api/v1/admin/groups/{group_code}/members/{user_id}` | DELETE | Admin remove group member |
| `/api/v1/admin/user-tokens` | GET | Admin list all tokens |
| `/api/v1/admin/user-tokens/{token_id}` | DELETE | Admin revoke any token |
| `/api/v1/admin/user-tokens/{token_id}/usage` | GET | Admin token usage |
| `/api/v1/external/specimens` | POST | External Atlas-driven specimen create |
| `/api/v1/external/specimens/{specimen_euid}` | GET/PATCH | External specimen get/update |
| `/api/v1/external/specimens/by-reference` | GET | External specimen lookup by Atlas refs |
10.2 Request/Response Format
```
# Example: Create instance from template
POST /api/templates/{template_euid}/instantiate
Content-Type: application/json

{
  "json_addl": {
    "properties": {
      "name": "Sample Tube 001",
      "lab_code": "LAB001"
    }
  }
}

# Response
{
  "success": true,
  "data": {
    "euid": "CX1234",
    "uuid": "550e8400-e29b-41d4-a716-446655440000",
    "name": "Sample Tube 001",
    "category": "container",
    "type": "tube",
    "subtype": "tube-generic-10ml",
    "version": "1.0",
    "bstatus": "created"
  }
}
```
10.3 Authentication
Bloom API authentication supports:

- Session/Cognito auth for interactive/internal use
- Bloom personal API bearer tokens (`blm_...`) for external machine integrations
- Legacy `X-API-Key` only when explicitly enabled in development

Bloom RBAC roles: `INTERNAL_READ_ONLY`, `INTERNAL_READ_WRITE`, `ADMIN`

External token scopes: `internal_ro`, `internal_rw`, `admin`

Token self-service is gated by `API_ACCESS` group membership.

Legacy API key behavior:

- disabled by default
- only available when both:
  - the environment is development
  - `BLOOM_ALLOW_LEGACY_API_KEY=true` is set

See the full integration runbook: docs/AUTH_INTEGRATION.md.

```
# External integration token example
Authorization: Bearer blm_<token>
```
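A minimal client sketch showing how an external integration would attach a personal token (the base URL and token value are placeholders, and `bloom_request` is a hypothetical helper, not part of Bloom):

```python
import urllib.request

def bloom_request(base_url: str, path: str, token: str) -> urllib.request.Request:
    """Build an authenticated Bloom API request; send with urllib.request.urlopen."""
    req = urllib.request.Request(f"{base_url}{path}")
    req.add_header("Authorization", f"Bearer {token}")
    return req

# Placeholder base URL and token
req = bloom_request("http://localhost:8000", "/api/v1/objects/", "blm_example_token")
```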
11. Web Interface
11.1 Flask Application Structure
```
bloom_lims/bkend/
├── bkend.py         # Main Flask application
├── templates/       # Jinja2 templates
│   ├── base.html
│   ├── index.html
│   ├── object_detail.html
│   ├── workflow_view.html
│   └── ...
└── static/          # Static assets
    ├── css/
    ├── js/
    └── images/
```
11.2 Key Routes
| Route | Description |
|---|---|
| `/` | Home page / dashboard |
| `/object/<euid>` | Object detail view |
| `/workflow/<euid>` | Workflow view |
| `/search` | Search interface |
| `/templates` | Template browser |
| `/action/<euid>/<action_group>/<action>` | Action execution |
| `/print/<euid>` | Print barcode label |
11.3 Template Rendering
```python
@app.route('/object/<euid>')
def object_detail(euid):
    bobj = BloomObj(BLOOMdb3())
    obj = bobj.get_by_euid(euid)
    return render_template(
        'object_detail.html',
        obj=obj,
        lineages=obj.parent_of_lineages,
        actions=obj.json_addl.get('actions', {}),
        action_groups=obj.json_addl.get('action_groups', {})
    )
```
12. External Integrations
12.1 Zebra Label Printing (zebra_day)
BLOOM integrates with Zebra label printers for barcode printing:
```python
from zebra_day import ZebraDay

# Configuration in json_addl
{
    "actions": {
        "print_label": {
            "method_name": "do_action_print_barcode_label",
            "lab": "main_lab",
            "printer_name": "zebra_zd420",
            "label_style": "2x1_basic",
            "alt_a": "",  # Custom field A
            "alt_b": "",  # Custom field B
            "alt_c": "",  # Custom field C
        }
    }
}

# Printing execution
def print_label(self, lab, printer_name, label_zpl_style, euid, **kwargs):
    zd = ZebraDay()
    zd.print_label(
        printer=printer_name,
        template=label_zpl_style,
        data={
            "euid": euid,
            "barcode": euid,
            **kwargs,
        },
    )
```
12.2 FedEx Tracking (fedex_tracking_day)
Package tracking integration for shipment management:
```python
from fedex_tracking_day import FedexTracker

# Get tracking information
tracker = FedexTracker()
tracking_data = tracker.get_fedex_ops_meta_ds(tracking_number)

# Returns:
{
    "tracking_number": "1234567890",
    "status": "Delivered",
    "Transit_Time_sec": 172800,
    "delivery_date": "2024-01-15",
    "events": [...]
}
```
12.3 Cognito Authentication
Bloom uses AWS Cognito for single sign-on via the hosted UI. Tokens are validated against the Cognito JWKS before sessions are created.
```python
from auth.cognito.client import get_cognito_auth, CognitoTokenError

cognito = get_cognito_auth()

# Redirect users to the hosted UI login page
login_url = cognito.config.authorize_url

# Validate an incoming ID or access token
try:
    claims = cognito.validate_token(id_token)
    email = claims.get("email")
except CognitoTokenError as exc:
    raise ValueError(f"Invalid Cognito token: {exc}")
```
12.4 AWS S3 Integration
Alternative file storage using AWS S3:
import boto3
s3_client = boto3.client(
's3',
aws_access_key_id=os.getenv('AWS_ACCESS_KEY_ID'),
aws_secret_access_key=os.getenv('AWS_SECRET_ACCESS_KEY')
)
# Upload file (file_name here stands in for the stored file's name)
s3_client.upload_file(
    local_path,
    bucket_name,
    f"bloom/{euid}/{file_name}"
)
# Download file
s3_client.download_file(
    bucket_name,
    f"bloom/{euid}/{file_name}",
    local_path
)
13. Configuration
BLOOM uses a YAML-based configuration system with environment variable overrides.
13.1 Configuration Files
Configuration precedence (highest to lowest):
- Environment variables with the `BLOOM_*` prefix
- User config: `~/.config/bloom/bloom-config.yaml`
- Template defaults: `config/bloom-config-template.yaml`
Setup:
# Copy template to user config directory
mkdir -p ~/.config/bloom
cp config/bloom-config-template.yaml ~/.config/bloom/bloom-config.yaml
# Edit with your settings
$EDITOR ~/.config/bloom/bloom-config.yaml
13.2 Configuration Reference
# ~/.config/bloom/bloom-config.yaml
# Application settings
environment: "development" # development, staging, production, testing
debug: false
# Database
database:
host: "localhost"
port: 5445 # Default BLOOM PostgreSQL port
database: "bloom"
user: "bloom"
password: "" # Leave empty for peer authentication
# Authentication (Cognito)
auth:
cognito_region: "us-east-1"
cognito_user_pool_id: "us-east-1_XXXXXXXXX"
cognito_client_id: "your_app_client_id"
cognito_domain: "your-domain.auth.us-east-1.amazoncognito.com"
cognito_redirect_uri: "http://127.0.0.1:8000/"
cognito_logout_redirect_uri: "http://127.0.0.1:8000/"
cognito_allowed_domains: [] # Empty = allow all
# AWS settings
aws:
profile: "" # AWS profile name (optional)
region: "us-west-2"
# Storage (S3)
storage:
s3_bucket: ""
s3_region: "us-east-1"
# Logging
logging:
level: "INFO"
13.3 Environment Variable Overrides
Override any YAML setting with environment variables using the BLOOM_ prefix and __ for nesting:
export BLOOM_TAPDB__ENV=dev
export BLOOM_TAPDB__DATABASE_NAME=bloom
export BLOOM_AUTH__COGNITO_REGION=us-east-1
export BLOOM_DEBUG=true
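A minimal sketch of how that naming convention maps onto nested YAML keys. `apply_bloom_overrides` is a hypothetical helper for illustration, not BLOOM's actual loader (which may also coerce types):

```python
import os

def apply_bloom_overrides(config: dict, environ=os.environ) -> dict:
    """Merge BLOOM_* environment variables into a nested config dict.

    BLOOM_AUTH__COGNITO_REGION=us-east-1 sets config["auth"]["cognito_region"].
    Values are left as strings; real loaders would coerce booleans/ints.
    """
    for key, value in environ.items():
        if not key.startswith("BLOOM_"):
            continue
        path = key[len("BLOOM_"):].lower().split("__")
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})  # descend, creating levels as needed
        node[path[-1]] = value
    return config

cfg = apply_bloom_overrides(
    {"auth": {"cognito_region": "us-west-2"}, "debug": False},
    {"BLOOM_AUTH__COGNITO_REGION": "us-east-1", "BLOOM_DEBUG": "true"},
)
# cfg["auth"]["cognito_region"] -> "us-east-1"; cfg["debug"] -> "true" (string)
```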
13.4 Database Connection
The active database target is resolved from TapDB runtime context:
from bloom_lims.config import get_tapdb_db_config
cfg = get_tapdb_db_config()
# {'host': 'localhost', 'port': 5445, 'database': 'bloom', ...}
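For illustration, the returned dict can be assembled into a standard PostgreSQL connection URL. `build_db_url` is a hypothetical helper; BLOOM's own connection code may differ:

```python
def build_db_url(cfg: dict) -> str:
    """Assemble a PostgreSQL URL from the config dict shape shown above.

    Illustrative only. Omits the password segment when it is empty
    (e.g. peer authentication).
    """
    auth = cfg["user"]
    if cfg.get("password"):
        auth += f":{cfg['password']}"
    return f"postgresql://{auth}@{cfg['host']}:{cfg['port']}/{cfg['database']}"

url = build_db_url(
    {"host": "localhost", "port": 5445, "database": "bloom",
     "user": "bloom", "password": ""}
)
# -> postgresql://bloom@localhost:5445/bloom
```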
13.5 Printer Configuration
Printer configuration is stored in YAML files:
# config/printers.yaml
labs:
main_lab:
printers:
zebra_zd420:
ip: 192.168.1.100
port: 9100
type: zpl
zebra_zd621:
ip: 192.168.1.101
port: 9100
type: zpl
label_styles:
2x1_basic:
width: 2
height: 1
template: |
^XA
^FO50,50^BY3
^BCN,100,Y,N,N
^FD{euid}^FS
^FO50,180^A0N,30,30^FD{alt_a}^FS
^XZ
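The `{euid}` and `{alt_a}` placeholders in the template suggest plain string substitution. A minimal sketch of rendering the 2x1_basic template, assuming `str.format`-style substitution (the actual rendering path lives in zebra_day):

```python
# The 2x1_basic template from the YAML above, as a Python string.
ZPL_2X1_BASIC = (
    "^XA\n"
    "^FO50,50^BY3\n"
    "^BCN,100,Y,N,N\n"
    "^FD{euid}^FS\n"
    "^FO50,180^A0N,30,30^FD{alt_a}^FS\n"
    "^XZ"
)

# Fill the placeholders with an object's fields.
zpl = ZPL_2X1_BASIC.format(euid="CX1234", alt_a="Sample A")
assert "^FDCX1234^FS" in zpl  # the barcode field carries the EUID
```

The rendered string is what ultimately gets sent to the printer's raw socket (port 9100 in the config above).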
13.6 Assay Configuration
# config/assay_config.yaml
assays:
rare-mendelian:
name: "Rare Mendelian Disease Panel"
version: "1.0"
steps:
- name: accessioning
queue: workflow_step/queue/accessioning/1.0
- name: extraction
queue: workflow_step/queue/extraction/1.0
- name: library_prep
queue: workflow_step/queue/library-prep/1.0
- name: sequencing
queue: workflow_step/queue/sequencing/1.0
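A short sketch of walking this structure once parsed (a dict literal stands in for the YAML file here; `step_queues` is an illustrative helper, not a BLOOM API):

```python
# Mirror of config/assay_config.yaml after parsing (e.g. yaml.safe_load).
assay_config = {
    "assays": {
        "rare-mendelian": {
            "name": "Rare Mendelian Disease Panel",
            "version": "1.0",
            "steps": [
                {"name": "accessioning", "queue": "workflow_step/queue/accessioning/1.0"},
                {"name": "extraction", "queue": "workflow_step/queue/extraction/1.0"},
                {"name": "library_prep", "queue": "workflow_step/queue/library-prep/1.0"},
                {"name": "sequencing", "queue": "workflow_step/queue/sequencing/1.0"},
            ],
        }
    }
}

def step_queues(config: dict, assay_key: str) -> list:
    """Return the ordered queue identifiers for one assay."""
    return [s["queue"] for s in config["assays"][assay_key]["steps"]]

queues = step_queues(assay_config, "rare-mendelian")
# queues[0] -> "workflow_step/queue/accessioning/1.0"
```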
14. Deployment
14.1 Docker Deployment
# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000 8000 8080
CMD ["python", "-m", "bloom_lims.bkend.bkend"]
# docker-compose.yml
version: '3.8'
services:
bloom-web:
build: .
ports:
- "5000:5000"
environment:
- DATABASE_URL=postgresql://bloom:password@db:5432/bloom_lims
depends_on:
- db
bloom-api:
build: .
command: ["uvicorn", "bloom_lims.bkend.fastapi_bkend:app", "--host", "0.0.0.0", "--port", "8000"]
ports:
- "8000:8000"
environment:
- DATABASE_URL=postgresql://bloom:password@db:5432/bloom_lims
depends_on:
- db
db:
image: postgres:15
environment:
- POSTGRES_USER=bloom
- POSTGRES_PASSWORD=password
- POSTGRES_DB=bloom_lims
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
14.2 Database Initialization
# Activate environment and initialize tapdb-managed runtime/schema
source bloom_activate.sh
bloom db init
bloom db seed
14.3 Running Services
# FastAPI (development)
uvicorn bloom_lims.bkend.fastapi_bkend:app --reload --port 8000
# Production with gunicorn
gunicorn -w 4 -b 0.0.0.0:5000 bloom_lims.bkend.bkend:app
15. Installation
Hardware Supported
see build test badges above for all supported platforms
- Mac (14+)
  - `brew install coreutils` is required for the `gtimeout` command needed by some rclone functionality. Run `alias timeout=gtimeout` to use gtimeout with zsh.
  - `brew install mkcert` is required for the `mkcert` command to create a local certificate authority for testing with https.
- Ubuntu 22+
- Centos 9
Prerequisites
Conda
- Conda (you may swap in mamba if you prefer). Installing conda:
  - Be sure `wget` is available to you.
  - Linux (a pinned version): https://repo.anaconda.com/miniconda/Miniconda3-py312_24.5.0-0-Linux-x86_64.sh
  - Linux x86_64: `wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh`
  - Linux arm64: `wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-aarch64.sh`
  - macOS Intel: `wget https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh`
  - macOS ARM: `wget https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh`
- Then execute the downloaded Miniconda .sh script and follow the prompts. When installation completes, run:
  - `~/miniconda3/bin/conda init bash` # newly created shells should not auto-load the conda (base) env
https certificates
local development
# Create local CA (one-time)
mkcert -install
# Generate cert for localhost
mkdir -p certs
mkcert -key-file certs/key.pem -cert-file certs/cert.pem localhost 127.0.0.1
production server
Cognito
Very Quickest Start
assumes you have completed the prerequisites
# Clone the repository
git clone git@github.com:Daylily-Informatics/bloom.git
cd bloom
# Activate the BLOOM environment (use this for all future sessions)
source bloom_activate.sh
# Initialize local TapDB runtime + schema for BLOOM namespace
bloom db init
# Start the Bloom LIMS UI
bloom gui
(Optional) Install & Run pgadmin4 Database Admin UI
# Ensure environment is activated first
source bloom_activate.sh
# RUN TESTS
pytest
# INSTALL pgadmin4 (on localhost:8080)
source bloom_lims/env/install_pgadmin.sh
16. Testing
source bloom_activate.sh
pytest
17. Regulatory & Compliance
CLIA
- There is no reason Bloom cannot be used in a CLIA-regulated environment.
CAP
- Bloom can satisfy all relevant CAP checklist items that apply to it. However, since it is software you run yourself, most checklist items will concern the environment in which you run Bloom.
HIPAA
- If installed in an already HIPAA-compliant environment, Bloom should need little or no additional work to be compliant.
18. Design Principles
Enterprise UIDs
Each Object Has A UUID & UUIDs Are Immutable & UUIDs Are Not Reused Or Applied To Other Objects
- Assigning one UUID to multiple child objects for convenience leads to a mess: once a UUID is shared across objects, knowing the details of any individual object becomes next to impossible.
The UID Identifies The Object Class And The UUID w/in The Class
Exhaustive Metadata About An Object May Be Queried Using The Enterprise UUID.
Metadata may also be printed on labels along with the UUID.
- Keeping metadata out of the UUID formula is a fundamental requirement in building flexible and scalable systems. FUNDAMENTAL.
Trust The Database To Manage The UUIDs
Clear And Concise Data Model
TSVs, Not CSVs
- There are few, if any, compelling reasons to use CSVs over TSVs, and many reasons not to use CSVs.
All LIMS Data Editable w/CRUD UI (and nothing is ever really deleted)
- It is! Fully (though some safeguards are still not in place).
- Soft deletes need to be reviewed more closely.
Easily Configurable Object Definitions As Well As Actions
- Requiring as few code changes as possible.
Other Principles
- Simple
- Scalable
- Secure
- Flexible & Extensible
- Open Source
- Operationally Robust
- Free
- Sustainable
Use Cases
Many To Many Relationships Among All Objects
All other relationships are subsets of many-to-many; designing parts of the LIMS that disallow many-to-many relationships will produce an inflexible system.
Objects May Be Involved In Multiple Workflows Simultaneously
Support For Predefined and Arbitrary Workflows
Objects May All Be: Root (Singleton, Parent & Able To Become A Child At Some Point), Child (Singleton, Parent And Possibly Terminal) Of One Another
Zero Loss Of Data (Comprehensive Audit Trails, Soft Deletes) && 100% Audit Coverage
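A minimal sketch of the soft-delete principle described above, with hypothetical field names (`is_deleted`, `deleted_dt`) rather than BLOOM's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustration of soft deletes: rows are flagged, never removed,
# so the audit trail stays complete. Field names are hypothetical.

@dataclass
class Record:
    euid: str
    is_deleted: bool = False
    deleted_dt: Optional[datetime] = None

def soft_delete(rec: Record) -> None:
    """Mark a record deleted; the row (and its history) survives."""
    rec.is_deleted = True
    rec.deleted_dt = datetime.now(timezone.utc)

def active(records: list) -> list:
    """Default queries exclude soft-deleted rows."""
    return [r for r in records if not r.is_deleted]

rows = [Record("CX1"), Record("CX2")]
soft_delete(rows[0])
# active(rows) returns only CX2; CX1 remains stored for audit purposes
```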
19. Dev Tools
note: all commands below are expected to be run from a shell with the BLOOM environment activated:
source bloom_activate.sh
Drop/Rebuild DB (Destructive)
Use TapDB-backed BLOOM CLI commands:
bloom db reset --yes
bloom db seed
Assay extraction pipeline reseed runbook (HLA 1.2 + Carrier 3.9):
bloom db reset -y
bloom db seed
bloom gui
After reseed, verify each assay workflow has queue steps with these subtypes:
- extraction-batch-eligible
- blood-to-gdna-extraction-eligible
- buccal-to-gdna-extraction-eligible
- input-gdna-normalization-eligible
- illumina-novaseq-libprep-eligible
- ont-libprep-eligible
Build LIMS Workflows With Autogen Objects
Similar to pytest, but more extensive. Useful for development and smoke testing. Run the accessioning/extraction workflow generator:
`python smoke_exams/accession_extract_qant.py <num_iterations> <assay_type>`
- Example: `python smoke_exams/accession_extract_qant.py 2 1` (runs 2 iterations with the HLA-typing assay)
Run the bloom UI
`bloom gui` or `source run_bloomui.sh`
Run the pgadmin UI
source bloom_lims/env/install_pgadmin.sh
Start Interactive Shell w/Core Bloom Objects Instantiated
`bloom shell` or `python bloom_shell.py`
Random Notes
File System Case Sensitivity
macOS Is Not Case Sensitive (by default)
echo "test" > test.log
echo "TEST" > TEST.LOG
more test.log
# OUTPUT: TEST
more TEST.log
# OUTPUT: TEST
- This still shocks me & is worth a reminder.
Ubuntu Is Case Sensitive
echo "test" > test.log
echo "TEST" > TEST.LOG
more test.log
# OUTPUT: test
more TEST.LOG
# OUTPUT: TEST
Assume Case Insensitivity In All File Names
- Since we cannot be certain where files will be reconstituted, we must assume files may be created on a case-insensitive file system when allowing downloads.
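One defensive check this principle implies: before allowing a batch of files to be written to an unknown file system, detect names that would collide case-insensitively. A small illustrative helper (not a BLOOM API):

```python
from collections import defaultdict

def case_insensitive_collisions(names: list) -> list:
    """Group file names that would collide on a case-insensitive
    file system (e.g. default macOS APFS). Illustrative helper."""
    groups = defaultdict(list)
    for name in names:
        groups[name.lower()].append(name)  # fold case to find clashes
    return [g for g in groups.values() if len(g) > 1]

collisions = case_insensitive_collisions(["test.log", "TEST.LOG", "run.tsv"])
# -> [["test.log", "TEST.LOG"]]
```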
Bloom UUIDs and EUIDs Are Safe As File Names
A widely adopted UUID spec, RFC 4122 (the one Postgres uses), treats uppercase and lowercase hex characters as equivalent. Bloom EUIDs contain only uppercase characters in a prefix followed by integers, so neither identifier can collide case-insensitively.
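That EUID shape can be checked mechanically. A hedged sketch (the exact prefix alphabet is defined by BLOOM's templates, not assumed here):

```python
import re

# Validate the documented EUID shape: an uppercase letter prefix followed
# by an integer sequence number, e.g. CX1234 or WX100.
EUID_RE = re.compile(r"[A-Z]+\d+")

def is_safe_euid_filename(euid: str) -> bool:
    """EUIDs of this shape cannot collide on case-insensitive file systems."""
    return EUID_RE.fullmatch(euid) is not None

assert is_safe_euid_filename("CX1234")
assert not is_safe_euid_filename("cx1234")  # lowercase prefix is invalid
```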
20. Support
No promises, please file issues to log a bug or request a feature.
Authors
- John Major (LinkedIn), aka iamh2o (GitHub)
- Josh Durham
- Adam Tracy
Deployment & Maintenance
You may deploy bloom wherever it will run. This does mean you are responsible for all aspects of the deployment, including security, backups (AND recovery), performance optimization, monitoring, etc. This need not be daunting. I am available for consulting on these topics.
License
- MIT
References // Acknowledgments
- ChatGPT-4 for helping me build this.
- All the folks I've built systems for to date and were patient with my tools and offered helpful feedback.
- snakemake :: inspiration.
- multiqc :: inspiration.
- GA4GH :: inspiration.
- the human genome project :: where I learned I dug LIS.
- cytoscape :: incredible graph visualization tools!
- The OSS world.
- Semantic Mediawiki :: inspiration.
- Datomic :: inspiration.
Appendix A: Common Patterns
A.1 Creating a New Sample Workflow
from bloom_lims.bobjs import BloomObj, BloomWorkflow
from bloom_lims.db import BLOOMdb3
# 1. Get workflow template
bobj = BloomObj(BLOOMdb3())
wf_template = bobj.query_template_by_component_v2(
"workflow", "assay", "rare-mendelian", "1.0"
)[0]
# 2. Create workflow instance
bwf = BloomWorkflow(BLOOMdb3())
workflow = bwf.create_empty_workflow(wf_template.euid)
# 3. Create sample container
tube_template = bobj.query_template_by_component_v2(
"container", "tube", "tube-generic-10ml", "1.0"
)[0]
tube = bobj.create_instances(tube_template.euid)[0][0]
# 4. Link sample to workflow step
first_step = workflow[0][0].parent_of_lineages[0].child_instance
bobj.create_generic_instance_lineage_by_euids(first_step.euid, tube.euid)
A.2 Executing an Action
from bloom_lims.bobjs import BloomObj
from bloom_lims.db import BLOOMdb3
bobj = BloomObj(BLOOMdb3())
# Get object by EUID (format: PREFIX + sequence number)
obj = bobj.get_by_euid("CX1234")
# Prepare action data
action_ds = obj.json_addl["action_groups"]["status_actions"]["actions"]["set_complete"]
action_ds["captured_data"] = {"object_status": "complete"}
action_ds["curr_user"] = "lab_tech_1"
# Execute action
result = bobj.do_action(
obj.euid,
"set_complete",
"status_actions",
action_ds
)
A.3 Querying Objects
from bloom_lims.bobjs import BloomObj
from bloom_lims.db import BLOOMdb3
bobj = BloomObj(BLOOMdb3())
# By EUID (format: PREFIX + sequence number, e.g., CX1234, WX100)
obj = bobj.get_by_euid("CX1234")
# By UUID
obj = bobj.get_by_uuid("550e8400-e29b-41d4-a716-446655440000")
# By type (templates)
templates = bobj.query_template_by_component_v2(
category="container",
type="plate",
subtype="fixed-plate-96",
version="1.0"
)
# By type (instances)
instances = bobj.query_instance_by_component_v2(
category="workflow",
type="assay",
subtype="rare-mendelian",
version="1.0"
)
# Search with filters
results = bobj.search_objects(
category="container",
bstatus="active",
name_contains="Sample"
)
Appendix B: Glossary
| Term | Definition |
|---|---|
| EUID | Enterprise Unique Identifier - Prefix + sequence number (e.g., CX123, WX1000) |
| UUID | Universally Unique Identifier - Standard 128-bit identifier |
| Template | Blueprint for creating object instances |
| Instance | Actual object created from a template |
| Lineage | Parent-child relationship between objects |
| Workflow | Multi-step process definition |
| Workflow Step | Individual step/queue in a workflow |
| Workset | Batch of items being processed together |
| Action | Operation that can be performed on an object |
| Action Group | Collection of related actions |
| json_addl | JSON field for flexible object properties |
| category | Top-level object classification |
| type | Object type within category |
| subtype | Object subtype within type |
| bstatus | Current status of an object |
Document Version: 2.0 Last Updated: 2024-12-24 BLOOM LIMS - Version dynamically fetched from GitHub releases