
LAILA


Lambda's Interdisciplinary Large Atlas

pip install laila-core

LAILA is a Python platform for unifying training, simulation, and data management into a single computational workflow. It wraps heterogeneous storage backends (S3, GCS, Redis, HDF5, filesystem, and more) behind one consistent API, so that memorizing data, recalling it, and orchestrating compute feel the same regardless of where the data lives.
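The uniform-API idea can be sketched in plain Python. The names below (`Pool`, `DictPool`, `memorize`/`remember`/`forget` as free-standing methods) are illustrative, not LAILA's actual internals: the point is that any backend implementing the same three verbs is interchangeable behind one interface.

```python
from typing import Any, Protocol


class Pool(Protocol):
    """Hypothetical shape of a storage backend (names are illustrative)."""
    def memorize(self, key: str, data: Any) -> None: ...
    def remember(self, key: str) -> Any: ...
    def forget(self, key: str) -> None: ...


class DictPool:
    """In-memory stand-in for a real backend such as S3 or Redis."""
    def __init__(self) -> None:
        self._store: dict[str, Any] = {}

    def memorize(self, key: str, data: Any) -> None:
        self._store[key] = data

    def remember(self, key: str) -> Any:
        return self._store[key]

    def forget(self, key: str) -> None:
        del self._store[key]


# Any object satisfying the Pool protocol works the same way.
pool: Pool = DictPool()
pool.memorize("entry-1", {"key": [1, 2, 3]})
print(pool.remember("entry-1"))  # the same dict comes back, unchanged
```

Swapping `DictPool` for a backend wrapping S3 or Redis would leave the calling code untouched, which is the property the library advertises.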

LAILA is type-free

LAILA is type-free — whatever type you memorize is exactly the type you get back. No serialization boilerplate, no type casting, one interface for everything:

import torch, laila

dict_entry = laila.constant(data={"key": [1, 2, 3]})
laila.memorize(dict_entry)                          # memorize a dict
laila.remember(dict_entry.global_id).data            # returns a dict

tensor_entry = laila.constant(data=torch.randn(128, 64))
laila.memorize(tensor_entry)                         # memorize a tensor
laila.remember(tensor_entry.global_id).data           # returns a tensor

LAILA has a uniform API

The same three verbs — memorize, remember, and forget — work across every storage backend. S3, HDF5, Cloudflare R2, Redis, GCS, filesystem — swap the pool, keep the code:

from laila.pool import S3Pool, HDF5Pool, CloudflarePool

s3_pool = S3Pool(...)
hdf5_pool = HDF5Pool(...)
cf_pool = CloudflarePool(...)

laila.memory.extend(s3_pool, pool_nickname="s3")
laila.memory.extend(hdf5_pool, pool_nickname="hdf5")
laila.memory.extend(cf_pool, pool_nickname="cloudflare")

entry = laila.constant(data=torch.randn(128, 64))

laila.memorize(entry, pool_nickname="s3")          # write to S3
laila.memorize(entry, pool_nickname="hdf5")        # write to HDF5
laila.memorize(entry, pool_nickname="cloudflare")  # write to Cloudflare R2

laila.remember(entry.global_id, pool_nickname="s3")          # read from S3
laila.remember(entry.global_id, pool_nickname="hdf5")        # read from HDF5
laila.remember(entry.global_id, pool_nickname="cloudflare")  # read from Cloudflare R2

laila.forget(entry.global_id, pool_nickname="s3")          # delete from S3
laila.forget(entry.global_id, pool_nickname="hdf5")        # delete from HDF5
laila.forget(entry.global_id, pool_nickname="cloudflare")  # delete from Cloudflare R2

LAILA has async operations

Every operation returns a future you can wait on synchronously or await asynchronously:

future = laila.memorize(entry)
laila.wait(future)    # blocking
await future          # or async
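The dual sync/async future pattern can be reproduced with the standard library alone. This sketch uses `concurrent.futures` and `asyncio` (not LAILA itself) to show how one background operation can serve both a blocking caller and an async caller:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)

# A background write, standing in for a memorize() call.
future = executor.submit(lambda: "stored")

# Blocking wait, analogous to laila.wait(future).
result = future.result()

# Async wait, analogous to `await future`: wrap the thread-pool
# future so the event loop can await it without blocking.
async def main() -> str:
    return await asyncio.wrap_future(executor.submit(lambda: "stored"))

print(result, asyncio.run(main()))  # stored stored
```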

Quick example

Stack a fast local cache in front of remote storage with a single operator. Reads cascade through the chain until they find the data, caching a copy in every tier on the way back up.

import laila
from laila.pool import S3Pool, HDF5Pool

# Create an HDF5 pool (local disk) and an S3 pool (remote)
hdf5_pool = HDF5Pool(nickname="cache_hdf5")
s3_pool = S3Pool(
    bucket_name="your-bucket",
    access_key_id="YOUR_ACCESS_KEY_ID",
    secret_access_key="YOUR_SECRET_ACCESS_KEY",
    region_name="us-east-1",
    nickname="origin_s3",
)

# Register both pools with LAILA's memory system
laila.memory.extend(hdf5_pool, pool_nickname="cache_hdf5")
laila.memory.extend(s3_pool, pool_nickname="origin_s3")

# Wire a three-tier proxy chain: memory → HDF5 → S3
laila.alpha_pool << hdf5_pool << s3_pool

# Store an entry directly in S3
entry = laila.constant(data={"msg": "hello from S3"}, nickname="proxy_demo")
future = laila.memorize(entry, pool_nickname="origin_s3")
laila.wait(future)

print(laila.alpha_pool.exists(entry.global_id))  # False — not cached yet

# Read through the alpha pool — cascades to S3, caches on the way back
blob = laila.alpha_pool[entry.global_id]

print(laila.alpha_pool.exists(entry.global_id))  # True  — cached in memory
print(hdf5_pool.exists(entry.global_id))          # True  — cached on disk
print(s3_pool.exists(entry.global_id))            # True  — the origin
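The cascade-and-cache behavior above can be illustrated with a toy read-through tier in plain Python. This is a sketch of the pattern, not LAILA's implementation; the `__lshift__` overload shows how an operator like `<<` can wire a fallback chain:

```python
from typing import Any


class TierPool:
    """Toy read-through cache tier (illustrative only)."""
    def __init__(self, name: str) -> None:
        self.name = name
        self.store: dict[str, Any] = {}
        self.next_tier: "TierPool | None" = None

    def __lshift__(self, other: "TierPool") -> "TierPool":
        # `a << b` makes b the fallback for a, and returns b so
        # chains like `a << b << c` compose left to right.
        self.next_tier = other
        return other

    def exists(self, key: str) -> bool:
        return key in self.store

    def __getitem__(self, key: str) -> Any:
        if key in self.store:
            return self.store[key]
        if self.next_tier is None:
            raise KeyError(key)
        value = self.next_tier[key]   # cascade down the chain
        self.store[key] = value       # cache on the way back up
        return value


memory, disk, origin = TierPool("memory"), TierPool("disk"), TierPool("origin")
memory << disk << origin
origin.store["gid"] = {"msg": "hello"}

blob = memory["gid"]                              # cascades down to origin
print(memory.exists("gid"), disk.exists("gid"))   # True True
```

A miss walks down the chain until some tier has the key, and every tier above it keeps a copy, so the next read is served from the fastest tier.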

Installation extras

Install only the backends you need:

pip install "laila-core[s3]"        # S3 / Cloudflare R2 / Backblaze B2
pip install "laila-core[redis]"     # Redis
pip install "laila-core[hdf5]"      # HDF5
pip install "laila-core[torch]"     # PyTorch tensor support
pip install "laila-core[all]"       # everything

Vision

LAILA is intended to serve as an interdisciplinary platform for teams that need to move fluidly between data creation, data storage, model training, and large-scale execution. Rather than treating infrastructure boundaries as the primary abstraction, LAILA focuses on ergonomic syntax and reusable interfaces that let users reason about workflows at a higher level.

This approach makes it easier to:

  • organize and manage data across multiple storage systems
  • connect compute and memory workflows with less boilerplate
  • build distributed pipelines that remain readable and maintainable
  • reduce the operational friction between experimentation and production-scale execution

Current release

LAILA is currently in beta for version 1.0 (this page describes release 1.0.1b8).

The current release includes the command and memory module as the first public component of the broader platform. Interfaces may continue to evolve as the platform expands and real-world usage informs the next stage of development.

Learn more

  • Tutorials — progressive walkthroughs from basic entries to full model checkpointing
  • API Reference — auto-generated from docstrings
  • Examples — end-to-end notebooks covering datasets, multipool setups, and more

Credits

  • Creator: Amir Zadeh
  • Tutorials and Documentation: Dihan Lin
  • Acknowledgements: Jason Zhang, Xuweiyi Chen, Connor Alvarez

Download files

Download the file for your platform.

Source Distribution

laila_core-1.0.1b8.tar.gz (101.8 kB)

Uploaded Source

Built Distribution


laila_core-1.0.1b8-py3-none-any.whl (153.1 kB)

Uploaded Python 3

File details

Details for the file laila_core-1.0.1b8.tar.gz.

File metadata

  • Download URL: laila_core-1.0.1b8.tar.gz
  • Upload date:
  • Size: 101.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for laila_core-1.0.1b8.tar.gz:

  • SHA256: 17c9783fc7e5d035d69a72604fcd31271e86bd55cfd7e358e88c0b6845a180dd
  • MD5: 9d3a101dec5fe64e7221c35474a66924
  • BLAKE2b-256: 3c3b34fb2d2a78753e6c538d1c2ff15b230d65205559d832e0685e3494d5c8c7


Provenance

The following attestation bundles were made for laila_core-1.0.1b8.tar.gz:

Publisher: publish-to-pypi.yml on LambdaLabsML/laila-core

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file laila_core-1.0.1b8-py3-none-any.whl.

File metadata

  • Download URL: laila_core-1.0.1b8-py3-none-any.whl
  • Upload date:
  • Size: 153.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for laila_core-1.0.1b8-py3-none-any.whl:

  • SHA256: 5af14e29cb0fbf326ab21e8c79fb0c6f49ca0223d05727910bc98da5bb8a02c0
  • MD5: d135d44810783ecd83aadb20a4fffbad
  • BLAKE2b-256: cd5c78d0eb578a7d0e2c0b109f9586507af501e56be871e87d929132eb9ff4e6


Provenance

The following attestation bundles were made for laila_core-1.0.1b8-py3-none-any.whl:

Publisher: publish-to-pypi.yml on LambdaLabsML/laila-core

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
