
Wrapper for nicely displaying progress bars for langchain embedding components when using multiprocessing or ray.


Langchain Progress

A module providing a context manager that wraps langchain embedding objects to better handle progress bars. This is particularly useful when using ray or multiprocessing, where it allows a single progress bar to be shared across all remotes/processes.

Installing

The library can be installed from PyPI:

pip install langchain-progress

If you only need a subset of the library's features, you can install dependencies for your chosen setup:

pip install langchain-progress[tqdm]
pip install langchain-progress[ray]

How to Use

This context manager can be used in a single process or across a distributed framework such as ray to display the progress of generating embeddings with langchain. The ProgressManager context manager requires a langchain embeddings object and optionally accepts a progress bar; if none is provided, a new progress bar is created using tqdm. An important note: if show_progress=True was set when instantiating an embeddings object, the internal progress bar created within that class is replaced with one from langchain-progress.

The following simple examples show relying on the automatically created progress bar and passing an existing one:

from langchain_progress import ProgressManager

# Rely on an automatically created tqdm progress bar:
with ProgressManager(embeddings):
    result = FAISS.from_documents(docs, embeddings)

# Or pass an existing progress bar:
with ProgressManager(embeddings, pbar):
    result = FAISS.from_documents(docs, embeddings)

Ray Example

The real use case for this context manager is when using ray or multiprocessing to speed up embedding. If show_progress=True is enabled on the embeddings objects, a new progress bar is created in each process. The bars then fight over the terminal, redrawing on every update from every process, and there is no single bar reporting unified progress across all remotes. The ProgressManager context manager solves these problems, and the RayPBar context manager simplifies the setup and passing of ray progress bars. The following is the recommended way to create progress bars using ray:

import numpy as np
import ray

from langchain_progress import ProgressManager, RayPBar

@ray.remote(num_gpus=1)
def process_shard(shard, pbar):
    embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')

    with ProgressManager(embeddings, pbar):
        result = FAISS.from_documents(shard, embeddings)

    return result

doc_shards = np.array_split(docs, num_shards)

with RayPBar(total=len(docs)) as pbar:
    vectors = ray.get([process_shard.remote(shard, pbar) for shard in doc_shards])

pbar.close.remote()

A full example can be found in ./examples/ray_example.py.

Multiprocessing Example

To simplify implementing progress bars with multiprocessing, the MultiprocessingPBar context manager handles creating and updating a progress bar shared across processes. The following is the recommended way to create progress bars using multiprocessing:

import numpy as np

from multiprocessing import Pool

from langchain_progress import MultiprocessingPBar, ProgressManager

def process_shard(shard, pbar):
    embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')

    with ProgressManager(embeddings, pbar):
        result = FAISS.from_documents(shard, embeddings)

    return result

doc_shards = np.array_split(docs, num_shards)

with MultiprocessingPBar(total=len(docs)) as pbar, Pool(num_shards) as pool:
    vectors = pool.starmap(process_shard, [(shard, pbar) for shard in doc_shards])

A full example can be found in ./examples/multiprocessing_example.py.
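A shared bar like this presumably rests on a counter in shared memory that every worker increments while the parent renders a single bar. A minimal stdlib-only sketch of that mechanism (all names here are illustrative, not the library's actual internals):

```python
from multiprocessing import Pool, Value

counter = None  # set in each worker by the initializer

def init_worker(shared_counter):
    global counter
    counter = shared_counter

def embed_chunk(chunk):
    # Stand-in for embedding a shard of documents; each item processed
    # bumps the shared counter exactly once, under the counter's lock.
    results = []
    for item in chunk:
        results.append(item * 2)  # placeholder "embedding"
        with counter.get_lock():
            counter.value += 1
    return results

docs = list(range(100))
chunks = [docs[i::4] for i in range(4)]
shared = Value("i", 0)
with Pool(4, initializer=init_worker, initargs=(shared,)) as pool:
    out = pool.map(embed_chunk, chunks)
print(shared.value)  # 100: one unified count across all workers
```

A real progress bar would read the counter from the parent and redraw a single tqdm bar, which is the behaviour MultiprocessingPBar packages up.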

Tests

To run the test suite, you can run the following command from the root directory. Tests will be skipped if the required optional libraries are not installed:

python -m unittest

Limitations

This wrapper cannot create progress bars for API-based embedding tools such as HuggingFaceInferenceAPIEmbeddings, since it relies on wrapping the texts supplied to the embeddings method, which isn't possible when querying a remote API. The module also doesn't currently support all of langchain's embedding classes. If your embedding class isn't yet supported, please open an issue and I'll take a look when I get time.
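The limitation follows from how the wrapper presumably works: it intercepts the iterable of texts passed to the embedding method and reports one tick per item consumed, which only works when the texts are iterated locally rather than shipped to a remote API in one request. A hypothetical sketch of that pattern (names are illustrative, not the library's internals):

```python
class ProgressTexts:
    """Wraps a list of texts so that iterating it reports progress."""

    def __init__(self, texts, update):
        self._texts = texts
        self._update = update  # e.g. a tqdm instance's update method

    def __len__(self):
        return len(self._texts)

    def __iter__(self):
        for text in self._texts:
            yield text
            self._update(1)  # one tick per text actually consumed

# Count ticks while a stand-in for embed_documents consumes the texts.
ticks = []
wrapped = ProgressTexts(["a", "b", "c"], ticks.append)
embedded = [t.upper() for t in wrapped]  # stands in for embedding each text
print(embedded, sum(ticks))  # ['A', 'B', 'C'] 3
```

An API-based class never iterates the texts itself, so there is nothing local to wrap and no per-item signal to report.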
