
Simple interface for creating and managing LLM chains

Project description

LLM Blocks

LLM Blocks is a Python module that helps users interact with Large Language Model (LLM) chains. It provides a simple and flexible way to create and manage LLM chains, enabling efficient interaction with models such as OpenAI's GPT-3.5 Turbo.

Installation

First, make sure Python is installed. Then install the package from PyPI:

pip install llm-blocks

Alternatively, if you are working from a clone of the repository, install the dependencies directly:

pip install -r requirements.txt
pip install -r requirements.dev.txt

Configuration

To use LLM Blocks, you'll need an OpenAI API key. Store your API key in a .env file or export it as an environment variable:

For a .env file:

OPENAI_API_KEY=your_openai_api_key

For environment variables:

export OPENAI_API_KEY=your_openai_api_key
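
Before constructing a chain, it can help to check that the key is actually visible to Python. A minimal sketch (get_api_key is an illustrative helper, not part of llm_blocks):

```python
import os

def get_api_key(env=None):
    """Return the OpenAI API key from the environment, or None if unset."""
    if env is None:
        env = os.environ
    return env.get("OPENAI_API_KEY")

if get_api_key() is None:
    print("OPENAI_API_KEY is not set; LLM calls will fail.")
```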

Usage

The llm_blocks package contains the chat_utils module, which defines the GenericChain class used to create and manage your LLM chains.

Here's a simple example of using the GenericChain class:

from llm_blocks.chat_utils import GenericChain

# Create a chain with a given template
template = "The meaning of {word} is:"
my_chain = GenericChain(template)

# Call the chain with any input you desire
response = my_chain("friendship")
print(response)
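
Under the hood, a chain like this presumably fills the template's placeholders with the inputs before calling the model; the templating step itself is just standard Python string formatting, shown here for illustration and independent of llm_blocks:

```python
# Placeholders in the template are filled with the chain's inputs
template = "The meaning of {word} is:"
prompt = template.format(word="friendship")
print(prompt)  # → The meaning of friendship is:
```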

Module Structure

  • exclude.toml: Configuration file to specify files or directories to exclude.
  • requirements.dev.txt: Development dependencies for this module.
  • requirements.txt: Main dependencies for this module.
  • llm_blocks
    • chat_utils.py: Python file containing the definition of the GenericChain class and utility functions for working with LLM chains.

Logging Responses

The GenericChain class keeps a log of all interactions with the given LLM. The log is stored as a list of dictionaries and can be accessed with my_chain.logs.

Example:

for log in my_chain.logs:
    print(f'Inputs: {log["inputs"]}')
    print(f'Callback: {log["callback"]}')
    print(f'Response: {log["response"]}')
    print(f'Response Time: {log["response_time"]}')
    print('---')
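
Because each log entry is a plain dict, you can aggregate statistics over the logs directly. A minimal sketch using the fields shown above (the sample data is illustrative, not real model output):

```python
def mean_response_time(logs):
    """Average the 'response_time' field across log entries."""
    if not logs:
        return 0.0
    return sum(log["response_time"] for log in logs) / len(logs)

# Illustrative entries matching the log schema shown above
sample_logs = [
    {"inputs": {"word": "friendship"}, "response": "...", "response_time": 1.2},
    {"inputs": {"word": "courage"}, "response": "...", "response_time": 0.8},
]
print(mean_response_time(sample_logs))  # → 1.0
```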

Contributing

Feel free to submit pull requests, report bugs, or suggest new features through the GitHub repository. We appreciate your contributions and feedback!

License

This project is licensed under the MIT License.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llm-blocks-0.2.3.tar.gz (3.0 kB)

Uploaded Source

Built Distribution

llm_blocks-0.2.3-py3-none-any.whl (3.2 kB)

Uploaded Python 3

File details

Details for the file llm-blocks-0.2.3.tar.gz.

File metadata

  • Download URL: llm-blocks-0.2.3.tar.gz
  • Upload date:
  • Size: 3.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.1

File hashes

Hashes for llm-blocks-0.2.3.tar.gz
Algorithm Hash digest
SHA256 bbe07fa17c6c23387451579fdbb4f6aa8b80a2b79869f1919610b519e1bc382a
MD5 4de92b75548ec27e0e034e05c6938401
BLAKE2b-256 6246c901f8313a9958f47cd11a0aae19987746a04e4c6a82e28706e950930eda


File details

Details for the file llm_blocks-0.2.3-py3-none-any.whl.

File metadata

  • Download URL: llm_blocks-0.2.3-py3-none-any.whl
  • Upload date:
  • Size: 3.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.1

File hashes

Hashes for llm_blocks-0.2.3-py3-none-any.whl
Algorithm Hash digest
SHA256 4d0afde37ef5f38a3c3964f761f5c77186aae65292112b34d4875afe1e70ba06
MD5 74e019bc92df31fdc2272ca154009a1f
BLAKE2b-256 d86f8f2f9e8e63971d352a8fcd751aed6cb71be0b38a7ac1747b06edac5349d7

