A high-performance LLM chaining framework for Python
HyperChain
HyperChain is an easy-to-use and efficient Python library that simplifies interacting with various Large Language Models. It supports asynchronous execution and chaining of LLMs, with out-of-order execution where dependencies allow, using customizable prompt templates.
Installation
To install HyperChain directly from the GitHub repository, execute the following commands:
git clone https://github.com/PyRepair/hyperchain.git
cd hyperchain
pip install .
Usage
Below are some simple examples to get you started with HyperChain. More detailed examples can be found in the examples folder.
Prompt Templates
HyperChain offers templates to create prompts easily for different LLM applications. These templates can be combined using the '+' operator.
StringTemplate
The StringTemplate is mainly used with completion models and takes a formattable string as input. A plain Python string will automatically be wrapped in a StringTemplate by the LLMChain.
from hyperchain.prompt_templates import StringTemplate
template = StringTemplate("Answer the following question: {question}\n")
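Placeholders are normally filled in by the chain at run time, but they can also be filled directly. A minimal sketch, assuming StringTemplate exposes the same format method demonstrated for MaskToSentinelTemplate below:
# Hypothetical direct usage: fill the {question} placeholder with a concrete value.
print(template.format(question="What is the capital of France?"))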
ChatTemplate
The ChatTemplate is used with chat models and takes a list of formattable messages as input. This is illustrated with OpenAI's chat models in example_chat_chain.py.
from hyperchain.prompt_templates import ChatTemplate
template = ChatTemplate(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "{question}"},
    ],
)
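As noted above, templates can be combined with the '+' operator. The sketch below assumes that adding two ChatTemplates concatenates their message lists; the exact merge semantics are the library's, not shown here:
from hyperchain.prompt_templates import ChatTemplate

system_part = ChatTemplate([{"role": "system", "content": "You are a helpful assistant."}])
user_part = ChatTemplate([{"role": "user", "content": "{question}"}])

# Assumed behavior: '+' merges the two templates into a single chat prompt template.
combined = system_part + user_part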
MaskToSentinelTemplate
The MaskToSentinelTemplate simplifies converting a masked prompt into one that uses sentinel tokens, necessary for some Seq2Seq models like T5. This avoids manual tracking of sentinel tokens and their order, as demonstrated with the CodeT5 model in example_masked_code.py.
from hyperchain.prompt_templates import MaskToSentinelTemplate
template = MaskToSentinelTemplate("{masked_code}")
print(template.format(masked_code="def greet_user(<mask>: User):\n print('Hi,' + <mask>)\n"))
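For a T5-family model such as CodeT5, the two <mask> occurrences would typically be rewritten as ordered sentinel tokens, so the printed prompt would look roughly like this (the exact token names depend on the model's tokenizer):
# Illustrative output, assuming T5-style <extra_id_N> sentinel tokens:
# def greet_user(<extra_id_0>: User):
#     print('Hi,' + <extra_id_1>)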
LLMRunner
An LLMRunner is used to communicate with a specific LLM inside a chain and to provide error handlers for LLM-specific exceptions.
from hyperchain.llm_runners import OpenAIRunner
llm_runner = OpenAIRunner(
    model="MODEL",
    api_key="OPENAI_API_KEY",
    model_params={"max_tokens": 500},
)
LLMChain
LLMChain uses templates to create prompts and sends them to a chosen Large Language Model. It supports asynchronous execution and includes error handling for the exceptions identified in each LLMRunner.
from hyperchain.chain import LLMChain
chain = LLMChain(
    template=template,
    llm_runner=llm_runner,
    output_name="answer",
)
The output_name argument allows the output from one chain to be used as the input for the next, under the specified name, as seen in example_chain.py.
A ChainSequence is automatically created when Chain instances are added together with the '+' operator. It uses the required_keys and output_keys instance variables of each chain to calculate dependencies and execute the sequence out of order where possible.
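For illustration, here is a minimal sketch of composing two chains so that the first chain's output, stored under output_name="answer", feeds the second chain's prompt. The second chain's template and output_name are hypothetical, and running the resulting sequence is shown in example_chain.py:
from hyperchain.chain import LLMChain
from hyperchain.prompt_templates import StringTemplate

# First chain: answers the question and stores the result under "answer".
answer_chain = LLMChain(
    template=StringTemplate("Answer the following question: {question}\n"),
    llm_runner=llm_runner,
    output_name="answer",
)

# Second chain: consumes the {answer} produced by the first chain.
summary_chain = LLMChain(
    template=StringTemplate("Summarize the following answer in one sentence: {answer}\n"),
    llm_runner=llm_runner,
    output_name="summary",
)

# Adding chains creates a ChainSequence; the {answer} dependency is resolved
# automatically, and independent steps can run out of order.
sequence = answer_chain + summary_chain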
File details
Details for the file hyperchain-0.1.3.tar.gz.
File metadata
- Download URL: hyperchain-0.1.3.tar.gz
- Upload date:
- Size: 347.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.0.0 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | 15366ed2f213bfca6af24192832bd847bce27e4f3b0366f4916d30b54b01aa57
MD5 | e2505aad436ee888d64f3ebfbd00b8dd
BLAKE2b-256 | f70a371380b881f4694eeeb10306aeb5e2c1b447ef53cece3319e8a0011ed6b6
File details
Details for the file hyperchain-0.1.3-py3-none-any.whl.
File metadata
- Download URL: hyperchain-0.1.3-py3-none-any.whl
- Upload date:
- Size: 355.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.0.0 CPython/3.12.3
File hashes
Algorithm | Hash digest
---|---
SHA256 | f194009d1da91f6f9af892ed0a4d97696950e3915fea352d5b5eca38ee9b36ac
MD5 | 40a1a50ba9edcda095d1796d8048047f
BLAKE2b-256 | 7cac1eeb1bada6489b610fbe8e4d04b88ff18b67d81171b2e74da324b67f78ee