A high-performance LLM chaining framework for Python
Project description
HyperChain
HyperChain is an easy-to-use and efficient Python library that simplifies interacting with various Large Language Models (LLMs). It supports asynchronous execution and chaining of LLM calls, with out-of-order execution and customizable prompt templates.
Installation
To install HyperChain directly from the GitHub repository, execute the following commands:
git clone https://github.com/PyRepair/hyperchain.git
cd hyperchain
pip install .
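Since the package is also published on PyPI, it should likewise be installable directly with pip (assuming the distribution name shown on this page):
pip install hyperchain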
Usage
Below are some simple examples to get you started with HyperChain. More detailed examples can be found in the examples folder.
Prompt Templates
HyperChain offers templates to create prompts easily for different LLM applications. These templates can be combined using the '+' operator.
StringTemplate
The StringTemplate is mainly used with completion models and takes a formattable string as input. A plain Python string will automatically be wrapped in a StringTemplate by the LLMChain.
from hyperchain.prompt_templates import StringTemplate
template = StringTemplate("Answer the following question: {question}\n")
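As a quick illustration, a filled-in prompt can be produced from a template, and two templates can be joined with the '+' operator. The format() call below is assumed to work for StringTemplate the same way it is shown for MaskToSentinelTemplate later in this description; treat this as a sketch rather than documented API.
# Sketch only: assumes StringTemplate.format() exists and that '+'
# concatenates the underlying format strings.
context_template = StringTemplate("Use the following context: {context}\n")
combined = context_template + template  # combined with the '+' operator
print(combined.format(context="HyperChain chains LLM calls.", question="What does HyperChain do?"))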
ChatTemplate
The ChatTemplate is used with chat models and takes a list of formattable messages as input. This is illustrated with OpenAI's chat models in example_chat_chain.py.
from hyperchain.prompt_templates import ChatTemplate
template = ChatTemplate(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "{question}"},
    ],
)
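For illustration, filling in the chat template would produce the list of messages sent to the chat model; the format() call here is an assumption carried over from the other template examples rather than documented behavior.
# Assumed usage: fills the {question} placeholder in the user message.
messages = template.format(question="What does HyperChain do?")
print(messages)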
MaskToSentinelTemplate
The MaskToSentinelTemplate simplifies converting a masked prompt into one that uses sentinel tokens, necessary for some Seq2Seq models like T5. This avoids manual tracking of sentinel tokens and their order, as demonstrated with the CodeT5 model in example_masked_code.py.
from hyperchain.prompt_templates import MaskToSentinelTemplate
template = MaskToSentinelTemplate("{masked_code}")
print(template.format(masked_code="def greet_user(<mask>: User):\n print('Hi,' + <mask>)\n"))
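For a T5-style model, each <mask> would be replaced with a numbered sentinel token. Assuming the standard <extra_id_N> sentinels used by T5 and CodeT5, the printed prompt would look roughly like:
def greet_user(<extra_id_0>: User):
    print('Hi,' + <extra_id_1>)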
LLMRunner
An LLMRunner handles communication with a specific LLM inside a chain and provides error handlers for different LLM-specific exceptions.
from hyperchain.llm_runners import OpenAIRunner
llm_runner = OpenAIRunner(
    model="MODEL",
    api_key="OPENAI_API_KEY",
    model_params={"max_tokens": 500},
)
LLMChain
An LLMChain uses a template to build prompts and send them to a chosen Large Language Model. It supports asynchronous execution and includes error handling for the exceptions identified in each LLMRunner.
from hyperchain.chain import LLMChain
chain = LLMChain(
    template=template,
    llm_runner=llm_runner,
    output_name="answer",
)
The output_name argument allows the output of one chain to be used as input to the next under the specified name, as seen in example_chain.py.
A ChainSequence is created automatically when Chain instances are added together with the '+' operator. It uses the required_keys and output_keys instance variables of each chain to compute dependencies and execute the sequence out-of-order where possible.
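A minimal sketch of such a sequence is shown below. The run() call is an assumed entry point, not a documented one; see example_chain.py for the actual invocation.
from hyperchain.prompt_templates import StringTemplate

# Hypothetical sketch: two chains joined with '+'. The "answer" output of the
# first chain satisfies the {answer} placeholder required by the second.
answer_chain = LLMChain(
    template=StringTemplate("Answer the following question: {question}\n"),
    llm_runner=llm_runner,
    output_name="answer",
)
summary_chain = LLMChain(
    template=StringTemplate("Summarize this answer in one sentence: {answer}\n"),
    llm_runner=llm_runner,
    output_name="summary",
)
sequence = answer_chain + summary_chain  # creates a ChainSequence
result = sequence.run(question="What is HyperChain?")  # run() is assumed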
File details
Details for the file hyperchain-0.1.0.tar.gz.
File metadata
- Download URL: hyperchain-0.1.0.tar.gz
- Upload date:
- Size: 347.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.11.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | 40bbe61c0fc0c746c04c37f6600aed298d4e0a73c5ce556ded161dfe3253ae14
MD5 | 77de318fe2cc6888402710a7c263f072
BLAKE2b-256 | 5af93fd872189c4329dcbb267b6d34457cb668fbd822d8c8c2590252953e3f43
File details
Details for the file hyperchain-0.1.0-py3-none-any.whl.
File metadata
- Download URL: hyperchain-0.1.0-py3-none-any.whl
- Upload date:
- Size: 696.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.11.6
File hashes
Algorithm | Hash digest
---|---
SHA256 | b52aa06436659f19906f8be547db9bf53ffe3aec0ef20859aa28234534484e5c
MD5 | cff86bd95e513bf78440b3e9145f5004
BLAKE2b-256 | 403efaeec41e14283f234a003768da476d88f9c0b13d351428265781256c0fc3