Pattern Based Question and Answer
Description
Pattern Based Question and Answer (PBQA) is a Python library that provides tools for querying LLMs and managing text embeddings. It combines guided generation with multi-shot prompting to improve response quality and ensure consistency. By enforcing valid responses, PBQA makes it easy to combine the flexibility of LLMs with the reliability and control of symbolic approaches.
Installation
PBQA requires Python 3.9 or higher, and can be installed via pip:
pip install PBQA
Additionally, PBQA requires a running instance of llama.cpp to interact with LLMs. For instructions on installation, see the llama.cpp repository.
Usage
llama.cpp
For instructions on hosting a model with llama.cpp, see the llama.cpp server documentation. Optionally, caching can be enabled to speed up generation.
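For reference, a typical server invocation looks something like this (the model path is a placeholder and exact flags can vary between llama.cpp versions):

./llama-server -m models/llama-3-8b-instruct.Q5_K_M.gguf --port 8080 -np 2

The -np flag sets the number of parallel slots, which PBQA uses for caching (see the Cache section below).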
Python
PBQA provides a simple API for querying LLMs.
from PBQA import DB, LLM
from time import strftime
# First, we set up a database at a specified path
db = DB(path="db")
# Then, we load a pattern file into the database
db.load_pattern("examples/weather.yaml")
# Next, we connect to the LLM server
llm = LLM(db=db, host="127.0.0.1")
# And connect to the model
llm.connect_model(
    model="llama",
    port=8080,
    stop=["<|eot_id|>", "<|start_header_id|>"],
    temperature=0,
)
# Finally, we query the LLM and receive a response based on the specified pattern
# Optionally, external data can be provided to the LLM which it can use in its response
weather_query = llm.ask(
    "Could I see the stars tonight?",
    "weather",
    "llama",
    external={"now": strftime("%Y-%m-%d %H:%M")},
)
Using the weather.yaml pattern file and Llama 3 running on 127.0.0.1:8080, the response should look something like this:
{
    "latitude": 51.51,
    "longitude": 0.13,
    "time": "2024-06-18 01:00"
}
For more information, see the examples directory.
Pattern Files
Pattern files are used to guide the LLM in generating responses. They are written in YAML and consist of three parts: the system prompt, component metadata, and examples.
# The system prompt is the main instruction given to the LLM telling it what to do
system_prompt: Your job is to translate the user's input into a weather query. Reply with the json for the weather query and nothing else.
now:  # Each component of the response needs its own key; at minimum, just "component:"
  external: true  # Optionally, specify whether the component requires external data
latitude:
  grammar: |  # Optionally, define a GBNF grammar to constrain the component
    root ::= coordinate
    coordinate ::= integer "." integer
    integer ::= digit | digit digit
    digit ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"
longitude:
  grammar: ...
time:
  grammar: ...
examples:  # Lastly, examples can be provided for multi-shot prompting
  - input: What will the weather be like tonight
    now: 2019-09-30 10:36
    latitude: 51.51
    longitude: 0.13
    time: 2019-09-30 20:00
  - input: Could I see the stars tonight?
    ...
For more examples, look at the pattern files in the examples directory. Information on the GBNF grammar format can be found in the llama.cpp repository.
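For illustration, a grammar for the time component might look something like this (a sketch in GBNF, not necessarily the grammar used in the example pattern):

root ::= date " " time
date ::= digit digit digit digit "-" digit digit "-" digit digit
time ::= digit digit ":" digit digit
digit ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"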
Cache
Unless overridden, queries using the same pattern will use the same system prompt and base examples, allowing a large part of the response to be cached and speeding up generation. This can be disabled by setting use_cache=False in the ask() method.
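For example, to bypass the cache for a single query (reusing the llm set up earlier; the query text is illustrative):

from time import strftime

response = llm.ask(
    "Will it rain tomorrow?",
    "weather",
    "llama",
    external={"now": strftime("%Y-%m-%d %H:%M")},
    use_cache=False,
)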
PBQA allocates a slot/process for each pattern-model pair in the llama.cpp server. Set -np to the number of unique combinations of patterns and models you want to enable caching for. Slots are allocated in the order they are requested; if the number of available slots is exceeded, the last slot is reused for any excess pattern-model pairs.
You can manually assign a cache slot to a specific pattern-model pair using the link method. Optionally, a specific cache slot can be provided, up to the number of available processes. The cache slot used for a query can also be overridden by passing the cache_slot parameter to the llm.ask() method.
from PBQA import DB, LLM
db = DB(path="db")
db.load_pattern("examples/weather.yaml")
llm = LLM(db=db, host="127.0.0.1")
llm.connect_model(
    model="llama",
    port=8080,
    stop=["<|eot_id|>", "<|start_header_id|>"],
    temperature=0,
)
llm.link(pattern="weather", model="llama")
Once a pattern-model pair is linked, the "model" parameter in the ask() method may also be omitted. The query will instead use the model assigned during the last appropriate link call.
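For example (a sketch, continuing from the setup above):

from time import strftime

# With "weather" linked to "llama", the model argument can be omitted
weather_query = llm.ask(
    "Could I see the stars tonight?",
    "weather",
    external={"now": strftime("%Y-%m-%d %H:%M")},
)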
Roadmap
Future features in no particular order with no particular timeline:
- Preset grammars for common data types
- Parallel query execution
- Combining multi-shot prompting with message history
- Multimodal support
- Further speed improvements (possibly batching)
- Support for more LLM backends
Relevant Literature
- Language Models are Few-Shot Learners (Brown et al., 2020)
- Many-Shot In-Context Learning (Agarwal et al., 2024)
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Wei et al., 2022)
- Using Grammar Masking to Ensure Syntactic Validity in LLM-based Modeling Tasks (Netz et al., 2024)
Contributing
Contributions are welcome! If you have any suggestions or would like to contribute, please open an issue or a pull request.
Support
If you want to support the development of PBQA, consider buying me a coffee. Any support is greatly appreciated!
License and Acknowledgements
This project is licensed under the terms of the MIT License. For more details, see the LICENSE file.
Qdrant is a vector database that provides an API for managing and querying text embeddings. PBQA uses Qdrant to store and retrieve text embeddings.
llama.cpp is a C++ library that provides an easy-to-use interface for running LLMs on a wide variety of hardware. It includes support for Apple silicon, x86 architectures, and NVIDIA GPUs, as well as custom CUDA kernels for running LLMs on AMD GPUs via HIP. PBQA uses llama.cpp to interact with LLMs.
PBQA was developed by Bart Haagsma as part of a different project. If you have any questions or suggestions, please feel free to contact me at dev.baagsma@gmail.com.