Extension of the Transformers library for context-free grammar constrained decoding with EBNF grammars
🤗 Transformers CFG
Latest News
- Support for Unicode (multilingual) grammars (2024-02-29)
- Integration with Text-Generation-WebUI (2023-12-17)
We are thrilled to announce that transformers_cfg has been used in the Text-Generation-WebUI project. This integration enables users to utilize our CFG capabilities within the popular, 30.5K-starred web interface for text generation. For more details, see the relevant pull request.
Introduction
transformers_cfg is an extension library for the popular Transformers library by Hugging Face, tailored for working with context-free grammars (CFGs).
This package provides additional tools and functionalities to enhance your experience with natural language processing tasks involving CFGs.
It was initially developed as a pull request to the Hugging Face Transformers library. See relevant discussion here.
Installation
pip install transformers-cfg
QuickStart: Force an LLM to generate a valid JSON object
The example below can be found in examples/generate_json.py.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers_cfg.grammar_utils import IncrementalGrammarConstraint
from transformers_cfg.generation.logits_process import GrammarConstrainedLogitsProcessor

if __name__ == "__main__":
    # Detect if GPU is available, otherwise use CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Using device: {device}")

    model_id = "mistralai/Mistral-7B-v0.1"

    # Load model and tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_id).to(device)  # Load model to defined device
    model.generation_config.pad_token_id = model.generation_config.eos_token_id

    # Load JSON grammar
    with open("examples/grammars/json.ebnf", "r") as file:
        grammar_str = file.read()
    grammar = IncrementalGrammarConstraint(grammar_str, "root", tokenizer)
    grammar_processor = GrammarConstrainedLogitsProcessor(grammar)

    # Generate
    prefix1 = "This is a valid json string for http request:"
    prefix2 = "This is a valid json string for shopping cart:"
    input_ids = tokenizer(
        [prefix1, prefix2], add_special_tokens=False, return_tensors="pt", padding=True
    )["input_ids"].to(device)  # Move inputs to the same device as the model

    output = model.generate(
        input_ids,
        max_length=50,
        logits_processor=[grammar_processor],
        repetition_penalty=1.1,
        num_return_sequences=1,
    )
    # Decode output
    generations = tokenizer.batch_decode(output, skip_special_tokens=True)
    print(generations)

    """
    'This is a valid json string for http request:{ "request": { "method": "GET", "headers": [], "content": "Content","type": "application" }}'
    'This is a valid json string for shopping cart:{ "name": "MyCart", "price": 0, "value": 1 }'
    """
Why should I use transformers-CFG?
- We support the EBNF grammar description format.
- We offer the same grammar interface as the llama-cpp project, allowing you to drop-in replace llama-cpp with transformers-CFG.
- We allow you to use any model in the 🤗 Transformers library, including those not supported by llama-cpp.
- We support multilingual grammars; you can use any character from any language in your grammar, e.g. 中文, 日本語, 한국어, हिन्दी, العربية, עברית, or emoji 🤗.
What is a grammar?
TL;DR: Think of it as an enhanced version of regular expressions.
Here is an example of a simplified JSON grammar:
# A JSON object is the root of the grammar
root ::= object
# An object starts with "{" and ends with "}" and contains pairs separated by ","
object ::= "{" pair ("," pair)* "}"
# A pair is a string followed by a ":" and a value
pair ::= string ":" value
# A string is a sequence of alphanumeric characters enclosed in double quotes
string ::= '"' [a-zA-Z0-9]* '"'
# A value can be a string, another object, a boolean, or null
value ::= string | object | "true" | "false" | "null"
This grammar describes the structure of a JSON object. It specifies that a JSON object consists of key-value pairs, where each key is a string and each value can be a string, another object, a boolean, or null.
A grammar doesn't need to be complicated. You can use it to describe very simple but useful things, like a valid email address, a valid URL, or a phone number:
phone_number ::= "+" [0-9]+
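As a minimal sketch of how such a one-rule grammar plugs into the same pipeline as the QuickStart (the gpt2 model, the prompt, and renaming the rule to root as the start symbol are illustrative assumptions, not part of the shipped examples):
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers_cfg.grammar_utils import IncrementalGrammarConstraint
from transformers_cfg.generation.logits_process import GrammarConstrainedLogitsProcessor

# Illustrative one-rule grammar: a "+" followed by one or more digits,
# renamed to "root" so it can serve as the start rule
phone_grammar = 'root ::= "+" [0-9]+'

model_id = "gpt2"  # assumed small model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

grammar = IncrementalGrammarConstraint(phone_grammar, "root", tokenizer)
grammar_processor = GrammarConstrainedLogitsProcessor(grammar)

input_ids = tokenizer("My phone number is ", return_tensors="pt")["input_ids"]
output = model.generate(input_ids, max_new_tokens=15, logits_processor=[grammar_processor])
print(tokenizer.batch_decode(output, skip_special_tokens=True))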
You can also force the model to generate only emojis or only Korean characters.
['Describe your feeling with emoji: <emoji-only continuation>', 'Write a poem with emoji: <emoji-only continuation>']
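As an illustrative sketch (this rule is an assumption for illustration, not a grammar shipped with the package), constraining output to Korean syllables takes a single character-range rule:
# Hypothetical grammar: one or more Hangul syllables (Unicode block U+AC00-U+D7A3)
root ::= [가-힣]+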
More details can be found in this doc from llama-cpp. An advanced grammar debugging guide can be found here.
Automatic Grammar Generation
Here is an awesome tool to generate grammars for you: Grammar Builder
Grammar Collection
We provide a collection of grammars in the examples/grammars folder, which are mostly identical to the grammars in the llama-cpp project. We try to keep them up-to-date with the original llama-cpp grammars, but we cannot yet guarantee that every llama-cpp grammar can be used in transformers-CFG unchanged.
The collection contains the following grammars (a loading sketch follows the list):
- json.ebnf: A grammar for generating valid JSON objects.
- json_arr.ebnf: A grammar for generating valid JSON arrays.
- c.ebnf: A grammar for generating valid C programs.
- chess.ebnf: A grammar for generating valid chess moves.
- arithmetic.ebnf: A grammar for generating valid arithmetic expressions.
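Each of these can be loaded the same way as json.ebnf in the QuickStart. A minimal sketch, assuming the grammar's start rule is named root (as in json.ebnf) and reusing the tokenizer created in the QuickStart:
from transformers_cfg.grammar_utils import IncrementalGrammarConstraint
from transformers_cfg.generation.logits_process import GrammarConstrainedLogitsProcessor

# Reuses the `tokenizer` created in the QuickStart above
with open("examples/grammars/arithmetic.ebnf", "r") as file:
    grammar_str = file.read()

grammar = IncrementalGrammarConstraint(grammar_str, "root", tokenizer)
grammar_processor = GrammarConstrainedLogitsProcessor(grammar)
# Pass [grammar_processor] to model.generate(...) exactly as in the QuickStart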
Supported Models
- LLaMa family models
- GPT family models
- Bloom family models
- Mistral family models
- Falcon family models
- ...
See supported_models.yaml for the full list of supported models.
As a rule of thumb, any model that uses the same tokenizer as one of the supported models should work out of the box. If you find a model that is not supported, please open an issue or submit a pull request.
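There is no official compatibility checker shown here; as a rough heuristic sketch (the model name is an illustrative assumption), you can simply try building the constraint for your tokenizer and see whether it succeeds:
from transformers import AutoTokenizer
from transformers_cfg.grammar_utils import IncrementalGrammarConstraint

model_id = "bigscience/bloom-560m"  # assumed model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)

with open("examples/grammars/json.ebnf", "r") as file:
    grammar_str = file.read()

# Heuristic only: if construction succeeds, the tokenizer could be mapped onto the grammar
try:
    IncrementalGrammarConstraint(grammar_str, "root", tokenizer)
    print(f"{model_id}: tokenizer could be mapped onto the grammar")
except Exception as exc:
    print(f"{model_id}: constraint construction failed: {exc}")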
Citation
Please consider citing our work if you find the provided resources useful.
@inproceedings{geng-etal-2023-grammar,
    title = {Grammar-Constrained Decoding for Structured {NLP} Tasks without Finetuning},
    author = {Geng, Saibo and Josifoski, Martin and Peyrard, Maxime and West, Robert},
    editor = {Bouamor, Houda and Pino, Juan and Bali, Kalika},
    booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing},
    month = dec,
    year = {2023},
    address = {Singapore},
    publisher = {Association for Computational Linguistics},
    url = {https://aclanthology.org/2023.emnlp-main.674}
}
License
This project is licensed under the MIT License.
Acknowledgement
This project is derived from the torch-grammars project, which was derived from the llama-cpp project.