Instruction/chat prompt creation library for text-generation LLMs. It supports local and Hugging Face models.

Project description

llm-templates

llm-templates is a conversation formatter for chat models. It lets you format a chat conversation with the same template that the language model was trained with. The library is installed with pip:

pip install llm-templates

You can quickly get started with the library using the accompanying Colab notebook (Open In Colab).

The library has built-in templates for the following models:

  • zephyr
  • llama2
  • llama3
  • mistral
  • gemma
  • cohere
  • phi3

...and Hugging Face models, using their Jinja2 chat templates when a tokenizer_config.json file is available.
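
For comparison, this Hugging Face path relies on the model's own Jinja2 chat template; the same template can also be applied directly through the transformers tokenizer. Below is a minimal sketch of that mechanism (assuming the transformers package is installed and the model repository defines a chat template; it is not part of llm-templates):

from transformers import AutoTokenizer

# Minimal sketch, not part of llm-templates: apply a model's own Jinja2 chat
# template via the transformers tokenizer (requires the model repository to
# define a chat template in tokenizer_config.json).
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "How can I help you?"},
    {"role": "user", "content": "Write a poem about the sea"},
]
# tokenize=False returns the rendered prompt string instead of token ids;
# add_generation_prompt appends the assistant header so the model continues the chat.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)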

This is a quick example with the Llama 3 model:

from llm_templates import Formatter, Conversation, Content

messages = [Content(role="user", content="Hello!"),
            Content(role="assistant", content="How can I help you?"),
            Content(role="user", content="Write a poem about the sea")]

conversation = Conversation(model='llama3', messages=messages)
conversation_str = Formatter().render(conversation, add_assistant_prompt=True)

print(conversation_str)

And the result will be:

<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Hello!<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
How can I help you?<|eot_id|>
<|start_header_id|>user<|end_header_id|>
Write a poem about the sea<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

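The rendered string can then be passed to any text-generation backend. As a rough sketch (independent of llm-templates, and assuming access to the gated meta-llama/Meta-Llama-3-8B-Instruct weights), it could be fed to a transformers pipeline:

from transformers import pipeline

# Rough sketch, independent of llm-templates: generate a completion from the
# rendered prompt. Assumes the gated meta-llama/Meta-Llama-3-8B-Instruct
# weights are accessible.
generator = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")
# return_full_text=False returns only the newly generated text, not the prompt.
outputs = generator(conversation_str, max_new_tokens=256, return_full_text=False)
print(outputs[0]["generated_text"])
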
Introduction

Many models are based on foundational or pre-trained LLMs, which are then retrained (fine-tuned) with specially designed instruction datasets to improve and refine the abilities of these models on specific tasks:

[Figure: transfer learning]

These data sets typically include a variety of text examples, which can range from questions and answers to instructions and answers. The main purpose is to teach the model how to follow instructions or how to appropriately respond to certain types of requests.

When a language model like GPT-3 or GPT-4 is “fine-tuned” on these datasets, it learns to better understand and perform the tasks presented to it. For example, it may become better at understanding complex instructions, generating more relevant and accurate responses, or adapting to specific communication styles. This is particularly useful for specialized applications, where the model is required to understand and respond appropriately to a specific set of instructions or questions related to a particular field or topic, or for use in dialog systems.

The process of adapting base LLMs into models trained to follow instructions (instruction-following) is called alignment (see https://openai.com/research/instruction-following):

[Figure: alignment]

Instruction datasets are used for fine-tuning Large Language Models (LLMs). This fine-tuning typically uses supervised learning, and each example includes both an input string and an expected output string. The input and output strings follow a template known as the instruction dataset format (for example, [INST] <<SYS>>). OpenAI's ChatML and Stanford's Alpaca are examples of such formats. Below is the instruction format used by Alpaca for fine-tuning, which includes context information (the input field below):

Below is an instruction that describes a task, paired with an input that provides 
further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
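
As a concrete illustration (not part of llm-templates), the placeholders in the Alpaca template above can be filled with ordinary string formatting; the instruction and input values here are made up for the example:

# Illustrative sketch, not part of llm-templates: fill the Alpaca template
# above with a made-up instruction and context string.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that provides\n"
    "further context. Write a response that appropriately completes the request.\n"
    "\n"
    "### Instruction:\n{instruction}\n"
    "\n"
    "### Input:\n{input}\n"
    "\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Summarize the following text in one sentence.",
    input="The sea covers about 70% of the Earth's surface.",
)
print(prompt)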

Because the models have been fine-tuned (trained) to generate text in dialogue or query contexts, at inference time we need to format our prompts in the same way, so as not to degrade the quality of our queries or dialogues.

In conversations or instruction requests, each message has a role and content, the latter being the actual text of the message. Commonly, the roles are "user" for messages sent by the user, "assistant" for responses written by the model, and optionally, "system" for high-level directives given at the beginning of the conversation.

If all this seems a bit abstract, here is a chat example to make it more concrete:

[
{"role": "user", "content": "Hello!"},
{"role": "assistant", "content": "Nice to meet you!"}
]

This sequence of messages needs to be converted into a text string before it can be tokenized and used as input to a model. The problem, however, is that there are many ways to do this conversion. We could, for example, convert the message list into an Instant Messenger format:

User: Hello!
Assistant: Nice to meet you!

Or we could add special tokens to indicate the roles:

[USER] Hello! [/USER]
[ASST] Nice to meet you! [/ASST]

Or we could also add tokens to indicate boundaries between messages, but insert the role information as a string:

<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>

There are many ways to do this, and none of them is inherently best or correct; the right choice depends on how each model was trained. The previous examples are not invented: they are real formats used by popular models. Once a model has been trained with a certain format, we want to ensure that future inputs use the same format, otherwise we can degrade the model's performance.
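
As a toy illustration of how these layouts differ, the following sketch (independent of the library) flattens the same two-message list into two of the formats above:

# Toy sketch, independent of llm-templates: two hand-rolled ways to flatten the
# same message list into a prompt string. Real chat templates are model-specific.
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Nice to meet you!"},
]

# 1. "Instant messenger" style
im_style = "\n".join(f"{m['role'].capitalize()}: {m['content']}" for m in messages)

# 2. ChatML-like style with boundary tokens
chatml_style = "".join(
    f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
)

print(im_style)
print(chatml_style)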

Library usage

This is where llm-templates comes in. It is a Python package that provides a simple and flexible way to convert a list of messages into a string that can be used as input to a model. It also provides a way to convert the model's output back into a list of messages. The package is designed to be easy to use, to work with a wide range of models and formats, and to let you customize the conversion process to suit your needs.

from llm_templates import Formatter, Conversation
messages = [
    {
        "role": "user",
        "content": "Hello!"
    },
    {
        "role": "assistant",
        "content": "How can I help you?"
    },
    {
        "role": "user",
        "content": "Write a poem about the sea"
    }
]

formatter = Formatter()

# Local model
conversation = Conversation(model="zephyr", messages=messages)
print(formatter.render(conversation, add_assistant_prompt=True))

And the output using the zephyr model template will be:

<|user|>Hello!</s>
<|assistant|>How can I help you?</s>
<|user|>Write a poem about the sea</s>
<|assistant|></s>

Another example using llama2 model:

formatter = Formatter()

# Local model
conversation = Conversation(model="llama2", messages=messages)
print(formatter.render(conversation))

The output will be:

<s>[INST] Hello! [/INST]
How can I help you? </s>
<s>[INST] Write a poem about the sea [/INST]

You can also use HuggingFace models:

from llm_templates import Formatter, Conversation, Content

messages = [Content(role="user", content="Hello!"),
            Content(role="assistant", content="How can I help you?"),
            Content(role="user", content="Write a poem about the sea")]

formatter = Formatter()

# Apply Hugging Face Mixtral model template
model = "mistralai/Mixtral-8x7B-Instruct-v0.1"
# model = "HuggingFaceH4/zephyr-7b-beta"
conversation = Conversation(model=model, messages=messages)
conversation_str = formatter.render(conversation)

print(conversation_str)

# And then call the model via the Hugging Face Inference API
from huggingface_hub import InferenceClient
client = InferenceClient()

result = client.text_generation(prompt=conversation_str, model=model, max_new_tokens=768, temperature=0.7, top_p=0.9,
                                top_k=50)
print(result)

And the result will be something like this:

The sea, a vast and endless blue,
A world of wonder, forever new.
Its waves crash down with gentle might,
A symphony of nature's sight.

Beneath the surface, secrets lie,
A realm where creatures roam and fly.
Coral castles, home to life,
A world at peace, amidst the strife.

....

