Python Bindings for llama.cpp
Simple Python bindings for @ggerganov's llama.cpp library.
This package provides:
- Low-level access to C API via ctypes interface
- High-level Python API for text completion
- OpenAI-like API
- LangChain compatibility
- LlamaIndex compatibility
- OpenAI compatible web server
Documentation is available at https://llama-cpp-python.readthedocs.io/en/latest.
Installation
Requirements:
- Python 3.8+
- C compiler
- Linux: gcc or clang
- Windows: Visual Studio or MinGW
- MacOS: Xcode
To install the package, run:
pip install llama-cpp-python-win==0.3.21
This will also build llama.cpp from source and install it alongside this Python package.
If this fails, add --verbose to the pip install command to see the full cmake build log.
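For example:
pip install llama-cpp-python-win==0.3.21 --verbose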
Pre-built Wheel (New)
It is also possible to install a pre-built wheel with basic CPU support.
pip install llama-cpp-python-win==0.3.21 \
--extra-index-url https://github.com/Srinadhch07/llama-cpp-python-wheels/releases/download/v0.3.21/llama_cpp_python_win-0.3.21-cp314-cp314-win_amd64.whl
High-level API
The high-level API provides a simple managed interface through the Llama class.
Below is a short example demonstrating how to use the high-level API for basic text completion:
from llama_cpp import Llama
llm = Llama(
model_path="./models/7B/llama-model.gguf",
# n_gpu_layers=-1, # Uncomment to use GPU acceleration
# seed=1337, # Uncomment to set a specific seed
# n_ctx=2048, # Uncomment to increase the context window
)
output = llm(
"Q: Name the planets in the solar system? A: ", # Prompt
max_tokens=32, # Generate up to 32 tokens, set to None to generate up to the end of the context window
stop=["Q:", "\n"], # Stop generating just before the model would generate a new question
echo=True # Echo the prompt back in the output
) # Generate a completion, can also call create_completion
print(output)
By default llama-cpp-python generates completions in an OpenAI compatible format:
{
"id": "cmpl-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"object": "text_completion",
"created": 1679561337,
"model": "./models/7B/llama-model.gguf",
"choices": [
{
"text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.",
"index": 0,
"logprobs": None,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 14,
"completion_tokens": 28,
"total_tokens": 42
}
}
Text completion is available through the __call__ and create_completion methods of the Llama class.
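For example, here is a minimal sketch of the same completion issued through create_completion with streaming enabled; each chunk is assumed to follow the same OpenAI-style layout shown above, and the prompt is only an illustration:
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/llama-model.gguf")

# stream=True yields partial chunks instead of a single response dict
for chunk in llm.create_completion(
    "Q: Name the planets in the solar system? A: ",
    max_tokens=32,
    stop=["Q:", "\n"],
    stream=True,
):
    print(chunk["choices"][0]["text"], end="", flush=True)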
Pulling models from Hugging Face Hub
You can download Llama models in gguf format directly from Hugging Face using the from_pretrained method.
You'll need to install the huggingface-hub package to use this feature (pip install huggingface-hub).
llm = Llama.from_pretrained(
repo_id="Qwen/Qwen2-0.5B-Instruct-GGUF",
filename="*q8_0.gguf",
verbose=False
)
By default from_pretrained will download the model to the Hugging Face cache directory; you can then manage installed model files with the huggingface-cli tool.
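The returned object is an ordinary Llama instance, so the completion API shown earlier works unchanged. A minimal sketch (the prompt is only an illustration):
output = llm(
    "Q: What is the capital of France? A: ",
    max_tokens=16,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
The huggingface-cli scan-cache command lists the models currently stored in that cache.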
Chat Completion
The high-level API also provides a simple interface for chat completion.
Chat completion requires that the model knows how to format the messages into a single prompt.
The Llama class does this using pre-registered chat formats (e.g. chatml, llama-2, gemma) or by providing a custom chat handler object.
The messages will be formatted into a single prompt using the following order of precedence:
- Use the chat_handler if provided
- Use the chat_format if provided
- Use the tokenizer.chat_template from the gguf model's metadata (should work for most new models, older models may not have this)
- else, fall back to the llama-2 chat format
Set verbose=True to see the selected chat format.
from llama_cpp import Llama
llm = Llama(
model_path="path/to/llama-2/llama-model.gguf",
chat_format="llama-2"
)
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are an assistant who perfectly describes images."},
{
"role": "user",
"content": "Describe this image in detail please."
}
]
)
Chat completion is available through the create_chat_completion method of the Llama class.
For OpenAI API v1 compatibility, you can use the create_chat_completion_openai_v1 method, which will return pydantic models instead of dicts.
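A minimal sketch of the pydantic variant; the attribute access below assumes the OpenAI v1 response types, so fields are read as attributes rather than dict keys:
# Assumes llm was loaded as shown above
response = llm.create_chat_completion_openai_v1(
    messages=[{"role": "user", "content": "Say hello in one short sentence."}]
)
print(response.choices[0].message.content)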
JSON and JSON Schema Mode
To constrain chat responses to only valid JSON or a specific JSON Schema use the response_format argument in create_chat_completion.
JSON Mode
The following example will constrain the response to valid JSON strings only.
from llama_cpp import Llama
llm = Llama(model_path="path/to/model.gguf", chat_format="chatml")
llm.create_chat_completion(
messages=[
{
"role": "system",
"content": "You are a helpful assistant that outputs in JSON.",
},
{"role": "user", "content": "Who won the world series in 2020"},
],
response_format={
"type": "json_object",
},
temperature=0.7,
)
JSON Schema Mode
To constrain the response further to a specific JSON Schema add the schema to the schema property of the response_format argument.
from llama_cpp import Llama
llm = Llama(model_path="path/to/model.gguf", chat_format="chatml")
llm.create_chat_completion(
messages=[
{
"role": "system",
"content": "You are a helpful assistant that outputs in JSON.",
},
{"role": "user", "content": "Who won the world series in 2020"},
],
response_format={
"type": "json_object",
"schema": {
"type": "object",
"properties": {"team_name": {"type": "string"}},
"required": ["team_name"],
},
},
temperature=0.7,
)
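The constrained output arrives as a string in the message content. Assuming the OpenAI-style response layout shown earlier, it can be parsed back into a Python object:
import json

# response = llm.create_chat_completion(...)  # the schema-constrained call above
content = response["choices"][0]["message"]["content"]
print(json.loads(content)["team_name"])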
Function Calling
The high-level API supports OpenAI compatible function and tool calling. This is possible through the functionary pre-trained models' chat format or through the generic chatml-function-calling chat format.
from llama_cpp import Llama
llm = Llama(model_path="path/to/chatml/llama-model.gguf", chat_format="chatml-function-calling")
llm.create_chat_completion(
messages = [
{
"role": "system",
"content": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. The assistant calls functions with appropriate input when necessary"
},
{
"role": "user",
"content": "Extract Jason is 25 years old"
}
],
tools=[{
"type": "function",
"function": {
"name": "UserDetail",
"parameters": {
"type": "object",
"title": "UserDetail",
"properties": {
"name": {
"title": "Name",
"type": "string"
},
"age": {
"title": "Age",
"type": "integer"
}
},
"required": [ "name", "age" ]
}
}
}],
tool_choice={
"type": "function",
"function": {
"name": "UserDetail"
}
}
)
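The call above returns a dict. Assuming it follows the OpenAI chat completion layout, where function arguments arrive as a JSON string, the extracted fields can be read back like this:
import json

# response = llm.create_chat_completion(...)  # the tool-calling request shown above
tool_call = response["choices"][0]["message"]["tool_calls"][0]
print(tool_call["function"]["name"])                 # "UserDetail"
args = json.loads(tool_call["function"]["arguments"])
print(args.get("name"), args.get("age"))             # e.g. "Jason" 25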
Functionary v2
The various gguf-converted files for this set of models can be found here. Functionary is able to intelligently call functions and also analyze any provided function outputs to generate coherent responses. All v2 models of Functionary support parallel function calling. You can provide either functionary-v1 or functionary-v2 for the chat_format when initializing the Llama class.
Due to discrepancies between llama.cpp and Hugging Face's tokenizers, it is required to provide an HF tokenizer for Functionary. The LlamaHFTokenizer class can be initialized and passed into the Llama class; this overrides the default llama.cpp tokenizer used in the Llama class. The tokenizer files are already included in the respective HF repositories hosting the gguf files.
from llama_cpp import Llama
from llama_cpp.llama_tokenizer import LlamaHFTokenizer
llm = Llama.from_pretrained(
repo_id="meetkai/functionary-small-v2.2-GGUF",
filename="functionary-small-v2.2.q4_0.gguf",
chat_format="functionary-v2",
tokenizer=LlamaHFTokenizer.from_pretrained("meetkai/functionary-small-v2.2-GGUF")
)
NOTE: There is no need to provide the default system messages used in Functionary as they are added automatically in the Functionary chat handler. Thus, the messages should contain just the chat messages and/or system messages that provide additional context for the model (e.g. the current datetime).
License
This project is licensed under the terms of the MIT license.
File details
Details for the file llama_cpp_python_win-0.3.21-cp314-cp314-win_amd64.whl.
File metadata
- Download URL: llama_cpp_python_win-0.3.21-cp314-cp314-win_amd64.whl
- Upload date:
- Size: 7.0 MB
- Tags: CPython 3.14, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.14.2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 6999c8f9c4cdeecd1feffb41df645980f9f4f298058f833ea3f166b99a158952 |
| MD5 | 8fdfd81023e8c8150610e7c11f6a1517 |
| BLAKE2b-256 | de1a37adbdba8727d2156f63a0ff731ad3ef7f7fd94a9fa218f80ba37c4522a2 |