llama-cpp-haystack

An integration between the llama.cpp LLM framework and Haystack


A custom component for Haystack (2.x) for running LLMs with the llama.cpp framework. This implementation leverages the Python bindings for llama.cpp (llama-cpp-python).

Table of Contents

- Installation
- Usage
- Example
- License

Installation

pip install llama-cpp-haystack

By default, llama.cpp is built for CPU only on Linux and Windows, and with Metal support on macOS.

To install with a different backend, first install llama-cpp-python following the instructions in its installation documentation, and then install llama-cpp-haystack.

For example, to use llama-cpp-haystack with the cuBLAS backend:

# Build llama-cpp-python with CUDA (cuBLAS) support, then install the integration
export LLAMA_CUBLAS=1
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
pip install llama-cpp-haystack

Usage

You can use the LlamaCppGenerator to load models quantized with llama.cpp (GGUF format) for text generation.

Information about the supported models and model parameters can be found in the llama.cpp documentation.

GGUF versions of popular models can be downloaded from Hugging Face.
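
For instance, a quantized model file can be fetched programmatically with the huggingface_hub library (a minimal sketch, assuming the package is installed via "pip install huggingface_hub"; the repository and file names below are simply the ones used in the examples that follow):

from huggingface_hub import hf_hub_download

# Download the quantized GGUF file to the local Hugging Face cache
# and get back its local path.
model_path = hf_hub_download(
    repo_id="TheBloke/openchat-3.5-1210-GGUF",
    filename="openchat-3.5-1210.Q3_K_S.gguf",
)
print(model_path)  # pass this path to LlamaCppGenerator as model_path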

Passing additional model parameters

The model_path, n_ctx, and n_batch arguments are exposed for convenience and can be passed directly to the Generator during initialization as keyword arguments.

The model_kwargs parameter can be used to pass additional arguments when initializing the model. If there is any overlap, these kwargs override the model_path, n_ctx, and n_batch initialization parameters.

See Llama.cpp's model documentation for more information on the available model arguments.

For example, to offload the model to GPU during initialization:

from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

generator = LlamaCppGenerator(
    model_path="/content/openchat-3.5-1210.Q3_K_S.gguf",
    n_ctx=512,
    n_batch=128,
    model_kwargs={"n_gpu_layers": -1},  # -1 offloads all model layers to the GPU
)
generator.warm_up()

question = "Who is the best American actor?"
prompt = f"GPT4 Correct User: {question} <|end_of_turn|> GPT4 Correct Assistant:"

result = generator.run(prompt, generation_kwargs={"max_tokens": 128})
generated_text = result["replies"][0]

print(generated_text)

Passing generation parameters

The generation_kwargs parameter can be used to pass additional generation arguments, such as max_tokens, temperature, top_k, and top_p, to the model during inference.

See Llama.cpp's create_completion documentation for more information on the available generation arguments.

For example, to set the max_tokens and temperature:

from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

generator = LlamaCppGenerator(
    model_path="/content/openchat-3.5-1210.Q3_K_S.gguf",
    n_ctx=512,
    n_batch=128,
    generation_kwargs={"max_tokens": 128, "temperature": 0.1},
)
generator.warm_up()

question = "Who is the best American actor?"
prompt = f"GPT4 Correct User: {question} <|end_of_turn|> GPT4 Correct Assistant:"

result = generator.run(prompt)
generated_text = result["replies"][0]

print(generated_text)

The generation_kwargs can also be passed directly to the generator's run method:

from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

generator = LlamaCppGenerator(
    model_path="/content/openchat-3.5-1210.Q3_K_S.gguf",
    n_ctx=512,
    n_batch=128,
)
generator.warm_up()

question = "Who is the best American actor?"
prompt = f"GPT4 Correct User: {question} <|end_of_turn|> GPT4 Correct Assistant:"

result = generator.run(
    prompt,
    generation_kwargs={"max_tokens": 128, "temperature": 0.1},
)
generated_text = result["replies"][0]

print(generated_text)
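
The run method returns a dictionary: besides the replies, it also carries per-reply metadata from llama.cpp (a small sketch; the exact contents of meta depend on the llama-cpp-python version):

# Inspect everything returned by the generator
print(result.keys())      # dict_keys(['replies', 'meta'])
print(result["meta"][0])  # raw completion metadata from llama.cpp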

Example

Below is an example Retrieval-Augmented Generation (RAG) pipeline that uses the Simple Wikipedia dataset from Hugging Face. You can find more examples in the examples folder.

Load the dataset:

# Install HuggingFace Datasets using "pip install datasets"
from datasets import load_dataset
from haystack import Document, Pipeline
from haystack.components.builders.answer_builder import AnswerBuilder
from haystack.components.builders.prompt_builder import PromptBuilder
from haystack.components.embedders import SentenceTransformersDocumentEmbedder, SentenceTransformersTextEmbedder
from haystack.components.retrievers import InMemoryEmbeddingRetriever
from haystack.components.writers import DocumentWriter
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Import LlamaCppGenerator
from haystack_integrations.components.generators.llama_cpp import LlamaCppGenerator

# Load first 100 rows of the Simple Wikipedia Dataset from HuggingFace
dataset = load_dataset("pszemraj/simple_wikipedia", split="validation[:100]")

docs = [
    Document(
        content=doc["text"],
        meta={
            "title": doc["title"],
            "url": doc["url"],
        },
    )
    for doc in dataset
]

Index the documents into the InMemoryDocumentStore using the SentenceTransformersDocumentEmbedder and DocumentWriter:

doc_store = InMemoryDocumentStore(embedding_similarity_function="cosine")
doc_embedder = SentenceTransformersDocumentEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")

# Indexing Pipeline
indexing_pipeline = Pipeline()
indexing_pipeline.add_component(instance=doc_embedder, name="DocEmbedder")
indexing_pipeline.add_component(instance=DocumentWriter(document_store=doc_store), name="DocWriter")
indexing_pipeline.connect("DocEmbedder", "DocWriter")

indexing_pipeline.run({"DocEmbedder": {"documents": docs}})
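
As an optional sanity check, you can confirm that the embedded documents were written to the store (count_documents is part of Haystack's document store interface):

# The store should now hold the 100 embedded documents
print(doc_store.count_documents())  # 100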

Create the Retrieval Augmented Generation (RAG) pipeline and add the LlamaCppGenerator to it:

# Prompt Template for the https://huggingface.co/openchat/openchat-3.5-1210 LLM
prompt_template = """GPT4 Correct User: Answer the question using the provided context.
Question: {{question}}
Context:
{% for doc in documents %}
    {{ doc.content }}
{% endfor %}
<|end_of_turn|>
GPT4 Correct Assistant:
"""

rag_pipeline = Pipeline()

text_embedder = SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2")

# Load the LLM using LlamaCppGenerator
model_path = "openchat-3.5-1210.Q3_K_S.gguf"
generator = LlamaCppGenerator(model_path=model_path, n_ctx=4096, n_batch=128)

rag_pipeline.add_component(
    instance=text_embedder,
    name="text_embedder",
)
rag_pipeline.add_component(instance=InMemoryEmbeddingRetriever(document_store=doc_store, top_k=3), name="retriever")
rag_pipeline.add_component(instance=PromptBuilder(template=prompt_template), name="prompt_builder")
rag_pipeline.add_component(instance=generator, name="llm")
rag_pipeline.add_component(instance=AnswerBuilder(), name="answer_builder")

rag_pipeline.connect("text_embedder", "retriever")
rag_pipeline.connect("retriever", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "llm")
rag_pipeline.connect("llm.replies", "answer_builder.replies")
rag_pipeline.connect("retriever", "answer_builder.documents")

Run the pipeline:

question = "Which year did the Joker movie release?"
result = rag_pipeline.run(
    {
        "text_embedder": {"text": question},
        "prompt_builder": {"question": question},
        "llm": {"generation_kwargs": {"max_tokens": 128, "temperature": 0.1}},
        "answer_builder": {"query": question},
    }
)

generated_answer = result["answer_builder"]["answers"][0]
print(generated_answer.data)
# The Joker movie was released on October 4, 2019.
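
The AnswerBuilder returns GeneratedAnswer objects that also carry the documents retrieved for the query, so you can inspect the sources behind the answer (the meta keys below are the ones set during indexing):

# Print the title and URL of each retrieved source document
for doc in generated_answer.documents:
    print(doc.meta["title"], "-", doc.meta["url"])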

License

llama-cpp-haystack is distributed under the terms of the Apache-2.0 license.
