
Integration: TitanML

Use TitanML's Takeoff server to serve local models efficiently with Haystack 2.0


You can use the Takeoff inference server to deploy local models efficiently in your Haystack 2.0 pipelines. Takeoff is a state-of-the-art inference server focused on deploying openly available language models at scale. It can run LLMs on local machines with consumer GPUs, as well as on cloud infrastructure.

The TakeoffGenerator component in Haystack 2.0 is a wrapper around the Takeoff server API and can be used to serve Takeoff-deployed models efficiently in Haystack pipelines.


pip install takeoff_haystack


You can interact with Takeoff-deployed models using the TakeoffGenerator component in Haystack. To do so, you must have a Takeoff model deployed; for instructions, see the Takeoff documentation.

The following example uses Takeoff to deploy a Llama 2 7B Chat model locally on port 3000.

docker run --gpus all -e TAKEOFF_MODEL_NAME=TheBloke/Llama-2-7B-Chat-AWQ \
                      -e TAKEOFF_DEVICE=cuda \
                      -e TAKEOFF_MAX_SEQUENCE_LENGTH=256 \
                      -it \
                      -p 3000:3000 tytn/takeoff-pro:0.11.0-gpu


Below is an example of using Takeoff models in a Haystack RAG pipeline that summarizes headlines from popular technology news sites.

from typing import Dict, List
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore
import feedparser
from takeoff_haystack import TakeoffGenerator

# Dict of website RSS feeds  
urls = {
  'theverge': '',
  'techcrunch': '',
  'mashable': '',
  'cnet': '',
  'engadget': '',
  'zdnet': '',
  'venturebeat': '',
  'readwrite': '',
  'wired': '',
  'gizmodo': '',
}

# Configurable parameters (example values)
NUM_WEBSITES = 3
NUM_TITLES = 2

def get_titles(urls: Dict[str, str], num_sites: int, num_titles: int) -> List[str]:
  titles: List[str] = []
  sites = list(urls.keys())[:num_sites]
  for site in sites:
    feed = feedparser.parse(urls[site])  
    entries = feed.entries[:num_titles]
    for entry in entries:
      titles.append(entry.title)
  return titles
titles = get_titles(urls, NUM_WEBSITES, NUM_TITLES)
titles_string = " - ".join(titles)

document_store = InMemoryDocumentStore()
document_store.write_documents([Document(content=title) for title in titles])

template = """
{% for document in documents %}
  {{ document.content }}  
{% endfor %}
REQUEST: {{ query }}
"""

pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
pipe.add_component("prompt_builder", PromptBuilder(template=template))
pipe.add_component("llm", TakeoffGenerator(base_url="http://localhost", port="3000"))
pipe.connect("retriever", "prompt_builder.documents")
pipe.connect("prompt_builder", "llm")

query = f"Summarize each of the {NUM_WEBSITES * NUM_TITLES} provided headlines in three words."
response = pipe.run({"prompt_builder": {"query": query}, "retriever": {"query": query}})
print(response["llm"]["replies"])
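Since the feed URLs in the example are left blank, here is a self-contained sketch of just the title-extraction step, using only the standard library and an inline RSS sample rather than the live feeds (feedparser performs the equivalent parsing in the pipeline above; the sample feed and helper name are illustrative, not part of takeoff_haystack):

```python
import xml.etree.ElementTree as ET
from typing import List

# A minimal RSS 2.0 document standing in for a real feed
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Tech Feed</title>
    <item><title>Headline one</title></item>
    <item><title>Headline two</title></item>
  </channel>
</rss>"""

def extract_titles(rss_xml: str, num_titles: int) -> List[str]:
    """Pull the first num_titles item titles out of an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    titles = [item.findtext("title") for item in root.iter("item")]
    return titles[:num_titles]

print(extract_titles(SAMPLE_RSS, 2))  # ['Headline one', 'Headline two']
```

Each extracted title becomes one Document in the in-memory store, which is what gives the BM25 retriever something to match against.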

You should see a response like the following:

['\n\n\nANSWER:\n\n1. Poker Roguelike - Exciting gameplay\n2. AI-powered news reader - Personalized feed\n3. Best laptops MWC 2024 - Powerful devices']
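The generator returns the reply as a single string inside a list. For downstream use you may want the numbered summaries as separate items; the helper below is an illustrative sketch (not part of takeoff_haystack) that splits a reply of the shape shown above:

```python
import re
from typing import List

def parse_summaries(reply: str) -> List[str]:
    """Extract the numbered summary lines from a raw Takeoff reply string."""
    summaries = []
    for line in reply.splitlines():
        # Match lines like "1. Poker Roguelike - Exciting gameplay"
        match = re.match(r"\s*\d+\.\s*(.+)", line)
        if match:
            summaries.append(match.group(1).strip())
    return summaries

reply = (
    "\n\n\nANSWER:\n\n"
    "1. Poker Roguelike - Exciting gameplay\n"
    "2. AI-powered news reader - Personalized feed\n"
    "3. Best laptops MWC 2024 - Powerful devices"
)
print(parse_summaries(reply))
# ['Poker Roguelike - Exciting gameplay', 'AI-powered news reader - Personalized feed', 'Best laptops MWC 2024 - Powerful devices']
```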
