
LlamaIndex Llms Integration: Pipeshift

Pipeshift provides fast, scalable infrastructure for fine-tuning and serving open-source LLMs. It abstracts away the training and inference infrastructure and the tooling around it, enabling engineering teams to get to production with built-in optimizations and one-click deployments.

Installation

  1. Install the required Python packages:

    %pip install llama-index-llms-pipeshift
    %pip install llama-index
    
  2. Set the PIPESHIFT_API_KEY as an environment variable or pass it directly to the class constructor.

  3. Choose any of the pre-deployed models, or one you have deployed yourself, from the Deployments section of the Pipeshift dashboard.
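
As an alternative to exporting the variable in your shell, you can set it from Python before constructing the class. A minimal sketch (the `"YOUR_API_KEY"` value is a placeholder; use the key from your Pipeshift dashboard):

```python
import os

# Set the key only if it is not already present in the environment,
# so an exported shell variable still takes precedence.
os.environ.setdefault("PIPESHIFT_API_KEY", "YOUR_API_KEY")
```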

Usage

Basic Completion

To generate a simple completion, use the complete method:

from llama_index.llms.pipeshift import Pipeshift

llm = Pipeshift(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    # api_key="YOUR_API_KEY" # alternative way to pass api_key if not specified in environment variable
)
res = llm.complete("supercars are ")
print(res)

Example output:

Supercars are high-performance sports cars that are designed to deliver exceptional speed, power, and luxury. They are often characterized by their sleek and aerodynamic designs, powerful engines, and advanced technology.

Basic Chat

To simulate a chat with multiple messages:

from llama_index.core.llms import ChatMessage
from llama_index.llms.pipeshift import Pipeshift

messages = [
    ChatMessage(
        role="system", content="You are a salesperson at a supercar showroom"
    ),
    ChatMessage(role="user", content="why should I pick the Porsche 911 GT3 RS?"),
]
res = Pipeshift(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct", max_tokens=50
).chat(messages)
print(res)

Example output:

assistant: 1. Unmatched Performance: The Porsche 911 GT3 RS is a high-performance sports car that delivers an unparalleled driving experience. It boasts a powerful 4.0-liter flat

Streaming Completion

To stream a response in real-time using stream_complete:

from llama_index.llms.pipeshift import Pipeshift

llm = Pipeshift(model="meta-llama/Meta-Llama-3.1-8B-Instruct")
resp = llm.stream_complete("porsche GT3 RS is ")

for r in resp:
    print(r.delta, end="")

Example output (partial):

 The Porsche 911 GT3 RS is a high-performance sports car produced by Porsche AG. It is part of the 911 (991 and 992 generations) series and is
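
Each streamed chunk exposes the newly generated text in its delta attribute, so the full completion is just the concatenation of all deltas. A minimal sketch of that accumulation pattern, using a hypothetical fake_stream generator as a stand-in for the network call (the Chunk class below is an illustration, not the library's actual response type):

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    # Stand-in for the response chunks yielded by stream_complete
    delta: str


def fake_stream():
    # Hypothetical generator simulating llm.stream_complete(...)
    for piece in ["The Porsche ", "911 GT3 RS ", "is a high-performance car."]:
        yield Chunk(delta=piece)


# Concatenate the deltas to reconstruct the full response text
full_text = "".join(chunk.delta for chunk in fake_stream())
print(full_text)
```

The same `"".join(...)` idiom works on the real `stream_complete` iterator when you want both live printing and the final assembled string.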

Streaming Chat

For a streamed conversation, use stream_chat:

from llama_index.llms.pipeshift import Pipeshift
from llama_index.core.llms import ChatMessage

llm = Pipeshift(model="meta-llama/Meta-Llama-3.1-8B-Instruct")
messages = [
    ChatMessage(
        role="system", content="You are a salesperson at a supercar showroom"
    ),
    ChatMessage(role="user", content="how fast can the Porsche GT3 RS go?"),
]
resp = llm.stream_chat(messages)

for r in resp:
    print(r.delta, end="")

Example output (partial):

The Porsche 911 GT3 RS is an incredible piece of engineering. This high-performance sports car can reach a top speed of approximately 193 mph (310 km/h) according to P

