LlamaIndex Llms Integration: Pipeshift

Pipeshift provides fast, scalable infrastructure for fine-tuning and serving open-source LLMs. It abstracts away the training and inference infrastructure and the tooling around it, so engineering teams can get to production with built-in optimizations and one-click deployments.

Installation

  1. Install the required Python packages:

    %pip install llama-index-llms-pipeshift
    %pip install llama-index
    
  2. Set the PIPESHIFT_API_KEY environment variable, or pass the key directly to the class constructor.

  3. Choose one of the pre-deployed models, or a model you have deployed yourself, from the Deployments section of the Pipeshift dashboard.
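If you prefer to configure the key in code rather than in your shell, you can set the environment variable before constructing the client. A minimal sketch ("YOUR_API_KEY" is a placeholder, not a real key):

```python
import os

# Set the key for the current process; the Pipeshift client reads
# PIPESHIFT_API_KEY when no api_key argument is passed to the constructor.
os.environ["PIPESHIFT_API_KEY"] = "YOUR_API_KEY"
```

Note that this only affects the current process; for anything persistent, export the variable in your shell profile or deployment config instead.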

Usage

Basic Completion

To generate a simple completion, use the complete method:

from llama_index.llms.pipeshift import Pipeshift

llm = Pipeshift(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    # api_key="YOUR_API_KEY" # alternative way to pass api_key if not specified in environment variable
)
res = llm.complete("supercars are ")
print(res)

Example output:

Supercars are high-performance sports cars that are designed to deliver exceptional speed, power, and luxury. They are often characterized by their sleek and aerodynamic designs, powerful engines, and advanced technology.

Basic Chat

To simulate a chat with multiple messages:

from llama_index.core.llms import ChatMessage
from llama_index.llms.pipeshift import Pipeshift

messages = [
    ChatMessage(
        role="system", content="You are sales person at supercar showroom"
    ),
    ChatMessage(role="user", content="why should I pick porsche 911 gt3 rs"),
]
res = Pipeshift(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct", max_tokens=50
).chat(messages)
print(res)

Example output:

assistant: 1. Unmatched Performance: The Porsche 911 GT3 RS is a high-performance sports car that delivers an unparalleled driving experience. It boasts a powerful 4.0-liter flat
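For a multi-turn conversation, you keep appending the assistant's reply and the next user message to the same list before calling chat again. A stdlib-only sketch of that bookkeeping (plain dicts stand in for ChatMessage objects; the reply text is invented for illustration):

```python
# Plain dicts standing in for ChatMessage(role=..., content=...).
history = [
    {"role": "system", "content": "You are a salesperson at a supercar showroom"},
    {"role": "user", "content": "why should I pick porsche 911 gt3 rs"},
]


def add_turn(history, assistant_reply, next_user_message):
    # Record the model's reply, then the follow-up question,
    # so the next chat() call sees the full conversation.
    history.append({"role": "assistant", "content": assistant_reply})
    history.append({"role": "user", "content": next_user_message})
    return history


add_turn(history, "Unmatched performance and track pedigree.", "what about the price?")
```

With the real API you would build the equivalent list of ChatMessage objects and pass it to chat on each turn.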

Streaming Completion

To stream a response in real time, use stream_complete:

from llama_index.llms.pipeshift import Pipeshift

llm = Pipeshift(model="meta-llama/Meta-Llama-3.1-8B-Instruct")
resp = llm.stream_complete("porsche GT3 RS is ")

for r in resp:
    print(r.delta, end="")

Example output (partial):

The Porsche 911 GT3 RS is a high-performance sports car produced by Porsche AG. It is part of the 911 (991 and 992 generations) series and is
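Each streamed chunk exposes only the newly generated text in r.delta, so concatenating the deltas reconstructs the full response. A stdlib-only sketch of that pattern (the generator below merely stands in for stream_complete and is purely illustrative):

```python
def fake_stream():
    # Stands in for llm.stream_complete(...): yields incremental text chunks.
    yield "The Porsche "
    yield "911 GT3 RS "
    yield "is a high-performance "
    yield "sports car."


# Print chunks as they arrive while also accumulating the full text.
parts = []
for delta in fake_stream():
    print(delta, end="")
    parts.append(delta)

full_text = "".join(parts)
```

The same accumulate-as-you-print loop works unchanged with the real stream_complete and stream_chat responses.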

Streaming Chat

For a streamed conversation, use stream_chat:

from llama_index.llms.pipeshift import Pipeshift
from llama_index.core.llms import ChatMessage

llm = Pipeshift(model="meta-llama/Meta-Llama-3.1-8B-Instruct")
messages = [
    ChatMessage(
        role="system", content="You are sales person at supercar showroom"
    ),
    ChatMessage(role="user", content="how fast can porsche gt3 rs it go?"),
]
resp = llm.stream_chat(messages)

for r in resp:
    print(r.delta, end="")

Example output (partial):

The Porsche 911 GT3 RS is an incredible piece of engineering. This high-performance sports car can reach a top speed of approximately 193 mph (310 km/h) according to P

