
LlamaIndex LLMs Integration: Perplexity

Installation

To install the required packages, run:

pip install llama-index-llms-perplexity
pip install llama-index

Setup

Import Libraries and Configure API Key

Import the necessary libraries and set your Perplexity API key:

from llama_index.llms.perplexity import Perplexity

pplx_api_key = "your-perplexity-api-key"  # Replace with your actual API key
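Hardcoding a key is fine for a quick experiment, but for anything shared it is safer to read it from the environment. A minimal sketch (the `PPLX_API_KEY` variable name is an assumption for this example, not something the library requires):

```python
import os

# Read the Perplexity key from the environment, falling back to a placeholder.
# "PPLX_API_KEY" is an arbitrary name chosen for this example.
pplx_api_key = os.environ.get("PPLX_API_KEY", "your-perplexity-api-key")
```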

Initialize the Perplexity LLM

Create an instance of the Perplexity LLM with your API key and desired model settings:

llm = Perplexity(
    api_key=pplx_api_key, model="mistral-7b-instruct", temperature=0.5
)

Chat Example

Sending a Chat Message

You can send a chat message using the chat method. Here’s how to do that:

from llama_index.core.llms import ChatMessage

messages_dict = [
    {"role": "system", "content": "Be precise and concise."},
    {"role": "user", "content": "Tell me 5 sentences about Perplexity."},
]

messages = [ChatMessage(**msg) for msg in messages_dict]

# Get response from the model
response = llm.chat(messages)
print(response)

Async Chat

To send messages asynchronously, you can use the achat method:

response = await llm.achat(messages)
print(response)
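The bare `await` above works in a notebook or any other running event loop; in a plain script you need `asyncio.run`. A minimal sketch of that pattern, with a stand-in coroutine in place of the real `llm.achat` call so it runs without an API key (the reply text is invented for illustration):

```python
import asyncio


async def fake_achat(messages):
    # Stand-in for llm.achat(messages); returns a canned reply.
    return "Perplexity is an AI-powered answer engine."


async def main():
    messages = ["Tell me about Perplexity."]
    # In a real script this would be: response = await llm.achat(messages)
    return await fake_achat(messages)


print(asyncio.run(main()))
```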

Stream Chat

For streaming responses, you can use the stream_chat method:

resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")

Async Stream Chat

To stream responses asynchronously, use the astream_chat method:

resp = await llm.astream_chat(messages)
async for delta in resp:
    print(delta.delta, end="")
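Each streamed chunk carries only an incremental `delta`; joining the deltas reconstructs the full message. A sketch of that pattern using a stub async generator in place of `llm.astream_chat` so it runs offline (the chunk contents are invented for illustration):

```python
import asyncio


class Chunk:
    """Minimal stand-in for a streamed chat chunk with a .delta attribute."""

    def __init__(self, delta):
        self.delta = delta


async def fake_astream_chat():
    # Stand-in for llm.astream_chat(messages): yields incremental chunks.
    for piece in ["Per", "plex", "ity"]:
        yield Chunk(piece)


async def collect():
    parts = []
    async for chunk in fake_astream_chat():
        parts.append(chunk.delta)
    return "".join(parts)


print(asyncio.run(collect()))
```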

LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/perplexity/
