LlamaIndex Llms Integration: Perplexity
Installation
To install the required packages, run:
%pip install llama-index-llms-perplexity
!pip install llama-index
Setup
Import Libraries and Configure API Key
Import the necessary libraries and set your Perplexity API key:
from llama_index.llms.perplexity import Perplexity
pplx_api_key = "your-perplexity-api-key" # Replace with your actual API key
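Rather than hard-coding the key in source, you can read it from an environment variable. A minimal sketch, assuming the variable is named PPLX_API_KEY (the name is an assumption, not something the library requires):

```python
import os

# Read the Perplexity API key from an environment variable, falling
# back to a placeholder. PPLX_API_KEY is an assumed variable name.
pplx_api_key = os.environ.get("PPLX_API_KEY", "your-perplexity-api-key")
```

This keeps the key out of version control; set it in your shell with `export PPLX_API_KEY=...` before running the script.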
Initialize the Perplexity LLM
Create an instance of the Perplexity LLM with your API key and desired model settings:
llm = Perplexity(
api_key=pplx_api_key, model="mistral-7b-instruct", temperature=0.5
)
Chat Example
Sending a Chat Message
You can send a chat message using the chat method:
from llama_index.core.llms import ChatMessage
messages_dict = [
{"role": "system", "content": "Be precise and concise."},
{"role": "user", "content": "Tell me 5 sentences about Perplexity."},
]
messages = [ChatMessage(**msg) for msg in messages_dict]
# Get response from the model
response = llm.chat(messages)
print(response)
Async Chat
To send messages asynchronously, use the achat method:
response = await llm.achat(messages)
print(response)
Stream Chat
For streaming responses, use the stream_chat method:
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
Async Stream Chat
To stream responses asynchronously, use the astream_chat method:
resp = await llm.astream_chat(messages)
async for delta in resp:
    print(delta.delta, end="")
LLM Implementation Example
For a complete worked example, see the LlamaIndex documentation:
https://docs.llamaindex.ai/en/stable/examples/llm/perplexity/