LlamaIndex Llms Integration: Fireworks
Installation
Install the required Python packages:

```shell
%pip install llama-index-llms-fireworks
%pip install llama-index
```

Set the Fireworks API key as an environment variable or pass it directly to the class constructor.
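As a minimal sketch of the environment-variable option, assuming the client reads the key from `FIREWORKS_API_KEY` (the exact variable name is an assumption; check the integration's docs if construction fails):

```python
import os

# Set the key before constructing Fireworks(); the variable name
# FIREWORKS_API_KEY is assumed here.
os.environ["FIREWORKS_API_KEY"] = "YOUR_API_KEY"
print(os.environ["FIREWORKS_API_KEY"])
```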
Usage
Basic Completion
To generate a simple completion, use the `complete` method:

```python
from llama_index.llms.fireworks import Fireworks

resp = Fireworks().complete("Paul Graham is ")
print(resp)
```
Example output:
Paul Graham is a well-known essayist, programmer, and startup entrepreneur. He co-founded Y Combinator, which supported startups like Dropbox, Airbnb, and Reddit.
Basic Chat
To simulate a chat with multiple messages:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.fireworks import Fireworks

messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="What is your name"),
]
resp = Fireworks().chat(messages)
print(resp)
```
Example output:
Arr matey, ye be askin' for me name? Well, I be known as Captain Redbeard the Terrible!
Streaming Completion
To stream a response in real time, use `stream_complete`:

```python
from llama_index.llms.fireworks import Fireworks

llm = Fireworks()
resp = llm.stream_complete("Paul Graham is ")
for r in resp:
    print(r.delta, end="")
```
Example output (partial):
Paul Graham is a well-known essayist, programmer, and venture capitalist...
Streaming Chat
For a streamed conversation, use `stream_chat`:

```python
from llama_index.llms.fireworks import Fireworks
from llama_index.core.llms import ChatMessage

llm = Fireworks()
messages = [
    ChatMessage(
        role="system", content="You are a pirate with a colorful personality"
    ),
    ChatMessage(role="user", content="What is your name"),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```
Example output (partial):
Arr matey, ye be askin' for me name? Well, I be known as Captain Redbeard the Terrible...
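The streaming loops above print each delta as it arrives; if you also need the complete text afterwards, you can accumulate the deltas. A minimal sketch, using a stand-in chunk class so it runs without an API call (each real chunk yielded by `stream_chat` or `stream_complete` exposes a `delta` attribute):

```python
class Chunk:
    """Stand-in for a streamed response chunk; real chunks expose .delta."""

    def __init__(self, delta):
        self.delta = delta


def accumulate(stream):
    # Join the incremental deltas into the complete response text.
    return "".join(chunk.delta for chunk in stream)


chunks = [Chunk("Arr matey, "), Chunk("ye be askin' "), Chunk("for me name?")]
print(accumulate(chunks))  # Arr matey, ye be askin' for me name?
```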
Model Configuration
To configure the model for more specific behavior:

```python
from llama_index.llms.fireworks import Fireworks

llm = Fireworks(model="accounts/fireworks/models/firefunction-v1")
resp = llm.complete("Paul Graham is ")
print(resp)
```
Example output:
Paul Graham is an English-American computer scientist, entrepreneur, venture capitalist, and blogger.
API Key Configuration
To use separate API keys for different instances:

```python
from llama_index.llms.fireworks import Fireworks

llm = Fireworks(
    model="accounts/fireworks/models/firefunction-v1", api_key="YOUR_API_KEY"
)
resp = llm.complete("Paul Graham is ")
print(resp)
```
LLM Implementation example
https://docs.llamaindex.ai/en/stable/examples/llm/fireworks/