# LlamaIndex Llms Integration: Gemini
## Installation

- Install the required Python packages:

```bash
%pip install llama-index-llms-gemini
!pip install -q llama-index google-generativeai
```

- Set the Google API key as an environment variable:

```bash
%env GOOGLE_API_KEY=your_api_key_here
```
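The `%pip`, `!pip`, and `%env` lines above are Jupyter notebook magics. In a plain Python script you can set the variable with `os.environ`, or pass the key straight to the constructor; the `api_key` keyword below is an assumption about the constructor signature:

```python
import os

# Equivalent of the %env magic outside a notebook.
os.environ["GOOGLE_API_KEY"] = "your_api_key_here"

from llama_index.llms.gemini import Gemini

# Passing the key explicitly; `api_key` is assumed to be accepted
# by the Gemini constructor.
llm = Gemini(api_key=os.environ["GOOGLE_API_KEY"])
```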
## Usage

### Basic Content Generation

To generate a poem using the Gemini model, use the following code:

```python
from llama_index.llms.gemini import Gemini

resp = Gemini().complete("Write a poem about a magic backpack")
print(resp)
```
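`complete` returns a `CompletionResponse`; besides printing it, you can read the generated string directly from `resp.text`.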
### Chat with Messages

To simulate a conversation, send a list of messages:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.gemini import Gemini

messages = [
    ChatMessage(role="user", content="Hello friend!"),
    ChatMessage(role="assistant", content="Yarr what is shakin' matey?"),
    ChatMessage(
        role="user", content="Help me decide what to have for dinner."
    ),
]
resp = Gemini().chat(messages)
print(resp)
```
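`chat` returns a `ChatResponse` whose reply is itself a `ChatMessage`; the assistant's text can be pulled out directly:

```python
# The reply message carries the assistant's text.
print(resp.message.content)
```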
### Streaming Responses

To stream content responses in real time:

```python
from llama_index.llms.gemini import Gemini

llm = Gemini()
resp = llm.stream_complete(
    "The story of Sourcrust, the bread creature, is really interesting. It all started when..."
)
for r in resp:
    print(r.text, end="")
```
To stream chat responses:

```python
from llama_index.llms.gemini import Gemini
from llama_index.core.llms import ChatMessage

llm = Gemini()
messages = [
    ChatMessage(role="user", content="Hello friend!"),
    ChatMessage(role="assistant", content="Yarr what is shakin' matey?"),
    ChatMessage(
        role="user", content="Help me decide what to have for dinner."
    ),
]
resp = llm.stream_chat(messages)
# stream_chat yields ChatResponse chunks; print each incremental delta.
for r in resp:
    print(r.delta, end="")
```
### Using Other Models

To see which models the Gemini API currently offers, list them and filter for those that support content generation:

```python
import google.generativeai as genai

for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)
```
### Specific Model Usage

To use a specific model, you can configure it like this:

```python
from llama_index.llms.gemini import Gemini

llm = Gemini(model="models/gemini-pro")
resp = llm.complete("Write a short, but joyous, ode to LlamaIndex")
print(resp)
```
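Generation settings can be tuned on the same constructor. A minimal sketch, assuming the `temperature` and `max_tokens` keyword arguments (the values here are purely illustrative):

```python
from llama_index.llms.gemini import Gemini

llm = Gemini(
    model="models/gemini-pro",
    temperature=0.2,  # assumed parameter: lower values give steadier output
    max_tokens=256,   # assumed parameter: cap on generated tokens
)
print(llm.complete("Summarize LlamaIndex in one sentence."))
```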
### Asynchronous API

To use the asynchronous completion API:

```python
from llama_index.llms.gemini import Gemini

llm = Gemini()
resp = await llm.acomplete("Llamas are famous for ")
print(resp)
```
For asynchronous streaming of responses:

```python
resp = await llm.astream_complete("Llamas are famous for ")
async for chunk in resp:
    print(chunk.text, end="")
```
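The chat interface has matching async methods. A minimal sketch using `achat` and `astream_chat`, reusing a short `messages` list:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.gemini import Gemini

llm = Gemini()
messages = [ChatMessage(role="user", content="Hello friend!")]

# Single async chat call returning one ChatResponse.
resp = await llm.achat(messages)
print(resp.message.content)

# Async streaming chat: iterate over incremental deltas.
stream = await llm.astream_chat(messages)
async for chunk in stream:
    print(chunk.delta, end="")
```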
## LLM Implementation example