LlamaIndex LLMs Integration: Google GenAI

Installation

  1. Install the required Python package:

    %pip install llama-index-llms-google-genai
    
  2. Set the Google API key as an environment variable:

    %env GOOGLE_API_KEY=your_api_key_here
    
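The %pip and %env commands above are Jupyter notebook magics; in a plain shell, use pip install and export GOOGLE_API_KEY=... instead. If you prefer not to rely on the environment variable, the key can also be passed to the constructor directly. A minimal sketch, assuming the api_key keyword argument:

from llama_index.llms.google_genai import GoogleGenAI

# Pass the key explicitly instead of reading GOOGLE_API_KEY from the environment
llm = GoogleGenAI(model="gemini-2.0-flash", api_key="your_api_key_here")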

Usage

Basic Content Generation

To generate a poem using a Gemini model, use the following code:

from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-2.0-flash")
resp = llm.complete("Write a poem about a magic backpack")
print(resp)
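Generation behavior can be tuned when the client is constructed. A short sketch, assuming this integration honors the standard LlamaIndex temperature and max_tokens keyword arguments:

from llama_index.llms.google_genai import GoogleGenAI

# Assumed parameters: lower temperature for steadier output, capped length
llm = GoogleGenAI(
    model="gemini-2.0-flash",
    temperature=0.2,
    max_tokens=256,
)
resp = llm.complete("Write a poem about a magic backpack")
print(resp.text)  # the CompletionResponse also exposes the text directly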

Chat with Messages

To simulate a conversation, send a list of messages:

from llama_index.core.llms import ChatMessage
from llama_index.llms.google_genai import GoogleGenAI

messages = [
    ChatMessage(role="user", content="Hello friend!"),
    ChatMessage(role="assistant", content="Yarr what is shakin' matey?"),
    ChatMessage(
        role="user", content="Help me decide what to have for dinner."
    ),
]

llm = GoogleGenAI(model="gemini-2.0-flash")
resp = llm.chat(messages)
print(resp)
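chat returns a ChatResponse; besides printing the whole object, the reply message can be inspected directly. A small follow-up to the example above, using the standard LlamaIndex response fields:

# Continues the example above
print(resp.message.role)     # role of the reply, e.g. assistant
print(resp.message.content)  # the reply text itself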

Streaming Responses

To stream completion responses in real time:

from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-2.0-flash")
resp = llm.stream_complete(
    "The story of Sourcrust, the bread creature, is really interesting. It all started when..."
)
for r in resp:
    # Each chunk's delta holds only the newly generated text (r.text accumulates)
    print(r.delta, end="")

To stream chat responses:

from llama_index.core.llms import ChatMessage
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-2.0-flash")
messages = [
    ChatMessage(role="user", content="Hello friend!"),
    ChatMessage(role="assistant", content="Yarr what is shakin' matey?"),
    ChatMessage(
        role="user", content="Help me decide what to have for dinner."
    ),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")

Specific Model Usage

To use a specific model, pass its name when constructing the client:

from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="models/gemini-pro")
resp = llm.complete("Write a short, but joyous, ode to LlamaIndex")
print(resp)
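Once constructed, the LLM can also be registered as the application-wide default so that indexes and query engines pick it up automatically. A short sketch using the Settings singleton from llama_index.core:

from llama_index.core import Settings
from llama_index.llms.google_genai import GoogleGenAI

# Make Gemini the default LLM for the rest of the application
Settings.llm = GoogleGenAI(model="gemini-2.0-flash")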

Asynchronous API

To use the asynchronous completion API:

from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="models/gemini-pro")
resp = await llm.acomplete("Llamas are famous for ")
print(resp)

For asynchronous streaming of responses:

resp = await llm.astream_complete("Llamas are famous for ")
async for chunk in resp:
    print(chunk.text, end="")
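The chat API has asynchronous counterparts as well. A sketch using the standard LlamaIndex achat and astream_chat methods, on the assumption that they mirror their synchronous versions:

from llama_index.core.llms import ChatMessage
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-2.0-flash")
messages = [
    ChatMessage(role="user", content="Help me decide what to have for dinner.")
]

# Async chat returns a single ChatResponse
resp = await llm.achat(messages)
print(resp.message.content)

# Async streaming chat yields ChatResponse chunks
stream = await llm.astream_chat(messages)
async for chunk in stream:
    print(chunk.delta, end="")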
