A reverse-engineered async wrapper for the Google Gemini web client

Project description

Gemini-API

A reverse-engineered asynchronous Python wrapper for Google Gemini (formerly Bard).

Features

  • ImageFx Support - Supports retrieving images generated by ImageFx, Google's latest AI image generator.
  • Classified Outputs - Automatically categorizes text, web images, and AI-generated images in the response.
  • Official Flavor - Provides a simple and elegant interface inspired by Google Generative AI's official API.
  • Asynchronous - Utilizes asyncio to run generation tasks and return outputs efficiently.

Installation

pip install gemini-webapi

Authentication

  • Go to https://gemini.google.com and log in with your Google account
  • Press F12 to open the web inspector, go to the Network tab, and refresh the page
  • Click any request and copy the cookie values of __Secure-1PSID and __Secure-1PSIDTS

Note: __Secure-1PSIDTS may expire frequently if the Google account is actively used elsewhere, especially when visiting https://gemini.google.com directly. It's recommended to use a separate Google account if you are building a keep-alive service with this package.
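
If you plan to run a long-lived service, one option is to load the cookie values from environment variables instead of hardcoding them. A minimal sketch, assuming you export the values under the illustrative names below (not part of the package):

import os

# Hypothetical variable names; set them to the cookie values copied in the steps above
Secure_1PSID = os.environ["GEMINI_SECURE_1PSID"]
Secure_1PSIDTS = os.environ["GEMINI_SECURE_1PSIDTS"]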

Usage

Initialization

Import the required packages and initialize a client with the cookies obtained in the previous step.

import asyncio
from gemini import GeminiClient

# Replace "COOKIE VALUE HERE" with your actual cookie values
Secure_1PSID = "COOKIE VALUE HERE"
Secure_1PSIDTS = "COOKIE VALUE HERE"

async def main():
    client = GeminiClient(Secure_1PSID, Secure_1PSIDTS, proxy=None)
    await client.init(timeout=30)

asyncio.run(main())

Generate contents from text inputs

Ask a quick one-turn question by calling GeminiClient.generate_content.

async def main():
    response = await client.generate_content("Hello World!")
    print(response.text)

asyncio.run(main())

Note: simply use print(response) to get the same output if you just want to see the response text.

Conversations across multiple turns

To keep a conversation continuous across multiple turns, use GeminiClient.start_chat to create a ChatSession object and send messages through it. The conversation history is handled automatically and updated after each turn.

async def main():
    chat = client.start_chat()
    response1 = await chat.send_message("Briefly introduce Europe")
    response2 = await chat.send_message("What's the population there?")
    print(response1.text, response2.text, sep="\n\n----------------------------------\n\n")

asyncio.run(main())

Retrieve images in response

Images in the API's output are stored as a list of Image objects. You can access the image title, URL, and description via image.title, image.url, and image.alt respectively.

async def main():
    response = await client.generate_content("Send me some pictures of cats")
    for image in response.images:
        print(image, "\n\n----------------------------------\n")

asyncio.run(main())
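
To work with the individual fields instead of the default string representation, the same loop can print the attributes described above:

async def main():
    response = await client.generate_content("Send me some pictures of cats")
    for image in response.images:
        # Title, URL, and alt description, as listed above
        print(image.title, image.url, image.alt, sep="\n")

asyncio.run(main())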

Generate image with ImageFx

In February 2024, Google introduced a new AI image generator called ImageFx and integrated it into Gemini. You can ask Gemini to generate images with ImageFx simply by using natural language.

async def main():
    response = await client.generate_content("Generate some pictures of cats")
    for image in response.images:
        print(image, "\n\n----------------------------------\n")

asyncio.run(main())

Note: by default, when asked to send images (as in the previous example), Gemini sends images fetched from the web instead of generating them with an AI model, unless you explicitly ask it to "generate" images in your prompt. In this package, web images and generated images are treated differently as WebImage and GeneratedImage, and are automatically categorized in the output.
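
A minimal sketch of telling the two categories apart, assuming WebImage and GeneratedImage can be imported from the same module as GeminiClient (the import location is an assumption):

from gemini import WebImage, GeneratedImage

async def main():
    response = await client.generate_content("Generate some pictures of cats")
    for image in response.images:
        # The two categories are distinguished by type
        if isinstance(image, GeneratedImage):
            print("AI-generated:", image)
        elif isinstance(image, WebImage):
            print("Web image:", image)

asyncio.run(main())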

Save images to local files

You can save images returned from Gemini to local files under temp/ by calling Image.save(). Optionally, you can specify the file path and file name by passing the path and filename arguments to the function. This works for both WebImage and GeneratedImage.

async def main():
    response = await client.generate_content("Generate some pictures of cats")
    for i, image in enumerate(response.images):
        await image.save(path="temp/", filename=f"cat_{i}.png")

asyncio.run(main())
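
Both arguments are optional. A minimal sketch of saving with the defaults, assuming Image.save() falls back to the default temp/ location and a generated file name:

async def main():
    response = await client.generate_content("Generate some pictures of cats")
    for image in response.images:
        # With no arguments, the file is saved under the default location described above
        await image.save()

asyncio.run(main())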

Check and switch to other answer candidates

A response from Gemini usually contains multiple reply candidates with different generated content. You can check all candidates and choose one to continue the conversation. By default, the first candidate is chosen automatically.

async def main():
    # Start a conversation and list all reply candidates
    chat = client.start_chat()
    response = await chat.send_message("What's the best Japanese dish? Recommend one only.")
    for candidate in response.candidates:
        print(candidate, "\n\n----------------------------------\n")

    # Control the ongoing conversation flow by choosing candidate manually
    new_candidate = chat.choose_candidate(index=1)  # Choose the second candidate here
    followup_response = await chat.send_message("Tell me more about it.")  # Will generate contents based on the chosen candidate
    print(new_candidate, followup_response, sep="\n\n----------------------------------\n\n")

asyncio.run(main())

References

Google AI Studio

acheong08/Bard

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

gemini-webapi-0.2.0.tar.gz (15.1 kB, Source)

Built Distribution

gemini_webapi-0.2.0-py3-none-any.whl (10.7 kB, Python 3)

File details

Details for the file gemini-webapi-0.2.0.tar.gz.

File metadata

  • Download URL: gemini-webapi-0.2.0.tar.gz
  • Upload date:
  • Size: 15.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/4.0.2 CPython/3.11.8

File hashes

Hashes for gemini-webapi-0.2.0.tar.gz
Algorithm Hash digest
SHA256 a05b67c2a91851d9c7c1604abb068f324915f699ffe99aec3056ec18057ceb09
MD5 d94bc07f41f1099b23682a9605af655c
BLAKE2b-256 54827a00d1ce3b7f4e50d7f96cbac64fba7f73bd9bfa0504d2d324444d98d0a7


File details

Details for the file gemini_webapi-0.2.0-py3-none-any.whl.

File metadata

File hashes

Hashes for gemini_webapi-0.2.0-py3-none-any.whl
Algorithm Hash digest
SHA256 0a741e42d6dff9a9d0fa27739ead4db750cda17c502869cc33b84d456d9e78f3
MD5 91fc6fedf9d1c79df0c9ef2a017c8968
BLAKE2b-256 793a013db2345777132999e57226077d7103416fdb6e50bcc4b2e5659d18b8a0

