# Gemini-API

A reverse-engineered asynchronous Python wrapper for Google Gemini (formerly Bard).
## Features

- **ImageFx Support** - Supports retrieving images generated by ImageFx, Google's latest AI image generator.
- **Classified Outputs** - Automatically categorizes texts, web images, and AI-generated images from the response.
- **Official Flavor** - Provides a simple and elegant interface inspired by Google Generative AI's official API.
- **Asynchronous** - Utilizes `asyncio` to run generation tasks and return outputs efficiently.
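The asynchronous design means independent prompts can be awaited concurrently. A minimal sketch of that pattern, using a stand-in coroutine instead of the real client (so no network or credentials are required):

```python
import asyncio

# Stand-in for GeminiClient.generate_content; the real call is awaited the same way.
async def fake_generate(prompt: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"response to: {prompt}"

async def main() -> list[str]:
    # asyncio.gather runs the two requests concurrently instead of one by one
    return await asyncio.gather(
        fake_generate("Hello World!"),
        fake_generate("Briefly introduce Europe"),
    )

results = asyncio.run(main())
print(results[0])
```

With the real client, each `fake_generate` call would simply be `client.generate_content(...)`.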
## Installation

```bash
pip install gemini-webapi
```
## Authentication

1. Go to https://gemini.google.com and log in with your Google account.
2. Press F12 to open the web inspector, go to the `Network` tab, and refresh the page.
3. Click any request and copy the cookie values of `__Secure-1PSID` and `__Secure-1PSIDTS`.

> Note: `__Secure-1PSIDTS` can expire frequently if the Google account is actively used elsewhere, especially when visiting https://gemini.google.com directly. It's recommended to use a separate Google account if you are building a keep-alive service with this package.
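If you copy the whole `Cookie` header from the Network tab rather than the two values individually, the standard library can split it for you. A small sketch with placeholder values (not real credentials):

```python
from http.cookies import SimpleCookie

# Example Cookie header as copied from the browser's Network tab
# (the values here are placeholders, not real credentials).
raw = "__Secure-1PSID=g.a000_example; __Secure-1PSIDTS=sidts-example; NID=511"

cookie = SimpleCookie()
cookie.load(raw)  # parses "name=value; name=value; ..." pairs

Secure_1PSID = cookie["__Secure-1PSID"].value
Secure_1PSIDTS = cookie["__Secure-1PSIDTS"].value
print(Secure_1PSID, Secure_1PSIDTS)
```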
## Usage

### Initialization

Import the required packages and initialize a client with the cookie values obtained in the previous step.
```python
import asyncio
from gemini import GeminiClient

# Replace "COOKIE VALUE HERE" with your actual cookie values
Secure_1PSID = "COOKIE VALUE HERE"
Secure_1PSIDTS = "COOKIE VALUE HERE"

async def main():
    client = GeminiClient(Secure_1PSID, Secure_1PSIDTS, proxy=None)
    await client.init(timeout=30)

asyncio.run(main())
```
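Rather than hard-coding cookie values in the script, you might read them from environment variables. The variable names below are an arbitrary convention for illustration, not anything the package requires:

```python
import os

# Hypothetical variable names; the placeholders stand in for values you would
# export in your shell (e.g. `export GEMINI_1PSID=...`).
os.environ.setdefault("GEMINI_1PSID", "COOKIE VALUE HERE")
os.environ.setdefault("GEMINI_1PSIDTS", "COOKIE VALUE HERE")

Secure_1PSID = os.environ["GEMINI_1PSID"]
Secure_1PSIDTS = os.environ["GEMINI_1PSIDTS"]
```

This keeps credentials out of source control and makes rotation easier when `__Secure-1PSIDTS` expires.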
### Generate content from text inputs

Ask a one-turn quick question by calling `GeminiClient.generate_content`.

```python
async def main():
    response = await client.generate_content("Hello World!")
    print(response.text)

asyncio.run(main())
```

> Note: simply use `print(response)` to get the same output if you just want to see the response text.
### Conversations across multiple turns

To keep a conversation continuous, use `GeminiClient.start_chat` to create a `ChatSession` object and send messages through it. The conversation history is handled automatically and updated after each turn.

```python
async def main():
    chat = client.start_chat()
    response1 = await chat.send_message("Briefly introduce Europe")
    response2 = await chat.send_message("What's the population there?")
    print(response1.text, response2.text, sep="\n\n----------------------------------\n\n")

asyncio.run(main())
```
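Conceptually, a chat session just carries conversation state between turns. A toy stand-in (not the package's actual internals, which also track server-side conversation IDs) makes the bookkeeping visible:

```python
# Toy illustration of per-session history bookkeeping; the real ChatSession
# additionally sends requests to Gemini and stores conversation metadata.
class ToyChatSession:
    def __init__(self):
        self.history: list[tuple[str, str]] = []

    def send_message(self, prompt: str) -> str:
        reply = f"echo: {prompt}"             # stand-in for the model's answer
        self.history.append((prompt, reply))  # history updated after each turn
        return reply

chat = ToyChatSession()
chat.send_message("Briefly introduce Europe")
chat.send_message("What's the population there?")
print(len(chat.history))
```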
### Retrieve images in response

Images in the API's output are stored as a list of `Image` objects. You can access the image title, URL, and description via `image.title`, `image.url`, and `image.alt` respectively.

```python
async def main():
    response = await client.generate_content("Send me some pictures of cats")
    images = response.images
    for image in images:
        print(image, "\n\n----------------------------------\n")

asyncio.run(main())
```
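The `Image` objects behave like simple records with those three attributes. A minimal stand-in (not the package's actual class) shows the access pattern:

```python
from dataclasses import dataclass

# Stand-in mirroring the attributes described above; illustrative only.
@dataclass
class Image:
    title: str
    url: str
    alt: str

img = Image(
    title="A cat",
    url="https://example.com/cat.jpg",
    alt="An orange tabby cat sitting on a windowsill",
)
print(img.title, img.url, img.alt)
```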
### Generate images with ImageFx

In February 2024, Google introduced a new AI image generator called ImageFx and integrated it into Gemini. You can ask Gemini to generate images with ImageFx simply by using natural language.

```python
async def main():
    response = await client.generate_content("Generate some pictures of cats")
    images = response.images
    for image in images:
        print(image, "\n\n----------------------------------\n")

asyncio.run(main())
```
> Note: by default, when asked to send images (as in the previous example), Gemini sends images fetched from the web instead of generating them with an AI model, unless you specifically ask it to "generate" images in your prompt. In this package, web images and generated images are treated differently, as `WebImage` and `GeneratedImage`, and are automatically categorized in the output.
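Since the two kinds are distinct classes, they can be told apart with ordinary `isinstance` checks. Sketched here with stub classes reusing the same names (the real classes carry more fields):

```python
# Stub class hierarchy with the same names as the package's, for illustration.
class Image:
    def __init__(self, url: str):
        self.url = url

class WebImage(Image):
    pass

class GeneratedImage(Image):
    pass

images = [
    WebImage("https://example.com/cat.jpg"),
    GeneratedImage("https://example.com/generated-cat.png"),
]

# Filter out only the AI-generated images from a mixed response
generated = [img for img in images if isinstance(img, GeneratedImage)]
print(len(generated))
```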
### Check and switch to other answer candidates

A response from Gemini usually contains multiple reply candidates with different generated content. You can check all candidates and choose one to continue the conversation. By default, the first candidate is chosen automatically.
```python
async def main():
    # Start a conversation and list all reply candidates
    chat = client.start_chat()
    response = await chat.send_message("What's the best Japanese dish? Recommend one only.")
    for candidate in response.candidates:
        print(candidate, "\n\n----------------------------------\n")

    # Control the ongoing conversation flow by choosing a candidate manually
    new_candidate = chat.choose_candidate(index=1)  # Choose the second candidate here
    followup_response = await chat.send_message("Tell me more about it.")  # Generates content based on the chosen candidate
    print(new_candidate, followup_response, sep="\n\n----------------------------------\n\n")

asyncio.run(main())
```
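Candidate selection is essentially an index into the reply list. A toy session (not the real implementation, which also updates the metadata sent with the next request) shows the effect of `choose_candidate`:

```python
# Toy model of candidate switching, for illustration only.
class ToyChat:
    def __init__(self, candidates: list[str]):
        self.candidates = candidates
        self.chosen = 0  # first candidate chosen by default

    def choose_candidate(self, index: int) -> str:
        if not 0 <= index < len(self.candidates):
            raise ValueError(f"Candidate index {index} out of range")
        self.chosen = index  # later turns build on this candidate
        return self.candidates[index]

chat = ToyChat(["Sushi is the best.", "Ramen is the best."])
picked = chat.choose_candidate(index=1)  # switch to the second candidate
print(picked)
```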
## File details

Details for the file `gemini-webapi-0.1.0.tar.gz`.

### File metadata

- Download URL: gemini-webapi-0.1.0.tar.gz
- Upload date:
- Size: 13.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/4.0.2 CPython/3.11.8

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 6cdabc1a1589959ee61130e752c3cb1a4a9e5c389da5fa1164ded749e224095c |
| MD5 | 5d1e0031adc9dd0f99a2ab6b95d2f569 |
| BLAKE2b-256 | 1070dc625123b017c1c97003b30cfaab17f85d8f6f74488d29993fbaac969f17 |
## File details

Details for the file `gemini_webapi-0.1.0-py3-none-any.whl`.

### File metadata

- Download URL: gemini_webapi-0.1.0-py3-none-any.whl
- Upload date:
- Size: 9.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/4.0.2 CPython/3.11.8

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | e77b4d340716bb9d1b645f7085b51820c8aabb13ad13800626ccbec97a74c2ba |
| MD5 | 5651c757150b35fbd401025ca864dd25 |
| BLAKE2b-256 | 5139f41630903d5906e4708508b954bfa83f22b4e3429416882a796162fbc167 |