A Python package that returns responses from Google Gemini through its API.
Development Status :: 1 - Planning
Not ready yet. Development and QA for the service have been underway since March 1st, 2024.
Google - Gemini API
python-gemini-api is a Python wrapper that interacts with Google Gemini via reverse engineering.
What is Gemini?
Gemini is a family of generative AI models developed by Google DeepMind, designed for multimodal use cases. The Gemini API gives you access to the Gemini Pro and Gemini Pro Vision models. In February 2024, Google's Bard service was rebranded as Gemini. Paper, Official Web
Installation
pip install python-gemini-api
pip install git+https://github.com/dsdanielpark/Gemini-API.git
Authentication
Warning: DO NOT expose your cookies.
Cookie requirements may vary based on country/region and the status of your Google account.
- Visit https://gemini.google.com/
- Press F12 to open the developer console
- Session: Application → Cookies → Copy the value of the __Secure-1PSIDTS, __Secure-1PSIDCC, __Secure-1PSID, NID, or SIDCC cookie. Depending on the region and Google account status, multiple cookies may be required.
Usage
Since Bard was rebranded as Gemini, multiple cookies, which are updated frequently, are needed depending on the region and Google account. Automatic cookie renewal logic is therefore crucial.
Initialization
You must set the cookie values appropriately via the cookies parameter when constructing the GeminiClient or Gemini class. The required cookie values may vary by country/region and account status.
Async client
from gemini import GeminiClient
cookies = {
"__Secure-1PSID": "value",
"__Secure-1PSIDTS": "value",
"__Secure-1PSIDCC": "value",
"NID": "value",
}
client = GeminiClient(cookies=cookies)
# client = GeminiClient(auto_cookies=True)  # Or use the auto_cookies parameter
await client.async_init()
Sync session
from gemini import Gemini
cookies = {
"SIDCC": "value"
}
client = Gemini(cookies=cookies)
# client = Gemini(auto_cookies=True)  # Or use the auto_cookies parameter
Cookies can be updated automatically using browser_cookie3. Since cookie values change frequently, automatic updating is recommended.
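For reference, here is a minimal sketch of pulling the required Google cookies from a local Chrome profile with browser_cookie3; the cookie names collected here are assumptions based on the list above and may differ by account and region.
import browser_cookie3
from gemini import Gemini

# Read cookies for google.com from the local Chrome profile (the browser must be logged in).
cookie_jar = browser_cookie3.chrome(domain_name="google.com")
wanted = {"__Secure-1PSID", "__Secure-1PSIDTS", "__Secure-1PSIDCC", "NID", "SIDCC"}
cookies = {c.name: c.value for c in cookie_jar if c.name in wanted}

client = Gemini(cookies=cookies)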
Before proceeding, ensure that the client object is initialized without any errors.
Text generation
prompt = "Hello, Gemini. What's the weather like in Seoul today?"
response = client.generate_content(prompt)
print(response)
Image generation
prompt = "Hello, Gemini. Give me a beautiful photo of Seoul's scenery."
response = client.generate_content(prompt)
print("\n".join(response.images)) # Print images
for i, image in enumerate(response.images): # Save images
    image.save(path="folder_path/", filename=f"seoul_{i}.png")
Generate content with image
As an experimental feature, it is possible to ask questions with an image. However, this may not work for every user: it is only available for accounts with image upload capability in Gemini's web UI and is subject to region and other restrictions.
prompt = "What is in the image?"
image = open("folder_path/image.jpg", "rb").read() # (jpeg, png, webp) are supported.
response = client.generate_content(prompt, image)
Text-to-Speech (TTS) from Gemini
Business users and high-traffic usage may be subject to account restrictions in accordance with Google's policies. Please use the official Google Cloud API for any other purpose.
text = "Hello, I'm developer in seoul" # Gemini will speak this sentence
response = GeminiClient.generate_content(prompt)
audio = GeminiClient.speech(text)
with open("speech.ogg", "wb") as f:
f.write(bytes(audio["audio"]))
Further
Behind a proxy
If you are working behind a proxy, use the following.
proxies = {
"http": "http://proxy.example.com:8080",
"https": "https://proxy.example.com:8080"
}
client = Gemini(cookies=cookies, proxies=proxies, timeout=30)
client.generate_content("Hello, Gemini. Give me a beautiful photo of Seoul's scenery.")
Use rotating proxies
If you want to avoid blocked requests and bans, use Smart Proxy by Crawlbase. It forwards your connection requests to a randomly rotating IP address in a pool of proxies before they reach the target website. The combination of AI and ML makes it more effective at avoiding CAPTCHAs and blocks.
# Get your proxy url at crawlbase https://crawlbase.com/docs/smart-proxy/get/
proxy_url = "http://xxxxx:@smartproxy.crawlbase.com:8012"
proxies = {"http": proxy_url, "https": proxy_url}
client = Gemini(cookies=cookies, proxies=proxies, timeout=30)
client.generate_content("Hello, Gemini. Give me a beautiful photo of Seoul's scenery.")
Reusable session object
You can continue a conversation using a reusable session. However, this feature is limited, and it is difficult for a package-level feature to maintain context perfectly. You can try to maintain conversational consistency the same way as with other LLM services, for example by storing a summary of past conversations in a database and passing it along with new prompts; see the sketch after this example.
from gemini import Gemini, SESSION_HEADERS
import requests
cookies = {
"__Secure-1PSID": "value",
"__Secure-1PSIDTS": "value",
"__Secure-1PSIDCC": "value",
"NID": "value",
}
session = requests.Session()
session.headers = SESSION_HEADERS
session.cookies.update(cookies)
client = Gemini(session=session, timeout=30)
response = client.generate_content("Hello, Gemini. What's the weather like in Seoul today?")
# Continue the conversation without creating a new session
response = client.generate_content("What was my last prompt?")
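As a rough illustration of the summary-passing idea mentioned above, the helper below is hypothetical and not part of this package; it only assumes that client.generate_content returns a printable response.
history = []  # (prompt, response) pairs kept by the caller, e.g. persisted to a database

def ask_with_context(client, prompt, max_turns=5):
    # Prepend a plain-text recap of recent turns so the model sees prior context.
    recap = "\n".join(f"User: {p}\nGemini: {r}" for p, r in history[-max_turns:])
    full_prompt = f"{recap}\nUser: {prompt}" if recap else prompt
    response = client.generate_content(full_prompt)
    history.append((prompt, str(response)))
    return response

response = ask_with_context(client, "What was my last prompt?")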
More features
- Chat Gemini
- Get image links
- Multi-language Gemini
- Export Conversation
- Export Code to Repl.it
- Executing Python code received as a response from Gemini
- Max_token, Max_sentences
- Translation to another programming language
How to use open-source Gemma
Gemma models are Google's lightweight, advanced text-to-text, decoder-only language models, derived from Gemini research. Available in English, they offer open weights and variants, ideal for tasks like question answering and summarization. Their small size enables deployment in resource-limited settings, broadening access to cutting-edge AI. For more information, visit the Gemma-7b model card.
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
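For the resource-limited settings mentioned above, the model can also be loaded in half precision with automatic device placement; this is a sketch that assumes torch and accelerate are installed and a GPU may be available.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b",
    torch_dtype=torch.bfloat16,  # half precision roughly halves memory usage
    device_map="auto",           # requires accelerate; places weights on available devices
)
inputs = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))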
Use the Crawlbase API for efficient data scraping to train AI models; it boasts a 98% success rate and 99.9% uptime. It's quick to start, GDPR/CCPA compliant, supports massive data extraction, and is trusted by 70k+ developers.
FAQ
You can find most help on the FAQ and Issue pages. Alternatively, utilize the official Gemini API at Google AI Studio.
Issues
We are sincerely grateful for any reports of new features or bugs. Your valuable feedback on the code is highly appreciated. Frequent errors may occur due to changes in Google's service API interface. Both issue reports and pull requests contributing improvements are always welcome. We strive to maintain an active and courteous open community.
Contributors
We would like to express our sincere gratitude to all the contributors.
Contacts
- Core Maintainer: Minwoo(Daniel) Park, @dsdanielpark
- E-mail: parkminwoo1991@gmail.com
License
MIT license, 2024, Minwoo(Daniel) Park. We hereby strongly disclaim any explicit or implicit legal liability related to our works. Users are required to use this package responsibly and at their own risk.
References
[1] Github acheong08/Bard
[2] Github GoogleCloudPlatform/generative-ai
[3] Github HanaokaYuzu/Gemini-API
[4] Google AI Studio
Warning: Users bear all legal responsibility when using the GeminiAPI package, which offers easy access to Google Gemini for developers. This unofficial Python package is not affiliated with Google and may lead to Google account restrictions if used excessively or commercially, due to its reliance on Google account cookies. Frequent changes in Google's interface, Google's API policies, and your country/region, as well as the status of your Google account, may affect functionality. Utilize the issue page and discussion page.
Copyright (c) 2024 Minwoo(Daniel) Park, South Korea