
A Python package that returns responses from Google Gemini through its API.

Project description

Development Status :: 1 - Planning

Not fully prepared yet.

Google - Gemini API

python-gemini-api is a Python wrapper that interacts with Google Gemini via reverse engineering. It is built with REST-style syntax for users who face frequent authentication errors or cannot authenticate properly through Google authentication.

Developed in collaboration with Antonio Cheong.

What is Gemini?

Gemini is a family of generative AI models developed by Google DeepMind that is designed for multimodal use cases. The Gemini API gives you access to the Gemini Pro and Gemini Pro Vision models. In February 2024, Google's Bard service was changed to Gemini. Paper, Official Website, Official API, API Documents.


Installation

pip install python-gemini-api
pip install git+https://github.com/dsdanielpark/Gemini-API.git

Authentication

[!NOTE] Cookies can change quickly. Don't reopen the same session or repeat prompts too often; they'll expire faster. If the cookie value doesn't export correctly, refresh the Gemini page and export again. Check this sample cookie file.

  1. Visit https://gemini.google.com/ and wait for it to fully load.
  2. (Recommended) Export cookies on the gemini site using a Chrome extension. Use ExportThisCookies, then open and copy the txt file contents. For manual collection, see this image.
  3. (Additional requirement) To manually collect the nonce value: Press F12 → Network → Send any prompt to webui gemini → Click the post address starting with "https://gemini.google.com/_/BardChatUi/data/assistant.lamda.BardFrontendService/StreamGenerate" → Payload → Form Data → Copy the "at" key value. Reference this image.
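If you exported cookies with the extension in step 2, the txt file uses the Netscape cookie format. A minimal stdlib sketch of loading it into a `{name: value}` dict (the helper and file name are assumptions for illustration, not part of this package):

```python
# Hypothetical helper: parse a Netscape-format cookies.txt export
# (as produced by ExportThisCookies) into a {name: value} dict.
def load_cookies_txt(path: str) -> dict:
    cookies = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip comments and blank lines
            fields = line.split("\t")
            # Netscape format: domain, flag, path, secure, expiry, name, value
            if len(fields) >= 7:
                cookies[fields[5]] = fields[6]
    return cookies
```

The resulting dict can then be passed to the client, e.g. `Gemini(cookies=load_cookies_txt("cookies.txt"))`.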

Usage

Since Bard was rebranded as Gemini, multiple cookies, which are updated frequently, are required depending on your region and Google account. Thus, automatic cookie-renewal logic is crucial.

Initialization

You must set the cookies parameter of the Gemini class appropriately. When using the auto_cookies argument to collect cookies automatically, keep the Gemini web page that receives Gemini's responses open in your browser.

from gemini import Gemini

cookies = {
    "key": "value"
}

GeminiClient = Gemini(cookies=cookies)
# GeminiClient = Gemini(cookie_fp="folder/cookie_file.json") # Or use cookie file path
# GeminiClient = Gemini(auto_cookies=True) # Or use the auto_cookies parameter
# GeminiClient = Gemini(cookies=cookies, nonce="value") # If you encounter nonce error, pass nonce value. See `Authentication` section above.

Cookies can be updated automatically using browser_cookie3. For the first attempt, manually collect the cookies to test the functionality.
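browser_cookie3 returns a standard http.cookiejar.CookieJar. A hedged sketch of flattening such a jar into the cookie dict shown above (the helper name and domain filter are assumptions, not part of this package):

```python
from http.cookiejar import CookieJar

# Hypothetical helper: keep only cookies for a given domain and
# flatten them into the {name: value} dict the Gemini class expects.
def cookiejar_to_dict(jar: CookieJar, domain: str = "google.com") -> dict:
    return {c.name: c.value for c in jar if domain in (c.domain or "")}
```

With browser_cookie3 installed, something like `cookiejar_to_dict(browser_cookie3.chrome())` would then yield a dict suitable for `Gemini(cookies=...)`.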

[!IMPORTANT] Before proceeding, ensure that the GeminiClient object is defined without any errors.


Text generation

prompt = "Hello, Gemini. What's the weather like in Seoul today?"
response = GeminiClient.generate_content(prompt)
print(response)

Image generation

prompt = "Hello, Gemini. Give me a beautiful photo of Seoul's scenery."
response = GeminiClient.generate_content(prompt)

print("\n".join(response.images)) # Print images

for i, image in enumerate(response.images): # Save images
    image.save(path="folder_path/", filename=f"seoul_{i}.png")

Generate content with image

As an experimental feature, it is possible to ask questions with an image. However, this functionality is only available for accounts with image upload capability in Gemini's web UI.

prompt = "What is in the image?"
with open("folder_path/image.jpg", "rb") as f:  # jpeg, png, and webp are supported
    image = f.read()

response = GeminiClient.generate_content(prompt, image)

Text To Speech(TTS) from Gemini

Business users and those with high traffic volumes may be subject to account restrictions under Google's policies. Please use the official Google Cloud API for any such purposes.

text = "Hello, I'm a developer in Seoul." # Gemini will speak this sentence
audio = GeminiClient.speech(text)
with open("speech.ogg", "wb") as f:
    f.write(bytes(audio["audio"]))

Further

Behind a proxy

If you are working behind a proxy, use the following.

proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "https://proxy.example.com:8080"
}

GeminiClient = Gemini(cookies=cookies, proxies=proxies, timeout=30)
GeminiClient.generate_content("Hello, Gemini. Give me a beautiful photo of Seoul's scenery.")

Use rotating proxies

If you want to avoid blocked requests and bans, use Smart Proxy by Crawlbase. It forwards your connection requests to a randomly rotating IP address from a pool of proxies before they reach the target website. The combination of AI and ML makes it more effective at avoiding CAPTCHAs and blocks.

# Get your proxy url at crawlbase https://crawlbase.com/docs/smart-proxy/get/
proxy_url = "http://xxxxx:@smartproxy.crawlbase.com:8012" 
proxies = {"http": proxy_url, "https": proxy_url}

GeminiClient = Gemini(cookies=cookies, proxies=proxies, timeout=30)
GeminiClient.generate_content("Hello, Gemini. Give me a beautiful photo of Seoul's scenery.")

Reusable session object

You can continue a conversation using a reusable session. However, this feature is limited, and it is difficult for a package-level feature to maintain context perfectly. You can try to maintain conversational consistency the same way as with other LLM services, for example by storing a summary of past conversations in a database and passing it along with new prompts.

from gemini import Gemini, HEADERS
import requests

cookies = {
    "key": "value"
}

session = requests.Session()
session.headers = HEADERS
session.cookies.update(cookies)

GeminiClient = Gemini(session=session, timeout=30)
response = GeminiClient.generate_content("Hello, Gemini. What's the weather like in Seoul today?")

# Continue the conversation without setting a new session
response = GeminiClient.generate_content("What was my last prompt?")
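The summary idea above can be sketched as a small wrapper that keeps a rolling log of past turns and prepends it to each new prompt. Here, `client` stands in for any object with a `generate_content(prompt)` method; the wrapper class itself is hypothetical, not part of this package:

```python
# Hypothetical wrapper: maintain lightweight conversation context by
# prepending a compact transcript of recent turns to each prompt.
class SummarizedChat:
    def __init__(self, client, max_turns: int = 5):
        self.client = client
        self.max_turns = max_turns
        self.history = []  # list of (prompt, reply) pairs

    def ask(self, prompt: str):
        # Build context from the last few turns only, to bound prompt size.
        context = "\n".join(
            f"User: {p}\nGemini: {r}" for p, r in self.history[-self.max_turns:]
        )
        full_prompt = f"{context}\nUser: {prompt}" if context else prompt
        reply = self.client.generate_content(full_prompt)
        self.history.append((prompt, str(reply)))
        return reply
```

A production setup would likely replace the raw transcript with an actual summary persisted to a database, as described above.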

More features


How to use open-source Gemma

Gemma models are Google's lightweight, advanced text-to-text, decoder-only language models, derived from Gemini research. Available in English, they offer open weights and variants, ideal for tasks like question answering and summarization. Their small size enables deployment in resource-limited settings, broadening access to cutting-edge AI. For more information, visit the Gemma-7b model card.

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")

input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))

Sponsor

Use Crawlbase API for efficient data scraping to train AI models, boasting a 98% success rate and 99.9% uptime. It's quick to start, GDPR/CCPA compliant, supports massive data extraction, and is trusted by 70k+ developers.

FAQ

You can find most help on the FAQ and Issue pages. Alternatively, utilize the official Gemini API at Google AI Studio.

Issues

We are sincerely grateful for any reports of new features or bugs, and your valuable feedback on the code is highly appreciated. Frequent errors may occur due to changes in Google's service API interface. Both issue reports and pull requests contributing improvements are always welcome. We strive to maintain an active and courteous open community.

Contributions

We would like to express our sincere gratitude to all the contributors.

Further development potential
  • refactoring
  • gemini/core: httpx.session
    • messages
      • content
        • text
          • parsing
        • image
          • parsing
      • response format structure class
      • tool_calls
    • third party
      • replit
      • google tools
  • gemini/client: httpx.AsyncClient
    • messages
      • content
        • text
          • parsing
        • image
          • parsing
      • response format structure class
      • tool_calls
    • third party
      • replit
      • google tools

Contacts

Core maintainers:

License

MIT license, 2024, Minwoo(Daniel) Park. We hereby strongly disclaim any explicit or implicit legal liability related to our works. Users are required to use this package responsibly and at their own risk. This project is a personal initiative and is not affiliated with or endorsed by Google. It is recommended to use Google's official API.

References

[1] Github acheong08/Bard
[2] Github dsdanielpark/Bard-API
[3] Github GoogleCloudPlatform/generative-ai
[4] Google AI Studio

Warning: Users bear all legal responsibility when using the GeminiAPI package, which offers easy access to Google Gemini for developers. This unofficial Python package is not affiliated with Google and may lead to Google account restrictions if used excessively or commercially, due to its reliance on Google account cookies. Frequent changes to Google's interface, Google's API policies, your country or region, and the status of your Google account may all affect functionality. Utilize the issue page and discussion page.


Copyright (c) 2024 Minwoo(Daniel) Park, South Korea

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

python-gemini-api-1.0.1.tar.gz (34.7 kB view details)

Uploaded Source

Built Distribution

python_gemini_api-1.0.1-py3-none-any.whl (36.7 kB view details)

Uploaded Python 3

File details

Details for the file python-gemini-api-1.0.1.tar.gz.

File metadata

  • Download URL: python-gemini-api-1.0.1.tar.gz
  • Upload date:
  • Size: 34.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.12

File hashes

Hashes for python-gemini-api-1.0.1.tar.gz
  • SHA256: 80fbd23123ca5d31fc602933b1e5f20829572bda1bade4710b5d111b260ee985
  • MD5: 085a9769a28c338bc33364c23bb52f28
  • BLAKE2b-256: 7f7af9f6d16a8dd903b1fa2a8c1fd6943d3051ee05a169919f93dc2654c15a62

See more details on using hashes here.
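One way to use these hashes is to verify a downloaded distribution before installing it. A stdlib sketch (the helper name is an assumption):

```python
import hashlib

# Hypothetical helper: compute the SHA256 hex digest of a file,
# reading in chunks so large files don't need to fit in memory.
def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the published digest, e.g.:
# sha256_of("python-gemini-api-1.0.1.tar.gz") == "80fbd2..."
```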

File details

Details for the file python_gemini_api-1.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for python_gemini_api-1.0.1-py3-none-any.whl
  • SHA256: 22ce0c0ddd6450c194c25b07474c6cb136bf9a98555801536d2f3171a3a40de5
  • MD5: 5d936f33375a9d51d36662a94cbf7aea
  • BLAKE2b-256: 4abc65df2931a99ccb70cc4f0f6615eb7f8254e350047129a59c218b23567ee5

See more details on using hashes here.
