A lightweight Gemini chat wrapper using HTTP requests
Project description
generativellm
generativellm is a simple wrapper around the Gemini REST API using pure HTTP requests. It lets you create chat sessions and generate responses.
Installation
pip install generativellm
Usage
from generativellm import AIChat

chatbot = AIChat(token="your-gemini-api-key", model="gemini-pro")

# Conversation history: user and assistant messages interleaved,
# ending with the new user message to answer.
conversation = [
    "Hello!",
    "Hi there! How can I help?",
    "Can you summarize general relativity?",
]

response = chatbot.get_response(conversation)
print(response)
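The package's internals are not shown on this page, but a wrapper of this kind presumably maps the conversation list onto the Gemini REST `generateContent` request body, where each turn carries a role and a list of text parts. The sketch below is illustrative only: `build_contents` is a hypothetical helper, not part of generativellm, and it assumes the list alternates user/model turns starting with the user.

```python
import json

def build_contents(conversation):
    """Map an alternating [user, model, user, ...] history onto the
    Gemini REST `contents` payload: one {role, parts} object per turn."""
    roles = ("user", "model")
    return [
        {"role": roles[i % 2], "parts": [{"text": message}]}
        for i, message in enumerate(conversation)
    ]

payload = {"contents": build_contents([
    "Hello!",
    "Hi there! How can I help?",
    "Can you summarize general relativity?",
])}

# A wrapper would POST this JSON to the generateContent endpoint, e.g.
# https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=<API_KEY>
print(json.dumps(payload, indent=2))
```

Building the payload separately from the HTTP call keeps the role-assignment logic easy to test without a network connection or an API key.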
Project details
Download files
Source Distribution
generativellm-0.0.1.tar.gz (2.4 kB)
Built Distribution
generativellm-0.0.1-py3-none-any.whl (2.7 kB)
File details
Details for the file generativellm-0.0.1.tar.gz.
File metadata
- Download URL: generativellm-0.0.1.tar.gz
- Upload date:
- Size: 2.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 121d9290b721003681071e470db85621c24a6161eff7caa7c99fd9f64156999f |
| MD5 | a91e816039ba60421a8fe52d76b8edf6 |
| BLAKE2b-256 | 9d831b5ec79e8c1937847a580dfc3b6ac8d4a9146fd370795cf57ce6584dcdef |
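The published digests can be used to verify a downloaded file before installing it. A minimal sketch, assuming the sdist has been downloaded to the current directory (`sha256_of_file` is an illustrative helper, not part of the package):

```python
import hashlib

def sha256_of_file(path, chunk_size=8192):
    """Stream a file through SHA-256 in chunks and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# SHA256 digest published above for the source distribution:
expected = "121d9290b721003681071e470db85621c24a6161eff7caa7c99fd9f64156999f"
# print(sha256_of_file("generativellm-0.0.1.tar.gz") == expected)
```

Reading in chunks keeps memory use constant, which matters little for a 2.4 kB archive but is the idiomatic pattern for hashing files of any size.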
File details
Details for the file generativellm-0.0.1-py3-none-any.whl.
File metadata
- Download URL: generativellm-0.0.1-py3-none-any.whl
- Upload date:
- Size: 2.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 0e9e812669a72e516fd24809a28e9cb4a7b3cb2f19c339903f2d18545a0a7c24 |
| MD5 | 85143e90f759fab12b1a9c8700180825 |
| BLAKE2b-256 | add61e18e6fb2b6d7ba7c49e49668caecba90f5ba0b85c3a1ec2a28f4257141f |