generativellm
A lightweight Gemini chat wrapper using HTTP requests
Project description
generativellm is a simple wrapper around the Gemini REST API that uses plain HTTP requests. It lets you create chat sessions and generate responses.
Installation
```shell
pip install generativellm
```
Usage
```python
from generativellm import AIChat

chatbot = AIChat(token="your-gemini-api-key", model="gemini-pro")

# The conversation alternates between user and model turns,
# starting with the user.
conversation = [
    "Hello!",
    "Hi there! How can I help?",
    "Can you summarize general relativity?",
]

response = chatbot.get_response(conversation)
print(response)
```
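Under the hood, a "pure HTTP" wrapper like this plausibly maps the alternating message list onto the Gemini `generateContent` REST endpoint. The sketch below illustrates the idea using only the standard library; the endpoint format is Google's public v1beta REST API, but `build_payload` and `get_response` are illustrative names, not generativellm's actual internals:

```python
import json
import urllib.request

# Google's public Gemini REST endpoint (v1beta).
API_URL = "https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent"


def build_payload(conversation):
    """Map an alternating user/model message list onto the Gemini
    REST API's `contents` structure (user turn first)."""
    roles = ("user", "model")
    return {
        "contents": [
            {"role": roles[i % 2], "parts": [{"text": text}]}
            for i, text in enumerate(conversation)
        ]
    }


def get_response(token, model, conversation):
    """POST the conversation and return the first candidate's text."""
    req = urllib.request.Request(
        API_URL.format(model=model) + f"?key={token}",
        data=json.dumps(build_payload(conversation)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["candidates"][0]["content"]["parts"][0]["text"]
```

Sending a request requires a valid API key; `build_payload` alone shows the shape of the request body the API expects.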
Download files
Source Distribution
generativellm-0.0.2.tar.gz (2.4 kB)
Built Distribution
generativellm-0.0.2-py3-none-any.whl (2.6 kB)
File details
Details for the file generativellm-0.0.2.tar.gz.
File metadata
- Download URL: generativellm-0.0.2.tar.gz
- Upload date:
- Size: 2.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 84c0955d6c7c5483deb7499fd8e97d7cbb897c46a8cf40fab5f649595f613dc8 |
| MD5 | 2c7b98d942325bde77d5081f18d0f05e |
| BLAKE2b-256 | 2184a9779d4e993562c75605656a5582055cabacd65d66aaaff19475cad2248c |
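After downloading, you can verify a file against its published SHA256 digest. A small standard-library helper (not part of generativellm) might look like:

```python
import hashlib


def sha256_of(path, chunk_size=8192):
    """Compute the SHA256 hex digest of a file, reading in chunks
    so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Compare against the digest published above, e.g.:
# expected = "84c0955d6c7c5483deb7499fd8e97d7cbb897c46a8cf40fab5f649595f613dc8"
# assert sha256_of("generativellm-0.0.2.tar.gz") == expected
```

`pip` performs this check automatically when hashes are pinned in a requirements file; the helper is useful for manual downloads.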
File details
Details for the file generativellm-0.0.2-py3-none-any.whl.
File metadata
- Download URL: generativellm-0.0.2-py3-none-any.whl
- Upload date:
- Size: 2.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4f29a5fcf0f84534b8df73763342ff7fdd637facf93dbf6723d9c13997d5b962 |
| MD5 | 451a71d58a721b05ae1fbb33b7a54eb8 |
| BLAKE2b-256 | d286069ad1c667aaacfd92000d15f9b62b67d3ac50b1c5c7d0cb7c6d5eb78733 |