generativellm
A lightweight Gemini chat wrapper using HTTP requests.
Project description
generativellm is a simple wrapper around the Gemini REST API using pure HTTP requests. It lets you create chat sessions and generate responses.
Installation
pip install generativellm
Usage
from generativellm import AIChat

# Create a chat client with your API key and target model.
chatbot = AIChat(token="your-gemini-api-key", model="gemini-pro")

# A conversation as a flat list of messages, oldest first.
conversation = [
    "Hello!",
    "Hi there! How can I help?",
    "Can you summarize general relativity?",
]

response = chatbot.get_response(conversation)
print(response)
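The wrapper's internals aren't documented here, but a conversation list like the one above maps naturally onto the `contents` payload of the public Gemini REST API. A minimal sketch, assuming even-indexed entries are user turns and odd-indexed entries are model replies; `build_payload` and `generate` are hypothetical helpers for illustration, not part of generativellm:

```python
import json
import urllib.request

# Public Gemini REST endpoint (generateContent method).
GEMINI_URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/{model}:generateContent?key={key}"
)

def build_payload(conversation):
    """Map a flat list of alternating user/model turns onto the
    Gemini REST `contents` structure (even indices = user turns)."""
    return {
        "contents": [
            {"role": "user" if i % 2 == 0 else "model",
             "parts": [{"text": text}]}
            for i, text in enumerate(conversation)
        ]
    }

def generate(token, model, conversation):
    """POST the conversation and return the first candidate's text."""
    req = urllib.request.Request(
        GEMINI_URL.format(model=model, key=token),
        data=json.dumps(build_payload(conversation)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["candidates"][0]["content"]["parts"][0]["text"]
```

The alternating-role convention is an assumption about how the library interprets the list; check the package source if your turn order differs.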
Project details
Release history
Download files
Download the file for your platform.
Source Distribution
generativellm-0.1.2.tar.gz (2.4 kB)
Built Distribution
generativellm-0.1.2-py3-none-any.whl (2.6 kB)
File details
Details for the file generativellm-0.1.2.tar.gz.
File metadata
- Download URL: generativellm-0.1.2.tar.gz
- Upload date:
- Size: 2.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | f718e74324e3d1d431649bba202e0e7c6f980ed8d898e9bab8e3be487d710e8b |
| MD5 | 7c4c71d40a1cbcfaf1f1ad60f587034c |
| BLAKE2b-256 | 46541f8eca9d2d4196a642fdd012a01413b06e7dcba76ae7edd8896851f72f29 |
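The published digests can be checked locally after downloading, using only the standard library. A short sketch (the filename matches the source distribution above):

```python
import hashlib

def sha256_hex(path):
    """Stream the file in chunks and return its hex SHA256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published digest for generativellm-0.1.2.tar.gz:
expected = "f718e74324e3d1d431649bba202e0e7c6f980ed8d898e9bab8e3be487d710e8b"
# assert sha256_hex("generativellm-0.1.2.tar.gz") == expected
```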
File details
Details for the file generativellm-0.1.2-py3-none-any.whl.
File metadata
- Download URL: generativellm-0.1.2-py3-none-any.whl
- Upload date:
- Size: 2.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ec6708128a3f11b9409474bbebac05dda2673d996a9c6383e7262e74caf1ef7b |
| MD5 | e4ed499986b46c533d395169a68e5659 |
| BLAKE2b-256 | 84e4cbcf08de08f75b8aa270efd870de2de371f4896bb4194779c18798aca66c |