A Python client for the Puter API
Project description
PutergenAI: Python SDK for Puter.js
Overview
PutergenAI is a lightweight, robust Python SDK for interacting with the Puter.js API, an open-source cloud operating system focused on privacy and AI capabilities. This SDK provides a clean interface for AI chat completions (supporting multiple models like GPT, Claude, Grok, etc.), file system operations (read/write/delete), and utility AI functions (text-to-image, image-to-text, text-to-speech).
Installation
Install via pip (recommended for production):
pip install putergenai
For development, clone the repo and install locally:
git clone https://github.com/nerve11/putergenai.git
cd putergenai
pip install -e .
Dependencies:
`requests` (>=2.32.0) for HTTP communication.
No other external libs are required, keeping the footprint small. Tested on Python 3.8–3.12 across Linux, macOS, and Windows.
Pro Tip: Use a virtual environment (e.g., venv or poetry) to isolate dependencies. If you encounter SSL issues, ensure your system's CA certificates are up-to-date.
New: See .gitignore and requirements.txt for project setup and dependency management.
Quick Start
from putergenai import PuterClient
# Initialize and login
client = PuterClient()
client.login("your_username", "your_password")
# AI Chat example (non-streaming)
messages = [{"role": "user", "content": "What is the meaning of life?"}]
response = client.ai_chat(messages=messages, options={"model": "gpt-5"}, strict_model=True)
print("Response:", response["response"]["result"]["message"]["content"])
print("Used Model:", response["used_model"])
# Streaming example
gen = client.ai_chat(messages=messages, options={"model": "claude-sonnet-4", "stream": True})
for content, used_model in gen:
    print(content, end='', flush=True)
print("\nUsed Model:", used_model)
# File system example
client.fs_write("test.txt", "Hello, Puter!")
content = client.fs_read("test.txt").decode('utf-8')
print("File content:", content)
client.fs_delete("test.txt")
This snippet demonstrates authentication, AI chat (with model enforcement), and basic FS ops. Run with test_mode=True to simulate without costs.
Best Practice: Always wrap API calls in try-except blocks to handle ValueError for authentication issues or network errors. For production, implement exponential backoff on retries.
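A minimal retry helper along these lines can wrap any SDK call. This is a generic sketch, not part of the SDK; which exception types you retry on (here `ValueError`, per the errors this reference documents) is your choice:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(); on failure, wait base_delay * 2**attempt seconds and retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical usage against the SDK:
# response = with_retries(lambda: client.ai_chat(messages=messages),
#                         retry_on=(ValueError,))
```

The exponential delays (1s, 2s, 4s, ...) avoid hammering the API while it recovers.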
Security Note: All user inputs and file paths are now sanitized. Sensitive data is never logged. See the updated client.py for details.
API Syntax and Reference
The SDK centers around the PuterClient class. All methods are synchronous for simplicity; for async, wrap in asyncio or use threading.
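One way to wrap a blocking call for async code, sketched here with `asyncio.to_thread` (Python 3.9+); the `ai_chat` signature is taken from this reference:

```python
import asyncio

async def chat_async(client, messages, **kwargs):
    # Run the blocking SDK call in a worker thread so the
    # event loop stays responsive for other tasks.
    return await asyncio.to_thread(client.ai_chat, messages=messages, **kwargs)

# Hypothetical usage:
# response = asyncio.run(chat_async(client, messages, options={"model": "gpt-5"}))
```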
Initialization
client = PuterClient(token="optional_pre_existing_token")
- If `token` is provided, login is skipped. Otherwise, call `login()`.
Authentication
client.login(username: str, password: str) -> str
- Returns the auth token.
- Raises `ValueError` on failure (e.g., invalid credentials).
AI Chat
client.ai_chat(
messages: List[Dict[str, Any]],
options: Optional[Dict[str, Any]] = None,
test_mode: bool = False,
image_url: Optional[Union[str, List[str]]] = None,
prompt: Optional[Union[str, List[Dict[str, Any]]]] = None,
strict_model: bool = False
) -> Union[Dict[str, Any], Generator[tuple[str, str], None, None]]
- messages: List of chat messages (e.g., `[{"role": "user", "content": "Hi"}]`).
- options: Dict with `model` (str, e.g., "gpt-5"), `stream` (bool), `temperature` (float, 0-2).
- test_mode: Use the test API (no credits consumed).
- image_url: For vision models (e.g., GPT-4o).
- prompt: Alternative to messages for simple queries.
- strict_model: If True, raises an error on model fallback.
- Returns:
  - Non-stream: `{"response": dict, "used_model": str}`.
  - Stream: Generator yielding `(content_chunk, used_model)`.
Syntax Notes:
- Models are passed explicitly in payload for reliability.
- Handles server fallbacks (e.g., GPT-5 → GPT-4.1-nano) with warnings or errors.
- Retries up to 3 times on availability issues, auto-enabling `test_mode` if needed.
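Since the server may silently substitute a model, it is worth checking `used_model` after every non-streaming call. A small helper for that check (generic, not part of the SDK):

```python
def model_fell_back(response: dict, requested: str) -> bool:
    """True if the server answered with a different model than requested."""
    return response.get("used_model") != requested

# Hypothetical usage after a non-streaming call:
# response = client.ai_chat(messages=messages, options={"model": "gpt-5"})
# if model_fell_back(response, "gpt-5"):
#     print("Warning: fell back to", response["used_model"])
```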
File System Operations
client.fs_write(path: str, content: Union[str, bytes, Any]) -> Dict[str, Any]
client.fs_read(path: str) -> bytes
client.fs_delete(path: str) -> None
- path: Cloud path (e.g., "test.txt").
- content: String, bytes, or a file-like object.
- Raises `ValueError` or `requests.RequestException` on failure.
Other AI Utilities
client.ai_img2txt(image: Union[str, Any], test_mode: bool = False) -> str
client.ai_txt2img(prompt: str, test_mode: bool = False) -> str
client.ai_txt2speech(text: str, options: Optional[Dict[str, Any]] = None) -> bytes
- ai_img2txt: OCR from a URL or file.
- ai_txt2img: Generates an image URL from a prompt.
- ai_txt2speech: Returns MP3 bytes.
Error Handling: All methods raise exceptions on failure. Use try-except for resilience.
Use Cases
1. Interactive AI Chat Bot (e.g., Customer Support)
Use streaming for real-time responses. Handle model fallbacks for reliability.
messages = [{"role": "system", "content": "You are a helpful assistant."}]
while True:
    user_input = input("You: ")
    if user_input == "exit":
        break
    messages.append({"role": "user", "content": user_input})
    gen = client.ai_chat(messages, options={"model": "gpt-5", "stream": True}, strict_model=False)
    print("Assistant: ", end='')
    reply = ""
    for content, _ in gen:
        reply += content
        print(content, end='', flush=True)
    print()
    messages.append({"role": "assistant", "content": reply})  # keep context for later turns
Why it works: Streaming reduces latency; strict_model=False ensures uptime if GPT-5 is unavailable.
2. File Backup Tool (Cloud Storage Integration)
Sync local files to Puter.js FS.
def backup_file(local_path: str, cloud_path: str):
    with open(local_path, 'rb') as f:
        client.fs_write(cloud_path, f)
    print(f"Backed up {local_path} to {cloud_path}")
Pro Tip: Implement hashing (e.g., SHA-256) to avoid unnecessary uploads. Use threading for large files.
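The hashing tip above can be sketched with the standard library's `hashlib`; the stored-digest bookkeeping is left to you:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 65536) -> str:
    """Hash a file in chunks so large files never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Skip the upload when the digest matches one you stored previously:
# if file_sha256(local_path) != last_uploaded_digest:
#     backup_file(local_path, cloud_path)
```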
3. AI Content Generation Pipeline (e.g., Blog Post Generator)
Generate text, convert to speech, and store.
prompt = "Write a blog post about AI ethics."
response = client.ai_chat(prompt=prompt, options={"model": "claude-3-5-sonnet"})
content = response["response"]["result"]["message"]["content"]
client.fs_write("blog_post.txt", content)
audio = client.ai_txt2speech(content)
with open("blog_post.mp3", "wb") as f:
    f.write(audio)
Best Practice: Batch requests for high-volume use; monitor costs via Puter.js dashboard.
4. Vision-Based Analysis (e.g., Image Description)
description = client.ai_img2txt("https://example.com/image.jpg", test_mode=True)
print("Image description:", description)
Limitation Note: Vision models (e.g., GPT-4o) may require specific drivers; test with test_mode=True first.
Error Handling and Best Practices
Common Errors:
- `ValueError`: Invalid credentials, or model mismatch with `strict_model=True`.
- `requests.RequestException`: Network issues; implement retries with exponential backoff.
- Model fallback: The server may substitute a different model (e.g., GPT-5 → GPT-4.1-nano); check `used_model` in the response.

Best Practices:
- Security: Never hardcode credentials; use environment variables (e.g., `os.getenv("PUTER_USERNAME")`). All user input is sanitized.
- Performance: For streaming, run calls in async contexts (e.g., `asyncio`) to avoid blocking.
- Costs: Always set `test_mode=True` in development; monitor usage via the Puter.js API.
- Testing: Write unit tests for your integration (e.g., mock responses with the `responses` library).
- Versioning: Pin to a specific SDK version in `requirements.txt` (e.g., `putergenai==0.1.5`).
- Scalability: For multi-user apps, pool clients or use session tokens.
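The environment-variable advice can be sketched as a small loader. `PUTER_USERNAME` appears in this reference; `PUTER_PASSWORD` is an assumed companion name, so adjust both to whatever your deployment uses:

```python
import os

def load_credentials():
    """Read credentials from the environment instead of hardcoding them."""
    username = os.getenv("PUTER_USERNAME")
    password = os.getenv("PUTER_PASSWORD")
    if not username or not password:
        raise RuntimeError("Set PUTER_USERNAME and PUTER_PASSWORD first.")
    return username, password

# Hypothetical usage:
# client.login(*load_credentials())
```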
If you encounter issues, check logs (enable DEBUG via logging.basicConfig(level=logging.DEBUG)) and verify your Puter.js account status. Contributions welcome—see below.
Contributing
Fork the repo, create a branch (git checkout -b feature/xyz), commit changes, and open a PR. Follow PEP 8 for style. Include tests (use unittest) and update docs if needed.
Run tests:
python -m unittest discover tests
License
MIT License. See LICENSE for details.
Acknowledgments
Built on top of Puter.js—kudos to the team for an innovative API. Inspired by real-world needs for privacy-focused AI tools.
Maintainer: Nerve11 (@Nerve11)
Last Updated: August 11, 2025
Version: 0.1.5
If this SDK saves you time, star the repo! Questions? Open an issue.
File details
Details for the file putergenai-0.1.5.tar.gz.
File metadata
- Download URL: putergenai-0.1.5.tar.gz
- Upload date:
- Size: 15.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 52bd3e50d43d12a9b3ee424742667d06d3cdbbaa7068a82a72d8c2540ffe8f7a |
| MD5 | 05b1caf648762642451ee9e1bf1dfd2f |
| BLAKE2b-256 | e71235d96f0fba3a7cb7461f261b21b63c486ec60439c0d04a408281a1126b47 |
File details
Details for the file putergenai-0.1.5-py3-none-any.whl.
File metadata
- Download URL: putergenai-0.1.5-py3-none-any.whl
- Upload date:
- Size: 11.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | a5236334da36a84922dcb6ee63b83f93c70cc5c635a969baf8de3f96188cf392 |
| MD5 | c7b48660c06168a5bf67646369038ab7 |
| BLAKE2b-256 | 3846bd6ae4c194acb37e1ca2165c3c90c645eec7355a3a00d1fbad270bd5a46c |