PutergenAI: Python SDK for Puter.js
Asynchronous Python client for interacting with the Puter.com API. This SDK provides access to Puter's AI models (including OpenAI GPT, Claude, Mistral, Grok, DeepSeek, and more), file system operations, image generation, OCR, and text-to-speech capabilities.
Features
- AI Chat Completions: Support for 200+ AI models from various providers (OpenAI, Anthropic, Mistral, xAI, DeepSeek, Google, TogetherAI, OpenRouter, and more)
- File System Operations: Read, write, and delete files on Puter.com
- Image Generation: Create images from text prompts using various models
- OCR: Extract text from images
- Text-to-Speech: Convert text to MP3 audio
- Streaming Support: Real-time streaming responses for chat completions
- Fallback & Retry Logic: Automatic model fallback and retry mechanisms for reliability
Installation
Install PutergenAI using pip:
```shell
pip install putergenai
```
Or from source:
```shell
git clone https://github.com/Nerve11/putergenai.git
cd putergenai
pip install .
```
Quick Start
```python
import asyncio
from putergenai import PuterClient

async def main():
    async with PuterClient() as client:
        # Log in to Puter.com
        await client.login("your_username", "your_password")

        # AI chat with GPT-4o
        result = await client.ai_chat(
            prompt="Hello, how are you?",
            options={"model": "gpt-4o", "stream": False}
        )
        print(result["response"]["result"]["message"]["content"])

asyncio.run(main())
```
Authentication
Authentication is required for most operations. Use your Puter.com username and password:
```python
await client.login("your_username", "your_password")
```
Environment Variables (Recommended)
For testing and development, you can use environment variables. Create a .env file in your project root:
```shell
# Copy the example file
cp .env.example .env
```

Then edit `.env` with your credentials:

```
PUTER_USERNAME=your_username
PUTER_PASSWORD=your_password
```
Then use the credentials in your code:
```python
import os
from dotenv import load_dotenv

load_dotenv()
username = os.getenv("PUTER_USERNAME")
password = os.getenv("PUTER_PASSWORD")

# Inside an async context:
await client.login(username, password)
```
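A small convenience helper (not part of the SDK) can validate the environment variables up front, so a missing credential fails fast with a clear message instead of surfacing as a login error:

```python
import os

def load_credentials():
    # Read Puter credentials from the environment and fail fast
    # with a clear message if either variable is missing.
    username = os.getenv("PUTER_USERNAME")
    password = os.getenv("PUTER_PASSWORD")
    if not username or not password:
        raise RuntimeError("PUTER_USERNAME and PUTER_PASSWORD must be set")
    return username, password
```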
AI Chat Completions
Non-Streaming Chat
```python
messages = [
    {"role": "user", "content": "What is the capital of France?"}
]
result = await client.ai_chat(messages=messages, options={"model": "gpt-4o"})
print(result["response"]["result"]["message"]["content"])
```
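The response nests the assistant's text several levels deep. A small helper (not part of the SDK) can unpack that shape defensively, returning None instead of raising when a level is missing:

```python
def extract_content(result):
    # Walk the nested response shape shown above and return the
    # assistant's text, or None if any level is missing.
    try:
        return result["response"]["result"]["message"]["content"]
    except (KeyError, TypeError):
        return None

sample = {"response": {"result": {"message": {"content": "Paris"}}}}
print(extract_content(sample))  # → Paris
print(extract_content({}))      # → None
```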
Streaming Chat
```python
async def stream_chat():
    # Assumes `client` is an authenticated PuterClient instance.
    messages = [{"role": "user", "content": "Tell me a story"}]
    gen = await client.ai_chat(
        messages=messages,
        options={"model": "claude-opus-4.5", "stream": True}
    )
    print("Assistant: ", end='', flush=True)
    async for content, model in gen:
        print(content, end='', flush=True)
    print()

asyncio.run(stream_chat())
```
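If you need the full reply rather than incremental printing, the chunks can be accumulated into one string. This sketch assumes the streaming generator yields (content, model) pairs as in the example above; the fake generator stands in for the SDK's streaming response:

```python
import asyncio

async def collect_stream(gen):
    # Accumulate streamed (content, model) chunks into one string.
    parts = []
    async for content, _model in gen:
        parts.append(content)
    return "".join(parts)

# Fake generator standing in for client.ai_chat(..., stream=True).
async def fake_stream():
    for chunk in ["Once ", "upon ", "a time."]:
        yield chunk, "claude-opus-4.5"

print(asyncio.run(collect_stream(fake_stream())))  # → Once upon a time.
```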
Vision/Chat with Images
```python
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
        ]
    }
]
result = await client.ai_chat(messages=messages, options={"model": "gpt-4o"})
```
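For local images, a common pattern with OpenAI-style vision APIs is to inline the file as a base64 data URL instead of a public URL. Whether Puter's backend accepts data URLs is an assumption; this helper is not part of the SDK:

```python
import base64

def image_message(text, image_bytes, mime="image/png"):
    # Build an OpenAI-style multimodal user message with the image
    # inlined as a base64 data URL.
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{b64}"}},
        ],
    }

msg = image_message("What's in this image?", b"fake-png-bytes")
print(msg["content"][1]["image_url"]["url"][:22])  # → data:image/png;base64,
```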
Supported Models
The SDK supports models from:
- OpenAI: GPT-5, GPT-4o, o3 series
- Anthropic: Claude Opus, Sonnet, Haiku series
- Mistral: Large, Medium, Small models
- xAI: Grok series
- DeepSeek: Chat and Reasoner models
- Google: Gemini series
- TogetherAI: Various models including LLMs and image generation
- OpenRouter: Access to 100+ models from different providers
For the complete list of supported models, refer to the model_to_driver mapping in the source code.
File System Operations
Write a File
```python
await client.fs_write("hello.txt", "Hello, Puter!")
```
Read a File
```python
content = await client.fs_read("hello.txt")
print(content.decode('utf-8'))
```
Delete a File
```python
await client.fs_delete("hello.txt")
```
Image Generation
```python
image_url = await client.ai_txt2img(
    "A beautiful sunset over mountains",
    model="pollinations-image"
)
print(image_url)
```
OCR (Image to Text)
```python
text = await client.ai_img2txt("https://example.com/image.png")
print(text)

# Or with a file upload
with open("image.png", "rb") as f:
    text = await client.ai_img2txt(f)
```
Text-to-Speech
```python
audio_bytes = await client.ai_txt2speech("Hello, world!")
with open("output.mp3", "wb") as f:
    f.write(audio_bytes)
```
Advanced Options
Custom Parameters
```python
options = {
    "model": "gpt-5",
    "temperature": 1,
    "max_tokens": 2000,
    "stream": True
}
gen = await client.ai_chat(
    messages=messages,
    options=options,
    test_mode=False,    # set True to use test mode for debugging
    strict_model=True   # enforce exact model usage
)
```
Tools and Function Calling
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather information",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                }
            }
        }
    }
]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
result = await client.ai_chat(
    messages=messages,
    options={"model": "gpt-4o", "tools": tools}
)

# Check for tool calls in the result
if "tool_calls" in result["response"]["result"]["message"]:
    # Handle tool calls...
    pass
```
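Handling a tool call typically means looking up the named function, parsing its JSON-encoded arguments, and invoking it. A minimal sketch, assuming the tool calls follow the OpenAI-style shape (`get_weather` and the registry are hypothetical, not part of the SDK):

```python
import json

def get_weather(location):
    # Hypothetical local implementation of the declared tool.
    return f"Sunny in {location}"

TOOL_REGISTRY = {"get_weather": get_weather}

def dispatch_tool_call(tool_call):
    # tool_call is assumed to follow the OpenAI-style shape:
    # {"function": {"name": ..., "arguments": "<JSON string>"}}
    func = TOOL_REGISTRY[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return func(**args)

call = {"function": {"name": "get_weather", "arguments": '{"location": "Paris"}'}}
print(dispatch_tool_call(call))  # → Sunny in Paris
```

The tool's return value would then be appended to `messages` as a tool-result message and the conversation sent back to the model.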
Error Handling
The SDK includes built-in error handling and retry logic:
- Automatic retries for transient failures
- Model fallback when preferred models are unavailable
- SSL verification options for debugging network issues
```python
try:
    result = await client.ai_chat(messages=messages)
except ValueError as e:
    print(f"Error: {e}")
```
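If you want retry behavior around your own call sites as well, a generic wrapper (not part of the SDK) can retry a coroutine factory with exponential backoff. Here a fake flaky call stands in for `client.ai_chat`:

```python
import asyncio

async def with_retries(coro_factory, attempts=3, base_delay=0.1):
    # Retry a coroutine factory on ValueError, sleeping
    # base_delay * 2**attempt between attempts.
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except ValueError:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

# Demo: a fake call that fails twice, then succeeds.
calls = {"count": 0}

async def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ValueError("transient failure")
    return "ok"

print(asyncio.run(with_retries(flaky)))  # → ok
```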
Examples
See the examples/ directory for comprehensive usage examples:
- examples/example.py: interactive chat terminal application
- examples/example-ui.py: GUI chat application with CustomTkinter
Run an example:
```shell
python examples/example.py
```
Contributing
Contributions are welcome!
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
License
PutergenAI is licensed under the MIT License. See LICENSE for details.
Built with ❤️ for the Puter.com platform