Apertis Python SDK
Official Python SDK for the Apertis AI API.
Installation
```bash
pip install apertis
```
Quick Start
```python
from apertis import Apertis

client = Apertis(api_key="your-api-key")
# Or set the APERTIS_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(response.choices[0].message.content)
```
Features
- Sync and Async Support: Both synchronous and asynchronous clients
- Streaming: Real-time streaming for chat completions
- Tool Calling: Function/tool calling support
- Embeddings: Text embedding generation
- Type Hints: Full type annotations for IDE support
- Automatic Retries: Built-in retry logic for transient errors
Usage
Chat Completions
```python
from apertis import Apertis

client = Apertis()

response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    temperature=0.7,
    max_tokens=100,
)

print(response.choices[0].message.content)
print(f"Tokens used: {response.usage.total_tokens}")
```
Streaming
```python
stream = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
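When streaming, you often want the complete reply as well as the live printout. The deltas can simply be accumulated as they arrive; the sketch below shows the pattern with plain strings standing in for `chunk.choices[0].delta.content`, so it runs without the SDK:

```python
def collect_stream(deltas):
    """Join non-empty content deltas into the complete message text."""
    parts = []
    for content in deltas:
        if content:  # delta.content can be None, e.g. on the final stop chunk
            parts.append(content)
    return "".join(parts)

# Stand-in for the deltas a real stream would yield:
full_text = collect_stream(["Once", " upon", None, " a time."])
print(full_text)
```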
Async Usage
```python
import asyncio

from apertis import AsyncApertis

async def main():
    client = AsyncApertis()
    response = await client.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
    await client.close()

asyncio.run(main())
```
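A common reason to use the async client is issuing several requests concurrently with `asyncio.gather`. The sketch below substitutes a hypothetical local `ask` coroutine for the real `client.chat.completions.create` call so it runs without network access; the gathering pattern is the same:

```python
import asyncio

async def ask(prompt):
    # Stand-in for: await client.chat.completions.create(model=..., messages=...)
    await asyncio.sleep(0)
    return f"answer to: {prompt}"

async def main():
    prompts = ["Hello!", "What is 2 + 2?", "Name a color."]
    # All three requests are in flight at once
    return await asyncio.gather(*(ask(p) for p in prompts))

results = asyncio.run(main())
print(results)
```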
Async Streaming
```python
async def stream_example():
    client = AsyncApertis()
    stream = await client.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": "Tell me a joke"}],
        stream=True,
    )
    async for chunk in stream:
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    await client.close()
```
Tool Calling
```python
response = client.chat.completions.create(
    model="gpt-5.2",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city name",
                    }
                },
                "required": ["location"],
            },
        },
    }],
    tool_choice="auto",
)

if response.choices[0].message.tool_calls:
    for tool_call in response.choices[0].message.tool_calls:
        print(f"Function: {tool_call.function.name}")
        print(f"Arguments: {tool_call.function.arguments}")
```
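After the model requests a tool call, your code executes the function and sends the result back in a follow-up message. Here is a minimal dispatch sketch; `get_weather` is a local stand-in implementation, and the `"tool"` message shape (with a `tool_call_id` field) follows the common chat-completions convention rather than anything Apertis-specific:

```python
import json

def get_weather(location):
    # Stand-in implementation; a real one would query a weather service
    return f"Sunny in {location}"

TOOLS = {"get_weather": get_weather}

def run_tool_call(name, arguments_json, call_id):
    """Execute the named tool and build the follow-up message for the API."""
    args = json.loads(arguments_json)  # function.arguments arrives as a JSON string
    result = TOOLS[name](**args)
    return {"role": "tool", "tool_call_id": call_id, "content": result}

msg = run_tool_call("get_weather", '{"location": "Tokyo"}', "call_1")
print(msg["content"])
```

Appending `msg` to the conversation and calling `create` again lets the model produce its final answer from the tool result.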
Embeddings
```python
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="Hello, world!",
)

embedding = response.data[0].embedding
print(f"Embedding dimension: {len(embedding)}")
```
Batch Embeddings
```python
response = client.embeddings.create(
    model="text-embedding-3-small",
    input=["Hello", "World", "How are you?"],
)

for item in response.data:
    print(f"Index {item.index}: {len(item.embedding)} dimensions")
```
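Embeddings are typically compared with cosine similarity. This helper works on the plain float lists returned in `item.embedding`; the vectors below are tiny examples rather than real model output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # parallel vectors -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # orthogonal vectors -> 0.0
```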
Error Handling
```python
from apertis import (
    Apertis,
    ApertisError,
    APIError,
    AuthenticationError,
    RateLimitError,
)

client = Apertis()

try:
    response = client.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": "Hello!"}],
    )
except AuthenticationError as e:
    print(f"Invalid API key: {e.message}")
except RateLimitError as e:
    print(f"Rate limited. Status: {e.status_code}")
except APIError as e:
    print(f"API error {e.status_code}: {e.message}")
except ApertisError as e:
    print(f"Error: {e.message}")
```
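The client already retries transient failures for you (see `max_retries` below), but rate limits sometimes call for longer application-level backoff. Here is a sketch of exponential backoff with jitter; `RateLimitError` is redefined locally as a stand-in for the SDK exception so the example is self-contained:

```python
import random
import time

class RateLimitError(Exception):
    """Local stand-in for apertis.RateLimitError."""

def with_backoff(call, max_attempts=4, base_delay=0.5):
    """Retry `call` on rate limits, doubling the delay each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # Exponential backoff with up to 100% random jitter
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Simulate a call that is rate limited twice, then succeeds:
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

print(with_backoff(flaky_call, base_delay=0.001))
```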
Configuration
```python
client = Apertis(
    api_key="your-api-key",                 # Or use the APERTIS_API_KEY env var
    base_url="https://api.apertis.ai/v1",   # Custom base URL
    timeout=60.0,                           # Request timeout in seconds
    max_retries=2,                          # Number of retries for failed requests
    default_headers={"X-Custom": "value"},  # Additional headers
)
```
Context Manager
```python
with Apertis() as client:
    response = client.chat.completions.create(
        model="gpt-5.2",
        messages=[{"role": "user", "content": "Hello!"}],
    )
# Client is automatically closed

# Async version
async with AsyncApertis() as client:
    response = await client.chat.completions.create(...)
```
Available Models
Chat Models
- gpt-5.2
- gpt-5.2-codex
- gpt-5.1
- claude-opus-4-5-20251101
- claude-sonnet-4.5
- claude-haiku-4.5
- gemini-3-pro-preview
- gemini-3-flash-preview
- gemini-2.5-flash-preview
Embedding Models
- text-embedding-3-small
- text-embedding-3-large
- text-embedding-ada-002
Requirements
- Python 3.9+
- httpx
- pydantic
License
Apache 2.0 - see LICENSE for details.
Download files
Source Distribution

apertis-0.1.0.tar.gz (18.5 kB)
Built Distribution
apertis-0.1.0-py3-none-any.whl (18.7 kB)
File details
Details for the file apertis-0.1.0.tar.gz.
File metadata
- Download URL: apertis-0.1.0.tar.gz
- Upload date:
- Size: 18.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2f1ca5ff860a93f7c482ad63785d1905e9c05d6d946443d10128cc923d2340a0 |
| MD5 | e090d4c5b6595b1032ad7a5aada1353c |
| BLAKE2b-256 | 1bcaa167b042d7e0ac3de110cee727e8f43ee03f297d49ac49b4f0a012c0971f |
File details
Details for the file apertis-0.1.0-py3-none-any.whl.
File metadata
- Download URL: apertis-0.1.0-py3-none-any.whl
- Upload date:
- Size: 18.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.11
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 71140a2bdd5c782efe4185c642fbae7bb9cccc3b52b1b905a73ce257606922e7 |
| MD5 | 6afa879789fd6375f8b2a92f307ebd58 |
| BLAKE2b-256 | ac5b271f0f14a26dff533e6c4bd02a02fdeeeffaa3e6d01e96bdd0913417f1a2 |