# Efference Python SDK

Official Python client for the Efference ML API - an advanced 3D vision inference platform with GPU-accelerated depth estimation and correction.
## Installation

### From PyPI

```bash
pip install efference
```
## Quick Start

### Basic Video Processing

```python
from efference import EfferenceClient

client = EfferenceClient(api_key="sk_live_your_api_key")

result = client.videos.process("path/to/video.mp4")
print(f"Status: {result['status']}")
print(f"Credits deducted: {result['credits_deducted']}")
```
### Basic Image Processing

```python
result = client.images.process_rgbd(
    "color.png",
    "depth.png",
    save_visualization="depth_colored.png"
)
print(f"Depth range: {result['inference_result']['output']['min']:.2f}m - {result['inference_result']['output']['max']:.2f}m")
```
## Authentication

### Set Your API Key

Pass your API key directly to the client. The API base URL can be overridden via the EFFERENCE_API_URL environment variable:

```python
import os

os.environ["EFFERENCE_API_URL"] = "https://api.efference.ai"

client = EfferenceClient(api_key="sk_live_your_key")
```
### Custom Endpoint (Testing)

```python
client = EfferenceClient(
    api_key="sk_test_your_key",
    base_url="http://localhost:8000"
)
```
## API Reference

### EfferenceClient

Main client class for interacting with the Efference API.

#### Initialization

```python
client = EfferenceClient(api_key: str, base_url: Optional[str] = None, timeout: Optional[float] = None)
```

Parameters:

| Parameter | Type | Description | Default |
|---|---|---|---|
| api_key | str | Your API key (sk_live_* or sk_test_*) | Required |
| base_url | str | Override the API endpoint (see Custom Endpoint) | None (hosted service) |
| timeout | float | Request timeout in seconds | 300 |

Raises:

- ValueError: If api_key is empty
### Videos Namespace: client.videos

#### Process Single Frame

```python
result = client.videos.process(
    file_path: str | Path | file-like,
    model: str = "rgbd",
    content_type: str = None
) -> dict
```
Process a video file through the ML model.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| file_path | str, Path, or file-like | Path to video file |
| model | str | Model variant (default: "rgbd") |
| content_type | str | MIME type (auto-detected if None) |

Returns: Dictionary with inference results

Example:

```python
result = client.videos.process("video.mp4")
print(result["inference_result"])
print(f"Credits used: {result['credits_deducted']}")
```
#### Process All Frames (Batch)

```python
result = client.videos.process_batch(
    file_path: str | Path | file-like,
    max_frames: int = None,
    frame_skip: int = 1,
    content_type: str = None
) -> dict
```

Process all or a subset of frames from a video file.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| file_path | str, Path, or file-like | Path to video file |
| max_frames | int | Max frames to process (None = all) |
| frame_skip | int | Process every Nth frame |
| content_type | str | MIME type (auto-detected if None) |
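To illustrate how `frame_skip` and `max_frames` interact, the sketch below computes which frame indices a batch request would cover. This is an illustrative helper, not an SDK function, and it assumes the selection semantics described above (every Nth frame, capped at `max_frames`); the server's exact sampling may differ.

```python
def planned_frame_indices(frame_count, frame_skip=1, max_frames=None):
    """Frame indices a batch request would cover: every frame_skip-th
    frame of the video, capped at max_frames entries."""
    indices = list(range(0, frame_count, frame_skip))
    return indices if max_frames is None else indices[:max_frames]

print(planned_frame_indices(10, frame_skip=2))                 # [0, 2, 4, 6, 8]
print(planned_frame_indices(660, frame_skip=1, max_frames=3))  # [0, 1, 2]
```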
Example:

```python
result = client.videos.process_batch(
    "video.mp4",
    max_frames=100,
    frame_skip=2
)
print(f"Processed {result['frames_processed']} frames")
print(f"Credits used: {result['credits_deducted']}")
```
Response structure (example):

```json
{
  "status": "success",
  "filename": "8134891-uhd_2160_4096_25fps.mp4",
  "file_size_bytes": 18040017,
  "model_name": "d435",
  "video_metadata": {
    "fps": 25.0,
    "frame_count": 660,
    "width": 1440,
    "height": 2732,
    "extracted_frames": 50
  },
  "frames_processed": 50,
  "frame_skip": 1,
  "batch_results": [
    {
      "frame_index": 0,
      "inference_result": {
        "model_type": "rgbd",
        "output": {
          "shape": [518, 518],
          "dtype": "float16",
          "min": 2.2734375,
          "max": 19.828125,
          "mean": 6.35546875,
          "has_valid_depth": true
        }
      }
    }
    // ... 49 more frame entries ...
  ],
  "processing_summary": {
    "total_frames_in_video": 660,
    "frames_extracted": 50,
    "frames_processed": 50
  },
  "credits_deducted": 15.602150440216064,
  "credits_remaining": true,
  "billing_info": {
    "base_cost": 2.0,
    "frame_cost": 5.0,
    "size_cost": 8.602150440216064,
    "total": 15.602150440216064
  }
}
```
Note: fields under batch_results[*].inference_result.output report depth statistics for each processed frame. The exact numbers depend on the input video and model configuration.
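Because each frame carries its own statistics, a client-side summary across the whole batch is often useful. The helper below is a sketch (not part of the SDK) that assumes only the `batch_results` layout shown in the example response above:

```python
def summarize_batch_depth(result):
    """Aggregate per-frame depth statistics from a process_batch() response."""
    outputs = [f["inference_result"]["output"] for f in result["batch_results"]]
    valid = [o for o in outputs if o.get("has_valid_depth")]
    return {
        "frames": len(outputs),
        "min": min(o["min"] for o in valid),
        "max": max(o["max"] for o in valid),
        "mean": sum(o["mean"] for o in valid) / len(valid),
    }

# Demonstration with two synthetic frame entries:
fake = {"batch_results": [
    {"inference_result": {"output": {"min": 2.0, "max": 18.0, "mean": 6.0, "has_valid_depth": True}}},
    {"inference_result": {"output": {"min": 2.5, "max": 20.0, "mean": 7.0, "has_valid_depth": True}}},
]}
print(summarize_batch_depth(fake))
# {'frames': 2, 'min': 2.0, 'max': 20.0, 'mean': 6.5}
```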
### Images Namespace: client.images

#### Process RGBD Image

```python
result = client.images.process_rgbd(
    rgb_path: str | Path | file-like,
    depth_path: str | Path | file-like = None,
    depth_scale: float = 1000.0,
    input_size: int = 518,
    max_depth: float = 25.0,
    save_visualization: str | Path = None,
    save_3panel: str | Path = None
) -> dict
```
Process an RGB image, with optional depth input, for depth estimation/correction.

Parameters:

| Parameter | Type | Description | Default |
|---|---|---|---|
| rgb_path | str, Path, file-like | Path to RGB image | Required |
| depth_path | str, Path, file-like | Optional depth image | None |
| depth_scale | float | Depth sensor scale factor | 1000.0 |
| input_size | int | Model input resolution | 518 |
| max_depth | float | Max depth for visualization | 25.0 |
| save_visualization | str, Path | Save colorized depth PNG | None |
| save_3panel | str, Path | Save comparison PNG | None |
Example:

```python
result = client.images.process_rgbd(
    "color.png",
    "depth_raw.png",
    depth_scale=1000.0,
    save_visualization="depth_colored.png",
    save_3panel="comparison.png"
)
print(f"Status: {result['status']}")
print(f"Depth range: {result['inference_result']['output']['min']:.2f}m - {result['inference_result']['output']['max']:.2f}m")
```
#### Visualize Depth Results

```python
fig = client.images.visualize_depth(
    result: dict,
    mode: str = "single",
    show: bool = True
) -> matplotlib.figure.Figure
```

Display a depth visualization using matplotlib.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| result | dict | API response from process_rgbd() |
| mode | str | "single" or "3panel" |
| show | bool | Display immediately |
Example:

```python
result = client.images.process_rgbd("color.png", "depth.png")
fig = client.images.visualize_depth(result, mode="3panel")
```
### Streaming Namespace: client.streaming

#### Start Camera Stream

```python
result = client.streaming.start(camera_type: str = "realsense") -> dict
```

Example:

```python
result = client.streaming.start("realsense")
print(f"Status: {result['status']}")
```

#### Get Frame from Stream

```python
frame = client.streaming.get_frame(run_inference: bool = False) -> dict
```

Example:

```python
frame = client.streaming.get_frame(run_inference=True)
print(f"Frame #{frame['frame_data']['frame_count']}")
```

#### Stop Camera Stream

```python
result = client.streaming.stop() -> dict
```

#### Get Stream Status

```python
status = client.streaming.status() -> dict
```
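The calls above compose into a start → poll → stop session. The helper below is a hypothetical sketch, not part of the SDK, using only the method signatures documented above; the `finally` block ensures the stream is stopped even if a frame fetch raises:

```python
def capture_frames(client, n_frames=10, camera_type="realsense"):
    """Start a stream, collect n_frames inference results, then stop."""
    frames = []
    client.streaming.start(camera_type)
    try:
        for _ in range(n_frames):
            # run_inference=True attaches depth inference to each frame
            frames.append(client.streaming.get_frame(run_inference=True))
    finally:
        client.streaming.stop()  # always release the camera
    return frames
```

For example, `capture_frames(client, n_frames=30)` would return a list of thirty frame dictionaries.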
### Models Namespace: client.models

#### Switch Model

```python
result = client.models.switch(model_name: str) -> dict
```

Example:

```python
result = client.models.switch("d405")
print(f"Active model: {result['current_model']}")
```

#### List Available Models

```python
models = client.models.list() -> dict
```

Example:

```python
models = client.models.list()
print(f"Available: {models['available_models']}")
```
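The two calls pair naturally: list first, then switch only if the name is valid. The wrapper below is a hypothetical convenience, assuming the `available_models` and `current_model` response keys shown in the examples above:

```python
def ensure_model(client, name):
    """Switch to `name` only if the server lists it; return the active model."""
    models = client.models.list()
    if name not in models["available_models"]:
        raise ValueError(
            f"Unknown model {name!r}; available: {models['available_models']}"
        )
    return client.models.switch(name)["current_model"]
```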
## Examples

### Example 1: Simple Video Processing

```python
from efference import EfferenceClient

client = EfferenceClient(api_key="sk_live_your_key")

result = client.videos.process("test_video.mp4")
print(f"Status: {result['status']}")
print(f"File size: {result['file_size_bytes'] / 1e6:.2f}MB")
print(f"Credits deducted: {result['credits_deducted']:.2f}")
print(f"Credits remaining: {result['credits_remaining']:.2f}")
```
### Example 2: Batch Video Processing

```python
result = client.videos.process_batch(
    "long_video.mp4",
    max_frames=50,
    frame_skip=1
)
print(f"Processed {result['frames_processed']} frames")
for idx, frame_result in enumerate(result['batch_results']):
    print(f"Frame {idx}: {frame_result['inference_result']['output']}")
```
### Example 3: Image Depth Estimation with Visualization

```python
result = client.images.process_rgbd(
    "input/rgb.png",
    "input/depth.png",
    save_visualization="output/depth.png",
    save_3panel="output/comparison.png"
)
client.images.visualize_depth(result, mode="3panel", show=True)
```
### Example 4: Custom Depth Parameters

```python
result = client.images.process_rgbd(
    "color.png",
    depth_path="depth_raw.png",
    depth_scale=1000.0,
    input_size=518,
    max_depth=30.0
)
output = result['inference_result']['output']
print(f"Depth range: {output['min']:.2f}m - {output['max']:.2f}m")
print(f"Mean depth: {output['mean']:.2f}m")
```
### Example 5: Error Handling

```python
import httpx

from efference import EfferenceClient

client = EfferenceClient(api_key="sk_live_your_key")

try:
    result = client.videos.process("video.mp4")
except FileNotFoundError as e:
    print(f"File not found: {e}")
except httpx.HTTPStatusError as e:
    if e.response.status_code == 401:
        print("Authentication failed. Check your API key.")
    elif e.response.status_code == 402:
        print("Insufficient credits.")
    elif e.response.status_code == 413:
        print("File too large.")
    else:
        print(f"HTTP error: {e.response.status_code}")
except httpx.TimeoutException:
    print("Request timed out.")
except httpx.RequestError as e:
    print(f"Connection error: {e}")
```
## Error Handling

### Common Errors and Solutions
| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid API key | Verify key starts with sk_live_ or sk_test_ |
| 402 Payment Required | Insufficient credits | Purchase additional credits |
| 413 Payload Too Large | Video exceeds 500MB | Split into smaller files |
| 504 Gateway Timeout | Processing took too long | Increase timeout or reduce input size |
| 429 Too Many Requests | Rate limited | Implement exponential backoff |
| 500 Internal Server Error | Server error | Retry the request after a delay |
## Advanced Usage

### Custom Endpoint

```python
client = EfferenceClient(
    api_key="sk_live_your_key",
    base_url="http://your-server.local:8000"
)
```

### Custom Timeout

```python
client = EfferenceClient(
    api_key="sk_live_your_key",
    timeout=600.0  # 10 minutes
)
```

### File-like Objects

```python
import io

with open("video.mp4", "rb") as f:
    video_io = io.BytesIO(f.read())

result = client.videos.process(video_io, content_type="video/mp4")
```
### Retry Logic

```python
import time

import httpx

def process_with_retry(client, video_path, max_retries=3):
    """Retry transient failures (timeouts, connection errors) with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return client.videos.process(video_path)
        except httpx.RequestError:
            # Covers timeouts and connection errors; HTTP status errors
            # (401, 402, ...) are not retried.
            if attempt == max_retries - 1:
                raise
            wait_time = 2 ** attempt
            print(f"Attempt {attempt + 1} failed. Retrying in {wait_time}s...")
            time.sleep(wait_time)
```
## Support and Resources

- Documentation: https://docs.efference.ai
- API Status: https://status.efference.ai
- GitHub Issues: https://github.com/EfferenceAI/efference/issues
- Email: support@efference.ai

## License

MIT License - see the LICENSE file for details.
## File details

Details for the file efference-0.1.4.tar.gz.

### File metadata

- Download URL: efference-0.1.4.tar.gz
- Upload date:
- Size: 15.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.6

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | b64c7c4a57b3d94b96501b35274eb4cde32956248b8e1cbbd375587ae45e8e04 |
| MD5 | 3b0ceb94a21dc8c77bfaf77df33e8986 |
| BLAKE2b-256 | d1dedb6fa957c5be623136aa87cae0ee457d883fcd83b7e92d8bdc4527b6bede |
## File details

Details for the file efference-0.1.4-py3-none-any.whl.

### File metadata

- Download URL: efference-0.1.4-py3-none-any.whl
- Upload date:
- Size: 12.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.9.6

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 40d6a3a670ca49361ec7c17a056761e4c81fdd3c8ebd2142650613ee36f6807d |
| MD5 | bb625f23a60e1974bd22f5f9c62e051d |
| BLAKE2b-256 | 20061c864e5ded6797deed1e3560b15166f0e6d2ed61c3ffc8238d394260be2b |