FlyMy.AI
Generated with FlyMy.AI in 🚀 70ms
Welcome to the FlyMy.AI inference platform. Our goal is to provide the fastest and most affordable deployment solutions for neural networks and AI applications.
- Fast Inference: Experience the fastest Stable Diffusion inference globally.
- Scalability: Autoscaling to serve millions of users.
- Ease of Use: One-click deployment for any publicly available neural network.
Website
For more information, visit our website: FlyMy.AI. Or connect with us and other users on Discord: Join Discord.
Getting Started
This is the Python client for FlyMyAI. It lets you run models and get predictions from your Python code, in both sync and async modes.
Requirements
- Python 3.8+
Installation
Install the FlyMyAI client using pip:
```bash
pip install flymyai
```
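If you need to reproduce the exact release this page documents, you can pin the version shown in the file details below:

```bash
pip install flymyai==1.0.23
```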
Authentication
Before using the client, you need your API key, username, and project name. To obtain these credentials, sign up at flymy.ai and find them on your profile page.
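The examples below pass the key inline for brevity. To keep secrets out of source code, you can load the key from an environment variable instead; a minimal sketch, where the variable name `FLYMYAI_APIKEY` is just a convention chosen for this example, not something the client reads automatically:

```python
import os

from flymyai import client

# FLYMYAI_APIKEY is an arbitrary variable name used in this sketch;
# the client only sees the value passed via the apikey argument.
fma_client = client(apikey=os.environ["FLYMYAI_APIKEY"])
```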
Basic Usage
Here's a simple example of how to use the FlyMyAI client:
BERT sentiment analysis
```python
import flymyai

response = flymyai.run(
    apikey="fly-secret-key",
    model="flymyai/bert",
    payload={"text": "What a fabulous fancy building! It looks like a palace!"}
)
print(response.output_data["logits"][0])
```
Sync Streams
For LLMs, use the stream method:
Llama 3.1 8B
```python
from flymyai import client, FlyMyAIPredictException

fma_client = client(apikey="fly-secret-key")
stream_iterator = fma_client.stream(
    payload={
        "prompt": "tell me a story about christmas tree",
        "best_of": 12,
        "max_tokens": 1024,
        "stop": 1,
        "temperature": 1,
        "top_k": 1,
        "top_p": 0.95,
    },
    model="flymyai/llama-v3-1-8b"
)
try:
    # Print tokens as they arrive
    for response in stream_iterator:
        if response.output_data.get("output"):
            print(response.output_data["output"].pop(), end="")
except FlyMyAIPredictException as e:
    print(e)
    raise e
finally:
    print()
    print(stream_iterator.stream_details)
```
Async Streams
The async client exposes the same stream method for LLMs:
Stable Code Instruct 3B
```python
import asyncio

from flymyai import async_client, FlyMyAIPredictException


async def run_stable_code():
    fma_client = async_client(apikey="fly-secret-key")
    stream_iterator = fma_client.stream(
        payload={
            "prompt": "What's the difference between an iterator and a generator in Python?",
            "best_of": 12,
            "max_tokens": 512,
            "stop": 1,
            "temperature": 1,
            "top_k": 1,
            "top_p": 0.95,
        },
        model="flymyai/Stable-Code-Instruct-3b"
    )
    try:
        # Print tokens as they arrive
        async for response in stream_iterator:
            if response.output_data.get("output"):
                print(response.output_data["output"].pop(), end="")
    except FlyMyAIPredictException as e:
        print(e)
        raise e
    finally:
        print()
        print(stream_iterator.stream_details)


asyncio.run(run_stable_code())
```
File Inputs
ResNet image classification
You can pass file inputs to models using file paths:
```python
import pathlib

import flymyai

response = flymyai.run(
    apikey="fly-secret-key",
    model="flymyai/resnet",
    payload={"image": pathlib.Path("/path/to/image.png")}
)
# Output scores are keyed by class index (here, "495")
print(response.output_data["495"])
```
File Response Handling
Files received from the neural network are always encoded in base64 format. To process these files, you need to decode them first. Here's an example of how to handle an image file:
Stable Diffusion Turbo image generation in ~50ms 🚀
```python
import base64

import flymyai

response = flymyai.run(
    apikey="fly-secret-key",
    model="flymyai/SDTurboFMAAceleratedH100",
    payload={
        "prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic, photorealistic",
    }
)
# The generated image arrives base64-encoded; decode it before writing to disk
base64_image = response.output_data["sample"][0]
image_data = base64.b64decode(base64_image)
with open("generated_image.jpg", "wb") as file:
    file.write(image_data)
```
Asynchronous Requests
FlyMyAI supports asynchronous requests for improved performance. Here's how to use it:
```python
import asyncio

import flymyai


async def main():
    payloads = [
        {
            "prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic, photorealistic",
            "negative_prompt": "Dark colors, gloomy atmosphere, horror",
            "seed": count,
            "denoising_steps": 4,
            "scheduler": "DPM++ SDE"
        }
        for count in range(1, 10)
    ]
    # Note: asyncio.TaskGroup requires Python 3.11+
    async with asyncio.TaskGroup() as gr:
        tasks = [
            gr.create_task(
                flymyai.async_run(
                    apikey="fly-secret-key",
                    model="flymyai/DreamShaperV2-1",
                    payload=payload
                )
            )
            for payload in payloads
        ]
    # All tasks have finished once the TaskGroup block exits
    results = await asyncio.gather(*tasks)
    for result in results:
        print(result.output_data["output"])


asyncio.run(main())
```
Running Models in the Background
To run a model in the background, simply use the async_run() method:
```python
import asyncio
import pathlib

import flymyai


async def background_task():
    payload = {"audio": pathlib.Path("/path/to/audio.mp3")}
    response = await flymyai.async_run(
        apikey="fly-secret-key",
        model="flymyai/whisper",
        payload=payload
    )
    print("Background task completed:", response.output_data["transcription"])


async def main():
    task = asyncio.create_task(background_task())
    # Continue with other operations while the model runs in the background,
    # then await the task when its result is needed
    await task


asyncio.run(main())
```
Asynchronous Prediction Tasks
For long-running operations, FlyMyAI provides asynchronous prediction tasks. This allows you to submit a task and check its status later, which is useful for handling time-consuming predictions without blocking your application.
Using the Synchronous Client
```python
from flymyai import client
from flymyai.core.exceptions import (
    RetryTimeoutExceededException,
    FlyMyAIExceptionGroup,
)

# Initialize client
fma_client = client(apikey="fly-secret-key")

# Submit async prediction task
prediction_task = fma_client.predict_async_task(
    model="flymyai/flux-schnell",
    payload={"prompt": "Funny Cat with Stupid Dog"}
)

try:
    # Get result
    result = prediction_task.result()
    print(f"Prediction completed: {result.inference_responses}")
except RetryTimeoutExceededException:
    print("Prediction is taking longer than expected")
except FlyMyAIExceptionGroup as e:
    print(f"Prediction failed: {e}")
```
Using the Asynchronous Client
```python
import asyncio

from flymyai import async_client
from flymyai.core.exceptions import (
    RetryTimeoutExceededException,
    FlyMyAIExceptionGroup,
)


async def run_prediction():
    # Initialize async client
    fma_client = async_client(apikey="fly-secret-key")

    # Submit async prediction task
    prediction_task = await fma_client.predict_async_task(
        model="flymyai/flux-schnell",
        payload={"prompt": "Funny Cat with Stupid Dog"}
    )

    try:
        # Await result with default timeout
        result = await prediction_task.result()
        print(f"Prediction completed: {result.inference_responses}")

        # Check response status
        all_successful = all(
            resp.infer_details["status"] == 200
            for resp in result.inference_responses
        )
        print(f"All predictions successful: {all_successful}")
    except RetryTimeoutExceededException:
        print("Prediction is taking longer than expected")
    except FlyMyAIExceptionGroup as e:
        print(f"Prediction failed: {e}")


# Run async function
asyncio.run(run_prediction())
```
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file flymyai-1.0.23.tar.gz.
File metadata
- Download URL: flymyai-1.0.23.tar.gz
- Upload date:
- Size: 22.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.4 CPython/3.13.0 Linux/6.5.0-1025-azure
File hashes

Algorithm | Hash digest
---|---
SHA256 | af78e26b572087a8e62bcfccd06b941035c29c2ffba631e278bc9c0fc35777c9
MD5 | 879a62da630f086bcecffbed98857f2b
BLAKE2b-256 | 89056c4d97610112c262418a02ef4c0e8a95c35b9382b79798576674db2364ba
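To check that a downloaded archive matches the published SHA256 digest above, a minimal sketch (assuming the sdist sits in the current working directory):

```python
import hashlib
from pathlib import Path

# Assumes flymyai-1.0.23.tar.gz was downloaded to the working directory
digest = hashlib.sha256(Path("flymyai-1.0.23.tar.gz").read_bytes()).hexdigest()

expected = "af78e26b572087a8e62bcfccd06b941035c29c2ffba631e278bc9c0fc35777c9"
print("OK" if digest == expected else "hash mismatch")
```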
File details
Details for the file flymyai-1.0.23-py3-none-any.whl.
File metadata
- Download URL: flymyai-1.0.23-py3-none-any.whl
- Upload date:
- Size: 30.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.4 CPython/3.13.0 Linux/6.5.0-1025-azure
File hashes

Algorithm | Hash digest
---|---
SHA256 | b4f39cd4ca663076c028022823b05c00a39a84f8d461663f1197c9a84c3d1720
MD5 | 1bb3c018b79f75397b25f1ae6035d322
BLAKE2b-256 | 6b112e07076d17cbba1995c3b290904322ad593bb782f63d63c00aa81443cf2a