
FlyMy.AI

Generated with FlyMy.AI in 🚀 70ms

Welcome to FlyMy.AI inference platform. Our goal is to provide the fastest and most affordable deployment solutions for neural networks and AI applications.

  • Fast Inference: Experience the fastest Stable Diffusion inference globally.
  • Scalability: Autoscaling to millions of users per second.
  • Ease of Use: One-click deployment for any publicly available neural network.

Website

For more information, visit our website: FlyMy.AI. To connect with us and other users, join our Discord: Join Discord

Getting Started

This is a Python client for FlyMyAI. It lets you run models and get predictions from your Python code in both synchronous and asynchronous modes.

Requirements

  • Python 3.8+

Installation

Install the FlyMyAI client using pip:

pip install flymyai
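
To verify the installation, try importing the package; the command below should exit silently if the install succeeded:

python -c "import flymyai"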

Authentication

Before using the client, you need your API key, username, and project name. To get these credentials, sign up at flymy.ai and copy them from your profile page.
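
For example, instead of hard-coding the key, you can read it from an environment variable. This is a minimal sketch: the FLYMYAI_APIKEY variable name is just a convention for this example, not something the library requires.

import os

from flymyai import client

# Assumes you exported FLYMYAI_APIKEY yourself; the variable name is arbitrary.
fma_client = client(apikey=os.environ["FLYMYAI_APIKEY"])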

Basic Usage

Here's a simple example of how to use the FlyMyAI client:

BERT sentiment analysis

import flymyai

# Run a single synchronous prediction against the BERT sentiment model.
response = flymyai.run(
    apikey="fly-secret-key",
    model="flymyai/bert",
    payload={"text": "What a fabulous fancy building! It looks like a palace!"}
)
# Sentiment logits for the input text.
print(response.output_data["logits"][0])
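
The streaming examples below catch FlyMyAIPredictException on failure. Assuming flymyai.run raises the same exception (an assumption based on those imports, not confirmed here), a guarded call might look like this:

import flymyai
from flymyai import FlyMyAIPredictException

try:
    response = flymyai.run(
        apikey="fly-secret-key",
        model="flymyai/bert",
        payload={"text": "What a fabulous fancy building!"}
    )
    print(response.output_data["logits"][0])
except FlyMyAIPredictException as e:
    # Assumption: run() raises the same exception type as stream().
    print(f"Prediction failed: {e}")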

Sync Streams

For LLMs, use the stream method:

Llama 3.1 8B

from flymyai import client, FlyMyAIPredictException

fma_client = client(apikey="fly-secret-key")

stream_iterator = fma_client.stream(
    payload={
        "prompt": "tell me a story about christmas tree",
        "best_of": 12,
        "max_tokens": 1024,
        "stop": 1,
        "temperature": 1,
        "top_k": 1,
        "top_p": "0.95",
    },
    model="flymyai/llama-v3-1-8b"
)
try:
    for response in stream_iterator:
        # Print each output chunk as it arrives.
        if response.output_data.get("output"):
            print(response.output_data["output"].pop(), end="")
except FlyMyAIPredictException as e:
    print(e)
    raise
finally:
    print()
    # Metadata about the completed stream.
    print(stream_iterator.stream_details)
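
If you want the whole completion as a single string rather than printing chunks, you can accumulate them with the same iterator pattern. A sketch, assuming a fresh stream_iterator created exactly as above:

chunks = []
for response in stream_iterator:
    part = response.output_data.get("output")
    if part:
        chunks.append(part.pop())
full_text = "".join(chunks)
print(full_text)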

Async Streams

For LLMs, use the stream method on the async client:

Stable Code Instruct 3B

import asyncio

from flymyai import async_client, FlyMyAIPredictException


async def run_stable_code():
    fma_client = async_client(apikey="fly-secret-key")
    stream_iterator = fma_client.stream(
        payload={
            "prompt": "What's the difference between an iterator and a generator in Python?",
            "best_of": 12,
            "max_tokens": 512,
            "stop": 1,
            "temperature": 1,
            "top_k": 1,
            "top_p": "0.95",
        },
        model="flymyai/Stable-Code-Instruct-3b"
    )
    try:
        async for response in stream_iterator:
            # Print each output chunk as it arrives.
            if response.output_data.get("output"):
                print(response.output_data["output"].pop(), end="")
    except FlyMyAIPredictException as e:
        print(e)
        raise
    finally:
        print()
        # Metadata about the completed stream.
        print(stream_iterator.stream_details)


asyncio.run(run_stable_code())

File Inputs

You can pass file inputs to models using file paths:

ResNet image classification

import pathlib

import flymyai

response = flymyai.run(
    apikey="fly-secret-key",
    model="flymyai/resnet",
    payload={"image": pathlib.Path("/path/to/image.png")}
)
# "495" is one of the model's output keys (an ImageNet class index).
print(response.output_data["495"])

File Response Handling

Files received from the neural network are always encoded in base64 format. To process these files, you need to decode them first. Here's an example of how to handle an image file:

Stable Diffusion Turbo image generation in ~50ms 🚀

import base64
import flymyai

response = flymyai.run(
    apikey="fly-secret-key",
    model="flymyai/SDTurboFMAAceleratedH100",
    payload={
        "prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic, photorealistic",
    }
)
# The model returns base64-encoded images; decode before writing to disk.
base64_image = response.output_data["sample"][0]
image_data = base64.b64decode(base64_image)
with open("generated_image.jpg", "wb") as file:
    file.write(image_data)
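
Indexing ["sample"][0] above suggests the model can return several images per request. Assuming "sample" is a list of base64 strings, you could save each one:

for i, b64_image in enumerate(response.output_data["sample"]):
    with open(f"generated_image_{i}.jpg", "wb") as file:
        file.write(base64.b64decode(b64_image))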

Asynchronous Requests

FlyMyAI supports asynchronous requests for improved performance. Here's how to use it:

import asyncio
import flymyai


async def main():
    payloads = [
        {
            "prompt": "An astronaut riding a rainbow unicorn, cinematic, dramatic, photorealistic",
            "negative_prompt": "Dark colors, gloomy atmosphere, horror",
            "seed": count,
            "denoising_steps": 4,
            "scheduler": "DPM++ SDE"
         }
        for count in range(1, 10)
    ]
    # asyncio.gather runs the requests concurrently; unlike asyncio.TaskGroup,
    # it also works on Python 3.8, which this client supports.
    results = await asyncio.gather(
        *(
            flymyai.async_run(
                apikey="fly-secret-key",
                model="flymyai/DreamShaperV2-1",
                payload=payload
            )
            for payload in payloads
        )
    )
    for result in results:
        print(result.output_data["output"])


asyncio.run(main())
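
If you need to cap how many requests are in flight at once, a plain asyncio.Semaphore works around async_run. A sketch; the limit of 3 and the run_all helper are arbitrary choices for this example:

import asyncio
import flymyai


async def run_all(payloads):
    semaphore = asyncio.Semaphore(3)  # arbitrary cap on concurrent requests

    async def limited_run(payload):
        # At most 3 calls run at any moment; the rest wait here.
        async with semaphore:
            return await flymyai.async_run(
                apikey="fly-secret-key",
                model="flymyai/DreamShaperV2-1",
                payload=payload
            )

    return await asyncio.gather(*(limited_run(p) for p in payloads))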

Running Models in the Background

To run a model in the background, simply use the async_run() method:

import asyncio
import flymyai
import pathlib


async def background_task():
    payload = {"audio": pathlib.Path("/path/to/audio.mp3")}
    response = await flymyai.async_run(
        apikey="fly-secret-key",
        model="flymyai/whisper",
        payload=payload
    )
    print("Background task completed:", response.output_data["transcription"])


async def main():
    task = asyncio.create_task(background_task())
    # ...continue with other operations here while the model runs...
    await task


asyncio.run(main())

Asynchronous Prediction Tasks

For long-running operations, FlyMyAI provides asynchronous prediction tasks. This allows you to submit a task and check its status later, which is useful for handling time-consuming predictions without blocking your application.

Using Synchronous Client

from flymyai import client
from flymyai.core.exceptions import (
    RetryTimeoutExceededException,
    FlyMyAIExceptionGroup,
)

# Initialize client
fma_client = client(apikey="fly-secret-key")

# Submit async prediction task
prediction_task = fma_client.predict_async_task(
    model="flymyai/flux-schnell",
    payload={"prompt": "Funny Cat with Stupid Dog"}
)

try:
    # Get result
    result = prediction_task.result()

    print(f"Prediction completed: {result.inference_responses}")
except RetryTimeoutExceededException:
    print("Prediction is taking longer than expected")
except FlyMyAIExceptionGroup as e:
    print(f"Prediction failed: {e}")

Using Asynchronous Client

import asyncio
from flymyai import async_client
from flymyai.core.exceptions import (
    RetryTimeoutExceededException,
    FlyMyAIExceptionGroup,
)

async def run_prediction():
    # Initialize async client
    fma_client = async_client(apikey="fly-secret-key")
    
    # Submit async prediction task
    prediction_task = await fma_client.predict_async_task(
        model="flymyai/flux-schnell",
        payload={"prompt": "Funny Cat with Stupid Dog"}
    )
    
    try:
        # Await result with default timeout
        result = await prediction_task.result()
        print(f"Prediction completed: {result.inference_responses}")
        
        # Check response status
        all_successful = all(
            resp.infer_details["status"] == 200 
            for resp in result.inference_responses
        )
        print(f"All predictions successful: {all_successful}")
        
    except RetryTimeoutExceededException:
        print("Prediction is taking longer than expected")
    except FlyMyAIExceptionGroup as e:
        print(f"Prediction failed: {e}")

# Run async function
asyncio.run(run_prediction())
