
Nitrous Oxide for your AI Infrastructure


Website | Docs | Discord

โšก๏ธ What is NOS?

NOS (torch-nos) is a fast and flexible PyTorch inference server, specifically designed for optimizing and running inference for popular foundational AI models.

  • 👩‍💻 Easy-to-use: Built for PyTorch and designed to optimize, serve and auto-scale PyTorch models in production without compromising on developer experience.
  • 🥷 Flexible: Run and serve several foundational AI models (Stable Diffusion, CLIP, Whisper) in a single place.
  • 🔌 Pluggable: Plug your front-end into NOS with out-of-the-box high-performance gRPC/REST APIs, avoiding all kinds of ML model deployment hassles.
  • 🚀 Scalable: Optimize and scale models easily for maximum HW performance without a PhD in ML, distributed systems or infrastructure.
  • 📦 Extensible: Easily hack and add custom models, optimizations, and HW-support in a Python-first environment.
  • ⚙️ HW-accelerated: Take full advantage of your underlying HW (GPUs, ASICs) without compromise.
  • ☁️ Cloud-agnostic: Run on any cloud HW (AWS, GCP, Azure, Lambda Labs, On-Prem) with our ready-to-use inference server containers.

NOS inherits its name from Nitrous Oxide System, the performance-enhancing system typically used in racing cars. NOS is designed to be modular and easy to extend.

🚀 Getting Started

Get started with the full NOS server by installing via pip:

$ conda create -n nos-py38 python=3.8
$ conda activate nos-py38
$ conda install "pytorch>=2.0.1" torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
$ pip install "torch-nos[server]"

If you simply want to use a lightweight NOS client and run inference on your local machine (via Docker), you can install the client-only package:

$ conda create -n nos-py38 python=3.8
$ conda activate nos-py38
$ pip install torch-nos

For a more detailed walkthrough, see our quickstart docs.

🔥 Quickstart / Show me the code

โšก๏ธ Start the GPU server

The quickest way to get started is to start the GPU server. The --http flag optionally starts an HTTP gateway server so that you can run the REST API examples below. For the best out-of-the-box performance, we recommend the gRPC client API.

nos serve up --http

This command pulls and starts the latest GPU Docker server with all the NOS goodies, without requiring any manual setup. You'll see a stream of debug logs on the console; wait until you see Uvicorn running on http://0.0.0.0:8000 before continuing to the next section. To follow the remaining examples, open a new terminal (leaving the server running in the background).
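
Once the server is up, you can sanity-check connectivity from Python before running the examples. A minimal sketch using the same Client API as the snippets below; the IsHealthy() call is an assumption about the client's health-check method, so treat it as illustrative:

from nos.client import Client

# Connect to the gRPC endpoint exposed by `nos serve up`.
client = Client("[::]:50051")

# Hypothetical health check -- confirms the server is reachable before
# issuing inference requests.
assert client.IsHealthy(), "NOS server is not reachable on [::]:50051"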

๐Ÿž๏ธ Image Generation (Stable-Diffusion-as-a-Service)

gRPC API:

from nos.client import Client

client = Client("[::]:50051")

sdxl = client.Module("stabilityai/stable-diffusion-xl-base-1-0")
image, = sdxl(prompts=["fox jumped over the moon"],
              width=1024, height=1024, num_images=1)

REST API:

curl \
-X POST http://localhost:8000/v1/infer \
-H 'Content-Type: application/json' \
-d '{
    "model_id": "stabilityai/stable-diffusion-xl-base-1-0",
    "inputs": {
        "prompts": ["fox jumped over the moon"],
        "width": 1024,
        "height": 1024,
        "num_images": 1
    }
}'
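
As a follow-up to the gRPC snippet, the unpacked image can be persisted directly. A one-liner sketch, assuming the returned object is a PIL.Image (consistent with the PIL inputs used in the object-detection example below):

# Save the generated image to disk (assumes a PIL.Image return type).
image.save("fox.png")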

🧠 Text & Image Embedding (CLIP-as-a-Service)

gRPC API:

from nos.client import Client

client = Client("[::]:50051")

clip = client.Module("openai/clip-vit-base-patch32")
txt_vec = clip.encode_text(texts=["fox jumped over the moon"])

REST API:

curl \
-X POST http://localhost:8000/v1/infer \
-H 'Content-Type: application/json' \
-d '{
    "model_id": "openai/clip-vit-base-patch32",
    "method": "encode_text",
    "inputs": {
        "texts": ["fox jumped over the moon"]
    }
}'
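
Embeddings are usually consumed by comparing them. A short sketch that ranks two prompts by cosine similarity, assuming encode_text returns a numpy-compatible (batch, dim) array (an assumption; the return type isn't documented here):

import numpy as np
from nos.client import Client

client = Client("[::]:50051")
clip = client.Module("openai/clip-vit-base-patch32")

a = clip.encode_text(texts=["fox jumped over the moon"])
b = clip.encode_text(texts=["a fox leaping at night"])

# Flatten to 1-D vectors and compute cosine similarity.
a = np.asarray(a).reshape(-1)
b = np.asarray(b).reshape(-1)
sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {sim:.3f}")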

๐ŸŽ™๏ธ Audio Transcription (Whisper-as-a-Service)

gRPC API:

from pathlib import Path
from nos.client import Client

client = Client("[::]:50051")

model = client.Module("openai/whisper-large-v2")
with client.UploadFile(Path("audio.wav")) as remote_path:
    response = model(path=remote_path)
# {"chunks": ...}

REST API:

curl \
-X POST http://localhost:8000/v1/infer/file \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'model_id=openai/whisper-large-v2' \
-F 'file=@audio.wav'
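
Continuing from the gRPC snippet, the response carries the transcription under "chunks". A sketch of consuming it, where the per-chunk "timestamp" and "text" keys are assumptions; inspect the response for the exact schema:

# Hypothetical chunk schema -- print each transcribed segment.
for chunk in response["chunks"]:
    print(chunk.get("timestamp"), chunk.get("text"))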

๐Ÿง Object Detection (YOLOX-as-a-Service)

gRPC API:

from PIL import Image
from nos.client import Client

client = Client("[::]:50051")

model = client.Module("yolox/medium")
response = model(images=[Image.open("image.jpg")])
# {"bboxes": ..., "scores": ..., "labels": ...}

REST API:

curl \
-X POST http://localhost:8000/v1/infer/file \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'model_id=yolox/medium' \
-F 'file=@image.jpg'
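
To visualize the detections, continuing from the gRPC snippet, you can draw the returned boxes back onto the input image. A minimal sketch, assuming response["bboxes"][0] holds the first image's boxes as (x1, y1, x2, y2) pixel coordinates (the layout is an assumption):

from PIL import Image, ImageDraw

img = Image.open("image.jpg")
draw = ImageDraw.Draw(img)
# Boxes and scores for the first (and only) input image -- the
# (x1, y1, x2, y2) pixel layout is an assumption; inspect the response to confirm.
for (x1, y1, x2, y2), score in zip(response["bboxes"][0], response["scores"][0]):
    draw.rectangle((x1, y1, x2, y2), outline="red", width=2)
    draw.text((x1, y1), f"{score:.2f}", fill="red")
img.save("detections.jpg")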

๐Ÿ—‚๏ธ Directory Structure

├── docker         # Dockerfile for CPU/GPU servers
├── docs           # mkdocs documentation
├── examples       # example guides, jupyter notebooks, demos
├── makefiles      # makefiles for building/testing
├── nos
│   ├── cli        # CLI (hub, system)
│   ├── client     # gRPC / REST client
│   ├── common     # common utilities
│   ├── executors  # runtime executor (i.e. Ray)
│   ├── hub        # hub utilities
│   ├── managers   # model manager / multiplexer
│   ├── models     # model zoo
│   ├── proto      # protobuf defs for NOS gRPC service
│   ├── server     # server backend (gRPC)
│   └── test       # pytest utilities
├── requirements   # requirement extras (server, docs, tests)
├── scripts        # basic scripts
└── tests          # pytests (client, server, benchmark)

📚 Documentation

See our docs for the full API reference, guides, and examples.

🛣 Roadmap

HW / Cloud Support

  • Commodity GPUs
    • NVIDIA GPUs (20XX, 30XX, 40XX)
    • AMD GPUs (RX 7000)
  • Cloud GPUs
    • NVIDIA (H100, A100, A10G, A30G, T4, L4)
    • AMD (MI200, MI250)
  • Cloud Service Providers (via SkyPilot)
    • AWS, GCP, Azure
    • Opinionated Cloud: Lambda Labs, RunPod, etc.
  • Cloud ASICs

📄 License

This project is licensed under the Apache-2.0 License.

📡 Telemetry

NOS collects anonymous usage data using Sentry. This helps us understand how the community uses NOS and prioritize features. You can opt out of telemetry by setting NOS_TELEMETRY_ENABLED=0.

๐Ÿค Contributing

We welcome contributions! Please see our contributing guide for more information.



