
Nitro Boost for your AI Infrastructure

Website | Docs | Tutorials | Playground | Blog | Discord


NOS is a fast and flexible PyTorch inference server that runs on any cloud or AI HW.

🛠️ Key Features

  • 👩‍💻 Easy-to-use: Built for PyTorch and designed to optimize, serve, and auto-scale PyTorch models in production without compromising on developer experience.
  • 🥷 Multi-modal & Multi-model: Serve multiple foundational AI models (LLMs, Diffusion, Embeddings, Speech-to-Text and Object Detection) simultaneously, in a single server.
  • ⚙️ HW-aware Runtime: Deploy PyTorch models effortlessly on modern AI accelerators (NVIDIA GPUs, AWS Inferentia2, AMD - coming soon, and even CPUs).
  • ☁️ Cloud-agnostic Containers: Run on any cloud (AWS, GCP, Azure, Lambda Labs, On-Prem) with our ready-to-use inference server containers.

🚀 Quickstart

We highly recommend starting with our quickstart guide. To install the NOS client, run the following commands:

conda create -n nos python=3.8 -y
conda activate nos
pip install torch-nos
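
To verify the installation, you can check that the nos package imports cleanly (printing the version assumes the package exposes a __version__ attribute):

python -c "import nos; print(nos.__version__)"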

Once the client is installed, you can start the NOS server via the nos serve CLI. This will automatically detect your local environment, download the Docker runtime image, and spin up the NOS server:

nos serve up --http --logging-level INFO

You are now ready to run your first inference request with NOS! Try any of the commands below to get started; set the logging level to DEBUG if you want more detailed output from the server.
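
For reference, each of the Python examples below talks to the server over gRPC at the same default address:

from nos.client import Client

# Connect to the locally running NOS server (default gRPC port 50051)
client = Client("[::]:50051")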

👩‍💻 What can NOS do?

💬 Chat / LLM Agents (ChatGPT-as-a-Service)


NOS provides an OpenAI-compatible server with streaming support, so you can point your favorite OpenAI-compatible LLM client at NOS.


API / Usage

gRPC API ⚡

from nos.client import Client

client = Client("[::]:50051")

model = client.Module("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
response = model.chat(message="Tell me a story of 1000 words with emojis", _stream=True)
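
With _stream=True, the response is returned incrementally; a minimal sketch for printing it as it arrives, assuming the streamed response is an iterator of text chunks:

for chunk in response:
    print(chunk, end="", flush=True)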

REST API

curl \
-X POST http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
    "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    "messages": [{
        "role": "user",
        "content": "Tell me a story of 1000 words with emojis"
    }],
    "temperature": 0.7,
    "stream": true
  }'
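
Because the endpoint above is OpenAI-compatible, you should also be able to use the official openai Python client against it. A minimal sketch, assuming the server is reachable at http://localhost:8000/v1 and needs no real API key (the client library requires a placeholder value):

from openai import OpenAI

oai_client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Stream a chat completion from the local NOS server
stream = oai_client.chat.completions.create(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    messages=[{"role": "user", "content": "Tell me a story of 1000 words with emojis"}],
    temperature=0.7,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)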

🏞️ Image Generation (Stable-Diffusion-as-a-Service)


Build MidJourney discord bots in seconds.


API / Usage

gRPC API ⚡

from nos.client import Client

client = Client("[::]:50051")

sdxl = client.Module("stabilityai/stable-diffusion-xl-base-1-0")
image, = sdxl(prompts=["hippo with glasses in a library, cartoon styling"],
              width=1024, height=1024, num_images=1)
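
The unpacking above suggests the call returns a sequence of images; assuming each one is a PIL Image object, you can save it directly:

image.save("hippo.png")  # `image` is assumed to be a PIL.Image.Image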

REST API

curl \
-X POST http://localhost:8000/v1/infer \
-H 'Content-Type: application/json' \
-d '{
    "model_id": "stabilityai/stable-diffusion-xl-base-1-0",
    "inputs": {
        "prompts": ["hippo with glasses in a library, cartoon styling"],
        "width": 1024, "height": 1024,
        "num_images": 1
    }
}'

🧠 Text & Image Embedding (CLIP-as-a-Service)


Build scalable semantic search of images/videos in minutes.


API / Usage

gRPC API ⚡

from nos.client import Client

client = Client("[::]:50051")

clip = client.Module("openai/clip-vit-base-patch32")
txt_vec = clip.encode_text(texts=["fox jumped over the moon"])
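
As a sketch of how the embeddings might be used, assuming encode_text returns one NumPy-compatible vector per input text, you can rank candidate captions against a query by cosine similarity:

import numpy as np

query = clip.encode_text(texts=["a fox jumping"])
candidates = clip.encode_text(texts=["fox jumped over the moon", "a bowl of soup"])

# L2-normalize and compare with a dot product (cosine similarity)
q = np.ravel(np.asarray(query))
q = q / np.linalg.norm(q)
c = np.asarray(candidates)
c = c / np.linalg.norm(c, axis=-1, keepdims=True)
print(c @ q)  # higher score = closer match to the query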

REST API

curl \
-X POST http://localhost:8000/v1/infer \
-H 'Content-Type: application/json' \
-d '{
    "model_id": "openai/clip-vit-base-patch32",
    "method": "encode_text",
    "inputs": {
        "texts": ["fox jumped over the moon"]
    }
}'

🎙️ Audio Transcription (Whisper-as-a-Service)


Perform real-time audio transcription using Whisper.


API / Usage

gRPC API ⚡

from pathlib import Path
from nos.client import Client

client = Client("[::]:50051")

model = client.Module("openai/whisper-small.en")
with client.UploadFile(Path("audio.wav")) as remote_path:
  response = model(path=remote_path)
# {"chunks": ...}

REST API

curl \
-X POST http://localhost:8000/v1/infer/file \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'model_id=openai/whisper-small.en' \
-F 'file=@audio.wav'

🧐 Object Detection (YOLOX-as-a-Service)


Run classical computer-vision tasks in 2 lines of code.


API / Usage

gRPC API ⚡

from PIL import Image

from nos.client import Client

client = Client("[::]:50051")

model = client.Module("yolox/medium")
# response contains the detections for each input image
response = model(images=[Image.open("image.jpg")])

REST API

curl \
-X POST http://localhost:8000/v1/infer/file \
-H 'accept: application/json' \
-H 'Content-Type: multipart/form-data' \
-F 'model_id=yolox/medium' \
-F 'file=@image.jpg'

⚒️ Custom models


Want to run models not supported by NOS? You can easily add your own models by following the examples in the NOS Playground.

📄 License

This project is licensed under the Apache-2.0 License.

📡 Telemetry

NOS collects anonymous usage data using Sentry. This is used to help us understand how the community is using NOS and to help us prioritize features. You can opt-out of telemetry by setting NOS_TELEMETRY_ENABLED=0.
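
For example, in your shell:

export NOS_TELEMETRY_ENABLED=0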

🤝 Contributing

We welcome contributions! Please see our contributing guide for more information.


