👋 hello
Roboflow Inference is an opinionated tool for running inference on state-of-the-art computer vision models. With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments. Inference supports running object detection, classification, and instance segmentation models, as well as foundation models (CLIP and SAM).
🎥 Inference in action
Check out Inference running on a video of a football game:
https://github.com/roboflow/inference/assets/37276661/121ab5f4-5970-4e78-8052-4b40f2eec173
👩🏫 Examples
The /examples directory contains example code for working with and extending inference, including HTTP and UDP client code and an insights dashboard, along with community examples (PRs welcome)!
💻 Why Inference?
Inference provides a scalable way to run and manage model predictions for your vision projects.
Inference is backed by:
- A server, so you don't have to reimplement things like image processing and prediction visualization on every project.
- Standardized APIs for computer vision tasks, so switching out the model weights and architecture can be done independently of your application code (see the sketch after this list).
- Model architecture implementations, which implement the tensor parsing glue between images and predictions for supervised models that you've fine-tuned to perform custom tasks.
- A model registry, so your code can be independent from your model weights & you don't have to re-build and re-deploy every time you want to iterate on your model weights.
- Data management integrations, so you can collect more images of edge cases to improve your dataset & model the more it sees in the wild.
And more!
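Because model loading goes through a single registry call, swapping weights is just a change of model ID and your application code stays untouched. A minimal sketch (the model IDs below are placeholders; substitute your own project and version):

from inference.models.utils import get_roboflow_model

# Swap "my-project/1" for "my-project/2" when you retrain; the calling code stays the same.
# Replace ROBOFLOW_API_KEY with your Roboflow API Key and the model ID with your own.
model = get_roboflow_model(model_id="my-project/1", api_key="ROBOFLOW_API_KEY")
results = model.infer(
    image="https://source.roboflow.com/pwYAXv9BTpqLyFfgQoPZ/u48G0UpWfk8giSw7wrU8/original.jpg",
    confidence=0.5,
)
print(results)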
📌 Install pip vs Docker:
- pip: Installs inference into your Python environment. Lightweight, good for Python-centric projects.
- Docker: Packages inference with its environment. Ensures consistency across setups; ideal for scalable deployments.
💻 install
With ONNX CPU Runtime:
For CPU-powered inference:
pip install inference
or
pip install inference-cpu
With ONNX GPU Runtime:
If you have an NVIDIA GPU, you can accelerate your inference with:
pip install inference-gpu
Without ONNX Runtime:
Roboflow Inference uses ONNX Runtime as its core inference engine. ONNX Runtime provides an array of execution providers that can optimize inference on different target devices. If you decide to install onnxruntime on your own, install inference with:
pip install inference-core
Alternatively, you can take advantage of some advanced execution providers using one of our published docker images.
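If you manage onnxruntime yourself, it can be useful to confirm which execution providers your build exposes before running inference. A minimal sketch using the standard onnxruntime API:

# List the execution providers available in the installed onnxruntime build.
import onnxruntime

print(onnxruntime.get_available_providers())
# e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']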
Extras:
Some functionality requires extra dependencies. These can be installed by specifying the desired extras during installation of Roboflow Inference.
| extra | description |
|---|---|
| clip | Ability to use the core CLIP model (by OpenAI) |
| gaze | Ability to use the core Gaze model |
| http | Ability to run the HTTP interface |
| sam | Ability to run the core Segment Anything model (by Meta AI) |
Note: Both CLIP and Segment Anything require PyTorch to run. PyTorch is included in their respective extras; however, PyTorch installs can be highly environment dependent. See the official PyTorch install page for instructions specific to your environment.
Example install with http dependencies:
pip install inference[http]
🐋 docker
You can learn more about how to build, pull, and run the Roboflow Inference Docker images in our documentation.
- Run on x86 CPU:
docker run --net=host roboflow/roboflow-inference-server-cpu:latest
- Run on NVIDIA GPU:
docker run --network=host --gpus=all roboflow/roboflow-inference-server-gpu:latest
👉 more docker run options
- Run on arm64 CPU:
docker run -p 9001:9001 roboflow/roboflow-inference-server-arm-cpu:latest
- Run on NVIDIA GPU with TensorRT Runtime:
docker run --network=host --gpus=all roboflow/roboflow-inference-server-trt:latest
- Run on NVIDIA Jetson with JetPack 4.x:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson:latest
- Run on NVIDIA Jetson with JetPack 5.x:
docker run --privileged --net=host --runtime=nvidia roboflow/roboflow-inference-server-jetson-5.1.1:latest
🔥 quickstart
Docker Quickstart:
import requests
dataset_id = "soccer-players-5fuqs"
version_id = "1"
image_url = "https://source.roboflow.com/pwYAXv9BTpqLyFfgQoPZ/u48G0UpWfk8giSw7wrU8/original.jpg"
#Replace ROBOFLOW_API_KEY with your Roboflow API Key
api_key = "ROBOFLOW_API_KEY"
confidence = 0.5
url = f"http://localhost:9001/{dataset_id}/{version_id}"
params = {
"api_key": api_key,
"confidence": confidence,
"image": image_url,
}
res = requests.post(url, params=params)
print(res.json())
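For an object detection model, the response body contains a predictions list alongside image metadata. Continuing from the snippet above, a minimal sketch of reading it (assuming the standard Roboflow detection response fields) might look like:

# Each prediction holds a box center, box size, class label, and confidence score.
for prediction in res.json()["predictions"]:
    print(prediction["class"], prediction["confidence"], prediction["x"], prediction["y"], prediction["width"], prediction["height"])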
pip Quickstart:
After installing via pip, you can run a simple inference using:
from inference.models.utils import get_roboflow_model
model = get_roboflow_model(
model_id="soccer-players-5fuqs/1",
#Replace ROBOFLOW_API_KEY with your Roboflow API Key
api_key="ROBOFLOW_API_KEY"
)
results = model.infer(image="https://source.roboflow.com/pwYAXv9BTpqLyFfgQoPZ/u48G0UpWfk8giSw7wrU8/original.jpg", confidence=0.5, iou_threshold=0.5)
print(results)
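Besides URLs, model.infer accepts other common image formats, such as local file paths and numpy arrays; exact support may vary by version, so treat this as a sketch (the file path below is a placeholder):

import cv2

# Load a local image as a numpy array and run the same model on it.
image = cv2.imread("path/to/image.jpg")  # placeholder path
results = model.infer(image=image, confidence=0.5, iou_threshold=0.5)
print(results)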
CLIP Quickstart:
You can run inference with OpenAI's CLIP model using:
from inference.models import Clip
model = Clip(
#Replace ROBOFLOW_API_KEY with your Roboflow API Key
api_key = "ROBOFLOW_API_KEY"
)
image_url = "https://source.roboflow.com/7fLqS2r1SV8mm0YzyI0c/yy6hjtPUFFkq4yAvhkvs/original.jpg"
embeddings = model.embed_image(image_url)
print(embeddings)
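CLIP can also embed text into the same vector space, which lets you compare an image against a prompt. A minimal sketch, assuming an embed_text method that mirrors embed_image and that both return array-like embeddings:

import numpy as np

# Embed a text prompt and compare it to the image embedding with cosine similarity.
text_embedding = np.array(model.embed_text("a soccer player")).flatten()
image_embedding = np.array(embeddings).flatten()
similarity = np.dot(image_embedding, text_embedding) / (
    np.linalg.norm(image_embedding) * np.linalg.norm(text_embedding)
)
print(similarity)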
SAM Quickstart:
You can run inference with Meta's Segment Anything model using:
from inference.models import SegmentAnything
model = SegmentAnything(
#Replace ROBOFLOW_API_KEY with your Roboflow API Key
api_key = "ROBOFLOW_API_KEY"
)
image_url = "https://source.roboflow.com/7fLqS2r1SV8mm0YzyI0c/yy6hjtPUFFkq4yAvhkvs/original.jpg"
embeddings = model.embed_image(image_url)
print(embeddings)
🏗️ inference process
To standardize the inference process across all of our models, Roboflow Inference has a structure for processing inference requests. The specifics can be found on each model's respective page, but for most models the flow is the same: the request image is preprocessed, the model produces raw predictions, and the predictions are post-processed into a standardized response.
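A sketch of that staged flow, using hypothetical method names purely to illustrate the shape of the pipeline rather than the exact internal API:

# Illustrative only: hypothetical stage names showing how a request moves through a model.
def run_inference(model, image):
    preprocessed, metadata = model.preprocess(image)          # resize, normalize
    raw_predictions = model.predict(preprocessed)             # forward pass through the weights
    response = model.postprocess(raw_predictions, metadata)   # NMS, rescaling, response formatting
    return response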
📝 license
The Roboflow Inference code is distributed under an Apache 2.0 license. The models supported by Roboflow Inference have their own licenses. View the licenses for supported models below.
| model | license |
|---|---|
| inference/models/clip | MIT |
| inference/models/gaze | MIT, Apache 2.0 |
| inference/models/sam | Apache 2.0 |
| inference/models/vit | Apache 2.0 |
| inference/models/yolact | MIT |
| inference/models/yolov5 | AGPL-3.0 |
| inference/models/yolov7 | GPL-3.0 |
| inference/models/yolov8 | AGPL-3.0 |
🚀 enterprise
With a Roboflow Inference Enterprise License, you can access additional Inference features, including:
- Server cluster deployment
- Device management
- Active learning
- YOLOv5 and YOLOv8 model sub-license
To learn more, contact the Roboflow team.
📚 documentation
Visit our documentation for usage examples and reference for Roboflow Inference.
🏆 contribution
We would love your input to improve Roboflow Inference! Please see our contributing guide to get started. Thank you to all of our contributors! 🙏
💻 explore more Roboflow open source projects
| Project | Description |
|---|---|
| supervision | General-purpose utilities for use in computer vision projects, from predictions filtering and display to object tracking to model evaluation. |
| Autodistill | Automatically label images for use in training computer vision models. |
| Inference (this project) | An easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models. |
| Notebooks | Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone. |
| Collect | Automated, intelligent data collection powered by CLIP. |