Roboflow Inference CLI
Roboflow Inference is an opinionated tool for running inference on state-of-the-art computer vision models. With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments. Inference supports object detection, classification, and instance segmentation models, as well as foundation models (CLIP and SAM).
🎥 Inference in action
Check out Inference running on a video of a football game:
https://github.com/roboflow/inference/assets/37276661/121ab5f4-5970-4e78-8052-4b40f2eec173
👩🏫 Examples
The /examples directory contains example code for working with and extending inference, including HTTP and UDP client code and an insights dashboard, along with community examples (PRs welcome)!
inference serve
inference serve is the main command for starting a local inference server. It takes a port number and starts the Docker container only if there is not already a container running on that port.
inference serve --port 9001
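The "only start if the port is free" behavior described above amounts to a simple port check before launching the container. A minimal sketch of such a check in Python follows; `port_in_use` is a hypothetical helper for illustration, not part of the CLI:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        # connect_ex returns 0 when the connection succeeds, i.e. a
        # server (such as a running inference container) is listening.
        return sock.connect_ex((host, port)) == 0

if __name__ == "__main__":
    if port_in_use(9001):
        print("Port 9001 is busy: a container may already be running")
    else:
        print("Port 9001 is free: safe to start the inference server")
```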
inference infer
inference infer is the main command for running inference on a single image. It takes a path to an image, a Roboflow project name, model version, and API key, and returns a JSON object with the model's predictions. You can also specify a host to run inference on Roboflow's hosted inference server.
Local image
inference infer --image ./image.jpg --project_id my-project --model-version 1 --api-key my-api-key
Hosted image
inference infer --image https://[your-hosted-image-url] --project_id my-project --model-version 1 --api-key my-api-key
Hosted inference
inference infer --image ./image.jpg --project_id my-project --model-version 1 --api-key my-api-key --host https://infer.roboflow.com
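The JSON that `inference infer` returns can be post-processed like any other prediction payload. The sketch below assumes a detection-style response with a `predictions` list whose entries carry `class` and `confidence` fields; the payload and field names here are illustrative, so check your model's actual output:

```python
import json

# Illustrative payload in the shape of a detection response;
# the real field names come from your model's actual output.
raw = json.dumps({
    "predictions": [
        {"class": "player", "confidence": 0.91, "x": 100, "y": 50, "width": 40, "height": 80},
        {"class": "ball", "confidence": 0.42, "x": 300, "y": 200, "width": 12, "height": 12},
    ]
})

def confident_predictions(payload: str, threshold: float = 0.5):
    """Keep only predictions at or above the confidence threshold."""
    data = json.loads(payload)
    return [p for p in data.get("predictions", []) if p["confidence"] >= threshold]

for pred in confident_predictions(raw):
    print(pred["class"], pred["confidence"])
```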
💻 Why Inference?
Inference provides a scalable way to run and manage model inference across your vision projects.
Inference is backed by:
- A server, so you don’t have to reimplement things like image processing and prediction visualization on every project.
- Standardized APIs for computer vision tasks, so switching out the model weights and architecture can be done independently of your application code.
- Model architecture implementations, which implement the tensor parsing glue between images and predictions for supervised models that you've fine-tuned to perform custom tasks.
- A model registry, so your code can be independent from your model weights & you don't have to re-build and re-deploy every time you want to iterate on your model weights.
- Data management integrations, so you can collect more images of edge cases to improve your dataset & model the more it sees in the wild.
And more!
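The model-registry idea above, where application code looks up a model by ID instead of hard-coding weights, can be sketched as follows. The registry, loaders, and model objects here are hypothetical illustrations of the pattern, not the Inference API:

```python
from typing import Callable, Dict

# Hypothetical registry: maps a model ID to a loader function, so
# application code depends only on the ID, not on a weights file.
MODEL_REGISTRY: Dict[str, Callable[[], Callable[[str], str]]] = {}

def register(model_id: str):
    """Decorator that records a model loader under a stable ID."""
    def decorator(loader):
        MODEL_REGISTRY[model_id] = loader
        return loader
    return decorator

@register("my-project/1")
def load_v1():
    # Stand-in for loading real weights; returns a "model" callable.
    return lambda image_path: f"v1 prediction for {image_path}"

@register("my-project/2")
def load_v2():
    return lambda image_path: f"v2 prediction for {image_path}"

def infer(model_id: str, image_path: str) -> str:
    # Iterating on model weights becomes a registry lookup,
    # not a code change and redeploy.
    model = MODEL_REGISTRY[model_id]()
    return model(image_path)

print(infer("my-project/1", "image.jpg"))
print(infer("my-project/2", "image.jpg"))
```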
📝 license
The Roboflow Inference code is distributed under an Apache 2.0 license. The models supported by Roboflow Inference have their own licenses. View the licenses for supported models below.
| model | license |
|---|---|
| inference/models/clip | MIT |
| inference/models/gaze | MIT, Apache 2.0 |
| inference/models/sam | Apache 2.0 |
| inference/models/vit | Apache 2.0 |
| inference/models/yolact | MIT |
| inference/models/yolov5 | AGPL-3.0 |
| inference/models/yolov7 | GPL-3.0 |
| inference/models/yolov8 | AGPL-3.0 |
🚀 enterprise
With a Roboflow Inference Enterprise License, you can access additional Inference features, including:
- Server cluster deployment
- Device management
- Active learning
- YOLOv5 and YOLOv8 model sub-license
To learn more, contact the Roboflow team.
📚 documentation
Visit our documentation for usage examples and an API reference for Roboflow Inference.
🏆 contribution
We would love your input to improve Roboflow Inference! Please see our contributing guide to get started. Thank you to all of our contributors! 🙏
💻 explore more Roboflow open source projects
| Project | Description |
|---|---|
| supervision | General-purpose utilities for use in computer vision projects, from predictions filtering and display to object tracking to model evaluation. |
| Autodistill | Automatically label images for use in training computer vision models. |
| Inference (this project) | An easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models. |
| Notebooks | Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone. |
| Collect | Automated, intelligent data collection powered by CLIP. |