Roboflow Inference CLI

Roboflow Inference CLI offers a lightweight interface for running inference locally through the Roboflow inference server or remotely through the Roboflow Hosted API.

To create custom inference server Docker images, go to the parent package, Roboflow Inference.

Roboflow has everything you need to deploy a computer vision model to a range of devices and environments, with no prior knowledge of machine learning or device-specific deployment required. Inference supports running object detection, classification, and instance segmentation models, as well as foundation models (CLIP and SAM).

👩‍🏫 Examples

inference server start

Starts a local inference server. It optionally takes a port number (default is 9001) and will only start the Docker container if one is not already running on that port.

Before you begin, ensure that you have Docker installed on your machine. Docker provides a containerized environment, allowing the Roboflow Inference Server to run in a consistent and isolated manner, regardless of the host system. If you haven't installed Docker yet, you can get it from Docker's official website.

The CLI will automatically detect the device you are running on and pull the appropriate Docker image.

inference server start --port 9001 [-e {optional_path_to_file_with_env_variables}]
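
Once the command returns, you can confirm the server is reachable with any HTTP client. A minimal sketch, assuming the default port of 9001 (the exact response body depends on your inference server version):

# Sanity check: the inference server should answer HTTP requests on the chosen port.
curl http://localhost:9001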

The optional --env-file (or -e) parameter takes the path to a .env file that will be loaded into the inference server when the values of internal parameters need to be adjusted. Any value passed explicitly as a command parameter takes precedence and will shadow the value defined in the .env file under the same variable name.
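
For example, a minimal sketch of starting the server with a .env file. The variable name below is illustrative (ROBOFLOW_API_KEY is an assumption here); consult the Roboflow Inference documentation for the parameters your server version actually reads:

# Write an illustrative .env file; supported variable names depend on your server version.
cat > inference.env <<'EOF'
ROBOFLOW_API_KEY=my-api-key
EOF

# Start the server with the file loaded into the container environment.
inference server start --port 9001 -e inference.env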

inference server status

Checks the status of the local inference server.

inference server status
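
If you prefer to inspect the container directly, plain Docker tooling offers an equivalent view; a sketch assuming the server was started on the default port 9001:

# List running containers that publish port 9001 (the inference server, if it is up).
docker ps --filter "publish=9001"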

inference server stop

Stops the inference server.

inference server stop

inference infer

Runs inference on a single image. It takes a path or URL to an image, a Roboflow project name, model version, and API key, and returns a JSON object with the model's predictions. You can also specify a host to run inference against the Roboflow hosted inference server.

Local image

inference infer ./image.jpg --project-id my-project --model-version 1 --api-key my-api-key
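
The exact response schema varies by model type and version; purely as an illustration, an object detection result roughly follows the Roboflow prediction format, with placeholder values shown here:

{"predictions": [{"x": 320.5, "y": 240.0, "width": 85.0, "height": 160.0, "confidence": 0.91, "class": "person"}]}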

Hosted image

inference infer https://[YOUR_HOSTED_IMAGE_URL] --project-id my-project --model-version 1 --api-key my-api-key

Hosted API inference

inference infer ./image.jpg --project-id my-project --model-version 1 --api-key my-api-key --host https://detect.roboflow.com
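
The CLI call above is roughly equivalent to posting the image to the Hosted API yourself. A sketch using curl with a base64-encoded image body, following Roboflow's documented request pattern (the model ID and API key are the same placeholders as above):

# Send the image as base64 in the request body to the hosted detection endpoint.
base64 image.jpg | curl -d @- "https://detect.roboflow.com/my-project/1?api_key=my-api-key"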

Supported Devices

Roboflow Inference CLI currently supports the following device targets:

  • x86 CPU
  • ARM64 CPU
  • NVIDIA GPU

For Jetson-specific inference server images, check out the Roboflow Inference package, or pull the images directly by following the instructions in the official Roboflow Inference documentation.

📝 license

The Roboflow Inference code is distributed under an Apache 2.0 license. The models supported by Roboflow Inference have their own licenses. View the licenses for supported models below.

model                    license
inference/models/clip    MIT
inference/models/gaze    MIT, Apache 2.0
inference/models/sam     Apache 2.0
inference/models/vit     Apache 2.0
inference/models/yolact  MIT
inference/models/yolov5  AGPL-3.0
inference/models/yolov7  GPL-3.0
inference/models/yolov8  AGPL-3.0

🚀 enterprise

With a Roboflow Inference Enterprise License, you can access additional Inference features, including:

  • Server cluster deployment
  • Active learning
  • YOLOv5 and YOLOv8 model sub-license

To learn more, contact the Roboflow team.

📚 documentation

Visit our documentation for usage examples and reference for Roboflow Inference.

💻 explore more Roboflow open source projects

  • supervision: General-purpose utilities for use in computer vision projects, from predictions filtering and display to object tracking to model evaluation.
  • Autodistill: Automatically label images for use in training computer vision models.
  • Inference (this project): An easy-to-use, production-ready inference server for computer vision, supporting deployment of many popular model architectures and fine-tuned models.
  • Notebooks: Tutorials for computer vision tasks, from training state-of-the-art models to tracking objects to counting objects in a zone.
  • Collect: Automated, intelligent data collection powered by CLIP.
