
With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.


👋 hello

Roboflow Inference is an open-source platform designed to simplify the deployment of computer vision models. It enables developers to perform object detection, classification, and instance segmentation, and to use foundation models like CLIP, Segment Anything, and YOLO-World, through a Python-native package, a self-hosted inference server, or a fully managed API.

Explore our enterprise options for advanced features like server deployment, active learning, and commercial licenses for YOLOv5 and YOLOv8.

💻 install

The inference package requires Python >=3.8,<=3.11. See the documentation to learn more about running Inference inside Docker.

pip install inference
👉 additional considerations
  • hardware

    Enhance model performance in GPU-accelerated environments by installing CUDA-compatible dependencies.

    pip install inference-gpu
    
  • models

    The inference and inference-gpu packages install only the minimal shared dependencies. Install model-specific dependencies to ensure code compatibility and license compliance. Learn more about the models supported by Inference.

    pip install inference[yolo-world]
    

🔥 quickstart

Use the Inference SDK to run models locally with just a few lines of code. The image input can be a URL, a numpy array (BGR), or a PIL image.

from inference import get_model

model = get_model(model_id="yolov8n-640")

results = model.infer("https://media.roboflow.com/inference/people-walking.jpg")
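
The call returns a list with one response object per input image. A minimal sketch for inspecting the output, assuming the object-detection response exposes a predictions list whose items carry class_name and confidence fields:

# print the label and score of every detected object
for prediction in results[0].predictions:
    print(prediction.class_name, prediction.confidence)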
👉 roboflow models

Set up your ROBOFLOW_API_KEY to access thousands of fine-tuned models shared by the Roboflow Universe community, as well as your own custom models. See the 🔑 keys section to learn more.

from inference import get_model

model = get_model(model_id="soccer-players-5fuqs/1")

results = model.infer(
    image="https://media.roboflow.com/inference/soccer.jpg",
    confidence=0.5,
    iou_threshold=0.5
)
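
If you prefer not to rely on the environment variable, get_model also accepts the key directly; a sketch with a placeholder key for you to supply:

model = get_model(model_id="soccer-players-5fuqs/1", api_key="<ROBOFLOW_API_KEY>")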
👉 foundational models
  • CLIP Embeddings - generate text and image embeddings that you can use for zero-shot classification or for assessing image similarity (a similarity sketch follows this list).

    from inference.models import Clip
    
    model = Clip()
    
    # embed the text prompt and the image into a shared vector space
    embeddings_text = model.embed_text("a football match")
    embeddings_image = model.embed_image("https://media.roboflow.com/inference/soccer.jpg")
    
  • Segment Anything - segment all objects visible in the image or only those associated with selected points or boxes.

    from inference.models import SegmentAnything
    
    model = SegmentAnything()
    
    result = model.segment_image("https://media.roboflow.com/inference/soccer.jpg")
    
  • YOLO-World - a near real-time, zero-shot detector that can detect arbitrary objects without any training.

    from inference.models import YOLOWorld
    
    model = YOLOWorld(model_id="yolo_world/l")
    
    result = model.infer(
        image="https://media.roboflow.com/inference/dog.jpeg",
        text=["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
        confidence=0.03
    )
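
As referenced in the CLIP bullet above, here is a minimal sketch of turning the two embeddings into a similarity score (assuming both calls return array-like vectors; numpy is used only for the arithmetic):

import numpy as np

# cosine similarity between the text and image embeddings
text_vec = np.asarray(embeddings_text).flatten()
image_vec = np.asarray(embeddings_image).flatten()
similarity = text_vec @ image_vec / (np.linalg.norm(text_vec) * np.linalg.norm(image_vec))
print(f"text-image similarity: {similarity:.3f}")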
    

📟 inference server

  • deploy server

    The inference server is distributed via Docker. Behind the scenes, inference will download and run the image appropriate for your hardware. See the documentation to learn more about the supported images.

    inference server start
    
  • run client

    Consume inference server predictions using the HTTP client available in the Inference SDK.

    from inference_sdk import InferenceHTTPClient
    
    client = InferenceHTTPClient(
        api_url="http://localhost:9001",
        api_key="<ROBOFLOW_API_KEY>"
    )
    with client.use_model(model_id="soccer-players-5fuqs/1"):
        predictions = client.infer("https://media.roboflow.com/inference/soccer.jpg")
    

    If you're using the hosted API, change the local API URL to https://detect.roboflow.com, as shown in the sketch below. Accessing the hosted inference server and/or using any of the fine-tuned models requires a ROBOFLOW_API_KEY. For further information, visit the 🔑 keys section.
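
    A client pointed at the hosted API differs only in the URL; a minimal sketch, with a placeholder key for you to supply:

    from inference_sdk import InferenceHTTPClient
    
    client = InferenceHTTPClient(
        api_url="https://detect.roboflow.com",
        api_key="<ROBOFLOW_API_KEY>"
    )
    predictions = client.infer(
        "https://media.roboflow.com/inference/soccer.jpg",
        model_id="soccer-players-5fuqs/1"
    )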

🎥 inference pipeline

The inference pipeline is an efficient way to process video files and streams. Select a model, define the video source, and set a callback action. You can choose from predefined callbacks that display results on the screen or save them to a file; a custom-callback sketch follows the example below.

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="yolov8x-1280",
    video_reference="https://media.roboflow.com/inference/people-walking.mp4",
    on_prediction=render_boxes
)

pipeline.start()
pipeline.join()
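
You can also pass your own callback in place of a predefined sink. A minimal sketch, assuming the callback receives the prediction dictionary and a video frame object exposing a frame_id attribute (the names below follow that assumption):

def my_sink(predictions, video_frame):
    # called once per processed frame; report how many objects were found
    print(f"frame {video_frame.frame_id}: {len(predictions['predictions'])} detections")

pipeline = InferencePipeline.init(
    model_id="yolov8x-1280",
    video_reference="https://media.roboflow.com/inference/people-walking.mp4",
    on_prediction=my_sink
)
pipeline.start()
pipeline.join()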

🔑 keys

Inference enables the deployment of a wide range of pre-trained and foundational models without an API key. To access thousands of fine-tuned models shared by the Roboflow Universe community, configure your API key.

export ROBOFLOW_API_KEY=<YOUR_API_KEY>
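
The same key can also be set from inside a Python session, before any model is loaded (a standard-library sketch):

import os

# must be set before get_model or the HTTP client reads it
os.environ["ROBOFLOW_API_KEY"] = "<YOUR_API_KEY>"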

📚 documentation

Visit our documentation to explore comprehensive guides, detailed API references, and a wide array of tutorials designed to help you harness the full potential of the Inference package.

⚡️ Model-specific extras

Explore the list of inference extras to install model-specific dependencies.

© license

See the "Self Hosting and Edge Deployment" section of the Roboflow Licensing documentation for information on how Roboflow Inference is licensed.

🏆 contribution

We would love your input to improve Roboflow Inference! Please see our contributing guide to get started. Thank you to all of our contributors! 🙏

