Azure Vision base model for use with Autodistill

Autodistill Azure Vision Module

This repository contains the code supporting the Azure Vision module for use with Autodistill.

The Azure Vision Image Analysis 4.0 API enables you to detect objects in images.

With this package, you can use the API to automatically label data for use in training a fine-tuned computer vision model.

This is ideal if you want to train a model that you own on a custom dataset.

You can then use your trained model on your computer using Autodistill, or at the edge or in the cloud by deploying with Roboflow Inference.

Read the full Autodistill documentation.

Read the Autodistill Azure Vision documentation.

Installation

[!NOTE]
Using this project will incur billing charges for API calls to the Azure Vision Image Analysis 4.0 API. Refer to the Azure Computer Vision pricing page for more information and to estimate your expected costs. This package makes one API call per image you want to label.

To use Azure Vision with Autodistill, you need to install the following dependency:

pip install autodistill-azure-vision

Next, you will need an Azure account. Once you have one, create a "Computer vision" deployment in the "Azure AI services" dashboard in Azure.

This deployment will give you two API keys and an endpoint URL. You will need one of these API keys and the endpoint URL to use this Autodistill module.

Set your API key and endpoint URL in your environment:

export AZURE_VISION_SUBSCRIPTION_KEY=<your api key>
export AZURE_VISION_ENDPOINT=<your endpoint>
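
If you prefer to read these values in Python rather than rely on the shell, a minimal sketch (using the two environment variable names above) looks like this:

import os

# read the credentials exported above and fail early with a clear message if either is missing
endpoint = os.environ.get("AZURE_VISION_ENDPOINT")
subscription_key = os.environ.get("AZURE_VISION_SUBSCRIPTION_KEY")

if not endpoint or not subscription_key:
    raise RuntimeError(
        "Set AZURE_VISION_ENDPOINT and AZURE_VISION_SUBSCRIPTION_KEY before running the quickstart."
    )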

Use the quickstart below to start labeling images.

Quickstart

Label a Single Image

import os

import cv2
import supervision as sv
from autodistill_azure_vision import AzureVision
from autodistill.detection import CaptionOntology

# define an ontology to map class names to our Azure Vision prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = AzureVision(
    ontology=CaptionOntology(
        {
            "animal": "animal",
            "a forklift": "forklift"
        }
    ),
    endpoint=os.environ["AZURE_VISION_ENDPOINT"],
    subscription_key=os.environ["AZURE_VISION_SUBSCRIPTION_KEY"]
)

detections = base_model.predict("image.jpeg")

print(detections)

# annotate predictions on an image
classes = base_model.ontology.classes()

box_annotator = sv.BoxAnnotator()

labels = [
	f"{classes[class_id]} {confidence:0.2f}"
	for class_id, confidence
	in zip(detections.class_id, detections.confidence)
]

image = cv2.imread("image.jpeg")

annotated_frame = box_annotator.annotate(
	scene=image.copy(),
	detections=detections,
	labels=labels
)

sv.plot_image(image=annotated_frame, size=(16, 16))
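
The predictions come back as a supervision Detections object (this is what box_annotator.annotate consumes above), so you can filter out low-confidence boxes before annotating. A minimal sketch, using a 0.5 confidence threshold chosen purely for illustration:

# keep only detections above an example confidence threshold
confident_detections = detections[detections.confidence > 0.5]

print(f"kept {len(confident_detections)} of {len(detections)} detections")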

Label a Folder of Images

import os

from autodistill_azure_vision import AzureVision
from autodistill.detection import CaptionOntology

# define an ontology to map class names to our Azure Vision prompt
# the ontology dictionary has the format {caption: class}
# where caption is the prompt sent to the base model, and class is the label that will
# be saved for that caption in the generated annotations
# then, load the model
base_model = AzureVision(
    ontology=CaptionOntology(
        {
            "animal": "animal",
            "a forklift": "forklift"
        }
    ),
    endpoint=os.environ["AZURE_VISION_ENDPOINT"],
    subscription_key=os.environ["AZURE_VISION_SUBSCRIPTION_KEY"]
)

base_model.label("./context_images", extension=".jpeg")
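
Once labeling finishes, Autodistill writes a labeled dataset to disk that you can use to train a target model. A minimal sketch, assuming you have also installed the separate autodistill-yolov8 package and that the labeled dataset was written to ./context_images_labeled (check the folder label() actually created on your machine):

from autodistill_yolov8 import YOLOv8

# train a small YOLOv8 target model on the automatically labeled dataset
# the dataset path below is an assumption; point it at the folder label() produced
target_model = YOLOv8("yolov8n.pt")
target_model.train("./context_images_labeled/data.yaml", epochs=200)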

License

This project is licensed under an MIT license.

🏆 Contributing

We love your input! Please see the core Autodistill contributing guide to get started. Thank you 🙏 to all our contributors!

