
Autodistill CLIP Module

This repository contains the code supporting the CLIP base model for use with Autodistill.

CLIP, developed by OpenAI, is a computer vision model trained on pairs of images and text. You can use CLIP with Autodistill for image classification.

Read the full Autodistill documentation.

Read the CLIP Autodistill documentation.

Installation

To use CLIP with autodistill, you need to install the following dependency:

pip3 install autodistill-clip

Quickstart

from autodistill_clip import CLIP
from autodistill.detection import CaptionOntology

# define an ontology to map CLIP prompts to class names
# the ontology dictionary has the format {caption: class},
# where caption is the prompt sent to the base model, and class is the label
# that will be saved for that caption in the generated annotations
# then, load the model
base_model = CLIP(
    ontology=CaptionOntology(
        {
            "person": "person",
            "a forklift": "forklift"
        }
    )
)

results = base_model.predict("./context_images/test.jpg")

print(results)

base_model.label("./context_images", extension=".jpeg")
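To make the ontology's role concrete, here is a minimal sketch of the caption-to-class mapping it encodes, using a plain dictionary rather than `CaptionOntology` itself. The scores are hypothetical stand-ins for the image–prompt similarities CLIP would produce; the point is only that the best-scoring caption's class becomes the predicted label.

```python
# The same mapping passed to CaptionOntology in the quickstart:
# captions (prompts sent to CLIP) -> classes (labels saved in annotations).
ontology = {"person": "person", "a forklift": "forklift"}

# Hypothetical similarity scores for one image, one per caption.
# In practice, CLIP computes these from the image and each prompt.
scores = {"person": 0.12, "a forklift": 0.88}

# The caption with the highest score wins; its class is the prediction.
best_caption = max(scores, key=scores.get)
predicted_class = ontology[best_caption]
print(predicted_class)  # forklift
```

This is why captions and classes are kept separate: you can tune the prompt wording ("a forklift" rather than "forklift") without changing the labels written to your dataset.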

License

The code in this repository is licensed under an MIT license.

🏆 Contributing

We love your input! Please see the core Autodistill contributing guide to get started. Thank you 🙏 to all our contributors!
