Autodistill ViT Module
This repository contains the code supporting the ViT target model for use with Autodistill.
ViT (Vision Transformer) is a classification model developed by Google and pre-trained on ImageNet-21k. You can train ViT classification models using Autodistill.
Read the full Autodistill documentation.
Read the ViT Autodistill documentation.
Installation
To use the ViT target model, you will need to install the following dependency:
pip3 install autodistill-vit
Quickstart
from autodistill_vit import ViT
target_model = ViT()
# train a model from a classification folder structure
target_model.train("./context_images_labeled/", epochs=200)
# run inference on the new model
pred = target_model.predict("./context_images_labeled/train/images/dog-7.jpg", conf=0.01)
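target_model.train() consumes a labeled classification folder. The exact layout is not spelled out above; the sketch below assumes the common ImageNet-style convention of one subdirectory per class under a train/ split (the helper name, folder names, and class names are hypothetical, for illustration only):

```python
from pathlib import Path
import tempfile

def make_classification_folder(root, classes):
    # Assumed ImageNet-style layout: train/<class_name>/<image files>,
    # one subdirectory per class; each subdirectory holds that class's images.
    root = Path(root)
    for cls in classes:
        (root / "train" / cls).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in (root / "train").iterdir())

with tempfile.TemporaryDirectory() as tmp:
    print(make_classification_folder(tmp, ["cat", "dog"]))  # ['cat', 'dog']
```

Under this assumption, the dog-7.jpg path in the Quickstart would live in a class subdirectory such as ./context_images_labeled/train/dog/.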
License
The code in this repository is licensed under an Apache 2.0 license.
🏆 Contributing
We love your input! Please see the core Autodistill contributing guide to get started. Thank you 🙏 to all our contributors!
Project details
Download files
Source Distribution
autodistill-vit-0.1.0.tar.gz (7.7 kB)
Built Distribution
Hashes for autodistill_vit-0.1.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | fff1adb79031b5236aa7c4895223f0434e1679db9efcb0642688253f3da3ad68
MD5 | 5dd16e521fd83e8a2030a8af318d743f
BLAKE2b-256 | 0275bd2affd25601bdcce764bdfbd104feb89437c18d8f9dd924a464aa4bb633