# Active Learning for Edge Vision

Active learning at the edge for computer vision.
The goal of this project is to provide a framework for running the active learning loop for computer vision models deployed on edge devices.
Supported tasks:
- Image classification
- Object detection
- Segmentation
## Installation

Get a release from PyPI:

```bash
pip install active-vision
```

Install from source:

```bash
git clone https://github.com/dnth/active-vision.git
cd active-vision
pip install -e .
```
I recommend using uv to set up a virtual environment and install the package, but any other virtual environment tool of your choice also works.

If you're using uv:

```bash
uv venv
uv sync
```

Once the virtual environment is created, you can install the package using pip.

> [!TIP]
> If you're using uv, add `uv` before the pip install command to install into your virtual environment, e.g.:
>
> ```bash
> uv pip install active-vision
> ```
## Usage
See the notebook for a complete example.
Be sure to prepare 3 datasets:
- `initial_samples`: A dataframe of an existing labeled training dataset to seed the training set.
- `unlabeled`: A dataframe of unlabeled data which we will sample from using active learning.
- `eval`: A dataframe of labeled data which we will use to evaluate the performance of the model.
As a toy example, I created the above 3 datasets from the imagenette dataset.
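Each dataframe only needs a filepath column and a label column. As a rough sketch of preparing the three splits with pandas (the folder layout and class names below are made up for illustration; in practice you would glob real image paths from disk):

```python
import pandas as pd

# Illustrative only: fabricate paths in a folder-per-class layout.
# In practice, collect real paths, e.g. Path("imagenette").rglob("*.JPEG").
records = [
    {"filepath": f"imagenette/train/{label}/{i}.JPEG", "label": label}
    for label in ["n01440764", "n02102040"]
    for i in range(100)
]
df = pd.DataFrame(records)

# Seed the training set with a few labeled samples per class
initial_samples = df.groupby("label").head(5)

# Everything else is treated as the unlabeled pool
unlabeled = df.drop(initial_samples.index)

# Hold out a labeled evaluation set, then remove it from the pool
eval_df = unlabeled.sample(frac=0.2, random_state=42)
unlabeled = unlabeled.drop(eval_df.index).reset_index(drop=True)
```

Each split can then be saved with `to_parquet` and loaded back as shown below.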
```python
from active_vision import ActiveLearner
import pandas as pd

# Create an active learner instance with a model
al = ActiveLearner("resnet18")

# Load dataset
train_df = pd.read_parquet("training_samples.parquet")
al.load_dataset(train_df, filepath_col="filepath", label_col="label")

# Train model
al.train(epochs=3, lr=1e-3)

# Evaluate the model on a *labeled* evaluation set
accuracy = al.evaluate(eval_df, filepath_col="filepath", label_col="label")

# Get predictions on image paths from the *unlabeled* set
pred_df = al.predict(filepaths)

# Sample low confidence predictions from unlabeled set
uncertain_df = al.sample_uncertain(pred_df, num_samples=10)

# Launch a Gradio UI to label the low confidence samples
al.label(uncertain_df, output_filename="uncertain")
```
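For intuition, this kind of uncertainty sampling is typically least-confidence selection: rank predictions by the model's top probability and take the lowest. A self-contained sketch of the idea (the `pred_conf` and `pred_label` column names here are assumptions for illustration, not necessarily the package's exact schema):

```python
import pandas as pd

def sample_uncertain(pred_df: pd.DataFrame, num_samples: int) -> pd.DataFrame:
    """Least-confidence sampling: pick the rows whose top predicted
    probability is lowest, i.e. where the model is least sure."""
    return pred_df.sort_values("pred_conf", ascending=True).head(num_samples)

pred_df = pd.DataFrame({
    "filepath": ["a.jpg", "b.jpg", "c.jpg", "d.jpg"],
    "pred_label": ["cat", "dog", "cat", "dog"],
    "pred_conf": [0.95, 0.51, 0.60, 0.99],
})

uncertain = sample_uncertain(pred_df, num_samples=2)
# b.jpg (0.51) and c.jpg (0.60) are selected for labeling
```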
Once labeling is complete, the labeled samples are saved into a new dataframe. We can now add the newly labeled data to the training set.
```python
# Add newly labeled data to training set and save as a new file active_labeled
al.add_to_train_set(labeled_df, output_filename="active_labeled")
```
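Conceptually, this step just appends the newly labeled rows to the existing training dataframe, de-duplicates, and saves the result. A rough pandas equivalent (the file and column names are illustrative, not the package's internals):

```python
import pandas as pd

train_df = pd.DataFrame({"filepath": ["a.jpg"], "label": ["cat"]})
labeled_df = pd.DataFrame({"filepath": ["b.jpg", "c.jpg"], "label": ["dog", "cat"]})

# Append the new labels; if a file was relabeled, keep the most recent label
combined = (
    pd.concat([train_df, labeled_df], ignore_index=True)
    .drop_duplicates(subset="filepath", keep="last")
)
# Persist for the next iteration, e.g. combined.to_parquet("active_labeled.parquet")
```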
Repeat the process until the model is good enough, then use the dataset to train a larger model and deploy it.
> [!TIP]
> For the toy dataset, I got to about 93% accuracy on the evaluation set with 200+ labeled images. The best performing model on the leaderboard got 95.11% accuracy training on all 9469 labeled images.
>
> This took me about 6 iterations of relabeling. Each iteration took about 5 minutes to complete, including labeling and model training (resnet18). See the notebook for more details.
>
> Using the same dataset of 200+ images, I then trained a more capable model (convnext_small_in22k) and got 99.3% accuracy on the evaluation set. See the notebook for more details.
## Workflow
There are two workflows for active learning at the edge that we can use depending on the availability of labeled data.
### With unlabeled data
If we have no labeled data, we can use active learning to iteratively improve the model and build a labeled dataset.
1. Load a small proxy model.
2. Label an initial dataset. If there is none, you'll have to label some images.
3. Train the proxy model on the labeled dataset.
4. Run inference on the unlabeled dataset.
5. Evaluate the performance of the proxy model.
6. Is the model good enough?
   - Yes: Save the proxy model and the dataset.
   - No: Select the most informative images to label using active learning.
7. Label the most informative images and add them to the dataset.
8. Repeat steps 3-6.
9. Save the proxy model and the dataset.
10. Train a larger model on the saved dataset.
```mermaid
graph TD
    A[Load a small proxy model] --> B[Label an initial dataset]
    B --> C[Train proxy model on labeled dataset]
    C --> D[Run inference on unlabeled dataset]
    D --> E[Evaluate proxy model performance]
    E --> F{Model good enough?}
    F -->|Yes| G[Save proxy model and dataset]
    G --> H[Train and deploy a larger model]
    F -->|No| I[Select informative images using active learning]
    I --> J[Label selected images]
    J --> C
```
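The loop above can be sketched in code. Everything here is a hypothetical stand-in: the stub `train`, `evaluate`, `select`, and `annotate` functions simulate the real training, evaluation, sampling, and labeling steps, purely to show the control flow:

```python
def run_loop(labeled, unlabeled, train, evaluate, select, annotate,
             target_accuracy=0.85, max_iters=10):
    """Generic active-learning loop: train a proxy model, stop when it
    is good enough, otherwise label the selected batch and repeat."""
    model = None
    for _ in range(max_iters):
        model = train(labeled)                     # steps 3 (train)
        if evaluate(model) >= target_accuracy:     # steps 5-6 (evaluate, decide)
            break
        batch = select(model, unlabeled, 10)       # step 6 (No branch)
        labeled = labeled + annotate(batch)        # step 7 (label and add)
        unlabeled = [x for x in unlabeled if x not in batch]
    return model, labeled

# Toy simulation: accuracy grows with the size of the labeled set.
train = lambda labeled: {"n": len(labeled)}
evaluate = lambda model: min(1.0, 0.5 + 0.01 * model["n"])
select = lambda model, pool, k: pool[:k]
annotate = lambda batch: batch

model, labeled = run_loop(list(range(20)), list(range(20, 200)),
                          train, evaluate, select, annotate)
# Stops after the labeled set grows from 20 to 40 samples
```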
### With labeled data
If we have a labeled dataset, we can use active learning to iteratively improve the dataset and the model by fixing the most important label errors.
1. Load a small proxy model.
2. Train the proxy model on the labeled dataset.
3. Run inference on the entire labeled dataset.
4. Get the most important label errors with active learning.
5. Fix the label errors.
6. Repeat steps 2-5 until the dataset is good enough.
7. Save the labeled dataset.
8. Train a larger model on the saved labeled dataset.
```mermaid
graph TD
    A[Load a small proxy model] --> B[Train proxy model on labeled dataset]
    B --> C[Run inference on labeled dataset]
    C --> D[Get important label errors using active learning]
    D --> E[Fix label errors]
    E --> F{Dataset good enough?}
    F -->|No| B
    F -->|Yes| G[Save cleaned dataset]
    G --> H[Train and deploy larger model]
```
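One simple way to surface likely label errors in step 4 is to flag samples where the model confidently disagrees with the assigned label. A self-contained sketch of that idea (the confidence threshold and column names are illustrative assumptions):

```python
import pandas as pd

def find_label_errors(df: pd.DataFrame, conf_threshold: float = 0.9) -> pd.DataFrame:
    """Flag rows where the model's prediction disagrees with the
    assigned label AND the model is highly confident: these are the
    most suspicious labels to review first."""
    suspects = df[(df["pred_label"] != df["label"]) & (df["pred_conf"] >= conf_threshold)]
    return suspects.sort_values("pred_conf", ascending=False)

df = pd.DataFrame({
    "filepath":   ["a.jpg", "b.jpg", "c.jpg"],
    "label":      ["cat",   "dog",   "cat"],
    "pred_label": ["cat",   "cat",   "dog"],
    "pred_conf":  [0.99,    0.97,    0.55],
})

errors = find_label_errors(df)
# Only b.jpg is flagged: a confident disagreement with its label
```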
## File details

Details for the file `active_vision-0.0.4.tar.gz`.

### File metadata

- Download URL: active_vision-0.0.4.tar.gz
- Size: 14.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.8

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `aae3c1ea2a17ce894418c5523f5a41fbada53680b5e3a532cd21c085eabc4c37` |
| MD5 | `ce802d0a7b16d68e61493d60c12ebf20` |
| BLAKE2b-256 | `0fedcd7e60a985d422ecb22112f2f01b89994a1d409bfe5041732b549deb6716` |
## File details

Details for the file `active_vision-0.0.4-py3-none-any.whl`.

### File metadata

- Download URL: active_vision-0.0.4-py3-none-any.whl
- Size: 11.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.8

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `818bd384adaa692bd5d3ed008bef84327d3817d1c23de780df5d12a851d5fa67` |
| MD5 | `d1f90a6a940160b54c0b31be0819b46c` |
| BLAKE2b-256 | `60a9c572fb16597ad7b790e4419c1e9f1eaba031d70f64196be525b07f9b122b` |