Tools for using wildlife re-identification datasets.
Introduction
The wildlife-tools library offers a simple interface for various tasks in the wildlife re-identification domain. It covers use cases such as training, feature extraction, similarity calculation, image retrieval, and classification. It complements the wildlife-datasets library, which acts as a dataset repository. All datasets there can be used in combination with the WildlifeDataset component, which handles loading images and image tensors, among other tasks.
More information can be found in the documentation.
Modules in the wildlife-tools library:
- The data module provides tools for creating instances of the WildlifeDataset.
- The train module offers tools for fine-tuning feature extractors on the WildlifeDataset.
- The features module provides tools for extracting features from the WildlifeDataset using various extractors.
- The similarity module provides tools for constructing a similarity matrix from query and database features.
- The inference module offers tools for creating predictions using the similarity matrix.
Relations between modules:
graph TD;
A[Data]-->|WildlifeDataset|B[Features]
A-->|WildlifeDataset|C;
C[Train]-->|finetuned extractor|B;
B-->|query and database features|D[Similarity]
D-->|similarity matrix|E[Inference]
Example
1. Create WildlifeDataset
Using metadata from wildlife-datasets, create a WildlifeDataset object for the StripeSpotter dataset.
from wildlife_datasets.datasets import StripeSpotter
from wildlife_tools.data import WildlifeDataset
import torchvision.transforms as T
metadata = StripeSpotter('datasets/StripeSpotter')
transform = T.Compose([T.Resize([224, 224]), T.ToTensor()])
dataset = WildlifeDataset(metadata.df, metadata.root, transform=transform)
Optionally, split the metadata into subsets. In this example, the query set contains the first 100 images and the remaining images form the database.
database = WildlifeDataset(metadata.df.iloc[100:,:], metadata.root, transform=transform)
query = WildlifeDataset(metadata.df.iloc[:100,:], metadata.root, transform=transform)
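The iloc-based split above partitions the metadata by row position. With a hypothetical metadata frame standing in for metadata.df, a quick pandas check confirms that the two subsets are disjoint and together cover every row:

```python
import pandas as pd

# Hypothetical metadata frame standing in for metadata.df from wildlife-datasets.
df = pd.DataFrame({"image_id": range(250),
                   "identity": [f"id_{i % 10}" for i in range(250)]})

query_df = df.iloc[:100, :]      # first 100 rows -> query
database_df = df.iloc[100:, :]   # remaining rows -> database

print(len(query_df), len(database_df))  # 100 150
```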
2. Extract features
Extract features using MegaDescriptor-Tiny, downloaded from the HuggingFace Hub.
import timm
from wildlife_tools.features import DeepFeatures
name = 'hf-hub:BVRA/MegaDescriptor-T-224'
extractor = DeepFeatures(timm.create_model(name, num_classes=0, pretrained=True))
query, database = extractor(query), extractor(database)
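Conceptually, the extractor maps each image tensor to a fixed-length embedding vector. As a purely illustrative stand-in (not the MegaDescriptor architecture), a random linear projection shows that shape contract; the sizes below are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a deep extractor: flatten each image and apply a fixed
# random projection to get one embedding vector per image.
IMG_SHAPE = (3, 32, 32)   # toy size; the example above uses 3 x 224 x 224
EMBED_DIM = 128           # toy embedding size
projection = rng.normal(size=(np.prod(IMG_SHAPE), EMBED_DIM))

def extract_features(images):
    """images: (n, 3, 32, 32) array -> (n, 128) embedding matrix."""
    flat = images.reshape(len(images), -1)
    return flat @ projection

batch = rng.normal(size=(4, *IMG_SHAPE))
features = extract_features(batch)
print(features.shape)  # (4, 128)
```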
3. Calculate similarity
Calculate cosine similarity between query and database deep features.
from wildlife_tools.similarity import CosineSimilarity
similarity_function = CosineSimilarity()
similarity = similarity_function(query, database)
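For intuition, cosine similarity between query and database features reduces to a matrix product of L2-normalized rows. A minimal NumPy sketch of that computation (not the library's implementation):

```python
import numpy as np

def cosine_similarity(query, database):
    """Pairwise cosine similarity between rows of `query` and rows of `database`."""
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    d = database / np.linalg.norm(database, axis=1, keepdims=True)
    return q @ d.T  # shape: (n_query, n_database)

rng = np.random.default_rng(0)
query_features = rng.normal(size=(3, 8))     # 3 query images, 8-dim features
database_features = rng.normal(size=(5, 8))  # 5 database images
sim = cosine_similarity(query_features, database_features)
print(sim.shape)  # (3, 5)
```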
4. Evaluate
Use the cosine similarity in a nearest neighbour classifier to get predictions.
from wildlife_tools.inference import KnnClassifier
classifier = KnnClassifier(k=1, database_labels=database.labels_string)
predictions = classifier(similarity['cosine'])
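Conceptually, a 1-NN classifier over a similarity matrix assigns each query the label of its single most similar database entry. A minimal sketch of that idea (the library's KnnClassifier may differ in details):

```python
import numpy as np

def knn_1_predict(similarity, database_labels):
    """Assign each query the label of its most similar database row."""
    nearest = similarity.argmax(axis=1)        # index of best match per query
    return np.asarray(database_labels)[nearest]

# Toy similarity matrix: 2 queries vs 3 database images.
similarity = np.array([[0.1, 0.9, 0.3],
                       [0.8, 0.2, 0.4]])
labels = ["zebra_A", "zebra_B", "zebra_C"]
print(knn_1_predict(similarity, labels))  # ['zebra_B' 'zebra_A']
```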
Installation
pip install wildlife-tools
The following dependencies were used:
torch==2.0.1
pytorch-metric-learning==1.6.0
faiss-gpu==1.7.2
pycocotools==2.0.4
timm==0.9.2