# Keras_cv_attention_models
- `coco_train_script.py` is under testing. It now uses an `anchors_mode` value in `[anchor_free, yolor, efficientdet]` instead of the previous `use_anchor_free_mode` and `use_yolor_anchors_mode` in training / evaluating / model structure. - 2022.04.15
- `CoAtNet` now uses `vv_dim = key_dim` instead of the previous `vv_dim = out_shape // num_heads`, and the pretrained weights are updated. Be cautious about this change if reloading earlier models. - 2022.04.24
- `SwinTransformerV2` parameter `window_ratio` is replaced with `window_size`, preferring the new weights from the official publication. - 2022.05.16
## General Usage
### Basic
- Currently recommended TF version is `tensorflow==2.8.0`, especially for training or TFLite conversion.
- Default imports:
```py
import os
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow import keras
```
- Install as pip package:
```sh
pip install -U keras-cv-attention-models
# Or
pip install -U git+https://github.com/leondgarse/keras_cv_attention_models
```
Refer to each sub-directory for detailed usage.
- Basic model prediction:
```py
from keras_cv_attention_models import volo
mm = volo.VOLO_d1(pretrained="imagenet")

""" Run predict """
import tensorflow as tf
from tensorflow import keras
from skimage.data import chelsea
img = chelsea()  # Chelsea the cat
imm = keras.applications.imagenet_utils.preprocess_input(img, mode='torch')
pred = mm(tf.expand_dims(tf.image.resize(imm, mm.input_shape[1:3]), 0)).numpy()
pred = tf.nn.softmax(pred).numpy()  # If classifier activation is not softmax
print(keras.applications.imagenet_utils.decode_predictions(pred)[0])
# [('n02124075', 'Egyptian_cat', 0.9692954),
#  ('n02123045', 'tabby', 0.020203391),
#  ('n02123159', 'tiger_cat', 0.006867502),
#  ('n02127052', 'lynx', 0.00017674894),
#  ('n02123597', 'Siamese_cat', 4.9493494e-05)]
```
Or just use the model's preset `preprocess_input` and `decode_predictions`:
```py
from keras_cv_attention_models import coatnet
from skimage.data import chelsea
mm = coatnet.CoAtNet0()
preds = mm(mm.preprocess_input(chelsea()))
print(mm.decode_predictions(preds))
# [[('n02124075', 'Egyptian_cat', 0.9653769), ('n02123159', 'tiger_cat', 0.018427467), ...]]
```
- Exclude model top layers by setting `num_classes=0`:
```py
from keras_cv_attention_models import resnest
mm = resnest.ResNest50(num_classes=0)
print(mm.output_shape)
# (None, 7, 7, 2048)
```
- Reload your own model weights by setting `pretrained="xxx.h5"`. This is helpful when reloading a model with a different `input_shape` where the weight shapes don't match.
```py
import os
from keras_cv_attention_models import coatnet
pretrained = os.path.expanduser('~/.keras/models/coatnet0_224_imagenet.h5')
mm = coatnet.CoAtNet1(input_shape=(384, 384, 3), pretrained=pretrained)
```
- The alias name `kecam` can be used instead of `keras_cv_attention_models`. Its `__init__.py` contains only one line: `from keras_cv_attention_models import *`.
```py
import kecam
mm = kecam.yolor.YOLOR_CSP()
imm = kecam.test_images.dog_cat()
preds = mm(mm.preprocess_input(imm))
bboxs, labels, confidences = mm.decode_predictions(preds)[0]
kecam.coco.show_image_with_bboxes(imm, bboxs, labels, confidences)
```
- Calculate FLOPs using the method from TF 2.0 Feature: Flops calculation #32809:
```py
from keras_cv_attention_models import coatnet, resnest, model_surgery

model_surgery.get_flops(coatnet.CoAtNet0())
# >>>> FLOPs: 4,221,908,559, GFLOPs: 4.2219G
model_surgery.get_flops(resnest.ResNest50())
# >>>> FLOPs: 5,378,399,992, GFLOPs: 5.3784G
```
- Code format uses `line-length=160`:
```sh
find ./* -name "*.py" | grep -v __init__ | grep -v setup.py | xargs -I {} black -l 160 {}
```
### Layers
- `attention_layers` is an `__init__.py` only module that imports core layers defined in the model architectures, like `RelativePositionalEmbedding` from `botnet`, `outlook_attention` from `volo`, and many other positional embedding layers / attention blocks.
```py
import tensorflow as tf
from keras_cv_attention_models import attention_layers
aa = attention_layers.RelativePositionalEmbedding()
print(f"{aa(tf.ones([1, 4, 14, 16, 256])).shape = }")
# aa(tf.ones([1, 4, 14, 16, 256])).shape = TensorShape([1, 4, 14, 16, 14, 16])
```
### Model surgery
- `model_surgery` includes functions used to change model parameters after the model is built.
```py
from tensorflow import keras
from keras_cv_attention_models import model_surgery

mm = keras.applications.ResNet50()  # Trainable params: 25,583,592

# Replace all ReLU with PReLU. Trainable params: 25,606,312
mm = model_surgery.replace_ReLU(mm, target_activation='PReLU')

# Fuse conv and batch_norm layers. Trainable params: 25,553,192
mm = model_surgery.convert_to_fused_conv_bn_model(mm)
```
### ImageNet training and evaluating
- ImageNet contains more detailed usage and some comparison results.
- Init Imagenet dataset using tensorflow_datasets #9.
- For custom datasets, the recommended method is using `tfds.load`; refer to Writing custom datasets and Creating private tensorflow_datasets from tfds #48 by @Medicmind. `custom_dataset_script.py` can also be used to create a `json` format file, which can then be passed as `--data_name xxx.json` for training. Detailed usage can be found in Custom recognition dataset; a small `tfds.load` sketch is shown below.
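A minimal sketch, assuming a dataset name registered with `tfds` (the public `cifar10` stands in for a custom one here):
```py
import tensorflow_datasets as tfds

# Any dataset registered with tfds loads the same way; "cifar10" is only a stand-in
train_ds, info = tfds.load("cifar10", split="train", with_info=True, as_supervised=True)
print(info.features["label"].num_classes)  # 10
for image, label in train_ds.take(1):
    print(image.shape, label.numpy())
```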
- `aotnet.AotNet50` default parameter set is a typical `ResNet50` architecture with `Conv2D use_bias=False` and `padding` like `PyTorch`, as sketched below.
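An illustrative default build (nothing beyond the plain constructor; `num_classes` is the usual classifier-head parameter across this package):
```py
from keras_cv_attention_models import aotnet

# Plain AotNet50 with default parameters, a PyTorch-style ResNet50 layout
mm = aotnet.AotNet50(num_classes=1000)
print(mm.input_shape, mm.output_shape)  # expect a 1000-class classifier head
```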
- Default parameters for `train_script.py` are like the `A3` configuration from "ResNet strikes back: An improved training procedure in timm" with `batch_size=256, input_shape=(160, 160)`.
```sh
# `antialias` is default enabled for resize, can be turned off by setting `--disable_antialias`.
CUDA_VISIBLE_DEVICES='0' TF_XLA_FLAGS="--tf_xla_auto_jit=2" ./train_script.py --seed 0 -s aotnet50

# Evaluation using input_shape (224, 224).
# `antialias` usage should be the same as in training.
CUDA_VISIBLE_DEVICES='1' ./eval_script.py -m aotnet50_epoch_103_val_acc_0.7674.h5 -i 224 --central_crop 0.95
# >>>> Accuracy top1: 0.78466 top5: 0.94088
```
- Restore from a break point by setting `--restore_path` and `--initial_epoch`, keeping other parameters the same. `restore_path` has higher priority than `model` and `additional_model_kwargs`, and also restores the `optimizer` and `loss`. `initial_epoch` is mainly for the learning rate scheduler. If not sure where training stopped, check `checkpoints/{save_name}_hist.json`.
```py
import json
with open("checkpoints/aotnet50_hist.json", "r") as ff:
    aa = json.load(ff)
print(len(aa['lr']))
# 41 ==> 41 epochs are finished, initial_epoch is 41 then, restart from epoch 42
```
```sh
CUDA_VISIBLE_DEVICES='0' TF_XLA_FLAGS="--tf_xla_auto_jit=2" ./train_script.py --seed 0 -r checkpoints/aotnet50_latest.h5 -I 41
# >>>> Restore model from: checkpoints/aotnet50_latest.h5
# Epoch 42/105
```
- `eval_script.py` is used for evaluating model accuracy. EfficientNetV2 self tested imagenet accuracy #19 shows how different parameters affect model accuracy.
```sh
# evaluating pretrained builtin model
CUDA_VISIBLE_DEVICES='1' ./eval_script.py -m regnet.RegNetZD8
# evaluating pretrained timm model
CUDA_VISIBLE_DEVICES='1' ./eval_script.py -m timm.models.resmlp_12_224 --input_shape 224
# evaluating specific h5 model
CUDA_VISIBLE_DEVICES='1' ./eval_script.py -m checkpoints/xxx.h5
# evaluating specific tflite model
CUDA_VISIBLE_DEVICES='1' ./eval_script.py -m xxx.tflite
```
- Progressive training: refer to PDF 2104.00298 EfficientNetV2: Smaller Models and Faster Training. AotNet50 A3 with progressive input shapes `96 128 160`:
```sh
CUDA_VISIBLE_DEVICES='1' TF_XLA_FLAGS="--tf_xla_auto_jit=2" ./progressive_train_script.py \
--progressive_epochs 33 66 -1 \
--progressive_input_shapes 96 128 160 \
--progressive_magnitudes 2 4 6 \
-s aotnet50_progressive_3_lr_steps_100 --seed 0
```
- Transfer learning with `freeze_backbone` or `freeze_norm_layers`: see EfficientNetV2B0 transfer learning on cifar10 testing freezing backbone #55, and the generic sketch below.
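A rough, hedged sketch of what freezing the backbone amounts to in plain Keras terms (not the `train_script.py` flags themselves; the layer slice below is an assumption, while `num_classes` and `pretrained` are the package's standard model kwargs):
```py
from keras_cv_attention_models import efficientnet

# Reload imagenet backbone weights with a fresh 10-class head, e.g. for cifar10
mm = efficientnet.EfficientNetV2B0(num_classes=10, pretrained="imagenet")

# Freeze all but the last few layers; the exact slice is illustrative, adjust to the real head size
for layer in mm.layers[:-3]:
    layer.trainable = False
mm.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["acc"])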
- Token label train test: see Token label train test on CIFAR10 #57. Currently not working as well as expected. `Token label` is an implementation of Github zihangJiang/TokenLabeling, paper PDF 2104.10858 All Tokens Matter: Token Labeling for Training Better Vision Transformers.
### COCO training and evaluating
- Currently still under testing.
- COCO contains more detailed usage.
- `custom_dataset_script.py` can be used to create a `json` format file, which can then be passed as `--data_name xxx.json` for training. Detailed usage can be found in Custom detection dataset.
- Default parameters for `coco_train_script.py` are `EfficientDetD0` with `input_shape=(256, 256, 3), batch_size=64, mosaic_mix_prob=0.5, freeze_backbone_epochs=32, total_epochs=105`. Technically, any `pyramid structure backbone` + `EfficientDet / YOLOX header / YOLOR header` + `anchor_free / yolor / efficientdet anchors` combination is supported.
- Currently 3 types of anchors are supported. The parameter `anchors_mode` controls which anchors to use, with value in `["efficientdet", "anchor_free", "yolor"]`. Default `None` uses the `det_header` presets.
| anchors_mode | use_object_scores | num_anchors | anchor_scale | aspect_ratios | num_scales | grid_zero_start |
| ------------ | ----------------- | ----------- | ------------ | ------------- | ---------- | --------------- |
| efficientdet | False             | 9           | 4            | [1, 2, 0.5]   | 3          | False           |
| anchor_free  | True              | 1           | 1            | [1]           | 1          | True            |
| yolor        | True              | 3           | None         | presets       | None       | offset=0.5      |
```sh
# Default EfficientDetD0
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py
# Default EfficientDetD0 using input_shape 512, optimizer adamw, freezing backbone 16 epochs, total 50 + 5 epochs
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py -i 512 -p adamw --freeze_backbone_epochs 16 --lr_decay_steps 50
# EfficientNetV2B0 backbone + EfficientDetD0 detection header
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py --backbone efficientnet.EfficientNetV2B0 --det_header efficientdet.EfficientDetD0
# ResNest50 backbone + EfficientDetD0 header using yolox-like anchor_free anchors
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py --backbone resnest.ResNest50 --anchors_mode anchor_free
# UniformerSmall32 backbone + EfficientDetD0 header using yolor anchors
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py --backbone uniformer.UniformerSmall32 --anchors_mode yolor

# Typical YOLOXS with anchor_free anchors
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py --det_header yolox.YOLOXS --freeze_backbone_epochs 0
# YOLOXS with efficientdet anchors
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py --det_header yolox.YOLOXS --anchors_mode efficientdet --freeze_backbone_epochs 0
# CoAtNet0 backbone + YOLOX header with yolor anchors
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py --backbone coatnet.CoAtNet0 --det_header yolox.YOLOX --anchors_mode yolor

# Typical YOLOR_P6 with yolor anchors
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py --det_header yolor.YOLOR_P6 --freeze_backbone_epochs 0
# YOLOR_P6 with anchor_free anchors
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py --det_header yolor.YOLOR_P6 --anchors_mode anchor_free --freeze_backbone_epochs 0
# ConvNeXtTiny backbone + YOLOR header with efficientdet anchors
CUDA_VISIBLE_DEVICES='0' ./coco_train_script.py --backbone convnext.ConvNeXtTiny --det_header yolor.YOLOR --anchors_mode efficientdet
```
Note: COCO training is still under testing; parameters and default behaviors may change. Take the risk only if you would like to help with development.
- `coco_eval_script.py` is used for evaluating model AP / AR on the COCO validation set. It depends on `pip install pycocotools`, which is not in the package requirements. More usage can be found in COCO Evaluation.
```sh
# EfficientDetD0 using resize method bilinear w/o antialias
CUDA_VISIBLE_DEVICES='1' ./coco_eval_script.py -m efficientdet.EfficientDetD0 --resize_method bilinear --disable_antialias
# >>>> [COCOEvalCallback] input_shape: (512, 512), pyramid_levels: [3, 7], anchors_mode: efficientdet

# YOLOX using BGR input format
CUDA_VISIBLE_DEVICES='1' ./coco_eval_script.py -m yolox.YOLOXTiny --use_bgr_input --nms_method hard --nms_iou_or_sigma 0.65
# >>>> [COCOEvalCallback] input_shape: (416, 416), pyramid_levels: [3, 5], anchors_mode: anchor_free

# YOLOR using letterbox_pad and other tricks.
CUDA_VISIBLE_DEVICES='1' ./coco_eval_script.py -m yolor.YOLOR_CSP --nms_method hard --nms_iou_or_sigma 0.65 \
--nms_max_output_size 300 --nms_topk -1 --letterbox_pad 64 --input_shape 704
# >>>> [COCOEvalCallback] input_shape: (704, 704), pyramid_levels: [3, 5], anchors_mode: yolor

# Specify h5 model
CUDA_VISIBLE_DEVICES='1' ./coco_eval_script.py -m checkpoints/yoloxtiny_yolor_anchor.h5
# >>>> [COCOEvalCallback] input_shape: (416, 416), pyramid_levels: [3, 5], anchors_mode: yolor
```
### Visualizing
- Visualizing is for visualizing convnet filters or attention map scores.
- `make_and_apply_gradcam_heatmap` is for Grad-CAM class activation visualization.
```py
from keras_cv_attention_models import visualizing, test_images, resnest
mm = resnest.ResNest50()
img = test_images.dog()
superimposed_img, heatmap, preds = visualizing.make_and_apply_gradcam_heatmap(mm, img, layer_name="auto")
```
- `plot_attention_score_maps` visualizes model attention score maps.
```py
from keras_cv_attention_models import visualizing, test_images, botnet
img = test_images.dog()
_ = visualizing.plot_attention_score_maps(botnet.BotNetSE33T(), img)
```
### TFLite Conversion
- Currently `TFLite` does not support `Conv2D with groups>1` / `gelu` / `tf.image.extract_patches` / `tf.transpose with len(perm) > 4`. Some operations may be supported in the `tf-nightly` version; try it if encountering an issue. More discussion can be found in Converting a trained keras CV attention model to TFLite #17. Some speed testing results can be found in How to speed up inference on a quantized model #44.
- The `tf.nn.gelu(inputs, approximate=True)` activation works for TFLite. Defining a model with `activation="gelu/approximate"` or `activation="gelu/app"` will set `approximate=True` for `gelu`. This is better decided before training, or there may be accuracy loss.
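A minimal sketch, assuming the model constructors accept the `activation` keyword as described above:
```py
from keras_cv_attention_models import coatnet

# "gelu/app" sets approximate=True for gelu, keeping the graph TFLite-convertible
mm = coatnet.CoAtNet0(pretrained=None, activation="gelu/app")
```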
- `model_surgery.convert_groups_conv2d_2_split_conv2d` converts the model's `Conv2D with groups>1` layers to `SplitConv` using `split -> conv -> concat`:
```py
import numpy as np
import tensorflow as tf
from keras_cv_attention_models import regnet, model_surgery
from keras_cv_attention_models.imagenet import eval_func

bb = regnet.RegNetZD32()
mm = model_surgery.convert_groups_conv2d_2_split_conv2d(bb)  # converts all `Conv2D` using `groups` to `SplitConv2D`
test_inputs = np.random.uniform(size=[1, *mm.input_shape[1:]])
print(np.allclose(mm(test_inputs), bb(test_inputs)))
# True

converter = tf.lite.TFLiteConverter.from_keras_model(mm)
open(mm.name + ".tflite", "wb").write(converter.convert())
print(np.allclose(mm(test_inputs), eval_func.TFLiteModelInterf(mm.name + '.tflite')(test_inputs), atol=1e-7))
# True
```
- `model_surgery.convert_gelu_and_extract_patches_for_tflite` converts the model's `gelu` activations to `gelu approximate=True`, and `tf.image.extract_patches` to a `Conv2D` version:
```py
import numpy as np
import tensorflow as tf
from keras_cv_attention_models import cotnet, model_surgery
from keras_cv_attention_models.imagenet import eval_func

mm = cotnet.CotNetSE50D()
mm = model_surgery.convert_groups_conv2d_2_split_conv2d(mm)
mm = model_surgery.convert_gelu_and_extract_patches_for_tflite(mm)
converter = tf.lite.TFLiteConverter.from_keras_model(mm)
open(mm.name + ".tflite", "wb").write(converter.convert())
test_inputs = np.random.uniform(size=[1, *mm.input_shape[1:]])
print(np.allclose(mm(test_inputs), eval_func.TFLiteModelInterf(mm.name + '.tflite')(test_inputs), atol=1e-7))
# True
```
- `model_surgery.prepare_for_tflite` is just a combination of the above 2 functions:
```py
import tensorflow as tf
from keras_cv_attention_models import beit, model_surgery

mm = beit.BeitBasePatch16()
mm = model_surgery.prepare_for_tflite(mm)
converter = tf.lite.TFLiteConverter.from_keras_model(mm)
open(mm.name + ".tflite", "wb").write(converter.convert())
```
- Converting `VOLO` / `HaloNet` models is not supported, because they need a longer `tf.transpose` `perm`.
## Recognition Models
### AotNet
- Keras `AotNet` is just a `ResNet` / `ResNetV2` like framework that accepts parameters like `attn_types` and `se_ratio` to apply different types of attention layers. It works like `byoanet` / `byobnet` from `timm`.
- The default parameter set is a typical `ResNet` architecture with `Conv2D use_bias=False` and `padding` like `PyTorch`.
```py
from keras_cv_attention_models import aotnet
# Mixing se and outlook and halo and mhsa and cot_attention, 21M parameters.
# 50 is just a picked number that is larger than the relative `num_block`.
attn_types = [None, "outlook", ["bot", "halo"] * 50, "cot"]
se_ratio = [0.25, 0, 0, 0]
model = aotnet.AotNet50V2(attn_types=attn_types, se_ratio=se_ratio, stem_type="deep", strides=1)
model.summary()
```
### BEIT
### BotNet
### CMT
| Model                              | Params | FLOPs | Input | Top1 Acc | Download |
| ---------------------------------- | ------ | ----- | ----- | -------- | -------- |
| CMTTiny, (Self trained 105 epochs) | 9.5M   | 0.65G | 160   | 77.4     |          |
| - 305 epochs                       | 9.5M   | 0.65G | 160   | 78.94    | cmt_tiny_160_imagenet |
| - fine-tuned 224 (69 epochs)       | 9.5M   | 1.32G | 224   | 80.73    | cmt_tiny_224_imagenet |
| CMTTiny, 1000 epochs               | 9.5M   | 0.65G | 160   | 79.2     |          |
| CMTXS                              | 15.2M  | 1.58G | 192   | 81.8     |          |
| CMTSmall                           | 25.1M  | 4.09G | 224   | 83.5     |          |
| CMTBig                             | 45.7M  | 9.42G | 256   | 84.5     |          |
### CoaT
### CoAtNet
| Model                               | Params | FLOPs  | Input | Top1 Acc | Download |
| ----------------------------------- | ------ | ------ | ----- | -------- | -------- |
| CoAtNet0 (Self trained 105 epochs)  | 23.3M  | 2.09G  | 160   | 80.48    | coatnet0_160_imagenet.h5 |
| - fine-tune 224, 37 epochs          | 23.3M  | 4.17G  | 224   | 82.21    | coatnet0_224_imagenet.h5 |
| CoAtNet0                            | 25M    | 4.2G   | 224   | 81.6     |          |
| CoAtNet0, Stride-2 DConv2D          | 25M    | 4.6G   | 224   | 82.0     |          |
| CoAtNet1                            | 42M    | 8.4G   | 224   | 83.3     |          |
| CoAtNet1, Stride-2 DConv2D          | 42M    | 8.8G   | 224   | 83.5     |          |
| CoAtNet2                            | 75M    | 15.7G  | 224   | 84.1     |          |
| CoAtNet2, Stride-2 DConv2D          | 75M    | 16.6G  | 224   | 84.1     |          |
| CoAtNet2, ImageNet-21k pretrain     | 75M    | 16.6G  | 224   | 87.1     |          |
| CoAtNet3                            | 168M   | 34.7G  | 224   | 84.5     |          |
| CoAtNet3, ImageNet-21k pretrain     | 168M   | 34.7G  | 224   | 87.6     |          |
| CoAtNet3, ImageNet-21k pretrain     | 168M   | 203.1G | 512   | 87.9     |          |
| CoAtNet4, ImageNet-21k pretrain     | 275M   | 360.9G | 512   | 88.1     |          |
| CoAtNet4, ImageNet-21K + PT-RA-E150 | 275M   | 360.9G | 512   | 88.56    |          |
JFT pre-trained models accuracy:

| Model                      | Input | Reported Params    | Self-defined Params    | Top1 Acc |
| -------------------------- | ----- | ------------------ | ---------------------- | -------- |
| CoAtNet3, Stride-2 DConv2D | 384   | 168M, FLOPs 114G   | 160.64M, FLOPs 109.67G | 88.52    |
| CoAtNet3, Stride-2 DConv2D | 512   | 168M, FLOPs 214G   | 161.24M, FLOPs 205.06G | 88.81    |
| CoAtNet4                   | 512   | 275M, FLOPs 361G   | 270.69M, FLOPs 359.77G | 89.11    |
| CoAtNet5                   | 512   | 688M, FLOPs 812G   | 676.23M, FLOPs 807.06G | 89.77    |
| CoAtNet6                   | 512   | 1.47B, FLOPs 1521G | 1.336B, FLOPs 1470.56G | 90.45    |
| CoAtNet7                   | 512   | 2.44B, FLOPs 2586G | 2.413B, FLOPs 2537.56G | 90.88    |
### ConvNeXt
### CoTNet
### DaViT
| Model         | Params | FLOPs  | Input | Top1 Acc | Download |
| ------------- | ------ | ------ | ----- | -------- | -------- |
| DaViT_T       | 28.36M | 4.56G  | 224   | 82.8     | davit_t_imagenet.h5 |
| DaViT_S       | 49.75M | 8.83G  | 224   | 84.2     | davit_s_imagenet.h5 |
| DaViT_B       | 87.95M | 15.55G | 224   | 84.6     | davit_b_imagenet.h5 |
| DaViT_L, 21k  | 196.8M | 103.2G | 384   | 87.5     |          |
| DaViT_H, 1.5B | 348.9M | 327.3G | 512   | 90.2     |          |
| DaViT_G, 1.5B | 1.406B | 1.022T | 512   | 90.4     |          |
### EfficientNet
### FBNetV3
### GMLP
| Model      | Params | FLOPs  | Input | Top1 Acc | Download |
| ---------- | ------ | ------ | ----- | -------- | -------- |
| GMLPTiny16 | 6M     | 1.35G  | 224   | 72.3     |          |
| GMLPS16    | 20M    | 4.44G  | 224   | 79.6     | gmlp_s16_imagenet.h5 |
| GMLPB16    | 73M    | 15.82G | 224   | 81.6     |          |
### HaloNet
### LCNet
### LeViT
### MLP mixer
| Model            | Params | FLOPs   | Input | Top1 Acc | Download |
| ---------------- | ------ | ------- | ----- | -------- | -------- |
| MLPMixerS32, JFT | 19.1M  | 1.01G   | 224   | 68.70    |          |
| MLPMixerS16, JFT | 18.5M  | 3.79G   | 224   | 73.83    |          |
| MLPMixerB32, JFT | 60.3M  | 3.25G   | 224   | 75.53    |          |
| - imagenet_sam   | 60.3M  | 3.25G   | 224   | 72.47    | b32_imagenet_sam.h5 |
| MLPMixerB16      | 59.9M  | 12.64G  | 224   | 76.44    | b16_imagenet.h5 |
| - imagenet21k    | 59.9M  | 12.64G  | 224   | 80.64    | b16_imagenet21k.h5 |
| - imagenet_sam   | 59.9M  | 12.64G  | 224   | 77.36    | b16_imagenet_sam.h5 |
| - JFT            | 59.9M  | 12.64G  | 224   | 80.00    |          |
| MLPMixerL32, JFT | 206.9M | 11.30G  | 224   | 80.67    |          |
| MLPMixerL16      | 208.2M | 44.66G  | 224   | 71.76    | l16_imagenet.h5 |
| - imagenet21k    | 208.2M | 44.66G  | 224   | 82.89    | l16_imagenet21k.h5 |
| - input 448      | 208.2M | 178.54G | 448   | 83.91    |          |
| - input 224, JFT | 208.2M | 44.66G  | 224   | 84.82    |          |
| - input 448, JFT | 208.2M | 178.54G | 448   | 86.78    |          |
| MLPMixerH14, JFT | 432.3M | 121.22G | 224   | 86.32    |          |
| - input 448, JFT | 432.3M | 484.73G | 448   | 87.94    |          |
### MobileNetV3
### MobileViT
### NAT
### NFNets
### RegNetY
### RegNetZ
### ResMLP
### ResNeSt
### ResNetD
### ResNetQ
| Model     | Params | FLOPs | Input | Top1 Acc | Download |
| --------- | ------ | ----- | ----- | -------- | -------- |
| ResNet51Q | 35.7M  | 4.87G | 224   | 82.36    | resnet51q.h5 |
| ResNet61Q | 36.8M  | 5.96G | 224   |          |          |
### ResNeXt
### SwinTransformerV2
### TinyNet
### UniFormer
### VOLO
### WaveMLP
## Detection Models
### EfficientDet
### YOLOR
### YOLOX
## Other implemented tensorflow or keras models
## Licenses
- This part is copied and modified according to Github rwightman/pytorch-image-models.
- Code. The code here is licensed MIT. It is your responsibility to ensure you comply with licenses here and conditions of any dependent licenses. Where applicable, I've linked the sources/references for various components in docstrings. If you think I've missed anything please create an issue. So far all of the pretrained weights available here are pretrained on ImageNet and COCO with a select few that have some additional pretraining.
- ImageNet Pretrained Weights. ImageNet was released for non-commercial research purposes only (https://image-net.org/download). It's not clear what the implications of that are for the use of pretrained weights from that dataset. Any models I have trained with ImageNet are done for research purposes and one should assume that the original dataset license applies to the weights. It's best to seek legal advice if you intend to use the pretrained weights in a commercial product.
- COCO Pretrained Weights. Should follow cocodataset termsofuse. The annotations in COCO dataset belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License. The COCO Consortium does not own the copyright of the images. Use of the images must abide by the Flickr Terms of Use. The users of the images accept full responsibility for the use of the dataset, including but not limited to the use of any copies of copyrighted images that they may create from the dataset.
- Pretrained on more than ImageNet and COCO. Several weights included or references here were pretrained with proprietary datasets that I do not have access to. These include the Facebook WSL, SSL, SWSL ResNe(Xt) and the Google Noisy Student EfficientNet models. The Facebook models have an explicit non-commercial license (CC-BY-NC 4.0, https://github.com/facebookresearch/semi-supervised-ImageNet1K-models, https://github.com/facebookresearch/WSL-Images). The Google models do not appear to have any restriction beyond the Apache 2.0 license (and ImageNet concerns). In either case, you should contact Facebook or Google with any questions.
## Citing
- BibTeX
```bibtex
@misc{leondgarse,
  author = {Leondgarse},
  title = {Keras CV Attention Models},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.6506947},
  howpublished = {\url{https://github.com/leondgarse/keras_cv_attention_models}}
}
```
- Latest DOI: