
EfficientNetV2

EfficientNetV2: Smaller Models and Faster Training

Abstract

This paper introduces EfficientNetV2, a new family of convolutional networks that have faster training speed and better parameter efficiency than previous models. To develop this family of models, we use a combination of training-aware neural architecture search and scaling, to jointly optimize training speed and parameter efficiency. The models were searched from the search space enriched with new ops such as Fused-MBConv. Our experiments show that EfficientNetV2 models train much faster than state-of-the-art models while being up to 6.8x smaller. Our training can be further sped up by progressively increasing the image size during training, but it often causes a drop in accuracy. To compensate for this accuracy drop, we propose to adaptively adjust regularization (e.g., dropout and data augmentation) as well, such that we can achieve both fast training and good accuracy. With progressive learning, our EfficientNetV2 significantly outperforms previous models on ImageNet and CIFAR/Cars/Flowers datasets. By pretraining on the same ImageNet21k, our EfficientNetV2 achieves 87.3% top-1 accuracy on ImageNet ILSVRC2012, outperforming the recent ViT by 2.0% accuracy while training 5x-11x faster using the same computing resources. Code will be available at https://github.com/google/automl/tree/master/efficientnetv2.
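
As a rough illustration of the progressive learning recipe, the sketch below linearly grows the image size and the dropout rate together across training stages. This is a minimal sketch: the stage count and the size/dropout ranges are illustrative assumptions, not the paper's exact schedule, which also scales RandAugment and mixup strength.

def progressive_schedule(stage, num_stages, min_size=128, max_size=300,
                         min_dropout=0.1, max_dropout=0.3):
    """Linearly interpolate image size and dropout across stages."""
    t = stage / max(num_stages - 1, 1)
    image_size = int(min_size + t * (max_size - min_size))
    dropout = min_dropout + t * (max_dropout - min_dropout)
    return image_size, dropout

# A 4-stage run grows images and regularization together:
# roughly (128, 0.10), (185, 0.17), (242, 0.23), (300, 0.30).
for stage in range(4):
    print(progressive_schedule(stage, num_stages=4))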

How to use it?

Predict image

from mmpretrain import inference_model

# Run single-image inference; the result is a dict with the
# predicted class and its score.
predict = inference_model('efficientnetv2-b0_3rdparty_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
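
To browse the other EfficientNetV2 checkpoints registered in mmpretrain, you can use the library's list_models helper with a name pattern:

from mmpretrain import list_models

# Print every registered model name matching the pattern.
print(list_models('efficientnetv2'))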

Use the model

import torch
from mmpretrain import get_model

model = get_model('efficientnetv2-b0_3rdparty_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
# A forward pass returns the classification output.
out = model(inputs)
print(type(out))
# To extract backbone features instead of predictions.
feats = model.extract_feat(inputs)
print(type(feats))
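
Continuing from the snippet above: extract_feat returns backbone features rather than classification scores. Assuming the default configuration, feats is a tuple with one feature tensor per output stage, so the shapes can be inspected like this:

# feats is a tuple of tensors, one per configured output stage.
for i, feat in enumerate(feats):
    print(f'stage {i}: {tuple(feat.shape)}')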

Test Command

Prepare your dataset according to the docs.

Test:

python tools/test.py configs/efficientnet_v2/efficientnetv2-b0_8xb32_in1k.py https://download.openmmlab.com/mmclassification/v0/efficientnetv2/efficientnetv2-b0_3rdparty_in1k_20221221-9ef6e736.pth

Models and results

Pretrained models

| Model | Params (M) | Flops (G) | Config | Download |
| :--- | ---: | ---: | :---: | :---: |
| efficientnetv2-s_3rdparty_in21k* | 48.16 | 3.31 | config | model |
| efficientnetv2-m_3rdparty_in21k* | 80.84 | 5.86 | config | model |
| efficientnetv2-l_3rdparty_in21k* | 145.22 | 13.11 | config | model |
| efficientnetv2-xl_3rdparty_in21k* | 234.82 | 18.86 | config | model |

Models with * are converted from timm. The config files of these models are for inference only; we haven't reproduced the training results.

Image Classification on ImageNet-1k

| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Top-5 (%) | Config | Download |
| :--- | :--- | ---: | ---: | ---: | ---: | :---: | :---: |
| efficientnetv2-b0_3rdparty_in1k* | From scratch | 7.14 | 0.92 | 78.52 | 94.44 | config | model |
| efficientnetv2-b1_3rdparty_in1k* | From scratch | 8.14 | 1.44 | 79.80 | 94.89 | config | model |
| efficientnetv2-b2_3rdparty_in1k* | From scratch | 10.10 | 1.99 | 80.63 | 95.30 | config | model |
| efficientnetv2-b3_3rdparty_in1k* | From scratch | 14.36 | 3.50 | 82.03 | 95.88 | config | model |
| efficientnetv2-s_3rdparty_in1k* | From scratch | 21.46 | 9.72 | 83.82 | 96.67 | config | model |
| efficientnetv2-m_3rdparty_in1k* | From scratch | 54.14 | 26.88 | 85.01 | 97.26 | config | model |
| efficientnetv2-l_3rdparty_in1k* | From scratch | 118.52 | 60.14 | 85.43 | 97.31 | config | model |
| efficientnetv2-s_in21k-pre_3rdparty_in1k* | ImageNet-21k | 21.46 | 9.72 | 84.29 | 97.26 | config | model |
| efficientnetv2-m_in21k-pre_3rdparty_in1k* | ImageNet-21k | 54.14 | 26.88 | 85.47 | 97.76 | config | model |
| efficientnetv2-l_in21k-pre_3rdparty_in1k* | ImageNet-21k | 118.52 | 60.14 | 86.31 | 97.99 | config | model |
| efficientnetv2-xl_in21k-pre_3rdparty_in1k* | ImageNet-21k | 208.12 | 98.34 | 86.39 | 97.83 | config | model |

Models with * are converted from timm. The config files of these models are for inference only; we haven't reproduced the training results.

Citation

@inproceedings{tan2021efficientnetv2,
  title={Efficientnetv2: Smaller models and faster training},
  author={Tan, Mingxing and Le, Quoc},
  booktitle={International Conference on Machine Learning},
  pages={10096--10106},
  year={2021},
  organization={PMLR}
}