
mAP is stuck at 0 for MobileNet V2 SSD QAT without pretrained model [Bug] #648

Open
HeleMartin opened this issue Jul 11, 2024 · 0 comments
Labels
bug Something isn't working

Describe the bug

I have tried to QAT-train a MobileNet V2 SSD from scratch (based on `ssdlite_mobilenetv2-scratch_8xb24-600e_coco`), but mAP is stuck at 0 after 10 epochs. I tried the VGG16 SSD model (based on `ssd300_coco`) with the same COCO 2017 dataset and it was fine: mAP increased.

Could you explain this behavior? Does QAT depend on the backbone used?

To Reproduce

Here is the config used for QAT on MobileNet V2:

```python
_base_ = ['./ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py']

mbnv2 = _base_.model

global_qconfig = dict(
    w_observer=dict(type='mmrazor.PerChannelMinMaxObserver'),
    a_observer=dict(type='mmrazor.MovingAverageMinMaxObserver'),
    w_fake_quant=dict(type='mmrazor.FakeQuantize'),
    a_fake_quant=dict(type='mmrazor.FakeQuantize'),
    w_qscheme=dict(
        qdtype='qint8', bit=8, is_symmetry=True, is_symmetric_range=True),
    a_qscheme=dict(qdtype='quint8', bit=8, is_symmetry=True),
)

model = dict(
    _delete_=True,
    type='mmrazor.MMArchitectureQuant',
    data_preprocessor=dict(
        type='mmdet.DetDataPreprocessor',
        mean=[128],
        std=[128],
        bgr_to_rgb=True,
        pad_size_divisor=32),
    architecture=mbnv2,
    float_checkpoint=None,
    input_shapes=(1, 3, 320, 320),
    quantizer=dict(
        type='mmrazor.OpenVINOQuantizer',
        global_qconfig=global_qconfig,
        tracer=dict(
            type='mmrazor.CustomTracer',
            skipped_methods=[
                'mmdet.models.dense_heads.base_dense_head.BaseDenseHead.predict_by_feat',
                'mmdet.models.dense_heads.ssd_head.SSDHead.loss_by_feat'
            ])))

model_wrapper_cfg = dict(
    type='mmrazor.MMArchitectureQuantDDP',
    broadcast_buffers=False,
    find_unused_parameters=False)

# train, val, test setting
train_cfg = dict(
    _delete_=True,
    type='mmrazor.QATEpochBasedLoop',
    max_epochs=10,
    val_interval=1)
val_cfg = dict(_delete_=True, type='mmrazor.QATValLoop')

default_hooks = dict(sync=dict(type='mmrazor.SyncBuffersHook'))

optim_wrapper = dict(
    optimizer=dict(type='SGD', lr=0.0001, momentum=0.9, weight_decay=0.0001))

# learning policy
param_scheduler = dict(
    _delete_=True, type='ConstantLR', factor=1.0, by_epoch=True)
```
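For context, the `w_qscheme` above (`qint8`, `is_symmetry=True`, `is_symmetric_range=True`) corresponds to fake-quantizing weights onto a symmetric [-127, 127] grid. A minimal plain-Python sketch of that quantize-then-dequantize step (an illustration only, not the mmrazor implementation):

```python
def fake_quantize_symmetric(values, bits=8, symmetric_range=True):
    """Simulate symmetric fake quantization: quantize then dequantize.

    With symmetric_range=True the integer range is [-127, 127] instead of
    [-128, 127], mirroring is_symmetric_range=True in the qconfig above.
    """
    qmax = 2 ** (bits - 1) - 1            # 127 for 8 bits
    qmin = -qmax if symmetric_range else -(qmax + 1)
    scale = max(abs(v) for v in values) / qmax
    quantized = [min(max(round(v / scale), qmin), qmax) for v in values]
    return [q * scale for q in quantized]

# floats snapped to the nearest point on the int8 grid
print(fake_quantize_symmetric([-1.0, -0.5, 0.0, 0.25, 1.0]))
```

During QAT the forward pass sees these snapped values, while gradients flow through as if the rounding were the identity (straight-through estimator).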

and the one used for VGG16:

```python
_base_ = ['./ssd300_coco.py']

ssd = _base_.model

global_qconfig = dict(
    w_observer=dict(type='mmrazor.PerChannelMinMaxObserver'),
    a_observer=dict(type='mmrazor.MovingAverageMinMaxObserver'),
    w_fake_quant=dict(type='mmrazor.FakeQuantize'),
    a_fake_quant=dict(type='mmrazor.FakeQuantize'),
    w_qscheme=dict(
        qdtype='qint8', bit=8, is_symmetry=True, is_symmetric_range=True),
    a_qscheme=dict(qdtype='quint8', bit=8, is_symmetry=True),
)

model = dict(
    _delete_=True,
    type='mmrazor.MMArchitectureQuant',
    data_preprocessor=dict(
        type='mmdet.DetDataPreprocessor',
        mean=[128],
        std=[128],
        bgr_to_rgb=True,
        pad_size_divisor=32),
    architecture=ssd,
    float_checkpoint=None,
    input_shapes=(1, 3, 300, 300),
    quantizer=dict(
        type='mmrazor.OpenVINOQuantizer',
        global_qconfig=global_qconfig,
        tracer=dict(
            type='mmrazor.CustomTracer',
            skipped_methods=[
                'mmdet.models.dense_heads.base_dense_head.BaseDenseHead.predict_by_feat',
                'mmdet.models.dense_heads.ssd_head.SSDHead.loss_by_feat'
            ])))

model_wrapper_cfg = dict(
    type='mmrazor.MMArchitectureQuantDDP',
    broadcast_buffers=False,
    find_unused_parameters=False)

# train, val, test setting
train_cfg = dict(
    _delete_=True,
    type='mmrazor.QATEpochBasedLoop',
    max_epochs=10,
    val_interval=1)
val_cfg = dict(_delete_=True, type='mmrazor.QATValLoop')

default_hooks = dict(sync=dict(type='mmrazor.SyncBuffersHook'))

optim_wrapper = dict(
    optimizer=dict(type='SGD', lr=0.0001, momentum=0.9, weight_decay=0.0001))

# learning policy
param_scheduler = dict(
    _delete_=True, type='ConstantLR', factor=1.0, by_epoch=True)
```
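Both configs use `PerChannelMinMaxObserver` for weights. Conceptually, that observer tracks one (min, max) pair per output channel and derives one scale per channel, rather than a single scale for the whole tensor. A small plain-Python sketch of the idea (an illustration of the scheme, not the mmrazor code):

```python
def per_channel_scales(weight_rows, bits=8):
    """Compute one symmetric quantization scale per output channel.

    weight_rows: list of channels, each a list of float weights,
    roughly what a per-channel min-max observer would accumulate.
    """
    qmax = 2 ** (bits - 1) - 1  # 127 for int8
    scales = []
    for row in weight_rows:
        # symmetric scheme: the range is driven by the largest magnitude
        amax = max(abs(min(row)), abs(max(row)))
        scales.append(amax / qmax)
    return scales

# one scale per channel, so a small-magnitude channel keeps its precision
print(per_channel_scales([[-0.5, 0.25], [2.0, -1.0]]))
```

A per-tensor observer would instead return a single scale driven by the channel with the largest magnitude, wasting precision on the smaller channels.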