
Hello! A question about the R-FCN object detection network #69

Open
hp-93 opened this issue Feb 27, 2018 · 8 comments

Comments

@hp-93

hp-93 commented Feb 27, 2018

I set up the R-FCN Python interface on Ubuntu and did a quick training run with rfcn_end2end_ohem.sh (ResNet-101); the accuracy on the VOC dataset roughly matches what is reported on GitHub. However, when I change the backbone to resnext50-32x4d, following the resnext50 network definition files under \det\faster_rcnn\models\pascal_voc\resnext50-32x4d in this project and using resnext50-32x4d.caffemodel from the cloud drive as the pretrained model, the detection accuracy I end up with is poor:

AP for aeroplane = 0.4099
AP for bicycle = 0.2899
AP for bird = 0.2834
AP for boat = 0.1909
AP for bottle = 0.1109
AP for bus = 0.4431
AP for car = 0.3530
AP for cat = 0.5736
AP for chair = 0.1138
AP for cow = 0.2244
AP for diningtable = 0.2423
AP for dog = 0.5162
AP for horse = 0.4655
AP for motorbike = 0.4042
AP for person = 0.2541
AP for pottedplant = 0.0640
AP for sheep = 0.1348
AP for sofa = 0.2534
AP for train = 0.5169
AP for tvmonitor = 0.1607
Mean AP = 0.300
I do not know what the cause is. Is there something I have overlooked?

@soeaver
Owner

soeaver commented Feb 27, 2018

If you are using the xx-merge.prototxt structure, you must finetune from xx-merge.caffemodel.
Otherwise, check the mean/variance values and whether the test script is correct.
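A quick way to check whether the training prototxt actually matches the pretrained caffemodel is to compare layer names and see which layers would be left at random initialization. The sketch below is only an illustration (the file paths are placeholders, not files from this repo) and assumes pycaffe is importable:

import caffe
from caffe.proto import caffe_pb2
from google.protobuf import text_format

# Layers that actually receive pretrained weights: load the backbone deploy
# prototxt together with the caffemodel and list its parameterized layers.
weights_net = caffe.Net('deploy_resnext50-32x4d.prototxt',      # placeholder path
                        'resnext50-32x4d.caffemodel', caffe.TEST)
pretrained = set(weights_net.params.keys())

# Layer names expected by the R-FCN training prototxt, parsed as text only
# (not instantiated, so the Python data layer does not need to run).
train_def = caffe_pb2.NetParameter()
with open('train_agnostic_ohem.prototxt') as f:                 # placeholder path
    text_format.Merge(f.read(), train_def)

for layer in train_def.layer:
    if layer.type in ('Convolution', 'InnerProduct', 'BatchNorm', 'Scale') \
            and layer.name not in pretrained:
        print('randomly initialized (no pretrained weights):', layer.name)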

@hp-93
Author

hp-93 commented Feb 28, 2018

So far I have not generated a pretrained xx-merge.caffemodel myself; I directly used the resnext50-32x4d.caffemodel from the Baidu Pan link in your cls directory, so for the backbone prototxt I used deploy_resnext50-32x4d.prototxt accordingly (with the necessary modifications, e.g. freezing the first few layers so they do not learn). I do not have an xx-merge.caffemodel.
As for the mean/variance: my Caffe build is caffe_window, and for R-FCN I use py-R-FCN-master rather than your py-RFCN-priv-master and caffe-priv, so I assume no mean/variance changes should be involved, right?

@soeaver
Owner

soeaver commented Feb 28, 2018

py-R-FCN-master does not support the divide-by-variance step, and ResNeXt's mean and variance do differ from ResNet's, so that is a problem, but it should not cause such a large accuracy drop. Beyond that I am not sure.
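For reference, in py-faster-rcnn style code the mean subtraction happens in lib/utils/blob.py. Below is a hedged sketch of how one might also divide by a per-channel std there; the std values are placeholders and should be replaced with whatever the backbone was actually trained with:

import cv2
import numpy as np

PIXEL_STDS = np.array([[[57.375, 57.12, 58.395]]])   # example BGR stds, not verified for this model

def prep_im_for_blob(im, pixel_means, target_size, max_size):
    """Mean-subtract, std-divide and scale an image for use in a blob."""
    im = im.astype(np.float32, copy=False)
    im -= pixel_means            # the stock code only subtracts the mean
    im /= PIXEL_STDS             # added line: divide by the training-time std
    im_shape = im.shape
    im_size_min = np.min(im_shape[0:2])
    im_size_max = np.max(im_shape[0:2])
    im_scale = float(target_size) / float(im_size_min)
    if np.round(im_scale * im_size_max) > max_size:   # cap the longer side at max_size
        im_scale = float(max_size) / float(im_size_max)
    im = cv2.resize(im, None, None, fx=im_scale, fy=im_scale,
                    interpolation=cv2.INTER_LINEAR)
    return im, im_scale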

@hp-93
Author

hp-93 commented Feb 28, 2018

OK, thanks. The key issue is that the training accuracy never converges; it fluctuates up and down the whole time, and the swings are large:
I0226 23:27:26.994686 7644 solver.cpp:244] Train net output #0: accuarcy = 0.296875
I0226 23:27:26.994693 7644 solver.cpp:244] Train net output #1: loss_bbox = 0.0694371 (* 1 = 0.0694371 loss)
I0226 23:27:26.994698 7644 solver.cpp:244] Train net output #2: loss_cls = 2.26333 (* 1 = 2.26333 loss)
I0226 23:27:26.994701 7644 solver.cpp:244] Train net output #3: rpn_cls_loss = 0.179019 (* 1 = 0.179019 loss)
I0226 23:27:26.994704 7644 solver.cpp:244] Train net output #4: rpn_loss_bbox = 0.191131 (* 1 = 0.191131 loss)
I0226 23:27:26.994709 7644 sgd_solver.cpp:106] Iteration 720, lr = 0.001
I0226 23:27:32.879098 7644 solver.cpp:228] Iteration 740, loss = 2.51227
I0226 23:27:32.879128 7644 solver.cpp:244] Train net output #0: accuarcy = 0.609375
I0226 23:27:32.879135 7644 solver.cpp:244] Train net output #1: loss_bbox = 0.0947561 (* 1 = 0.0947561 loss)
I0226 23:27:32.879139 7644 solver.cpp:244] Train net output #2: loss_cls = 1.35206 (* 1 = 1.35206 loss)
I0226 23:27:32.879142 7644 solver.cpp:244] Train net output #3: rpn_cls_loss = 0.168547 (* 1 = 0.168547 loss)
I0226 23:27:32.879146 7644 solver.cpp:244] Train net output #4: rpn_loss_bbox = 0.0914559 (* 1 = 0.0914559 loss)
I0226 23:27:32.879150 7644 sgd_solver.cpp:106] Iteration 740, lr = 0.001
I0226 23:27:40.231842 7644 solver.cpp:228] Iteration 760, loss = 1.9404
I0226 23:27:40.231873 7644 solver.cpp:244] Train net output #0: accuarcy = 0.398438
I0226 23:27:40.231879 7644 solver.cpp:244] Train net output #1: loss_bbox = 0.168068 (* 1 = 0.168068 loss)
I0226 23:27:40.231883 7644 solver.cpp:244] Train net output #2: loss_cls = 1.87442 (* 1 = 1.87442 loss)
I0226 23:27:40.231886 7644 solver.cpp:244] Train net output #3: rpn_cls_loss = 0.0348021 (* 1 = 0.0348021 loss)
I0226 23:27:40.231890 7644 solver.cpp:244] Train net output #4: rpn_loss_bbox = 0.0939938 (* 1 = 0.0939938 loss)
I0226 23:27:40.231894 7644 sgd_solver.cpp:106] Iteration 760, lr = 0.001
I0226 23:27:46.188066 7644 solver.cpp:228] Iteration 780, loss = 2.85662
I0226 23:27:46.188096 7644 solver.cpp:244] Train net output #0: accuarcy = 0.328125
I0226 23:27:46.188102 7644 solver.cpp:244] Train net output #1: loss_bbox = 0.36219 (* 1 = 0.36219 loss)
I0226 23:27:46.188107 7644 solver.cpp:244] Train net output #2: loss_cls = 2.1627 (* 1 = 2.1627 loss)
I0226 23:27:46.188110 7644 solver.cpp:244] Train net output #3: rpn_cls_loss = 0.0994308 (* 1 = 0.0994308 loss)
I0226 23:27:46.188113 7644 solver.cpp:244] Train net output #4: rpn_loss_bbox = 0.259526 (* 1 = 0.259526 loss)
I0226 23:27:46.188117 7644 sgd_solver.cpp:106] Iteration 780, lr = 0.001
speed: 0.312s / iter
I0226 23:27:52.118432 7644 solver.cpp:228] Iteration 800, loss = 3.19266
I0226 23:27:52.118461 7644 solver.cpp:244] Train net output #0: accuarcy = 0
I0226 23:27:52.118468 7644 solver.cpp:244] Train net output #1: loss_bbox = 0.111914 (* 1 = 0.111914 loss)
I0226 23:27:52.118472 7644 solver.cpp:244] Train net output #2: loss_cls = 3.05117 (* 1 = 3.05117 loss)
I0226 23:27:52.118476 7644 solver.cpp:244] Train net output #3: rpn_cls_loss = 0.125618 (* 1 = 0.125618 loss)
I0226 23:27:52.118479 7644 solver.cpp:244] Train net output #4: rpn_loss_bbox = 0.123091 (* 1 = 0.123091 loss)
I0226 23:27:52.118484 7644 sgd_solver.cpp:106] Iteration 800, lr = 0.001
I0226 23:27:58.066153 7644 solver.cpp:228] Iteration 820, loss = 3.22972
I0226 23:27:58.066182 7644 solver.cpp:244] Train net output #0: accuarcy = 0
I0226 23:27:58.066190 7644 solver.cpp:244] Train net output #1: loss_bbox = 0.291357 (* 1 = 0.291357 loss)
I0226 23:27:58.066195 7644 solver.cpp:244] Train net output #2: loss_cls = 2.98972 (* 1 = 2.98972 loss)
I0226 23:27:58.066197 7644 solver.cpp:244] Train net output #3: rpn_cls_loss = 0.149604 (* 1 = 0.149604 loss)
I0226 23:27:58.066201 7644 solver.cpp:244] Train net output #4: rpn_loss_bbox = 0.269975 (* 1 = 0.269975 loss)

@soeaver
Owner

soeaver commented May 5, 2018

@firefox1031 The image preprocessing for my caffe models always includes dividing by the variance, while https://github.com/YuwenXiong/py-R-FCN does not support it; that may be the problem.

@firefox1031

@soeaver Thank you for the reply; I will try your version next. One more question: your models mention merging the batchnorm layer into the scale layer. How exactly is that done?
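For what it is worth, the standard BatchNorm-folding arithmetic is sketched below; it is not confirmed here that this is the exact procedure used to produce the *-merge models. For a Convolution followed by BatchNorm and Scale, the three layers collapse into the convolution's weights and bias:

import numpy as np

def fold_bn_into_conv(w, b, mean, var, gamma, beta, eps=1e-5):
    """w: (out_c, in_c, kh, kw) conv weights; b: (out_c,) conv bias (zeros if absent).
    mean/var come from the BatchNorm layer, gamma/beta from the Scale layer.
    Note: Caffe's BatchNorm stores (mean_sum, var_sum, count); divide the first
    two blobs by the third before passing them here."""
    std = np.sqrt(var + eps)
    w_folded = w * (gamma / std).reshape(-1, 1, 1, 1)
    b_folded = (b - mean) * gamma / std + beta
    return w_folded, b_folded

# After folding, the BatchNorm and Scale layers are dropped from the prototxt
# and the convolution keeps the folded weights with bias_term: true.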

@firefox1031

@soeaver Hi, I tried your version and the loss still does not converge. I am using my own dataset with three classes in total. I made the corresponding class-count changes in rfcn_voc_resnet18-priv-merge-ohem.prototxt (see the channel-count sketch after the cfg below), and the pretrained model is the one from https://github.com/HolmesShuan/ResNet-18-Caffemodel-on-ImageNet. The cfg file is set as follows:
EXP_DIR: rfcn_end2end
TRAIN:
  HAS_RPN: True
  IMS_PER_BATCH: 1
  BBOX_NORMALIZE_TARGETS_PRECOMPUTED: True
  RPN_POSITIVE_OVERLAP: 0.7
  RPN_BATCHSIZE: 256
  PROPOSAL_METHOD: gt
  BG_THRESH_LO: 0.1
  BATCH_SIZE: 128
  AGNOSTIC: False
  SNAPSHOT_ITERS: 10000
  RPN_PRE_NMS_TOP_N: 6000
  RPN_POST_NMS_TOP_N: 300
TEST:
  HAS_RPN: True
  AGNOSTIC: False
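As a side note on "the corresponding class-count changes": a rough sketch of the channel counts that usually have to stay consistent in an R-FCN prototxt, assuming the default 7x7 position-sensitive grid (layer names and conventions may differ in this repo's prototxts):

k = 7                      # position-sensitive grid (k x k)
num_classes = 3 + 1        # 3 foreground classes + background, per the comment above

# classification branch: k*k*(C+1) position-sensitive score maps
print('rfcn_cls num_output:', k * k * num_classes)                 # 7*7*4 = 196

# bbox branch: depends on whether regression is class-agnostic
print('rfcn_bbox num_output (agnostic):', k * k * 4 * 2)           # bg/fg only
print('rfcn_bbox num_output (per-class):', k * k * 4 * num_classes)

# the Python roi-data layer's param_str (num_classes) usually has to match as well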
During training the loss also jumps violently; a brief excerpt:
Iteration 31880, lr = 0.0001, loss = 0.89979, accuracy = 0.90411
Iteration 31900, lr = 0.0001, loss = 2.11952, accuracy = 0
Iteration 31920, lr = 0.0001, loss = 0.97738, accuracy = 0.769231
Iteration 31940, lr = 0.0001, loss = 1.89307, accuracy = 0

These violent jumps show up from time to time. Why does this happen?
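One observation, offered as a sketch rather than a diagnosis: with IMS_PER_BATCH: 1 every reported loss/accuracy comes from a single image, so large per-iteration swings are expected, and a moving average over the log is a more meaningful convergence signal. A minimal parsing example (the log path and format are assumptions):

import re
import numpy as np

iter_loss = re.compile(r'Iteration (\d+), loss = ([\d.]+)')

losses = []
with open('train.log') as f:            # placeholder path
    for line in f:
        m = iter_loss.search(line)
        if m:
            losses.append(float(m.group(2)))

losses = np.array(losses)
window = 50                              # smooth over 50 reported iterations
if len(losses) >= window:
    smoothed = np.convolve(losses, np.ones(window) / window, mode='valid')
    print('raw last value : %.3f' % losses[-1])
    print('smoothed trend : %.3f -> %.3f' % (smoothed[0], smoothed[-1]))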
