
Poor code results (代码结果不好) #2

Open
geekbeing opened this issue May 7, 2021 · 3 comments
Assignees: frankaging
Labels: bug (Something isn't working)

Comments

geekbeing commented May 7, 2021

Hello, I reproduced your code, but the results are far from those reported in the paper; the model even appears to be ineffective. I did not modify your code, I just ran it directly.
Below are the results for the CGBERT model at epoch = 29:

```
05/07/2021 01:46:20 - INFO - util.train_helper - epoch = 29
05/07/2021 01:46:20 - INFO - util.train_helper - global_step = 18750
05/07/2021 01:46:20 - INFO - util.train_helper - loss = 1.0986090652494622
05/07/2021 01:46:20 - INFO - util.train_helper - test_loss = 1.0970154359082507
05/07/2021 01:46:20 - INFO - util.train_helper - test_accuracy = 0.8382118147951038
05/07/2021 01:46:20 - INFO - util.train_helper - aspect_strict_Acc = 0.47897817988291647
05/07/2021 01:46:20 - INFO - util.train_helper - aspect_Macro_F1 = 0
05/07/2021 01:46:20 - INFO - util.train_helper - aspect_Macro_AUC = 0.47944606445295795
05/07/2021 01:46:20 - INFO - util.train_helper - sentiment_Acc = 0.6661184210526315
05/07/2021 01:46:20 - INFO - util.train_helper - sentiment_Macro_AUC = 0.48320620795185415
```

Below are the results for the QACGBERT model at epoch = 24, along with the final result:
```
05/08/2021 00:45:03 - INFO - util.train_helper - ***** Evaluation Interval Hit *****
05/08/2021 00:45:07 - INFO - util.train_helper - ***** Evaluation results *****
05/08/2021 00:45:07 - INFO - util.train_helper - epoch = 24
05/08/2021 00:45:07 - INFO - util.train_helper - global_step = 15750
05/08/2021 00:45:07 - INFO - util.train_helper - loss = 1.4141508170202666
05/08/2021 00:45:07 - INFO - util.train_helper - test_loss = 1.4643629932118034
05/08/2021 00:45:07 - INFO - util.train_helper - test_accuracy = 0.40375
05/08/2021 00:45:07 - INFO - util.train_helper - aspect_P = 0.3350454365863295
05/08/2021 00:45:07 - INFO - util.train_helper - aspect_R = 0.8273170731707317
05/08/2021 00:45:07 - INFO - util.train_helper - aspect_F = 0.47694038245219356
05/08/2021 00:45:07 - INFO - util.train_helper - sentiment_Acc_4_classes = 0.36390243902439023
05/08/2021 00:45:07 - INFO - util.train_helper - sentiment_Acc_3_classes = 0.5220966084275437
05/08/2021 00:45:07 - INFO - util.train_helper - sentiment_Acc_2_classes = 0.6234357224118316
05/08/2021 00:45:34 - INFO - util.train_helper - ***** Global best performance *****
05/08/2021 00:45:34 - INFO - util.train_helper - accuracy on dev set: 0.4942233632862644
```

frankaging (Owner) commented

Hi,

Thanks for your feedback. Could you provide the command you ran so I can root-cause the issue?

Thanks.

geekbeing (Author) commented May 9, 2021

I just ran the run.sh file with the following command:

```bash
# example running command
CUDA_VISIBLE_DEVICES=0 python run_classifier.py \
--task_name semeval_NLI_M \
--data_dir ../datasets/semeval2014/ \
--output_dir ../results/semeval2014/QACGBERT-2/ \
--model_type QACGBERT \
--do_lower_case \
--max_seq_length 128 \
--train_batch_size 24 \
--eval_batch_size 24 \
--learning_rate 2e-5 \
--num_train_epochs 30 \
--vocab_file ../models/BERT-Google/vocab.txt \
--bert_config_file ../models/BERT-Google/bert_config.json \
--init_checkpoint ../models/BERT-Google/pytorch_model.bin \
--seed 123 \
--evaluate_interval 250 \
--context_standalone
```

frankaging (Owner) commented

Thanks! I am sorry that I did not keep this repo up to date. Here is why you are seeing this catastrophic failure.

I updated this repo for other projects to study different learning-rate schedules for different layers, which is not a topic of this paper. That change introduced some issues (as it seems from your runs!). If you look at my recent push, you will see that I commented out those lines for the PR you opened:
770d810
Without this change, the code was trying a very high learning rate for some of the linear layers, and that caused training to fail.
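
For context, here is a minimal sketch of the kind of per-layer learning-rate grouping that can cause this. It assumes a standard PyTorch fine-tuning setup; the module names and learning-rate values are illustrative and not taken from this repo.

```python
import torch
from torch import nn

# Stand-ins for the pretrained encoder and the task-specific classifier head.
encoder = nn.Linear(768, 768)
classifier = nn.Linear(768, 4)

base_lr = 2e-5

# Per-layer learning-rate grouping: giving some linear layers a much higher
# rate than the BERT base rate can easily destabilize fine-tuning.
param_groups = [
    {"params": encoder.parameters(), "lr": base_lr},
    {"params": classifier.parameters(), "lr": 1e-3},  # aggressively high head LR
]
optimizer = torch.optim.Adam(param_groups)

# Reverting to a single base learning rate for all parameters, as in the
# updated code, avoids this failure mode.
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=base_lr
)
```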

You can do the following to fix the catastrophic failure:
(1) pull the latest code;
(2) rerun with the updated commands.

Since I was using this repo for other projects, I may have forgotten to remove leftover code here and there. When I have time, I will update it all at once. Thanks again for your findings, they matter! If you still see this catastrophic failure, please let me know; if not, please kindly close this issue.

Thanks,
Zen

frankaging added the bug label May 9, 2021
frankaging self-assigned this May 9, 2021