The run crashes after reaching `start evaluate engines...`:
100%|██████████| 5796/5796 [1:35:50<00:00, 1.01it/s]
start evaluate engines...
0%| | 0/1159 [00:00<?, ?it/s]WARNING:tensorflow:The parameters output_attentions, output_hidden_states and use_cache cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: config=XConfig.from_pretrained('name', output_attentions=True)).
WARNING:tensorflow:The parameter return_dict cannot be set in graph mode and will always be set to True.
2024-06-16 20:29:07.033624: W tensorflow/core/grappler/optimizers/loop_optimizer.cc:907] Skipping loop optimization for Merge node with control input: cond/branch_executed/_10
Process finished with exit code -1073740791 (0xC0000409)
Hello, I've run into a few problems.
Nothing is being written into the checkpoints folder. My config:
# Set the model save location here. The code supports continuing training from the original model; be sure to change this for new data or when training from scratch!
checkpoints_dir=checkpoints/0616
# Model name
checkpoint_name=model0616
I downloaded the HuggingFace files to my local machine; I'm not sure whether writing the path directly like this works:
use_pretrained_model=True
pretrained_model=Bert
# Bert/AlBert
huggingface_tag=E:\1_Code\Transformer\BERT_Data\bert230\bert-base-chinese
# Name of the model on huggingface
finetune=True
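For what it's worth, `from_pretrained` in the `transformers` library does accept a local directory in place of a hub name, as long as the directory contains the config, vocab, and weight files. A small sketch (the function name `snapshot_ok` is my own, not from this project) to check that a local snapshot is complete before pointing `huggingface_tag` at it:

```python
import os

# Files from_pretrained needs in a local bert-base-chinese snapshot; the
# weight file differs by backend (tf_model.h5 for TF, pytorch_model.bin
# or model.safetensors for PyTorch).
REQUIRED = ["config.json", "vocab.txt"]
WEIGHTS = ("tf_model.h5", "pytorch_model.bin", "model.safetensors")

def snapshot_ok(path: str) -> bool:
    """Return True if the local HuggingFace snapshot looks complete."""
    has_required = all(os.path.isfile(os.path.join(path, f)) for f in REQUIRED)
    has_weights = any(os.path.isfile(os.path.join(path, w)) for w in WEIGHTS)
    return has_required and has_weights

# e.g. snapshot_ok(r"E:\1_Code\Transformer\BERT_Data\bert230\bert-base-chinese")
```

On Windows, writing the path as a raw string (`r"E:\..."`) in Python code avoids backslash-escape issues; in a plain config file the backslashes are usually fine as-is.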
Also, GPU memory usage seems to be capped at 70% of VRAM, and I don't know how to change this limit.
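I can't tell where your 70% cap comes from without seeing the code; if the project uses TF1-style sessions, search it for `per_process_gpu_memory_fraction=0.7` (`tf.compat.v1.GPUOptions`) and adjust that value. With TF 2.x, the per-process limit can instead be set through `tf.config` before the GPU is first used; a sketch, assuming a single GPU:

```python
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Option 1: allocate GPU memory on demand instead of a fixed fraction.
    # Must be called before the GPU is initialized.
    tf.config.experimental.set_memory_growth(gpus[0], True)

    # Option 2: hard-cap this process at a fixed amount, e.g. 6 GB:
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=6144)])
```

This is only a sketch of the standard TF 2.x API; where exactly to place it depends on the project's startup code.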