Hi, thanks for sharing your work! I ran into a problem while following your steps: when I use my custom vocab.txt, the tokenizer.json in the config generated by init_custdata_model.py still contains the original vocabulary rather than my custom one. As a result, processor.tokenizer.get_vocab() returns the old vocabulary, which breaks encode and decode during training and testing. Looking forward to your reply, and thanks again!
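A quick way to confirm the symptom described above is to compare the tokens in the custom vocab.txt against the vocabulary the tokenizer actually loaded. This is a minimal sketch; the helper name and the toy data are illustrative, and in the real project you would read your vocab.txt and pass the dict returned by processor.tokenizer.get_vocab():

```python
def missing_tokens(vocab_txt_lines, tokenizer_vocab):
    """Return tokens from the custom vocab file that the
    tokenizer's loaded vocabulary does not contain."""
    custom = {line.strip() for line in vocab_txt_lines if line.strip()}
    loaded = set(tokenizer_vocab)  # keys of get_vocab() are token strings
    return custom - loaded

# Toy data standing in for open("vocab.txt") and
# processor.tokenizer.get_vocab():
custom_lines = ["[PAD]", "[UNK]", "你", "好"]
old_vocab = {"[PAD]": 0, "[UNK]": 1, "a": 2}
print(sorted(missing_tokens(custom_lines, old_vocab)))  # → ['你', '好']
```

If this set is non-empty after running init_custdata_model.py, the generated tokenizer.json was not rebuilt from the custom vocabulary.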
First run `python gen_vocab.py` to build the dictionary.
Hello, I did run gen_vocab.py first and built the dictionary from my custom character set.