The program runs too long #6

Open
HoangNguyenHuu opened this issue May 18, 2018 · 2 comments

Comments

@HoangNguyenHuu

When I ran your code five days ago, it printed the information below. Nothing more has appeared since then and no models have been created, but the program is still running on my computer. Do you know why? (Dataset: training set - 4,500 sentences, development set - 1,100 sentences)

[dynet] random seed: 969247908
[dynet] allocating memory: 512MB
[dynet] memory allocation done.
Loaded config file sucessfully.
pretrained_embeddings_file ../data/emb/vi.txt
data_dir ../data/treebank
train_file ../data/treebank/train.conllu
dev_file ../data/treebank/dev.conllu
test_file ../data/treebank/test.conllu
min_occur_count 2
save_dir ../ckpt/default
config_file ../ckpt/default/config.cfg
save_model_path ../ckpt/default/model
save_vocab_path ../ckpt/default/vocab
load_dir ../ckpt/default
load_model_path ../ckpt/default/model
load_vocab_path ../ckpt/default/vocab
lstm_layers 3
word_dims 100
tag_dims 100
dropout_emb 0.33
lstm_hiddens 400
dropout_lstm_input 0.33
dropout_lstm_hidden 0.33
mlp_arc_size 500
mlp_rel_size 100
dropout_mlp 0.33
learning_rate 2e-3
decay .75
decay_steps 5000
beta_1 .9
beta_2 .9
epsilon 1e-12
num_buckets_train 40
num_buckets_valid 10
num_buckets_test 10
train_iters 50000
train_batch_size 5000
test_batch_size 5000
validate_every 100
save_after 5000
#words in training set: 3544
Vocab info: #words 10936, #tags 28 #rels 33
(400, 600)
Orthogonal pretrainer loss: 5.20e-27
(400, 600)
Orthogonal pretrainer loss: 7.02e-27
(400, 1200)
Orthogonal pretrainer loss: 2.79e-30
(400, 1200)
Orthogonal pretrainer loss: 2.77e-30
(400, 1200)
Orthogonal pretrainer loss: 2.82e-30
(400, 1200)
Orthogonal pretrainer loss: 2.93e-30
(600, 800)
Orthogonal pretrainer loss: 3.90e-23
@HoangNguyenHuu (Author)

I fixed it by adding another i -= 1 after line 57, inside the second while loop in k_means.py. The same bug was reported against the TensorFlow implementation:
tdozat/Parser-v1#8
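
A minimal sketch of the bug class being described here, not the actual k_means.py from this repository (the function name, the list of bucket sizes, and the surrounding logic are all assumptions for illustration): when a while loop deletes elements from the list it is indexing, the index has to be stepped back after each deletion, otherwise the element that slides into position i is skipped or, depending on the surrounding bookkeeping, the loop never makes progress.

    # Hypothetical illustration of the missing "i -= 1" pattern; this is not
    # the real k_means.py code, only the shape of the bug and of the fix.
    def drop_empty_buckets(sizes):
        """Remove zero-sized buckets from a list, scanning it in place."""
        i = 0
        while i < len(sizes):
            if sizes[i] == 0:
                del sizes[i]
                i -= 1  # the fix: without this, the item that slides into
                        # slot i is never examined
            i += 1
        return sizes

    if __name__ == "__main__":
        print(drop_empty_buckets([3, 0, 0, 5, 0, 2]))  # -> [3, 5, 2]

In the parser's length-bucketing loop, the analogous missed index adjustment can keep the loop condition from ever becoming false, which would match the hang described above.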

@jcyk (Owner) commented May 26, 2018

Thank you for pointing that out!
