def inference(self, X, X_len, reuse=None):
    with tf.name_scope('score'):
        # The weight matrix is treated as an embedding matrix
        # Using lookup & reduce_sum to complete calculation of unary score
        features = tf.nn.embedding_lookup(self.W, X)
        feat_vec = tf.reduce_sum(features, axis=2)
        feat_vec = tf.reshape(feat_vec, [-1, self.nb_classes])
        scores = feat_vec + self.b
        # scores = tf.nn.softmax(scores)
        scores = tf.reshape(scores, [-1, self.time_steps, self.nb_classes])
        return scores
About the line scores = feat_vec + self.b above: why is the bias term self.b added here? The CRF formula does not include a bias term. Thanks.
The bias term was actually just added in passing and does not strictly follow the CRF formula. It can be understood as the weights for a group of features that do not appear in the feature-function set, playing a role similar to the weights assigned to OOV features.
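The explanation above can be illustrated with a small NumPy sketch (all shapes and names here are hypothetical, not taken from the repo): adding self.b to the summed feature weights is equivalent to giving every token one extra "always-on" feature whose per-class weight row is exactly b.

```python
import numpy as np

# Hypothetical setup: W is the feature-weight "embedding" table
# of shape (n_features, n_classes); X holds one token's active
# feature ids.
n_features, n_classes = 5, 3
rng = np.random.default_rng(0)
W = rng.normal(size=(n_features, n_classes))
b = rng.normal(size=(n_classes,))
X = np.array([1, 3])  # active feature ids for one token

# Unary score as in the snippet: lookup + sum + bias
score_with_bias = W[X].sum(axis=0) + b

# Equivalent view: append a constant "always-on" feature whose
# weight row is b, and have every token fire it
W_aug = np.vstack([W, b])          # new feature id = n_features
X_aug = np.append(X, n_features)   # this feature is always active
score_augmented = W_aug[X_aug].sum(axis=0)

assert np.allclose(score_with_bias, score_augmented)
```

So the bias does not leave the linear-CRF family; it is just one more feature weight that happens to fire on every position.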