ShreeCharranR/Text-Representations
Statistical Models

Feature engineering techniques:

  • Bag of Words Model (TF)
  • Bag of N-grams Model
  • TF-IDF Model
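As a rough illustration of the first two count-based models, a minimal plain-Python sketch (whitespace tokenization, lowercasing; the function name and toy sentence are illustrative assumptions, not from this repository):

```python
from collections import Counter

def bag_of_words(doc, n=1):
    """Count term frequencies (n=1, Bag of Words) or n-gram
    frequencies (n>1, Bag of N-grams) in one document."""
    tokens = doc.lower().split()
    grams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return Counter(grams)

doc = "the sky is blue and the sun is bright"
unigrams = bag_of_words(doc)        # e.g. unigrams["the"] == 2
bigrams = bag_of_words(doc, n=2)    # e.g. bigrams["is blue"] == 1
```

In practice a library vectorizer (such as scikit-learn's CountVectorizer) would also handle vocabulary indexing and produce a sparse document-term matrix, but the counting logic is the same.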

The Bag of Words model can run into problems on large corpora. Because its feature vectors are based on absolute term frequencies, terms that occur frequently across all documents tend to overshadow rarer but more informative terms in the feature set. The TF-IDF model combats this by applying a scaling (normalizing) factor in its computation. TF-IDF stands for Term Frequency-Inverse Document Frequency and combines two metrics: term frequency (tf) and inverse document frequency (idf). The technique was originally developed for ranking query results in search engines and is now an indispensable model in information retrieval and NLP.

Mathematically, we can define TF-IDF as tfidf = tf x idf, which can be expanded as:

    tfidf(w, D) = tf(w, D) x idf(w, D) = tf(w, D) x log(N / df(w))

where N is the total number of documents in the corpus C and df(w) is the document frequency of the word w.

Here, tfidf(w, D) is the TF-IDF score for word w in document D.

  • The term tf(w, D) represents the term frequency of the word w in document D, which can be obtained from the Bag of Words model.
  • The term idf(w, D) is the inverse document frequency of the word w, computed as the log of the total number of documents in the corpus C divided by the document frequency of w, i.e. the number of documents in the corpus in which w occurs.

  • Similarity Features
  • Clustering using Document Similarity Features
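The formula above can be implemented directly in plain Python. This is a minimal sketch: the toy corpus and helper names are illustrative assumptions, and the cosine-similarity step shows the kind of pairwise document similarity that the similarity and clustering features build on:

```python
import math
from collections import Counter

def tfidf_vectors(corpus):
    """Return one sparse tf-idf vector (dict) per document,
    using idf = log(N / df(w)) as defined above."""
    docs = [Counter(doc.lower().split()) for doc in corpus]
    n_docs = len(docs)
    df = Counter()                      # document frequency of each word
    for tf in docs:
        df.update(tf.keys())
    return [{w: tf[w] * math.log(n_docs / df[w]) for w in tf} for tf in docs]

def cosine_similarity(a, b):
    """Cosine similarity between two sparse tf-idf vectors."""
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

corpus = [
    "the sky is blue",
    "the sun is bright",
    "the sun in the sky is bright",
]
vecs = tfidf_vectors(corpus)
# A word that appears in every document (e.g. "the") gets idf = log(1) = 0,
# so it contributes nothing to the similarity between documents.
sim_12 = cosine_similarity(vecs[1], vecs[2])
```

A full pairwise similarity matrix built this way can then be fed to a clustering algorithm (e.g. hierarchical clustering) to group related documents.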

Deep Learning Models

  • Word2Vec
  • GloVe
  • FastText

Transfer Learning

  • Google Word2Vec
  • BERT (Similarity & Representation)
  • Word2Vec
  • GloVe
  • FastText
