WladimirSidorenko/DASA

Discourse-Aware Sentiment Analysis

License: MIT

Description

This package provides several implementations of common discourse-aware sentiment analysis (DASA) methods. Most of these approaches infer the overall polarity of the input (e.g., of a tweet) from the polarity scores of its elementary discourse units (EDUs), either by accumulating these scores over the RST tree or by choosing the single EDU that is most representative of the whole analyzed text (e.g., the last discourse segment).
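
As a rough illustration of the score-accumulation variant, here is a minimal Python sketch with invented data structures (the package itself reads RST trees from the prepared JSON files; none of the names below are part of its actual API):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Toy discourse-tree structures; all names here are illustrative only.
@dataclass
class Node:
    scores: Optional[List[float]] = None      # (pos, neg, neu) of a leaf EDU
    children: List["Node"] = field(default_factory=list)

def accumulate(node: Node) -> List[float]:
    """Sum the EDU polarity scores bottom-up over the tree."""
    if not node.children:                     # leaf EDU
        return list(node.scores)
    total = [0.0, 0.0, 0.0]
    for child in node.children:
        for i, s in enumerate(accumulate(child)):
            total[i] += s
    return total

LABELS = ("positive", "negative", "neutral")
tree = Node(children=[Node(scores=[0.75, 0.0, 0.25]),
                      Node(scores=[0.25, 0.5, 0.25])])
scores = accumulate(tree)
print(LABELS[scores.index(max(scores))])      # highest accumulated score wins
```

The single-EDU alternative would instead discard the tree and return the scores of one designated segment.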

Data Preparation

We use PotTS and SB10k as primary data sources for evaluation.

Tagging, Parsing, and Discourse Segmentation

Before using these corpora, we processed all tweets in both datasets with the text-normalization pipeline [SIDARENKA] and parsed them with the Mate dependency parser [BOHNET]. We then converted the resulting CoNLL files to the TSV format with the script conll2tsv and exported the resulting TSV to JSON with the script tsv2json. Finally, we added information about discourse segments and automatically predicted sentiment scores for each of these segments with the scripts add_segmentation and add_polarity_scores, respectively.

Discourse Parsing

To derive RST trees for the obtained tweets, we used the script add_rst_trees from the RSTParser package:

pwd
/home/sidorenko/Projects/RSTParser

git rev-parse HEAD
8b595c3913daa68745758c1eb3420bfa90cbb264

for f in ../DASA/data/*/*/*.json; do \
  ./scripts/add_rst_trees bhatia data/pcc-dis-bhatia/test/rstparser.bhatia.model "$f" > 1 && \
  mv 1 "$f";
done

Examples

DDR

To determine the polarity of a tweet using the discourse depth reweighting (DDR) method [BHATIA], you can use the following command to create the model:

dasa_sentiment -v train -t ddr -r bhatia data/PotTS/train/*.json data/PotTS/dev/*.json

and then execute the following scripts to predict the labels for the test sets and evaluate the quality of the resulting model:

dasa_sentiment -v test data/PotTS/test/*.json > data/PotTS/predicted/ddr/ddr.json
dasa_evaluate data/PotTS/test/ data/PotTS/predicted/ddr/ddr.json

Equivalently, you can run the following commands to check the performance of this approach on the SB10k corpus:

dasa_sentiment -v train -t ddr -r bhatia data/SB10k/train/*.json data/SB10k/dev/*.json
dasa_sentiment -v test data/SB10k/test/*.json > data/SB10k/predicted/ddr/ddr.json
dasa_evaluate data/SB10k/test/ data/SB10k/predicted/ddr/ddr.json
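
The depth reweighting itself can be pictured as follows. This is a toy sketch with a made-up exponential decay and invented data, not the exact weighting scheme of Bhatia et al. (2015):

```python
# Toy EDU list: (depth of embedding in the RST tree, (pos, neg, neu) scores).
# The decay of 0.5 per level is an illustrative choice only.
edus = [
    (0, (0.25, 0.5, 0.25)),   # nucleus near the root: full weight
    (2, (0.75, 0.0, 0.25)),   # deeply embedded satellite: down-weighted
]

def ddr_scores(edus, decay=0.5):
    """Weighted sum of EDU scores, with weights decaying by depth."""
    total = [0.0, 0.0, 0.0]
    for depth, scores in edus:
        w = decay ** depth
        for i, s in enumerate(scores):
            total[i] += w * s
    return total

print(ddr_scores(edus))
```

Without the reweighting, the deeply embedded positive EDU would dominate; with it, the shallow negative nucleus wins.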

Results

| Data  | Pos. P | Pos. R | Pos. F_1 | Neg. P | Neg. R | Neg. F_1 | Neu. P | Neu. R | Neu. F_1 | Macro F_1 | Micro F_1 |
|-------|--------|--------|----------|--------|--------|----------|--------|--------|----------|-----------|-----------|
| PotTS | 0.73 | 0.77 | 0.75 | 0.54 | 0.59 | 0.56 | 0.69 | 0.61 | 0.65 | 0.655 | 0.674 |
| SB10k | 0.59 | 0.63 | 0.61 | 0.48 | 0.44 | 0.46 | 0.77 | 0.76 | 0.77 | 0.534 | 0.681 |

(The macro F_1 is averaged over the positive and negative classes only.)

Last EDU

To predict the polarity of a tweet based on the polarity of its last EDU, we used the following command to create the model:

dasa_sentiment -v train -t last data/PotTS/train/*.json data/PotTS/dev/*.json

and then executed the following scripts to predict the label and evaluate the quality:

dasa_sentiment -v test data/PotTS/test/*.json > data/PotTS/predicted/last/last.json
dasa_evaluate data/PotTS/test/ data/PotTS/predicted/last/last.json

Equivalently, for the SB10k corpus:

dasa_sentiment -v train -t last data/SB10k/train/*.json data/SB10k/dev/*.json
dasa_sentiment -v test data/SB10k/test/*.json > data/SB10k/predicted/last/last.json
dasa_evaluate data/SB10k/test/ data/SB10k/predicted/last/last.json
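
The baseline itself is trivial; a toy sketch with invented score tuples:

```python
# EDU polarity scores in document order; the "last" baseline simply
# adopts the prediction of the final discourse segment.
LABELS = ("positive", "negative", "neutral")
edus = [(0.9, 0.0, 0.1),    # earlier segments are ignored...
        (0.1, 0.8, 0.1)]    # ...only the last EDU decides
last = edus[-1]
print(LABELS[last.index(max(last))])
```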

Results

| Data  | Pos. P | Pos. R | Pos. F_1 | Neg. P | Neg. R | Neg. F_1 | Neu. P | Neu. R | Neu. F_1 | Macro F_1 | Micro F_1 |
|-------|--------|--------|----------|--------|--------|----------|--------|--------|----------|-----------|-----------|
| PotTS | 0.52 | 0.83 | 0.64 | 0.57 | 0.17 | 0.26 | 0.61 | 0.43 | 0.5 | 0.453 | 0.549 |
| SB10k | 0.56 | 0.55 | 0.56 | 0.46 | 0.29 | 0.36 | 0.73 | 0.8 | 0.76 | 0.459 | 0.661 |

No-Discourse

To predict the polarity of a tweet while disregarding the discourse information, you can invoke the above scripts as follows:

dasa_sentiment -v train -t no-discourse data/PotTS/train/*.json data/PotTS/dev/*.json

and then the following scripts to predict the label and evaluate the quality:

dasa_sentiment -v test data/PotTS/test/*.json > data/PotTS/predicted/no-discourse/no-discourse.json
dasa_evaluate data/PotTS/test/ data/PotTS/predicted/no-discourse/no-discourse.json

Equivalently, for the SB10k corpus:

dasa_sentiment -v train -t no-discourse data/SB10k/train/*.json data/SB10k/dev/*.json
dasa_sentiment -v test data/SB10k/test/*.json > data/SB10k/predicted/no-discourse/no-discourse.json
dasa_evaluate data/SB10k/test/ data/SB10k/predicted/no-discourse/no-discourse.json
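
Conceptually, this baseline treats all EDUs alike; a minimal sketch with invented scores:

```python
# With discourse structure ignored, every EDU contributes equally,
# e.g. via a plain average of the per-EDU polarity scores.
LABELS = ("positive", "negative", "neutral")
edus = [(0.75, 0.0, 0.25), (0.25, 0.5, 0.25), (0.5, 0.25, 0.25)]
avg = [sum(col) / len(edus) for col in zip(*edus)]
print(LABELS[avg.index(max(avg))])
```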

Results

| Data  | Pos. P | Pos. R | Pos. F_1 | Neg. P | Neg. R | Neg. F_1 | Neu. P | Neu. R | Neu. F_1 | Macro F_1 | Micro F_1 |
|-------|--------|--------|----------|--------|--------|----------|--------|--------|----------|-----------|-----------|
| PotTS | 0.73 | 0.82 | 0.77 | 0.61 | 0.56 | 0.58 | 0.72 | 0.66 | 0.69 | 0.677 | 0.706 |
| SB10k | 0.64 | 0.69 | 0.66 | 0.45 | 0.45 | 0.45 | 0.82 | 0.79 | 0.8 | 0.557 | 0.713 |

Root EDU

To predict the polarity of a tweet based on the root EDU (i.e., the nucleus of the nucleus), we used the following command to create the model:

dasa_sentiment -v train -t root -r bhatia data/PotTS/train/*.json data/PotTS/dev/*.json

and then the following scripts to predict the label and evaluate the quality:

dasa_sentiment -v test data/PotTS/test/*.json > data/PotTS/predicted/root/root.json
dasa_evaluate data/PotTS/test/ data/PotTS/predicted/root/root.json

Equivalently, for the SB10k corpus:

dasa_sentiment -v train -t root -r bhatia data/SB10k/train/*.json data/SB10k/dev/*.json
dasa_sentiment -v test data/SB10k/test/*.json > data/SB10k/predicted/root/root.json
dasa_evaluate data/SB10k/test/ data/SB10k/predicted/root/root.json
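
The traversal to the root EDU can be sketched as follows; the field names below are invented and do not match the package's actual JSON schema:

```python
# Toy RST tree: each child is marked as nucleus or satellite;
# leaves carry (pos, neg, neu) EDU scores.
tree = {"children": [
    {"nucleus": True, "children": [
        {"nucleus": False, "scores": (0.9, 0.0, 0.1), "children": []},
        {"nucleus": True,  "scores": (0.1, 0.7, 0.2), "children": []},
    ]},
    {"nucleus": False, "children": []},
]}

def root_edu(node):
    """Descend along nucleus children until a leaf EDU is reached."""
    while node.get("children"):
        node = next(c for c in node["children"] if c["nucleus"])
    return node

print(root_edu(tree)["scores"])   # scores of the nucleus of the nucleus
```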

Results

| Data  | Pos. P | Pos. R | Pos. F_1 | Neg. P | Neg. R | Neg. F_1 | Neu. P | Neu. R | Neu. F_1 | Macro F_1 | Micro F_1 |
|-------|--------|--------|----------|--------|--------|----------|--------|--------|----------|-----------|-----------|
| PotTS | 0.56 | 0.73 | 0.64 | 0.58 | 0.22 | 0.32 | 0.55 | 0.54 | 0.54 | 0.481 | 0.5596 |
| SB10k | 0.51 | 0.55 | 0.53 | 0.4 | 0.3 | 0.35 | 0.74 | 0.76 | 0.75 | 0.438 | 0.64 |

R2N2

To determine the polarity of a tweet using rhetorical recursive neural networks (R2N2) [BHATIA], you can use the following command to create the model:

dasa_sentiment -v train -t r2n2 -r bhatia data/PotTS/train/*.json data/PotTS/dev/*.json

and then run:

dasa_sentiment -v test data/PotTS/test/*.json > data/PotTS/predicted/r2n2/r2n2.json
dasa_evaluate data/PotTS/test/ data/PotTS/predicted/r2n2/r2n2.json

to predict the labels on the test sets and evaluate the quality of the resulting model.

Equivalently, you can run the following commands to check the performance of this approach on the SB10k corpus:

dasa_sentiment -v train -t r2n2 -r bhatia data/SB10k/train/*.json data/SB10k/dev/*.json
dasa_sentiment -v test data/SB10k/test/*.json > data/SB10k/predicted/r2n2/r2n2.json
dasa_evaluate data/SB10k/test/ data/SB10k/predicted/r2n2/r2n2.json
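
The core idea of recursive composition can be caricatured in a few lines. The scalar scores, weights, and field names below are invented; the real R2N2 learns weight matrices over EDU feature vectors:

```python
import math

# A parent's representation is a nonlinearity applied to its nucleus
# and satellite children, with a larger weight on the nucleus.
W_NUC, W_SAT = 0.9, 0.3   # hand-picked for illustration, not learned

def compose(node):
    if "score" in node:                 # leaf EDU with a raw polarity score
        return node["score"]
    return math.tanh(W_NUC * compose(node["nucleus"])
                     + W_SAT * compose(node["satellite"]))

tree = {"nucleus": {"score": 0.8},
        "satellite": {"nucleus": {"score": -0.5},
                      "satellite": {"score": 0.2}}}
polarity = compose(tree)
print("positive" if polarity > 0 else "negative")
```

Because the nucleus dominates the composition, the positive top-level nucleus outweighs the mildly negative satellite subtree.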

Results

| Data  | Pos. P | Pos. R | Pos. F_1 | Neg. P | Neg. R | Neg. F_1 | Neu. P | Neu. R | Neu. F_1 | Macro F_1 | Micro F_1 |
|-------|--------|--------|----------|--------|--------|----------|--------|--------|----------|-----------|-----------|
| PotTS | 0.74 | 0.78 | 0.76 | 0.59 | 0.53 | 0.56 | 0.68 | 0.68 | 0.68 | 0.6572 | 0.6918 |
| SB10k | 0.64 | 0.69 | 0.66 | 0.46 | 0.45 | 0.45 | 0.81 | 0.79 | 0.8 | 0.5592 | 0.7133 |

RDP

To determine the polarity of a tweet using a recursive Dirichlet process (RDP), you can use the following command to train the model:

dasa_sentiment -v train -t rdp -r bhatia data/PotTS/train/*.json data/PotTS/dev/*.json

and then run:

dasa_sentiment -v test data/PotTS/test/*.json > data/PotTS/predicted/rdp/rdp.json
dasa_evaluate data/PotTS/test/ data/PotTS/predicted/rdp/rdp.json

to predict the labels on the test sets and evaluate the quality of the resulting model.

Equivalently, you can run the following commands to check the performance of this approach on the SB10k corpus:

dasa_sentiment -v train -t rdp -r bhatia data/SB10k/train/*.json data/SB10k/dev/*.json
dasa_sentiment -v test data/SB10k/test/*.json > data/SB10k/predicted/rdp/rdp.json
dasa_evaluate data/SB10k/test/ data/SB10k/predicted/rdp/rdp.json

Results

| Data  | Pos. P | Pos. R | Pos. F_1 | Neg. P | Neg. R | Neg. F_1 | Neu. P | Neu. R | Neu. F_1 | Macro F_1 | Micro F_1 |
|-------|--------|--------|----------|--------|--------|----------|--------|--------|----------|-----------|-----------|
| PotTS | 0.73 | 0.82 | 0.77 | 0.61 | 0.56 | 0.58 | 0.73 | 0.65 | 0.69 | 0.678 | 0.706 |
| SB10k | 0.64 | 0.69 | 0.66 | 0.45 | 0.45 | 0.45 | 0.82 | 0.79 | 0.8 | 0.557 | 0.713 |

WANG

To determine the polarity of a message using a linear combination of EDU polarities [WANG], you can use the following command to create the model:

dasa_sentiment -v train -t wang -r bhatia data/PotTS/train/*.json data/PotTS/dev/*.json

and run:

dasa_sentiment -v test data/PotTS/test/*.json > data/PotTS/predicted/wang/wang.json
dasa_evaluate data/PotTS/test/ data/PotTS/predicted/wang/wang.json

to predict the labels on the test sets and evaluate the quality of the resulting model.
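
The underlying idea can be sketched as a weighted sum in which nuclei count more than satellites; the concrete weights below are invented, whereas Wang et al. (2013) derive them from the hierarchical discourse structure:

```python
# Toy linear combination of EDU polarities by discourse role.
LABELS = ("positive", "negative", "neutral")
WEIGHTS = {"nucleus": 1.0, "satellite": 0.5}   # illustrative values only
edus = [("nucleus", (0.75, 0.0, 0.25)),
        ("satellite", (0.0, 0.75, 0.25))]

combined = [0.0, 0.0, 0.0]
for role, scores in edus:
    for i, s in enumerate(scores):
        combined[i] += WEIGHTS[role] * s
print(LABELS[combined.index(max(combined))])
```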

Results

| Data  | Pos. P | Pos. R | Pos. F_1 | Neg. P | Neg. R | Neg. F_1 | Neu. P | Neu. R | Neu. F_1 | Macro F_1 | Micro F_1 |
|-------|--------|--------|----------|--------|--------|----------|--------|--------|----------|-----------|-----------|
| PotTS | 0.58 | 0.79 | 0.67 | 0.61 | 0.21 | 0.31 | 0.61 | 0.57 | 0.59 | 0.4872 | 0.5905 |
| SB10k | 0.61 | 0.63 | 0.62 | 0.46 | 0.29 | 0.36 | 0.76 | 0.82 | 0.79 | 0.4884 | 0.6933 |

LCRF

In the same way, you can use the -t lcrf option to train and evaluate latent CRFs:

dasa_sentiment -v train -t lcrf -r bhatia data/PotTS/train/*.json data/PotTS/dev/*.json
dasa_sentiment -v test data/PotTS/test/*.json > data/PotTS/predicted/lcrf/lcrf.json
dasa_evaluate data/PotTS/test/ data/PotTS/predicted/lcrf/lcrf.json

Results

| Data  | Pos. P | Pos. R | Pos. F_1 | Neg. P | Neg. R | Neg. F_1 | Neu. P | Neu. R | Neu. F_1 | Macro F_1 | Micro F_1 |
|-------|--------|--------|----------|--------|--------|----------|--------|--------|----------|-----------|-----------|
| PotTS | 0.76 | 0.79 | 0.77 | 0.61 | 0.53 | 0.56 | 0.71 | 0.71 | 0.71 | 0.67 | 0.709 |
| SB10k | 0.64 | 0.69 | 0.66 | 0.45 | 0.45 | 0.45 | 0.82 | 0.79 | 0.8 | 0.557 | 0.713 |

LMCRF

In the same way, you can use the -t lmcrf option to train and evaluate hidden marginalized CRFs:

dasa_sentiment -v train -t lmcrf -r bhatia data/PotTS/train/*.json data/PotTS/dev/*.json
dasa_sentiment -v test data/PotTS/test/*.json > data/PotTS/predicted/lmcrf/lmcrf.json
dasa_evaluate data/PotTS/test/ data/PotTS/predicted/lmcrf/lmcrf.json

Results

| Data  | Pos. P | Pos. R | Pos. F_1 | Neg. P | Neg. R | Neg. F_1 | Neu. P | Neu. R | Neu. F_1 | Macro F_1 | Micro F_1 |
|-------|--------|--------|----------|--------|--------|----------|--------|--------|----------|-----------|-----------|
| PotTS | 0.77 | 0.77 | 0.77 | 0.61 | 0.54 | 0.57 | 0.69 | 0.74 | 0.72 | 0.671 | 0.712 |
| SB10k | 0.64 | 0.69 | 0.67 | 0.45 | 0.45 | 0.45 | 0.82 | 0.79 | 0.8 | 0.56 | 0.715 |

References

[BHATIA] Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better Document-Level Sentiment Analysis from RST Discourse Parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal, September.
[BOHNET] Bernd Bohnet. 2009. Efficient parsing of syntactic and semantic dependency structures. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, Boulder, Colorado, USA, June 2009, pages 67--72. ACL.
[SIDARENKA] Uladzimir Sidarenka, Tatjana Scheffler, and Manfred Stede. 2013. Rule-based normalization of German Twitter messages. In Language Processing and Knowledge in the Web -- 25th International Conference, GSCL 2013: Proceedings of the Workshop Verarbeitung und Annotation von Sprachdaten aus Genres internetbasierter Kommunikation, Darmstadt, Germany.
[WANG] Fei Wang, Yunfang Wu, and Likun Qiu. 2013. Exploiting hierarchical discourse structure for review sentiment analysis. In 2013 International Conference on Asian Language Processing (IALP 2013), Urumqi, China, August 17-19, 2013, pages 121--124. IEEE.