Sentiment analysis of textual content is widely used for automatic summarization of opinions and sentiments expressed by people. With the growing popularity of social media and user-generated content, efficient and effective sentiment analysis is critical to businesses and governments. Lexicon-based methods provide efficiency through their manually developed affective word lists and valence values. However, the predictions of such methods can be biased towards positive or negative polarity, thus distorting the analysis. In this paper, we propose Bias-Aware Thresholding (BAT), an approach that can be combined with any lexicon-based method to make it bias-aware. BAT is motivated by cost-sensitive learning, where the prediction threshold is shifted to reduce prediction error bias. We formally define bias in polarity predictions and present a measure for quantifying it. We evaluate BAT in combination with AFINN and SentiStrength -- two popular lexicon-based methods -- on seven real-world datasets. The results show that bias decreases smoothly as the absolute value of the threshold increases, and accuracy increases as well in most cases. We demonstrate that the threshold can be learned reliably from a very small number of labeled examples, and that supervised classifiers trained on such small datasets yield worse bias and accuracy than BAT.
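The core idea of shifting a lexicon classifier's decision threshold can be sketched as follows. This is a minimal illustration, not the paper's BAT algorithm: the tiny valence lexicon and the helper names (`lexicon_score`, `predict_polarity`) are hypothetical, and the threshold-learning procedure described in the paper is omitted.

```python
# Hypothetical toy lexicon; real systems such as AFINN use manually
# curated word lists with integer valence values.
TOY_LEXICON = {"good": 3, "great": 4, "bad": -3, "awful": -4, "ok": 1}

def lexicon_score(text):
    """Sum the valences of known words (AFINN-style additive scoring)."""
    return sum(TOY_LEXICON.get(w, 0) for w in text.lower().split())

def predict_polarity(text, threshold=0.0):
    """Classify text as positive or negative.

    A threshold of 0 is the usual unbiased-looking default; moving the
    threshold away from 0 counteracts a systematic bias of the lexicon
    toward one polarity (the idea BAT borrows from cost-sensitive
    learning, where the decision boundary is shifted instead of the
    scores themselves).
    """
    return "positive" if lexicon_score(text) > threshold else "negative"

# A borderline text: with threshold 0 the weak "+1" for "ok" tips it
# positive; raising the threshold corrects for a positively biased lexicon.
print(predict_polarity("the movie was ok", threshold=0.0))  # positive
print(predict_polarity("the movie was ok", threshold=1.5))  # negative
```

In the paper, the threshold itself is learned from a small number of labeled examples rather than fixed by hand as above.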
Title of host publication: Proceedings of the 30th Annual ACM Symposium on Applied Computing
Number of pages: 6
Publication date: April 2015
Publication status: Published, April 2015