
Utilizing standard deviation in text classification weighting schemes

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

The term frequency–inverse document frequency (TF-IDF) weighting scheme is widely used in text classification to weight the features of the vector space model (VSM). It aims to enhance words' discriminating power by up-weighting less frequently used words while down-weighting high-frequency words (i.e., common words such as prepositions). This paper presents an enhanced variant of the well-known TF-IDF method. TF-IDF is a statistical estimate that computes the weight of each word from the word's frequency in both the document and the entire data collection. In this work, we propose incorporating a word's standard deviation as an additional factor when computing its weight: common words tend to have larger standard deviations than uncommon words, i.e., the more often a word appears across documents, the greater its standard deviation. To evaluate the proposed TF-IDF-based model, we conducted experiments on Arabic text classification, using a training collection of 1,750 documents in five categories (with 250 documents held out for testing). The experimental results show that the proposed approach outperforms the standard TF-IDF term weighting scheme.
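The abstract does not give the exact formula for combining standard deviation with TF-IDF, so the sketch below illustrates just one plausible variant: computing each term's TF-IDF weight and dividing it by (1 + the standard deviation of that term's frequency across documents), so that high-variance common terms are weighed down. The function name `tf_idf_std` and the combining rule are assumptions for illustration, not the paper's method.

```python
import math

def tf_idf_std(docs):
    """docs: list of token lists. Returns one {term: weight} dict per document.

    Sketch of a standard-deviation-adjusted TF-IDF: the combining
    formula tf * idf / (1 + std) is an assumed illustration, not the
    exact scheme from the paper.
    """
    n = len(docs)
    vocab = sorted({t for d in docs for t in d})
    # Normalized term frequency per document
    tf = [{t: d.count(t) / len(d) for t in vocab} for d in docs]
    # Document frequency and inverse document frequency
    df = {t: sum(1 for d in docs if t in d) for t in vocab}
    idf = {t: math.log(n / df[t]) for t in vocab}
    # Standard deviation of each term's frequency across all documents
    std = {}
    for t in vocab:
        freqs = [tf_d[t] for tf_d in tf]
        mean = sum(freqs) / n
        std[t] = math.sqrt(sum((f - mean) ** 2 for f in freqs) / n)
    # Down-weight terms with large cross-document variance (common words)
    return [{t: tf_d[t] * idf[t] / (1.0 + std[t]) for t in vocab}
            for tf_d in tf]

docs = [
    ["the", "cat", "sat"],
    ["the", "dog", "ran"],
    ["the", "cat", "ran"],
]
weights = tf_idf_std(docs)
# "the" occurs in every document, so its IDF (and weight) is zero,
# while the rarer "sat" receives a larger weight than "cat".
```

In this toy collection the ubiquitous word "the" is driven to zero weight by the IDF factor alone; the standard-deviation divisor additionally penalizes terms whose frequency fluctuates strongly across documents.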

Original language: English
Pages (from-to): 1349-4198
Number of pages: 2850
Journal: International Journal of Innovative Computing, Information and Control
Volume: 13
Issue number: 4
State: Published - 2017

Keywords

  • Arabic
  • Classification
  • Singular value decomposition
  • Text
  • TF-IDF

Funding Agency

  • Kuwait Foundation for the Advancement of Sciences
