Training data cleaning for text classification (Journal article)

Type
Label
  • Training data cleaning for text classification (Journal article) (literal)
Year
  • 2009-01-01T00:00:00+01:00 (literal)
Alternative label
  • Esuli A.; Sebastiani F. (2009)
    Training data cleaning for text classification
    in Lecture notes in computer science
    (literal)
http://www.cnr.it/ontology/cnr/pubblicazioni.owl#autori
  • Esuli A.; Sebastiani F. (literal)
Start page
  • 29 (literal)
End page
  • 41 (literal)
http://www.cnr.it/ontology/cnr/pubblicazioni.owl#numeroVolume
  • 5766 (literal)
Journal
http://www.cnr.it/ontology/cnr/pubblicazioni.owl#note
  • In: ICTIR 2009 - Advances in Information Retrieval Theory. Second International Conference on the Theory of Information Retrieval (Cambridge, UK, 10-12 September 2009). Proceedings, pp. 29 - 41. Leif Azzopardi, Gabriella Kazai, Stephen E. Robertson, Stefan M. Rüger, Milad Shokouhi, Dawei Song, Emine Yilmaz: (eds.). (Lecture Notes in Computer Science, vol. 5766). Springer Verlag, 2009. (literal)
Note
  • ISI Web of Science (WOS) (literal)
http://www.cnr.it/ontology/cnr/pubblicazioni.owl#affiliazioni
  • CNR-ISTI, Pisa (literal)
Title
  • Training data cleaning for text classification (literal)
Abstract
  • In text classification (TC) and other tasks involving supervised learning, labelled data may be scarce or expensive to obtain. Semi-supervised learning and active learning are two strategies whose aim is to maximize the effectiveness of the resulting classifiers while minimizing the required amount of training effort; both strategies have been actively investigated for TC in recent years. Much less research has been devoted to a third such strategy, training data cleaning (TDC), which consists in devising ranking functions that sort the original training examples in terms of how likely it is that the human annotator has misclassified them, thereby providing a convenient means for the human annotator to revise the training set so as to improve its quality. Working in the context of boosting-based learning methods, we present three different techniques for performing TDC and, on two widely used TC benchmarks, evaluate them by their ability to spot misclassified texts purposefully inserted in the training set. (literal)
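The core TDC idea in the abstract can be sketched as follows. The paper itself uses boosting-based techniques; this is only a minimal illustrative stand-in that ranks each training example by how well its given label fits a crude word-count profile of its class, built from the other examples (leave-one-out). All data, function names, and the confidence measure here are hypothetical.

```python
from collections import Counter

# Toy training set: (text, label); one example is deliberately mislabeled,
# mirroring the paper's evaluation of spotting purposefully inserted errors.
train = [
    ("buy cheap pills now", "spam"),
    ("team meeting agenda for monday", "ham"),
    ("win money win cheap prizes now", "spam"),
    ("agenda for the team lunch", "ham"),
    ("buy cheap prizes now", "ham"),  # <- planted labelling error
]

def class_profiles(data):
    """Aggregate word counts per class (a crude class model)."""
    profiles = {}
    for text, label in data:
        profiles.setdefault(label, Counter()).update(text.split())
    return profiles

def label_confidence(text, label, profiles):
    """Fraction of the text's tokens the given class has seen before.
    A low value suggests the label does not fit the text."""
    tokens = text.split()
    seen = sum(1 for t in tokens if profiles[label][t] > 0)
    return seen / len(tokens)

def tdc_ranking(data):
    """Rank training examples from most to least suspicious, scoring
    each example against profiles built from the remaining examples."""
    scored = []
    for i, (text, label) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        profiles = class_profiles(rest)
        scored.append((label_confidence(text, label, profiles), text, label))
    return sorted(scored)  # ascending confidence: most suspicious first

for conf, text, label in tdc_ranking(train):
    print(f"{conf:.2f}  [{label}] {text}")
```

Running the sketch ranks the planted error first (its label fits none of its words), which is exactly the behaviour a TDC ranking function is meant to exhibit: a human annotator would inspect the top of the list and correct the label.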
Product of
CNR author
Keyword set

Incoming links:


Product
CNR author of
http://www.cnr.it/ontology/cnr/pubblicazioni.owl#rivistaDi
Keyword set of