Updating an NLP system to fit new domains: an empirical study on the sentence segmentation problem

Fred Damerau, David Johnson, Tong Zhang

Research output: Contribution to conference › Conference paper › peer-review

Abstract

Statistical machine learning algorithms have been successfully applied to many natural language processing (NLP) problems. Compared to manually constructed systems, statistical NLP systems are often easier to develop and maintain, since only annotated training text is required. From annotated data, the underlying statistical algorithm builds a model that predicts annotations for future data. However, the performance of a statistical system can also depend heavily on the characteristics of the training data. If we apply such a system to text with characteristics different from those of the training data, performance degradation will occur. In this paper, we examine this issue empirically using the sentence boundary detection problem. We propose and compare several methods that can be used to update a statistical NLP system when moving to a different domain.
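The abstract frames sentence boundary detection as supervised classification over annotated text: each candidate period is labeled as a true boundary or not, and a model is trained from those labels. A minimal sketch of that framing (the features, the toy data, and the perceptron learner here are illustrative assumptions, not the paper's actual system):

```python
# Illustrative sketch: sentence-boundary detection as binary classification
# over '.' candidates. Features and learner are assumptions for illustration.

def features(text, i):
    """Simple contextual features for the '.' at position i."""
    left = text[:i].split()
    prev = left[-1] if left else ""
    nxt = text[i + 1:].lstrip()
    return {
        "next_upper": nxt[:1].isupper(),  # capitalized following word
        "prev_short": len(prev) <= 2,     # short tokens are often abbreviations
        "next_lower": nxt[:1].islower(),
    }

def predict(weights, feats):
    """Score the candidate; a positive score means 'sentence boundary'."""
    score = sum(weights.get(k, 0.0) for k, v in feats.items() if v)
    return score > 0

def train(examples, epochs=10):
    """Plain perceptron over labeled (text, period_index, is_boundary) triples."""
    weights = {}
    for _ in range(epochs):
        for text, i, label in examples:
            feats = features(text, i)
            if predict(weights, feats) != label:
                delta = 1.0 if label else -1.0
                for k, v in feats.items():
                    if v:
                        weights[k] = weights.get(k, 0.0) + delta
    return weights

# Tiny annotated "corpus": (text, index of the '.', is it a boundary?)
data = [
    ("He left. She stayed.", 7, True),
    ("Dr. Smith arrived.", 2, False),
    ("It costs 3.50 dollars.", 10, False),
]
w = train(data)
```

Updating such a model for a new domain, in the sense the paper studies, then amounts to retraining or adjusting the learned weights on a small annotated sample from the target text, where abbreviation and capitalization patterns may differ from the original training data.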
Original language: English
Pages: 56-62
DOIs
Publication status: Published - May 2003
Externally published: Yes
Event: Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003
Duration: 1 May 2003 - 1 May 2003


