One-step and two-step classification for abusive language detection on Twitter

Ji Ho Park, Pascale Fung

Research output: Chapter in Book/Conference Proceeding/Report › Conference paper published in a book › peer-review

Abstract

Automatic abusive language detection is a difficult but important task for online social media. Our research explores a two-step approach, which first classifies tweets as abusive or not and then classifies abusive tweets into specific types, and compares it with a one-step approach that performs a single multi-class classification to detect sexist and racist language. On a public English Twitter corpus of 20 thousand tweets labeled for sexism and racism, our approach shows promising performance of 0.827 F-measure using HybridCNN in one step and 0.824 F-measure using logistic regression in two steps.
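The two-step scheme described above can be sketched in a few lines: a first classifier separates abusive from non-abusive tweets, and a second classifier, trained only on abusive tweets, assigns the specific type (sexism vs. racism). The toy corpus, bag-of-words features, and tiny hand-rolled logistic-regression trainer below are illustrative stand-ins, not the paper's actual dataset, features, or HybridCNN model.

```python
# Hedged sketch of two-step abusive-language classification.
# Everything here (corpus, features, trainer) is invented for illustration.
import math
from collections import Counter

def featurize(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain SGD logistic regression (binary labels y in {0, 1})."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
            g = p - yi                        # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Tiny invented corpus; labels: none / sexism / racism.
texts = [
    "have a great day everyone",
    "lovely weather in vancouver today",
    "women cannot do this job",
    "girls are too emotional to lead",
    "go back to your country",
    "your people are all criminals",
]
labels = ["none", "none", "sexism", "sexism", "racism", "racism"]
vocab = sorted({w for t in texts for w in t.split()})
X = [featurize(t, vocab) for t in texts]

# Step 1: abusive (1) vs. non-abusive (0), trained on all tweets.
step1 = train_logreg(X, [0 if y == "none" else 1 for y in labels])

# Step 2: sexism (0) vs. racism (1), trained only on abusive tweets.
abusive_idx = [i for i, y in enumerate(labels) if y != "none"]
step2 = train_logreg([X[i] for i in abusive_idx],
                     [0 if labels[i] == "sexism" else 1 for i in abusive_idx])

def classify(tweet):
    """Two-step prediction: run step 2 only if step 1 flags abuse."""
    x = featurize(tweet, vocab)
    if predict(step1, x) == 0:
        return "none"
    return "sexism" if predict(step2, x) == 0 else "racism"
```

The one-step alternative the paper compares against would instead train a single multi-class classifier over all three labels; the two-step design lets step 2 specialize on the (smaller) abusive subset.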

Original language: English
Title of host publication: 1st Workshop on Abusive Language Online, ALW 2017 at the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017 - Proceedings of the Workshop
Editors: Zeerak Waseem, Wendy Hui Kyong Chung, Dirk Hovy, Joel Tetreault
Publisher: Association for Computational Linguistics (ACL)
Pages: 41-45
Number of pages: 5
ISBN (Electronic): 9781945626661
Publication status: Published - 2017
Event: 1st Workshop on Abusive Language Online, ALW 2017 at the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017 - Vancouver, Canada
Duration: 4 Aug 2017 → …

Publication series

Name: Proceedings of the Annual Meeting of the Association for Computational Linguistics
ISSN (Print): 0736-587X

Conference

Conference: 1st Workshop on Abusive Language Online, ALW 2017 at the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017
Country/Territory: Canada
City: Vancouver
Period: 4/08/17 → …

Bibliographical note

Publisher Copyright:
© 2017 Association for Computational Linguistics

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 5 - Gender Equality

