Hate speech’s double damage: A semi-automated approach toward direct and indirect targets
Keywords: hate speech online, user comments, comment moderation, platforms, Twitter, YouTube, online journalism, Netzwerkdurchsetzungsgesetz, NetzDG, online public sphere
Democracies around the world face increasing challenges from hate speech online, as it contributes to a tense and thus less discursive public sphere. Hate speech online targets free speech both directly and indirectly: through harassment and explicit harm, and by fostering a vicious environment of irrationality, misrepresentation, or disrespect. Consequently, platforms have implemented varying comment-moderation techniques, depending both on policy regulations and on the quantity and quality of hate speech online. This study provides descriptive measures of direct and indirect targets in light of the differing incentives and moderation practices on social media and news outlets. Based on three distinct samples from German Twitter, YouTube, and a set of four news outlets, it applies semi-automated content analyses using a set of five cross-sample classifiers. The largest share of visible hate speech online consists of rather implicit devaluations of ideas or behavior. More explicit forms of hate speech online, such as insult, slander, or vulgarity, are only rarely observable and accumulate around certain events (Twitter) or single videos (YouTube). Moreover, while hate speech on Twitter and YouTube tends to target particular groups or individuals, hate speech below news articles shows a stronger focus on debates. Potential reasons and implications are discussed in light of political and legal efforts in Germany.
Copyright (c) 2022 Mario Haim, Elisa Hoven
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.