Article | Open Access
Fueling Toxicity? Studying Deceitful Opinion Leaders and Behavioral Changes of Their Followers
Views: 2211 | Downloads: 1129
Abstract: The spread of deceiving content on social media platforms is a growing concern amongst scholars, policymakers, and the public at large. We examine the extent to which influential users (i.e., “deceitful opinion leaders”) on Twitter engage in the spread of different types of deceiving content, thereby overcoming the compartmentalized state of the field. We introduce a theoretical concept and approach that puts these deceitful opinion leaders at the center, instead of the content they spread. Moreover, our study contributes to the understanding of the effects that these deceiving messages have on other Twitter users. Applying computational methods to 731,371 unique messages from 5,574 users, we study changes in messaging behavior after these users started following a set of eight Dutch deceitful opinion leaders on Twitter during the 2021 Dutch election campaign. The results show that users apply more uncivil language, become more affectively polarized, and talk more about politics after following a deceitful opinion leader. Our results thereby underline that this small group of deceitful opinion leaders changes the norms of conversation on these platforms, which accentuates the need for future research to study the concept of deceitful opinion leaders further.
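As a purely illustrative aside, the kind of before/after comparison described in the abstract can be sketched as follows. This is a minimal sketch, not the authors' actual pipeline: the file name and the columns user_id, timestamp, follow_date, and uncivil are hypothetical and are not taken from the article or its replication material.

```python
# Illustrative sketch only: compare each user's share of uncivil tweets
# before vs. after the date they started following a deceitful opinion
# leader. All column names and the input file are hypothetical.
import pandas as pd
from scipy.stats import ttest_rel

tweets = pd.read_csv("tweets.csv", parse_dates=["timestamp", "follow_date"])

# Flag each tweet as posted before or after the user's follow date.
tweets["period"] = (tweets["timestamp"] >= tweets["follow_date"]).map(
    {False: "before", True: "after"}
)

# Share of uncivil tweets per user in each period ("uncivil" assumed to be
# a 0/1 label, e.g. from a supervised incivility classifier).
rates = (
    tweets.groupby(["user_id", "period"])["uncivil"]
    .mean()
    .unstack("period")
    .dropna()
)

# Paired t-test: does the within-user incivility rate rise after following?
t_stat, p_value = ttest_rel(rates["after"], rates["before"])
print(f"mean before={rates['before'].mean():.3f}, "
      f"after={rates['after'].mean():.3f}, t={t_stat:.2f}, p={p_value:.4f}")
```

A simple paired comparison like this ignores time trends and platform-wide shifts; the same logic could be extended with control users or an interrupted time-series design.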
Keywords: computational communication science; disinformation; opinion leaders; social media; the Netherlands; Twitter
© Puck Guldemond, Andreu Casas Salleras, Mariken van der Velden. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 license (http://creativecommons.org/licenses/by/4.0), which permits any use, distribution, and reproduction of the work without further permission provided the original author(s) and source are credited.