Open access peer-reviewed chapter - ONLINE FIRST

Amplifying Hate: Mapping the Political Twitter Ecosystem and Toxic Enablers in Greece

Written By

Ioanna Archontaki and Dimitris Papaevagelou

Submitted: 18 June 2024 Reviewed: 26 June 2024 Published: 23 July 2024

DOI: 10.5772/intechopen.1006037


From the Edited Volume

Social Media and Modern Society [Working Title]

Associate Prof. Ján Višňovský and Dr. Jana Majerová


Abstract

Detecting hate speech on social media and tracing the way it spreads have proved trickier than originally thought. Alt-right politicians seem to be ahead of the technological curve, exploiting existing biases in society and platforms to promote hateful messages. As a result, such messages gain traction through both good and bad faith actors. In the following research, we examined 36.8 million tweets and 1.5 million unique accounts related to Greek politicians on Twitter, in an effort to map the information dissemination ecosystem. In this chapter, we present a scalable model to predict amplification accounts used by various actors in Greece, alongside a toxicity classifier for short messages. We focused mainly on the political context to scan for toxicity spreaders affiliated with Greek politicians. Our approach investigates behavioral characteristics that differentiate normal accounts from amplifiers, without resorting to the binary logic of bot or not. Our preliminary results show that the majority of Greek politicians’ networks of followers are amplifier accounts, without this necessarily meaning that they are bot accounts. Consistent with other research findings, we find that the accounts promoting toxicity are predominantly partisans from the right of the political spectrum.

Keywords

  • X (former Twitter) bot detection
  • bot classification
  • Twitter API
  • online toxicity
  • hyper-partisanship

1. Introduction

Social media were once seen as the forerunners of democracy, especially in countries where systemic failures and chronic corruption made citizens’ trust in authorities strained and, in some cases, virtually nonexistent. Twenty years later, this image has completely flipped on its head. Concerns are multiplying over whether social media is just another tool for political manipulation of the public, with the additional danger that now we can only theorize about who is hiding behind the message. As a result, academia has put great effort into mapping disinformation on social media over the past decade, especially since Brexit and the 2016 US presidential elections, when Russian interference was heavily speculated.

Despite earnest attempts, the discussion on social media disinformation has mainly revolved around the existence of automated accounts (bots) as a sign of political manipulation. However, this question can be misleading. Bots are not malicious per se. There are automated accounts in the service of reporting information, raising awareness, distributing content across platforms, providing support, etc., while other automated accounts are used to amplify certain (political) messages, sway public opinion, and suppress dissenting voices. Therefore, focusing on the nature of a social media account rather than on its actions lumps together useful and harmful accounts.

Social media’s vulnerability to manipulation (e.g., Cambridge Analytica) has cast deep doubt on their use for information purposes in most European countries, with the notable exceptions of the Balkans and the Eastern European region [1]. There, trust in social media seems to go hand in hand with an unfree political culture, a disconnection from national authorities, and decades of deregulation, crisis, and poverty [2].

The case of Greece, the European country with the lowest trust in mainstream media, further serves to explore the dangers of political manipulation taking place on social media platforms. In a clientelist media system such as the Greek one, news coverage of certain topics such as political scandals and corruption allegations is particularly vicious, spilling over into pre-election periods and permanent political campaigning [2, 3, 4]. This system has a series of implications, such as political cynicism and distrust toward institutions, lower voter turnout during elections, and a political system that is unable to form coalition governments or achieve broad-based legitimacy. The concentration of Greek media in the hands of a few oligarchs has also had the (un)intended effect of stifling any alternative voice, a trend reflected in the country’s concerningly low position in press freedom indexes [5]. As a result, the Greek population has sought to use social media as a way of getting independent information. In reaction, over the course of the past decade, social media has become the battleground for political control over Greek voters. X (former Twitter) in particular, despite being used by only 10% of the Greek population [1], has become an indispensable tool for shaping perceptions and opinions, especially since the Greek mainstream media turned to X to construct news stories based on alleged “trending” topics. In the meantime, research shows that prominent Greek politicians and the majority of political parties have been making extensive use of Twitter farms, amplifier armies, and fake accounts to spread misinformation, attack their opponents, and influence the political discourse.

The drive to control social media is not solely a Greek phenomenon. As far back as the early 2010s, concerns were raised about regulating the toxic social media ecosystem, riding on a wave of harassment campaigns against journalists and opinion writers coming mainly from the gaming community (e.g., Gamergate). Similarly, smaller social media platforms such as 4chan and Reddit were gaining notoriety for allowing subcultures of fascism, racism, and sexism to take hold of their channels [6]. To what extent the users of these social media platforms managed to organize over the years is still a matter of fierce debate. The fact remains that alongside Trump’s candidacy in 2015 several concerning trends coincided: online communities seemed to congregate around social media figureheads, and a concise set of anti-Muslim, anti-immigration, anti-multiculturalism, anti-LGBTQ, and anti-women arguments, strikingly similar despite cultural differences, spread from the USA to the EU. These arguments were expressed by nativist and exclusionary political movements that in the following years penetrated the Eurosceptic and far-right political wing. A scientific report by Schoch et al. [7] demonstrated that online astroturfing political campaigns show coordination patterns across the world. According to this study, these centrally coordinated campaigns consisted of both automated and human-operated accounts and did not necessarily spread disinformation and fake news. They did, however, deceive users about their true motivations by using false identities.

In this study, we are mainly concerned with the following question: which accounts are truly harmful when it comes to disinformation? To answer this question, we focus on accounts’ activity on the X (former Twitter) platform from 2009 to 2021. Instead of focusing on whether an account is a bot or not, we distinguish between accounts that provide original content and accounts that merely amplify content. To gain better insight into an account’s activity, we also consider the range of topics that the account has been engaging with, as well as its activity history.

In the first part of the chapter, we address the problem of online political manipulation, the various tactics employed, and the political reasons behind them. More precisely, we explore concepts such as the bandwagon effect, astroturfing, and online toxicity. Then, in the second part, we analyze the methodology we chose to study the Greek Twittersphere. Our methodology employs a new schema for studying bot accounts affiliated with politicians, as well as a new way to measure online toxicity. In the last part, we discuss the findings, the limitations of our study, and proposals for further research.

The present study adds to a long tradition of academic research on bot activity online. We raise the question of whether a bot is harmful, promoting disinformation, and/or fostering online toxicity. Studying bot activity in the Greek online sphere also offers the benefit of examining these practices in an unregulated and usually overlooked side of the Greek internet.


2. Literature review

2.1 Political manipulation on X (former twitter)

Social media have turned into virtual spaces where conflicting narratives struggle to gain visibility [8]. As Terranova pointed out [9], the concept of an information society (limitless information) was soon challenged by the scarcity of attention, a quantifiable resource like other economic goods, open to “marketization and financialization.” In this power struggle, various actors compete in online environments, like social media platforms, to reach the widest audience possible and to win audiences’ attention. To do so, they use a variety of means. Since audiences’ attention is also sold to other interested parties, big tech companies designed social media operations to be as addictive as possible, raising concerns over the ethics of the attention economy [10].

By employing cutting-edge AI technologies, social media platforms offer their users personalized/tailored content [11], interactive features [12], as well as a sense of community [13, 14]. Interestingly, Shahbaznezhad et al. [11] show that users are more likely to engage with negative content. Research by Ferrara and Yang [15] suggested that negative messages tend to spread faster than positive ones. Similarly, tweets conveying negative emotions tend to be retweeted more often and more quickly than positive and neutral ones [16, 17], as do populist messages that provoke anger [18]. Pérez Curiel [19] demonstrated the strategies that right-wing populist actors follow to make their messages go viral on X: homogeneity of the message, limited topics, high volume of published tweets, and limited response. García Benítez-D’Ávila [20] identified 18 types of personal attacks populist politicians use to confront opposing voices and also demonstrated that the more vicious the personal attacks were, the more polarization and engagement they generated on X. Furthermore, Guldemond et al. [21] showed that users on X became more polarized and used more “uncivil” language when they followed deceitful opinion leaders, meaning accounts that spread fake news, rumors, conspiracy theories, and disinformation.

Reviewing the literature, we can agree that messages conveying emotions, especially anger, tend to go viral on platforms such as X. On top of that, we saw that populist politicians engage in toxic debates not only to silence opponents’ voices but also to gain traction for their message. But is this engagement coming from actual users? Are users on X so prone to these messages that they engage with and amplify them as soon as they come across such content? Or do politicians use other methods to fake the image of wide popular support? Based on Leibenstein’s bandwagon effect theory [22], researchers have shown that people are more likely to choose candidates they perceive as more popular or more likely to win [23, 24, 25]. This internalized human tendency toward conformity is heavily exploited by astroturfing tactics [14] that create the illusion of wide popular support on a given matter, influencing people to align their opinions or behavior accordingly. Astroturfing tactics disguise top-down activities, initiated by actors with political interests, as bottom-up demands [26] and can take place both offline (fake grassroots movements, demonstrations, etc.) and online (campaigns, bots, influencers, etc.). Astroturfing on social media has been the object of research in the context of elections [27] with the use of bots, as a phenomenon of multiple accounts on different platforms with a single author [28], in mapping various detection techniques [29], but most importantly as a coordinated phenomenon across diverse political and geographic contexts and different periods of time [7].

During the 2016 US presidential elections, there is compelling evidence to suggest that the Republican Party utilized bots to garner support for Donald Trump. Al-Rawi et al. [30] mapped out Twitter traffic related to Trump’s candidacy and discovered that at least a third was generated by bots and highly automated accounts. Similarly, Bryden and Silverman [31] have shown that computational propaganda from automated X accounts played a significant role in developing Trump’s online following. The use of bots carried on throughout the entirety of Trump’s presidency, picking up during significant events like the first Trump impeachment [32, 33]. These bots were instrumental in creating a false sense of wide support for Donald Trump, as well as in confusing and demobilizing the opposition by spreading alternative facts and influencing the agenda [34]. Following the bot purge by X (former Twitter) in 2018, Silva and Proksch [35] found that the radical right and several Eurosceptic politicians from across the European Union noted the biggest drops in followers, suggesting their extensive use of bots.

2.2 Anatomy of X amplifiers

Research on X (former Twitter) bots has explored a wide range of topics, reflecting the diverse concerns within the academic community. Some of the main themes that have emerged from these studies include technical aspects of creating bots [36, 37], the detection and identification of bots [38, 39, 40, 41], the influence and role of bots on X [40, 42, 43, 44], and the development of methodologies and tools for analyzing bot behavior [45, 46, 47, 48].

While extensive and diverse, research on X bots is not without limitations. Several key challenges and constraints have been identified in the existing literature. First of all, the diversity and complexity of content on X is a significant obstacle to establishing universal bot detection methods [49]. Other researchers have pointed out the scarcity of labeled datasets for training detection models [50] and, therefore, the insufficiency of machine learning techniques alone to identify bots [36], while available data may also lack reliability, affecting the validation of detection algorithms [51].

In this study, we argue that the automated nature of an account’s activity should not overshadow the character of that activity, since mis/disinformation can be spread by all types of accounts (automated, semiautomated, human-operated). By training new models with the help of VouliWatch coders, we put the spotlight on the accounts’ activity instead (whether they produce original content or just amplify content), while examining at the same time how these accounts perform regarding the spread of toxicity.

2.3 Online toxicity

Examples of online toxicity, such as the use of offensive language, trolling, bullying, harassment, physical threats, doxing, obscenity, and racial or identity-based hate, among other things, have been a constant presence in the online experience since the 1990s [52]. Though there was some research on cyberbullying and toxic behavior during online games before 2015 [53], academic research on online toxicity peaked alongside the study of social media’s negative by-products. Social media’s interactivity, especially in the form of commenting, also brought about an escalation of aggressive online rhetoric that certainly pre-existed [52]. There is no doubt that this escalation of toxic discourse has had a serious impact on the willingness to engage in public debate on the one hand, and has skewed the process of forming one’s personal opinion by misrepresenting or exaggerating public opinion around issues of “alleged” common concern on the other [54]. However, there is no consensus on what this online toxic behavior entails: some scholars maintain that online toxicity should only include harmful, disrespectful, discriminatory, or other hateful or abusive language [55, 56], while others include borderline unlawful or illegal activities such as doxing, online harassment, cyberbullying, and physical threats [57]. The definition of toxicity also varies from platform to platform, depending on community norms and linguistic patterns [58]. According to Google’s documentation [59], online toxicity is treated on a case-by-case basis, based on the guiding principle of banning users who violate terms and conditions or domestic/international laws. This categorization encompasses a wide range of language that is subject to the individual interpretation of the administrator/moderator of the message.

In this study, we categorized toxicity based on its qualitative differentiation, making a distinction on severity: verbal insults were marked as toxic, hateful messages and threats as severely toxic, and insults based on race, gender, and other characteristics as identity hate. In addition, based on Angela Nagle’s exposé on online subcultures and toxicity [6], we mapped the above categories onto an extreme left–extreme right political axis to determine the origin of online hate in the Greek X sphere. The reasoning for exploring the political origins of online toxicity is to determine whether the horseshoe theory, according to which the extreme left and the extreme right converge during times of crisis, holds true in the Greek case. A further aim is to determine whether bots are used by both extremes with the same goals and employing the same tactics.


3. Methodology

3.1 Approach, data, and specifications

In our approach, we decided to focus exclusively on Greek politicians who were active on Twitter (the targets) and their surrounding network of followers, since in this way we were able to correlate the content of the Greek politicians with the information dissemination ecosystem. We divided our datasets into three different subsets: (1) targets, with information collected about Greek politicians; (2) followers, with information collected for all the followers of each target up to November 2021; and (3) tweets, a sample of up to 800 tweets from each account within the previous datasets up to January 2022. The target dataset is a manually curated dataset compiled in collaboration with VouliWatch, with information about each Greek politician including FullName, District, Party, Link, and UserName (Table 1).

ID | Party | # Targets | # Used
KKE | KKE | 2 | 2
M25 | MERA 25 | 7 | 7
SRZ | SY.RI.ZA | 63 | 63
KIN | PASOK-KIN.AL | 18 | 18
ND | New Democracy | 125 | 63
EL | Greek Solution | 5 | 5
GOV | Government | 16 | 0

Table 1.

Targets.

To collect our data, we used the Twitter API: v2 to collect account-specific information (get-users-by) and followers (get-users-id-followers), and v1 to collect tweets (user-timeline). For each account, we collected basic information provided by the API, including ID, CreatedAt, Name, UserName, Description, Location, Protected, Verified, Followers, Following, Tweets, Listed, ProfileImageURL, URL, PinnedTweetID, PinnedTweetCreatedAt, PinnedLang, PinnedSource, and PinnedText, as well as additional enriched information including Target and Party (Table 2).

Type | Attributes collected
Tweet data | ID, CreatedAt, FullText, Hashtags, Media, URLs, Mentions, Source, Lang, FavoriteCount, RetweetCount, Link, UserName
Account data | ID, CreatedAt, Name, UserName, Description, Location, Protected, Verified, Followers, Following, Tweets, Listed, Favorites (v1), ProfileImageURL, URL, PinnedTweetCreatedAt, PinnedLang, PinnedSource, PinnedText
Shared data | Target, Party

Table 2.

Data collection.
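To make the collection step concrete, the sketch below shows how the three endpoint families named above (get-users-by, get-users-id-followers, user-timeline) can be queried. It is a minimal illustration rather than our production pipeline: it assumes a valid bearer token and the v1.1/v2 endpoints as they were available at the time of collection, and it omits pagination and rate-limit handling.

```python
import requests

BEARER_TOKEN = "..."  # assumption: an app-only bearer token with read access
HEADERS = {"Authorization": f"Bearer {BEARER_TOKEN}"}
USER_FIELDS = ("id,created_at,name,username,description,location,protected,"
               "verified,public_metrics,profile_image_url,url,pinned_tweet_id")


def get_users_by_username(usernames):
    """v2 get-users-by: account information for the target politicians."""
    r = requests.get("https://api.twitter.com/2/users/by", headers=HEADERS,
                     params={"usernames": ",".join(usernames), "user.fields": USER_FIELDS})
    r.raise_for_status()
    return r.json().get("data", [])


def get_followers(user_id, max_results=1000):
    """v2 get-users-id-followers: one page of a target's followers."""
    r = requests.get(f"https://api.twitter.com/2/users/{user_id}/followers", headers=HEADERS,
                     params={"max_results": max_results, "user.fields": USER_FIELDS})
    r.raise_for_status()
    return r.json().get("data", [])


def get_user_timeline(screen_name, count=200):
    """v1.1 user-timeline: up to 200 tweets per call (we sampled up to 800 per account)."""
    r = requests.get("https://api.twitter.com/1.1/statuses/user_timeline.json", headers=HEADERS,
                     params={"screen_name": screen_name, "count": count, "tweet_mode": "extended"})
    r.raise_for_status()
    return r.json()
```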

Firstly, we collected data on the 300 members of the Greek Parliament, including 17 members of the General Government, coming from the six political parties (KKE, MERA 25, SY.RI.ZA, PASOK-KIN.AL, New Democracy, and Greek Solution) composing the Greek Parliament. Targets who were not elected as members of the Greek Parliament but were appointed as members of the Government or the State apparatus were grouped in a separate category (GOV) and were not included in the final analysis of political preference. Of all politicians, 236 had an X (former Twitter) account. To avoid imbalanced data, we only used the data of the first 63 accounts related to the New Democracy party.

In the second stage, we collected data for all targets’ followers, creating a dataset consisting of approximately 4.68 million accounts, of which 1.5 million were unique (Table 3). Almost two-thirds of the accounts collected were inactive, showed no activity, or followed only one target. At the same time, popular targets such as political party leaders (i.e., the accounts of K. Mitsotakis, A. Tsipras, and G. Varoufakis), which enjoy high visibility from abroad, had followers (politicians, journalists, and other celebrities from abroad) who followed one, two, or all three of these targets at the same time without offering any content related to Greece. These accounts were also removed from our final dataset since they created an imbalance in the overall sample. Our final sample consisted of about 450 K accounts.

Total Followers | 4,677,796
Unique Followers | 1,560,996
Followers with >1 Targets | 449,686
Followers with >2 Targets | 255,229

Table 3.

Followers.
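As an illustration of this filtering step, the short sketch below deduplicates the collected follower lists and keeps only accounts that follow more than one target; the file name and column names (ID, Target) are assumptions consistent with the attributes listed in Table 2.

```python
import pandas as pd

# One row per (follower, target) pair, as returned by get-users-id-followers.
# Assumed columns: "ID" (follower account id) and "Target" (the politician followed).
followers = pd.read_csv("followers.csv")

total_rows = len(followers)                    # ~4.68 million rows
unique_accounts = followers["ID"].nunique()    # ~1.56 million unique accounts

# Number of distinct targets each follower account follows
targets_per_follower = followers.groupby("ID")["Target"].nunique()

# Final sample: accounts following more than one target (~450 K)
final_ids = targets_per_follower[targets_per_follower > 1].index
final_sample = followers[followers["ID"].isin(final_ids)]

print(total_rows, unique_accounts, len(final_ids))
```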

In the final stage, a sample of approximately 36.8 million tweets was collected in four different cycles, with mentions from January 2007 to January 2022, for each of the accounts included in the final survey. Of the tweets we collected, about 19.2 million were written in Greek or contained Greek text that we could use, while 39.18% of our sample consisted of primary content (tweets), 34.34% of retweets, 23.36% of replies, and 4.12% of quotes. Finally, we associated each tweet and account collected with the targets and political parties they were related to.

3.2 Account/user classification

Amplifiers are accounts that amplify content through likes and retweets, and their purpose is to increase the popularity of a post or to promote a narrative, for example, a commercial product or a political slogan. These accounts are occasionally engaged by the overall network to carry out attacks on political opponents, boost the impact of a hashtag, or contribute to the overall narrative by creating content (i.e., Twitter Storm platform). We often refer to these accounts as bots; however, this designation mainly refers to automated accounts or accounts controlled by a central system (botnet). In the case of amplifiers, their activity is not necessarily automated, but they have similar (or even identical) behavior to that of bots. In our research, we have included bots in the amplifiers category.

To detect the behavior of amplifiers we had to isolate them from the rest of the accounts that had normal behavior. The following categories of accounts were created:

  • Influencers: highly influential accounts, whose content is reproduced no matter what.

  • Active: accounts that interact with all the content and with a large number of topics. They actively participate in organized or unorganized discussions and deal with current topics.

  • Amplifiers: accounts with strong but at the same time topically restrained activity (i.e., politics and entertainment, politics and sports) and a much lower percentage of primary content.

  • Unknown: locked accounts, accounts without any activity, and accounts that we cannot classify in any of the above-mentioned categories.

  • New: accounts created up to 120 days before their detection. New accounts are identified solely by the difference between the date we collected them and the date the account was created.

Since we wanted to create a scalable machine learning model without depending on linguistic features, we used features derived exclusively from the accounts (Table 4).

User metadata | Derived features
Tweets | Number of Tweets
Followers | Number of Followers
Following | Number of Following
Favorites | Number of Favorites
Listed | Number of lists the account is on
Default Profile | Whether or not an account has the default profile
Verified | Whether or not an account is verified
Actions Frequency | (Tweets + Favorites) / Dates Since
Tweets Frequency* | Tweets / Dates Since
Reputation** | Followers / (Followers + Following)
Credibility | Listed / (Followers + Listed)
Followers Growth Rate* | Followers / Dates Since
Following Growth Rate* | Following / Dates Since
Favorites Growth Rate* | Favorites / Dates Since
Listed Growth Rate* | Listed / Dates Since
Followers/Following Ratio | Followers / Following
Tweets/Favorites Ratio | Tweets / Favorites

Table 4.

List of features used in our framework.

*Ref. [60].
**Ref. [40].
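As a minimal sketch of how these features could be derived and fed into a classifier, the code below computes the derived columns of Table 4 from the raw account metadata and fits a model on the annotated categories. The account age in days ("Dates Since"), the epsilon guard against division by zero, and the choice of a random forest are assumptions made for illustration, not a description of the exact model used in this study.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

EPS = 1e-9  # assumed guard against division by zero


def derive_features(accounts: pd.DataFrame, collected_at: pd.Timestamp) -> pd.DataFrame:
    """Compute the derived features of Table 4 from raw account metadata."""
    f = accounts.copy()
    # "Dates Since": days elapsed between account creation and collection (assumed)
    days_since = (collected_at - pd.to_datetime(f["CreatedAt"], utc=True)).dt.days + EPS

    f["ActionsFrequency"] = (f["Tweets"] + f["Favorites"]) / days_since
    f["TweetsFrequency"] = f["Tweets"] / days_since
    f["Reputation"] = f["Followers"] / (f["Followers"] + f["Following"] + EPS)
    f["Credibility"] = f["Listed"] / (f["Followers"] + f["Listed"] + EPS)
    f["FollowersGrowthRate"] = f["Followers"] / days_since
    f["FollowingGrowthRate"] = f["Following"] / days_since
    f["FavoritesGrowthRate"] = f["Favorites"] / days_since
    f["ListedGrowthRate"] = f["Listed"] / days_since
    f["FollowersFollowingRatio"] = f["Followers"] / (f["Following"] + EPS)
    f["TweetsFavoritesRatio"] = f["Tweets"] / (f["Favorites"] + EPS)
    return f


FEATURES = ["Tweets", "Followers", "Following", "Favorites", "Listed",
            "DefaultProfile", "Verified", "ActionsFrequency", "TweetsFrequency",
            "Reputation", "Credibility", "FollowersGrowthRate", "FollowingGrowthRate",
            "FavoritesGrowthRate", "ListedGrowthRate",
            "FollowersFollowingRatio", "TweetsFavoritesRatio"]

# Training on accounts annotated as Influencer / Active / Amplifier / Unknown / New:
# labelled = derive_features(annotated_accounts, pd.Timestamp("2021-11-30", tz="UTC"))
# clf = RandomForestClassifier(n_estimators=300, class_weight="balanced")
# clf.fit(labelled[FEATURES], labelled["Category"])
```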


3.3 Partisanship and hyper-partisanship ratio

To define partisanship, we tracked the sharing of tweets associated with X (former Twitter) accounts (targets) of known political valence. The valence ranged from −1, indicating left-leaning accounts, to +1, indicating right-leaning accounts, and was defined as shown in Table 5. This definition of partisanship assumes that following a target account implies approval of its content [61]. Research has shown that following an account is the strongest predictor of a user’s stance compared to retweeting, replying, etc. [62].

KKE | M25 | SRZ | KIN | ND | EL
−1 | −0.6 | −0.2 | 0.2 | 0.6 | 1

Table 5.

Political valence.

3.3.1 Partisanship ratio

In accordance with Nikolov et al.’s model [61], we defined the partisanship ratio of each user u as V_u = Σ_t p(t|u) · v_t, where p(t|u) is the fraction of the targets followed by user u that corresponds to target t and v_t is the political valence of target t (values from −1 to 1).

3.3.2 Hyper-partisanship ratio

Similarly, we defined the hyper-partisanship ratio of each user u as H_u = n_u / N, where n_u is the number of targets that user u follows and N is the total number of targets, that is, the fraction of all targets that user u follows.
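Both ratios can be computed directly from the set of targets each user follows; the sketch below is a minimal illustration, with the valence dictionary taken from Table 5, while the data structure (a list of party IDs for the targets a user follows) and the total number of targets (the sum of the “# Used” column of Table 1) are assumptions.

```python
# Political valence per party ID (Table 5)
VALENCE = {"KKE": -1.0, "M25": -0.6, "SRZ": -0.2, "KIN": 0.2, "ND": 0.6, "EL": 1.0}

TOTAL_TARGETS = 158  # assumed: sum of the "# Used" column in Table 1


def partisanship(followed_parties):
    """V_u: mean valence of the targets user u follows (Nikolov et al. [61])."""
    if not followed_parties:
        return 0.0
    return sum(VALENCE[p] for p in followed_parties) / len(followed_parties)


def hyper_partisanship(followed_parties, total_targets=TOTAL_TARGETS):
    """H_u: fraction of all targets that user u follows."""
    return len(followed_parties) / total_targets


# Example: a user following two New Democracy targets and one Greek Solution target
user = ["ND", "ND", "EL"]
print(partisanship(user))        # (0.6 + 0.6 + 1.0) / 3 ≈ 0.73 (right-leaning)
print(hyper_partisanship(user))  # 3 / 158 ≈ 0.019
```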

3.4 Tracing toxicity

A frequent debate regarding research on media platforms revolves around the term toxicity. However, as already stated, the term itself is poorly defined and often contradictory, encompassing everything from online harassment and bullying to negative commentary. To map online toxicity on the platform, we formed seven different categories: (i) hateful, (ii) insulting, (iii) threatening, (iv) racist, (v) sexist, (vi) using xenophobic rhetoric, and (vii) using nationalistic language (Table 6). These categories were clustered into wider groups. Every tweet that contained insulting language against a person was marked as toxic. Further on, tweets that included hateful language, threats, and/or insults were marked as severely toxic. Tweets targeting an individual or a group based on identity characteristics such as gender, ethnic minority, and/or religion were marked as identity hate.

Main categories | Attribute name | Description
Toxic | Insult | Insulting, inflammatory, or negative comments toward a person or a group of people; a very rude, disrespectful, or unreasonable comment that is likely to make people leave a discussion
Severe toxic | Hate | Offensive discourse targeting a group or an individual
Severe toxic | Threat | Describes an intention to inflict pain, injury, or violence against an individual or group
Hate speech* (identity-based toxic language) | Racism | Ideas or theories of superiority of one race or group of persons of one color or ethnic origin (anti-Semitism, anti-Roma speech, xenophobia)
Hate speech* (identity-based toxic language) | Anti_refugee | Against (im)migration, anti-refugees, against granting asylum
Hate speech* (identity-based toxic language) | Sexism | Beliefs about genders and the roles they play in society; gender stereotypes, homophobia, transphobia, patriarchy, sexual violence, harassment, gender violence
Hate speech* (identity-based toxic language) | Nationalism | Strong, often excessive, feelings of pride in and allegiance to one’s nation (usually the nation in which one is a citizen, although sometimes a nation with which one has ties via ethnicity or heritage) and its culture, or belief in its superiority

Table 6.

Categories used in defining toxicity.

*Ref. [63].


Vouliwatch, a Greek nonprofit and nonpartisan organization that monitors the Parliament of Greece and its activities, participated in the annotation process with 15 coders who marked 112,000 tweets posted from May 10, 2021, to July 17, 2021 [64], resulting in the training of an open-source machine learning model and a multilabel text classification model [65].
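As an illustration of how the released classifier could be applied to new text, the sketch below scores a tweet against the seven attributes of Table 6 and maps the triggered attributes onto the clustered severity groups. The model identifier, its label names, and the 0.5 decision threshold are assumptions; the call presumes the model of Ref. [65] can be loaded through the Hugging Face transformers library.

```python
from transformers import pipeline

# Assumed model identifier for the open-source multilabel model of Ref. [65]
classifier = pipeline("text-classification",
                      model="civic-information-office/comments-el-toxic",
                      top_k=None)  # return a score for every label (multilabel)

# Clustering rule described above: insult -> toxic; hate/threat -> severely toxic;
# identity-based attributes -> identity hate
SEVERITY_GROUPS = {
    "toxic": {"insult"},
    "severe_toxic": {"hate", "threat"},
    "identity_hate": {"racism", "anti_refugee", "sexism", "nationalism"},
}


def classify_tweet(text, threshold=0.5):
    """Return the Table 6 attributes and clustered severity groups triggered by a tweet."""
    scores = classifier([text])[0]            # list of {"label": ..., "score": ...} dicts
    attributes = {s["label"].lower() for s in scores if s["score"] >= threshold}
    groups = {g for g, attrs in SEVERITY_GROUPS.items() if attributes & attrs}
    return attributes, groups
```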


4. Results

In Figure 1, we see a timeline of the tweets collected from the accounts in our sample marked as amplifiers. Approximately 450,000 amplifier accounts produced 19 million tweets over the course of a decade. This accounts for more than half (52%) of the total tweets in the period under investigation. The first period of increased publishing takes place during 2015, a turbulent period for Greece with two national elections, the formation of a coalition government between the SY.RI.ZA and AN.EL. parties, and the imposition of capital controls on the Greek banks. Although another round of national elections took place in 2019, the levels of amplifier activity during that period remained rather low. It is from 2020 to 2022 that these accounts report unprecedented activity, with peaks around key events such as the Greece-Turkey border crisis in 2020, the COVID-19 pandemic crisis, and the period preceding the Russian invasion of Ukraine. Domestic politics was also shaken during the same period by the outbreak of the wiretapping scandal known as Predator Gate, which involved journalists, politicians, members of the armed forces leadership, and well-known businessmen as targets, with serious indications of government involvement.

Figure 1.

Tweets collected from the Greek X (former Twitter) sphere (n = 19,231,245).

The levels of partisanship of amplifier accounts are shown in Figure 2. As can be seen, amplifier accounts affiliated with the right of the political spectrum are more than double those affiliated with the left, and they also demonstrate higher levels of polarization.

Figure 2.

Levels of partisanship in amplifier accounts across the political spectrum.

Regarding toxicity, we see that the amplifier accounts affiliated with the right of the political spectrum promote more toxic tweets, both in volume and in intensity (see Figure 3, marked in red). Although accounts on the left of the political spectrum also tweet content marked as toxic, they lag behind in both volume and intensity. One exception is a number of amplifier accounts affiliated with the SYRIZA party that also tweet severely toxic content. However, based on our research, the theory of the two extremes cannot be supported, since right-affiliated accounts outperform the left ones in terms of polarization and radicalization.

Figure 3.

Levels of toxicity in produced tweets according to partisanship.

With the help of human annotators, we trained a consistent toxic language model. The results of this process are shown in Figure 4. Many of the tweets were marked with two or even three types of toxicity. However, as shown, very few of them contained actual threats, while insults (personal attacks) and sexism were by far the most common types of toxic content. Hate speech, defined as a verbal attack based on one’s characteristics [64], is also prevalent in almost half of the sample, followed by racist and anti-refugee rhetoric. Nationalistic messages were also traced in 45,766 tweets.

Figure 4.

Number of tweets containing toxic content (May 10, 2021 to July 17, 2021), n = 112,000.


5. Conclusions

This study analyzed the online patterns manifested by amplifier accounts (automated or not) in the Greek Twittersphere. The premise of the study was that whether an account was automated or human-operated was of little consequence compared to its online activity. Besides, a recent study published in Nature Scientific Reports [7] demonstrated that human amplifier accounts working under a central coordinating authority can easily match bot accounts’ activity (coincidentally during regular office hours) in terms of content creation and amplification. Additionally, a study conducted by MIIR [66] showed that during periods of low political activity, amplifier accounts switch from promoting their political sponsors to promoting commercial products. These trends indicate that in Greece, as elsewhere in the world, there is a robust industry led by advertising firms that provide shares, likes, and retweets to a wide range of customers, whether politicians, celebrities, businesses, or all of the above.

The study implements a scalable classification to predict amplification accounts on X (former Twitter) based on features derived exclusively from the accounts, without focusing solely on the level of automated activity. Given that AI developments are making bot detection an ongoing challenge, attention should shift to understanding the behavior of social bots, particularly in spreading disinformation, and the ultimate objectives of the actors deploying these bots. In this effort, we focused on the activity of the accounts affiliated with the Members of the Greek Parliament and their network. In our endeavor, we also looked at other tactics employed, shedding light on their toxic behavior. Therefore, we trained a machine learning model and a multilabel text classification model, both open-source and publicly available. Indeed, from our research it was evident that amplifier accounts, along with propaganda, also promote toxic content, mostly in the form of personal attacks, in an attempt to silence dissenting opinions.

The key takeaway from the study is that even though we could argue for the existence of an online public sphere in Greece up until 2019, the explosion of amplifier accounts from 2020 onwards forces us to understand the Greek Twittersphere as a tool for political manipulation. Additionally, even though a number of amplifier accounts support the predominant left-wing party SYRIZA, the dominance of the New Democracy party in terms of the sheer volume of content promoted by amplifier accounts is indisputable. The trends hold true when it comes to toxicity as well. Right-wing affiliated amplifiers promote severely toxic content, mainly insults, sexist remarks, and racist anti-refugee rhetoric. A possible reason behind this trend is that the job of amplifier accounts is to amplify hate and silencing tactics against opposition supporters by swarming the Twittersphere with hateful content. Lastly, another tactic is visible: from 2020 to 2022, amidst a maelstrom of political scandals involving the governing party, amplifier accounts associated with New Democracy promoted the official narrative that downplayed, refocused, or outright denied the governing party’s implication in these events.

To summarize, the Greek Twittersphere demonstrates all too familiar patterns when it comes to amplifier accounts. The existence of an industry that provides followers and amplifiers should be thoroughly investigated to better understand the political implications. On the other hand, the prevalence of hate content on the right wing of the political spectrum, as well as cyber harassment and silencing tactics, does not seem to corroborate the existence of a “two extremes” scenario. Plainly said, at least in the case of the Greek Twittersphere, toxicity is to a great extent unipolar.

In lieu of a conclusion, we need to warn, once again, about the concerning development of an increasingly contracting social sphere. With the development of AI technologies, online propaganda campaigns are already too difficult for users without a very good grasp of social media literacy to discern. Similarly, scholars analyzing social media should always be wary of the extent to which the social media environment is representative of genuine public discourse. Instead, we would suggest that social media are fertile ground for exploring astroturfing campaigns and other propaganda tactics and how these may influence the political and media agendas, and ultimately the electoral decision-making process.


Acknowledgments

We would like to thank Vouliwatch and the remarkable team of researchers whose dedication and expertise have been instrumental in the completion of this chapter. We are deeply grateful for your continuous support.


Conflict of interest

The authors declare no conflict of interest.

References

  1. Newman N, Fletcher R, Eddy K, Robertson CT, Nielsen RK. The Reuters Institute’s Digital News Report. 2023
  2. Papathanassopoulos S, Giannouli I, Archontaki I, Karadimitriou A. The Media in Europe 1990-2020. In: The Media Systems in Europe: Continuities and Discontinuities. Cham: Springer International Publishing; 2023. pp. 35-67
  3. Papathanassopoulos S, Karadimitriou A, Kostopoulos C, Archontaki I. Media concentration and independent journalism between austerity and digital disruption. [Internet]. Available from: https://www.diva-portal.org/smash/get/diva2:1559286/FULLTEXT01.pdf
  4. Hallin DC, Papathanassopoulos S. Political clientelism and the media: Southern Europe and Latin America in comparative perspective. Media, Culture and Society. [Internet]. 2002;24(2):175-195. DOI: 10.1177/016344370202400202
  5. Reporters without Borders (RSF). World Press Freedom Index. 2023. pp. 270-276. Available from: https://rsf.org/en/2023-world-press-freedom-index-journalism-threatened-fake-content-industry
  6. Nagle A. Kill All Normies: Online Culture Wars from 4chan and Tumblr to Trump and the Alt-Right. John Hunt Publishing; 2017. Neiwert D. Alt-America: The Rise of the Radical Right in the Age of Trump
  7. Schoch D, Keller FB, Stier S, Yang J. Coordination patterns reveal online political astroturfing across the world. Scientific Reports. [Internet]. 2022;12(1):4572. DOI: 10.1038/s41598-022-08404-9
  8. Neumayer C, Rossi L. Social media materialities and political struggle: Power, images, and networks. In: Proceedings of the IS4SI 2017 Summit Digitalisation for a Sustainable Society, Gothenburg, Sweden, 12-16 June 2017. Basel, Switzerland: MDPI; 2017
  9. Terranova T. Attention, economy and the brain. [Internet]. Culturemachine.net. 2012. Available from: https://www.culturemachine.net/wp-content/uploads/2019/01/465-973-1-PB.pdf [Accessed: Jun 17, 2024]
  10. Bhargava VR, Velasquez M. Ethics of the attention economy: The problem of social media addiction. Business Ethics Quarterly. [Internet]. 2021;31(3):321-359. DOI: 10.1017/beq.2020.32
  11. Shahbaznezhad H, Dolan R, Rashidirad M. The role of social media content format and platform in users’ engagement behavior. Journal of Interactive Marketing. [Internet]. 2021;53:47-65. DOI: 10.1016/j.intmar.2020.05.001
  12. Gangi D, Wasko PM. Social media engagement theory: Exploring the influence of user engagement on social media usage. Journal of Organizational and End User Computing (JOEUC). 2016;28(2):53-73
  13. Steinmetz C, Rahmat H, Marshall N, Bishop K, Thompson S, Park M, et al. Liking, tweeting and posting: An analysis of community engagement through social media platforms. Urban Policy and Research. [Internet]. 2021;39(1):85-105. DOI: 10.1080/08111146.2020.1792283
  14. Zhang J, Hamilton W, Danescu-Niculescu-Mizil C, Jurafsky D, Leskovec J. Community identity and user engagement in a multi-community landscape. In: Proceedings of the International AAAI Conference on Web and Social Media. Vol. 11. NIH Public Access; 2017. pp. 377-386
  15. Ferrara E, Yang Z. Quantifying the effect of sentiment on information diffusion in social media. PeerJ Computer Science. [Internet]. 2015;1(e26):e26. DOI: 10.7717/peerj-cs.26
  16. Tsugawa S, Ohsaki H. Negative messages spread rapidly and widely on social media. In: Proceedings of COSN ‘15: 2015 ACM Conference on Online Social Networks. Palo Alto, United States of America. Nov 2015. pp. 151-160. DOI: 10.1145/2817946.2817962 [Accessed: Feb 19, 2021]
  17. Stieglitz S, Dang-Xuan L. Emotions and information diffusion in social media—Sentiment of microblogs and sharing behavior. Journal of Management Information Systems. 2013;29(4):217-248
  18. Bobba G, Alberto CC, Cremonesi C. The age of populism? In: ECPR General Conference, Oslo; 2017
  19. Pérez CC. Trend towards extreme right-wing populism on Twitter. An analysis of the influence on leaders, media and users. Comunicación y Sociedad = Communication & Society. 2020;33(2):175-192
  20. García B-DH. Populism and polarization in the digital arena: Categorising and measuring political attacks on Twitter [Master’s thesis]. University of Twente; 2022
  21. Guldemond P, Casas Salleras A, Van der Velden M. Fueling toxicity? Studying deceitful opinion leaders and behavioral changes of their followers. Politics and Governance. [Internet]. 2022;10(4):336-348. DOI: 10.17645/pag.v10i4.5756
  22. Leibenstein H. Bandwagon, Snob, and Veblen effects in the theory of consumers’ demand. The Quarterly Journal of Economics. In: Breit W, Hochman HM, editors. Readings in Microeconomics. New York: Holt, Rinehart and Winston, Inc; 1950. pp. 115-116
  23. Riambau G. Do citizens vote for parties, policies or the expected winner in proportional representation systems? Evidence from four different countries using a multiple-type model. Party Politics. [Internet]. 2018;24(5):549-562. DOI: 10.1177/1354068816668669
  24. Zerback T, Töpfl F, Knöpfle M. The disconcerting potential of online disinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them. New Media & Society. 2021;23:1080-1098
  25. Bindra S, Sharma D, Parameswar N, Dhir S, Paul J. Bandwagon effect revisited: A systematic review to develop future research agenda. Journal of Business Research. [Internet]. 2022;143:305-317. DOI: 10.1016/j.jbusres.2022.01.085
  26. Kovic M, Rauchfleisch A, Sele M, Caspar C. Digital astroturfing in politics: Definition, typology, and countermeasures. Studies in Communication Sciences. [Internet]. 2018;18(1):69-85. DOI: 10.24434/j.scoms.2018.01.005
  27. Keller FB, Schoch D, Stier S, Yang J. Political astroturfing on Twitter: How to coordinate a disinformation campaign. Political Communication. [Internet]. 2020;37(2):256-280. DOI: 10.1080/10584609.2019.1661888
  28. Peng J, Detchon S, Choo K-KR, Ashman H. Astroturfing detection in social media: A binary n-gram-based approach. Concurrency and Computation. [Internet]. 2017;29(17):e4013. DOI: 10.1002/cpe.4013
  29. Mahbub S, Pardede E, Kayes ASM, Rahayu W. Controlling astroturfing on the internet: A survey on detection techniques and research challenges. International Journal of Web and Grid Services. [Internet]. 2019;15(2):139. DOI: 10.1504/ijwgs.2019.099561
  30. Al-Rawi A, Groshek J, Zhang L. What the fake? Assessing the extent of networked political spamming and bots in the propagation of #fakenews on Twitter. Online Information Review. [Internet]. 2019;43(1):53-71. DOI: 10.1108/oir-02-2018-0065
  31. Bryden J, Silverman E. Underlying socio-political processes behind the 2016 US election. PLoS One. [Internet]. 2019;14(4):e0214854. DOI: 10.1371/journal.pone.0214854
  32. Rossetti M, Zaman T. Bots, disinformation, and the first impeachment of U.S. President Donald Trump. PLoS One. [Internet]. 2023;18(5):e0283971. DOI: 10.1371/journal.pone.0283971
  33. Galgoczy MC, Phatak A, Vinson D, Mago VK, Giabbanelli PJ. (Re)shaping online narratives: When bots promote the message of President Trump during his first impeachment. PeerJ Computer Science. [Internet]. 2022;8(e947):e947. DOI: 10.7717/peerj-cs.947
  34. Alexandre I, Jai-sung Yoo J, Murthy D. Make tweets great again: Who are opinion leaders, and what did they tweet about Donald Trump? Social Science Computer Review. [Internet]. 2022;40(6):1456-1477. DOI: 10.1177/08944393211008859
  35. Silva BC, Proksch S-O. Fake it ‘til you make it: A natural experiment to identify European politicians’ benefit from Twitter bots. The American Political Science Review. [Internet]. 2021;115(1):316-322. DOI: 10.1017/s0003055420000817
  36. Subrahmanian VS, Azaria A, Durst S, Kagan V, Galstyan A, Lerman K, et al. The DARPA Twitter bot challenge. Computer (Long Beach Calif). [Internet]. 2016;49(6):38-46. DOI: 10.1109/mc.2016.183
  37. Tiwari MK, Pal R, Chauhan V, Singh V, Singh V, Dhamodaran DS, et al. A Python programming widely utilized in the development of a Twitter bot as a sophisticated advance technical tool. International Journal of Computing and Artificial Intelligence. [Internet]. 2024;5(1):102-108. DOI: 10.33545/27076571.2024.v5.i1b.88
  38. Martini F, Samula P, Keller TR, Klinger U. Bot, or not? Comparing three methods for detecting social bots in five political discourses. Big Data & Society. [Internet]. 2021;8(2):205395172110335. DOI: 10.1177/20539517211033566
  39. Chen X, Gao S, Zhang X. Visual analysis of global research trends in social bots based on bibliometrics. Online Information Review. [Internet]. 2022;46(6):1076-1094. DOI: 10.1108/oir-06-2021-0336
  40. Feng S, Wan H, Wang N, Li J, Luo M. TwiBot-20: A Comprehensive Twitter Bot Detection Benchmark. 2021. DOI: 10.48550/ARXIV.2106.13088
  41. Raj SC, Srinivas B, Kumar SP. Detecting malicious Twitter bots using machine learning. International Journal of Engineering Technology and Management Sciences. [Internet]. 2022;6(6):382-388. DOI: 10.46647/ijetms.2022.v06i06.068
  42. Rizoiu MA, Graham T, Zhang R, Zhang Y, Ackland R, Xie L. DEBATENIGHT: The role and influence of socialbots on Twitter during the 1st 2016 U.S. presidential debate. In: Proceedings of the Twelfth International AAAI Conference on Web and Social Media (ICWSM 2018). [Internet]. Vol. 12, no. 1. Palo Alto, CA. 25-28 June 2018. pp. 300-309. DOI: 10.1609/icwsm.v12i1.15029
  43. Hickey D, Schmitz M, Fessler D, Smaldino PE, Muric G, Burghardt K. Auditing Elon Musk’s impact on hate speech and bots. In: Proceedings of the International AAAI Conference on Web and Social Media. [Internet]. Vol. 17. 2023. pp. 1133-1137. DOI: 10.1609/icwsm.v17i1.22222
  44. Suarez-Lledo V, Alvarez-Galvez J. Assessing the role of social bots during the COVID-19 pandemic: Infodemic, disagreement, and criticism. Journal of Medical Internet Research. [Internet]. 2022;24(8):e36085. DOI: 10.2196/36085
  45. Brito F, Petiz I, Salvador P, Nogueira A, Rocha E. Detecting social-network bots based on multiscale behavioral analysis. In: Proceedings of the Seventh International Conference on Emerging Security System Technology (SECURWARE). Barcelona, Spain; 2013. pp. 81-85
  46. Chu Z, Gianvecchio S, Wang H. Bot or human? A behavior-based online bot detection system. In: Lecture Notes in Computer Science. Cham: Springer International Publishing; 2018. pp. 432-449
  47. Luceri L, Deb A, Badawy A, Ferrara E. Red bots do it better: Comparative analysis of social bot partisan behavior. [Internet]. arXiv [cs.SI]. 2019. Available from: http://www.arxiv.org/abs/1902.02765
  48. Tanaka T, Niibori H, Li S, Nomura S, Kawashima H, Tsuda K. Bot detection model using user agent and user behavior for web log analysis. Procedia Computer Science. [Internet]. 2020;176:1621-1625. DOI: 10.1016/j.procs.2020.09.185
  49. Daouadi K, Rebaï R, Amous I. Real-time bot detection from Twitter using the Twitterbot+ framework. Journal of Universal Computer Science. [Internet]. 2020;26(4):496-507. DOI: 10.3897/jucs.2020.026
  50. Rossi S, Rossi M, Upreti B, Liu Y. Detecting political bots on Twitter during the 2019 Finnish parliamentary election. In: Proceedings of the 53rd Hawaii International Conference on System Sciences (HICSS). 2020. pp. 2430-2439
  51. Echeverria J, Zhou S. Discovery, retrieval, and analysis of the “star wars” botnet in Twitter. In: Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining. New York, NY, USA: ACM; 2017
  52. Avalle M, Di Marco N, Etta G, Sangiorgio E, Alipour S, Bonetti A, et al. Persistent interaction patterns across social media platforms and over time. Nature. [Internet]. 2024;628(8008):582-589. DOI: 10.1038/s41586-024-07229-y
  53. Kwak H, Blackburn J, Han S. Exploring cyberbullying and other toxic behavior in team competition online games. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems – CHI’15. New York, New York, USA: ACM Press; 2015
  54. Anderson AA, Yeo SK, Brossard D, Scheufele DA, Xenos MA. Toxic talk: How online incivility can undermine perceptions of media. International Journal of Public Opinion Research. 2018;30(1):156-168
  55. Üzelgün MA, Giannouli I, Archontaki I, Odstrčilová K, Thomass B, Álvares C. Transforming toxic debates towards European futures: Technological disruption, societal fragmentation, and enlightenment 2.0. Central European Journal of Communication. Special Issue. 2024;35(1):82-102. DOI: 10.51480/1899-5101.17.1(35).711
  56. Petlyuchenko N, Petranová D, Stashko H, Panasenko N. Toxicity phenomenon in German and Slovak media: Contrastive perspective. Lege Artis. Language Yesterday, Today, Tomorrow. The Journal of University of SS Cyril and Methodius in Trnava. 2021;2:105-164
  57. Maharani A, Puspita V, Aurora RA, Wiranito N. Understanding toxicity in online gaming: A focus on communication-based behaviours towards female players in Valorant. Jurnal Syntax Admiration. [Internet]. 2024;5(5):1559-1567. DOI: 10.46799/jsa.v5i5.1137
  58. Mall R, Nagpal M, Salminen J, Almerekhi H, Jung S-G, Jansen BJ. Four types of toxic people: Characterizing online users’ toxicity over time. In: Proceedings of the 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society. New York, NY, USA: ACM; 2020
  59. Toxicity. [Internet]. Jigsaw. Available from: https://www.current.withgoogle.com/the-current/toxicity/ [Accessed: Jun 18, 2024]
  60. Yang KC, Varol O, Hui PM, Menczer F. Scalable and generalizable social bot detection through data selection. In: Proceedings of the AAAI Conference on Artificial Intelligence. 2020;34:1096-1103
  61. Nikolov D, Flammini A, Menczer F. Right and left, partisanship predicts (asymmetric) vulnerability to misinformation. HKS Misinformation Review. [Internet]. 2021. DOI: 10.37016/mr-2020-55
  62. Aldayel A, Magdy W. Characterizing the role of bots in polarized stance on social media. Social Network Analysis and Mining. [Internet]. 2022;12(1):30. DOI: 10.1007/s13278-022-00858-z
  63. Warner W, Hirschberg J. Detecting hate speech on the world wide web. In: Proceedings of the Second Workshop on Language in Social Media. Montréal, Canada: Association for Computational Linguistics; 2012. pp. 19-26. Available from: https://www.aclweb.org/anthology/W12-2103
  64. Civic Information Office. Toxic-el. Hugging Face. 2023
  65. Civic Information Office. Comments-el-toxic. [Internet]. Hugging Face. 2024. DOI: 10.57967/HF/2501
  66. MIIR. Tweeting in the Darkside of the web. 2019. Available from: https://miir.gr/titivismata-sti-skoteini-pleyra-toy-diadiktyoy/
