
Cheerleading Versus Responsive Propaganda: Evidence from Women 2 Drive Arrests in Saudi Arabia

By: Celia Benhocine
Abstract
This study presents an account of the Saudi regime's cyber troop activity on social media during the wave of arrests of Women 2 Drive activists in summer 2018. To determine the regime's overarching propaganda strategy on Twitter in the case of the Women 2 Drive activists' arrests, I weigh the theories of cheerleading and responsive propaganda against an empirical investigation involving a quantitative analysis of tweets produced by 5,929 Twitter accounts, sampled from the original 88,000 accounts taken down under suspicion of platform manipulation activity linked to the Saudi regime. The exploration of the data and a sentiment analysis of the sample suggest that the coordinated Twitter activity featured overwhelmingly Arabic and positive content, providing evidence that the Saudi regime's overarching strategy was to engage in cheerleading using automated and cyborg accounts. Contributing to the existing literature about state-sponsored astroturfing, the investigation aims to fill the gap left by accounts of the Saudi propaganda strategy that are limited to traditional news outlets.
Cette étude présente un compte rendu de l’activité des cybertroupes du régime saoudien sur les réseaux sociaux lors de la vague d’arrestations de militantes de Women 2 Drive à l’été 2018. Pour déterminer la stratégie globale de propagande du régime sur Twitter dans le cas des arrestations de militantes de Women 2 Drive, je compare les théories de la propagande cheerleading et de la propagande réactive avec une enquête empirique impliquant un nombre de tweets produits par 5 929 comptes Twitter, échantillonnés à partir des 88 000 comptes originaux supprimés sous le soupçon d’activité de manipulation de plateforme liée au régime saoudien. L’exploration des données et une analyse des sentiments de l’échantillon suggèrent que l’activité coordonnée sur Twitter comportait un contenu majoritairement arabe et positif, ce qui prouve que la stratégie globale du régime saoudien était de s’engager dans la propagande cheerleading en utilisant des comptes automatisés et cyborg. Contribuant à la littérature existante sur l’astroturf commandité par un État, l’enquête vise à combler le vide laissé par les récits de la stratégie de propagande saoudienne qui se limitent aux médias traditionnels.
Introduction
In September 2017, the Saudi regime announced the lifting of its driving ban on women, which would come into effect in late June 2018. The announcement was intended to shine a positive light on the kingdom's new Crown Prince Mohammed bin Salman (MBS), who aimed to elicit enthusiasm for his Vision 2030 plan to modernize the country. 1 It came at a time when protests against the ban were already a long-standing challenge to Saudi authorities. These protests were not a mass movement but involved organized driving on predetermined days, when 30 to 70 women would drive around the kingdom's major cities. Some women posted videos of themselves driving to social media, while others simply switched seats with their husbands to run their daily errands. 2 By October 2013, three events of this kind had been registered. 3 The repeated acts of civil disobedience, often going unnoticed by the police, drew a warning from the Interior Ministry spokesman General Mansur al-Turki. MBS's announcement was purposefully intended to silence the background work done by feminist activists as part of the Women 2 Drive campaign they had started in 2011, by instructing them to keep quiet about the announcement. 4 When some activists refused to comply with these instructions and continued to tweet about the issue, the regime cracked down on them with over a dozen arrests between May 14 and May 22, 2018, right before the effective implementation of the policy on June 24, 2018. 5 Repression continued into June. 6 Outspoken anonymous and non-anonymous citizens turned to Twitter to manifest their discontent in support of the feminist activists, who currently cumulate approximately 1.2 million followers. 7 Simultaneously, state-sponsored media channels and newspapers launched smear campaigns against the women concerned.
The anecdotal evidence regarding the Saudi propaganda strategy during the event in traditional news outlets is clear — many accounts suggest the use of intimidation tactics by the Saudi Press Agency. For example, the newspaper Okaz published related articles using the headline "No Place for Traitors Among Us," 8 while the official Saudi Press Agency accused the activists of "breach[ing] (Saudi) social structure and mar[ring] the national consistency." 9 However, there are no accounts of regime-affiliated cyber troop activity – "government or political party actors tasked with manipulating public opinion online" – on social media during the event, 10 except for Twitter stating that suspicious activity occurred, warranting the suspension of over 88,000 Saudi-linked accounts, 5,929 of which were later disclosed. 11 In light of this, this paper seeks to answer the question: What was the Saudi regime's overarching propaganda strategy on Twitter in the case of the Women 2 Drive activists' arrests? Two theories, those of cheerleading and responsive propaganda, emerge as potential explanations to categorize Saudi astroturfing efforts in this context. I argue that through coordinated cyber troop activity, the state-backed information operation that occurred during the summer 2018 arrests of Women 2 Drive activists produced mainly positive content akin to cheerleading, targeted towards an Arabic-speaking domestic audience. "Positive" content means that the content analyzed uses positively charged words, whereas cheerleading specifically refers to positively charged pro-regime messages such as praise and support. Social media has been documented as a powerful tool for activism 12 – in an organizational capacity, for example – but digital authoritarianism has long dislodged this status as the frontier of authoritarian learning. 13 More specifically, this paper adds to the existing literature about the circumstances in which authoritarian regimes may use cheerleading as a propaganda strategy.
Using a dataset of 5,929 accounts deactivated by Twitter and released through its transparency initiative, I analyze tweet activity between 2017 and 2019 and perform a content analysis on a sample of tweets between May 2018 and August 2018. According to Twitter, all of the accounts contained in the data were part of a network of state-backed information operations managed by a digital marketing and communications company called Smaat. 14 In the first two sections of this paper, I outline the theoretical framework used to investigate the cyber troop activity and present three hypotheses addressing the target audience and messaging valence (whether a message is positively or negatively charged). 15 Then, I highlight the data and methodology used to carry out the analysis. In the two sections that follow, I delve into sample composition and provide evidence of coordination in the operation. After analyzing the results of the inquiry, I bring nuance to the findings by considering the limitations of the study. The last section concludes.
Theoretical Framework

The literature about authoritarian propaganda strategies mainly deals with domestic electoral constraints (i.e., the absence or presence of national elections and their bindingness). Huang suggests that authoritarian regimes use disinformation to signal their strength and dissuade dissent. 16 It is widely known that many governments deploy significant resources to produce propaganda through state media alongside other displays of power. Similarly, Baggot Carter & Carter draw on state-backed newspaper data to argue that a regime facing little to no electoral constraint uses absurdly positive content to intimidate citizens rather than persuade them of the regime's merits. 17 This is because the lack of pushback or ideological competition against the regime narrative to which everyone is exposed creates the collective recognition of a credible repression threat. In other words, even if the population knows the content is absurd, it remains impossible to openly challenge it. Finally, King et al. provide evidence from China that cheerleading is used to draw attention away from inflammatory topics to limit the potential for popular mobilization arising on social media. This counters journalistic accounts claiming that government workers – hired to conduct disinformation and censorship operations – engage in argumentation online. 18 Both the regime and the population know that state-backed information is most likely false and unreliable, which lowers the incentives to consume propaganda-laden content. However, this does not matter from the regime's perspective so long as the government apparatus employs other traditional repressive and censorship tools to prevent the majority from overtly challenging its rule and to maintain a credible repression threat. These three accounts suggest that minimal or nonexistent electoral pressures in an authoritarian context incentivize the regime to produce positive, cheerleading-type propaganda content.

Opposing this view is another strand of the literature that advances the possibility of authoritarian propaganda acting in response to public opinion. Jin applies textual analysis to People's Daily articles in China and reports a change in messaging valence from positive to negative regarding human genome editing technology following a wave of public disapproval. 19 The theoretical framework of responsive authoritarian propaganda relies on two conditions. First, the issue at hand must not threaten the regime's legitimacy and perceived ability to rule. Second, the issue must be relatively new so that the regime's discourse concerning it is more malleable without the narrative appearing unstable. With the first condition in mind, it is evident that electoral constraints are not the only factor influencing propaganda strategies. This evidence from China concerns a regime shifting its discourse to reflect public opinion, rather than shifting its message valence while still going against public opinion. I describe a theoretical possibility of the latter in the Saudi Arabian case in the Hypotheses section.
Limited theory exists about disinformation strategies by authoritarian regimes in the Middle East that might apply more readily to the Saudi Arabian case than evidence from China does. Previous work regarding social media and disinformation in the Middle East and North Africa has addressed civil society's potential for organization and monitoring, 20 and has documented general trends in social media manipulation by state and non-state actors. 21 Many parallels can be drawn between China and Saudi Arabia to defend the use of the chosen literature as it relates to this paper's research question. First, both states have similar information environments based on Freedom House's measures of Internet freedom. Both countries have the same score on several indicators such as self-censorship by journalists, manipulation of information towards political interests, diversity of the informational landscape, and economic/regulatory constraints on users' ability to publish content online. 22 Secondly, China and Saudi Arabia were reported to have similar cyber troop capacities in terms of spending, communication strategies, fake accounts, as well as overall messaging and valence. 23
Hypotheses
This framework thus presents a puzzle concerning the Saudi Arabian regime's propagandistic tendencies in its Twitter activity. Both theories are supported by documented evidence of Saudi cyber troops adopting government cheerleading and opponent attack strategies. 24 Let us review the theoretical implications of cheerleading and propaganda responsiveness. Contrary to China, the risk of mobilization from movements originating in Saudi Arabia is much lower. Indeed, there are several accounts documenting thousands of yearly protests in China, 25 including in the academic literature, 26 but that is not the case in Saudi Arabia. Thus, despite the "explosion of advocacy" on Twitter by women activists, 27 there may not have been a tangible threat of social mobilization to the regime. However, it is also important to note that unlike the Chinese government, the Saudi Arabian government does not differentiate between regime criticism that does and does not have collective action potential. In the aftermath of the Arab Spring, the palace implemented its 2014 decree labelling any criticism of the regime as a punishable act of terrorism. 28 This includes association with any political movement (including on social media) and sympathizing with individuals under regime scrutiny. 29 From the regime's perspective, then, the definition of "collective action potential" is incredibly broad. This implies that the threshold for activities the regime perceives as a threat is lower than China's, justifying its aim to distract under circumstances broader than those concerned only with the threat of offline mobilization. Distraction, rather than meaningful engagement with dissidents, describes the process of creating so many pro-regime posts that they effectively drown out any content deemed problematic.
This restricts the probability that any user will see problematic posts, while engagement might create more visibility for them. 30 Accordingly, the cheerleading theory could hold in this situation. State-backed accounts could have used positive message valence on Twitter combined with content amplification strategies to divert attention away from the condemnation of physical repression.
On the other hand, it is also possible that positive messages praising the regime for its forward-looking attitude in allowing women to drive could have shifted towards negative attacks on dissidents on Twitter. Since a concession about women's right to drive had already been made within the broader ideological framework enforcing Islamic law that prevents women from being immodest, the issue of women's driving does not fundamentally and politically challenge the regime. This satisfies the first condition of the responsive authoritarian propaganda framework. However, the second condition is not satisfied since the campaign concerning a woman's right to drive was a long-standing issue. Because of this, I hypothesize that a cheerleading strategy would dominate the suspected disinformation event. The alternative explanation would be to observe negative content or content that disparages the opposition (those who criticize the regime and its actions), but this is less convincing because taunting users would detract from regime distraction efforts online. Even in cases of offline repression of online commentators, Pan and Siegel showed that while arrests decreased the activity of those arrested, 31 they also led followers of the detainees to become more active and engaged with the issue at hand than they previously were.
The cheerleading hypothesis implies that the disinformation campaign would be aimed towards a domestic public, whose discontent could transform into a threat to the system's authority. Language is a significant feature of the Saudi response because the distraction associated with the cyber troop activity should target a domestic audience, which would be done using Arabic. On the other hand, an attempt to save face on the world stage so as not to tarnish the Crown Prince's reputation as a reformer would be more likely to use English. In other words, the responsiveness hypothesis could exhibit a growing proportion of tweets aimed at a foreign and Western public. This leads to the formulation of hypothesis 1 and its alternative (null) hypothesis:
H1: The Twitter cyber troop operation produced more Arabic than English content.
H1 (null): The Twitter cyber troop operation produced similar amounts of Arabic and English content.
Next, if the cheerleading theory holds for the overall campaign, the average valence of the Tweets should be predominantly positive. Alternatively, the responsiveness approach would predict the opposite.
H2: The Twitter cyber troop operation produced mainly positive content.
H2 (null): The Twitter cyber troop operation produced mainly negative content.
The third hypothesis seeks to elucidate whether there is a difference in strategy based on the target audience, since a domestic cheerleading campaign should exhibit a higher proportion of positive Arabic tweets compared to English ones. A potential reason for differing strategies across languages is that the Saudi government would be more concerned with distraction domestically and does not intend to distract a foreign audience from the tumults occurring at home.
H3: The Twitter cyber troop operation produced differentially positive Arabic content compared to English tweets.
H3 (null): The Twitter cyber troop operation did not produce differentially positive Arabic content compared to English tweets.
Data and Method
To answer the research question, I used data released in December 2019 through Twitter's transparency initiative as part of its archive of information operations. Twitter reported the suspension of more than 88,000 accounts for violating its platform manipulation rules and linked them to a large state-backed information operation originating from Saudi Arabia. Twitter pre-filtered these accounts to remove those posting unrelated spam content. The dataset I used thus contains a representative sample of those filtered accounts, for a total of 5,929 accounts. First, I investigated trends in coordination and automated activity between Summer 2017 and 2019.
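As a minimal sketch of this loading step, the snippet below reads a local copy of the Twitter disclosure and restricts it to the Summer 2017 to 2019 window. The file name saudi_arabia_tweets.csv is a placeholder, and the tweet_time column is assumed to follow the standard format of Twitter's information operations releases.

# Sketch: load the disclosure and keep the Summer 2017 - 2019 window
library(readr)
library(dplyr)
library(lubridate)

tweets <- read_csv("saudi_arabia_tweets.csv")  # hypothetical local file name

tweets <- tweets %>%
  mutate(tweet_time = as_datetime(tweet_time)) %>%
  filter(tweet_time >= as_datetime("2017-06-01"),
         tweet_time <  as_datetime("2019-09-01"))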

Then, I selected observations between May 1 and August 31, 2018, to capture the disinformation event. Although the exact period of repression occurred from May to July, August observations were kept because the social stress caused by the event likely persisted for some time on both the civilian and regime side. I explored trends in the number of tweets and account creation over time in the whole sample to gain some insight regarding the level of coordination in the suspected operation. To this same end, I also summarized the top 5 tweeting platforms used by tweeters in the sample, as well as their follower and following data.

The next step was to test H1 using tweeting frequency by language and proportion calculations. I tested H2 and H3 by performing a sentiment analysis. To begin, I selected Arabic and English tweets, cleaning the text data to remove URLs, punctuation, and random strings of characters. Upon manual examination of the data, many tweets contained lines of randomly mixed letters and numbers. Since I could not detect a clear pattern of occurrence, I coded the program such that all English words longer than ten letters were removed. For Arabic tweets, I removed letters from the Latin alphabet altogether to prevent the translation program from translating characters unnecessarily. Despite this step being a potential limitation, I assumed that the overall valence of a tweet should not change by removing the few English words that occurred in Arabic tweets. Furthermore, legible English words within Arabic tweets were a rare occurrence in the data, which suggests that any strategy pertaining to foreign audiences would be picked up in English-language tweets. Before proceeding to the text analysis, the Arabic tweets were translated to English using the Google Cloud Translation API. With the English and translated Arabic tweets in a single set, I coded a binary variable indicating the language of origin to differentiate between message valence in the Arabic and English subsets. The Quantitative Discourse Analysis package in R evaluated the text sentiment of each tweet on a linear scale from -1 to 1, with -1 representing the most negative and 1 representing the most positive score. The value was then converted into positive, negative, and neutral directional qualifiers. To substantiate that sentiment analysis can be used as evidence of cheerleading, I provide a qualitative analysis of 10 randomly selected tweets with a sentiment score above 0.3 to evaluate whether the content corresponds to cheerleading. I chose a threshold of 0.3 to get a lower bound on sentiment and observe whether that still corresponded to cheerleading. The 10 tweets sampled directly or indirectly referenced Mohammed bin Salman either in the tweet text or in the hashtag, and contained words of encouragement, religious references attesting to his glory, or general praise. This confirms the viability of sentiment analysis as a measure of cheerleading. The complete table of tweets and their scores can be found in the Appendix.
The most positive Tweet with a score of 1 was originally in English and read: “We trust you.. We are with you” with the hashtag “MBS.” The least positive tweet with a score of 0.31 was originally in Arabic and read: “A strong speaker, a very smart man, a brave leader, a politician, he reads things to the smallest detail and has a futuristic outlook.”
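The following sketch condenses the cleaning, translation, and scoring pipeline described above. It assumes the tweet_text, tweet_time, and tweet_language columns of Twitter's release format, uses the googleLanguageR package as one possible wrapper for the Google Cloud Translation API, and scores sentiment with the polarity function of the qdap (Quantitative Discourse Analysis) package; it is illustrative rather than a reproduction of the original code.

# Sketch: clean, translate, and score the May-August 2018 tweets
library(dplyr)
library(stringr)
library(lubridate)
library(qdap)            # Quantitative Discourse Analysis package
library(googleLanguageR) # wrapper for the Google Cloud Translation API (assumption)

clean_text <- function(x, arabic = FALSE) {
  x <- str_remove_all(x, "https?://\\S+")         # strip URLs
  x <- str_remove_all(x, "[[:punct:]]")            # strip punctuation
  if (arabic) {
    x <- str_remove_all(x, "[A-Za-z]+")            # drop Latin characters in Arabic tweets
  } else {
    x <- str_remove_all(x, "\\b[A-Za-z]{11,}\\b")  # drop garbled strings over ten letters
  }
  str_squish(x)
}

# Restrict to the event window (May 1 - August 31, 2018)
window <- tweets %>%
  filter(tweet_time >= as_datetime("2018-05-01"),
         tweet_time <  as_datetime("2018-09-01"))

en <- window %>% filter(tweet_language == "en") %>%
  mutate(text_clean = clean_text(tweet_text), arabic = 0)
ar <- window %>% filter(tweet_language == "ar") %>%
  mutate(text_clean = clean_text(tweet_text, arabic = TRUE), arabic = 1)

# Translate Arabic tweets to English (requires Cloud Translation credentials)
ar$text_clean <- gl_translate(ar$text_clean, target = "en")$translatedText

corpus <- bind_rows(en, ar) %>% filter(text_clean != "")

# Score sentiment and convert scores into directional qualifiers
pol <- polarity(corpus$text_clean)
corpus$score <- pol$all$polarity
corpus$direction <- case_when(corpus$score > 0 ~ "positive",
                              corpus$score < 0 ~ "negative",
                              TRUE             ~ "neutral")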
Who is Tweeting? Exploring Sample Composition
When analyzing the propaganda strategy for the event at hand, it is important to consider which accounts are represented in the sample. It is likely that some accounts that were part of the same operation were not taken down by Twitter, especially official accounts that did not engage in behavior akin to spamming but still propagated the same message as the overall influence campaign. Thus, the sample most likely contains bot and cyborg accounts. Bot accounts are completely automated, whereas cyborg accounts use a mix of automation and human activity. 32 Since the data consisted of disabled accounts, it was not possible to determine whether accounts were bots by using algorithms such as Bot or Not. I thus relied on two indicators suggested by Hindman and Barash. 33 First, automated accounts may have high and proportional ratios of followers to following accounts. Second, they overwhelmingly use third-party applications to post at predetermined intervals and to like, retweet, and follow large numbers of users. Table 1 and Table 2 show summary statistics for the accounts considered between May 1 and August 31, 2018. In the first quartile of Table 1, we observe that the following and follower counts are nearly equal, but the ratio diverges as we consider higher quartiles. Half of the users have around 9,000 followers and 1,000 followed accounts, while the top 25 percent of users have nearly 50,000 followers and 8,000 followed accounts. These numbers are surprising, especially at the median. They indicate that if these accounts represent real people, they most likely engaged in inauthentic activity, since they were flagged by Twitter. Table 2 provides more insight on this supposition. The analysis revealed 126 different Twitter client apps, and the top 5 apps combined make up only 15 percent of the total. The regular Twitter for iPhone, Twitter for Android, and Twitter Web Client represent a minimal portion of all platforms. The average Twitter user is not engaged enough to use third-party apps to maximize their activity. Since the sample mostly contains third-party client apps, it is safe to infer that accounts with high follower counts at the top of the distribution may have been authentic but posted content in an automated way, whereas accounts with lower follower counts were completely automated. This preliminary analysis reveals that most accounts included in the sample are either bots or cyborg accounts. Since the differentiation between bot and cyborg accounts cannot be ascertained, the rest of the analysis does not differentiate between bot-produced and manually produced content.
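The two indicators can be computed in a few lines of R, sketched below under the assumption that the release provides userid, follower_count, following_count, and tweet_client_name columns.

# Sketch: automation indicators on the event-window data
library(dplyr)

# Indicator 1: follower / following distributions (Table 1)
user_stats <- window %>% distinct(userid, .keep_all = TRUE)
quantile(user_stats$follower_count,  probs = c(0.25, 0.5, 0.75), na.rm = TRUE)
quantile(user_stats$following_count, probs = c(0.25, 0.5, 0.75), na.rm = TRUE)

# Indicator 2: share of tweets posted from each client app (Table 2)
window %>%
  count(tweet_client_name, sort = TRUE) %>%
  mutate(share = n / sum(n)) %>%
  slice_head(n = 5)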

Troops That Tweet Together, Stay Together: Assessing Evidence of Coordination
Figure 1 shows an enormous spike in tweeting activity starting in May 2018. From June 2017 to June 2018, the monthly average remained around 20,000 tweets. From the beginning of May 2018, this number increases, peaking in June at roughly 3.5 times the previous average, and remains within a range of 35,000 to 60,000 tweets until the end of August 2018. Tweet frequency then decreases starting in September and remains around 30,000 thereafter. This timely and disproportionate amount of activity relative to the previous year corresponds to the event of interest and is indicative of coordinated activity.
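A sketch of the monthly aggregation behind Figure 1, continuing from the tweets data frame loaded earlier:

# Sketch: monthly tweet counts over Summer 2017 - 2019 (Figure 1)
library(dplyr)
library(lubridate)
library(ggplot2)

monthly <- tweets %>%
  mutate(month = floor_date(tweet_time, "month")) %>%
  count(month)

ggplot(monthly, aes(month, n)) +
  geom_line() +
  labs(x = NULL, y = "Tweets per month")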

Table 1: Follower and Following Count Summary Statistics

Table 2: Top 5 Twitter Client Platforms
Similarly, Figure 2 corroborates this result: the second highest peak in the number of accounts created at one time occurs in the first quarter of 2018, right before the disinformation event. There are two other significant peaks in account creation in 2013 and 2014. From 2012 to 2014, Twitter usage rates boomed in Saudi Arabia, with a year-on-year growth rate of 42 percent between 2012 and 2013 (Marcello, 2018). 34 It is thus possible that surges in account creation during that two-year span reflected the overall trend of Twitter penetration in Saudi Arabia. In addition, the same time frame corresponds to the post-Arab Spring period, when some accounts may have been created to guard against the attitudinal shift towards mobilization that resulted from the Arab Spring (BBC, 2014). 35
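The account-creation series behind Figure 2 can be sketched in the same way, assuming an account_creation_date column in the release:

# Sketch: quarterly account-creation counts and the largest peaks (Figure 2)
library(dplyr)
library(lubridate)

creations <- tweets %>%
  distinct(userid, .keep_all = TRUE) %>%
  mutate(quarter = floor_date(as_date(account_creation_date), "quarter")) %>%
  count(quarter)

# The 2013-2014 and early-2018 surges show up as the largest quarterly counts
creations %>% arrange(desc(n)) %>% slice_head(n = 5)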
Analyses of the Hypotheses and Discussion
The first condition for the cheerleading hypothesis to hold is that the Saudi operation should be mainly targeted towards the domestic population. Table 3 shows that Arabic tweets make up the majority of the sample. Only 3.5 percent of all the tweets considered for the time frame of interest are in English, which is very small considering that it is the second most used language by the kingdom's cyber troops. A potential explanation for this is that aside from the domestic population, cyber troop activity may target immediate regional populations in the Middle East and North Africa who mostly consume content in Arabic. For instance, the Saudi government was reported to have produced 34 percent of all Twitter content mentioning the Libyan civil war at the time of the Libyan National Army's (LNA) advance towards Tripoli in March of 2019 (Centro Studi Internazionali, 2019). 36 The operation flooded the platform using hashtags such as #securethecapital in Arabic and was intended to promote regional legitimacy for the LNA as an opposing party to the UN-backed Government of National Accord. A caveat to this explanation is that Arabic dialects differ between countries, which could hinder comprehension across audiences between Libya and Saudi Arabia, for instance. However, using nonstandard Arabic in disinformation campaign materials would defeat the core purpose of exposing a maximum number of users to the chosen narrative. The present analysis cannot make any conclusions as to the spread and reach of tweets in the Summer of 2018, but it is reasonable to assume that the main target audience was Arabic speakers.

Figure 3 shows the same result while providing some nuance about the proliferation of English publications. Compared to August and September numbers, English tweet numbers are visibly higher during the most inflammatory period of the event and reach as many as 2,000 tweets per day. Around July 16, English tweets drop and remain insignificant for the rest of the period. For purposes of scale, I added the tweet frequency for the Catalan language, which was the third most frequent language of tweets in the sample. A potential explanation for this could stem from the Catalonian political parties' sometimes antagonistic position to the Spanish parliament regarding diplomatic and economic ties between Spain and Saudi Arabia. 37 Given the results in Table 3 and Figure 3, hypothesis 1 is supported.
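A sketch of the H1 calculations, assuming ISO language codes ("ar", "en", "ca") in the tweet_language column and the event-window subset built earlier:

# Sketch: language shares (Table 3) and daily counts by language (Figure 3)
library(dplyr)
library(lubridate)

# Share of tweets by language over May 1 - August 31, 2018
window %>%
  count(tweet_language, sort = TRUE) %>%
  mutate(share = n / sum(n))

# Daily counts for the three most frequent languages
window %>%
  filter(tweet_language %in% c("ar", "en", "ca")) %>%
  mutate(day = as_date(tweet_time)) %>%
  count(tweet_language, day)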



The next piece of evidence for determining whether the cyber troop operation surrounding the arrest of feminist activists in the Summer of 2018 corresponded to the cheerleading or the propaganda responsiveness framework comes from the sentiment analysis. Hypothesis 2 states that overall, messaging valence should be positive across both English and Arabic tweets. This part of the analysis takes a random sample of 100,000 observations from the dataset. Table 4 shows the results rounded to one decimal place. Supporting hypothesis 2, the overall percentage of positive tweets is the highest and constitutes a majority. Negative content represents the smallest fraction of the overall sample. Neutral content represented 27.4 percent of the sample. This may indicate that aside from the suspected cheerleading operation, part of the content produced may have served to drown out participatory tweets aiming to share information about the ongoing situation.

The results for hypothesis 3 show that Arabic content was not differentially positive compared to English content. Nonetheless, positive content represents more than 50 percent of the sample in both cases. Moreover, although the percentage of positive English tweets surpasses that of Arabic tweets by 5 percentage points, the remaining proportions of neutral and negative content are very similar. This suggests the cyber troop operation adopted a parallel strategy in both languages. It is interesting that English tweets were mostly positive because the cheerleading framework does not make any predictions about authoritarian propaganda targeting a foreign audience in a language other than that of the regime's own country. Apart from pure signaling, there is no apparent reason why a regime should be concerned with flexing its repressive ability over a foreign audience, given that it does not have to worry about social mobilization from that population. In most cases, foreign media have covered and condemned issues regarding repression, but no tangible consequences have ensued. Comparing the valence of Arabic tweets in Table 5 to the overall valence of tweets in Table 4, it is clear that the overall messaging valence of the operation was driven by the high proportion of Arabic tweets relative to English ones. In the context of the online backlash in response to the arrests of feminist Women 2 Drive activists in Saudi Arabia, evidence from the three hypotheses points to the plausibility of the cheerleading strategy. According to the cheerleading theoretical framework, the cyber troop activity suggests that the regime aimed to demotivate online collective action over its repressive measures while re-affirming its dominance over public discourse in the Twitter online space.
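The cross-tabulations behind Tables 4 and 5 can be sketched as follows, continuing from the scored corpus built in the earlier pipeline sketch:

# Sketch: valence shares overall (Table 4) and by original language (Table 5)
library(dplyr)

set.seed(2018)
sample_scored <- corpus %>% slice_sample(n = 100000)

# Overall shares of positive, negative, and neutral tweets
sample_scored %>%
  count(direction) %>%
  mutate(share = round(100 * n / sum(n), 1))

# Shares by original language; arabic == 1 marks translated Arabic tweets
sample_scored %>%
  count(arabic, direction) %>%
  group_by(arabic) %>%
  mutate(share = round(100 * n / sum(n), 1)) %>%
  ungroup()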


Limitations
When interpreting the results, it is important to keep in mind the limitations that challenge the external and internal validity of the study. As addressed in the sample composition section of this paper, the analysis likely reflects only the content published by bot and cyborg accounts. The data does not contain any information about official state accounts or accounts run by influential figures in mainstream media that have a significant following and may disseminate content with a different valence than that analyzed here. These accounts were unlikely to have been shut down by Twitter and are thus not reflected in the dataset. It is not possible to assert whether the cheerleading theory holds better than the responsive propaganda theory for this subset of cases, and further research aimed at these cases would be needed to draw such conclusions. Therefore, the results cannot be generalized to the entirety of Saudi cyber troop activity surrounding the arrests and beyond. Additionally, the analysis cannot make any causal links attributing the observed positive valence of tweets to the event itself. Quantitative and systematic evidence can only confirm that suspicious online activity occurred in parallel, with patterns that suggest cheerleading.

The remaining limitations concern internal validity. It was not possible to assess the accuracy of the translation of Arabic tweets due to language limitations, which means the valence results depend entirely on the quality of the Google Cloud Translation API. Though more developed and precise than the traditional Google Translate platform, the lack of an accuracy measure could be a concern. However, it is reassuring that the message valence of both languages' tweets was similar. Equally important is the operation of the content analysis package. The dictionary contains only 1,280 positive words and 2,952 negative words, which means that any word not covered by the dictionary is not considered in the computation. Relatedly, the dictionary takes into account all positive words it recognizes, regardless of context. Some tweets may merely constitute amplification and distracting positive content unrelated to the regime, politics, or the situation regarding the arrests. Furthermore, the direction of the sentiment may be overestimated in the positive or negative direction because the conversion is not nuanced enough to distinguish very small positive or negative values from zero, which corresponds to the neutral direction. In this case, a threshold indicating the probability that a given text has a valence far enough from zero would be useful. Contributing to this limitation is that, regardless of the dictionary in the package, the algorithm cannot pick up on negation modifiers like the adverb "not." Hence, a sentence with such a modifier would be counted as equally positive as the same sentence without the modifier, despite the two having opposite meanings.

Conclusion
This study contributes to the literature about cheerleading as an authoritarian propaganda strategy carried out through cyber troop activity by automated and cyborg accounts. The coordinated Twitter activity featured overwhelmingly Arabic and positive content, providing evidence that the Saudi regime's overarching strategy was to engage in cheerleading. In addition, the finding that Arabic and English tweets were positive in similar proportions suggests that the propaganda strategy matched across target audiences. The overall findings refuted the responsive propaganda theory and corroborated the framework suggested by Huang, King et al., and Baggot Carter & Carter in the context of online activism about women's rights. 38 Indeed, Saudi cyber troop activity on Twitter in the case of the Women 2 Drive activists' arrests corresponded to the theoretical expectation regarding an authoritarian regime's use of propaganda as a signal of authority, while the use of bots and automated accounts paired with positive valence messaging matched the "distract, don't engage" approach suggested by King et al. 39
Endnotes
1. Samuel Sigal, "A Saudi Woman's 'Mixed Feelings' About Winning the Right to Drive," The Atlantic, September 27, 2017, https://www.theatlantic.com/international/archive/2017/09/saudi-arabia-women-driving/541275/.
2. Mary Casey-Baker and Joshua Haber, "Saudi Arabia Warning against Women's Driving Campaign," Foreign Policy (blog), October 25, 2013, https://foreignpolicy.com/2013/10/25/saudi-arabia-issues-warning-against-womens-driving-campaign/.
3. Jason Burke, "Saudi Arabia Women Test Driving Ban," The Guardian, June 17, 2011, sec. World news, https://www.theguardian.com/world/2011/jun/17/saudi-arabia-women-drivers-protest; Casey-Baker and Haber, "Saudi Arabia Warning Women's Campaign."
4. Rothna Begum, "The Brave Female Activists Who Fought to Lift Saudi Arabia's Driving Ban," Human Rights Watch (blog), September 29, 2017, https://www.hrw.org/news/2017/09/29/brave-female-activists-who-fought-lift-saudi-arabias-driving-ban; Sigal, "A Saudi Woman's 'Mixed Feelings' About Winning the Right to Drive."
5. "Saudi Police Arrest Three More Women's Rights Activists," Independent, May 23, 2018, https://content.jwplatform.com/previews/1hd4zKiz-9ygSIn9G.
6. "Saudi Arabia Arrests More Women's Rights Activists: HRW," Reuters, June 20, 2018, https://www.reuters.com/article/us-saudi-arrests-idUSKBN1JG0VL.
7. Liz Ford, "Saudi Women Strive to Bring Male Guardians to a Twitter End," The Guardian, March 28, 2018, sec. Global Development, https://www.theguardian.com/global-development/2018/mar/28/saudi-arabia-women-strive-to-bring-maleguardians-to-a-twitter-end.
8. Sarah El Sirgany and Tamara Qiblawi, "Rights Groups Slam 'Smear Campaign' against Saudi Activists," CNN, May 21, 2018, https://www.cnn.com/2018/05/21/middleeast/saudi-women-activists-arrests-intl/index.html.
9. Ibid.
10. Samantha Bradshaw and Philip N. Howard, "The Global Organization of Social Media Disinformation Campaigns," Journal of International Affairs 71, no. 1.5 (2018): 23–32.
11. "New Disclosures to Our Archive of State-Backed Information Operations," Twitter (blog), December 20, 2019, https://blog.twitter.com/en_us/topics/company/2019/new-disclosures-to-ourarchive-of-state-backed-information-operations.html.
12. Social media as a tool for activism: Paolo Gerbaudo, Tweets and the Streets: Social Media and Contemporary Activism (Pluto Press, 2012); Dhiraj Murthy, "Introduction to Social Media, Activism, and Organizations," Social Media + Society 4, no. 1 (2018): 2056305117750716.
13. Digital authoritarianism as the frontier of authoritarian learning: Marc Lynch, "After Egypt: The Limits and Promise of Online Challenges to the Authoritarian Arab State," Perspectives on Politics 9, no. 2 (2011): 301–10; Steven Heydemann and Reinoud Leenders, "Authoritarian Learning and Authoritarian Resilience: Regime Responses to the 'Arab Awakening,'" Globalizations 8, no. 5 (October 2011): 647–53, https://doi.org/10.1080/14747731.2011.621274; Alina Polyakova and Chris Meserole, "Exporting Digital Authoritarianism: The Russian and Chinese Models," Democracy and Disorder (Brookings, n.d.), https://www.brookings.edu/wp-content/uploads/2019/08/FP_20190827_digital_authoritarianism_polyakova_meserole.pdf.
14. "New Disclosures to Our Archive of State-Backed Information Operations."
15. Samantha Bradshaw and Philip N. Howard, "Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation," Working Paper No. 2017.12 (Oxford Computational Propaganda Project, 2017), https://fpmag.net/wp-content/uploads/2017/11/Troops-Trolls-and-Troublemakers.pdf.
16. Haifeng Huang, "Propaganda as Signaling," Comparative Politics 47, no. 4 (July 1, 2015): 419–44, https://doi.org/10.5129/001041515816103220.
17. Erin Baggot Carter and Brett L. Carter, "Autocratic Propaganda in Global Perspective" (Working Paper, School of International Relations, University of Southern California, 2019).
18. Gary King, Jennifer Pan, and Margaret E. Roberts, "How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument," American Political Science Review 111, no. 3 (2017): 484–501.
19. Shuai Jin, "Does Authoritarian Propaganda Ever Respond to Public Opinion?" (University of Massachusetts Boston, 2019), https://polmeth.mit.edu/sites/default/files/documents/Shuai_Jin.pdf.
20. For the potential of civil society organization and monitoring: Mona Elswah and Philip N. Howard, "The Challenges of Monitoring Social Media in the Arab World: The Case of the 2019 Tunisian Elections" (Oxford Computational Propaganda Project, March 23, 2020), https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/93/2020/03/Tunisia-memo-English.pdf; Lynch, "After Egypt."
21. For trends in social media manipulation, see for example: Alexei Abrahams and Andrew Leber, "Electronic Armies or Cyber Knights? The Sources of Pro-Authoritarian Discourse on Middle East Twitter," International Journal of Communication 15 (January 2021): 1173–99; Sara El-Khalili, "Social Media as a Government Propaganda Tool in Post-Revolutionary Egypt," First Monday 18, no. 3 (March 4, 2013), https://doi.org/10.5210/fm.v18i3.4620; Andrew Leber and Alexei Abrahams, "A Storm of Tweets: Social Media Manipulation During the Gulf Crisis," Review of Middle East Studies 53, no. 2 (2019): 241–58, https://doi.org/10.1017/rms.2019.45.
22. "Saudi Arabia: Freedom on the Net 2020 Country Report," Freedom House, 2020, https://freedomhouse.org/country/saudi-arabia/freedom-net/2020; "China: Freedom on the Net 2020 Country Report," Freedom House, 2020, https://freedomhouse.org/country/china/freedom-net/2020.
23. Samantha Bradshaw and Philip N. Howard, "The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation," 2019, 27.
24. Ibid.
25. Will Freeman, "The Accuracy of China's 'Mass Incidents,'" Financial Times, March 2, 2010, https://www.ft.com/content/9ee6fa64-25b5-11df-9bd3-00144feab49a; Tao Ran, "China's Land Grab Is Undermining Grassroots Democracy," The Guardian, December 16, 2011, sec. Opinion, https://www.theguardian.com/commentisfree/2011/dec/16/china-land-grab-undermining-democracy; "Why Protests Are so Common in China," The Economist, October 4, 2018, https://www.economist.com/china/2018/10/04/whyprotests-are-so-common-in-china.
26. Yao Li, "A Zero-Sum Game? Repression and Protest in China," Government and Opposition 54, no. 2 (2019): 309–35.
27. "Saudi Arabia Arrests More Women's Rights Activists: HRW."
28. Pascal Menoret, "Repression and Protest in Saudi Arabia," Middle East Brief 101 (Brandeis University, Crown Center for Middle East Studies, August 2016), https://www.brandeis.edu/crown/publications/middle-east-briefs/pdfs/101-200/meb101.pdf.
29. Ibid.
30. King, Pan, and Roberts, "Social Media Posts for Strategic Distraction."
31. Jennifer Pan and Alexandra A. Siegel, "How Saudi Crackdowns Fail to Silence Online Dissent," American Political Science Review 114, no. 1 (February 2020): 109–25, https://doi.org/10.1017/S0003055419000650.
32. Bradshaw and Howard, "The Global Disinformation Order."
33. Matthew Hindman and Vlad Barash, "Disinformation, 'Fake News' and Influence Campaigns on Twitter," Knight Foundation, October 2018, https://knightfoundation.org/reports/disinformation-fake-news-and-influence-campaigns-on-twitter/.
34. Mari Marcello, "Twitter Usage Is Booming in Saudi Arabia," GWI, March 20, 2013, https://blog.gwi.com/chart-of-the-day/twitter-usage-is-booming-in-saudi-arabia/.
35. "Reporting Saudi Arabia's Hidden Uprising," BBC News, May 30, 2014, sec. Middle East, https://www.bbc.com/news/worldmiddle-east-27619309.
36. "Information Warfare in Libya: The Online Advance of Khalifa Haftar" (Centro Studi Internazionali, May 2019), https://cesi-italia.org/contents/Analisi/Information%20warfare%20in%20Libya%20The%20online%20advance%20of%20Khalifa%20Haftar_1.pdf.
37. Alan Ruiz Terol, "Saudi Arabia: Spain Rejects Arms Embargo Requested by Catalan Parties," accessed March 24, 2022, https://www.catalannews.com/politics/item/saudi-arabia-spain-rejects-arms-embargo-requested-by-catalan-parties.
38. Huang, "Propaganda as Signaling;" King, Pan, and Roberts, "Social Media Posts for Strategic Distraction;" Baggot Carter and Carter, "Autocratic Propaganda in Global Perspective."
39. Huang, "Propaganda as Signaling."

R Code can be downloaded at: https://drive.google.com/file/d/1Ajm6lHWVwfaxB-3gajjoaQ8cWA8j5Xm0d/view?usp=sharing