Citation: Kearney, M. W. (in press). Automated accounts in partisan user networks on Twitter. In S. Jarvis (Ed.) New agendas in communication: Conservatively speaking: How right-wing media and messaging (re)made American politics. New York: Routledge.

Introduction

Whether social media has an effect on public opinion and political elections is no longer in question. Research suggests exposure to and use of social media can affect where people stand on certain issues (Messing and Westwood 2012; Holt et al. 2013), their favorability toward political figures (Barnidge, Gil de Zúñiga, and Diehl 2017), and their likelihood of voting (Boulianne 2015; Gil de Zúñiga and Jung 2012; Gil de Zúñiga and Molyneux 2014; Y. Kim, Hsu, and Zúñiga 2013). Social media’s influence is not entirely surprising given that 69 percent of adults in the United States have at least one social media account while 66 percent report getting some of their news from social media (Shearer and Gottfried 2017). But the widespread use of social media has yet to translate into a representative snapshot of public opinion (Mellon and Prosser 2017).

Although social media use among Americans has become the norm, opinions expressed on social media are still not representative of opinions found in the general public. For instance, we know social media discussions are predominantly composed of people who are highly educated, men, and/or white (Hargittai 2018). We also know that much of social media activity, at any given time or on any given topic, is disproportionately driven by a handful of highly active or highly influential accounts (Dang-Xuan et al. 2013; Weeks, Ardèvol-Abreu, and Gil de Zúñiga 2017). Despite these representational shortcomings, however, social media activity often gets interpreted and used as a barometer of public opinion (DiGrazia et al. 2013; Gleason 2010).

The combination of the growing role of social media in the dissemination of news (Vis 2013) and the rise of automated accounts on social media platforms (Ferrara et al. 2016) has naturally prompted concerns about the manipulation of our political landscape via inauthentic, automated accounts on social media (Ehrenberg 2012). To date, research on the subject of automation and manipulation of information on social media has examined political rumors (Shin et al. 2016), indicators of influence (Haustein et al. 2016), and distortion of political discussion (Bessi and Ferrara 2016; Dickerson, Kagan, and Subrahmanian 2014; Ratkiewicz et al. 2011). But relatively little work has described the extent to which partisan user networks vary in their connections and interactions with automated accounts.

Roadmap

The purpose of this chapter is to explore the extent to which automated, or bot, accounts exist in relation to partisan user networks on a major social media platform—Twitter.

Conservative Networks and Bots

There is no innate reason that conservative user networks would be especially vulnerable or welcoming to bots on Twitter. Indeed, political ideologies are simply provisional snapshots of socio-political norms. Over time, for example, political ideologies often shift and change, even in ways that many would consider contradictory. With that said, in this particular political-cultural moment, there are reasons to suspect that American-centric conservative user networks are more likely to connect and interact with Twitter bots.

American-centric conservative user networks on Twitter may be more likely to include bots than liberal or politically moderate user networks because low-status networks are more susceptible to persuasion and exploitation by relatively unknown or non-traditional sources. At the current time, American conservative identity frequently positions itself outside the “mainstream,” especially as it relates to media coverage. It makes sense that this status imbalance, i.e., perceived under-representation in “mainstream” channels of information, would result in the low-status group being more open and willing to accept information from non-traditional or relatively unknown digital entities. This line of reasoning is also consistent with recent research, which found that conservatives were more vulnerable to misinformation due to the structure of their network and information systems and their historical use of social media (McCright and Dunlap 2017; Tucker et al. 2018). Thus, the current study theorizes the following:

H1: Conservative user networks will be more likely than liberal or politically moderate user networks to connect with automated (bot) accounts.

Conservative user networks may also attract more anonymous accounts, which may, in turn, be more likely to interact with bots than accounts in liberal or politically moderate user networks. In recent history, at least, extreme conservative views have often been portrayed as reactionary and, as a consequence, criticized for being closed-minded and outdated. It makes sense, then, that views more likely to be perceived as offensive are more likely to come from anonymous accounts. And because anonymous web users, in theory, experience less social pressure than non-anonymous users, it seems reasonable to assume they would be more willing to interact with bots. This would explain, for example, why bots in the 2016 election were more conservative and/or pro-Trump and why conservative users were more likely to retweet posts by bots (Badawy, Ferrara, and Lerman 2018). The current study therefore theorizes the following:

H2: Conservative user networks will be more likely than liberal or politically moderate user networks to interact with automated (bot) accounts.

Using Lists to Detect Bots

Scholars have taken a number of different approaches to detecting fake accounts on social media (Xiao, Freeman, and Hwa 2015; Chu et al. 2012), but exporting these approaches and/or reproducing the human labor used to power the classification of automated accounts in these studies remains unrealistic. Fortunately, there is an alternative to using potentially outdated lists of automated accounts and labor-intensive classification systems. By leveraging a user-driven labeling system built into Twitter’s platform, i.e., publicly available Twitter “lists,” it is possible to identify clusters of accounts that are similarly categorized as “bots” or labeled with other similarly explicit words.
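To make this idea concrete, the sketch below scores accounts by the share of their public list memberships whose names suggest automation. This is a minimal illustration rather than the chapter’s actual procedure: the input file, its column names, and the keyword set are all assumptions.

```python
# A minimal sketch of list-based bot labeling: given pre-collected Twitter
# list memberships (one row per account-list pair), score each account by
# the share of its lists whose names suggest automation. The file name,
# column names, and keyword set below are illustrative assumptions.
import pandas as pd

BOT_WORDS = {"bot", "bots", "automated", "automation"}  # assumed keywords

memberships = pd.read_csv("list_memberships.csv")  # columns: user_id, list_name

def looks_automated(list_name: str) -> bool:
    """Return True if any token in the list name matches a bot keyword."""
    tokens = str(list_name).lower().replace("-", " ").replace("_", " ").split()
    return any(token in BOT_WORDS for token in tokens)

memberships["bot_list"] = memberships["list_name"].map(looks_automated)

# Crude per-account score: the proportion of an account's lists that carry
# a bot-like label. Accounts near 1.0 are consistently labeled as bots.
scores = memberships.groupby("user_id")["bot_list"].mean().rename("bot_list_share")
print(scores.sort_values(ascending=False).head(10))
```

In practice a score like this would be one signal among many; the point is simply that crowd-sourced list labels offer a platform-native classification that does not require hand-coding accounts.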

This approach also speaks to broader social media and polarization research (J. K. Lee et al. 2014; F. L. Lee 2016; Barberá 2014).

Method

I examined the friend networks (i.e., the accounts followed) of users who were randomly sampled from the followers of well-known partisan accounts.

Data

I selected all accounts followed by more than 20 users in the sample. This resulted in a final data set of 6,761 observations: 2,088 accounts followed most frequently by Democrats, 2,310 accounts followed most frequently by Moderates, and 2,363 accounts followed most frequently by Republicans.
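The selection and labeling step can be illustrated with a short sketch. The file name and column names are assumptions: each row links a sampled follower (whose partisan group is inferred from the source account they follow) to one account they follow.

```python
# A minimal sketch of the selection step described above, under assumed
# column names: follower_id (sampled user), friend_id (account they
# follow), and group (Democrat, Moderate, or Republican).
import pandas as pd

edges = pd.read_csv("friend_edges.csv")

# Keep only accounts followed by more than 20 sampled users.
counts = edges.groupby("friend_id")["follower_id"].nunique()
edges = edges[edges["friend_id"].isin(counts[counts > 20].index)]

# Label each retained account by the partisan group that follows it most.
partisan = (
    edges.groupby(["friend_id", "group"]).size()
    .unstack(fill_value=0)      # one column per partisan group
    .idxmax(axis=1)             # modal group for each account
    .rename("partisan_group")
)
print(partisan.value_counts())
```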

Summary statistics are provided below.

Variable Mean S.D. Min Median Max
Account age 6.20 2.22 .24 6.96 10.48
Favourites count 16.36 42.87 .00 2.97 904.87
Followers count 1072.71 4362.19 6.57 212.81 106927.34
Friend count 17.66 59.68 .00 1.35 1664.73
Nchar desc 100.75 49.10 .00 111.00 177.00
Nchar loc 10.85 8.73 .00 12.00 142.00
Profile url .79 .41 .00 1.00 1.00
D partisan .31 .46 .00 .00 1.00
E partisan .34 .47 .00 .00 1.00
R partisan .35 .48 .00 .00 1.00
Bot probability .53 .33 .00 .52 1.00

Zero-order correlations among the study variables are presented below; an illustrative sketch for reproducing the table follows it.

Variable 1 2 3 4 5 6 7 8 9 10 11 12
1. account_age 1.00
2. favourites -.12 1.00
3. followers .39 -.19 1.00
4. friends -.11 .41 -.29 1.00
5. nchar_desc -.01 .16 -.23 .18 1.00
6. nchar_loc .10 .11 -.10 .15 .21 1.00
7. partisan_d .20 -.01 -.02 -.05 .08 .07 1.00
8. partisan_e .14 -.12 .57 -.29 -.24 -.15 -.48 1.00
9. partisan_r -.33 .12 -.55 .34 .16 .08 -.49 -.53 1.00
10. prob_bot -.16 -.10 .18 .11 -.14 -.17 -.21 .13 .08 1.00
11. profile_url .30 -.05 .22 -.12 .13 .15 .16 .05 -.20 -.15 1.00
12. statuses .17 .39 -.06 .36 .23 .13 .10 -.19 .09 -.17 .08 1.00
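A table like this is straightforward to reproduce. The sketch below assumes the analysis variables live in a single file, here called bot_accounts.csv, with columns named after the variable labels in the table above; both the file name and column names are assumptions.

```python
# A minimal sketch for reproducing the correlation table above. The file
# name is an assumption; column names follow the table's variable labels.
import pandas as pd

dat = pd.read_csv("bot_accounts.csv")
cols = ["account_age", "favourites", "followers", "friends", "nchar_desc",
        "nchar_loc", "partisan_d", "partisan_e", "partisan_r", "prob_bot",
        "profile_url", "statuses"]
print(dat[cols].corr().round(2))
```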

Results

The bot probabilities of accounts followed by users sampled from each source account are summarized below.

source_account Bot probability
AMC_TV .63
SarahPalinUSA .62
seanhannity .61
DRUDGE_REPORT .56
SInow .54
survivorcbs .54
foxnewspolitics .47
maddow .47
paulkrugman .45
Salon .42
HuffPostPol .37

Four quasi-binomial models were estimated, predicting the probability that an account is a bot. To isolate the unique contribution of the partisan grouping variable, the first model, Model 1, contains only the covariates: account age, statuses, favorites, followers, friends, description length, location length, and profile URL. Model 2 includes the same covariates but adds the partisan grouping variable. Model 3 adds the interaction between statuses and account age (rate of activity). And, finally, Model 4 adds the final interaction between the number of friends and followers (friend-follower ratios). Model coefficients for all four models are presented in the table below, followed by an illustrative model-fitting sketch.

Predictor M1 M2 M3 M4
Estimate Estimate Estimate Estimate
(S.E.) (S.E.) (S.E.) (S.E.)
(Intercept) .40*** .11* .11* .03
(.04) (.05) (.05) (.05)
Account age -.30*** -.26*** -.25*** -.24***
(.02) (.02) (.02) (.02)
Statuses count -.22*** -.22*** -.22*** -.23***
(.02) (.02) (.02) (.02)
Favourites count -.18*** -.17*** -.16*** -.16***
(.02) (.02) (.02) (.02)
Followers count .46*** .53*** .53*** .50***
(.02) (.02) (.02) (.02)
Friends count .44*** .40*** .40*** .40***
(.02) (.02) (.02) (.02)
Nchar desc -.06*** -.06*** -.06*** -.07***
(.02) (.02) (.02) (.02)
Nchar loc -.16*** -.17*** -.17*** -.17***
(.02) (.02) (.02) (.02)
Profile url -.34*** -.29*** -.30*** -.25***
(.04) (.04) (.04) (.04)
PartisanE . .18*** .18*** .16***
(.04) (.04) (.04)
PartisanR . .56*** .57*** .49***
(.04) (.04) (.04)
Account age:statuses count . . .05** .07***
(.02) (.02)
Followers count:friends count . . . -.24***
(.02)
N 6761 6761 6761 6761
Deviance 2906.08 2842.63 2838.63 2776.13
χ2 615.35*** 678.80*** 682.80*** 745.30***
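For readers who wish to estimate comparable models, the sketch below approximates the quasi-binomial specification of Model 4 using Python’s statsmodels: fitting a binomial GLM and estimating the dispersion with a Pearson chi-square scale mimics a quasi-binomial family. The input file is an assumed stand-in for the chapter’s data, and the variable names follow the correlation table.

```python
# A rough analogue of Model 4 above (quasi-binomial GLM with both
# interaction terms). The file name is an assumption; predictors are
# assumed to be standardized already.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

dat = pd.read_csv("bot_accounts.csv")

formula = (
    "prob_bot ~ account_age + statuses + favourites + followers + friends"
    " + nchar_desc + nchar_loc + profile_url + partisan_e + partisan_r"
    " + account_age:statuses + followers:friends"
)

# scale="X2" estimates the dispersion (Pearson chi-square) rather than
# fixing it at 1, which approximates R's quasibinomial family.
m4 = smf.glm(formula, data=dat, family=sm.families.Binomial()).fit(scale="X2")
print(m4.summary())
```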

For the sake of interpretability, estimates from ordinary least squares (OLS) versions of the models, which yielded results similar to those provided by the generalized linear models, are presented below.

Predictor M1 M2 M3 M4
Estimate Estimate Estimate Estimate
(S.E.) (S.E.) (S.E.) (S.E.)
(Intercept) .59*** .52*** .52*** .50***
(.01) (.01) (.01) (.01)
Account age -.07*** -.06*** -.06*** -.05***
(.00) (.00) (.00) (.00)
Statuses count -.05*** -.05*** -.05*** -.05***
(.00) (.00) (.00) (.00)
Favourites count -.04*** -.04*** -.04*** -.04***
(.00) (.00) (.00) (.00)
Followers count .10*** .12*** .12*** .11***
(.00) (.01) (.01) (.00)
Friends count .10*** .09*** .09*** .09***
(.00) (.00) (.00) (.00)
Nchar desc -.01** -.01** -.01** -.01**
(.00) (.00) (.00) (.00)
Nchar loc -.04*** -.04*** -.04*** -.04***
(.00) (.00) (.00) (.00)
Profile url -.08*** -.07*** -.07*** -.06***
(.01) (.01) (.01) (.01)
PartisanE . .04*** .05*** .04***
(.01) (.01) (.01)
PartisanR . .13*** .13*** .11***
(.01) (.01) (.01)
Account age:statuses count . . .01* .01**
(.00) (.00)
Followers count:friends count . . . -.05***
(.00)
N 6761 6761 6761 6761
RMSE .30 .29 .29 .29
R2 .20 .21 .22 .23
Adj R2 .19 .21 .21 .23

Discussion

The results presented here call into question related findings suggesting that political bots are linked to lower levels of political discussion (Scheufele and Tewksbury 2007). In fact, bots may even frequently push more out-of-the-ordinary, interesting, or discussion-provoking content. They may also decentralize the range of acceptable information sources, resulting in a greater number of accounts perceived to be informative and/or credible.

Limitations

  1. Bots are hard to detect, so the bot probability estimates used here are necessarily imperfect.

  2. Lists carry a lot of baggage and undoubtedly reflect systematic biases that we do not fully understand. At the same time, however, lists may be the only viable way to capture Twitter-specific consensus or conventions. In other words, it may not be far off to say that the unknown systematic sources of variance shaping patterns of Twitter list use are themselves unique and valuable effects of the platform, e.g., list use may reflect users managing timeline or friend/follower dynamics (lists offer a way to keep up with other accounts without cluttering timeline feeds or inflating friend-to-follower ratios).

References



Badawy, Adam, Emilio Ferrara, and Kristina Lerman. 2018. “Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign.” ArXiv Preprint ArXiv:1802.04291.

Barberá, Pablo. 2014. “How Social Media Reduces Mass Political Polarization: Evidence from Germany, Spain, and the US.” Job Market Paper, New York University 46.

Barnidge, Matthew, Homero Gil de Zúñiga, and Trevor Diehl. 2017. “Second Screening and Political Persuasion on Social Media.” Journal of Broadcasting & Electronic Media 61 (2). Taylor & Francis: 309–31.

Bessi, Alessandro, and Emilio Ferrara. 2016. “Social Bots Distort the 2016 U.S. Presidential Election Online Discussion.” First Monday 21 (11). http://firstmonday.org/ojs/index.php/fm/article/view/7090.

Boulianne, Shelley. 2015. “Social Media Use and Participation: A Meta-Analysis of Current Research.” Information, Communication & Society 18 (5). Taylor & Francis: 524–38.

Chu, Zi, Steven Gianvecchio, Haining Wang, and Sushil Jajodia. 2012. “Detecting Automation of Twitter Accounts: Are You a Human, Bot, or Cyborg?” IEEE Transactions on Dependable and Secure Computing 9 (6). Los Alamitos, CA, USA: IEEE Computer Society Press: 811–24. doi:10.1109/TDSC.2012.75.

Dang-Xuan, Linh, Stefan Stieglitz, Jennifer Wladarsch, and Christoph Neuberger. 2013. “An Investigation of Influentials and the Role of Sentiment in Political Communication on Twitter During Election Periods.” Information, Communication & Society 16 (5). Taylor & Francis: 795–825.

Dickerson, J. P., V. Kagan, and V. S. Subrahmanian. 2014. “Using Sentiment to Detect Bots on Twitter: Are Humans More Opinionated Than Bots?” In 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014), 620–27. doi:10.1109/ASONAM.2014.6921650.

DiGrazia, Joseph, Karissa McKelvey, Johan Bollen, and Fabio Rojas. 2013. “More Tweets, More Votes: Social Media as a Quantitative Indicator of Political Behavior.” PloS One 8 (11). Public Library of Science: e79449.

Ehrenberg, Rachel. 2012. “Social Media Sway: Worries over Political Misinformation on Twitter Attract Scientists’ Attention.” Science News 182 (8). Wiley Online Library: 22–25.

Ferrara, Emilio, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini. 2016. “The Rise of Social Bots.” Communications of the ACM 59 (7). New York, NY, USA: ACM: 96–104. doi:10.1145/2818717.

Gil de Zúñiga, Homero, Nakwon Jung, and Sebastián Valenzuela. 2012. “Social Media Use for News and Individuals’ Social Capital, Civic Engagement and Political Participation.” Journal of Computer-Mediated Communication 17 (3). Wiley Online Library: 319–36.

Gil de Zúñiga, Homero, Logan Molyneux, and Pei Zheng. 2014. “Social Media, Political Expression, and Political Participation: Panel Analysis of Lagged and Concurrent Relationships.” Journal of Communication 64 (4). Wiley Online Library: 612–34.

Gleason, Stephanie. 2010. “Harnessing Social Media: News Outlets Are Assigning Staffers to Focus on Networking.” American Journalism Review 32 (1). University of Maryland: 6–8.

Hargittai, Eszter. 2018. “Potential Biases in Big Data: Omitted Voices on Social Media.” Social Science Computer Review. SAGE Publications Sage CA: Los Angeles, CA, 0894439318788322.

Haustein, Stefanie, Timothy D. Bowman, Kim Holmberg, Andrew Tsou, Cassidy R. Sugimoto, and Vincent Larivière. 2016. “Tweets as Impact Indicators: Examining the Implications of Automated ‘Bot’ Accounts on Twitter.” Journal of the Association for Information Science and Technology 67 (1). Wiley Online Library: 232–38.

Holt, Kristoffer, Adam Shehata, Jesper Strömbäck, and Elisabet Ljungberg. 2013. “Age and the Effects of News Media Attention and Social Media Use on Political Interest and Participation: Do Social Media Function as Leveller?” European Journal of Communication 28 (1). SAGE Publications: 19–34.

Kim, Yonghwan, Shih-Hsien Hsu, and Homero Gil de Zúñiga. 2013. “Influence of Social Media Use on Discussion Network Heterogeneity and Civic Engagement: The Moderating Role of Personality Traits.” Journal of Communication 63 (3). Wiley Online Library: 498–516.

Lee, Francis LF. 2016. “Impact of Social Media on Opinion Polarization in Varying Times.” Communication and the Public 1 (1). SAGE Publications Sage UK: London, England: 56–71.

Lee, Jae Kook, Jihyang Choi, Cheonsoo Kim, and Yonghwan Kim. 2014. “Social Media, Network Heterogeneity, and Opinion Polarization.” Journal of Communication 64 (4). Wiley Online Library: 702–22.

McCright, Aaron M, and Riley E Dunlap. 2017. “Combatting Misinformation Requires Recognizing Its Types and the Factors That Facilitate Its Spread and Resonance.” Journal of Applied Research in Memory and Cognition 6 (4). Elsevier: 389–96.

Mellon, Jonathan, and Christopher Prosser. 2017. “Twitter and Facebook Are Not Representative of the General Population: Political Attitudes and Demographics of British Social Media Users.” Research & Politics 4 (3). SAGE Publications Sage UK: London, England: 2053168017720008.

Messing, Solomon, and Sean J Westwood. 2012. “Selective Exposure in the Age of Social Media: Endorsements Trump Partisan Source Affiliation When Selecting News Online.” Communication Research. SAGE Publications, 0093650212466406.

Ratkiewicz, Jacob, Michael Conover, Mark Meiss, Bruno Goncalves, Alessandro Flammini, and Filippo Menczer. 2011. “Detecting and Tracking Political Abuse in Social Media.” https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2850.

Scheufele, Dietram A., and David Tewksbury. 2007. “Framing, Agenda Setting, and Priming: The Evolution of Three Media Effects Models.” Journal of Communication 57 (1): 9–20.

Shearer, Elisa, and Jeffrey Gottfried. 2017. “News Use Across Social Media Platforms 2017.” Pew Research Center, Journalism and Media.

Shin, Jieun, Lian Jian, Kevin Driscoll, and François Bar. 2016. “Political Rumoring on Twitter During the 2012 US Presidential Election: Rumor Diffusion and Correction.” New Media & Society, March: 1–22. doi:10.1177/1461444816634054.

Tucker, Joshua, Andrew Guess, Pablo Barberá, Cristian Vaccari, Alexandra Siegel, Sergey Sanovich, Denis Stukal, and Brendan Nyhan. 2018. “Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature.” William and Flora Hewlett Foundation.

Vis, Farida. 2013. “Twitter as a Reporting Tool for Breaking News: Journalists Tweeting the 2011 UK Riots.” Digital Journalism 1 (1). Taylor & Francis: 27–47.

Weeks, Brian E, Alberto Ardèvol-Abreu, and Homero Gil de Zúñiga. 2017. “Online Influence? Social Media Use, Opinion Leadership, and Political Persuasion.” International Journal of Public Opinion Research 29 (2). Oxford University Press: 214–39.

Xiao, Cao, David Mandell Freeman, and Theodore Hwa. 2015. “Detecting Clusters of Fake Accounts in Online Social Networks.” In Proceedings of the 8th Acm Workshop on Artificial Intelligence and Security, 91–101. AISec ’15. New York, NY, USA: ACM. doi:10.1145/2808769.2808779.