Politics as Usual

Social media has emerged as an influential platform for political engagement, allowing users to directly call out political opponents and publicly debate hot-button issues. Tools such as the Twindex tap directly into this real-time stream of public sentiment, predicting political outcomes much as a traditional Gallup poll would. But as social media matures and citizens come to treat sites like Facebook and Twitter as sources of political truth, that trust is misplaced: there are weaknesses in how popular content bubbles up and in how political accounts are ranked and recommended to users. Similarly, the viral nature of content makes mudslinging and misinformation all the more alluring — or, in other words, politics as usual.

Inflating Popularity

The only hurdle between me and a million followers (besides having a private account) is $5,000. At least, that’s the case if I were to purchase followers on Twitter at the going rate of $5-20 per thousand. If popularity is simply a measure of counts rather than of information diffusion (e.g. the thousands of Lady Gaga fans willing to retweet her content), then such metrics can be easily gamed, given how easily new Twitter accounts can be created. When it comes to social media, there is a fundamental tension between growth and security. Email confirmations, CAPTCHA solutions, and unique-IP rules all stymie legitimate users registering a new account, even if those same tools are necessary to dam the floodgates of spam. Similarly, the cost of a false positive, where a legitimate user is banned from communicating, far outweighs the cost of a false negative (an uncaught fraudulent account), tipping the balance in favor of miscreants and spammers accessing social media.
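The gap between count-based and diffusion-based popularity can be made concrete. The sketch below is a hypothetical illustration — the field names and data shapes are my own, not any real Twitter API — comparing a raw follower count against the number of distinct accounts that actually retweet content. A purchased audience inflates the former but contributes nothing to the latter.

```python
# Hypothetical sketch: count-based vs. diffusion-based popularity.
# Field names and data are illustrative; no real Twitter API is used.

def raw_popularity(account):
    # the metric that can be purchased outright
    return account["followers"]

def diffusion_popularity(account):
    # distinct accounts that actually retweeted any recent post
    retweeters = account["retweeters_per_post"]
    return len(set().union(*retweeters)) if retweeters else 0

# A gamed account: a million purchased followers, none of whom retweet.
gamed = {"followers": 1_000_000, "retweeters_per_post": [set()] * 10}
# An organic account: far fewer followers, but real information diffusion.
organic = {"followers": 50_000,
           "retweeters_per_post": [{"ann", "bob", "cat"}, {"bob", "dee"}]}

print(raw_popularity(gamed), diffusion_popularity(gamed))      # 1000000 0
print(raw_popularity(organic), diffusion_popularity(organic))  # 50000 4
```

By the count metric the gamed account wins twenty-fold; by the diffusion metric it registers zero, which is exactly why counts alone are so easy to buy.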

When Mitt Romney gained over 100,000 followers in a single day, the media questioned whether these accounts were real users. While political events can certainly trigger an influx of interest, all of the new followers were low in-degree accounts that no one else on Twitter was following. Now, well after the story broke, at least 60,000 of these accounts have been suspended, as seen from Romney’s twittercounter.
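The low in-degree signal mentioned above can be turned into a simple heuristic. The sketch below is my own illustration, not a documented detection rule — the thresholds and data shape are assumptions: a surge is flagged when the overwhelming majority of the newly gained followers are themselves followed by almost no one.

```python
# Hypothetical heuristic: flag a follower surge as suspicious when most
# of the new followers have near-zero in-degree (no one follows them).
# The thresholds below are illustrative assumptions, not real values.

def surge_is_suspicious(new_follower_in_degrees, max_in_degree=1,
                        flag_ratio=0.9):
    """Takes the in-degree of each account gained during the surge."""
    if not new_follower_in_degrees:
        return False
    low = sum(1 for d in new_follower_in_degrees if d <= max_in_degree)
    return low / len(new_follower_in_degrees) >= flag_ratio

# 95% of the surge is accounts nobody follows: flagged.
print(surge_is_suspicious([0] * 95 + [40] * 5))  # True
# A mix of established accounts: not flagged.
print(surge_is_suspicious([12, 340, 7, 0, 55]))  # False
```

A real classifier would weigh many more features (account age, posting history, network structure), but even this one-feature rule separates an organic influx of interest from a block of freshly minted accounts.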

Whether the accounts were intentionally purchased by Mitt Romney or by an adversary (or, equally likely, were the result of a software glitch in which spam accounts set to follow popular Twitter users in order to appear more realistic all performed the same action at once) is unknown. But accusations of purchasing followers are nothing new, with similar rumors plaguing the Newt Gingrich campaign.

The Mitt Romney story also illustrates how susceptible brands in social media are to smear attacks. If a political opponent purchases followers for a candidate and then cries foul, the ‘evidence’ of new followers is plain to see, while the target of the attack can only vehemently deny involvement. Similarly, if a political brand launches a trending topic and it is co-opted by opponents (legitimately or not) in a 4chan-like manner that degenerates into offensive content (hello pedobear), the original brand has no control over its message once it hits social media, even though the content is directly linked to the brand (e.g. on a Facebook page or a promoted hashtag where the affiliation is clear). It’s the double-edged sword of mass connectivity in social media.

Silencing Dissidents

One of the sadder applications of social media involves intentionally manipulating anti-spam tools to silence political dissidents. Both Facebook and Twitter grant users the ability to report offensive content, block messages from accounts, and report users for spam. These signals can in turn be used to remove spam accounts, but they are vulnerable to abuse. A prominent example of this abuse occurred during a political battle between far-right and far-left Israeli groups on Facebook, where thousands of users from one side would report-bomb [Hebrew] an account, resulting in its temporary expulsion from Facebook.
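Why raw report counts are so fragile can be shown in a few lines. The sketch below is hypothetical — the rules, weights, and thresholds are my own illustration, not how Facebook or Twitter actually moderate: a naive rule suspends any account past a fixed number of reports, so a coordinated group can trip it at will, whereas weighting each report by the reporter's past accuracy blunts the attack.

```python
# Hypothetical sketch: naive report-count thresholding vs. weighting
# reports by reporter reputation. All numbers are illustrative
# assumptions, not any platform's real moderation logic.

def naive_suspend(reports, threshold=100):
    # suspend once enough reports arrive, regardless of who sent them
    return len(reports) >= threshold

def weighted_suspend(reports, threshold=50.0):
    # each report carries the reporter's historical precision in [0, 1]
    return sum(r["reporter_accuracy"] for r in reports) >= threshold

# A report-bombing brigade: 200 coordinated reporters who are almost
# never correct when they report someone.
brigade = [{"reporter_accuracy": 0.05}] * 200

print(naive_suspend(brigade))     # True  (200 >= 100: dissident silenced)
print(weighted_suspend(brigade))  # False (200 * 0.05 = 10.0 < 50.0)
```

Reputation weighting is only one mitigation, and it has its own failure modes (new reporters start with no track record), but it captures the core point: a signal anyone can generate for free cannot be taken at face value.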

While nation states have their own (legal) means to censor and control social media, the aforementioned attack is a chilling reminder of the adversarial nature of user-generated input. Taken at face value, user reports of child pornography or spam can be used to shut down the accounts of political adversaries where the only real victim is free speech.

Controlling Discussions

Social media allows millions of users to connect and discuss political concerns, but whether those issues, or the accounts participating, are real is another question entirely. On Facebook, public pages serve as a forum for commenting and discussion, while on Twitter trending topics allow users with no social connections to interact. The organic way discussions emerge, and the fact that anyone can participate, make fake accounts a valuable resource for skewing the tone of a conversation. Fake stories, co-opted stories, and astroturfing are a growing problem in social media. As detailed in a previous post, the discussion surrounding the Russian parliamentary election on Twitter was swarmed by thousands of fake accounts, while topics like #freetibet continue to be attacked by politically motivated bots. When topics are effectively voted up by users, there is no such thing as direct democracy in the presence of thousands of fake accounts.
