Research Survey: Social Network Abuse

I decided to compile a list of social networking papers that I’ve read. The list is likely incomplete, but it gives shape to the current research directions surrounding social network spam and abuse. Whenever appropriate, I detail the methodology for how a study was conducted; most data collection techniques carry an inherent bias that is worth being forthright about.

Social Network Spam and Abuse: Measurement and Detection

The following is a list of academic papers on the topics of social network spam and abuse. In particular, the papers cover (1) how spammers monetize social networks, (2) how spammers engage with social network users, and (3) how to detect both compromised and fraudulent accounts.

Detection falls into two categories: at-signup detection which attempts to identify spam accounts before an account takes any actions visible to social network users; and at-abuse detection which relies on identifying abusive behavior such as posting spam URLs or forming too many relationships. At-signup detection has yet to receive much academic treatment (mostly a data access problem), while at-abuse detection has been conducted based on tweet content; URL content and redirects; and account behaviors such as the frequency of tweeting or participation in trending topics. Papers that detect spam accounts based on the social graph are detailed in the following section.
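
To make the account-feature flavor of at-abuse detection concrete, here is a minimal sketch in Python (assuming scikit-learn is available) of a classifier over features similar to those used by several of the papers below: fraction of tweets containing URLs, fraction containing hashtags, account age, and the follower-following ratio. The feature set, toy accounts, and labels are purely illustrative and are not drawn from any one paper.

    # Minimal sketch of an at-abuse classifier over account-level features.
    # The feature set and toy accounts are illustrative, not from any one paper.
    from sklearn.ensemble import RandomForestClassifier

    def account_features(account):
        tweets = account["tweets"]
        n = max(len(tweets), 1)
        return [
            sum("http" in t for t in tweets) / n,                 # fraction of tweets with URLs
            sum("#" in t for t in tweets) / n,                    # fraction of tweets with hashtags
            account["age_days"],                                  # account age
            account["followers"] / max(account["following"], 1),  # follower-following ratio
        ]

    labeled = [
        ({"tweets": ["win a free ipad http://spam.example"] * 20,
          "age_days": 3, "followers": 2, "following": 900}, 1),     # spam-looking account
        ({"tweets": ["lunch with friends", "great talk today #conf"],
          "age_days": 800, "followers": 150, "following": 120}, 0), # benign-looking account
    ]
    X = [account_features(account) for account, _ in labeled]
    y = [label for _, label in labeled]

    clf = RandomForestClassifier(n_estimators=10).fit(X, y)
    print(clf.predict(X))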

  • [July, 2010] Detecting spammers on twitter: The authors develop a classifier to detect fraudulent accounts based on the fraction of URLs in tweets; the fraction of tweets with hashtags; account age; follower-following ratios; and other account-based features. The most discriminative features were the number of URLs sent and an account’s age.
    Dataset: 8,207 manually labeled Twitter accounts
    Time period: December 8, 2008 — September 24, 2010
    Source: Crawl of all accounts with user IDs less than 80 million; filtered to only include accounts that tweeted about topics including (1) #musicmonday, (2) Boyle, (3) Jackson.
  • [October, 2010] @spam: The underground on 140 characters or less: Our study of compromised Twitter accounts, the clickthrough rate on spam tweets, and the ineffectiveness of blacklists at detecting social network spam in a timely fashion. Detailed summary here.
    Dataset: 3 million spam tweets classified by whether the URL in the tweet was blacklisted by URIBL, Joewein, or Google Safebrowsing. Classification includes the initial URL, all redirect URLs, and the final landing URL.
    Time period: January, 2010 — February, 2010
    Source: Streaming API, predicated on containing URLs
  • [November, 2010] Detecting and characterizing social spam campaigns: The authors develop a classifier to detect Facebook spam accounts based on the bursty nature of spam campaigns (e.g. many messages sent in a short period) and the diverse set of source accounts (e.g. multiple accounts coordinating together, typically sending similar text). Surprisingly, 97% of the accounts identified were suspected of being compromised rather than fraudulent. The most popular spam types were “someone has a crush on you”-scams, ringtones, and pharma spam.
    Dataset: 187M wall posts, 212,863 of which are detected as spam sent by roughly 57,000 accounts (validated via blacklists or an obfuscation heuristic)
    Time period: January, 2008 — June, 2009
    Source: Crawl of Facebook networks, described in detail in User Interactions in Social Networks and their Implications
  • [December, 2010] Detecting spammers on social networks: The authors develop a classifier based on the fraction of messages containing URLs; the similarity of messages; total messages sent; and the total number of friends. The spam sent from detected accounts includes dating, porn, ad-based monetization, and money-making scams.
    Dataset: 11,699 Twitter accounts; 4,055 Facebook accounts
    Time period: June 6, 2009 — June 6, 2010
    Source: Spammers messaging or forming relationships with 300 passive honeypots each for Facebook and Twitter
  • [April, 2011] Facebook immune system: A brief summary of Facebook’s system for detecting spam. There is no mention of the volume of spam Facebook receives (though SEC filings say it’s roughly 1%) or the threats the service faces.
  • [May, 2011] Design and Evaluation of a Real-Time URL Spam Filtering Service: Our study on developing a classifier based on the content of URLs posted to social networks and email. Features include n-grams of the posted URL, interstitial redirects, and the final landing URL; HTML content; pop-ups and JavaScript event detection; headers; DNS data; and geolocation and routing information. Detailed summary here.
    Dataset: 567,784 spam URLs posted to Twitter (as identified by Google Safebrowsing, SURBL, URIBL, Anti-Phishing Work Group, and Phishtank); 1.25 million spam URLs in emails (as identified by spam traps)
    Time period: September, 2010 — October, 2010
    Source: Email spam traps; Twitter Streaming API
  • [November, 2011] Suspended accounts in retrospect: An analysis of twitter spam: Our analysis of fraudulent Twitter accounts, the tools used to generate spam, and the resulting spam campaigns and monetization strategies. Detailed summary here.
    Dataset: 1,111,776 accounts suspended by Twitter
    Time period: August 17, 2010 — March 4, 2011
    Source: Streaming API, predicated on containing URLs
  • [February, 2012] Towards Online Spam Filtering in Social Networks: The authors build a detection framework for Twitter spam that hinges on identifying duplicate content sent from multiple accounts. A number of other account-based features are used as well. (A rough sketch of the duplicate-content idea appears after this list.)
    Dataset: 217,802 spam wall posts from Facebook (from the previous study); 467,390 spam tweets as identified by URL shorteners no longer serving the URL
    Time period: January, 2008 — June, 2009 for Facebook; June 1, 2011 — July 21, 2011 for Twitter
    Source: Facebook Crawl; Twitter API predicated on trending topics
  • [February, 2012] Warningbird: Detecting suspicious urls in twitter stream: In contrast to content-based spam detection or account-based spam detection, the authors rely on the URL redirect chain used to cloak spam content as a detection mechanism. The core idea is that for cloaking to occur (e.g. when an automated crawler is shown one page and a victim a second, distinct page), some site must perform the multiplexing, and that site is frequently re-used across campaigns. The final features used for classification include the redirect chain length, position of the URL in the redirect chain, distinct URLs leading to the interstitial, and distinct URLs pointed to by the interstitial. A number of account-based features are also used including age, number of posters, tweet similarity, and following-follower ratio. (A rough sketch of extracting redirect-chain features appears after this list.)
    Dataset: 263,289 accounts suspended by Twitter and the URLs they posted
    Time period: April, 2011 — August, 2011
    Source: Streaming API predicated on containing URLs
  • [December, 2012] Twitter games: how successful spammers pick targets: The authors examine how spammers engage with Twitter users (e.g. mentions, hashtags, retweets, social graph) and the types of spam sent. The vast majority of spammers are considered unsuccessful based on how quickly they are suspended; longer-lived accounts rely on Twitter’s social graph and unsolicited mentions to spam.
    Dataset: 82,274 accounts suspended by Twitter
    Time period: November 21, 2011 — November 26, 2011
    Source: Streaming API
  • [February, 2013] Social Turing Tests: Crowdsourcing Sybil Detection: The authors examine the accuracy of using crowdsourcing (specifically Mechanical Turk) to identify fraudulent accounts in social networks in addition to the best criteria for selecting experts.
    Dataset: 573 fraudulent Facebook accounts with profile images appearing in Google Image Search, later deactivated by Facebook; 1082 fraudulent Renren accounts deactivated and provided by Renren
    Time period: December, 2011 — January, 2012
    Source: Facebook Crawler; Renren data sharing agreement
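
The duplicate-content idea behind Towards Online Spam Filtering in Social Networks (and the campaign clustering in Detecting and characterizing social spam campaigns) can be illustrated with character shingling and Jaccard similarity. This is only a sketch: the shingle size, threshold, and messages are made up, and the papers use their own clustering pipelines.

    # Sketch of near-duplicate message detection via character shingles
    # and Jaccard similarity; shingle size and threshold are arbitrary.
    def shingles(text, k=5):
        text = " ".join(text.lower().split())
        return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a or b) else 0.0

    messages = [
        "Get cheap meds now http://spam.example/a",
        "Get cheap meds now!! http://spam.example/b",
        "Having coffee with an old friend",
    ]
    sets = [shingles(m) for m in messages]
    for i in range(len(messages)):
        for j in range(i + 1, len(messages)):
            similarity = jaccard(sets[i], sets[j])
            if similarity > 0.7:   # arbitrary "same campaign" threshold
                print(f"near-duplicates ({similarity:.2f}): {messages[i]!r} ~ {messages[j]!r}")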
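
The redirect-chain features behind Warningbird can be roughly approximated by following HTTP redirects and flagging interstitial hops shared across many distinct entry URLs. The sketch below uses the requests library, ignores JavaScript and meta-refresh redirects, and computes only a subset of the chain-shape features; it is not the authors’ crawler.

    # Rough sketch of redirect-chain features: resolve each posted URL's
    # redirect chain, then measure how many distinct entry URLs funnel
    # through each interstitial hop (a heavily shared hop suggests a
    # re-used cloaking server). Requires the `requests` package.
    from collections import defaultdict
    import requests

    def redirect_chain(url):
        resp = requests.get(url, timeout=10, allow_redirects=True)
        return [r.url for r in resp.history] + [resp.url]

    def chain_features(posted_urls):
        entry_urls_per_hop = defaultdict(set)
        chains = {}
        for url in posted_urls:
            chain = redirect_chain(url)
            chains[url] = chain
            for hop in chain[1:-1]:                  # interstitial hops only
                entry_urls_per_hop[hop].add(url)
        return [
            {
                "url": url,
                "chain_length": len(chain),
                "max_entries_into_a_hop": max(
                    (len(entry_urls_per_hop[h]) for h in chain[1:-1]), default=0),
            }
            for url, chain in chains.items()
        ]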

Social Graphs of Spammers: Measurement and Detection

The following is a list of papers that leverage differences in how legitimate users and spammers form social relationships as a detection mechanism. Many of these systems hinge on the assumption that spammers have a difficult time coercing legitimate users into following or befriending them, or alternatively, compromising accounts to seed relationships with. In practice, automated follow-back accounts and cultural norms may muddle their application. (For instance, in Brazil and Turkey, most relationships are reciprocated; the social graph is more a status symbol than an act of curating interesting content or signifying trust.)
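
As a toy illustration of the small-cut assumption these systems rely on, the sketch below (Python with networkx, on a synthetic graph) starts short random walks at honest nodes and counts how many end inside a sybil region that is reachable only through a single attack edge. The graph sizes, walk length, and walk count are arbitrary; real systems such as SybilGuard use carefully bounded walks rather than this simulation.

    # Toy simulation of the small-cut assumption: random walks started in
    # the honest region rarely cross the single attack edge into the sybil
    # region. Requires the `networkx` package; all parameters are made up.
    import random
    import networkx as nx

    honest = nx.connected_watts_strogatz_graph(200, 6, 0.1, seed=1)      # honest region
    sybil = nx.relabel_nodes(nx.complete_graph(50), lambda n: n + 1000)  # sybil region
    g = nx.union(honest, sybil)
    g.add_edge(0, 1000)                                                  # single attack edge

    def walk_endpoint(graph, start, length=20):
        node = start
        for _ in range(length):
            node = random.choice(list(graph.neighbors(node)))
        return node

    endpoints = [walk_endpoint(g, random.randrange(200)) for _ in range(1000)]
    crossed = sum(node >= 1000 for node in endpoints)
    print(f"{crossed} of 1000 honest-started walks ended in the sybil region")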

  • [September, 2006] Sybilguard: defending against sybil attacks via social networks: SybilGuard is an early work in detecting sybil accounts in social networks, though primarily for peer-to-peer networks and not online social networks. The primary assumption is that a small cut exists between the legitimate social graph and the graph spammers create amongst themselves. If this is true, then random walks started from the legitimate region should rarely end in the sybil region, as only a limited number of edges cross the cut. (Existing work, including Measuring the mixing time of social graphs, shows mixing times in online social networks are slower in practice than expected, and thus some legitimate users may be construed as sybils due to slow mixing.)
  • [February, 2009] Sybilinfer: Detecting sybil nodes using social networks: SybilInfer is identical in reasoning to SybilGuard, but the way in which random walks are performed differs, offering a better performance bound.
  • [June, 2010] SybilLimit: A near-optimal social network defense against sybil attacks: SybilLimit is a follow-on work to SybilGuard, improving performance guarantees.
  • [September, 2011] Spam filtering in twitter using sender-receiver relationship: The authors present a spam detection system for unsolicited mentions whereby the distance between users is used to classify communication as spam or benign.
    Dataset: 308 Twitter spam accounts posting 10K tweets
    Time period: February, 2011 — March, 2011
    Source: Spam accounts reported by users to the @spam Twitter handle.
  • [September, 2011] Die Free or Live Hard? Empirical Evaluation and New Design for Fighting Evolving Twitter Spammers: The authors develop a classifier of spam accounts (though biased towards likely phished accounts) with network-based features including clustering, the bi-directionality of relationships, and betweenness centrality. (A rough sketch of computing these graph features appears at the end of this list.)
    Dataset: 2,060 accounts posting phishing URLs on Twitter (from the perspective of Capture-HPC and Google Safebrowsing; includes redirects in blacklist check); drawn from a sample of 485,721 accounts
    Time period: Unspecified
    Source: Breadth first search over Twitter, seeded from Streaming API
  • [November, 2011] Uncovering Social Network Sybils in the Wild: The authors develop a sybil detection scheme based on clustering coefficients between accounts and the rate of incoming and outgoing relationship formation requests. A larger dataset is then used to understand how spammers form social relationships, where the authors find the vast majority of spam accounts do not form relationships amongst themselves.
    Dataset: 1000 fraudulent Renren accounts for classification; 660,000 fraudulent Renren accounts for study.
    Time period: Circa 2008 — February, 2011
    Source: Data sharing agreement with Renren
  • [April, 2012] Understanding and combating link farming in the twitter social network: The authors examine how spammers form relationships in Twitter and find that in 2009 the vast majority of spammers followed and were followed by ‘social capitalists’: legitimate users who automatically reciprocate relationships.
    Dataset: 41,352 suspended Twitter accounts that posted a blacklisted URL
    Time period: August, 2009 crawl; February, 2011 suspension check
    Source: Previous crawl of Twitter social graph in 2009
    Note: Lists of these users are available on blackhat forums; simply do a search for ‘twitter followback list’. Examples:

    • hxxp://www.blackhatworld.com/blackhat-seo/social-networking-sites/358556-free-list-18k-followback-twitter-users.html
    • hxxp://www.blackhatworld.com/blackhat-seo/social-networking-sites/365067-fresh-35k-twitter-followback-list.html
  • [April, 2012] Analyzing spammers’ social networks for fun and profit: a case study of cyber criminal ecosystem on twitter: The authors examine the social graph of a small subset of Twitter spammers (or compromised users) and determine that a large portion of their following arises from ‘social butterflies’ (i.e. ‘social capitalists’ in similar studies).
    Dataset: 2,060 accounts posting phishing URLs on Twitter (from the perspective of Capture-HPC and Google Safebrowsing; includes redirects in blacklist check); drawn from a sample of 485,721 accounts
    Time period: Unspecified
    Source: Breadth first search over Twitter, seeded from Streaming API
  • [April, 2012] Aiding the Detection of Fake Accounts in Large Scale Social Online Services: The authors develop a SybilGuard-like algorithm and deploy it on the Tuenti social network, detecting 200K likely sybil accounts.
  • [August, 2012] Poultry Markets: On the Underground Economy of Twitter Followers: The authors examine an emerging marketplace for purchased relationships, an alternative to link farming performed by follow-back accounts.
  • [October, 2012] Innocent by Association: Early Recognition of Legitimate Users: The authors develop a SybilGuard-like system, but rather than attempt to find sybil accounts, the system allows legitimate users to vouch for new users (through a transparent action such as sending a message) with the assumption that legitimate users would otherwise not associate with spammers. Unvouched users can then be throttled via CAPTCHAs or other automation barriers to reduce the impact they have on a system.
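
For reference, the graph features several of these papers lean on, such as clustering coefficients, the fraction of reciprocated relationships, and betweenness centrality, can be computed with networkx along the lines below. The tiny directed graph is synthetic, and these features are only a subset of what any individual paper actually uses.

    # Sketch of graph features used by several of the papers above:
    # clustering coefficient, fraction of reciprocated (bidirectional)
    # relationships, and betweenness centrality. Requires `networkx`.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ("alice", "bob"), ("bob", "alice"),        # mutual relationships
        ("bob", "carol"), ("carol", "bob"),
        ("spammer", "alice"),                      # unreciprocated follows
        ("spammer", "bob"), ("spammer", "carol"),
    ])

    def graph_features(graph, node):
        followed = list(graph.successors(node))
        reciprocated = sum(graph.has_edge(v, node) for v in followed)
        return {
            "clustering": nx.clustering(graph.to_undirected(), node),
            "bidirectional_fraction": reciprocated / max(len(followed), 1),
            "betweenness": nx.betweenness_centrality(graph)[node],
        }

    for node in ("alice", "spammer"):
        print(node, graph_features(g, node))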

Social Astroturfing & Political Abuse

Abuse of social networks is not limited to spam and criminal monetization; a number of politically-motivated attacks have occurred over the past few years. These attacks aim to sway public opinion, disseminate false information, or disrupt the conversations of legitimate users.

Service Abuse

Monetization in social networks hinges on having URLs that convert traffic into profit. In this process, a number of other services are abused, the most prominent of which are URL shorteners.

Social Malware & Malicious Applications

Compromised accounts in social networks allow miscreants to leverage the trust users place in their friends and family as a tool for increased clickthrough and propagation. The following is a list of papers detailing known social engineering campaigns or applications used to compromise social networking accounts.