This post is based on research conducted in collaboration with Twitter, to appear in Usenix Security 2013. A pdf is available under my publications. Any views or opinions discussed herein are my own and not those of Twitter.
As web services such as Twitter, Facebook, Google, and Yahoo now dominate the daily activities of Internet users, cyber criminals have adapted their monetization strategies to engage users within these walled gardens. This has led to a proliferation of fraudulent accounts — automatically generated credentials used to disseminate scams, phishing, and malware. Recent studies from 2011 estimate at least 3% of active Twitter accounts are fraudulent. Facebook estimates its own fraudulent account population at 1.5% of its active user base, and the problem extends to major web services beyond just social networks.
The complexities required to circumvent registration barriers such as CAPTCHAs, email confirmation, and IP blacklists have led to the emergence of an underground market that specializes in selling fraudulent accounts in bulk. Account merchants operating in this space brazenly advertise: a simple search query for “buy twitter accounts” yields a multitude of offers for fraudulent Twitter credentials with prices ranging from $10–200 per thousand. Once purchased, accounts serve as stepping stones to more profitable spam enterprises that degrade the quality of web services, such as pharmaceutical spam or fake anti-virus campaigns.
To understand this shadowy economy, we investigate the market for fraudulent Twitter accounts to monitor prices, availability, and fraud perpetrated by 27 merchants over the course of a 10-month period. We use our insights to develop a classifier to retroactively detect several million fraudulent accounts sold via this marketplace, 95% of which we disable with Twitter’s help. During active months, the 27 merchants we monitor appear responsible for registering 10–20% of all accounts later flagged for spam by Twitter, generating $127–459K for their efforts.
Account Merchants and Pricing
With no central operation of the underground market, we resort to investigating common haunts: advertisements via search engines, blackhat forums such as hxxp://blackhatworld.com, and freelance labor pages including Fiverr and Freelancer. In total, we identify a disparate group of 27 merchants from whom we elect to purchase accounts. We conduct 140 successful orders, purchasing roughly 120K accounts over the period June, 2012 — April, 2013. Prices throughout our study are relatively stable, as shown below:
Of the orders we placed, merchants fulfilled 70% in a day and 90% within 3 days. We believe the stable pricing and ready availability of fraudulent accounts is a direct result of minimal adversarial pressures on account merchants.
Circumventing Automated Registration Barriers
IP Addresses: Unique IP addresses are a fundamental resource for registering accounts in bulk. Without a diverse IP pool, fraudulent accounts would fall easy prey to network-based blacklisting and throttling.
As a whole, miscreants registered 79% of the accounts we purchase from unique IP addresses located across the globe. India is the most popular origin of registration, accounting for 8.5% of all fraudulent accounts in our dataset. Other “low-quality” IP addresses (e.g. inexpensive hosts from the perspective of the underground market) follow in popularity.
|Registration Origin|Total Accounts Registered from Origin|Unique IPs|Popularity|
|---|---|---|---|
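A table like the one above can be derived from registration logs with a simple aggregation over accounts and unique IPs per origin. A minimal sketch (the log records here are hypothetical placeholders, not our actual dataset):

```python
from collections import Counter

# Hypothetical registration log: (account_id, ip, country_of_origin).
registrations = [
    ("a1", "1.2.3.4", "IN"),
    ("a2", "1.2.3.4", "IN"),   # a reused IP counts once toward unique IPs
    ("a3", "5.6.7.8", "IN"),
    ("a4", "9.9.9.9", "UA"),
]

# Total accounts registered from each origin.
accounts_by_origin = Counter(country for _, _, country in registrations)

# Distinct IP addresses observed per origin.
unique_ips_by_origin = {
    country: len({ip for _, ip, c in registrations if c == country})
    for country in accounts_by_origin
}

print(accounts_by_origin["IN"])    # 3 accounts registered from India
print(unique_ips_by_origin["IN"])  # 2 unique IPs
```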
Email Confirmation: Web services frequently inhibit automated account creation by requiring new users to confirm an email address with a challenge response code. Unsurprisingly, we find this barrier is not insurmountable, but it does impact the pricing of accounts, warranting its continued use. A list of the most abused email addresses is as follows:
|Email Provider|Accounts Abused|Fraction of All Email-Confirmed Accounts|
|---|---|---|
In total, merchants email-confirm 77% of the accounts we acquire, each seeded with a unique email address. The failure of email confirmation as a barrier stems directly from pervasive account abuse tied to web mail providers. Merchants abuse Hotmail addresses to confirm 60% of Twitter accounts, followed in popularity by Yahoo and mail.ru. This highlights the interconnected nature of account abuse, where credentials from one service can serve as keys to abusing yet another.
Despite merchants’ ability to verify an email address, we find that email-confirmed accounts sell for 56% more than their non-confirmed counterparts. This difference likely reflects the base cost of an email address and the overhead of responding to a confirmation email.
CAPTCHA Solving: As with email confirmation, CAPTCHAs are not an insurmountable barrier to automated account creation, but they do prevent a substantial number of fraudulent account registrations. We find that 92% of fraudulent accounts that are shown a CAPTCHA fail to generate a valid solution. (This failure rate is slightly higher than expected: previously studied automated solvers achieve success rates of 18–30%.) Despite this fact, account sellers are still able to register thousands of accounts over time, simply playing a game of odds.
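The economics of this game of odds reduce to a simple expected-value calculation. A sketch, assuming the roughly 8% per-CAPTCHA success rate implied by the 92% failure rate above:

```python
# With ~92% of CAPTCHA attempts failing, each attempt succeeds with
# probability ~0.08; bulk registration is purely a numbers game.
success_rate = 0.08

def expected_accounts(attempts, rate=success_rate):
    """Expected number of successful registrations from `attempts` tries."""
    return attempts * rate

def attempts_needed(target_accounts, rate=success_rate):
    """Attempts required, in expectation, to register `target_accounts`."""
    return round(target_accounts / rate)

print(expected_accounts(100_000))  # ≈8,000 accounts in expectation
print(attempts_needed(1_000))      # ≈12,500 attempts
```

At these rates, the CAPTCHA raises per-account cost by roughly 12x rather than stopping registration outright, which matches the pricing pressure (but not prevention) observed in the market.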
Impact of Merchants on Twitter Spam
In order to gauge the impact that merchants have on Twitter spam, we develop a classifier that retroactively identifies several million spam accounts registered in the last year. Of these, 73% were sold and actively tweeting or forming relationships at one point in time, while the remaining 27% lay dormant, yet to be purchased. We find that, during active months, the underground market was responsible for registering 10–20% of all accounts that Twitter later flagged as spam.
The most damaging merchants from our impact analysis operate out of blackhat forums and web storefronts, while Fiverr and Freelancer sellers generate orders of magnitude fewer accounts.
The End Goal — Profit
We estimate the revenue generated by the underground market based on the total accounts sold and the prices charged during their sale. We distinguish accounts that have been sold from those that lay dormant and await sale based on whether an account has sent tweets or formed relationships. For sold accounts, we identify which merchant created the account and determine the minimum and maximum price the merchant would have charged for that account based on our historical pricing data.
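The bounding computation described above can be sketched as follows; the merchant names, prices, and account volumes here are hypothetical placeholders, not our measured values:

```python
# Hypothetical historical price observations, in USD per thousand accounts,
# keyed by merchant.
price_history = {
    "merchant_a": [20, 30, 25],
    "merchant_b": [100, 150],
}

# Accounts classified as sold (i.e. they tweeted or formed relationships),
# attributed to the merchant believed to have registered them.
sold_counts = {"merchant_a": 50_000, "merchant_b": 10_000}

def revenue_bounds(sold, history):
    """Min/max revenue: each sold account is priced at the merchant's
    lowest and highest observed per-thousand price, respectively."""
    lo = sum(n * min(history[m]) / 1000 for m, n in sold.items())
    hi = sum(n * max(history[m]) / 1000 for m, n in sold.items())
    return lo, hi

lo, hi = revenue_bounds(sold_counts, price_history)
print(lo, hi)  # 2000.0 3000.0
```

Dormant accounts contribute nothing to either bound, which is why distinguishing sold from unsold stock matters for the estimate.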
We estimate that the total revenue generated by the underground account market through the sale of Twitter credentials is in the range of $127,000–$459,000 over the course of a year. We note that many of the merchants we track simultaneously sell accounts for a variety of web services, so this value likely represents only a fraction of their overall revenue. Nevertheless, our estimated income is far less than the revenue generated from actually sending spam or selling fake anti-virus, where revenue is estimated in the tens of millions. As such, account merchants are merely stepping stones for larger criminal enterprises, which in turn disseminate scams, phishing, and malware throughout Twitter.
Disrupting the Underground Marketplace
With Twitter’s cooperation, we suspend an estimated 95% of all fraudulent accounts registered by the 27 merchants we track, including those previously sold but not yet suspended for spamming. We estimate our precision through this process at 99.9942%.
Immediately after Twitter suspended the last of the underground market’s accounts, we placed 16 new orders for accounts from the 10 merchants we suspected of controlling the largest stockpiles. Of 14,067 accounts we purchased, 90% were suspended on arrival due to Twitter’s previous intervention. When we requested working replacements, one merchant responded with:
All of the stock got suspended … Not just mine .. It happened with all of the sellers .. Don’t know what twitter has done …
Similarly, immediately after suspension, hxxp://buyaccs.com put up a notice on their website stating “Временно не продаем аккаунты Twitter.com”, translating via Google roughly to “Temporarily not selling Twitter.com accounts”.
While Twitter’s initial intervention was a success, the market has begun to recover. Of 6,879 accounts we purchased two weeks after Twitter’s intervention, only 54% were suspended on arrival. As such, long-term disruption of the account marketplace requires both increasing the cost of account registration and integrating more robust at-signup abuse classification into the registration process.
In the process of working on my thesis, I’ve had to write some new background content on the taxonomy of social network spam. I figured I would share these ideas here, since the probability of someone reading my search-indexed blog far exceeds the probability of someone reading a 150-page, non-indexed document. As usual, any views or opinions discussed herein are my own.
As the underground economy adapts its strategies to target users in social networks, attacks require three components: (1) account credentials, (2) a mechanism to engage with legitimate users (i.e. the victims that will be exploited to realize a profit), and (3) some form of monetizable content. The latter is typically a link that redirects a victim from the social network to a website that generates a profit via spamvertised products, fake software, clickfraud, banking theft, or malware that converts a victim’s machine or assets (e.g. credentials) into a commodity for the underground economy. With respect to Twitter, the underpinnings of each of these components can be outlined as follows:
What becomes apparent from this taxonomy is that, while there are several ways to engage with victims (and more constantly emerge as new features are added — such as Vine), the ingress and egress points of abuse are far fewer. For this reason, I typically advocate for anti-spam teams to develop URL-based defenses and at-registration-time defenses. Strangling those two choke points collapses all the other pain points of social network spam and abuse, which are arguably harder to solve given the diverse ways legitimate users engage one another within social networks.
The rest of this post spends a little time defining the different components of this abuse taxonomy.
Credentials — The Ingress Point
To interact with users in a social network, criminals must first obtain credentials for either new or existing accounts. This has led to a proliferation of fraudulent accounts — automatically generated credentials used exclusively to disseminate scams, phishing, and malware — as well as compromised accounts — legitimate credentials that have fallen into the hands of miscreants, who repurpose them for nefarious ends. Notable sources of compromise include brute force guessing of weak passwords, password reuse with compromised websites, and worms or phishing attacks that propagate within the network.
Any of the multitude of features on Twitter can be a target of abuse in a criminal’s quest for an audience. While it’s possible to solve one facet of abuse, criminals constantly evolve how they engage with users, leveraging new features added to social networks and adapting to the defense mechanisms employed by social network operators. The result is a reactive development cycle that never affords defenders any reprieve. To illustrate this point, here are just some of the ways criminals engage with users.
Mention Spam consists of sending an unsolicited @mention or @reply to a victim, bypassing any requirement of sharing a social connection with a victim. Spammers can either initiate a conversation or join an existing conversation to appear in the expanded list of tweets associated with a conversation between a victim and her followers.
Direct Message Spam is identical to mention spam, but requires that a criminal’s account be followed by a victim. As such, DM spam is typically used when an account has become compromised due to the low rate of fraudulent accounts (11% — “Suspended Accounts in Retrospect”) that form relationships with legitimate users.
Trend Poisoning relies on embedding popular #hashtags in a spam tweet, allowing the tweet to appear in real-time searches about breaking news and world events performed by victims. Even relevance-based searches can be gamed by inflating the popularity of a spam account or tweet, similar to search engine optimization.
Search Poisoning is identical to trend poisoning, but instead of emerging topics typified by #hashtags, spammers embed specific keywords/brands in their tweets such as “viagra” and “ipad”. From there, users that search for information relevant to a keyword/brand will be exposed to spam.
Fake Trends leverage the availability of thousands of accounts under the control of a single criminal to effectively generate a new trend. From there, victims looking at emerging content will be exposed to the criminal’s message.
Follow Spam occurs when a criminal leverages an account to form hundreds of relationships with legitimate users. The aim of this approach is to have a victim either reciprocate the relationship or at least view the criminal’s account profile, which often has a URL embedded in its bio.
Favorite Spam relies on abusing functionality on Twitter which allows a user to favorite, or recommend, a tweet. Criminals will mass-favorite tweets from victims in the hopes they either reciprocate a relationship or view the criminal’s account profile, just like follow spam.
Fake Followers are distinct from follow spam, in that a criminal purchases relationships from the underground economy. The goal here is to inflate the popularity of a criminal’s account (often for SEO purposes).
Retweet Spam entails hundreds of spam accounts all retweeting another (spam) account’s tweet (often for SEO purposes).
Profit lies at the heart of the criminal abuse ecosystem. Monetization strategies form a spectrum, from selling products to a user with their consent to stealing from a victim without consent. To monetize a victim, users are funneled from Twitter to another website via a link. The exception to this rule is abuse that lacks a clear path to profit. Examples include celebrities who buy fake followers to inflate their popularity (never requiring a link to achieve a payout — the payout is external to Twitter) as well as politically-motivated attacks such as censoring speech or controlling the message surrounding emerging trends (where the payout is political capital or damage control). While the latter attacks are realistic threats, the vast majority of abuse currently targeting social networks is more criminal in nature.
Spamvertised Products include advertisements for pharmaceuticals, replica goods, and pirated software. Spam in this case is a means to an end: getting users to willingly buy products, freely offering their credit card information in return for a product.
Fake Software includes any malware or webpage that prompts a user to buy ineffectual software. The most prominent approach here is selling rogue antivirus, where users are duped into paying an annual or lifetime fee in return for “anti-virus” software that in fact provides no protection.
Clickfraud generates revenue by compromising a victim’s machine or redirecting their traffic to simulate legitimate traffic to pay-per-click advertisements. These ads typically appear on pages controlled by miscreants, while the ads are syndicated from advertising networks such as Google AdSense. Money is thus siphoned from advertisers into the hands of criminals.
Banking Theft, epitomized by information stealers such as Zeus or SpyEye, relies on installing malware on a victim’s machine or phishing their credentials in order to harvest sensitive user data including documents, passwords, and banking credentials. A criminal can then sell access to these accounts or liquidate the account’s assets.
Underground Infrastructure is the final source of potential profit. Instead of directly going after assets controlled by a victim (e.g. wealth, traffic, credentials), criminals can sell access to a victim’s compromised machine and convert it into a proxy or web host. Alternatively, criminals can sell installs of malware to the pay-per-install market or exploit-as-a-service market, whereby another criminal that specializes in one of the aforementioned monetization techniques utilizes the compromised machine, paying a small finder’s fee to the criminal who actually compromised the host.
The process of monetizing victims in social networks is a complex chain of dependencies. If any component of that chain should fail, spam and abuse cannot be profitable. To simplify the abuse process for spammers, an underground economy has emerged that connects criminals with parties selling a range of specialized products and services including spam hosting, CAPTCHA solving services, pay-per-install hosts, and exploit kits. Even simple services such as garnering favorable reviews or writing web page content are for sale.
Specialization within this ecosystem is the norm. Organized criminal communities include carders that siphon credit card wealth; email spam affiliate programs; and browser exploit developers and traffic generators. These distinct roles allow miscreants to abstract away certain complexities of abuse, in turn selling their specialty to the underground market for a profit.
I decided to compile a list of social networking papers that I’ve read. The list is likely incomplete, but gives shape to the current research pushes surrounding social network spam and abuse. Whenever appropriate, I detail the methodology for how a study was conducted; most data collection techniques carry an inherent bias that is worth being forthright about.
Social Network Spam and Abuse: Measurement and Detection
The following is a list of academic papers on the topics of social network spam and abuse. In particular, the papers cover (1) how spammers monetize social networks, (2) how spammers engage with social network users, and (3) how to detect both compromised and fraudulent accounts.
Detection falls into two categories: at-signup detection which attempts to identify spam accounts before an account takes any actions visible to social network users; and at-abuse detection which relies on identifying abusive behavior such as posting spam URLs or forming too many relationships. At-signup detection has yet to receive much academic treatment (mostly a data access problem), while at-abuse detection has been conducted based on tweet content; URL content and redirects; and account behaviors such as the frequency of tweeting or participation in trending topics. Papers that detect spam accounts based on the social graph are detailed in the following section.
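To make the at-abuse feature sets concrete, here is a toy scorer over a few account features common across these papers (URL fraction, follower/following ratio, account age). The thresholds and weights are invented purely for illustration and do not come from any cited paper:

```python
# Toy at-abuse scorer over account-based features from the literature.
# All thresholds/weights below are illustrative assumptions, not a
# reproduction of any published classifier.

def spam_score(account):
    score = 0.0
    if account["url_fraction"] > 0.8:   # nearly every tweet carries a URL
        score += 1.0
    if account["followers"] < 0.1 * account["following"]:
        score += 1.0                    # follows many, followed back by few
    if account["age_days"] < 7:         # very young account
        score += 0.5
    return score

suspicious = {"url_fraction": 0.95, "followers": 3,
              "following": 500, "age_days": 2}
benign = {"url_fraction": 0.10, "followers": 200,
          "following": 180, "age_days": 900}

print(spam_score(suspicious))  # 2.5
print(spam_score(benign))      # 0.0
```

Real systems replace hand-picked thresholds with trained classifiers, but the feature intuition — URL density, relationship asymmetry, account age — is the same.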
- [July, 2010] Detecting spammers on twitter: The authors develop a classifier to detect fraudulent accounts based on the fraction of URLs in tweets; the fraction of tweets with hashtags; account age; follower-following ratios; and other account-based features. The most discriminative features were the number of URLs sent and an account’s age.
Dataset: 8,207 manually labeled Twitter accounts
Time period: December 8, 2008 — September 24, 2010
Source: Crawl of all accounts with user IDs less than 80 million; filtered to only include accounts that tweet topics including (1) #musicmonday, (2) Boyle, (3) Jackson.
- [October, 2010] @spam: The underground on 140 characters or less: Our study of compromised Twitter accounts, the clickthrough rate on spam tweets, and the ineffectiveness of blacklists at detecting social network spam in a timely fashion. Detailed summary here.
Dataset: 3 million spam tweets classified by whether the URL in the tweet was blacklisted by URIBL, Joewein, or Google Safebrowsing. Classification includes the initial URL, all redirect URLs, and the final landing URL.
Time period: January, 2010 — February, 2010
Source: Streaming API, predicated on containing URLs
- [November, 2010] Detecting and characterizing social spam campaigns: The authors develop a classifier to detect Facebook spam accounts based on the bursty nature of spam campaigns (e.g. many messages sent in a short period) and diverse source of accounts (e.g. multiple accounts coordinating together, typically sending similar text). Surprisingly, 97% of the accounts identified were suspected of being compromised rather than fraudulent. The most popular spam types were “someone has a crush on you”-scams, ringtones, and pharma spam.
Dataset: 187M wall posts, 212,863 of which are detected as spam sent by roughly 57,000 accounts (validated via blacklists or an obfuscation heuristic)
Time period: January, 2008 — June, 2009
Source: Crawl of Facebook networks, described in detail in User Interactions in Social Networks and their Implications
- [December, 2010] Detecting spammers on social networks: The authors develop a classifier based on the fraction of messages containing URLs; the similarity of messages; total messages sent; and the total number of friends. The spam sent from detected accounts include dating, porn, ad-based monetization, and money making scams.
Dataset: 11,699 Twitter accounts; 4,055 Facebook accounts
Time period: June 6, 2009 — June 6, 2010
Source: Spammers messaging or forming relationships with 300 passive honeypots each for Facebook and Twitter
- [April, 2011] Facebook immune system: A brief summary of Facebook’s system for detecting spam. There is no mention of the volume of spam Facebook receives (though SEC filings say it’s roughly 1%) or the threats the service faces.
Dataset: 567,784 spam URLs posted to Twitter (as identified by Google Safebrowsing, SURBL, URIBL, Anti-Phishing Work Group, and Phishtank); 1.25 million spam URLs in emails (as identified by spam traps)
Time period: September, 2010 — October, 2010
Source: Email spam traps; Twitter Streaming API
- [November, 2011] Suspended accounts in retrospect: An analysis of twitter spam: Our analysis of fraudulent Twitter accounts, the tools used to generate spam, and the resulting spam campaigns and monetization strategies. Detailed summary here.
Dataset: 1,111,776 accounts suspended by Twitter
Time period: August 17, 2010 — March 4, 2011
Source: Streaming API, predicated on containing URLs
- [February, 2012] Towards Online Spam Filtering in Social Networks: The authors build a detection framework for Twitter spam that hinges on identifying duplicate content sent from multiple accounts. A number of other account-based features are used as well.
Dataset: 217,802 spam wall posts from Facebook (from previous study); 467,390 spam tweets as identified by URL shorteners no longer serving the URL
Time period: January, 2008 — June, 2009 for Facebook; June 1, 2011 — July 21, 2011 for Twitter
Source: Facebook Crawl; Twitter API predicated on trending topics
- [February, 2012] Warningbird: Detecting suspicious urls in twitter stream: In contrast to content-based spam detection or account-based spam detection, the authors rely on the URL redirect chain used to cloak spam content as a detection mechanism. The core idea is that for cloaking to occur (e.g. when an automated crawler is shown one page and a victim a second, distinct page), some site must perform the multiplexing and that site is frequently re-used across campaigns. The final features used for classification include the redirect chain length, position of URL in redirect chain, distinct URLs leading to interstitial, and distinct URLs pointed to by interstitial. A number of account-based features are also used including age, number of posters, tweet similarity, and following-follower ratio.
Dataset: 263,289 accounts suspended by Twitter and the URLs
Time period: April, 2011 — August, 2011
Source: Streaming API predicated on containing URLs
- [December, 2012] Twitter games: how successful spammers pick targets: The authors examine how spammers engage with Twitter users (e.g. mentions, hashtags, retweets, social graph) and the types of spam sent. The vast majority of spammers are considered unsuccessful based on how quickly they are suspended; longer-lived accounts rely on Twitter’s social graph and unsolicited mentions to spam.
Dataset: 82,274 accounts suspended by Twitter
Time period: November 21, 2011 — November 26, 2011
Source: Streaming API
- [February, 2013] Social Turing Tests: Crowdsourcing Sybil Detection: The authors examine the accuracy of using crowdsourcing (specifically Mechanical Turk) to identify fraudulent accounts in social networks in addition to the best criteria for selecting experts.
Dataset: 573 fraudulent Facebook accounts with profile images appearing in Google Image Search, later deactivated by Facebook; 1082 fraudulent Renren accounts deactivated and provided by Renren
Time period: December, 2011 — January, 2012
Source: Facebook Crawler; Renren data sharing agreement
Social Graphs of Spammers: Measurement and Detection
The following is a list of papers that leverage discrepancies between how legitimate users and spammers form social relationships as a detection mechanism. Many of these systems hinge on the assumption that spammers have a difficult time coercing legitimate users into following or befriending them, or alternatively, compromising accounts to seed relationships. In practice, automated follow-back accounts and cultural norms may muddle their application. (For instance, in Brazil and Turkey most relationships are reciprocated; the social graph is more a status symbol than an act of curating interesting content or signifying trust.)
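The small-cut intuition underlying many of these systems can be demonstrated on a toy graph: with only a single attack edge bridging the honest and sybil regions, short random walks started from an honest node rarely cross to the sybil side. A minimal sketch (the graph is contrived for illustration, not drawn from any real network):

```python
import random

random.seed(0)

# Toy graph: a well-connected honest region (nodes 0-4), a sybil region
# (nodes 5-9), and a single attack edge (4, 5) forming the small cut.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (0, 4), (2, 4),
         (4, 5),  # the lone attack edge
         (5, 6), (5, 7), (6, 7), (6, 8), (7, 8), (8, 9), (5, 9), (7, 9)}
adj = {}
for a, b in edges:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)

def walk_ends_in_sybil(start, length):
    node = start
    for _ in range(length):
        node = random.choice(adj[node])
    return node >= 5  # nodes 5-9 are the sybil region

# Short walks from an honest node rarely cross the single attack edge:
# the ending-in-sybil fraction stays well below the ~0.5 a fully mixed
# walk on this (degree-balanced) graph would give.
trials = 10_000
crossings = sum(walk_ends_in_sybil(0, 4) for _ in range(trials))
print(crossings / trials)
```

Adding more attack edges (or lengthening the walks toward the mixing time) raises this fraction, which is exactly the trade-off SybilGuard-style bounds formalize.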
- [September, 2006] Sybilguard: defending against sybil attacks via social networks: SybilGuard is an early work in detecting sybil accounts in social networks, though primarily for peer-to-peer networks rather than online social networks. The primary assumption is that a small cut exists between the legitimate social graph and spammers (who create a graph amongst themselves). If this is true, then random walks started from the legitimate network should rarely end in the sybil region due to the limited paths across the cut. (Existing work, including Measuring the mixing time of social graphs, shows mixing times in online social networks are slower in practice than expected, and thus some legitimate users may be construed as sybils due to slow mixing.)
- [February, 2009] Sybilinfer: Detecting sybil nodes using social networks: SybilInfer is identical in reasoning to SybilGuard, but the way in which random walks are performed differs, offering a better performance bound.
- [June, 2010] SybilLimit: A near-optimal social network defense against sybil attacks: SybilLimit is a follow-on work to SybilGuard, improving performance guarantees.
- [September, 2011] Spam filtering in twitter using sender-receiver relationship: The authors present a spam detection system for unsolicited mentions whereby the distance between users is used to classify communication as spam or benign.
Dataset: 308 Twitter spam accounts posting 10K tweets
Time period: February, 2011 — March, 2011
Source: User reported spam accounts to @spam Twitter handle.
- [September, 2011] Die Free or Live Hard? Empirical Evaluation and New Design for Fighting Evolving Twitter Spammers: The authors develop a classifier of spam accounts (though biased towards likely phished accounts) with network-based features including clustering, the bi-directionality of relationships, and betweenness centrality.
Dataset: 2,060 accounts posting phishing URLs on Twitter (from the perspective of Capture-HPC and Google Safebrowsing; includes redirects in blacklist check); drawn from a sample of 485,721 accounts
Time period: Unspecified
Source: Breadth first search over Twitter, seeded from Streaming API
- [November, 2011] Uncovering Social Network Sybils in the Wild: The authors develop a sybil detection scheme based on clustering coefficients between accounts and the rate of incoming and outgoing relationship formation requests. A larger dataset is then used to understand how spammers form social relationships, where the authors find the vast majority of spam accounts do not form relationships amongst themselves.
Dataset: 1000 fraudulent Renren accounts for classification; 660,000 fraudulent Renren accounts for study.
Time period: Circa 2008 — February, 2011
Source: Data sharing agreement with Renren
- [April, 2012] Understanding and combating link farming in the twitter social network: The authors examine how spammers form relationships in Twitter and find that in 2009, the vast majority of spammers followed and were followed by ‘social capitalists’: legitimate users who automatically reciprocate relationships.
Dataset: 41,352 suspended Twitter accounts that posted a blacklisted URL
Time period: August, 2009 crawl; February, 2011 suspension check
Source: Previous crawl of Twitter social graph in 2009
Note: Lists of these users are available on blackhat forums; simply do a search for ‘twitter followback list’. Examples:
- [April, 2012] Analyzing spammers’ social networks for fun and profit: a case study of cyber criminal ecosystem on twitter: The authors examine the social graph of a small subset of Twitter spammers (or compromised users) and determine that a large portion of their following arises from ‘social butterflies’ (e.g. ‘social capitalists’ in similar studies)
Dataset: 2,060 accounts posting phishing URLs on Twitter (from the perspective of Capture-HPC and Google Safebrowsing; includes redirects in blacklist check); drawn from a sample of 485,721 accounts
Time period: Unspecified
Source: Breadth first search over Twitter, seeded from Streaming API
- [April, 2012] Aiding the Detection of Fake Accounts in Large Scale Social Online Services: The authors develop a SybilGuard-like algorithm and deploy it on the Tuenti social network, detecting 200K likely sybil accounts.
- [August, 2012] Poultry Markets: On the Underground Economy of Twitter Followers: The authors examine an emerging marketplace for purchased relationships, an alternative to link farming performed by follow-back accounts.
- [October, 2012] Innocent by Association: Early Recognition of Legitimate Users: The authors develop a SybilGuard-like system, but rather than attempt to find sybil accounts, the system allows legitimate users to vouch for new users (through a transparent action such as sending a message), with the assumption that legitimate users would otherwise not associate with spammers. Unvouched users can then be throttled via CAPTCHAs or other automation barriers to reduce the impact they have on a system.
Social Astroturfing & Political Abuse
Abuse of social networks is not limited to spam and criminal monetization; a number of politically-motivated attacks have occurred over the past few years. These attacks aim to either sway public opinion, disseminate false information, or disrupt the conversations of legitimate users.
- [July, 2011] Detecting and Tracking Political Abuse in Social Media
- [April, 2012] Serf and turf: Crowdturfing for fun and profit
- [April, 2012] Adapting Social Spam Infrastructure for Political Censorship
- [June, 2012] #bias: Measuring the Tweeting Behavior of Propagandists
Spam URLs & Shortening Services

Monetization in social networks hinges on having URLs that convert traffic into a profit. In this process, a number of other services are abused, the most prominent of which are URL shorteners.
- [March, 2011] we.b: The web of short URLs
- [September, 2011] Phi.sh/$oCiaL: the phishing landscape through short URLs
Social Malware & Malicious Applications
Compromised accounts in social networks allow miscreants to leverage the trust users place in their friends and family as a tool for increased clickthrough and propagation. The following is a list of papers detailing known social engineering campaigns or applications used to compromise social networking accounts.
- [October, 2010] The Koobface Botnet and the Rise of Social Malware
- [August, 2012] Efficient and scalable socware detection in online social networks
This post is based on research conducted in collaboration with Google, to appear in CCS 2012. A pdf is available under my publications. Any views or opinions discussed herein are my own and not those of Google.
Driveby downloads — webpages that attempt to exploit a victim’s browser or plugins (e.g. Flash, Java) — have emerged as one of the dominant vectors for infecting hosts with malware. This revolution in the underground ecosystem has been fueled by the exploit-as-a-service marketplace, where exploit kits such as Blackhole and Incognito provide easily configurable tools that handle all of the “dirty work” of exploiting a victim’s browser in return for a fee. This business model follows in the footsteps of a dramatic evolution in the world of for-profit malware over the last five years, where host compromise is now decoupled from host monetization. Specifically, the means by which a host initially falls under an attacker’s control are now independent of the means by which an(other) attacker abuses the host in order to realize a profit, such as sending spam, information theft, or fake anti-virus.
In the case of exploit kits, attackers can funnel traffic from compromised sites or SEO boosted content to exploit kits, taking control of a victim’s machine without any knowledge of the complexities surrounding browser and plugin vulnerabilities. These hosts can in turn be sold to the pay-per-install marketplace or directly monetized by the attacker. From the perspective of Google Chrome, driveby downloads outstrip social engineering as the most prominent threat, while Microsoft’s latest security intelligence report (SIRv12) highlights the growing threat of driveby downloads, shown below:
In order to understand the impact of the exploit-as-a-service paradigm on the malware ecosystem, we performed a detailed analysis of:
- The prevalence of exploit kits across malicious URLs
- The families of malware installed upon a successful browser exploit, compared to executables found in email spam, software torrents, the pay-per-install market, and live network traffic
- The traffic volume, lifetime, and popularity of malicious websites.
To carry out this study, we analyzed 77,000 malicious URLs provided to us by Google, along with a crowd-sourced feed of blacklisted URLs known to direct to exploit kits. These URLs led to over 10,000 distinct binaries, which we ran in a contained environment (i.e. no side-effects visible to the outside world) to determine the family of malware as well as its monetization approach. We also aggregated and executed over 50,000 distinct binaries pulled from email spam, software and warez torrents, pay-per-install distribution sites, and live network traffic containing malware from corporate settings.
Anatomy of a Driveby Download
From the time a victim accesses a malicious website up to the installation of malware on their system, there is a complex chain of events that underpins a successful driveby download. The infection chain for a real driveby that appeared in our study is shown below, where I obfuscate only the compromised website that launched the attack:
In this particular case, victims that visited a compromised website were funneled through a chain of redirects before finally being exposed to an exploit kit. Depending on the time the compromised site was visited, either Blackhole or a yet unknown exploit kit would attempt to exploit the victim’s browser. If successful, different malware including SpyEye (information stealer), ZeroAccess (information stealer), and Rena (fake anti-virus) supplied by third parties would be installed on the victim’s machine. This chain highlights the multiple actors involved in the exploit-as-a-service market: attackers purchasing installs, exploit kit developers, and miscreants compromising websites and redirecting traffic to exploit kits. Depending on an attacker’s preference, all three roles can be conducted by a single party or outsourced to the underground marketplace.
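The redirect chain described above can be illustrated offline. Below is a minimal sketch that walks a precomputed redirect map from a compromised site to an exploit-kit landing page; all URLs and the map itself are hypothetical placeholders, not real infrastructure or the actual crawling methodology.

```python
# Hypothetical sketch: walking a captured redirect chain from a
# compromised site to the final exploit-kit landing page.

def resolve_chain(start_url, redirects, max_hops=10):
    """Follow a precomputed redirect map, guarding against loops."""
    chain = [start_url]
    seen = {start_url}
    url = start_url
    for _ in range(max_hops):
        nxt = redirects.get(url)
        if nxt is None or nxt in seen:
            break
        chain.append(nxt)
        seen.add(nxt)
        url = nxt
    return chain

# Illustrative redirect map (placeholder names).
redirects = {
    "compromised-site.example/index.html": "redirector-1.example/go",
    "redirector-1.example/go": "redirector-2.example/in.php",
    "redirector-2.example/in.php": "exploit-kit.example/landing",
}

chain = resolve_chain("compromised-site.example/index.html", redirects)
# chain terminates at the exploit-kit landing page after three hops
```

In practice the chain is observed by instrumenting a browser rather than consulting a static map, but the same loop-guarded traversal applies.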
Popular Exploit Kits
Of the 77,000 URLs we received from Google’s Safe Browsing list, over 47% of initial domains tied to driveby downloads terminate at an exploit kit. Of the remaining domains, 49% lead directly to executables without an exploit pack, and 4% could not be classified. The table below provides a detailed breakdown of the kits we identified:
| Rank | Exploit Kit | Initial Domains | Final Domains |
| --- | --- | --- | --- |
The most popular exploit kit is Blackhole, which, anecdotally based on screenshots like the one below, has a success rate of 7–12% at compromising a victim's browser. Incognito follows in popularity, along with a short list of other kits.
Our results show that exploit kits play a vital role in the driveby ecosystem. Surprisingly, only a handful of kits exist, making them one of the weakest links in the exploit-as-a-service marketplace. These types of bottlenecks are far more attractive for disruption compared to taking down the 6,300 unique domains hosting driveby exploits in our dataset (just a fraction of malicious sites in the wild).
Malware Dropped by Kits
We collect the unique binaries installed upon a successful exploit for each of the driveby domains in our dataset (10,308 binaries in total). During the same time period we also acquire a feed of executables found in email spam attachments (2,817 binaries), pay-per-install programs (2,691 binaries from the droppers that install a client’s software), warez and torrents (17,182 binaries and compressed files), and live network traffic (28,300 binaries from Arbor ASERT). We execute all of these binaries in a contained environment that prohibits outgoing network traffic except for manually crafted whitelist policies that allow test connections and guide execution.
Through a combination of automated clustering and manual labeling by analysts, we classify the vast majority of binaries in our dataset, with the most prominent families per infection vector shown below. (Note: torrents and live traffic contained a number of benign binaries, bringing down the total fraction of malicious samples.)
| Rank | Drivebys | Droppers | Attachments | Torrents | Live |
| --- | --- | --- | --- | --- | --- |
| 1 | Emit (12%) | Clickpotato (6%) | Lovegate (44%) | Unknown.Adware.A (0.1%) | TDSS (2%) |
| 2 | Fake WinZip (8%) | Palevo (3%) | MyDoom (6%) | Sefnit (0.07%) | Clickpotato (1%) |
| 3 | ZeroAccess (5%) | NGRBot (2%) | Bagle (1%) | OpenCandy (0.07%) | NGRBot (1%) |
| 4 | SpyEye (4%) | Gigabid (2%) | Sality (0.5%) | Unknown.Adware.B (0.06%) | Toggle Adware (0.5%) |
| 5 | Windows Custodian (4%) | ZeroAccess (2%) | TDSS (0.1%) | ZeroAccess (0.01%) | ZeroAccess (0.3%) |
| 6 | Karagany (4%) | Emit (1%) | (0.03%) | Whitesmoke (0.01%) | Gigabid (0.2%) |
| Total | 32 families | 19 families | 6 families | 6 families | 40 families |
Through passive DNS data collected from a number of ISPs (details available in the paper), we are able to determine which families are installed most frequently by driveby domains. This provides a more meaningful ranking than using unique MD5 sums, which only measures polymorphism. We also compare whether any of the families installed by drivebys appear in our other feeds: (D)roppers, (A)ttachments, (L)ive, and (T)orrents.
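The distinction between counting unique MD5 sums and counting installs can be illustrated with a toy example; the family names and hashes below are made up. A family that churns out many polymorphic binaries but few installs would rank high by hash count yet low by install volume, which is why the passive DNS measurement is the more meaningful ranking.

```python
from collections import Counter

# Illustrative install events as (family, md5) pairs. Counting unique
# hashes measures polymorphism; counting events approximates install volume.
installs = [
    ("fake_av", "md5_a"), ("fake_av", "md5_a"), ("fake_av", "md5_b"),
    ("stealer", "md5_c"), ("stealer", "md5_d"),
    ("adware",  "md5_e"),
]

# Rank by install volume (one count per observed install).
by_volume = Counter(family for family, _ in installs)

# Rank by unique binaries (one count per distinct hash).
families = {f for f, _ in installs}
by_hashes = Counter({f: len({m for g, m in installs if g == f})
                     for f in families})
```

Here `fake_av` leads by install volume (3 installs) even though it ties `stealer` on unique hashes (2 each), showing how the two rankings can disagree.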
| Family | Monetization | Fraction of Installs | Other Feeds |
| --- | --- | --- | --- |
| Windows Custodian | Fake AV | 10.3% | – |
| Cluster A | Browser Hijacking | 5.1% | – |
| Cluster B | Fake AV | 2.2% | – |
| Perfect Keylogger | Information Stealer | 1.9% | D;L |
| Votwup | Denial of Service | 1.6% | – |
| Fake Rena | Fake AV | 1.5% | – |
| Cluster C | Information Stealer | 0.7% | – |
Variants including ZeroAccess and Emit rely on multiple infection vectors, while many of the other prominent variants are distributed solely through drivebys. Given that we identify 32 variants from drivebys and 19 from droppers, compared to only 6 from attachments and torrents, it is clear that the exploit-as-a-service and pay-per-install marketplace dominate the underground economy as a source of installs.
Catch Me If You Can
Using passive DNS data, we measure how long a domain used to host an exploit kit continues to receive traffic. We find malicious domains survive for a median of 2.5 hours before going dark, with 43% of compromised pages that siphon traffic towards exploit kits linking to more than one final domain. As such, attempting to detect sites hosting exploit kits is a losing battle where domain registration far outstrips the pace of detection. Instead, detection should concentrate on identifying compromised sites. Such detection should also occur in-browser in order to circumvent the challenges associated with cloaking or time-of-crawl vs. time-of-use variations.
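Computing a domain's lifetime from passive DNS boils down to the gap between its first and last observed resolution. A minimal sketch, using made-up records rather than the actual ISP feeds:

```python
from statistics import median

# Hypothetical passive-DNS records: (domain, first_seen, last_seen)
# in Unix seconds. Lifetime = window during which the domain resolved.
records = [
    ("kit1.example", 0,   9000),   # 2.5 hours
    ("kit2.example", 100, 4000),   # ~1.1 hours
    ("kit3.example", 0,   86400),  # a one-day outlier
]

lifetimes_h = [(last - first) / 3600 for _, first, last in records]
median_h = median(lifetimes_h)  # median is robust to the long-lived outlier
```

The median (rather than the mean) is the right summary here precisely because a few long-lived domains would otherwise dominate the statistic.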
Social media has emerged as an influential platform for political engagement, allowing users to directly call out political opponents and publicly debate hot-button issues. Tools such as the twindex directly tap into this real-time stream of public sentiment, predicting political outcomes in the same way a traditional Gallup poll would. Yet as social media matures and citizens come to trust sites like Facebook and Twitter as a source of political truth, that trust is misplaced given weaknesses in how popular content bubbles up and how political accounts are ranked and recommended to users. Similarly, the viral nature of content makes mud slinging and misinformation all the more alluring — or, in other words, politics as usual.
The only hurdle between me and gaining a million followers (besides having a private account) is $5,000. At least, that’s the case if I were to purchase followers on Twitter at a going rate of $5-20 per thousand. If popularity is simply a measure of counts rather than information diffusion (e.g. the thousands of Lady Gaga fans willing to retweet her content), then such metrics can be easily gamed due to the ease with which new Twitter accounts can be created. When it comes to social media, there is a fundamental tension between growth and security. Email confirmations, CAPTCHA solutions, and unique IP rules all stymie legitimate users from registering a new account, even if those same tools are necessary to dam the floodgates of spam. Similarly, the cost of false positives where a legitimate user is banned from communicating far outweighs the cost of false negatives (uncaught fraudulent accounts), tipping the balance in favor of miscreants and spammers accessing social media.
When Mitt Romney gained over 100,000 followers in a single day, the media questioned whether these accounts were real users. While political events can certainly trigger an influx of interest, all of the new followers were low in-degree accounts that no one else on Twitter was following. Now, well after the story broke, at least 60,000 of these accounts have been suspended, as seen from Romney’s twittercounter.
Whether the accounts were intentionally purchased by Mitt Romney or an adversary (or, equally likely, the result of spam accounts set to follow popular Twitter users in order to appear more realistic, all consequently performing the same action) is unknown. But accusations of purchasing followers are nothing new, with similar rumors plaguing the Newt Gingrich campaign.
The Mitt Romney story also illustrates the susceptibility of brands in social media to smear attacks. If a political opponent purchases followers for a candidate and then cries wolf, the ‘evidence’ of new followers is plain to see, while the target of the attack can only vehemently deny involvement. Similarly, if a political brand launches a trending topic and it is co-opted by opponents (legitimately or not) in a 4chan-like manner that degenerates into offensive content (hello pedobear), the original brand has no control over their message once it hits social media, even though it’s directly linked to their brand (e.g. on a Facebook page or on a promoted hashtag where the affiliation is clear). It’s the double-edged sword of mass connectivity in social media.
One of the sadder applications of social media involves intentionally manipulating anti-spam tools to silence political dissidents. Both Facebook and Twitter grant users the ability to report offensive content, block messages from accounts, and report users for spam. These metrics in turn can be used for removing spam accounts, but are fragile to abuse. A prominent example of this abuse occurred during a political battle between far-right and far-left Israeli groups on Facebook, where thousands of users from one side would report-bomb [Hebrew] an account, resulting in its temporary expulsion from Facebook.
While nation states have their own (legal) means to censor and control social media, the aforementioned attack is a chilling reminder of the adversarial nature of user-generated input. Taken at face value, user reports of child pornography or spam can be used to shut down the accounts of political adversaries where the only real victim is free speech.
Social media allows millions of users to connect and discuss political concerns, but whether those issues or the accounts participating are real is an entirely different issue. On Facebook, public pages serve as a forum for commenting and discussion, while on Twitter trending topics allow users with no social connections to interact. The organic nature of how discussions are conceived and the fact that anyone can participate make fake accounts a valuable resource in skewing the tone of conversation. Fake stories, co-opting existing stories, and astroturfing are a growing problem in social media. As detailed in a previous post, the discussion surrounding the Russian parliamentary election on Twitter was swarmed with thousands of fake accounts, while topics like #freetibet continue to be attacked by politically-motivated bots. When topics are effectively voted up by users, there is no such thing as a direct democracy in the presence of thousands of fake accounts.
This post is based on research from “Adapting Social Spam Infrastructure for Political Censorship” published in LEET 2012 – a pdf is available under my publications. Any views or opinions discussed herein are my own.
In recent years social networks have emerged as a significant tool for both political discussion and dissent. Salient examples include the use of Twitter for a town hall with the White House. The Arab Spring that swept over the Middle East also embraced Twitter and Facebook as a tool for organization, while Mexicans have adopted social media as a means to communicate about violence at the hands of drug cartels in the absence of official news reports. Yet, the response to the growing importance of social networks in some countries has been chilling, with the United Kingdom threatening to ban users from Facebook and Twitter in response to rioting in London and Egypt blacking out Internet and cell phone coverage during its political upheaval. While nation states can exert their control over Internet access to outright block connections to social media, parties without such capabilities may still desire to control political expression.
Recently, I had a chance to retroactively examine an attack that occurred on Twitter surrounding the Russian parliamentary elections. For a little back story on the attack, upon the announcement of the Russian election results, accusations of fraud quickly followed and protesters organized at Moscow’s Triumfalnaya Square. When discussions of the election results cropped up on Twitter, a wave of bots swarmed the hashtags that legitimate users were using to communicate in an attempt to control the conversation and stifle search results related to the election.
In total, there were 46,846 Twitter accounts discussing the election results. It turns out, 25,860 of these were controlled by attackers in order to send 440,793 misleading or nonsensical tweets, effectively shutting down conversations about the election. While abusing social networks for political motives is nothing new, the attack is noteworthy because (1) it relied on fraudulent accounts purchased from spam-as-a-service marketplaces and (2) it relied on over 10,000 compromised hosts located around the globe. These marketplaces — such as hxxp://buyaccs.com/ — are traditionally used to outfit spam campaigns, freeing spammers from registering accounts in exchange for a small fee. However, this attack shows that malicious parties can easily adapt these services for other forms of attacks, including political censorship and astroturfing.
Below is a preview of our results. For more details, check out the full LEET 2012 paper.
We identify 20 hashtags that correspond with the Russian election, the top 10 of which are shown below.
| Hashtag | Translation | Tweets |
| --- | --- | --- |
| победазанами | Victory will be ours | 10,380 |
We aggregate all of the accounts that participated in the hashtags and then segment them into accounts that are now suspended by Twitter (25,860) and those that appear to be legitimate (20,986). We then aggregate all of the tweets sent by these accounts during the attack. As shown in the following figure, legitimate conversations (black line) appear diurnally over the course of 2 days from December 5th — December 6th. Conversely, the attack (blue line) occurs in two distinct waves, out-producing legitimate users during certain periods.
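The segmentation described above amounts to bucketing tweets per time window for each cohort. A toy sketch, where the account names, timestamps, and suspension set are all illustrative:

```python
from collections import Counter

# Hypothetical tweet stream as (hour_bucket, account_id) pairs, plus the
# set of accounts later suspended by Twitter. Splitting the stream by
# cohort yields the two time series compared in the figure.
suspended = {"bot1", "bot2"}
tweets = [
    (0, "alice"), (0, "bot1"),
    (1, "bot1"),  (1, "bot2"), (1, "bob"),
    (2, "bot2"),  (2, "alice"),
]

attack = Counter(h for h, a in tweets if a in suspended)
legit  = Counter(h for h, a in tweets if a not in suspended)
# attack peaks in hour 1, while legit volume stays flat
```

Plotting `attack` and `legit` per bucket reproduces, in miniature, the two-wave attack pattern against the diurnal legitimate baseline.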
If we restrict our analysis to only tweets with the relevant election hashtags, the impact of the attack is even starker.
One of the most interesting aspects of the attack was the accounts involved. Accounts were registered in four distinct waves, where each wave has a uniquely formatted account profile that can be captured by a regular expression. We call these waves Type-1 through Type-4. Accounts were acquired as far back as seven months preceding the attack. Most of the time the accounts were dormant, though some did come alive at various intervals to tweet politically-oriented content prior to the attack.
Interestingly, all of the accounts were registered with mail.ru email addresses, which allows us to extend our analysis one step further. We take the regular expressions that capture the accounts used in the attack and apply them to all Twitter accounts registered with mail.ru emails in the last year. From this, we identify roughly 975,000 other spam accounts, 80% of which remain dormant with 0 following, 0 followers, and 0 tweets. These accounts were registered in disparate bursts over time and likely all belong to a single spam-as-a-service program. The registration times of these presumed spam accounts are shown below. Legitimate accounts show a steady growth trend, while the anomalous bursts in registrations are attributed to a malicious party registering accounts to later sell.
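The wave-matching approach can be sketched as applying a set of regular expressions to screen names and then checking for dormancy. The patterns and accounts below are illustrative stand-ins, not the actual Type-1 through Type-4 expressions from the paper:

```python
import re

# Hypothetical: each registration wave used a uniquely formatted profile.
# These patterns are made-up examples of what such expressions look like.
WAVE_PATTERNS = [
    re.compile(r"^[a-z]+_[a-z]+\d{2}$"),   # e.g. "ivan_petrov07"
    re.compile(r"^[A-Z][a-z]+\d{4}$"),     # e.g. "Maria1985"
]

def matches_wave(screen_name):
    return any(p.match(screen_name) for p in WAVE_PATTERNS)

def is_dormant(account):
    # 0 following, 0 followers, 0 tweets — the dormancy criterion above.
    return (account["following"] == 0 and account["followers"] == 0
            and account["tweets"] == 0)

accounts = [
    {"name": "ivan_petrov07", "following": 0,  "followers": 0,  "tweets": 0},
    {"name": "Maria1985",     "following": 0,  "followers": 0,  "tweets": 0},
    {"name": "realperson",    "following": 50, "followers": 12, "tweets": 300},
]

flagged = [a for a in accounts if matches_wave(a["name"])]
dormant = [a for a in flagged if is_dormant(a)]
```

Applied at scale to the mail.ru registration set, this kind of filter is what surfaces the ~975,000 presumed spam accounts.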
We examine one final aspect of the attack: the geolocation of IPs used to access accounts tweeting about the election. Legitimate accounts tend to be accessed from either the United States (20%) or Russia (56%). In contrast, the accounts controlled by the attacker were accessed from hosts located around the globe; only 1% of logins originated from Russia. Furthermore, 39% of these IP addresses appear in blacklists, indicating many of the hosts are used simultaneously in other spam-related activities. Combined, this information reveals that the attackers relied on compromised hosts which again may have been purchased from the spam-as-a-service underground.
This post is based on research from IMC 2011 – a pdf is available under my publications. Any views or opinions discussed herein are my own and are based solely on research I conducted prior to working at Twitter.
As Twitter continues to grow in popularity, so does the marketplace for abusing Twitter as a service for spamming. In order to understand this phenomenon, we tracked the behavior of 1.1 million accounts suspended by Twitter for disruptive activities (e.g. spamming, aggressive following) over the course of seven months. In the process, we collected a dataset of 80 million tweets sent by spam accounts in addition to 37.8 million URLs presumed to direct to spam. What follows is an analysis of the abuse of online social networks through the lens of the tools, techniques, and support infrastructure spammers rely upon.
Our Dataset and All Its Caveats
Our dataset was derived from Twitter’s garden hose which provides a sample of all tweets appearing on Twitter. More precisely, we received 150 tweets/second, amounting to 12 million tweets per day in the absence of network outages or errors. Rather than receive generic tweets, we specifically requested tweets that contain URLs, simply because they are more interesting from a spam perspective; they have a clear monetization angle we could manually analyze. In total, we collected 1.8 billion tweets from August 2010 through March 2011, only 80 million of which turned out to be spam. Here was our daily sample size, with breaks indicating an outage in our collection (oops, measurement is hard!):
Due to rate limiting performed by Twitter, our sample rate was strictly decreasing; we were limited to 150 tweets/second, while Twitter continued to grow in volume. As a result, the total fraction of tweets with URLs we received dropped from 90% at the onset of our study down to 60% at its completion:
For further details on our collection methodology, validation, and sampling, check out the paper.
State of Twitter Spam – How Much?
Using our dataset, we counted the number of spam tweets sent by accounts suspended by Twitter each day from August 2010 through March 2011. The results are shown here:
Our calculations are a strict lower-bound as we rely on Twitter to identify spam; something we know is imperfect. Based on manual analysis, we estimated that Twitter caught 37% of spam, which means the actual number of spam tweets per day is likely much higher. Nevertheless, we can discern that at least half a million spam tweets are sent each day. Interestingly, the highest volume of spam preceded the holiday season; even spammers have gift suggestions for you and your family.
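The extrapolation from the observed lower bound works out as a simple back-of-the-envelope calculation using the figures above:

```python
# Scaling the observed spam volume by the estimated catch rate.
observed_per_day = 500_000   # lower bound: spam tweets from suspended accounts
catch_rate = 0.37            # fraction of spam Twitter caught (manual estimate)

estimated_total = observed_per_day / catch_rate
# roughly 1.35 million spam tweets per day, if the catch rate holds
```

This assumes the 37% catch rate is uniform across campaigns, which is itself an estimate; the true figure could be higher or lower.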
One noteworthy observation is that while the total volume of spam appears to be flat, our sample size was decreasing. This would indicate that spam on Twitter was actually increasing over time.
Spam Accounts – How Many, How Active, How Long?
To be continued…
After a bit of juggling and finishing up research at Berkeley for the Spring (fingers crossed for IMC 2011), I landed an internship at Twitter over the Summer. My goal is to examine spam that is targeting their systems and to see whether any of the research ideas coming out of our Berkeley group are transferable. Plus, I get sweet sweet access to data.
Also, to my surprise, I won a fellowship from Facebook to continue performing security research into social networks (starting next Fall). The meat of my proposal includes understanding malicious application usage, account abuse, and characterizing the monetization of social network spam. I’m delighted my proposal got some traction; now to just get the work done next year.
This post is based on research from Oakland Security & Privacy 2011 – a pdf is available under my publications
Recently we presented our research on Monarch, a real-time system that crawls URLs as they are submitted to web services and determines whether the URLs direct to spam. The system is geared towards environments such as email or social networks where messages are near-interactive and accessed within seconds after delivery.
The two major previous approaches for detecting and filtering spam include domain and IP blacklists for email and account-based heuristics in social networks which attempt to detect abusive user behavior. However, these approaches fail to protect web services. In particular, blacklists are too inaccurate and slow in listing new spam URLs. Similarly, account-based heuristics incur delays between a fraudulent account’s creation and its subsequent detection due to the need to build a history of (mis-)activity. Furthermore, these heuristics for automation fail to detect compromised accounts that exhibit a mixture of spam and benign behaviors. Given these limitations, we seek to design a system that operates in real-time to limit the period users are exposed to spam content; provides fine-grained decisions that allow services to filter individual messages posted by users; but functions in a manner generalizable to many forms of web services.
To do this, we develop a cloud-based system for crawling URLs in real-time that classifies whether a URL’s content, underlying hosting infrastructure, or page behavior exhibits spam properties. This decision can then be used by web services to either filter spam or as a signal for further analysis.
When we developed Monarch, we had six principles that influenced our architecture and approach:
- Real-time results. Social networks and email operate as near-interactive, real-time services. Thus, significant delays in filtering decisions degrade the protected service.
- Readily scalable to required throughput. We aim to provide viable classification for services such as Twitter that receive over 15 million URLs a day.
- Accurate decisions. We want the capability to emphasize low false positives in order to minimize mistaking non-spam URLs as spam.
- Fine-grained classification. The system should be capable of distinguishing between spam hosted on public services alongside non-spam content (i.e., classification of individual URLs rather than coarser-grained domain names).
- Tolerant to feature evolution. The arms-race nature of spam leads to ongoing innovation on the part of spammers’ efforts to evade detection. Thus, we require the ability to easily retrain to adapt to new features.
- Context-independent classification. If possible, decisions should not hinge on features specific to a particular service, allowing use of the classifier for different types of web services.
The architecture for Monarch consists of four components. First, messages from web services (tweets and emails in our prototype) are inserted into a dispatch Kestrel queue in a phase called URL Aggregation. These are then dequeued for Feature Collection, where a cluster of EC2 machines crawls each URL to fetch the HTML content, resolve all redirects, monitor all IP addresses contacted, and perform a number of host lookups and geolocation resolution. We optimize feature collection to include caching and whitelisting of popular benign content.

These features are then stored in a database, which is later used during Feature Extraction to transform the data into meaningful binary vectors. These are then supplied to Classification. We obtain a labeled dataset from email spam traps as well as blacklists (our only means of obtaining a ground truth set of spam on Twitter). Using a distributed logistic regression with L1-regularization, which we detail in the paper, we are able to reduce from 50 million features down to 100,000 of the most meaningful features and build a model of spam in 45 minutes for 1 million samples.

During live operation, we simply use this model to classify the features of a URL. Overall, it takes roughly 6 seconds from insertion into the dispatch queue to obtain a final decision for whether a URL is spam, with network delay accounting for the majority of overhead.
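The Feature Extraction step, transforming collected features into sparse binary vectors, can be sketched with deterministic feature hashing. The dimension and feature names below are illustrative assumptions, not Monarch's actual encoding:

```python
import hashlib

# Sketch: map raw crawl features (HTML tokens, redirect domains,
# geolocation, DNS properties, ...) into a sparse binary vector by
# hashing each feature name to an index. The dimension is arbitrary here.
DIM = 2 ** 20

def feature_index(name):
    # md5 gives a deterministic hash, unlike Python's salted str hash.
    digest = hashlib.md5(name.encode()).hexdigest()
    return int(digest, 16) % DIM

def to_sparse_binary(features):
    """Return the sorted active indices of a sparse binary vector."""
    return sorted({feature_index(f) for f in features})

# Hypothetical features collected for one crawled URL.
url_features = [
    "html_token=viagra",
    "redirect_domain=redirector.example",
    "geo=US",
    "dns_ttl_bucket=low",
]
vector = to_sparse_binary(url_features)
```

A sparse representation like this is what makes training over 50 million candidate features tractable; the L1 penalty then zeroes out all but the most predictive dimensions.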
- Training on both email and tweets, we are able to generate a unified model that correctly classifies 91% of samples, with 0.87% false positives and 17.6% false negatives.
- Throughput of the system is 638,000 URLs/day when running on 20 EC2 instances.
- Decision time for a single URL is ~6 seconds
One of the unexpected results is that Twitter spam appears to be independent from email spam, with different campaigns occurring in both services simultaneously. This seems to indicate the actors targeting email haven’t modified their infrastructure to attack Twitter yet, though this may change over time.
There remain a number of challenges in running a system like Monarch that are discussed in the paper as well as pointed out by other researchers.
- Feature Evasion: Spammers can attempt to game the machine learning system. Given the real-time feedback for whether a URL is spam, they can attempt to modify their content or hosting to avoid detection.
- Time-based Evasion: URLs are crawled immediately upon their submission to the dispatch queue. This creates a time-of-click vs. time-of-use challenge where spammers can present benign content upon sending an email/tweet, but then change the content to spam after the URL is cleared.
- Crawler Evasion: Given that we operate on a limited IP space and use a single browser type, attackers can fingerprint both our hosting and browser client. They can then redirect our crawlers to benign content, while sending legitimate visitors to hostile content.
- Side effects: Not all websites adhere to the standard that GET requests should have no side effects. In particular, subscribe and unsubscribe URLs as well as advertisements may have side effects introduced by our crawler.
Other interesting questions also remain to be answered. In particular, it would be useful to understand how accuracy performs over time on a per campaign basis. Some campaigns may last a long time, increasing our overall accuracy, while quickly churning campaigns that introduce new features may result in lower accuracy. Similarly, it would be useful to understand whether the features we identify appear in all campaigns (and are long lasting), or whether we are able to quickly adapt to the introduction of new features and new campaigns.