Anyone Can Become a Troll

Analysis of and controlled experiments with online discussion sections show circumstances that can cause civil commentators to engage in aggressive behavior.

Topics: Communications, Computer, Sociology, Human Ecology


From Issue: May-June 2017, Volume 105, Number 3, Page 152

DOI: 10.1511/2017.105.3.152

“Fail at life. Go bomb yourself.”

Comments such as this one, found attached to a CNN article about how women perceive themselves, are prevalent today across the internet, whether the location is Facebook, Reddit, or a news website. Such commenting behavior can range from profanity and name-calling to personal attacks, sexual harassment, or hate speech.

 

A recent Pew Internet Survey found that 4 out of 10 people online have been harassed, with far more having witnessed such behavior. Trolling has become so rampant that several websites have resorted to removing their comment sections entirely.

Many believe that trolling is done solely by a small, vocal minority of sociopathic individuals. This belief has been reinforced not only in the media, but also in past research on trolling, which focused on interviewing these individuals. Some studies even showed that trolls have predisposing personal and biological traits, such as sadism and a propensity to seek excessive stimulation.

 
 

But what if all trolls aren’t born trolls? What if some of them are ordinary people like you and me? In our research, we found that people can be influenced to troll others under the right circumstances in an online community. By analyzing 16 million comments made on CNN.com and conducting an online controlled experiment, we identified two key factors that can lead ordinary people to troll.

 

What Makes a Troll?

We recruited 667 participants through an online crowdsourcing platform and asked them to first take a quiz, then read an article and engage in discussion. Every participant saw the same article, but some were given a discussion that had started with comments by trolls, whereas others saw neutral comments instead. The quiz given beforehand was also varied to be either easy or difficult.
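
In effect, each participant landed in one cell of a two-by-two design: quiz difficulty crossed with the tone of the seeded comments. The sketch below shows how such a balanced random assignment might look in Python; the condition labels and function are illustrative assumptions, not the study's actual recruitment code.

```python
import itertools
import random

# Hypothetical sketch of the 2x2 design: quiz difficulty (easy or
# difficult) crossed with the tone of the seeded comments (neutral or
# troll). Labels and structure are illustrative, not the study's code.
CELLS = list(itertools.product(["easy", "difficult"], ["neutral", "troll"]))

def assign_conditions(n_participants, seed=0):
    """Assign participants to the four cells in near-equal numbers."""
    rng = random.Random(seed)
    repeats = -(-n_participants // len(CELLS))  # ceiling division
    cells = (CELLS * repeats)[:n_participants]
    rng.shuffle(cells)
    return cells

conditions = assign_conditions(667)  # e.g., ("difficult", "troll")
```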

 

Here, trolling was defined using standard community guidelines; it consists of such activity as name-calling, profanity, racism, or harassment. This behavior is in contrast to cyberbullying, defined as behavior that is repeated, intended to harm, and targeted at specific individuals. Thus this definition of trolling encompasses a broader set of behaviors that may be one-off, unintentional, or untargeted.

 


Our analysis of comments on the site CNN.com helped to verify and extend our experimental observations. Disqus, the commenting platform that hosted these discussions, provided us with a complete trace of user activity from December 2012 to August 2013: 865,248 users (20,197 of them banned), 16,470 discussions, and 16,500,603 posts, of which 571,662 (3.5 percent) were flagged and 3,801,774 (23 percent) were deleted. We filtered out banned users, as well as any users who had all of their posts deleted.

Notably, trolling was not confined to a few repeat offenders. Of all flagged posts, 26 percent were made by users with no prior record of having been flagged in previous discussions. And of all users with flagged posts who authored at least 10 posts, 40 percent had less than 3.5 percent of their posts flagged (the baseline probability of a random post on CNN.com being flagged).
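
These proportions are straightforward to recompute from post-level data. Below is a minimal sketch in Python, assuming a hypothetical cnn_posts.csv export with one row per post and user_id and flagged columns; the file name and schema are assumptions for illustration, not the actual Disqus data format.

```python
import pandas as pd

# Hypothetical post-level table: one row per post, with `user_id` and
# `flagged` (1 if moderators flagged the post, else 0).
posts = pd.read_csv("cnn_posts.csv")

baseline = posts["flagged"].mean()  # about 0.035 on the CNN.com data

per_user = posts.groupby("user_id")["flagged"].agg(["mean", "size"])
active = per_user[per_user["size"] >= 10]    # users who authored 10+ posts
flagged_users = active[active["mean"] > 0]   # with at least one flagged post

# Share of flagged users whose personal flag rate is below the baseline.
share_mostly_civil = (flagged_users["mean"] < baseline).mean()
```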

 

The first factor that seems to influence trolling is a person’s mood. In our experiment, people put into negative moods were much more likely to start trolling. We also discovered that trolling ebbs and flows with the time of day and day of the week, in sync with natural human mood patterns. Trolling is most frequent late at night, and least frequent in the morning. Trolling also peaks on Monday, at the beginning of the work week.
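
Under the same hypothetical schema, with a timestamp column added, these temporal patterns can be read off with a simple group-by; the sketch below is illustrative, not the paper's analysis code.

```python
import pandas as pd

# Hypothetical schema as before, plus a `timestamp` column per post.
posts = pd.read_csv("cnn_posts.csv", parse_dates=["timestamp"])

# Mean flag rate by hour of day and by day of week.
by_hour = posts.groupby(posts["timestamp"].dt.hour)["flagged"].mean()
by_weekday = posts.groupby(posts["timestamp"].dt.dayofweek)["flagged"].mean()

print(by_hour.idxmax())     # expected to fall late at night
print(by_weekday.idxmax())  # 0 = Monday, the weekly peak
```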

Moreover, we discovered that a negative mood can persist beyond the events that brought about those feelings. Suppose that a person participates in a discussion in which other people wrote troll comments. If that person goes on to participate in an unrelated discussion, they are more likely to troll in that discussion too.

The second factor is the context of a discussion. If a discussion begins with a “troll comment,” then it is twice as likely to be trolled by other participants later on, compared with a discussion that does not start with a troll comment. In fact, these troll comments can add up. The greater the number of troll comments in a discussion, the more likely it is that future participants will also troll the discussion. Altogether, these results show how the initial comments in a discussion set a strong, lasting precedent for later trolling.

 

We wondered whether we could use these two factors to predict when trolling would occur. Using machine learning algorithms, we were able to forecast whether a person was going to troll with about 80 percent accuracy.
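
A predictor along these lines could be set up as follows. This sketch is a stand-in, not the model from our paper: the schema, the mood proxy (whether the author's previous post was flagged), and the context feature (the share of earlier posts in the discussion that were flagged) are all simplifying assumptions.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical post-level table, ordered in time within each discussion.
posts = pd.read_csv("cnn_posts.csv", parse_dates=["timestamp"])
posts = posts.sort_values(["discussion_id", "timestamp"])

# Mood proxy: was the author's previous post (anywhere) flagged?
posts["prev_flagged"] = posts.groupby("user_id")["flagged"].shift(1).fillna(0)

# Context: share of earlier posts in the same discussion that were flagged.
grp = posts.groupby("discussion_id")["flagged"]
earlier_flags = grp.cumsum() - posts["flagged"]
earlier_posts = grp.cumcount()
posts["context_flag_share"] = earlier_flags / earlier_posts.replace(0, 1)

posts["hour"] = posts["timestamp"].dt.hour

X = posts[["prev_flagged", "context_flag_share", "hour"]]
y = posts["flagged"]
print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
```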

 

Interestingly, mood and discussion context together are a much stronger indicator of trolling than is identifying specific individuals as trolls. In other words, trolling is caused more by the person’s environment than by any inherent trait.

 

Because trolling is situational, and ordinary people can be influenced to troll, such behavior can end up spreading from person to person. A single troll comment in a discussion—perhaps written by a person who woke up on the wrong side of the bed—can lead to worse moods among other participants, resulting in even more troll comments being made elsewhere. As this negative behavior continues to propagate, trolling can end up becoming the norm in communities if left unchecked.

 
 

Why did negative context and negative mood increase the rate of trolling? Prior research explaining the mechanism of contagion has found that participants may have initial negative reactions to reading an article, but are unlikely to bluntly externalize their reactions because of self-control or environmental cues. A negative context provides evidence that others had similar reactions, making it more acceptable to also express them. A negative mood further accentuates any perceived negativity the participant experiences from reading the article and reduces self-inhibition, making participants more likely to act out.

Fighting Back

The persistence of the idea that trolling is innate may be explained by the fundamental attribution error: People tend to attribute others' behavior to internal characteristics rather than to external factors, interpreting snarky remarks, for example, as originating from a user's general mean-spiritedness (the poster's disposition) rather than from a bad day (the situation that may have led to such behavior). This line of reasoning may lead communities to incorrectly conclude that trolling is caused by people who are unquestionably trolls, and that trolling can be eradicated by banning these users. However, such an approach does little to curb situational trolling, to which many ordinary users may be susceptible. There are several ways this research can help us create better online spaces for public discussion.

By understanding what leads to trolling, we can now better predict when trolling is likely to happen. This prediction lets us identify potentially contentious discussions ahead of time and preemptively alert moderators, who can then intervene in these aggressive situations.

 

Machine learning algorithms can also sort through millions of posts more quickly than any human. By training computers to spot trolling behavior, we can identify and filter undesirable content with much greater speed.
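
One common approach is a bag-of-words classifier trained on moderator-labeled comments. The sketch below is illustrative only; the handful of toy examples stand in for the large labeled corpus a real system would need.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = violates community guidelines, 0 = civil.
train_texts = [
    "thanks for the thoughtful article",
    "interesting point, I had not considered that",
    "fail at life, go bomb yourself",
    "you are an idiot and so is everyone here",
]
train_labels = [0, 0, 1, 1]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

# Route likely violations to a human moderation queue.
incoming = ["great read", "go bomb yourself"]
queue = [c for c in incoming if classifier.predict([c])[0] == 1]
```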

Social interventions can also reduce trolling. Mood can be inferred through recent posting behavior (such as if a user just participated in a heated debate) or other behavioral traces such as keystroke movements. Then, measures can be taken to selectively enforce civil discourse—for example, limiting the rate at which users can post may discourage users from posting in the heat of the moment. If we also allow people to retract recently posted comments, then we may be able to minimize regret from hasty posting. Altering the context of a discussion, by prioritizing constructive comments, can increase the perception of civility. Even simply pinning a post about a community’s rules to the top of discussion pages helps, as a recent experiment conducted on Reddit showed.
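
The posting limit, for instance, could be implemented as a sliding-window rate limiter, as in the sketch below; the thresholds are illustrative assumptions, not values from the research. A retraction window could work similarly, by permitting deletion of any comment posted within the last few minutes.

```python
import time
from collections import defaultdict, deque

# Sliding-window rate limiter: each user may post at most `max_posts`
# comments per `window` seconds. Thresholds here are illustrative.
class PostRateLimiter:
    def __init__(self, max_posts=3, window=60.0):
        self.max_posts = max_posts
        self.window = window
        self._history = defaultdict(deque)  # user_id -> recent post times

    def allow(self, user_id, now=None):
        """Return True if the user may post now, recording the attempt."""
        now = time.monotonic() if now is None else now
        history = self._history[user_id]
        while history and now - history[0] > self.window:
            history.popleft()  # drop timestamps outside the window
        if len(history) >= self.max_posts:
            return False  # in the heat of the moment: ask the user to wait
        history.append(now)
        return True
```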

 

Nonetheless, much more work is still needed to address trolling. Understanding the role of organized trolling, for example, could help limit some types of undesirable behavior.

 

Trolling also can vary in severity, ranging from swearing to targeted bullying, and each level of severity necessitates a different response.

 

It’s also important to differentiate between the impact of a troll comment and the author’s intent: Did the troll mean to hurt others, or was he or she just trying to express a different viewpoint? This distinction can help separate undesirable individuals from those who just need help communicating their ideas more effectively.

 

When online discussions break down, it’s not just sociopaths who are to blame. We are also at fault. Many “trolls” are just people like ourselves who are having a bad day. Understanding that we are all responsible for both the inspiring and the depressing conversations we have on the internet is key to having more productive online discussions.

Bibliography

  • Buckels, E. E., P. D. Trapnell, and D. L. Paulhus. 2014. Trolls just want to have fun. Personality and Individual Differences 67:97–102.
  • Cheng, J., M. Bernstein, C. Danescu-Niculescu-Mizil, and J. Leskovec. 2017. Anyone can become a troll: Causes of trolling behavior in online discussions. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '17), pp. 1217–1230. New York, NY: ACM.
  • Golder, S. A., and M. W. Macy. 2011. Diurnal and seasonal mood vary with work, sleep, and day length across diverse cultures. Science 333:1878–1881.
  • Hardaker, C. 2010. Trolling in asynchronous computer-mediated communication: From user discussions to academic definitions. Journal of Politeness Research 6:215–242.
  • Matias, J. N. 2016. Posting rules in online discussions prevents problems and increases participation. https://civilservant.io/moderation_experiment_r_science_rule_posting.html
 

This article has been adapted and expanded from a version on The Conversation.

 
