Researchers Develop A Troll-Hunting Algorithm

Automatic detection of the internet's "future banned users"

This is what an Internet troll looks like. Eirik Solheim/Flickr CC BY-SA 2.0

Cornell University researchers are coming for you, Internet trolls.

After an 18-month study of banned commenters on cnn.com, breitbart.com, and ign.com, three researchers claim they can identify Internet trolls with great accuracy. They say their system can spot an inflammatory commenter in fewer than 10 posts.

Google helped fund the study, which compared anti-social users, or “Future Banned Users” (FBUs), to more cooperative commenters, or “Never Banned Users” (NBUs). Nearly all of the 10,000 FBUs studied wrote at a lower perceived standard of literacy and clarity than average, and that standard only declined until they were banned. Troublemaking commenters were also more likely to concentrate their efforts in fewer comment threads relative to how much they posted. In other words, they’re looking for a fight.

Not all trolls are created equal, however. Instigators on CNN were more likely to start new posts or sub-threads, while at Breitbart and IGN they were more likely to pile onto existing threads. In general, FBUs are incendiary and persistent commenters with poor grammar, and they tend to get into heated arguments right before they are banned.
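The signals the article describes, declining text quality and posts concentrated in few threads, lend themselves to simple feature-based screening. The sketch below is purely illustrative: the thresholds, the character-level readability proxy, and the `looks_antisocial` rule are assumptions for demonstration, not the researchers' actual model.

```python
from dataclasses import dataclass

@dataclass
class Post:
    thread_id: str
    text: str

def thread_concentration(posts):
    """Fraction of distinct threads among a user's posts.
    Lower values mean many posts piled into few threads, a pattern
    the study associates with future-banned users."""
    if not posts:
        return 1.0
    return len({p.thread_id for p in posts}) / len(posts)

def readability_proxy(posts):
    """Crude stand-in for the study's text-quality signal:
    share of alphabetic/whitespace characters in the user's text."""
    text = " ".join(p.text for p in posts)
    if not text:
        return 0.0
    return sum(c.isalpha() or c.isspace() for c in text) / len(text)

def looks_antisocial(posts, min_posts=10, conc_threshold=0.5, read_threshold=0.8):
    """Hypothetical rule of thumb: flag a user whose first `min_posts`
    posts are both thread-concentrated and low-quality.
    Thresholds here are invented for illustration, not from the paper."""
    window = posts[:min_posts]
    if len(window) < min_posts:
        return False  # not enough evidence yet
    return (thread_concentration(window) < conc_threshold
            and readability_proxy(window) < read_threshold)
```

A user who hammers two threads with sloppy, all-punctuation posts would trip both checks, while a commenter spreading clean prose across many threads would not. A real system would learn these weights from labeled data rather than hand-set them.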

Host communities aren’t just victims in these situations: their policies can sometimes foster rude comments. Excessive censoring or a post-deleting spree, for example, will not win them points with commenters.

These findings raise the possibility of automatically identifying, and even auto-banning, comment tyrants. The researchers are hesitant, however, since about one in five flagged users was misclassified.

According to the paper, “Taking extreme action against small infractions can exacerbate antisocial behavior (e.g., unfairness can cause users to write worse).”