BotOrNot tool created in Indiana

Researchers in Indiana check more than 1,000 features of an account’s activity to spot fakes

So-called Twitter “bots” – automated accounts which either entertain, annoy or deliver malware to users of the social network – can now be unmasked through BotOrNot, a tool created by researchers at Indiana University. Photograph: Chris Ison/PA Wire

On more than 430,000 occasions when Twitter users have spelled the phrase “sneak peek” incorrectly, an account by the name of Stealth Mountain has pointed out their error.

One of the more courteous examples of a Twitter “bot”, it is one of millions of automated accounts which either entertain, annoy or deliver malware to users of the social network. However, such accounts can now be unmasked through BotOrNot, a tool created by researchers at Indiana University.

The tool "illustrates the potential and challenges of distinguishing social bots from humans", says Prof Filippo Menczer, who is heading up Truthy, the project responsible for BotOrNot at the University's Centre for Complex Networks and Systems Research.

Bots represent one of the main online tools “used to spread misinformation” such as “rumours, political astroturf, malware, smear campaigns, and the like”, Prof Menczer said. BotOrNot analyses more than 1,000 features of a Twitter account’s activity to spot fakes, allowing visitors to the tool’s website to check whether or not an account is run by an actual human.
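
BotOrNot’s precise algorithm has not been made public, but the general technique – training a classifier on behavioural features of known bot and human accounts – can be sketched roughly as follows. The feature names, figures and training examples below are hypothetical and purely illustrative; they are not drawn from the Indiana researchers’ work.

```python
# Illustrative sketch only: a toy feature-based classifier in the spirit of
# bot-detection tools, NOT the actual (unpublished) BotOrNot algorithm.
# All feature names and numbers below are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features: tweets per day, share of tweets that are
# retweets, followers-to-following ratio, mean seconds between tweets.
feature_names = ["tweets_per_day", "retweet_ratio",
                 "follower_following_ratio", "mean_gap_seconds"]

# Toy labelled examples (1 = bot, 0 = human); a real system would use
# thousands of features and a large labelled dataset.
X_train = [
    [480.0, 0.95, 0.01, 180.0],    # high-volume, repetitive account
    [6.0,   0.20, 1.30, 14000.0],  # ordinary human-like account
    [250.0, 0.90, 0.05, 300.0],
    [3.0,   0.10, 0.80, 28000.0],
]
y_train = [1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account: estimated probability that it is a bot.
unknown_account = [[120.0, 0.85, 0.02, 600.0]]
print(dict(zip(["human", "bot"], clf.predict_proba(unknown_account)[0])))
```

In practice, a tool checking more than 1,000 features would combine many such behavioural, network and linguistic signals rather than the handful shown here.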

A Twitter representative told The Irish Times “we don’t comment” on external tools relating to the site, and Prof Menczer confirmed that “Twitter did not contact us” regarding BotOrNot’s “secret sauce” algorithm for spotting fakes. Some estimates suggest one in every 10 Twitter accounts is fake.

“I have no doubt Twitter has a bunch of people constantly working on these problems but don’t advertise [this],” says Prof Menczer, “because their goal is to detect them and secretly delete them without saying Twitter is being used this way.”

While Twitter has rules governing spam and abuse on the platform, and automated and manual controls to detect, flag, and suspend accounts created solely for spam purposes, Prof Menczer said his team has “found plenty of bots that have been undetected by Twitter and that violate terms of service”.