How malicious Twitter bots evolve to evade detection
Weblog: Fake news bots mimic human usage patterns and leech onto popular accounts
Social media bots created with the express purpose of spreading fake news and disinformation can have a knock-on effect on elections through their influence on people’s opinions. What is more concerning is that while platforms like Twitter work to eradicate these bots, the bots are evolving fast to escape detection.
Researchers compared the activity of bots involved in political discussion on Twitter during the 2016 US presidential election and the 2018 midterms. They found that bots have changed tactics since 2016, mimicking human usage patterns to evade detection: humans are more likely to reply to content than to retweet it at high volume, and the bots have adapted accordingly. The researchers also observed a co-ordinated multi-bot strategy, “possibly to create an illusion of a consensus”.
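To make the behavioral signal concrete, here is a minimal sketch of the kind of reply-versus-retweet heuristic the article alludes to. This is purely illustrative and not the researchers' actual method; the function name, input format, and thresholds are all assumptions invented for the example.

```python
from collections import Counter

def flag_botlike(tweets, min_tweets=50, max_reply_ratio=0.1, min_retweet_ratio=0.8):
    """Flag accounts whose activity mix looks bot-like.

    Illustrative heuristic only. `tweets` is a list of (account, action)
    pairs, where action is "reply", "retweet", or "original". The
    thresholds are made up for this sketch, not taken from the study.
    """
    counts = {}
    for account, action in tweets:
        counts.setdefault(account, Counter())[action] += 1

    flagged = []
    for account, c in counts.items():
        total = sum(c.values())
        if total < min_tweets:
            continue  # too little activity to judge either way
        reply_ratio = c["reply"] / total
        retweet_ratio = c["retweet"] / total
        # Humans reply relatively often; the older bot pattern was
        # high-volume retweeting with almost no replies.
        if reply_ratio <= max_reply_ratio and retweet_ratio >= min_retweet_ratio:
            flagged.append(account)
    return flagged
```

A heuristic this simple is exactly what the newer bots evade by replying more often, which is why the study's point about mimicry matters.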
Drilling down into the activities of these accounts, the researchers even found what appeared to be a form of human–bot parasitism: bots are good at detecting popular accounts and target them by retweeting their content, often without the human user being aware of it.
“Understanding how human users deal with these automated accounts and manipulation attempts is of paramount importance. On the other hand, detecting and keeping the pace of increasingly sophisticated malicious accounts is needed to build and adapt effective countermeasures,” said researchers Luca Luceri, Ashok Deb, Silvia Giordano and Emilio Ferrara.