Microsoft creates bot that generates fake news comments

Web Log: DeepCom has potential to dupe people into thinking comments are human-generated

Microsoft has created a bot that can generate fake news comments, that is to say, artificially generated comments based on real news stories. The bot, known as DeepCom, is actually made up of two neural networks: the first is trained to read and comprehend online news stories, distilling them into a few key points, while the second network takes those points and uses them to write a comment.
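To make the two-stage idea concrete, here is a minimal, hypothetical sketch of that pipeline shape in Python. It is not DeepCom's actual code: the functions `reading_network` and `generation_network` are invented stand-ins, with a word-frequency heuristic and a text template playing the roles that trained neural networks play in the real system.

```python
from collections import Counter
import re

# Toy stand-in for the first network: "read" an article and pull out a few
# salient points. DeepCom trains a neural reading network for this; a crude
# word-frequency heuristic is used here purely to illustrate the pipeline.
def reading_network(article: str, num_points: int = 3) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    words = re.findall(r"[a-z']+", article.lower())
    freq = Counter(w for w in words if len(w) > 4)  # crude salience signal

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    return sorted(sentences, key=score, reverse=True)[:num_points]

# Toy stand-in for the second network: turn the extracted points into a
# comment. DeepCom uses a trained sequence generator; this template only
# shows where that component sits in the pipeline.
def generation_network(points: list[str]) -> str:
    gist = " ".join(p.rstrip(".!?") for p in points)
    return f"Interesting piece. The part about '{gist[:120]}' stood out to me."

if __name__ == "__main__":
    article = (
        "Microsoft researchers have released DeepCom, a system that reads "
        "online news stories and writes comments about them. The system "
        "pairs a reading network with a generation network. Critics warn "
        "that automatically generated comments could be used for political "
        "manipulation at scale."
    )
    points = reading_network(article)
    print(generation_network(points))
```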

When the Microsoft researchers first put their paper online, it included the enthusiastic suggestion that these comments could pass as human-generated and so populate the comment sections of online publications, encouraging readers to engage in discussion below the line. The problem with this, as the researchers came to realise, is that it assumes it is okay to dupe people into thinking bot content is real.

Political manipulation

In the second version of their paper, the researchers address the ethical issues of using such a bot: “In particular, we note that people expect comments on news to be made by people. Thus, there is a risk that people and organisations could use these techniques at scale to feign comments coming from people for purposes of political manipulation or persuasion.”

The code for DeepCom is now available on GitHub, so although Microsoft acknowledges it would be unethical to use the bot to deceive people, there is nothing stopping those with the technical know-how from doing so.