Computational Propaganda
By Robert Frederick
Manufacturing consensus online is now possible with "bots"—highly automated accounts.
July 26, 2017
A decade ago, whole networks of human impostors operated on social media, and some may still. But compared with impostors, highly automated accounts, often referred to as "bots," are far more efficient at swaying public opinion, spreading fake news, and attacking people (especially women), often to the point that targets simply give up defending themselves and go offline.
Philip Howard, principal investigator of The Computational Propaganda Project at Oxford University, is researching the phenomenon and spoke about it this past June at the European Conference of Science Journalists in Copenhagen. He said his team's goal is "to produce large amounts of evidence gathered systematically so that [they] can make some safe, if not conservative, generalizations about where public life is going."
Public life is moving increasingly online. Yet even though publicly funded researchers such as Howard have positively identified bots that attack people (and thereby violate the rules of many social-media platforms), the platforms so far have shown no interest in collaborating with Howard to rid their networks of this kind of bot activity.
In this podcast, I spoke with Howard after his talk as well as with journalist Lucas Laursen, who offered some historical perspective, having written an article for Nature about a Facebook impostor group nearly a decade ago.
Robert Frederick attended the 2017 European Conference of Science Journalists, where Howard spoke, with support from a travel grant provided by the conference itself; the grant carried no obligation to cover the conference.