Bots on social media threaten democracy. But we are not helpless | Samuel Woolley and Marina Gorbis


Can social bots – pieces of software that perform automated tasks – influence humans on social media platforms? That’s a question Congressional investigators have been asking social media companies since fears emerged that such bots were deployed in 2016 to influence the presidential election.

Half a decade ago we were among a handful of researchers who could see the power of relatively simple pieces of software to sway human behavior. Back in 2012, the Institute for the Future, where we work, ran an experimental contest to see how such bots might be used to influence people on Twitter.

The winning bot was a “business school graduate” with a “strong interest in post-modern art theory,” which racked up 14 followers and 15 retweets or replies from humans. To us, this confirmed that bots can generate followers and conversations. In other words, they can influence social media users.
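The contest bots were far simpler than today’s: essentially canned personas replying to keyword triggers. A minimal sketch of that approach might look like the following. The persona name, bio, and trigger phrases here are all hypothetical illustrations, and a real bot would post through a platform API rather than return strings.

```python
import random

# Hypothetical persona, loosely in the spirit of the 2012 contest winner.
PERSONA = {
    "name": "MBA_ArtTheoryFan",  # invented handle, not the actual contest bot
    "bio": "Business school graduate with a strong interest in post-modern art theory",
}

# Keyword-triggered canned replies: crude, but enough to start conversations.
TRIGGERS = {
    "art": [
        "Have you read any post-modern art theory lately?",
        "Curation is just market segmentation with better lighting.",
    ],
    "business": [
        "Business school taught me that attention is the scarcest asset.",
        "Every brand is a narrative; every narrative is a pitch.",
    ],
}

def compose_reply(tweet_text: str):
    """Return a canned reply if the tweet matches a trigger keyword, else None."""
    lowered = tweet_text.lower()
    for keyword, replies in TRIGGERS.items():
        if keyword in lowered:
            return random.choice(replies)
    return None
```

Even this level of automation, pointed at the right conversations, was enough to attract followers and replies from real users.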

We saw their power as potential tools for social good – to warn people of earthquakes or to connect peace activists. But we also saw that they can be used for social ill – to spread falsehoods or skew online polls.

When we published papers and findings from our bot experiments, they were covered in the popular press. So why didn’t the alarm spread to the tech, policy, and social activist communities before automated social media manipulation became front-page news in 2017?

Since 2012, thanks to investments in online marketing, bots have become far more sophisticated than the models in our experiment. Those who build bots now spend time and effort generating believable personas that often have a powerful presence on multiple sites and can influence thousands of people instead of just a few.

Innovations in natural language processing, increases in computational power, and cheaper, more readily available data allow social bots to be more believable as real people and more effective in altering the flow of information.

Over the last five years, this type of bot usage has been mapped on to political communications. Research from several universities, including Oxford and the University of Southern California, shows that bots can be used to make politicians and political ideas look more popular than they are or to massively scale up attacks upon the opposition.

It appears that in 2016, they were deliberately unleashed on social media to do just that – sway voter opinion by spreading fake news and deceiving trending algorithms.

And political manipulation over social media has very real implications for the 2018 US midterm elections. Recent research suggests that those initiating digital propaganda campaigns are beginning to focus their attention on specific subsections of the US population and on constituencies in swing states.

The more focused such attacks become, the more likely they are to have a significant effect on electoral outcomes. Furthermore, the unrealized promises of “psychographic” targeting, marketed by groups like Cambridge Analytica in 2016, may be achieved in 2018 with technological advancements.

Social media platforms may be able to track and report on political advertisements from foreign entities, but will they divulge information on pervasive and personalized advertising from their domestic political clients?

This is a pressing question, because social bots are likely to continue to grow in sophistication. At a recent roundtable on the Future of AI and Democracy, several technology experts forecast that bots will become even more persuasive, more emotional, and more personalized.

They will be able not just to spread information, but to truly converse with and persuade their human interlocutors, pushing their emotional buttons ever more effectively.

Bring together advances in neuroscience, the ability to analyze massive amounts of behavioral data, and the proliferation of sensors and connectivity, and you have a powerful recipe for affecting society through computational means. So what do we need to do to stop this technology from going astray?

Consider the advances in modern oceanography. In the not too distant past, scientists collected samples and measurements from the ocean floor episodically, in select places and at specific times. The data was limited and usually not shared widely. Threats were not easily detected.

Today, we find portions of an ocean floor instrumented with wireless interactive sensors and cameras that enable scientists (and laypeople) to see what is happening 24 hours a day, seven days a week. This allows scientists to “take the pulse” of the ocean, forecast a range of possible threats and suggest powerful interventions when needed.

If we can do this for monitoring our oceans, we can do it for our social media platforms. The principles are the same: aggregating multiple streams of data, making such data transparent, and applying the best analytical and computational tools to uncover patterns and detect signals of change.
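To make the principle concrete, here is a minimal sketch of one such "pulse-taking" signal: accounts that post at near-perfectly regular intervals are a classic automation tell. The threshold, function names, and data shapes are illustrative assumptions, not a production detector, which would combine many signals across platforms.

```python
import statistics

def interval_regularity(timestamps):
    """Coefficient of variation of the gaps between posts.

    Lower values mean more machine-like, metronomic posting.
    `timestamps` is an ascending list of post times in seconds.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

def flag_suspect_accounts(streams, threshold=0.1):
    """Flag accounts whose posting rhythm is suspiciously regular.

    `streams` maps account name -> list of post timestamps; the 0.1
    threshold is an assumed cutoff chosen for illustration only.
    """
    return [
        account
        for account, ts in sorted(streams.items())
        if len(ts) >= 3 and interval_regularity(ts) < threshold
    ]

# Example: a bot posting exactly every 10 minutes vs. a human's bursty rhythm.
streams = {
    "likely_bot": [0, 600, 1200, 1800],
    "human_user": [0, 300, 2100, 2500],
}
```

Aggregated across millions of accounts and made transparent, even simple statistics like this let researchers and journalists spot coordinated campaigns as they form.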

Then we will be able to warn experts and laypeople alike – technology companies, policymakers, journalists, and citizens – of political bot attacks or other large-scale disinformation campaigns before these take hold.

We know how to do this in many realms; what we need now is the will to apply this knowledge to our social media environment.

from Artificial intelligence (AI) | The Guardian http://ift.tt/2kU1ijF

