How The Media Gets AI Alarmingly Wrong

In June of last year, five researchers at Facebook’s Artificial Intelligence Research unit published an article showing how bots can simulate negotiation-like conversations.

While for the most part the bots were able to maintain coherent dialogue, the researchers found that the software agents would occasionally generate strange sentences like: “Balls have zero to me to me to me to me to me to me to me to.”

On seeing these results, the team realized that they had failed to include a constraint limiting the bots to sentences within the parameters of spoken English; as a result, the bots developed a type of machine-English patois to communicate between themselves.

Other experts in the field considered these findings fairly interesting, but not especially surprising or groundbreaking.

A month after this initial research was released, Fast Company published an article entitled “AI Is Inventing Language Humans Can’t Understand. Should We Stop It?”

The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were somehow out of control.

The ice of AI’s first winter only fully retreated at the beginning of this decade after a new generation of researchers started publishing papers about successful applications of a technique called “deep learning”.

While this was fundamentally a decades-old statistical technique, similar to Rosenblatt’s perceptron, increases in computational power and the availability of huge data sets meant that deep learning was becoming practical for tasks such as speech recognition, image recognition and language translation.

As reports of deep learning’s “unreasonable effectiveness” circulated among researchers, enrollments in university machine-learning classes surged, corporations began investing billions of dollars to find talent familiar with the newest techniques, and countless startups were founded to apply AI to transport, medicine or finance.

As this resurgence got under way, AI hype in the media resumed after a long hiatus.

In 2013, John Markoff wrote a feature in the New York Times about deep learning and neural networks under the headline “Brainlike Computers, Learning From Experience”.

Not only did the headline recall the media hype of 60 years earlier; so did some of the article’s assertions about what the new technology was making possible.

Since then, far more melodramatic and overblown articles about “AI apocalypse”, “artificial brains”, “artificial superintelligence” and “creepy Facebook bot AIs” have filled the news feed daily.
