Some of today’s top techies and scientists are very publicly expressing their concerns over apocalyptic scenarios that, they believe, are likely to arise from machines with motives of their own.
Among the fearful are intellectual heavyweights like Stephen Hawking, Elon Musk, and Bill Gates, who all believe that advances in the field of machine learning will soon yield self-aware A.I.s that seek to destroy us.
Or perhaps just apathetically dispose of us, much like scum getting obliterated by a windshield wiper.
In fact, Dr. Hawking told the BBC, “The development of full artificial intelligence could spell the end of the human race.”
Indeed, there is little doubt that future A.I. will be capable of doing significant damage.
For example, it is conceivable that robots could be programmed to function as tremendously dangerous autonomous weapons unlike any seen before.
Additionally, it is easy to imagine an unconstrained software application that spreads throughout the Internet, severely mucking up our most efficient and relied upon medium for global exchange.
But these scenarios are categorically different from ones in which machines decide to turn on us, defeat us, make us their slaves, or exterminate us.
In this regard, we are unquestionably safe. On a sadder note, we are just as unlikely to someday have robots that decide to befriend us or show us love without being specifically prompted by instructions to do so.
This is because such intentional behavior from an A.I. would undoubtedly require a mind, as intentionality can only arise when something possesses its own beliefs, desires, and motivations.
The type of A.I. that includes these features is known in the scientific community as “Strong Artificial Intelligence.” Strong A.I., by definition, possesses the full range of human cognitive abilities.
This includes self-awareness, sentience, and consciousness, as these are all features of human cognition.
On the other hand, “Weak Artificial Intelligence” refers to non-sentient A.I. The Weak A.I. Hypothesis states that our robots—which run on digital computer programs—can have no conscious states, no mind, no subjective awareness, and no agency.
Such an A.I. cannot experience the world qualitatively, and although it may exhibit seemingly intelligent behavior, that behavior is forever limited by the lack of a mind.
A failure to recognize the importance of this strong/weak distinction may well be contributing to the existential worries of Hawking and Musk, both of whom believe that we are already on a path toward developing Strong A.I.
To them it is not a matter of “if”, but “when”.
But the fact of the matter is that all current A.I. is fundamentally Weak A.I., a fact reflected in the total absence of intentional behavior in today’s computers.
Although there are some very complex and relatively convincing robots out there that appear to be alive, upon closer examination they all reveal themselves to be as motiveless as the common pocket calculator.
This is because brains and computers work very differently. Both compute, but only one understands—and there are some very compelling reasons to believe that this is not going to change.
It appears that there is a deeper, more technical obstacle standing in the way of Strong A.I. ever becoming a reality.