By Aatif Sulleyman, published in The Independent on March 9, 2017.
Stephen Hawking has warned that technology needs to be controlled in order to prevent it from destroying the human race.
The world-renowned physicist, who has spoken out about the dangers of artificial intelligence in the past, believes we need to establish a way of identifying threats quickly, before they have a chance to escalate.
“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” he told The Times.
“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”
He suggests that “some form of world government” could be ideal for the job, though it might create problems of its own.
“But that might become a tyranny,” he added. “All this may sound a bit doom-laden but I am an optimist. I think the human race will rise to meet these challenges.”
In a Reddit AMA back in 2015, Mr Hawking said that AI could grow so powerful it would be capable of killing us entirely unintentionally.
“The real risk with AI isn’t malice but competence,” Professor Hawking said. “A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
Tesla CEO Elon Musk shares a similar viewpoint, having recently warned that humans are in danger of becoming irrelevant.
“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” he said, suggesting that people may need to merge with machines in the future in order to keep up.