The “biggest existential threat” to humanity, he thinks, is a Terminator-like machine superintelligence that will one day dominate us. Luckily, Mr Musk is mistaken.
Plenty of machines can do amazing things, often better than humans.
For instance, IBM’s Deep Blue computer beat the world chess champion Garry Kasparov in a match in 1997.
In 2011, another IBM machine, Watson, won an episode of the TV quiz show Jeopardy!, beating two human champions, one of whom had enjoyed a 74-show winning streak.
The sky, it seems, is the limit.
Yet Deep Blue and Watson are versions of the “Turing machine”, a mathematical model devised by Alan Turing which sets the limits of what a computer can do.
A Turing machine has no understanding, no consciousness, no intuitions — in short, nothing we would recognize as a mental life. It lacks the intelligence even of a mouse.
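To see how mechanical such a device really is, here is a minimal sketch of a Turing machine in Python: a finite set of rules, a tape, and a read-write head, nothing more. The particular machine and its rule names are illustrative inventions, not drawn from the article; this toy simply flips every bit of a binary input and halts.

```python
def run_turing_machine(tape, rules, state="scan", blank="_"):
    """Execute `rules` until the machine enters the 'halt' state.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is +1 (step right) or -1 (step left).
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    # Read the tape back in order, dropping blank cells.
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Rules for a bit-flipping machine: sweep right, inverting 0s and 1s,
# and halt on the first blank cell.
FLIP_RULES = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),
}

print(run_turing_machine("0110", FLIP_RULES))  # -> 1001
```

Every step is a blind table lookup. The machine follows its rules perfectly, but there is nothing it is like to be this program, which is the point.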
Believers in the coming of AI disagree.
Stephen Hawking has argued that “the development of full artificial intelligence could spell the end of the human race”.
He is right — but the same is true of the appearance of the Four Horsemen of the Apocalypse.
Ray Kurzweil, the American inventor and futurist, has predicted that by 2045 the development of computing technologies will reach a point at which AI outstrips the ability of humans to comprehend and control it.
Scenarios such as Kurzweil’s are extrapolations from Moore’s law, according to which the number of transistors on an integrated circuit doubles roughly every two years, delivering greater and greater computational power at ever-lower cost.
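The arithmetic behind such extrapolations is simple compounding. A few lines of Python make the point; the starting figure below (the 2,300 transistors of Intel’s 4004 chip from 1971) is a commonly cited number, used here purely for illustration.

```python
def transistors(year, base_year=1971, base_count=2_300, doubling_years=2):
    """Projected transistor count under an idealised Moore's law:
    the base count doubles once every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Fifty years, i.e. 25 doublings, take 2,300 into the tens of billions.
print(f"{transistors(2021):,.0f}")
```

Run forward indefinitely, the curve reaches any figure you like, which is precisely why extrapolating it says nothing about intelligence.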
However, Gordon Moore, after whom the law is named, has himself acknowledged that his generalization is becoming unreliable because there is a physical limit to how many transistors you can squeeze into an integrated circuit.
In any case, Moore’s law is a measure of computational power, not intelligence.
My vacuum-cleaning robot, a Roomba, will clean the floor quickly and cheaply and increasingly well, but it will never book a holiday for itself with my credit card.