Artificial intelligence is, well, artificial. And the use of ‘intelligence’ in this label is a misnomer. AI machines cannot think. They are just (increasingly) very, very, very fast calculating machines.
Which is why they can be so helpful to us humans. For example, AI can help people, with or without physical disabilities, find their way around. Medicine is much more marvellous because of technology, including AI. Industry has been able to increase productivity because of AI, including the development of machines that look autonomous (when in ‘controlled’ circumstances). Language translation, recreational games, enquiry response: all have become more available and accessible (on average) because of AI.
But there is currently lots of discussion about how dangerous AI might be for us humans. I remember the same fearful responses when personal computers were becoming much more prevalent through the 1990s. We were told that humans would run out of jobs. We were told that, like in the movies, computers would take control of our lives. And of course, because of these threats, we would all need extra government intervention to keep us safe.
It seems there is a similar confusion with AI. Of course, any powerful tool needs to be watched with appropriate caution – that is why we have rules about how we make and use our cars. But as with our vehicles, it is with the designer and the user that most of the risk lies, not the machine per se. John Lennox, Emeritus Professor of Mathematics at Oxford University, has captured some helpful commentary and reflection in his cleverly titled 2084. This comment from Stephen Hawking is an apt example:
The real risk of AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
But this, of course, raises the question of how AI determines its goals. It doesn’t. The designer and programmer do. Thus, Hawking is right to note that if a designer or programmer believes he or she has written code that is safe for humans but has done so incompetently, we are in trouble. It is why Lennox further explains:
In short, a machine learning system takes in information about the past and makes decisions or predictions when it is presented with new information…. [But] the human involvement is conscious. The machine is not.
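To put Lennox’s description in concrete terms, here is a deliberately tiny sketch of my own (in Python, with made-up data – it is an illustration, not anything from Lennox’s book): the ‘system’ simply stores past cases and, when handed new information, repeats the outcome of the most similar case it has already seen. The pattern matching is mechanical; the conscious involvement all sits with whoever wrote and fed it.

    # A deliberately simple sketch of 'learning from the past' (illustrative only):
    # the system stores past cases and, for new information, copies the outcome
    # of the most similar stored case. No understanding, just pattern matching.
    past_cases = [
        ((10, 90), "flood"),      # (hours of rain, cloud cover %) -> outcome
        ((8, 80), "flood"),
        ((1, 20), "no flood"),
        ((0, 10), "no flood"),
    ]

    def predict(new_case):
        # Choose the stored case closest to the new one and repeat its label.
        def distance(case):
            features, _label = case
            return sum((a - b) ** 2 for a, b in zip(features, new_case))
        _features, label = min(past_cases, key=distance)
        return label

    print(predict((9, 85)))      # prints 'flood' - the pattern, not a judgement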
The misplaced discussions about machines developing consciousness properly belong in movies like the 2004 I, Robot. Such constructions of reality seem very popular (the movie did very well, and when I typed it into my search engine, there were about three and a half million entries from which to choose!). But do movies like this always foreshadow the future? Philosophers from different schools of thought are sceptical about AI’s capacity for self-consciousness. Richard Dawkins admitted that he and his atheist colleagues had no coherent theory of human consciousness. Thomas Nagel tried to encourage his fellow atheists to try harder to find some coherent theory of consciousness, and of human self-consciousness, because the coherent one he did see came from Alvin Plantinga – and Nagel did not want to accept Plantinga’s starting point, which involved an immaterial first-cause Mind (God). Such debates take us to the question of the nature of our humanness – does it involve non-physical aspects of reality? Even some materialists say ‘yes’ to this, because of their ‘emergent theories of mind’.
However, these philosophers all seem to come to the same point with reference to AI. Whatever is happening in human thinking, it is not happening in AI. Two more quotes from Lennox’s collection summarise it well: the technology journalist Stephen Shankland explains that ‘AI is still very, very stupid.’ The Forbes contributor Kalev Leetaru explains, ‘At the end of the day the deep learning systems are less “AI” than [they are] fancy pattern extractors.’
So, before our governments jump in and try to control the field in their usual bureaucratic and anti-productivity manner, what is reasonable to consider in how to manage the use of such technology? Following Hawking’s logic, keeping track of quality control when physical risk is involved seems important. We do this with cars. So it makes sense to do it with other machinery involving AI and our physical safety. We also do this with many items in our shops, from tools to pushbikes to baby products.
However, there remains Lennox’s critique that AI’s consciousness and conscience live with the programmers:
For AI computer systems have no conscience, and so the morality of any decisions they make will reflect the morality of the computer programmers…
This leads us to the thorny question: what moral guidelines do we think are helpful for the designers of applied AI?
I don’t know. By that, I mean that I do not know what moral guidelines our current leaders are using. Their actions confuse me as to how they live by any consistent ethical code. There are many current examples. Our quest to save the planet, which makes more poor people poorer and kills more of them than a slower path to ‘renewables’ would (another misnomer – the machines that capture the wind and the sun are very much not renewable)?
Writing a new chapter into our foundational document that outlines how we will live together while enshrining division based on race?
Improving our universal respect for women by banishing one who stands up for who women are as women?
Or rejecting the rule of law by dismissing the presumption of innocence until proven guilty?
Or taking over an independent Catholic hospital under the guise of improving health care (the worst of which is run by the government)?
Or supporting freedom of religion by not letting religious people openly express their opinions through their own organisations – or any general business?
And of course, there is the economic value of dismantling innovation and productivity through the creation of a socialist version of capitalism (the oxymorons continue)…
Perhaps we can see the moral understanding, wisdom, and discernment of our leaders in their interventions during our last major ‘crisis’ (also known as our last great ‘panic’) – Covid. Oops – that was not about moral actions supporting the common good. It was an example of political opportunism run amok, with little sustained expression of respect for science and natural law.
What then is currently realistic to expect in the AI field? Perhaps this, as reported recently in The Australian:
AI is a big A, for artificial, and a really tiny I, for intelligence. So it’s not going to take over the world anytime soon. [Richard White, the founder of WiseTech Global]
But beware – some politicians will exploit their own versions of AI’s impact in order to create the next phobia, from which they will then rush in and rescue us from ourselves. They would be the next version of politicians blinded by the fantastical hope of a ‘Sonny’ from I, Robot.