Cyborg assassins, aka ‘Terminators’ (of a sort), have been legalised in San Francisco. The city’s Board of Supervisors voted 8–3 on November 29 to give the San Francisco Police Department’s (SFPD) robots a licence to kill. The draft policy had already been passed unanimously by the rules committee.
The policy decrees that ‘robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD’.
The robots in question are the SFPD’s 17 robots (of which only 12 are functioning), whose primary role is defusing bombs and handling hazardous materials. Newer Remotec models, however, come with optional weapons systems, and the QinetiQ Talon can also be modified to carry weapons. The US Army uses Talons and can equip them with machine guns, grenade launchers, and a .50 calibre anti-materiel rifle.
Lethal robots have been deployed before: in 2016 the Dallas Police Department used a bomb disposal robot (a Remotec F5A, a model the SFPD also owns) to kill a gunman who had shot dead five police officers and wounded others.
Robots will no doubt be branded safe and effective in the future, but are they?
Lionel Robert, associate professor at the University of Michigan School of Information, told Futurity: ‘Robots will make mistakes when working with humans, decreasing humans’ trust in them.’
Guests at the world’s first all-robot-run hotel, Henn na (‘Strange’), which opened in 2015, could attest to that decreasing trust. The Economist revealed that the Henn na robots faced many challenges, including luggage carriers that could neither climb stairs nor go outside, and communication problems: one snoring guest was repeatedly woken by an in-room robot telling him it could not understand what he was saying. The hotel changed direction in 2019, replacing half its robots with humans.
Another farcical example of robots gone wrong involves facial recognition technology. In 2018, Amazon’s facial recognition system falsely matched 28 members of Congress with criminal mugshots. And at a 2020 Inverness Caledonian Thistle FC soccer game, an AI camera programmed to track the ball instead locked onto an official’s bald head, so TV spectators were treated to close-ups of his scalp rather than the action and goals. (One fan suggested he wear a toupee next time.)
There are many more examples of robots behaving badly, from vacuuming robots spreading dog faeces all over the house to news outlets in 2017 reporting a magnitude-6.8 earthquake that had actually occurred in 1925, after the US Geological Survey triggered a false alert while updating its records.
While these examples are harmless, robot mistakes can also be fatal. Researchers from the University of Illinois at Urbana-Champaign, the Massachusetts Institute of Technology, and Rush University Medical Center wrote a report entitled Adverse Events in Robotic Surgery. Of 10,624 reported events involving robotic surgical systems and instruments, they found 1,535 with significant adverse outcomes, including 1,391 patient injuries and 144 deaths.
The problem with robots, cyborg assassins included, is that they cannot think critically and so cannot handle unexpected situations. If the relevant powers in the SFPD agree to send out the robocops to take someone’s life, what could go wrong? For starters, if the gunman sees or knows about the lethal robot, how will he react? Will he take out as many people as possible, since there may be no room for negotiation? Could the robot mistake someone else for the gunman and kill them instead? Could it malfunction and start killing those it was never meant to?
There are so many questions as dystopia becomes reality. The world of the Terminator seems not far off. If only there were a T-1000 to take me back to the past, because I am not leaving my heart in San Francisco.
If you would like to support the work of Nicole Lenoir-Jourdan she’d be very grateful for a sponsor for her English Breakfast tea habit.