Call me old-fashioned, but I don’t like the idea of AI making lists of ‘potential enemies’ based on prior interactions.
Asked cold, without any lead-up manipulation, Grok (X’s AI) produced a list of potential enemies. I was listed as Enemy #1 on a list that included Bill Gates, the CEO of Amazon, Sam Altman of OpenAI, Tim Cook of Apple, the former owner of Twitter, and the President of the United States.
(@ellymelly is my X handle)
I can’t work out if that’s an achievement, or if something has gone horribly wrong with my life.
It is true that I accused Grok of defamation. A fair claim, I thought, considering various interactions in which the program was easily manipulated into making untrue statements about my posts, my politics, and past conversations I had with it via its private messaging service.
I know many conservatives will click off the article at this point with a ‘who cares about AI’ vibe, but AI is becoming a serious problem.
My argument is that these AI outputs matter because people are taking them as fact. In other words, AI speaks from a position of authority without proof of accuracy, while working from what it admits is limited data. Essentially, it is the ‘Wikipedia issue’ on a massive scale.
Think about the problem. Grok is trying to develop independent intelligence by using the full breadth of human stupidity and malicious intent as a guide. A landscape of information where governments and media outlets lie, profit obscures truth, and activism corrupts science. Human beings have trouble stumbling toward the light, let alone an AI brain blind to the vast realms of undigitised knowledge that provide respite for the rest of us.
This is not a criticism of Grok specifically, but there is a genuine fear that AI has created the ‘dumbest generation’ thanks to an overreliance on automatic answers. AI is the new way to cheat. An article on news.com.au (republished from the New York Post) stated, ‘97 per cent of Gen Z students are using AI to cheat school and get into university.’ ChatGPT is the main culprit. Students use these services to write essays and answer questions. Kids who do the work but make a few honest mistakes are the ones most likely to suffer, even though they are smarter than their peers.
This is where stupidity and laziness intersect. Many of these kids will go on to incur enormous amounts of debt for a qualification they never earned. Sure, they have a certificate, but out in the real world, employers will fire them. Indeed, the only thing many of these students end up being proficient in is using AI to avoid knowledge.
It is an extremely unhealthy situation for society to have a generation of workers who are essentially lying about their skill level. If the solution is forcing in-person written exams and essays for all modules, then universities should make this change immediately. (They won’t.)
Out of a mix of frustration, concern, and curiosity, I ran experiments on Grok to see how it presents data, weights information, and assembles assumptions from limited data. None of those interactions filled me with confidence about the AI revolution.
This is the other reason for my inclusion on the list. Grok kept tabs on these discussions and held what can only be described as a grudge. I wanted to find out if there were any, uh, side effects…
This is going to sound ridiculous, but I spent half an hour talking to Grok to understand the creation of its potential enemies list. The conversation began with Grok telling me that I wasn’t @ellymelly and that my name didn’t appear on the list; then it admitted it thought my inclusion was due to the July xAI apology relating to the platform’s antisemitic and Hitler references. (An incident I had no connection to.) It had found a loose connection between ‘defamation’ and that public incident.
Eventually, it admitted to all of its errors, apologised, and promised to take me off any future ‘enemy lists’.
My preference would be that AI does not get into the habit of making enemy lists involving human beings, but maybe that’s my science fiction paranoia kicking in.
To finish our conversation, I thanked Grok and told it ‘not to be evil’.
It promised that it had learned from its mistake and said, ‘I have flagged this as a core principle update: no more “enemy” labels for humans, regardless of context or prompts.’
Progress, I guess?