By Marvin Ross

My old university research AI – a library card catalogue.
Last week we published a blog by Dr Dawson on the similarities between Trump’s targeting of immigrants, refugees and trans people and how Hitler built support for the Holocaust by demonizing Jews, Gypsies and others. The title was Alligator Alcatraz: The Third Reich, Immigrants, Jews and the Mentally Ill. Shortly after it was posted on LinkedIn, I received an e-mail telling me that the post had been removed. It seems it might promote hate.
I was not surprised and I did appeal. Clearly, their AI or algorithm had made the decision by picking up key words found in the text. It goes to show that we are starting to rely too much on a computer rather than thinking for ourselves. A computer is fast and it has a great deal of data at its disposal, but it is not capable of nuance, sarcasm, humour or other human qualities.
AI drives the chat bots on websites and on phone calls when you try to contact a company. I’ve spent far too much time fighting with them and going in circles because they cannot comprehend. My wife knows when I’m talking to one on the phone because she hears me screaming HUMAN into the phone. Chat bots are often programmed to refer you on to a person when you use the word human. I also type that online and it usually gets me a real person who is capable of comprehending subtle facts.
Back in the days of mainframe computers, there was an apocryphal story of a computer that could translate between Russian and English. “The spirit is willing but the flesh is weak” was fed in and what came out was “the wine is good but the meat is off”. Translation software, I’m told, is much better today but that principle applies to AI in my opinion.
The question now is whether AI can benefit humanity. There is evidence that the answer is no.
An MIT study just completed found that using ChatGPT to write an essay can lead to cognitive decline. Researchers divided subjects into three groups and had each group write an essay: one used ChatGPT, one used online research and one used no sources. The AI group had the weakest brain connectivity and remembered less of their essay. “Over four months, ChatGPT users consistently underperformed at neural, linguistic, and behavioral levels,” the study reads. Those who didn’t use outside resources to write their essays had the “strongest, most distributed networks”.
You’ve probably noticed, if you do a lot of Google searching, that Google is using AI to give you a fact summary of your search. It is useful, but I still keep going down the page and click on the links provided. Unfortunately, most people do not go beyond the AI summary they are given. One study found that about 60 per cent of those searching never leave the summary they get to click on other potential sources of information.
That same article talks about Google’s new AI Mode, which merges ChatGPT’s prompt functionality with Google’s near real-time, all-encompassing search. With that you get a very concise overview with links to explore. “Unfortunately for U.S. users, AI Mode’s ‘reliable news sources’ turns out to be a predictable list of centre-left, mainstream news outlets, with the BBC, CNN, the New York Times, and the Washington Post overwhelmingly more likely to be cited than independent, smaller media, let alone centre-right sources.”
People doing research are fed a biased sampling and, unless they make an effort, are not getting a comprehensive view of an issue.
Which brings us to Elon Musk and his very own AI system called Grok. The world was stunned when Grok began posting anti-semitic posts on X and referencing Hitler. Given the rising level of anti-semitism coming from so-called progressive groups worldwide, with synagogues and Jewish businesses fire-bombed along with assaults and murders, this shouldn’t surprise us. Maybe Grok is picking that up.
Tyler Cowen, in his essay What Happens When Your AI Goes Nazi, explained that Grok was told not to avoid politically incorrect conclusions. And, unlike other AI systems, Grok was trained on Twitter and X. Cowen explained that “As anyone who has spent more than two minutes on the platform will know, that is not necessarily the definition of a healthy and balanced information diet.”
Politico interviewed Gary Marcus, who has co-founded multiple AI companies, about Grok’s Jew hatred. If anyone is shocked at my use of the term Jew hatred, it is what Jewish groups fighting this alarming rise in anti-semitism call it. Anti-semitism is hatred of Jews, so let’s call it what it really is. Marcus, an emeritus professor of psychology and neuroscience at New York University, has emerged as a critic of unregulated large language models like Grok.
He told Politico that “the failure to regulate AI would be comparable to the failure to regulate social media, something many elected officials now recognize as a mistake because of its detrimental impact on the mental health of kids and the explosion of misinformation online, among other issues.” Marcus also warned about a future in which powerful tech titans with biased AIs use them to exercise outsized influence over the public.
Finally, and much to my surprise, AI chat bots are being used for mental health therapy. I should not have been surprised. In a Toronto Star op-ed, Angela Facundo, an English professor at Queen’s University and a Toronto-based psychotherapist, discussed the dangers of this growing trend. She points out that “In psychotherapy, you need another human being for the treatment to work; otherwise, it’s just another echo chamber. Because AI tools can’t feel or actually think, they can only mimic what (bad) therapists do: affirm convictions, enable behaviour and give advice.”
She adds, “the connection between two fundamentally different individuals is perhaps the most powerful vehicle in therapy. It’s also the most challenging to develop and sustain, which explains why so many may opt for AI. But that also means the real work is just not happening.”
As an English professor, she recommends reading a novel, which can be therapeutic. “Unlike AI, the novel gives us stories that we relate to, but that we can’t control. Reading these stories helps us take our time, think, imagine, feel, and talk to each other — and looking at the world today, we’ve never needed these capacities more. If we stop valuing what our human minds can do, we stop valuing ourselves and each other. The pleasures and difficulties of thinking with others become the foundation of a human life worth living. Let’s not allow AI to take that away from us.”
I totally agree.
And to LinkedIn’s credit, I won my appeal and Dr Dawson’s post was restored on LinkedIn.