How To Spoil Your Life With ChatGPT
AI is a blessing and a curse. It can do what seems like wonders that then turn out to be your worst nightmares

Going to court
Back in 2023, an interesting piece of news appeared: a lawyer, Steven Schwartz, had used ChatGPT to prepare a court case. He wanted it to help him find precedent cases and decisions to cite in the new case. It willingly did, and Mr. Schwartz and his colleague Peter LoDuca (who took over the case) went to court well prepared, or so they thought!
But it turned out that ChatGPT had invented the reference cases and many of the other details it gave him; it had actually fabricated legal documents that were then submitted as evidence. It had been “hallucinating”. And when the court and its judge found out that it was all fake, life and career turned bad for the lawyers.
They were later fined for the misconduct and can continue working, but the bad publicity will follow them for a long time to come. For a while, it looked like they could even lose their right to practice law.
The real crime
What went wrong here, really?
Like so many other users of ChatGPT, Steven Schwartz didn’t understand what it was. As he said later, he thought it was like Google: a search engine that could find facts. He didn’t think of it as the generative text engine it is; he didn’t know that it invents information!
Of course, the invented information has its basis in “training” on a lot of input information, but not only could some of that input have been wrong, the AI may also have learned something wrong even from the correct information it was given. And not least: such a chatbot has been constructed to always try to help by providing the best possible answer. It “knows” from its coding what a good answer looks like, but it doesn’t itself know what is true and what is false, so it cannot even tell how good its answer is.
The AI developers call these “hallucinations”: the AI simply creates new, false information out of its purely programmatic drive to provide the requested answer, even when it doesn’t have one.
And the AI sounds confident. So we humans trust it, because we trust humans who sound confident. It is a major problem with AI that it reminds us so much of a human that we easily believe it is comparable to one.
But it is a machine.
And these lawyers didn’t understand what that implied. They didn’t check what the machine delivered; they just accepted it without thinking.
We are all guilty of this
We really have a serious problem understanding what it means that something can behave and express itself like a human being, which is what we often call “being intelligent”.
Over time, various tests have been devised for examining and deciding whether an AI is, in fact, intelligent. Several of these tests rest on an overly simple assumption reminiscent of the famous Duck Test:
“If it looks like a duck,
swims like a duck,
and quacks like a duck,
then it probably is a duck”
One of the famous AI tests is the Turing test, which, in short, is about asking a machine and a human the same questions and taking answers from both without knowing which answers are from the machine and which are from the human:
If you then cannot determine which answer is from the machine and which is from the human, then the machine can be considered just as intelligent as the human.
But we have no real clue what makes a human being intelligent, partly, in my opinion, because we keep insisting that only we humans are truly intelligent, and that the other animals are, at best, somewhat intelligent in their own way. And this makes it difficult for us to tell whether a machine is really intelligent or just behaves as if it were.
Most people seem to want to see a difference here: that there must be some undefined intelligence going on inside the machine, similar to the inner voice or spirit that we, as humans, find in ourselves. They want to see a ghost in the machine.
Nevertheless, in a weak moment, we make decisions based on an implicit Turing test: we simply tend to believe whatever sounds intelligent, no matter who is saying it, and no matter whether it was generated by a machine only to please us.
And that can lead to problems, just like it did for the two lawyers.
Where this will bring us
After considering all this, I see no other way forward than to stop trusting ChatGPT to deliver facts, and to start checking and double-checking whatever comes out of this machine, even if it looks like facts.
Everything else, everything generated that is indeed supposed to be generated, such as visual art or fictional writing, can be enjoyed without too much consideration, even though lawsuits are now on their way from artists of all kinds who believe that ChatGPT is somehow plagiarizing their works or, even more simply, that it has read, looked at, or listened to their works without their permission.
So perhaps, even when you understand that what ChatGPT tells you is pure imagination and use it only for entertainment purposes, you may be in danger of becoming a perceived partner in crime for this thieving machine, in danger of being sued by the established entertainers.
And that could spoil your life.
Just be aware that art and text can look good but still be nonsense, as described in the article AI Gibberish on my other Substack, All of Life by Inidox.
Someone I know who is going through a tough time with their mental health recently told me they started "talking" to ChatGPT, telling it about their problems and emotional struggles and letting it act as a "therapist", so to say... They even went so far as to consider giving up their current traditional therapy because ChatGPT "is better"... I think (and much of what they told me sounded as if) the program was just telling them exactly what they wanted to hear, something a real therapist doesn't always do (not if they're a good therapist, anyway)... Reading this just made me reflect upon that again, and frankly, I think it's very frightening.
What an interesting topic, Jorgen, and of course, relevant. I have to admit, I don't feel too excited about AI. I understand that Meta is now designing AI personas as "fake" representatives who can have conversations with older generations to "assist" them in making decisions. This can have all kinds of nefarious consequences. Imagine an elderly person getting conned by someone who creates an AI-generated Social Security representative, etc.