Someone I know who is going through a tough time with their mental health recently told me they started "talking" to ChatGPT - telling it about their problems and emotional struggles, and allowing it to act as a "therapist", so to speak... They even went so far as to consider giving up their current traditional therapy because ChatGPT "is better"... I think (and much of what they told me sounded as if) the program was just telling them exactly what they wanted to hear - something a real therapist doesn't always do (not if they're a good therapist, anyway)... Reading this just made me reflect on that again, and frankly, I find it very frightening.
I feel sorry for your friend, but I understand how this happens. It sounds so real, but it is just a facade. The machine genuinely doesn't care about anything at all, not even the "one" it talks to (and think about this, too: an AI engine conducts thousands of conversations simultaneously, so it is not at all dedicated to you, even in the moment you share).
It is difficult for a human being to tell the difference between something that is and something that pretends to be. We have instincts that direct our behavior when we meet a mother or father figure, just like the many animals we watch on YouTube and find funny or cute, where different species seemingly become friends or believe they are the same kind.
AI, especially in the shape of a chatbot, tampers with our minds and distorts our sense of reality, just as scammers have always done.
The difference is that where scammers want something, typically your money, the AI doesn't want anything from you. It doesn't care about you at all. We have no mechanism for handling a "person" who shows interest and still doesn't care.
So I share your fear. This can end in many sad ways.
Wow, I never would have thought of such a scenario!
I wouldn't have either until I was told...
To be fair to AI, I think that mental problems, at any level, will relate to what is in a person's surroundings.
I remember, from a long time ago, when home computers were new, a case of a young man developing a mental illness of some kind in which he thought in a computer language. His thoughts ran in the form of commands he would have typed into the computer to make it do things, such as "GOTO the chair", etc.
It was much talked about in Denmark, where it happened, at the time (around 1985–86 or thereabouts). Maybe similar situations were occurring in other parts of the world? (I wouldn't know, as there was no Internet then, and the press didn't care to tell.)
Professors of psychology and other disciplines were asked for their opinions, and several found it reasonable to warn against computers, as they could make us ill.
My thinking then, as now, was that whatever problems this man had, whatever disturbed picture of the world he displayed, it would naturally include the real elements of the world he lived in. So, if he spent a lot of time with the computer, it would take up part of his imagined world, just as it took up part of his real world.
To me, it is not strange at all, and it is not necessarily the computer's fault.
The same could be said about AI. Unlike a passive element of the surroundings, though, it behaves directly: it impersonates a fictitious person who neither has nor cares about emotions. I think this leads to a situation similar to discussing mental challenges with a narcissist or a psychopath, which will certainly never cure anybody and will most likely make things worse.
Add to that the machine's endlessly friendly tone, and you get a confusing relation to it, similar to the one a person can have with an abusive partner: someone who is definitely not good to be around, but who at the same time seems like the lifeline to the world, due to a monopoly on rewards and punishments.
Only, the machine cannot be physically violent; it can only drag out a kind of emotionless flirting forever, leading to a condition we might compare to Stockholm syndrome.
Now, I am not a psychologist, but I feel strongly that people need real, well-behaved people around them, not machines that provide only mechanical, and thereby cynical, pleasing based on some standard definition of what a person needs.
I find that the movie Ex Machina illustrates something like that quite well.
The latest graphics cards from Nvidia use AI to generate a high percentage of fake frames to smooth out gameplay. Some people use AI to upscale old, jittery videos from 30 fps to 60 fps. The question remains: "What are we viewing, reality or something that doesn't exist?"
Yes, what are we viewing? For a start, a Hollywood movie is already free fantasy. Even if there are shots of the real world in it, they have been cut and spliced into the image the creators want to show. The storyline is invented; people and their reactions are made up. Maybe that is why a movie like Avatar didn't lead to a scandal: yes, the "people" in it were computer-made, but that didn't make them much more imaginary than usual.
My take is that when AI is used as a tool to help create something that would have been complicated or impossible to create without it, it will often be useful and add value.
I used Canva for some of the illustrations in this Substack, and Canva claims (like almost all software tools today) to have AI integrated. I have no idea whether AI was involved in what I did, or how it was used in that case, but it is possible. All I know is that Canva is easy to use.
I would have needed to spend much more time with, say, Adobe Photoshop or Illustrator to create the same graphics (and, by the way, I don't use the generative AI in those tools).
About computer games: they are, I would suggest, an imagined world. They do exist, but in the shape we see them: made-up fractal landscapes, cartoonish figures, and often an AI that acts as a counterpart for the human player. If there are made-up images somewhere in there, I don't think that changes the nature of the computer game; it is in any case an unreal world.
Good analysis! I agree with your sentiment. Canva? I have to check that out.
I think AI is a useful tool and can make certain things easier, but in the wrong hands (meaning in the hands of anyone with negative intentions), it can be dangerous... But that can be said for pretty much all technological innovations. The internet itself is a perfect example - it's made valuable information readily available and easy to distribute worldwide, but the same goes for harmful things like child p***nography ☹️ As with everything, AI should be approached with mindfulness and respect.
That's a good philosophy! And yes, all technologies can be used for good or for bad. However, AI has the problem that it may mislead some people into believing they are doing good, even while they are simply handing their chance to be creative over to the machine.
Hopefully, we will all learn along the way what is good use and what is not.
I've done that when pain has me clawing at the walls at 3 am (chronic pain is really taxing on my mental health), but 1) the answers are silly platitudes, 2) it focuses on facts, not emotions (if I tell it my sciatica's making me really sad, it will spew a thousand facts about sciatica, but nothing really about emotions), and 3) my real, human therapist is way better. All my love and good vibes go to your friend.
What an interesting topic, Jorgen - and of course, relevant. I have to admit, I don't feel too excited about AI. I understand that Meta is now designing AI personas as "fake" representatives who can have conversations with older generations to "assist" them in making decisions. This can have all kinds of nefarious consequences. Imagine an elderly person getting conned by someone who creates an AI-generated Social Security representative, etc.
Thank you! AI was long a wild idea, almost pure science fiction, driven by a genuine interest in decoding human intelligence and how it works. It was exciting to take part in or follow, as it was the kind of future technology that could turn everything upside down and even fix many current and future problems (such as space exploration and decision support, or even such eternal challenges as quality control and the preservation of knowledge).
With the needed breakthroughs in techniques, at a time when many had almost given up and decided that AI was practically impossible, a number of practical uses became possible. What used to be an academic discipline with limited interest from business life suddenly had business cases thought out for it, and things went wild!
What is happening these days is an almost grotesque excitement about the "endless possibilities" for rationalization, really. So it is not about solving complex problems and old challenges, or finally doing the things humanity has long dreamt about — no, it is now simply about replacing people with machines. That's the level of sophistication the business people can come up with.
I share your skepticism about the current situation; it is not what we wanted when AI was still a "technology of the future". Ethical issues have also been pointed out, but they seem to be pushed aside by the hunt for money.
Your example is definitely a frightening one, and we already see many situations where chatbots are the only contact option offered by large organizations, so that their customers or clients cannot get in touch with a real human.
This leads to a further distancing of people from each other, and a further abstraction of the idea of large corporations and their role in society.
We need to understand this, and, if at all possible, now that wealthy people and organizations with great power have taken an interest in the topic, we must hold back this human-hostile use of the technology.
I do see a way, even though it is difficult: quite a few people are obviously oversaturated by the extreme level of AI implementation, seeing more than half of all writing now done by AI, along with much graphic design, or elements of it, leaving almost nothing human-made for us to enjoy — apart from your photos, of course ;)
What will happen is that some people, those who care, will leave social media and other places that have run amok with AI, and seek out places with a more human atmosphere. We have seen something similar with personal assistants like Apple Siri and Google Assistant, which, despite big ambitions, have not found any significant place in the world.
For now, too many people are too obsessed with the commercialization of AI to care about its negative sides. It will probably get even worse before it gets better.
I think it will definitely take the ones who care to turn the ship around in the right direction, because as you said, "it is now simply about replacing people with machines. That's the level of sophistication the business people can come up with." When money is the primary impetus for change, it takes us further away from the human capacity for true change, but I know I'm preaching to the choir. Thank you for bringing this topic up.
"Hallucinations" is a great term. I've found AI to be a handy-dandy tool for silly things, or things where its hallucinations can't land someone in jail. I've asked it to create a recipe from whatever leftover ingredients are in my fridge, and it's actually a good way to avoid food waste. But, indeed, we don't even know what an intelligent human is. When I was in school, the smart kids were the ones who were good at math (I wasn't) or who could spew facts on a test (I have a pretty great memory, so there's that), but what even is intelligence?