Eliza…

I got this game, Eliza, a few weeks ago. It’s a point-and-click adventure/visual novel, so it’s possibly right up my alley. I like to play through games like these one after another because they often have interesting stories and are fun even when I’m tired and just want some entertainment without having to think too much.

What I didn’t know was just how close to home the story of Eliza would hit. Here’s the thing: I don’t know much about the story at all and I’ve only just started playing (I never read too much about these games beforehand, to avoid spoilers). But here we have psychologists (I’m a psychologist) who developed an AI that listens to clients and gives them diagnoses as well as advice. However, since it’s basically a computer, it lacks the human element, so they employ humans to read out the lines that the AI writes for them. That makes it more acceptable for the client, even though they know that what they hear doesn’t come from the human sitting opposite them but from the AI.

It’s such a fascinating topic! Back when I was studying, I already heard that computers fed with appropriate data are actually much better at making diagnoses and make fewer mistakes than we do. Yet all these years later, it’s still us humans doing the diagnosing – most often without the help of computers, except maybe for running a single test’s evaluation through some software. My real job is – among other tasks – to do psychological testing, including evaluations and interpretations. I’m not making any diagnoses, but I’m still giving advice. Sometimes I come in after somebody else has already done psychological tests. Most of the time these were done correctly and I agree with the interpretations of the results, but not always. So I’m basically the second opinion. And then, behind me, there is my boss, who is the third opinion. I guess with that many people checking and double-checking, we should get it right. But is all of that really necessary, or couldn’t we just use some software to analyze the cases and help us come to the correct conclusions?

And maybe, if we go one step further, giving advice or even treating people is more helpful when it comes from a computer/AI. But I do think that a real, physical human being who is capable of compassion plays a huge part here as well. Grawe came to the conclusion that the therapist is part of the equation when looking at therapy effectiveness, and I personally agree with this. I had one therapist who had to leave when she finished her therapy training, and the second therapist I had… was nice. She did her best. But the chemistry between us wasn’t right. She didn’t do badly, she didn’t do any harm, but without the first therapist, I wouldn’t have overcome my social phobia.

But back to Eliza: I play as Evelyn, who is one of the “voices and faces” of Eliza. She just started her job reading aloud what Eliza computes as the best answer. And then there are also the psychologists who developed Eliza, and I assume they’re going to be a part of this story as well. It’s fascinating, because this seems utopian, yet very, very real.

4 Comments

  1. There’s a lot to unpack in Eliza. I just finished it a couple of days ago too, after leaving it sitting at Chapter 5 for a month or two.

    Without spoilers, one notable thing is how the protagonist is expected to act as a mirror and oracle for the Eliza “AI” without adding anything of her own – at some points this seems almost inhumanly disconnected and nonsensical, while at others it might actually be the best thing in the interests of safety for both parties involved.

    It also makes you think about the motives of the folks who patronize the Eliza program: why are they actually choosing to talk to a more sophisticated chatbot instead of interacting with a human being, and is the human proxy actually necessary? Is the human there for the purpose of /listening/ rather than injecting their own take on matters?

    Nostalgia also helps: I used to play around with the Dr. Sbaitso chatbot that came with Sound Blaster cards in the old days. A lot of folks have also put their own spin on the old ELIZA chatbot. This one is a pretty good example of the simplicity behind deflection and just reflecting back what the human conversation partner says: http://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm
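    The whole reflection trick really does fit in a few lines. Here’s a minimal sketch in Python – the patterns and pronoun table are made-up examples of my own, not Weizenbaum’s original script – just to show how far pronoun swaps plus canned deflections get you:

    ```python
    import re

    # Pronoun swaps used to "reflect" the user's words back at them.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "mine": "yours",
        "am": "are", "you": "I", "your": "my", "yours": "mine",
    }

    # A few made-up patterns for illustration (not the original ELIZA
    # script); the captured fragment gets reflected into a canned reply.
    PATTERNS = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"because (.*)", "Is that the real reason?"),
        (r"(.*)", "Please tell me more."),  # catch-all deflection
    ]

    def reflect(fragment):
        """Swap first- and second-person words in a captured fragment."""
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

    def respond(user_input):
        text = user_input.lower().strip(" .!?")
        for pattern, template in PATTERNS:
            match = re.fullmatch(pattern, text)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I feel alone in my office"))
    # -> Why do you feel alone in your office?
    ```

    The historical ELIZA was essentially this with a much larger script of ranked keywords and decomposition rules – which is part of why the deflection feels so uncanny.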

    1. About the motives: I didn’t think about that, actually. I just automatically went with what I know and assumed accordingly. Germany has a really good health care system in general – except when it comes to the availability of psychotherapists. Most have a very long waitlist (six months to a year) and some have closed their waitlists to new patients because they’re simply too long already. You can usually find a therapist you pay out of pocket, but there aren’t enough whose services are covered by the health care system. Having something like this setup – software plus a modestly paid human proxy (I think she says at the beginning that it doesn’t pay much) – would be cheaper. Availability could be increased, and waiting times would shrink!

      And people trust computers more and more the longer we use them. Corona is showing this right now: people who never used voice chat before have to use it now, and all of a sudden they realize that yes, you can have human interaction this way, and it works really well! ;)
