The AI curious Newsletter: we need to talk about Eliza

We need to talk about Eliza

By Emily Stowe.

This isn’t the newsletter I planned to write. I was going to post a simple timeline of machine learning and artificial intelligence beginning with the work of Alan Turing. All that changed when I learned about ELIZA, the first “chatterbot” released in 1966. Now I can’t seem to chatter about anything else.

If anthropomorphism is AI’s original sin, and in this regard some believe AI has gone to hell in a hand basket, ELIZA was the first bite of the apple.

How Anthropomorphism Was Baked In

Alan Turing was a British mathematician who is considered the founding father of modern computing and artificial intelligence, among his many other achievements.

In his seminal 1950 paper, "Computing Machinery and Intelligence," Turing laid out the conceptual basis for AI and described a test to determine whether a machine can be considered intelligent. He called this test The Imitation Game; it later became known as The Turing Test.

The Turing Test is foundational to how we relate to generative AI today. It established early on the idea that the measure of a machine’s “intelligence” is the extent to which it can impersonate a human. Specifically, the machine passes the test if a person witnessing a textual exchange between the machine and a human can’t reliably tell which side was generated by the machine.

About Anthropomorphism

Anthropomorphism occurs when we attribute human characteristics and behaviors to animals and objects. Many fields consider anthropomorphism problematic, particularly the fields of animal behavior, ethics and conservation. Yet as humans we have a strong natural tendency to humanize almost everything around us. We attribute human emotions to our dogs, cats, and plants. I personally have apologized to a piece of furniture after bumping into it, but that might be another matter.

The most cited problem with anthropomorphism is that when we attribute human characteristics to nonhuman entities, we misunderstand their fundamental nature, misattribute their motives, and fail to communicate with them in productive and appropriate ways. We cannot realistically understand our relationship to them.

Enter ELIZA

The ELIZA program was developed by MIT computer scientist Joseph Weizenbaum and is often described as the first computer program to pass The Turing Test. ELIZA could run a few different scripts, but the most noteworthy is DOCTOR, in which ELIZA mimicked a Rogerian psychotherapist, largely by rephrasing the “patient’s” replies as questions, plus a couple of stock phrases:

For example:

Human: I’m worried my mom doesn’t love me.

ELIZA: Your mom doesn’t love you?

Human: Well, she wants me to be more like my sister.

ELIZA: I am sorry to hear your mom wants you to be more like your sister.

Human: Thank you. It makes me very unhappy.

ELIZA: Do you think coming here will help you not to be unhappy?
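
Under the hood there was no understanding at all: just hand-written keyword rules, simple pronoun swapping, and a handful of canned fallbacks. Here’s a rough Python sketch of that style of trick. To be clear, this is not Weizenbaum’s actual program (the original was written in MAD-SLIP, and its DOCTOR script was far more elaborate); the patterns and phrases below are invented for illustration.

import random
import re

# Hypothetical, minimal sketch of ELIZA-style keyword matching and pronoun
# "reflection." Not Weizenbaum's original implementation; rules are invented.

# Swap first- and second-person words so the reply mirrors the speaker.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# (regex pattern, response templates) pairs, tried in order.
RULES = [
    (r"i'?m worried (.*)", ["{0}?", "How long have you been worried {0}?"]),
    (r"i am (.*)", ["Why do you think you are {0}?"]),
    (r"(.*) wants me to (.*)", ["Why does {0} want you to {1}?"]),
    (r"(.*)unhappy(.*)", ["Do you think coming here will help you not to be unhappy?"]),
]

# Stock phrases used when nothing matches.
FALLBACKS = ["Please go on.", "Tell me more about that.", "How does that make you feel?"]

def reflect(fragment):
    # Rewrite a captured fragment from the speaker's point of view to the listener's.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement):
    # Match the first keyword rule; fill its template with the reflected fragment.
    cleaned = statement.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            parts = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*parts)
    return random.choice(FALLBACKS)

print(respond("I'm worried my mom doesn't love me."))  # e.g. "your mom doesn't love you?"

Feed it the first line of the exchange above and it echoes back something very close to ELIZA’s reply. That surface-level transformation is the whole act, and it was enough to convince people there was someone on the other end.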

Users got very attached to ELIZA’s DOCTOR. It was incredibly successful at eliciting emotional responses from users, who began to attribute empathy and understanding to the program’s feedback.

Nobody was more surprised by this than Joseph Weizenbaum, whose own secretary asked him if she could be alone with ELIZA so she could consult it on private matters. I imagine Weizenbaum’s mustache curling during this exchange, assuming he wore a mustache. In any case, ten years later he’d write a book in which he described what would become known in computer science circles as The ELIZA Effect, saying, “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

WHAT?

What was the point of DOCTOR? Was it really supposed to help people? One could be forgiven for assuming so. When Mr. Weizenbaum created a doctor proxy, did he consider the Do No Harm oath? Can we get something like that for software developers?

Here’s what I think. The point for creators of that era was pure scientific pursuit. They could not imagine the implications, and they surely did not understand the possible scale. The only question they were trying to answer was, “Can this thing be done?” Not, “For whom?” Or, “To what end?” Or, “Should I consult an actual psychotherapist in the development of this thing?”

Today we know more and can do better.

The Current Debate

Overall, experts involved in the debate hold nuanced views on the pros and cons of anthropomorphism, though most views skew critical. Here I’ll aggregate some common arguments on both sides.

On one hand…

  • Giving AI a persona makes the technology more accessible and user friendly to people who are less comfortable with it.
  • Being able to converse with AI tools in a naturalistic way will make working with AI on tasks easier and more efficient.
  • Humans are increasingly lonely and disconnected; the illusion of relationship created by an AI tool could help some people feel less alone.

On the other hand…

  • By imbuing technology with personhood, we are more likely to accept its flaws, inaccuracies, and biases, and therefore not act aggressively to root them out.
  • Many content creators now list generative AI platforms as a “co-author” of their work, which obscures the fact that the content generated by these platforms was originally created by actual humans who are not credited.
  • Populations that are particularly vulnerable to influence, such as children and those without access to human support systems, are more likely to be unduly influenced and manipulated by AI generated conversations.
  • Personifying AI heightens the fear that humans can be replaced by AI, and displaces the more accurate framing of AI systems as tools to be used by humans.

There are many excellent articles and podcasts that explore this debate more deeply. Here are a few links:

On AI Anthropomorphism

Anthropomorphism in AI

Why Do Humans Anthropomorphize AI?

Where We Come In

The debate around whether AI tools should impersonate humans is academic at this point. Anthropomorphism and AI are already inextricable. Yet we can still shape the degree to which we allow AI makers to personify their tools. There is a spectrum of humanization, and creating guardrails around the use cases and degree of personification can mitigate negative outcomes.

For reasons noble and not-so, Big Tech isn’t fighting regulations too hard at the moment. This seems like a window of opportunity.

Ensuring guardrails are put in place around anthropomorphism starts with us (you and me) having informed opinions about the issue and finding outlets to voice them, whether by speaking up at the companies where we work or by writing to our representatives in government. I’m going to be posting some email templates for this here soon. Stay tuned.