
In computer science, the ELIZA effect is the tendency to project human traits — such as experience, semantic comprehension or empathy — onto computer programs that have a textual interface. The effect is a category mistake that arises when the program's symbolic computations are described in terms such as "think", "know" or "understand."

History

The effect is named for ELIZA, the 1966 chatbot developed by MIT computer scientist Joseph Weizenbaum. When executing Weizenbaum's DOCTOR script, ELIZA simulated a Rogerian psychotherapist, largely by rephrasing the "patient's" replies as questions: [1]

Human: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
Human: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
Human: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
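
The rephrasing shown above can be reproduced by purely mechanical pattern matching. The following Python fragment is a minimal, hypothetical sketch, not Weizenbaum's program (ELIZA was written in MAD-SLIP and used ranked keywords with decomposition and reassembly rules); it only illustrates how keyword spotting and pronoun reflection can generate exchanges like the one quoted:

import re

# Hypothetical sketch of an ELIZA-style transformation: spot a keyword
# pattern, reflect pronouns to swap the speaker's perspective, and echo
# the captured phrase back as a question.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

# (pattern, response template) pairs. The real ELIZA ranked keywords and
# used separate decomposition and reassembly rules; this list is illustrative.
RULES = [
    (re.compile(r".*\bmy (.+?) made me (.+)", re.I), "Your {0} made you {1}?"),
    (re.compile(r".*\bi am (.+)", re.I), "I am sorry to hear you are {0}."),
    (re.compile(r".*\bi'?m (.+)", re.I), "Do you believe you are {0}?"),
]

def reflect(phrase):
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            groups = [reflect(g.rstrip(".!?")) for g in match.groups()]
            return template.format(*groups)
    return "Please go on."  # fallback when no keyword matches

print(respond("Well, my boyfriend made me come here."))  # Your boyfriend made you come here?
print(respond("I'm unhappy."))                           # Do you believe you are unhappy?

No state, memory, or semantic analysis is involved; the apparent empathy arises entirely on the user's side of the exchange.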

Though designed strictly as a mechanism to support "natural language conversation" with a computer, [2] ELIZA's DOCTOR script proved surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to its output. [3] As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." [4] ELIZA's code had not been designed to evoke this reaction. Researchers observed that users unconsciously assumed ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion. [5]

Although the effect was first named in the 1960s, the tendency to understand mechanical operations in psychological terms was noted by Charles Babbage. In proposing what would later be called a carry-lookahead adder, Babbage remarked that he found such terms convenient for descriptive purposes, even though nothing more than mechanical action was meant. [6]

Characteristics

In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers". [7] A trivial example of the specific form of the Eliza effect, given by Douglas Hofstadter, involves an automated teller machine which displays the words "THANK YOU" at the end of a transaction. A naive observer might think that the machine is actually expressing gratitude; however, the machine is only printing a preprogrammed string of symbols. [7]

More generally, the ELIZA effect describes any situation [8] [9] where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve" [10] or "assume that [outputs] reflect a greater causality than they actually do". [11] In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the determinate nature of output produced by the system.

From a psychological standpoint, the ELIZA effect is the result of a subtle cognitive dissonance between the user's awareness of programming limitations and their behavior towards the output of the program. [12]

Significance

The discovery of the ELIZA effect was an important development in artificial intelligence, demonstrating the principle of using social engineering rather than explicit programming to pass a Turing test. [13]

ELIZA led some users to believe that a machine was human. This shift in human-machine interaction marked progress in technologies emulating human behavior. William Meisel distinguishes two groups of chatbots: "general personal assistants" and "specialized digital assistants". [14] General digital assistants have been integrated into personal devices, with skills like sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants "operate in very specific domains or help with very specific tasks". [14] Such digital assistants are programmed to aid productivity by assuming behaviors analogous to those of humans.

Weizenbaum considered that not every aspect of human thought could be reduced to logical formalisms and that "there are some acts of thought that ought to be attempted only by humans". [15] He also observed that we develop emotional involvement with machines when we interact with them as if they were human. When chatbots are anthropomorphized, they tend to portray gendered features as a way through which we establish relationships with the technology. "Gender stereotypes are instrumentalised to manage our relationship with chatbots" when human behavior is programmed into machines. [16]

In the 1990s, Clifford Nass and Byron Reeves conducted a series of experiments establishing the Media Equation, demonstrating that people tend to respond to media as they would either to another person (by being polite, cooperative, and attributing personality characteristics such as aggressiveness, humor, expertise, and gender) or to places and phenomena in the physical world. Numerous subsequent studies in psychology, social science, and other fields indicate that this type of reaction is automatic, unavoidable, and happens more often than people realize. Reeves and Nass (1996) argue that "individuals' interactions with computers, television, and new media are fundamentally social and natural, just like interactions in real life" (p. 5).

The automation of feminized labor, or women's work, by anthropomorphic digital assistants reinforces the "assumption that women possess a natural affinity for service work and emotional labour". [17] In defining our proximity to digital assistants through their human attributes, chatbots become gendered entities.

References

  1. ^ Güzeldere, Güven; Franchi, Stefano. "Dialogues with colorful personalities of early AI". Archived from the original on 2011-04-25. Retrieved 2007-07-30.
  2. ^ Weizenbaum, Joseph (January 1966). "ELIZA--A Computer Program For the Study of Natural Language Communication Between Man and Machine". Communications of the ACM. 9 (1): 36–45. doi:10.1145/365153.365168. S2CID 1896290. Retrieved 2008-06-17.
  3. ^ Suchman, Lucy A. (1987). Plans and Situated Actions: The problem of human-machine communication. Cambridge University Press. p. 24. ISBN  978-0-521-33739-7. Retrieved 2008-06-17.
  4. ^ Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman. p. 7.
  5. ^ Billings, Lee (2007-07-16). "Rise of Roboethics". Seed. Archived from the original on 2009-02-28. (Joseph) Weizenbaum had unexpectedly discovered that, even if fully aware that they are talking to a simple computer program, people will nonetheless treat it as if it were a real, thinking being that cared about their problems – a phenomenon now known as the 'Eliza Effect'.
  6. ^ Green, Christopher D. (February 2005). "Was Babbage's Analytical Engine an Instrument of Psychological Research?". History of Psychology. 8 (1): 35–45.
  7. ^ a b Hofstadter, Douglas R. (1996). "Preface 4 The Ineradicable Eliza Effect and Its Dangers, Epilogue". Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Basic Books. p. 157. ISBN  978-0-465-02475-9.
  8. ^ Fenton-Kerr, Tom (1999). "GAIA: An Experimental Pedagogical Agent for Exploring Multimodal Interaction". Computation for Metaphors, Analogy, and Agents. Lecture Notes in Computer Science. Vol. 1562. Springer. p. 156. doi: 10.1007/3-540-48834-0_9. ISBN  978-3-540-65959-4. Although Hofstadter is emphasizing the text mode here, the "Eliza effect" can be seen in almost all modes of human/computer interaction.
  9. ^ Ekbia, Hamid R. (2008). Artificial Dreams: The Quest for Non-Biological Intelligence. Cambridge University Press. p.  8. ISBN  978-0-521-87867-8.
  10. ^ King, W. (1995). Anthropomorphic Agents: Friend, Foe, or Folly (Technical report). University of Washington. M-95-1.
  11. ^ Rouse, William B.; Boff, Kenneth R. (2005). Organizational Simulation. Wiley-IEEE. pp. 308–309. ISBN  978-0-471-73943-2. This is a particular problem in digital environments where the "Eliza effect" as it is sometimes called causes interactors to assume that the system is more intelligent than it is, to assume that events reflect a greater causality than they actually do.
  12. ^ Ekbia, Hamid R. (2008). Artificial Dreams: The Quest for Non-Biological Intelligence. Cambridge University Press. p.  156. ISBN  978-0-521-87867-8. But people want to believe that the program is "seeing" a football game at some plausible level of abstraction. The words that (the program) manipulates are so full of associations for readers that they CANNOT be stripped of all their imagery. Collins of course knew that his program didn't deal with anything resembling a two-dimensional world of smoothly moving dots (let alone simplified human bodies), and presumably he thought that his readers, too, would realize this. He couldn't have suspected, however, how powerful the Eliza effect is.
  13. ^ Trappl, Robert; Petta, Paolo; Payr, Sabine (2002). Emotions in Humans and Artifacts. Cambridge, Mass.: MIT Press. p. 353. ISBN  978-0-262-20142-1. The "Eliza effect" — the tendency for people to treat programs that respond to them as if they had more intelligence than they really do (Weizenbaum 1966) is one of the most powerful tools available to the creators of virtual characters.
  14. ^ a b Dale, Robert (September 2016). "The return of the chatbots". Natural Language Engineering. 22 (5): 811–817. doi: 10.1017/S1351324916000243. ISSN  1351-3249.
  15. ^ Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman. ISBN 0-7167-0464-1. OCLC 1527521.
  16. ^ Costa, Pedro; Ribas, Luisa (2018). "Conversations with ELIZA: On Gender and Artificial Intelligence". 6th Conference on Computation, Communication, Aesthetics & X. Retrieved February 2021.
  17. ^ Hester, Helen (2016). "Technology Becomes Her". New Vistas. 3 (1): 46–50.

Further reading

  • Hofstadter, Douglas (1995). "Preface 4: The Ineradicable Eliza Effect and Its Dangers". Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
  • Turkle, Sherry (1997). Life on the Screen: Identity in the Age of the Internet. London: Phoenix Paperback. Describes the Eliza effect as the tendency to accept computer responses as more intelligent than they really are.
  • "ELIZA effect". The Jargon File, version 4.4.7. Accessed 8 October 2006.
  • Reeves, Byron; Nass, Clifford (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.