Social Machines: Social Aspects of Human-Computer Interaction

Have you ever experienced the frustration of your computer not behaving as it should? Maybe it rebooted during your most important meeting, or it simply froze and, despite your best efforts, you couldn't restart it. Take a moment to recall your reactions: how did you feel and behave? Did you mutter angry half-sentences? Did you shout at the machine in frustration? Did you threaten to buy a new, better one if it didn't start working properly? These behaviors seem completely normal to us, as if our computer had intended to make us angry, even though most of us clearly understand that it has no intentions, emotions, or desires. Why do we still act as if it were a social being?


Humans are social animals

“Humans are 'social animals’” - this is how Elliot Aronson, the world-famous social psychologist, captured the essence of our nature in the title of his book [1]. Several studies have revealed that our brain, senses, and perception are adapted to our social context. Take, for example, the “mirror neurons” [2] that help us register the feelings and actions of others, or the human ear, which is tuned to human voices better than to any other sound [3]. Social interactions dominate our field of perception, and we can attribute social meaning to almost anything others do around us, be it something as simple as the direction of their gaze [4]. One explanation is the Cultural Intelligence Hypothesis [5], which posits that our social environment and relationships were the driving forces behind the development of the complex human brain. According to this theory, competition and cooperation shaped our development so deeply that all of our behavior is fitted to social existence.

We are hypersocial - even with our computer

Humans are not only prone to socialize with their peers; they will also readily perceive inanimate objects as social agents. Computers are especially good subjects for this personification, as they have human-like traits [6]: they use words as output [7], they are interactive [8], and they play roles traditionally filled by humans [9], which makes it natural to respond socially to their actions. One theory explaining people's social responses to computers is anthropomorphization [10]. According to this view, in uncertain, unfamiliar, or ambiguous situations we tend to fall back on the categories we know best, and so we attribute human traits (which we know very well) to computers (which make us insecure). Another possible explanation is that human beings do not respond socially to the computers themselves, but to the human experts behind the software. However, research has shown the opposite: people distinguish between two computers, not between two pieces of software, and when using the same program on different computers, they behave as if the computers were different social agents [11]. Nass and Moon proposed a third explanation: certain contextual cues activate certain social scripts, leading to behavior that is, strictly speaking, inappropriate. Computers present several cues suggesting that they should be treated as humans, so most of us ignore the cues stating the contrary and behave as if they were our peers [12]. The fact that humans react socially to computers provides the basis for a research practice in which studies of social psychology are replicated in human-computer interaction.

Computers are social actors

One of the best-known paradigms serving as a theoretical framework for such studies is Computers Are Social Actors (CASA) [13]. It states that phenomena observed in the social sciences, such as the norm of politeness [14], the norm of reciprocity [15], or the norm of reciprocal self-disclosure [16], can also be observed in human-computer interaction. We attribute personality to our computers and express similarity preference: introverted people, for instance, prefer interacting with a computer that displays introverted traits [17]. Other studies showed that when a sense of interdependence between humans and computers was created in an experimental situation, participants viewed those computers as their teammates, saw themselves as more cooperative, perceived the computer as more similar to themselves, and rated it as more reliable, more competent, and friendlier [18]. Social categorization and stereotypes also play a role in human-computer interaction: participants rated a computer with a female-like (but obviously mechanical) voice output as more reliable when it talked about dating and romantic relationships, whereas they rated a male-like voice output as more reliable on the topics of computers and technology. Furthermore, participants rated the male-like voice as more competent and friendlier overall [19].

Chatbots

The results listed above are not only meaningful for the social sciences; they are relevant to our everyday life, too. Many of us have probably already encountered computer programs that behave somewhat socially to gain trust or popularity, and the most social of these programs are the so-called chatbots: artificial intelligence programs that simulate human conversation.

In the last few decades, the need for social machines has increased immensely. According to the theory of Sociable Design [20], human beings ('users' or 'customers', depending on the area) face many problems around machines that could be solved or mitigated by a simple social gesture from the machine itself. Thus, if we could ask our computer a favor, and it could give us feedback about what it understood and what information it still needs to complete the task, our interaction would be much more efficient and enjoyable.
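
To make this concrete, here is a minimal sketch of the kind of social gesture Sociable Design describes: a program that reports back what it has understood and asks for the information it still needs, instead of failing silently. The task, the slot names, and the wording are hypothetical illustrations of mine, not taken from any cited system.

```python
# A minimal sketch of a "sociable" task dialogue: the program echoes what it
# has understood so far and asks for whatever is still missing.
# All names here are hypothetical illustrations.

REQUIRED_SLOTS = ["date", "time", "topic"]  # what we need to book a meeting

def sociable_turn(understood: dict) -> str:
    """Return a reply that confirms known slots and requests missing ones."""
    known = [f"{slot} = {understood[slot]}" for slot in REQUIRED_SLOTS if slot in understood]
    missing = [slot for slot in REQUIRED_SLOTS if slot not in understood]
    reply = ""
    if known:
        reply += "So far I understood: " + ", ".join(known) + ". "
    if missing:
        reply += "Could you also tell me the " + " and the ".join(missing) + "?"
    else:
        reply += "I have everything I need; booking the meeting now."
    return reply

print(sociable_turn({"date": "Friday"}))
# -> So far I understood: date = Friday. Could you also tell me the time and the topic?
```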
This insight makes it understandable that chatbots have moved into the center of attention: since the creation of the first one in 1966 [21], many have tried, with more or less success, to develop an artificial agent capable of flawless interaction and natural language use. The first chatbot was ELIZA, which imitated a non-directive therapist; it was soon followed by PARRY, which used the mannerisms of a paranoid schizophrenic patient to convince its conversational partner of its humanness. Since then, with the help of the linguistic and social sciences, experts have created more and more convincing chatbots.
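
ELIZA's core technique was surprisingly simple: match the user's input against keyword patterns and reflect the statement back as a question. The sketch below shows that pattern-and-reflection idea in the spirit of Weizenbaum's description [21]; the rules and the pronoun table are illustrative inventions, far smaller than the original DOCTOR script.

```python
import re

# A toy, ELIZA-style responder: keyword patterns plus pronoun reflection.
# The rules below are illustrative; the original script was far richer.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # non-directive fallback, in the spirit of ELIZA

print(respond("I feel ignored by my computer"))
# -> Why do you feel ignored by your computer?
```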

The main venue for testing chatbots is the annually organized Loebner Prize. The competition was founded by Dr. Hugh Loebner in 1990, and the competing chatbots have to demonstrate to the judges their ability to hold a conversation like a human. The competition is based on the Turing test, named after Alan Turing, who proposed it in a 1950 paper entitled Computing Machinery and Intelligence [22]. In this paper, he asked himself and his readers whether machines can think, and proposed that if a computer's responses were indistinguishable from those of a human, the computer had passed the test. Accordingly, during the Loebner Prize competitions, a human judge conducts conversations via a computer keyboard with both a computer program and a human being. Based on the responses, the judge decides which is which, and the most human-like program wins that year's award. The most successful chatbot to date is Mitsuku, a three-time winner of the Loebner Prize [23]. The highest award, however, has never been won: 100,000 USD awaits the first program that judges cannot distinguish from a real human in a Turing test involving text, visual, and auditory input. Once this is achieved, the annual competition will end.
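
The logic of the test itself is easy to sketch: the judge converses with two unlabeled partners, one human and one program, and must guess which is which. Below is a toy simulation of that blind pairing; the two responders are made-up stand-ins of mine, not actual Loebner Prize contestants.

```python
import random

# A toy simulation of the Turing test's blind pairing: the judge sees only
# the channel labels "A" and "B" and must guess which one hides the program.
# Both responders are hypothetical stand-ins.

def human_confederate(question: str) -> str:
    return "Hmm, let me think... " + question.lower().replace("?", ".")

def chatbot(question: str) -> str:
    return "That is an interesting question."  # canned, suspiciously generic

def run_session(questions):
    channels = {"A": human_confederate, "B": chatbot}
    # Shuffle the assignment so the judge cannot rely on the labels.
    if random.random() < 0.5:
        channels = {"A": chatbot, "B": human_confederate}
    transcript = {label: [resp(q) for q in questions] for label, resp in channels.items()}
    hidden_bot = "A" if channels["A"] is chatbot else "B"
    return transcript, hidden_bot

transcript, answer = run_session(["What is your favourite colour?", "Why?"])
for label, replies in transcript.items():
    print(label, replies)
print("The program was behind channel", answer)
```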

Future directions

Chatbot development has not yet reached its goal: software capable of faithfully imitating human behavior is still to come. But an interesting question has already emerged: how will we behave toward a program that meets every condition for appearing human? Would such a situation awaken the same kinds of responses in us? Would we still react socially, still be polite, still apply our social stereotypes? Would we dare to shout at it or express our frustration?

To sum up, we already know that human beings readily socialize with computers and computer programs. This characteristic of ours makes it possible to apply findings from social psychology to human-computer interaction. We also know that the artificial intelligence industry has already begun to play to this side of our nature by creating the ultimate social machines: chatbots.

The main question is this: is it enough for a social machine to copy human behavior in order to appear sympathetic and trustworthy? The official testing venue for chatbots clearly promotes the idea that the most human-like chatbot is the best chatbot, but it is quite possible that a perfectly human-like chatbot would have the opposite effect on our perception, and we would reject it for being too faithfully human [24]. The CASA paradigm suggests that if a few contextual cues are present in our interaction, a computer program does not need to pretend to be human: it suffices for it to behave socially, that is, to mirror the behavior of the user, share information about itself, respond politely to politeness, show when it is uncertain about something, and so on. The sketch after this paragraph illustrates the point.
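
Three of the social behaviors listed above (reciprocating politeness, self-disclosure, and showing uncertainty) can be stated as a handful of response rules rather than a full human impersonation. The sketch below encodes them directly; the trigger words, the confidence threshold, and the replies are my own illustrative choices, not a CASA-validated design.

```python
# A minimal sketch of "behave socially without pretending to be human":
# respond politely to politeness, disclose something about itself, and
# admit uncertainty. Triggers and wordings are illustrative only.

def social_reply(user_utterance: str, confidence: float) -> str:
    text = user_utterance.lower()
    if confidence < 0.5:
        # Show uncertainty instead of bluffing (a social cue, not deception).
        return "I'm not sure I understood that correctly; could you rephrase?"
    if "please" in text or "thank" in text:
        # Reciprocate politeness with politeness.
        return "You're very welcome! Happy to help."
    if "about you" in text:
        # Reciprocal self-disclosure: share something about itself.
        return "I'm a small rule-based program; I often mix up dates, for example."
    return "Got it. What should we do next?"

print(social_reply("Thank you, that helped!", confidence=0.9))
print(social_reply("mumble mumble", confidence=0.2))
```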

We are already surrounded by machines and computer programs in our day-to-day life, and as artificial intelligence is one of the fastest-developing areas of human technology, it is essential that we understand our relationship with these systems. Human-computer interaction will, after all, probably play an important role in humanity's future.

References

1 Aronson, E. (1995). The social animal. New York: W. H. Freeman.

2 Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, 169-192.

3 Pickles, J. (1982). An introduction to the physiology of hearing. Academic Press.

4 Richmond, V., & McCroskey, J. (1995). Nonverbal behaviors in interpersonal relations. Allyn and Bacon. Cited in: Vinciarelli, A., Pantic, M., Heylen, D., Pelachaud, C., Poggi, I., D'Errico, F., & Schroeder, M. (2012). Bridging the gap between social animal and unsocial machine: A survey of social signal processing. IEEE Transactions on Affective Computing, 3(1), 69-87.

5  Herrmann, E., Call, J., Hernández-Lloreda, M. V., Hare, B., & Tomasello, M. (2007). Humans have evolved specialized skills of social cognition: The cultural intelligence hypothesis. Science, 317(5843), 1360-1366.

6 Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.

7 Brown, B. (1988). The human-machine distinction as predicted by children's para-social interaction with toys. Unpublished doctoral dissertation, Stanford University, Stanford, CA. Cited in: Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.

8 Rafaeli, S. (1990). Interacting with media: Para-social interaction and real interaction. In B. D. Ruben & L. A. Lievrouw (Eds.), Mediation, information, and communication: Information and behavior (Vol. 3, pp. 125-181). New Brunswick, NJ: Transaction. Cited in: Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.

9 Cooley, C. H. (1966). Social process. Carbondale, IL: Southern Illinois University Press. Cited in: Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.

10 Barley, S. R. (1988). The social construction of a machine: Ritual, superstition, magical thinking and other pragmatic responses to running a CT scanner. In Biomedicine examined (pp. 497-539). Springer Netherlands.

11 Dennett, D. C. (1989). The intentional stance. Cambridge, MA: MIT Press; Diehl, M. (1988). Social identity and minimal groups: The effects of interpersonal and intergroup attitudinal similarity on intergroup discrimination. British Journal of Social Psychology, 27, 289-300.

12 Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.

13 Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 72-78). ACM.

14 Nass, C., Moon, Y., & Carney, P. (1999). Are people polite to computers? Responses to computer-based interviewing systems. Journal of Applied Social Psychology, 29(5), 1093-1109.

15 Fogg, B. J., & Nass, C. (1997). How users reciprocate to computers: An experiment that demonstrates behavior change. In CHI '97 extended abstracts on Human Factors in Computing Systems (pp. 331-332). ACM.

16 Moon, Y. (2000). Intimate exchanges: Using computers to elicit self-disclosure from consumers. Journal of Consumer Research, 26(4), 323-339.

17 Moon, Y., & Nass, C. (1996). How “real” are computer personalities? Psychological responses to personality types in human-computer interaction. Communication Research, 23(6), 651-674.

18 Nass, C., Fogg, B. J., & Moon, Y. (1996). Can computers be teammates? International Journal of Human-Computer Studies, 45(6), 669-678.

19 Nass, C., Moon, Y., & Green, N. (1997). Are machines gender neutral? Gender-stereotypic responses to computers with voices. Journal of Applied Social Psychology, 27(10), 864-876.

20 Norman, D. A. Sociable design. Retrieved from http://www.jnd.org/ms/1.1%20Sociable%20Design.pdf

21 Weizenbaum, J. (1966). ELIZA – a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.

22 Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.

23 https://www.pandorabots.com/mitsuku/

24 Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130.
