The Curious Case of the Uncanny Valley
Malgorzata Kuznik & Carolina Christina Lyknis
The term Uncanny Valley, first described in 1970 by robotics professor Masahiro Mori, has become a part of popular culture and everyday lingo, pertaining to phenomena eliciting unease, discomfort, or disgust. Even though the term can be used in virtually any everyday context, it often springs from the odd sensation of experiencing something human-like, yet not fully human. With the development of increasingly complex AI in recent decades, the phenomenon has gained renewed relevance, both in relation to the content generated by AI (art, pictures, music, and text) and to the very appearance of AI systems themselves. This concerns mostly the so-called anthropomorphic robots, whose design purposely mimics human attributes to a greater or lesser degree (Walters et al., 2008, p. 165).
How can the Uncanny Valley effect be explained psychologically? And what are its implications for the design of today's AI systems?
Within psychology, the phenomenon is defined as 'the relationship between the human likeness of an object and a viewer's affinity towards it' (Kendall, 2022). This relationship is usually expressed in a graph whose x-axis and y-axis correspond, respectively, to the object's human likeness and the person's affinity towards it. The more human-like the object's design, the greater the person's affinity. The interesting part appears towards the right-hand side of the graph, where a very high, yet imperfect, level of human likeness is accompanied by a sudden and dramatic drop in the perceiver's affinity. The drop forms a valley, hence the name Uncanny Valley. As human likeness approaches 100%, affinity rises again, ending with the affinity felt towards a living, healthy human being.
Illustration: Malgorzata Kuznik & Carolina Christina Lykins. Adapted from Walters et al. (2008).
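For readers who wish to reproduce the general shape of the diagram, the curve described above can be sketched in a few lines of Python. The anchor values in the sketch are purely illustrative assumptions (the height of the peak, the depth of the valley and the point of recovery are not taken from any dataset); the sketch only conveys the overall form of the relationship.

```python
# Purely illustrative sketch of the schematic Uncanny Valley curve.
# The anchor points below are assumed example values, not measured data.
import numpy as np
import matplotlib.pyplot as plt

# Affinity rises with human likeness, dips sharply into the "valley"
# just before full likeness, then recovers towards a living, healthy human.
likeness = np.array([0.00, 0.20, 0.40, 0.60, 0.75, 0.85, 0.95, 1.00])
affinity = np.array([0.00, 0.20, 0.40, 0.60, 0.70, -0.60, 0.30, 1.00])

x = np.linspace(0, 1, 300)
y = np.interp(x, likeness, affinity)   # piecewise-linear interpolation, for shape only

plt.plot(x, y)
plt.axhline(0, color="grey", linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity")
plt.title("Schematic Uncanny Valley curve (illustrative values)")
plt.show()
```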
As mentioned, the term was first introduced in 1970 by the Japanese roboticist Masahiro Mori, who coined the term bukimi no tani in one of his essays (Kendall, 2022). The original essay was first translated into English in 2005 and swiftly became a topic of heated discussion within several fields, such as robotics, the sciences, art, and psychology. Originally, Mori proposed that movement intensifies the Uncanny Valley effect: moving human-like entities (a zombie, a Japanese bunraku pantomime puppet) lie on a steeper curve, whereas still objects, such as a corpse and a prosthetic hand, follow a more gradual one. It is important to mention, however, that the phenomenon remains a proposal, and its scientific validity is not yet fully established. The theory has only partially been supported by research evidence and is far from universal; several findings suggest significant individual variation in the degree to which people perceive the Uncanny Valley (Kendall, 2022).
Regardless of its degree of universality, how can the phenomenon be explained psychologically? There are several theories that can provide an answer. Mrdenovic (2021) mentions two central concepts in particular: Categorization Ambiguity and Perceptual Mismatch (p. 33). The first refers to our tendency to sort surrounding phenomena into strict, often mutually exclusive categories (for instance 'alive' or 'dead'). Objects eliciting uncanniness straddle such a categorical boundary, making it impossible to identify them as belonging exclusively to one category or the other (Mrdenovic, 2021, p. 33). Perceptual Mismatch, on the other hand, proposes that the effect arises from inconsistencies between the cues an object presents and the expectations those cues create (Mrdenovic, 2021, p. 33). A humanoid robot with a realistic face would therefore be deemed uncanny if its facial expressions did not match the message conveyed by its speech. Another, somewhat simpler theory suggests that objects causing the Uncanny Valley effect evoke our fear of death through their association with the morbid qualities of a corpse (Mrdenovic, 2021, p. 34).
Specific design features and the Uncanny Valley
What specific steps can AI designers take to avoid the effect when designing AI that mimics human-like behaviour or appearance? According to Heather Knight, a PhD candidate at Carnegie Mellon University, one way in which the effect is abated is by making use of traditionally feminine behaviours and aesthetic forms (Knight, 2014). A female voice or appearance seems to draw attention away from the uncanniness and other unnatural effects of AI that attempts to appear human-like. This could explain the contemporary tendency to create mostly 'female' anthropomorphic robots, such as Ameca (Engineered Arts), Jiajia (University of Science and Technology of China) and Kime (Macco Robotics) (Biba, 2022). The same rationale seems to apply to AI resembling children, for instance the android Abel created at the University of Pisa (Cominelli et al., 2021, p. 6) or the famous Telenoid developed by the Japanese researcher Hiroshi Ishiguro (Geminoid, n.d.). Though they differ in their degree of human-likeness (Abel is a hyper-realistic replica of a young boy, while Telenoid is stripped of any signs of age or gender), both machines are quite compact and rather non-threatening in appearance.
The honorary representative of the female-looking androids is the very first AI entity to receive official citizenship, namely Sophiabot (Stone, 2017). The android was developed by Hanson Robotics, an engineering and robotics company based in Hong Kong whose goal is to create 'human-like' robots, described by the company as 'socially intelligent machines [ … ] with good aesthetic design' (Hanson Robotics LTD., n.d.). Even so, most would agree that Sophiabot, with 'her' porcelain skin and vast range of facial expressions, fails to pass convincingly as human. For example, 'her' speech sounds somewhat stilted, and the facial expressions accompanying 'her' words are out of sync, almost as though 'her' reactions are delayed by a beat, which makes 'her' changes in appearance seem too abrupt (Männistö-Funk & Sihvonen, 2018, p. 46). Such inconsistencies contribute to the uncanniness of 'her' behaviour: they are not normal amongst humans and may be perceived as alarming.
The role of such inconsistencies between the behaviour and appearance of robots was explored in a study by Walters et al. (2008). The aim was to establish how specific design features in robots varying in their degree of anthropomorphism were received, in addition to identifying the participants' perceptions of the machines' personalities. The robots used were dubbed the 'Mechanoid' (consisting of a camera unit and a single flashing light), the 'Basic Robot' (with a simple head, a single arm and two flashing lights as eyes) and the 'Humanoid' (featuring a head and arms, with flashing lights as the eyes and mouth) (Walters et al., 2008, p. 167). The behaviour displayed by the machines matched their degree of anthropomorphism: the first only moved up and down and made beeping sounds, the second could lift its arm and used distorted speech, while the third could make a waving gesture and used high-quality human speech (Walters et al., 2008, p. 167). The results showed that the participants consistently preferred the Humanoid the most and the simple Mechanoid the least. Even though none of the robots used in the study were designed to resemble humans to the degree at which feelings of uncanniness would be elicited, the results support the left-hand side of Mori's diagram, in that human-likeness was preferred over a mechanical appearance (Walters et al., 2008, p. 178).
When it comes to inconsistencies between appearance and behaviour, features that did not correspond to a robot's degree of anthropomorphism were rated more negatively than consistent ones. For instance, the participants rated the flashing lights of the Humanoid less positively than the same flashing lights on the Basic Robot (Walters et al., 2008, p. 178). According to the authors, our preference for consistency between appearance and behaviour can be explained by unconscious assessments of the machines' reliability (Walters et al., 2008, p. 163). Further evidence supporting this claim was found in the authors' previous studies, where children's attention was captured more effectively when the movements of a robot's different body parts were consistent with its speech (Walters et al., 2008, p. 163). In addition to measuring perceived affinity towards the presented machines, the study also examined the attribution of personality traits to them. It was established that the robots' design determined the way in which their 'personality' was categorized: the more anthropomorphic the appearance, the more positive the rating of the robot's personality (Walters et al., 2008, p. 179). Another interesting finding was that the participants' own personalities influenced their evaluations of the robots': more extroverted participants rated the personalities of the less human-like machines more positively than introverted participants did (Walters et al., 2008, p. 179). This is yet another example of how individual differences may modulate people's responses to human-like AI.
Individual variation
Individual variation in the perception of the Uncanny Valley effect has been the subject of a significant amount of scientific research. The question is also highly relevant at the intersection of AI development and design, since it challenges the assumption that the effect can be avoided solely by adapting a machine's visual features. It is precisely this claim that Mori originally presented as the only solution to the Uncanny Valley conundrum: it can be prevented exclusively by abstract, non-human design (Mrdenovic, 2021, p. 35). A greater understanding of the broader range of factors that may influence how AI design is perceived and received therefore appears crucial.
In addition to the perceiver's personality, one factor that seems to play a significant role in this context is social isolation and the motivation for social connection. This tendency was illustrated in a study by Powers and colleagues (2014), in which participants were shown a series of morphed images merging the face of a doll with that of an actual human. The images varied in the proportion of human face versus doll face they contained and were presented in a random order (Powers et al., 2014, p. 1944). The results indicated that participants craving social contact identified the morphed human-doll faces as alive to a greater degree than their socially satisfied counterparts (Powers et al., 2014, p. 1943). The authors concluded that this tendency can be seen as adaptive from an evolutionary perspective, because it provides motivation to seek new sources of meaningful contact in our surroundings (Powers et al., 2014, p. 1947). Indeed, the tendency to attribute human mental states to inanimate or abstract entities when experiencing social deprivation has been reported in several other studies, including Epley et al. (2008), where socially deprived participants anthropomorphised entities such as pets and technological gadgets (p. 115).
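As a rough illustration of how such a doll-to-human continuum might be generated, the short Python sketch below cross-fades two hypothetical source images ('doll.jpg' and 'human.jpg') in 10% steps. Both file names are placeholder assumptions, and a plain cross-fade only approximates the professionally morphed face stimuli used in the actual study.

```python
# Simplified sketch: cross-fading a doll face and a human face in 10% steps.
# "doll.jpg" and "human.jpg" are assumed placeholder images; plain blending
# only approximates the morphed stimuli used in Powers et al. (2014).
from PIL import Image

doll = Image.open("doll.jpg").convert("RGB")
human = Image.open("human.jpg").convert("RGB").resize(doll.size)

for step in range(11):
    alpha = step / 10                       # proportion of the human face in the blend
    morph = Image.blend(doll, human, alpha)
    morph.save(f"morph_{int(alpha * 100):03d}_human.png")
```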
Another example of circumstantial factors that may influence the Uncanny Valley effect is cultural conditioning. Western societies, for instance, are more likely to view AI with apprehension, owing to the widespread belief that machines may turn against us if they develop free will (Knight, 2014, p. 8). By contrast, Japanese society generally views machines as friendly and gentle companions whose purpose is to help humanity (Knight, 2014, p. 8). As Mrdenovic (2021) notes, the role of cultural conditioning is also visible in studies focusing on the reactions of young children and the elderly to human-like AI, where both groups report lower levels of unease. A possible explanation is that these groups have been exposed to cultural representations of AI to a lesser degree and therefore lack the same expectations (Mrdenovic, 2021, p. 36). In the author's eyes, this directly contradicts Mori's initial assumption of the Uncanny Valley as an innate survival instinct and suggests that the reaction is at least partially learned (Mrdenovic, 2021, p. 36). Additionally, Mrdenovic argues that repeated exposure to human-like robots could gradually decrease the initial feeling of unease (Mrdenovic, 2021, p. 35).
In the light of these findings, Mrdenovic argues, the Uncanny Valley effect should be seen as a complex interplay of several factors rather than solely a result of a machine's properties and design (Mrdenovic, 2021, p. 36). Such an understanding of the effect accommodates both the specific design qualities of the object and the individual variation amongst perceivers. Even though the intricacies of this interplay have not yet been fully mapped scientifically, the fact that the phenomenon has been so widely observed calls for AI designers and users to take the Uncanny Valley into consideration when attempting to design human-like AI.
References:
- Biba, J. (2022). Top 20 Humanoid Robots in Use Right Now. Available at: https://builtin.com/robotics/humanoid-robots (Accessed: 21 November 2022)
- Cominelli, L., Hoegen, G., & De Rossi, D. (2021). Abel: Integrating Humanoid Body, Emotions, and Time Perception to Investigate Social Interaction and Human Cognition. Applied Sciences, 11(3), 1070.
- Epley, N., Akalis, S., Waytz, A., & Cacioppo, J. T. (2008). Creating Social Connection through Inferential Reproduction: Loneliness and Perceived Agency in Gadgets, Gods, and Greyhounds. Psychological Science, 19(2), 114–120.
- Geminoid (n.d.) Telenoid. Available at: http://www.geminoid.jp/projects/kibans/Telenoid-overview.html (Accessed: 23 November 2022)
- Hanson Robotics LTD. (n.d.) Available at: https://www.hansonrobotics.com/ (Accessed: 21 November 2022)
- Kendall, E. (2022, November 8). Uncanny Valley, in Encyclopædia Britannica. Available at: https://www.britannica.com/topic/uncanny-valley (Accessed: 9 November 2022)
- Knight, H. (2014, July 29). How humans respond to robots: Building public policy through good design. Available at: https://www.brookings.edu/research/how-humans-respond-to-robots-building-public-policy-through-good-design/ (Accessed: 12 November 2022)
- Männistö-Funk, T., & Sihvonen, T. (2018). Voices from the uncanny valley: How robots and artificial intelligences talk back to us. Digital Culture & Society, 4(1), 45.
- Mrdenovic, M. (2021). Uncanny Logic: A Theoretical Extension of the Uncanny Valley of the Mind Hypothesis Influenced by the Perception of Intelligence in Black Box Artificial Intelligence. Master’s thesis. Oslo: University of Oslo. Available at: https://www.duo.uio.no/handle/10852/92230 (Accessed: 12 November 2022).
- Powers, K. E., Worsham, A. L., Freeman, J. B., Wheatley, T., & Heatherton, T. F. (2014). Social Connection Modulates Perceptions of Animacy. Psychological Science, 25(10), 1943–1948.
- Stone, Z. (2017). Everything You Need To Know About Sophia, The World's First Robot Citizen. Available at: https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/?sh=7de3400946fa (Accessed: 21 November 2022)
- Walters, M. L., Syrdal, D. S., Dautenhahn, K., te Boekhorst, R., & Koay, K. L. (2008). Avoiding the uncanny valley: robot appearance, personality and consistency of behavior in an attention-seeking home scenario for a robot companion. Autonomous Robots, 24(2), 159–178.