DOI: 10.1145/3290607.3312758

Robots are Always Social: Robotic Movements are Automatically Interpreted as Social Cues

Published: 02 May 2019

ABSTRACT

Physical movement is a dominant element in robot behavior. We evaluate whether robotic movements are automatically interpreted as social cues, even when the robot has no social role. Twenty-four participants performed the Implicit Association Test, classifying robotic gestures into direction categories ("to-front" or "to-back") and words into social categories (willingness or unwillingness for interaction). Our findings show that social interpretation of the robot's gestures is an automatic process. The implicit social interpretation influenced both classification tasks and could not be avoided even when it decreased participants' performance. This effect is important for the HCI community: designers should consider that even if a robot is not intended for social interaction (e.g., a factory robot), people will not be able to avoid interpreting its movements as social cues. Interaction designers should leverage this phenomenon and consider the social interpretation that will automatically be associated with their robots' movements.
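To make the classification paradigm concrete, below is a minimal sketch, assuming a standard IAT-style analysis: reaction times from blocks where gesture direction and social word share a response key consistent with the social reading ("congruent") are compared with blocks where the key assignment conflicts with it ("incongruent"). The function name, the example data, and the simplified D-score formula are illustrative assumptions and are not taken from the paper.

```python
# Illustrative sketch only (not from the paper): a simplified scoring of
# IAT-style reaction times. The block labels, the example data, and the
# simplified D-score variant are assumptions for demonstration.
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Return a simplified IAT D-score: the difference between mean reaction
    times in incongruent and congruent blocks, divided by the standard
    deviation pooled over all trials."""
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Hypothetical reaction times in milliseconds.
congruent = [512, 498, 530, 505, 488, 520]    # pairing consistent with the social reading of the gesture
incongruent = [610, 655, 598, 640, 622, 670]  # pairing conflicts with the social reading

print(f"D-score: {iat_d_score(congruent, incongruent):.2f}")
# A positive score indicates slower responses when the pairing conflicts
# with the automatic social interpretation of the gesture.
```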


• Published in

  CHI EA '19: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
  May 2019
  3673 pages
  ISBN: 9781450359719
  DOI: 10.1145/3290607

      Copyright © 2019 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • abstract

      Acceptance Rates

Overall Acceptance Rate: 6,164 of 23,696 submissions, 26%
