Advances in artificial intelligence are fundamentally changing how we relate to machines. We used to treat computers as tools, but now we expect them to be agents, and increasingly our instinct is to treat them like peers. This paper is an exploration of peer-to-peer communication between people and machines. Two ideas are central to the approach explored here: shared perception, in which partners work together in a shared environment and much of the information passing between them is contextual and derived from perception; and visually grounded reasoning, in which actions are considered feasible if they can be visualized and/or simulated in 3D. We explore shared perception and visually grounded reasoning in the context of blocks world, which serves as a surrogate for cooperative tasks where the partners share a workspace. We begin with elicitation studies observing pairs of people working together in blocks world and noting the gestures they use. These gestures fall into three categories: social, deictic, and iconic. We then build a prototype system in which people are paired with avatars in a simulated blocks world. We find that when participants can see but not hear each other, all three gesture types are necessary, but that when participants can speak to each other the social and deictic gestures remain important while the iconic gestures become less so. We also find that ambiguities flip the conversational lead: the partner previously receiving information takes the lead in order to resolve the ambiguity.
  • Isaac Wang
  • Jaime Ruiz
  • As well as: Pradyumna Narayana, Nikhil Krishnaswamy, Rahul Bangar, Dhruva Patil, Gururaj Mulay, Kyeongmin Rim, Ross Beveridge, James Pustejovsky, and Bruce Draper

Pradyumna Narayana, Nikhil Krishnaswamy, Isaac Wang, Rahul Bangar, Dhruva Patil, Gururaj Mulay, Kyeongmin Rim, Ross Beveridge, Jaime Ruiz, James Pustejovsky, and Bruce Draper. 2018. Cooperating with Avatars Through Gesture, Language and Action. In Intelligent Systems and Applications (Advances in Intelligent Systems and Computing), 272–293.

@inproceedings{10.1007/978-3-030-01054-6_20,
    author="Narayana, Pradyumna and Krishnaswamy, Nikhil and Wang, Isaac and Bangar, Rahul and Patil, Dhruva and Mulay, Gururaj and Rim, Kyeongmin and Beveridge, Ross and Ruiz, Jaime and Pustejovsky, James and Draper, Bruce",
    editor="Arai, Kohei and Kapoor, Supriya and Bhatia, Rahul",
    title="Cooperating with Avatars Through Gesture, Language and Action",
    booktitle="Intelligent Systems and Applications",
    year="2018",
    publisher="Springer International Publishing",
    address="Cham",
    pages="272--293",
    isbn="978-3-030-01054-6"
}