CAREER: Next Generation Multimodal Interfaces
PI: Jaime Ruiz
Ever-smaller computational devices, coupled with advances in input affordances, are revolutionizing the way users interact with technology. New, more natural user interfaces exploit touch, speech, gestures, handwriting, and vision to reduce the barriers imposed by interfaces, so that computing technology acts more like a dynamic partner and less like a tool. Nevertheless, human-computer interaction still largely mimics the traditional point-and-click paradigm associated with desktop computers. This research will take human-computer communication to the next level by leveraging human-human nonverbal communication, such as gaze, body posture, and facial expressions. Project outcomes will have strong societal impact by moving us closer to truly smart environments and by lowering the barrier for individuals who find current technology difficult to use. The educational activities encompassed by this work will expose students to new ways of interacting with technology, encouraging them to pursue careers in computer science.
To further the understanding of nonverbal input, and to use that understanding to discover new natural multimodal interactions, the research comprises three thrusts. (i) Data collection and analysis: collect human-human interactions in different scenarios and analyze the nonverbal aspects of communication to determine common characteristics that can inform the design of more natural gestural interfaces. (ii) Recognizing and understanding intent: create new recognition and input-fusion methods that not only correctly recognize the input (e.g., a movement as a gesture) but also understand its intended meaning. (iii) Develop and evaluate: define new interactions that incorporate multimodal nonverbal communication and characterize the temporal and cognitive costs of these interactions.
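As a concrete, purely illustrative example of what thrust (ii) could involve, the short Python sketch below performs a weighted late fusion of per-modality intent scores: each recognizer (gesture, gaze, speech) reports how likely each candidate intent is, and the weighted average is taken as the fused estimate. This is not the fusion method developed in the project; the modalities, intents, scores, weights, and the function fuse_intent are all hypothetical.

from typing import Dict

# Hypothetical per-modality recognizer outputs: P(intent | that modality's observation).
GESTURE_SCORES = {"select": 0.55, "dismiss": 0.30, "none": 0.15}
GAZE_SCORES = {"select": 0.70, "dismiss": 0.10, "none": 0.20}
SPEECH_SCORES = {"select": 0.20, "dismiss": 0.60, "none": 0.20}

def fuse_intent(modality_scores: Dict[str, Dict[str, float]],
                weights: Dict[str, float]) -> Dict[str, float]:
    """Weighted late fusion: combine per-modality intent scores into one distribution."""
    intents = next(iter(modality_scores.values())).keys()
    fused = {intent: 0.0 for intent in intents}
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 0.0)
        for intent, p in scores.items():
            fused[intent] += w * p
    total = sum(fused.values()) or 1.0          # normalize so the fused scores sum to 1
    return {intent: p / total for intent, p in fused.items()}

if __name__ == "__main__":
    scores = {"gesture": GESTURE_SCORES, "gaze": GAZE_SCORES, "speech": SPEECH_SCORES}
    weights = {"gesture": 0.4, "gaze": 0.3, "speech": 0.3}   # hypothetical reliability weights
    fused = fuse_intent(scores, weights)
    print(max(fused, key=fused.get), fused)     # most likely intent and the full distribution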
The outcomes of this work will include: (1) an understanding of how nonverbal communication can be leveraged in the design of multimodal interfaces; (2) a generalized formal taxonomy for gestural interaction; (3) novel methods for fusing multimodal input in order to infer user intent; (4) new multimodal interaction techniques that leverage characteristics of natural human communication; and (5) experimentally validated mathematical models that allow designers and researchers to predict the temporal and cognitive costs associated with any proposed interaction techniques.
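Outcome (5) concerns predictive models of interaction cost. As a familiar reference point for what such a model looks like (and not one of the models this project will produce), the sketch below evaluates the Shannon formulation of Fitts' law, MT = a + b * log2(D/W + 1), which predicts pointing time from target distance D and width W; the constants a and b used here are placeholder values, not experimentally fitted ones.

import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.2, b: float = 0.1) -> float:
    """Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).

    a and b are empirically fitted constants (seconds and seconds/bit);
    the defaults here are hypothetical placeholders, not fitted values.
    """
    index_of_difficulty = math.log2(distance / width + 1)   # bits
    return a + b * index_of_difficulty

# e.g., a 240 mm reach to a 20 mm target: ID = log2(13) ≈ 3.7 bits
print(round(fitts_movement_time(240, 20), 3))  # ≈ 0.57 s with the placeholder constants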
Related Projects
Situational Awareness through Augmented Reality (AR)
Multimodal Affective Recognition
Publications
Rodrigo Calvo, Heting Wang, Alexander Barquero, Xuanpu Zhang, Rohith Venkatakrishnan, and Jaime Ruiz. 2025. In Proceedings of Human-Agent Interaction 2025.
Delgado, D. A., Calvo, L. C., Bowers, C. J., and Ruiz, J. 2025. In IEEE ISMAR 2025.
Rodrigo Calvo, Heting Wang, Alexander Barquero, and Jaime Ruiz. 2025. In Proceedings of Graphics Interface 2025.
Danish Nisar Ahmed Tamboli, Rohith Venkatakrishnan, Roshan Venkatakrishnan, Balagopal Raveendranath, Julia Woodward, Isaac Wang, Jesse Smith, and Jaime Ruiz. 2025. In 2025 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 472–482. https://doi.org/10.1109/VR59515.2025.00071
Delgado, D. A., and Ruiz, J. 2024. In IEEE VR 2024.
Isaac Wang, Rodrigo Calvo, Heting Wang, and Jaime Ruiz. 2023. In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents (IVA ’23). Association for Computing Machinery, New York, NY, USA, Article 49, 1–4. https://doi.org/10.1145/3570945.3607322
Julia Woodward and Jaime Ruiz. 2023. In Proceedings of the 22nd Annual ACM Interaction Design and Children Conference (IDC ’23). Association for Computing Machinery, New York, NY, USA, 27–39. https://doi.org/10.1145/3585088.3589373
Julia Woodward, Jesse Smith, Isaac Wang, Sofia Cuenca, and Jaime Ruiz. 2023. Journal of Engineering Research and Sciences 2(3): 1–15. https://doi.org/10.55708/js0203001
Julia Woodward, Feben Alemu, Natalia E. López Adames, Lisa Anthony, Jason C. Yip, and Jaime Ruiz. 2022. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 152, 1–13. https://doi.org/10.1145/3491102.3501979
Julia Woodward and Jaime Ruiz. 2022. IEEE Transactions on Visualization and Computer Graphics. https://doi.org/10.1109/TVCG.2022.3141585
Isaac Wang and Jaime Ruiz. 2021. International Journal of Human–Computer Interaction (Mar. 2021), 1–26. https://doi.org/10.1080/10447318.2021.1898851
Julia Woodward, Jahelle Cato, Jesse Smith, Isaac Wang, Brett Benda, Lisa Anthony, and Jaime Ruiz. 2020. In Proceedings of the International Conference on Advanced Visual Interfaces (AVI ’20), September 28–October 2, 2020, Ischia, Italy. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3399715.3399844
Jesse Smith, Isaac Wang, Winston Wei, Julia Woodward, and Jaime Ruiz. 2020. In Proceedings of the International Conference on Advanced Visual Interfaces (AVI ’20), September 28–October 2, 2020, Ischia, Italy. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3399715.3399850
Julia Woodward, Jesse Smith, Isaac Wang, Sofia Cuenca, and Jaime Ruiz. 2020. In Proceedings of the International Conference on Advanced Visual Interfaces (AVI ’20), September 28–October 2, 2020, Ischia, Italy. ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/3399715.3399846
Isaac Wang, Jesse Smith, and Jaime Ruiz. 2019. In 2019 CHI Conference on Human Factors in Computing Systems Proceedings (CHI 2019), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA. Paper 281, 10 pages. https://doi.org/10.1145/3290605.3300511

