Motion gestures are an underutilized input modality for mobile interaction despite numerous potential advantages. Negulescu et al. found that the lack of feedback on attempted motion gestures made it difficult for participants to diagnose and correct errors, resulting in poor recognition performance and user frustration. In this article, we describe and evaluate Glissando, a training and feedback technique that uses audio characteristics to convey the system’s interpretation of user input. Glissando verbally confirms correct gestures and notifies users of errors; it also provides continuous feedback by manipulating the pitch of a distinct musical note mapped to each of the three spatial axes, conveying both spatial and temporal information.
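The continuous-feedback mapping described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the base notes, the semitone bend range, and the function names are all assumptions chosen for the example.

```python
# Sketch of a Glissando-style mapping: each spatial axis is assigned a
# distinct base note, and normalized displacement along that axis bends
# the note's pitch, giving continuous spatial and temporal feedback.
# Base notes are hypothetical MIDI note numbers (C4, E4, G4).
BASE_NOTES = {"x": 60, "y": 64, "z": 67}

def midi_to_hz(midi_note: float) -> float:
    """Convert a (possibly fractional) MIDI note number to frequency in Hz."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

def axis_pitches(displacement, semitone_range=12.0):
    """Map a normalized 3D displacement (each component in [-1, 1]) to one
    frequency per axis by bending that axis's base note up or down."""
    freqs = {}
    for axis, value in zip("xyz", displacement):
        # Clamp to the expected range, then bend the base note proportionally.
        value = max(-1.0, min(1.0, value))
        freqs[axis] = midi_to_hz(BASE_NOTES[axis] + value * semitone_range)
    return freqs

# Example: a gesture moving purely along x raises only the x note's pitch,
# while the y and z notes stay at their base frequencies.
print(axis_pitches((0.5, 0.0, 0.0)))
```

In practice the three frequencies would drive continuously sounding oscillators, so the user hears pitch change over time as the gesture unfolds.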

Sarah Morrison-Smith, Megan Hofmann, Yang Li, and Jaime Ruiz. 2016. Using Audio Cues to Support Motion Gesture Interaction on Mobile Devices. ACM Trans. Appl. Percept. 13, 3, Article 16 (May 2016), 19 pages. DOI: https://doi.org/10.1145/2897516

@article{morrison-smith2016audio,
 author = {Morrison-Smith, Sarah and Hofmann, Megan and Li, Yang and Ruiz, Jaime},
 title = {Using Audio Cues to Support Motion Gesture Interaction on Mobile Devices},
 journal = {ACM Trans. Appl. Percept.},
 issue_date = {May 2016},
 volume = {13},
 number = {3},
 month = may,
 year = {2016},
 issn = {1544-3558},
 pages = {16:1--16:19},
 articleno = {16},
 numpages = {19},
 url = {},
 doi = {10.1145/2897516},
 acmid = {2897516},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {Motion gestures, audio feedback, mobile interaction},
}