Motion gestures are an underutilized input modality for mobile interaction, despite numerous potential advantages. Negulescu et al. found that the lack of feedback on attempted motion gestures made it difficult for participants to diagnose and correct errors, resulting in poor recognition performance and user frustration. Here, we describe and evaluate a training and feedback system consisting of two techniques that use audio characteristics to provide (1) a spatial representation of the desired gesture and (2) feedback on the system’s interpretation of user input. Our results show that while both techniques provide adequate feedback, users prefer continuous feedback.
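
As a rough illustration of the kind of continuous audio feedback the second technique points toward, the sketch below sonifies a synthetic device-tilt trace as a pitch sweep, so that the tone tracks the gesture as it is performed. This is not the implementation from the paper; the pitch mapping, parameter values, and WAV-file output are assumptions chosen purely for illustration.

# Illustrative sketch only, not the authors' implementation: continuous
# audio feedback rendered as a pitch sweep that follows a tilt gesture.
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second for the rendered feedback tone

def tilt_to_frequency(tilt_deg, f_min=220.0, f_max=880.0, max_tilt=90.0):
    """Map a tilt angle (0..max_tilt degrees) linearly onto a pitch range in Hz."""
    t = max(0.0, min(tilt_deg, max_tilt)) / max_tilt
    return f_min + t * (f_max - f_min)

def sonify(tilt_samples, frame_duration=0.05, path="gesture_feedback.wav"):
    """Render one short tone segment per tilt sample so pitch follows the gesture."""
    frames = []
    phase = 0.0
    for tilt in tilt_samples:
        freq = tilt_to_frequency(tilt)
        for _ in range(int(SAMPLE_RATE * frame_duration)):
            phase += 2.0 * math.pi * freq / SAMPLE_RATE
            frames.append(int(32767 * 0.3 * math.sin(phase)))
    with wave.open(path, "w") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(b"".join(struct.pack("<h", s) for s in frames))

# Synthetic "tilt forward and back" trace standing in for accelerometer input.
trace = [abs(math.sin(i / 20.0)) * 90.0 for i in range(100)]
sonify(trace)

A discrete alternative would play a tone only after the recognizer has classified the gesture; the contrast between continuous and after-the-fact audio is roughly the design choice that the finding about continuous feedback speaks to.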

Sarah Morrison-Smith and Jaime Ruiz. 2014. Using audio cues to support motion gesture interaction on mobile devices. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’14). ACM, New York, NY, USA, 1621–1626. DOI: https://doi.org/10.1145/2559206.2581236

@inproceedings{Morrison-Smith:2014:UAC:2559206.2581236,
 author = {Morrison-Smith, Sarah and Ruiz, Jaime},
 title = {Using Audio Cues to Support Motion Gesture Interaction on Mobile Devices},
 booktitle = {CHI '14 Extended Abstracts on Human Factors in Computing Systems},
 series = {CHI EA '14},
 year = {2014},
 isbn = {978-1-4503-2474-8},
 location = {Toronto, Ontario, Canada},
 pages = {1621--1626},
 numpages = {6},
 url = {http://doi.acm.org/10.1145/2559206.2581236},
 doi = {10.1145/2559206.2581236},
 acmid = {2581236},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {audio feedback, mobile interaction, motion gestures},
}