Video annotation is a vital part of research examining gestural and multimodal interaction, as well as computer vision, machine learning, and interface design. However, annotation is a difficult, time-consuming task that requires high cognitive effort. Existing tools for labeling and annotation still require users to manually label most of the data, limiting the tools’ helpfulness. In this paper, we present the Easy Automatic Segmentation Event Labeler (EASEL), a tool supporting gesture analysis. EASEL streamlines the annotation process by introducing assisted annotation, which uses automatic gesture segmentation and recognition to annotate gestures. To evaluate the efficacy of assisted annotation, we conducted a user study with 24 participants and found that assisted annotation decreased the time needed to annotate videos, with no difference in accuracy compared with manual annotation. The results of our study demonstrate the benefit of adding computational intelligence to video and audio annotation tasks.
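To make the idea of assisted annotation concrete, below is a minimal Python sketch of one plausible step in such a pipeline: collapsing a recognizer's per-frame gesture predictions into proposed (start, end, label) segments that a human annotator would then confirm or correct. This is a hypothetical illustration, not EASEL's actual implementation or API; the function names, the "none" background label, and the thresholds are all assumptions.

from dataclasses import dataclass

@dataclass
class Segment:
    start_frame: int
    end_frame: int  # inclusive
    label: str

def propose_segments(frame_labels, frame_scores, min_len=5, min_score=0.6):
    """Merge consecutive frames sharing a label into candidate segments.

    frame_labels: per-frame gesture label (e.g. "wave", "point", "none")
    frame_scores: per-frame recognizer confidence in [0, 1]
    Segments shorter than min_len frames, or with mean confidence below
    min_score, are dropped, leaving those spans for manual annotation.
    """
    segments, start = [], 0
    for i in range(1, len(frame_labels) + 1):
        # Close the current run when the label changes or the input ends.
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            label = frame_labels[start]
            scores = frame_scores[start:i]
            if (label != "none" and i - start >= min_len
                    and sum(scores) / len(scores) >= min_score):
                segments.append(Segment(start, i - 1, label))
            start = i
    return segments

# Example: per-frame predictions; the annotator reviews the proposals.
labels = ["none"] * 10 + ["wave"] * 40 + ["none"] * 5 + ["point"] * 3
scores = [0.9] * len(labels)
for seg in propose_segments(labels, scores):
    print(f"{seg.label}: frames {seg.start_frame}-{seg.end_frame}")
# -> wave: frames 10-49 (the 3-frame "point" run is filtered as too short)

Filtering short, low-confidence runs is one way an assisted-annotation tool can trade recall for precision, so that the proposals it surfaces are mostly correct and the annotator spends time confirming rather than deleting.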

Isaac Wang, Pradyumna Narayana, Jesse Smith, Bruce Draper, Ross Beveridge, and Jaime Ruiz. 2018. EASEL: Easy Automatic Segmentation Event Labeler. In Proceedings of the 23rd International Conference on Intelligent User Interfaces (IUI ’18). ACM, New York, NY, USA, 595–599. DOI: https://doi.org/10.1145/3172944.3173003

@inproceedings{Wang:2018:EEA:3172944.3173003,
 author = {Wang, Isaac and Narayana, Pradyumna and Smith, Jesse and Draper, Bruce and Beveridge, Ross and Ruiz, Jaime},
 title = {EASEL: Easy Automatic Segmentation Event Labeler},
 booktitle = {Proceedings of the 23rd International Conference on Intelligent User Interfaces},
 series = {IUI '18},
 year = {2018},
 isbn = {978-1-4503-4945-1},
 location = {Tokyo, Japan},
 pages = {595--599},
 numpages = {5},
 url = {https://doi.org/10.1145/3172944.3173003},
 doi = {10.1145/3172944.3173003},
 acmid = {3173003},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {data annotation tools, gesture analysis, gesture segmentation},
}