Motion Kinematics in Interfaces

Interfaces on personal computers have evolved from conversational interfaces (i.e., command-line interfaces) to direct manipulation interfaces, where users rely on a mouse, electronic stylus, touchpad, trackpoint, trackball, or finger to interact directly with content on the screen. As a result, HCI researchers have evaluated many techniques that facilitate pointing at on-screen targets. McGuffin and Balakrishnan noted that a fundamental assumption made by many of these techniques is that salient targets are relatively sparse on the display and separated by whitespace. In practice, however, salient targets are frequently tiled into small regions of the display (e.g., ribbons or toolbars). Moreover, in many modern programs, such as spreadsheets, word processors, and bitmap drawing programs, any cell, character, or pixel may constitute a legitimate pointing target. Endpoint prediction, i.e., the ability to identify, during motion, the likely target of a user's pointing gesture, has therefore been identified as a necessary precursor to pointing facilitation in modern computer interfaces.

In collaboration with Dr. Edward Lank, we developed a technique to predict the endpoint of a user's pointing task during motion. The technique, referred to as kinematic endpoint prediction (KEP), uses established laws of motion to derive an equation that models the initial ballistic phase of movement and thereby predicts movement distance. Through controlled experimentation, we characterized the effects of movement distance and target size on the accuracy of the KEP predictor for both one- and two-dimensional targets. We demonstrated that a linear relationship exists between prediction accuracy and motion distance, and that this relationship can be leveraged to create a probabilistic model for each target on the display. How might KEP's probability distributions be used? If we have prior probability distributions over the underlying interface (e.g., from modeling command usage or user task), they can be combined with KEP's distribution to identify the maximally likely targets within a region, as sketched below. To support this notion, we examined the utility of expanding the size of a set of candidate targets on the display and demonstrated that expanding widgets can be modified to support a region of expansion. I also developed EXPECT-K, a virtual keyboard that uses KEP and tetra-gram letter frequencies to incorporate target expansion and visual cues to speed text entry on Tablet PCs.
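To make these two steps concrete, the following is a minimal sketch, not the exact model from our papers: it assumes the ballistic speed-versus-distance profile can be fit with a quadratic whose far root approximates the endpoint, and that KEP's uncertainty can be summarized as a Gaussian over predicted distance. The function names and the numpy/scipy helpers are illustrative.

import numpy as np
from scipy.stats import norm

def predict_endpoint_distance(dist, speed):
    # Fit a quadratic to the partial speed-versus-distance profile
    # (the bell-shaped ballistic phase) and extrapolate to the
    # distance at which speed returns to zero.
    a, b, c = np.polyfit(dist, speed, 2)
    roots = np.roots([a, b, c])
    real = roots[np.isreal(roots)].real
    ahead = real[real > dist[-1]]  # candidate endpoints beyond the cursor
    return float(ahead.min()) if ahead.size else None

def target_posteriors(pred_mu, pred_sigma, target_dists, priors):
    # Combine a Gaussian over the predicted endpoint distance with
    # per-target priors (e.g., command-usage frequencies) via Bayes' rule.
    likelihood = norm.pdf(np.asarray(target_dists), pred_mu, pred_sigma)
    posterior = likelihood * np.asarray(priors)
    return posterior / posterior.sum()

For example, target_posteriors(500, 40, [480, 520, 610], [0.5, 0.3, 0.2]) weights the two nearby targets by both their proximity to the predicted endpoint and their prior probability of use.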

In addition to predicting user endpoint, we have also examined how analyzing kinematics can reveal additional information about a user's target. We studied the effects of target constraints on user motion. Using machine learning techniques, specifically Hidden Markov Models (HMMs), we found that constraints create differences in the underlying motion characteristics and that these differences lie in the instantaneous components of motion. In particular, the primary effect is concentrated in motion along the axis orthogonal to the primary direction of motion. Using these results, we demonstrated that the target constraint can be predicted from only the first 70% of a pointing gesture.
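As an illustration of the classification step, here is a minimal sketch built on the hmmlearn library: one Gaussian HMM per constraint condition is trained on the velocity component orthogonal to the primary direction of motion, and a partial gesture is labeled by the best-scoring model. The feature choice, state count, and helper names are illustrative assumptions, not the exact configuration from our experiments.

import numpy as np
from hmmlearn import hmm

def fit_constraint_models(sequences_by_class, n_states=3):
    # sequences_by_class maps a constraint label to a list of (T_i, 1)
    # arrays of orthogonal-axis velocity samples, one array per gesture.
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        model.fit(X, lengths)
        models[label] = model
    return models

def classify_partial_gesture(models, seq, fraction=0.7):
    # Score only the first `fraction` of the gesture (70% sufficed in
    # our experiments) and return the most likely constraint label.
    head = seq[: max(1, int(len(seq) * fraction))]
    return max(models, key=lambda label: models[label].score(head))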

From a theoretical perspective, my research on motion kinematics in interfaces supports proposed models of physical goal-directed motion (i.e., pointing and grasping) that describe the initial phase of motion as a high-velocity movement toward the target (often referred to as the ballistic or initial impulse phase). My work also suggests that, for pointing in interfaces, this ballistic phase occupies at least the first 90% of the motion distance. Therefore, any technique that aims to predict the endpoint before 90% of the motion is complete must model the ballistic phase. From a practical perspective, my research provides a roadmap for interaction designers who want to incorporate pointing facilitation techniques into interfaces with menus, toolbars, ribbons, or any interface where users may wish to target content anywhere on the screen.
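For reference, one widely cited model in this family is the minimum-jerk profile of Flash and Hogan (1985); it is shown here as a representative description of the ballistic phase, not as the exact equation from our papers. For a movement of amplitude D and duration T, with normalized time \tau = t/T:

x(\tau) = D\left(10\tau^{3} - 15\tau^{4} + 6\tau^{5}\right), \qquad
v(\tau) = \frac{30D}{T}\,\tau^{2}\left(1 - \tau\right)^{2}

The speed profile is bell-shaped and peaks at \tau = 0.5 with v_{\max} = 1.875\,D/T, which is why a quadratic fit of speed against distance, as in the sketch above, is a reasonable first approximation of the ballistic phase.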

Publications
Analyzing the kinematics of bivariate pointing

Jaime Ruiz, David Tausky, Andrea Bunt, Edward Lank, and Richard Mann. 2008. In Proceedings of Graphics Interface 2008 (GI ’08). Canadian Information Processing Society, CAN, 251–258.

Endpoint prediction using motion kinematics

Edward Lank, Yi-Chun Nikko Cheng, and Jaime Ruiz. 2007. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’07). Association for Computing Machinery, New York, NY, USA, 637–646. https://doi.org/10.1145/1240624.1240724