By Shumin Zhai, Google, USA | Per Ola Kristensson, University of St Andrews, UK | Caroline Appert, University of Paris-Sud & CNRS, France | Tue Haste Andersen, frog design, Italy | Xiang Cao, Microsoft Research Asia, China
The popularity of touchscreen devices has recently unleashed the potential of stroke gestures for entering, retrieving, and selecting commands and text. This monograph provides a state-of-the-art integrative review of a body of human–computer interaction research on stroke gestures. It begins with an analysis of the design dimensions of stroke gestures as an interaction medium, classifying gestures as analogue versus abstract, as commands versus symbols, by order of motor complexity, as visual-spatial dependent versus independent, and as finger-drawn versus stylus-drawn. Gesture interfaces such as the iOS interface, the Graffiti text entry method for Palm devices, marking menus, and the SHARK/ShapeWriter word-gesture keyboard make different choices in this multi-dimensional design space.
The main body of this work reviews and synthesizes foundational studies in the literature on stroke gesture interaction, particularly those conducted by the authors over the last decade. The human performance factors covered include motor control complexity, visual and auditory feedback, and human memory capabilities in dealing with gestures. Based on these foundational studies, this review presents a set of design principles for creating stroke gesture interfaces: making gestures analogous to physical effects or cultural conventions, keeping gestures simple and distinct, defining stroke gestures systematically, making them self-revealing, supporting appropriate levels of chunking, and facilitating progress from visually guided performance to recall-driven performance. The overall theme is making gestures easier to learn while designing for long-term efficiency. Important system implementation issues of stroke gesture interfaces, such as gesture recognition algorithms and gesture design toolkits, are also covered. The monograph ends with a few call-to-action research topics.
The advent of modern touchscreen devices has unleashed many opportunities for, and calls for, innovative use of stroke gestures as a richer interaction medium. A significant body of knowledge on stroke gesture design is scattered throughout the human–computer interaction research literature. Drawing primarily on the authors' own decade-long gesture user interface (UI) research, which launched the word-gesture keyboard paradigm, Foundational Issues in Touch-Surface Stroke Gesture Design - An Integrative Review synthesizes foundational issues of human motor control complexity, visual and auditory feedback, and memory and learning capacity as they concern gesture user interfaces. In the second half of the book, a set of gesture UI design principles is derived from the research literature. The book also covers system implementation aspects of gesture UIs, such as gesture recognition algorithms and design toolkits. Foundational Issues in Touch-Surface Stroke Gesture Design - An Integrative Review is an ideal primer for researchers and graduate students embarking on research in gesture interfaces. It is also an excellent reference for designers and developers who want to leverage the insights and lessons learned in the academic research community.