Gesture recognition is a fundamental technology for understanding human behavior and enabling natural human-computer interaction, a need amplified by the growing popularity of ubiquitous systems and applications. Current gesture recognition approaches fall mainly into two classes: parametric methods and template-matching methods. Although both approaches can provide accurate recognition, the former is usually costly in training time and computational resources, while the latter is limited to a small set of gesture types that are simple and clearly distinct from one another.
The objective of this thesis is to present a novel stroke-based gesture recognition solution, Gestimator, that recognizes dynamic hand gestures with high accuracy, run-time efficiency, and ease of customization, accommodating a wide range of gesture complexity, ambiguity, and difficulty for users to perform. To improve recognition accuracy, a stroke-wise elastic gesture matching framework was developed alongside an adaptive sequence segmentation technique.
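To illustrate the idea of elastic matching between stroke sequences, the sketch below computes a dynamic time warping (DTW) distance between two strokes represented as point sequences. This is a generic, minimal illustration of elastic matching, not Gestimator's actual algorithm; the function name and representation are assumptions for the example.

```python
import math

def elastic_distance(stroke_a, stroke_b):
    """Generic DTW-style elastic distance between two strokes,
    each given as a list of (x, y) points. Illustrative only;
    Gestimator's stroke-wise matcher may differ in detail."""
    n, m = len(stroke_a), len(stroke_b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost aligning the first i points
    # of stroke_a with the first j points of stroke_b.
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(stroke_a[i - 1], stroke_b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a point in a
                                 D[i][j - 1],      # skip a point in b
                                 D[i - 1][j - 1])  # match both points
    return D[n][m]
```

A template matcher built on such a distance would compare an input stroke against stored templates and pick the class with the smallest elastic distance; identical strokes yield a distance of zero.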
We conducted extensive evaluations using three datasets comprising pen-based command gestures, character gestures, and mid-air gestures collected from a user authentication experiment. Benchmark results show that Gestimator achieves higher overall accuracy than three state-of-the-art gesture recognizers on both touch-screen gestures (98.9%) and spatial gestures (96.61%). Results also show that Gestimator outperforms the baseline methods in recognizing ambiguous gestures.