Commanding and Re-Dictation: Developing Eyes-Free Voice-Based Interaction for Editing Dictated Text

Debjyoti Ghosh, Can Liu, Shengdong Zhao, and Kotaro Hara

Affiliations: NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore; School of Creative Media, City University of Hong Kong, Hong Kong; School of Computing, National University of Singapore, Singapore; School of Information Systems, Singapore Management University, Singapore

ACM Transactions on Computer-Human Interaction, August 2020, Article No.: 28, https://doi.org/10.1145/3390889

Abstract

Existing voice-based interfaces have limited support for text editing, especially when seeing the text is difficult, e.g., while walking or cooking. This research develops voice interaction techniques for eyes-free text editing. First, with a Wizard-of-Oz study, we identified two primary user strategies: using commands, e.g., “replace go with goes,” and re-dictating over an erroneous portion, e.g., correcting “he go there” by saying “he goes there.” To support these user strategies with an actual system implementation, we developed two eyes-free voice interaction techniques, Commanding and Re-dictation, and evaluated them with a controlled experiment. Results showed that while Re-dictation performs significantly better for more semantically complex edits, Commanding is more suitable for making one-word edits, especially deletions. We developed VoiceRev to combine both techniques in the same interface and evaluated it with realistic tasks. Results showed improved usability of the combined techniques over either of the two techniques used individually.