Commanding and Re-dictation: Developing Eyes-free Voice-based Interaction for Editing Dictated Text


Debjyoti Ghosh, Can Liu, Shengdong Zhao, and Kotaro Hara


Existing voice-based interfaces have limited support for text editing, especially when seeing the text is difficult, e.g., while walking or cooking. This research develops voice interaction techniques for eyes-free text editing. First, through a Wizard-of-Oz study, we identified two primary user strategies: using commands, e.g., “replace go with goes,” and re-dictating over an erroneous portion, e.g., correcting “he go there” by saying “he goes there.” To support these user strategies in an actual system implementation, we developed two eyes-free voice interaction techniques, Commanding and Re-dictation, and evaluated them in a controlled experiment. Results showed that while Re-dictation performs significantly better for more semantically complex edits, Commanding is more suitable for one-word edits, especially deletions. We then developed VoiceRev, which combines both techniques in a single interface, and evaluated it with realistic tasks. Results showed improved usability of the combined techniques over either technique used individually.