đź“š

LiveSnippets: Voice-based Live Authoring of Multimedia Articles about Experiences

Authors

Hyeongcheol Kim, Shengdong Zhao, Can Liu, and Kotaro Hara

Publication

MobileHCI '20: 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services, October 2020, Article No.: 31, Pages 1–11, https://doi.org/10.1145/3379503.3403556

Abstract

We transform traditional experience writing into in-situ voice-based multimedia authoring. Documenting experiences digitally in blogs and journals is a common activity that allows people to socially connect with others by sharing their experiences (e.g., travelogues). However, documenting such experiences can be time-consuming and cognitively demanding because it is typically done OUT-OF-CONTEXT (after the actual experience). We propose in-situ voice-based multimedia authoring (IVA), an alternative workflow that allows IN-CONTEXT experience documentation. Unlike the traditional approach, IVA encourages in-context content creation using voice-based multimedia input and stores it in multi-modal “snippets”. The snippets can be rearranged to form multimedia articles and published with light copy-editing. To improve output quality from impromptu speech, we introduced Q&A scaffolding to guide content creation. We implemented the IVA workflow in an Android application, LiveSnippets, and qualitatively evaluated it in three scenarios: travel writing, recipe creation, and product review. Results demonstrated that IVA can effectively lower the barrier to writing, with acceptable trade-offs in multitasking.
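
The abstract describes snippets as multi-modal units captured in context and later rearranged into an article. As a purely hypothetical illustration (not taken from the paper or the LiveSnippets app), the Kotlin sketch below models one way such a snippet and the rearrange-then-draft step could be represented; all type and field names are assumptions.

```kotlin
import java.time.Instant

// Media captured alongside a voice recording; variants are illustrative only.
sealed class Media {
    data class Photo(val uri: String) : Media()
    data class VideoClip(val uri: String, val durationSec: Int) : Media()
}

// One in-context capture: transcribed speech plus attached media,
// optionally created in response to a Q&A scaffolding prompt.
data class Snippet(
    val transcript: String,
    val media: List<Media> = emptyList(),
    val prompt: String? = null,          // e.g., "What surprised you here?"
    val capturedAt: Instant = Instant.now()
)

// Rearranged snippets become a rough draft ready for light copy-editing.
fun assembleDraft(title: String, snippets: List<Snippet>): String =
    buildString {
        appendLine(title)
        snippets.forEach { s ->
            s.prompt?.let { appendLine(it) }
            appendLine(s.transcript)
            s.media.forEach { m -> appendLine("[media: $m]") }
            appendLine()
        }
    }

fun main() {
    val draft = assembleDraft(
        "A Day in Kyoto",
        listOf(
            Snippet(
                transcript = "The bamboo grove is much quieter in the early morning.",
                media = listOf(Media.Photo("content://photos/123")),
                prompt = "What did you notice first?"
            )
        )
    )
    println(draft)
}
```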

Paper