Accessibility and Assistive Technologies
People with disabilities have long relied on support from others to navigate accessibility barriers. Technologies like the Internet, mobile devices, and AI have transformed many aspects of accessibility, but their future potential depends on whether people with disabilities can use them effectively; without careful design, these technologies risk creating new forms of exclusion rather than inclusion. My group addresses this challenge by designing, developing, and evaluating assistive technologies, and by investigating how people with disabilities use technology to understand and navigate digital information and physical spaces. By harnessing emerging technologies like VR/AR headsets and sensors, we create and improve tools that enable people with disabilities to live more independently. For example, RunSight enables people with low vision to run at night: a see-through augmented reality head-mounted display enhances the runner's awareness of their guide's position and of potential obstacles.
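To make the idea concrete, here is a minimal sketch of the core geometry such a display needs, not RunSight's actual implementation: given runner and guide positions from some tracking source, compute where a guide marker should appear across the runner's field of view. The `Pose` type, the 40° field of view, and the coordinate conventions are all assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float        # meters, in a shared world frame
    y: float
    heading: float  # radians, counterclockwise from the +x axis

def guide_indicator(runner: Pose, guide: Pose, fov_deg: float = 40.0):
    """Return (screen_x, distance) for a guide marker in the HMD view.

    screen_x is a horizontal position in [-1, 1] across the display
    (negative = left of center), clamped to the edge when the guide is
    outside the assumed horizontal field of view.
    """
    dx, dy = guide.x - runner.x, guide.y - runner.y
    distance = math.hypot(dx, dy)
    # Bearing of the guide relative to the runner's heading, wrapped to (-pi, pi].
    bearing = math.atan2(dy, dx) - runner.heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    # Counterclockwise (positive) bearing means the guide is to the left,
    # which maps to negative screen coordinates.
    half_fov = math.radians(fov_deg) / 2
    screen_x = max(-1.0, min(1.0, -bearing / half_fov))
    return screen_x, distance

# Example: the guide is ahead of the runner and slightly to the left.
runner = Pose(x=0.0, y=0.0, heading=math.pi / 2)  # facing +y
guide = Pose(x=-1.0, y=4.0, heading=math.pi / 2)
x, d = guide_indicator(runner, guide)
print(f"marker at {x:+.2f} across the display, guide {d:.1f} m away")
```

In practice the marker would be redrawn every frame from headset tracking data, and obstacle highlighting would need its own detection pipeline; this sketch only covers the guide-position cue described above.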
See the following publications for more details.
Related Publications:
- Abe, Y., Matsushima, K., Hara, K., Sakamoto, D., & Ono, T. (2025). “I can run at night!”: Using Augmented Reality to Support Nighttime Guided Running for Low-vision Runners, in Proceedings of CHI ’25.
- Cai, S., Ram, A., Gou, Z., Shaikh, M. A. W., Chen, Y.-A., Wan, Y., Hara, K., Zhao, S., & Hsu, D. (2024). Navigating Real-world Challenges: A Quadruped Robot Guiding System for Visually Impaired People in Diverse Environments, in Proceedings of CHI ’24.
- Jiao, Y., Sun, R., Luo, R., Yao, X., She, X., Hara, K., Zhang, Y., & Fu, X. (2025). Tactile Data Comics: Combining Step-by-step Presentation of Tactile Graphics with Verbal Narration for the Blind and Visually Impaired, in Proceedings of ASSETS ’25.
Enhancing Interactions in Indoor Environments
We spend much of our lives indoors, whether grocery shopping, admiring artwork in museums, or navigating hallways and deciding which way to turn. Our group creates interactive technologies that enhance these indoor experiences: we build technical capabilities for mapping, localization, and tracking, then study how to design meaningful interactions on top of them. For instance, we designed conversational localization, an approach that determines users' indoor positions through natural language dialogue rather than environmental sensors. As users describe their surroundings, a conversational agent extracts spatial information from visible landmarks, signage, and architectural features, then estimates their location on existing floor maps. This enables tasks like navigation support without requiring additional infrastructure. The approach suits museums particularly well: combining real-time positioning with audio descriptions on visitors' mobile devices creates richer experiences, automatically playing relevant content as visitors approach each artwork.
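As a rough illustration of this pipeline, the sketch below substitutes a naive keyword matcher for the conversational agent's language understanding: it matches landmark mentions in a user's reply against a hypothetical floor map and takes the centroid of the matched landmarks as the position estimate. The landmark names, coordinates, and centroid heuristic are all invented for illustration and are not the published method.

```python
# Hypothetical single-floor landmark map: name -> (x, y) in meters.
# A deployed system would draw these from existing floor maps.
LANDMARKS = {
    "elevator": (2.0, 10.0),
    "vending machine": (4.0, 12.0),
    "room 3014": (6.0, 11.0),
    "fire extinguisher": (3.0, 14.0),
}

def extract_landmarks(utterance: str) -> list[str]:
    """Naive stand-in for the agent's language understanding:
    keyword-match known landmark names in the user's description."""
    text = utterance.lower()
    return [name for name in LANDMARKS if name in text]

def estimate_position(utterance: str) -> tuple[float, float] | None:
    """Estimate the user's position as the centroid of the landmarks
    they report seeing; return None if nothing matches."""
    matched = extract_landmarks(utterance)
    if not matched:
        return None
    xs, ys = zip(*(LANDMARKS[name] for name in matched))
    return sum(xs) / len(xs), sum(ys) / len(ys)

reply = "I'm next to the elevator, and I can see a vending machine ahead."
print(estimate_position(reply))  # -> (3.0, 11.0)
```

A real implementation would weight landmarks by spatial phrases ("next to" vs. "down the hall"), handle ambiguous or repeated landmarks, and fuse estimates across dialogue turns, but the map-matching core stays the same.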
Related Publications:
- Sheshadri, S., & Hara, K. (2025). Enhancing Smartphone-based Inertial Indoor Tracking with Conversational User Input, in Proceedings of ACM IMWUT.
- Yonetani, R., & Hara, K. (2025). Map as a By-product: Collective Landmark Mapping from IMU Data and User-provided Texts in Situated Tasks, in Proceedings of ACM IMWUT.
- Sheshadri, S., & Hara, K. (2024). Conversational Localization: Indoor Human Localization through Intelligent Conversation, in Proceedings of ACM IMWUT.
- Sheshadri, S., Cheng, L., & Hara, K. (2022). Feasibility Studies in Indoor Localization through Intelligent Conversation, in Proceedings of CHI EA ’22.
Other Work
See our past and ongoing work on my publication page: https://kotarohara.com/publications