Active Projects

Future of Online Work

Employment facilitated by online communication is transforming the way we work. In the U.S. alone, 600,000 people participate in the online gig economy (also known as crowd work), and the number of workers is growing rapidly (data from 2015). Crowd work's remote, asynchronous style, unbounded by time and location, extends contemporary office work, enabling people with disabilities, at-home parents, and temporarily out-of-work people to work.

At the same time, however, many are concerned that employers in these labor markets mistreat workers; concerns about low earnings on crowd work platforms have been voiced repeatedly. Given the potential benefits and drawbacks of this new way of working, we study what opportunities online crowd work and the gig economy offer and what new challenges they create for people, particularly for those who have disabilities. We then design and develop technologies that mitigate these problems.

Related Publications

📚 ViScene: A Collaborative Authoring Tool for Scene Descriptions in Videos
📚 Worker Demographics and Earnings on Amazon Mechanical Turk: An Exploratory Analysis
📚 Striving to Earn More: A Survey of Work Strategies and Tool Use Among Crowd Workers
📚 A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk
📚 Introducing People with ASD to Crowd Work
📚 The Crowd Work Accessibility Problem

Mapping Accessibility

The accessibility of the physical environment around us influences the way we move and behave, particularly for those with disabilities. We use techniques from crowdsourcing, computer vision, machine learning, and information visualization to understand and improve urban accessibility.

Related Publications

📚 A Pilot Deployment of an Online Tool for Large-Scale Virtual Auditing of Urban Accessibility
📚 The Design of Assistive Location-based Technologies for People with Ambulatory Disabilities: A Formative Study
📚 Improving Public Transit Accessibility for Blind Riders by Crowdsourcing Bus Stop Landmark Locations with Google Street View: An Extended Analysis
📚 Tohme: Detecting Curb Ramps in Google Street View Using Crowdsourcing, Computer Vision, and Machine Learning
📚 An Initial Study of Automatic Curb Ramp Detection with Crowdsourced Verification using Google Street View Images
📚 Improving Public Transit Accessibility for Blind Riders by Crowdsourcing Bus Stop Landmark Locations with Google Street View
📚 Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems
📚 A Feasibility Study of Crowdsourcing and Google Street View to Determine Sidewalk Accessibility

Multimodal Conversational Interaction

We design novel multimodal conversational interaction methods that empower people with disabilities. Using technologies such as natural language processing, computer vision, and AI agents, we create systems that help people with disabilities interact with software.

Related Publications

📚 LiveSnippets: Voice-based Live Authoring of Multimedia Articles about Experiences
📚 Commanding and Re-dictation: Developing Eyes-free Voice-based Interaction for Editing Dictated Text
📚 Vocal Programming for People with Upper-Body Motor Impairments