Active Projects

The Future of Work

Employment facilitated by online communication is transforming the way we work. In the U.S. alone, roughly 600,000 people participate in the online gig economy (also known as crowd work), and the number of workers is growing rapidly (data from 2015). Crowd work's remote, asynchronous style, unbounded by time and location, extends contemporary office work, enabling people with disabilities, at-home parents, and people temporarily out of work to participate in the workforce.

At the same time, however, many are concerned that employers in these labor markets mistreat workers; concerns about low earnings on crowd work platforms have been voiced repeatedly. Given the potential benefits and drawbacks of this new way of working, we study what opportunities online crowd work and the gig economy offer and what new challenges they create, particularly for people with disabilities. We then design and develop technologies that mitigate these problems.

Related Publications

📚 ViScene: A Collaborative Authoring Tool for Scene Descriptions in Videos
📚 Worker Demographics and Earnings on Amazon Mechanical Turk: An Exploratory Analysis
📚 Striving to Earn More: A Survey of Work Strategies and Tool Use Among Crowd Workers
📚 A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk
📚 Introducing People with ASD to Crowd Work
📚 The Crowd Work Accessibility Problem

Accessibility and Assistive Technology

The accessibility of the physical environment around us influences how we move and behave, particularly for people with disabilities. We use techniques from crowdsourcing, computer vision, machine learning, and information visualization to understand and improve urban accessibility.

Related Publications

📚 A Pilot Deployment of an Online Tool for Large-Scale Virtual Auditing of Urban Accessibility
📚 The Design of Assistive Location-based Technologies for People with Ambulatory Disabilities: A Formative Study
📚 Improving Public Transit Accessibility for Blind Riders by Crowdsourcing Bus Stop Landmark Locations with Google Street View: An Extended Analysis
📚 Tohme: Detecting Curb Ramps in Google Street View Using Crowdsourcing, Computer Vision, and Machine Learning
📚 An Initial Study of Automatic Curb Ramp Detection with Crowdsourced Verification using Google Street View Images
📚 Improving Public Transit Accessibility for Blind Riders by Crowdsourcing Bus Stop Landmark Locations with Google Street View
📚 Combining Crowdsourcing and Google Street View to Identify Street-level Accessibility Problems
📚 A Feasibility Study of Crowdsourcing and Google Street View to Determine Sidewalk Accessibility

Multimodal Conversational Interaction

We design novel multimodal conversational interaction methods that empower people with disabilities. Using technologies such as natural language processing, computer vision, and AI agents, we create systems that support people with disabilities in interacting with software.

Related Publications

📚 LiveSnippets: Voice-based Live Authoring of Multimedia Articles about Experiences
📚 Commanding and Re-dictation: Developing Eyes-free Voice-based Interaction for Editing Dictated Text
📚 Vocal Programming for People with Upper-Body Motor Impairments