HCI Capstone: Deaf Accessibility

Design Research / Voice UX / Mobile Design

The goal of my Capstone project in Human-Computer Interaction is to develop a tool that would enable the Deaf and Hard of Hearing to use speech-controlled devices like the Amazon Echo and Google Home.

We hope to enable deaf individuals to use these devices to complete the day-to-day tasks that matter to them. We currently have a low-fidelity prototype of the app we ultimately hope to design. We will observe and analyze how users interact with the prototype through directed tasks, and also ask what they would want to use a VCD for if they were not limited to existing capabilities. Our client for the project is Jeffrey Bigham, Associate Professor and HCII Ph.D. Program Director, who has been furthering accessibility research for over 15 years.

Teammates: Avi Romanoff, Rae Lasko, Sarah Shy, Emma Shi / Duration: 2 Months (In Progress)

Contribution: User Research, UX/UI Design, Storyboards, Wireframes, Mockups


With advances in speech recognition technology, voice-controlled devices (VCDs) are becoming increasingly prevalent in homes as cheaper alternatives to devices with visual displays.

Personal assistants such as Amazon’s Alexa, Google Home, and Apple’s Siri are recent products that have been rapidly growing in popularity. These services let users speak to ask general search questions, handle administrative and home-automation tasks, and perform other tasks commonly done on smartphones. As of October 2017, 20.5 million Amazon Echos and 4.6 million Google Homes had been sold. Other products such as speech-controlled light bulbs, outlets, and thermostats are also on the market. In the coming years, we can expect the number of voice-controlled devices produced and sold to keep increasing.


A consequence of these devices being speech-controlled is that the Deaf and Hard of Hearing have little to no ability to use them.

The large number of VCD sales is paralleled by the size of the deaf community. According to the World Health Organization, over 5% of the world’s population (360 million people) has disabling hearing loss. In addition, about 7.5 million people in the United States have trouble using their voice. Taking these numbers into consideration, a large portion of the population is left out of the market and unable to take full advantage of the services these devices provide. While many deaf individuals are taught speech, interacting with existing devices is still difficult because they require nearly perfect English to understand a command.

The Google Home is only about 95% accurate even when spoken to with perfect English grammar and diction. Current solutions are limited to speech-generating devices, such as Proloquo2Go and TouchChat, and text-to-speech applications. These can only loosely be called solutions, however: they are inefficient workarounds that offer the user little benefit over simply typing a Google search.


For our competitive analysis, we researched various accessibility tools and learned about the pros and cons of using each.

In June 2017, Apple launched an accessibility feature that lets iPhone users type commands to Siri. By eliminating the need for voice input, this feature enables the Deaf and hard of hearing to interact with Apple’s voice assistant through the keyboard. We asked a group of Deaf teenagers to try the feature and discovered that some had trouble typing commands. For many Deaf individuals, English is a second language, so they are not as quick at forming or typing out complete English sentences to Siri. The feature is also tricky for them to use because it requires nearly perfect grammar.

Proloquo2Go is an Augmentative and Alternative Communication application that provides a voice to those who have trouble producing speech. The application can be purchased from the App Store and used on the iPad, iPhone, iPod touch, and Apple Watch, and it works in English, Spanish, French, and Dutch. The design of the interface is quite basic: each tile on the screen shows a vocabulary word and a related symbol, and users tap the desired words or symbols to form complete English sentences, which the device then reads aloud.

The last image of the set above is of a videophone, a device Deaf individuals use to make video calls. They sign what they want to say to an ASL interpreter, who translates their message for the person on the other end of the call. These devices are provided to Deaf individuals for free.


We conducted contextual inquiries and interviews at the Western Pennsylvania School for the Deaf with students and faculty members.

The individuals we spoke to had a range of hearing abilities: some were profoundly deaf from birth, some were hard of hearing, and others were able to hear. There was also a great range in speaking abilities, with some choosing to use their voice when possible and others communicating solely in ASL. The purpose of the general interviews was to gauge whether there was any interest in creating a tool that would allow them to use devices such as the Echo. We created an interview protocol and compiled a list of questions aimed at understanding their general interests, the assistive and general technologies they use, and their knowledge of and interest in voice-controlled devices.

We also spoke to a faculty member at Gallaudet University who researches accessibility and to an education specialist who works with Deaf individuals. From this initial research, we came away with two key findings.


Key Finding #1: The Deaf and Hard of Hearing individuals we spoke to overwhelmingly indicated that they are more comfortable using ASL than English.

Just as hearing users like to use voice because typing is tedious, Deaf and hard of hearing individuals prefer ASL over typing. This is best exemplified by the fact that, whenever possible, Deaf and hard of hearing individuals communicate via video apps like Glide, Apple Clips, and Snapchat. When we explained our project, every individual we spoke to immediately asked, “Can you build one that understands ASL?”


Key Finding #2: The Deaf and Hard of Hearing individuals we spoke to would use voice-controlled devices for the same tasks that hearing users do.

This was a main research question for us because we had wrongly assumed that deaf individuals would not be interested in music, which is the biggest use case for voice-controlled devices among hearing users. We found that many, though not all, Deaf and hard of hearing individuals like to experience music through its vibrations while following along with lyrics they look up on Google. They are also interested in using voice-controlled devices to get sports updates, check the weather, and make Google searches.


While we really hoped to build something that would allow the Deaf and Hard of Hearing to command devices using American Sign Language, we had to scale back the idea for a number of reasons.

American Sign Language is complex and hard to translate because signing words and phrases involves many parts of the body. Interpreters have to watch for movement in the hands, eyes, lips, and eyebrows when reading sign language. We learned from a colleague that the signs for "mother" and "father" use the same handshape; the meaning is differentiated by placement, with "mother" signed near the chin and "father" signed near the forehead.

Our client reiterated that the technology required to translate ASL doesn't exist yet and may not for several more years. Given his prior experience building accessibility tools, Jeff encouraged us to look toward a short-term solution that could be accessible to the Deaf and Hard of Hearing population by the end of the semester.