This post summarizes a research paper authored by Yuhang Zhao (Cornell Tech, also an intern at Microsoft Research), Edward Cutrell, Christian Holz, Meredith Ringel Morris, Eyal Ofek, and Andrew D. Wilson of Microsoft Research. The paper will be presented at CHI 2019, a premier conference on Human-Computer Interaction, on Monday 6th May 2019 at 11:00 am in the session AR/VR 1: Accessibility.
Virtual reality (VR) technology is emerging but not yet accessible to people with visual disabilities. Our team addresses VR accessibility for people with low vision to promote equity for this large but overlooked population. We designed SeeingVR, a toolkit with 14 tools that enhance VR scenes for people with low vision by providing both visual and auditory augmentations. We also evaluated our toolkit with both users with low vision and VR developers. Our study demonstrated the effectiveness of SeeingVR tools for users with low vision. Developers also found our Unity toolkit easy to use.
What is Low Vision?
Low vision is a visual impairment that falls short of complete blindness but cannot be fully corrected by glasses. It is a pervasive disability, accounting for 86% of people with visual impairments. According to the World Health Organization, about 217 million people worldwide have low vision. Low vision encompasses a range of visual conditions (for example, low visual acuity, tunnel vision, blind spots, brightness sensitivity), which can severely impact people’s ability to perform activities of daily life. While desktop software offers some accommodation features for people with low vision (for example, screen magnifiers), VR systems have not yet grappled with the issue of accessibility for this audience. Indeed, when interviewing VR developers, the team found that none had received training or guidance on how to develop accessible VR experiences.
To address VR accessibility for low vision, we designed SeeingVR, a toolkit with 14 tools that can enhance VR scenes for people with low vision. The tools are designed by considering different low vision conditions and representative VR tasks (for example, aiming at a target or selecting from a menu). Example tools include a magnifier and bifocal views, brightness and contrast adjustment for the scene, edge-enhancement to make virtual objects more salient from their backgrounds, depth measurement tools, and the ability to point at text or objects in a virtual scene to have them read or described aloud. Low vision users can activate different combinations of these tools depending on their preferences.
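The paper does not include the tools' implementations, but two of the augmentations mentioned above, brightness/contrast adjustment and edge enhancement, can be illustrated as ordinary image operations. Below is a minimal NumPy sketch (the function names and parameters are my own for illustration, not SeeingVR's actual API):

```python
import numpy as np

def adjust_brightness_contrast(image, brightness=0.0, contrast=1.0):
    """Scale pixel values around the midpoint (0.5) by `contrast`,
    then shift by `brightness`; clamp to the valid [0, 1] range."""
    out = (image - 0.5) * contrast + 0.5 + brightness
    return np.clip(out, 0.0, 1.0)

def enhance_edges(image, strength=1.0):
    """Sharpen a grayscale image by adding back a Laplacian edge map,
    making object boundaries stand out from their backgrounds."""
    # 3x3 Laplacian kernel: responds strongly at intensity discontinuities.
    kernel = np.array([[ 0, -1,  0],
                       [-1,  4, -1],
                       [ 0, -1,  0]], dtype=float)
    padded = np.pad(image, 1, mode="edge")
    edges = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for dy in range(3):
        for dx in range(3):
            edges += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(image + strength * edges, 0.0, 1.0)
```

For example, applying `enhance_edges` to an image with a soft boundary between two regions widens the intensity gap at the boundary, which is the same intuition as making virtual objects more salient against their backgrounds; in a real VR pipeline these operations would run as a GPU post-processing pass rather than on CPU arrays.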
We support two ways to use the SeeingVR tools: (1) a plugin containing the majority of these tools, which can be applied to existing Unity applications post hoc to support easy adoption; and (2) a Unity toolkit that allows developers to incorporate these tools into their VR apps during the development process (Unity is one of the most widely used VR development platforms).
Experiences with SeeingVR
We evaluated SeeingVR with both low vision users and Unity developers. Eleven participants with low vision performed a variety of VR tasks (for example, menu selection, finding and grasping objects, shooting moving targets) with SeeingVR. We found that all participants could complete tasks more quickly and accurately when using SeeingVR tools as compared to the default VR experience. All participants chose different combinations of the available tools, reinforcing the value of allowing flexibility and customization of low vision accessibility options.
“I stopped playing FIFA [a football video game] because it became more realistic… they changed the visuals suddenly… I was not able to play that game after enjoying it for 10 years… SeeingVR gives me the option to be able to see. It may not be comparable or directly equal to what you may experience if you don’t have a visual impairment, but it’s an equitable experience, so that I’m still able to participate in that game.” — One low vision participant.
Six Unity developers also used our Unity toolkit. They emphasized the challenges of making VR accessible and said that SeeingVR would be a useful toolkit to help them accomplish this goal.
“Sometimes people just assume accessibility in VR is the same as that on a 2D screen, which is not really right. We sometimes got asked by [the] accessibility team [at our company], ‘You need to be accessible.’ But they don’t really understand what accessibility is in the VR context. You guys are the first that actually look this deeply into this problem.” — One Unity developer.
Microsoft Research Intern Yuhang Zhao, a graduate student at Cornell Tech, will present the paper at CHI 2019 on Monday 6th May 2019 (11:00 am, Room: Alsh 1). The team will also present a live demo during the conference’s demonstration session and at the Microsoft booth.