My research interests involve Accessibility, AR/VR, and AI: (i) XR accessibility & inclusion; (ii) intelligent interactive systems to support accessibility (e.g., AR, eye-tracking systems); (iii) privacy & security in XR and/or accessibility; (iv) others.

Intelligent Interactive Systems for Accessibility

Understanding How Low Vision People Read Using Eye Tracking (CHI 2023)​  pdf

While low vision people can read with screen magnifiers, their reading experience remains slow and unpleasant. Eye tracking has the potential to improve this experience by recognizing fine-grained gaze behaviors and providing more targeted enhancements. To inspire gaze-based low vision technology, we investigated suitable methods to collect low vision users' gaze data via commercial eye trackers and thoroughly explored their reading challenges based on their gaze behaviors. With an improved calibration interface, we collected the gaze data of 20 low vision participants and 20 sighted controls who performed reading tasks on a computer screen; low vision participants were also asked to read with different screen magnifiers. We found that, with an accessible calibration interface and data collection method, commercial eye trackers can collect gaze data of comparable quality from low vision and sighted people. Our study identified low vision people's unique gaze patterns during reading, upon which we propose design implications for gaze-based low vision technology.
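
To give a concrete sense of the fixation-level analysis such gaze work builds on, the sketch below shows a simplified dispersion-based (I-DT style) fixation detection pass over raw gaze samples; the thresholds and the (timestamp, x, y) sample format are illustrative assumptions, not the paper's actual pipeline.

# Simplified dispersion-based fixation detection sketch (I-DT style).
# Thresholds and the sample format are assumptions, for illustration only.
def detect_fixations(samples, max_dispersion=35.0, min_duration=0.1):
    """samples: list of (timestamp_sec, x_px, y_px) gaze points.
    Returns a list of (start_t, end_t, centroid_x, centroid_y) fixations."""
    fixations = []
    window = []
    for t, x, y in samples:
        window.append((t, x, y))
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:
            # Window got too spread out: emit a fixation if it lasted long
            # enough, then restart the window from the current sample.
            if len(window) > 1 and window[-2][0] - window[0][0] >= min_duration:
                pts = window[:-1]
                fixations.append((pts[0][0], pts[-1][0],
                                  sum(p[1] for p in pts) / len(pts),
                                  sum(p[2] for p in pts) / len(pts)))
            window = [(t, x, y)]
    if len(window) > 1 and window[-1][0] - window[0][0] >= min_duration:
        fixations.append((window[0][0], window[-1][0],
                          sum(p[1] for p in window) / len(window),
                          sum(p[2] for p in window) / len(window)))
    return fixations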

The Effectiveness of Visual and Audio Wayfinding Guidance on Smartglasses for People with Low Vision 
(CHI 2020) pdf
Wayfinding is a critical but challenging task for people who have low vision, a visual impairment that falls short of blindness. Prior wayfinding systems for people with visual impairments focused on blind people, providing only audio and tactile feedback. Since people with low vision use their remaining vision, we sought to determine how audio feedback compares to visual feedback in a wayfinding task. We developed visual and audio wayfinding guidance on smartglasses based on de facto standard approaches for blind and sighted people and conducted a study with 16 low vision participants. We found that participants made fewer mistakes and experienced lower cognitive load with visual feedback. Moreover, participants with a full field of view completed the wayfinding tasks faster when using visual feedback. However, many participants preferred audio feedback because of its shorter learning curve. Based on our findings, we propose design guidelines for wayfinding systems for low vision. 
StairLight: Designing AR Visualizations to Facilitate Stair Navigation for People with Low Vision (UIST 2019) pdf video
Navigating stairs is a dangerous mobility challenge for people with low vision, who have a visual impairment that falls short of blindness. Prior research contributed systems for stair navigation that provide audio or tactile feedback, but people with low vision have usable vision and don’t typically use nonvisual aids. We conducted the first exploration of augmented reality (AR) visualizations to facilitate stair navigation for people with low vision. We designed visualizations for a projection-based AR platform and smartglasses, considering the different characteristics of these platforms. For projection-based AR, we designed visual highlights that are projected directly on the stairs. In contrast, for smartglasses that have a limited vertical field of view, we designed visualizations that indicate the user’s position on the stairs, without directly augmenting the stairs themselves. We evaluated our visualizations on each platform with 12 people with low vision, finding that the visualizations for projection-based AR increased participants’ walking speed. Our designs on both platforms largely increased participants’ self-reported psychological security.  
“It Looks Beautiful but Scary:” How Low Vision People Navigate Stairs and Other Surface Level Changes
(ASSETS 2018)  pdf
 
Walking in environments with stairs and curbs is potentially dangerous for people with low vision. We sought to understand what challenges low vision people face and what strategies and tools they use when navigating such surface level changes. Using contextual inquiry, we interviewed and observed 14 low vision participants as they completed navigation tasks in two buildings and across two city blocks, walking indoors and outdoors and traversing four staircases. We found that surface level changes were a source of uncertainty and even fear for all participants. Aside from the white cane, which many participants did not want to use, participants did not use any technology in the study. Participants mostly relied on their vision, which was exhausting and sometimes deceptive. Our findings highlight the need for systems that support surface level changes and other depth-perception tasks; such systems should account for low vision people's experiences, which are distinct from blind people's, consider their sensitivity to different lighting conditions, and leverage visual enhancements.
ForeSee++: Designing and Evaluating a Customizable Head-Mounted Vision Enhancement System for People with Low Vision (TACCESS 2019) pdf

ForeSee++ is an extension of ForeSee (Zhao, Szpiro, & Azenkot, 2015), a head-mounted system for people with low vision that supports five vision enhancement methods and two display modes. ForeSee++ allows users to customize their visual experience through a set of touch-based interaction techniques on a smartwatch: selecting an enhancement, adjusting the enhancement level, and turning an enhancement on or off. ForeSee++ also supports a set of speech commands for adjusting the enhancements by voice, so users can choose the interaction technique that suits their situation. We evaluated the system with 11 low vision participants, showing that ForeSee++ was an effective tool. All participants could easily customize their visual experience and read small letters at a 10-foot distance, taking on average 12.23 seconds with the smartwatch interactions and 9.06 seconds with speech input.
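
As a loose illustration of how spoken commands could drive the same adjustments as the smartwatch gestures, the sketch below maps a few hypothetical phrases onto enhancement state; the actual command vocabulary of ForeSee++ is not reproduced here.

# Hypothetical mapping from spoken commands to enhancement adjustments; the
# phrases and state fields are assumptions, not ForeSee++'s real vocabulary.
STATE = {"magnification": 1.0, "contrast": 0.0, "edge": False}

def handle_command(text):
    words = text.lower().split()
    if "magnification" in words:
        if "increase" in words:
            STATE["magnification"] = min(STATE["magnification"] + 0.5, 8.0)
        elif "decrease" in words:
            STATE["magnification"] = max(STATE["magnification"] - 0.5, 1.0)
    elif "edge" in words:
        # e.g. "turn edge enhancement on" / "turn edge enhancement off"
        STATE["edge"] = "on" in words or "enable" in words
    return STATE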

A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab     
(CHI 2018)  pdf

We present Accessibility Bot, a Facebook Messenger bot that leverages state-of-the-art computer vision and a user's friends' tagged photos on Facebook to help people with visual impairments recognize their friends. Accessibility Bot provides users with information about the identity, facial expressions, and attributes of friends captured by their phone's camera. We conducted a diary study with six visually impaired participants to study its use in everyday life. While most participants found the Bot helpful, their experience was undermined by perceived low recognition accuracy, difficulty aiming the camera, and lack of knowledge about the phone's status. We discuss these real-world challenges, identify suitable use cases for Accessibility Bot, and distill design implications for future face recognition applications.
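
The Bot itself builds on Facebook's face recognition over friends' tagged photos; purely as an illustration of the general matching idea, the sketch below uses the open-source face_recognition library as a stand-in for enrolling a small gallery of friends' photos and matching a camera frame against it.

# Illustrative only: the open-source face_recognition library stands in here
# for Facebook's internal recognition; paths and names are hypothetical.
import face_recognition

def build_gallery(friend_photos):
    """friend_photos: dict mapping friend name -> path to a tagged photo."""
    gallery = {}
    for name, path in friend_photos.items():
        image = face_recognition.load_image_file(path)
        encodings = face_recognition.face_encodings(image)
        if encodings:
            gallery[name] = encodings[0]
    return gallery

def recognize(frame_path, gallery, tolerance=0.6):
    """Return the names of known friends found in a camera frame."""
    frame = face_recognition.load_image_file(frame_path)
    names = list(gallery.keys())
    found = []
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(
            [gallery[n] for n in names], encoding, tolerance=tolerance)
        found.extend(n for n, m in zip(names, matches) if m)
    return found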

The Effect of Computer-Generated Descriptions on Photo-Sharing Experiences of People with Visual Impairments (CSCW 2018)  pdf

Like sighted people, visually impaired people want to share photographs on social networking services, but find it difficult to identify and select photos from their albums. We addressed this problem by incorporating state-of-the-art computer-generated descriptions into Facebook’s photo-sharing feature. We interviewed 12 visually impaired participants to understand their photo-sharing experiences and designed a photo description feature for the Facebook mobile application. We evaluated this feature with six participants in a seven-day diary study. We found that participants used the descriptions to recall and organize their photos, but they hesitated to upload photos without a sighted person’s input. In addition to basic information about photo content, participants wanted to know more details about salient objects and people, and whether the photos reflected their personal aesthetic. 

Understanding Low Vision People's Visual Perception on Commercial Augmented Reality Glasses (CHI 2017)  pdf

 

People with low vision, who have some functional vision, can benefit greatly from smart glasses that provide dynamic, always-available visual information. We sought to determine what low vision people could see on a mainstream commercial optical see-through AR glasses platform, despite their visual limitations and the device’s constraints. We conducted a study with 20 low vision participants and 18 sighted controls and asked them to identify virtual shapes and text in different sizes, colors, and thicknesses. We also evaluated their ability to see the virtual elements while walking. Our study yielded preliminary evidence that mainstream AR glasses can be powerful accessibility tools. We derive guidelines for presenting visual output for low vision and discuss opportunities for accessibility applications on this platform.

CueSee: Using Visual Cues to Facilitate Product Search for People with Low Vision (UbiComp 2016)  pdf  video   

Shopping is an important daily activity for a productive and independent life. We address the task of locating a specific product on a grocery store or pharmacy shelf, which is a major challenge for low vision people. We designed CueSee, an augmented reality application that runs on a head-mounted display (HMD) and facilitates product search by recognizing the product automatically and using visual cues to direct the user's attention to it. CueSee supports five visual cues: Guideline, Spotlight, Flash, Movement, and Sunrays. Users with different visual conditions can select and combine visual cues according to their preferences.
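
As a rough sketch of how user-selected cues might be layered over a detected product region (the rendering details below are assumptions, not CueSee's implementation):

# Hypothetical compositing of user-selected visual cues over a detected
# product bounding box, using OpenCV for illustration only.
import cv2
import numpy as np

def guideline(frame, box):
    x, y, w, h = box
    cy = y + h // 2
    cv2.line(frame, (0, cy), (frame.shape[1], cy), (0, 255, 0), 3)
    return frame

def spotlight(frame, box):
    x, y, w, h = box
    dimmed = (frame * 0.3).astype(np.uint8)          # darken everything
    dimmed[y:y + h, x:x + w] = frame[y:y + h, x:x + w]  # keep the product bright
    return dimmed

CUES = {"guideline": guideline, "spotlight": spotlight}

def render_cues(frame, box, selected):
    """Apply each cue the user selected, in order."""
    for name in selected:
        frame = CUES[name](frame, box)
    return frame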

ForeSee: A Customizable Head-Mounted Vision Enhancement System for People with Low Vision 
(ASSETS 2015)  pdf  video

Most low vision people have functional vision and would likely prefer to use their vision to access information. We created ForeSee, a head-mounted vision enhancement system with five enhancement methods: Magnification, Contrast Enhancement, Edge Enhancement, Black/White Reversal, and Text Extraction; in two display modes: Full and Window. ForeSee enables users to customize their visual experience by selecting, adjusting, and combining different enhancement methods and display modes in real time. We evaluated ForeSee by conducting a study with 19 low vision participants who performed near- and far-distance viewing tasks to understand the potential of HMDs for low vision.
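
For illustration, the snippet below sketches per-frame versions of several of the enhancement methods named above using OpenCV; the parameters and exact algorithms are assumptions rather than ForeSee's implementation.

# Rough per-frame enhancement sketches, using OpenCV as a stand-in.
import cv2

def magnify(frame, factor=2.0):
    return cv2.resize(frame, None, fx=factor, fy=factor,
                      interpolation=cv2.INTER_LINEAR)

def enhance_contrast(frame):
    # Boost local contrast on the lightness channel only.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def enhance_edges(frame):
    # Overlay detected edges on top of the original frame.
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 100, 200)
    return cv2.addWeighted(frame, 1.0,
                           cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR), 0.7, 0)

def black_white_reversal(frame):
    return cv2.bitwise_not(frame)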

XR Accessibility & Inclusivity

VRBubble: Enhancing Peripheral Awareness of Avatars for People with Visual Impairments in Social Virtual Reality (ASSETS 2022, CHI LBW 2022)  pdf poster

Social Virtual Reality (VR) is increasingly used for remote socializing and collaboration. However, current social VR applications are not accessible to people with visual impairments (PVI) because of their focus on visual experiences. We aim to make social VR more accessible by enhancing PVI's peripheral awareness of surrounding avatar dynamics. We designed VRBubble, an audio-based VR technique that provides information about surrounding avatars based on social distance. Following Hall's proxemic theory, VRBubble divides the social space into three bubbles (Intimate, Conversation, and Social) and generates spatial audio feedback to distinguish avatars in different bubbles and provide suitable avatar information. We provide three audio alternatives: earcons, verbal notifications, and real-world sound effects. PVI can select and combine their preferred feedback alternatives for different avatars, bubbles, and social contexts.
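
As an illustration of the bubble logic, the sketch below classifies avatars by distance and looks up the user's preferred feedback per bubble; the distance thresholds (loosely following Hall's proxemics) and feedback names are assumptions, not VRBubble's actual values.

# Illustrative bubble classification and feedback lookup; thresholds are
# assumed, loosely based on Hall's proxemic distances.
import math

BUBBLES = [("intimate", 0.45), ("conversation", 1.2), ("social", 3.6)]  # meters

def classify(user_pos, avatar_pos):
    d = math.dist(user_pos, avatar_pos)
    for name, radius in BUBBLES:
        if d <= radius:
            return name
    return None  # outside the social bubble: no feedback

def feedback_for(avatar, bubble, preferences):
    """preferences: per-bubble choice, e.g. {"intimate": "sound_effect",
    "conversation": "verbal", "social": "earcon"}."""
    if bubble is None:
        return None
    mode = preferences.get(bubble, "earcon")
    if mode == "verbal":
        return f"{avatar['name']} entered your {bubble} bubble"
    return (mode, avatar["name"])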

"It's Just Part of Me:'' Understanding Avatar Diversity and the Self-presentation of People with Disabilities in Social Virtual Reality (ASSETS 2022)  pdf

In social Virtual Reality (VR), users are embodied in avatars and interact with other users face to face, with avatars as the medium. People with disabilities (PWD) have shown an increasing presence on this new social medium. We seek to explore PWD's avatar perception and disability disclosure preferences in social VR. Our study involved two steps. We first conducted a systematic review of fifteen popular social VR applications to evaluate their avatar diversity and accessibility support. We then conducted an in-depth interview study with 19 participants who had different disabilities to understand their avatar experiences. Our research revealed a number of disability disclosure preferences and strategies adopted by PWD (e.g., selectively revealing disabilities, presenting a capable self), and identified several challenges PWD faced during avatar customization. We further discuss design implications for promoting avatar accessibility and diversity on future social VR platforms.

SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision (CHI 2019)​ 
pdf  video  Open Source on Github

Current virtual reality applications do not support people who have low vision, i.e., vision loss that falls short of complete blindness but is not correctable by glasses. We present SeeingVR, a set of 14 tools that enhance a VR application for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. Nine of our tools modify an existing VR application post hoc via a plugin without developer effort. The rest require simple inputs from developers using a Unity toolkit we created that allows integrating all 14 of our low vision support tools during development. Our evaluation with 11 participants with low vision showed that SeeingVR enabled users to better enjoy VR and complete tasks more quickly and accurately. Developers also found our Unity toolkit easy and convenient to use.

Canetroller: Enabling People with Visual Impairments to Navigate Virtual Reality with a Haptic and Auditory Cane Simulation (CHI 2018)​  pdf  video

We created Canetroller, a haptic cane controller that simulates white cane interactions, enabling people with visual impairments to navigate a virtual environment by transferring their cane skills into the virtual world. Canetroller provides three types of feedback: (1) physical resistance generated by a wearable programmable brake mechanism that physically impedes the controller when the virtual cane comes in contact with a virtual object; (2) vibrotactile feedback that simulates the vibrations when a cane hits an object or touches and drags across various surfaces; and (3) spatial 3D auditory feedback simulating the sound of real-world cane interactions. We designed indoor and outdoor VR scenes to evaluate the effectiveness of our controller. Our study showed that Canetroller was a promising tool that enabled visually impaired participants to navigate different virtual spaces.
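
A minimal, hypothetical dispatch of the three feedback channels on a cane-contact event might look like the sketch below; the device interfaces (brake, vibrotactor, audio engine) are placeholders, not Canetroller's actual code.

# Hypothetical feedback dispatch for a virtual cane-contact event; the
# brake, vibrotactor, and audio objects are placeholder interfaces.
class CanetrollerFeedback:
    def __init__(self, brake, vibrotactor, audio):
        self.brake = brake            # wearable programmable brake
        self.vibrotactor = vibrotactor
        self.audio = audio            # spatial 3D audio engine

    def on_contact(self, contact_point, surface):
        # 1) physical resistance: stop the cane at the virtual object
        self.brake.engage()
        # 2) vibrotactile: play the tap/drag texture for this surface
        self.vibrotactor.play(surface.tap_pattern)
        # 3) spatial audio: render the contact sound at the contact point
        self.audio.play_at(surface.contact_sound, contact_point)

    def on_release(self):
        self.brake.release()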

Privacy & Usable Security around Emerging Technology

“If sighted people know, I should be able to know:” Privacy Perceptions of Bystanders with Visual Impairments around Camera-based Technology. (USENIX 2023)  pdf 

Camera-based technology can be privacy-invasive, especially for bystanders who can be captured by the cameras but do not have direct control or access to the devices. The privacy threats become even more significant to bystanders with visual impairments (BVI) since they cannot visually discover the use of cameras nearby and effectively avoid being captured. While some prior research has studied visually impaired people's privacy concerns as direct users of camera-based assistive technologies, no research has explored their unique privacy perceptions and needs as bystanders. We conducted an in-depth interview study with 16 visually impaired participants to understand BVI's privacy concerns, expectations, and needs in different camera usage scenarios. A preliminary survey with 90 visually impaired respondents and 96 sighted controls was conducted to compare BVI and sighted bystanders' general attitudes towards cameras and elicit camera usage scenarios for the interview study. Our research revealed BVI's unique privacy challenges and perceptions around cameras, highlighting their needs for privacy awareness and protection. We summarized design considerations for future privacy-enhancing technologies to fulfill BVI's privacy needs.

“I was Confused by It; It was Confused by Me:” Exploring the Experiences of People with Visual Impairments around Mobile Service Robots. (CSCW 2022)  pdf 

Mobile service robots have become increasingly ubiquitous. However, these robots can pose potential accessibility issues and safety concerns to people with visual impairments (PVI). We sought to explore the challenges faced by PVI around mainstream mobile service robots and identify their needs. Seventeen PVI were interviewed about their experiences with three emerging robots: vacuum robots, delivery robots, and drones. We comprehensively investigated PVI's robot experiences by considering their different roles around robots---direct users and bystanders. Our study highlighted participants' challenges and concerns about the accessibility, safety, and privacy issues around mobile service robots. We found that the lack of accessible feedback made it difficult for PVI to precisely control, locate, and track the status of the robots. Moreover, encountering mobile robots as bystanders confused and even scared the participants, presenting safety and privacy barriers. We further distilled design considerations for more accessible and safe robots for PVI.

Others

QOOK: Enhancing Information Revisitation for Active Reading with a Paper Book (TEI 2014)  pdf  video

QOOK is an interactive reading system that incorporates the benefits of both physical and digital books to facilitate active reading. Using a Kinect-projector setup, QOOK projects digital content onto a blank paper book. It allows users to flip pages just as they would with a real book and embeds electronic functions such as keyword searching, highlighting, and bookmarking to provide additional digital assistance. Users can invoke these functions directly with their fingers. The combination of electronic functions and free-form interaction creates a natural reading experience, enabling faster navigation between pages and better understanding of the book's contents.

FOCUS: Enhancing Children’s Engagement in Reading by Using Contextual BCI Training Sessions (CHI 2014)    pdf  video

Reading is an important aspect of a child's development, and reading outcomes depend heavily on the level of engagement while reading. We present FOCUS, an EEG-augmented reading system that monitors a child's engagement level in real time and provides contextual BCI training sessions to improve reading engagement. A laboratory experiment was conducted to assess the validity of the system. Results showed that FOCUS significantly improved engagement in terms of both the EEG-based measurement and teachers' subjective assessments of reading outcomes.
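
One widely used EEG engagement index in the literature is beta / (alpha + theta) (Pope et al.); the sketch below computes it from band powers purely as an illustration, and it may not be the exact measure FOCUS uses.

# Illustrative EEG engagement index, beta / (alpha + theta), computed from
# Welch band powers; band limits and windowing are conventional assumptions.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz

def band_power(signal, fs, band):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def engagement_index(signal, fs):
    """signal: 1-D EEG samples from one channel; fs: sampling rate in Hz."""
    theta = band_power(signal, fs, BANDS["theta"])
    alpha = band_power(signal, fs, BANDS["alpha"])
    beta = band_power(signal, fs, BANDS["beta"])
    return beta / (alpha + theta)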

PicoPet: A “Real World” Digital Pet on a Handheld Projector (UIST 2011 Adjunct)  pdf  video

We designed and built PicoPet, a digital pet game played on mobile handheld projectors.

The player can project the pet onto various physical environments, and the pet behaves and evolves differently according to its physical surroundings. PicoPet creates a new form of gaming that blends with the physical world. This connection between the pet and the physical world means that the pet could become incorporated into the player's daily life, as well as reflect their lifestyle. Multiple pets projected by multiple players can also interact with each other, potentially triggering social interactions between players.

Hero: Designing Learning Tools to Increase Parental Involvement in Elementary Education in China
(CHI EA 2013)  pdf  video

 

We designed Hero, a suite of learning tools that guides parental involvement through teacher-created challenges and brings out-of-school learning achievements back into the classroom to increase the effect of at-home education on students' school performance. Hero provides a website for teachers to give educational guidance in the form of challenges and track students' at-home progress, a mobile application for parents to complete the teacher-created challenges with their children anywhere outside of class, and an interactive adventure map used in the classroom by each student to evaluate their outside-of-school and in-class achievements.

Defining and Analyzing a Gesture Set for Interactive TV Remote on Touchscreen Phones (IEEE UIC 2014) pdf

In this study, we recruited 20 participants to perform user-defined gestures on a touchscreen phone for 22 TV remote commands. In total, 440 gestures were recorded, analyzed, and paired with think-aloud data for these 22 referents. After analyzing the gestures using an extended taxonomy of surface gestures and an agreement measure, we present a user-defined gesture set for interactive TV remotes on touchscreen phones. Our study indicated that people preferred single-handed, thumb-based, eyes-free gestures, which require no attention switching while viewing TV. We also found that multi-display interaction was useful for text entry and menu access tasks. Our results contribute to better gesture design for interaction between TVs and touchscreen phones.
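
For reference, the agreement score commonly used in gesture elicitation (in the style of Wobbrock et al., 2009) sums (|Pi|/|P|)^2 over groups of identical proposals for each referent; the snippet below computes it, though the exact variant used in the paper is not spelled out here.

# Agreement score for one referent: sum over identical-gesture groups of
# (group size / total proposals) squared. Example numbers are hypothetical.
from collections import Counter

def agreement(proposals):
    """proposals: one elicited gesture label per participant for a referent.
    Returns the agreement score in [0, 1]."""
    n = len(proposals)
    groups = Counter(proposals)
    return sum((size / n) ** 2 for size in groups.values())

# Hypothetical example: 12 of 20 participants proposed swipe-up, 5 tap, 3 circle.
print(agreement(["swipe_up"] * 12 + ["tap"] * 5 + ["circle"] * 3))  # 0.445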

HouseGenie: Smart Home on Smart Phone
(UbiComp 2011) pdf

 

With their high accessibility and usability, mobile phones are regarded as an ideal interface for monitoring and controlling a smart home environment. We designed HouseGenie, an interactive, direct-manipulation mobile application that supports a range of basic home monitoring and control functions, replacing the individual remotes of smart home appliances. HouseGenie also addresses several common requirements, such as scenarios, short-delay alarms, and area restrictions. HouseGenie not only provides intuitive presentations and interactions for smart home management, but also improves the user experience compared to existing solutions.
