
Enhancing Revisitation in Touchscreen Reading for Visually Impaired People with Semantic Navigation Design

Advisor(s)

Prof. Chun Yu and Prof. Yuanchun Shi

Tsinghua University Pervasive Computing Group, Beijing

Status

Received a Revise and Resubmit decision at CHI '22 (second author)

Duration

August 2020 - September 2021

Role

Team member (Android development, hardware design and fabrication, user experiments, paper drafting)


Revisitation, the process of non-linearly returning to previously visited regions, is an important task in academic reading. However, listening to content on a mobile phone via a screen reader fails to support eyes-free revisitation because of its linear audio stream and its ineffective text organization and interaction. To enhance the efficiency and experience of eyes-free revisitation, we identified visually impaired people's behaviors and difficulties during text revisiting through a survey (N=37) and an observation study (N=12). From these findings we derived a set of design guidelines targeting high precision, high flexibility, and low workload in interaction, and we iteratively designed and developed a prototype reading application. Our application supports a dynamic text structure and supplements it with both linear and non-linear layered text navigation. Evaluation results (N=8) showed that, compared with existing methods, our prototype improved the clarity of text understanding and the fluency of revisiting while reducing workload.

We identified challenges and needs in eyes-free touchscreen reading through interviews and focus-group studies with visually impaired readers. Then, based on the design guidelines we summarized, we iteratively developed a semantically aware reading tool that supports dynamic text segmentation.
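As a rough illustration of the idea of layered, semantically aware navigation (not the actual implementation, which is Android-based and not detailed here), a document can be modeled as a tree of semantic units so that a reader can move through it at section, paragraph, or sentence granularity. All names below (`Node`, `units_at`, `full_text`) are hypothetical:

```python
# Hypothetical sketch: a layered semantic text structure that lets a
# screen-reader user navigate at different granularities. Illustrative
# only; names and structure are assumptions, not from the paper.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """One semantic unit: a section, paragraph, or sentence."""
    level: str                                  # e.g. "section", "paragraph", "sentence"
    text: str = ""                              # filled in only for leaf (sentence) nodes
    children: List["Node"] = field(default_factory=list)


def full_text(node: Node) -> str:
    """Concatenate all sentence text beneath a node."""
    if not node.children:
        return node.text
    return " ".join(full_text(child) for child in node.children)


def units_at(node: Node, level: str) -> List[str]:
    """Return the text units at the requested granularity, which is what
    a layered navigation scheme would step through."""
    if node.level == level:
        return [full_text(node)]
    units: List[str] = []
    for child in node.children:
        units.extend(units_at(child, level))
    return units


# A tiny example document: one section, two paragraphs, three sentences.
doc = Node("section", children=[
    Node("paragraph", children=[
        Node("sentence", text="A."),
        Node("sentence", text="B."),
    ]),
    Node("paragraph", children=[
        Node("sentence", text="C."),
    ]),
])

print(units_at(doc, "sentence"))   # ['A.', 'B.', 'C.']
print(units_at(doc, "paragraph"))  # ['A. B.', 'C.']
```

Switching the `level` argument corresponds to switching navigation layers: coarse jumps between sections for fast relocation, fine steps between sentences for precise revisiting.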

Walkthrough of the app structure


Text page

Menu page


Video demo of the prototype App


The paths of the user's fingers on the screen demonstrate reading behaviors including (1) sequential touch-reading; (2) locating using the line numbers; (3) non-linear movements to find keywords; and (4) using the progress bar to jump.


Evaluation of our Semantic Reader for the Visually Impaired (SRVI) against WeChat Reading, the most commonly used touchscreen reading app.
