Viva: A Virtual Assistant for the Visually Impaired
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Visual impairment refers to the partial or complete loss of one's ability to see. An estimated 1.3 billion people worldwide live with some form of vision loss. In this work, we present Viva, an Android-based virtual assistant that aims to help people with visual impairment. The application provides haptic and voice navigation assistance by detecting obstacles in the user's surroundings and calculating the potential risk of collision. We present the architecture, as well as a proof-of-concept prototype intended to demonstrate a potential use case for a commercial embedded product that can be integrated into a walking stick or any wearable gadget. The Android application offers features such as a navigation assistant, object detection, a voice-controlled UI, and an emergency assistant. The navigation assistant analyzes the user's surroundings by detecting objects and estimating their distance from the user. The object recognition mode includes a pre-built recognition model that can identify over 100 common objects. The collected data is then processed by a risk-prediction algorithm to calculate the risk of collision, and feedback is provided to the user whenever a potential risk is observed. The UI of the virtual assistant is uniquely designed from the ground up to be intuitive without the need for any visual aids, operated via voice commands or single-point touch control, where the entire screen acts as a soft button. Viva operates in a low-power mode with the screen turned off to make efficient use of the limited battery resources of mobile phones. As a prototype, Viva demonstrates the potential use cases of this idea and can be integrated into other IoT devices such as smart walking sticks or wearable gadgets.
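The risk-prediction step described in the abstract can be sketched as follows. The abstract does not specify the algorithm, so the linear distance-to-risk scoring, the 3 m safe distance, the thresholds, and the feedback channel names below are all illustrative assumptions, not the paper's actual design:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # object class reported by the recognition model
    distance_m: float  # estimated distance from the user, in meters

def collision_risk(det: Detection, safe_distance_m: float = 3.0) -> float:
    """Map an estimated distance to a risk score in [0, 1]:
    1.0 at zero distance, falling linearly to 0.0 at the safe distance."""
    if det.distance_m >= safe_distance_m:
        return 0.0
    return 1.0 - det.distance_m / safe_distance_m

def feedback_channel(risk: float) -> str:
    """Select a feedback channel from the risk score
    (thresholds are hypothetical)."""
    if risk >= 0.7:
        return "haptic+voice"  # imminent obstacle: strongest alert
    if risk >= 0.3:
        return "voice"         # moderate risk: spoken warning
    return "none"              # below threshold: no alert

# Example: a chair detected 1.2 m ahead yields a moderate risk score,
# which this sketch would route to a spoken warning.
risk = collision_risk(Detection("chair", 1.2))
channel = feedback_channel(risk)
```

In the actual system this loop would run continuously over camera frames, with detections produced by the TensorFlow Lite model mentioned in the keywords and alerts delivered through Android's haptic and text-to-speech facilities.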
Android, Haptic feedback, Navigation assistant, Object recognition, TensorFlow Lite, UI, UX, Visually impaired assistance, Voice feedback
Zeeshan Ahmed Pachodiwale, Yugeshwari Brahmankar, Neha Parakh, Dhruvil Patel, and Magdalini Eirinaki. "Viva: A Virtual Assistant for the Visually Impaired" Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (2021): 444-460. https://doi.org/10.1007/978-3-030-78092-0_30