Tech Science Press
Computers, Materials & Continua
DOI: 10.32604/cmc.2022.020827

Article

Deep Learning Based Audio Assistive System for Visually Impaired People

S. Kiruthika Devi* and C. N. Subalalitha
Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, 603203, India
*Corresponding Author: S. Kiruthika Devi. Email: kiruthis2@srmist.edu.in
Received: 10 June 2021; Accepted: 05 August 2021

Abstract: Vision impairment is a prevalent problem that affects numerous people across the globe. Technological advancements, particularly the rise of computer processing capabilities such as Deep Learning (DL) models and the emergence of wearables, pave the way for assisting visually-impaired persons. Models developed earlier specifically for visually-impaired people work effectively on single-object detection in an unconstrained environment. But in real-time scenarios, these systems are inconsistent in providing effective guidance to visually-impaired people. In addition to object detection, extra information about the location of objects in the scene is essential for visually-impaired people. Keeping this in mind, the current research work presents an Efficient Object Detection Model with Audio Assistive System (EODM-AAS) using a DL-based YOLO v3 model for visually-impaired people. The aim of this article is to construct a model that can provide a detailed description of the objects around visually-impaired people. The presented model employs a DL-based YOLO v3 model for multi-label object detection. Beyond detection, the model determines the position of each object in the scene and finally generates an audio signal to notify the visually-impaired user. In order to validate the detection performance of the presented method, a detailed simulation analysis was conducted on four datasets. The simulation results established that the presented model outperforms existing methods.
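The pipeline summarized above — detect objects, infer where each one sits in the frame, and turn the result into a spoken description — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the detections are assumed to arrive from a YOLO v3 model as (label, confidence, bounding-box) tuples, and the helper names, region thresholds, and the size-based near/far heuristic are hypothetical choices for the sketch.

```python
def object_position(bbox, frame_width, frame_height):
    """Map a bounding-box centre to a coarse spatial region of the frame.

    bbox is (x, y, w, h) in pixels, with (x, y) the top-left corner.
    Returns a (horizontal, distance) pair such as ("left", "far").
    """
    x, y, w, h = bbox
    cx = x + w / 2  # horizontal centre of the box

    # Split the frame into three vertical bands: left, centre, right.
    if cx < frame_width / 3:
        horizontal = "left"
    elif cx > 2 * frame_width / 3:
        horizontal = "right"
    else:
        horizontal = "centre"

    # Crude depth proxy: a box covering a large share of the frame is
    # assumed to be close to the camera (threshold chosen arbitrarily).
    area_ratio = (w * h) / (frame_width * frame_height)
    distance = "near" if area_ratio > 0.2 else "far"
    return horizontal, distance


def describe_detections(detections, frame_width=640, frame_height=480):
    """Turn YOLO-style detections into a single descriptive sentence."""
    phrases = []
    for label, confidence, bbox in detections:
        horiz, dist = object_position(bbox, frame_width, frame_height)
        if horiz == "centre":
            phrases.append(f"{label} {dist} ahead")
        else:
            phrases.append(f"{label} {dist} on your {horiz}")
    return "; ".join(phrases)


detections = [
    ("chair", 0.92, (50, 200, 120, 180)),
    ("person", 0.88, (400, 100, 200, 380)),
]
print(describe_detections(detections))
# → chair far on your left; person near on your right
```

In a complete system the returned string would be fed to a text-to-speech engine to produce the audio notification; that step is omitted here to keep the sketch self-contained.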
Keywords: Deep learning; visually impaired people; object detection; YOLO v3

1 Introduction

In recent times, Artificial Intelligence (AI) models have started yielding better outcomes in the form of voice-based virtual assistants such as Siri and Alexa [1], autonomous vehicles (Tesla), robotics (car manufacturing), and automated translation (Google Translate). In line with this, AI-based solutions have been introduced in assistive technologies, especially in guiding visually-impaired or blind people. Mostly, the systems mentioned above overcome independent-navigation issues with the help of portable assistive tools such as infrared sensors, ultrasound sensors, Radio Frequency Identification (RFID), Bluetooth Low Energy (BLE) beacons and cameras. Followed

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.