Paper Title
SANIP: Shopping Assistant and Navigation for the visually impaired
Paper Authors
Paper Abstract
The proposed shopping assistant model SANIP is designed to help blind persons detect hand-held objects and to provide video feedback of the information retrieved from the detected and recognized objects. The proposed model consists of three Python models: custom object detection, text detection, and barcode detection. For object detection of hand-held objects, we created our own custom dataset comprising daily goods such as Parle-G, Tide, and Lays. In addition, we collected images of cart and exit signs, since it is essential for any shopper to use a cart and to notice the exit sign in case of emergency. For the other two models, the retrieved text and barcode information is converted from text to speech and relayed to the blind person. The model was evaluated on the objects it was trained on and successfully detected and recognized the desired output with good accuracy and precision.
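The abstract does not specify how the decoded barcode data is validated before being converted to speech. As a minimal, hypothetical sketch of one standard post-detection step (not a description of SANIP's actual pipeline), an EAN-13 check-digit validation in Python could look like this:

```python
def ean13_is_valid(code: str) -> bool:
    """Validate an EAN-13 barcode string using its check digit.

    EAN-13 rule: weight digits in odd positions (1-indexed) by 1 and
    digits in even positions by 3; the weighted sum of all 13 digits
    must be divisible by 10.
    """
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # i is 0-indexed, so even i corresponds to odd (weight-1) positions.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0


# Example: a well-formed EAN-13 passes; corrupting its last digit fails.
print(ean13_is_valid("4006381333931"))  # valid code -> True
print(ean13_is_valid("4006381333930"))  # wrong check digit -> False
```

A check like this lets the assistant reject misreads from a noisy camera frame before looking up the product and speaking its name.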