• Title/Summary/Keyword: Learning System for the Blind

Intelligent Shoes for Detecting Blind Falls Using the Internet of Things

  • Ahmad Abusukhon
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.9, pp.2377-2398, 2023
  • In our daily lives, we engage in a variety of tasks that rely on our senses, such as seeing. Blindness is the absence of the sense of vision. According to the World Health Organization, 2.2 billion people worldwide suffer from various forms of vision impairment. Unfortunately, blind people face a variety of indoor and outdoor challenges on a daily basis, limiting their mobility and preventing them from engaging in other activities. Blind people are very vulnerable to a variety of hazards, including falls. Various barriers, such as stairs, can cause a fall. The Internet of Things (IoT) is used to track falls and send a warning message to the blind person's caretakers. One gap in previous works is that they were unable to differentiate between true and false falls. Treating false falls as true falls results in many false alarms being sent to the caretakers, who may then reject the IoT system. To bridge this gap, this paper proposes an intelligent shoe that can precisely distinguish between false and true falls based on three sensors, namely, a load scale sensor, a light sensor, and a flex sensor. The proposed IoT system is tested in an indoor environment on various fall scenarios using four machine learning models. The results from our system showed an accuracy of 0.96. Compared to the state of the art, our system is simpler and more accurate since it avoids sending false alarms to the caretakers.
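
The abstract does not name the authors' features or models; purely as a hedged illustration, the sketch below trains a random forest on synthetic readings from the three named sensors (load scale, light, flex) to separate true falls from false alarms. All numbers are invented.

```python
# Illustrative sketch only: classify true vs. false falls from the three
# shoe sensors. Feature distributions and the choice of model are
# assumptions, not the authors' code or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical readings: [load_kg, light_level, flex_angle_deg].
# In a true fall the load leaves the sole, the shoe tilts, and the
# light sensor is exposed; in a false fall the foot stays grounded.
true_falls = np.column_stack([rng.normal(5, 2, 200),
                              rng.normal(700, 80, 200),
                              rng.normal(70, 10, 200)])
false_falls = np.column_stack([rng.normal(60, 8, 200),
                               rng.normal(150, 60, 200),
                               rng.normal(15, 8, 200)])
X = np.vstack([true_falls, false_falls])
y = np.array([1] * 200 + [0] * 200)  # 1 = true fall, 0 = false alarm

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```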

OnDot: Braille Training System for the Blind

  • Kim, Hak-Jin; Moon, Jun-Hyeok; Song, Min-Uk; Lee, Se-Min; Kong, Ki-sok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.6, pp.41-50, 2020
  • This paper presents a braille education system that addresses the shortcomings of existing braille learning products. An application dedicated to the blind performs all functions through touch gestures and voice guidance for user convenience, and a braille kit for educational use is produced with an Arduino and 3D printing. The system supports the following functions. First, learning of the most basic braille, such as initial consonants, final consonants, vowels, and abbreviations. Second, checking learned braille by solving quizzes at each step. Third, translation into braille. Experiments confirmed the recognition rate of touch gestures and the accuracy of the braille expression, and text was translated into braille as intended. The system allows blind people to learn braille efficiently.
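
The paper does not publish its Arduino protocol; the following is a minimal sketch, assuming a hypothetical one-byte "dot mask" serial protocol and port, of how a companion app could raise a 6-dot cell on such a kit. The consonant dot patterns are illustrative only.

```python
# Hedged sketch of driving a 6-dot braille cell on an Arduino kit like the
# one described above. The serial port, baud rate, and one-byte dot-mask
# protocol are assumptions for illustration; the paper does not specify them.
import serial  # pip install pyserial

# Dot masks: bit n set means dot n+1 is raised. These Korean initial-consonant
# patterns are illustrative, not the authors' table.
CELLS = {"ㄱ": 0b001000, "ㄴ": 0b001001, "ㄷ": 0b001010}

def show_cell(port: serial.Serial, ch: str) -> None:
    """Send one braille cell to the kit as a single dot-mask byte."""
    port.write(bytes([CELLS[ch]]))

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:  # hypothetical port
    show_cell(port, "ㄱ")
```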

Comparison of Deep Learning Networks in a Voice-Guided Navigation System for the Blind

  • An, Ryun-Hui; Um, Sung-Ho; Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.10a, pp.175-177, 2022
  • This paper introduces a system that helps the blind travel to a destination and compares the performance of three deep neural networks (DNNs) used in the system. The system consists of a smartphone application that finds a route from the current location to the destination using GPS and a navigation API, and a module installed at bus stops that uses a DNN and a bus information API to recognize the approaching bus (type and number) and announce it. To recognize the number of the bus to board, we evaluated Faster R-CNN, YOLOv4, and YOLOv5s; YOLOv5s showed the best performance in both accuracy and speed.
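
The authors' bus-number weights are not public, so this hedged sketch loads the stock COCO-pretrained YOLOv5s via torch.hub and keeps "bus" detections; in the real module, a model trained on bus numbers would replace it, and the cropped boxes would feed the announcement step.

```python
# Sketch of the detection step with YOLOv5s, the network the authors found
# best. Detecting the generic COCO "bus" class stands in for the paper's
# own number-recognition model here.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("bus_stop.jpg")  # hypothetical photo of an approaching bus

# Keep confident bus detections; a number-reading stage would crop these boxes.
det = results.pandas().xyxy[0]
buses = det[(det["name"] == "bus") & (det["confidence"] > 0.5)]
print(buses[["xmin", "ymin", "xmax", "ymax", "confidence"]])
```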

Landmark based Localization System of Mobile Robots Considering Blind Spots

  • Heo, Dong-Hyeog; Park, Tae-Hyoung
    • The Journal of Korea Robotics Society, v.6 no.2, pp.156-164, 2011
  • This paper proposes a localization system for indoor mobile robots. The system includes a camera and artificial landmarks for global positioning, and encoders and gyro sensors for local positioning. A Kalman filter is applied to account for the stochastic errors of all sensors. We also develop a dead reckoning system to estimate the global position when the robot moves through blind spots where it cannot see the artificial landmarks. A learning engine using modular networks is designed to improve the performance of the dead reckoning system. Experimental results are presented to verify the usefulness of the proposed localization system.
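
A one-dimensional toy version of the described fusion: odometry predicts the pose, a landmark observation (when visible) corrects it with a Kalman update, and in blind spots the correction is simply skipped, which is where the paper's learned dead-reckoning model takes over. The noise constants are illustrative, not from the paper.

```python
# Minimal 1-D Kalman sketch: encoder/gyro dead reckoning predicts position,
# a camera fix on an artificial landmark corrects it when one is visible.
x, P = 0.0, 1.0      # position estimate and its variance
Q, R = 0.05, 0.2     # process (odometry) and measurement (camera) noise

def predict(x, P, odom_dx):
    return x + odom_dx, P + Q

def update(x, P, z):
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1 - K) * P

# Landmark measurement is None inside a blind spot.
for step, (odom_dx, landmark_z) in enumerate(
        [(1.0, 1.1), (1.0, None), (1.0, None), (1.0, 4.3)]):
    x, P = predict(x, P, odom_dx)
    if landmark_z is not None:           # landmark visible: apply correction
        x, P = update(x, P, landmark_z)
    print(f"step {step}: x={x:.2f}, var={P:.2f}")
```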

Deep Learning Based Sign Detection and Recognition for the Blind

  • Jeon, Taejae; Lee, Sangyoun
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.2, pp.115-122, 2017
  • This paper proposes a deep learning based sign detection and recognition system for the blind. The proposed system is composed of a sign detection stage and a sign recognition stage. In the detection stage, aggregated channel features are extracted and an AdaBoost classifier is applied to detect regions of interest containing signs. In the recognition stage, a convolutional neural network classifies those regions of interest. The AdaBoost classifier is designed to reduce the number of undetected signs, while the deep learning stage increases recognition accuracy and removes false positives produced in the detection stage. In our experiments, the proposed method reduces the number of false positives more effectively than other methods.
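
A structural sketch of the two-stage design, with a scikit-learn AdaBoost classifier standing in for the aggregated-channel-feature detector and a tiny PyTorch CNN standing in for the recognition network; both are illustrative stand-ins, not the authors' models.

```python
# Stage 1 proposes cheap candidate regions; stage 2 re-scores them with a
# CNN, which is what removes the detector's false positives.
import torch
import torch.nn as nn
from sklearn.ensemble import AdaBoostClassifier

detector = AdaBoostClassifier(n_estimators=50, random_state=0)
# ... detector.fit(channel_features, window_labels) on labeled windows ...

class SignCNN(nn.Module):
    """Tiny CNN that re-scores 32x32 RGB candidate regions."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 8 * 8, n_classes))
    def forward(self, x):
        return self.net(x)

def recognize(candidates, cnn):
    """Stage 2: accept or reject the detector's candidate regions."""
    with torch.no_grad():
        scores = cnn(candidates).softmax(dim=1)
    return scores.argmax(dim=1)

print(recognize(torch.randn(4, 3, 32, 32), SignCNN()))  # dummy candidates
```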

HunMinJeomUm: Text Extraction and Braille Conversion System for the Learning of the Blind

  • Kim, Chae-Ri; Kim, Ji-An; Kim, Yong-Min; Lee, Ye-Ji; Kong, Ki-Sok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.21 no.5, pp.53-60, 2021
  • The number of visually impaired and blind people is increasing, but braille-translated textbooks for them remain scarce, which undermines their right to education despite their will to learn. To guarantee that right, this paper develops a learning system, HunMinJeomUm, that helps them access textbooks, documents, and photographs that are not available in braille without the assistance of others. In our system, a smartphone app and web pages are designed for the accessibility of the blind, and a braille kit is built using an Arduino and braille modules. The system supports the following functions. First, users select the documents or pictures they want, and the system extracts the text using OCR. Second, the extracted text is converted into voice and braille. Third, a membership registration function is provided so that users can review the extracted text. Experiments confirmed that our system generates braille and audio output successfully and provides high OCR recognition rates. The study also found that even completely blind users can easily use the smartphone app.
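
A hedged sketch of the pipeline's software side: OCR with pytesseract (assuming Tesseract and its Korean data are installed), a toy braille mapping, and offline speech via pyttsx3. The two-entry dot table is illustrative; real Korean braille requires full initial/medial/final rules that a library or the paper's system would handle.

```python
# Illustration of the OCR -> braille -> voice flow, not the authors' code.
import pytesseract          # pip install pytesseract (needs tesseract + kor data)
import pyttsx3              # offline text-to-speech stand-in
from PIL import Image

text = pytesseract.image_to_string(Image.open("page.png"), lang="kor")

# Unicode braille: U+2800 plus a 6-bit dot mask. This table covers only a
# demo character or two and is NOT a complete Korean braille table;
# unmapped characters fall back to a blank cell.
DOTS = {"ㄱ": 0b001000, "ㄴ": 0b001001}   # illustrative dot patterns
braille = "".join(chr(0x2800 | DOTS.get(ch, 0)) for ch in text)
print(braille)

engine = pyttsx3.init()     # speak the extracted text aloud
engine.say(text)
engine.runAndWait()
```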

Interface Modeling for Digital Device Control According to Disability Type in Web

  • Park, Joo Hyun; Lee, Jongwoo; Lim, Soon-Bum
    • Journal of Multimedia Information System, v.7 no.4, pp.249-256, 2020
  • Learning methods using various assistive and smart devices have been developed to enable independent learning by the disabled. Pointer control is the most important consideration for the disabled when controlling a device and the contents of an existing graphical user interface (GUI) environment; however, depending on the disability type, using a pointer can be difficult. Although the difficulties differ among blindness, low vision, and upper-limb disability, all of them share problems with the accuracy of selecting and executing objects. We present a pilot multimodal interface solution that enables people with various disability types to control web interactions more easily. First, we classify the types of web interaction performed with digital devices and derive the essential ones. Second, to solve the problems that occur when performing these web interactions, we present the technology required by the characteristics of each disability type. Finally, we propose a pilot multimodal interface solution for each disability type. We identified three disability types and developed a solution for each: a remote-control voice interface for blind people, a voice output interface applying a selective focusing technique for people with low vision, and a gaze-tracking and voice-command interface for GUI operations for people with upper-limb disability.
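
As a toy illustration of "essential web interactions" shared across input modalities, the sketch below routes recognized voice commands through a small dispatch table; the command names and handlers are assumptions, not the paper's pilot implementation.

```python
# Hypothetical dispatcher: every modality (voice, gaze, remote control)
# ultimately triggers the same small set of essential web interactions.
from typing import Callable

def click_focused():  print("click the focused element")
def next_element():   print("move focus to the next element")
def scroll_down():    print("scroll the page down")
def read_aloud():     print("speak the focused element's text")

COMMANDS: dict[str, Callable[[], None]] = {
    "select": click_focused,
    "next":   next_element,
    "down":   scroll_down,
    "read":   read_aloud,   # selective-focusing voice output for low vision
}

def on_voice_command(utterance: str) -> None:
    handler = COMMANDS.get(utterance.strip().lower())
    handler() if handler else print(f"unrecognized: {utterance!r}")

on_voice_command("next")
on_voice_command("select")
```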

Visual Analysis of Deep Q-network

  • Seng, Dewen; Zhang, Jiaming; Shi, Xiaoying
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.3, pp.853-873, 2021
  • In recent years, deep reinforcement learning (DRL) models have attracted great interest owing to their success in a variety of challenging tasks. The Deep Q-Network (DQN) is a widely used DRL model that trains an intelligent agent to execute optimal actions while interacting with an environment, and it is well known for surpassing skilled human players across many Atari 2600 games. Although DQN has achieved excellent performance in practice, a clear understanding of why the model works is still lacking. In this paper, we present a visual analytics system for understanding the Deep Q-Network in a non-blind manner. Based on data stored during the training and testing process, four coordinated views are designed to expose the internal execution mechanism of DQN from different perspectives. We report the system's performance and demonstrate its effectiveness through two case studies. Using our system, users can learn the relationship between states and Q-values, the function of the convolutional layers, the strategies learned by DQN, and the rationality of the decisions made by the agent.
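
A minimal sketch of the kind of data such a visual analytics system consumes: during evaluation, each state is logged together with the Q-network's per-action values so the state/Q-value relationship can be plotted later. The tiny network and random states below are placeholders for a trained DQN and Atari frames.

```python
# Log (state, Q-values) pairs during evaluation for later visualization.
import torch
import torch.nn as nn

n_actions = 4
q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, n_actions))

log = []                                  # data the coordinated views would read
for _ in range(100):
    state = torch.randn(8)                # stand-in for a preprocessed game state
    with torch.no_grad():
        q = q_net(state)                  # per-action Q-values for this state
    log.append((state.numpy(), q.numpy()))

greedy = [qs.argmax() for _, qs in log]   # the agent's decision in each state
print("action distribution:", {int(a): greedy.count(a) for a in set(greedy)})
```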

Development of Driver's Safety/Danger Status Cognitive Assistance System Based on Deep Learning

  • Miao, Xu; Lee, Hyun-Soon; Kang, Bo-Yeong
    • The Journal of Korea Robotics Society, v.13 no.1, pp.38-44, 2018
  • In this paper, we propose an Intelligent Driver Assistance System (I-DAS) for driver safety. The proposed system recognizes safe and dangerous states by analyzing blind spots that the driver cannot see when the head turns at a large angle from the front. Most studies use image pre-processing, such as face detection, to collect information about the driver's head movement. This not only increases the computational complexity of the system but also decreases recognition accuracy, because the image-processing system does not use the entire image of the driver's upper body in the driver's seat when the head turns at a large angle from the front. The proposed system replaces face detection with a convolutional neural network that takes the entire image of the driver's upper body as input. High accuracy can therefore be maintained without image pre-processing even when the driver turns the head at a large angle from the frontal gaze position. Experimental results show that the proposed system accurately recognizes dangerous conditions in the blind zone during driving, with 95% recognition accuracy over five drivers.
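
A hedged sketch of the key design choice: feed the whole upper-body frame to a CNN that outputs safe/danger logits directly, with no face-detection pre-processing. The input size and layer shapes are illustrative, not the paper's architecture.

```python
# Whole-frame classification instead of a face-detection pipeline.
import torch
import torch.nn as nn

class IDASNet(nn.Module):
    """Toy stand-in for the paper's CNN: upper-body frame -> safe/danger."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Linear(32 * 4 * 4, 2)   # logits: [safe, danger]

    def forward(self, x):                      # x: full upper-body frame
        return self.head(self.features(x))

frame = torch.randn(1, 3, 224, 224)            # no face-detection cropping
print(IDASNet()(frame).softmax(dim=1))
```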

Blind Image Quality Assessment on Gaussian Blur Images

  • Wang, Liping; Wang, Chengyou; Zhou, Xiao
    • Journal of Information Processing Systems, v.13 no.3, pp.448-463, 2017
  • Multimedia such as audio, images, and video is a ubiquitous and indispensable part of our daily life and learning. Objective and subjective quality evaluation plays an important role in various multimedia applications. Blind image quality assessment (BIQA) indicates the perceptual quality of a distorted image without considering or using its reference image. Blur is one of the most common image distortions. In this paper, we propose a novel BIQA index for Gaussian blur distortion based on the fact that images with different degrees of blur change differently when passed through the same blur. We describe this discrimination from three aspects: color, edge, and structure. For color, we adopt a color histogram; for edge, we use an edge intensity map with a saliency map as the weighting function, consistent with the human visual system (HVS); for structure, we use the structure tensor and the structural similarity (SSIM) index. Numerous experiments on four benchmark databases show that the proposed index is highly consistent with subjective quality assessment.
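
A small illustration of the re-blur principle the abstract states, using scikit-image: blur the test image again and measure how much its structure (SSIM) and color histogram change; a sharp image changes more than an already-blurred one. SSIM and a plain histogram stand in for the paper's full color/edge/structure features, and the sigma and bin counts are arbitrary choices here.

```python
# Re-blur discrimination: sharper inputs change more under the same blur.
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gaussian
from skimage.metrics import structural_similarity as ssim

def reblur_features(img, sigma=2.0):
    blurred = gaussian(img, sigma=sigma, channel_axis=-1)
    s = ssim(img, blurred, channel_axis=-1, data_range=1.0)  # structure change
    h1, _ = np.histogram(img, bins=32, range=(0, 1))
    h2, _ = np.histogram(blurred, bins=32, range=(0, 1))
    color_shift = np.abs(h1 - h2).sum() / img.size           # color change
    return s, color_shift

sharp = img_as_float(data.astronaut())
blurry = gaussian(sharp, sigma=3.0, channel_axis=-1)
print("sharp :", reblur_features(sharp))    # lower SSIM, bigger color shift
print("blurry:", reblur_features(blurry))   # higher SSIM, smaller shift
```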