• Title/Summary/Keyword: visual notification


Foot Pressure Mat with Visual Notification for Recognizing and Correcting Foot Pressure Imbalance (시각적 알림이 있는 족저압매트 개발을 통한 족저압 불균형 인지와 즉각적인 교정)

  • Hanna Park;Bonhak Koo;Jinhee Park;Jooyong Kim
    • Journal of Fashion Business, v.28 no.1, pp.83-97, 2024
  • A plantar pressure mat with visual notifications was developed to confirm whether individuals can effectively balance themselves and correct imbalances. The sensor-embedded mat was built from a commercial yoga mat and tested on seven working women in their 30s to measure plantar pressure distribution while standing and squatting and to determine whether they could recognize and correct imbalances using visual feedback. The study found that visual notifications significantly changed both the forefoot-to-hindfoot pressure ratio and the left-to-right pressure ratio. Without notifications, the center of gravity was concentrated more on the rear foot than the forefoot in both standing and squatting positions. With visual notifications, load that had been focused on the rear foot shifted toward the forefoot, yielding a more even pressure distribution across the sole. Likewise, weight that had been loaded mostly on the left foot was redistributed toward the right foot, confirming a more balanced plantar pressure.
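A minimal sketch of the kind of imbalance check such a mat could perform, assuming a simple grid of pressure readings and illustrative threshold values (the sensor layout, ratios, and thresholds are not specified in the abstract):

```python
import numpy as np

def pressure_ratios(grid: np.ndarray):
    """Compute forefoot and left-foot load fractions from a pressure grid.

    Assumes rows run from toes (top) to heel (bottom) and columns from the
    left foot to the right foot; a real mat would report calibrated values.
    """
    total = grid.sum()
    rows, cols = grid.shape
    fore = grid[: rows // 2].sum() / total      # front half of the mat
    left = grid[:, : cols // 2].sum() / total   # left half of the mat
    return fore, left

def needs_notification(grid: np.ndarray, tolerance: float = 0.10) -> bool:
    """Flag an imbalance when either ratio deviates from 50% by more than the tolerance."""
    fore, left = pressure_ratios(grid)
    return abs(fore - 0.5) > tolerance or abs(left - 0.5) > tolerance

# Example: load concentrated on the rear (heel) half triggers a visual notification.
readings = np.array([[1.0, 1.0], [3.0, 3.0]])
print(needs_notification(readings))  # True
```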

Using Freeze Frame and Visual Notifications in an Annotation Drawing Interface for Remote Collaboration

  • Kim, Seungwon;Billinghurst, Mark;Lee, Chilwoo;Lee, Gun
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.12, pp.6034-6056, 2018
  • This paper describes two user studies of remote collaboration between two users with a video conferencing system in which a remote user can draw annotations on live video of the local user's workspace. In both studies the local user controlled the view when sharing the first-person view, but our interfaces also gave the remote user instant control of the shared view. The first study investigates methods for assisting annotation drawing. The auto-freeze method, a novel solution for drawing annotations, is compared to a prior solution (the manual-freeze method) and a baseline (non-freeze) condition. Results show that both local and remote users preferred the auto-freeze method, which is easy to use and allows annotations to be drawn quickly. The manual-freeze method supported precise drawing but was less preferred because it required manual input. The second study explores visual notification for better local user awareness. We propose two designs, the red-box and both-freeze notifications, and compare them to a no-notification baseline. Users preferred the less obtrusive red-box notification, which improved awareness of when remote users made annotations and caused significantly less interruption than the both-freeze condition.
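A rough sketch of the auto-freeze idea under simple assumptions (the paper's actual timing and UI details are not reproduced here): the shared frame is frozen automatically when the remote user starts drawing and released shortly after the stroke ends, while a red-box flag tells the local user that the view is currently annotated.

```python
import time

class AutoFreezeView:
    """Toy model of an auto-freeze annotation view (illustrative only)."""

    def __init__(self, unfreeze_delay: float = 1.0):
        self.unfreeze_delay = unfreeze_delay  # seconds to hold the frame after drawing stops
        self.frozen_frame = None
        self.last_pen_up = None

    def on_new_frame(self, frame):
        """Return the frame to display: live video, or the frozen frame while annotating."""
        if self.frozen_frame is not None:
            if self.last_pen_up and time.time() - self.last_pen_up > self.unfreeze_delay:
                self.frozen_frame = None      # resume live video
            else:
                return self.frozen_frame
        return frame

    def on_pen_down(self, frame):
        """Remote user starts drawing: freeze the current frame automatically."""
        if self.frozen_frame is None:
            self.frozen_frame = frame
        self.last_pen_up = None

    def on_pen_up(self):
        self.last_pen_up = time.time()

    @property
    def show_red_box(self) -> bool:
        """Red-box notification: tell the local user the shared view is frozen/annotated."""
        return self.frozen_frame is not None
```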

Face Recognition and Notification System for Visually Impaired People (시각장애인을 위한 얼굴 인식 및 알림 시스템)

  • Jin, Yongsik;Lee, Minho
    • IEMEK Journal of Embedded Systems and Applications, v.12 no.1, pp.35-41, 2017
  • We propose a face recognition and notification system that transforms visual face information into tactile signals to help visually impaired people. The proposed system consists of a glasses-type camera, a mobile computer, and an electronic cane. The glasses-type camera captures the user's frontal view and sends the image to the mobile computer. The mobile computer searches for a human face in the image when obstacles are detected by ultrasonic sensors. When a face is detected, the mobile computer identifies it, using Adaboost as the detector and compressive sensing as the classifier. After identification, the face information is sent over Bluetooth to a controller attached to the cane. The controller generates Pulse Width Modulation (PWM) motor control signals according to the recognized face label, and a vibration motor produces the corresponding vibration pattern to inform the visually impaired user of the recognition result. Experimental results show that the proposed system is helpful to visually impaired people by identifying the person in front of them.
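A hedged sketch of the final notification step, assuming each recognized identity is mapped to a distinct vibration pattern sent to the cane controller (the pattern lengths and duty cycles below are illustrative, not the paper's values):

```python
# Illustrative mapping from recognized face labels to vibration patterns.
# Each pattern is a list of (duration_ms, duty_cycle) pairs driving the PWM motor.
VIBRATION_PATTERNS = {
    "unknown": [(200, 0.3)],                           # one short, weak pulse
    "person_A": [(150, 0.8), (150, 0.0), (150, 0.8)],  # two strong pulses
    "person_B": [(400, 0.8)],                          # one long, strong pulse
}

def notify(label: str, send_pwm) -> None:
    """Send the vibration pattern for a recognized label to the cane's controller."""
    for duration_ms, duty in VIBRATION_PATTERNS.get(label, VIBRATION_PATTERNS["unknown"]):
        send_pwm(duty, duration_ms)  # e.g., a Bluetooth write to the cane controller

# Usage with a stand-in transport:
notify("person_A", send_pwm=lambda duty, ms: print(f"PWM {duty:.1f} for {ms} ms"))
```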

Bending Strength of Korean Softwood Species for 120×180 mm Structural Members

  • Pang, Sung-Jun;Park, Joo-Saeng;Hwang, Kweon-Hwan;Jeong, Gi-Young;Park, Moon-Jae;Lee, Jun-Jae
    • Journal of the Korean Wood Science and Technology, v.39 no.5, pp.444-450, 2011
  • The goal of this study is to investigate the bending properties of domestic timber. Three representative structural timbers, Larix kaempferi, Pinus koraiensis, and Pinus densiflora, from northeastern South Korea were selected. Visual grading of the timber was conducted based on KFRI notification 2009-01, and bending strength was evaluated based on ASTM D 198. The high percentage of grades 1 and 2 for Larix kaempferi shows that the KFRI notification was optimized for this species. The bending strength distributions of Pinus koraiensis and Pinus densiflora were very similar, so it may be possible to specify allowable bending properties for these two species using a combined species group similar to spruce-pine-fir. Finally, the bending strength of 120 × 180 mm structural members was higher than both the existing values in KBC 2009 and the design values for imported timber species in the NDS. Thus, 120 mm thick domestic softwoods could replace the commercial imported species, and the KBC should be modified to provide design values for both timber and dimensional lumber, as the NDS does.

Design and Implementation of SIP-based Multi-party Conference System Including Presence Service (Presence 서비스를 포함한 SIP 기반의 다자간 컨퍼런스 시스템의 설계 및 구현)

  • Jung Young-Myun;Ko Se-Lyung;Jang Choon-Seo;Jo Hyun-Gyu
    • The Journal of the Korea Contents Association, v.5 no.2, pp.257-266, 2005
  • With the development of the Internet and computer technology, interest has grown in conference services that provide multi-party real-time visual conferencing. In this paper, we have designed and implemented a SIP-based visual conference system that includes a Presence service. The elements of this conference system are the user systems, each with conference UA (User Agent) capability, a presence server, and a conference server. For the presence service, we adopted a publication method that uses the SIP PUBLISH message, through which various user status information can easily be acquired; invitations to and participation in the conference are also easily handled through this service. The conference server, which controls the establishment and management of multi-party connections, includes a conference event package that provides dynamically changing conference and user information through SIP subscription and notification functions.
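As a rough illustration of the presence publication step, the sketch below builds a SIP PUBLISH request carrying a minimal PIDF presence document; the addresses, tags, and header values are placeholders, not values from the paper.

```python
def build_publish(user_uri: str, status: str, call_id: str, cseq: int) -> str:
    """Build a minimal SIP PUBLISH request with a PIDF body (illustrative only)."""
    body = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        f'<presence xmlns="urn:ietf:params:xml:ns:pidf" entity="{user_uri}">\n'
        f'  <tuple id="t1"><status><basic>{status}</basic></status></tuple>\n'
        '</presence>\n'
    )
    headers = (
        f"PUBLISH {user_uri} SIP/2.0\r\n"
        f"From: <{user_uri}>;tag=1234\r\n"
        f"To: <{user_uri}>\r\n"
        f"Call-ID: {call_id}\r\n"
        f"CSeq: {cseq} PUBLISH\r\n"
        "Event: presence\r\n"
        "Content-Type: application/pidf+xml\r\n"
        f"Content-Length: {len(body.encode())}\r\n\r\n"
    )
    return headers + body

# Usage: a conference participant publishes an "open" status before being invited.
print(build_publish("sip:alice@example.com", "open", "abc123@host", 1))
```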


A Real-time Bus Arrival Notification System for Visually Impaired Using Deep Learning (딥 러닝을 이용한 시각장애인을 위한 실시간 버스 도착 알림 시스템)

  • Seyoung Jang;In-Jae Yoo;Seok-Yoon Kim;Youngmo Kim
    • Journal of the Semiconductor & Display Technology, v.22 no.2, pp.24-29, 2023
  • In this paper, we propose a real-time bus arrival notification system using deep learning to guarantee the movement rights of the visually impaired. In modern society, users can quickly obtain public transportation information from vehicle location data and use public transportation easily. However, because the existing public transportation information system is visual, the visually impaired cannot use it. In Korea, various laws have been amended since the 'Act on the Promotion of Transportation for the Vulnerable', which covers the movement rights of the blind, was enacted in June 2012, but the visually impaired still experience inconvenience when using public transportation. In particular, with the current system a visually impaired person cannot tell whether a bus is coming soon, is arriving now, or has already arrived. In this paper, we use deep learning to learn bus route numbers and identify approaching buses, and we propose a method that notifies the visually impaired by voice, using TTS technology, that their bus is coming.
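A hedged sketch of the described flow under stated assumptions: a deep-learning detector (any model trained on bus route numbers) returns candidate numbers from a camera frame, and a generic TTS backend announces matches against the user's desired route. The function names, the detector, and the confidence threshold are placeholders, not the paper's implementation.

```python
def announce_arriving_bus(frame, wanted_route: str, detect_bus_numbers, speak) -> None:
    """Announce by voice when the desired bus route is detected in a camera frame.

    detect_bus_numbers(frame) -> list of (route_number, confidence) is assumed to be
    a deep-learning model trained on bus route-number images; speak(text) is any
    TTS backend (e.g., a platform TTS API).
    """
    for route, confidence in detect_bus_numbers(frame):
        if route == wanted_route and confidence > 0.8:
            speak(f"Bus {route} is arriving now.")
            return

# Usage with stand-in components:
fake_detector = lambda frame: [("720", 0.93), ("101", 0.55)]
announce_arriving_bus(frame=None, wanted_route="720",
                      detect_bus_numbers=fake_detector, speak=print)
```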


Designing a Vibrotactile Reading System for Mobile Phones

  • Chu, Shaowei;Zhu, Keying
    • Journal of Information Processing Systems, v.14 no.5, pp.1102-1113, 2018
  • Vibrotactile feedback is widely used to design non-visual interactions on mobile phones, such as message notification, non-visual reading, and use by blind users. In this work, novel vibrotactile codes are presented to implement a non-visual text reading system for mobile phones. The 26 letters of the English alphabet are arranged in an index table with four rows and seven columns, and each letter is mapped to a code of vibrations. Two kinds of vibrotactile codes are designed using the actuator's on and off states, with specific lengths (short and long) assigned to each state. To improve the efficiency of tactile perception and user satisfaction, three user experiments were conducted. The first experiment explores the maximum number of continuous vibrations and the minimum vibration time of the actuator's on and off states that humans can perceive. The second experiment determines the minimum interval between continuous vibrations. The vibrotactile reading system is designed and evaluated in the third experiment according to the results of the two preceding experiments. Results show that character reading accuracy reaches 91.7% and the reading time is approximately 617.8 ms per character. Our method offers better reading efficiency and is easier to learn than traditional Braille coding.
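A minimal sketch of the row/column coding idea, assuming a 4×7 table filled in alphabetical order and using short pulses for the row index and long pulses for the column index; the exact code assignment and timings in the paper may differ.

```python
import string

# 4 rows x 7 columns, filled alphabetically: A..G in row 0, H..N in row 1, etc.
ROWS, COLS = 4, 7
SHORT_MS, LONG_MS, GAP_MS = 80, 240, 80   # illustrative vibration timings

def letter_to_code(letter: str):
    """Map a letter to its (row, column) position in the 4x7 index table."""
    idx = string.ascii_uppercase.index(letter.upper())
    return idx // COLS, idx % COLS

def letter_to_pulses(letter: str):
    """Encode a letter as (row+1) short pulses followed by (col+1) long pulses."""
    row, col = letter_to_code(letter)
    return [(SHORT_MS, GAP_MS)] * (row + 1) + [(LONG_MS, GAP_MS)] * (col + 1)

# Example: 'J' is the 10th letter -> row 1, column 2 -> 2 short pulses + 3 long pulses.
print(letter_to_code("J"), len(letter_to_pulses("J")))  # (1, 2) 5
```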

A Study on Defect Prediction through Real-time Monitoring of Die-Casting Process Equipment (주조공정 설비에 대한 실시간 모니터링을 통한 불량예측에 대한 연구)

  • Chulsoon Park;Heungseob Kim
    • Journal of Korean Society of Industrial and Systems Engineering, v.45 no.4, pp.157-166, 2022
  • In the die-casting process, defects that are difficult to confirm by visual inspection, such as shrinkage bubbles, may occur due to an error in maintaining the vacuum state. Because these casting defects are discovered only during post-processing operations such as heat treatment or finishing, countermeasures cannot be taken at casting time, which can lead to a large number of defects. In this study, we propose an approach that predicts the occurrence of casting defects by defect type using machine learning, based on casting parameter data collected in real time from the die-casting equipment. Die-casting parameter data can basically be collected through the casting equipment controller. To perform classification analysis for predicting defects by type, the casting parameters must first be labeled. First, the defective data set is separated by primary clustering on the total defect rate obtained during post-processing. Second, secondary cluster analysis is performed on the separated defective data set using the defect rate by type, and labels are assigned by defect type from the clustering result. Finally, a classification model is trained on the entire labeled data set, and a real-time monitoring system for defect prediction was implemented using LabVIEW and Python. When a defect is predicted, the operator is notified, for example through the monitoring screen and an alarm, so that countermeasures can be taken.
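A simplified sketch of the two-stage labeling and classification flow under stated assumptions: scikit-learn KMeans stands in for the clustering steps and a random forest for the classifier, and the data below is synthetic; the paper's specific algorithms and parameters are not named in the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins: casting parameters per shot and post-processing defect rates.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # casting parameters from the controller
total_defect_rate = rng.random(200)           # overall defect rate per shot
type_defect_rates = rng.random((200, 3))      # defect rate per defect type

# 1) Primary clustering on the total defect rate separates the defective shots.
primary = KMeans(n_clusters=2, n_init=10, random_state=0).fit(total_defect_rate.reshape(-1, 1))
defective = primary.labels_ == np.argmax(primary.cluster_centers_.ravel())

# 2) Secondary clustering on per-type defect rates assigns a defect-type label.
labels = np.zeros(len(X), dtype=int)          # 0 = good
secondary = KMeans(n_clusters=3, n_init=10, random_state=0).fit(type_defect_rates[defective])
labels[defective] = secondary.labels_ + 1     # 1..3 = defect types

# 3) Train a classifier on the labeled casting parameters for real-time prediction.
model = RandomForestClassifier(random_state=0).fit(X, labels)
print(model.predict(X[:1]))                   # predicted label for a new shot
```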

A Design and Implementation of Bus Information Notification Application

  • Kang, A-Yeon;Lee, Tae-Hyeon;Lee, Na-Kyung;Lee, Won-Joo
    • Journal of the Korea Society of Computer and Information, v.26 no.6, pp.81-88, 2021
  • In this paper, we design and implement a bus information notification application based on a smartphone's GPS sensor. The application uses the GPS sensor, Google Maps, and open APIs to show bus stops within a 200 m radius of the user's current location. Clicking the marker of a stop shows its name, and clicking the arrival information button shows detailed bus arrival information for that stop. The application also provides a public mask store button that shows the locations of pharmacies, Nonghyup outlets, and post offices that sell public masks, together with the store names and mask inventory. Different icons are used so that public mask sellers and bus stops can be distinguished visually at a glance. To look up stops away from the user's current location, or the route of a particular bus, the user can use the bus stop search button. Finally, after a destination stop or location is saved, the application provides an alarm when the user approaches that location.
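A small sketch of the nearby-stop check, assuming stop coordinates from an open API and using the haversine formula for distance; the function names and sample data are placeholders, and only the 200 m radius comes from the description above.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in metres between two WGS-84 coordinates."""
    R = 6371000.0  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def stops_within(user_lat, user_lon, stops, radius_m=200.0):
    """Return the stops within the given radius of the user's GPS position."""
    return [s for s in stops
            if haversine_m(user_lat, user_lon, s["lat"], s["lon"]) <= radius_m]

# Usage with illustrative stop data (names and coordinates are placeholders).
stops = [{"name": "City Hall", "lat": 37.5663, "lon": 126.9779},
         {"name": "Station", "lat": 37.5700, "lon": 126.9830}]
print(stops_within(37.5665, 126.9780, stops))  # only stops within ~200 m
```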

Analysis of Auditory Information Types in Vehicle based on User Experience of Hearing Impaired Drivers (청각장애 운전자의 사용자경험에 기반한 자동차 내 청각정보 유형 분석)

  • Byun, Jae Hyung
    • Smart Media Journal, v.10 no.1, pp.70-78, 2021
  • Auditory information is used in vehicles for urgent notifications or warnings because, unlike visual information, it is not restricted by direction. However, since hearing-impaired drivers cannot perceive sound signals, various methods of visualizing auditory information have been attempted to replace it. When visualizing auditory information, only important information should be selected and provided, to prevent cognitive overload from being concentrated on vision. For this purpose, the types of auditory information in a vehicle should be analyzed in advance. In this study, the types of in-vehicle auditory information were analyzed based on the user experience of hearing-impaired drivers. Through observation of the driving behavior of hearing-impaired drivers, 33 items of auditory information experienced in the vehicle were collected. The collected items were classified into 12 groups through open card sorting by an expert group, and a typology of in-vehicle auditory information consisting of four levels was presented through a relative comparison of the importance of the groups. The presented typology can be used as a guideline for selecting important information when auditory information is converted into visual or tactile form. This study is meaningful in that the user experience analysis was conducted by observing the daily driving of hearing-impaired drivers.