• Title/Summary/Keyword: human adaptive interface

22 results

Development of Smart Driving System Using iPod and Its Performance Evaluation for People with Severe Physical Disabilities in the Driving Simulator

  • Jung, Woo-Chul; Kim, Yong-Chul
    • Journal of the Ergonomics Society of Korea / v.31 no.5 / pp.637-646 / 2012
  • Objective: The aim of this study was to develop an adaptive driving device for people with severe physical disabilities using a smart device in a driving simulator, and to evaluate its performance. Developing appropriate adaptive driving devices for people with serious physical limitations could help maintain their community mobility. Background: Adaptive driving devices for people with disabilities are scarce in Korea. However, if smart devices such as the iPod and iPhone are used to drive a car, people with serious physical limitations can improve their community mobility. Method: The gyroscope and accelerometer of an iPod were used to measure the tilt angle of the smart device for driving. A customized LabVIEW program controlled three axis motors for the steering wheel, accelerator and brake pedals. Thirteen subjects participated in the performance evaluation of the smart device in the simulator: five had driver's licenses, four did not, and the remaining four were people with disabilities. Results: The average driving score of the normal group with driver's licenses was 46.6% higher than that of the normal group without licenses and 30.4% higher than that of the disabled group (p<0.01). There was no significant difference in average driving score between the normal group without licenses and the disabled group (p>0.05). Conclusion: The normal group with driver's licenses scored significantly higher than the other groups. The normal group without licenses and the disabled group could improve their driving skills with training in the simulator. Application: If follow-up studies are continued and applied to adapted vehicles in on-road environments, many people with more severe disabilities could drive and improve their quality of life.
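
The abstract does not spell out the tilt computation, but a minimal sketch of how a tilt angle is typically derived from a smart device's accelerometer, smoothed with a complementary filter using the gyroscope, and mapped to a steering command might look like the following. The function names, filter coefficient and full-lock angle are illustrative assumptions, not the authors' implementation.

```python
import math

def accel_tilt_deg(ax, ay, az):
    """Tilt (roll, pitch) in degrees from raw accelerometer axes.
    Assumes the device is quasi-static so gravity dominates."""
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    return roll, pitch

def complementary_filter(angle_prev, gyro_rate_dps, accel_angle_deg,
                         dt, alpha=0.98):
    """Blend the integrated gyro rate (smooth, but drifts) with the
    accelerometer angle (noisy, but drift-free). alpha is illustrative."""
    return alpha * (angle_prev + gyro_rate_dps * dt) + (1 - alpha) * accel_angle_deg

def steering_command(roll_deg, full_lock_deg=45.0):
    """Map a filtered roll angle to a steering command in [-1, 1],
    saturating at an assumed +/-45 degree full lock."""
    return max(-1.0, min(1.0, roll_deg / full_lock_deg))
```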

Driving Performance Evaluation Using Bio-signals from the Prefrontal Lobe in the Driving Simulator

  • Kim, Young-Hyun; Kim, Yong-Chul
    • Journal of the Ergonomics Society of Korea / v.31 no.2 / pp.319-325 / 2012
  • Objective: The aim of this study was to develop an assistive device for the accelerator and brake pedals using bio-signals from the prefrontal lobe in a driving simulator, and to evaluate its performance. Background: Assistive driving devices for people with disabilities are scarce in Korea. However, if bio-signals and/or brain waves are used to drive a car, people with serious physical limitations can improve their community mobility. Method: 15 subjects with driver's licenses participated in a driving performance evaluation in the simulator. Each subject drove the same course 10 times, in three separate groups that used different interface controllers to accelerate and brake: (1) a conventional pedal group, (2) a joystick group and (3) a bio-signal group (horizontal quick glances of the eyes and teeth clenching). All experiments were recorded and driving performance was evaluated by three inspectors. Results: The average driving score of the bio-signal group in the simulator was 3% higher than that of the pedal group and 9% higher than that of the joystick group (p<0.01). The subjects using bio-signals incurred 44% fewer deductions than the others because the device had a built-in modified cruise control. Conclusion: The assistive device for the accelerator and brake pedals using bio-signals performed significantly better than the conventional pedal and the joystick interface (p<0.01). Application: This study can be used to design adaptive vehicles for drivers with disabilities.
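
The paper does not describe its signal processing, but a hedged sketch of the kind of threshold-based burst detection commonly used to turn a teeth-clench artifact in a prefrontal bio-signal into a discrete accelerate/brake command could look like this. The window length, the factor k and the toy data are assumptions for illustration only.

```python
import numpy as np

def clench_events(signal, fs, win_s=0.1, k=4.0):
    """Detect bursts (e.g., teeth-clench artifacts) in a 1-D bio-signal
    by comparing a short-window RMS envelope against k times the running
    baseline RMS. Returns sample indices where a burst starts."""
    win = max(1, int(win_s * fs))
    sq = np.convolve(signal.astype(float) ** 2,
                     np.ones(win) / win, mode="same")
    rms = np.sqrt(sq)
    baseline = np.median(rms)              # robust resting level
    above = rms > k * baseline
    # Rising edges only: sample i starts an event if i-1 was below
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Toy usage: 2 s of noise at 512 Hz with one simulated clench burst.
fs = 512
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 2 * fs)
x[fs:fs + 100] += rng.normal(0, 8, 100)    # burst
print(clench_events(x, fs))                # one index near the burst onset
```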

Development and Evaluation of Smart Secondary Controls Using iPad for People with Hemiplegic Disabilities

  • Song, Jeongheon; Kim, Yongchul
    • Journal of the Ergonomics Society of Korea / v.34 no.2 / pp.85-101 / 2015
  • Objective: The purpose of this study was to develop and evaluate smart secondary controls using an iPad for drivers with physical disabilities in a driving simulator. Background: Physically disabled drivers face problems operating secondary control devices, which accept control input from the driver to operate the subsystems of a motor vehicle. Many conventional secondary controls consist of small knobs or switches that physically disabled drivers have difficulty grasping, pulling or twisting, so their use while driving can increase distraction and workload because of longer operation times. Method: We examined the operation time of conventional and smart secondary controls, such as the hazard warning, turn signal, window, windshield wiper, headlights, automatic transmission and horn. The hardware of the smart secondary control system was composed of an iPad, a wireless router, a digital input/output module and relay switches. We used the STISim Drive3 software for the driving test, and customized LabVIEW and Xcode programs for interface control of the smart secondary system. Nine subjects participated in the measurement of secondary control operation times. Results: With the driver stationary, the average operation time of the smart secondary devices decreased by 32.5% in the normal subjects (p<0.01), 47.4% in the subjects with left hemiplegic disabilities (p<0.01) and 38.8% in the subjects with right hemiplegic disabilities (p<0.01) compared with the conventional secondary devices. While driving the test course in the simulator, the average operation time of the smart secondary devices decreased by 36.1% in the normal subjects (p<0.01), 41.7% in the subjects with left hemiplegic disabilities (p<0.01) and 34.1% in the subjects with right hemiplegic disabilities (p<0.01) compared with the conventional secondary devices. Conclusion: The smart secondary devices using an iPad significantly reduced operation time for people with hemiplegic disabilities compared with the conventional secondary controls. Application: This study can be used to design secondary controls for adaptive vehicles and to improve the quality of life of people with disabilities.
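
The hardware chain (iPad → wireless router → digital I/O module → relay switch) suggests a simple networked command protocol behind the touch interface. Purely as a sketch of what such glue logic might look like, here is an assumed minimal TCP sender mapping a touch control to a relay channel; the host, port, channel map and message format are all hypothetical and are not the paper's actual LabVIEW/Xcode interface.

```python
import socket

# Hypothetical mapping from secondary controls to relay channels.
RELAY_CHANNEL = {
    "hazard": 0, "turn_left": 1, "turn_right": 2,
    "window_up": 3, "window_down": 4, "wiper": 5,
    "headlights": 6, "horn": 7,
}

def send_relay_command(control, state, host="192.168.0.10", port=5020):
    """Send an assumed ASCII 'SET <channel> <0|1>' command to a digital
    I/O module on the local network. The format is illustrative."""
    ch = RELAY_CHANNEL[control]
    msg = f"SET {ch} {1 if state else 0}\n".encode("ascii")
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(msg)

# A touch handler for the horn button might call:
# send_relay_command("horn", True)    # press
# send_relay_command("horn", False)   # release
```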

The Modified Block Matching Algorithm for a Hand Tracking of an HCI system

  • Kim, Jin-Ok
    • Journal of Internet Computing and Services / v.4 no.4 / pp.9-14 / 2003
  • A GUI (graphical user interface) has been the dominant platform for HCI (human-computer interaction). GUI-based interaction has made computers simpler and easier to use, but it does not easily support the range of interaction needed to meet users' needs for natural, intuitive and adaptive behavior. In this paper, a modified BMA (block matching algorithm) is proposed to track a hand in an image sequence and to recognize it in each video frame, in order to replace the mouse as a pointing device for virtual reality. An HCI system running at 30 frames per second is realized. The modified BMA estimates the position of the hand and performs segmentation using the orientation of motion and the color distribution of the hand region for real-time processing. Experimental results show that the modified BMA with the YCbCr (luminance Y, component blue, component red) color coordinate guarantees real-time processing and a good recognition rate. Hand tracking by the modified BMA can be applied to virtual reality, games or HCI systems for the disabled.
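
As a rough illustration of the kind of block matching the paper modifies, the sketch below performs an exhaustive SAD (sum of absolute differences) search for a hand block in the next frame, optionally gated by a simple YCbCr skin mask standing in for the paper's color-distribution constraint. The block size, search radius and skin thresholds are generic textbook values, not the paper's.

```python
import numpy as np

def skin_mask_ycbcr(ycbcr):
    """Crude skin segmentation in YCbCr; thresholds are generic
    textbook values, not the paper's."""
    cb = ycbcr[..., 1].astype(int)
    cr = ycbcr[..., 2].astype(int)
    return (77 < cb) & (cb < 127) & (133 < cr) & (cr < 173)

def sad_block_match(prev_y, next_y, top, left, block=16, radius=8,
                    skin=None, min_skin=0.5):
    """Exhaustive SAD search for the block at (top, left) of the previous
    frame within +/-radius pixels in the next frame. If a skin mask for
    the next frame is given, candidates whose skin fraction is below
    min_skin are skipped."""
    ref = prev_y[top:top + block, left:left + block].astype(np.int32)
    h, w = next_y.shape
    best, best_dy, best_dx = None, 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + block > h or l + block > w:
                continue
            if skin is not None and skin[t:t + block, l:l + block].mean() < min_skin:
                continue
            cand = next_y[t:t + block, l:l + block].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if best is None or sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx
```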

A Study on the Interactive Architecture in Nature Environment

  • Baek, Seung-Man
    • Journal of the Regional Association of Architectural Institute of Korea / v.20 no.6 / pp.41-46 / 2018
  • The context of innovation in which we evolve today places us in a spatial reality and virtuality (digital) that aims less and less to interact with the natural processes that could converge into new possible relationships with the world. We constantly live amid fluctuations and imperceptible natural energies (wind, solar radiation, etc.) defined by flows, with their own physicality, which remain elusive without being virtual. This study first outlines how these energies, already exploited for production, could be thought of as an interactive dimension of our habitat's space, a prolongation of a physical and material environment built by men and for men, giving rise to new social and cultural dynamics and making the natural complexity of our space vivid and comprehensible through new visual and physical clues. Today, as lifestyles change, architecture no longer needs to limit its scope of creation to built structures alone. Based on a deeper understanding of humans and through potential advanced technologies (kinetic systems, etc.), it is time to fundamentally diagnose which environments or devices contribute to our lives. Architecture becomes an «interface», steps up its fundamental role, and newly defines the sturdy image and tectonics of the existing environment, establishing a stance from which to search for a new typology. In the end, a building will show two simultaneous and distinctive connections related to its physical existence: reality in its function and irreducibility, and the ability to forge new dynamic connections with its environment, hybridizing the spatial dimension into a new form of physicality, adaptive and incessantly flexible in time, becoming a vessel for ever-changing contemporary lifestyles.

Accelerometer-based Gesture Recognition for Robot Interface

  • Jang, Min-Su; Cho, Yong-Suk; Kim, Jae-Hong; Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 letters of the English alphabet, one of the essential repertoires for robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8~10 simple, easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for successful pattern recognition. To promote discriminative power over complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features, e.g. the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly among performers. To tackle this problem, online incremental learning is applied to make our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that as the number of reference patterns grows, some reference patterns contribute more to false positive classification. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; it runs periodically to remove reference patterns with a very low positive contribution or a high negative contribution.
Experiments were performed on 6500 gesture patterns collected from 50 adults aged 30~50. Each letter was performed 5 times per participant using a Nintendo® Wii™ remote. The acceleration signal was sampled at 100Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and exhibited very high pairwise confusion rates. Major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%); though W was recalled perfectly, it contributed much to the false positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures) and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone™. The participating children exhibited improved concentration and active reaction to the service with our gesture interface. To prove the effectiveness of the gesture interface, the children took a test after experiencing an English teaching service. Those who played with the gesture interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g. touch screen, vision and voice.
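
The pruning rule described above can be sketched as follows: each reference pattern accumulates a positive count (times it was the nearest neighbor and produced the correct label) and a negative count (times it produced a wrong label), and patterns with low positive or high negative contribution are dropped periodically. The data structure and thresholds below are illustrative assumptions, not the authors' exact algorithm.

```python
from dataclasses import dataclass

@dataclass
class RefPattern:
    label: str              # the letter this instance encodes
    feature: tuple          # e.g. a quantized motion trajectory
    pos: int = 0            # wins that produced the correct label
    neg: int = 0            # wins that produced a wrong label

def record_match(ref, true_label):
    """Update contribution counts after ref won the nearest-neighbor vote."""
    if ref.label == true_label:
        ref.pos += 1
    else:
        ref.neg += 1

def prune(refs, min_pos=1, max_neg=3, min_uses=5):
    """Periodically drop reference patterns whose positive contribution
    is very low or whose negative contribution is high; thresholds are
    illustrative. Rarely matched patterns are kept until judged."""
    kept = []
    for r in refs:
        if r.pos + r.neg < min_uses:
            kept.append(r)          # not enough evidence yet
        elif r.pos >= min_pos and r.neg <= max_neg:
            kept.append(r)
        # else: dropped for low positive or high negative contribution
    return kept
```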

Stereo Vision Based 3D Input Device

  • Yoon, Sang-Min; Kim, Ig-Jae; Ahn, Sang-Chul; Ko, Han-Seok; Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.429-441 / 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time to enable effective human-computer interaction. In particular, we develop a novel algorithm for extracting 6 degrees-of-freedom motion information from a 3D input device by employing the epipolar geometry of a stereo camera together with color, motion and structure information, without requiring a camera calibration object. To extract 3D motion, we first determine the epipolar geometry of the stereo camera by computing the perspective projection matrix and the perspective distortion matrix. We then apply the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm, which performs color transformation, unmatched pixel counting, discrete Kalman filtering and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or aiding navigation devices that control the user's viewpoint in a virtual reality setting. Since the stereo vision-based 3D input device is wireless, it provides users with a more natural and efficient interface, effectively realizing a feeling of immersion.
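
The unmatched pixel counting step at the core of the algorithm can be illustrated with a small sketch: for a candidate disparity, count the pixels whose left/right intensities differ by more than a tolerance, and pick the disparity that minimizes that count. The tolerance and search range here are assumptions, and this plain version omits the paper's motion-adaptive weighting, color transformation, Kalman filtering and PCA.

```python
import numpy as np

def unmatched_pixel_count(left, right, disparity, tol=10):
    """Count pixels in the overlapping region whose intensities differ
    by more than tol when the right image is shifted by disparity."""
    if disparity > 0:
        l, r = left[:, disparity:], right[:, :-disparity]
    else:
        l, r = left, right
    diff = np.abs(l.astype(np.int16) - r.astype(np.int16))
    return int((diff > tol).sum())

def best_disparity(left, right, max_d=32, tol=10):
    """Pick the disparity with the fewest unmatched pixels."""
    counts = [unmatched_pixel_count(left, right, d, tol)
              for d in range(max_d + 1)]
    return int(np.argmin(counts))
```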

Robust Extraction of Facial Features under Illumination Variations (조명 변화에 견고한 얼굴 특징 추출)

  • Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.1-8 / 2005
  • Facial analysis is used in many applications, such as face recognition systems, human-computer interfaces driven by head movements or facial expressions, model-based coding, and virtual reality. All of these applications require very precise extraction of facial feature points. In this paper we present a method for automatic extraction of facial feature points such as mouth corners, eye corners and eyebrow corners. First, the face region is detected by an AdaBoost-based object detection algorithm. Then a combination of three kinds of feature energy is computed for the facial features: valley energy, intensity energy and edge energy. Feature areas are detected by searching for horizontal rectangles with high feature energy. Finally, a corner detection algorithm is applied to the end regions of each feature area. Because the method integrates three feature energies and the suggested estimation of valley energy and intensity energy adapts to illumination change, the proposed feature extraction method is robust under various conditions.
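
As a sketch of the energy-combination idea, one can normalize each of the three maps, sum them, and then search for high-energy horizontal rectangles. The equal weights, rectangle size and brute-force search are illustrative assumptions, not the paper's values.

```python
import numpy as np

def normalize(e):
    """Scale an energy map to [0, 1]."""
    e = e.astype(float)
    rng = e.max() - e.min()
    return (e - e.min()) / rng if rng > 0 else np.zeros_like(e)

def combined_feature_energy(valley, intensity, edge, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three normalized energy maps; equal weights
    are an illustrative assumption."""
    return (w[0] * normalize(valley) + w[1] * normalize(intensity)
            + w[2] * normalize(edge))

def best_horizontal_rect(energy, rect_h=8, rect_w=32):
    """Slide a rect_h x rect_w window and return the top-left corner
    with maximal mean energy (brute force for clarity)."""
    h, w = energy.shape
    best, best_pos = -1.0, (0, 0)
    for top in range(h - rect_h + 1):
        for left in range(w - rect_w + 1):
            m = energy[top:top + rect_h, left:left + rect_w].mean()
            if m > best:
                best, best_pos = m, (top, left)
    return best_pos
```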

Walking Number Detection Algorithm using a 3-Axial Accelerometer Sensor and Activity Monitoring

  • Yoo, Hyang-Mi; Suh, Jae-Won; Cha, Eun-Jong; Bae, Hyeon-Deok
    • The Journal of the Korea Contents Association / v.8 no.8 / pp.253-260 / 2008
  • Research on 3-axis accelerometer sensors has increased dramatically in fields such as cellular phones and PDAs. In this paper, we develop a human walking detection algorithm using a 3-axis accelerometer sensor and a user interface system that shows the activity expenditure in real time. To count steps more accurately across a variety of walking activities, including walking, walking in place, running and slow walking, we propose a new step detection algorithm using an adaptive threshold. In addition, we calculate the activity expenditure based on the counted steps and display it on the UI in real time. Experimental results show that the detection rate of the proposed algorithm is about 5~10% higher than that of an existing algorithm using a fixed threshold, with a particularly high detection rate for walking in place.
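
A minimal sketch of step counting with an adaptive threshold, in the spirit of the abstract: the acceleration magnitude is smoothed, and a step is counted on each rising crossing of a threshold that tracks the local signal midpoint, so it follows gait intensity. The window length and threshold rule are assumptions, not the paper's.

```python
import numpy as np

def count_steps(ax, ay, az, fs, win_s=1.0):
    """Count steps from 3-axis acceleration using a per-window adaptive
    threshold at the midpoint of the local min/max magnitude."""
    mag = np.sqrt(np.asarray(ax, float) ** 2
                  + np.asarray(ay, float) ** 2
                  + np.asarray(az, float) ** 2)
    # Light smoothing to suppress sensor jitter
    k = max(1, int(0.05 * fs))
    mag = np.convolve(mag, np.ones(k) / k, mode="same")
    win = max(1, int(win_s * fs))
    steps, above = 0, False
    for start in range(0, len(mag), win):
        seg = mag[start:start + win]
        thr = 0.5 * (seg.min() + seg.max())   # adapts to gait intensity
        for v in seg:
            if not above and v > thr:
                steps += 1                     # rising crossing = one step
                above = True
            elif above and v < thr:
                above = False
    return steps
```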

Korean Emotion Vocabulary: Extraction and Categorization of Feeling Words

  • Sohn, Sun-Ju; Park, Mi-Sook; Park, Ji-Eun; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.15 no.1 / pp.105-120 / 2012
  • This study aimed to develop a Korean emotion vocabulary list that functions as an important tool for understanding human feelings. The focus was on carefully extracting the most widely used feeling words and categorizing them into groups of emotions according to their meaning in real-life use. A total of 12 professionals (including graduate students majoring in Korean) took part in the study. Using the Korean 'word frequency list' developed by Yonsei University and various sorting processes, the study condensed the original 64,666 emotion words into a final list of 504 words. In the next step, 80 social work students evaluated each word and classified it into whichever of the following categories seemed most appropriate: 'happiness', 'sadness', 'fear', 'anger', 'disgust', 'surprise', 'interest', 'boredom', 'pain', 'neutral', and 'other'. Findings showed that, of the 504 feeling words, 426 expressed a single emotion, 72 reflected two emotions (i.e., the same word indicating two distinct emotions), and 6 showed three emotions. Of the 426 single-emotion words, 'sadness' was predominant, followed by 'anger' and 'happiness'. Among the 72 two-emotion words, combinations of 'anger' and 'disgust' were most common, followed by 'sadness' and 'fear', and 'happiness' and 'interest'. The significance of the study lies in the development of a highly adaptive list of Korean feeling words that can be meticulously combined with other emotion signals, such as facial expressions, to optimize emotion recognition research, particularly in the Human-Computer Interface (HCI) area. The identification of feeling words that connote more than one emotion is also noteworthy.
