• Title/Summary/Keyword: Facial motion


Design of Computer Access Devices for Severly Motor-disability Using Bio-potentials (생체전위를 이용한 중증 운동장애자들을 위한 컴퓨터 접근제어장치 설계)

  • Jung, Sung-Jae;Kim, Myung-Dong;Park, Chan-Won;Kim, Il-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers D / v.55 no.11 / pp.502-510 / 2006
  • In this paper, we describe the implementation of a computer access device for people with severe motor disabilities. Many people with severe motor disabilities need augmentative communication technology. Those who are totally paralyzed, or 'locked-in', cannot use conventional augmentative technologies, all of which require some measure of muscle control. The forehead is often the last site to suffer degradation in cases of severe disability and degenerative disease; in ALS (Amyotrophic Lateral Sclerosis) and MD (Muscular Dystrophy), for example, the ocular motor neurons and ocular muscles are usually spared, permitting at least gross eye movements but not precise eye pointing. We use forehead bio-potentials from the brain and body in a novel way to generate multiple signals for computer control inputs. A bio-amplifier within the device separates the forehead signal into three frequency channels. The lowest channel responds to bio-potentials produced by eye motion. The second channel is band-pass filtered between 0.5 and 45 Hz, falling within the accepted electroencephalographic (EEG) range, and a digital processing station subdivides this region into eleven component frequency bands using an FFT algorithm. The third channel is defined as an electromyographic (EMG) signal: it responds to contractions of the facial muscles and is well suited to discrete on/off switch closures and keyboard commands. These signals are transmitted to a PC that analyzes them in the time and frequency domains and discriminates the user's intentions. The software graphically displays the user's bio-potential signals in real time, so that after some training sessions users can observe and gradually learn to control their own physiological signals. As a result, we confirmed the performance and availability of the developed system with bio-potentials recorded from users in experiments.
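
The abstract does not give the band edges, sampling rate, or window length used for the EEG channel; the following Python sketch is only a minimal illustration of FFT-based band subdivision, assuming a 256 Hz sampling rate and eleven equal-width bands spanning 0.5-45 Hz.

```python
import numpy as np

FS = 256               # assumed sampling rate (Hz); not stated in the abstract
LOW, HIGH = 0.5, 45.0  # EEG range named in the abstract
N_BANDS = 11           # eleven component bands, per the abstract

def eeg_band_powers(window: np.ndarray) -> np.ndarray:
    """Return the power in each of eleven sub-bands of a 1-D EEG window."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)   # bin frequencies (Hz)
    edges = np.linspace(LOW, HIGH, N_BANDS + 1)        # assumed equal-width bands
    powers = np.empty(N_BANDS)
    for i in range(N_BANDS):
        in_band = (freqs >= edges[i]) & (freqs < edges[i + 1])
        powers[i] = spectrum[in_band].sum()
    return powers

# Example: one second of synthetic forehead signal
rng = np.random.default_rng(0)
print(eeg_band_powers(rng.standard_normal(FS)))
```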

A Study on Fast Iris Detection for Iris Recognition in Mobile Phone (휴대폰에서의 홍채인식을 위한 고속 홍채검출에 관한 연구)

  • Park Hyun-Ae;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.2 s.308 / pp.19-29 / 2006
  • As the security of personal information becomes more important in mobile phones, iris recognition technology is starting to be applied to these devices. Conventional iris recognition requires magnified iris images, which in turn has required a camera with a large zoom and focus lens; because of the size and cost constraints of mobile phones, such lenses are difficult to use. However, with rapid development and multimedia convergence trends in mobile phones, more and more manufacturers are building mega-pixel cameras into their handsets. These cameras make it possible to capture a magnified iris image without a zoom and focus lens: even though the facial image is captured at a distance from the user, the captured iris region contains sufficient pixel information for iris recognition. In this case, however, the eye region must first be detected in the facial image for accurate iris recognition. We therefore propose a new fast iris detection method, suitable for mobile phones, based on corneal specular reflection. To detect the specular reflection robustly, we present the theoretical background for estimating its size and brightness from models of the eye, camera, and illuminator. In addition, we use a successive on/off scheme for the illuminator to detect optical/motion blurring and sunlight effects in the input image. Experimental results show that the total processing time for detecting the iris region is 65 ms on average on a Samsung SCH-S2300 mobile phone (with a 150 MHz ARM9 CPU). The rate of correct iris detection is 99% for indoor images and 98.5% for outdoor images.
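
The abstract names corneal specular reflection as the detection cue but does not spell out the detection step itself; the sketch below is a hypothetical illustration, assuming a grayscale near-infrared eye image and illustrative brightness/size limits, of finding the small, near-saturated, compact blobs that a corneal specular reflection typically produces.

```python
import cv2
import numpy as np

# Assumed parameters: the paper derives the expected size and brightness from
# eye, camera, and illuminator models; these fixed values are illustrative only.
BRIGHTNESS_THRESHOLD = 230   # specular reflections are near-saturated
MIN_AREA, MAX_AREA = 4, 200  # plausible blob size in pixels at this resolution

def find_specular_candidates(gray: np.ndarray) -> list[tuple[float, float]]:
    """Return centroids of small, bright blobs that may be corneal reflections."""
    _, binary = cv2.threshold(gray, BRIGHTNESS_THRESHOLD, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    candidates = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if MIN_AREA <= area <= MAX_AREA:
            candidates.append(tuple(centroids[i]))
    return candidates

if __name__ == "__main__":
    image = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
    if image is not None:
        print(find_specular_candidates(image))
```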

Visual Disturbance following Autologous Fat Injection into Periorbital Area (안와부 자가지방이식술 후 시력 저하에 대한 증례보고)

  • Jeon, Young Woo;Kim, Sung Soo;Ha, Sang Wook;Lee, Young Dae;Seul, Chul Hwan;Tark, Kwan Chul;Cho, Eul Jae;Yoo, Won Min
    • Archives of Plastic Surgery / v.34 no.5 / pp.663-666 / 2007
  • Purpose: Autologous fat injection into the facial area is a frequently used technique in aesthetic plastic surgery for augmentation of the soft tissue. Fat injection is a very safe procedure because the injected material is autologous tissue, and minimal foreign body reaction or infection is noted afterward. However, complications can occur, including some as severe as blindness, and several cases of visual disturbance after autologous fat injection have been reported in the literature. Methods: A 21-year-old female patient underwent autologous fat injection into the left eyebrow area to correct a depression of the soft tissue. Immediately after the injection she complained of sudden visual loss in the left eye. She came to our emergency room, and ophthalmologic evaluation showed that she could only recognize hand motion. There was no abnormality of the optic nerve on magnetic resonance imaging. Suspecting ischemic optic neuritis caused by fat embolism of the central retinal artery, we treated the patient conservatively with ocular massage, an antiglaucomatic agent, anti-inflammatory drugs, and antibiotics. Visual field examination showed a defect of the lower half of the visual field. Results: While antiglaucomatic agents and nonsteroidal anti-inflammatory drugs were maintained, fundoscopic examination showed no abnormalities on the second day of admission. Visual field examination showed improvement on the fourth day, along with decreased eyeball pain. Significant improvement of vision was noted, and the patient was discharged on the fifth day of admission. At follow-up two days later, her vision and visual field defect had improved further. Conclusion: We describe an unusual case of sudden unilateral visual disturbance following autologous fat injection into the periorbital area.

The Behavioral Patterns of Neutral Affective State for Service Robot Using Video Ethnography (비디오 에스노그래피를 이용한 서비스 로봇의 대기상태 행동패턴 연구)

  • Song, Hyun-Soo;Kim, Min-Joong;Jeong, Sang-Hoon;Suk, Hyeon-Jeong;Kwon, Dong-Soo;Kim, Myung-Suk
    • Science of Emotion and Sensibility / v.11 no.4 / pp.629-636 / 2008
  • In recent years a large number of robots have been developed in several countries, built to appeal to users through well-designed human-robot interaction. The robots developed so far, however, show appropriate reactions only when they receive some input; in standby mode, when there is no input, they do nothing. If a robot makes no motion at all in standby mode, users may feel that it is turned off or even broken. Social service robots in particular remain in standby after finishing a task. If, during this period, a robot can produce human-like behavioral patterns, like a person at a help desk, people are expected to feel that it is alive and to be more willing to interact with it. Even without any interaction with others or with the environment, people normally react to internal or external stimuli they generate themselves, such as moving their eyes or bodies. To create robotic behavioral patterns for standby mode, we analyze, based on video-ethnographic methodology, the actual facial expressions and behavior of people in a neutral affective state and apply the extracted characteristics to our robots. Using robots that can show these series of expressions and actions, further research should verify whether people indeed feel that the robots are alive. A minimal sketch of a standby-behavior scheduler is given after this entry.

  • PDF
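
The abstract does not describe how the extracted neutral-state behaviors are scheduled on the robot; the Python sketch below is a hypothetical illustration in which small "alive-looking" actions (blinking, glancing, shifting posture) are triggered at randomized intervals while the robot waits for input. The behavior names and timings are assumptions, not values from the paper.

```python
import random
import time

# Hypothetical idle behaviors and durations (seconds), not taken from the paper.
IDLE_BEHAVIORS = [
    ("blink", 0.2),
    ("glance_left", 0.5),
    ("glance_right", 0.5),
    ("shift_posture", 1.0),
    ("small_head_tilt", 0.7),
]

def perform(behavior: str, duration: float) -> None:
    """Placeholder for the robot's actuator command; here we just print."""
    print(f"performing {behavior} for {duration:.1f}s")
    time.sleep(duration)

def standby_loop(total_seconds: float = 10.0) -> None:
    """Trigger randomized idle behaviors until the standby period ends."""
    deadline = time.time() + total_seconds
    while time.time() < deadline:
        behavior, duration = random.choice(IDLE_BEHAVIORS)
        perform(behavior, duration)
        time.sleep(random.uniform(1.0, 3.0))  # irregular pauses look more natural

if __name__ == "__main__":
    standby_loop(10.0)
```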

Construction of Virtual Public Speaking Simulator for Treatment of Social Phobia (대인공포증의 치료를 위한 가상 연설 시뮬레이터의 실험적 제작)

  • 구정훈;장동표;신민보;조항준;안희범;조백환;김인영;김선일
    • Journal of Biomedical Engineering Research / v.21 no.6 / pp.615-621 / 2000
  • Social phobia is an anxiety disorder characterized by extreme fear and phobic avoidance of social and performance situations. Medication or cognitive-behavioral methods have mainly been used to treat it, but these methods have shortcomings: they can be inefficient and difficult to apply. Lately, virtual reality technology has been applied to anxiety disorders in order to compensate for these defects. A virtual environment provides the patient with stimuli that evoke the phobia, and exposure to the virtual phobic situation helps the patient overcome it. In this study, we present a public speaking simulator, based on a personal computer, for the treatment of social phobia. The simulator is composed of a position sensor, a head-mounted display (HMD), and an audio system. The virtual environment for treatment is a seminar room in which eight avatars are sitting. It includes a tracking system that traces the participant's head movement through the HMD's position sensor, and 3D sound is added so that the participant perceives the environment as realistic. We also made the avatars' motion and facial expressions change in reaction to the participant's speech; a minimal sketch of this reaction logic is given after this entry. The goal of the public speaking simulator is to treat fear of public speaking efficiently and economically. In a future study, we should obtain more information about immersion and treatment efficacy through clinical tests and apply it to this simulator.

  • PDF
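
The abstract states that the avatars react to the participant's speech but not how the reaction is triggered; the following Python sketch shows one plausible trigger under assumed parameters (frame length, RMS energy threshold): while the participant is speaking, the avatars switch to an "attentive" state.

```python
import numpy as np

FRAME_LEN = 1024             # samples per analysis frame (assumed)
SPEECH_RMS_THRESHOLD = 0.02  # assumed threshold; would be calibrated per microphone

def is_speaking(frame: np.ndarray) -> bool:
    """Crude voice-activity check: RMS energy of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2))) > SPEECH_RMS_THRESHOLD

def avatar_states(audio: np.ndarray) -> list[str]:
    """Map each frame of the participant's speech to an avatar behavior state."""
    states = []
    for start in range(0, len(audio) - FRAME_LEN + 1, FRAME_LEN):
        frame = audio[start:start + FRAME_LEN]
        states.append("attentive_nod" if is_speaking(frame) else "idle")
    return states

# Example with one second of synthetic audio at an assumed 16 kHz sampling rate
rng = np.random.default_rng(1)
audio = 0.05 * rng.standard_normal(16000)
print(avatar_states(audio)[:5])
```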

Analysis of Users' Emotions on Lighting Effect of Artificial Intelligence Devices (인공지능 디바이스의 조명효과에 대한 사용자의 감정 평가 분석)

  • Hyeon, Yuna;Pan, Young-hwan;Yoo, Hoon-Sik
    • Science of Emotion and Sensibility / v.22 no.3 / pp.35-46 / 2019
  • Artificial intelligence (AI) technology has been evolving to recognize and learn the languages, voice tones, and facial expressions of users so that it can respond to users' emotions in various contexts. Many AI-based services in which communication with users is particularly important provide emotional interaction, yet research on nonverbal interaction as a means of expressing emotion in AI systems is still insufficient. We studied the effect of lighting on users' emotional interaction with an AI device, focusing on color and flickering motion. The AI device used in this study expresses emotions with six colors of light (red, yellow, green, blue, purple, and white) and with a three-level flickering effect (high, middle, and low speed). We studied the responses of 50 men and women in their 20s and 30s to the emotions expressed by the light colors and flickering effects of the device. We found that each light color represented an emotion largely similar to the emotional image reported in a previous color-sensibility study. The flickering rate of the lights changed emotional arousal and balance: the change in arousal appeared with similar intensity across all colors, whereas changes in balance were somewhat related to the emotional images in the previous color-sensibility study, although for different colors. As AI systems and devices become more diverse, our findings are expected to contribute to designing users' emotional interaction with AI devices through lighting.
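
The stimulus set described in the abstract is six colors combined with three flicker speeds; the Python sketch below is a hypothetical illustration of driving such a stimulus, with assumed RGB values and flicker periods (the device's actual parameters are not given in the abstract).

```python
import time

# Assumed RGB values and flicker periods; illustrative only, not from the paper.
COLORS = {
    "red": (255, 0, 0), "yellow": (255, 255, 0), "green": (0, 255, 0),
    "blue": (0, 0, 255), "purple": (128, 0, 128), "white": (255, 255, 255),
}
FLICKER_PERIOD_S = {"high": 0.2, "middle": 0.5, "low": 1.0}

def set_led(rgb: tuple[int, int, int]) -> None:
    """Placeholder for the device's LED driver; here we just print."""
    print(f"LED -> {rgb}")

def flicker(color: str, speed: str, cycles: int = 3) -> None:
    """Toggle the chosen color on and off at the chosen flicker speed."""
    period = FLICKER_PERIOD_S[speed]
    for _ in range(cycles):
        set_led(COLORS[color])
        time.sleep(period / 2)
        set_led((0, 0, 0))
        time.sleep(period / 2)

if __name__ == "__main__":
    flicker("blue", "middle")
```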

Web-based Text-To-Sign Language Translating System (웹기반 청각장애인용 수화 웹페이지 제작 시스템)

  • Park, Sung-Wook;Wang, Bo-Hyeun
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.3 / pp.265-270 / 2014
  • Hearing-impaired people have difficulty in hearing, so it is also hard for them to learn the letters that represent sound and the text that conveys complex and abstract concepts. It has therefore been a natural choice for hearing-impaired people to communicate in sign language, which employs facial expressions and hand and body motion. However, the major communication methods in daily life are text and speech, which are big obstacles for hearing-impaired people in accessing information, learning, engaging in intellectual activities, and getting jobs. As delivering information via the internet becomes common, hearing-impaired people experience even more difficulty in accessing information, since the internet represents information mostly in text form; this intensifies the imbalance in information accessibility. This paper reports a web-based text-to-sign-language translating system that helps web designers use sign language in web page design. Since the system is web-based, web designers can use it with any common computing environment for internet browsing. The system takes the form of a bulletin board as its user interface. When web designers write paragraphs and post them through the bulletin board to the translating server, the server translates the incoming text into sign language, animates it with a 3D avatar, and records the animation as an MP4 file. The file addresses are fetched by the bulletin board, which enables web designers to embed the translated sign language files into their web pages using HTML5 or Javascript. We also analyzed the text used on public-service web pages, identified words new to the translating system, and added them to improve translation. This addition is expected to make public-service web pages more widely and easily accessible to hearing-impaired people.
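
The abstract says the translated MP4 files are embedded with HTML5 or Javascript but does not show the markup; as a minimal sketch, the Python helper below generates an HTML5 video embed snippet from a translated file's address. The URL and default dimensions are hypothetical.

```python
from html import escape

def sign_language_embed(mp4_url: str, width: int = 320, height: int = 240) -> str:
    """Return an HTML5 <video> snippet embedding a translated sign-language MP4."""
    url = escape(mp4_url, quote=True)
    return (
        f'<video width="{width}" height="{height}" controls>\n'
        f'  <source src="{url}" type="video/mp4">\n'
        f'  Your browser does not support the video tag.\n'
        f'</video>'
    )

# Hypothetical file address returned by the translating server
print(sign_language_embed("https://example.org/translations/notice_001.mp4"))
```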