• Title/Summary/Keyword: Brain Computer Interaction

An Implementation of Brain-wave DB building system for Artifacts prevention using Face Tracking (얼굴 추적 기반의 잡파 혼입 방지가 가능한 뇌파 DB구축 시스템 구현)

  • Shin, Jeong-Hoon;Kwon, Hyeong-Oh
    • Journal of the Institute of Convergence Signal Processing / v.10 no.1 / pp.40-48 / 2009
  • Computer and IT technology have made great strides, and as the information society has developed, users' demand for intelligent, humanized interfaces keeps increasing. Most current BCI research, however, puts application development first while neglecting foundational work such as database (DB) construction. These problems arise because BCI research has not yet overcome its initial level and has not moved toward systematic study. Among the brain waves collected from subjects, it is difficult to distinguish the signals that are appropriate and necessary for a given experiment. In addition, artifacts caused by severe eye blinks, EMG from facial and body movements, the attachment state of the electrodes, noise, and vibration make it hard to collect accurate brain waves in the experimental environment. Artifact contamination caused by subject movement and the experimental environment lowers recognition rates and efficiency when a BCI system is implemented. In this paper, we therefore propose an accurate and efficient brain-wave DB building system to support more exact, recognition-oriented studies of BCI systems. To minimize artifact contamination of the brain-wave DB, we propose a DB building method that tracks the subject's face, automatically controlling acquisition and preventing unnecessary movements; a minimal sketch of this gating idea follows below.
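The following is an illustrative sketch, not the authors' system: it gates EEG epochs on face stability using OpenCV's stock Haar-cascade face detector. The frame source, the movement threshold `max_shift_px`, and the link to the EEG acquisition loop are all assumptions.

```python
# Illustrative sketch: gate EEG epochs on face stability (OpenCV).
# An acquisition loop would call epoch_is_clean() on the video frames
# captured during each EEG epoch and keep only epochs that pass.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_center(gray):
    """Return the center of the largest detected face, or None."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return (x + w / 2.0, y + h / 2.0)

def epoch_is_clean(frames, max_shift_px=15.0):
    """Accept an EEG epoch only if the face stays nearly still
    across the video frames captured during that epoch."""
    ref = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        c = face_center(gray)
        if c is None:          # face lost: likely head movement
            return False
        if ref is None:
            ref = c
        elif abs(c[0] - ref[0]) > max_shift_px or abs(c[1] - ref[1]) > max_shift_px:
            return False       # movement beyond threshold: discard epoch
    return True
```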

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.2 / pp.271-279 / 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion using physiological signals; such recognition is important for applying emotion detection to human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as a baseline and for 1 to 1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 features were extracted from these signals. Emotion classification was performed by discriminant function analysis (DFA, SPSS 15.0) using difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from the baseline, and the emotion classification accuracy was 84.7%. Conclusion: Our study showed that emotions can be classified from various physiological signals. Future work is needed to obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and to examine the stability and reliability of this result compared with the accuracy of emotion classification using other algorithms. Application: This work improves the chances of recognizing various human emotions from physiological signals, can be applied to human-computer interaction systems for emotion recognition, and can be useful for developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis for emotion recognition systems in human-computer interaction. A minimal sketch of the difference-score classification follows below.
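As a hedged illustration of the analysis above, the sketch below forms difference scores (emotional state minus baseline) and classifies them with linear discriminant analysis; scikit-learn's LDA stands in for the SPSS discriminant function analysis, and the randomly generated data, feature count, and label coding are assumptions.

```python
# Illustrative sketch: baseline-corrected physiological features
# classified with linear discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical data: 27 features (EDA, SKT, PPG, ECG measures) per subject,
# measured at baseline and during the emotional state.
n_subjects, n_features = 122, 27
rng = np.random.default_rng(0)
baseline = rng.normal(size=(n_subjects, n_features))
emotion = rng.normal(size=(n_subjects, n_features))
labels = rng.integers(0, 3, size=n_subjects)  # 0=boredom, 1=pain, 2=surprise

# Difference scores: emotional state minus baseline, as in the paper.
X = emotion - baseline

clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```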

Autopoiesis, Affordance, and Mimesis: Layout for Explication of Complexity of Cognitive Interaction between Environment and Human (오토포이에시스, 어포던스, 미메시스: 환경과 인간의 인지적 상호작용의 복잡성 해명을 위한 밑그림)

  • Shim, Kwang Hyun
    • Korean Journal of Cognitive Science / v.25 no.4 / pp.343-384 / 2014
  • In order to unravel the problems of the mind, today's cognitive science has expanded its perspective from the narrow framework of the earlier computer model or neural network model to wider frameworks in which the brain interacts with the body, which in turn interacts with its environment. The theories of the 'extended mind', 'embodied mind', and 'enactive mind' that emerged from this shift point a way toward the environment, yet the problem of unraveling the complex process of interaction between mind, body, and environment remains unresolved. This problem can be traced back to Gibson and to Maturana and Varela, who in the 1960s-70s first tried to unravel the problem of the mind in terms of interaction between brain, body, and environment. Gibson stressed the importance of the 'affordances' provided by the environment, while Maturana and Varela emphasized the 'autonomy' of the autopoiesis of life. However, it is proper to say that there are invariants in the affordances provided by the environment, as well as an autonomy of life, within the structural coupling of the environment's variants and life's openness toward the environment; in that case, the points of confrontation between Gibson and Maturana and Varela dissolve. In this article, I propose Benjamin's theory of mimesis as a mediator between the two theories. Because Benjamin's concept of mimesis simultaneously makes a constellation of the embodiment of affordances and the enaction of new affordances into the environment, Gibson's concept of affordance and Maturana and Varela's concepts of embodiment and enaction can be smoothly interconnected and made to circulate through the medium of Benjamin's concept of mimesis.

Detecting Stress Based Social Network Interactions Using Machine Learning Techniques

  • S. Rajasekhar;K. Ishthaq Ahmed
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.101-106 / 2023
  • In today's busy world, stress is a continuously growing subject of research, including the monitoring of social websites. Social interaction is the process by which people act and react in relation to each other, in activities such as play, fighting, and dance. Within it we find social structure: the maintained relationships among people and groups, shaped by the expectations of everyone involved in the social network. There is a substantial difference between emotional pain and physical pain: stress on the physical body is felt as tension and has physical consequences for health, whereas working on social networking sites, on development, or on research-related information retrieval places the brain itself under stress. Social network interactions such as watching movies, online shopping, online marketing, and online business yield material for sentiment analysis, for example movie reviews and customer feedback labeled positive or negative. Movie reviews reflect people's reactions to the actions depicted in a film, such as fights, dances, dialogue, and content, and the stress responses associated with these reviews can be analyzed. Both the review analysis and the associated stress can be estimated with machine learning techniques. In target-oriented businesses, marketing staff work under continual stress and their emotional state varies over time; similarly, software developers working from home and communicating with clients online experience changing stress and emotional levels tied to work communication. This paper addresses how the brain deals with stress management and presents a stress-based analysis of emotional intelligence in social networks using machine learning techniques. Emotional intelligence is the ability to be aware of one's own emotions as well as the emotions of others, and to use this awareness to manage oneself and one's relationships. Social interaction is not only about oneself; it is about everyone who interacts and their expectations, and about maintaining performance. Performance, in the sociological sense, is an understanding of how people interact and a key to analyzing social interaction: maintaining successful interactions in line with expectations, satisfying the audience, and carefully managing impressions. A deliberately generic sketch of such a text classifier follows below.
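The abstract names no concrete model, so the following is a generic, hedged sketch only: a TF-IDF plus logistic-regression classifier labeling short posts or reviews as stressed or not stressed. The tiny dataset and its labels are invented for demonstration.

```python
# Illustrative sketch: a minimal text classifier of the kind the paper
# alludes to, labeling social-network posts as stressed / not stressed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "deadline after deadline, I can't keep up with this project",
    "another all-nighter fixing production bugs, completely drained",
    "lovely quiet weekend, finally relaxed with a good movie",
    "great feedback from the client today, feeling confident",
]
labels = [1, 1, 0, 0]  # 1 = stressed, 0 = not stressed (invented data)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(posts, labels)
print(model.predict(["so much pressure from work communication lately"]))
```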

Research on Classification of Human Emotions Using EEG Signal (뇌파신호를 이용한 감정분류 연구)

  • Zubair, Muhammad;Kim, Jinsul;Yoon, Changwoo
    • Journal of Digital Contents Society / v.19 no.4 / pp.821-827 / 2018
  • Affective computing has gained increasing interest in recent years with the development of potential applications in human-computer interaction (HCI) and healthcare. Although considerable research has been done on human emotion recognition, physiological signals have received less attention than speech and facial expression. In this paper, electroencephalogram (EEG) signals from different brain regions were investigated using modified wavelet energy features. To minimize redundancy and maximize relevancy among features, the mRMR algorithm was applied. EEG recordings from the publicly available DEAP database were used to classify four classes of emotion with a multi-class support vector machine. The proposed approach shows significant performance gains over existing algorithms. A minimal sketch of this pipeline follows below.
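As a hedged sketch of this pipeline under stated assumptions: relative wavelet energies (PyWavelets) serve as features, univariate mutual-information ranking stands in for mRMR (which scikit-learn does not ship), and a multi-class SVM does the classification. The epoch shapes, channel count, and labels are invented rather than taken from DEAP.

```python
# Illustrative sketch: wavelet-energy features + feature selection + SVM.
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def wavelet_energies(signal, wavelet="db4", level=4):
    """Relative energy of each wavelet decomposition band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Hypothetical EEG epochs: (n_trials, n_channels, n_samples), 4 emotion labels.
rng = np.random.default_rng(0)
epochs = rng.normal(size=(160, 32, 512))
y = rng.integers(0, 4, size=160)

X = np.array([np.concatenate([wavelet_energies(ch) for ch in trial])
              for trial in epochs])

# Mutual-information ranking as a simple stand-in for mRMR.
clf = make_pipeline(SelectKBest(mutual_info_classif, k=40), SVC(kernel="rbf"))
clf.fit(X, y)
```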

EEG Feature Classification Based on Grip Strength for BCI Applications

  • Kim, Dong-Eun;Yu, Je-Hun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.4 / pp.277-282 / 2015
  • Brain-computer interface (BCI) technology is making advances in the field of human-computer interaction (HCI). To improve BCI technology, we study the changes in electroencephalogram (EEG) signals for six levels of grip strength: 10%, 20%, 40%, 50%, 70%, and 80% of the maximum voluntary contraction (MVC). The measured EEG data are categorized into three classes: Weak, Medium, and Strong. Features are then extracted using power spectrum analysis and the multiclass common spatial pattern (multiclass CSP). The feature datasets are classified using a support vector machine (SVM). The accuracy rate is higher for the Strong class than for the other classes. A minimal sketch of a CSP-plus-SVM pipeline follows below.
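A minimal sketch under stated assumptions: MNE-Python's `CSP` (which handles more than two classes) stands in for the authors' multiclass CSP, feeding log-variance features to a linear SVM. The epoch dimensions and grip-strength labels are invented.

```python
# Illustrative sketch: multiclass CSP features classified with an SVM.
import numpy as np
from mne.decoding import CSP
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
epochs = rng.normal(size=(120, 32, 256))  # trials x channels x samples
y = rng.integers(0, 3, size=120)          # 0=Weak, 1=Medium, 2=Strong

pipe = Pipeline([
    ("csp", CSP(n_components=6, log=True)),  # spatial filters -> log-variance
    ("svm", SVC(kernel="linear")),
])
print(cross_val_score(pipe, epochs, y, cv=5).mean())
```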

A Research on Prediction of Hand Movement by EEG Coherence at Lateral Hemisphere Area (편측적 EEG Coherence 에 의한 손동작 예측에 관한 연구)

  • Woo, Jin-Cheol;Whang, Min-Cheol;Kim, Jong-Wha;Kim, Chi-Jung;Kim, Ji-Hye;Kim, Young-Woo
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2009.02a / pp.330-334 / 2009
  • This study attempts to predict hand-movement intention from EEG (electroencephalography) coherence in a lateral area of the brain. Six subjects with no physical abnormalities participated in the hand-movement prediction experiment, which consisted of 6 minutes of data training and 6 minutes of movement-intention assessment. After hand movements were instructed in random order, EEG was measured at five lateral-area sites, along with EMG (electromyography) of the right hand to determine movement onset. For analysis, the measured EEG data were divided into alpha and beta frequency bands, and the alpha and beta data, classified into movement and rest on the basis of the EMG signal, were subjected to coherence analysis across the five measurement sites. As a result, we confirmed that movement can be discriminated through statistically significant EEG coherence regions that distinguish movement from rest. A hedged sketch of band-limited coherence follows below.
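As a hedged illustration of the coherence analysis, the sketch below computes mean magnitude-squared coherence (SciPy) between pairs of the five lateral sites within the alpha and beta bands. The sampling rate, band edges, and segment length are assumptions.

```python
# Illustrative sketch: band-limited coherence between EEG channel pairs,
# of the kind used to separate movement from rest.
import numpy as np
from scipy.signal import coherence
from itertools import combinations

fs = 256                        # assumed sampling rate (Hz)
alpha, beta = (8, 13), (13, 30)  # conventional band edges (Hz)

def band_coherence(x, y, band):
    """Mean magnitude-squared coherence of two signals within a band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f < band[1])
    return cxy[mask].mean()

rng = np.random.default_rng(0)
eeg = rng.normal(size=(5, fs * 4))  # 5 lateral sites, 4 s segment (synthetic)

for i, j in combinations(range(5), 2):
    a = band_coherence(eeg[i], eeg[j], alpha)
    b = band_coherence(eeg[i], eeg[j], beta)
    print(f"sites {i}-{j}: alpha={a:.2f}, beta={b:.2f}")
```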

Evaluation of Human Factors for the Next-Generation Displays: A Review of Subjective and Objective Measurement Methods

  • Mun, Sungchul;Park, Min-Chul
    • Journal of the Ergonomics Society of Korea / v.32 no.2 / pp.207-215 / 2013
  • Objective: This study aimed to investigate important human factors that should be considered when developing ultra-high definition TVs, by reviewing measurement methods and the main characteristics of ultra-high definition displays. Background: Although much attention has been paid to high-definition displays, there have been few studies that systematically evaluate human factors. Method: To determine the human factors to be considered in developing human-friendly displays, we reviewed subjective and objective measurement methods in order to identify current limitations and establish a guideline for developing human-centered ultra-high definition TVs. In doing so, the pros and cons of both subjective and objective measurement methods for assessing human factors were discussed, and specific aspects of ultra-high definition displays were investigated in the literature. Results: Hazardous effects of undesirable TV viewing, such as visually induced motion sickness, visual fatigue, and mental fatigue in the brain, arise not only from the temporal decay of visual function but also from the cognitive load of processing sophisticated external information. There is growing evidence that individual differences in the visual and cognitive ability to process external information can produce contrary responses to the same viewing situation. The wide field of view that ultra-high definition TVs provide can have positive or negative influences on viewers depending on their individual characteristics. Conclusion: Integrated measurement methods capable of accounting for individual differences in the human visual system are required to clearly determine the potential effects of super-high-vision displays with a wide view. Brainwaves, autonomic responses, eye functions, and psychological responses should all be examined simultaneously and correlated. Application: The results of this review are expected to serve as a guideline for determining optimized viewing factors for ultra-high definition displays and for accelerating the successful penetration of next-generation displays into our daily life.

Autonomous Mobile Robot Control using the Wearable Devices Based on EMG Signal for detecting fire (EMG 신호 기반의 웨어러블 기기를 통한 화재감지 자율 주행 로봇 제어)

  • Kim, Jin-Woo;Lee, Woo-Young;Yu, Je-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.3 / pp.176-181 / 2016
  • In this paper, an autonomous mobile robot control system for fire detection is proposed, using a wearable device based on EMG (electromyogram) signals. A Myo armband is used to detect the user's EMG signals. The EMG data are sent to a computer over Bluetooth, where the gesture is classified, and the robot, named 'uBrain', moves according to the commands received over Bluetooth in our experiment. 'Move front', 'Turn right', 'Turn left', and 'Stop' are the controllable commands for the robot. If the robot cannot receive a Bluetooth signal from the user, or if the user wants to switch from manual to autonomous mode, the robot enters autonomous mode. The robot flashes an LED when its IR sensor detects fire while moving. A hedged sketch of such a control loop follows below.
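The sketch below illustrates the described control flow only; the armband and robot interfaces are hypothetical stubs, not the authors' Myo API or the 'uBrain' protocol, and the gesture names, commands, and timeout are assumptions.

```python
# Illustrative sketch of the manual/autonomous control loop described above.
import time

GESTURE_TO_COMMAND = {
    "fist": "MOVE_FRONT",
    "wave_out": "TURN_RIGHT",
    "wave_in": "TURN_LEFT",
    "fingers_spread": "STOP",
}
SIGNAL_TIMEOUT_S = 2.0  # assumed: no gesture for this long -> autonomous mode

class FakeArmband:
    """Stub for a Myo-style armband; returns None when no signal arrives."""
    def read_gesture(self):
        return "fist"

class FakeRobot:
    """Stub for the Bluetooth link to a robot like 'uBrain'."""
    def send_command(self, cmd): print("command:", cmd)
    def ir_detects_fire(self): return False
    def flash_led(self): print("LED flash")

def control_loop(armband, robot, steps=10):
    last_seen = time.monotonic()
    for _ in range(steps):                       # bounded for demonstration
        gesture = armband.read_gesture()
        if gesture in GESTURE_TO_COMMAND:        # manual mode
            robot.send_command(GESTURE_TO_COMMAND[gesture])
            last_seen = time.monotonic()
        elif time.monotonic() - last_seen > SIGNAL_TIMEOUT_S:
            robot.send_command("AUTONOMOUS")     # fall back to autonomous mode
        if robot.ir_detects_fire():
            robot.flash_led()                    # flash LED on fire detection
        time.sleep(0.05)

control_loop(FakeArmband(), FakeRobot())
```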

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion using facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no previous study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli, inducing anger, fear, boredom, and a neutral state, were presented to participants, and facial temperatures were measured with an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 minute and the emotion period 1 to 3 minutes. In the data analysis, the temperature differences between the baseline and the emotional state were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed a correct classification rate of 62.7% for the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly, to 56.7%, when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential for emotion recognition, but emotional state features are also important for classifying emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles. A minimal sketch of ROI-based feature extraction follows below.
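As a hedged illustration of the feature extraction, the sketch below computes baseline-corrected mean temperatures for facial regions of interest from a thermal frame; the ROI coordinates, frame size, and synthetic temperatures are assumptions, and the resulting difference features would feed a discriminant analysis like the one above.

```python
# Illustrative sketch: baseline-corrected mean ROI temperatures from a
# thermal image, the kind of features fed to the discriminant analysis.
import numpy as np

# Hypothetical ROIs as (row_start, row_end, col_start, col_end) in a
# 240x320 thermal frame (values in degrees Celsius).
ROIS = {
    "forehead": (20, 60, 110, 210),
    "glabella": (60, 80, 140, 180),
    "eyes":     (80, 110, 100, 220),
    "nose":     (110, 150, 140, 180),
    "mouth":    (160, 190, 120, 200),
}

def roi_means(frame):
    return {name: frame[r0:r1, c0:c1].mean()
            for name, (r0, r1, c0, c1) in ROIS.items()}

rng = np.random.default_rng(0)
baseline_frame = 34.0 + 0.5 * rng.standard_normal((240, 320))
emotion_frame = 33.7 + 0.5 * rng.standard_normal((240, 320))

base, emo = roi_means(baseline_frame), roi_means(emotion_frame)
features = {name: emo[name] - base[name] for name in ROIS}  # emotion - baseline
print(features)
```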