• Title/Summary/Keyword: Human computer interactions

LSTM Hyperparameter Optimization for an EEG-Based Efficient Emotion Classification in BCI (BCI에서 EEG 기반 효율적인 감정 분류를 위한 LSTM 하이퍼파라미터 최적화)

  • Aliyu, Ibrahim;Mahmood, Raja Majid;Lim, Chang-Gyoon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.6
    • /
    • pp.1171-1180
    • /
    • 2019
  • Emotion is a psycho-physiological process that plays an important role in human interactions. Affective computing is centered on the development of human-aware artificial intelligence that can understand and regulate emotions. This field of study is also critical because mental conditions such as depression, autism, attention deficit hyperactivity disorder, and game addiction are associated with emotion. Despite prior efforts in emotion recognition and detection from nonstationary EEG signals, detecting emotions from abnormal EEG signals requires sophisticated learning algorithms because a high level of abstraction is needed. In this paper, we investigate LSTM hyperparameters for optimal EEG-based emotion classification. Results of several experiments are presented, from which an optimal LSTM hyperparameter configuration was identified.
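
A minimal sketch of the kind of hyperparameter search described above, written in Python with Keras: it grid-searches the LSTM hidden size, dropout rate, and learning rate on synthetic EEG-shaped windows. The data shapes, search ranges, and number of classes are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: grid search over LSTM hyperparameters for EEG emotion
# classification. Shapes, ranges, and class count are illustrative assumptions.
import itertools
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.optimizers import Adam

# Assumed data: EEG windows of 128 time steps x 32 channels, 4 emotion classes.
X = np.random.randn(512, 128, 32).astype("float32")
y = np.random.randint(0, 4, size=512)

def build_model(units, dropout, lr):
    model = Sequential([
        LSTM(units, input_shape=X.shape[1:]),   # single LSTM layer over the window
        Dropout(dropout),
        Dense(4, activation="softmax"),
    ])
    model.compile(optimizer=Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

best = None
for units, dropout, lr in itertools.product([32, 64, 128], [0.2, 0.5], [1e-3, 1e-4]):
    hist = build_model(units, dropout, lr).fit(
        X, y, validation_split=0.2, epochs=5, batch_size=32, verbose=0)
    acc = max(hist.history["val_accuracy"])
    if best is None or acc > best[0]:
        best = (acc, units, dropout, lr)

print("best val_accuracy=%.3f with units=%d, dropout=%.1f, lr=%g" % best)
```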

Real-Time Eye Tracking Using IR Stereo Camera for Indoor and Outdoor Environments

  • Lim, Sungsoo;Lee, Daeho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.8
    • /
    • pp.3965-3983
    • /
    • 2017
  • We propose a novel eye tracking method that can estimate 3D world coordinates using an infrared (IR) stereo camera for indoor and outdoor environments. The method first detects dark evidences such as eyes, eyebrows, and mouths by fast multi-level thresholding. Among these evidences, eye-pair evidences are detected by evidential reasoning and geometrical rules. For robust accuracy, two classifiers based on a multilayer perceptron (MLP) using gradient local binary patterns (GLBPs) verify whether the detected evidences are real eye pairs or not. Finally, the 3D world coordinates of the detected eyes are calculated by region-based stereo matching. Compared with other eye detection methods, the proposed method can detect the eyes of people wearing sunglasses owing to the use of the IR spectrum. In particular, when people are in dark environments, such as driving at night, driving in an indoor car park, or passing through a tunnel, eyes can still be detected robustly because active IR illuminators are used. Experimental results show that the proposed method detects eye pairs with high performance in real time under variable illumination conditions. Therefore, the proposed method can contribute to human-computer interaction (HCI) and intelligent transportation system (ITS) applications such as gaze tracking, windshield head-up displays, and drowsiness detection.
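
The final step in the abstract, computing 3D world coordinates of the detected eyes via region-based stereo matching, reduces to standard triangulation once a correspondence is found on a rectified stereo rig. The sketch below shows only that triangulation step; the focal length, baseline, and pixel coordinates are placeholder assumptions, not values from the paper.

```python
# Minimal sketch: recover 3D coordinates of a detected eye from a rectified
# stereo pair via standard triangulation. Camera parameters and pixel
# positions below are illustrative placeholders.
import numpy as np

def triangulate(u_left, v_left, u_right, f, baseline, cx, cy):
    """Depth from disparity for a rectified stereo rig.
    f: focal length in pixels, baseline: camera separation in meters,
    (cx, cy): principal point in pixels."""
    disparity = u_left - u_right          # horizontal pixel offset between views
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or mismatch")
    Z = f * baseline / disparity          # depth along the optical axis
    X = (u_left - cx) * Z / f             # lateral offset
    Y = (v_left - cy) * Z / f             # vertical offset
    return np.array([X, Y, Z])

# Example: eye center found at (640, 360) in the left image and (610, 360) in
# the right image of a 1280x720 rig with a 6 cm baseline.
print(triangulate(640, 360, 610, f=800.0, baseline=0.06, cx=640.0, cy=360.0))
```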

Implementation of a Context-awareness based UoC Architecture for MANET (MANET에서 상황인식 기반의 UoC Architecture 구현)

  • Doo, Kyoung-Min;Lee, Kang-Whan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.6
    • /
    • pp.1128-1133
    • /
    • 2008
  • Context-aware computing has been attracting attention as an approach to alleviating the inconvenience of human-computer interactions. This paper proposes a context-aware system architecture to be implemented on a UoC (Ubiquitous system on Chip). The architecture introduces a CRS (Context Recognition Switch) and DOS (Dynamic and Optimal Standard), built on a context-awareness architecture with a pre-processor and an HPSP (High Performance Signal Processor). A new algorithm for the network topology processor of the ubiquitous computing system is also proposed and implemented on the UoC based on the IEEE 802.15.4 WPAN (Wireless Personal Area Network) standard. This context-aware UoC architecture has been developed for application to mobile intelligent robots that support humans in a context-aware manner.

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • v.18 no.3
    • /
    • pp.11-20
    • /
    • 2022
  • Human emotion recognition is an exciting topic that has attracted many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is affected by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. These findings initiated a trend in computer vision of exploring the critical role of context by treating it as an additional modality, alongside facial expressions, for inferring emotion. However, contextual information has not been fully exploited. The scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, simple additive fusion in multimodal training is not ideal, because the modalities do not contribute equally to the final prediction. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes the emotion, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features according to their impact on the target emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
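
A minimal sketch, in PyTorch, of attention-based fusion in the spirit of the abstract: per-modality features are combined with learned attention weights instead of plain addition, so each modality's contribution to the prediction can differ. The modality names, feature dimension, scoring head, and the 26-class output (matching EMOTIC's category count) are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch of attention-weighted fusion over precomputed embeddings for
# face, body/context, and scene streams. Sizes and layers are illustrative.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=256, n_classes=26):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, feats):                # feats: (batch, n_modalities, dim)
        w = self.score(feats)                # (batch, n_modalities, 1)
        w = torch.softmax(w, dim=1)          # attention over modalities
        fused = (w * feats).sum(dim=1)       # weighted sum instead of plain addition
        return self.classifier(fused), w

face = torch.randn(8, 256)
context = torch.randn(8, 256)
scene = torch.randn(8, 256)
logits, weights = AttentionFusion()(torch.stack([face, context, scene], dim=1))
print(logits.shape, weights.squeeze(-1)[0])  # per-sample modality weights
```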

Access Management Using Knowledge Based Multi Factor Authentication In Information Security

  • Iftikhar, Umar;Asrar, Kashif;Waqas, Maria;Ali, Syed Abbas
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.7
    • /
    • pp.119-124
    • /
    • 2021
  • Today, nearly every aspect of modern life is shaped by digitalization, and authentication is one of the main components in keeping this process secure. Cybercriminals work continually to penetrate existing network channels and mount malicious attacks. For enterprises, company information is a major asset, and the question is how to protect this vital information. This involves many aspects of what is often termed the hyper-connected society, including online communication, purchases, regulation of access rights, and more. In this paper, we discuss the concepts of multi-factor authentication (MFA) and knowledge-based authentication (KBA). MFA and KBA are used for human-to-everything interactions, offering an easy-to-use and secure validation mechanism when accessing a service. We also explore the existing and evolving factor providers (sensors) used to authenticate a user; these are important tools for protecting data from malicious insiders and outsiders. The main goal of access management is to grant authorized users the right to use a service while preventing access by unauthorized users, and multiple techniques can be implemented to achieve it. We discuss various access management techniques suitable for enterprises, focusing primarily on multi-factor authentication, and highlight the role of knowledge-based authentication within MFA and how it can make enterprise data more secure against cyberattacks. Lastly, we discuss the future of MFA and KBA.
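
A minimal sketch of combining three factors in code: a password (knowledge factor), a time-based one-time code (possession factor, via the pyotp library), and a KBA challenge. The user record, the KBA question, and the use of plain SHA-256 hashing are illustrative placeholders; a production system would use a dedicated password KDF and a real identity store.

```python
# Minimal sketch of a multi-factor check: password + TOTP + KBA answer.
# All user data here is hypothetical; SHA-256 is used only for brevity,
# a real system should hash passwords with a slow KDF (bcrypt/argon2).
import hashlib
import hmac
import pyotp

USER = {
    "pw_hash": hashlib.sha256(b"correct horse battery staple").hexdigest(),
    "totp_secret": pyotp.random_base32(),
    "kba": {"question": "Name of your first school?",
            "answer_hash": hashlib.sha256(b"riverside").hexdigest()},
}

def authenticate(password, otp_code, kba_answer):
    pw_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), USER["pw_hash"])
    otp_ok = pyotp.TOTP(USER["totp_secret"]).verify(otp_code)
    kba_ok = hmac.compare_digest(
        hashlib.sha256(kba_answer.encode()).hexdigest(),
        USER["kba"]["answer_hash"])
    return pw_ok and otp_ok and kba_ok       # all factors must pass

# Example: the OTP would normally come from the user's authenticator app.
current_code = pyotp.TOTP(USER["totp_secret"]).now()
print(authenticate("correct horse battery staple", current_code, "riverside"))
```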

Visual Cohesion Improvement Technology by Clustering of Abstract Object (추상화 객체의 클러스터링에 의한 가시적 응집도 향상기법)

  • Lee Jeong-Yeal;Kim Jeong-Ok
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.4 s.32
    • /
    • pp.61-69
    • /
    • 2004
  • User interface design must support the complex interactions between humans and computers. It also requires comprehensive knowledge of many areas to collect customers' requirements and negotiate with them; the user interface designer must act as a graphics expert, requirements analyst, system designer, programmer, technical expert, social scientist, and more. A user interface design methodology that covers these diverse areas of expertise is therefore needed. In this paper, we propose four phases for visualizing the abstract objects of business events: fold abstract object modeling, task abstract object modeling, transaction abstract object modeling, and form abstract object modeling. This modeling method enhances the visual cohesion of the UI and helps inexperienced designers develop high-quality user interfaces.

Comparing automated and non-automated machine learning for autism spectrum disorders classification using facial images

  • Elshoky, Basma Ramdan Gamal;Younis, Eman M.G.;Ali, Abdelmgeid Amin;Ibrahim, Osman Ali Sadek
    • ETRI Journal
    • /
    • v.44 no.4
    • /
    • pp.613-623
    • /
    • 2022
  • Autism spectrum disorder (ASD) is a developmental disorder associated with cognitive and neurobehavioral impairments. It affects a person's behavior and performance, including verbal and non-verbal communication in social interactions. Early screening and diagnosis of ASD are essential and helpful for early educational planning and treatment, the provision of family support, and providing appropriate medical support for the child on time. Thus, developing automated methods for diagnosing ASD is becoming an essential need. Herein, we investigate various machine learning methods for building predictive models that diagnose ASD in children using facial images. To achieve this, we used an autistic children dataset containing 2936 facial images of children with autism and typically developing children. We applied classical machine learning methods, such as support vector machines and random forests, deep-learning methods, and a state-of-the-art approach, automated machine learning (AutoML). Comparing the results obtained from these techniques, we found that AutoML achieved the highest performance, approximately 96% accuracy, via Hyperopt and tree-based pipeline optimization tool (TPOT) optimization. Furthermore, the AutoML methods made it easy to find the best parameter settings without any manual feature engineering effort.
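
A minimal sketch of the comparison the abstract describes: hand-configured scikit-learn baselines (SVM, random forest) versus TPOT's automated pipeline search. The synthetic feature vectors and the small search budget are placeholders; the real study used 2936 facial images and reports roughly 96% accuracy for the AutoML approach.

```python
# Minimal sketch: compare hand-configured classifiers (SVM, random forest)
# with TPOT's automated pipeline search on flattened facial-image features.
# Data here is synthetic; the labels and feature vectors are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from tpot import TPOTClassifier

X = np.random.rand(400, 64 * 64)           # placeholder image feature vectors
y = np.random.randint(0, 2, size=400)      # 0 = typical, 1 = ASD (illustrative)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("SVM", SVC()), ("RandomForest", RandomForestClassifier())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))

# AutoML: TPOT evolves preprocessing + model pipelines automatically.
tpot = TPOTClassifier(generations=3, population_size=10,
                      random_state=0, verbosity=0)
tpot.fit(X_tr, y_tr)
print("TPOT", accuracy_score(y_te, tpot.predict(X_te)))
```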

A novel tetrapeptide for the treatment of hair loss identified in ginseng berry: in silico characterization and molecular docking with TGF-β2

  • Sung-Gyu Lee;Sang Moon Kang;Hyun Kang
    • Journal of Plant Biotechnology
    • /
    • v.49 no.4
    • /
    • pp.316-324
    • /
    • 2022
  • Hair loss causes psychological stress because of its effect on appearance, and the global market for hair loss treatment products is therefore growing rapidly. The present study demonstrated that ginseng berry-derived and sequence-modified peptides promoted the proliferation of dermal papilla (DP) cells and keratinocytes, in addition to having antioxidant properties. Moreover, the potential role of these ginseng berry peptides as TGF-β2 antagonists was confirmed through in silico docking. In addition to promoting the growth of DP cells, the ginseng berry-derived peptides also promoted the proliferation of keratinocytes in experiments. In particular, an unmodified ginseng berry-derived peptide (GB-1) and two peptides with sequence modifications (GB-2 and GB-3) decreased ROS generation and exhibited a protective effect on damaged HaCaT keratinocytes. Computer-aided peptide discovery was conducted to identify potential interactions with transforming growth factor-beta 2 (TGF-β2), a key protein that plays a crucial role in the human hair growth cycle. Our results demonstrated that MAGH, an amino acid sequence present in herbal supplements and plant-based natural compounds, can inhibit TGF-β2.

Crossmodal Perception of Mismatched Emotional Expressions by Embodied Agents (에이전트의 표정과 목소리 정서의 교차양상지각)

  • Cho, Yu-Suk;Suk, Ji-He;Han, Kwang-Hee
    • Science of Emotion and Sensibility
    • /
    • v.12 no.3
    • /
    • pp.267-278
    • /
    • 2009
  • Embodied agents today generate a great deal of interest because of their vital role in human-human and human-computer interactions in virtual worlds. A number of researchers have found that we can recognize and distinguish various emotions expressed by an embodied agent, and many studies have found that we respond to simulated emotions in much the same way as to human emotions. This study investigates the interpretation of mismatched emotions expressed by an embodied agent (e.g., a happy face with a sad voice): whether audio-visual channel integration occurs or one channel dominates when participants judge the emotion. The study employed a 4 (visual: happy, sad, warm, cold) × 4 (audio: happy, sad, warm, cold) within-subjects repeated-measures design. The results suggest that people perceive emotion based on both channels rather than on just one. In addition, the facial expression (happy vs. sad face) moderated the relative influence of the two channels: the audio channel had more influence on emotion interpretation when the facial expression was happy. Participants could also perceive emotions that were not expressed by either the face or the voice from the mismatched expressions, suggesting that varied and subtle emotions may be conveyed by an embodied agent using only a few basic emotional expressions.
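
A minimal sketch of how the 4 × 4 within-subjects design above could be analyzed as a two-way repeated-measures ANOVA, using statsmodels. The number of participants and the randomly generated ratings are placeholders, not the study's data.

```python
# Minimal sketch: two-way repeated-measures ANOVA for a 4 (face emotion) x
# 4 (voice emotion) within-subjects design. Ratings are random placeholders.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

levels = ["happy", "sad", "warm", "cold"]
rng = np.random.default_rng(0)
rows = []
for subject in range(1, 21):                 # assumed 20 participants
    for face in levels:
        for voice in levels:
            rows.append({"subject": subject, "face": face, "voice": voice,
                         "rating": rng.normal(loc=4.0, scale=1.0)})
df = pd.DataFrame(rows)

# Within-subject factors: facial expression and voice emotion.
result = AnovaRM(df, depvar="rating", subject="subject",
                 within=["face", "voice"]).fit()
print(result)
```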

User-interface Considerations for the Main Button Layout of the Tactical Computer for Korea Army (한국군 전술컴퓨터의 인간공학적 메인버튼 설계)

  • Baek, Seung-Chang;Jung, Eui-S.;Park, Sung-Joon
    • Journal of the Ergonomics Society of Korea
    • /
    • v.28 no.4
    • /
    • pp.147-154
    • /
    • 2009
  • The tactical computer is currently being developed and installed in armored vehicles and tanks for force reinforcement. With the tactical computer, the Korea Army will be able to grasp the deployment status of friendly forces, the enemy, and obstacles under varying situations, and it makes the exchange of command and tactical intelligence possible. Recent studies have shown that task performance is greatly affected by the user interface. The U.S. Army is now conducting user-centered evaluation tests based on C2 (Command & Control) to develop tactical intelligence machinery and tools. This study aims to classify and regroup subordinate menu functions according to user-centered task performance for the Korea Army's tactical computer. The research also suggests an ergonomically sound layout and size for the main touch buttons by considering human factors guidelines for button design. To achieve this goal, eight groups of subordinate menu functions were first derived through clustering analysis, and each group of menu functions was then renamed. Based on the suggested menu structure, the new location and size of the buttons were tested in terms of response time, number of errors, and subjective preference, compared with the existing design. The results showed that the best performance in conducting tactical missions was obtained when the number of buttons or functions was eight. An improved button size and location were also suggested based on the experiment, and the location and size of the buttons were found to interact with respect to user preference.
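
A minimal sketch of the clustering step mentioned above: menu functions are grouped into eight clusters from a similarity matrix using hierarchical clustering in SciPy. The function names and the random similarity scores are hypothetical placeholders; the paper derived its groups from task-performance data.

```python
# Minimal sketch: group menu functions into eight clusters from a pairwise
# similarity matrix. Function names and similarities are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

functions = [f"menu_function_{i:02d}" for i in range(24)]   # hypothetical items
rng = np.random.default_rng(1)
sim = rng.random((24, 24))
sim = (sim + sim.T) / 2                       # make the similarity symmetric
np.fill_diagonal(sim, 1.0)

dist = squareform(1.0 - sim, checks=False)    # similarity -> condensed distance
Z = linkage(dist, method="average")           # hierarchical (average-link) clustering
labels = fcluster(Z, t=8, criterion="maxclust")  # cut the tree into 8 groups

for cluster_id in range(1, 9):
    members = [f for f, c in zip(functions, labels) if c == cluster_id]
    print(cluster_id, members)
```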