• Title/Summary/Keyword: human and computer interaction


Evaluation of Human Factors for the Next-Generation Displays: A Review of Subjective and Objective Measurement Methods

  • Mun, Sungchul;Park, Min-Chul
    • Journal of the Ergonomics Society of Korea
    • /
    • v.32 no.2
    • /
    • pp.207-215
    • /
    • 2013
  • Objective: This study aimed to identify the key human factors that should be considered when developing ultra-high definition TVs by reviewing measurement methods and the main characteristics of ultra-high definition displays. Background: Although much attention has been paid to high-definition displays, few studies have systematically evaluated their human factors. Method: To determine the human factors to be considered in developing human-friendly displays, we reviewed subjective and objective measurement methods, identified their current limitations, and established a guideline for developing human-centered ultra-high definition TVs. In doing so, the pros and cons of both subjective and objective measurement methods for assessing human factors were discussed, and specific aspects of ultra-high definition displays were investigated in the literature. Results: Hazardous effects of undesirable TV viewing, such as visually induced motion sickness, visual fatigue, and mental fatigue, are induced not only by temporal decay of visual function but also by the cognitive load of processing sophisticated external information. There is growing evidence that individual differences in the visual and cognitive ability to process external information can produce contrary responses after exposure to the same viewing situation. The wide field of view that ultra-high definition TVs provide can have positive or negative influences on viewers depending on their individual characteristics. Conclusion: Integrated measurement methods that account for individual differences in the human visual system are required to clearly determine the potential effects of super-high-vision displays with a wide view on humans. Brainwaves, autonomic responses, eye functions, and psychological responses should all be examined simultaneously and correlated. Application: The results of this review are expected to serve as a guideline for determining optimized viewing factors of ultra-high definition displays and for accelerating the successful penetration of next-generation displays into daily life.

Q&A Chatbot in Arabic Language about Prophet's Biography

  • Somaya Yassin Taher;Mohammad Zubair Khan
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.3
    • /
    • pp.211-223
    • /
    • 2024
  • Chatbots have become very popular and are used in several fields. Their emergence has created a new mode of human-computer interaction. A chatbot, also called a "chatter robot" or conversational agent (CA), is a software application that mimics human conversation in its natural format, covering both textual and oral communication, using artificial intelligence (AI) techniques. Generally, there are two types of chatbots: rule-based and smart (machine-learning-based). Over the years, chatbots have been designed in many languages to serve various fields such as medicine, entertainment, and education. Unfortunately, little work has been done on Arabic chatbots. In this paper, we developed a useful tool (chatbot) in the Arabic language that educates people about the Prophet's biography, providing them with useful information by using Natural Language Processing.
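A rule-based chatbot of the kind the abstract distinguishes can be sketched as keyword-overlap retrieval over a stored Q&A corpus. This is a minimal illustration, not the paper's implementation; the knowledge-base entries, the threshold, and the tokenizer are all assumptions (a real Arabic system would also need normalization and stemming).

```python
def tokenize(text):
    # Naive whitespace tokenizer (assumption); an Arabic chatbot would
    # also normalize diacritics and letter variants before matching.
    return set(text.lower().split())

KNOWLEDGE_BASE = [
    # Hypothetical entries standing in for the biography corpus.
    ("where was the prophet born", "The Prophet was born in Mecca."),
    ("in which year did the hijra occur", "The Hijra took place in 622 CE."),
]

def answer(question, kb=KNOWLEDGE_BASE, threshold=2):
    """Return the stored answer whose question shares the most keywords."""
    q_tokens = tokenize(question)
    best_answer, best_score = None, 0
    for stored_q, stored_a in kb:
        score = len(q_tokens & tokenize(stored_q))  # keyword overlap
        if score > best_score:
            best_answer, best_score = stored_a, score
    if best_score >= threshold:
        return best_answer
    return "Sorry, I do not have an answer for that."
```

The threshold keeps incidental single-word overlaps (e.g. "the") from triggering a spurious answer.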

Development of a Human Error Hazard Identification Method for Introducing Smart Mobiles to Nuclear Power Plants

  • Lee, Yong-Hee;Yun, Jong-Hun;Lee, Yong-Hee
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.1
    • /
    • pp.261-269
    • /
    • 2012
  • Objective: The aim of this study is to develop an analysis method for extracting plausible error types when using smart mobile devices in nuclear power plants. Background: Smart mobile devices such as smartphones and tablet computers (smart pads) are being introduced across industries. Nuclear power plants such as the APR1400 have already adopted many up-to-date digital devices in their main control rooms, and with this trend various smart mobile devices will inevitably be introduced to the nuclear field in the near future. However, nuclear power plants (NPPs) must be managed with greater caution than other industrial systems, because the risks posed by this trend are not only economic but also social. It is formally required to reasonably prevent all hazards arising from the introduction of new technologies and devices before they are applied to specific tasks in nuclear power plants. Method: We define interaction segments (IS) as the main construct for describing interactions, and enumerate all plausible error segments (ES) as part of the design evaluation of digital devices. Results: We identify various types of interaction errors that can be reasonably addressed through interaction design for smart mobile devices. Conclusion: Based on the application of the proposed method, we conclude that it can be used to specify requirements for human error hazards in digital devices and to conduct human factors reviews during their design. Application: The proposed method can be applied to predict human errors in tasks involving digital devices, thereby helping ensure the safety of digital devices introduced to NPPs.
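The IS/ES idea can be sketched as crossing each interaction segment with a set of generic error modes to enumerate candidate error segments for analyst screening. The segment names and error-mode taxonomy below are assumptions for illustration; the abstract does not publish the paper's actual taxonomy.

```python
INTERACTION_SEGMENTS = [
    # Hypothetical segments of a smart-pad procedure task in a plant.
    "select procedure on smart-pad",
    "read parameter value from display",
    "enter confirmation on touch screen",
]

# Generic human-error modes (assumption, in the spirit of HRA guide-words).
ERROR_MODES = ["omission", "wrong object", "wrong action", "too late"]

def enumerate_error_segments(segments, modes):
    """Cross every interaction segment with every error mode to produce
    the full list of plausible error segments for design review."""
    return [(seg, mode) for seg in segments for mode in modes]

error_segments = enumerate_error_segments(INTERACTION_SEGMENTS, ERROR_MODES)
# 3 segments x 4 modes = 12 candidate error segments to screen
```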

An Art-Robot Expressing Emotion with Color Light and Behavior by Human-Object Interaction

  • Kwon, Yanghee;Kim, Sangwook
    • Journal of Multimedia Information System
    • /
    • v.4 no.2
    • /
    • pp.83-88
    • /
    • 2017
  • The era of the fourth industrial revolution, which will bring a great wave of change in the 21st century, is an age of hyper-connection linking humans to humans, objects to objects, and humans to objects. In the evolving smart city and smart space, emotional engineering is an interdisciplinary research field that continues to attract attention as technology develops. This paper proposes an emotional object prototype to explore the possibility of emotional interaction between humans and objects. By suggesting emotional objects that produce color changes and movements through emotional interaction between humans and objects, in response to a pressing social issue, the loneliness of modern people, we examine how relations with objects influence our lives. We expect that emotional objects approached from this fundamental view can enter our future living spaces as viable cultural intermediaries.

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.562-567
    • /
    • 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human-Robot Interaction) and in Human-Computer Interaction (HCI). By using facial expressions, an HCI system can produce various reactions corresponding to the user's emotional state, and service agents such as intelligent robots can infer suitable services to offer the user. In this article, we address expressive face modeling using an advanced Active Appearance Model (AAM) for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most widely expressed through the eyes and mouth, so recognizing emotion from a facial image requires extracting feature points such as Ekman's Action Units (AU). The AAM is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian network. First, we obtain reconstructive parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial AAM parameters from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. Finally, after several iterations, we obtain a model matched to the facial feature outline and use it to recognize the facial emotion with a Bayesian network.
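The final classification stage (emotion from AU activations via a Bayesian network) can be illustrated with a naive-Bayes special case. The network structure, the AU set, and all probabilities below are illustrative assumptions; the abstract does not specify the paper's conditional probability tables.

```python
import math

# Hypothetical P(AU active | emotion) for three of the six Ekman
# categories; the numbers are made up for illustration only.
P_AU = {
    "happiness": {"AU6": 0.9, "AU12": 0.95, "AU4": 0.05},
    "sadness":   {"AU6": 0.1, "AU12": 0.05, "AU4": 0.80},
    "anger":     {"AU6": 0.2, "AU12": 0.10, "AU4": 0.90},
}

def classify(active_aus, model=P_AU, prior=None):
    """Return the emotion with the highest (unnormalized) log-posterior,
    treating each AU as conditionally independent given the emotion."""
    scores = {}
    for emotion, likelihoods in model.items():
        log_p = math.log(prior[emotion]) if prior else 0.0
        for au, p in likelihoods.items():
            log_p += math.log(p if au in active_aus else 1.0 - p)
        scores[emotion] = log_p
    return max(scores, key=scores.get)
```

For example, AU6 (cheek raiser) plus AU12 (lip-corner puller) scores highest for happiness under these toy likelihoods, while AU4 (brow lowerer) alone favors sadness.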

A Study about Improvement of Digital Textbook Interface based on Affordance Theory in the Context of HCI (HCI 관점에서 어포던스 이론에 근거한 디지털교과서 사용자 인터페이스 개선 연구)

  • Hwang, YunJa;Sung, EunMo
    • The Journal of Korean Association of Computer Education
    • /
    • v.19 no.2
    • /
    • pp.61-71
    • /
    • 2016
  • The purpose of this study was to identify usability problems and improve the interface of the digital textbook so that it leads learners toward self-directed learning. To address these goals, affordance theory, which relates perceived affordances to behavior, was applied to analyze the digital textbook's user interface. Ten fourth-grade elementary school students participated in the study, reporting affordance problems of the digital textbook through human-computer interaction. As a result, several affordance problems were identified: difficulty clicking pages, touch buttons that were too small, confusing buttons, and a need for specific guidance. Based on these results, suggestions were made to improve the usability of the digital textbook.

A study on human performance in graphic-aided scheduling tasks

  • 백동현;오상윤;윤완철
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1994.04a
    • /
    • pp.357-363
    • /
    • 1994
  • In many industrial situations the human acts as the primary scheduler, since there often exist various constraints and considerations that may not be mathematically or quantitatively defined. For the proper design of interactive scheduling systems, it should be investigated how human strategy and performance are affected by the mode of human-computer interaction at various levels of task complexity. In this study, two scheduling experiments were conducted. The first showed that human schedulers could outperform simple heuristic rules on each of the typical performance measures, such as average machine utilization, average tardiness, and maximum tardiness. In the second experiment, the effect of providing a computer-generated initial solution was investigated. The results showed that in complex problems the subjects performed significantly better when they generated the initial solutions themselves, evidencing the importance of continuity in the strategic search through the problem.
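The performance measures named in the abstract can be made concrete for a toy single-machine schedule. This is a generic sketch, not the paper's experimental setup; the job tuples and the single-machine simplification are assumptions.

```python
def schedule_metrics(jobs):
    """Compute utilization and tardiness measures for a single-machine
    schedule given jobs as (processing_time, due_date) in sequence order."""
    t = 0          # running completion time
    busy = 0       # total processing time (no idle modeled here)
    tardiness = []
    for proc, due in jobs:
        t += proc
        busy += proc
        tardiness.append(max(0, t - due))  # lateness clipped at zero
    return {
        "utilization": busy / t,  # 1.0 when the machine never idles
        "avg_tardiness": sum(tardiness) / len(tardiness),
        "max_tardiness": max(tardiness),
    }

m = schedule_metrics([(3, 4), (2, 4), (4, 12)])
# completion times 3, 5, 9 -> tardiness 0, 1, 0
```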

Wireless EMG-based Human-Computer Interface for Persons with Disability

  • Lee, Myoung-Joon;Moon, In-Hyuk;Kim, Sin-Ki;Mun, Mu-Seong
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1485-1488
    • /
    • 2003
  • This paper proposes a wireless EMG-based human-computer interface (HCI) for persons with disabilities. For the HCI, four interaction commands are defined by combining three shoulder-elevation motions: left, right, and both. The motions are recognized by comparing EMG signals from the levator scapulae muscles against double thresholds. Real-time EMG processing hardware is implemented for acquiring the EMG signals and recognizing the motions. To achieve real-time processing, high-pass, low-pass, band-pass, and band-rejection filters, together with a full-wave rectifier and a mean-absolute-value circuit, are embedded on a board with a high-speed microprocessor. The recognized results are transferred to a wireless client system, such as a mobile robot, via a Bluetooth module. Experimental results using the implemented hardware show that the proposed wireless EMG-based HCI is feasible for persons with disabilities.
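The decision logic described in the abstract (per-channel thresholds on the rectified signal, with the left/right/both pattern mapped to commands) can be sketched in software. The threshold values, window contents, and command names are assumptions; the paper implements this stage in hardware.

```python
TH_LEFT, TH_RIGHT = 0.30, 0.30  # per-channel MAV thresholds (assumed units)

def mav(samples):
    """Mean absolute value of an EMG window (software analogue of the
    rectifier + MAV circuit described in the abstract)."""
    return sum(abs(s) for s in samples) / len(samples)

def classify_command(left_window, right_window):
    """Map the left/right/both shoulder-elevation pattern to a command."""
    left_on = mav(left_window) > TH_LEFT
    right_on = mav(right_window) > TH_RIGHT
    if left_on and right_on:
        return "BOTH"   # hypothetical command names for illustration
    if left_on:
        return "LEFT"
    if right_on:
        return "RIGHT"
    return "REST"       # no elevation detected, no command issued
```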


Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2011.06b
    • /
    • pp.231-234
    • /
    • 2011
  • Audiovisual human emotion recognition can be considered a good approach for multimodal human-computer interaction. However, optimal multimodal information fusion remains challenging. To overcome these limitations and bring robustness to the interface, we propose a framework for an automatic human emotion recognition system based on speech and face display. In this paper, we develop a new approach to model-level information fusion, based on the relationship between speech and facial expression, to automatically detect temporal segments and perform multimodal information fusion.

Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji;Park, Jae-Wan;Song, Dae-Hyeon;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.12
    • /
    • pp.619-628
    • /
    • 2011
  • Vision-based 3D human pose recognition technology is commonly used to convey human gestures in HCI (Human-Computer Interaction). Recognition methods based on a 2D pose model recognize only simple 2D human poses in particular environments. A 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses than a 2D pose model because it can use joint angles and the shape information of body parts. In this paper, we describe the development of interactive game content using a pose recognition interface based on 3D human body joint information. Our system was designed so that users can control the game content with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates, each consisting of 3D information for 14 human body joints. We implemented the game content with our pose recognition system and verified the efficiency of the proposed system. In the future, we will improve the system so that poses can be recognized robustly in various environments.
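Matching a current pose against 14-joint templates, as the abstract describes, can be sketched with a nearest-template search. The distance metric, tolerance, and template coordinates are assumptions; the paper does not state which metric it uses.

```python
import math

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance over 14 corresponding 3D joint positions."""
    assert len(pose_a) == len(pose_b) == 14
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b)) / 14

def recognize(current_pose, templates, tolerance=0.25):
    """Return the name of the nearest pose template, or None if even the
    nearest one exceeds the tolerance (pose not recognized)."""
    best_name, best_d = None, float("inf")
    for name, template in templates.items():
        d = pose_distance(current_pose, template)
        if d < best_d:
            best_name, best_d = name, d
    return best_name if best_d <= tolerance else None

# Hypothetical templates: 14 joints as (x, y, z) tuples, coordinates
# invented purely for illustration.
TEMPLATES = {
    "t_pose":   [(float(i), 0.0, 0.0) for i in range(14)],
    "hands_up": [(float(i), 1.0, 0.0) for i in range(14)],
}
```

A matching input returns its template name; an input far from every template returns None, so unrelated motions do not trigger game commands.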