• Title/Summary/Keyword: 인간기계인터페이스 (human-machine interface)

Applying Social Strategies for Breakdown Situations of Conversational Agents: A Case Study using Forewarning and Apology (대화형 에이전트의 오류 상황에서 사회적 전략 적용: 사전 양해와 사과를 이용한 사례 연구)

  • Lee, Yoomi;Park, Sunjeong;Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility / v.21 no.1 / pp.59-70 / 2018
  • With the breakthrough of speech recognition technology, conversational agents have become pervasive through smartphones and smart speakers. The accuracy of speech recognition has reached human level, but it still has limitations in understanding the underlying meaning or intention of words and in following long conversations. Accordingly, users experience various errors when interacting with conversational agents, which may negatively affect the user experience. In addition, for smart speakers that use voice as the main interface, a lack of system feedback and transparency has been reported as a major usability issue. There is therefore a strong need for research on how users can better understand the capabilities of conversational agents and on how to mitigate negative emotions in error situations. In this study, we applied two social strategies, "forewarning" and "apology", to a conversational agent and investigated how these strategies affect users' perceptions of the agent in breakdown situations. We created a series of demo videos of a user interacting with a conversational agent; after watching the videos, participants evaluated how much they liked and trusted the agent through an online survey. Responses from 104 participants were analyzed, and the results were contrary to our expectations based on the literature: forewarning gave users a negative impression, especially regarding the agent's reliability, and apology in a breakdown situation did not affect users' perceptions. In follow-up in-depth interviews, participants explained that they perceived the smart speaker as a machine rather than a human-like object, and that for this reason the social strategies did not work. These results show that social strategies should be applied according to the perception users have of the agent.

An Emotion Recognition Technique using Speech Signals (음성신호를 이용한 감정인식)

  • Jung, Byung-Wook;Cheun, Seung-Pyo;Kim, Youn-Tae;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.494-500 / 2008
  • In the development of human interface technology, interactions between humans and machines are important, and research on emotion recognition supports these interactions. This paper presents an algorithm for emotion recognition based on personalized speech signals. The proposed approach extracts characteristics of the speech signal for emotion recognition using PLP (perceptual linear prediction) analysis. The PLP technique was originally designed to suppress speaker-dependent components in features used for automatic speech recognition, but later experiments demonstrated its efficiency for speaker recognition tasks. This paper therefore proposes an algorithm that can evaluate personal emotion from speech signals in real time, using personalized emotion patterns built by PLP analysis. The experimental results show that the maximum recognition rate for the speaker-dependent system is above 90%, while the average recognition rate is 75%. The proposed system has a simple structure yet is efficient enough to be used in real time.
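
The personalized-pattern idea above can be sketched as a nearest-template classifier; the code below is an illustrative stand-in with toy feature values, not the paper's PLP-based implementation.

```python
# Illustrative sketch only: classify an utterance by matching its feature
# vector against stored per-speaker emotion templates (nearest neighbor).
# The numbers are toy values standing in for the paper's PLP-based patterns.

def classify_emotion(features, patterns):
    """Return the emotion whose stored template is closest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(patterns, key=lambda emotion: dist(features, patterns[emotion]))

patterns = {  # hypothetical personalized templates for one speaker
    "neutral": [0.1, 0.2, 0.1],
    "angry":   [0.9, 0.8, 0.7],
    "happy":   [0.5, 0.6, 0.4],
}
label = classify_emotion([0.85, 0.75, 0.65], patterns)  # nearest is "angry"
```

Because the templates are per speaker, this kind of matching stays cheap enough for the real-time use the paper emphasizes.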

Function Expansion of Human-Machine Interface(HMI) for Small and Medium-sized Enterprises: Focused on Injection Molding Industries (중소기업을 위한 인간-기계 인터페이스(HMI) 기능 확장: 사출성형기업 중심으로)

  • Sungmoon Bae;Sua Shin;Junhong Yook;Injun Hwang
    • Journal of Korean Society of Industrial and Systems Engineering / v.45 no.4 / pp.150-156 / 2022
  • As the 4th industrial revolution emerges, the implementation of smart factories is essential in the manufacturing industry. However, 80% of the small and medium-sized enterprises that have introduced smart factories remain at the basic level. In addition, in root industries such as injection molding, PLC and HMI software are used only to show, in real time, operation data aggregated by facility. This limits managers to viewing data rather than making production-related decisions. This study presents a method for upgrading the level of smart factories to suit the reality of small and medium-sized enterprises. By monitoring the data collected from the facilities, the proposed algorithms support meaningful decision-making: they determine whether an abnormal situation has occurred and sound an alarm when the process is out of control. In this study, the functions of the HMI were expanded to check the failure frequency rate, facility time operation rate, and mean time between failures based on facility operation signals. An HMI prototype including the proposed extended functions was implemented for the injection molding industry. This is expected to provide a foundation for SMEs that lack sufficient IT capabilities to advance to the middle level of smart factories without large investments.
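
The reliability indicators named above can be derived from facility operation signals roughly as follows; the event-log format here is a hypothetical simplification, not the prototype's actual data model.

```python
# Hedged sketch: deriving the HMI's reliability indicators from a hypothetical
# facility event log. Each entry is (timestamp_minutes, state), where state is
# "run" (producing) or "fail" (stopped by a failure).

def reliability_metrics(events, total_minutes):
    """Return (failure_frequency_rate, time_operation_rate, mtbf_minutes)."""
    failures = sum(1 for _, state in events if state == "fail")
    run_time = 0
    # each state lasts until the next event (or the end of the schedule)
    for (t, state), (t_next, _) in zip(events, events[1:] + [(total_minutes, None)]):
        if state == "run":
            run_time += t_next - t
    freq_rate = failures / total_minutes       # failures per scheduled minute
    operation_rate = run_time / total_minutes  # share of schedule spent running
    mtbf = run_time / failures if failures else float("inf")
    return freq_rate, operation_rate, mtbf

log = [(0, "run"), (100, "fail"), (110, "run"), (200, "fail"), (220, "run")]
metrics = reliability_metrics(log, 300)  # 2 failures over a 300-minute shift
```

With two failures and 270 minutes of running time, this yields an operation rate of 0.9 and an MTBF of 135 minutes.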

Development of a Field Evaluation Methodology for eHMI Technology in Autonomous Vehicle-Pedestrian Communication (자율주행차-보행자간 의사소통을 위한 eHMI 기술의 현장 평가 방법론 개발)

  • Hyunmi Lee;Jeong-Ah Jang;Soomin Kwon;Yeonhwa Ha
    • Journal of Auto-vehicle Safety Association / v.16 no.3 / pp.64-71 / 2024
  • With the advent of Level 4 autonomous vehicles, the need for effective communication between these vehicles and pedestrians has become increasingly important. To address this, eHMI (Enhanced Human-Machine Interface) technology has been proposed to replace traditional driver-pedestrian interactions. eHMI plays a crucial role in conveying the vehicle's status and intentions to pedestrians, thereby improving interaction. Globally, automobile manufacturers and technology companies are investing in visual eHMI technologies, advancing in tandem with the automotive industry. This study developed a methodology for field evaluation of communication technologies between pedestrians and Level 4 autonomous vehicles in urban settings. A three-stage message and display system, tailored to the pedestrian crossing process (recognition, judgment, response), was established. In experiments without message displays, 42.2% (38 out of 90) of participants abandoned crossing. Most who crossed did so only after the vehicle stopped, with some groups crossing irrespective of vehicle approach. When the 'yield' message was introduced, crossing patterns and speed distributions changed significantly. All 38 participants who initially abandoned crossing decided to cross, and the elderly who previously ran or walked quickly crossed at a normal pace, reducing overall crossing time. Field experiments are crucial as real-world conditions may elicit different behaviors than controlled settings. Continued research in field evaluations is essential to develop and assess effective eHMI messages. By observing and analyzing actual pedestrian movements, we can reliably evaluate the effectiveness of these communication technologies, thereby enhancing interaction between autonomous vehicles and pedestrians.
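
The three-stage message scheme keyed to the pedestrian crossing process (recognition, judgment, response) can be illustrated with a minimal mapping; the stage names come from the abstract, but the message texts below are hypothetical, not the wording used in the study.

```python
# Hypothetical eHMI message table for the three crossing stages named above.
# The message strings are illustrative placeholders, not the study's wording.
STAGE_MESSAGES = {
    "recognition": "PEDESTRIAN DETECTED",  # vehicle signals it has seen the pedestrian
    "judgment":    "SLOWING DOWN",         # vehicle communicates its intention
    "response":    "YIELDING",             # vehicle has stopped and yields
}

def ehmi_message(stage):
    """Return the display text for a crossing stage ("" if unknown)."""
    return STAGE_MESSAGES.get(stage, "")
```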

EEG based Vowel Feature Extraction for Speech Recognition System using International Phonetic Alphabet (EEG기반 언어 인식 시스템을 위한 국제음성기호를 이용한 모음 특징 추출 연구)

  • Lee, Tae-Ju;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.90-95 / 2014
  • Research using brain-computer interfaces (BCI), a new interface system connecting humans to machines, has been conducted to implement user-assistance devices such as wheelchair controllers and character-input systems. Recent studies have also attempted to implement speech recognition systems based on brain waves, aiming at silent communication. In this paper, we studied how to extract vowel features based on the International Phonetic Alphabet (IPA), as a foundational step toward an electroencephalogram (EEG) based speech recognition system. We conducted a two-step experiment with three healthy male subjects: in the first step, the subjects imagined speaking a single vowel; in the second step, they imagined speaking two successive vowels. Among the 64 acquired channels, we selected the 32 channels covering the frontal lobe, related to thinking, and the temporal lobe, related to speech. Eigenvalues of the signal were used as the feature vector, and a support vector machine (SVM) was used for classification. The first step showed that a feature vector of order 10 or higher is needed to analyze the EEG signal of imagined speech; with an 11th-order feature vector, the highest average classification rate was 95.63% (between /a/ and /o/) and the lowest was 86.85% (between /a/ and /u/). In the second step, we studied the difference between the imagined-speech signals of single and successive vowels.
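
The eigenvalue feature step can be illustrated in miniature: the sketch below computes the eigenvalues of a two-channel covariance matrix in closed form, whereas the study used 32 channels and classified the resulting features with an SVM.

```python
# Toy illustration of the eigenvalue feature step: the eigenvalues of the
# channel covariance matrix serve as the feature vector. For simplicity this
# sketch uses two channels, where the 2x2 eigenvalues have a closed form.

def covariance_eigenvalues(ch1, ch2):
    """Eigenvalues (descending) of the 2x2 covariance of two EEG channels."""
    n = len(ch1)
    m1, m2 = sum(ch1) / n, sum(ch2) / n
    c11 = sum((x - m1) ** 2 for x in ch1) / n
    c22 = sum((y - m2) ** 2 for y in ch2) / n
    c12 = sum((x - m1) * (y - m2) for x, y in zip(ch1, ch2)) / n
    # closed-form eigenvalues of [[c11, c12], [c12, c22]]
    disc = (((c11 - c22) / 2) ** 2 + c12 ** 2) ** 0.5
    mean = (c11 + c22) / 2
    return (mean + disc, mean - disc)

ev = covariance_eigenvalues([1.0, 2.0, 3.0, 4.0], [1.0, 2.1, 2.9, 4.2])
```

The two eigenvalues always sum to the trace of the covariance matrix, which gives a quick sanity check on the computation.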

Design of Backrest and Seat Pan of Chairs on the Basis of Haptics-Aided Design Method (햅틱 보조 설계 기법에 기반한 의자의 등판 및 좌판의 설계)

  • Jin, Yong-Jie;Lee, Sang-Duck;Song, Jae-Bok
    • Transactions of the Korean Society of Mechanical Engineers A / v.34 no.5 / pp.527-533 / 2010
  • The feeling that is evoked when products are handled has become increasingly important in the design of products primarily used by humans. In the traditional product design process, prototypes are built several times in order to evaluate the feeling evoked during use. However, these design processes can be optimized by adopting a haptic simulator that can serve as a prototype. The design method based on the use of the haptic simulator is called haptics-aided design (HAD), which is the main subject of this paper. Here, a new HAD method that can be effectively used to design a custom-made chair is proposed. A haptic simulator, which is composed of a haptic chair and an intuitive graphical user interface, was developed. The simulator can adjust the impedance of the backrest and seat pan of a chair in real time. The haptic chair was used instead of real prototypes in order to evaluate the comfort of the initially designed seat pan and backrest on the basis of their stiffness and damping values. It was shown that the HAD method can be effectively used to design a custom-made chair and can be extended to other product design processes.
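
The real-time impedance adjustment at the heart of the haptic chair can be sketched as a simple spring-damper rendering law; this is a minimal illustration of the idea (F = kx + cv with tunable k and c), not the simulator's actual control code.

```python
# Minimal sketch of the adjustable-impedance idea behind the haptic chair:
# the rendered reaction force follows F = k*x + c*v, and tuning (k, c) in
# real time changes how stiff or damped the backrest or seat pan feels.
# The numbers below are illustrative, not values from the paper.

class HapticSurface:
    def __init__(self, stiffness, damping):
        self.k = stiffness  # N/m
        self.c = damping    # N*s/m

    def reaction_force(self, deflection, velocity):
        """Force (N) rendered for a given deflection (m) and velocity (m/s)."""
        return self.k * deflection + self.c * velocity

soft = HapticSurface(stiffness=800.0, damping=40.0)
firm = HapticSurface(stiffness=3000.0, damping=40.0)
f_soft = soft.reaction_force(0.02, 0.0)  # a softer backrest pushes back less
f_firm = firm.reaction_force(0.02, 0.0)
```

Swapping (k, c) on the fly is what lets one physical chair stand in for many prototype designs.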

Automatic Recognition and Normalization System of Korean Time Expression using the individual time units (시간의 단위별 처리를 이용한 자동화된 한국어 시간 표현 인식 및 정규화 시스템)

  • Seon, Choong-Nyoung;Kang, Sang-Woo;Seo, Jung-Yun
    • Korean Journal of Cognitive Science / v.21 no.4 / pp.447-458 / 2010
  • Time expressions are a very important form of information in different types of data. Thus, the recognition of a time expression is an important factor in the field of information extraction. However, most previously designed systems consider only a specific domain, because time expressions do not have a regular form and frequently include different ellipsis phenomena. We present a two-level recognition method consisting of extraction and transformation phases to achieve generality and portability. In the extraction phase, time expressions are extracted by atomic time units for extensibility. Then, in the transformation phase, omitted information is restored using basis time and prior knowledge. Finally, every complete atomic time unit is transformed into a normalized form. The proposed system can be used as a general-purpose system, because it has a language- and domain-independent architecture. In addition, this system performs robustly in noisy data like SMS data, which include various errors. For SMS data, the accuracies of time-expression extraction and time-expression normalization by using the proposed system are 93.8% and 93.2%, respectively. On the basis of these experimental results, we conclude that the proposed system shows high performance in noisy data.
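
The extract-then-normalize design can be sketched with toy patterns; the regular expression and date format below are illustrative stand-ins, far simpler than the paper's Korean atomic time units.

```python
import re
from datetime import datetime

# Illustrative two-phase sketch of the extract-then-normalize design: a toy
# pattern extracts an atomic time unit, then fields omitted in the text are
# restored from a base time, mirroring the paper's ellipsis handling.

ATOMIC = re.compile(r"(?:(?P<month>\d{1,2})/(?P<day>\d{1,2}) )?(?P<hour>\d{1,2}):(?P<minute>\d{2})")

def normalize(text, base):
    """Extract one time expression and fill omitted fields from `base`."""
    m = ATOMIC.search(text)
    if m is None:
        return None
    parts = {k: int(v) for k, v in m.groupdict().items() if v is not None}
    return base.replace(month=parts.get("month", base.month),
                        day=parts.get("day", base.day),
                        hour=parts["hour"], minute=parts["minute"], second=0)

base = datetime(2010, 4, 15, 9, 0)
full = normalize("5/2 14:00", base)         # all fields present in the text
partial = normalize("meet at 18:30", base)  # date omitted -> taken from base
```

Separating extraction from transformation is what gives the architecture its domain independence: only the atomic patterns are language-specific.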

Real-Time Remote Display Technique based on Wireless Mobile Environments (무선 모바일 환경 기반의 실시간 원격 디스플레이 기법)

  • Seo, Jung-Hee;Park, Hung-Bog
    • The KIPS Transactions:PartC / v.15C no.4 / pp.297-302 / 2008
  • When a large amount of information must be shown from a mobile device, systems are being developed that display it on remote devices such as TVs, with the mobile device acting as a remote controller, because the limited bandwidth and small screens of mobile devices make it difficult to display much information directly. However, designing and developing interfaces for each kind of remote display device is costly. In this paper, a mobile-environment-based real-time remote display system is proposed for continuous monitoring of status data keyed by unique mote IDs. Remote data are collected and monitored through sensor network devices such as ZigbeX, applying status-aware real-time remote display to ubiquitous computing environment data, and a real-time remote display application is implemented on a wireless PDA. The proposed system consists of a PDA for remote display and control, embedded mote application programming for data collection and radio-frequency communication, server modules that analyze and process the collected data, and virtual prototyping for monitoring and control via virtual machines. The implementation results indicate that the system provides good mobility from a human-oriented viewpoint and good usability of information access, and also transmits data efficiently.

A Study on Verification of Back TranScription(BTS)-based Data Construction (Back TranScription(BTS)기반 데이터 구축 검증 연구)

  • Park, Chanjun;Seo, Jaehyung;Lee, Seolhwa;Moon, Hyeonseok;Eo, Sugyeong;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.109-117 / 2021
  • Recently, the use of speech-based interfaces is increasing as a means of human-computer interaction (HCI). Accordingly, interest in post-processors that correct errors in speech recognition results is also increasing. However, constructing the data needed to build a sequence-to-sequence (S2S) based speech recognition post-processor requires a great deal of human labor. To alleviate the limitations of the existing construction methodology, a new data construction method called Back TranScription (BTS) was proposed. BTS combines TTS and STT technology to create a pseudo-parallel corpus. This methodology eliminates the role of a phonetic transcriptor and can automatically generate vast amounts of training data, saving cost. Extending the existing BTS research, this paper verifies through experiments that data should be constructed in consideration of text style and domain rather than without any criteria.
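
The BTS pipeline can be sketched as follows; `fake_tts` and `fake_stt` are stand-ins for real engines, with the STT stand-in injecting the kind of noise (lost punctuation and casing) a recognizer typically introduces.

```python
# Hedged sketch of the BTS idea: pass clean text through TTS and then STT to
# obtain a noisy counterpart, yielding (noisy, clean) pairs for training a
# post-processor. fake_tts/fake_stt are hypothetical stand-ins for real engines.

def fake_tts(text):
    return text  # a real TTS engine would return audio

def fake_stt(audio):
    # simulate typical recognizer noise: dropped punctuation, lowercasing
    return audio.replace(",", "").replace(".", "").lower()

def build_pseudo_parallel_corpus(sentences):
    """BTS: no human phonetic transcriptor is needed for the noisy side."""
    return [(fake_stt(fake_tts(s)), s) for s in sentences]

corpus = build_pseudo_parallel_corpus(["Hello, world.", "Turn the light on."])
```

Because both sides are generated automatically, the corpus can be scaled to whatever volume the S2S post-processor needs.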

Neurotechnologies and civil law issues (뇌신경과학 연구 및 기술에 대한 민사법적 대응)

  • SooJeong Kim
    • The Korean Society of Law and Medicine / v.24 no.2 / pp.147-196 / 2023
  • Advances in brain science have made it possible to stimulate the brain to treat brain disorders or to connect neuronal activity directly to external devices. Non-invasive neurotechnologies already exist, but invasive neurotechnologies can provide more precise stimulation and measure brain waves more precisely. Deep brain stimulation (DBS) is now recognized as an accepted treatment for Parkinson's disease and essential tremor, and it has also shown positive effects in patients with Alzheimer's disease and depression. Brain-computer interfaces (BCI) are still in the clinical stage, but they can help patients in a vegetative state communicate and support rehabilitation for people with nerve damage. The issue is that the people who need these invasive neurotechnologies are often those whose capacity to consent is impaired or who are unable to communicate because of disease or nerve damage, while DBS and BCI operations are highly invasive and require the informed consent of patients. Especially where neurotechnology is still in clinical trials, the risks are greater and the benefits are uncertain, so more explanation should be provided to let patients make an informed decision. If the patient is under guardianship, the guardian may substitute for the patient's consent, if necessary with the authorization of a court. If the patient is not under guardianship and the patient's capacity to consent is impaired, or the patient is unable to express consent, Korean healthcare institutions tend to rely on the patient's near relatives (de facto guardians) to give consent. But the concept of a de facto guardian is not provided for in the Korean civil law system. In the long run, it would be more appropriate to provide that a patient's spouse or next of kin may be authorized to give consent for the patient if he or she is neither under guardianship nor has appointed an enduring power of attorney.
If the patient was not properly informed of the risks involved in the neurosurgery, he or she may be entitled to compensation for intangible damages. If there is a causal relation between the malpractice and the side effects, the patient may also recover damages for those side effects. In addition, both BCI and DBS involve the implantation of electrodes or microchips in the brain, which are controlled by external devices. Since implantable medical devices are subject to product liability law, the patient may be able to sue the manufacturer for damages if a defect caused the adverse effects. Recently, Korea's medical device regulation mandated a liability insurance system for implantable medical devices to strengthen consumer protection.