• Title/Summary/Keyword: Voice-Based Interface


Conversation Analysis based on User-Personality Traits for Voice User Interface (음성 인터페이스를 위한 사용자 성격 관련 담화분석)

  • Kim, Jinguk; Kwon, Soonil
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.341-343 / 2011
  • This study introduces a personality-recognizing user interface technology that automatically identifies personality from speech signals. To distinguish extraversion from introversion based on a user's manner of speaking during spoken dialogue, the method builds on a behavioral pattern: how much time the speaker devotes to thinking during conversation. On this basis, we conducted user personality classification experiments considering the latency between the end of a question and the start of the answer, and the frequency of verbal hesitations used to gain thinking time mid-conversation. The experiments showed an average success rate of about 65%.
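
The two timing cues the abstract names (answer latency and hesitation frequency) reduce to a very small decision rule. Below is a minimal sketch in Python, assuming illustrative feature names, weights, and a threshold; the paper does not publish its decision boundary.

```python
# Hypothetical sketch: classifying extraversion vs. introversion from the two
# timing cues the abstract names (response latency and hesitation frequency).
# Weights and the threshold are illustrative assumptions, not the paper's values.
from dataclasses import dataclass

@dataclass
class UtteranceStats:
    response_latency_s: float   # seconds between question end and answer onset
    hesitations_per_min: float  # filler words ("um", "uh") per minute of speech

def classify_personality(stats: UtteranceStats) -> str:
    # Introverts tend to pause longer before answering and hesitate more
    # mid-utterance; a simple weighted score separates the two classes.
    score = 0.6 * stats.response_latency_s + 0.4 * (stats.hesitations_per_min / 10.0)
    return "introvert" if score > 1.0 else "extravert"

print(classify_personality(UtteranceStats(1.8, 12.0)))  # -> introvert
print(classify_personality(UtteranceStats(0.4, 3.0)))   # -> extravert
```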

Korean Pause Prediction Model based on Dialogue Context (대화 맥락에 기반한 한국어 휴지 예측 모델)

  • Joung Lee; Jeongho Na; Jeongbeom Jeong; Maengsik Choi; Chunghee Lee; Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2023.10a / pp.404-408 / 2023
  • As demand for Voice User Interfaces grows, inserting pauses at appropriate positions to imitate natural speech has become a central task for speech synthesis systems. Given the continuity of dialogue, building a natural voice-based interface requires understanding the dialogue context and inserting pauses at the appropriate positions. Accordingly, this study proposes a Long-Input Transformer-based pause prediction model that inserts pauses at appropriate positions based on dialogue context, and presents validation results on a Korean dialogue dataset.
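
Pause prediction of this kind is commonly framed as token-level binary tagging (pause / no pause after each token). The sketch below assumes that framing with a toy Transformer encoder; the paper's actual model is a Long-Input Transformer conditioned on multi-turn dialogue context.

```python
# Illustrative sketch only: pause prediction framed as binary token tagging.
# The tiny encoder and token-level head are assumptions for demonstration.
import torch
import torch.nn as nn

class PauseTagger(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)  # 0 = no pause, 1 = insert pause

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h = self.encoder(self.embed(token_ids))
        return self.head(h)  # (batch, seq_len, 2) logits per token boundary

tokens = torch.randint(0, 1000, (1, 12))  # toy dialogue-context window
logits = PauseTagger(vocab_size=1000)(tokens)
pauses = logits.argmax(-1)                # predicted pause positions
```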


A Study on the Windows Application Control Model Based on Leap Motion (립모션 기반의 윈도우즈 애플리케이션 제어 모델에 관한 연구)

  • Kim, Won
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.111-116 / 2019
  • With the recent rapid development of computing capabilities, various technologies that facilitate interaction between humans and computers are being studied. The paradigm is shifting from GUIs using traditional input devices to NUIs that use the body, such as 3D motion, haptics, and multi-touch. Various studies have been conducted on transferring human movements to computers using sensors, and with the development of optical sensors that can acquire 3D objects, the range of applications in the industrial, medical, and user-interface fields has expanded. In this paper, I provide a model that can launch programs through gestures instead of the mouse, the default input device, and control Windows based on Leap Motion. The proposed model also converges with an Android application and supports control of various media through voice-recognition commands and buttons connected to a main client. It is expected that Internet media such as video and music can be controlled not only from a client computer but also from an application at a distance, enabling convenient media viewing through the proposed model.
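
The core of such a model is a table from recognized gestures to Windows actions. A hypothetical sketch, with the gesture events stubbed out (a real build would receive them from the Leap Motion SDK):

```python
# Hypothetical sketch of the gesture-to-command mapping the abstract describes:
# a recognized hand gesture launches a program on Windows. Gesture names and
# the mapped programs are illustrative assumptions.
import subprocess

GESTURE_COMMANDS = {
    "swipe_right": ["explorer.exe"],                    # open file explorer
    "circle":      ["notepad.exe"],                     # launch an application
    "key_tap":     ["cmd", "/c", "start", "wmplayer"],  # start a media player
}

def handle_gesture(name: str) -> None:
    cmd = GESTURE_COMMANDS.get(name)
    if cmd is None:
        return  # unmapped gesture: ignore
    subprocess.Popen(cmd)  # fire-and-forget launch of the mapped program

handle_gesture("circle")  # e.g., a circle gesture opens Notepad
```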

English Conversation System Using Artificial Intelligence Based on Virtual Reality (가상현실 기반의 인공지능 영어회화 시스템)

  • Cheon, EunYoung
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.55-61 / 2019
  • To realize foreign-language education, various educational media have been provided, but they have disadvantages: teaching materials and media programs are costly, and real-time responsiveness is poor. In this paper, we propose an artificial-intelligence English conversation system based on VR and speech recognition. We used Google Cardboard VR and the Google Speech API to build the system and developed artificial-intelligence algorithms for providing the virtual-reality environment and conversation. In the proposed speech-recognition server system, the sentences spoken by the user are divided into word units and compared with the words stored in the database, returning the match with the highest probability. Users can communicate with and respond to people in virtual reality. The conversation function is independent of context and theme, and conversations with the AI assistant run in real time so the user can check the system in real time. The system combining virtual reality and speech recognition proposed in this paper is expected to contribute to the expansion of virtual education content services related to the Fourth Industrial Revolution.
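
The word-matching step described here can be sketched as follows, using a string-similarity score as a stand-in for the system's probability measure; the vocabulary and scoring are illustrative assumptions.

```python
# Minimal sketch (assumed details) of the word-matching step: split a
# recognized sentence into words and, for each word, pick the closest entry
# in a stored vocabulary by similarity score.
from difflib import SequenceMatcher

VOCABULARY = ["hello", "weather", "today", "goodbye", "thanks"]

def best_match(word: str) -> tuple[str, float]:
    # Score every stored word and return the most probable match.
    scored = [(w, SequenceMatcher(None, word, w).ratio()) for w in VOCABULARY]
    return max(scored, key=lambda pair: pair[1])

for token in "helo wether todai".split():
    match, score = best_match(token)
    print(f"{token!r} -> {match!r} (confidence {score:.2f})")
```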

Maximum Likelihood-based Automatic Lexicon Generation for AI Assistant-based Interaction with Mobile Devices

  • Lee, Donghyun; Park, Jae-Hyun; Kim, Kwang-Ho; Park, Jeong-Sik; Kim, Ji-Hwan; Jang, Gil-Jin; Park, Unsang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.9 / pp.4264-4279 / 2017
  • In this paper, maximum likelihood-based automatic lexicon generation using mixed syllables is proposed for an unlimited-vocabulary voice interface for East Asian languages (e.g., Korean, Chinese, and Japanese) in AI-assistant-based interaction with mobile devices. A conventional lexicon has two inevitable problems: 1) tedious repetition of out-of-lexicon unit additions to the lexicon, and 2) propagation of errors during morpheme analysis and space segmentation. The proposed method provides an automatic framework that solves both problems. It produces overall accuracy similar to that of previous methods when one out-of-lexicon word appears in a sentence, but yields superior results, with absolute improvements of 1.62%, 5.58%, and 10.09% in word accuracy when the number of out-of-lexicon words in a sentence is two, three, and four, respectively.
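
The mixed-syllable idea can be illustrated with a toy lexicon builder: frequent words stay whole-word units, and rare (potentially out-of-lexicon) words fall back to their syllable units, which is natural for Korean since each Hangul character is one syllable. The frequency threshold below is a stand-in assumption, not the paper's maximum-likelihood criterion.

```python
# Toy illustration of a mixed-syllable lexicon: whole-word units for frequent
# words, per-syllable units as a fallback so the vocabulary stays closed.
from collections import Counter

corpus = ["서울 날씨 알려줘", "내일 서울 날씨", "노래 틀어줘"]
counts = Counter(w for line in corpus for w in line.split())

def build_lexicon(min_count: int = 2) -> set[str]:
    lexicon: set[str] = set()
    for word, n in counts.items():
        if n >= min_count:
            lexicon.add(word)    # frequent word: keep as a whole-word unit
        else:
            lexicon.update(word) # rare word: fall back to syllable units
    return lexicon

print(sorted(build_lexicon()))
```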

Implementation of Android-based Interactive Edutainment Contents Using Authoring Tool Developed for Interactive Animation

  • Song, Mi-Young
    • Journal of the Korea Society of Computer and Information / v.23 no.4 / pp.71-80 / 2018
  • In this paper, we developed an interactive-animation authoring tool and used it to develop Android-based interactive edutainment contents. The authoring tool is based on a graphical user interface, so users can easily create interactive animations. Contents created with the tool are exported as images and XML files so they can be used directly on mobile devices. To increase learning efficiency for children, the Android-based interactive edutainment electronic storybook implemented with this tool provides a recording function so children can listen to their parents' voices, as well as interactive actions in which the characters move along with the story line. We also provided a STEAM game that combines creativity and imagination with science and technology. By creating edutainment contents through the proposed authoring tool, various interactive animation contents can be produced more easily than with code-level implementation. We hope this study will help the development of various interactive edutainment contents that provide educational material of appropriate quantity and quality for young children.
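
A consumer of the tool's image-plus-XML export might look like the sketch below. The element and attribute names are invented for illustration; the abstract specifies only that interactions are exported as images and XML files.

```python
# Hypothetical sketch of reading an exported scene description. The XML schema
# here (scene/character/onTap) is an invented example, not the tool's format.
import xml.etree.ElementTree as ET

SCENE_XML = """
<scene background="page1.png">
  <character id="rabbit" image="rabbit.png" x="120" y="340">
    <onTap action="play_animation" clip="hop"/>
  </character>
</scene>
"""

root = ET.fromstring(SCENE_XML)
for ch in root.iter("character"):
    tap = ch.find("onTap")
    print(f"{ch.get('id')} at ({ch.get('x')}, {ch.get('y')}) "
          f"-> on tap: {tap.get('action')} '{tap.get('clip')}'")
```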

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun; Ahn, So-young; Shin, Dong-il; Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • As interest in Human-Computer Interaction (HCI) grows, research on HCI has been actively conducted, along with research on Natural User Interface/Natural User eXperience (NUI/NUX) that uses a user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture or voice recognition, but these algorithms have a weakness: their implementation is complex and training takes a long time, since they must go through preprocessing, normalization, and feature-extraction steps. Recently, Kinect, released by Microsoft as an NUI/NUX development tool, has attracted attention, and studies using it have been conducted. In a previous study, the authors implemented a hand-mouse interface with outstanding intuitiveness using the user's physical features. However, it had weaknesses such as unnatural mouse movement and low accuracy of mouse functions. In this study, we designed and implemented a hand-mouse interface introducing a new concept called the 'virtual monitor', which extracts the user's physical features through Kinect in real time. A virtual monitor is a virtual space that the hand mouse can control; coordinates on the virtual monitor are accurately mapped onto coordinates on the real monitor. The hand-mouse interface based on the virtual-monitor concept maintains the outstanding intuitiveness of the previous study and enhances the accuracy of mouse functions. Further, we increased accuracy by recognizing the user's unnecessary actions using a concentration indicator derived from electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface on 50 people in their 10s to 50s. In the intuitiveness experiment, 84% of subjects learned how to use it within 1 minute. In the accuracy experiment, the mouse functions showed accuracies of 80.4% (drag), 80% (click), and 76.7% (double-click). With its intuitiveness and accuracy verified through experiment, the proposed hand-mouse interface is expected to be a good example of controlling systems by hand in the future.
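
The virtual-monitor mapping is essentially a linear transform from a calibrated hand-plane rectangle to screen pixels. A minimal sketch under assumed geometry:

```python
# Minimal sketch, under assumed geometry: map a hand position inside a
# user-calibrated "virtual monitor" rectangle (in Kinect camera space) onto
# real-screen pixel coordinates with a linear transform.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float; y: float; w: float; h: float

VIRTUAL = Rect(-0.3, 0.1, 0.6, 0.4)   # calibrated plane in meters (assumed)
SCREEN  = Rect(0, 0, 1920, 1080)      # target display in pixels

def to_screen(hand_x: float, hand_y: float) -> tuple[int, int]:
    # Normalize within the virtual rectangle, then scale to screen pixels.
    u = (hand_x - VIRTUAL.x) / VIRTUAL.w
    v = (hand_y - VIRTUAL.y) / VIRTUAL.h
    u, v = min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0)  # clamp to edges
    return int(SCREEN.x + u * SCREEN.w), int(SCREEN.y + v * SCREEN.h)

print(to_screen(0.0, 0.3))  # hand at the center of the plane -> (960, 540)
```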

A Conversational Interactive Tactile Map for the Visually Impaired (시각장애인의 길 탐색을 위한 대화형 인터랙티브 촉각 지도 개발)

  • Lee, Yerin; Lee, Dongmyeong; Quero, Luis Cavazos; Bartolome, Jorge Iranzo; Cho, Jundong; Lee, Sangwon
    • Science of Emotion and Sensibility / v.23 no.1 / pp.29-40 / 2020
  • Visually impaired people use tactile maps to get spatial information about their surroundings, find their way, and improve their independent mobility. However, classical tactile maps that use braille to describe locations within the map have several limitations, such as a lack of information due to space constraints and limited feedback possibilities. This study describes the development of a new multimodal interactive tactile map interface that addresses these challenges to improve the usability and independence of visually impaired people. The interface adds touch-gesture recognition to the surface of tactile maps and enables users to interact verbally with a voice agent to receive feedback and information about navigation routes and points of interest. A low-cost prototype was developed to conduct usability tests that evaluated the interface through a survey and interview given to blind participants after using the prototype. The test results show that the interactive tactile map prototype provides improved usability over traditional braille-only tactile maps. Participants reported that it was easier to find the starting point and the points of interest they wished to navigate to with the prototype, and it improved self-reported independence and confidence compared with traditional tactile maps. Future work includes further development of the mobility solution based on the feedback received and an extensive quantitative study.
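
The touch-plus-voice interaction can be sketched as a lookup from touched map regions to spoken route descriptions; the region names and the speak() stub below are illustrative, not from the paper.

```python
# Hypothetical sketch of the multimodal interaction: a touch on a map region
# triggers a spoken description. Regions and text are invented examples.
REGIONS = {
    "library":   "Library entrance. Route: 20 meters ahead, then turn left.",
    "cafeteria": "Cafeteria. Route: 35 meters ahead on the right.",
}

def speak(text: str) -> None:
    print(f"[TTS] {text}")  # stand-in for a real text-to-speech call

def on_touch(region_id: str) -> None:
    speak(REGIONS.get(region_id, "Unknown area. Try another location."))

on_touch("library")
```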

Design and Implementation of a Bluetooth Baseband Module based on IP (IP에 기반한 블루투스 기저대역 모듈의 설계 및 구현)

  • Lim, Ji-Suk; Chun, Ik-Jae; Kim, Bo-Gwan
    • Proceedings of the Korea Information Processing Society Conference / 2002.04b / pp.1285-1288 / 2002
  • Bluetooth wireless technology is a publicly available specification proposed for radio frequency (RF) communication for short-range, point-to-multipoint voice and data transfer. It operates in the 2.4 GHz ISM (Industrial, Scientific and Medical) band and offers the potential for low-cost, broadband wireless access for various mobile and portable devices at a range of about 10 meters. In this paper, we describe the structure and the test results of the Bluetooth baseband module we have developed. The module was developed based on IP reuse, so the interface of each block, such as the link controller, UART, and audio CODEC, is designed around an ARM7-compatible processor. We also considered various interfaces to related external chips. The fully synthesizable baseband module was fabricated in a 0.25 μm CMOS technology, occupying a 2.79 × 2.8 mm² area including the ARM7TDMI processor. An FPGA implementation of the module was tested for file and bit-stream transfers between PCs.


Hi, KIA! Classifying Emotional States from Wake-up Words Using Machine Learning (Hi, KIA! 기계 학습을 이용한 기동어 기반 감성 분류)

  • Kim, Taesu; Kim, Yeongwoo; Kim, Keunhyeong; Kim, Chul Min; Jun, Hyung Seok; Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility / v.24 no.1 / pp.91-104 / 2021
  • This study explored users' emotional states identified from the wake-up words "Hi, KIA!" using a machine learning algorithm, in the context of a passenger car's voice user interface. We targeted four emotional states, namely excited, angry, desperate, and neutral, and created a total of 12 emotional scenarios in the context of car driving. Nine college students participated and recorded sentences as guided in the visualized scenarios. The wake-up words were extracted from the whole sentences, resulting in two data sets. We used the soundgen package and the svmRadial method of the caret package in open-source R code to collect acoustic features of the recorded voices, and performed machine learning-based analysis to determine the predictability of the modeled algorithm. Across all nine participants and the four emotional categories, we compared the accuracy on wake-up words (60.19%; range 22%-81%) with that on whole sentences (41.51%). Individual differences in accuracy and sensitivity were noticeable, while the selected features were relatively constant. This study provides empirical evidence on the potential application of wake-up words in emotion-driven user experience for communication between users and artificial intelligence systems.
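
An analogous pipeline can be sketched in Python (the study itself used R's soundgen for acoustic features and caret's svmRadial for classification): extract per-clip features and fit an RBF-kernel SVM over the four emotion labels. The features and data below are placeholders, not the study's data.

```python
# Analogous sketch (the paper used R's soundgen + caret svmRadial): per-clip
# acoustic features classified into four emotion labels with an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 6))   # placeholder features: pitch mean/range, energy...
y = rng.choice(["excited", "angry", "desperate", "neutral"], size=48)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X, y)
print(model.predict(X[:3]))    # predicted emotional states for three clips
```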