• Title/Summary/Keyword: Voice-Based Interface

An Implementation of Travel Information Service Using VoiceXML and GPS (VoiceXML과 GPS를 이용한 여행정보 서비스의 구현)

  • Oh, Jae-Gyu;Kim, Sun-Hyung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.8 no.6
    • /
    • pp.1443-1448
    • /
    • 2007
  • In this paper, we implement a travel information service based on a distributed computing environment that can be used through the web (Internet) and a speech interface at the same time and can incorporate location information, using VoiceXML for voice and web-browser access together with GPS, to overcome the limitations of traditional web-based travel information services. Because the IVR (Interactive Voice Response) of a traditional call center operates on pre-installed scenarios, service takes a long time and the voice prompts must be re-recorded whenever the response content changes. In contrast, the proposed VoiceXML and GPS-based travel information service is easy to reconfigure, because individual conversation scenarios are written as documents (files) and simply uploaded to the server, and it can still deliver useful travel information under environmental restrictions such as remote regions: the prototype locates the user's present position from GPS information and then provides travel information for that location.
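
The abstract describes a prototype that locates the user from a GPS fix and serves per-scenario VoiceXML documents from a server. A minimal Python sketch of that idea, assuming a hypothetical attraction list with coordinates (the names and numbers are illustrative, not the paper's data), picks the nearest attraction by haversine distance and emits a tiny VoiceXML 2.0 prompt for it:

```python
import math

# Hypothetical attraction list: (name, latitude, longitude) -- illustrative only.
ATTRACTIONS = [
    ("Gyeongbokgung Palace", 37.5796, 126.9770),
    ("Haeundae Beach", 35.1587, 129.1604),
    ("Seongsan Ilchulbong", 33.4580, 126.9425),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_attraction(lat, lon):
    """Return the attraction closest to the user's GPS position."""
    return min(ATTRACTIONS, key=lambda a: haversine_km(lat, lon, a[1], a[2]))

def vxml_prompt(lat, lon):
    """Build a one-prompt VoiceXML document describing the nearest attraction."""
    name, alat, alon = nearest_attraction(lat, lon)
    dist = haversine_km(lat, lon, alat, alon)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0">
  <form id="travel_info">
    <block>
      <prompt>The nearest attraction is {name}, about {dist:.1f} kilometres away.</prompt>
    </block>
  </form>
</vxml>"""

if __name__ == "__main__":
    print(vxml_prompt(37.57, 126.98))  # a GPS fix near central Seoul
```

Because each dialog is just a generated document, updating the service amounts to replacing the file on the server, which is the reconfiguration advantage the abstract emphasizes.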

APPLICATION OF KOREAN TEXT-TO-SPEECH FOR X.400 MHS SYSTEM

  • Kim, Hee-Dong;Koo, Jun-Mo;Choi, Ho-Joon;Kim, Sang-Taek
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1994.06a
    • /
    • pp.885-892
    • /
    • 1994
  • This paper presents a Korean text-to-speech (TTS) algorithm with speed and intonation control capability, and describes the development of a voice message delivery system employing this TTS algorithm. The system allows Interpersonal Messaging (IPM) Service users of the Message Handling System (MHS) to send their text messages to other users over a telephone line as synthetic voice. The X.400 MHS recommendation does not specify the protocols and service elements for such a voice message delivery system, so we defined an access protocol and service elements for a Voice Access Unit based on the application program interface for message transfer between the X.400 Message Transfer Agent and the Voice Access Unit. The system architecture and operation are described.
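
The Voice Access Unit described here receives an IPM text message from the X.400 Message Transfer Agent and delivers it over the telephone line as synthetic speech. As a rough sketch only (the paper defines its own access protocol and service elements, which are not reproduced here), the hand-off could be modeled in Python like this, with hypothetical `synthesize` and `dial_and_play` hooks standing in for the TTS engine and the telephone-line interface:

```python
from dataclasses import dataclass

@dataclass
class IpmMessage:
    """Simplified Interpersonal Message handed over by the X.400 MTA."""
    recipient_phone: str
    subject: str
    body: str

def synthesize(text: str) -> bytes:
    """Hypothetical TTS hook; the paper's Korean TTS algorithm would go here."""
    raise NotImplementedError("plug in the TTS engine")

def dial_and_play(phone: str, audio: bytes) -> None:
    """Hypothetical telephony hook that dials the recipient and plays audio."""
    raise NotImplementedError("plug in the telephone line interface")

class VoiceAccessUnit:
    """Accepts messages from the MTA and delivers them as voice."""

    def deliver(self, msg: IpmMessage) -> None:
        text = f"Subject: {msg.subject}. {msg.body}"
        audio = synthesize(text)
        dial_and_play(msg.recipient_phone, audio)
```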

Implementation of interactive Stock Trading System Using VoiceXML

  • Shin Jeong-Hoon;Cho Chang-Su;Hong Kwang-Seok
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.387-390
    • /
    • 2004
  • In this paper, we design and implement a practical application service using VoiceXML, and we suggest solutions to the problems that can occur when implementing a new system with it, based on hands-on experience. Until now, speech-related services were developed using APIs (Application Program Interfaces) and programming languages, methods that depend on the system architecture, so reuse of content and resources was very difficult. To solve these problems, companies now develop their applications using VoiceXML. The advantages of using VoiceXML when developing services are as follows: web development technologies and web content delivery technologies can be reused; the effort spent on low-level programming in languages such as C or assembler is avoided; and the effort spent on resource management is reduced. As a result, development time is shortened and compatibility problems between systems are resolved. However, the practical problems that arise when implementing a service with VoiceXML are still poorly understood. To address this, we implemented an interactive stock trading system using VoiceXML and concentrated on identifying the problems encountered, then proposed solutions to these problems and analyzed the strengths and weaknesses of the proposed system.
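
The advantage claimed here, reusing web development and web content-delivery technology, amounts to serving VoiceXML documents to the voice gateway exactly as HTML pages are served to a browser. A minimal sketch under that assumption, using only Python's standard library and a hypothetical in-memory quote table (the stock names and prices are placeholders, not the paper's system):

```python
from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

# Hypothetical quote table standing in for the trading back end.
QUOTES = {"samsung electronics": 71000, "hyundai motor": 185000}

def stock_vxml(name: str, price: int) -> str:
    """VoiceXML document that reads a quote back to the caller."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0">
  <form>
    <block>
      <prompt>{name} is trading at {price} won.</prompt>
    </block>
  </form>
</vxml>"""

def app(environ, start_response):
    # The voice gateway requests e.g. /quote?name=samsung+electronics,
    # exactly as a browser would request an HTML page.
    name = parse_qs(environ.get("QUERY_STRING", "")).get("name", [""])[0].lower()
    price = QUOTES.get(name)
    body = stock_vxml(name, price) if price else '<vxml version="2.0"/>'
    start_response("200 OK", [("Content-Type", "application/voicexml+xml")])
    return [body.encode("utf-8")]

if __name__ == "__main__":
    make_server("", 8080, app).serve_forever()  # serve dialogs to the voice gateway
```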

A Proposal of Eye-Voice Method based on the Comparative Analysis of Malfunctions on Pointer Click in Gaze Interface for the Upper Limb Disabled (상지장애인을 위한 시선 인터페이스에서 포인터 실행 방법의 오작동 비교 분석을 통한 Eye-Voice 방식의 제안)

  • Park, Joo Hyun;Park, Mi Hyun;Lim, Soon-Bum
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.4
    • /
    • pp.566-573
    • /
    • 2020
  • Computers are the most common tool for using the Internet, with a mouse used to select and execute objects. Eye tracking technology is welcomed as an alternative for users who cannot use their hands because of a disability. However, the pointer execution methods of existing eye tracking techniques cause many malfunctions. Therefore, in this paper we developed a gaze tracking interface combined with voice commands to solve the malfunction problem that occurs when upper-limb-disabled users execute computer menus and objects with existing gaze tracking technology. Usability was verified through comparative experiments on the reduction of malfunctions. Upper-limb-disabled users who cannot use their hands move the pointer with eye tracking and issue voice commands such as "okay" while browsing the screen to click instantly. Comparative experiments against existing gaze interfaces verified that our system, Eye-Voice, reduces the malfunction rate of pointer execution and is effective for upper-limb-disabled users.
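
The Eye-Voice method separates pointer movement (gaze) from pointer execution (a spoken "okay"). A minimal event-loop sketch of that split, with hypothetical `next_gaze_point()`, `heard_okay()`, and `click_at()` hooks standing in for the eye tracker, the speech recognizer, and the OS click API:

```python
import time

def next_gaze_point():
    """Hypothetical eye-tracker hook: latest gaze position in screen pixels."""
    raise NotImplementedError

def heard_okay() -> bool:
    """Hypothetical speech-recognition hook: True if 'okay' was just spoken."""
    raise NotImplementedError

def click_at(x: int, y: int) -> None:
    """Hypothetical OS hook that performs a mouse click at (x, y)."""
    raise NotImplementedError

def eye_voice_loop(poll_hz: float = 30.0) -> None:
    """Move the pointer with gaze; execute a click only on the voice command.

    Keeping execution out of the gaze channel is what avoids the dwell-time
    and blink misfires that the paper's comparison targets suffer from.
    """
    while True:
        x, y = next_gaze_point()      # pointer follows the eyes continuously
        if heard_okay():              # execution happens only on explicit voice
            click_at(x, y)
        time.sleep(1.0 / poll_hz)
```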

Trends and Implications of Digital Transformation in Vehicle Experience and Audio User Interface (차내 경험의 디지털 트랜스포메이션과 오디오 기반 인터페이스의 동향 및 시사점)

  • Kim, Kihyun;Kwon, Seong-Geun
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.2
    • /
    • pp.166-175
    • /
    • 2022
  • Digital transformation is driving many changes in daily life and industry, and the automobile industry is no exception. In some cases, element technologies from the so-called metaverse are being adopted, such as 3D-animated digital cockpits, around-view monitors, and voice AI. With the growth of the mobile market, the norm of human-computer interaction (HCI) has evolved from keyboard-and-mouse interaction to the touch screen. The core area has been the graphical user interface (GUI), and recently the audio user interface (AUI) has partially replaced the GUI. Because it is easy to access and intuitive to the user, the AUI is quickly becoming a common part of the in-vehicle experience (IVE) in particular. The benefits of an AUI include freeing the driver's eyes and hands, requiring fewer screens, lowering interaction cost, being more emotional and personal, and being effective for people with low vision. Nevertheless, deciding when and where to apply a GUI or an AUI calls for different approaches, because some information is easier to process visually, while in other cases the AUI may be more suitable. This study proposes actively applying the AUI in the near future, based on the contexts of the various scenes that occur in the vehicle, in order to improve the IVE.

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.2 no.4
    • /
    • pp.285-297
    • /
    • 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. This work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through spoken words and hand pointing. The system utilizes another stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position to which the user is referring; it then uses voice commands from the user to fine-tune the location and to change the camera zoom if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in teleconferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must otherwise utilize to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
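
The control flow described is two-stage: hand pointing gives a coarse pan/tilt target, and spoken commands then nudge and zoom it. A minimal sketch of that fusion, with hypothetical hooks for gesture estimation, speech commands, and the camera driver (the command vocabulary and step size are illustrative assumptions):

```python
# Hypothetical hooks; the paper's image and speech processing would fill these in.
def pointing_direction():
    """Coarse (pan, tilt) in degrees estimated from the hand-pointing image."""
    raise NotImplementedError

def next_voice_command() -> str:
    """One recognized word, e.g. 'left', 'right', 'up', 'down', 'zoom', 'stop'."""
    raise NotImplementedError

def drive_camera(pan: float, tilt: float, zoom: float) -> None:
    """Hypothetical pan/tilt/zoom camera driver."""
    raise NotImplementedError

STEP_DEG = 2.0  # fine-adjustment step per voice command (illustrative)

def point_camera() -> None:
    pan, tilt = pointing_direction()      # stage 1: coarse target from gesture
    zoom = 1.0
    drive_camera(pan, tilt, zoom)
    while True:                           # stage 2: fine-tuning by voice
        cmd = next_voice_command()
        if cmd == "left":
            pan -= STEP_DEG
        elif cmd == "right":
            pan += STEP_DEG
        elif cmd == "up":
            tilt += STEP_DEG
        elif cmd == "down":
            tilt -= STEP_DEG
        elif cmd == "zoom":
            zoom *= 1.5
        elif cmd == "stop":
            break
        drive_camera(pan, tilt, zoom)
```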

Voice Portal based on SMS Authentication at CTI Module Implementation by Speech Recognition (SMS 인증 기반의 보이스포탈에서의 음성인식을 위한 CTI 모듈 구현)

  • Oh, Se-Il;Kim, Bong-Hyun;Koh, Jin-Hwan;Park, Won-Tea
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2001.04b
    • /
    • pp.1177-1180
    • /
    • 2001
  • Voice Portal services, which let users listen to Internet information over the telephone, are gaining popularity. In a Voice Portal service, the user speaks a request to a speech recognition system and hears the desired information as voice over the phone. Such a service requires an SMS (Short Message Service) server module that performs the authentication procedure, a CTI (Computer Telephony Integration) module that provides the interface between the PSTN and the database server, a VoiceXML module between the CTI server and the WWW (World Wide Web), and a searching module for retrieving information. This paper implements the design of the CTI module based on speech recognition technology. In addition, by adopting SMS authentication based on a random one-time password as the authentication method, it aims to provide an even more secure and stable service.
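
The authentication step here is a random one-time password delivered by SMS and then given back by the caller over the phone. A minimal sketch of issuing and verifying such a password, with a hypothetical `send_sms` hook and an assumed expiry window (the paper does not specify these details):

```python
import secrets
import time

OTP_TTL_SECONDS = 180          # assumed validity window for an issued password
_pending = {}                  # phone number -> (otp, issue_time)

def send_sms(phone: str, text: str) -> None:
    """Hypothetical SMS-server hook used to deliver the one-time password."""
    raise NotImplementedError

def issue_otp(phone: str) -> None:
    """Generate a random 6-digit one-time password and send it by SMS."""
    otp = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone] = (otp, time.time())
    send_sms(phone, f"Voice Portal code: {otp}")

def verify_otp(phone: str, given_digits: str) -> bool:
    """Check the digits the caller gave (by voice or keypad) against the OTP."""
    entry = _pending.pop(phone, None)
    if entry is None:
        return False
    otp, issued = entry
    return given_digits == otp and (time.time() - issued) <= OTP_TTL_SECONDS
```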

Moderating Effects of User Gender and AI Voice on the Emotional Satisfaction of Users When Interacting with a Voice User Interface (음성 인터페이스와의 상호작용에서 AI 음성이 성별에 따른 사용자의 감성 만족도에 미치는 영향)

  • Shin, Jong-Gyu;Kang, Jun-Mo;Park, Yeong-Jin;Kim, Sang-Ho
    • Science of Emotion and Sensibility
    • /
    • v.25 no.3
    • /
    • pp.127-134
    • /
    • 2022
  • This study sought to identify the voice user interface (VUI) design parameters that evoke positive user emotions. Six VUI design parameters that could affect emotional user satisfaction were considered. The moderating effects of user gender and the design parameters were analyzed to determine the appropriate conditions for user satisfaction when interacting with the VUI. An interactive VUI system that could modify the six parameters was implemented using the Wizard of Oz experimental method. User emotions were assessed from the users' facial expression data, which were then converted into a valence score. Frequency analysis and a chi-square test found statistically significant moderating effects of gender and AI voice. These results imply that it is beneficial to consider the user's gender when designing voice-based interactions. Adult/male/high-tone voices for male users and adult/female/mid-tone voices for female users are recommended as general guidelines for future VUI designs. Future analyses that consider various human factors will be able to assess human-AI interactions in finer detail from a UX perspective.
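
The reported analysis, a frequency analysis plus a chi-square test of satisfaction counts across conditions, can be reproduced on any contingency table of condition by satisfied/unsatisfied counts. A minimal SciPy sketch with made-up counts (the numbers and condition labels are placeholders, not the study's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder contingency table: rows are AI-voice conditions, columns are
# counts of trials judged satisfied / unsatisfied from the valence scores.
observed = np.array([
    [34, 16],   # e.g. adult / male / high-tone voice (illustrative)
    [22, 28],   # e.g. adult / female / mid-tone voice (illustrative)
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value indicates that satisfaction counts differ across conditions,
# i.e. the kind of moderating effect the study reports.
```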

An Experimental Study on Barging-In Effects for Speech Recognition Using Three Telephone Interface Boards

  • Park, Sung-Joon;Kim, Ho-Kyoung;Koo, Myoung-Wan
    • Speech Sciences
    • /
    • v.8 no.1
    • /
    • pp.159-165
    • /
    • 2001
  • In this paper, we conduct an experiment on speech recognition systems with barging-in and non-barging-in utterances. Barging-in capability, which lets the user speak voice commands while a voice announcement is still playing, is one of the important elements of a practical speech recognition system. It can be realized by echo cancellation techniques based on the LMS (least-mean-square) algorithm. We use three kinds of telephone interface boards with barging-in capability, made by Dialogic, Natural MicroSystems, and Korea Telecom, respectively. A speech database was collected using these three boards, and we perform a comparative recognition experiment on it.
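
Barge-in rests on an LMS adaptive filter that subtracts the echo of the outgoing prompt from the telephone line so the recognizer hears mainly the caller. A minimal NumPy sketch of such a filter, on synthetic signals with an assumed filter length and step size (the boards above implement this in hardware/firmware; this is only an illustration of the algorithm):

```python
import numpy as np

def lms_echo_cancel(far_end, mic, taps=64, mu=0.01):
    """Cancel the far-end (prompt) echo from the line signal.

    far_end: the announcement being played out (reference signal)
    mic:     line signal = echo of the announcement + the caller's barge-in
    Returns the error signal, i.e. the estimate of the caller's speech.
    """
    w = np.zeros(taps)                 # adaptive filter weights
    err = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        x = far_end[n - taps:n][::-1]  # most recent reference samples
        echo_hat = w @ x               # current echo estimate
        err[n] = mic[n] - echo_hat     # what remains should be the caller
        w += mu * err[n] * x           # LMS weight update
    return err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prompt = rng.standard_normal(8000)                # announcement playing out
    echo_path = np.array([0.6, 0.0, 0.3, 0.1])        # toy line echo path
    echo = np.convolve(prompt, echo_path)[:8000]
    caller = np.zeros(8000)
    caller[4000:] = 0.5 * rng.standard_normal(4000)   # caller barges in halfway
    recovered = lms_echo_cancel(prompt, echo + caller)
    # After convergence the residual echo is small and the barge-in remains.
```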

Synthetic Speech Quality Improvement By Glottal parameter Interpolation - Preliminary study on open quotient interpolation in the speech corpus - (성대특성 보간에 의한 합성음의 음질향상 - 음성코퍼스 내 개구간 비 보간을 위한 기초연구 -)

  • Bae, Jae-Hyun;Oh, Yung-Hwa
    • Proceedings of the KSPS conference
    • /
    • 2005.11a
    • /
    • pp.63-66
    • /
    • 2005
  • For large-corpus-based TTS, the consistency of the speech corpus is very important, because inconsistency in speech quality within the corpus may result in distortion at the concatenation points; because of this inconsistency, a large corpus must be tuned repeatedly. One of the causes of such inconsistency is the differing glottal characteristics of the sentences in the corpus. In this paper, we adjust the glottal characteristics of the speech in the corpus to prevent this distortion, and experimental results are presented.
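
The core idea, smoothing the open quotient (OQ) across a concatenation boundary so that the glottal characteristics of the two units match, can be sketched as a linear interpolation of per-frame OQ values around the join. The blend window and the example OQ values below are hypothetical; the paper works on OQ tracks measured from a real speech corpus:

```python
import numpy as np

def interpolate_oq(oq_left, oq_right, blend_frames=10):
    """Blend per-frame open-quotient tracks across a concatenation point.

    oq_left, oq_right: OQ values (0..1) for the frames before/after the join.
    The last/first `blend_frames` frames are ramped toward a common target
    so the glottal characteristics change gradually instead of jumping.
    """
    oq_left = np.asarray(oq_left, dtype=float).copy()
    oq_right = np.asarray(oq_right, dtype=float).copy()
    target = 0.5 * (oq_left[-1] + oq_right[0])     # meet in the middle
    ramp = np.linspace(0.0, 1.0, blend_frames)
    oq_left[-blend_frames:] += ramp * (target - oq_left[-blend_frames:])
    oq_right[:blend_frames] += ramp[::-1] * (target - oq_right[:blend_frames])
    return oq_left, oq_right

# Example: a breathier unit (high OQ) joined to a more pressed one (low OQ).
left, right = interpolate_oq(np.full(30, 0.68), np.full(30, 0.52))
```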
