• Title/Abstract/Keyword: automatic voice system

Search results: 81

A study on the Smart Door System For Single Households (1인 가구를 위한 스마트 도어 시스템에 대한 연구)

  • Kim, Donghyeon;Park, Yeeun;Moon, Juhyuk;Im, Yunkyung;Ko, Dongbeom;Kim, Jungjoon;Park, Jeongmin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.5 / pp.267-274 / 2018
  • This paper introduces a smart door system composed of a security system and a secretary system. As the ratio of single households increases, household security has become more important. Smart-home technology already offers many voice-based artificial-intelligence secretary systems, but they have limits: they cannot act without a user's request (they are not automatic), and their voice recognition depends on the user's pronunciation. In this paper, we therefore design and develop a smart door system with added security and secretary functions. It can inform users in real time that an outsider is in front of their house, and it can speak information such as the user's requirements, delivery status, and weather using TTS. As a result, users can prevent crimes and use a convenient secretary system.
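The secretary side of such a system boils down to composing an event-specific message and handing it to a TTS engine. A minimal sketch, with invented function names and messages (the paper does not publish its implementation), might look like:

```python
# Hypothetical announcement logic for the smart door's secretary function.
# A real system would pass the message to a TTS engine instead of printing it.

def build_announcement(event, info=""):
    """Compose the sentence the smart door speaks for a given event."""
    if event == "visitor":
        return "There is an outsider in front of your house."
    if event == "delivery":
        return f"A delivery has arrived: {info}."
    if event == "weather":
        return f"Today's weather: {info}."
    return "Unknown event."

def notify(event, info="", tts=print):
    """Send the announcement to a TTS backend (print is a stand-in here)."""
    message = build_announcement(event, info)
    tts(message)  # a real system would invoke a TTS engine at this point
    return message
```

The real-time visitor alert would call `notify("visitor")` from the door's sensor callback.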

Verification of Automatic PAR Control System using DEVS Formalism (DEVS 형식론을 이용한 공항 PAR 관제 시스템 자동화 방안 검증)

  • Sung, Chang-ho;Koo, Jung;Kim, Tag-Gon;Kim, Ki-Hyung
    • Journal of the Korea Society for Simulation / v.21 no.3 / pp.1-9 / 2012
  • This paper proposes an automatic precision approach radar (PAR) control system using digital signals to increase the safety of aircraft, and the discrete event systems specification (DEVS) methodology is utilized to verify the proposed system. Traditionally, a landing aircraft is controlled by the human voice of a final approach controller. However, voice information can be lost during transmission, and pilots may also act improperly because of unclear auditory signals. The proposed system enables stable operation of the aircraft regardless of the pilot's capability. Communicating DEVS (C-DEVS) is used to analyze and verify the behavior of the proposed system. A composed C-DEVS atomic model contains the overall composed discrete state sets of the models, and the state sequence acquired through a full state search is utilized to verify the safeness and liveness of the system behavior. The C-DEVS model of the proposed system shows the same behavior as the traditional PAR control system.
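The verification idea above (enumerate every reachable composed state, then check safety and liveness properties on that set) can be sketched with a toy state machine. The states and transitions below are invented for illustration; they are not the paper's actual PAR model.

```python
# Full state search over a toy approach-control model: BFS enumerates every
# reachable state, then safety ("no bad state reachable") and liveness
# ("a goal state is reachable") are checked on the result.
from collections import deque

TRANSITIONS = {
    "approach":      ["on_glidepath", "off_glidepath"],
    "on_glidepath":  ["landed", "off_glidepath"],
    "off_glidepath": ["on_glidepath", "go_around"],
    "go_around":     ["approach"],
    "landed":        [],
}

def reachable_states(start):
    """Breadth-first full state search, as in the C-DEVS analysis."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for nxt in TRANSITIONS.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

states = reachable_states("approach")
safe = "crash" not in states   # safety: the bad state never appears
live = "landed" in states      # liveness: landing remains reachable
```

A real C-DEVS model also carries timing and input/output events on each transition; this sketch keeps only the state-reachability core.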

Development of an Embedded System for Ship's Steering Gear using Voice Recognition Module (음성인식모듈을 이용한 선박조타용 임베디드 시스템 개발)

  • Park, Gyei-Kark;Seo, Ki-Yeol;Hong, Tae-Ho
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.5 / pp.604-609 / 2004
  • Recently, various studies have been conducted on automatic control systems for small ships, in order to improve maneuvering and to reduce labor on board. Automation techniques have developed rapidly toward efficient operation of small ships, but ship handling has become more complicated because of the need to operate various gauges and instruments. To solve these problems, speech information processing, one of the human-interface methods, has been applied to the operation of ship systems, but no complete system has yet been implemented. The purpose of this paper is therefore to implement a control system for ship steering using a voice recognition module.
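The core of such a steering interface is a table mapping recognized command words to rudder orders. The command names and angles below are illustrative assumptions, not values from the paper:

```python
# Hypothetical command table for a voice-controlled steering application:
# the recognition module outputs a word, which is mapped to a rudder order.

COMMANDS = {
    "port":      -10,  # turn left by 10 degrees
    "starboard": +10,  # turn right by 10 degrees
    "midships":    0,  # center the rudder
}

def steer(recognized_word, current_angle=0):
    """Return the new rudder angle for one recognized command word."""
    if recognized_word not in COMMANDS:
        return current_angle            # ignore unrecognized input safely
    if recognized_word == "midships":
        return 0                        # absolute order: center the rudder
    return current_angle + COMMANDS[recognized_word]
```

Ignoring out-of-vocabulary words, rather than guessing, is the safe default for a control surface.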

Subtitle Automatic Generation System using Speech to Text (음성인식을 이용한 자막 자동생성 시스템)

  • Son, Won-Seob;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.1 / pp.81-88 / 2021
  • Recently, many videos, such as the online lecture videos prompted by COVID-19, have been produced. However, owing to limited working hours and cost, only some of these videos have subtitles, which is emerging as an obstacle to information access for deaf viewers. In this paper, we develop a system that automatically generates subtitles using voice recognition, separating sentences by their endings and timing, to reduce the time and labor required for subtitle generation.
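The sentence-separation step can be sketched as follows, assuming (this is an assumption, not the paper's format) that the STT engine emits `(word, start_seconds)` pairs and that a cue closes when a word ends with sentence-final punctuation or a common Korean ending:

```python
# Sketch: group timed STT words into (start, end, text) subtitle cues,
# closing a cue at sentence endings. The ending list is illustrative.

SENTENCE_ENDINGS = (".", "?", "!", "다", "요")  # punctuation / common Korean endings

def make_subtitles(words):
    """words: list of (word, start_seconds) -> list of (start, end, text)."""
    cues, current, start = [], [], None
    for word, t in words:
        if start is None:
            start = t                   # first word opens a new cue
        current.append(word)
        if word.endswith(SENTENCE_ENDINGS):
            cues.append((start, t, " ".join(current)))
            current, start = [], None
    if current:                         # flush a trailing unfinished sentence
        cues.append((start, words[-1][1], " ".join(current)))
    return cues
```

A production system would also cap cue length and duration so long sentences wrap across multiple subtitles.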

Classification of Pathological Voice from ARS using Neural Network (신경회로망을 이용한 ARS 장애음성의 식별에 관한 연구)

  • Jo, C.W.;Kim, K.I.;Kim, D.H.;Kwon, S.B.;Kim, K.R.;Kim, Y.J.;Jun, K.R.;Wang, S.G.
    • Speech Sciences / v.8 no.2 / pp.61-71 / 2001
  • Speech material collected from an ARS (Automatic Response System) was analyzed and classified into disease and non-disease states. The material includes 11 different kinds of diseases. Along with the ARS speech, DAT (Digital Audio Tape) speech was collected in parallel as a benchmark. Analysis tools developed in our laboratory were used to obtain improved and robust parameters from the speech material. A multi-layered neural network was used to classify the speech into disease and non-disease classes. Three different combinations of 3, 6, and 12 parameters were tested to determine the proper network size and find the best performance. The experiment yielded a classification rate of 92.5%.
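The classifier structure (a parameter vector fed through a multi-layered network to a binary disease/non-disease decision) can be sketched in a few lines. The weights below are placeholders; the paper trains its network on the ARS material, and the layer sizes here are assumptions:

```python
# Minimal feed-forward sketch of a multi-layer binary classifier:
# acoustic parameters -> sigmoid hidden layer -> sigmoid output -> decision.
import math

def neuron(inputs, weights, bias):
    """One sigmoid unit."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def classify(params, hidden_w, hidden_b, out_w, out_b, threshold=0.5):
    """Forward pass over one hidden layer; returns (label, score)."""
    hidden = [neuron(params, w, b) for w, b in zip(hidden_w, hidden_b)]
    score = neuron(hidden, out_w, out_b)
    return ("disease" if score > threshold else "non-disease"), score
```

Testing input vectors of 3, 6, or 12 parameters, as the paper does, only changes the width of `params` and `hidden_w`.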


Design of Metaverse for Two-Way Video Conferencing Platform Based on Virtual Reality

  • Yoon, Dongeon;Oh, Amsuk
    • Journal of information and communication convergence engineering / v.20 no.3 / pp.189-194 / 2022
  • As non-face-to-face activities have become commonplace, online video conferencing platforms have become popular collaboration tools. However, existing video conferencing platforms have a structure in which one side unilaterally delivers information, which can increase the fatigue of meeting participants. In this study, we designed a video conferencing platform utilizing virtual reality (VR), a metaverse technology, to enable various interactions. A virtual conferencing space and a realistic VR video conferencing content authoring tool support system were designed using Meta's Oculus Quest 2 hardware, the Unity engine, and 3D Max software. With the Photon software development kit, voice recognition was designed to perform automatic text translation with the Watson application programming interface, allowing online video conferencing participants to communicate smoothly even when using different languages. The proposed video conferencing platform is expected to enable conference participants to interact and improve their work efficiency.
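The cross-language flow described above (recognize an utterance, translate it per participant, then deliver it) can be sketched with stub backends. The function names and the translation table below are stand-ins for the Photon SDK and the Watson API, not their real interfaces:

```python
# Sketch of the per-participant translation relay. stub_translate stands in
# for a Watson translation call; delivery to clients is simply a returned dict.

def stub_translate(text, target_lang):
    """Stand-in for a translation API call (tiny hard-coded table)."""
    table = {("안녕하세요", "en"): "Hello"}
    return table.get((text, target_lang), text)  # fall back to the original

def relay_utterance(recognized_text, participants, translate=stub_translate):
    """Translate one recognized utterance into each participant's language."""
    return {
        name: translate(recognized_text, lang)
        for name, lang in participants.items()
    }

subtitles = relay_utterance("안녕하세요", {"alice": "en", "bora": "ko"})
```

In the described design, the recognized text would arrive over the network layer and the translated strings would be rendered inside the VR conference space.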

Implementation of Automatic Test System for Voice Recognition (음성인식 자동시험장치 개발)

  • 김희경
    • Proceedings of the Acoustical Society of Korea Conference / 1998.06e / pp.219-222 / 1998
  • Voice recognition testing feeds the speech of various users into a recognizer and uses the results to evaluate system performance or to characterize the speech; it is an essential element for improving the quality of voice recognition services. The automatic voice recognition test system presented in this paper processes recognition results as DTMF signals, so that important information about the recognition technology, such as recognition rate and recognition speed, can be obtained quickly and accurately without human intervention. This paper describes the configuration and functions of the automatic test system, focusing on the voice recognition test of Korea Telecom's corporate voice dialing service.
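The scoring step of such a test rig reduces to comparing the DTMF code received for each test utterance against the expected code. A minimal sketch, with illustrative codes (the paper's actual signal format is not given here):

```python
# Sketch: compute the recognition rate from (expected, received) DTMF pairs,
# as an automatic test system would after playing back each test utterance.

def recognition_rate(results):
    """results: list of (expected_dtmf, received_dtmf) pairs -> rate in %."""
    if not results:
        return 0.0
    correct = sum(1 for expected, received in results if expected == received)
    return 100.0 * correct / len(results)

rate = recognition_rate([("12", "12"), ("34", "34"), ("56", "57"), ("78", "78")])
```

Recognition speed would be measured the same way, by timestamping each playback and its DTMF response.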


A Study on the Automatic Speech Control System Using DMS model on Real-Time Windows Environment (실시간 윈도우 환경에서 DMS모델을 이용한 자동 음성 제어 시스템에 관한 연구)

  • 이정기;남동선;양진우;김순협
    • The Journal of the Acoustical Society of Korea / v.19 no.3 / pp.51-56 / 2000
  • In this paper, we study an automatic speech control system that uses voice recognition in a real-time Windows environment. The reference pattern is the variable DMS model, proposed to increase execution speed, and the one-stage DP algorithm using this model is adopted as the recognition algorithm. The recognition vocabulary consists of control commands frequently used in the Windows environment. We implement an automatic speech-period detection algorithm for on-line voice processing under Windows. The proposed variable DMS model applies a variable number of sections according to the duration of the input signal. Because unnecessary recognition target words are sometimes generated, the model is reconstructed on-line to handle them efficiently. Perceptual Linear Predictive analysis is applied to generate feature vectors from the extracted speech features. Experimental results show that recognition is faster with the proposed model because of its small computational load. The multi-speaker-independent and multi-speaker-dependent recognition rates are 99.08% and 99.39%, respectively, and the recognition rate in a noisy environment is 96.25%.
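The matching core of such a recognizer aligns the input feature sequence against each vocabulary word's reference and picks the lowest accumulated distance. For brevity, the sketch below uses a plain DTW alignment over 1-D features rather than the paper's variable-section DMS models:

```python
# Sketch of template matching for command recognition: dynamic-programming
# alignment cost between feature sequences, minimized over the vocabulary.

def dtw_distance(seq, ref):
    """Accumulated alignment cost between two 1-D feature sequences."""
    INF = float("inf")
    n, m = len(seq), len(ref)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq[i - 1] - ref[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match / substitution
    return d[n][m]

def recognize(seq, models):
    """Pick the vocabulary word whose reference is closest to the input."""
    return min(models, key=lambda word: dtw_distance(seq, models[word]))
```

One-stage DP extends this idea by searching all word references in a single pass over connected speech instead of scoring isolated words.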


Automatic Detection of Korean Accentual Phrase Boundaries

  • Lee, Ki-Yeong;Song, Min-Suck
    • The Journal of the Acoustical Society of Korea / v.18 no.1E / pp.27-31 / 1999
  • Recent linguistic research has focused on the relations between prosodic structures and syntactic, semantic, or phonological structures. Most of this work shows that prosodic information is useful for understanding syntactic, semantic, and discourse structures, but these results have not yet been integrated into Korean speech recognition or understanding systems. As part of integrating prosodic information into the speech recognition system, this study proposes an automatic detection technique for Korean accentual phrase boundaries using one-stage DP and a normalized pitch pattern. For the normalized pitch pattern, a modified normalization method for spoken Korean is proposed. The experiment employs 192 sentential speech data from 12 male speakers of standard Korean, containing 720 accentual phrases; 74.4% of the accentual phrase boundaries are correctly detected, with a false detection rate of 14.7%.
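Pitch normalization is needed because raw F0 contours differ in level and range across speakers. The sketch below shows a simple mean/range normalization over voiced frames as an illustration; it is not the paper's modified method:

```python
# Sketch: rescale an F0 contour so contours from different speakers become
# comparable before template matching. Unvoiced frames are coded as F0 = 0.

def normalize_pitch(contour):
    """Map an F0 contour to zero-mean values scaled by its voiced range."""
    voiced = [f for f in contour if f > 0]      # skip unvoiced frames
    if not voiced:
        return [0.0] * len(contour)
    mean = sum(voiced) / len(voiced)
    span = (max(voiced) - min(voiced)) or 1.0   # avoid division by zero
    return [(f - mean) / span if f > 0 else 0.0 for f in contour]
```

The normalized contour can then be matched against boundary pitch-pattern templates with one-stage DP, as in the previous entry.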


Performance Evaluation of an Automatic Distance Speech Recognition System (원거리 음성명령어 인식시스템 설계)

  • Oh, Yoo-Rhee;Yoon, Jae-Sam;Park, Ji-Hoon;Kim, Min-A;Kim, Hong-Kook;Kong, Dong-Geon;Myung, Hyun;Bang, Seok-Won
    • Proceedings of the IEEK Conference / 2007.07a / pp.303-304 / 2007
  • In this paper, we implement an automatic distance speech recognition system for voice-enabled services. We first construct a baseline automatic speech recognition (ASR) system, in which the acoustic models are trained from speech utterances recorded with a cross-talking microphone. To improve the baseline's performance on distance speech, the acoustic models are adapted to adjust for the spectral characteristics of different microphones and the environmental mismatches between cross-talking and distance speech. We then develop a voice activity detection algorithm for distance speech. We compare the performance of the baseline and the developed ASR system on a PBW (Phonetically Balanced Word) 452 task. The results show that the developed ASR system provides an average word error rate (WER) reduction of 30.6% compared to the baseline.
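The WER figures reported above come from the standard Levenshtein alignment between recognized and reference word sequences, normalized by the reference length. A minimal sketch:

```python
# Word error rate: edit distance (substitutions + insertions + deletions)
# between hypothesis and reference word lists, divided by reference length.

def wer(reference, hypothesis):
    """Word error rate in % between two word lists."""
    n, m = len(reference), len(hypothesis)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                    # delete all reference words
    for j in range(m + 1):
        d[0][j] = j                    # insert all hypothesis words
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + sub)   # match / substitution
    return 100.0 * d[n][m] / max(n, 1)
```

A "30.6% WER reduction" then means the adapted system's WER is 30.6% lower, relative to the baseline's WER on the same test set.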
