• Title/Abstract/Keywords: Image to Speech

Search results: 188 items

Implementation of CNN in the View of Mini-batch DNN Training for Efficient Second-order Optimization

  • 송화전;정호영;박전규
    • Phonetics and Speech Sciences / Volume 8, Issue 2 / pp. 23-30 / 2016
  • This paper describes implementation schemes for CNNs from the viewpoint of mini-batch DNN training, aimed at efficient second-order optimization. By simply arranging an input image as a sequence of local patches, the parameters of a CNN can be trained with the same update procedure used for a DNN, which is in fact equivalent to mini-batch DNN training. Through this conversion, second-order optimization, which provides higher performance, can be applied directly to training the CNN parameters. In both image recognition on the MNIST DB and syllable-level automatic speech recognition, the proposed CNN implementation scheme outperforms a DNN-based one.
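
The patch arrangement described above is essentially the familiar im2col trick: each local patch becomes one row of a matrix, so a convolution layer reduces to the same affine map used in a DNN layer. A minimal NumPy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def im2col(image, k):
    """Arrange all k x k local patches of a 2-D image as rows of a matrix."""
    h, w = image.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(image[i:i+k, j:j+k].ravel())
    return np.stack(rows)                    # shape: (num_patches, k*k)

# One convolution layer expressed as a DNN-style affine map over patches:
rng = np.random.default_rng(0)
image = rng.standard_normal((28, 28))        # e.g., one MNIST digit
W = rng.standard_normal((9, 4))              # 3x3 kernel weights, 4 output maps
patches = im2col(image, 3)                   # (676, 9): a "mini-batch" of patches
activations = patches @ W                    # same matrix form as a DNN layer
print(activations.shape)                     # (676, 4) -> reshape to 26x26x4 maps
```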

Multimodal Emotion Recognition Using Face Image and Speech

  • 이현구;김동주
    • Journal of the Korea Society of Digital Industry and Information Management / Volume 8, Issue 1 / pp. 29-40 / 2012
  • A challenging research issue of growing importance to those working in human-computer interaction is endowing a machine with emotional intelligence. Emotion recognition technology thus plays an important role in human-computer interaction research, as it allows more natural, human-like communication between human and computer. In this paper, we propose a multimodal emotion recognition system that uses face and speech to improve recognition performance. For face-based emotion recognition, a distance measure is computed from 2D-PCA of the MCS-LBP image with a nearest-neighbor classifier; for speech-based emotion recognition, a likelihood measure is obtained from a Gaussian mixture model based on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined by weighted summation, and the fused score is used to classify the emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared with the uni-modal approaches, confirming that it achieves a significant performance improvement and is very effective.
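
The fusion step itself is a plain weighted sum of per-class matching scores. A minimal sketch, assuming min-max normalization to make the two modalities comparable (the weight value and score ranges are placeholders, not the paper's):

```python
import numpy as np

def minmax(s):
    """Map raw matching scores to [0, 1] so the two modalities are comparable."""
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(face_scores, speech_scores, w=0.6):
    """Weighted-summation fusion; the highest fused score wins."""
    f = minmax(np.asarray(face_scores, dtype=float))
    s = minmax(np.asarray(speech_scores, dtype=float))
    return w * f + (1.0 - w) * s

# Per-emotion scores for one test sample (e.g., neutral/happy/sad/angry):
face   = [0.9, 2.1, 0.4, 1.2]    # similarity from 2D-PCA + nearest neighbor
speech = [0.2, 0.7, 0.1, 0.5]    # GMM likelihoods from pitch/MFCC features
fused = fuse_scores(face, speech, w=0.6)
print(int(np.argmax(fused)))     # index of the predicted emotion class
```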

The Effects of Mental Image Drawing on Left Neglect: A Case Study

  • 김하경;황영진;정옥란
    • Speech Sciences / Volume 12, Issue 3 / pp. 91-102 / 2005
  • This study investigated the effects of mental image drawing in RHD patients with left neglect. Three subjects participated; all were right-handed native speakers of Korean. In task 1, the patients were presented with a visual stimulus card directly above the response sheet and asked to copy the picture. In task 2, items were presented auditorily and the patients were asked to draw the picture from memory (mental image drawing). There was no response time limit in any condition. In task 1, the subjects showed left neglect, leaving space on the left side of the page, and drew the pictures from right to left. In task 2, however, the neglect disappeared and the pictures were drawn from left to right. These results suggest that the mental image drawing technique can be effective in treating individuals who exhibit left neglect, and that the drawing direction in task 2 matched that of normal Korean adults.


Application of Speech Recognition with Closed Caption for Content-Based Video Segmentation

  • Son, Jong-Mok;Bae, Keun-Sung
    • Speech Sciences / Volume 12, Issue 1 / pp. 135-142 / 2005
  • An important aspect of video indexing is the ability to divide video into meaningful segments, i.e., content-based video segmentation. Since the audio in the sound track is synchronized with the image sequences of the video program, the speech signal in the sound track can be used to segment video into meaningful units. In this paper, we propose a new approach to content-based video segmentation that uses closed captions to construct a recognition network for speech recognition; accurate time information for segmentation is then obtained from the recognition process. In a segmentation experiment on TV news programs, we successfully produced 56 video summaries from 57 TV news stories, demonstrating that the proposed scheme is very promising for content-based video segmentation.
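
The segmentation logic amounts to mapping each caption story onto the time axis using word timestamps produced by the recognizer. A toy sketch under that assumption (the recognizer output format here is hypothetical):

```python
def segment_video(word_times, story_first_words):
    """Cut a program into stories using the start times of recognized words.

    word_times: list of (word, start_seconds) from a caption-constrained
    recognizer; story_first_words: first word of each caption story.
    """
    boundaries = []
    it = iter(word_times)
    for first in story_first_words:
        for word, t in it:
            if word == first:
                boundaries.append(t)
                break
    # Each story spans from its first word to the start of the next story.
    return list(zip(boundaries, boundaries[1:] + [None]))

hyp = [("tonight", 0.0), ("the", 1.2), ("economy", 1.5),
       ("next", 62.3), ("sports", 63.0)]
print(segment_video(hyp, ["tonight", "next"]))  # [(0.0, 62.3), (62.3, None)]
```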


Text to Speech System from Web Images

  • 안희임;정기철
    • Proceedings of the IEEK Conference / 2001 IEEK Summer Conference (3) / pp. 5-8 / 2001
  • Computer programs based on graphical user interfaces (GUIs) became commonplace with the advance of computer technology. Nevertheless, programs for the visually handicapped have remained at the level of TTS (text-to-speech) programs, which keeps many visually handicapped users from enjoying the convenience of the information age. Paying attention to the importance of character recognition in images, this paper describes the configuration of a system that converts the text in an image selected by the user into speech by extracting the character regions and carrying out character recognition.
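
A present-day sketch of the same pipeline, using pytesseract for the character recognition stage and pyttsx3 for the speech stage (both are stand-ins chosen here for illustration; the paper predates these libraries):

```python
# pip install pillow pytesseract pyttsx3  (the Tesseract binary must be installed)
from PIL import Image
import pytesseract
import pyttsx3

def speak_image_text(image_path):
    """Extract the character regions of an image via OCR and read them aloud."""
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()          # blocks until the text has been spoken
    return text

print(speak_image_text("web_banner.png"))   # hypothetical image file
```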


Subword-based Lip Reading Using State-tied HMM

  • 김진영;신도성
    • Speech Sciences / Volume 8, Issue 3 / pp. 123-132 / 2001
  • In recent years, research on HCI technology has been very active, with speech recognition as its typical method. Recognition performance, however, deteriorates as surrounding noise increases. To solve this problem, studies on multimodal HCI are being briskly pursued. This paper describes automated lip reading for bimodal speech recognition based on image and speech information. It employs an audio-visual DB containing 1,074 words from 70 speakers, tri-visemes as the recognition unit, and state-tied HMMs as the recognition model. Automated recognition performance was evaluated on vocabularies of 22 to 1,000 words, achieving a word recognition rate of 60.5% with the 22-word recognizer.
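
A tri-viseme is the visual analogue of a triphone: each viseme is modeled together with its left and right neighbors. A sketch of building such context-dependent labels (the phone-to-viseme table is a toy assumption, not the paper's):

```python
# Toy phone-to-viseme mapping; real tables group many phones per mouth shape.
PHONE_TO_VISEME = {"p": "V_bilabial", "b": "V_bilabial", "m": "V_bilabial",
                   "a": "V_open", "i": "V_spread", "s": "V_fricative"}

def tri_visemes(phones):
    """Convert a phone sequence into context-dependent tri-viseme labels."""
    v = ["sil"] + [PHONE_TO_VISEME[p] for p in phones] + ["sil"]
    return [f"{v[i-1]}-{v[i]}+{v[i+1]}" for i in range(1, len(v) - 1)]

print(tri_visemes(["b", "a", "s"]))
# ['sil-V_bilabial+V_open', 'V_bilabial-V_open+V_fricative', 'V_open-V_fricative+sil']
```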


Image Data Compression Using Laplacian Pyramid Processing and Vector Quantization

  • 박광훈;차일환;윤대희
    • Proceedings of the KIEE Conference / 1987 KIEE Electrical and Electronic Engineering Conference (II) / pp. 1347-1351 / 1987
  • This thesis studies Laplacian pyramid vector quantization, which keeps the compression algorithm simple while remaining stable across various kinds of image data. To this end, images are divided into two groups according to their statistical characteristics. At 0.860 bits/pixel and 0.360 bits/pixel respectively, Laplacian pyramid vector quantization is compared with conventional spatial-domain vector quantization and transform coding under the same conditions, in both objective and subjective terms. Laplacian pyramid vector quantization proves much more stable with respect to the statistical characteristics of the images than the existing vector quantization and transform coding.
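
The encoder builds a Laplacian pyramid (each level is the difference between the image and an upsampled blur of its downsampled version) and vector-quantizes each band. A rough SciPy sketch (block size, codebook size, and blur kernel are assumptions, not the thesis settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from scipy.cluster.vq import kmeans2

def laplacian_pyramid(img, levels=3):
    """Split an image into band-pass levels plus a low-pass residual."""
    bands = []
    for _ in range(levels):
        low = zoom(gaussian_filter(img, 1.0), 0.5)       # blur + downsample
        up = zoom(low, 2.0)[:img.shape[0], :img.shape[1]]
        bands.append(img - up)                           # band-pass detail
        img = low
    bands.append(img)                                    # low-pass top level
    return bands

def vq_band(band, k=4, codebook_size=64):
    """Vector-quantize a band using k x k blocks and a k-means codebook."""
    h, w = band.shape
    blocks = band[:h//k*k, :w//k*k].reshape(h//k, k, w//k, k)
    vecs = blocks.transpose(0, 2, 1, 3).reshape(-1, k*k)
    codebook, labels = kmeans2(vecs, codebook_size, seed=0)
    return codebook[labels]          # reconstructed block vectors

img = np.random.default_rng(0).random((64, 64))
bands = laplacian_pyramid(img)
print([b.shape for b in bands])      # [(64, 64), (32, 32), (16, 16), (8, 8)]
print(vq_band(bands[0]).shape)       # (256, 16) quantized 4x4 blocks
```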


Speech Sound and Personality Impression

  • 이은영;유혜옥
    • Phonetics and Speech Sciences / Volume 9, Issue 4 / pp. 59-67 / 2017
  • Regardless of their intention, listeners tend to assess speakers' personalities from the sound of the speech they hear. The assessment criteria, however, have not been fully investigated, so it remains unclear whether there is any relationship between the acoustic cues of produced speech and the perceived personality impression. Properly investigated, this relationship would provide crucial insights into human communication and, further, into human-computer interaction. Since human communication is simultaneous and complex, such an investigation must identify the minimum essential factors linking speech sounds and perceived personality impression. The purpose of this study, therefore, is to identify significant associations between speech sounds and the personality impressions of speakers as perceived by listeners. Twenty-eight subjects participated in the experiment, and eight acoustic parameters were extracted from the recorded speech using Praat. The subjects also completed the NEO Five-Factor Inventory so that their personality traits could be measured. The results show that four major factors (average duration, pitch difference value, average pitch, and average intensity) play crucial roles in defining the significant relationship.
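
The four reported measures can be reproduced with the Praat wrapper parselmouth and correlated against NEO-FFI trait scores. A hedged sketch (the audio file names and the trait-score file are placeholders, not the study's data):

```python
# pip install praat-parselmouth
import numpy as np
import parselmouth

def acoustic_features(wav_path):
    """Duration, mean pitch, pitch range, and mean intensity of one recording."""
    snd = parselmouth.Sound(wav_path)
    f0 = snd.to_pitch().selected_array["frequency"]
    f0 = f0[f0 > 0]                                   # drop unvoiced frames
    intensity = snd.to_intensity().values.ravel()
    return np.array([snd.duration,
                     f0.mean(), f0.max() - f0.min(), intensity.mean()])

# Hypothetical experiment: one feature vector per subject vs. one trait score.
feats = np.stack([acoustic_features(f"subject_{i:02d}.wav") for i in range(28)])
extraversion = np.loadtxt("neo_ffi_extraversion.txt")  # placeholder scores
for name, col in zip(["duration", "pitch mean", "pitch range", "intensity"],
                     feats.T):
    print(name, np.corrcoef(col, extraversion)[0, 1])
```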

A Study on Lip-reading Enhancement Using a RASTA Filter

  • 신도성;김진영;최승호;김상훈
    • Proceedings of the KSPS Conference / November 2002 Conference / pp. 191-194 / 2002
  • Lip-reading technology is being studied to compensate, in a bimodal framework, for the degradation of speech recognition in noisy environments. The most important step in lip reading is locating the correct lip area, but stable performance is hard to guarantee in dynamic environments. To compensate, we used a RASTA filter, which performs well at removing noise from speech; applied as a digital filter in the time domain, it improved lip-reading performance. To observe recognition performance using image information alone, we chose 22 words usable for in-car services and carried out recognition experiments in a car, using hidden Markov models as the recognition algorithm to compare the recognition performance on these words.
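
The classical RASTA filter is an IIR band-pass, H(z) = 0.1(2 + z^-1 - z^-3 - 2z^-4) / (1 - 0.98z^-1), applied along the time trajectory of each feature dimension. A sketch of applying it to lip-feature trajectories (the feature matrix below is a placeholder):

```python
import numpy as np
from scipy.signal import lfilter

def rasta_filter(features):
    """Apply the classical RASTA band-pass along time, per feature dimension.

    features: array of shape (num_frames, num_dims), e.g., lip-shape
    parameters tracked over a video sequence.
    """
    b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])   # numerator taps
    a = np.array([1.0, -0.98])                        # leaky integrator pole
    return lfilter(b, a, features, axis=0)

# Slowly varying offsets (e.g., lighting drift) are suppressed while
# fast lip motion is kept:
t = np.arange(200)[:, None]
feats = np.sin(0.4 * t) + 0.01 * t        # motion plus a slow drift
print(np.abs(rasta_filter(feats)).mean())
```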


SPEECH TRAINING TOOLS BASED ON VOWEL SWITCH/VOLUME CONTROL AND ITS VISUALIZATION

  • Ueda, Yuichi;Sakata, Tadashi
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009 IWAIT / pp. 441-445 / 2009
  • We have developed a real-time software tool that extracts a speech feature vector whose time sequences consist of three groups of components: phonetic/acoustic features such as formant frequencies, phonemic features given as neural network outputs, and distances to Japanese phonemes. Among these features, since the phoneme distances to the five Japanese vowels can express vowel articulation, we designed a switch, a volume control, and a color representation that are operated by pronouncing vowel sounds. As examples of this vowel interface, we developed speech training tools that display an image character or a rolling colored ball and control a cursor's movement, intended for aurally or vocally handicapped children. In this paper, we introduce the functions and principles of these systems.
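
The control idea is simple: the vowel with the smallest phoneme distance acts as the switch state, and frame energy drives the volume. A toy sketch (the distance and energy values are made up, not from the tool):

```python
VOWELS = ["a", "i", "u", "e", "o"]   # the five Japanese vowels

def vowel_switch(distances, energy, threshold=0.5):
    """Turn one frame of phoneme distances plus energy into (switch, volume).

    distances: distance of the current frame to each vowel model;
    energy: frame energy in [0, 1], mapped directly to a volume level.
    """
    best = min(range(len(VOWELS)), key=lambda i: distances[i])
    switch = VOWELS[best] if distances[best] < threshold else None
    return switch, round(100 * energy)

# A frame close to /i/, spoken at moderate loudness:
print(vowel_switch([0.9, 0.2, 0.8, 0.6, 0.7], 0.4))   # ('i', 40)
```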
