• Title/Summary/Keyword: visual-audio

Design of a Format Converter from MPEG-4 Over MPEG-2 TS to MP4 (MPEG-4 Over MPEG-2 TS로부터 MP4 파일로의 포맷 변환기 설계)

  • 최재영;정제창
    • Journal of Broadcast Engineering, v.5 no.2, pp.176-187, 2000
  • MPEG-4 is a digital bit stream format and an associated set of protocols for representing multimedia content consisting of natural and synthetic audio, video, and object data. This paper describes an application in which multiple audio/visual data streams are combined in MPEG-4 and transported via MPEG-2 transport streams (TS). It also describes how to convert MPEG-4 over MPEG-2 TS bit streams into an MP4 file, which is designed to contain the media information of an MPEG-4 presentation in a flexible, extensible format. MPEG-4 content is presented in the form of audio-visual objects that are arranged into an audio-visual scene by means of a scene descriptor and composed by means of an object descriptor. These descriptor streams are not defined in MPEG-2 TS, so this paper focuses on handling these descriptors and on parsing TS streams to extract the MPEG-4 data. The MPEG-4 over MPEG-2 TS to MP4 format converter is implemented in the demonstration system.
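
A minimal sketch of the first step such a converter must perform: demultiplexing the 188-byte MPEG-2 TS packets to recover the payloads of one elementary stream, before the SL-packetized MPEG-4 data and descriptor streams can be reassembled and written into MP4 boxes. The field layout follows ISO/IEC 13818-1; the file name and PID below are illustrative, not values taken from the paper.

```python
# A sketch of MPEG-2 TS packet demultiplexing (ISO/IEC 13818-1 layout);
# not the paper's implementation. The PID below is illustrative.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def iter_ts_payloads(data: bytes, wanted_pid: int):
    """Yield the payload bytes of every TS packet carrying wanted_pid."""
    for off in range(0, len(data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = data[off:off + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            continue  # lost sync; a real demuxer would resynchronize
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid != wanted_pid:
            continue
        adaptation = (pkt[3] >> 4) & 0x3      # adaptation_field_control bits
        payload_start = 4
        if adaptation in (2, 3):              # adaptation field present
            payload_start += 1 + pkt[4]       # first byte is its own length
        if adaptation in (1, 3):              # payload present
            yield pkt[payload_start:]

# Usage (hypothetical file and PID):
# with open("mpeg4_over_ts.bin", "rb") as f:
#     for payload in iter_ts_payloads(f.read(), wanted_pid=0x101):
#         ...  # reassemble PES packets, then SL packets, then write MP4 boxes
```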

Speech Recognition by Integrating Audio, Visual and Contextual Features Based on Neural Networks (신경망 기반 음성, 영상 및 문맥 통합 음성인식)

  • 김명원;한문성;이순신;류정우
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.3, pp.67-77, 2004
  • Recent research has focused on the fusion of audio and visual features for reliable speech recognition in noisy environments. In this paper, we propose a neural network based model of robust speech recognition that integrates audio, visual, and contextual information. The Bimodal Neural Network (BMNN) is a multi-layer perceptron of four layers, each of which performs a certain level of abstraction of the input features. In BMNN, the third layer combines the audio and visual features of speech to compensate for the loss of audio information caused by noise. In order to improve the accuracy of speech recognition in noisy environments, we also propose a post-processing step based on contextual information, namely the sequential patterns of words spoken by a user. Our experimental results show that our model outperforms any single-modality model. In particular, when we use the contextual information, we obtain over 90% recognition accuracy even in noisy environments, a significant improvement over the state of the art in speech recognition. Our research demonstrates that diverse sources of information need to be integrated to improve the accuracy of speech recognition, particularly in noisy environments.
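
A minimal sketch of the BMNN idea described above, assuming PyTorch and illustrative feature and layer sizes (the paper specifies a four-layer perceptron but not a framework or dimensions); the contextual post-processing step is omitted here.

```python
# A sketch, not the authors' implementation: layer sizes are assumptions.
import torch
import torch.nn as nn

class BMNNSketch(nn.Module):
    def __init__(self, audio_dim=26, visual_dim=20, hidden=64, n_words=10):
        super().__init__()
        # Layers 1-2: modality-specific abstraction of the input features.
        self.audio_net = nn.Sequential(nn.Linear(audio_dim, hidden), nn.Sigmoid())
        self.visual_net = nn.Sequential(nn.Linear(visual_dim, hidden), nn.Sigmoid())
        # Layer 3: the fusion layer, where visual features compensate for
        # audio information lost to noise.
        self.fusion = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Sigmoid())
        # Layer 4: word classification.
        self.out = nn.Linear(hidden, n_words)

    def forward(self, audio, visual):
        a = self.audio_net(audio)
        v = self.visual_net(visual)
        return self.out(self.fusion(torch.cat([a, v], dim=-1)))

# Usage (hypothetical feature dimensions):
# logits = BMNNSketch()(torch.randn(1, 26), torch.randn(1, 20))
```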

A Study on the Elements of Interface Design of Audio-based Social Networking Service (오디오 기반 SNS의 인터페이스 디자인 요소 연구)

  • Kim, Yeon-Soo;Choe, Jong-Hoon
    • Journal of the Korea Convergence Society, v.13 no.2, pp.143-150, 2022
  • Audio-based SNS also needs a visual guide to lead users to the content they want. Therefore, this study investigates the visual interface design elements that influence the experience of using audio content in audio-based SNS. Prior research has identified that the generally acknowledged interface design elements are important for the usability of audio content. Through an analysis of currently launched audio-based SNS, the influence of the general interface elements was confirmed again, and through an analysis of other audio content services, a new interface evaluation element was explored. Accordingly, five general interface evaluation elements (layout, color, icon, typography, and graphic image), together with newly defined multimedia elements, are proposed as crucial factors in evaluating the UI of audio-based SNS.

Audio-visual Spatial Coherence Judgments in the Peripheral Visual Fields

  • Lee, Chai-Bong;Kang, Dae-Gee
    • Journal of the Institute of Convergence Signal Processing, v.16 no.2, pp.35-39, 2015
  • Auditory and visual stimuli presented in the peripheral visual field were perceived as spatially coincident when the auditory stimulus was presented five to seven degrees outward from the direction of the visual stimulus. Furthermore, judgments of the perceived distance between auditory and visual stimuli presented in the periphery did not increase when the auditory stimulus was presented on the peripheral side of the visual stimulus. As to the origin of this phenomenon, there seem to be two possibilities. One is that the participants could not perceptually distinguish the distances on the peripheral side because of limited perceptual accuracy. The other is that the participants could distinguish the distances but could not evaluate them because the experimental setup provided too few auditory stimulus positions. To confirm which of these two explanations is valid, we conducted an experiment similar to that of our previous study, using a sufficient number of loudspeakers for the presentation of the auditory stimuli. The results revealed that judgments of perceived distance did increase on the peripheral side. This indicates that we can discriminate between auditory and visual stimulus positions on the peripheral side.

A Study on the Use of Supplementary Teaching Materials and Implements in the High School Home Economics Education (고등학교 가정과 교육에서 보조학습 교재.교구의 활용실태 연구)

  • 조은경;김용숙
    • Journal of Korean Home Economics Education Association, v.9 no.1, pp.1-17, 1997
  • This study was conducted to obtain basic materials for improving the teaching of Home Economics by theoretically examining the supplementary teaching materials and implements usable in teaching the Costume History area, and then surveying the types and applications of the supplementary teaching materials and implements that high schools owned. The subjects were 111 high school teachers of the Home Economics and Housework curriculum across the country, surveyed with self-administered questionnaires. The SAS program was used to calculate frequencies, percentages, averages, standard deviations, and chi-square tests. The results were as follows: 1. Most of the high school teachers used school experiment expenses to prepare supplementary teaching materials or implements. 2. Of the supplementary teaching materials and implements concerning Costume History, visual implements such as slides and pictures were the most commonly owned; CDs and audio implements such as cassette tapes were not used. 3. Most of the teachers recognized the importance of audio-visual teaching materials and implements concerning Costume History. 4. Among the audio-visual materials and implements concerning Costume History that can be made by teachers of the Home Economics and Housework curriculum, the most used was ‘cutting pictorials from magazines and newspapers’, followed by ‘orbital materials’ and ‘copying the pictorials’, and the least used was ‘recording from the radio’. 5. Most of the annual expenses assigned to the Home Economics department were used for cooking practice, and the least was assigned to buying audio-visual teaching materials and implements. 6. Time assigned to the Home Economics area was mostly one or two hours per week, and within this, time assigned to the history of Western costume and the history of Korean costume was mostly five to eight hours. 7. The areas in which the teachers felt the most difficulty in the clothing and textiles curriculum were ‘textiles’, followed by ‘knitting’, ‘western costume history’, and ‘korean clothing construction’. 8. The difficulties the teachers faced while teaching Costume History were mostly that ‘the pictorials in the text are not fully explained’, followed by ‘most of the supplementary teaching materials or implements are not owned’, ‘having to explain very much in a short time’, and ‘the verbal explanation alone is insufficient’. 9. The solutions proposed for these difficulties were mostly that ‘information on the audio-visual materials and implements distributed in the market should be easy to obtain’, followed by ‘the school should provide enough experiment and practice expenses to buy audio-visual materials and implements’ and ‘the education facilities of the Home Economics Department should take the lead in improving teaching methods and give special lectures about them’.

Audio-Visual Fusion for Sound Source Localization and Improved Attention (음성-영상 융합 음원 방향 추정 및 사람 찾기 기술)

  • Lee, Byoung-Gi;Choi, Jong-Suk;Yoon, Sang-Suk;Choi, Mun-Taek;Kim, Mun-Sang;Kim, Dai-Jin
    • Transactions of the Korean Society of Mechanical Engineers A, v.35 no.7, pp.737-743, 2011
  • Service robots are equipped with various sensors such as vision cameras, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also depend mainly on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-vision fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.
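
A minimal sketch of the attention loop this fusion enables: a sound-source direction estimate steers the camera, and face detection confirms a person near that bearing. Every function name here (estimate_sound_direction, pan_camera, detect_faces) is a hypothetical stand-in for the robot's actual sensor APIs, not the authors' code.

```python
# A sketch under assumed APIs: the injected helper functions do not exist
# in any real library; they stand in for the robot's microphone-array,
# camera, and face-detector interfaces.

def attend_to_speaker(mic_frames, get_camera_image, pan_camera,
                      estimate_sound_direction, detect_faces,
                      tolerance_deg=10.0):
    """Return the face bounding box closest to the estimated sound
    direction, or None if no face is found near it."""
    azimuth = estimate_sound_direction(mic_frames)  # e.g. TDOA over a mic array
    pan_camera(azimuth)                             # turn the camera toward the sound
    faces = detect_faces(get_camera_image())        # [(bbox, bearing_deg), ...]
    near = [(abs(bearing - azimuth), bbox) for bbox, bearing in faces
            if abs(bearing - azimuth) <= tolerance_deg]
    return min(near, key=lambda d_b: d_b[0])[1] if near else None
```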

Development of a Lipsync Algorithm Based on Audio-visual Corpus (시청각 코퍼스 기반의 립싱크 알고리듬 개발)

  • 김진영;하영민;이화숙
    • The Journal of the Acoustical Society of Korea, v.20 no.3, pp.63-69, 2001
  • A corpus-based lip sync algorithm for synthesizing natural face animation is proposed in this paper. To get the lip parameters, marks were attached to the speaker's face, and their positions were extracted with image processing methods. The spoken utterances were also labeled with HTK, and prosodic information (duration, pitch, and intensity) was analyzed. An audio-visual corpus was constructed by combining the speech and image information. The basic unit used in our approach is the syllable. Based on this audio-visual corpus, the lip information represented by the mark positions is synthesized: the best syllable units are selected from the audio-visual corpus, and the visual information of the selected syllable units is concatenated. There are two steps to obtaining the best units. One is to select the N best candidates for each syllable. The other is to select the smoothest unit sequence, which is done by a Viterbi decoding algorithm. For these steps, two distance measures between syllable units are proposed: a phonetic environment distance measure and a prosody distance measure. Computer simulation results showed that our proposed algorithm performs well. In particular, it was shown that pitch and intensity information is as important as duration information in lip sync.
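
A minimal sketch of the two-step unit selection described above. Here target_cost stands in for the paper's phonetic-environment and prosody distance measures, and join_cost for a visual smoothness cost between consecutive units; the exact distance definitions are not reproduced.

```python
# A sketch, not the authors' implementation: target_cost and join_cost are
# placeholders for the paper's distance measures.

def select_units(target_syllables, corpus, target_cost, join_cost, n_best=5):
    """Pick one corpus unit per target syllable, minimizing total cost.

    corpus maps a syllable label to its candidate units from the
    audio-visual corpus; each unit carries lip-mark trajectories.
    """
    # Step 1: keep the N best candidates per syllable by target cost.
    candidates = [
        sorted(corpus[syl], key=lambda u: target_cost(syl, u))[:n_best]
        for syl in target_syllables
    ]
    # Step 2: Viterbi search over the candidate lattice for the sequence
    # with the lowest total target + join cost.
    best = [(target_cost(target_syllables[0], u), [u]) for u in candidates[0]]
    for syl, cands in zip(target_syllables[1:], candidates[1:]):
        new_best = []
        for u in cands:
            cost, path = min(
                ((c + join_cost(p[-1], u), p) for c, p in best),
                key=lambda cp: cp[0],
            )
            new_best.append((cost + target_cost(syl, u), path + [u]))
        best = new_best
    return min(best, key=lambda cp: cp[0])[1]
```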

An fMRI Study on the Differences in the Brain Regions Activated by an Identical Audio-Visual Clip Using Major and Minor Key Arrangements (동일한 영상자극을 이용한 장조음악과 단조음악에 의해 유발된 뇌 활성화의 차이 : fMRI 연구)

  • Lee, Chang-Kyu;Eum, Young-Ji;Kim, Yeon-Kyu;Watanuki, Shigeki;Sohn, Jin-Hun
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2009.05a, pp.109-112, 2009
  • The purpose of this study was to examine the differences in brain activation evoked by music arranged in a major versus a minor key, presented with an identical motion film during fMRI testing. Part of the audio-visual combinations composed by Iwamiya and Sano was used as the study stimuli. This audio-visual clip was originally developed by combining a short motion segment of the animation "The Snowman" with music arranged in both major and minor keys from the original jazz piece "Avalon" rewritten in a classical style. Twenty-seven Japanese male graduate and undergraduate students participated in the study. The brain regions more activated by the major key than by the minor key, given the identical motion film, were the left cerebellum, the right fusiform gyrus, the right superior occipital cortex, the left superior orbitofrontal cortex, the right pallidum, the left precuneus, and the bilateral thalamus. Conversely, the brain regions more activated by the minor key than by the major key were the right medial frontal cortex, the left inferior orbitofrontal cortex, the bilateral superior parietal cortex, the left postcentral gyrus, and the right precuneus. The study thus showed a difference in the brain regions activated by the two stimuli (major versus minor key) while controlling for the visual aspect of the experiment. These findings imply that the brain systematically processes music written in major and minor keys differently. (Supported by the User Science Institute of Kyushu University, Japan, and the Korea Science and Engineering Foundation.)
