• Title/Summary/Keyword: object audio

Object Audio Coding Standard SAOC Technology and Application (객체 오디오 부호화 표준 SAOC 기술 및 응용)

  • Oh, Hyen-O;Jung, Yang-Won
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.45-55 / 2010
  • Object-based audio coding technology has attracted interest for its expected application across a wide range of areas. Recently, ISO/IEC MPEG standardized a parametric object audio coding method, SAOC (Spatial Audio Object Coding). This paper introduces parametric object audio coding techniques with a special focus on MPEG SAOC, and describes several issues and solutions that should be considered for its successful application.
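A minimal sketch of the parametric idea behind SAOC discussed above: instead of coding each object waveform, an encoder can transmit a single downmix plus compact per time/frequency-tile level parameters for the objects. The tiling, the "OLD-like" ratio, and all names below are illustrative assumptions, not the SAOC bitstream syntax.

```python
import numpy as np

def stft_power(x, frame=1024, hop=512):
    """Naive power spectrogram used only to form time/frequency tiles."""
    frames = [x[i:i + frame] * np.hanning(frame)
              for i in range(0, len(x) - frame, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2

def encode_parametric(objects):
    """Toy encoder: mono downmix + per-tile object level differences (OLD-like)."""
    downmix = np.sum(objects, axis=0)
    powers = np.array([stft_power(o) for o in objects])  # (num_obj, frames, bins)
    ref = powers.max(axis=0) + 1e-12                      # dominant object per tile
    return downmix, powers / ref                          # level relative to the dominant one

# Two toy "objects": a low tone and a high tone.
sr = 16000
t = np.arange(sr) / sr
objs = np.stack([np.sin(2 * np.pi * 220 * t), 0.5 * np.sin(2 * np.pi * 1760 * t)])
downmix, old = encode_parametric(objs)
print(downmix.shape, old.shape)  # real systems group bins into a few parameter bands to keep this side info small
```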

Audio Object Coding Standard Technology - MPEG SAOC (오디오 객체 부호화 표준 - MPEG SAOC)

  • Jung, Yang-Won;Oh, Hyen-O
    • The Journal of the Acoustical Society of Korea / v.28 no.7 / pp.630-639 / 2009
  • This paper introduces MPEG SAOC (Spatial Audio Object Coding), which has been standardized in the MPEG audio subgroup. MPEG SAOC is a recent parametric coding technology that is conceptually similar to PS (Parametric Stereo) and MPEG Surround. In particular, SAOC parameterizes and codes the spatial information of the object signals that make up a downmixed audio scene, letting users render their preferred scene interactively.
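The interactive-rendering idea described above can be reduced to a toy single-channel sketch: given decoded per-tile object power parameters, user-chosen object gains are folded into one gain per time/frequency tile and applied to the downmix. SAOC's actual rendering/transcoding matrices are more involved; the function and array shapes here are illustrative assumptions.

```python
import numpy as np

def render_tile_gains(object_powers, user_gains):
    """
    object_powers: (num_obj, frames, bins) relative per-tile powers (decoded side info).
    user_gains:    (num_obj,) linear gains chosen interactively by the listener.
    Returns one gain per tile so that each object's contribution to the downmix
    is approximately rescaled by its user gain.
    """
    g = np.asarray(user_gains, dtype=float)[:, None, None]
    total = object_powers.sum(axis=0) + 1e-12
    desired = (g ** 2 * object_powers).sum(axis=0)
    return np.sqrt(desired / total)

# Example: boost object 0 by +6 dB and mute object 1 in a two-object scene.
powers = np.random.rand(2, 10, 129)            # stand-in for decoded parameters
tile_gain = render_tile_gains(powers, [2.0, 0.0])
print(tile_gain.shape)                          # (10, 129): multiply the downmix STFT by this
```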

A User-Controllable Object Audio File Format and Audio Scene Description (사용자 기반 실감 객체 오디오 파일 포맷 및 오디오 장면 묘사 기법)

  • Cho, Choong-Sang;Kim, Je-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.25-33 / 2010
  • Multimedia services have shifted toward user-based audio services that actively support the user's preferences and interaction. Products that deliver the highest audio quality using lossless audio technology have been released in the market, and object audio music that lets the user select individual objects is already being serviced. In this paper, we design an object audio file format and an audio scene description, based on user preference information, for storage and transmission media. The file format builds on the MPEG-4 file format, so that the high-quality audio codecs in MPEG-4 Audio can be used directly and the tracks can be controlled flexibly depending on the number of instruments in the music. The encoded audio data of each object and the binary-encoded audio scene description, each carried in an independent track, are packed into a single file. The scene description for storage media consists of a full scene description and per-object scene descriptions, while the scene description for transmission media carries the essential description for object audio operation and a specific description for realistic audio rendering. A simulator based on the designed file format was developed; it generates an object audio file with several scene descriptions, and realistic audio is reproduced through user interaction with the unpacked scene description.
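The packing described above (one independent track per encoded object plus binary-coded scene descriptions in an MPEG-4-style container) can be summarized with a toy in-memory model. The class and field names below are assumptions for illustration, not the authors' actual box layout.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObjectTrack:
    track_id: int
    codec: str          # e.g. an MPEG-4 audio codec identifier
    samples: bytes      # encoded access units for one object (instrument)

@dataclass
class SceneDescription:
    kind: str           # "full", "object", or a compact variant for transmission
    payload: bytes      # binary-encoded description (positions, gains, grouping)

@dataclass
class ObjectAudioFile:
    """Toy container: one track per audio object plus scene-description tracks."""
    object_tracks: List[ObjectTrack] = field(default_factory=list)
    scene_tracks: List[SceneDescription] = field(default_factory=list)

    def add_instrument(self, codec: str, data: bytes) -> None:
        self.object_tracks.append(ObjectTrack(len(self.object_tracks) + 1, codec, data))

f = ObjectAudioFile()
f.add_instrument("aac", b"...")      # e.g. vocals
f.add_instrument("aac", b"...")      # e.g. guitar
f.scene_tracks.append(SceneDescription("full", b"\x01\x02"))
print(len(f.object_tracks), "object tracks,", len(f.scene_tracks), "scene tracks")
```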

A Study on Setting the Minimum and Maximum Distances for Distance Attenuation in MPEG-I Immersive Audio

  • Lee, Yong Ju;Yoo, Jae-hyoun;Jang, Daeyoung;Kang, Kyeongok;Lee, Taejin
    • Journal of Broadcast Engineering / v.27 no.7 / pp.974-984 / 2022
  • In this paper, we introduce methods for setting the minimum and maximum distances used in geometric distance attenuation processing, one of the methods for spatial sound reproduction. In general, sound attenuation by distance is inversely proportional to distance (the 1/r law), but when the relative distance between the user and the audio object is very short or very long, exceptional processing may be applied by setting a minimum or a maximum distance. While MPEG-I Immersive Audio's RM0 uses fixed values for the minimum and maximum distances, this study proposes effective methods for setting the distances that take the signal gain of an audio object into account. The proposed methods were verified through simulation and through experiments using the RM0 renderer.
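A minimal sketch of the clamped 1/r behaviour described above: gain follows 1/r between a minimum and a maximum distance and is held constant outside that range, and the limits themselves can be derived from an object's own gain. The reference distance, the audibility floor, and the gain cap below are illustrative assumptions, not the RM0 values or the paper's proposed rule.

```python
def distance_gain(r, r_min=0.2, r_max=100.0, r_ref=1.0):
    """1/r distance attenuation, clamped to [r_min, r_max]; unity gain at r_ref."""
    r_clamped = min(max(r, r_min), r_max)
    return r_ref / r_clamped

def limits_from_object_gain(object_gain, min_audible=1e-4, max_gain=4.0, r_ref=1.0):
    """
    Toy gain-aware limits: a louder object keeps attenuating out to a larger distance
    before dropping below the audibility floor, while its near-field gain is capped.
    """
    r_max = r_ref * object_gain / min_audible   # distance where the level hits the floor
    r_min = r_ref * object_gain / max_gain      # distance where the level would exceed the cap
    return r_min, r_max

for r in (0.05, 0.5, 2.0, 500.0):
    print(f"r = {r:7.2f} m -> gain = {distance_gain(r):.4f}")
print("gain-aware limits for a 2x object:", limits_from_object_gain(2.0))
```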

Channel Expansion Technology in MPEG Audio (MPEG 오디오의 채널 확장 기술)

  • Pang, Hee-Suk
    • Journal of Broadcast Engineering / v.16 no.5 / pp.714-721 / 2011
  • MPEG audio uses the masking effect, high-frequency component synthesis based on spectral band replication, and channel expansion based on parametric stereo for efficient compression of audio signals. In this paper, we present an overview of the state-of-the-art channel expansion technology in MPEG audio. We also give technical overviews and broadcasting-service application examples for HE-AAC v2, MPEG Surround, spatial audio object coding (SAOC), and unified speech and audio coding (USAC), the MPEG audio codecs based on channel expansion technology.
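As a hedged illustration of the parametric-stereo style of channel expansion surveyed above, the toy encoder/decoder below transmits a mono downmix plus one inter-channel level difference (ILD) per band and re-spreads the mono spectrum from that cue alone; real PS additionally carries phase and coherence parameters, and the band split here is an arbitrary assumption.

```python
import numpy as np

def ps_encode(left, right, bands=8):
    """Mono downmix + per-band ILD in dB (toy parametric stereo encoder)."""
    mono = 0.5 * (left + right)
    L = np.array_split(np.abs(np.fft.rfft(left)) ** 2, bands)
    R = np.array_split(np.abs(np.fft.rfft(right)) ** 2, bands)
    ild_db = [10 * np.log10((l.sum() + 1e-12) / (r.sum() + 1e-12)) for l, r in zip(L, R)]
    return mono, np.array(ild_db)

def ps_decode(mono, ild_db, bands=8):
    """Re-spread the mono spectrum into left/right using only the ILD cue."""
    chunks = np.array_split(np.fft.rfft(mono), bands)
    left, right = [], []
    for c, ild in zip(chunks, ild_db):
        ratio = 10 ** (ild / 20)                 # amplitude ratio L/R in this band
        right_part = c * 2.0 / (1.0 + ratio)     # L + R must re-sum to 2 * mono
        left.append(right_part * ratio)
        right.append(right_part)
    n = len(mono)
    return np.fft.irfft(np.concatenate(left), n), np.fft.irfft(np.concatenate(right), n)

t = np.arange(2048) / 16000
l_in, r_in = np.sin(2 * np.pi * 300 * t), 0.3 * np.sin(2 * np.pi * 300 * t)
mono, ild = ps_encode(l_in, r_in)
l_out, r_out = ps_decode(mono, ild)
print(np.std(l_out) / np.std(r_out))   # close to the original level ratio of about 3.3
```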

Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun;Islam, Md. Mahbubul;Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.1121-1141 / 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network tracker, trained offline on a large number of video repositories, with a color-histogram-based tracker in order to track objects for immersive audio mixing. Our algorithm addresses the occlusion and large-movement problems of the CNN-based GOTURN generic object tracker. The key idea is to train a binary classifier offline on the color-histogram similarity values estimated by both trackers, use it to select the appropriate tracker for the target, and update both trackers with the predicted bounding-box position so that tracking continues. Furthermore, a histogram-similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object with a prominent unsupervised monocular depth estimation algorithm to obtain the 3D position needed to mix immersive audio onto that object. The proposed algorithm shows about 2% higher accuracy than the GOTURN algorithm on the existing VOT2014 tracking benchmark. Our tracker can also track multiple objects by applying the single-object tracker to each target, although no results are reported on an MOT benchmark.
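The tracker-fusion loop described above (a classifier fed with color-histogram similarity scores decides, frame by frame, whether to trust the CNN tracker or the histogram tracker, and both trackers are then updated with the chosen box) can be sketched as below. The tracker objects, the feature set, and the classifier are placeholders, not the authors' trained models.

```python
import numpy as np

def hist_similarity(patch_a, patch_b, bins=16):
    """Histogram-intersection similarity between two image patches (grayscale toy version)."""
    ha, _ = np.histogram(patch_a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(patch_b, bins=bins, range=(0, 255), density=True)
    return float(np.minimum(ha, hb).sum() / (ha.sum() + 1e-12))

def crop(frame, box):
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

def fused_track(frames, template, cnn_tracker, hist_tracker, classifier):
    """Per frame: query both trackers, score both boxes, let the classifier pick one."""
    boxes = []
    for frame in frames:
        box_cnn = cnn_tracker.predict(frame)
        box_hist = hist_tracker.predict(frame)
        feats = [hist_similarity(crop(frame, box_cnn), template),
                 hist_similarity(crop(frame, box_hist), template)]
        box = box_cnn if classifier.predict([feats])[0] == 0 else box_hist
        cnn_tracker.update(frame, box)      # keep both trackers locked on the target
        hist_tracker.update(frame, box)
        boxes.append(box)
    return boxes
```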

The Design of Object-based 3D Audio Broadcasting System (객체기반 3차원 오디오 방송 시스템 설계)

  • 강경옥;장대영;서정일;정대권
    • The Journal of the Acoustical Society of Korea / v.22 no.7 / pp.592-602 / 2003
  • This paper describes the basic structure of a novel object-based 3D audio broadcasting system. To overcome the limitations of current unidirectional audio broadcasting services, the object-based 3D audio broadcasting system is designed to provide the ability to interact with important audio objects as well as realistic 3D effects based on the MPEG-4 standard. The system is composed of six sub-modules. The audio input module collects the background sound object, which is recorded with a 3D microphone, and the audio objects, which are recorded with monaural microphones or extracted through source separation. The sound scene authoring module edits the 3D information of the audio objects, such as acoustic characteristics, location, and directivity; it also defines the final sound scene, with a 3D background sound, that the producer intends to deliver to the receiving terminal. The encoder module encodes scene descriptors and audio objects for efficient transmission. The decoder module extracts scene descriptors and audio objects by decoding the received bitstreams. The sound scene composition module reconstructs the 3D sound scene from the scene descriptors and audio objects. The 3D sound renderer module maximizes the 3D sound effect by adapting the final sound to the listener's acoustic environment; it also receives the user's controls on audio objects and sends them to the scene composition module to change the sound scene.
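The six-module structure listed above can be mirrored by a thin pipeline skeleton; the sketch below only models the user-interaction path (scene composition applying the listener's per-object controls before rendering), and every class and field is a placeholder rather than the paper's MPEG-4 design.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AudioObject:
    name: str
    gain: float = 1.0
    position: tuple = (0.0, 0.0, 0.0)   # scene-descriptor style 3D location

@dataclass
class SoundScene:
    background: AudioObject                       # e.g. recorded with a 3D microphone
    objects: List[AudioObject] = field(default_factory=list)

def compose(scene: SoundScene, user_controls: Dict[str, float]) -> SoundScene:
    """Scene composition: apply the user's per-object gain controls to the decoded scene."""
    for obj in scene.objects:
        obj.gain *= user_controls.get(obj.name, 1.0)
    return scene

def render(scene: SoundScene, listening_environment: str) -> str:
    """Renderer stand-in: report what would be spatialized for this environment."""
    active = [o.name for o in scene.objects if o.gain > 0]
    return f"render {active} over '{scene.background.name}' for {listening_environment}"

scene = SoundScene(AudioObject("ambience"),
                   [AudioObject("commentator"), AudioObject("crowd")])
scene = compose(scene, {"crowd": 0.0})   # the listener mutes the crowd object
print(render(scene, "headphones"))
```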

An Efficient Time-Frequency Representation for Parametric-Based Audio Object Coding

  • Beack, Seung-Kwon;Lee, Tae-Jin;Kim, Min-Je;Kang, Kyeong-Ok
    • ETRI Journal / v.33 no.6 / pp.945-948 / 2011
  • Object-based audio coding can provide new music applications with interactivity. To efficiently compress many target audio objects, a subband-based parametric coding scheme has been adopted for MPEG spatial audio object coding. In this letter, the time-frequency (T/F) subband analysis structure is investigated, and a reconfigured T/F structure is proposed to improve the quality of generated sound scenes such as 'karaoke' and 'solo' playback in interactive music scenarios. The experimental results confirm that the proposed scheme remarkably improves the SNR and sound quality.
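A toy illustration of why the T/F analysis granularity matters for 'karaoke'/'solo' generation: suppressing one object from a downmix with per-tile gains leaves less residual energy when the sub-band grid is finer. The Wiener-like gain, the uniform tiling, and the leakage metric below are illustrative assumptions, not the letter's reconfigured structure.

```python
import numpy as np

def tile_powers(x, frame, bands):
    """Frame the signal, FFT each frame, and group bins into coarse sub-bands."""
    n_frames = len(x) // frame
    spec = np.abs(np.fft.rfft(x[:n_frames * frame].reshape(n_frames, frame), axis=1)) ** 2
    return np.stack([b.sum(axis=1) for b in np.array_split(spec, bands, axis=1)], axis=1)

def karaoke_residual(vocal, accomp, frame=1024, bands=4):
    """Suppress the vocal from the mix with per-tile gains; return leaked vocal energy."""
    pv, pa = tile_powers(vocal, frame, bands), tile_powers(accomp, frame, bands)
    gain = pa / (pv + pa + 1e-12)          # Wiener-like per-tile attenuation of the vocal
    return (pv * gain).sum() / pv.sum()    # fraction of vocal energy left after suppression

sr = 16000
t = np.arange(sr) / sr
vocal = np.sin(2 * np.pi * 440 * t)
accomp = 0.7 * np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
for bands in (1, 4, 32):
    print(f"{bands:2d} bands -> residual vocal energy {karaoke_residual(vocal, accomp, bands=bands):.3f}")
```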

A Study on Realistic Sound Reproduction for UHDTV (UHDTV를 위한 실감 오디오 재현 기술)

  • Jang, Daeyoung;Seo, Jeongil;Lee, Yong Ju;Yoo, Jae-Hyoun;Park, Taejin;Lee, Taejin
    • Journal of Broadcast Engineering / v.20 no.1 / pp.68-81 / 2015
  • Owing to recent advances in components and media processing technologies, UHDTV, the successor to HDTV, is expected to become a reality soon. Accordingly, the audio technology that today provides 5.1-channel surround sound in the home must be reconsidered in terms of the services it should offer in the UHDTV era. In practice, however, the 5.1-channel audio market is struggling because of the difficulty of installing and maintaining multiple loudspeakers at home. Meanwhile, the cinema sound market, which long relied on 5.1 and 7.1-channel formats, has changed as Dolby ATMOS, IOSONO, AURO3D, and similar systems have been launched one after another, introducing hybrid audio technologies that include ceiling channels and object-based sound. This object-based audio technology is expected to enter the home theater and broadcast audio markets, and the change is seen as an opportunity for technological advance and market growth beyond the channel-based audio market, which lacks flexibility. In this paper, we investigate a realistic audio solution suitable for UHDTV, introduce the hybrid audio technologies expected to serve UHDTV, describe the hybrid audio content format and reproduction methods in the home, and consider the future prospects of realistic audio.

Design and Implementation of Distributed Object Framework Supporting Audio/Video Streaming (오디오/비디오 스트리밍을 지원하는 분산 객체 프레임 워크 설계 및 구현)

  • Ban, Deok-Hun;Kim, Dong-Seong;Park, Yeon-Sang;Lee, Heon-Ju
    • Journal of KIISE: Computing Practices and Letters / v.5 no.4 / pp.440-448 / 1999
  • This paper describes the design and implementation of a software framework that supports the processing of real-time stream data, such as audio and video, in a distributed object-oriented computing environment. DAViS (Distributed Object Framework supporting Audio/Video Streaming), proposed in this paper, abstracts the software components involved in processing audio/video data as distributed objects and separates the transmission path of the media data from that of the control information exchanged between them. Based on DAViS, distributed applications can handle audio/video data at the same level of abstraction provided by existing distributed programming environments. DAViS has an internal structure flexible enough to easily integrate components for new types of audio/video data and to rapidly accommodate advances in underlying network and computer-system technology with very few modifications.
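A minimal sketch of the control-path/data-path separation that DAViS describes: control operations stay on a lightweight method-call (RPC-style) interface, while bulk media bytes travel over a separate socket. The class, port handling, and framing below are placeholders, not DAViS's actual API.

```python
import socket
import threading

class StreamSource:
    """Toy distributed A/V object: control via method calls, media via a separate socket."""

    def __init__(self, media_port: int):
        self.media_port = media_port
        self._running = False

    # --- control path: small, infrequent messages (what an ORB/RPC layer would carry) ---
    def start(self, sink_host: str) -> None:
        self._running = True
        threading.Thread(target=self._pump, args=(sink_host,), daemon=True).start()

    def stop(self) -> None:
        self._running = False

    # --- data path: continuous real-time media bytes, kept off the control channel ---
    def _pump(self, sink_host: str) -> None:
        with socket.create_connection((sink_host, self.media_port)) as s:
            while self._running:
                s.sendall(b"\x00" * 1024)   # stand-in for one encoded audio/video frame
```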