• Title/Summary/Keyword: Immersive audio


Standardization of MPEG-I Immersive Audio and Related Technologies (MPEG-I Immersive Audio 표준화 및 기술 동향)

  • Jang, D.Y.;Kang, K.O.;Lee, Y.J.;Yoo, J.H.;Lee, T.J.
    • Electronics and Telecommunications Trends, v.37 no.3, pp.52-63, 2022
  • Immersive media, also known as spatial media, has become essential as face-to-face activities have decreased during the COVID-19 pandemic. Teleconferencing, the metaverse, and digital twins have been developed with high expectations as immersive media services, and the demand for hyper-realistic media is increasing. Under these circumstances, MPEG-I immersive media is being standardized as a technology for navigable virtual reality, with the standard expected to be launched in the first half of 2024, and the Audio Group is working to standardize the immersive audio technology. Following this trend, this article introduces trends in MPEG-I immersive audio standardization. It further describes the features of the immersive audio rendering technology, focusing on the structure and function of the RM0 base technology, which was chosen after evaluating all the technologies proposed at the January 2022 MPEG Audio meeting.

MPEG-I Immersive Audio Standardization Trend (MPEG-I Immersive Audio 표준화 동향)

  • Kang, Kyeongok;Lee, Misuk;Lee, Yong Ju;Yoo, Jae-hyoun;Jang, Daeyoung;Lee, Taejin
    • Journal of Broadcast Engineering, v.25 no.5, pp.723-733, 2020
  • In this paper, MPEG-I Immersive Audio standardization and related trends are presented. MPEG-I Immersive Audio, for which standard documents are being developed at the exploration stage, lets a user interact with a virtual scene in a 6-DoF manner and perceive sounds that are realistic and match the user's spatial audio experience in the real world, in the VR/AR environments expected to be killer applications of hyper-connected environments such as 5G/6G. To this end, the MPEG Audio Working Group has discussed the system architecture and related requirements for the spatial audio experience in VR/AR, the audio evaluation platform (AEP) and encoder input format (EIF) for assessing the performance of submitted proponent technologies, and the evaluation procedures.

Spatial Audio Technologies for Immersive Media Services (체감형 미디어 서비스를 위한 공간음향 기술 동향)

  • Lee, Y.J.;Yoo, J.;Jang, D.;Lee, M.;Lee, T.
    • Electronics and Telecommunications Trends, v.34 no.3, pp.13-22, 2019
  • Although virtual reality technology may not yet offer satisfactory quality for all users, it tends to incite interest because of the expectation that it can allow one to experience something one may never experience in real life. The most important aspect of this indirect experience is the provision of immersive 3D audio and video that interacts naturally with every action of the user. Immersive audio faithfully reproduces the acoustic scene of a space corresponding to the position and movement of the listener; this technology is also called spatial audio. In this paper, we briefly introduce trends in spatial audio technology in terms of acquisition, analysis, and reproduction, and the concept of the MPEG-I audio standard, which is being promoted for spatial audio services.

A Study on Setting the Minimum and Maximum Distances for Distance Attenuation in MPEG-I Immersive Audio

  • Lee, Yong Ju;Yoo, Jae-hyoun;Jang, Daeyoung;Kang, Kyeongok;Lee, Taejin
    • Journal of Broadcast Engineering, v.27 no.7, pp.974-984, 2022
  • In this paper, we introduce methods for setting the minimum and maximum distances used in geometric distance attenuation processing, one of the spatial sound reproduction methods. In general, sound attenuation is inversely proportional to distance (the 1/r law), but when the relative distance between the user and the audio object is very short or very long, exceptional processing can be applied by setting a minimum or maximum distance. While MPEG-I Immersive Audio's RM0 uses fixed values for the minimum and maximum distances, this study proposes effective methods for setting these distances that consider the signal gain of an audio object. The proposed methods were verified through simulation and through experiments using the RM0 renderer.
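The clamped 1/r attenuation described in this abstract can be sketched as follows; the function, parameter names, and default values are illustrative assumptions, not the actual RM0 implementation or the paper's proposed method.

```python
def distance_gain(r, r_min=0.2, r_max=100.0, r_ref=1.0):
    """1/r distance attenuation with min/max clamping.

    Below r_min the gain is held constant (avoiding unbounded gain as
    r -> 0); beyond r_max the attenuation stops decreasing. r_ref is the
    distance at which the gain equals 1.0. All defaults are illustrative.
    """
    r_clamped = min(max(r, r_min), r_max)
    return r_ref / r_clamped
```

At the reference distance the gain is unity and halves with each doubling of distance, while listeners closer than `r_min` or farther than `r_max` hear a fixed level.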

Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun;Islam, Md. Mahbubul;Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.3, pp.1121-1141, 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network (CNN) tracker, trained offline on a large number of video repositories, with a color histogram based tracker to track objects for immersive audio mixing. Our algorithm addresses the occlusion and large-movement problems of the CNN based GOTURN generic object tracker. The key idea is the offline training of a binary classifier on the color histogram similarity values estimated by both trackers, which selects the appropriate tracker for the target and updates both trackers with the predicted bounding box of the target to continue tracking. Furthermore, a histogram similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object with a prominent unsupervised monocular depth estimation algorithm to obtain the 3D position needed to mix immersive audio onto that object. Our proposed algorithm demonstrates about 2% higher accuracy than the GOTURN algorithm on the VOT2014 tracking benchmark. Additionally, our tracker can track multiple objects by applying the single object tracker per target, although this has not been demonstrated on any MOT benchmark.
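A color-histogram similarity of the kind such a fusion scheme might compute for the two trackers' candidate patches can be sketched with histogram intersection; the binning, value range, and function name here are assumptions for illustration, not the paper's exact measure.

```python
import numpy as np

def hist_similarity(patch_a, patch_b, bins=16):
    """Histogram-intersection similarity between two image patches.

    Pixel values are assumed to lie in [0, 1]. Returns a score in
    [0, 1]: 1.0 for identical distributions, 0.0 for disjoint ones.
    """
    ha, _ = np.histogram(patch_a, bins=bins, range=(0.0, 1.0))
    hb, _ = np.histogram(patch_b, bins=bins, range=(0.0, 1.0))
    ha = ha / max(ha.sum(), 1)  # normalize counts to probabilities
    hb = hb / max(hb.sum(), 1)
    return float(np.minimum(ha, hb).sum())
```

Such a score could feed both the tracker-selection classifier and the update constraint the abstract mentions: an update is accepted only when the candidate patch remains sufficiently similar to the target model.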

A Study on Immersive Audio Improvement of FTV using an effective noise (유효 잡음을 활용한 FTV 입체음향 개선방안 연구)

  • Kim, Jong-Un;Cho, Hyun-Seok;Lee, Yoon-Bae;Yeo, Sung-Dae;Kim, Seong-Kweon
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.10 no.2, pp.233-238, 2015
  • In this paper, we propose an immersive audio effect method that uses effective noise to improve engagement in the free-viewpoint TV (FTV) service. On a basketball court, we monitored frequency spectra by acquiring continuous audio data of the players and referee using shotgun and wireless microphones. By analyzing these spectra, we determined whether a frequency component is effective when users zoom in. We therefore propose that, when users of the FTV service zoom in toward an object, such seemingly unnecessary noise should be utilized rather than removed. This will be useful for implementing immersive audio in FTV.

A DNN-Based Personalized HRTF Estimation Method for 3D Immersive Audio

  • Son, Ji Su;Choi, Seung Ho
    • International Journal of Internet, Broadcasting and Communication, v.13 no.1, pp.161-167, 2021
  • This paper proposes a new personalized HRTF estimation method based on a deep neural network (DNN) model, with improved elevation reproduction using a notch filter. A previous study proposed a DNN model that estimates the magnitude of the HRTF from anthropometric measurements [1]. However, since that method uses a zero phase rather than estimating the phase, it causes internalization (i.e., inside-the-head localization) of the sound when listening to spatial audio. We devise a method that estimates both the magnitude and phase of the HRTF with a DNN model. The personalized HRIR was estimated using anthropometric measurements, including detailed data of the head, torso, shoulders, and ears, as inputs to the DNN model. The estimated HRIR was then filtered with an appropriate notch filter to improve elevation reproduction. To evaluate performance, both objective and subjective evaluations were conducted. For the objective evaluation, the root mean square error (RMSE) and the log spectral distance (LSD) between the reference HRTF and the estimated HRTF were measured. For the subjective evaluation, a MUSHRA test and a preference test were conducted. As a result, the proposed method lets listeners experience more immersive audio than the previous methods.
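The two objective metrics named in this abstract, RMSE and log spectral distance (LSD), can be sketched for HRTF magnitude responses as follows; the exact averaging over frequencies and directions used in the paper may differ.

```python
import numpy as np

def rmse(h_ref, h_est):
    """Root mean square error between two HRTF magnitude responses."""
    return float(np.sqrt(np.mean((h_ref - h_est) ** 2)))

def lsd(h_ref, h_est, eps=1e-12):
    """Log spectral distance in dB between two HRTF magnitude responses.

    eps guards against log of zero for silent frequency bins.
    """
    log_ratio = 20.0 * np.log10((np.abs(h_ref) + eps) / (np.abs(h_est) + eps))
    return float(np.sqrt(np.mean(log_ratio ** 2)))
```

Both metrics are zero for a perfect estimate; LSD additionally reports errors on the decibel scale, so a constant 20 dB gain mismatch yields an LSD of 20.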

Object Audio Coding Standard SAOC Technology and Application (객체 오디오 부호화 표준 SAOC 기술 및 응용)

  • Oh, Hyen-O;Jung, Yang-Won
    • Journal of the Institute of Electronics Engineers of Korea SP, v.47 no.5, pp.45-55, 2010
  • Object-based audio coding technology has attracted interest, with expectations that it can be applied in a wide range of areas. Recently, ISO/IEC MPEG standardized a parametric object audio coding method, SAOC (Spatial Audio Object Coding). This paper introduces parametric object audio coding techniques with a special focus on MPEG SAOC and describes several issues and solutions that should be considered for its successful application.

MPEG Surround Extension Technique for MPEG-H 3D Audio

  • Beack, Seungkwon;Sung, Jongmo;Seo, Jeongil;Lee, Taejin
    • ETRI Journal, v.38 no.5, pp.829-837, 2016
  • In this paper, we introduce extension tools for MPEG Surround, which were recently adopted as MPEG-H 3D Audio tools by the ISO/MPEG standardization group. MPEG-H 3D Audio is a next-generation technology for representing spatial audio in an immersive manner. However, a considerably large number of input signals can degrade compression performance at low bitrates. The proposed extension of MPEG Surround is based on the original MPEG Surround technology, with its limitations revised by adopting a new coding structure. The proposed MPEG-H 3D Audio technologies will play a pivotal role in dramatically improving sound quality at lower bitrates.

A study on the audio/video integrated control system based on network

  • Lee, Seungwon;Kwon, Soonchul;Lee, Seunghyun
    • International Journal of Advanced Smart Convergence, v.11 no.4, pp.1-9, 2022
  • The recent development of information and communication technology also affects the audio/video systems used in industry. Audio/video system configurations are changing from analog to digital, and network-based audio/video system control has the advantage of reducing system operating costs. However, audio/video systems on the market are limited in that they can control only their own products or run only on specific platforms (Windows, Mac, Linux). This paper studies a device (Network Audio Video Integrated Control: NAVICS) that can integrate and control multiple audio/video devices with different functions and can control digitized audio/video devices through network and serial communication. As a result of the study, it was confirmed that NAVICS enables both individual control and integrated control through the protocol provided by each audio/video device, and that even non-experts can easily control the audio/video system. In the future, network-based audio/video integrated control technology is expected to become the technical standard for complex audio/video system control.