• Title/Summary/Keyword: Virtual Sound Source


A Study on Enhancement of 3D Sound Using Improved HRTFS (개선된 머리전달함수를 이용한 3차원 입체음향 성능 개선 연구)

  • Koo, Kyo-Sik;Cha, Hyung-Tai
    • The Journal of the Acoustical Society of Korea / v.28 no.6 / pp.557-565 / 2009
  • To perceive the direction and distance of a sound, listeners rely on several cues. The Head Related Transfer Function (HRTF) describes how sound travels from a source to the listener's ears, capturing differences in level, phase, and frequency spectrum. In two-channel reproduction systems, HRTFs are applied in many algorithms that create 3D sound. However, this approach has difficulty localizing sound sources in certain regions, a problem known as the cone of confusion. In this paper, we propose a new algorithm to reduce this confusion in sound image localization. Spectral differences and psychoacoustic theory are used to boost the spectral cues among directions. Informal listening tests were carried out to confirm the performance of the algorithm. As a result, improved 3D sound can be produced in a headphone-based two-channel system, and its sound quality is much better than that of conventional methods.

HRTF Enhancement Algorithm for Stereo Sound Systems (스테레오 시스템을 위한 머리전달함수의 개선)

  • Koo, Kyo-Sik;Cha, Hyung-Tai
    • The Journal of the Acoustical Society of Korea / v.27 no.4 / pp.207-214 / 2008
  • To create 3D sound, two approaches are commonly used: two-channel or multichannel sound systems. Because of cost and space constraints, two-channel systems are generally preferred. Using a headphone or two speakers, the most typical way to create 3D sound effects is the head related transfer function (HRTF), which describes how sound travels from a source to the listener's ears. However, this approach has difficulty localizing sound sources in certain regions, a problem known as the cone of confusion. In this paper, we propose a new algorithm to reduce this confusion in sound image localization. HRTF grouping and psychoacoustic theory are used to boost the spectral cue based on spectral differences among directions. Informal listening tests show that the proposed method improves front-back sound localization much better than conventional methods.

Improvement of Head Related Transfer Function to Create Realistic 3D Sound (현실감있는 입체음향 생성을 위한 머리전달함수의 개선)

  • Koo, Kyo-Sik;Cha, Hyung-Tai
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.3 / pp.381-386 / 2008
  • Virtual 3D audio methods that create 3D sound effects over two speakers or headphones are being actively researched for multimedia devices. The most typical approach uses the head related transfer function (HRTF), which describes how sound travels from a source to the listener's ears. However, non-individualized HRTFs can degrade the 3D effect through front-back cone-of-confusion errors, since HRTFs differ from listener to listener. In this paper, we propose a new method based on psychoacoustic theory that creates realistic 3D audio. To improve the 3D sound, we calculate the excitation energy of each symmetric HRTF and extract the energy ratio of each Bark band. Informal listening tests show that the proposed method improves front-back sound localization much better than conventional methods.
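The per-band energy ratio described in the abstract above can be sketched in a few lines. This is a minimal illustration rather than the paper's implementation: the band edges below approximate only the first few Bark bands, and `band_energies`/`energy_ratios` are hypothetical helper names.

```python
# Assumed band edges (Hz) approximating the first few Bark critical bands.
BARK_EDGES = [0, 100, 200, 300, 400, 510, 630, 770, 920, 1080]

def band_energies(mag_spectrum, freqs, edges=BARK_EDGES):
    """Sum |H(f)|^2 of a magnitude spectrum within each band."""
    energies = [0.0] * (len(edges) - 1)
    for f, m in zip(freqs, mag_spectrum):
        for b in range(len(edges) - 1):
            if edges[b] <= f < edges[b + 1]:
                energies[b] += m * m
                break
    return energies

def energy_ratios(front_energies, back_energies, eps=1e-12):
    """Per-band energy ratio between two symmetric directions,
    usable as a spectral cue to separate front from back."""
    return [f / (b + eps) for f, b in zip(front_energies, back_energies)]
```

With such ratios in hand, bands where the front/back ratio deviates strongly from 1 are the natural candidates for spectral-cue boosting.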

Effect on Audio Play Latency for Real-Time HMD-Based Headphone Listening (HMD를 이용한 오디오 재생 기술에서 Latency의 영향 분석)

  • Son, Sangmo;Jo, Hyun;Kim, Sunmin
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2014.10a / pp.141-145 / 2014
  • The maximum acceptable audio processing delay for rendering virtual sound source directions is investigated for real-time head tracking under headphone listening. Angular mismatch must be kept below 3.7 degrees to keep desired sound source directions perceptually fixed while listeners rotate their heads in the horizontal plane. The angular mismatch is proportional to the head rotation speed and the data processing delay. For a head rotation of 20 degrees/s, a relatively slow head movement, the total data processing delay should be kept below 63 ms.
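The proportionality stated in the abstract lends itself to a quick back-of-the-envelope check. The sketch below assumes the simple model mismatch = rotation speed × delay; the paper's 63 ms budget is tighter than the naive bound this model gives, presumably because it accounts for additional system delays, so treat the numbers as illustrative only.

```python
def angular_mismatch(rotation_speed_deg_s, delay_s):
    """Angular error (degrees) under the naive model: speed x delay."""
    return rotation_speed_deg_s * delay_s

def max_delay_for_mismatch(max_mismatch_deg, rotation_speed_deg_s):
    """Largest delay (seconds) that keeps the mismatch under the limit."""
    return max_mismatch_deg / rotation_speed_deg_s

# A 20 degree/s rotation with a 63 ms total delay stays well under 3.7 degrees:
mismatch = angular_mismatch(20.0, 0.063)           # 1.26 degrees
naive_limit = max_delay_for_mismatch(3.7, 20.0)    # 0.185 s under this model
```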


Implementation of Stereophonic Sound System Using Multiple Smartphones (여러 대의 스마트폰을 이용한 입체 음향 시스템 구현)

  • Kim, Ki-Jun;Myeong, Chang-Ho;Park, Hochong
    • Journal of Broadcast Engineering / v.19 no.6 / pp.810-818 / 2014
  • In this paper, we propose a stereophonic sound system using multiple smartphones. In conventional smartphone-based sound systems, all devices play the same signal, making it difficult to provide a true stereophonic effect. To solve this problem, we propose a novel sound system that can generate a virtual sound source at any location by having smartphones at different positions play different signals with amplitude panning. The proposed system generates a more realistic stereophonic effect than conventional systems and allows the sound effect to be controlled by user commands. We implemented the system on commercial smartphones and verified that it effectively provides the desired stereophonic effect.
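Amplitude panning of the kind mentioned above is commonly implemented with a constant-power law, which keeps the perceived loudness steady as the virtual source moves between channels. The sketch below is a generic two-channel version, not the paper's multi-device algorithm; `constant_power_pan` is a hypothetical name.

```python
import math

def constant_power_pan(sample, pan):
    """Constant-power stereo panning.

    pan in [-1, 1]: -1 = full left, 0 = center, +1 = full right.
    Gains follow cos/sin of an angle in [0, pi/2], so that
    left_gain^2 + right_gain^2 == 1 at every pan position.
    """
    theta = (pan + 1.0) * math.pi / 4.0   # map pan to [0, pi/2]
    left = sample * math.cos(theta)
    right = sample * math.sin(theta)
    return left, right

# Center pan splits a unit sample equally (~0.707 per channel):
left, right = constant_power_pan(1.0, 0.0)
```

Extending this to several devices amounts to distributing gains over more than two channels while preserving the same total-power constraint.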

Numerical Simulation of Head Related Transfer Functions and Sound Fields (수치해석을 이용한 머리전달함수의 계산 및 음장해석)

  • V. Kahana;P. A. Nelson;M. Petyt
    • The Journal of the Acoustical Society of Korea / v.20 no.6 / pp.94-103 / 2001
  • The goal of using numerical methods in this study is two-fold: to replicate a set of measured, individualized HRTFs by computer simulation, and to visualise the resultant sound field around the head. Two methods can be used: the Boundary Element Method (BEM) and the Infinite-Finite Element Method (IFEM). This paper presents the results of a preliminary study carried out on a KEMAR dummy head, whose geometry was captured with a high-accuracy 3D laser scanner and digitiser. The scanned computer model was converted into several valid BEM and IFEM meshes with different polygon resolutions, enabling the simulation to be optimised for different frequency ranges. The results show good agreement between simulations and measurements of the sound pressure at the blocked ear canal of the dummy head. The principle of reciprocity provides an effective method for simulating an HRTF database. The BEM was also used to investigate the total sound field around the head, providing a tool to visualise the sound field for different arrangements of virtual acoustic imaging systems.


Preliminary Design and Implementation of 3D Sound Play Interface for Graphic Contents Developer (그래픽 콘텐츠 개발자를 위한 입체음 재생 인터페이스 기본 설계 및 구현)

  • Won, Yong-Tae;Jang, Bong-Seog;Ahn, Dong-Soon;Kwak, Hoon-Sung
    • Journal of Digital Contents Society / v.9 no.2 / pp.203-211 / 2008
  • With advances in the hardware and software techniques for playing 3D sound, virtual spaces built from 3D graphics and sound can provide users with greater realism and vividness. For small 3D content developers and companies, however, implementing 3D sound is difficult because it requires expensive sound engines, an understanding of 3D sound technology, and 3D sound programming skills. A 3D sound playing interface is therefore needed for easy and cost-effective 3D sound implementation; with such an interface, graphics experts can easily add 3D sound to their applications. In this paper, as a preliminary stage toward developing this interface, the following are designed and implemented. First, we develop 3D sound software modules that convert mono sound to 3D sound on PC-based systems. Second, we develop interconnection modules that map 3D graphic objects to sound sources. The modules developed in this paper allow the user to perceive sound source positions and surround effects from moving positions in the virtual world. In future work, we plan to develop a more complete 3D sound playing interface, including synchronization of sound with moving objects and HRTF support.


A Real Time 6 DoF Spatial Audio Rendering System based on MPEG-I AEP (MPEG-I AEP 기반 실시간 6 자유도 공간음향 렌더링 시스템)

  • Kyeongok Kang;Jae-hyoun Yoo;Daeyoung Jang;Yong Ju Lee;Taejin Lee
    • Journal of Broadcast Engineering / v.28 no.2 / pp.213-229 / 2023
  • In this paper, we introduce a spatial sound rendering system that provides 6DoF spatial sound in real time in response to the movement of a listener located in a virtual environment. The system was implemented using MPEG-I AEP as the development environment for the CfP response of MPEG-I Immersive Audio, and consists of an encoder and a renderer that includes a decoder. The encoder encodes, offline, metadata such as the spatial audio parameters of the virtual-space scene included in the EIF and the sound source directivity information provided in the SOFA file, and delivers them in the bitstream. The renderer receives the transmitted bitstream and performs 6DoF spatial sound rendering in real time according to the listener's position. The main spatial sound processing technologies applied in the rendering system are sound source effects and obstacle effects; other system-level processing includes the Doppler effect and sound field effects. Results of a subjective self-evaluation of the developed system are presented.

Headphone-based multi-channel 3D sound generation using HRTF (HRTF를 이용한 헤드폰 기반의 다채널 입체음향 생성)

  • Kim Siho;Kim Kyunghoon;Bae Keunsung;Choi Songin;Park Manho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.1 / pp.71-77 / 2005
  • In this paper, we implement a headphone-based 5.1-channel 3-dimensional (3D) sound generation system using HRTFs (Head Related Transfer Functions). Each mono sound source in the 5.1-channel signal is localized at its virtual location by binaural filtering with the corresponding HRTFs, and reverberation is added for spatialization. To reduce the computational burden, we reduce the number of taps in the HRTF impulse response and model the early reverberation with several tens of impulses extracted from the full impulse sequences. We also modify the HRTF spectrum by weighting the front-back spectral difference to reduce the front-back confusion caused by a non-individualized HRTF database. Informal listening tests confirm that the implemented system generates lively, rich 3D sound compared with simple stereo or two-channel downmixing.
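Binaural filtering with HRTFs, as described in the abstract above, amounts to convolving the mono source with a left-ear and a right-ear head-related impulse response (HRIR). The sketch below uses a naive direct convolution and placeholder coefficients; real systems use measured HRIR databases (with hundreds of taps per ear) and FFT-based filtering, and `binauralize` is a hypothetical name.

```python
def convolve(x, h):
    """Direct-form FIR convolution of signal x with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source to two ears by filtering with each ear's HRIR."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Placeholder 2-tap HRIRs, illustrative only:
left, right = binauralize([1.0, 0.0, 0.0], [0.5, 0.25], [0.3, 0.1])
```

Truncating the HRIR tap count, as the abstract describes, directly shrinks the inner loop here, which is why it reduces the computational burden.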

Reduced Raytracing Approach for Handling Sound Map with Multiple Sound Sources, Wind Advection and Temperature

  • Jong-Hyun Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.55-62 / 2023
  • In this paper, we present a method that uses geometry-based sound generation techniques to efficiently handle multiple sound sources, wind turbulence, and temperature-dependent interactions. Recently, a reduced-raytracing method was proposed to update sound positions and efficiently calculate sound propagation and diffraction without recursive reflection/refraction of many rays; however, that approach considers only the propagation characteristics of sound, not the interaction of multiple sound sources, wind currents, and temperature. These limitations make it difficult to create sound scenes for diverse virtual environments, because only static sounds can be generated. In this paper, we propose a method for efficiently constructing a sound map when multiple sounds are placed, and a method for efficiently controlling the movement of an agent through it. In addition, we propose a method for controlling sound propagation that takes wind currents and temperature into account. The proposed method can be used in various fields such as metaverse environment design and crowd simulation, as well as in games, where sound can improve content immersion.
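The idea of a sound map holding contributions from several sources, with wind shifting where each source is heard, can be illustrated with a deliberately naive sketch. This is not the paper's reduced-raytracing algorithm: it ignores diffraction and temperature entirely, uses a softened inverse-square falloff, and `build_sound_map` is a hypothetical name.

```python
def build_sound_map(width, height, sources, wind=(0.0, 0.0), dt=1.0):
    """Accumulate attenuated intensity from several sources onto a 2D grid.

    sources: list of (x, y, power) tuples.
    wind: (vx, vy) advection velocity; each source's apparent position
          is shifted by wind * dt before its contribution is summed.
    """
    grid = [[0.0] * width for _ in range(height)]
    for (sx, sy, power) in sources:
        ax, ay = sx + wind[0] * dt, sy + wind[1] * dt  # advected position
        for y in range(height):
            for x in range(width):
                d2 = (x - ax) ** 2 + (y - ay) ** 2
                grid[y][x] += power / (1.0 + d2)       # softened falloff
    return grid

# Two sources on a small grid, with a light eastward wind:
sound_map = build_sound_map(8, 8, [(2, 2, 1.0), (5, 5, 2.0)], wind=(0.5, 0.0))
```

An agent navigating such a grid could, for instance, follow the local intensity gradient toward (or away from) the loudest cell.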