• Title/Summary/Keyword: Virtual Sounds


Virtual Acoustics Field Simulation System for the Soundscape Reproduction in Public (공공장소의 음풍경 재현을 위한 가상음장현장재현시스템 개발)

  • Song, Hyuk;Kook, Chan;Jang, Gil-Soo
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.14 no.4 / pp.319-326 / 2004
  • The soundscape is a novel attempt to offer comfortable sound environments in urban public spaces by adding pleasant sounds and removing disagreeable ones. The most important factors to consider are what kinds of sounds to offer and how to adjust them to changing circumstances. However, installing, maintaining, and adjusting a soundscape system directly in the field entails numerous problems as well as high costs. It is therefore essential to devise a test method for analyzing and verifying the outcome before actual installation in the field takes place. This study aims to devise an instrument system that makes it easy to control numerous variables, reproduce the most agreeable sound sources, and verify their effects in a virtual simulated setting; the system is named the Virtual Acoustic Field Simulation System (VAFSS).

Proposal of a new method for learning of diesel generator sounds and detecting abnormal sounds using an unsupervised deep learning algorithm

  • Hweon-Ki Jo;Song-Hyun Kim;Chang-Lak Kim
    • Nuclear Engineering and Technology / v.55 no.2 / pp.506-515 / 2023
  • This study seeks a method for learning the engine sound of a diesel generator installed in a nuclear power plant after start-up with an unsupervised deep learning algorithm (a CNN autoencoder), and a new method for predicting diesel generator failure using it. To learn the sound of a diesel generator with a deep learning algorithm, sound data recorded before and after the start-up of two diesel generators were used. Recordings of 20 min and 2 h were cut into 7-s segments, and each segment was converted into a spectrogram image, yielding 1200 and 7200 spectrogram images, respectively. Using two different deep learning algorithms (a CNN autoencoder and binary classification), it was investigated whether the post-start sounds of the diesel generators could be learned as normal. Both approaches accurately classified the post-start sounds as normal and the pre-start sounds as abnormal. It was also confirmed that the deep learning algorithm could detect virtual abnormal sounds created by mixing unusual sounds with the post-start sounds. The unsupervised anomaly detection algorithm achieved an accuracy about 3% higher than the binary classification algorithm.
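The preprocessing pipeline this abstract describes — cutting long recordings into fixed 7-s pieces and turning each piece into a spectrogram image — can be sketched as follows. The sample rate, FFT size, and hop length here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def split_segments(audio, sr, seg_sec=7.0):
    """Cut a long mono recording into fixed-length segments (the paper uses 7 s)."""
    seg_len = int(sr * seg_sec)
    n = len(audio) // seg_len
    return audio[:n * seg_len].reshape(n, seg_len)

def spectrogram(segment, n_fft=512, hop=256):
    """Magnitude spectrogram via a plain Hann-windowed STFT; returns (freq, time)."""
    window = np.hanning(n_fft)
    frames = [segment[i:i + n_fft] * window
              for i in range(0, len(segment) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T
```

Each spectrogram array can then be saved as an image and fed to the autoencoder, whose reconstruction error on a segment serves as the anomaly score.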

A Range Dependent Structural HRTF Model for 3-D Sound Generation in Virtual Environments (가상현실 환경에서의 3차원 사운드 생성을 위한 거리 변화에 따른 구조적 머리전달함수 모델)

  • Lee, Young-Han;Kim, Hong-Kook
    • MALSORI / no.59 / pp.89-99 / 2006
  • This paper proposes a new structural head-related transfer function (HRTF) model for producing sounds in a virtual environment. The proposed model generates 3-D sound using a head model, a pinna model, and a newly proposed distance model, which handle azimuth, elevation, and distance, the three spatial attributes of 3-D sound, respectively. In particular, the distance model consists of a level-normalization block, a distal-region model, and a proximal-region model. To evaluate the performance of the proposed model, we set up an experiment in which each listener judged the distance of 3-D sound sources generated by the proposed method at predefined distances. The tests show that the proposed model yields an average distance error of 0.13-0.31 m when the sound source is rendered as if it were 0.5-2 m from the listener, which is comparable to the average distance error of human listening for an actual sound source.
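As a flavor of what a structural (as opposed to measured) HRTF component looks like, the classic Woodworth spherical-head approximation gives the interaural time difference from azimuth alone. This is a standard building block of head models of the kind the abstract mentions, not the authors' specific implementation; the head radius and speed of sound are typical assumed values:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Interaural time difference in seconds for a spherical head
    (Woodworth approximation): ITD = (a/c) * (theta + sin(theta)),
    with azimuth theta in radians measured from straight ahead."""
    th = math.radians(azimuth_deg)
    return (head_radius / c) * (th + math.sin(th))
```

A full structural model cascades such head, pinna, and range-dependent filters per ear.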


Design of Spontaneous Acoustic Field Reproducing System (II) (능동형 음장조성시스템의 설계(II))

  • Kook, Chan;Jang, Gil-Soo;Chon, Ji-Hyun;Shin, Yong-Gyu;Min, Byoung-Chul
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2006.05a / pp.964-969 / 2006
  • The soundscape is a novel attempt to offer comfortable sound environments in urban public spaces by adding pleasant sounds and removing disagreeable ones. The most important factors to consider are what kinds of sounds to offer and how to adjust them to changing circumstances. Nowadays, however, the audio system in almost every urban public space is merely a PA system playing CDs or radio broadcasts, so the provided sound is only what the operator intends. Moreover, providing a soundscape that fits the situation and the atmospheric conditions requires enormous effort and time, which is practically impossible with the existing PA systems installed in public spaces. Thus, a new soundscape reproduction system was developed on the basis of the earlier VAFSS (Virtual Acoustic Field Simulation System); it has the artificial intelligence to read the mood of the field and select an appropriate soundscape to reproduce. In this new system, various environmental sensors with standard voltage, current, or resistance outputs can be used simultaneously, and monitoring with video and sound is available via the TCP/IP communication protocol. Updating and controlling the system is very convenient, so the money, time, and effort of maintaining and providing soundscapes in public spaces can be greatly reduced. This new soundscape reproduction system is named the Virtual Acoustic Field Simulation System II (VAFSS II).
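The "read the mood of the field and select a soundscape" step could, in the simplest form, be a rule table over the environmental sensor readings. The sketch below is purely hypothetical — the thresholds, sensor set, and track names are invented for illustration and are not taken from the VAFSS II system:

```python
def select_soundscape(temp_c, humidity_pct, wind_ms, hour):
    """Hypothetical rule table mapping environmental sensor readings
    to a soundscape track, in the spirit of the VAFSS II selection step.
    All thresholds and track names are invented for illustration."""
    if wind_ms > 8.0:
        return "silence"        # wind noise would mask any reproduced sound
    if hour < 6 or hour >= 22:
        return "night_insects"  # quiet nocturnal ambience
    if temp_c > 28.0:
        return "stream_water"   # cooling association on hot days
    if humidity_pct > 85.0:
        return "light_rain"
    return "birdsong"
```

A learned classifier could replace the rule table without changing the sensor interface.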


Virtual Prototyping of Passenger Vehicle (승용차의 가상 프로토타이핑)

  • Ko, Jeong-Hun;Son, Kwon;Choi, kyung-Hyun
    • Transactions of the Korean Society of Automotive Engineers / v.7 no.5 / pp.230-239 / 1999
  • Virtual prototyping seeks a virtual environment in which vehicle models can be developed flexibly and rapidly, and in which experiments concerning the kinematics, control, and behavior of the models can be executed effectively. This paper explains a virtual environment used for virtual prototyping of a vehicle. It has been developed using the dVISE environment, which provides actions, events, sounds, and light features. A vehicle model is constructed that includes detailed information about a real-size vehicle. A human model is introduced for objective visual evaluation of the developed model, and results are illustrated to demonstrate the applicability of the developed models.


Virtual Prototyping Simulation for a Passenger Vehicle

  • Kwon Son;Park, Kyung-Hyun;Eom, Sung-Sook
    • Journal of Mechanical Science and Technology / v.15 no.4 / pp.448-458 / 2001
  • The primary goal of virtual prototyping is to eliminate the need for fabricating physical prototypes and to reduce the cost and time of developing new products. Virtual prototyping seeks to create a virtual environment where a new model can be developed flexibly and rapidly, and where experiments concerning the kinematics, dynamics, and control aspects of the model can be carried out effectively. This paper addresses the virtual environment used for virtual prototyping of a passenger vehicle. It has been developed using the dVISE environment, which provides such useful features as actions, events, sounds, and lights. A vehicle model, including its features and behaviors, is constructed using an object-oriented paradigm and contains detailed information about a real-size vehicle. A human model is also implemented, not only for visual and reach evaluations of the developed vehicle model but also for behavioral visualization during a crash test. For real-time driving simulation, a neural network model is incorporated into the virtual environment. Cases of a vehicle passing over bumps are discussed to demonstrate the applicability of the set of developed models.


A Study of Analysis about Virtual Musical Instruments' Timbre - Focused on Violin, Erhu, Haegeum - (가상악기의 음색 분석 연구 - 바이올린, 얼후, 해금을 중심으로 -)

  • Sung, Ki-Young;Lee, You-Jung
    • Journal of Korea Entertainment Industry Association / v.13 no.7 / pp.219-227 / 2019
  • In this paper, we first examine the structure and characteristics of the violin, the Chinese erhu, and the Korean haegeum, representative bowed string instruments, in order to compare and analyze their timbres. Many performers assume the violin sounds rich simply because it has many overtones, but they have been unable to fully explain why, or what gives the haegeum its unique tone. Whereas previous studies have mostly analyzed individual instruments' frequencies or related cases in acoustics, this study visualizes how the harmonic composition that determines an instrument's timbre is constituted, and presents data obtained by analyzing the sound pressure of each integer-multiple overtone so that the structure of each instrument's unique timbre can be understood. Based on this, we hope the work will aid the development of virtual instruments for Korean traditional instruments, which are relatively few compared with Western virtual instruments, by reproducing instrument sounds through synthesizers in the future.
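The per-overtone sound-pressure breakdown the study relies on can be sketched as reading the spectrum magnitude at integer multiples of the fundamental. The FFT framing and windowing choices below are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def harmonic_levels(signal, sr, f0, n_harmonics=10):
    """Relative level (dB, 0 dB = strongest harmonic) of each
    integer-multiple overtone of fundamental f0 in a mono signal."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    levels = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))  # nearest FFT bin to k*f0
        levels.append(spec[idx])
    levels = np.asarray(levels)
    return 20 * np.log10(levels / levels.max())
```

Plotting such level vectors side by side for violin, erhu, and haegeum recordings gives the kind of visual timbre comparison the abstract describes.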

Research on The Educational Courseware Based on VR Content

  • Lu, Kai;Cho, Dong Min
    • Journal of Korea Multimedia Society / v.25 no.3 / pp.502-509 / 2022
  • With the development of media technology, virtual reality (VR) is widely used in education, medical care, aerospace, entertainment, and other fields. Its application in teaching courseware is a relatively new topic. Compared with traditional coursewares, virtual games visualize and concretize abstract teaching content, strengthening teaching effects and expanding the dimensions of learning. We hypothesized that virtual coursewares could increase users' sense of presence and enhance their focus. In this study, virtual coursewares were compared with traditional coursewares, and the feasibility and advantages of their application were analyzed through literature research, practical research, and statistical analysis of questionnaires. Furthermore, we designed a teaching system for VR coursewares and explored its performance in multidimensional and contextual teaching situations. Virtual coursewares were found to change boring traditional teaching methods: the teaching content is displayed as three-dimensional images, videos, and sounds through VR equipment, which effectively improves teaching efficiency. In addition, the feasibility of virtual courseware was demonstrated through factor analysis of the questionnaires. Compared with traditional teaching courseware, VR coursewares can attract students' attention and improve learning efficiency. This work provides a good example and is valuable for research on virtual reality in education.

Practical Application of Virtual Acoustic Field Simulation System (VAFSS) (능동형 음장조성시스템의 적용 사례)

  • Park, Sa-Keun;Jang, Gil-Soo;Kook, Chan;Song, Min-Jeong;Jeon, Ji-Hyeon;Shin, Hoon
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2006.05a / pp.738-741 / 2006
  • The Virtual Acoustic Field Simulation System (VAFSS) has been developed through soundscape research aimed at making comfortable acoustic environments in urban public places. The system can suggest sounds suitable for a given area, and it lends the area vitality and amenity by responding to the local time, temperature, humidity, wind velocity, and sunshine. In this paper, the applicability of VAFSS is examined through a case study of how it can be adapted to the D University square.


A Study on "A Midsummer Night's Palace" Using VR Sound Engineering Technology

  • Seok, MooHyun;Kim, HyungGi
    • International Journal of Contents / v.16 no.4 / pp.68-77 / 2020
  • VR (virtual reality) contents make the audience perceive a virtual space as real through the virtual Z axis, which creates a spatial depth that 2D cannot, owing to the distance between the viewer's eyes. This visual change has created a need for corresponding changes in the sound and sound sources inserted into VR contents. However, studies on increasing immersion in VR contents still focus more on the scientific and visual fields, because composing and producing VR sound requires expertise in two areas: sound-based engineering and computer-based interactive sound engineering. Sound-based engineering has difficulty reflecting changes in user interaction or in time and space, since the sound effects, script sound, and background music are directed according to the storyboard organized by the director; however, it has the advantage that the sound effects, script sound, and background music are produced on one track and no coding phase is needed. Computer-based interactive sound engineering, on the other hand, produces the sound effects, script sound, and background music as separate files. It can increase immersion by reflecting user interaction or time and space, but it can also suffer from noise cancelling and sound collisions. In this study, therefore, the following method was devised to produce the sound for the VR content "A Midsummer Night" so as to take advantage of each sound-making technology. First, the storyboard is analyzed according to the user's interactions, identifying the sound effects, script sound, and background music required for each interaction. Second, the sounds are classified and analyzed as 'simultaneous sounds' and 'individual sounds'. Third, interaction coding is carried out for the sound effects, script sound, and background music produced in the simultaneous-sound and individual-sound categories. Finally, the contents are completed by applying the sound to the video. Through this process, sound-quality inhibitors such as noise cancelling can be removed while producing sound that fits user interaction and time and space.
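The simultaneous/individual split described in the abstract can be sketched as a tiny event-driven mixer: simultaneous sounds always play, while individual sounds are triggered by interaction events. The sound names and events below are invented for illustration; a real VR engine would play audio assets rather than return track names:

```python
# Always-playing ("simultaneous") tracks versus interaction-triggered
# ("individual") tracks.  All names here are hypothetical examples.
SIMULTANEOUS = {"background_music", "forest_ambience"}
INDIVIDUAL = {
    "door_open": "door_creak",
    "touch_flower": "fairy_chime",
    "walk": "footsteps",
}

def active_sounds(events):
    """Return the set of tracks to mix this frame: every simultaneous
    sound plus the individual sounds mapped from the current events.
    Unrecognized events are simply ignored."""
    triggered = {INDIVIDUAL[e] for e in events if e in INDIVIDUAL}
    return SIMULTANEOUS | triggered
```

Keeping the two categories in separate structures is what lets interaction coding touch only the individual sounds, avoiding collisions with the continuous background layer.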