• Title/Summary/Keyword: perceptual audio

74 search results

An Efficient PN Sequence Embedding and Detection Method for High Quality Digital Audio Watermarking (고음질 디지털 오디오 워터마킹을 위한 효율적인 PN 시퀸스 삽입 및 검출 방법)

  • 김현욱;오현오;김연정;윤대희
    • Journal of Broadcast Engineering
    • /
    • v.6 no.1
    • /
    • pp.21-31
    • /
    • 2001
  • In a PN-sequence-based audio watermarking system, the PN sequence is shaped by a filter derived from the psychoacoustic model to increase robustness and inaudibility. The psychoacoustic model, however, must be calculated for each audio segment and requires a heavy computational load. In this paper, we propose an efficient watermarking system adopting a fixed-shape perceptual filter that substitutes for the psychoacoustic-model-derived filter. The proposed filter shapes the PN sequence to be inaudible and enables embedding a robust watermark in a simple manner. Moreover, we propose an architecture for a PN-sequence compensation filter in the watermark detector to increase the correlation between the watermark and the PN sequence. With the proposed architecture, blind watermark detection performance is enhanced.

  • PDF
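
The correlation-based detection this abstract builds on can be illustrated as plain additive spread-spectrum watermarking. This is a minimal NumPy sketch, not the paper's system: the fixed-shape perceptual filter and the compensation filter are omitted, and the embedding strength `alpha` and all names are illustrative.

```python
import numpy as np

def embed_watermark(audio, pn, alpha=0.1):
    """Additive spread-spectrum embedding: host signal plus a scaled PN sequence."""
    return audio + alpha * pn

def detect_bit(received, pn):
    """Blind correlation detector: the sign of <received, pn> recovers the
    embedded bit (+1/-1); the uncorrelated host signal acts as noise."""
    return 1 if float(np.dot(received, pn)) > 0 else -1

rng = np.random.default_rng(0)
audio = rng.standard_normal(4096)          # stand-in for one host audio segment
pn = rng.choice([-1.0, 1.0], size=4096)    # bipolar PN sequence
marked = embed_watermark(audio, pn)
print(detect_bit(marked, pn))              # 1 (the embedded "+1" bit)
```

Psychoacoustic (or fixed-shape perceptual) filtering would spectrally shape `alpha * pn` before addition so the watermark stays below the masking threshold.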

Human Laughter Generation using Hybrid Generative Models

  • Mansouri, Nadia;Lachiri, Zied
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.5
    • /
    • pp.1590-1609
    • /
    • 2021
  • Laughter is one of the most important nonverbal sounds that humans generate; it is a means of expressing emotion. The acoustic and contextual features of this specific sound differ from those of speech, and many difficulties arise in modeling them. In this work, we propose an audio laughter generation system based on unsupervised generative models: the autoencoder (AE) and its variants. The procedure combines three main sub-processes: (1) analysis, which extracts the log-magnitude spectrogram from the laughter database; (2) generative model training; and (3) synthesis, which involves an intermediate mechanism, the vocoder. To improve synthesis quality, we suggest three hybrid models (LSTM-VAE, GRU-VAE and CNN-VAE) that combine the representation learning capacity of the variational autoencoder (VAE) with the temporal modelling ability of the long short-term memory RNN (LSTM) and the CNN's ability to learn invariant features. To assess the performance of the proposed audio laughter generation process, an objective evaluation (RMSE) and a perceptual audio quality test (listening test) were conducted. According to these evaluation metrics, the GRU-VAE outperforms the other VAE models.
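
The VAE machinery shared by these hybrid models rests on two standard pieces: the reparameterization trick and a closed-form KL term. The following is a minimal NumPy sketch of just those two pieces, assuming a diagonal-Gaussian posterior; the latent size is illustrative and no encoder/decoder networks are included.

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    which keeps sampling differentiable w.r.t. the encoder outputs."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) term of the VAE loss."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

mu = np.zeros(16)        # encoder mean for one latent vector (illustrative size)
log_var = np.zeros(16)   # log-variance; zeros means a unit Gaussian
z = reparameterize(mu, log_var)
print(z.shape, kl_divergence(mu, log_var) == 0.0)  # (16,) True
```

In the hybrid models, `mu` and `log_var` would be produced by an LSTM, GRU, or CNN encoder over spectrogram frames.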

Audio Contents Adaptation Technology According to User′s Preference on Sound Fields (사용자의 음장선호도에 따른 오디오 콘텐츠 적응 기술)

  • 강경옥;홍재근;서정일
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.6
    • /
    • pp.437-445
    • /
    • 2004
  • In this paper, we describe a novel method for transforming audio contents according to the user's sound-field preference. Sound-field effect technologies, which transform or simulate acoustic environments according to the user's preference, are very important for enhancing the reality of an acoustic scene. However, a huge amount of computational power is required to process sound-field effects in real time, so it is hard to implement this functionality on portable audio devices such as MP3 players. In this paper, we propose an efficient method for providing sound-field effects to audio contents independently of the terminal's computational power, by processing this functionality at the server using the user's sound-field preference transferred from the terminal side. To describe the sound-field preference, the user can use perceptual acoustic parameters as well as the URI address of a room impulse response signal. In addition, a novel fast convolution method is presented to implement a sound-field effect engine that convolves the audio with a room impulse response signal, and it is verified through experiments to be applicable to real-time applications. To verify the benefit of the proposed method, we performed two subjective listening tests on sound-field discrimination ability and on preference for sound-field-processed sounds. The results showed that the proposed sound-field preference description can be applied to the public.
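
Sound-field processing by convolution with a room impulse response is commonly accelerated with FFT-based convolution. The abstract does not spell out the paper's specific fast-convolution method, so the sketch below shows plain FFT convolution with a toy synthetic impulse response; all names are illustrative.

```python
import numpy as np

def fft_convolve(signal, impulse_response):
    """Fast linear convolution via the FFT: same result as direct (full)
    convolution but O(N log N) instead of O(N^2)."""
    n = len(signal) + len(impulse_response) - 1
    size = 1 << (n - 1).bit_length()          # zero-pad to the next power of two
    spectrum = np.fft.rfft(signal, size) * np.fft.rfft(impulse_response, size)
    return np.fft.irfft(spectrum, size)[:n]

rng = np.random.default_rng(2)
dry = rng.standard_normal(1024)                                   # dry audio segment
rir = rng.standard_normal(256) * np.exp(-np.arange(256) / 32.0)   # toy decaying "room" response
wet = fft_convolve(dry, rir)
print(np.allclose(wet, np.convolve(dry, rir)))  # True: matches direct convolution
```

Real-time engines typically go further with block (overlap-add/overlap-save) or partitioned convolution so long impulse responses can be processed with low latency.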

The Implementation of Multi-Channel Audio Codec for Real-Time operation (실시간 처리를 위한 멀티채널 오디오 코덱의 구현)

  • Hong, Jin-Woo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.2E
    • /
    • pp.91-97
    • /
    • 1995
  • This paper describes the implementation of a multi-channel audio codec for HDTV. The codec features 3/2-stereo plus low-frequency enhancement, downward compatibility with smaller numbers of channels, backward compatibility with the existing 2/0-stereo system (MPEG-1 audio), and multilingual capability. The encoder consists of a 6-channel analog audio input part with a sampling rate of 48 kHz, a 4-channel digital audio input part, and three TMS320C40 DSPs. It implements multi-channel audio compression using a model of human auditory perception (a psychoacoustic model), reducing the bit rate to 384 kbit/s without impairment of subjective quality. The decoder consists of a 6-channel analog audio output part, a 4-channel digital audio output part, and two TMS320C40 DSPs for the decoding procedure. It analyzes the bit stream received at 384 kbit/s from the encoder and reproduces the multi-channel audio signals at the analog and digital outputs. Multi-processing in this audio codec with multiple DSPs is ensured by high-speed data transfer between the DSPs, coordinating communication-port activities with the DMA coprocessors. Finally, some technical considerations for achieving real-time operation are suggested, drawn from implementing this codec with the MPEG-2 Layer II audio coding algorithm on a hardware architecture built from commercial DSPs.

  • PDF
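
The abstract's figures imply roughly a 12:1 compression ratio, assuming 16-bit PCM sources; the bit depth is my assumption, not stated in the abstract.

```python
# Compression ratio implied by the codec's figures: 6 channels at 48 kHz
# compressed to 384 kbit/s. 16-bit samples are assumed (not stated).
channels = 6
sample_rate_hz = 48_000
bits_per_sample = 16
pcm_rate_kbps = channels * sample_rate_hz * bits_per_sample / 1000
coded_rate_kbps = 384
print(pcm_rate_kbps, pcm_rate_kbps / coded_rate_kbps)  # 4608.0 12.0
```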

Sinusoidal Modeling of Polyphonic Audio Signals Using Dynamic Segmentation Method (동적 세그멘테이션을 이용한 폴리포닉 오디오 신호의 정현파 모델링)

  • 장호근;박주성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.4
    • /
    • pp.58-68
    • /
    • 2000
  • This paper proposes sinusoidal modeling of polyphonic audio signals. Sinusoidal modeling, which has been applied successfully to speech and monophonic signals, cannot be applied directly to polyphonic signals because no single analysis window size suits the entire signal. In addition, for a high-quality synthesized signal, transient parts such as attacks, which determine the timbre of a musical instrument, should be preserved. In this paper, a multiresolution filter bank is designed that splits the input signal into six octave-spaced subbands without aliasing, and sinusoidal modeling is applied to each subband signal. To alleviate smearing of transients in sinusoidal modeling, a dynamic segmentation method is applied to the subbands, which determines the analysis-synthesis frame size adaptively to fit the time-frequency characteristics of the subband signal. An improved dynamic segmentation is proposed that shows better performance on transients with reduced computation. For various polyphonic audio signals, simulation results show that the proposed sinusoidal modeling can model polyphonic audio signals without loss of perceptual quality.

  • PDF
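
The core idea of dynamic segmentation, short analysis frames around transients and long frames in steady-state regions, can be illustrated with a toy energy-jump detector. This is not the paper's algorithm; the frame sizes and threshold below are illustrative.

```python
import numpy as np

def choose_frame_sizes(signal, long_size=2048, short_size=256, threshold=4.0):
    """Toy dynamic segmentation: walk through the signal, and whenever the
    local energy jumps sharply (a likely transient) use a short analysis
    frame; otherwise use a long one. Returns (start, frame_size) pairs."""
    sizes, pos, prev_energy = [], 0, None
    while pos < len(signal):
        probe = signal[pos:pos + short_size]
        energy = float(np.sum(probe**2)) + 1e-12
        transient = prev_energy is not None and energy / prev_energy > threshold
        size = short_size if transient else long_size
        sizes.append((pos, size))
        prev_energy = energy
        pos += size
    return sizes

# Quiet steady sine, then a sudden loud attack at sample 4096:
t = np.arange(8192) / 48000.0
x = 0.1 * np.sin(2 * np.pi * 440 * t)
x[4096:] += np.sin(2 * np.pi * 2000 * t[4096:])
segments = choose_frame_sizes(x)
print(segments)  # the attack at 4096 gets a short frame, the rest long ones
```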

Stereo Audio Matched with 3D Video (3D영상에 정합되는 스테레오 오디오)

  • Park, Sung-Wook;Chung, Tae-Yun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.2
    • /
    • pp.153-158
    • /
    • 2011
  • This paper presents subjective experimental results on how audio should change when a video clip is watched in 3D rather than 2D. We divided auditory perceptual information into two categories: distance and azimuth, to which a sound source contributes most, and spaciousness, to which the scene or environment contributes most. In the experiment on distance and azimuth, i.e. sound localization, we found that the distance and azimuth of sound sources were magnified when heard with 3D rather than 2D video. This led us to conclude that 3D sound for localization should be designed with greater distance and azimuth than 2D sound. We also found that 3D sound is preferred not only with 3D video clips but also with 2D video clips. In the experiment on spaciousness, we found that people prefer sound with more reverberation when watching 3D video clips than 2D video clips. This can be understood as 3D video providing more spatial information than 2D video. These subjective results can help audio engineers familiar with 2D audio create 3D audio, and serve as fundamental information for future research on 2D-to-3D audio conversion systems. Furthermore, for a 3D broadcasting system with limited bandwidth that remains compatible with 2D TV, we propose transmitting stereoscopic video, audio with enhanced localization, and metadata from which TV sets can generate reverberation for spaciousness.

Adaptation for Object-based MPEG-4 Content with Multiple Streams (다중 스트림을 이용한 객체기반 MPEG-4 컨텐트의 적응 기법)

  • Cha Kyung-Ae
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.11 no.3
    • /
    • pp.69-81
    • /
    • 2006
  • In this paper, an adaptive algorithm is proposed for streaming MPEG-4 content under fluctuating resources such as network throughput. In adaptive streaming, much research has addressed representing an encoded media (such as video) bitstream in a scalable way. By contrast, MPEG-4 supports object-based multimedia content composed of various types of media streams, such as audio, video, images and other graphical elements. Thus, it can be more effective to provide individual media streams in a scalable way when streaming object-based content to heterogeneous environments. The proposed method provides multiple media streams per object, with different qualities and bit rates, to support object-based scalability in MPEG-4 content. In addition, an optimal selection of the multiple streams for each object under a given constraint is proposed. The selection process is formulated as a multiple-choice knapsack problem with multi-step selection over MPEG-4 objects with different scalability levels. The proposed algorithm drives the selection to maintain the perceptual quality of the more important objects on a best-effort basis. Experimental results show that the selected set of media streams meets the current transmission conditions with higher perceptual quality.

  • PDF

Time-Scale Modification of Polyphonic Audio Signals Using Sinusoidal Modeling (정현파 모델링을 이용한 폴리포닉 오디오 신호의 시간축 변화)

  • 장호근;박주성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.2
    • /
    • pp.77-85
    • /
    • 2001
  • This paper proposes a method for time-scale modification of polyphonic audio signals based on a sinusoidal model. The signal is modeled as a sinusoidal component plus a noise component. A multiresolution filter bank is designed that splits the input signal into six octave-spaced subbands without aliasing, and sinusoidal modeling is applied to each subband signal. To alleviate smearing of transients in time-scale modification, a dynamic segmentation method is applied to the subbands, which determines the analysis-synthesis frame size adaptively to fit the time-frequency characteristics of the subband signal. To extract the sinusoidal components and calculate their parameters, a matching pursuit algorithm is applied to each analysis frame of the subband signal. In the spectrum analysis, a psychoacoustic model implementing frequency masking is incorporated into the matching pursuit to provide a reasonable stopping condition for the iteration and to reduce the number of sinusoids. The noise component, obtained by subtracting the signal synthesized from the sinusoidal components from the original signal, is modeled by a line-segment model of the short-time spectral envelope. For various polyphonic audio signals, simulation results show that the proposed sinusoidal modeling can synthesize the original signal without loss of perceptual quality and performs robust, high-quality time-scale modification even for large scale factors, because transients are represented without perceptual loss.

  • PDF
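
Extracting sinusoids with matching pursuit can be sketched as a greedy pick-and-subtract loop over DFT bins. The paper's masking-based stopping rule is replaced here by a plain residual-energy threshold, and all parameters are illustrative.

```python
import numpy as np

def matching_pursuit_sines(frame, max_sines=8, energy_ratio=1e-3):
    """Greedy matching pursuit over a dictionary of DFT-bin sinusoids:
    repeatedly pick the strongest bin, subtract that sinusoid from the
    residual, and stop when the residual energy is negligible."""
    n = len(frame)
    residual = frame.astype(float).copy()
    target = energy_ratio * float(np.sum(frame**2))
    params = []
    for _ in range(max_sines):
        spectrum = np.fft.rfft(residual)
        k = int(np.argmax(np.abs(spectrum[1:]))) + 1     # strongest bin, skip DC
        amp = 2.0 * np.abs(spectrum[k]) / n
        phase = float(np.angle(spectrum[k]))
        atom = amp * np.cos(2 * np.pi * k * np.arange(n) / n + phase)
        residual -= atom
        params.append((k, amp, phase))
        if float(np.sum(residual**2)) < target:
            break
    return params, residual

# Two exact-bin cosines are recovered in order of strength:
n = 1024
t = np.arange(n)
x = 0.8 * np.cos(2 * np.pi * 10 * t / n) + 0.3 * np.cos(2 * np.pi * 40 * t / n + 1.0)
params, residual = matching_pursuit_sines(x)
print([k for k, _, _ in params])  # [10, 40]
```

Replacing the energy threshold with a frequency-masking test, stopping once the residual falls below the masking curve, yields the kind of perceptual stop condition the abstract describes.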

Performance Evaluation of MCLT-based Audio Watermark in DTV System (DTV 시스템에서의 MCLT 기반 오디오 워터마크 성능 평가)

  • Jeong, Youngho;Lee, Misuk;Lee, Taejin;Kim, Huiyong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2017.06a
    • /
    • pp.219-222
    • /
    • 2017
  • In this paper, we evaluate an MCLT (Modulated Complex Lapped Transform)-based audio watermark algorithm using a PN sequence in a DTV system, analyzing the watermark's robustness to audio signal compression and the degree of audio quality degradation caused by watermark embedding through BER and PEAQ (Perceptual Evaluation of Audio Quality) measurements. To this end, test broadcast content was produced for each program genre, taking audio signal characteristics into account, and a DTV transmission/reception system was built for the lab test. Performance evaluation across audio encoding bit rates showed that, excluding the commercial content, the 192 kbps bit rate outperformed the 128 kbps bit rate by 0.0767 in average BER (%). In the objective audio quality evaluation of watermark embedding, the PEAQ score was about -0.2, indicating a very small quality difference from the original audio signal; furthermore, beyond the quality degradation caused by signal compression in the DTV system itself, almost no additional degradation attributable to watermark embedding was observed.

  • PDF
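
The BER figure reported above is simply the fraction of watermark payload bits detected incorrectly, expressed as a percentage; a minimal sketch with illustrative bit sequences:

```python
import numpy as np

def bit_error_rate(sent, received):
    """BER (%) between the embedded and the detected watermark payloads."""
    sent, received = np.asarray(sent), np.asarray(received)
    return 100.0 * float(np.mean(sent != received))

sent     = [1, 0, 1, 1, 0, 0, 1, 0]
received = [1, 0, 1, 0, 0, 0, 1, 0]    # one detection error out of eight bits
print(bit_error_rate(sent, received))  # 12.5
```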

L2 Proficiency Effect on the Acoustic Cue-Weighting Pattern by Korean L2 Learners of English: Production and Perception of English Stops

  • Kong, Eun Jong;Yoon, In Hee
    • Phonetics and Speech Sciences
    • /
    • v.5 no.4
    • /
    • pp.81-90
    • /
    • 2013
  • This study explored how Korean L2 learners of English utilize multiple acoustic cues (VOT and F0) in perceiving and producing the English alveolar stop voicing contrast. Thirty-four 18-year-old high-school students participated in the study. Their English proficiency was classified as either 'high' (HEP) or 'low' (LEP) according to high-school English level standardization. Thirty synthesized syllables, combining 6-step VOT and 5-step F0 continua, were presented as audio stimuli. The listeners judged how close each stimulus was to /t/ or /d/ in L2 using a visual analogue scale. The L2 /d/ and /t/ productions collected from 22 of the learners (12 HEP, 10 LEP) were acoustically analyzed by measuring VOT and F0 at vowel onset. Results showed that LEP listeners attended to F0 in the stimuli more sensitively than HEP listeners, suggesting that HEP listeners could better inhibit the less important acoustic dimension in their L2 perception. The L2 production patterns also exhibited a group difference: HEP speakers utilized the VOT dimension (the primary cue in L2) more effectively than LEP speakers. Taken together, the study showed that relative cue-weighting strategies in L2 perception and production are closely related to the learner's L2 proficiency level, in that more proficient learners had better control of inhibiting and enhancing the relevant acoustic parameters.