• Title/Summary/Keyword: perceptual quality

Improvement of Prosody Transplantation Technology for English Prosody Education and Its Application (운율교육을 위한 운율이식기술 개선 방안 연구)

  • Yi, So-Pae
    • MALSORI
    • /
    • no.61
    • /
    • pp.49-62
    • /
    • 2007
  • This study focused on improving prosody transplantation technology so that it can serve as an effective tool for prosody education. Issues that made the technology less acceptable for prosody education were addressed. Instead of merely copying the target pitch onto a learner's utterances, the target pitch was rescaled in semitones before transplantation. In so doing, signal distortion was minimized and the transplanted utterance retained a sound quality indistinguishable from the learner's own utterances. Instead of manual transplantation, an automatic procedure was proposed to increase the reliability and consistency of the outcome and to enable real-time processing. The perceptual performance of the automatic transplantation was evaluated in a perception experiment, which showed that it was as good as the manual process.
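
A minimal sketch of the semitone-rescaling idea described in the abstract above, assuming the teacher's and learner's pitch contours are available as frame-wise F0 arrays with zeros marking unvoiced frames; the function and the choice of the median as the reference F0 are illustrative, not taken from the paper.

```python
import numpy as np

def rescale_pitch_semitones(target_f0, learner_f0):
    """Map a teacher's F0 contour onto a learner's pitch range.

    Works in semitones relative to each speaker's median F0, so the
    transplanted contour keeps the teacher's melodic shape while staying
    in the learner's natural register. Zeros mark unvoiced frames.
    """
    target_f0 = np.asarray(target_f0, dtype=float)
    learner_f0 = np.asarray(learner_f0, dtype=float)
    voiced = target_f0 > 0
    t_ref = np.median(target_f0[voiced])            # teacher's reference F0
    l_ref = np.median(learner_f0[learner_f0 > 0])   # learner's reference F0

    # excursion of each voiced frame from the teacher's reference, in semitones
    semitones = 12.0 * np.log2(target_f0[voiced] / t_ref)

    # re-anchor the same semitone excursions on the learner's reference F0
    rescaled = np.zeros_like(target_f0)
    rescaled[voiced] = l_ref * 2.0 ** (semitones / 12.0)
    return rescaled
```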

Image Quality Assessment Using Perceptual Color Difference (인지적 색 차이를 사용한 이미지 품질 평가)

  • Lee, Jee-Yong;Kim, Young-Jin
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.04a
    • /
    • pp.837-840
    • /
    • 2015
  • SSIM is a representative image quality assessment method that exploits the human visual system's sensitivity to structural information and evaluates an image by computing the similarity of various kinds of structural information. However, SSIM does not account for color differences in color images. To address this problem, the SHSIM method based on the HSI color space was proposed, but it still does not sufficiently reflect the perceptual color difference between two color images. In this paper, we propose a method that adopts the CIE Lab color space, computes the perceptual color difference between corresponding pixels, and uses it for image quality assessment. To evaluate the proposed method, we used four of the best-known databases in the image quality assessment field and four evaluation criteria. The experimental results demonstrate its performance by showing that it correlates more strongly with the human visual system than the other methods.
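
A minimal sketch of the pixel-wise CIE Lab colour-difference idea, assuming scikit-image is available for the sRGB-to-Lab conversion; the CIE76 distance and pooling by a simple mean are assumptions here, and the combination with SSIM-style structural terms used in the paper is omitted.

```python
import numpy as np
from skimage.color import rgb2lab  # sRGB -> CIE Lab conversion

def mean_delta_e(ref_rgb, dist_rgb):
    """Average per-pixel CIE76 colour difference between two RGB images.

    Corresponding pixels are compared in CIE Lab, where Euclidean distance
    approximates perceived colour difference, then pooled over the image.
    Inputs are RGB arrays with values in [0, 1].
    """
    ref_lab = rgb2lab(ref_rgb)
    dist_lab = rgb2lab(dist_rgb)
    delta_e = np.sqrt(np.sum((ref_lab - dist_lab) ** 2, axis=-1))  # CIE76
    return float(delta_e.mean())   # lower = perceptually closer to the reference
```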

Performance Analysis of Audio Data Hiding Method based on Phase Information with Various Window Length (주파수 변환의 길이에 따른 위상 기반 오디오 정보 은닉 기술의 음질 및 성능 분석)

  • Cho, Kiho;Kim, Nam Soo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.12
    • /
    • pp.232-237
    • /
    • 2013
  • The window length of the time-frequency transform plays an important role in audio data hiding methods that utilize phase information. In this paper, experiments on our audio data hiding method were conducted to evaluate audio quality and robustness in reverberant environments. The experimental results showed a tendency for audio quality to worsen but robustness to improve as longer windows were applied. The main reason for the quality degradation was pre-echo, which flattens percussive sounds. The results also indicate that wireless communication theory related to the length of the time-frequency transform can be applied in the fields of audio data hiding and acoustic data transmission.
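
For orientation only, a toy phase-coding sketch in the general spirit of phase-based audio data hiding (not the authors' scheme); the frame length passed in plays the role of the time-frequency window length whose quality/robustness trade-off the paper studies. The bin index and the 0/π phase mapping are arbitrary choices.

```python
import numpy as np

def embed_bit_in_phase(frame, bit, bin_idx=32):
    """Embed one bit by forcing the phase of a single FFT bin to 0 or pi
    while keeping its magnitude; len(frame) acts as the window length."""
    spec = np.fft.rfft(frame)
    mag = np.abs(spec)
    spec[bin_idx] = mag[bin_idx] * np.exp(1j * (0.0 if bit == 0 else np.pi))
    return np.fft.irfft(spec, n=len(frame))   # frame with the bit hidden in phase
```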

Quality Improvement of Low Bitrate HE-AAC using Linear Prediction Pre-processor (저 전송률 환경에서 선형예측 전처리기를 사용한 HE-AAC의 성능 향상)

  • Lee, Jae-Seong;Lee, Gun-Woo;Park, Young-Chul;Youn, Dae-Hee
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.8C
    • /
    • pp.822-829
    • /
    • 2009
  • This paper proposes a new method for improving the quality of High Efficiency Advanced Audio Coding (HE-AAC). HE-AAC encodes the input source by allocating bits to each scalefactor band according to the psychoacoustic properties of the human ear. As a result, insufficient bits are assigned to bands with relatively low energy, and this imbalance between bands of different energy can degrade sound quality, for example through musical noise. In the proposed system, a Linear Prediction (LP) module is combined with HE-AAC as a pre-processor to improve sound quality through a more even distribution of bits. To apply an accurate psychoacoustic model of human hearing, the masking threshold is computed from the Fast Fourier Transform (FFT) spectrum of the original input signal. In the implementation, the masking threshold of the psychoacoustic model is normalized by the LP spectral envelope prior to quantization of the LP residual. Experimental results show that the proposed algorithm allocates bits appropriately when bits are scarce and improves the performance of HE-AAC.
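
A rough sketch of the LP pre-processing step as read from the abstract: run an autocorrelation-method LP analysis per frame, whiten the frame to obtain the residual, and keep the LP spectral envelope so that a masking threshold computed from the original spectrum can be renormalized before quantizing the residual. The order, FFT size, and function names are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter, freqz

def lp_whiten(frame, order=16, nfft=1024):
    """Whiten one frame with an autocorrelation-method LP filter and
    return the residual plus the LP spectral envelope."""
    # Yule-Walker equations for the predictor coefficients
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    A = np.concatenate(([1.0], -a))            # error filter A(z) = 1 - sum a_k z^-k

    residual = lfilter(A, [1.0], frame)        # whitened excitation
    _, H = freqz([1.0], A, worN=nfft // 2 + 1) # LP envelope, 1/|A(e^jw)|
    return residual, np.abs(H)

# The masking threshold computed from the original-signal spectrum would then
# be divided by the squared envelope so that noise shaped in the residual
# domain still respects the original-domain threshold (illustrative step):
#   threshold_residual = threshold_original / (envelope ** 2)
```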

A "GAP-Model" based Framework for Online VVoIP QoE Measurement

  • Calyam, Prasad;Ekici, Eylem;Lee, Chang-Gun;Haffner, Mark;Howes, Nathan
    • Journal of Communications and Networks
    • /
    • v.9 no.4
    • /
    • pp.446-456
    • /
    • 2007
  • Increased access to broadband networks has led to a fast-growing demand for voice and video over IP (VVoIP) applications such as Internet telephony (VoIP), videoconferencing, and IP television (IPTV). For pro-active troubleshooting of VVoIP performance bottlenecks that manifest to end-users as performance impairments such as video frame freezing and voice dropouts, network operators cannot rely on actual end-users to report their subjective quality of experience (QoE). Hence, automated and objective techniques that provide real-time or online VVoIP QoE estimates are vital. Objective techniques developed to date estimate VVoIP QoE by performing frame-to-frame peak-signal-to-noise ratio (PSNR) comparisons of the original video sequence and the reconstructed video sequence obtained from the sender-side and receiver-side, respectively. Since processing such video sequences is time consuming and computationally intensive, existing objective techniques cannot provide online VVoIP QoE. In this paper, we present a novel framework that can provide online estimates of VVoIP QoE on network paths without end-user involvement and without requiring any video sequences. The framework features the "GAP-model", which is an offline model of QoE expressed as a function of measurable network factors such as bandwidth, delay, jitter, and loss. Using the GAP-model, our online framework can produce VVoIP QoE estimates in terms of "Good", "Acceptable", or "Poor" (GAP) grades of perceptual quality solely from the online measured network conditions.
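
An illustrative placeholder showing only the input/output shape of a GAP-style grader: measurable network factors in, a Good/Acceptable/Poor grade out. The real GAP model is fitted offline from subjective tests; the thresholds below are invented purely for the sketch.

```python
def gap_grade(bandwidth_kbps, delay_ms, jitter_ms, loss_pct):
    """Map measured network conditions to a GAP grade (illustrative thresholds)."""
    if bandwidth_kbps > 768 and delay_ms < 150 and jitter_ms < 30 and loss_pct < 1:
        return "Good"
    if bandwidth_kbps > 384 and delay_ms < 300 and jitter_ms < 60 and loss_pct < 3:
        return "Acceptable"
    return "Poor"
```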

Audio Quality Enhancement at a Low-bit Rate Perceptual Audio Coding (저비트율로 압축된 오디오의 음질 개선 방법)

  • 서정일;서진수;홍진우;강경옥
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.6
    • /
    • pp.566-575
    • /
    • 2002
  • Low-bitrate audio coding makes a number of Internet and mobile multimedia streaming services more efficient. With the help of next-generation mobile telephone technologies and digital audio/video compression algorithms, we can enjoy real-time multimedia content on mobile devices (cellular phones, PDAs, notebooks, etc.). However, the limited bandwidth available on mobile communication networks prevents the transmission of high-quality AV content, and most of that bandwidth is assigned to the video. In this paper, we design a novel and simple method for reproducing high-frequency components. The spectrum of the high-frequency components, which are lost by down-sampling, is modeled by the energy ratio with the low-frequency band on the Bark scale, and these values are multiplexed with the conventional coded bitstream. At the decoder side, the high-frequency components are reconstructed by duplicating the low-frequency band spectrum and scaling it according to the decoded energy ratios. Segmental SNR and MOS tests confirmed that the proposed method enhances the subjective sound quality with only 10%∼20% additional bits. In addition, the proposed method can be applied to all kinds of frequency-domain audio compression algorithms, such as MPEG-1/2, AAC, and AC-3.
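
A minimal decoder-side sketch of the high-frequency reconstruction idea, assuming the transmitted side information is a per-band high/low energy ratio and that spectra are handled as magnitude arrays; the band layout and names are illustrative, not the paper's exact scheme.

```python
import numpy as np

def reconstruct_high_band(low_spectrum, band_edges, energy_ratios):
    """Duplicate the low-band magnitude spectrum into the missing high band
    and scale each (Bark-like) band so its energy matches the transmitted
    high/low energy ratio for that band."""
    high = np.asarray(low_spectrum, dtype=float).copy()   # spectral translation
    for (lo, hi), ratio in zip(band_edges, energy_ratios):
        high[lo:hi] *= np.sqrt(ratio)                     # energy scales with amplitude^2
    return np.concatenate([low_spectrum, high])           # crude full-band spectrum
```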

Implementation of a Person Tracking Based Multi-channel Audio Panning System for Multi-view Broadcasting Services (다시점 방송 서비스를 위한 사용자 위치추적 기반 다채널 오디오 패닝 시스템 구현)

  • Kim, Yong-Guk;Yang, Jong-Yeol;Lee, Young-Han;Kim, Hong-Kook
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.150-157
    • /
    • 2009
  • In this paper, we propose a person-tracking-based multi-channel audio panning system for multi-view broadcasting services. Multi-view broadcasting renders video sequences captured by a set of cameras at different viewpoints, and multi-channel audio panning techniques are necessary for audio rendering in such services. To apply this kind of realistic audio technique to multi-view broadcasting, person tracking techniques that estimate the position of the user are also required. Accordingly, the proposed system is composed of two parts. The first part is a person tracking method using ultrasonic satellites and a receiver, which provides user coordinates with a resolution of about 10 mm at intervals of about 150 ms. The second part is an MPEG Surround parameter-based multi-channel audio panning method, which obtains panned multi-channel audio by controlling the MPEG Surround spatial parameters. A MUSHRA test is conducted to evaluate the perceptual quality, and localization performance is measured using a dummy head. The experiments show that the proposed method provides better perceptual quality and localization performance than the conventional parameter-based audio panning method. In addition, we implement a prototype of the person-tracking-based multi-view broadcasting system by integrating the proposed methods with a multi-view display system.
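
As a simplified stand-in only: the paper controls MPEG Surround spatial parameters, but the basic idea of steering audio from a tracked listener/viewpoint position can be illustrated with plain constant-power stereo panning, sketched here with a single pan value derived from that position. This is not the paper's panning method.

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Constant-power stereo panning: pan in [-1, 1] (left..right) maps the
    estimated listener/viewpoint position to left/right channel gains."""
    theta = (pan + 1.0) * np.pi / 4.0        # -1..1  ->  0..pi/2
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=0)   # (2, n_samples) stereo signal
```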

Time-Scale Modification of Polyphonic Audio Signals Using Sinusoidal Modeling (정현파 모델링을 이용한 폴리포닉 오디오 신호의 시간축 변화)

  • 장호근;박주성
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.2
    • /
    • pp.77-85
    • /
    • 2001
  • This paper proposes a method for time-scale modification of polyphonic audio signals based on a sinusoidal model. The signal is modeled as a sinusoidal component plus a noise component. A multiresolution filter bank is designed that splits the input signal into six octave-spaced subbands without aliasing, and sinusoidal modeling is applied to each subband signal. To alleviate smearing of transients during time-scale modification, a dynamic segmentation method is applied to the subbands, adapting the analysis-synthesis frame size to the time-frequency characteristics of each subband signal. To extract the sinusoidal components and calculate their parameters, a matching pursuit algorithm is applied to each analysis frame of the subband signal. A psychoacoustic model implementing frequency masking is incorporated into the matching pursuit to provide a reasonable stopping condition for the iteration and to reduce the number of sinusoids. The noise component, obtained by subtracting the signal synthesized from the sinusoidal components from the original signal, is modeled by a line-segment model of the short-time spectral envelope. For various polyphonic audio signals, simulation results show that the proposed sinusoidal modeling can resynthesize the original signal without loss of perceptual quality and performs more robust, higher-quality time-scale modification for large scale factors because transients are represented without perceptual loss.
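
A compact sketch of the synthesis side of sinusoidal-model time stretching, assuming partial tracks are already available as frame-wise frequency/amplitude arrays; the multiresolution filter bank, matching pursuit, and psychoacoustic stopping rule from the paper are not reproduced here.

```python
import numpy as np

def synth_time_scaled(freq_tracks, amp_tracks, hop, alpha, sr):
    """Resynthesize sinusoid tracks with a hop of alpha*hop so the output
    lasts alpha times longer; freq_tracks and amp_tracks are
    (n_frames, n_partials) arrays in Hz and linear amplitude."""
    n_frames, n_partials = freq_tracks.shape
    out_hop = int(round(alpha * hop))
    out = np.zeros(n_frames * out_hop)
    phase = np.zeros(n_partials)
    for i in range(n_frames - 1):
        # interpolate frequency/amplitude linearly across the stretched frame
        t = np.arange(out_hop) / out_hop
        f = (1 - t)[:, None] * freq_tracks[i] + t[:, None] * freq_tracks[i + 1]
        a = (1 - t)[:, None] * amp_tracks[i] + t[:, None] * amp_tracks[i + 1]
        inst_phase = phase + 2 * np.pi * np.cumsum(f, axis=0) / sr
        out[i * out_hop:(i + 1) * out_hop] = np.sum(a * np.cos(inst_phase), axis=1)
        phase = inst_phase[-1] % (2 * np.pi)   # keep phase continuous across frames
    return out
```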

A study on combination of loss functions for effective mask-based speech enhancement in noisy environments (잡음 환경에 효과적인 마스크 기반 음성 향상을 위한 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.3
    • /
    • pp.234-240
    • /
    • 2021
  • In this paper, mask-based speech enhancement is improved for effective speech recognition in noisy environments. In mask-based speech enhancement, the enhanced spectrum is obtained by multiplying the noisy speech spectrum by a mask. The VoiceFilter (VF) model is used for mask estimation, and the Spectrogram Inpainting (SI) technique is used to remove residual noise from the enhanced spectrum. In this paper, we propose a combined loss to further improve speech enhancement. To effectively remove the residual noise in the speech, the positive part of the triplet loss is used together with the component loss. For the experiments, the TIMIT database is reconstructed using NOISEX92 noise and background music samples under various Signal-to-Noise Ratio (SNR) conditions. Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) are used as performance metrics. When the VF model was trained with the mean squared error and the SI model was trained with the combined loss, SDR, PESQ, and STOI improved by 0.5, 0.06, and 0.002, respectively, compared to a system trained only with the mean squared error.
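
A hedged sketch of a combined objective in the spirit of the abstract: a component-style term that treats speech-active and noise-only time-frequency bins separately, plus the positive (anchor-positive) half of a triplet loss pulling the enhanced spectrum toward the clean target. The threshold, the weights, and the exact component split are assumptions, written here in PyTorch rather than the paper's code.

```python
import torch

def combined_loss(enhanced_mag, clean_mag, speech_presence_thr=1e-3,
                  w_comp=1.0, w_trip=0.1):
    """Component-style loss plus the positive part of a triplet loss on
    magnitude spectrograms of shape (batch, frames, freq)."""
    speech_bins = (clean_mag > speech_presence_thr).float()
    noise_bins = 1.0 - speech_bins

    # speech distortion on active bins, residual noise on inactive bins
    distortion = torch.sum(speech_bins * (enhanced_mag - clean_mag) ** 2) \
        / speech_bins.sum().clamp(min=1)
    residual = torch.sum(noise_bins * enhanced_mag ** 2) \
        / noise_bins.sum().clamp(min=1)
    component = distortion + residual

    # positive half of the triplet loss: anchor (enhanced) vs. positive (clean)
    positive = torch.mean((enhanced_mag - clean_mag) ** 2)

    return w_comp * component + w_trip * positive
```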

A study on deep neural speech enhancement in drone noise environment (드론 소음 환경에서 심층 신경망 기반 음성 향상 기법 적용에 관한 연구)

  • Kim, Jimin;Jung, Jaehee;Yeo, Chaneun;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.41 no.3
    • /
    • pp.342-350
    • /
    • 2022
  • In this paper, actual drone noise samples are collected for speech processing in disaster environments to build a noise-corrupted speech database, and speech enhancement performance is evaluated by applying spectral subtraction and mask-based speech enhancement techniques. To improve the performance of VoiceFilter (VF), an existing deep neural network-based speech enhancement model, we apply a self-attention operation and use the estimated noise information as an input to the attention model. Compared to the existing VF model, the experimental results show improvements of 3.77%, 1.66%, and 0.32% in Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI), respectively. When the model is trained with a 75% mix of speech data containing drone sounds collected from the Internet, the relative performance drops for SDR, PESQ, and STOI are 3.18%, 2.79%, and 0.96%, respectively, compared to using only actual drone noise. This confirms that data similar to real data can be collected and used effectively to train speech enhancement models for environments where real data is difficult to obtain.
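
A hedged sketch of the noise-aware self-attention idea: concatenate the noisy spectrogram with an estimated noise spectrogram and apply self-attention across time frames before predicting a mask. Layer sizes, the projection, and the mask head are illustrative; the paper's VoiceFilter-based architecture is not reproduced.

```python
import torch
import torch.nn as nn

class NoiseAwareSelfAttention(nn.Module):
    """Self-attention over time frames with an estimated noise spectrogram
    concatenated to the noisy input (illustrative layer sizes)."""
    def __init__(self, n_freq=257, d_model=256, n_heads=4):
        super().__init__()
        self.proj = nn.Linear(2 * n_freq, d_model)   # noisy + noise estimate
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mask_head = nn.Sequential(nn.Linear(d_model, n_freq), nn.Sigmoid())

    def forward(self, noisy_mag, noise_est_mag):
        # inputs: (batch, frames, n_freq) magnitude spectrograms
        x = self.proj(torch.cat([noisy_mag, noise_est_mag], dim=-1))
        x, _ = self.attn(x, x, x)                    # self-attention across frames
        return self.mask_head(x)                     # time-frequency mask in [0, 1]
```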