• Title/Summary/Keyword: Noise Predictive


Fast mode decision by skipping variable block-based motion estimation and spatial predictive coding in H.264 (H.264의 가변 블록 크기 움직임 추정 및 공간 예측 부호화 생략에 의한 고속 모드 결정법)

  • 한기훈;이영렬
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.417-425
    • /
    • 2003
  • H.264, the latest video coding standard of both the ITU-T (International Telecommunication Union, Telecommunication Standardization Sector) and MPEG (Moving Picture Experts Group), adopts new coding tools such as variable block size motion estimation, multiple reference frames, quarter-pel motion estimation/compensation (ME/MC), 4×4 integer DCT (Discrete Cosine Transform), and rate-distortion optimization. These tools provide better coding efficiency than existing standards such as H.263 and MPEG-4, but they also increase encoder complexity considerably. Therefore, in order to apply H.264 to real applications, fast algorithms are required for its coding tools. In this paper, a fast mode decision algorithm is proposed for the case where the macroblock (MB) mode is decided by the rate-distortion optimization tool: it skips variable block size ME/MC and spatial predictive (intra) coding, which account for most of the encoder complexity. In terms of computational complexity, the proposed method runs about 4 times as fast as the JM (Joint Model) 42 encoder of H.264, while the PSNR (peak signal-to-noise ratio) of the decoded images is maintained.
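The mode-decision idea above can be sketched as an early-termination loop over candidate macroblock modes: if the rate-distortion cost of the cheap SKIP/16x16 modes is already small, the expensive sub-partition ME/MC and intra searches are bypassed. This is a minimal illustration, not the paper's algorithm; the Lagrange multiplier, threshold, and cost numbers are hypothetical stand-ins for JM encoder output.

```python
LAMBDA = 0.85          # Lagrange multiplier (hypothetical value)
EARLY_EXIT_T = 500.0   # early-termination threshold (hypothetical)

def rd_cost(distortion, rate, lam=LAMBDA):
    """Standard RD Lagrangian J = D + lambda * R."""
    return distortion + lam * rate

def decide_mb_mode(mode_costs):
    """mode_costs: dict mode -> (distortion, rate), cheapest modes first.
    Returns (best_mode, list_of_modes_actually_evaluated)."""
    evaluated = []
    best_mode, best_j = None, float("inf")
    for mode, (d, r) in mode_costs.items():
        j = rd_cost(d, r)
        evaluated.append(mode)
        if j < best_j:
            best_mode, best_j = mode, j
        # Early exit: SKIP/16x16 already good enough -> omit sub-partition
        # ME/MC and spatial (intra) prediction entirely.
        if mode in ("SKIP", "16x16") and j < EARLY_EXIT_T:
            break
    return best_mode, evaluated

# Illustrative (distortion, rate) pairs for one macroblock.
costs = {"SKIP": (300.0, 10), "16x16": (280.0, 40), "8x8": (250.0, 120),
         "4x4": (240.0, 200), "INTRA": (260.0, 150)}
best, tried = decide_mb_mode(costs)
```

Here the SKIP cost (300 + 0.85·10 = 308.5) falls below the threshold, so only one mode is evaluated instead of five, which is the source of the speed-up the paper reports.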

A Study on A Multi-Pulse Linear Predictive Filtering And Likelihood Ratio Test with Adaptive Threshold (멀티 펄스에 의한 선형 예측 필터링과 적응 임계값을 갖는 LRT의 연구)

  • Lee, Ki-Yong;Lee, Joo-Hun;Song, Iick-Ho;Ann, Sou-Guil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.10 no.1
    • /
    • pp.20-29
    • /
    • 1991
  • A fundamental assumption in the conventional linear predictive coding (LPC) analysis procedure is that the input to the all-pole vocal tract filter is a white process. In the case of periodic inputs, however, a pitch bias error is introduced into the conventional LP coefficients. Multi-pulse (MP) LP analysis can reduce this bias, provided that an estimate of the excitation is available. Since the prediction error of conventional LP analysis can be modeled as the sum of an MP excitation sequence and a random noise sequence, extracting MP sequences from the prediction error can be viewed as a classical detection and estimation problem. In this paper, we propose an algorithm in which the locations and amplitudes of the MP sequences are first obtained by applying a likelihood ratio test (LRT) to the prediction error, and LP coefficients free of pitch bias are then obtained from the MP sequences. To verify the performance enhancement, the procedure is iterated with an adaptive threshold at each step.
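The detection step can be sketched as follows: treat the LP prediction error as multipulse excitation plus noise and flag samples exceeding an adaptive threshold as pulses, re-estimating the noise level from the remaining samples each pass. This is a simplified threshold test standing in for the paper's full likelihood ratio test, on synthetic data.

```python
import numpy as np

# Synthetic prediction error: background noise plus periodic pitch pulses.
rng = np.random.default_rng(0)
error = 0.1 * rng.standard_normal(200)
true_locs = [20, 60, 100, 140, 180]   # synthetic pitch-pulse locations
for k in true_locs:
    error[k] += 2.0

def extract_pulses(e, c=4.0, n_iter=3):
    """Detect samples exceeding c * (noise std) as multipulse excitation.
    The std is re-estimated from the non-pulse samples each iteration,
    giving the adaptive threshold."""
    locs = np.zeros(len(e), dtype=bool)
    for _ in range(n_iter):
        sigma = e[~locs].std()          # noise level without detected pulses
        locs = np.abs(e) > c * sigma    # per-sample LRT-like decision
    return np.flatnonzero(locs)

pulses = extract_pulses(error)
```

Once the pulse locations and amplitudes are known, they can be subtracted from the excitation estimate so the LP coefficients are re-fit without the pitch bias, as the abstract describes.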


HDTV Image Compression Algorithm Using Leak Factor and Human Visual System (누설요소와 인간 시각 시스템을 이용한 HDTV 영상 압축 알고리듬)

  • 김용하;최진수;이광천;하영호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.19 no.5
    • /
    • pp.822-832
    • /
    • 1994
  • The DSC-HDTV image compression algorithm removes the spatial, temporal, and amplitude redundancies of an image by using transform coding, motion-compensated predictive coding, and adaptive quantization, respectively. In this paper, a leak processing method, used to recover image quality quickly after scene changes and transmission errors, and adaptive quantization using a perceptual weighting factor obtained from the human visual system (HVS) are proposed. The perceptual weighting factor is calculated from contrast sensitivity, spatio-temporal masking, and frequency sensitivity. Adaptive quantization uses this weighting factor together with a global distortion level derived from the buffer history. Bits saved by adapting to the HVS are used for coding the next image. At a scene change, the displaced frame difference of motion-compensated predictive coding is large, so the bit rate grows and the buffer state becomes unstable because the reconstructed image contains large quantization noise. Thus, the leak factor is set to 0 for the scene-change frame and to 15/16 for subsequent frames, and the global distortion level is calculated using the standard deviation. Experimental results show that the image quality of the proposed method recovers after several frames, after which the buffer state is stabilized.
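The leak mechanism above can be illustrated in a few lines: the temporal prediction is scaled by the leak factor before the residual is added, so any encoder/decoder mismatch (from a transmission error or a scene change) decays geometrically by 15/16 per frame, and a leak of 0 forces a pure intra refresh. The numbers below are a toy simulation, not DSC-HDTV bitstream arithmetic.

```python
LEAK_NORMAL = 15.0 / 16.0
LEAK_SCENE_CHANGE = 0.0   # pure intra refresh on a scene-change frame

def decode_frame(prev_recon, residual, leak):
    """recon = leak * prediction + residual, with the previous
    reconstruction serving as the temporal prediction."""
    return leak * prev_recon + residual

# Simulate a decoder whose state drifted from the encoder by 16.0
# (e.g. after a transmission error). The residual term cancels in the
# drift, so with leak = 15/16 the drift shrinks every frame.
drift = 16.0
history = []
for _ in range(10):
    drift = decode_frame(drift, 0.0, LEAK_NORMAL)
    history.append(drift)
```

After 10 frames the drift is 16·(15/16)^10 ≈ 8.4, and it continues to decay toward zero, which is the "recovered after several frames" behavior reported in the experiments.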


Electric Arc Furnace Voltage Flicker Mitigation by Applying a Predictive Method with Closed Loop Control of the TCR/FC Compensator

  • Kiyoumarsi, Arash;Ataei, Mohhamad;Hooshmand, Rahmat-Allah;Kolagar, Arash Dehestani
    • Journal of Electrical Engineering and Technology
    • /
    • v.5 no.1
    • /
    • pp.116-128
    • /
    • 2010
  • Modeling of the three-phase electric arc furnace and mitigation of its voltage flicker are the purposes of this paper. To model the electric arc furnace, the arc is first modeled using the current-voltage characteristic of a real arc. Then, the random character of the arc is taken into account by modulating the AC voltage with band-limited white noise. Compensation of the electric arc furnace with a static VAr compensator, a thyristor-controlled reactor combined with a fixed capacitor bank (TCR/FC), is discussed with closed-loop control of the compensator. Instantaneous flicker sensation curves, before and after compensation, are measured according to the IEC standard. A new method for controlling the TCR/FC compensator is proposed, based on applying a predictive approach within the closed-loop control of the TCR/FC. In this method, the future values of the load reactive power are predicted from its previous samples in order to account for the time delay in the compensator control. Two different closed-loop approaches are considered: the former is based on voltage regulation at the point of common coupling (PCC), and the latter on enhancement of the power factor at the PCC. Finally, simulation results are provided to show the effectiveness of the proposed methodology.
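The predictive step can be sketched as follows: extrapolate the next reactive-power sample from past samples so the TCR firing command is computed one sample ahead of the control delay. A simple two-point linear extrapolator stands in for the paper's predictor, and the susceptance-command helper below is a hypothetical illustration of the closed-loop set point (TCR absorbs what the fixed capacitor over-supplies), not the paper's controller.

```python
def predict_next_q(q_hist):
    """Linear (two-point) extrapolation of the load reactive power,
    standing in for the paper's predictor."""
    return 2.0 * q_hist[-1] - q_hist[-2]

def tcr_susceptance_command(q_load_pred, b_fc=1.0, v_pcc=1.0):
    """Hypothetical set point: the TCR absorbs the surplus reactive
    power produced by the fixed capacitor, targeting zero net Q at
    the PCC. All quantities in per-unit."""
    q_fc = b_fc * v_pcc**2          # reactive power produced by the FC
    return max(0.0, q_fc - q_load_pred) / v_pcc**2

q_samples = [0.40, 0.45, 0.50]      # per-unit load reactive power ramp
q_next = predict_next_q(q_samples)  # extrapolates the ramp to 0.55
b_tcr = tcr_susceptance_command(q_next)
```

Acting on the predicted value rather than the latest measurement is what compensates for the one-sample delay between measuring the load and firing the thyristors.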

The Consideration for Optimum 3D Seismic Processing Procedures in Block II, Northern Part of South Yellow Sea Basin (대륙붕 2광구 서해분지 북부지역의 3D전산처리 최적화 방안시 고려점)

  • Ko, Seung-Won;Shin, Kook-Sun;Jung, Hyun-Young
    • The Korean Journal of Petroleum Geology
    • /
    • v.11 no.1 s.12
    • /
    • pp.9-17
    • /
    • 2005
  • In the main target area of Block II, large-scale faults occur below the unconformity developed at around 1 km depth. The seismic velocity contrast around the unconformity is generally so large that strong multiples and abrupt velocity variation would seriously distort, and thus deteriorate, the quality of the migrated section. More than 15 data processing techniques were applied to improve the image resolution of the structures formed by this active crustal activity. First, bad and noisy traces were edited on the common shot gathers to remove acquisition problems arising from unfavorable conditions such as weather changes during data acquisition. Amplitude attenuation caused by spherical divergence and inelastic attenuation was also corrected. A mild F-K filter was used to attenuate coherent noise such as guided waves and side scatter. Predictive deconvolution was applied before stacking to remove peg-leg multiples and water reverberations. Velocity analysis was conducted at 2 km intervals for the migration velocity and iterated to obtain a high-fidelity image. Strum noise caused by the streamer was completely removed by applying predictive deconvolution in the time-space and τ-p domains. Residual multiples caused by thin layers or the water bottom were eliminated through parabolic Radon transform demultiple processing. Migration using a curved-ray Kirchhoff-style algorithm was applied to the stacked data, using the velocity obtained after several iterations of migration velocity analysis (MVA) instead of the DMO velocity. Using these various tests, optimum seismic processing parameters can be obtained for structural and stratigraphic interpretation in Block II of the South Yellow Sea Basin.
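Predictive deconvolution, used twice in the flow above, can be sketched compactly: a Wiener prediction filter is designed from the trace autocorrelation, and the predictable (periodic multiple) part of the trace is subtracted, leaving the primaries. This is a textbook gap-deconvolution sketch on a synthetic spike train, with illustrative gap and filter-length parameters rather than the survey's.

```python
import numpy as np

def predictive_decon(trace, gap, nfilt, white=0.001):
    """Gap predictive deconvolution: design a Wiener prediction filter
    from the autocorrelation and subtract the predicted (multiple) part."""
    n = len(trace)
    # autocorrelation at lags 0 .. gap+nfilt
    ac = np.correlate(trace, trace, "full")[n - 1:n + gap + nfilt]
    r = ac[:nfilt].copy()
    r[0] *= 1.0 + white                      # prewhitening for stability
    R = np.array([[r[abs(i - j)] for j in range(nfilt)]
                  for i in range(nfilt)])    # Toeplitz normal equations
    g = ac[gap:gap + nfilt]                  # right-hand side at the gap
    f = np.linalg.solve(R, g)                # prediction filter
    pred = np.zeros(n)
    for i in range(nfilt):
        pred[gap + i:] += f[i] * trace[:n - gap - i]
    return trace - pred                      # unpredictable (primary) part

# Synthetic trace: a primary spike at sample 10 plus a decaying multiple
# train of period 20 samples; the prediction gap equals that period.
n = 200
trace = np.zeros(n)
for k, amp in enumerate([1.0, -0.5, 0.25, -0.125, 0.0625]):
    trace[10 + 20 * k] = amp
out = predictive_decon(trace, gap=20, nfilt=5)
```

On this example the filter learns the period-20 reverberation and the output keeps only the primary spike, which is exactly the role the process plays against peg-leg multiples and streamer strum in the flow above.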


The Effect of the Telephone Channel to the Performance of the Speaker Verification System (전화선 채널이 화자확인 시스템의 성능에 미치는 영향)

  • 조태현;김유진;이재영;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.5
    • /
    • pp.12-20
    • /
    • 1999
  • In this paper, we compared the speaker verification performance of speech data collected in a clean environment and over a telephone channel. To improve performance on channel speech, we studied feature parameters that are effective in a channel environment, along with preprocessing. The speech database for the experiment consists of Korean two-digit number pairs, with a text-prompted system in mind. Speech features including LPCC (linear predictive cepstral coefficients), MFCC (mel-frequency cepstral coefficients), PLP (perceptual linear prediction), and LSP (line spectrum pairs) are analyzed, and filtering to remove channel noise is studied as preprocessing. To remove or compensate for the channel effect in the extracted features, cepstral weighting, CMS (cepstral mean subtraction), and RASTA (RelAtive SpecTrAl) processing are applied. By reporting the speech recognition performance for each feature and processing method, we also compare speech recognition and speaker verification performance. HTK (HMM Tool Kit) 2.0 is used to evaluate the speech features and processing methods. Applying different thresholds for male and female speakers, we compare the EER (equal error rate) on clean and channel data. Our simulation results show that the best speaker verification performance, in terms of EER, was achieved by removing low-band and high-band channel noise with a band-pass filter (150-3800 Hz) in preprocessing and extracting MFCCs from the filtered speech.
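Of the channel-compensation methods listed, CMS is the simplest to show: a fixed (convolutional) telephone channel becomes an additive constant in the cepstral domain, so subtracting the per-utterance cepstral mean removes it. The sketch below uses synthetic "cepstral" frames to demonstrate the mechanics; it is not the paper's HTK pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.standard_normal((100, 13))   # frames x cepstral coefficients
channel = np.linspace(0.5, -0.5, 13)     # fixed channel, additive in cepstra
telephone = clean + channel              # same utterance seen through the channel

def cms(features):
    """Cepstral mean subtraction over the whole utterance."""
    return features - features.mean(axis=0, keepdims=True)

# After CMS, the channel term cancels: both versions yield the same features.
residual = np.abs(cms(telephone) - cms(clean)).max()
```

Because the channel offset is constant over the utterance, it is absorbed entirely into the mean, so `residual` is numerically zero; time-varying channel effects are why RASTA filtering is studied alongside CMS.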


Application of the Onsite EEW Technology Using the P-Wave of Seismic Records in Korea (국내 지진관측기록의 P파를 이용한 지진현장경보기술 적용)

  • Lee, HoJun;Jeon, Inchan;Seo, JeongBeom;Lee, JinKoo
    • Journal of the Society of Disaster Information
    • /
    • v.16 no.1
    • /
    • pp.133-143
    • /
    • 2020
  • Purpose: This study aims to derive a predictive empirical equation for estimating PGV from the P-wave using earthquake records in Korea and to verify the reliability of onsite EEW. Method: Noise was removed from the P-waves of 627 seismic events recorded in Korea to derive an empirical equation relating them to PGV on the base rock, and the reliability of onsite alarms was verified by comparing predicted and observed PGV in simulations using the equation. Result: P-waves were extracted with FilterPicker from the noise-filtered earthquake records, and a linear regression against PGV was used to derive the predictive empirical equation for onsite EEW. In onsite warning simulations, a success rate of 80% within an error range of MMI±1 was obtained for events of MMI IV or higher. Conclusion: This study verified the design feasibility and performance of an onsite EEWS using domestic earthquake records. To increase its validity, additional records of moderate earthquakes from abroad are required, mis-detection of P-waves must be controlled, and the effect of seismic amplification at the surface must be taken into account.
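The empirical-equation step is a log-log linear regression between a P-wave parameter and PGV. The sketch below fits such a regression on synthetic data (a hypothetical P-wave peak displacement Pd; the event count 627 matches the abstract, but the coefficients and data are invented to show only the fitting and prediction mechanics).

```python
import numpy as np

rng = np.random.default_rng(2)
log_pd = rng.uniform(-6, -2, 627)        # 627 synthetic events, log10(Pd)
a_true, b_true = 0.9, 1.5                # hypothetical "true" relation
log_pgv = a_true * log_pd + b_true + 0.1 * rng.standard_normal(627)

# Least-squares fit: log10(PGV) = a * log10(Pd) + b
a, b = np.polyfit(log_pd, log_pgv, 1)

def predict_log_pgv(pd_log):
    """Onsite prediction of log10(PGV) from an incoming P-wave's Pd."""
    return a * pd_log + b
```

In the onsite setting, the fitted equation is evaluated on the first seconds of P-wave at the same site, and the predicted PGV is mapped to MMI to decide whether to issue a warning.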

Tracking Control using Disturbance Observer and ZPETC on LonWorks/IP Virtual Device Network (LonWorks/IP 가상 디바이스 네트워크에서 외란관측기와 ZPETC를 이용한 추종제어)

  • Song, Ki-Won
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.44 no.1
    • /
    • pp.33-39
    • /
    • 2007
  • LonWorks over IP (LonWorks/IP) virtual device network (VDN) is an integrated form of a LonWorks device network and an IP data network. A LonWorks/IP VDN can offer ubiquitous access to information on the factory floor and makes predictive and preventive maintenance possible there. Timely response is essential for predictive and preventive maintenance under real-time distributed control, but the network-induced, uncertain time delay deteriorates the performance and stability of a real-time distributed control system on a LonWorks/IP VDN. Therefore, to guarantee stability and improve the performance of the networked distributed control system, the time-varying uncertain delay needs to be compensated for. In this paper, for real-time distributed control on a LonWorks/IP VDN with uncertain time delay, a control scheme based on a disturbance observer and a ZPETC (zero phase error tracking controller) phase-lag compensator is proposed and tested through computer simulation. The result of the proposed control is compared with that of an internal model controller (IMC) based on a Smith predictor and a disturbance observer. The proposed scheme is shown to be tolerant of disturbance and noise and to significantly improve stability and tracking of a periodic reference. It is therefore well suited to distributed servo control for predictive maintenance on a LonWorks/IP-based virtual device network with time-varying delay.
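The disturbance-observer part of the scheme can be sketched in discrete time: the input-equivalent disturbance is reconstructed from the nominal inverse plant model and smoothed by a first-order Q filter, then subtracted from the control input. The first-order plant, gains, and constant disturbance below are illustrative, not the paper's servo model or its network delay.

```python
a_p, b_p = 0.9, 0.1     # nominal plant: y[k+1] = a_p*y[k] + b_p*(u[k] + d)
alpha = 0.3             # Q-filter pole (first-order low-pass smoothing)
d_true = 0.5            # constant input disturbance to be rejected

y_prev, y = 0.0, 0.0
u_prev, d_hat = 0.0, 0.0
for k in range(300):
    # Raw disturbance estimate from the inverse nominal model (one step
    # delayed): solve y[k] = a_p*y[k-1] + b_p*(u[k-1] + d) for d.
    d_raw = (y - a_p * y_prev) / b_p - u_prev
    d_hat = (1.0 - alpha) * d_hat + alpha * d_raw   # Q filter
    u = 1.0 - d_hat       # hypothetical reference input minus the estimate
    y_prev, y = y, a_p * y + b_p * (u + d_true)     # plant update
    u_prev = u
```

With an exact nominal model the estimate converges to the true disturbance and the output settles at the undisturbed steady state; the Q filter is what trades estimation speed against noise sensitivity, which is the "disturbance and noise tolerant" property claimed above.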

Development of smart car intelligent wheel hub bearing embedded system using predictive diagnosis algorithm

  • Sam-Taek Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.1-8
    • /
    • 2023
  • A defect in a wheel bearing, a major automotive part, can cause problems such as traffic accidents. To address this, a system is needed that collects big data and performs monitoring so that predictive diagnosis and management technology can give early warning of the presence and type of wheel bearing failure. In this paper, to implement such an intelligent wheel hub bearing maintenance system, we develop an embedded system equipped with sensors for monitoring reliability and soundness, together with predictive diagnosis algorithms. The algorithm acquires vibration signals from acceleration sensors installed in the wheel bearings and can predict and diagnose failures using big data techniques: signal processing, fault frequency analysis, and the definition of health characteristic parameters. The implemented algorithm applies stable signal extraction that suppresses extraneous vibration components while maximizing those originating in the wheel bearing. For noise removal, a filter with an artificial-intelligence-based soundness extraction algorithm is applied, FFT-based fault frequency analysis is performed, and faults are diagnosed by extracting fault characteristic factors. The performance target of the system was over 12,800 ODR, which was met in testing.
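The FFT-based fault-frequency step can be sketched as follows: transform the vibration signal and check for a spectral line at a characteristic defect frequency of the bearing. The 87.5 Hz "outer-race" frequency, sampling rate, and signal below are synthetic stand-ins for real wheel-bearing data, not values from the paper.

```python
import numpy as np

fs = 1000.0                      # sampling rate, Hz (illustrative)
t = np.arange(0, 2.0, 1.0 / fs)
bpfo = 87.5                      # hypothetical outer-race defect frequency, Hz
rng = np.random.default_rng(3)

# Synthetic vibration: broadband noise plus a defect tone at the BPFO.
vib = 0.3 * rng.standard_normal(t.size) + 1.0 * np.sin(2 * np.pi * bpfo * t)

spec = np.abs(np.fft.rfft(vib)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak_freq = freqs[np.argmax(spec[1:]) + 1]       # skip the DC bin

def is_faulty(peak, target, tol=1.0):
    """Flag a fault if the dominant spectral line sits at the defect
    frequency, within a tolerance in Hz."""
    return abs(peak - target) < tol
```

In a real system the defect frequencies (BPFO, BPFI, BSF) are computed from the bearing geometry and shaft speed, and the spectral amplitudes at those lines feed the health characteristic parameters the abstract mentions.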

Thermal Imagery-based Object Detection Algorithm for Low-Light Level Nighttime Surveillance System (저조도 야간 감시 시스템을 위한 열영상 기반 객체 검출 알고리즘)

  • Chang, Jeong-Uk;Lin, Chi-Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.19 no.3
    • /
    • pp.129-136
    • /
    • 2020
  • In this paper, we propose a thermal imagery-based object detection algorithm for a low-light nighttime surveillance system. The many features selected by a Haar-like feature selection algorithm with the existing Adaboost algorithm are often vulnerable to noise and to similar or overlapping feature sets across training samples. The proposed method removes noise from the feature set extracted from surveillance images of low-light nighttime environments and uses lightweight extended Haar features with Adaboost learning to enable fast and efficient real-time feature selection. The experiments use extended Haar feature points to recognize unpredictable moving objects in low-light nighttime environments. The Adaboost learning algorithm, taking 800×600 thermal video frames as input, is implemented on the CUDA 9.0 platform for simulation. The object detection results confirmed a success rate of about 90% or more, with a processing speed about 30% faster than results obtained through histogram equalization on ordinary images.
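The efficiency of Haar-like features rests on the integral image: the sum over any rectangle costs four lookups, so each Adaboost weak classifier evaluates in constant time regardless of window size. The sketch below shows one two-rectangle edge feature on a synthetic "thermal" window; it illustrates the general technique, not the paper's extended feature set.

```python
import numpy as np

def integral_image(img):
    """ii[r, c] = sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Inclusive rectangle sum via at most 4 integral-image lookups."""
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

def haar_vertical_edge(ii, r0, c0, h, w):
    """Two-rectangle feature: left half minus right half, responding
    to vertical intensity edges."""
    mid = c0 + w // 2
    left = rect_sum(ii, r0, c0, r0 + h - 1, mid - 1)
    right = rect_sum(ii, r0, mid, r0 + h - 1, c0 + w - 1)
    return left - right

# Synthetic 8x8 "thermal" window: warm object occupying the left half.
win = np.zeros((8, 8))
win[:, :4] = 1.0
ii = integral_image(win)
feature = haar_vertical_edge(ii, 0, 0, 8, 8)
```

An Adaboost cascade thresholds many such feature values, each weak classifier voting on whether the window contains an object; the constant-time evaluation is what makes real-time detection on 800×600 frames feasible.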