• Title/Summary/Keyword: frame detection


Multiple Moving Objects Detection and Tracking Algorithm for Intelligent Surveillance System (지능형 보안 시스템을 위한 다중 물체 탐지 및 추적 알고리즘)

  • Shi, Lan Yan; Joo, Young Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.6 / pp.741-747 / 2012
  • In this paper, we propose a fast and robust framework for detecting and tracking multiple targets. The proposed system consists of two modules: an object detection module and an object tracking module. In the detection module, the input images are preprocessed frame by frame with grayscale conversion and binarization. After extracting the foreground objects from the input images, morphological operations are applied to reduce noise in the foreground images. A block-based histogram analysis method is also used to distinguish humans from other objects. In the tracking module, a color-based tracking algorithm and a Kalman filter are used. After converting the RGB images into HSV images, the color-based algorithm tracks the multiple targets, while the Kalman filter tracks each object and handles occlusions between different objects. Finally, we show the effectiveness and applicability of the proposed method through experiments.
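
As a rough illustration of this kind of detect-then-track pipeline, the OpenCV sketch below combines foreground extraction, binarization, morphological noise removal, and a constant-velocity Kalman filter; the background subtractor, file name, and filter parameters are assumptions, and the paper's block-based histogram classifier and HSV color tracker are not reproduced.

```python
import cv2
import numpy as np

# Detection module: foreground extraction followed by morphological noise removal
cap = cv2.VideoCapture("surveillance.mp4")            # hypothetical input video
bg_sub = cv2.createBackgroundSubtractorMOG2()         # generic background model (assumption)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

# Tracking module: constant-velocity Kalman filter for one target
kf = cv2.KalmanFilter(4, 2)                           # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # grayscale preprocessing
    fg = bg_sub.apply(gray)                           # foreground extraction
    _, fg = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)    # binarization (drops shadows)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)         # morphology to reduce noise
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))  # update with measurement
    pred = kf.predict()                               # prediction bridges short occlusions
```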

Error Concealment of MPEG-2 Intra Frames by Spatiotemporal Information of Inter Frames (인터 프레임의 시공간적 정보를 이용한 MPEG-2 인트라 프레임의 오류 은닉)

  • Kang, Min-Jung; Ryu, Chul
    • Journal of the Institute of Convergence Signal Processing / v.4 no.2 / pp.31-39 / 2003
  • The MPEG-2 source coding algorithm is very sensitive to transmission errors because of its use of variable-length coding. When the compressed data are transmitted, transmission errors occur that error correction schemes cannot fully correct. At the decoder, error concealment (EC) techniques must therefore be used to conceal these errors and minimize the degradation of video quality. The proposed algorithm conceals successive macroblock errors in the I-frame by using the temporal information of the B-frame and the spatial information of the P-frame in the previous GOP, which is temporally closest to the I-frame. This method reduces the motion distortion and blurring caused by the temporal and spatial errors of existing error concealment techniques. In networks where severe transmission errors occur, severe slice errors can be concealed more efficiently. The algorithm is implemented in an MPEG-2 video codec, and simulations show that it conceals I-frame slice errors more effectively than other approaches.
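
The sketch below is a minimal numpy illustration of the general spatio-temporal concealment idea (blending a spatial estimate from intact neighboring rows with a temporally co-located block); it is not the paper's specific B-frame/P-frame scheme, and the blend weight is an assumption.

```python
import numpy as np

MB = 16  # macroblock size

def conceal_macroblock(cur, ref, mb_row, mb_col, alpha=0.5):
    """Conceal one lost 16x16 macroblock of an I-frame by blending a spatial
    estimate (linear interpolation between the intact rows directly above and
    below the lost block) with a temporal estimate (the co-located block of a
    reference frame). Assumes the rows above and below the block are intact."""
    y0, x0 = mb_row * MB, mb_col * MB
    top = cur[y0 - 1, x0:x0 + MB].astype(np.float64)          # row above the lost MB
    bottom = cur[y0 + MB, x0:x0 + MB].astype(np.float64)      # row below the lost MB
    w = (np.arange(1, MB + 1) / (MB + 1))[:, None]            # vertical interpolation weights
    spatial = (1 - w) * top[None, :] + w * bottom[None, :]    # spatial estimate
    temporal = ref[y0:y0 + MB, x0:x0 + MB].astype(np.float64) # co-located block
    blended = alpha * spatial + (1 - alpha) * temporal
    cur[y0:y0 + MB, x0:x0 + MB] = np.clip(blended, 0, 255).astype(cur.dtype)
    return cur
```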


Spatio-Temporal Error Concealment of I-frame using GOP structure of MPEG-2 (MPEG-2의 GOP 구조를 이용한 I 프레임의 시공간적 오류 은닉)

  • Kang, Min-Jung; Ryu, Chul
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.1C / pp.72-82 / 2004
  • This paper proposes more robust error concealment techniques (ECTs) for MPEG-2 intra coded frames. The MPEG-2 source coding algorithm is very sensitive to transmission errors due to its use of variable-length coding. Transmission errors are corrected by error correction schemes; however, they cannot be fully recovered. Error concealment (EC) is used to conceal the errors that are not corrected and to keep visual distortion at the decoder to a minimum. If errors occur in the intra coded frame, which is the starting frame of the GOP, they propagate to the other inter coded frames because of motion compensated prediction coding, and such error propagation may cause severe visual distortion. The proposed algorithm utilizes the spatio-temporal information of neighboring inter coded frames to conceal successive slice errors occurring in the I-frame, and it overcomes the problems of previous ECTs. The proposed algorithm maintains consistent performance even in networks where severe transmission errors frequently occur. The algorithm is implemented in an MPEG-2 video codec, and simulations confirm that it produces less visible distortion and a higher PSNR than other approaches.
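
As a complement to the blending sketch above, a common refinement is to select, per lost block, whichever candidate better matches the intact boundary pixels; the numpy sketch below shows generic boundary matching, not the paper's GOP-based selection.

```python
import numpy as np

def boundary_match_select(cur, candidates, y0, x0, size=16):
    """For a lost block at (y0, x0), pick the candidate block whose top and
    bottom rows best match the intact pixels bordering the loss (smallest
    boundary error wins). `candidates` could be, e.g., a spatial estimate and
    a temporal estimate taken from a neighboring frame."""
    top = cur[y0 - 1, x0:x0 + size].astype(np.float64)       # intact row above
    bottom = cur[y0 + size, x0:x0 + size].astype(np.float64) # intact row below
    errs = []
    for cand in candidates:
        c = cand.astype(np.float64)
        errs.append(np.abs(c[0] - top).sum() + np.abs(c[-1] - bottom).sum())
    return candidates[int(np.argmin(errs))]
```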

A High-Speed Synchronization Method Robust to the Effect of Initial SFO in DRM Systems (DRM 시스템에서 초기 샘플링 주파수 옵셋의 영향에 강인한 고속 동기화 방식)

  • Kwon, Ki-Won; Cho, Yong-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.1A / pp.73-81 / 2012
  • In this paper, we propose a high-speed synchronization method for Digital Radio Mondiale (DRM) receivers. In order to satisfy the high-speed synchronization requirement of DRM receivers, the proposed method eliminates the initial sampling frequency synchronization process used in conventional synchronization methods. Instead, sampling frequency tracking is performed after integer frequency synchronization and frame synchronization. Different correlation algorithms are applied to detect the first frame of the Orthogonal Frequency Division Multiplexing (OFDM) demodulation symbols in the presence of a sampling frequency offset (SFO), and a frame detection algorithm that is robust to SFO is selected based on performance analysis and simulation. Simulation results show that the proposed method reduces the time spent on initial sampling frequency synchronization even when SFO is present in the DRM signal. In addition, it is verified that the inter-cell differential correlation between reference cells is robust to the effect of the initial SFO.
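
A rough numpy sketch of differential-correlation frame detection is given below; the reference-cell indices and values are hypothetical placeholders rather than the actual DRM reference-cell layout.

```python
import numpy as np

def differential_metric(rx_cells, ref_cells):
    """Inter-cell differential correlation: correlate products of adjacent
    cells so that a phase ramp common to neighbouring cells (for instance one
    caused by an uncompensated sampling frequency offset) largely cancels."""
    rx_diff = rx_cells[1:] * np.conj(rx_cells[:-1])
    ref_diff = ref_cells[1:] * np.conj(ref_cells[:-1])
    return np.abs(np.sum(rx_diff * np.conj(ref_diff)))

def detect_first_frame(symbols, ref_cells, cell_idx):
    """Evaluate the metric for every candidate OFDM symbol and pick the one
    that maximises it as the first symbol of the transmission frame.
    `symbols` is (num_symbols, fft_size); `cell_idx` holds hypothetical
    reference-cell subcarrier indices."""
    metrics = [differential_metric(s[cell_idx], ref_cells) for s in symbols]
    return int(np.argmax(metrics))
```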

Noise Reduction Algorithm using Average Estimator Least Mean Square Filter of Frame Basis (프레임 단위의 AELMS를 이용한 잡음 제거 알고리즘)

  • Ahn, Chan-Shik; Choi, Ki-Ho
    • Journal of Digital Convergence / v.11 no.7 / pp.135-140 / 2013
  • An LMS filter can be used for noise estimation and detection that adapts quickly to changing noise environments. However, the LMS filter needs a certain period to adapt its noise estimate, and whenever the signal changes it has the disadvantage of requiring additional adaptation time. To compensate for this, a noise removal method based on a frame-level AELMS filter is proposed. In this paper, the input signal in a noisy environment is split into frames, and the noise is removed by an LMS filter configured with noise predictions based on the mean and variance of each frame. Even when the noise environment changes, the noise is removed with a short adaptation time. By removing noise from an input signal in which speech and environmental noise are mixed, the method preserves the distinctive characteristics of the voice and reduces the loss of voice information. The performance of the frame-level AELMS noise removal method was evaluated experimentally, and the attenuation obtained by removing noise in a changing environment was improved by an average of 6.8 dB.
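
As a point of reference, the sketch below implements a standard frame-wise LMS adaptive noise canceller in Python; it assumes a separate noise-reference input and is not the paper's AELMS variant, which additionally uses per-frame mean and variance estimates.

```python
import numpy as np

def lms_denoise_frames(noisy, noise_ref, frame_len=256, taps=16, mu=0.01):
    """Frame-wise LMS adaptive noise cancellation (generic, not AELMS).
    noisy     : speech + noise (primary input)
    noise_ref : noise reference correlated with the noise in `noisy`
    The filter output estimates the noise component; the error signal is the
    enhanced speech. Processing frame by frame lets the filter re-adapt
    quickly when the noise environment changes."""
    w = np.zeros(taps)
    out = noisy.astype(np.float64).copy()
    for start in range(0, len(noisy) - frame_len + 1, frame_len):
        for n in range(start + taps, start + frame_len):
            x_vec = noise_ref[n - taps:n][::-1]   # most recent reference samples
            noise_est = np.dot(w, x_vec)          # estimated noise component
            e = noisy[n] - noise_est              # enhanced speech sample
            w = w + mu * e * x_vec                # LMS weight update
            out[n] = e
    return out
```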

Deep Learning based Frame Synchronization Using Convolutional Neural Network (합성곱 신경망을 이용한 딥러닝 기반의 프레임 동기 기법)

  • Lee, Eui-Soo; Jeong, Eui-Rim
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.4 / pp.501-507 / 2020
  • This paper proposes a new frame synchronization technique based on a convolutional neural network (CNN). Conventional frame synchronizers usually find the matching instant through correlation between the received signal and the preamble. The proposed method converts the 1-dimensional correlator output into a 2-dimensional matrix, which is input to a convolutional neural network that finds the frame arrival time. Specifically, in additive white Gaussian noise (AWGN) environments, received signals are generated with random arrival times and used as training data for the CNN. Through computer simulation, the false detection probabilities at various signal-to-noise ratios are investigated and compared between the proposed CNN-based technique and the conventional one. According to the results, the proposed technique performs about 2 dB better than the conventional method.
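
A minimal PyTorch sketch of the idea follows: the 1-D correlator output is reshaped into an H x W matrix and a small CNN classifies the arrival-time index. The matrix size and layer sizes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np
import torch
import torch.nn as nn

H, W = 16, 16   # 256 candidate arrival positions, reshaped to a 16x16 matrix (assumption)

def correlator_matrix(rx, preamble):
    """Cross-correlate the received samples with the known preamble and
    reshape the first H*W magnitudes into a 2-D matrix for the CNN."""
    corr = np.abs(np.correlate(rx, preamble, mode="valid"))[: H * W]
    return torch.tensor(corr, dtype=torch.float32).reshape(1, 1, H, W)

class FrameSyncCNN(nn.Module):
    """Small CNN mapping the 2-D correlator matrix to one of H*W
    arrival-time classes (illustrative layer sizes)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * H * W, H * W),   # logits over candidate arrival times
        )

    def forward(self, x):
        return self.net(x)

# Training would use AWGN-corrupted signals with random (known) arrival times as
# labels; at test time the predicted class is the estimated frame start, e.g.
# start_idx = FrameSyncCNN()(correlator_matrix(rx, preamble)).argmax(dim=1)
```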

Face Detection Using Shapes and Colors in Various Backgrounds

  • Lee, Chang-Hyun; Lee, Hyun-Ji; Lee, Seung-Hyun; Oh, Joon-Taek; Park, Seung-Bo
    • Journal of the Korea Society of Computer and Information / v.26 no.7 / pp.19-27 / 2021
  • In this paper, we propose a method for detecting characters in images and extracting their facial regions, which consists of two tasks. First, we separate two different characters and detect the face position of each character in the frame. For fast detection, we use You Only Look Once (YOLO), which finds faces in the image in real time, to extract the location of each face and mark it with an object detection box. Second, we present three image processing methods to detect the accurate face area based on the object detection boxes. Each method uses HSV values extracted from the region estimated by the detection figure to detect the face region of the characters, and the size and shape of the detection figure are varied to compare the accuracy of each method. Each face detection method is compared and analyzed against comparative data and image processing data for reliability verification. As a result, we achieved the highest accuracy of 87% when using the split rectangular method among the circular, rectangular, and split rectangular methods.
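
The OpenCV sketch below illustrates the second stage under the assumption that a face bounding box from a detector such as YOLO is already available; the HSV skin range is an illustrative assumption, and the circular and split-rectangular region shapes are not reproduced.

```python
import cv2
import numpy as np

def refine_face_region(frame_bgr, box, hsv_lo=(0, 30, 60), hsv_hi=(25, 180, 255)):
    """Given a detector bounding box (x, y, w, h), keep only the pixels inside
    it whose HSV values fall in a skin-like range (illustrative thresholds),
    and return the tight bounding box of that mask as the refined face region."""
    x, y, w, h = box
    roi = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return box                          # fall back to the original box
    return (x + xs.min(), y + ys.min(), int(xs.ptp()) + 1, int(ys.ptp()) + 1)
```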

Detection of Frame Deletion Using Coding Pattern Analysis (부호화 패턴 분석을 이용한 동영상 삭제 검출 기법)

  • Hong, Jin Hyung; Yang, Yoonmo; Oh, Byung Tae
    • Journal of Broadcast Engineering / v.22 no.6 / pp.734-743 / 2017
  • In this paper, we introduce a technique for detecting video forgery using coding pattern analysis. The proposed method uses the recently developed HEVC standard codec, which is expected to be widely used in the future. First, the HEVC coding patterns of forged and original videos are analyzed to select discriminative features, and the selected feature vectors are learned through machine learning to model the classification criteria between the two groups. Experimental results show that the proposed method is more effective at detecting frame deletion in HEVC-coded videos than existing works.
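
A schematic sketch of the classification step is shown below, assuming coding-pattern feature vectors have already been extracted from the HEVC bitstreams; the feature files and the choice of an SVM are placeholders rather than the paper's exact pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: one coding-pattern feature vector per video (hypothetical, precomputed
#    from per-frame statistics of an HEVC bitstream); y: 1 = frames deleted, 0 = original
X = np.load("coding_pattern_features.npy")   # hypothetical file
y = np.load("labels.npy")                    # hypothetical file

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)      # learn the forged-vs-original boundary
print("detection accuracy:", clf.score(X_te, y_te))
```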

A Study on Degradation Characteristic of High Strength Fire Resistance Steel for Frame Structure by Acoustic Emission (음향방출법에 의한 고강도 구조요 내화강의 열화특성에 관한 연구)

  • 김현수; 남기우; 강창룡
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference / 2000.04a / pp.51-56 / 2000
  • Demand for new nondestructive evaluations is growing to detect tensile crack growth behavior and to predict the long-term performance of materials and structures in aggressive environments, especially when they are in non-visible areas. The acoustic emission technique is well suited to these problems and has drawn keen interest because of its dynamic detection ability, extreme sensitivity, and ability to locate growing defects. In this study, we analyzed the acoustic emission signals obtained in tensile tests of high-strength fire-resistant steel for frame structures using time-frequency analysis methods. The results are summarized as follows. In the T and TN specimens, consisting of ferrite and pearlite grains, most acoustic emission events were produced near the yield point, mainly due to dislocation activity during deformation. However, the B specimen treated at 600°C for 10 min showed two peaks, attributed to the presence of the martensite phase: the first peak occurred before the yield point and the second after it. The sources of the second acoustic emission peak were debonding at martensite-martensite interfaces and micro-cracking of the brittle martensite phase. In the specimens treated from 600°C for 30 min to 700°C for 60 min, many signals were observed before the yield point, and counts decreased after the yield point.
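
A brief scipy sketch of the time-frequency analysis step applied to a recorded AE waveform is shown below; the sampling rate, file name, and STFT parameters are hypothetical.

```python
import numpy as np
from scipy import signal

fs = 1_000_000                                # hypothetical AE sampling rate (1 MHz)
ae = np.load("ae_waveform.npy")               # hypothetical recorded AE signal

# Short-time Fourier analysis: AE bursts from dislocation motion, interface
# debonding and micro-cracking appear as events with distinct frequency content.
f, t, Sxx = signal.spectrogram(ae, fs=fs, nperseg=1024, noverlap=512)
peak_freq = f[np.argmax(Sxx, axis=0)]         # dominant frequency of each time slice
```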


Real-time Face Localization for Video Monitoring (무인 영상 감시 시스템을 위한 실시간 얼굴 영역 추출 알고리즘)

  • 주영현; 이정훈; 문영식
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.11 / pp.48-56 / 1998
  • In this paper, a moving object detection and face region extraction algorithm which can be used in video monitoring systems is presented. The proposed algorithm is composed of two stages. In the first stage, each frame of an input video sequence is analyzed using three measures based on image pixel difference. If the current frame contains moving objects, their skin regions are extracted in the second stage using color and frame difference information. Since the proposed algorithm does not rely on computationally expensive features such as optical flow, it is well suited for real-time applications. Experiments on various sequences have shown the robustness of the proposed algorithm.
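
A compact numpy sketch of the first stage is given below, using a single pixel-difference measure to decide whether a frame contains motion; the paper combines three such measures, and the thresholds here are illustrative assumptions.

```python
import numpy as np

def contains_motion(prev_gray, cur_gray, pixel_thresh=25, ratio_thresh=0.02):
    """Declare motion when the fraction of pixels whose absolute difference
    from the previous frame exceeds `pixel_thresh` is larger than
    `ratio_thresh`. Cheap enough for real-time monitoring."""
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    changed_ratio = np.mean(diff > pixel_thresh)
    return changed_ratio > ratio_thresh
```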
