• Title/Summary/Keyword: Video Tracking (비디오 추적)


Behavior Pattern Analysis and Design of Retrieval Descriptor based on Temporal Histogram of Moving Object Coordinates (이동 객체 좌표의 시간적 히스토그램 기반 행동패턴 분석 및 검색 디스크립터 설계)

  • Lee, Jae-kwang;Lee, Kyu-won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.4 / pp.811-819 / 2017
  • A behavior pattern analysis algorithm is proposed, based on descriptors that consist of moving-object information and a temporal histogram. Background learning is performed first to detect, track, and analyze moving objects. Each object is identified by associating the centers of gravity of objects and is tracked individually. A temporal histogram represents a motion pattern using the positions of the center of gravity and the time stamps of objects. The characteristics and behavior of an object are determined by comparing the coordinates in its position history within the histogram. Behavior information, which consists of the start and end frame numbers and the position coordinates of the object, is stored and managed in a linked list. Descriptors are built from the stored information, and a video retrieval algorithm is designed around them. Experiments confirmed higher retrieval accuracy compared with conventional methods.
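
The descriptor described above (per-object start/end frames plus a centroid history) can be sketched roughly as follows; the data structure and the simplified matching rule here are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch (not the authors' implementation) of a temporal-histogram
# descriptor built from tracked object centroids, assuming each track is a
# list of (frame_index, x, y) records as described in the abstract.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackDescriptor:
    start_frame: int
    end_frame: int
    positions: List[Tuple[int, int, int]] = field(default_factory=list)  # (frame, x, y)

def build_descriptor(track: List[Tuple[int, int, int]]) -> TrackDescriptor:
    """Store the behavior information (start/end frame and centroid history)."""
    frames = [t for t, _, _ in track]
    return TrackDescriptor(start_frame=min(frames), end_frame=max(frames),
                           positions=sorted(track))

def descriptor_distance(a: TrackDescriptor, b: TrackDescriptor) -> float:
    """Compare two motion patterns by averaging centroid distances over the
    overlapping part of their position histories (simplified matching)."""
    n = min(len(a.positions), len(b.positions))
    if n == 0:
        return float("inf")
    total = 0.0
    for (_, ax, ay), (_, bx, by) in zip(a.positions[:n], b.positions[:n]):
        total += ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    return total / n
```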

Digital Watermarking for Robustness of Low Bit Rate Video Contents on the Mobile (모바일 상에서 비트율이 낮은 비디오 콘텐츠의 강인성을 위한 디지털 워터마킹)

  • Seo, Jung-Hee;Park, Hung-Bog
    • KIPS Transactions on Computer and Communication Systems / v.1 no.1 / pp.47-54 / 2012
  • Video contents in the mobile environment are processed at a lower bit rate than ordinary video contents because of network traffic considerations; hence, the copyright of low bit-rate video contents needs to be protected. A watermarking algorithm suited to the mobile environment must also be developed, because the performance of mobile devices is much lower than that of personal computers. This paper proposes an invisible spread-spectrum watermarking method for low bit-rate video contents that takes the low performance of mobile devices in the M-commerce environment into account; it also makes it possible to track down illegal users of the video contents and thus protect the copyright. The robustness of watermarked contents is expressed by the correlation of the extraction algorithm applied to contents from which the watermark has been removed or distorted. Our experiments showed that the characteristic frequencies of the M-sequence could still be extracted easily after the watermarked contents were compressed. Therefore, illegal users of the contents can be tracked down because the watermark can be extracted from the low bit-rate video contents.
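
The following sketch illustrates additive spread-spectrum embedding and correlation-based detection in their simplest form; a pseudo-random ±1 sequence stands in for the M-sequence used in the paper, and the frame data, embedding strength, and seed are assumptions of this sketch.

```python
# Rough sketch of additive spread-spectrum watermarking with correlation-based
# detection; a keyed pseudo-random chip sequence substitutes for an M-sequence.
import numpy as np

def embed(frame: np.ndarray, seed: int, strength: float = 2.0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    chips = rng.choice([-1.0, 1.0], size=frame.shape)        # spreading sequence
    return np.clip(frame.astype(np.float64) + strength * chips, 0, 255)

def detect(frame: np.ndarray, seed: int) -> float:
    """Return a normalized correlation with the spreading sequence; a value
    well above zero suggests the watermark survived compression/distortion."""
    rng = np.random.default_rng(seed)
    chips = rng.choice([-1.0, 1.0], size=frame.shape)
    residual = frame.astype(np.float64) - frame.mean()
    return float(np.mean(residual * chips) / (frame.std() + 1e-9))
```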

Analysis of Camera Operation in MPEG2 Compressed Domain Using Generalized Hough Transform Technique (일반화된 Hough 변환기법을 이용한 MPEG2 압축영역에서의 카메라의 움직임 해석)

  • Yoo, Won-Young;Choi, Jeong-Il;Lee, Joon-Whoan
    • The Transactions of the Korea Information Processing Society / v.7 no.11 / pp.3566-3575 / 2000
  • In this paper, we propose a simple and efficient method to estimate the camera operation from compressed information extracted directly from an MPEG2 stream without complete decoding. In this method, the motion vectors are converted into an approximate optical flow using the features of predicted frames, because the motion vectors in an MPEG2 video stream do not form a regular sequence. They are then used to estimate the camera operation, consisting of pan and zoom, by a Hough transform technique. The method provided better results than the least-squares method for video streams of basketball and soccer games. The proposed method has reduced computational complexity because the information is obtained directly in the compressed domain, and it can be a useful technology for content-based searching and analysis of video information. The estimated camera operation is also applicable to searching for or tracking objects in an MPEG2 video stream without decoding.
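
A Hough-style voting scheme for global pan/zoom from block motion vectors could look roughly like the sketch below; the motion model, the parameter grids, and the bin size are assumptions, not the paper's exact formulation.

```python
# Illustrative voting for global pan/zoom: each motion vector at block centre p
# is modelled as v = pan + (zoom - 1) * (p - c), where c is the image centre;
# votes are accumulated over a coarse (zoom, pan) grid and the densest cell wins.
import numpy as np

def estimate_pan_zoom(positions, vectors, centre,
                      zooms=np.linspace(0.9, 1.1, 21), pan_bin=2):
    positions = np.asarray(positions, float)   # (N, 2) block centres
    vectors = np.asarray(vectors, float)       # (N, 2) motion vectors
    best, best_votes = None, -1
    for z in zooms:
        # pan implied by each vector under this zoom hypothesis
        pans = vectors - (z - 1.0) * (positions - np.asarray(centre, float))
        cells = np.round(pans / pan_bin).astype(int)
        uniq, counts = np.unique(cells, axis=0, return_counts=True)
        i = counts.argmax()
        if counts[i] > best_votes:
            best_votes = counts[i]
            best = (z, uniq[i] * pan_bin)
    return best  # (zoom, (pan_x, pan_y)) with the most consistent votes
```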


Hydrodynamic scene separation from video imagery of ocean wave using autoencoder (오토인코더를 이용한 파랑 비디오 영상에서의 수리동역학적 장면 분리 연구)

  • Kim, Taekyung;Kim, Jaeil;Kim, Jinah
    • Journal of the Korea Computer Graphics Society / v.25 no.4 / pp.9-16 / 2019
  • In this paper, we propose a hydrodynamic scene separation method for wave propagation in video imagery using an autoencoder. In coastal areas, image analysis methods such as particle tracking and optical flow on video imagery are usually applied to measure ocean waves, owing to the difficulty of direct wave observation using sensors. However, external factors such as ambient light and weather conditions considerably hamper accurate wave analysis in coastal video imagery. The proposed method extracts hydrodynamic scenes by separating only the wave motions, minimizing the effect of ambient light during wave propagation. We visually confirmed that the hydrodynamic scenes are reasonably well separated from ambient light and backgrounds in two video datasets acquired from a real beach and from wave flume experiments. In addition, the latent representation of the original video imagery obtained through representation learning with a variational autoencoder was dominated by ambient light and backgrounds, whereas the hydrodynamic scenes of wave propagation were expressed well independently of these external factors.
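
A minimal convolutional autoencoder in PyTorch illustrates the idea of learning a compact latent representation of wave video frames; the layer sizes, channel counts, and training loss below are assumptions of this sketch, not the paper's architecture.

```python
# Tiny frame autoencoder sketch: the encoder compresses a grayscale frame to a
# low-dimensional feature map, and the decoder reconstructs the frame from it.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # H/2 x W/2
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # H/4 x W/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)              # latent "scene" representation
        return self.decoder(z), z

# Training on normalized grayscale frames would minimize a reconstruction loss:
# recon, _ = model(batch); loss = nn.functional.mse_loss(recon, batch)
```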

Tracking Algorithm For Golf Swing Using the Information of Pixels and Movements (화소 및 이동 정보를 이용한 골프 스윙 궤도 추적 알고리즘)

  • Lee, Hong-Ro;Hwang, Chi-Jung
    • The KIPS Transactions:PartB / v.12B no.5 s.101 / pp.561-566 / 2005
  • This paper presents a visual tracking algorithm for golf swing motion analysis that uses pixel information from video frames and the movement of the golf club, addressing the fixed-center-point problem of model-based tracking methods. Model-based tracking uses a polynomial function to display the trajectory of the upswing and downswing; it therefore assumes that the center of gravity does not move, so it is not suitable for amateur players. In the proposed method, motion is first detected using the pixel information of the frames of the golf swing. The club head and hands are then extracted using the property that the club shaft appears as a pair of parallel lines and the displacement of the club during the upswing and downswing. In addition, the player's center point is extracted by tracking the center point of the line between the center of the head and both feet. Experiments were performed on data in which the center point moves considerably. The proposed algorithm can track the real trajectories of the club head, hands, and center point.
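
The two cues described above, pixel-level motion and the shaft's parallel-line appearance, could be sketched with OpenCV as follows; the thresholds, line-length limits, and angle tolerance are illustrative assumptions.

```python
# Sketch: frame differencing gives a motion mask, and nearly parallel Hough
# line segments within that mask are treated as club-shaft candidates.
import cv2
import numpy as np

def motion_mask(prev_gray: np.ndarray, curr_gray: np.ndarray, thresh: int = 25) -> np.ndarray:
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

def shaft_candidates(mask: np.ndarray, angle_tol_deg: float = 5.0):
    """Return pairs of nearly parallel line segments as club-shaft candidates."""
    lines = cv2.HoughLinesP(mask, 1, np.pi / 180, 50, minLineLength=60, maxLineGap=10)
    if lines is None:
        return []
    segs = [l[0] for l in lines]
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) for x1, y1, x2, y2 in segs]
    pairs = []
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            if abs(angles[i] - angles[j]) < angle_tol_deg:
                pairs.append((segs[i], segs[j]))
    return pairs
```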

Flame Detection Using Haar Wavelet and Moving Average in Infrared Video (적외선 비디오에서 Haar 웨이블릿과 이동평균을 이용한 화염검출)

  • Kim, Dong-Keun
    • The KIPS Transactions:PartB / v.16B no.5 / pp.367-376 / 2009
  • In this paper, we propose a flame detection method using the Haar wavelet and moving averages in outdoor infrared video sequences. The proposed method is composed of three steps: Haar wavelet decomposition, flame candidate detection, and candidate tracking with flame classification. In the Haar wavelet decomposition, each frame is decomposed into four sub-images (LL, LH, HL, HH), and high-frequency energy components are computed from LH, HL, and HH. In flame candidate detection, a binary image is computed by thresholding the LL sub-image, and morphology operations are applied to remove noise. After initial boundaries are found, final candidate regions are extracted by expanding the initial boundary regions to their neighborhoods. In tracking and flame classification, region size and high-frequency energy features are calculated from the candidate regions and tracked using queues, and the tracked regions are classified as flames or not based on the temporal changes of their moving averages.
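
A one-level 2D Haar decomposition producing the LL/LH/HL/HH sub-images and a per-pixel high-frequency energy map can be written in a few lines of numpy; the normalization constants used here are a convention of this sketch.

```python
# One-level 2D Haar decomposition sketch (numpy).
import numpy as np

def haar2d(frame: np.ndarray):
    f = frame.astype(np.float64)
    f = f[: f.shape[0] // 2 * 2, : f.shape[1] // 2 * 2]   # crop to even dimensions
    a, b = f[0::2, 0::2], f[0::2, 1::2]
    c, d = f[1::2, 0::2], f[1::2, 1::2]
    ll = (a + b + c + d) / 4.0     # coarse approximation
    lh = (a + b - c - d) / 4.0     # detail along one axis
    hl = (a - b + c - d) / 4.0     # detail along the other axis
    hh = (a - b - c + d) / 4.0     # diagonal detail
    return ll, lh, hl, hh

def high_freq_energy(lh, hl, hh):
    return lh ** 2 + hl ** 2 + hh ** 2   # per-pixel high-frequency energy
```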

Measurement of Spatial Traffic Information by Image Processing (영상처리를 이용한 공간 교통정보 측정)

  • 권영탁;소영성
    • Journal of the Institute of Convergence Signal Processing / v.2 no.2 / pp.28-38 / 2001
  • Traffic information can be broadly categorized into point information and spatial information. Point information can be obtained by checking only the presence of vehicles at prespecified points (small areas), whereas spatial information is obtained by monitoring a large area of the traffic scene. To obtain spatial information by image processing, vehicles must be tracked over the whole area of the traffic scene. An image detector system based on global tracking consists of video input, vehicle detection, vehicle tracking, and traffic information measurement. For video input, conventional approaches use an auto iris, which adapts poorly to sudden brightness changes. Conventional methods for background generation do not yield good results at intersections with heavy traffic, and most early studies measure only point information. In this paper, we propose a user-controlled iris method to remedy the deficiency of the auto iris and design a frame-difference-based background generation method that performs far better at complicated intersections. We also propose measurement methods for spatial traffic information such as interval volume/time/velocity, queue length, and turning/forward traffic flow. A measurement accuracy of 95%∼100% was obtained when the proposed methods were applied.
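
A frame-difference-based background update, the general idea named above, might look like the sketch below; the learning rate and difference threshold are illustrative assumptions, not values from the paper.

```python
# The background is refreshed only where consecutive frames are nearly
# identical, so stopped or queued vehicles at a busy intersection are not
# absorbed into the background too quickly.
import numpy as np

def update_background(background, prev_frame, curr_frame, alpha=0.05, diff_thresh=10):
    bg = background.astype(np.float64)
    curr = curr_frame.astype(np.float64)
    still = np.abs(curr - prev_frame.astype(np.float64)) < diff_thresh  # static pixels
    bg[still] = (1.0 - alpha) * bg[still] + alpha * curr[still]
    return bg
```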


Video Summarization Using Eye Tracking and Electroencephalogram (EEG) Data (시선추적-뇌파 기반의 비디오 요약 생성 방안 연구)

  • Kim, Hyun-Hee;Kim, Yong-Ho
    • Journal of the Korean Society for Library and Information Science / v.56 no.1 / pp.95-117 / 2022
  • This study developed and evaluated audio-visual (AV) semantics-based video summarization methods using eye tracking and electroencephalography (EEG) data. Twenty-seven university students participated in the eye tracking and EEG experiments. The evaluation results showed that the average recall rate (0.73) obtained when using both EEG and pupil diameter data to construct a video summary was higher than that of using EEG data alone (0.50) or pupil diameter data alone (0.68). In addition, this study analyzed the reasons why the average recall (0.57) of the AV semantics-based personalized video summaries was lower than that (0.69) of the AV semantics-based generic video summaries. Finally, the differences and characteristics between the AV semantics-based and text semantics-based video summarization methods were compared and analyzed.
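
For reference, the recall measure reported above can be computed as below; the representation of summaries as sets of shot identifiers and the exact overlap rule are assumptions of this sketch.

```python
# Recall of a generated summary against a ground-truth (user-judged) summary,
# with both summaries represented as sets of shot identifiers.
def summary_recall(selected_shots: set, ground_truth_shots: set) -> float:
    if not ground_truth_shots:
        return 0.0
    return len(selected_shots & ground_truth_shots) / len(ground_truth_shots)

# e.g. a recall of 0.73 would mean 73% of the shots judged important were
# included in the generated summary.
```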

An Embedded System Design of Collusion Attack Prevention for Multimedia Content Protection on Ubiquitous Network Environment (유비쿼터스 네트워크 환경의 멀티미디어 콘텐츠 보호를 위한 공모공격 방지 임베디드 시스템 설계)

  • Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.15-21 / 2010
  • This paper proposes a multimedia fingerprinting code insertion algorithm for video content distributed in a P2P environment and designs a collusion-codebook SRP (Small RISC Processor) embedded system for collusion attack prevention. The implemented system detects the fingerprinting code inserted in video content that a client user requests to upload to the web server; if the content is certified, it is transmitted to the streaming server and allowed to be distributed in the P2P network. Conversely, if a collusion code is detected, the system blocks transmission of the video content to the streaming server, stops its distribution in the P2P network, and traces the colluders who generated the collusion code and participated in the collusion attack. The collusion code of the averaging attack is generated with 10% of the BIBD code v, and the codebook is designed from the generated collusion codes. As a result, when the amount of inserted fingerprinting code is 0.15% or more in bit planes 0-3 of the Y (luminance) component of I-frames, for video compressed as ASF for streaming service and as MP4 for offline delivery, the correlation coefficient between the inserted original code and the detected code is above 0.15. When the correlation coefficient is above 0.1, the detection ratio of the collusion code is 38%, and when it is above 0.2, the trace ratio of the colluders is 20%.
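
The tracing step relies on correlating the code extracted from a suspect video against every user's fingerprint in the codebook; a minimal sketch of that comparison is shown below, with the decision threshold (in the 0.1-0.2 range mentioned in the abstract) left as a parameter.

```python
# Correlation-based fingerprint tracing: users whose fingerprint correlates
# with the extracted code above a threshold are flagged as suspected colluders.
import numpy as np

def suspected_colluders(extracted: np.ndarray, codebook: dict, threshold: float = 0.15):
    suspects = []
    for user, code in codebook.items():
        corr = np.corrcoef(extracted, code)[0, 1]   # normalized correlation
        if corr > threshold:
            suspects.append((user, float(corr)))
    return sorted(suspects, key=lambda s: -s[1])
```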

Shadow Removal Based on Chromaticity and Entropy for Efficient Moving Object Tracking (효과적인 이동물체 추적을 위한 색도 영상과 엔트로피 기반의 그림자 제거)

  • Park, Ki-Hong
    • Journal of Advanced Navigation Technology / v.18 no.4 / pp.387-392 / 2014
  • Recently, various studies on intelligent video surveillance systems have been proposed, but existing monitoring systems are inefficient because all situational awareness is judged by human operators. In this paper, a shadow-removal-based moving object tracking method using chromaticity and entropy images is proposed. A background subtraction model, which is effective in context-awareness environments, is applied for moving object detection. After the moving object region is detected, the shadow candidate region is estimated and removed using RGB-based chromaticity and minimum cross-entropy images. To verify the proposed method, experiments were conducted on highway video, and the results show that shadow removal and moving object tracking are performed well.
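
A chromaticity-based shadow test of the kind named above can be sketched as follows; the thresholds are illustrative, and the paper's additional minimum cross-entropy step is not reproduced here.

```python
# A foreground pixel is a shadow candidate when it is darker than the
# background but its normalized RGB chromaticity stays close to the
# background's chromaticity.
import numpy as np

def shadow_mask(frame, background, fg_mask, chroma_tol=0.02, dark_lo=0.4, dark_hi=0.95):
    f = frame.astype(np.float64) + 1e-6
    b = background.astype(np.float64) + 1e-6
    f_chroma = f[..., :2] / f.sum(axis=2, keepdims=True)   # (r, g) chromaticity
    b_chroma = b[..., :2] / b.sum(axis=2, keepdims=True)
    ratio = f.sum(axis=2) / b.sum(axis=2)                   # brightness ratio
    chroma_close = np.abs(f_chroma - b_chroma).sum(axis=2) < chroma_tol
    darker = (ratio > dark_lo) & (ratio < dark_hi)
    return fg_mask.astype(bool) & chroma_close & darker
```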