• Title/Summary/Keyword: Image rotation

Search Results: 842

Surf points based Moving Target Detection and Long-term Tracking in Aerial Videos

  • Zhu, Juan-juan;Sun, Wei;Guo, Bao-long;Li, Cheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.11
    • /
    • pp.5624-5638
    • /
    • 2016
  • A novel method based on Surf points is proposed to detect and lock-track a single ground target in aerial videos. Videos captured by moving cameras contain complex motions, which make moving object detection difficult. Our approach consists of three parts: moving target template detection, search area estimation and target tracking. Global motion estimation and compensation are first performed by selecting and matching grid-sampled Surf points. The single ground target is then detected by joint spatial-temporal processing: the temporal part computes the difference between the compensated reference image and the current image, and the spatial part applies morphological operations and adaptive binarization. The second part improves the Kalman filter with Surf point scale information to predict the target position and search area adaptively. Lastly, the local Surf points of the target template are matched within this search region to realize target tracking. The long-term tracking is updated to follow target scaling, occlusion and large deformation. Experimental results show that the algorithm can correctly detect small moving targets in dynamic scenes with complex motions. It is robust to vehicle dithering, target scale change and rotation, and especially partial or temporary complete occlusion. Compared with traditional algorithms, our method runs in real time, processing $520{\times}390$ frames at around 15 fps.
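
The pipeline summarized above (global motion compensation from sparse feature matches, frame differencing, then morphology and adaptive binarization) can be approximated with off-the-shelf OpenCV calls. The sketch below is illustrative only, with ORB standing in for SURF (which requires the opencv-contrib build) and all thresholds chosen arbitrarily; it is not the authors' implementation.

```python
import cv2
import numpy as np

def detect_moving_target(prev_gray, curr_gray):
    # Global motion estimation from sparse feature matches (ORB in place of SURF).
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Compensate the camera (global) motion, then difference against the current frame.
    h, w = curr_gray.shape
    compensated = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, compensated)

    # Adaptive binarization and morphology isolate the residual (target) motion.
    mask = cv2.adaptiveThreshold(diff, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 21, -5)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
```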

Line-based Image Stabilization (선 기반 영상안정 방법에 관한 연구)

  • 차용준;소영성
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2001.06a
    • /
    • pp.165-168
    • /
    • 2001
  • This paper proposes a method for electronically stabilizing shaky video when external disturbances, such as camera or camera-platform vibration, coexist with motion inside the video sequence. A typical image stabilization system consists of two stages, motion estimation and motion compensation: motion estimation assumes an inter-frame motion model and estimates its parameters, and motion compensation uses the estimated parameters to compensate the motion. When the image contains motion other than the camera motion, the parameter estimates can become inconsistent; an image stabilization method based on the MVSD (Motion Vector Scatter Diagram) was proposed to address this, but it has limits in quantifying the optimal parameters and its computation time is long. To resolve these drawbacks, this paper proposes a line-based image stabilization method. First, corners in the reference image are detected using a median filter, two distinctive points are selected, and they are connected by a line. In the current image, the two corresponding feature points are found by correlation, their exact positions are computed with a subpixel method, and the corresponding line is obtained. The motion parameters are obtained while aligning the two lines: one endpoint is first aligned by translation, giving the translation parameters in x and y, and then the angle between the two lines sharing that endpoint gives the rotation parameter. Motion compensation with the parameters obtained in this way achieved image stabilization.
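
The parameter-recovery step described above (translate so one endpoint coincides, then read the rotation off the angle between the lines) reduces to a few lines of arithmetic. The following is a minimal sketch under that reading, not the authors' code:

```python
import numpy as np

def line_motion_params(ref_p1, ref_p2, cur_p1, cur_p2):
    """Two matched line endpoints per frame -> translation (tx, ty) and rotation (deg)."""
    ref_p1, ref_p2 = np.asarray(ref_p1, float), np.asarray(ref_p2, float)
    cur_p1, cur_p2 = np.asarray(cur_p1, float), np.asarray(cur_p2, float)

    # Translation: shift that maps the current first endpoint onto the reference one.
    tx, ty = ref_p1 - cur_p1

    # Rotation: angle between the reference line and the current line.
    ref_dir, cur_dir = ref_p2 - ref_p1, cur_p2 - cur_p1
    theta = np.arctan2(ref_dir[1], ref_dir[0]) - np.arctan2(cur_dir[1], cur_dir[0])
    return tx, ty, np.degrees(theta)

# Example: a frame shifted by (-5, +3) needs compensation (tx, ty) = (5, -3).
print(line_motion_params((10, 10), (60, 40), (5, 13), (54, 44)))
```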

DEVELOPMENT OF A TOY INTERFEROMETER FOR EDUCATION AND OBSERVATION OF SUN AT 21 cm

  • Park, Yong-Sun;Kim, Chang-Hee;Choi, Sang-In;Lee, Joo-Young;Jang, Woo-Min;Kim, Woo-Yeon;Jeong, Dae-Heon
    • Journal of The Korean Astronomical Society
    • /
    • v.41 no.3
    • /
    • pp.77-81
    • /
    • 2008
  • As a continuation of a previous work by Park et al. (2006), we have developed a two-element radio interferometer that can measure both the phase and amplitude of a visibility function. Two small radio telescopes with diameters of 2.3 m are used as before, but this time an external reference oscillator is shared by the two telescopes so that the local oscillator frequencies are identical. We do not use a hardware correlator; instead we record signals from the two telescopes onto a PC and then perform software correlation. Complex visibilities are obtained toward the sun at ${\lambda}\;=\;21\;cm$, for 24 baselines with the use of the earth rotation and positional changes of one element, where the maximum baseline length projected onto UV plane is ${\sim}\;90{\lambda}$. As expected, the visibility amplitude decreases with the baseline length, while the phase is almost constant. The image obtained by the Fourier transformation of the visibility function nicely delineates the sun, which is barely resolved due to the limited baseline length. The experiment demonstrates that this system can be used as a "toy" interferometer at least for the education of (under)graduate students.
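
The software correlation step can be illustrated with a toy correlator: the complex visibility for one baseline is the time-averaged product of one telescope's signal with the conjugate of the other's. The sketch below assumes real-valued sampled voltages and uses the analytic signal; it illustrates the principle rather than the authors' processing chain.

```python
import numpy as np
from scipy.signal import hilbert

def complex_visibility(v1, v2):
    """v1, v2: real-valued voltage samples recorded at the two telescopes."""
    a1, a2 = hilbert(v1), hilbert(v2)        # analytic (complex) signals
    vis = np.mean(a1 * np.conj(a2))          # time-averaged cross product
    return np.abs(vis), np.angle(vis)        # visibility amplitude and phase
```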

Face Detection for Cast Searching in Video (비디오 등장인물 검색을 위한 얼굴검출)

  • Paik Seung-ho;Kim Jun-hwan;Yoo Ji-sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.10C
    • /
    • pp.983-991
    • /
    • 2005
  • Human faces are commonly found in video such as dramas and provide useful information for video content analysis. Therefore, face detection plays an important role in applications such as face recognition and face image database management. In this paper, we propose a face detection algorithm based on scene change detection as pre-processing, for indexing and cast searching in video. The proposed algorithm consists of three stages: scene change detection, face region detection, and eye and mouth detection. Experimental results show that the proposed algorithm can detect faces successfully over a wide range of facial variations in scale, rotation, pose, and position, and that the performance with profile images is improved by $24\%$ compared with conventional methods using color components.
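
A loose sketch of the three-stage structure (scene-change gating, face region detection, then a facial-feature check) is shown below. OpenCV's bundled Haar cascades and a histogram-difference test stand in for the paper's detectors; the threshold is arbitrary and this is not the proposed algorithm itself.

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def is_scene_change(prev_gray, curr_gray, thresh=0.4):
    # Cheap scene-change test: histogram distance between consecutive frames.
    h1 = cv2.calcHist([prev_gray], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([curr_gray], [0], None, [64], [0, 256])
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    return cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA) > thresh

def detect_faces_with_eye_check(gray):
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        results.append(((x, y, w, h), len(eyes) >= 2))   # face box + eye verification
    return results
```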

Image Retrieval using Corner Detection and Rotation Invariant Gabor Filter (코너 검출 및 회전불변 Gabor 필터를 이용한 영상 검색)

  • You, Hee-Jun;Kim, Dong-Hoon;Eum, Min-Young;Shin, Dae-Kyu;Kim, Hyun-Sool;Park, Sang-Hui
    • Proceedings of the KIEE Conference
    • /
    • 2002.07d
    • /
    • pp.2595-2597
    • /
    • 2002
  • With the development of digital storage media, vast amounts of image data are being organized into databases, and efficiently retrieving the desired images from such databases has become an important problem. Various image retrieval methods based on color, shape and texture features have been proposed; this work uses Gabor feature vectors, which characterize texture. Interest points are detected in the image, feature vectors are extracted at those points using Gabor wavelets, and image retrieval is performed with a VQ-based histogram intersection method. The conventional Gabor wavelet method does not handle image rotation well, which significantly degrades the retrieval rate for rotated images. To solve this problem, this paper proposes an image retrieval method using a rotation-invariant Gabor filter.
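
One common way to obtain the rotation invariance mentioned above is to compute Gabor magnitude responses over several orientations and circularly shift them so the dominant orientation comes first. The sketch below illustrates that general idea on a single interest-point patch; the filter parameters are arbitrary and it is not the authors' exact design.

```python
import cv2
import numpy as np

def rotation_invariant_gabor(patch, n_orient=8, n_scale=4):
    """patch: grayscale interest-point neighbourhood (2-D array)."""
    feats = np.zeros((n_scale, n_orient))
    for s in range(n_scale):
        for o in range(n_orient):
            # ksize, sigma, theta, lambd, gamma
            kern = cv2.getGaborKernel((21, 21), 3.0 + 2.0 * s,
                                      np.pi * o / n_orient, 8.0, 0.5)
            resp = cv2.filter2D(patch.astype(np.float32), cv2.CV_32F, kern)
            feats[s, o] = np.abs(resp).mean()
    # Circularly shift so the dominant orientation comes first -> rotation invariance.
    shift = int(np.argmax(feats.sum(axis=0)))
    return np.roll(feats, -shift, axis=1).flatten()
```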

A Study on the Automatic Identification of HANGEUL Seal by using the Image Processing (영상처리에 의한 한글인장의 자동직별에 관한 연구)

  • 이기돈;전병민;김상운
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.10 no.2
    • /
    • pp.69-75
    • /
    • 1985
  • The proposed seal identification procedure consists of smoothing, rotation, thinning, and matching techniques. The seal images, scanned by CCTV, are thresholded into binary pictures of $256{\times}256$ pixels through an A/D converter and a 6502 microcomputer. After the sample and target images are rotated into an identical orientation, a thinning process is used to extract the skeletons of the character strokes. A weighted map is constructed from distance weights, from which the distance-weighted correlation C is computed. C is compared with decision constants for the purpose of seal identification. The identification rate is 95% and the total CPU time is less than 3 minutes for each identification in the experiment.
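
The distance-weighted correlation used for matching can be illustrated as follows: a weight map peaks on the registered skeleton and decays with distance, and the score is the weighted fraction of test-skeleton pixels. This is a modern re-reading of the idea, not the original 1985 implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_weighted_correlation(ref_skel, test_skel):
    """ref_skel, test_skel: binary arrays with 1 on stroke-skeleton pixels."""
    # Weight map: 1 on the reference strokes, decaying with distance from them.
    dist = distance_transform_edt(1 - ref_skel)
    weights = 1.0 / (1.0 + dist)
    # Weighted fraction of test-skeleton pixels lying on or near the reference strokes.
    return float((weights * test_skel).sum() / max(test_skel.sum(), 1))
```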

An integrated visual-inertial technique for structural displacement and velocity measurement

  • Chang, C.C.;Xiao, X.H.
    • Smart Structures and Systems
    • /
    • v.6 no.9
    • /
    • pp.1025-1039
    • /
    • 2010
  • Measuring the displacement response of civil structures is very important for assessing their performance, safety and integrity. Recently, video-based techniques that utilize low-cost high-resolution digital cameras have been developed for such applications. These techniques, however, have a relatively low sampling frequency and the results are usually contaminated with noise. In this study, an integrated visual-inertial measurement method that combines a monocular videogrammetric displacement measurement technique with a collocated accelerometer is proposed for displacement and velocity measurement of civil engineering structures. The monocular videogrammetric technique extracts the three-dimensional translation and rotation of a planar target from an image sequence recorded by one camera. The obtained displacement is then fused with the acceleration measured by the collocated accelerometer using a multi-rate Kalman filter with a smoothing technique. This data fusion not only improves the accuracy and frequency bandwidth of the displacement measurement but also provides an estimate of velocity. The proposed measurement technique is illustrated by a shake table test and a pedestrian bridge test. Results show that fusing displacement and acceleration mitigates their respective limitations and produces more accurate displacement and velocity responses with a broader frequency bandwidth.
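
The multi-rate fusion idea is sketched below: the accelerometer drives the Kalman prediction at its high sampling rate, and the slower vision-based displacement corrects the state whenever a sample arrives (the smoothing pass is omitted). The noise parameters are placeholders, and this is an illustration of the general scheme rather than the authors' filter.

```python
import numpy as np

def fuse_displacement_acceleration(acc, dt, disp, disp_every, q=1e-3, r=1e-6):
    """acc: high-rate acceleration samples; disp: low-rate displacement samples,
    one every `disp_every` acceleration steps. Returns [displacement, velocity] per step."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
    B = np.array([0.5 * dt ** 2, dt])          # acceleration as a known input
    Hm = np.array([[1.0, 0.0]])                # the camera observes displacement only
    Q, R = q * np.eye(2), np.array([[r]])
    x, P, out = np.zeros(2), np.eye(2), []
    for k, a in enumerate(acc):
        x = F @ x + B * a                      # predict with the accelerometer
        P = F @ P @ F.T + Q
        if k % disp_every == 0 and k // disp_every < len(disp):
            z = np.array([disp[k // disp_every]])          # low-rate vision update
            K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)
            x = x + (K @ (z - Hm @ x)).ravel()
            P = (np.eye(2) - K @ Hm) @ P
        out.append(x.copy())
    return np.array(out)
```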

Avalanche and Bit Independence Properties of Photon-counting Double Random Phase Encoding in Gyrator Domain

  • Lee, Jieun;Sultana, Nishat;Yi, Faliu;Moon, Inkyu
    • Current Optics and Photonics
    • /
    • v.2 no.4
    • /
    • pp.368-377
    • /
    • 2018
  • In this paper, we evaluate the cryptographic properties of a double random phase encoding (DRPE) scheme in the discrete Gyrator domain using the avalanche and bit independence criteria. DRPE in the discrete Gyrator domain is reported to have higher security than traditional DRPE in the Fourier domain because the rotation angle involved in the Gyrator transform serves as an additional secret key. However, our numerical experiments demonstrate that DRPE in the discrete Gyrator domain has an excellent bit independence feature but does not possess a good avalanche property, and hence needs to be improved to achieve an avalanche effect that is robust against statistical cryptanalysis. We compare our results with the avalanche and bit independence criterion (BIC) performance of the conventional DRPE scheme, and improve the avalanche effect of DRPE in the discrete Gyrator domain by integrating a photon-counting imaging technique. Although Gyrator transform-based image cryptosystems have been studied, to the best of our knowledge this is the first report on a cryptographic evaluation of the discrete Gyrator transform with the avalanche and bit independence criteria.
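
The avalanche criterion itself is easy to state in code: flip a single input bit, encrypt both versions, and measure the fraction of ciphertext bits that change (ideally about 0.5). The sketch below uses a placeholder encrypt() function and is independent of any particular cipher, including the Gyrator-domain DRPE evaluated in the paper.

```python
import numpy as np

def avalanche_effect(encrypt, plain_bits, bit_index):
    """encrypt: placeholder cipher mapping a 0/1 bit array to a 0/1 bit array."""
    flipped = plain_bits.copy()
    flipped[bit_index] ^= 1                    # flip a single input bit
    c1, c2 = encrypt(plain_bits), encrypt(flipped)
    return float(np.mean(c1 != c2))            # fraction of ciphertext bits changed

# A cipher with a good avalanche effect gives values near 0.5 for every bit_index.
```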

Unconstrained e-Book Control Program by Detecting Facial Characteristic Point and Tracking in Real-time (얼굴의 특이점 검출 및 실시간 추적을 이용한 e-Book 제어)

  • Kim, Hyun-Woo;Park, Joo-Yong;Lee, Jeong-Jick;Yoon, Young-Ro
    • Journal of Biomedical Engineering Research
    • /
    • v.35 no.2
    • /
    • pp.14-18
    • /
    • 2014
  • This study concerns an e-Book program based on a human-computer interaction (HCI) system for physically handicapped persons. From the background of HCI, it is known that a vision-based interface can replace current computer input devices by extracting a characteristic point and tracking it. By analyzing facial input images from a webcam, we chose the between-eyes point as the characteristic point. However, because of the three-dimensional structure of glasses, tracking the between-eyes point was not suitable for a person wearing glasses, so after detecting the between-eyes point we changed the characteristic point to the bridge of the nose. With this technique we could track head rotation in real time regardless of glasses. To test the program's usefulness, we conducted an experiment and analyzed its results on an actual application. Consequently, analyzing the test results of 20 subjects, we obtained a 96.5% success rate for controlling the e-Book under proper conditions.
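
Once a facial landmark such as the nose-bridge point has been detected, it can be tracked frame to frame with pyramidal Lucas-Kanade optical flow, as sketched below. The detector itself is not reproduced here, and the tracker shown is a generic stand-in, not necessarily the one used in the paper.

```python
import cv2
import numpy as np

def track_landmark(prev_gray, curr_gray, point):
    """point: (x, y) of the landmark in the previous frame; returns the new (x, y) or None."""
    p0 = np.array([[point]], dtype=np.float32)              # shape (1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, p0, None, winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    return tuple(p1[0, 0]) if status[0, 0] == 1 else None   # None when the track is lost
```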

Identification System Based on Partial Face Feature Extraction (부분 얼굴 특징 추출에 기반한 신원 확인 시스템)

  • Choi, Sun-Hyung;Cho, Seong-Won;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.2
    • /
    • pp.168-173
    • /
    • 2012
  • This paper presents a new human identification algorithm that uses partial features from the uncovered portion of the face when a person wears a mask. After the face area is detected, features are extracted from the eye area above the mask. Identification is performed by comparing the acquired features with the registered ones. The SIFT (scale invariant feature transform) algorithm is used for feature extraction; the extracted features are independent of brightness and invariant to image scale and rotation. The experimental results show the effectiveness of the suggested algorithm.
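
A sketch of matching SIFT descriptors from the exposed eye region against a registered template is given below; the ratio test and threshold are conventional choices, not taken from the paper.

```python
import cv2

def count_sift_matches(eye_region, registered_descriptors, ratio=0.75):
    """Match the masked person's eye-region descriptors against one registered template."""
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(eye_region, None)
    if descriptors is None or registered_descriptors is None:
        return 0
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(descriptors, registered_descriptors, k=2)
    good = 0
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1        # Lowe's ratio test keeps only distinctive matches
    return good              # identity = registered person with the most good matches
```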