• Title/Summary/Keyword: 이동영상 (moving image)


Landsat TM-Based Remote Sensing Monitoring for Detecting Physicochemical Changes in an Intertidal River Estuary Caused by Estuary Dam Completion (하구둑 준공으로 인한 조간대 하천하구 물리화학적 변화탐지 LandsatTM 기반 원격탐사 모니터링)

  • Sin, Eon-Seok;Kim, Hyeong-Mu;Lee, Jae-Bong;Lee, Hong-Ro
    • Proceedings of the Korea Contents Association Conference / 2004.11a / pp.203-208 / 2004
  • Demand for an integrated management system for river and marine water resources is growing in response to the increasing trend of environmental pollution and natural disasters, but existing river water-resource management systems alone cannot meet this demand. Satellite imagery provides an efficient means of wide-area survey and time-series observation of a region of interest, and is therefore well suited to simultaneous observation of river and marine water resources. This study designs and implements a system model that employs effective image acquisition and image interpolation for building a satellite-image-based remote-sensing system for monitoring changes in a river estuary. Applying and verifying this model on the Geum River estuary between Jeonbuk and Chungnam, observing the change and movement of the seawater-freshwater mixing zone and of the sea and land areas, showed that the proposed satellite-image-based model for estuary change-detection monitoring is more effective than the existing grid-station observation method.
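The land/water change detection described above can be illustrated with a short sketch. This is not the paper's model; it only shows, under assumed inputs, how two co-registered Landsat TM scenes might be compared: a crude near-infrared threshold (an assumption, as are the band arrays and the 30 m pixel size) classifies water, and the resulting masks give the change in sea and land area.

```python
import numpy as np

def water_mask(nir_band: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Classify water pixels by low near-infrared reflectance (illustrative threshold)."""
    return nir_band < threshold

def land_water_change(nir_t1: np.ndarray, nir_t2: np.ndarray, pixel_area_m2: float = 30.0 * 30.0):
    """Compare water masks from two co-registered scenes and report area change in km^2."""
    w1, w2 = water_mask(nir_t1), water_mask(nir_t2)
    gained = np.logical_and(~w1, w2)   # land -> water
    lost = np.logical_and(w1, ~w2)     # water -> land
    return {
        "water_area_t1_km2": w1.sum() * pixel_area_m2 / 1e6,
        "water_area_t2_km2": w2.sum() * pixel_area_m2 / 1e6,
        "gained_km2": gained.sum() * pixel_area_m2 / 1e6,
        "lost_km2": lost.sum() * pixel_area_m2 / 1e6,
    }

# Toy example with synthetic reflectance values in place of real TM band data.
t1 = np.random.rand(100, 100) * 0.4
t2 = np.roll(t1, 5, axis=1)  # simulate a shifted shoreline
print(land_water_change(t1, t2))
```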


Modified Mean Shift for Color Image Processing (컬러 영상 처리를 위한 Mean Shift 기법 개선)

  • Hwang, Young-chul;Bae, Jung-ho;Cha, Eui-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.05a / pp.407-410 / 2009
  • This paper introduces color image segmentation using an improved mean shift. Mean shift was brought back to attention by Yizong Cheng and consolidated by Dorin Comaniciu et al., and is now widely used in applications such as image filtering, image segmentation, and object tracking. It estimates density with a kernel and repeatedly moves the kernel to the point of highest density, updating each data value to a locally dominant position. However, because mean shift must be performed for every pixel in the image, it suffers from long computation times. This paper analyzes the mean shift filtering process and shortens the required time using a reference-convergence method and a forced-convergence method. Instead of running mean shift for every point, pixels satisfying a specific condition reuse the convergence value of a neighboring pixel, and pixels that keep oscillating or moving only minutely during the mean shift process are forced to converge. In experiments applying the improved method and the conventional mean shift method to image filtering and image segmentation, the resulting images differed little, while the improved method took only about 24% of the execution time of the conventional method.
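A minimal sketch of mean shift filtering with the two shortcuts described above, in spirit: a reference-convergence step that reuses a neighbouring pixel's converged value, and an iteration cap that forces convergence. It is not the authors' implementation; the gray-scale joint spatial-range formulation, bandwidths, and thresholds are assumptions.

```python
import numpy as np

def mean_shift_filter(img, hs=4, hr=16.0, max_iter=20, eps=0.3):
    """Gray-scale mean shift filtering in the joint spatial-range domain.

    hs: spatial window radius, hr: range (intensity) bandwidth.
    max_iter acts as a forced-convergence cap; the converged value of the
    left neighbour is reused when its raw intensity is very similar, as a
    crude stand-in for the paper's reference-convergence idea (assumption).
    """
    img = img.astype(np.float64)
    out = np.empty_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            # Reference convergence: reuse the left neighbour's result.
            if x > 0 and abs(img[y, x] - img[y, x - 1]) < 1.0:
                out[y, x] = out[y, x - 1]
                continue
            cy, cx, val = y, x, img[y, x]
            for _ in range(max_iter):          # forced convergence after max_iter
                y0, y1 = max(cy - hs, 0), min(cy + hs + 1, H)
                x0, x1 = max(cx - hs, 0), min(cx + hs + 1, W)
                window = img[y0:y1, x0:x1]
                mask = np.abs(window - val) <= hr   # flat kernel in the range domain
                ys, xs = np.nonzero(mask)
                new_val = window[mask].mean()
                new_y = int(round(ys.mean())) + y0
                new_x = int(round(xs.mean())) + x0
                if abs(new_val - val) < eps and new_y == cy and new_x == cx:
                    break
                cy, cx, val = new_y, new_x, new_val
            out[y, x] = val
    return out

# Toy usage with a small noisy synthetic image.
test = np.tile(np.repeat([50.0, 200.0], 16), (32, 1)) + np.random.randn(32, 32) * 5
filtered = mean_shift_filter(test)
```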


Study on the Real-Time Moving Object Tracking using Fuzzy Controller (퍼지 제어기를 이용한 실시간 이동 물체 추적에 관한 연구)

  • Kim Gwan-Hyung;Kang Sung-In;Lee Jae-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.1 / pp.191-196 / 2006
  • This paper presents a moving-object tracking method using a vision system. To track an object in real time, the image of the moving object has to be kept at the origin of the image coordinate axes. Accordingly, a fuzzy control system is investigated for tracking the moving object, controlling the camera module through a pan/tilt mechanism. So that the system can be applied to a mobile robot, we design and implement an image processing board for the vision system, and the fuzzy controller is implemented on a StrongARM board. Experiments show that the proposed fuzzy controller is useful for real-time moving-object tracking.
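A minimal sketch of the kind of fuzzy mapping such a controller could use: the target's pixel offset from the image centre is fuzzified into negative/zero/positive sets and defuzzified into pan/tilt rate commands. The membership ranges, rule outputs, and sign convention are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_rate(error_px, half_width):
    """Map a pixel error (offset from the image centre) to a rate command in [-1, 1]."""
    e = np.clip(error_px / half_width, -1.0, 1.0)
    mu_neg = tri(e, -2.0, -1.0, 0.0)
    mu_zero = tri(e, -0.5, 0.0, 0.5)
    mu_pos = tri(e, 0.0, 1.0, 2.0)
    # Rules: pan/tilt toward the target; the sign convention depends on the camera mount (assumption).
    num = mu_neg * (-1.0) + mu_zero * 0.0 + mu_pos * (+1.0)
    den = mu_neg + mu_zero + mu_pos
    return num / den if den > 0 else 0.0

def pan_tilt_command(target_xy, frame_size=(320, 240)):
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    pan = fuzzy_rate(target_xy[0] - cx, cx)
    tilt = fuzzy_rate(target_xy[1] - cy, cy)
    return pan, tilt

print(pan_tilt_command((250, 90)))  # target to the right of and above the centre
```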

A Moving Object Tracking System from a Moving Camera by Integration of Motion Estimation and Double Difference (BBME와 DD를 통합한 움직이는 카메라로부터의 이동물체 추적 시스템)

  • 설성욱;송진기;장지혜;이철헌;남기곤
    • Journal of KIISE: Software and Applications / v.31 no.2 / pp.173-181 / 2004
  • In this paper, we propose a system for automatic moving-object detection and tracking in image sequences acquired from a moving camera. The proposed algorithm consists of moving-object detection and tracking. The moving object is detected by integrating BBME and the double-difference (DD) method. We segment the detected object using histogram back-projection, match it using histogram intersection, and extract and track it using XY-projection. Computer simulation results show that the proposed algorithm is reliable and can successfully detect and track a moving object in image sequences obtained from a moving camera.
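A minimal sketch, assuming OpenCV, of the histogram back-projection, histogram-intersection, and XY-projection steps named in the abstract; the BBME/DD detection stage is not reproduced, and the hue-only histogram and bin count are assumptions.

```python
import cv2
import numpy as np

def hue_hist(bgr, mask=None, bins=32):
    """Normalized hue histogram of a region (hue-only model is an assumption)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], mask, [bins], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def back_project(bgr, model_hist):
    """Per-pixel likelihood of belonging to the modelled object."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    return cv2.calcBackProject([hsv], [0], model_hist, [0, 180], 1)

def intersection(h1, h2):
    """Histogram intersection score used to match candidate regions."""
    return cv2.compareHist(h1, h2, cv2.HISTCMP_INTERSECT)

# Toy usage with a synthetic frame in place of camera images.
frame = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
roi = frame[100:140, 150:200]          # detected object region (placeholder)
model = hue_hist(roi)
likelihood = back_project(frame, model)
xy_x = likelihood.sum(axis=0)          # XY-projection: column sums
xy_y = likelihood.sum(axis=1)          # and row sums locate the object
```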

Augmented Reality Using Projective Information (비유클리드공간 정보를 사용하는 증강현실)

  • 서용덕;홍기상
    • Journal of Broadcast Engineering / v.4 no.2 / pp.87-102 / 1999
  • We propose an algorithm for augmenting a real video sequence with views of graphics objects, without metric calibration of the video camera, by representing the motion of the video camera in projective space. We define a virtual camera, through which views of the graphics objects are generated, attached to the real camera by specifying the image locations of the world coordinate system of the virtual world. The virtual camera is decomposed into calibration and motion components in order to make full use of graphics tools. The projective motion of the real camera, recovered from image matches, serves to transfer the virtual camera and makes it move according to the motion of the real camera. The virtual camera also follows changes in the internal parameters of the real camera. This paper shows theoretical and experimental results of applying non-metric vision to augmented reality.
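A minimal sketch of the camera bookkeeping the abstract builds on: a virtual camera decomposed into calibration and motion components, P = K[R|t], projecting homogeneous vertices of a graphics object into the image. The numeric values are placeholders, and the transfer of the virtual camera by a recovered projective motion is not reproduced.

```python
import numpy as np

# Intrinsic calibration K and pose [R|t] of the virtual camera (placeholder values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])
P = K @ np.hstack([R, t])          # 3x4 projection matrix of the virtual camera

# Homogeneous vertices of a graphics object (a unit square in the virtual world).
X = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 0],
              [1, 1, 1, 1]], dtype=float)

x = P @ X                           # project into the image plane
x = x[:2] / x[2]                    # dehomogenize to pixel coordinates
print(x.T)
```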


Vision-based Target Tracking for UAV and Relative Depth Estimation using Optical Flow (무인 항공기의 영상기반 목표물 추적과 광류를 이용한 상대깊이 추정)

  • Jo, Seon-Yeong;Kim, Jong-Hun;Kim, Jung-Ho;Lee, Dae-Woo;Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.37 no.3 / pp.267-274 / 2009
  • Recently, UAVs (Unmanned Aerial Vehicles) have drawn much attention as unmanned systems for various missions, many of which rely on a vision system. In particular, missions such as surveillance and pursuit are carried out using the video data transmitted from the UAV. In small UAVs, monocular vision is often used to limit weight and cost. Research on performing missions with monocular vision continues, but because the ground and the target lie at different distances from the UAV, 3D distance measurement remains inaccurate. In this study, the mean-shift algorithm, optical flow, and the subspace method are combined to estimate relative depth. The mean-shift algorithm is used for target tracking and for determining the region of interest (ROI). Optical flow captures image motion information from pixel intensities. The subspace method then computes the translation and rotation of the image and estimates the relative depth. Finally, we present results obtained from images captured in UAV flight experiments.
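A minimal sketch, assuming OpenCV, of the optical-flow front end: dense Farneback flow is computed inside a region of interest (supplied by hand here instead of by mean-shift tracking), producing the per-pixel motion field that a subspace-style depth estimator would consume. The flow parameters are assumptions, and the subspace method itself is not reproduced.

```python
import cv2
import numpy as np

def roi_flow(prev_gray, curr_gray, roi):
    """Dense Farneback optical flow restricted to a region of interest.

    roi = (x, y, w, h); in the paper the ROI comes from mean-shift tracking,
    here it is supplied by the caller (assumption).
    """
    x, y, w, h = roi
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray[y:y + h, x:x + w], curr_gray[y:y + h, x:x + w],
        None, 0.5, 3, 15, 3, 5, 1.2, 0)
    return flow  # per-pixel (dx, dy), the raw input for a depth-from-motion step

# Toy usage with a synthetic texture instead of UAV video.
noise = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
prev = cv2.GaussianBlur(noise, (9, 9), 3)      # smooth texture so flow is well defined
curr = np.roll(prev, 2, axis=1)                # simulate a 2-pixel horizontal shift
flow = roi_flow(prev, curr, (100, 80, 64, 64))
print(flow[..., 0].mean())                     # should be close to the simulated shift
```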

Real-Time Tracking of Moving Object by Adaptive Search in Spatial-temporal Spaces (시공간 적응탐색에 의한 실시간 이동물체 추적)

  • Kim, Gye-Young;Choi, Hyung-Ill
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.11 / pp.63-77 / 1994
  • This paper describes a real-time system which, by analyzing a sequence of images, can extract motion information about a moving object and control servo equipment to keep the moving object at the center of the image frame. An image is a vast amount of two-dimensional signal, so analyzing the whole image takes considerable time; in particular, the time needed to load pixels from memory to the processor increases exponentially as the image size increases. To solve this problem and track a moving object in real time, this paper addresses how to selectively search the spatial and temporal domains. Based on this selective search, the paper presents the techniques essential to implementing a real-time tracking system: how to detect a moving object entering the camera's field of view and the direction of entry, how to determine the time interval between adjacent images, how to determine the nonstationary areas formed by a moving object and calculate the object's velocity and position from those areas, how to control the servo equipment to keep the object at the center of the image frame, and how to adjust the time interval (Δt) to track an object moving at variable speed.
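A minimal sketch of the selective spatial search idea: only a restricted search region is differenced between frames, the centroid of the nonstationary area gives the object position, and its offset from the frame centre is the error fed to the servo loop. The threshold and region are illustrative assumptions, not the paper's design.

```python
import numpy as np

def nonstationary_centroid(prev, curr, search_box, diff_thresh=25):
    """Locate the moving object inside a restricted search region.

    Only pixels inside search_box = (x, y, w, h) are differenced, mirroring
    the selective spatial search idea; the threshold is an assumption.
    """
    x, y, w, h = search_box
    diff = np.abs(curr[y:y + h, x:x + w].astype(int) - prev[y:y + h, x:x + w].astype(int))
    ys, xs = np.nonzero(diff > diff_thresh)
    if len(xs) == 0:
        return None                       # no motion detected in the region
    return (x + xs.mean(), y + ys.mean())  # centroid of the nonstationary area

def servo_error(centroid, frame_size=(320, 240)):
    """Pixel offset of the object from the frame centre, fed to the servo loop."""
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    return centroid[0] - cx, centroid[1] - cy

# Toy usage: a bright blob shifts 3 pixels to the right between frames.
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
prev[100:110, 150:160] = 255
curr[100:110, 153:163] = 255
c = nonstationary_centroid(prev, curr, (120, 80, 100, 80))
print(c, servo_error(c))
```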


Digital Watermarking on Image for View-point Change and Malicious Attacks (영상의 시점변화와 악의적 공격에 대한 디지털 워터마킹)

  • Kim, Bo-Ra;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering / v.19 no.3 / pp.342-354 / 2014
  • This paper deals with digital watermarking methods to protect ownership of images, targeting ultra-high multi-view or free-viewpoint image services in which an arbitrary viewpoint image is rendered at the user side. Its main purpose is not to propose a method superior to previous ones but to show how difficult it is to construct a watermarking scheme that withstands the viewpoint-translation attack. We therefore target images subjected to various attacks, including viewpoint translation. The paper first shows how high the error rate of the extracted watermark becomes for viewpoint-translated images under basic schemes based on the 2D discrete cosine transform (2D DCT) and the 2D discrete wavelet transform (2D DWT), which are designed for 2D images. Because the difficulty in watermarking viewpoint-translated images stems from the fact that the translated viewpoint is unknown, we propose a scheme to find the translated viewpoint using the image and the corresponding depth information at the original viewpoint. This scheme is used to construct the two proposed non-blind watermarking methods, which show that recovering the viewpoint greatly affects the error rate of the extracted watermark. Comparing the performance of the proposed methods with previous ones also shows that the proposed methods are better in invisibility and robustness, even though they are non-blind.
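A minimal sketch of the kind of basic 2D-DCT watermarking the paper uses as a starting point for comparison, not the proposed scheme: a binary watermark is embedded additively in mid-frequency DCT coefficients and extracted non-blind by differencing against the original image. The coefficient band and embedding strength are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(img, bits, alpha=8.0, band=(32, 64)):
    """Additively embed ±alpha into a block of mid-frequency DCT coefficients."""
    C = dctn(img.astype(float), norm="ortho")
    r0, r1 = band
    n = r1 - r0
    w = (np.array(bits[: n * n]).reshape(n, n) * 2 - 1) * alpha   # 0/1 -> ±alpha
    C[r0:r1, r0:r1] += w
    return idctn(C, norm="ortho")

def extract(marked, original, band=(32, 64)):
    """Non-blind extraction: difference of DCT coefficients against the original."""
    d = dctn(marked.astype(float), norm="ortho") - dctn(original.astype(float), norm="ortho")
    r0, r1 = band
    return (d[r0:r1, r0:r1] > 0).astype(int).ravel()

# Toy usage on a synthetic image.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, (128, 128)).astype(float)
bits = rng.integers(0, 2, 32 * 32)
marked = embed(img, bits)
recovered = extract(marked, img)
print("bit error rate:", np.mean(recovered != bits))
```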

Subpixel Shift Estimation in Noisy Image Using Iterative Phase Correlation of A Selected Local Region (잡음 영상에서 국부 영역의 반복적인 위상 상관도를 이용한 부화소 이동량 추정방법)

  • Ha, Ho-Gun;Jang, In-Su;Ko, Kyung-Woo;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.103-119 / 2010
  • In this paper, we propose a subpixel shift estimation method that uses phase correlation on a local region for the registration of noisy images. Phase correlation is commonly used to estimate the subpixel shift between images and is derived by analyzing shifted and downsampled images. However, when the images are affected by additive white Gaussian noise and aliasing artifacts, the estimation error increases. Thus, instead of using the whole image, the proposed method uses a specific local region that is less affected by noise. In addition, to improve estimation accuracy, iterative phase correlation is applied to the selected local regions rather than using a fitting function; the restricted search range is determined by analyzing the maximum peak and its two adjacent values in the inverse Fourier transform of the normalized cross-power spectrum. In experiments, the proposed method registers noisy images more accurately than the other methods, so the edge sharpness and clarity of the super-resolved image are also improved.
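A minimal sketch of plain phase correlation, the baseline the abstract starts from rather than the authors' iterative local-region method: the normalized cross-power spectrum is inverted, its peak gives the integer shift, and a parabolic fit around the peak stands in for the subpixel refinement.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the (dy, dx) shift of image b relative to image a by phase correlation."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12                      # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(R))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic(p, axis_len, minus, peak, plus):
        """One-dimensional parabolic interpolation around the correlation peak."""
        denom = 2.0 * peak - minus - plus
        frac = 0.0 if denom == 0 else 0.5 * (plus - minus) / denom
        shift = p + frac
        return shift - axis_len if shift > axis_len / 2 else shift  # wrap to a signed shift

    H, W = corr.shape
    dy = parabolic(py, H, corr[(py - 1) % H, px], corr[py, px], corr[(py + 1) % H, px])
    dx = parabolic(px, W, corr[py, (px - 1) % W], corr[py, px], corr[py, (px + 1) % W])
    return dy, dx

# Toy usage: a random pattern shifted by (3, -2) pixels with mild noise.
rng = np.random.default_rng(1)
img_a = rng.random((128, 128))
img_b = np.roll(img_a, (3, -2), axis=(0, 1)) + rng.normal(0, 0.01, img_a.shape)
print(phase_correlation_shift(img_a, img_b))   # expect approximately (3.0, -2.0)
```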

Emergency Situation Detection using Images from Surveillance Camera and Mobile Robot Tracking System (감시카메라 영상기반 응급상황 탐지 및 이동로봇 추적 시스템)

  • Han, Tae-Woo;Seo, Yong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.5 / pp.101-107 / 2009
  • In this paper, we describe a method for detecting emergency situations from surveillance camera images and propose a mobile robot tracking system for closer examination of the situation. We can track several persons and recognize their actions by analyzing image sequences acquired from fixed cameras on all sides of a building. When an emergency situation is detected, a mobile robot moves to and closely examines the place where the emergency occurred. In order to recognize the actions of several persons from surveillance camera image sequences, we need to track and manage a list of regions regarded as human appearances. Regions of interest are segmented from the background using a MOG (mixture of Gaussians) model and continuously tracked using an appearance model within a single image. We then construct an MHI (motion history image) for each tracked person from the silhouette information of the region blobs and use it to model actions. An emergency situation is finally detected by feeding this information to a neural network. We also implement mobile robot tracking using the distance between the person and the mobile robot.
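A minimal sketch, assuming OpenCV, of the MOG background subtraction and motion-history-image construction mentioned in the abstract; the tracking, action modelling, and neural-network classification stages are not reproduced, and the MHI duration and synthetic frames are assumptions.

```python
import cv2
import numpy as np

MHI_DURATION = 15           # frames a silhouette stays visible in the MHI (assumption)
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def update_mhi(mhi, silhouette, timestamp):
    """Standard motion-history-image update: stamp new motion, expire old motion."""
    mhi[silhouette > 0] = timestamp
    mhi[mhi < timestamp - MHI_DURATION] = 0
    return mhi

# Toy loop over synthetic frames in place of surveillance video.
h, w = 240, 320
mhi = np.zeros((h, w), dtype=np.float32)
for t in range(1, 31):
    frame = np.full((h, w), 30, dtype=np.uint8)
    x = 10 + 5 * t                               # a bright blob moving to the right
    frame[100:130, x:x + 20] = 220
    fg = subtractor.apply(frame)                 # MOG foreground silhouette
    mhi = update_mhi(mhi, fg, float(t))

# A normalised MHI (recent motion is bright) is the kind of feature map that,
# together with tracked blob information, would be fed to an action classifier.
mhi_vis = np.clip((mhi - (30 - MHI_DURATION)) / MHI_DURATION, 0, 1)
```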
