• Title/Summary/Keyword: video tracking

Determination and evaluation of dynamic properties for structures using UAV-based video and computer vision system

  • Rithy Prak;Ji Ho Park;Sanggi Jeong;Arum Jang;Min Jae Park;Thomas H.-K. Kang;Young K. Ju
    • Computers and Concrete
    • /
    • v.31 no.5
    • /
    • pp.457-468
    • /
    • 2023
  • Buildings, bridges, and dams are examples of civil infrastructure that play an important role in public life. These structures are prone to structural variations over time as a result of external forces that may disrupt their operation, cause structural integrity issues, and raise safety concerns for occupants. Therefore, monitoring the state of a structure, known as structural health monitoring (SHM), is essential. Owing to the emergence of the fourth industrial revolution, next-generation sensors such as wireless sensors, UAVs, and video cameras have recently been utilized to improve the quality and efficiency of building forensics. This study presents a target-based method for estimating the dynamic displacement of structures, and the corresponding dynamic properties, from UAV-based video. A laboratory experiment was performed to verify the tracking technique by exciting an SDOF specimen on a shaking table and comparing the results against a laser distance sensor, an accelerometer, and a fixed camera. A field test was then conducted to validate the proposed framework. One target marker is placed on the specimen and another is attached to the ground, serving as a stationary reference that accounts for undesired UAV movement. The displacements from the UAV and the stationary camera differed by a root mean square (RMS) error of 2.02%, and after post-processing the displacement data with an OMA method, the identified natural frequency and damping ratio showed high accuracy and close agreement. The findings illustrate the capability and reliability of the UAV-based methodology for evaluating the dynamic properties of structures.
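
A minimal sketch of the two ideas at the core of this abstract: subtracting the stationary ground marker's apparent motion to cancel UAV drift, and identifying a natural frequency from the corrected displacement history. The function names, the pixel-to-millimetre scale, and the FFT peak picking (a simplification of the paper's OMA post-processing) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def corrected_displacement(target_px, reference_px, scale_mm_per_px):
    """Subtract the stationary ground marker's apparent motion (1-D pixel
    positions along the excitation axis) from the target marker's motion
    to cancel undesired UAV movement."""
    rel = (np.asarray(target_px, float) - np.asarray(reference_px, float)) * scale_mm_per_px
    return rel - rel.mean()                      # zero-mean displacement history (mm)

def dominant_frequency(displacement, fps):
    """Simplified stand-in for the OMA step: take the dominant spectral
    peak of the displacement signal as the identified natural frequency."""
    spectrum = np.abs(np.fft.rfft(displacement))
    freqs = np.fft.rfftfreq(len(displacement), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
```

A single spectral peak does not yield the damping ratio; recovering it requires a full OMA technique of the kind the authors apply in post-processing.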

Methodology for Vehicle Trajectory Detection Using Long Distance Image Tracking (원거리 차량 추적 감지 방법)

  • Oh, Ju-Taek;Min, Joon-Young;Heo, Byung-Do
    • International Journal of Highway Engineering
    • /
    • v.10 no.2
    • /
    • pp.159-166
    • /
    • 2008
  • Video image processing systems (VIPS) offer numerous benefits to transportation models and applications, owing to their ability to monitor traffic in real time. VIPS based on a wide-area detection algorithm provide traffic parameters such as flow and velocity as well as occupancy and density. However, most current commercial VIPS use a tripwire detection algorithm that examines image intensity changes in the detection regions to indicate vehicle presence and passage; that is, they do not identify individual vehicles as unique targets. If VIPS are developed to track individual vehicles and thus trace vehicle trajectories, many existing transportation models will benefit from more detailed information about individual vehicles. Furthermore, the additional information obtained from vehicle trajectories will improve incident detection by identifying lane-change maneuvers and acceleration/deceleration patterns. Unlike human vision, however, VIPS cameras have difficulty recognizing vehicle movements over a detection zone longer than 100 meters; over such a distance, camera operators need to zoom in to recognize objects, so vehicle tracking with a single camera is limited to detection zones under 100 m. This paper develops a methodology capable of monitoring individual vehicle trajectories based on image processing. To improve traffic flow surveillance, a long-distance tracking algorithm covering more than 200 m is developed using multiple closed-circuit television (CCTV) cameras. The algorithm is capable of recognizing individual vehicle maneuvers and increases the effectiveness of incident detection.
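
The abstract leaves the multi-camera handoff unspecified; the sketch below shows one plausible way to stitch per-camera vehicle tracks into a single long-distance trajectory by matching a track that leaves one CCTV zone with one that enters the next shortly afterwards. The `Track` fields and the time-gap and distance thresholds are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Track:
    vehicle_id: int
    times: list      # frame timestamps (s)
    positions: list  # longitudinal positions along the roadway (m)

def stitch_tracks(upstream, downstream, max_gap_s=2.0, max_jump_m=15.0):
    """Hand a trajectory over from one CCTV zone to the next by matching a
    track that exits the upstream zone with the earliest track that enters
    the downstream zone at a consistent position shortly afterwards."""
    stitched = []
    for u in upstream:
        best, best_dt = None, None
        for d in downstream:
            dt = d.times[0] - u.times[-1]
            dx = d.positions[0] - u.positions[-1]
            if 0.0 <= dt <= max_gap_s and 0.0 <= dx <= max_jump_m:
                if best_dt is None or dt < best_dt:
                    best, best_dt = d, dt
        if best is not None:
            stitched.append(Track(u.vehicle_id,
                                  u.times + best.times,
                                  u.positions + best.positions))
    return stitched
```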

Face Tracking Method based on Neural Oscillatory Network Using Color Information (컬러 정보를 이용한 신경 진동망 기반 얼굴추적 방법)

  • Hwang, Yong-Won;Oh, Sang-Rok;You, Bum-Jae;Lee, Ji-Yong;Park, Mig-Non;Jeong, Mun-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.48 no.2
    • /
    • pp.40-46
    • /
    • 2011
  • This paper proposes a new algorithm and a real-time face detection and tracking system based on neural oscillators, which can be applied to access control and user authentication systems. We study a way to track faces using a neural oscillatory network that imitates the information-handling ability of the biological neural networks of humans and animals and the dynamic behavior of a single neuron. The proposed system can broadly be broken into two stages. The first stage is face extraction, in which real-time 24-bit RGB color video is acquired with an inexpensive webcam; the LEGION (Locally Excitatory Globally Inhibitory Oscillator Network) algorithm is used to extract the face region prior to tracking. The second stage tracks the face by finding the leader neuron that has the greatest connection strength among the neighboring neurons of the extracted face region. With the proposed method, essential requirements of face tracking, such as stability and the scale problem, can be addressed.
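
As a rough illustration of the second stage described above, the sketch below picks a "leader" cell in the extracted face region as the pixel with the largest summed connection strength to its four neighbours, with connection strength defined here by colour similarity. This is only a guess at the spirit of the method; the LEGION oscillator dynamics and the paper's actual connection weights are not reproduced, and `sigma` is an assumed parameter.

```python
import numpy as np

def leader_cell(face_mask, chroma, sigma=10.0):
    """Pick the leader: the pixel in the face region whose colour is most
    similar to its 4-neighbours, i.e. with the largest summed connection
    strength.  face_mask: HxW bool, chroma: HxWxC float colour array."""
    h, w = face_mask.shape
    best_score, best_pos = -1.0, None
    ys, xs = np.nonzero(face_mask)
    for y, x in zip(ys, xs):
        score = 0.0
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and face_mask[ny, nx]:
                diff = np.linalg.norm(chroma[y, x] - chroma[ny, nx])
                score += np.exp(-(diff ** 2) / (2 * sigma ** 2))  # connection strength
        if score > best_score:
            best_score, best_pos = score, (y, x)
    return best_pos
```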

Object Tracking and Elimination Using LOD Edge Maps Generated from Modified Canny Edge Maps (수정된 캐니 에지 맵으로부터 만들어진 LOD 에지 맵을 이용한 물체 추적 및 소거)

  • Park, Ji-Hun;Jang, Yung-Dae;Lee, Dong-Hun;Lee, Jong-Kwan;Ham, Mi-Ok
    • The KIPS Transactions:PartB
    • /
    • v.14B no.3 s.113
    • /
    • pp.171-182
    • /
    • 2007
  • We propose a simple method for tracking a non-parameterized object contour in a single video stream with a moving camera and a changing background, and then present a method to eliminate the tracked object by replacing it with background obtained from other frames. First, we track the object using LOD (level-of-detail) Canny edge maps; we then generate a background for each image frame and replace the tracked object in a scene with a background image from another frame in which that region is not occluded by the tracked object. Our tracking method is based on LOD modified Canny edge maps and graph-based routing operations on those maps. More edge pixels are obtained as we move down the LOD hierarchy. Accurate tracking is achieved by reducing the effect of irrelevant edges, selecting the stronger edge pixels and thereby relying on the current frame's edge pixels as much as possible. The background scene for the first frame is determined from the camera motion between two image frames, and subsequent background scenes are computed from the previous ones. The computed background scenes are used to eliminate the tracked object: an approximated background is generated for the first frame, and backgrounds for subsequent frames are derived from the first-frame background or previous frame images, again based on the computed camera motion. Our experimental results show that the method works well for moderate camera movement and small changes in object shape.
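
A small sketch of how nested LOD Canny edge maps could be built with OpenCV: each level lowers the hysteresis thresholds so weaker edges appear, and levels are OR-ed so the hierarchy is nested. The threshold schedule (`base_low`, `base_high`, `step`) is an assumption; the paper's modified Canny detector itself is not reproduced here.

```python
import cv2

def lod_canny_edge_maps(gray, levels=4, base_low=200, base_high=400, step=0.6):
    """Build a nested hierarchy of edge maps from strong to weak edges."""
    maps = []
    low, high = float(base_low), float(base_high)
    for _ in range(levels):
        edges = cv2.Canny(gray, low, high)
        if maps:                                  # keep the hierarchy nested
            edges = cv2.bitwise_or(edges, maps[-1])
        maps.append(edges)
        low, high = low * step, high * step       # admit weaker edges next level
    return maps                                   # maps[0] = strongest edges only
```

A contour tracker can then route along the coarsest map and fall back to finer levels only where the contour is broken, which matches the graph-based routing idea in the abstract.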

Computationally Efficient Video Object Segmentation using SOM-Based Hierarchical Clustering (SOM 기반의 계층적 군집 방법을 이용한 계산 효율적 비디오 객체 분할)

  • Jung Chan-Ho;Kim Gyeong-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.4 s.310
    • /
    • pp.74-86
    • /
    • 2006
  • This paper proposes a robust and computationally efficient algorithm for automatic video object segmentation. To implement spatio-temporal segmentation, which aims at an efficient combination of motion segmentation and color segmentation, an SOM-based hierarchical clustering method is employed in which the segmentation process is regarded as clustering of feature vectors. As a result, the high computational complexity required to obtain accurate segmentation in conventional video object segmentation methods and the performance degradation caused by noise are significantly reduced. A measure of motion vector reliability employing an MRF-based MAP estimation scheme is introduced to minimize the influence of motion estimation errors. In addition, a noise elimination scheme based on the motion reliability histogram and a clustering validity index for automatically identifying the number of objects in the scene are applied. A cross-projection method for effective object tracking and a dynamic memory for maintaining temporal coherency are introduced as well. A set of experiments conducted over several video sequences demonstrates the efficiency of the proposed algorithm in terms of computational complexity, its robustness to noise, and its higher segmentation accuracy.
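
The sketch below shows the clustering idea in its simplest form: a small self-organizing map trained on per-pixel feature vectors (for example colour concatenated with motion), whose node assignments give an initial segmentation. The grid size, learning-rate and neighbourhood schedules are illustrative assumptions; the paper's hierarchical merging, MRF-based reliability weighting, and validity index are not shown.

```python
import numpy as np

def train_som(features, grid=(4, 4), iters=2000, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny SOM on (N, D) feature vectors and return node weights
    plus the best-matching-node label of every feature vector."""
    features = np.asarray(features, float)
    rng = np.random.default_rng(seed)
    n_nodes = grid[0] * grid[1]
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    weights = features[rng.integers(0, len(features), n_nodes)].copy()
    for t in range(iters):
        x = features[rng.integers(0, len(features))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))       # best matching unit
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[:, None]          # neighbourhood kernel
        weights += lr * h * (x - weights)
    labels = ((features[:, None, :] - weights[None]) ** 2).sum(-1).argmin(1)
    return weights, labels
```

The node prototypes can then be merged hierarchically (e.g. by agglomerative clustering) into the final object regions, which is the role of the hierarchical stage described in the abstract.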

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that includes both 3D head motion tracking and facial expression control. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is detected efficiently from the video frame using a non-parametric HT skin color model and template matching. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning, we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video.
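
A compact sketch of the RBF step mentioned at the end of the abstract: radial basis functions are fitted to the control-point displacements and then evaluated at the surrounding non-feature vertices to deform them. The Gaussian kernel and its width `sigma` are assumptions; the paper does not specify the basis function used.

```python
import numpy as np

def rbf_deform(control_pts, control_disp, free_pts, sigma=0.05):
    """Move free (non-feature) vertices by interpolating the displacements
    of the control points with Gaussian radial basis functions."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    K = kernel(control_pts, control_pts)
    # small ridge term keeps the solve stable if control points nearly coincide
    weights = np.linalg.solve(K + 1e-8 * np.eye(len(control_pts)), control_disp)
    return free_pts + kernel(free_pts, control_pts) @ weights
```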

Terrain Geometry from Monocular Image Sequences

  • McKenzie, Alexander;Vendrovsky, Eugene;Noh, Jun-Yong
    • Journal of Computing Science and Engineering
    • /
    • v.2 no.1
    • /
    • pp.98-108
    • /
    • 2008
  • Terrain reconstruction from images is an ill-posed, yet commonly desired, Structure from Motion task when compositing visual effects into live-action photography. These surfaces are required for choreography of a scene, casting physically accurate shadows of CG elements, and occlusions. We present a novel framework for generating the geometry of landscapes from extremely noisy point cloud datasets obtained via limited-resolution techniques, particularly optical flow based vision algorithms applied to live-action video plates. Our contribution is a new statistical approach to removing erroneous tracks ('outliers') by employing a combination of well-established techniques, including Gaussian Mixture Models (GMMs) for robust parameter estimation and Radial Basis Functions (RBFs) for scattered data interpolation, to exploit the natural constraints of this problem. Our algorithm offsets the tremendously laborious task of modeling these landscapes by hand, automatically generating a visually consistent, camera-position-dependent, thin-shell surface mesh within seconds for a typical tracking shot.
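
A sketch of the outlier-removal-plus-interpolation pipeline using off-the-shelf tools: a Gaussian mixture scores the noisy 3D tracks, the lowest-likelihood points are discarded, and an RBF interpolator fits a height field over the survivors. The component count, keep fraction, kernel choice, and the use of scikit-learn/SciPy are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.interpolate import RBFInterpolator

def terrain_from_noisy_points(points_xyz, keep_fraction=0.9, grid_res=100):
    """Fit a GMM to the 3D tracks, drop the lowest-likelihood points as
    outliers, then fit an RBF height field z = f(x, y) over a regular grid."""
    points_xyz = np.asarray(points_xyz, float)
    gmm = GaussianMixture(n_components=5, random_state=0).fit(points_xyz)
    scores = gmm.score_samples(points_xyz)                    # log-likelihood per point
    keep = scores >= np.quantile(scores, 1.0 - keep_fraction)
    xy, z = points_xyz[keep, :2], points_xyz[keep, 2]
    surface = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=1.0)
    gx, gy = np.meshgrid(
        np.linspace(xy[:, 0].min(), xy[:, 0].max(), grid_res),
        np.linspace(xy[:, 1].min(), xy[:, 1].max(), grid_res))
    heights = surface(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    return gx, gy, heights
```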

Graph-based Object Detection and Tracking in H.264/AVC bitstream for Surveillance Video (H.264/AVC 비트스트림을 활용한 감시 비디오 내의 그래프 기반 객체 검출 및 추적)

  • Houari, Sabirin;Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2010.11a
    • /
    • pp.100-103
    • /
    • 2010
  • In this paper we propose a method for detecting moving objects in an H.264/AVC bitstream by representing the 4×4 block partition units as nodes of a graph. By constructing a hierarchical graph that takes into account the relations between nodes and the spatio-temporal relations between graphs across frames, we are able to track small objects, distinguish two occluding objects, and identify objects that alternately move and stop.
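
A simplified sketch of the graph construction: 4×4 blocks whose motion-vector magnitude (decoded from the bitstream) exceeds a threshold become nodes, 4-neighbouring nodes are connected, and each connected component is reported as a candidate object. The thresholds and the flood-fill labelling are assumptions; the paper's hierarchical graph and temporal matching across frames are not shown.

```python
import numpy as np
from collections import deque

def detect_block_objects(mv_mag, min_blocks=4, thresh=0.5):
    """mv_mag: 2D array of motion-vector magnitudes on the 4x4 block grid.
    Returns a list of objects, each a list of (row, col) block coordinates."""
    active = np.asarray(mv_mag) > thresh
    h, w = active.shape
    labels = np.zeros((h, w), dtype=int)
    objects, next_label = [], 1
    for y in range(h):
        for x in range(w):
            if active[y, x] and labels[y, x] == 0:
                queue, blocks = deque([(y, x)]), []
                labels[y, x] = next_label
                while queue:                       # BFS over 4-connected blocks
                    cy, cx = queue.popleft()
                    blocks.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and active[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                if len(blocks) >= min_blocks:      # ignore isolated noisy blocks
                    objects.append(blocks)
                next_label += 1
    return objects
```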

Detecting and Tracking Gaseous Objects in Video Data (화상 데이터에서 기체의 감지 및 추적)

  • 김원하
    • Proceedings of the IEEK Conference
    • /
    • 2000.11d
    • /
    • pp.21-24
    • /
    • 2000
  • Image processing techniques for gases have a very wide range of applications, and their industrial and economic importance grows accordingly; for example, imaging systems used to automatically monitor factory pollution or forest fires directly require image processing techniques for gases. Gases, however, differ from solids in the following respects. First, while the boundary of a solid object is relatively clear, a gas has a non-uniform density distribution even within a single plume, so the density along its boundary is irregular and the boundary of the gas is difficult to define precisely. Second, images for gas analysis generally contain a great deal of noise and have low resolution relative to the size of the gas. Gas images are therefore difficult to analyze and process at the pixel level. Because of these characteristics, image processing techniques developed for solids cannot be applied directly to gases. This study develops a system that detects and tracks gases in video data.

Object-of-Interest Oriented Multi-Angle Video Acquisition Technique Using Object-Tracking based on Multi-PTZ Camera Position Control (객체 추적 연동 다중 PTZ 카메라 제어 기반 객체 중심 다각도 영상 획득 기술)

  • Kim, Y.K.;Um, G.M.;Cho, K.S.
    • Electronics and Telecommunications Trends
    • /
    • v.31 no.3
    • /
    • pp.1-8
    • /
    • 2016
  • With the recent emergence of personalized media, interest in and support for personalized broadcasting services in the broadcasting and telecommunications field are spreading rapidly. In particular, demand for differentiated video, such as multi-angle views of a person of interest captured with multiple cameras, continues to grow. In step with advances in the related technologies for generating object-centered video and with changing demand, this article reviews the relevant technologies and research trends and introduces an object-of-interest oriented multi-angle video acquisition technique, under development at ETRI, based on controlling multiple Pan-Tilt-Zoom (PTZ) cameras in conjunction with object tracking.
