• Title/Summary/Keyword: video tracking

Traffic-Accident-in-Alley Prevention System by Object Tracking in Video Surveillance Camera Streaming Video (비디오 감시 카메라 내 사물 추적을 통한 골목길 교차로 사고 예방 시스템)

  • Kim, Hyungjin;Kim, Juneyoung;Park, Juhong;Shim, Jaeuk;Ko, Seokju;Kim, Jeongseok
    • Proceedings of the Korea Information Processing Society Conference / 2020.05a / pp.536-539 / 2020
  • Alleys are narrow and lack any separation between roadway and sidewalk, so they have many blind spots and pedestrian movement is hard to predict, causing frequent traffic accidents. This paper therefore proposes a system that prevents accidents in alleys by using AI to track objects in video. The system uses object detection and tracking to identify and follow pedestrians and vehicles, and raises an accident-prevention alarm when two or more objects approach an intersection at the same time. Applied to the CCTV cameras already installed nationwide, the system is expected to be deployable across the country without additional cost or installation time.
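
The alarm condition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the track format, zone radius, and prediction horizon are all assumed, and a constant-velocity motion model is used for simplicity.

```python
def time_to_zone(pos, vel, zone_center, zone_radius):
    """Frames until an object moving at constant velocity enters the zone;
    returns None if it is not approaching."""
    dx = zone_center[0] - pos[0]
    dy = zone_center[1] - pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0:
        return 0.0
    # Approach speed: velocity component toward the zone center
    approach = (vel[0] * dx + vel[1] * dy) / dist
    if approach <= 0:
        return None
    return max(dist - zone_radius, 0.0) / approach

def collision_alarm(tracks, zone_center, zone_radius=30.0, horizon=25.0):
    """True if two or more tracked objects (dicts with 'pos' and 'vel')
    are predicted to enter the intersection zone within `horizon` frames."""
    arriving = [t for t in tracks
                if (eta := time_to_zone(t["pos"], t["vel"], zone_center, zone_radius)) is not None
                and eta <= horizon]
    return len(arriving) >= 2
```

For example, a pedestrian and a car both heading toward the same corner trigger the alarm, while a single approaching object does not.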

Specified Object Tracking in an Environment of Multiple Moving Objects using Particle Filter (파티클 필터를 이용한 다중 객체의 움직임 환경에서 특정 객체의 움직임 추적)

  • Kim, Hyung-Bok;Ko, Kwang-Eun;Kang, Jin-Shig;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.1 / pp.106-111 / 2011
  • Video-based detection and tracking of moving objects is widely used in real-time monitoring systems and videoconferencing. Because object motion tracking also extends to human-computer and human-robot interfaces, moving-object tracking is a key enabling technology. If a specified object can be tracked in an environment of multiple moving objects, a variety of applications become possible. In this paper, we introduce specified-object motion tracking using a particle filter. Experimental results show that the particle filter achieves good performance both in single-object motion tracking and in tracking a specified object among multiple moving objects.
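
A bootstrap particle filter of the kind the paper applies can be sketched as below. This is a generic single-target filter over 2D position observations, assuming a random-walk motion model and a Gaussian observation likelihood; the particle count and noise parameters are illustrative.

```python
import numpy as np

def particle_filter_track(observations, n_particles=500, motion_std=2.0,
                          obs_std=5.0, seed=0):
    """Estimate a 2D trajectory from noisy observations with a bootstrap
    particle filter: predict, weight, estimate, resample."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], motion_std, size=(n_particles, 2))
    estimates = []
    for z in observations:
        # Predict: propagate particles with a random-walk motion model
        particles += rng.normal(0.0, motion_std, size=particles.shape)
        # Weight: Gaussian likelihood of the observation given each particle
        d2 = np.sum((particles - z) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / obs_std ** 2)
        w /= w.sum()
        # Estimate: weighted mean of the particle cloud
        estimates.append(particles.T @ w)
        # Resample: multinomial resampling to avoid weight degeneracy
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```

Tracking a *specified* object among several, as in the paper, would additionally condition the weights on that object's appearance model.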

Implementation of an improved real-time object tracking algorithm using brightness feature information and color information of object

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information / v.22 no.5 / pp.21-28 / 2017
  • As digital imaging technology has developed and spread, digital imaging systems are used for many purposes across society. Real-time object tracking from digital image data is a core technology in fields such as security and robotics. Among existing object tracking techniques, CamShift tracks an object using its color information. Image data from infrared cameras is now widely used to meet the varied demands on digital imaging equipment, but the existing CamShift method cannot track objects in image data that lacks color information. Our proposed tracking algorithm analyzes color when valid color information exists in the digital image data; otherwise, it generates brightness feature information and tracks the object with that instead. The brightness feature information is generated from the width-to-height ratio of regions segmented by brightness. Experimental results show that our algorithm tracks objects in real time not only in ordinary image data containing color information but also in image data captured by an infrared camera.
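
The color-versus-brightness dispatch at the heart of this approach can be sketched as follows. The saturation test and thresholds are assumptions for illustration, not the paper's criterion; the idea is simply that infrared footage collapses to gray (R ≈ G ≈ B), so its saturation is near zero.

```python
import numpy as np

def has_valid_color(frame_rgb, sat_thresh=0.15, frac=0.05):
    """Heuristic (assumed): treat the frame as color if at least `frac` of
    its pixels have HSV-style saturation above `sat_thresh`."""
    f = frame_rgb.astype(np.float64) / 255.0
    mx = f.max(axis=2)
    mn = f.min(axis=2)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-9), 0.0)
    return np.mean(sat > sat_thresh) >= frac

def choose_tracker(frame_rgb):
    """Dispatch in the spirit of the paper: color-histogram (CamShift-style)
    tracking when color is informative, brightness-feature tracking otherwise."""
    return "color" if has_valid_color(frame_rgb) else "brightness"
```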

Fast Inter Mode Decision Algorithm Based on Macroblock Tracking in H.264/AVC Video

  • Kim, Byung-Gyu;Kim, Jong-Ho;Cho, Chang-Sik
    • ETRI Journal / v.29 no.6 / pp.736-744 / 2007
  • We propose a fast macroblock (MB) mode prediction and decision algorithm based on temporal correlation for P-slices in the H.264/AVC video standard. There are eight block types for temporal decorrelation, including SKIP mode, under rate-distortion (RD) optimization, and this scheme requires exhaustive computation (search) in the coding procedure. To overcome this problem, we suggest a thresholding method for fast inter mode decision that uses an MB tracking scheme to find the most correlated block and its RD cost, enabling early termination of the inter mode determination. We propose a two-step inter mode candidate selection method based on statistical analysis. In the first step, a mode is selected from the mode information of the co-located MB in the previous frame, and an adaptive threshold is applied using the RD cost of the most correlated MB. In the second step, additional candidate modes are considered to determine the best mode when the initial candidate does not satisfy the designed thresholding rule. Comparative analysis shows a speed-up factor of up to 70.59% over the full mode search method, with a negligible bit increase and minimal loss of image quality.
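
The two-step decision can be sketched as below. The threshold factor `alpha` and the `rd_cost_of` accessor are illustrative stand-ins; the paper derives its adaptive threshold statistically rather than with a fixed multiplier.

```python
def fast_inter_mode_decision(colocated_mode, colocated_rd_cost,
                             rd_cost_of, all_modes, alpha=1.1):
    """Sketch of a two-step inter mode decision.
    Step 1: evaluate the co-located MB's mode; stop early if its RD cost
    falls under a threshold derived from the correlated MB's RD cost.
    Step 2: otherwise, evaluate the remaining candidate modes."""
    threshold = alpha * colocated_rd_cost
    best_mode = colocated_mode
    best_cost = rd_cost_of(colocated_mode)
    if best_cost <= threshold:
        return best_mode  # early stop: full mode search skipped
    for mode in all_modes:
        if mode == colocated_mode:
            continue
        cost = rd_cost_of(mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```

The speed-up comes from the early-stop branch: when temporal correlation is strong, only one RD cost is ever computed.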

A Study on Unmanned Image Tracking System based on Smart Phone (스마트폰 기반의 무인 영상 추적 시스템 연구)

  • Ahn, Byeong-tae
    • Journal of Convergence for Information Technology / v.9 no.3 / pp.30-35 / 2019
  • Unattended recording systems based on smartphone image tracking are developing rapidly. Among existing products, systems that automatically track and rotate toward the subject using an infrared signal are very expensive for ordinary users. This paper therefore proposes a mobile unattended recording system that lets anyone with a smartphone record automatically. The system consists of a commercial mobile camera, a servomotor that pans the camera from side to side, a microcontroller that controls the motor, and a commercial wireless Bluetooth earset for audio input. We designed a system that enables unattended recording through image tracking on a smartphone.
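
The servo side of such a system amounts to steering the camera toward the tracked subject. A minimal proportional-control sketch, with assumed gains and frame geometry (not from the paper):

```python
def pan_correction(obj_cx, frame_width, max_step_deg=3.0, deadband=0.05):
    """Servo pan step per frame (illustrative): rotate toward the tracked
    object's horizontal offset from frame center; ignore tiny offsets so
    the motor does not jitter around a centered subject."""
    offset = (obj_cx - frame_width / 2) / (frame_width / 2)  # in [-1, 1]
    if abs(offset) < deadband:
        return 0.0
    return max(-max_step_deg, min(max_step_deg, offset * max_step_deg))
```

The microcontroller would receive this angle step over Bluetooth or serial and apply it each frame.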

Face Detection based on Video Sequence (비디오 영상 기반의 얼굴 검색)

  • Ahn, Hyo-Chang;Rhee, Sang-Burm
    • Journal of the Semiconductor & Display Technology / v.7 no.3 / pp.45-49 / 2008
  • Face detection and tracking on video sequences has advanced with the commercialization of teleconferencing, telecommunication, face-recognition front ends for surveillance systems, and video-phone applications. Complex backgrounds, color distortion from luminance effects, and lighting conditions all hinder face recognition systems. In this paper, we study face recognition on video sequences. We extract the facial area using the luminance and chrominance components of the $YC_bC_r$ color space. After extracting the facial area, we apply a face recognition system built on our improved algorithm, which combines PCA and LDA. The proposed algorithm achieves a 92% recognition rate, more accurate than previous methods based on PCA alone or on a combination of PCA and LDA.
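
The chrominance-based facial-area extraction can be sketched as a skin mask in YCbCr. The Cb/Cr ranges below are commonly cited skin-tone values, not necessarily the paper's exact thresholds, and the RGB-to-YCbCr coefficients are the standard BT.601 ones.

```python
import numpy as np

def skin_mask_ycbcr(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Candidate facial-region mask via chrominance thresholds in YCbCr.
    Luminance (Y) is deliberately ignored so the mask is robust to lighting."""
    r, g, b = [rgb[..., i].astype(np.float64) for i in range(3)]
    # BT.601 chrominance components
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
```

Connected regions of this mask would then be passed to the PCA/LDA recognizer.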

Real-time Stabilization Method for Video acquired by Unmanned Aerial Vehicle (무인 항공기 촬영 동영상을 위한 실시간 안정화 기법)

  • Cho, Hyun-Tae;Bae, Hyo-Chul;Kim, Min-Uk;Yoon, Kyoungro
    • Journal of the Semiconductor & Display Technology / v.13 no.1 / pp.27-33 / 2014
  • Video from an unmanned aerial vehicle (UAV) is affected by the natural environment, particularly wind, because the UAV is lightweight, and the UAV's shaking makes the video shake. The objective of this paper is to produce a stabilized video by removing the shake from video acquired by a UAV. The stabilizer estimates camera motion by computing optical flow between successive frames. The estimated camera motion contains both intended movement and unintended shaking; the unintended movement is eliminated by smoothing. Experimental results show that the proposed method performs almost as well as other offline stabilizers. However, estimating camera motion, i.e., computing optical flow, is a bottleneck for real-time stabilization. To solve this problem, we parallelize the stabilizer, producing stabilized video at an average of 30 frames per second. The proposed method can be used for video acquired by a UAV, for shaky video from non-professional users, and in any other field that requires object tracking or accurate image analysis and representation.
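
The smoothing step that separates intended from unintended motion can be sketched as below. The per-frame motions would come from optical flow; here a simple moving average stands in for the paper's smoothing, with an assumed window size.

```python
import numpy as np

def stabilize_trajectory(frame_motions, window=15):
    """Integrate per-frame camera motion into a trajectory, smooth it with a
    moving average (a stand-in for the paper's smoothing), and return the
    per-frame offset that removes the unintended shake."""
    traj = np.cumsum(np.asarray(frame_motions, dtype=np.float64), axis=0)
    kernel = np.ones(window) / window
    pad = window // 2
    # Edge-pad so the average is defined at the start and end of the clip
    padded = np.pad(traj, ((pad, pad), (0, 0)), mode="edge")
    smooth = np.column_stack([np.convolve(padded[:, i], kernel, mode="valid")
                              for i in range(traj.shape[1])])
    smooth = smooth[:len(traj)]
    return smooth - traj  # warp each frame by this offset
```

A steady pan yields near-zero corrections in the interior of the clip, while high-frequency shake produces offsets that cancel it.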

A Generation Method of Spatially Encoded Video Data for Geographic Information Systems

  • Joo, In-Hak;Hwang, Tae-Hyun;Choi, Kyoung-Ho;Jang, Byung-Tae
    • Proceedings of the KSRS Conference / 2003.11a / pp.801-803 / 2003
  • In this paper, we present a method for generating and providing spatially encoded video data that can be used effectively by GIS applications. We collect video data with a mobile mapping system called 4S-Van, which is equipped with GPS, INS, a CCD camera, and a DVR system. Information about each spatial object appearing in the video, such as the region it occupies in each frame, its attribute value, and its geo-coordinate, is generated and encoded. We suggest methods that generate such data for each frame semi-automatically. We adopt the standard MPEG-7 metadata format to represent the spatially encoded video data so that it can be used by GIS applications in general. The spatial and attribute information encoded in each video frame enables visual browsing between map and video. The generated video data can be provided to various GIS applications where both location and visual data are important.
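
The per-frame record being encoded can be illustrated as below. The element and attribute names are hypothetical, chosen only to show what a frame-level entry carries; the paper's actual representation follows the MPEG-7 schema.

```python
import xml.etree.ElementTree as ET

def encode_frame_object(frame_no, name, bbox, lon, lat):
    """Build an illustrative per-frame record for one spatial object: the
    region it occupies in the frame plus its geo-coordinate."""
    obj = ET.Element("SpatialObject", frame=str(frame_no), name=name)
    x, y, w, h = bbox
    ET.SubElement(obj, "Region", x=str(x), y=str(y), w=str(w), h=str(h))
    ET.SubElement(obj, "GeoCoordinate", lon=str(lon), lat=str(lat))
    return ET.tostring(obj, encoding="unicode")
```

Linking each frame's regions to geo-coordinates like this is what makes map-to-video visual browsing possible.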

Detection of Face Direction by Using Inter-Frame Difference

  • Jang, Bongseog;Bae, Sang-Hyun
    • Journal of Integrative Natural Science / v.9 no.2 / pp.155-160 / 2016
  • Applying image processing to education, we photograph the learner, detect expression and movement from the video, and develop a system that estimates the learner's degree of concentration. For a single learner, the system estimates concentration from the learner's gaze direction and eye state. With multiple learners, the concentration of every learner in the classroom must be measured, but assigning one camera per learner is inefficient. In this paper, the position of the face region is estimated from video of learners in class using inter-frame differences along the motion direction, and we propose a system that detects face direction through face-part detection by template matching. From the inter-frame difference in the first image of the video, frontal face detection is performed with the Viola-Jones method. The direction of motion arising in the face region is then estimated from the displacement, and the face region is tracked. Face parts are detected within the tracked region. Finally, the face direction is estimated from the results of face tracking and face-part detection.
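
The inter-frame-difference step can be sketched as below: threshold the absolute difference between frames, take the centroid of the changed pixels, and label the displacement between successive centroids. The threshold and the coarse direction labels are illustrative, not the paper's values.

```python
import numpy as np

def diff_centroid(prev_gray, curr_gray, thresh=25):
    """Centroid of the inter-frame difference region; None if nothing moved."""
    diff = np.abs(curr_gray.astype(np.int32) - prev_gray.astype(np.int32))
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def motion_direction(c_prev, c_curr):
    """Coarse left/right/up/down label from two successive centroids
    (image coordinates: y grows downward)."""
    dx, dy = c_curr[0] - c_prev[0], c_curr[1] - c_prev[1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

In the full system, this motion estimate constrains where template matching searches for the face parts.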

Depth Images-based Human Detection, Tracking and Activity Recognition Using Spatiotemporal Features and Modified HMM

  • Kamal, Shaharyar;Jalal, Ahmad;Kim, Daijin
    • Journal of Electrical Engineering and Technology / v.11 no.6 / pp.1857-1862 / 2016
  • Human activity recognition using depth information is an emerging and challenging technology in computer vision, attracting considerable attention from practical applications such as smart home/office systems, personal health care, and 3D video games. This paper presents a novel framework for 3D human body detection, tracking, and recognition from depth video sequences using spatiotemporal features and a modified HMM. To detect the human silhouette, raw depth data is examined with respect to spatial continuity and the constraints of human motion, while frame differencing is used to track human movements. The feature extraction mechanism combines spatial depth shape features with temporal joint features to improve classification performance; the fused features are used to recognize different activities with the modified hidden Markov model (M-HMM). The proposed approach is evaluated on two challenging depth video datasets. Moreover, the system handles rotation and missing body parts, which is a major contribution to human activity recognition.
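
The HMM decoding at the recognition stage can be sketched with a standard Viterbi decoder over discrete observations; the paper's modified HMM differs in its training and feature handling, so this shows only the decoding idea, with illustrative model parameters.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state (activity) sequence for a discrete
    observation sequence, via the standard Viterbi recursion.
    pi: initial probs (N,), A: transitions (N,N), B: emissions (N,M)."""
    T, N = len(obs), len(pi)
    logp = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    logp[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        for j in range(N):
            scores = logp[t - 1] + np.log(A[:, j])
            back[t, j] = int(np.argmax(scores))
            logp[t, j] = scores[back[t, j]] + np.log(B[j, obs[t]])
    # Backtrack from the best final state
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

In the paper's setting, the observation symbols would come from the fused depth-shape and joint features rather than raw indices.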