• Title/Summary/Keyword: video object extraction


A Study on Unmanned Image Tracking System based on Smart Phone (스마트폰 기반의 무인 영상 추적 시스템 연구)

  • Ahn, Byeong-tae
    • Journal of Convergence for Information Technology
    • /
    • v.9 no.3
    • /
    • pp.30-35
    • /
    • 2019
  • Unattended recording systems based on smartphone image tracking are developing rapidly. Among existing products, systems that automatically track and rotate toward the object to be photographed using an infrared signal are too expensive for general users. Therefore, this paper proposes a mobile unattended recording system that enables anyone with a smartphone to record automatically. The system consists of a commercial mobile camera, a servomotor that pans the camera from side to side, a microcontroller that controls the motor, and a commercial wireless Bluetooth earset for video audio input. In this paper, we designed a system that enables unattended recording through image tracking using a smartphone.
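The core control loop of such a rig can be sketched as follows. This is an assumed illustration, not the paper's firmware: `FRAME_WIDTH`, `DEAD_ZONE`, and `DEG_PER_PIXEL` are hypothetical parameters mapping the tracked object's horizontal offset from the frame centre to a servo pan angle.

```python
FRAME_WIDTH = 640       # assumed camera resolution (pixels)
DEAD_ZONE = 20          # ignore tiny offsets to avoid servo jitter
DEG_PER_PIXEL = 0.05    # assumed servo calibration

def pan_command(object_x, current_angle):
    """Return a new pan angle (degrees, clamped to 0..180) that turns
    the camera toward the tracked object's horizontal position."""
    offset = object_x - FRAME_WIDTH / 2
    if abs(offset) <= DEAD_ZONE:        # object near centre: hold position
        return current_angle
    angle = current_angle + offset * DEG_PER_PIXEL
    return max(0.0, min(180.0, angle))  # respect servo travel limits

print(pan_command(640, 90.0))   # object at right edge -> pan right to 106.0
print(pan_command(330, 90.0))   # inside dead zone -> hold at 90.0
```

In a real system the microcontroller would receive `object_x` from the smartphone's tracker over Bluetooth and drive the servomotor to the returned angle each frame.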

Cast-Shadow Elimination of Vehicle Objects Using Backpropagation Neural Network (신경망을 이용한 차량 객체의 그림자 제거)

  • Jeong, Sung-Hwan;Lee, Jun-Whoan
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.7 no.1
    • /
    • pp.32-41
    • /
    • 2008
  • Moving-object tracking in vision-based video surveillance uses the difference between a GMM (Gaussian Mixture Model) background and the current image. When tracking an object using a binary image produced by thresholding, objects are merged not by object information but by their cast shadows. This paper proposes a method that eliminates cast shadows using a backpropagation neural network. The network is trained on feature values abstracted from object regions and cast-shadow regions in ten training videos. The shadow-elimination method distinguishes shadows in the binary image, and its performance is better (by 16.2%, 38.2%, 28.1%, 22.3%, and 44.4%) than existing cast-shadow elimination algorithms (SNP, SP, DNM1, DNM2, CNCC).

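The difference-based foreground step the abstract builds on can be shown with a minimal sketch. This is a toy example with assumed names and a flat background, not the paper's GMM model; it illustrates how thresholding the background difference turns a cast shadow into spurious foreground, the very problem the neural network is trained to remove.

```python
THRESHOLD = 30  # intensity difference above which a pixel counts as foreground

def foreground_mask(background, frame, threshold=THRESHOLD):
    """Binary mask: 1 where |frame - background| exceeds the threshold."""
    return [
        [1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

background = [[100] * 4 for _ in range(4)]  # flat grey background model
frame = [row[:] for row in background]
frame[1][1] = 200                           # a bright moving object
frame[2][2] = 60                            # a darker cast shadow

mask = foreground_mask(background, frame)
print(mask)
```

The shadow pixel at (2, 2) ends up in the mask alongside the object, merging the two regions; a real pipeline would replace the flat background with a per-pixel GMM and apply the paper's network to reject such shadow pixels.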

3D GIS system using the CCTV camera (CCTV 카메라를 활용한 3D 지리정보시스템 구현)

  • Kim, Ik-Soon;Shin, Hyun-Shik
    • The Journal of the Korea Institute of Electronic Communication Sciences
    • /
    • v.6 no.4
    • /
    • pp.559-565
    • /
    • 2011
  • In this paper, we propose a geographic information system that builds geographic information effectively by creating 3D topography after extracting surrounding terrain information from CCTV camera video. We also propose a method for tracking objects recognized in the camera video, and a method for recognizing whether the terrain has changed according to whether object tracking succeeds. Applied in the field, this method not only builds geographic information close to the actual terrain but can also be used for security, surveillance, and tracking systems.

Feature Extraction for Scene Change Detection in an MPEG Video Sequence (장면 전환 검출을 위한 MPEG 비디오 시퀀스로부터 특징 요소 추출)

  • 최윤석;곽영경;고성제
    • Journal of Broadcast Engineering
    • /
    • v.3 no.2
    • /
    • pp.127-137
    • /
    • 1998
  • In this paper, we propose a method for extracting edge information from MPEG video sequences for the detection of scene changes. In the proposed method, five significant AC coefficients of each MPEG block are utilized to obtain edge images from the MPEG video. The AC edge images obtained by the proposed scheme not only provide better object-boundary information than conventional methods using only DC coefficients, but also reduce the boundary effects produced by DC-based methods. Since the AC edge image contains the content information of each frame, it can be effectively utilized for content-based video query as well as for the detection of scene changes. Experimental results show that the proposed method can be effectively utilized for the detection of scene changes.

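The general idea of detecting a scene change by comparing a per-frame feature can be sketched simply. This is an illustrative simplification, not the paper's AC-coefficient method: the helper below uses a crude edge count per frame (horizontal intensity jumps) and flags frames where that feature jumps sharply.

```python
def edge_count(frame):
    """Count horizontal intensity jumps above 20 as a crude edge measure."""
    return sum(
        1
        for row in frame
        for a, b in zip(row, row[1:])
        if abs(a - b) > 20
    )

def scene_changes(frames, threshold=3):
    """Indices where the edge feature changes sharply between frames."""
    feats = [edge_count(f) for f in frames]
    return [i for i in range(1, len(feats)) if abs(feats[i] - feats[i - 1]) > threshold]

flat = [[50, 50, 50, 50]] * 3   # featureless scene: no edges
busy = [[0, 100, 0, 100]] * 3   # high-contrast scene: many edges
frames = [flat, flat, busy, busy]
print(scene_changes(frames))    # -> [2]: the cut falls between frames 1 and 2
```

The paper's contribution is obtaining a richer edge feature directly from the compressed MPEG stream's AC coefficients, avoiding full decoding; the comparison step stays conceptually the same.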

Content-Based Image Retrieval Algorithm Using HAQ Algorithm and Moment-Based Feature (HAQ 알고리즘과 Moment 기반 특징을 이용한 내용 기반 영상 검색 알고리즘)

  • 김대일;강대성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.4
    • /
    • pp.113-120
    • /
    • 2004
  • In this paper, we propose an efficient feature extraction and image retrieval algorithm for content-based retrieval. First, we extract the object using a Gaussian edge detector from the input image, which is a key frame of an MPEG video, and extract the object features: a location feature, a distributed-dimension feature, and invariant moment features. Next, we extract a characteristic color feature using the proposed HAQ (Histogram Analysis and Quantization) algorithm. Finally, we perform retrieval over the four features in sequence with the proposed matching method for a query image, which is a shot frame other than the key frames of the MPEG video. The purpose of this paper is to propose a novel content-based image retrieval algorithm that retrieves the key frame in the shot boundary of an MPEG video belonging to the scene requested by the user. The experimental results show efficient retrieval for 836 sample images in 10 music videos using the proposed algorithm.
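The "invariant moments" idea behind the object features can be illustrated with a minimal sketch: central moments of a binary object mask do not change when the object translates. The helper below is an assumed, reduced illustration (the paper uses the full set of invariant moments), computing raw moments, the centroid, and one second-order central moment.

```python
def moments(mask):
    """Raw moments m00, m10, m01, centroid, and central moment mu20
    of a binary object mask (list of rows of 0/1 values)."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            m00 += v            # object area
            m10 += x * v
            m01 += y * v
    cx, cy = m10 / m00, m01 / m00   # centroid
    mu20 = sum(v * (x - cx) ** 2
               for y, row in enumerate(mask) for x, v in enumerate(row))
    return m00, (cx, cy), mu20

obj = [[0, 1, 1, 0],        # a 2x2 object near the top-left
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
shifted = [[0, 0, 0, 0],    # the same object translated right and down
           [0, 0, 1, 1],
           [0, 0, 1, 1]]

print(moments(obj)[2], moments(shifted)[2])  # central moment is unchanged
```

Because the central moment is measured relative to the centroid, it is identical for both masks, which is what makes such features useful for matching objects across frames.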

The Design of Object-of-Interest Extraction System Utilizing Metadata Filtering from Moving Object (이동객체의 메타데이터 필터링을 이용한 관심객체 추출 시스템 설계)

  • Kim, Taewoo;Kim, Hyungheon;Kim, Pyeongkang
    • Journal of KIISE
    • /
    • v.43 no.12
    • /
    • pp.1351-1355
    • /
    • 2016
  • The number of CCTV units is increasing rapidly every year, and the demand for intelligent video-analytics systems to monitor them effectively is also increasing continuously. Existing analytics engines, however, require considerable computing resources and cannot provide sufficient detection accuracy. For this paper, a lightweight analytics engine was employed to analyze video, and we collected metadata from the engine, such as an object's location, size, and dwell time. A further data analysis was then performed to filter out the target of interest; as a result, we verified that a lightweight engine combined with heavier analysis of its metadata can reject an enormous amount of environmental noise and extract the target of interest effectively. The result of this research is expected to contribute to the development of active intelligent-monitoring systems in the future.
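A minimal sketch of the metadata-filtering idea: given per-object records from a lightweight engine, keep only targets of interest and reject short-lived environmental noise. The field names and thresholds below are assumptions for illustration, not values from the paper.

```python
MIN_DWELL_SEC = 5.0   # noise such as swaying leaves rarely persists this long
MIN_AREA_PX = 400     # discard tiny detections

def objects_of_interest(metadata):
    """Filter metadata records down to plausible targets of interest."""
    return [
        m for m in metadata
        if m["dwell_sec"] >= MIN_DWELL_SEC and m["area_px"] >= MIN_AREA_PX
    ]

metadata = [
    {"id": 1, "area_px": 1200, "dwell_sec": 9.5},   # loitering person: keep
    {"id": 2, "area_px": 150,  "dwell_sec": 12.0},  # flickering noise blob
    {"id": 3, "area_px": 2000, "dwell_sec": 0.4},   # momentary flash
]
print([m["id"] for m in objects_of_interest(metadata)])   # -> [1]
```

The point of the architecture is that this second-stage analysis runs on compact metadata rather than raw video, so it can afford much heavier logic per object.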

METHOD FOR REAL-TIME EDGE EXTRACTION USING HARDWARE OF LATERAL INHIBITION TYPE OF SPATIAL FILTER

  • Serikawa, Seiichi;Morita, Kazuhiro;Shimomura, Teruo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1995.10a
    • /
    • pp.236-239
    • /
    • 1995
  • It is useful to simulate the human visual function for the purpose of image processing. In this study, hardware for a spatial filter with lateral-inhibition sensitivity is realized by combining optical parts with electronic circuits. A diffused film with Gaussian characteristics is prepared as the spatial filter. An object's image is convoluted with the spatial filter. From the difference of the convoluted images, the zero-cross position is detected at video rate, and the edge of the object is extracted in real time by this equipment. The resolution of the edge changes with the standard deviation of the diffused film. In addition, a directional edge can be extracted selectively when a spatial filter with directional selectivity is used instead of the Gaussian spatial filter.

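The zero-crossing step can be sketched in one dimension: convolve a signal with two Gaussian-like filters of different widths, subtract the results (a difference of Gaussians, playing the role of lateral inhibition), and mark sign changes as edges. The kernels and the toy step signal below are assumptions, not the paper's optical parameters.

```python
def convolve(signal, kernel):
    """1-D convolution with edge clamping at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), len(signal) - 1)  # clamp index
            acc += w * signal[j]
        out.append(acc)
    return out

def zero_cross_edges(signal, narrow, wide):
    """Edge positions: sign changes of the difference of two convolutions."""
    dog = [a - b for a, b in zip(convolve(signal, narrow), convolve(signal, wide))]
    return [i for i in range(1, len(dog)) if dog[i - 1] * dog[i] < 0]

narrow = [0.25, 0.5, 0.25]            # narrow smoothing kernel
wide = [0.2, 0.2, 0.2, 0.2, 0.2]      # wider smoothing kernel
step = [0.0] * 6 + [1.0] * 6          # a step edge at index 6
print(zero_cross_edges(step, narrow, wide))   # -> [6]
```

The paper performs the two convolutions optically with diffused films and detects the zero crossing electronically, which is what makes video-rate operation possible.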

Object tracking algorithm through RGB-D sensor in indoor environment (실내 환경에서 RGB-D 센서를 통한 객체 추적 알고리즘 제안)

  • Park, Jung-Tak;Lee, Sol;Park, Byung-Seo;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.248-249
    • /
    • 2022
  • In this paper, we propose a method for classifying and tracking objects based on information about multiple users obtained using RGB-D cameras. 3D information and color information are acquired through the RGB-D camera, and information about each user is stored. We propose a user classification and location-tracking algorithm over the entire image that calculates the similarity between users in the current frame and the previous frame from each user's location and appearance information.

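The frame-to-frame matching step can be sketched as follows. This is a hedged illustration with assumed data structures and weights, not the paper's algorithm: each user carries a 3D position and a mean colour, similarity combines both, and each current detection is matched greedily to the most similar previous user.

```python
def similarity(a, b, w_pos=1.0, w_color=0.01):
    """Combined similarity: negative weighted sum of 3D distance
    and colour difference (higher means more similar)."""
    pos = sum((p - q) ** 2 for p, q in zip(a["pos"], b["pos"])) ** 0.5
    color = sum(abs(p - q) for p, q in zip(a["color"], b["color"]))
    return -(w_pos * pos + w_color * color)

def match_users(prev, curr):
    """Map each current detection to the most similar previous user id."""
    return {
        c["id"]: max(prev, key=lambda p: similarity(p, c))["id"]
        for c in curr
    }

prev = [
    {"id": "A", "pos": (0.0, 0.0, 2.0), "color": (200, 50, 50)},
    {"id": "B", "pos": (1.5, 0.0, 3.0), "color": (40, 60, 220)},
]
curr = [
    {"id": 0, "pos": (0.1, 0.0, 2.1), "color": (195, 55, 48)},
    {"id": 1, "pos": (1.4, 0.1, 3.0), "color": (45, 58, 215)},
]
print(match_users(prev, curr))   # -> {0: 'A', 1: 'B'}
```

Combining depth-based position with appearance is what lets the tracker keep user identities stable even when two people come close together in the 2D image.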

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.529-535
    • /
    • 2020
  • As a main technology of the 4th industrial revolution, immersive 360-degree video contents are drawing attention. The worldwide market size of immersive 360-degree video contents is projected to increase from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video contents are distributed through illegal distribution networks such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright-filtering technology to prevent such illegal distribution. The technical difficulties in dealing with immersive 360-degree videos arise because they require ultra-high-quality pictures and contain images captured by two or more cameras merged into one image, which creates distortion regions. There are also technical limitations, such as an increase in the amount of feature point data due to the ultra-high definition and the processing-speed requirement. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper suggests a feature point extraction and identification technique that selects object-identification areas excluding regions with severe distortion, recognizes objects in those areas using deep learning, and extracts feature points using the identified object information. Compared with the previously proposed method of extracting feature points from the stitching area of immersive contents, the proposed technique shows an excellent performance gain.

Frontal Face Video Analysis for Detecting Fatigue States

  • Cha, Simyeong;Ha, Jongwoo;Yoon, Soungwoong;Ahn, Chang-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.6
    • /
    • pp.43-52
    • /
    • 2022
  • We can sense when somebody feels fatigued, which means that fatigue can be detected by sensing human biometric signals. Most research on assessing fatigue focuses on diagnosing disease-level fatigue. In this study, we adapt quantitative analysis approaches to estimating qualitative data and propose video-analysis models for measuring fatigue state. The three proposed deep-learning-based classification models selectively include stages of video analysis (object detection, feature extraction, and time-series frame analysis) to evaluate each stage's effect on classifying the fatigue state. Using frontal face videos collected from various fatigue situations, our CNN model shows 0.67 accuracy, empirically demonstrating that video-analysis models can meaningfully detect fatigue state. We also suggest how to adapt the models when training and validating video data for classifying fatigue.