• Title/Summary/Keyword: video detection system

Search results: 583

Preprocessing System for Real-time and High Compression MPEG-4 Video Coding (실시간 고압축 MPEG-4 비디오 코딩을 위한 전처리 시스템)

  • 김준기;홍성수;이호석
    • Journal of KIISE:Computing Practices and Letters / v.9 no.5 / pp.509-520 / 2003
  • In this paper, we develop a new and robust algorithm for practical and highly efficient MPEG-4 video coding. The MPEG-4 video group developed the video Verification Model (VM), which evolved over time through core experiments. During standardization, MS-FDAM was developed as a reference MPEG-4 coding system based on the ISO/IEC 14496-2 standard document and the VM. However, MS-FDAM has drawbacks in practical MPEG-4 coding and lacks VOP extraction functionality. In this research, we implemented a preprocessing system for real-time input and VOP extraction for practical content-based MPEG-4 video coding, and also implemented motion detection to achieve a high compression ratio of 180:1.
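The abstract does not spell out the motion-detection step; the sketch below shows one way frame-differencing motion detection could gate a real-time camera feed so that nearly static frames are dropped before encoding, which is one route to a very high compression ratio. The OpenCV usage, the change threshold, and the skip policy are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

MOTION_FRACTION = 0.01  # fraction of changed pixels that counts as "motion" (assumed)

def has_motion(prev_gray, curr_gray, pixel_thresh=25):
    """Return True if enough pixels changed between consecutive frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    return np.count_nonzero(diff > pixel_thresh) / diff.size > MOTION_FRACTION

cap = cv2.VideoCapture(0)                     # real-time camera input
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if has_motion(prev_gray, gray):
        pass                                  # hand this frame to the MPEG-4 encoder
    # frames without motion are simply skipped
    prev_gray = gray
cap.release()
```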

Video object segmentation and frame preprocessing for real-time and high compression MPEG-4 encoding (실시간 고압축 MPEG-4 부호화를 위한 비디오 객체 분할과 프레임 전처리)

  • 김준기;이호석
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.2C / pp.147-161 / 2003
  • Video object segmentation is one of the core technologies for content-based real-time MPEG-4 encoding systems. To meet real-time requirements, the segmentation algorithm should be fast and accurate, but almost all existing algorithms are computationally intensive and unsuitable for real-time applications. The MPEG-4 VM (Verification Model) provides basic algorithms for MPEG-4 encoding, but it has many limitations with respect to practical software development, real-time camera input, and compression efficiency. In this paper, we implemented a preprocessing system for real-time camera input and VOP extraction for content-based video coding, and also implemented motion detection to achieve a 180:1 compression ratio for real-time, high-compression MPEG-4 encoding.
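The segmentation algorithm itself is not detailed in the abstract; as a rough stand-in, the sketch below extracts a binary foreground mask (an approximation of a VOP) with OpenCV's MOG2 background subtractor and some morphological clean-up. The subtractor parameters and kernel size are assumptions.

```python
import cv2
import numpy as np

# Rough VOP (Video Object Plane) mask via background subtraction -- a stand-in,
# not the paper's segmentation algorithm.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

def extract_vop_mask(frame_bgr):
    """Return a cleaned binary foreground mask approximating the VOP."""
    fg = subtractor.apply(frame_bgr)
    fg = cv2.medianBlur(fg, 5)                                # suppress speckle noise
    _, mask = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (value 127)
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)    # fill small holes
```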

Design of Video Pre-processing Algorithm for High-speed Processing of Maritime Object Detection System and Deep Learning based Integrated System (해상 객체 검출 고속 처리를 위한 영상 전처리 알고리즘 설계와 딥러닝 기반의 통합 시스템)

  • Song, Hyun-hak;Lee, Hyo-chan;Lee, Sung-ju;Jeon, Ho-seok;Im, Tae-ho
    • Journal of Internet Computing and Services / v.21 no.4 / pp.117-126 / 2020
  • A maritime object detection system is an intelligent assistance system for maritime autonomous surface ships (MASS). It automatically detects floating debris that poses a collision risk and that a captain used to check with the naked eye, at a level of accuracy similar to the human check. In the past, objects around a ship were detected with information gathered from radar or sonar devices. With the development of artificial intelligence technology, intelligent CCTV installed on a ship is now used to detect various types of floating debris along the sailing route. If video processing slows down because of the various requirements and complexity of MASS, however, neither safety nor smooth service support can be guaranteed. To address this issue, this study investigated minimizing the computation volume for video data and increasing the processing speed for maritime object detection. Unlike previous studies that used the Hough transform to find the horizon and secure regions of interest for candidate objects, this study proposes a new method that optimizes a binarization algorithm and finds regions whose locations are similar to those of actual objects in order to improve speed. A maritime object detection system was implemented based on a deep learning CNN to demonstrate the usefulness of the proposed method and assess its performance. The proposed algorithm ran 4 times faster than the previous method while maintaining its detection accuracy.
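The paper's optimized binarization is not given in the abstract; the sketch below only illustrates the general idea of replacing a Hough-based horizon search with a simple binarize-and-contour pass that yields candidate regions for a CNN detector. Otsu thresholding and the minimum-area filter are assumptions.

```python
import cv2

def candidate_regions(frame_bgr, min_area=100):
    """Binarize a maritime frame and return bounding boxes of candidate objects."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)   # reduce wave and noise texture
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # each (x, y, w, h) box would then be passed to the CNN-based detector
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```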

A Study on Taekwondo Training System using Hybrid Sensing Technique

  • Kwon, Doo Young
    • Journal of Korea Multimedia Society / v.16 no.12 / pp.1439-1445 / 2013
  • We present a Taekwondo training system using a hybrid sensing technique that combines a body sensor and a visual sensor. Using a body sensor (accelerometer), rotational and inertial motion data are captured, which are important for Taekwondo motion detection and evaluation. A visual sensor (camera) captures and records sequential images of the performance. A motion chunk is proposed to structure Taekwondo motions and to design an HMM (Hidden Markov Model) for motion recognition. Trainees can evaluate their trial motions numerically by computing the distance to the standard motion performed by a trainer. For the motion training video, the real-time video images captured by the camera are overlaid with visualized body sensor data so that users can see how the rotational and inertial motion data flow.
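The abstract mentions scoring a trial motion by its distance to the trainer's standard motion; the sketch below uses a plain resample-and-average Euclidean distance over accelerometer sequences, which simplifies away the paper's motion-chunk and HMM machinery.

```python
import numpy as np

def motion_distance(trial, standard):
    """Score a trial motion against the trainer's standard motion.

    Both inputs are (N, 3) accelerometer sequences.  They are resampled to a
    common length and compared with a mean Euclidean distance -- a
    simplification of the paper's motion-chunk/HMM evaluation.
    """
    trial = np.asarray(trial, dtype=float)
    standard = np.asarray(standard, dtype=float)
    n = min(len(trial), len(standard))
    idx_t = np.linspace(0, len(trial) - 1, n).astype(int)
    idx_s = np.linspace(0, len(standard) - 1, n).astype(int)
    return float(np.mean(np.linalg.norm(trial[idx_t] - standard[idx_s], axis=1)))
```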

Study for Drowsy Driving Detection & Prevention System (졸음운전 감지 및 방지 시스템 연구)

  • Ahn, Byeong-tae
    • Journal of Convergence for Information Technology / v.8 no.3 / pp.193-198 / 2018
  • Recently, casualties from automobile traffic accidents have been increasing rapidly, and serious accidents involving severe injury or death are rising faster than ordinary ones. More than 70% of major accidents involve drowsy driving. Therefore, in this paper, we studied a drowsiness prevention system to prevent large-scale traffic disasters. We propose a real-time blink-recognition method for the drowsy driving detection system together with drowsiness recognition based on rising carbon dioxide levels. The drowsy driving detection system applies existing image-based detection and deep learning, and the carbon dioxide detection was developed on an IoT basis. The drowsiness prevention system using both of these techniques improved accuracy compared with existing products.
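The abstract combines an image-based cue (blink recognition) with a CO2 cue; the fragment below shows how the two signals might be fused with simple thresholds. The threshold values and the sensor interface are hypothetical.

```python
# Hypothetical fusion of the two cues described in the abstract.
CO2_ALERT_PPM = 2000            # cabin CO2 level treated as drowsiness-inducing (assumed)
EYE_CLOSED_RATIO_ALERT = 0.3    # fraction of recent frames with eyes closed (assumed)

def is_drowsy(eye_closed_ratio: float, co2_ppm: float) -> bool:
    """Raise a drowsiness alert if either cue crosses its threshold."""
    return eye_closed_ratio > EYE_CLOSED_RATIO_ALERT or co2_ppm > CO2_ALERT_PPM

# e.g. 40% of the last N frames showed closed eyes, CO2 at 1500 ppm -> alert
print(is_drowsy(0.4, 1500))
```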

Video Evaluation System Using Scene Change Detection and User Profile (장면전환검출과 사용자 프로파일을 이용한 비디오 학습 평가 시스템)

  • Shin, Seong-Yoon
    • The KIPS Transactions:PartD / v.11D no.1 / pp.95-104 / 2004
  • This paper proposes an efficient remote video evaluation system that matches the personalized characteristics of students using information filtering based on user profiles. To construct questions in video form, a key frame extraction method based on coordinate, size, and color information is proposed, and question-making intervals are extracted using gray-level histogram differences and a time window. In addition, a question-making method that combines a category-based system with a keyword-based system is used for efficient evaluation. As a result, students can enhance their achievement by supplementing their weak areas while preserving their areas of interest.
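The abstract names gray-level histogram differences and a time window for locating scene change (question-making) intervals; a minimal sketch along those lines follows. The histogram size, difference threshold, and window length are assumptions.

```python
import cv2
import numpy as np

HIST_DIFF_THRESHOLD = 0.4   # illustrative; the paper's threshold is not given
MIN_GAP_FRAMES = 30         # "time window": ignore cuts too close to the last one

def gray_hist(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256]).ravel()
    return hist / hist.sum()

def detect_cuts(video_path):
    """Return frame indices where the gray-level histogram difference spikes."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, last_cut, i = [], None, -MIN_GAP_FRAMES, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = gray_hist(frame)
        if prev_hist is not None:
            diff = 0.5 * np.abs(hist - prev_hist).sum()   # L1 distance in [0, 1]
            if diff > HIST_DIFF_THRESHOLD and i - last_cut >= MIN_GAP_FRAMES:
                cuts.append(i)
                last_cut = i
        prev_hist, i = hist, i + 1
    cap.release()
    return cuts
```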

Context-Awareness Cat Behavior Captioning System (반려묘의 상황인지형 행동 캡셔닝 시스템)

  • Chae, Heechan;Choi, Yoona;Lee, Jonguk;Park, Daihee;Chung, Yongwha
    • Journal of Korea Multimedia Society / v.24 no.1 / pp.21-29 / 2021
  • With the recent increase in the number of households raising pets, various engineering studies on pets have been under way. The ultimate goal of this study is to automatically generate context-aware captions that express the implicit intentions behind a cat's behavior and sounds, by embedding already mature pet behavior detection technology as a basic element of video captioning research. As a pilot study toward this goal, this paper proposes a high-level captioning system that uses the optical-flow, RGB, and sound information of cat videos. The proposed system uses video datasets collected in an actual breeding environment to extract feature vectors from video and sound, and then, through a hierarchical LSTM encoder and decoder, learns to identify the cat's behavior and its implicit intentions and to generate context-aware captions. The performance of the proposed system was verified experimentally using video data collected in an environment where actual cats are raised.
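As a rough illustration of an LSTM encoder-decoder captioner over video and sound features, a compact PyTorch skeleton is sketched below. Feature dimensions, hidden size, and vocabulary size are assumptions, and the paper's hierarchical structure is collapsed into a single encoder LSTM and a single decoder LSTM.

```python
import torch
import torch.nn as nn

class CatCaptioner(nn.Module):
    """Skeleton of an encoder-decoder captioner over per-segment features.

    Feature sizes, hidden size, and vocabulary are assumptions; the paper's
    hierarchical design is reduced here to one encoder LSTM over fused
    RGB/optical-flow/sound features and one decoder LSTM over words.
    """
    def __init__(self, feat_dim=1024, hidden=512, vocab_size=3000, embed=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, embed)
        self.decoder = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, feats, captions):
        # feats: (B, T_segments, feat_dim); captions: (B, T_words) token ids
        _, (h, c) = self.encoder(feats)          # summarize the video segments
        emb = self.embed(captions)               # teacher forcing during training
        dec_out, _ = self.decoder(emb, (h, c))   # condition the decoder on the video state
        return self.out(dec_out)                 # (B, T_words, vocab_size) logits
```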

Scalable Big Data Pipeline for Video Stream Analytics Over Commodity Hardware

  • Ayub, Umer;Ahsan, Syed M.;Qureshi, Shavez M.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.4 / pp.1146-1165 / 2022
  • A huge amount of data in the form of videos and images is being produced owing to advancements in sensor technology. The use of low-performance commodity hardware coupled with resource-heavy image processing and analysis approaches to infer and extract actionable insights from this data poses a bottleneck for timely decision making. Current GPU-assisted and cloud-based video analysis architectures give significant performance gains, but their use is constrained by financial considerations and extremely complex architectural details. In this paper, we propose a data pipeline system that uses open-source tools such as Apache Spark, Kafka, and OpenCV running on commodity hardware for video stream processing and image processing in a distributed environment. Experimental results show that our proposed approach eliminates the need for GPU-based hardware and cloud computing infrastructure to achieve efficient video stream processing for face detection with increased throughput, scalability, and better performance.
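A minimal sketch of one worker in such a pipeline is given below: JPEG-encoded frames are consumed from a Kafka topic with kafka-python and faces are detected with OpenCV's Haar cascade. The topic name and frame serialization are assumptions, and the Spark layer that distributes this work across commodity nodes is omitted.

```python
import cv2
import numpy as np
from kafka import KafkaConsumer  # kafka-python

# One worker of the pipeline: frames arrive JPEG-encoded on an assumed topic,
# faces are detected with OpenCV's bundled Haar cascade.
consumer = KafkaConsumer("video-frames", bootstrap_servers="localhost:9092")
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

for msg in consumer:
    frame = cv2.imdecode(np.frombuffer(msg.value, dtype=np.uint8), cv2.IMREAD_COLOR)
    if frame is None:
        continue
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"{len(faces)} face(s) in frame")  # downstream: publish results to another topic
```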

Image based Fire Detection using Convolutional Neural Network (CNN을 활용한 영상 기반의 화재 감지)

  • Kim, Young-Jin;Kim, Eun-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.9 / pp.1649-1656 / 2016
  • The performance of existing sensor-based fire detection systems is limited by factors in the environment surrounding the sensor. A number of image-based fire detection systems have been introduced to solve this problem, but such systems can raise false alarms for objects similar in appearance to fire because their algorithms directly define the characteristics of a flame. In addition, fire detection systems that use movement between video frames cannot operate as intended in environments where the network is unstable. In this paper, we propose an image-based fire detection method using a CNN (Convolutional Neural Network). In this method, we first extract fire candidate regions using color information from the input video frames and then detect fire using a trained CNN. We also show that the performance is significantly improved compared with the detection and miss rates reported in previous studies.
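The color model used to pick fire candidate regions is not specified in the abstract; the sketch below uses a common illustrative RGB rule and leaves the trained CNN as a caller-supplied callable.

```python
import cv2
import numpy as np

def fire_candidate_mask(frame_bgr):
    """Rough color rule for fire-like pixels (bright red dominating green and blue).

    This RGB heuristic is an illustrative stand-in, not the paper's color model.
    """
    rgb = frame_bgr.astype(np.int16)
    b, g, r = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (r > 180) & (r > g) & (g > b)
    return mask.astype(np.uint8) * 255

def detect_fire(frame_bgr, cnn_classifier):
    """Crop candidate regions and pass them to a trained CNN (hypothetical callable)."""
    mask = fire_candidate_mask(frame_bgr)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                      # skip tiny regions
            continue
        patch = cv2.resize(frame_bgr[y:y + h, x:x + w], (64, 64))
        if cnn_classifier(patch):            # assumed to return True for fire
            return True
    return False
```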

High-definition Video Enhancement Using Color Constancy Based on Scene Unit and Modified Histogram Equalization (장면단위 색채 항상성과 변형 히스토그램 평활화 방법을 이용한 고선명 동영상의 화질 향상 방법)

  • Cho, Dong-Chan;Kang, Hyung-Sub;Kim, Whoi-Yul
    • Journal of Broadcast Engineering / v.15 no.3 / pp.368-379 / 2010
  • As high-definition video is broadly used in various systems such as broadcasting and digital camcorders, a proper method for improving the quality of high-definition video is needed. In this paper, we propose an efficient method to improve the color and contrast of high-definition video. To apply the image enhancement method to high-definition video, a scaled-down version of the video is used, and the enhancement parameters are computed from this small-size video. To enhance color, we apply a color constancy method: the video is first separated into scenes by a cut detection method, and color constancy is then applied to each scene with the same parameters. To improve contrast, we combine the original image with its histogram-equalized image, with the weight calculated from the sorted histogram bins. Finally, the performance of the proposed method is demonstrated in the experiment section.
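A compressed sketch of the two ideas in the abstract follows: per-channel color gains estimated on a scaled-down copy of the frame (gray-world is used here as a stand-in for the paper's color constancy method) and a blend of the original with a histogram-equalized version. The fixed blend weight replaces the paper's histogram-bin-sorting rule and is purely illustrative.

```python
import cv2
import numpy as np

def grayworld_gains(frame_bgr, scale=0.25):
    """Estimate per-channel gains on a scaled-down copy (cheap even for HD frames)."""
    small = cv2.resize(frame_bgr, None, fx=scale, fy=scale)
    means = small.reshape(-1, 3).mean(axis=0)          # mean B, G, R
    return means.mean() / means                        # gray-world assumption

def enhance_frame(frame_bgr, gains, contrast_weight=0.5):
    """Apply scene-level color gains, then blend with a histogram-equalized copy."""
    balanced = np.clip(frame_bgr.astype(np.float32) * gains, 0, 255).astype(np.uint8)
    ycrcb = cv2.cvtColor(balanced, cv2.COLOR_BGR2YCrCb)
    ycrcb[..., 0] = cv2.equalizeHist(ycrcb[..., 0])    # equalize luminance only
    equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return cv2.addWeighted(balanced, 1 - contrast_weight, equalized, contrast_weight, 0)
```

In the paper the same gains are reused for every frame of a scene, which is why the cut detection step precedes the color constancy step; the sketch leaves that scene loop to the caller.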