• Title/Summary/Keyword: fast video acquisition

Single Pixel Compressive Camera for Fast Video Acquisition using Spatial Cluster Regularization

  • Peng, Yang;Liu, Yu;Lu, Kuiyan;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5481-5495 / 2018
  • Single pixel imaging technology has been developing for years; however, video acquisition with a single pixel camera is still not a well-studied problem in computer vision. This work proposes a new scheme for a single pixel camera to acquire video data and a new regularization for a robust signal recovery algorithm. The method establishes a single pixel video compressive sensing scheme that reconstructs video clips in the spatial domain by recovering the difference between consecutive frames. Unlike traditional acquisition methods that work in a transform domain, the proposed scheme reconstructs the video frames directly in the spatial domain. In addition, a new regularization called spatial cluster is introduced to improve signal reconstruction; it derives from the observation that the nonzero coefficients tend to be clustered in the difference of consecutive video frames. We implement an experimental platform to illustrate the effectiveness of the proposed algorithm, and extensive experiments show good performance of video acquisition and frame reconstruction on the single pixel camera.
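The frame-difference recovery described above can be sketched with a generic compressive sensing solver. The example below substitutes plain L1 sparsity (ISTA) for the paper's spatial-cluster regularization, and the random ±1 measurement matrix, frame size, and parameters are illustrative assumptions rather than the authors' settings.

```python
# Sketch: recover a video frame from single-pixel measurements by reconstructing
# the sparse difference to the previous frame. Plain L1/ISTA stands in for the
# paper's spatial-cluster regularization; all sizes and parameters are illustrative.
import numpy as np

def ista_l1(y, Phi, lam=0.05, n_iter=200):
    """Recover a sparse vector d from y = Phi @ d via iterative soft thresholding."""
    d = np.zeros(Phi.shape[1])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ d - y)                  # gradient of 0.5*||Phi d - y||^2
        z = d - step * grad
        d = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold
    return d

rng = np.random.default_rng(0)
n, m = 32 * 32, 300                                   # pixels per frame, measurements per frame
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

prev_frame = rng.random(n)                            # assume the previous frame is known
true_diff = np.zeros(n)
true_diff[rng.choice(n, 40, replace=False)] = rng.standard_normal(40)  # sparse change
curr_frame = prev_frame + true_diff

y = Phi @ curr_frame - Phi @ prev_frame               # measurements of the frame difference
diff_hat = ista_l1(y, Phi)
recovered_frame = prev_frame + diff_hat               # reconstruction directly in the spatial domain
print("difference recovery error:", np.linalg.norm(diff_hat - true_diff))
```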

A Study on the Background Image Updating Algorithm for Detecting Fast Moving Objects (고속 객체 탐지를 위한 배경화면 갱신 알고리즘에 관한 연구)

  • Park, Jong-beom
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.15 no.4 / pp.153-160 / 2016
  • Intelligent CCTV technology continues to advance together with its image acquisition devices. When detecting comparatively fast moving objects, the most important task is to effectively reduce the load of updating the background image so that the update can run in real time. However, general-purpose computers that extract texture as a feature have limited applicability, mostly because of the processing load. In this paper, an algorithm for real-time background image updating is proposed for applications such as detecting fast moving objects, for example a driving car, in video of at least 30 frames per second, and its performance is analyzed by extracting object regions from real input images.
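As a point of reference for this kind of real-time background maintenance, the sketch below uses a standard running-average background model with frame differencing. It is a generic baseline, not the paper's algorithm; the video path, threshold, and learning rate are assumptions, and OpenCV is assumed to be available.

```python
# Generic running-average background update with frame differencing.
# Not the paper's algorithm; file name, threshold, and learning rate are illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")      # hypothetical 30 fps input video
alpha = 0.02                               # background learning rate

ok, frame = cap.read()
background = frame.astype(np.float32)      # initialize background from the first frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    diff = cv2.absdiff(frame.astype(np.float32), background)
    mask = (diff.max(axis=2) > 30).astype(np.uint8) * 255        # moving-object mask
    # Update the background only where the scene looks static, so that fast
    # moving objects are not absorbed into the background model.
    cv2.accumulateWeighted(frame.astype(np.float32), background, alpha,
                           mask=cv2.bitwise_not(mask))
    cv2.imshow("moving objects", mask)
    if cv2.waitKey(1) == 27:               # Esc quits
        break
```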

Velocity Measurement of Fast Moving Object for Traffic Information Acquisition (트래픽 정보 취득을 위한 고속이동물체 속도 측정)

  • Lee Jooshin
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11C / pp.1527-1540 / 2004
  • In this paper, velocity measurement of fast moving objects for traffic information acquisition using line sampling of the image is proposed. Two sample lines are set on the road; a car is detected by a difference-image method that compares the time-varying hue data of the image while the car passes the two sample lines with the hue data of a reference image, and the car's velocity is computed from the number of video frames occupied between the two sample lines. The car is identified by the hue of the detected vehicle at the first and the second sample line, respectively. To examine the validity of the proposed algorithm, identification and velocity measurement of driving cars were evaluated. The results show that cars are identified by the hue data measured as they pass the two sample lines, and the measured velocity differs by less than 3% from an X-band speed gun.
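The speed estimate itself reduces to simple arithmetic on the known spacing of the two sample lines and the frame count between the crossings. The helper below only illustrates that relationship; the 10 m spacing, 12-frame gap, and 30 fps rate are made-up example values.

```python
# Speed from the frame count between two road sample lines (illustrative values).

def speed_kmh(line_spacing_m, frames_between_lines, fps=30.0):
    """Vehicle speed in km/h given the frames elapsed between the two sample lines."""
    travel_time_s = frames_between_lines / fps
    return (line_spacing_m / travel_time_s) * 3.6

# Example: sample lines 10 m apart, crossings 12 frames apart at 30 fps -> 90 km/h.
print(speed_kmh(10.0, 12))
```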

Fast and Efficient Method for Fire Detection Using Image Processing

  • Celik, Turgay
    • ETRI Journal / v.32 no.6 / pp.881-890 / 2010
  • Conventional fire detection systems use physical sensors to detect fire. Chemical properties of particles in the air are acquired by sensors and are used by conventional fire detection systems to raise an alarm. However, this can also cause false alarms; for example, a person smoking in a room may trigger a typical fire alarm system. In order to reduce the false alarms of conventional fire detection systems, a computer vision-based fire detection algorithm is proposed in this paper. The proposed fire detection algorithm consists of two main parts: fire color modeling and motion detection. The algorithm can be used in parallel with conventional fire detection systems to reduce false alarms. It can also be deployed as a stand-alone system to detect fire from video frames acquired through a video acquisition device. A novel fire color model is developed in the CIE $L^*a^*b^*$ color space to identify fire pixels. The proposed fire color model is tested on ten diverse video sequences including different types of fire. The experimental results are quite encouraging in terms of correctly classifying fire pixels according to color information only. The overall fire detection system's performance is tested on a benchmark fire video database, and its performance is compared with a state-of-the-art fire detection method.
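To illustrate the color-modeling step, the snippet below implements a rough, rule-based stand-in for a fire color test in CIE $L^*a^*b^*$ space: pixels that are brighter, redder, and yellower than the frame average are marked as fire candidates. It is not the paper's fitted statistical model; the rules and the use of OpenCV for the color conversion are assumptions for illustration.

```python
# Illustrative fire-pixel test in CIE L*a*b* space (not the paper's fitted model).
import cv2
import numpy as np

def fire_candidate_mask(bgr_frame):
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB).astype(np.float32)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    Lm, am, bm = L.mean(), a.mean(), b.mean()
    # Candidate fire pixels: brighter than average, shifted toward red (a*) and
    # yellow (b*) relative to the frame, with yellow dominating red.
    mask = (L >= Lm) & (a >= am) & (b >= bm) & (b >= a)
    return mask.astype(np.uint8) * 255

frame = cv2.imread("flame_frame.png")        # hypothetical video frame
if frame is not None:
    cv2.imwrite("fire_candidates.png", fire_candidate_mask(frame))
```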

A Flow Analysis Framework for Traffic Video

  • Bai, Lu-Shuang;Xia, Ying;Lee, Sang-Chul
    • Journal of Korea Spatial Information System Society / v.11 no.2 / pp.45-53 / 2009
  • The fast progress of multimedia data acquisition technologies has enabled collecting vast amounts of video in real time. Although the information gathered from these videos can be high in both quantity and quality, the use of the collected data is typically limited to human-centric monitoring systems. In this paper, we propose a framework for analyzing long traffic videos using a series of content-based analysis tools. Our framework suggests a method to integrate these analysis tools to extract highly informative features specific to traffic video analysis. The framework provides (1) re-sampling tools for efficient and precise analysis, (2) foreground extraction methods for unbiased traffic flow analysis, (3) frame property analysis tools using a variety of frame characteristics, including brightness, entropy, Harris corners, and variance of traffic flow, and (4) a visualization tool that summarizes the entire video sequence and automatically highlights a collection of frames based on metrics defined by semi-automated or fully automated techniques. Based on the proposed framework, we developed an automated traffic flow analysis system, and in our experiments we show results from two example traffic videos taken from different monitoring angles.
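The per-frame property analysis in item (3) can be sketched directly. The loop below computes mean brightness, grayscale entropy, and a Harris corner count for every frame; the exact metrics, parameters, and file names used by the framework are not specified in the abstract, so everything here is an illustrative assumption built on OpenCV and NumPy.

```python
# Per-frame properties: brightness, grayscale entropy, Harris corner count.
# Parameters and file name are illustrative, not the framework's actual settings.
import cv2
import numpy as np

def frame_properties(gray):
    brightness = float(gray.mean())
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    harris = cv2.cornerHarris(np.float32(gray), 2, 3, 0.04)
    corner_count = int((harris > 0.01 * harris.max()).sum())
    return brightness, entropy, corner_count

cap = cv2.VideoCapture("traffic.mp4")        # hypothetical traffic video
stats = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    stats.append(frame_properties(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
# `stats` can then drive re-sampling or visualization, e.g. flagging frames whose
# brightness or corner count deviates strongly from the rest of the sequence.
```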

Performance Analysis of 3D-HEVC Video Coding (3D-HEVC 비디오 부호화 성능 분석)

  • Park, Daemin;Choi, Haechul
    • Journal of Broadcast Engineering / v.19 no.5 / pp.713-725 / 2014
  • Multi-view and 3D video technologies for next-generation video services are widely studied. These technologies can give users a realistic experience by supporting multiple views. Because acquiring and transmitting a large number of views is costly, the main challenges for multi-view and 3D video include view synthesis, video coding, and depth coding. Recently, JCT-3V (Joint Collaborative Team on 3D Video Coding Extension Development) has been developing a new standard for multi-view and 3D video. In this paper, the major tools adopted in this standard are introduced and evaluated in terms of coding efficiency and complexity. This performance analysis should be helpful for developing a fast 3D video encoder as well as new 3D video coding algorithms.
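Coding-efficiency comparisons of this kind are conventionally summarized with the Bjøntegaard delta bitrate (BD-rate). The abstract does not state which metric the authors use, so the function below is only a generic reference implementation of that common measure, with made-up rate/PSNR points for the example call.

```python
# Generic Bjontegaard delta-rate (BD-rate): average bitrate difference (%) of a
# test configuration versus an anchor at equal PSNR. Example data is made up.
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    log_r1, log_r2 = np.log(rate_anchor), np.log(rate_test)
    p1 = np.polyfit(psnr_anchor, log_r1, 3)          # cubic fit of log-rate vs PSNR
    p2 = np.polyfit(psnr_test, log_r2, 3)
    lo = max(min(psnr_anchor), min(psnr_test))       # overlapping PSNR interval
    hi = min(max(psnr_anchor), max(psnr_test))
    int1, int2 = np.polyint(p1), np.polyint(p2)
    avg1 = (np.polyval(int1, hi) - np.polyval(int1, lo)) / (hi - lo)
    avg2 = (np.polyval(int2, hi) - np.polyval(int2, lo)) / (hi - lo)
    return (np.exp(avg2 - avg1) - 1.0) * 100.0

anchor = ([2500, 1400, 800, 450], [40.1, 38.2, 36.0, 33.9])   # (kbps, dB) per QP
test   = ([2300, 1250, 700, 400], [40.2, 38.3, 36.1, 34.0])
print(f"BD-rate: {bd_rate(*anchor, *test):.2f} %")            # negative = bitrate saving
```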

Fast Generation of Digital Video Holograms Using Multiple PCs (다수의 PC를 이용한 디지털 비디오 홀로그램의 고속 생성)

  • Park, Hanhoon;Kim, Changseob;Park, Jong-Il
    • Journal of Broadcast Engineering / v.22 no.4 / pp.509-518 / 2017
  • High-resolution digital holograms can be generated quickly by using a PC cluster that is based on a server-client architecture and composed of several GPU-equipped PCs. However, the data transmission time between PCs becomes a large obstacle to fast generation of video holograms because it increases linearly with the number of frames. To resolve this increase in data transmission time, this paper proposes a multi-threading-based method. Hologram generation in each client PC consists of three processes: acquisition of light sources, CGH computation using GPUs, and transmission of the result to the server PC. Unlike the previous method, which executes these processes sequentially, the proposed method executes them in parallel using multi-threading and thus can significantly reduce the proportion of data transmission time in the total hologram generation time. Experiments confirmed that the total generation time of a high-resolution video hologram with 150 frames can be reduced by about 30%.
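The pipelining idea can be sketched with three worker threads connected by queues, so that transmitting the result of one frame overlaps the CGH computation of the next. The stage bodies below are placeholders rather than the paper's GPU kernels or network code.

```python
# Three-stage pipeline (acquire -> compute CGH -> transmit) with one thread per
# stage, so transmission of frame i overlaps computation of frame i+1.
# Stage bodies are placeholders, not the paper's GPU/CGH or socket code.
import queue
import threading

def acquire(frames, q_out):
    for f in frames:
        q_out.put(f"light sources of frame {f}")     # placeholder for real loading
    q_out.put(None)                                  # end-of-stream marker

def compute_cgh(q_in, q_out):
    while (item := q_in.get()) is not None:
        q_out.put(f"hologram from ({item})")         # placeholder for GPU CGH kernel
    q_out.put(None)

def transmit(q_in):
    while (item := q_in.get()) is not None:
        print("sending to server:", item)            # placeholder for socket send

q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
threads = [
    threading.Thread(target=acquire, args=(range(5), q1)),
    threading.Thread(target=compute_cgh, args=(q1, q2)),
    threading.Thread(target=transmit, args=(q2,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```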

Measurement of Sub-micrometer Features Based on The Topographic Contrast Using Reflection Confocal Microscopy

  • Lee SeungWoo;Kang DongKyun;Yoo HongKi;Kim TaeJoong;Gweon Dae-Gab;Lee Suk-Won;Kim Kwang-Soo
    • Journal of the Optical Society of Korea / v.9 no.1 / pp.26-31 / 2005
  • We describe the design and implementation of a video-rate reflection confocal scanning microscope (CSM) that uses an acousto-optical deflector (AOD) for the fast horizontal scan and a galvanometer mirror (GM) for the slow vertical scan. Design parameters of the optical system are determined for optimal resolution and contrast. OSLO simulations show that the performance of the CSM does not change with deflection angle and that the wavefront errors of the system are less than 0.012λ. To evaluate the performance of the designed CSM, we carry out a series of tests measuring lateral resolution, axial resolution, and real-time image acquisition. Owing to its higher axial resolution compared with conventional microscopy, the CSM can detect the surface of sub-micrometer features. We detect a 138 nm line-shaped pattern at video rate (30 frames/s), and a 10 nm axial resolution is achieved. The lateral resolution of the topographic images can be further enhanced by the differential confocal microscopy (DCM) method and computational algorithms.

Development of an intelligent edge computing device equipped with on-device AI vision model (온디바이스 AI 비전 모델이 탑재된 지능형 엣지 컴퓨팅 기기 개발)

  • Kang, Namhi
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.17-22 / 2022
  • In this paper, we design a lightweight embedded device that can support intelligent edge computing and show that the device quickly detects objects in images captured by a camera in real time. The proposed system can be applied to environments without pre-installed infrastructure, such as intelligent video surveillance for industrial sites or military areas, or video security systems mounted on autonomous vehicles such as drones. On-device AI (artificial intelligence) technology is increasingly required for the widespread deployment of intelligent vision recognition systems. Offloading computation from an image acquisition device to a nearby edge device enables fast service with fewer network and system resources than AI services performed in the cloud. In addition, it can be applied safely to various industries because it reduces the attack surface exposed to hacking and minimizes the disclosure of sensitive data.
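A typical on-device inference loop of the kind motivated here reads frames from a local camera and runs a lightweight detection model without sending data off the device. The sketch below assumes ONNX Runtime and an unspecified detector; the model file, input size, and output decoding are placeholders that depend entirely on the model actually deployed.

```python
# Hedged on-device inference loop: local camera frames through a lightweight
# ONNX detector. Model file, input size, and output format are placeholders.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("detector.onnx",              # hypothetical model file
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

cap = cv2.VideoCapture(0)                                    # on-board camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    blob = cv2.resize(frame, (320, 320)).astype(np.float32) / 255.0
    blob = blob.transpose(2, 0, 1)[np.newaxis]               # NCHW layout
    outputs = session.run(None, {input_name: blob})          # raw detections
    # ... decode `outputs` according to the chosen model's output format ...
    if cv2.waitKey(1) == 27:                                 # Esc quits
        break
```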

Haptic Media Broadcasting (촉각방송)

  • Cha, Jong-Eun;Kim, Yeong-Mi;Seo, Yong-Won;Ryu, Je-Ha
    • Broadcasting and Media Magazine / v.11 no.4 / pp.118-131 / 2006
  • With the rapid development of ultra-fast communication and digital multimedia, realistic broadcasting technology that can stimulate the five human senses beyond the conventional audio-visual service is emerging as a next-generation broadcasting technology. In this paper, we introduce a haptic broadcasting system, together with its core system and component techniques, with which viewers can 'touch and feel' objects in an audio-visual scene. The system is composed of stages for haptic media acquisition and creation and for content authoring. In haptic broadcasting, the haptic media can include 3-D geometry, dynamic properties, haptic surface properties, movement, and tactile information, enabling active touch and manipulation, passive movement following, and tactile effects. In the proposed system, active haptic exploration and manipulation of a 3-D mesh, active haptic exploration of depth video, passive kinesthetic interaction, and passive tactile interaction are provided as potential haptic interaction scenarios, and a home shopping scenario, a movie with tactile effects, and a conducting-education scenario are produced to show the feasibility of the proposed system.