• Title/Summary/Keyword: single camera


Video-based Height Measurements of Multiple Moving Objects

  • Jiang, Mingxin;Wang, Hongyu;Qiu, Tianshuang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.9 / pp.3196-3210 / 2014
  • This paper presents a novel video metrology approach based on robust tracking. From video acquired by an uncalibrated stationary camera, the foreground likelihood map is obtained using the Codebook background modeling algorithm, and multiple moving objects are tracked by a combined tracking algorithm. We then compute the vanishing line of the ground plane and the vertical vanishing point of the scene, and extract the head and feet feature points in each frame of the video sequence. Finally, we apply a single-view mensuration algorithm to each frame to obtain height measurements and fuse the multi-frame measurements using the RANSAC algorithm. Compared with other popular methods, the proposed algorithm does not require camera calibration and can track multiple moving objects even when occlusion occurs; it therefore reduces computational complexity while improving measurement accuracy. The experimental results demonstrate that our method is effective and robust to occlusion.
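
A minimal sketch of the measurement step described in this abstract: a Criminisi-style single-view metrology relation (vanishing line, vertical vanishing point, homogeneous head/feet points) followed by a simple RANSAC-style fusion of per-frame heights. The paper's exact formulation, tracker, and tolerances are not given in the abstract, so the function names and the 3 cm inlier threshold are illustrative assumptions.

```python
import numpy as np

def metric_scale(top_ref, base_ref, v_z, horizon, ref_height):
    """Solve the single-view metrology relation for the scale factor alpha,
    using a reference object of known height (points are homogeneous 3-vectors)."""
    t, b = np.asarray(top_ref, float), np.asarray(base_ref, float)
    return -np.linalg.norm(np.cross(b, t)) / (
        ref_height * np.dot(horizon, b) * np.linalg.norm(np.cross(v_z, t)))

def height_from_single_view(top, base, v_z, horizon, alpha):
    """Height of an object standing on the ground plane, from one frame."""
    t, b = np.asarray(top, float), np.asarray(base, float)
    return -np.linalg.norm(np.cross(b, t)) / (
        alpha * np.dot(horizon, b) * np.linalg.norm(np.cross(v_z, t)))

def ransac_fuse(heights, tol=0.03, iters=200, seed=0):
    """Fuse noisy per-frame heights: keep the largest consensus set within
    tol metres and return its mean (a generic RANSAC-style estimator)."""
    rng = np.random.default_rng(seed)
    heights = np.asarray(heights, float)
    best = heights[:1]
    for _ in range(iters):
        h = rng.choice(heights)
        inliers = heights[np.abs(heights - h) < tol]
        if len(inliers) > len(best):
            best = inliers
    return float(best.mean())
```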

Multi-task Architecture for Single Image Dynamic Blur Restoration and Motion Estimation (단일 영상 비균일 블러 제거를 위한 다중 학습 구조)

  • Jung, Hyungjoo;Jang, Hyunsung;Ha, Namkoo;Yeon, Yoonmo;Kwon, Ku yong;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society / v.22 no.10 / pp.1149-1159 / 2019
  • We present a novel deep learning architecture for recovering a latent image from a single blurry image containing dynamic motion blur caused by object/camera movement. The proposed architecture consists of two sub-modules: blur image restoration and optical flow estimation. The tasks are highly related in that object/camera movements cause the blurry artifacts, and those movements are in turn estimated through optical flow. An ablation study demonstrates that training the multi-task architecture jointly improves both tasks compared to handling them separately. Objective and subjective evaluations show that our method outperforms state-of-the-art deep learning based techniques.
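
A minimal sketch of such a two-head multi-task design, assuming a PyTorch implementation: a shared encoder feeds a residual deblurring head and a 2-channel optical-flow head, and the two task losses are summed with a weight. The layer sizes, loss choice, and weight below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskDeblur(nn.Module):
    """Shared encoder with two task heads: a restored (sharp) image and a
    2-channel optical-flow map. Sizes are illustrative only."""
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.deblur_head = nn.Conv2d(feat, 3, 3, padding=1)  # residual toward the latent image
        self.flow_head = nn.Conv2d(feat, 2, 3, padding=1)    # per-pixel motion (u, v)

    def forward(self, blurry):
        f = self.encoder(blurry)
        return blurry + self.deblur_head(f), self.flow_head(f)

def multitask_loss(pred_sharp, pred_flow, gt_sharp, gt_flow, w_flow=0.1):
    """Joint objective: weighted sum of the two task losses (weight is an assumption)."""
    return F.l1_loss(pred_sharp, gt_sharp) + w_flow * F.l1_loss(pred_flow, gt_flow)
```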

GPU-Accelerated Single Image Depth Estimation with Color-Filtered Aperture

  • Hsu, Yueh-Teng;Chen, Chun-Chieh;Tseng, Shu-Ming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.3 / pp.1058-1070 / 2014
  • There are two major ways to implement depth estimation: multiple-image depth estimation and single-image depth estimation. The former has a high hardware cost because it uses multiple cameras, but its software algorithm is simple. Conversely, the latter has a low hardware cost but a complex software algorithm. One of the recent trends in this field is to make systems compact, or even portable, and to simplify the optical elements attached to a conventional camera. In this paper, we present an implementation of single-image depth estimation using a graphics processing unit (GPU) in a desktop PC, and achieve real-time operation via our evolutionary algorithm and a parallel processing technique employing a compute shader. The methods greatly accelerate the compute-intensive depth estimation from a single view image, from 0.003 frames per second (fps) (implemented in MATLAB) to 53 fps, almost twice the real-time standard of 30 fps. To the best of our knowledge, no previous paper discusses the optimization of depth estimation using a single image, and the frame rate of our final result is better than that of previous studies using multiple images, whose frame rate is about 20 fps.
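
The depth cue behind a color-filtered aperture is a defocus-dependent displacement between color channels, and the per-pixel cost evaluation is what parallelizes well on a GPU. The sketch below illustrates only that principle with a brute-force CPU search in NumPy/SciPy; it is not the paper's evolutionary or compute-shader implementation, and the shift range and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def channel_shift_depth(img, max_shift=8, patch=7):
    """Coarse per-pixel depth proxy from the red/blue channel displacement
    induced by a color-filtered aperture: brute-force SSD over integer
    horizontal shifts, aggregated over a patch. Every pixel's cost is
    independent, which is why a compute-shader version is attractive."""
    r = img[..., 0].astype(float)
    b = img[..., 2].astype(float)
    best_cost = np.full(r.shape, np.inf)
    best_shift = np.zeros(r.shape, dtype=int)
    for s in range(-max_shift, max_shift + 1):
        cost = uniform_filter((r - np.roll(b, s, axis=1)) ** 2, size=patch)
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_shift[better] = s
    return best_shift  # larger |shift| means more defocus, i.e. farther from the focal plane
```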

Effect of Ultrasonic Frequency on the Atomization Characteristics of Single Water Droplet in an Acoustic Levitation Field (음향 부양장(acoustic levitation field)에서 초음파 주파수(ultrasonic frequency)에 따른 단일 액적의 미립화 특성)

  • Suh, Hyun Kyu
    • Journal of ILASS-Korea / v.18 no.3 / pp.126-131 / 2013
  • This paper describes the effect of ultrasonic frequency (f) on the atomization and deformation characteristics of a single water droplet in an acoustic levitation field. To achieve this, an ultrasonic levitator that can control sound pressure and velocity amplitude by changing the frequency was installed, and visualization of the single water droplet was conducted with high-resolution ICCD and CCD cameras. The atomization and deformation characteristics of the droplet were studied in terms of the normalized droplet diameter ($d/d_0$), the variation of the droplet diameter (d), and the variation of the droplet volume (V) under different ultrasonic frequency conditions. It was revealed that increasing the ultrasonic frequency reduces the droplet diameter, so the droplet can be levitated at a lower sound pressure level; it also induces a wider oscillation range and larger diameter and volume variations of the water droplet. In conclusion, increasing the ultrasonic frequency (f) can enhance the atomization performance of a single water droplet.

Learning Spatio-Temporal Topology of a Multiple Cameras Network by Tracking Human Movement (사람의 움직임 추적에 근거한 다중 카메라의 시공간 위상 학습)

  • Nam, Yun-Young;Ryu, Jung-Hun;Choi, Yoo-Joo;Cho, We-Duke
    • Journal of KIISE: Computing Practices and Letters / v.13 no.7 / pp.488-498 / 2007
  • This paper presents a novel approach for representing the spatio-temporal topology of a camera network with overlapping and non-overlapping fields of view (FOVs) in a Ubiquitous Smart Space (USS). The topology is determined by tracking moving objects and establishing object correspondences across multiple cameras. To track people successfully in multiple camera views, we use the Merge-Split (MS) approach to handle object occlusion within a single camera and a grid-based approach to extract accurate object features. In addition, we consider the appearance of people and the transition time between entry and exit zones to track objects across the blind regions of multiple cameras with non-overlapping FOVs. The main contribution of this paper is to estimate the transition times between the various entry and exit zones and to represent the camera topology graphically as an undirected weighted graph using the transition probabilities.
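
A minimal sketch of the topology-learning step: accumulate (exit zone, entry zone, transit time) events from tracks that disappear from one camera and reappear in another, then summarize each undirected edge by an empirical transition probability and a mean transit time. The estimator and the zone names are illustrative, not the paper's exact procedure.

```python
from collections import defaultdict

def learn_topology(transitions):
    """Build an undirected weighted camera-topology graph from observed
    (exit_zone, entry_zone, transit_seconds) events."""
    counts = defaultdict(int)
    times = defaultdict(list)
    for exit_zone, entry_zone, dt in transitions:
        edge = tuple(sorted((exit_zone, entry_zone)))  # undirected edge
        counts[edge] += 1
        times[edge].append(dt)
    total = sum(counts.values())
    return {edge: {"transition_probability": n / total,
                   "mean_transit_time": sum(times[edge]) / n}
            for edge, n in counts.items()}

# hypothetical events: a track leaves camera A's exit zone 1 and reappears at
# camera B's entry zone 1 after roughly 4 seconds, and so on
events = [("A_exit1", "B_entry1", 4.2), ("A_exit1", "B_entry1", 3.8),
          ("B_exit2", "C_entry1", 6.0)]
print(learn_topology(events))
```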

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.23-35 / 2002
  • Gaze detection is the task of locating the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system that places a single camera above the monitor while the user moves (rotates and/or translates) his or her face to gaze at different positions on the monitor. To detect the gaze position, we automatically locate the facial region and the facial features (both eyes, nostrils, and lip corners) in 2D camera images. From the movement of the feature points detected in the initial images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) the face to gaze at a position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D feature positions. Experimentally, we obtain the gaze position on a 19-inch monitor with an RMS error of about 2.01 inches between the computed and the real positions.
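
A minimal sketch of the final geometric step, assuming three facial feature points are already available in 3D: the gaze direction is taken as the normal of the plane through them, and the gaze point is the intersection of that ray with the monitor plane n.x = d. The coordinate frame, the choice of exactly three points, and the monitor plane are illustrative assumptions; the paper estimates several features and their 3D motion.

```python
import numpy as np

def gaze_on_monitor(p1, p2, p3, plane_normal=(0.0, 0.0, 1.0), plane_d=0.0):
    """Gaze point as the intersection of the face-plane normal ray with the
    monitor plane n.x = d (all quantities in the camera frame)."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    normal /= np.linalg.norm(normal)
    origin = (p1 + p2 + p3) / 3.0               # ray starts at the face-plane centroid
    n = np.asarray(plane_normal, float)
    t = (plane_d - n @ origin) / (n @ normal)   # ray/plane intersection parameter
    return origin + t * normal                  # 3D gaze point on the monitor plane

# toy usage with made-up feature coordinates (cm, camera frame, monitor at z = 0)
print(gaze_on_monitor([-3.0, 0.0, 50.0], [3.0, 0.0, 50.0], [0.0, -5.0, 52.0]))
```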

Design and Implementation of Real time Monitoring System based on Web camera for safe agricultural product management (안전한 농산물 관리를 위한 웹 카메라 기반의 실시간 모니터링 시스템의 설계 및 구현)

  • Kim Tak-Chen;Ryu Kwang-Hee;Jung Hoe-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.8 / pp.1366-1372 / 2006
  • After the import liberalization of agricultural products, imported products rapidly took market share from domestic ones, yet imported products may contain various agricultural chemicals and food additives. To improve the competitiveness of domestic farms and to secure food safety, farmers need systematic support and suitable systems. In this paper, we build a system that uses monitoring technology to deliver production and management information about agricultural products to consumers in real time. Instead of the analog cameras such as CCTV (Closed-Circuit Television) conventionally used for real-time monitoring, the system uses web cameras, which offer better picture quality than CCTV and can be deployed anywhere a network is available. The system provides a multi-vision interface that shows multiple images on a single screen and, to improve efficiency, functions for saving images and for scheduling when images are saved.

A NEW AUTO-GUIDING SYSTEM FOR CQUEAN

  • CHOI, NAHYUN;PARK, WON-KEE;LEE, HYE-IN;JI, TAE-GEUN;JEON, YISEUL;IM, MYUNGSHI;PAK, SOOJONG
    • Journal of The Korean Astronomical Society / v.48 no.3 / pp.177-185 / 2015
  • We develop a new auto-guiding system for the Camera for QUasars in the EArly uNiverse (CQUEAN). CQUEAN is an optical CCD camera system attached to the 2.1-m Otto-Struve Telescope (OST) at McDonald Observatory, USA. The new auto-guiding system differs from the original one in the following respects: instead of the Cassegrain focus of the OST, it is attached to the finder scope; it has its own filter system for observing bright targets; and it is controlled with the CQUEAN Auto-guiding Package, a newly developed auto-guiding program. The finder scope commands a very wide field of view at the expense of poorer light-gathering power than that of the OST. Based on star count data and the limiting magnitude of the system, we estimate that more than 5.9 stars are observable within a single FOV of the new auto-guiding CCD camera. An adapter was made to attach the system to the finder scope. The new auto-guiding system successfully guided the OST to obtain science data with CQUEAN during the test run in 2014 February. The FWHM and ellipticity distributions of stellar profiles on CQUEAN images guided with the new system indicate guiding capabilities similar to those of the original auto-guiding system, with slightly poorer performance at longer exposures, as indicated by the position angle distribution. We conclude that the new auto-guiding system has overall guiding performance similar to the original system. It will be used for the second-generation CQUEAN, but it can also be used for other Cassegrain instruments of the OST.
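
For illustration only, the core of any such auto-guider is a centroid-and-offset loop; the sketch below computes an intensity-weighted guide-star centroid and the telescope offset, in arcseconds, needed to re-centre it on a reference pixel. The CQUEAN Auto-guiding Package's actual algorithm, thresholds, and interfaces are not described in the abstract, so everything here is an assumption.

```python
import numpy as np

def guiding_correction(frame, reference_xy, arcsec_per_pixel, threshold=None):
    """Intensity-weighted centroid of the guide star and the mount offset
    (in arcseconds) that moves it back onto its reference pixel."""
    img = np.asarray(frame, float)
    thr = np.median(img) + 3.0 * img.std() if threshold is None else threshold
    ys, xs = np.nonzero(img > thr)          # pixels belonging to the guide star
    w = img[ys, xs]
    cx, cy = (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()
    dx, dy = cx - reference_xy[0], cy - reference_xy[1]
    return -dx * arcsec_per_pixel, -dy * arcsec_per_pixel  # correction to send to the mount
```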

A Study on Detecting Moving Objects using Multiple Fisheye Cameras (다중 어안 카메라를 이용한 움직이는 물체 검출 연구)

  • Bae, Kwang-Hyuk;Suhr, Jae-Kyu;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.4 / pp.32-40 / 2008
  • Since a vision-based surveillance system typically uses a conventional camera with a narrow field of view, it is difficult to apply it to environments where the ceiling is low and the monitoring area is wide. Increasing the number of cameras to overcome this problem raises the cost and complicates camera setup. To address these problems, we propose a new surveillance system based on multiple fisheye cameras, each with a 180-degree field of view. The proposed method handles occlusions using the homography relation between the multiple fisheye cameras. In the experiment, four fisheye cameras were set up at a height of 2.5 m over an area of 17 × 14 m, and five people wandered and crossed one another within this area. The detection rate of the proposed system was 83.0%, while that of a single camera was 46.1%.
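
A minimal sketch of the inter-camera step, assuming OpenCV and already-undistorted fisheye views: a ground-plane homography is estimated from corresponding foot points and then used to transfer a detection from one camera into another, so a person occluded in one view can be checked against the other. The function names and the use of cv2.findHomography are assumptions; the paper's exact occlusion-handling procedure is not reproduced here.

```python
import numpy as np
import cv2

def ground_plane_homography(feet_cam_a, feet_cam_b):
    """Estimate the ground-plane homography between two (already undistorted)
    fisheye views from at least four corresponding foot points."""
    H, _ = cv2.findHomography(np.float32(feet_cam_a), np.float32(feet_cam_b), cv2.RANSAC)
    return H

def transfer_foot_point(H, foot_xy):
    """Map a detected foot point from camera A into camera B, so a detection
    occluded in one view can be verified against the other view."""
    p = np.array([foot_xy[0], foot_xy[1], 1.0])
    q = H @ p
    return q[:2] / q[2]
```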

Real time Monitoring System using Web Camera (웹 카메라를 통한 실시간 모니터링 시스템)

  • Ryu, Kwang-Hee;Choi, Jong-Kun;Im, Young-Tae;Park, Yeon-Sik;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.667-670 / 2005
  • As security and surveillance have become the center of interest, a market for remotely controlled CCTV (Closed-Circuit Television) has formed, while the rapid development of digital image compression technology and the Internet triggered the advent of web cameras. The characteristic of a web camera is that it can provide users with higher-quality images than CCTV at any place where Internet access is available. However, from the system administrator's point of view, existing web cameras have the disadvantage that only users connected to the camera's server can see its images. In this paper, to make up for this defect, we design a multi-vision interface that shows multiple images on a single screen and, to improve efficiency, functions for saving images and for scheduling the times at which images are saved.
