• Title/Summary/Keyword: single-view camera


Interactive system using 3D integral imaging technique (3D 집적 영상을 이용한 인터렉티브 시스템)

  • Shin, Dong-Hak;Kim, Eun-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.503-506
    • /
    • 2008
  • Integral imaging is a promising 3D display technology since it can deliver continuous viewing points, full parallax, and full-color views to observers in space. In this paper, we propose a novel interactive 3D integral imaging system using a single camera. The user interface is implemented by adding a camera to the conventional integral imaging system. To show the feasibility of the proposed system, we implement the optical setup and present preliminary results.

  • PDF

Real-time Vision-based People Counting System for the Security Door

  • Kim, Jae-Won;Park, Kang-Sun;Park, Byeong-Doo;Ko, Sung-Jea
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1416-1419
    • /
    • 2002
  • This paper describes an implementation of a people counting system that detects and tracks moving people using a fixed single camera. The system counts the number of moving objects (people) entering the security door, and the detected objects are tracked by the proposed tracking algorithm before they enter. Running on an Intel Pentium IV, the proposed system operates at an average rate of 10 frames per second on real-world scenes in which up to six people come into the view of a vertically mounted camera.

  • PDF
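
The counting step described in the abstract can be sketched as a virtual-line crossing check on tracked centroids. This is an illustrative sketch, not the paper's algorithm; the function name, track representation, and line position are all assumptions.

```python
# Hypothetical sketch of door-entry counting: each tracked person is a
# sequence of centroid positions; a crossing of a virtual line at the door,
# in the entry direction, increments the count once per track.

def count_entries(tracks, line_y=240):
    """tracks: dict of track_id -> list of (x, y) centroids per frame."""
    entries = 0
    for trajectory in tracks.values():
        for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
            # Count a crossing when the centroid moves past the line
            # in the entry direction (increasing y here).
            if y0 < line_y <= y1:
                entries += 1
                break  # at most one entry per tracked person
    return entries
```

With a vertically mounted camera, as in the paper, a single horizontal line in the image plane suffices; a tilted camera would need a general line-side test instead.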

Video-based Height Measurements of Multiple Moving Objects

  • Jiang, Mingxin;Wang, Hongyu;Qiu, Tianshuang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.9
    • /
    • pp.3196-3210
    • /
    • 2014
  • This paper presents a novel video metrology approach based on robust tracking. From videos acquired by an uncalibrated stationary camera, the foreground likelihood map is obtained using the Codebook background modeling algorithm, and the multiple moving objects are tracked by a combined tracking algorithm. Then, we compute the vanishing line of the ground plane and the vertical vanishing point of the scene, and extract the head and feet feature points in each frame of the video sequences. Finally, we apply a single-view mensuration algorithm to each frame to obtain height measurements and fuse the multi-frame measurements using the RANSAC algorithm. Compared with other popular methods, our algorithm does not require camera calibration and can track multiple moving objects when occlusion occurs. It therefore reduces computational complexity and improves measurement accuracy simultaneously. The experimental results demonstrate that our method is effective and robust to occlusion.
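
The final fusion step, combining noisy per-frame height estimates with RANSAC, can be sketched for the scalar case. This is a generic RANSAC consensus on 1-D measurements, with illustrative tolerance and iteration counts; it is not the paper's exact procedure.

```python
import random

def ransac_fuse(measurements, tol=0.05, iters=100, seed=0):
    """Fuse noisy per-frame scalar measurements (e.g. heights in metres):
    repeatedly pick one measurement as a hypothesis, collect measurements
    within tol as inliers, and return the mean of the largest inlier set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        hypothesis = rng.choice(measurements)
        inliers = [m for m in measurements if abs(m - hypothesis) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return sum(best_inliers) / len(best_inliers)
```

Outlier frames (e.g. where occlusion corrupts the head or feet point) fall outside every large consensus set and are simply excluded from the final mean.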

Fusion algorithm for Integrated Face and Gait Identification (얼굴과 발걸음을 결합한 인식)

  • Nizami, Imran Fareed;Hong, Sug-Jun;Lee, Hee-Sung;Ann, Toh-Kar;Kim, Eun-Tai;Park, Mig-Non
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.11a
    • /
    • pp.15-18
    • /
    • 2007
  • Identification of humans from multiple viewpoints is an important task for surveillance and security purposes. For optimal performance, the system should use the maximum information available from its sensors. Multimodal biometric systems can utilize more than one physiological or behavioral characteristic for enrollment, verification, or identification. Since gait alone is not yet established as a highly distinctive feature, this paper presents an approach that fuses face and gait for identification. We consider the single-camera case, i.e., both face and gait recognition are performed on the same set of images captured by one camera. The aim is to improve system performance by utilizing the maximum amount of information available in the images. Fusion is performed at the decision level. The proposed algorithm is tested on the NLPR database.

  • PDF
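
One common decision-level fusion rule is a Borda count over the ranked candidate lists each modality produces. The abstract does not specify which decision-level rule is used, so this is only an illustrative sketch; the function name and rankings are assumptions.

```python
def borda_fusion(face_ranking, gait_ranking):
    """Decision-level fusion via Borda count: each modality awards a subject
    points inversely proportional to the rank it assigns, and the subject
    with the highest total is the fused decision."""
    n = len(face_ranking)
    points = {}
    for ranking in (face_ranking, gait_ranking):
        for rank, subject in enumerate(ranking):
            points[subject] = points.get(subject, 0) + (n - rank)
    return max(points, key=points.get)
```

Because only ranks (not raw scores) are combined, the two classifiers need no score normalization, which is one reason decision-level fusion is attractive when modalities differ as much as face and gait.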

Combining Shape and SIFT Features for 3-D Object Detection and Pose Estimation (효과적인 3차원 객체 인식 및 자세 추정을 위한 외형 및 SIFT 특징 정보 결합 기법)

  • Tak, Yoon-Sik;Hwang, Een-Jun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.2
    • /
    • pp.429-435
    • /
    • 2010
  • Three-dimensional (3-D) object detection and pose estimation from a single-view query image is an important issue in various fields such as medical applications, robot vision, and manufacturing automation. However, most existing methods are not suitable for real-time environments, since object detection and pose estimation require extensive information and computation. In this paper, we present a fast 3-D object detection and pose estimation scheme based on images of objects captured from surrounding camera viewpoints. Our scheme has two parts. First, we detect images similar to the query image in the database based on a shape feature and calculate candidate poses. Second, we perform accurate pose estimation for the candidate poses using the scale-invariant feature transform (SIFT) method. We carried out extensive experiments on our prototype system, achieved excellent performance, and report some of the results.
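
The two-stage scheme, a cheap shape-based shortlist followed by more expensive SIFT-style verification, can be sketched abstractly. Here shape descriptors are plain vectors and SIFT matches are mocked as set overlap; the database layout, names, and scoring are all illustrative assumptions, not the paper's implementation.

```python
def detect_pose(query_shape, query_feats, database, k=3):
    """database: list of (pose_label, shape_vector, feature_set) entries.
    Stage 1: shortlist the k entries with the closest shape descriptor.
    Stage 2: re-rank the shortlist by SIFT-style feature overlap."""
    def shape_dist(a, b):
        # squared Euclidean distance between shape descriptors
        return sum((x - y) ** 2 for x, y in zip(a, b))

    shortlist = sorted(database, key=lambda e: shape_dist(query_shape, e[1]))[:k]
    best = max(shortlist, key=lambda e: len(query_feats & e[2]))
    return best[0]
```

The speed-up comes from stage 1: the expensive feature matching runs only on k candidates instead of the whole view-changed image database.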

3D Reconstruction using multi-view structured light (다시점 구조광을 이용한 3D 복원)

  • Kang, Hyunmin;Park, Yongmun;Seo, Yongduek
    • Annual Conference of KIPS
    • /
    • 2022.11a
    • /
    • pp.288-289
    • /
    • 2022
  • In this paper, we propose a method for obtaining high-density geometric information using multi-view structured light. When reconstructing a 3D shape with a structured light system that uses a single projector, a reconstruction error arises from the difference in resolution between the projector and the camera, and the same error appears in the resulting 3D point cloud of the object. We therefore propose a high-density method that uses multiple projectors to mitigate this reconstruction error.
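
The densification from multiple projectors ultimately requires merging the per-projector reconstructions into one cloud. A naive merge with duplicate suppression on a coarse voxel grid can stand in for that step; the voxel size and function name are illustrative assumptions, as the abstract gives no details.

```python
def merge_point_clouds(clouds):
    """Merge per-projector point clouds into one denser cloud, dropping
    near-duplicate points by snapping to a 0.1-unit voxel grid."""
    merged, seen = [], set()
    for cloud in clouds:
        for point in cloud:
            key = tuple(round(c, 1) for c in point)  # coarse voxel key
            if key not in seen:
                seen.add(key)
                merged.append(point)
    return merged
```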

Effectual Method for 3D Rebuilding From Diverse Images

  • Leung, Carlos Wai Yin;Hons, B.E.
    • Proceedings of the Korea Information Convergence Society Conference
    • /
    • 2008.06a
    • /
    • pp.145-150
    • /
    • 2008
  • This thesis explores the problem of reconstructing a three-dimensional (3D) scene given a set of images or image sequences of the scene. It describes efficient methods for the 3D reconstruction of static and dynamic scenes from stereo images, stereo image sequences, and images captured from multiple viewpoints. Novel methods for image-based and volumetric modelling approaches to 3D reconstruction are presented, with an emphasis on the development of efficient algorithms which produce high-quality and accurate reconstructions. For image-based 3D reconstruction, a novel energy minimisation scheme, Iterated Dynamic Programming, is presented for the efficient computation of strong local minima of discontinuity-preserving energy functions. Coupled with a novel morphological decomposition method and subregioning schemes for the efficient computation of a narrowband matching cost volume, the minimisation framework is applied to solve problems in stereo matching, stereo-temporal reconstruction, motion estimation, 2D image registration, and 3D image registration. This thesis establishes Iterated Dynamic Programming as an efficient and effective energy minimisation scheme suitable for computer vision problems which involve finding correspondences across images. For 3D reconstruction from multiple-view images with arbitrary camera placement, a novel volumetric modelling technique, Embedded Voxel Colouring, is presented that efficiently embeds all reconstructions of a 3D scene into a single output in a single scan of the volumetric space under exact visibility. An adaptive thresholding framework is also introduced for the computation of the optimal set of thresholds to obtain high-quality 3D reconstructions. This thesis establishes the Embedded Voxel Colouring framework as a fast, efficient, and effective method for 3D reconstruction from multiple-view images.

  • PDF
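
The discontinuity-preserving 1-D energies that dynamic programming minimises in stereo matching can be illustrated on a single scanline: data cost is absolute intensity difference, plus a penalty on disparity changes between neighbouring pixels. This is a textbook scanline DP, not the thesis's Iterated Dynamic Programming; all names and parameters are illustrative.

```python
def scanline_dp(left, right, max_disp=2, smooth=2.0):
    """Minimise sum_x |L[x] - R[x - d_x]| + smooth * |d_x - d_{x-1}|
    over disparities d_x in [0, max_disp] by dynamic programming."""
    n = len(left)
    INF = float("inf")

    def data(x, d):
        return abs(left[x] - right[x - d]) if x - d >= 0 else INF

    cost = [[data(0, d) for d in range(max_disp + 1)]]
    back = []
    for x in range(1, n):
        row, brow = [], []
        for d in range(max_disp + 1):
            # best predecessor disparity, including the smoothness penalty
            best = min(range(max_disp + 1),
                       key=lambda p: cost[-1][p] + smooth * abs(p - d))
            row.append(data(x, d) + cost[-1][best] + smooth * abs(best - d))
            brow.append(best)
        cost.append(row)
        back.append(brow)

    # backtrack the minimum-energy disparity path
    d = min(range(max_disp + 1), key=lambda p: cost[-1][p])
    disp = [d]
    for brow in reversed(back):
        d = brow[d]
        disp.append(d)
    return disp[::-1]
```

The smoothness term is what makes the energy discontinuity-preserving in spirit: small disparity jumps are cheap, so surfaces stay smooth, while a genuine depth edge pays a bounded one-off cost.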

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.2
    • /
    • pp.96-105
    • /
    • 2004
  • Recovering a 3D image from 2D requires depth information for each picture element, and the manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to estimate the relative depth of every region from a single-view image under camera translation. The paper is based on the fact that the motion of every point in an image taken under camera translation depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We have developed a framework that estimates the average frame depth by analyzing the motion vectors and then calculates each region's depth relative to the average frame depth. Simulation results show that the estimated depths of regions belonging to near or far objects are consistent with the relative depths a human observer perceives.

  • PDF
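
The underlying relation, that under pure lateral camera translation image motion magnitude is inversely proportional to depth, makes the relative-depth step easy to sketch. This is a simplified illustration (rotation and zoom already compensated, one motion magnitude per region); the function name and normalisation by the mean are assumptions.

```python
def relative_depths(motion_mags, eps=1e-6):
    """Under pure lateral camera translation, image motion magnitude is
    inversely proportional to scene depth, so each region's depth relative
    to the average frame depth is (mean motion) / (region motion)."""
    mean_mag = sum(motion_mags) / len(motion_mags)
    return [mean_mag / max(m, eps) for m in motion_mags]
```

Regions moving faster than average come out with relative depth below 1 (near), slower regions above 1 (far), matching the qualitative result the abstract reports.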

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.10
    • /
    • pp.11-19
    • /
    • 2016
  • As displays become larger and more varied in form, previous gaze-tracking methods no longer apply; mounting the gaze-tracking camera above the display can resolve the problems caused by display size and height. However, this arrangement cannot use the corneal-reflection information from infrared illumination that previous methods rely on. In this paper, we propose a method that robustly detects the pupil under eye occlusion and simply calculates the gaze position from the inner eye corner, the pupil center, and the face pose. In the proposed method, the camera switches between wide- and narrow-angle modes according to the person's position: if a face is detected in the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode using the computed face position, so that the captured frame contains the gaze-direction information of a person at long distance. Calculating the gaze direction consists of a face pose estimation step and a gaze calculation step. Face pose is estimated by mapping feature points of the detected face to a 3D model. To locate the pupil, we first detect an ellipse from the split iris edge information; if the pupil is occluded, its position is estimated with a deformable template. Then, using the pupil center, the inner eye corner, and the face pose, we calculate the gaze position on the display. Experiments at several distances demonstrate that the proposed algorithm overcomes the constraints imposed by display form and effectively calculates the gaze direction of a person at long distance using a single camera.
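
The final mapping from eye geometry to a screen position can be illustrated with the simplest version: a linear map from the pupil-to-inner-corner offset to display coordinates. The paper additionally uses face pose; this sketch omits that, and the gain and center values stand in for a per-user calibration, so all numbers here are assumptions.

```python
def gaze_point(pupil, corner, gain=(40.0, 30.0), center=(960, 540)):
    """Map the pupil-to-eye-corner offset (image pixels) linearly onto
    screen coordinates; gain and center would come from calibration."""
    dx = pupil[0] - corner[0]
    dy = pupil[1] - corner[1]
    return (center[0] + gain[0] * dx, center[1] + gain[1] * dy)
```

Because the inner eye corner is rigid relative to the skull, the offset isolates eyeball rotation from head motion; the paper's face-pose term would then correct for head rotation, which this linear sketch ignores.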

A Method for Estimating a Distance Using the Stereo Zoom Lens Module (양안 줌렌즈를 이용한 물체의 거리추정)

  • Hwang, Eun-Seop;Kim, Nam;Kwon, Ki-Chul
    • Korean Journal of Optics and Photonics
    • /
    • v.17 no.6
    • /
    • pp.537-543
    • /
    • 2006
  • A method of estimating distance with a single zoom camera limits the measurable range to the optical axis within the field of view. In this paper, we therefore propose a method of estimating distance information over a wide range in a stereoscopic display using a stereo zoom lens module. The binocular stereo zoom lens system is composed of a horizontally moving camera module. The left and right images are acquired on a polarized stereo monitor to obtain the convergence and estimate the distance. The distance error, the difference between the optically traced distance and the estimated distance over the left-right range $(0mm{\sim}500mm)$ at the center, is under 10 mm. This shows that a system using both zoom and convergence yields more precise distance information than convergence control alone. Comparing the error distances of the two camera configurations also shows that distance estimation with a horizontally moving camera is more precise than with a toe-in camera.
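
For the parallel-axis (horizontally shifted) configuration the abstract favours, distance follows from the standard stereo triangulation relation Z = f·B/d. This sketch uses that textbook formula with illustrative parameter values, not the paper's calibrated setup.

```python
def stereo_distance(x_left, x_right, focal_px, baseline_mm):
    """Distance from disparity for a parallel-axis stereo pair:
    Z = f * B / d, with f in pixels, baseline B in mm, and
    disparity d = x_left - x_right in pixels."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_px * baseline_mm / d
```

A toe-in rig does not satisfy this relation directly (the image planes are not parallel), which is consistent with the abstract's finding that the horizontally moving configuration gives more precise estimates.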