• Title/Summary/Keyword: Stereo cameras.

Search Results: 208

Distance Measuring Method for Motion Capture Animation (모션캡쳐 애니메이션을 위한 거리 측정방법)

  • Lee, Heei-Man;Seo, Jeong-Man;Jung, Suun-Key
    • The KIPS Transactions:PartB
    • /
    • v.9B no.1
    • /
    • pp.129-138
    • /
    • 2002
  • In this paper, a distance-measuring algorithm for motion capture using color stereo cameras is proposed. Color markers attached to the articulations of an actor are captured by stereo color video cameras, and the regions in the captured images that match each marker's color are separated from the other colors by finding the dominant wavelength of the colors. Color data in the RGB (red, green, blue) color space are converted to the CIE (Commission Internationale de l'Eclairage) color space in order to calculate the wavelength. The dominant wavelength is selected from a histogram of the neighboring wavelengths. The motion of the character in cyberspace is controlled by a program using the distance information of the moving markers.
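
A minimal sketch of the color-space step described above, assuming linear sRGB values and the standard D65 conversion matrix (the paper does not specify which RGB primaries it uses); the marker segmentation and the wavelength histogram are only indicated in comments.

```python
import numpy as np

# Standard linear-sRGB (D65) to CIE XYZ conversion matrix.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xy_chromaticity(rgb):
    """Convert a linear RGB triple in [0, 1] to CIE xy chromaticity coordinates."""
    xyz = SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    total = xyz.sum()
    if total == 0.0:
        return 0.0, 0.0  # black pixel: chromaticity is undefined
    return xyz[0] / total, xyz[1] / total

# Example for a reddish marker pixel.  The paper's dominant-wavelength step
# would intersect the line from the white point through (x, y) with the
# spectral locus and build a histogram of the resulting wavelengths.
x, y = rgb_to_xy_chromaticity([0.8, 0.1, 0.1])
print(f"chromaticity: x={x:.3f}, y={y:.3f}")
```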

A 2-D Image Camera Calibration using a Mapping Approximation of Multi-Layer Perceptrons (다층퍼셉트론의 정합 근사화에 의한 2차원 영상의 카메라 오차보정)

  • 이문규;이정화
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.4 no.4
    • /
    • pp.487-493
    • /
    • 1998
  • Camera calibration is the process of determining the coordinate relationship between a camera image and its real-world space. Accurate calibration of a camera is necessary for applications that involve quantitative measurement from camera images. However, if the camera plane is parallel or nearly parallel to the calibration board on which the 2-dimensional objects are defined (the "ill-conditioned" case), existing solution procedures do not apply well. In this paper, we propose a neural network-based approach to camera calibration for 2D images formed by a mono-camera or a pair of cameras. Multi-layer perceptrons are developed to transform the coordinates of each image point to world coordinates. The validity of the approach is tested with data points that cover the whole 2D space concerned. Experimental results for both the mono-camera and stereo-camera cases indicate that the proposed approach is comparable to Tsai's method [8]. Especially in the stereo-camera case, the approach works better than Tsai's method as the angle between the camera optical axis and the Z-axis increases. Therefore, we believe the approach could be an alternative solution procedure for ill-conditioned camera calibration.
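
A minimal sketch of the multilayer-perceptron calibration idea above, assuming scikit-learn is available and that the learned mapping is from image pixel coordinates (u, v) to planar world coordinates (X, Y); the network size and the synthetic training data are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic calibration points: (u, v) pixel coordinates and their known
# (X, Y) world coordinates on the 2D calibration board (placeholders).
rng = np.random.default_rng(0)
uv = rng.uniform(0, 640, size=(200, 2))             # image coordinates
XY = 0.01 * uv + 0.05 * rng.normal(size=(200, 2))   # stand-in world coordinates

# A single network maps both output coordinates at once.
mlp = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
mlp.fit(uv, XY)

# Calibrated transform: pixel -> world coordinates for a new image point.
print(mlp.predict([[320.0, 240.0]]))
```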

Scanning Stereoscopic PIV for 3D Vorticity Measurement

  • SAKAKIBARA Jun;HORI Toshio
    • Proceedings of the Korean Society of Visualization Conference
    • /
    • 2004.12a
    • /
    • pp.1-13
    • /
    • 2004
  • A scanning stereo-PIV system was developed to measure the three-dimensional distribution of three-component velocity in a turbulent round jet. A laser beam produced by a high-repetition-rate YLF pulse laser was expanded vertically by a cylindrical lens to form a laser light sheet. The light sheet was scanned in the direction normal to the sheet by a flat mirror mounted on an optical scanner, which was controlled by a programmable scanner controller. Two high-speed mega-pixel-resolution CMOS cameras captured the particle images illuminated by the light sheet, and the stereoscopic PIV method was adopted to acquire the 3D three-component velocity distribution of the turbulent round jet in an octagonal tank filled with water. The jet Reynolds number was set at Re = 1000 and the streamwise location of the measurement was fixed at approximately x = 40D. The time evolution of the three-dimensional vortical structure, identified by vorticity, was visualized, clearly showing a group of hairpin-like vortex structures around the rim of the shear layer of the jet. The turbulence statistics show good agreement with previous data, and the divergence of the filtered (unfiltered) velocity vector field was 7% (22%) of the root-mean-squared vorticity value.
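
As a companion to the PIV description above, the sketch below shows how vorticity and the divergence check mentioned at the end of the abstract could be computed from a reconstructed three-component velocity field on a regular grid; the grid spacing and array layout are assumptions, not the authors' processing code.

```python
import numpy as np

def vorticity_and_divergence(u, v, w, dx, dy, dz):
    """Vorticity vector and divergence of a velocity field (u, v, w) sampled
    on a regular grid, with arrays indexed as [x, y, z]."""
    du = np.gradient(u, dx, dy, dz)   # derivatives of u along x, y, z
    dv = np.gradient(v, dx, dy, dz)
    dw = np.gradient(w, dx, dy, dz)
    omega_x = dw[1] - dv[2]           # dw/dy - dv/dz
    omega_y = du[2] - dw[0]           # du/dz - dw/dx
    omega_z = dv[0] - du[1]           # dv/dx - du/dy
    div = du[0] + dv[1] + dw[2]       # near zero for incompressible flow
    return (omega_x, omega_y, omega_z), div
```

For an incompressible jet the divergence is a useful error measure: the paper reports that, for the filtered field, it was about 7% of the root-mean-squared vorticity.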

Human Activity Recognition Using Body Joint-Angle Features and Hidden Markov Model

  • Uddin, Md. Zia;Thang, Nguyen Duc;Kim, Jeong-Tai;Kim, Tae-Seong
    • ETRI Journal
    • /
    • v.33 no.4
    • /
    • pp.569-579
    • /
    • 2011
  • This paper presents a novel approach for human activity recognition (HAR) using the joint angles from a 3D model of a human body. Unlike conventional approaches in which the joint angles are computed from inverse kinematic analysis of the optical marker positions captured with multiple cameras, our approach utilizes the body joint angles estimated directly from time-series activity images acquired with a single stereo camera by co-registering a 3D body model to the stereo information. The estimated joint-angle features are then mapped into codewords to generate discrete symbols for a hidden Markov model (HMM) of each activity. With these symbols, each activity is trained through the HMM, and later, all the trained HMMs are used for activity recognition. The performance of our joint-angle-based HAR has been compared to that of a conventional binary and depth silhouette-based HAR, producing significantly better results in the recognition rate, especially for the activities that are not discernible with the conventional approaches.
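
The pipeline above turns estimated body joint angles into discrete codewords before HMM training. A minimal sketch of those two steps is shown below, assuming 3D joint positions are already available from the stereo fitting; the nearest-codeword quantizer is a generic choice, not necessarily the one the authors used.

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle (radians) at `joint` between the segments toward `parent` and `child`."""
    a = np.asarray(parent, float) - np.asarray(joint, float)
    b = np.asarray(child, float) - np.asarray(joint, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def to_symbols(angle_features, codebook):
    """Map each frame's joint-angle feature vector to the index of the nearest
    codebook vector, producing the discrete symbol sequence fed to an HMM."""
    d = np.linalg.norm(angle_features[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

# Example: elbow angle from shoulder, elbow, and wrist positions (metres).
print(joint_angle([0.0, 0.4, 0.0], [0.0, 0.1, 0.0], [0.2, 0.1, 0.1]))
```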

Simultaneous Tracking of Multiple Construction Workers Using Stereo-Vision (다수의 건설인력 위치 추적을 위한 스테레오 비전의 활용)

  • Lee, Yong-Ju;Park, Man-Woo
    • Journal of KIBIM
    • /
    • v.7 no.1
    • /
    • pp.45-53
    • /
    • 2017
  • Continuous research efforts have been made on acquiring location data on construction sites. As a result, GPS and RFID are increasingly employed on site to track the location of equipment and materials. However, these systems are based on radio-frequency technologies that require attaching tags to every target entity, and implementing them incurs time and cost for attaching, detaching, and managing the tags or sensors. For this reason, efforts are currently being made to track construction entities using only cameras. Vision-based 3D tracking was presented in a previous research work in which the locations of construction manpower, vehicles, and materials were successfully tracked. However, the proposed system is still in its infancy and has yet to be implemented in practical applications, for two reasons. First, it does not involve entity matching across the two views and thus cannot be used for tracking multiple entities simultaneously. Second, the use of a checkerboard in the camera calibration process entails a focus-related problem when the baseline is long and the target entities are located far from the cameras. This paper proposes a vision-based method to track multiple workers simultaneously. An entity-matching procedure is added to acquire matching pairs of the same entities across the two views, which is necessary for tracking multiple entities. In addition, the proposed method simplifies the calibration process by avoiding the use of a checkerboard, making it more suitable for realistic deployment on construction sites.
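
For the stereo triangulation step that underlies the tracking above, a minimal OpenCV sketch is given below; the projection matrices and matched image points are placeholders, and the entity-matching step that pairs the same worker across the two views is assumed to have happened already.

```python
import numpy as np
import cv2

def triangulate(P1, P2, pts1, pts2):
    """Triangulate matched 2D points (2xN arrays) from two views into Nx3 points.

    P1 and P2 are the 3x4 projection matrices of the two calibrated cameras.
    """
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous points
    return (X_h[:3] / X_h[3]).T

# Placeholder matrices for a rectified pair with baseline b (metres) along x.
f, cx, cy, b = 1000.0, 640.0, 360.0, 0.5
P1 = np.array([[f, 0, cx, 0], [0, f, cy, 0], [0, 0, 1, 0]], dtype=float)
P2 = P1.copy()
P2[0, 3] = -f * b
pts1 = np.array([[650.0], [400.0]])   # one matched worker location per view
pts2 = np.array([[630.0], [400.0]])
print(triangulate(P1, P2, pts1, pts2))
```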

Toward Occlusion-Free Depth Estimation for Video Production

  • Park, Jong-Il;Inoue, Seiki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1997.06a
    • /
    • pp.131-136
    • /
    • 1997
  • We present a method to estimate a dense and sharp depth map using multiple cameras for application to flexible video production. A key issue in obtaining a sharp depth map is how to overcome the harmful influence of occlusion. We therefore first propose to selectively use the depth information from the multiple cameras. With a simple sort-and-discard technique, the occlusion problem is resolved considerably at a slight sacrifice in noise tolerance. However, boundary overreach from more textured areas into less textured areas at object boundaries still remains to be solved. We observed that the amount of boundary overreach is less than half the size of the matching window and that, unlike in usual stereo matching, the boundary overreach with the proposed occlusion-overcoming method shows a very abrupt transition. Based on these observations, we propose a hierarchical estimation scheme that attempts, on the one hand, to reduce boundary overreach so that the edges of the depth map coincide with object boundaries and, on the other hand, to reduce noisy estimates due to an insufficient matching-window size. We show that the hierarchical method can produce a sharp depth map for a variety of images.
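
The sort-and-discard idea above can be illustrated with a small sketch: for one pixel and one depth hypothesis, matching costs from several cameras are sorted and the largest ones (the views most likely to be occluded) are dropped before aggregation. The number discarded and the cost values are illustrative.

```python
import numpy as np

def occlusion_robust_cost(costs, n_discard=2):
    """Aggregate per-camera matching costs for one pixel and depth hypothesis.

    Sorting the costs and discarding the largest ones removes cameras in which
    the point is likely occluded, at a slight cost in noise tolerance.
    """
    sorted_costs = np.sort(np.asarray(costs, dtype=float))
    kept = sorted_costs[: len(sorted_costs) - n_discard]
    return kept.mean()

# Five cameras; the last two see an occluder, so their matching costs are large.
print(occlusion_robust_cost([0.11, 0.09, 0.13, 0.82, 0.95], n_discard=2))
```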

Position Estimation of Object Based on Vergence Movement of Cameras (카메라의 vergence 운동에 근거한 물체의 위치 추정)

  • 정남채
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.4
    • /
    • pp.59-64
    • /
    • 2001
  • In this paper, methods are proposed to solve the problems of segmenting zero-disparity regions and of extracting the binocular disparity needed to estimate the position of an object from the vergence movement of moving stereo cameras, and experiments are carried out to compare them. In previous studies, a single high threshold was applied uniformly to all small regions of the image, so regions with little change in intensity showed almost no change with respect to that threshold, and as a result corresponding points were extracted incorrectly. In the proposed method, the characteristics of each small region are evaluated by autocorrelation and the threshold is set proportional to the autocorrelation value; it is confirmed that corresponding points are then rarely extracted by mistake and that binocular disparity can be extracted at high speed.
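
The key change described above is to make the matching threshold for each small region proportional to that region's autocorrelation instead of using one fixed high threshold. A hedged sketch follows; the lag-1 definition of the autocorrelation and the proportionality constant k are assumptions made for illustration.

```python
import numpy as np

def region_threshold(patch, k=0.5):
    """Per-region matching threshold proportional to the patch's autocorrelation.

    A strongly textured patch yields a large autocorrelation value and hence a
    high threshold; a flat patch gets a low one, so its correspondences are no
    longer judged against an unreasonably high fixed threshold.
    """
    p = np.asarray(patch, dtype=float)
    p = p - p.mean()
    lag1 = np.mean(p[:, :-1] * p[:, 1:])   # horizontal lag-1 autocorrelation
    return k * abs(lag1)

patch = np.random.default_rng(1).normal(size=(8, 8))
print(region_threshold(patch))
```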

A Remote Measurement Technique for Rock Discontinuity (암반 불연속면의 원격 영상측량 기법)

  • 황상기
    • The Journal of Engineering Geology
    • /
    • v.11 no.2
    • /
    • pp.205-214
    • /
    • 2001
  • A simple automated measuring method for planar or linear features on a rock excavation surface is presented. The attitude of the planar and linear features is calculated from the 3D coordinates of points on the structures, and these spatial coordinates are calculated from overlapping stereo images. The factors used in the calculation are (1) the local coordinates of the left and right images, (2) the focal length of the cameras, and (3) the distance between the two cameras. A simple image-capturing device and an image-processing routine coded with Visual Basic and GIS components were constructed for the remote measurements. The methodology shows less than 1 cm of error when a point is measured from a distance of 179 cm. The methodology was tested at an excavation site at PaiChai University, and the remotely measured results match the manual measurements well within a reasonable error range.
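
The abstract above lists the quantities used to recover 3D coordinates (left/right image coordinates, focal length, camera separation) and then derives the attitude of a plane from points on it. A hedged sketch of both steps for an idealized parallel-camera geometry follows; the axis convention (x east, y north, z up) and the parallax formula for aligned cameras are assumptions, not the paper's exact implementation.

```python
import numpy as np

def stereo_point(xl, yl, xr, f, B):
    """3D coordinates of a point from its left/right image coordinates,
    assuming an idealized parallel stereo pair with baseline B and focal length f."""
    d = xl - xr                 # parallax (disparity)
    Z = f * B / d
    return np.array([xl * Z / f, yl * Z / f, Z])

def plane_attitude(p1, p2, p3):
    """Dip direction and dip (degrees) of the plane through three 3D points,
    with axes assumed to be x = east, y = north, z = up."""
    n = np.cross(np.asarray(p2, float) - p1, np.asarray(p3, float) - p1)
    if n[2] < 0:                # make the normal point upward
        n = -n
    dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_direction, dip

# Three points on a discontinuity plane dipping gently toward the east.
print(plane_attitude([0, 0, 0], [0, 1, 0], [1, 0, -0.2]))
```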

Precision Evaluation of Three-dimensional Feature Points Measurement by Binocular Vision

  • Xu, Guan;Li, Xiaotao;Su, Jian;Pan, Hongda;Tian, Guangdong
    • Journal of the Optical Society of Korea
    • /
    • v.15 no.1
    • /
    • pp.30-37
    • /
    • 2011
  • Binocular image pairs obtained from two cameras can be used to calculate the three-dimensional (3D) world coordinates of a feature point. However, the measurement accuracy of binocular vision depends on several structural factors. This paper presents an experimental study of the measurement distance, the baseline distance, and the baseline direction, and their effects on camera reconstruction accuracy are investigated. The testing set for the binocular model consists of a series of feature points in stereo-pair images and the corresponding 3D world coordinates. This paper discusses a method of increasing the baseline distance of the two cameras to enhance the accuracy of the binocular vision system. Moreover, there is an inflection point in the value and distribution of the measurement errors as the baseline distance is increased: the accuracy benefit of increasing the baseline distance is no longer obvious once the baseline distance exceeds 1000 mm in this experiment. Furthermore, it is observed that the errors deduced from the set-up are lower when the main measurement direction is similar to the baseline direction.
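
The dependence of reconstruction accuracy on the baseline discussed above follows the standard first-order stereo depth-error relation; the short sketch below evaluates it for illustrative parameter values that are not taken from the paper.

```python
def depth_error(Z, f_px, B, disparity_error_px=0.5):
    """First-order depth uncertainty of an ideal rectified stereo pair.

    From Z = f * B / d, an error of delta_d pixels in the disparity gives
    delta_Z ~= Z**2 / (f * B) * delta_d, so the error shrinks as the baseline
    B grows, until matching across the wider baseline becomes unreliable,
    which is consistent with the inflection point reported above.
    """
    return Z ** 2 / (f_px * B) * disparity_error_px

# Illustrative: a target 2 m away, 1200 px focal length, two baselines (metres).
for B in (0.3, 1.0):
    print(f"baseline {B} m -> depth error {depth_error(2.0, 1200.0, B):.4f} m")
```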

Evaluation of the Quantitative Practical Use of Smart Phone Stereo Cameras (스마트폰 스테레오 카메라의 정량적 활용성 평가)

  • Park, Kyeong-Sik;Choi, Seok-Keun
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.20 no.2
    • /
    • pp.93-100
    • /
    • 2012
  • Interest in 3-dimensional information and its practical use is rapidly increasing, and products offering stereoscopic views are being released. Mobile phones, unlike other devices, are used closely in everyday life, and their applications are nearly limitless. In this study, photographs were taken with the stereo camera of a mobile phone, and the possibility of obtaining quantitative information from them was examined. In addition, this study aims to evaluate the quantitative practical use of mobile phones by evaluating the accuracy of the obtained quantitative information. Interior orientation parameters were therefore determined through calibration of the lenses of the two cameras mounted on the mobile phone. Using the determined interior orientation parameters, the 3-dimensional coordinates of the targets in the test field were calculated and then compared with precisely observed coordinates. Moreover, orientation performed on an arbitrary building resulted in standard deviations of X = ±0.0674 m, Y = ±0.25319, and Z = ±0.4983 m, and the results also show that plotting is possible. As a result, smartphones could be utilized for acquiring quantitative information in close-range, small-scale measurements in which centimeter-level accuracy is not required.
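
For the interior-orientation step described above, a minimal OpenCV calibration sketch is given below, assuming chessboard images taken with one of the phone's two cameras; the board size, square size, and file-name pattern are placeholders, not details from the paper.

```python
import glob
import numpy as np
import cv2

# Chessboard geometry (inner corners) and square size in metres (placeholders).
pattern, square = (9, 6), 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
for path in glob.glob("phone_cam_*.jpg"):        # hypothetical image files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

# Interior orientation: camera matrix (focal length, principal point) and
# lens-distortion coefficients, plus the RMS reprojection error.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print(rms)
print(K)
```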