• Title/Summary/Keyword: camera pose

Search Results: 270

Measurement of 3D Shape of Fastener using Camera and Slit Laser (카메라와 슬릿 레이저를 이용한 나사 3D 형상 측정)

  • Kim, Jin Woo;Song, Tae Hun;Ha, Jong Eun
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.32 no.6
    • /
    • pp.537-542
    • /
    • 2015
  • The measurement of 3D shape is important in inspecting product quality. In this paper, we present a 3D shape measurement system for fasteners using a camera and a slit laser. A calibration structure with slits is used for the extrinsic calibration of the camera and laser, whose poses are computed in the same world coordinate system defined by the calibration structure. Reflection of the laser light on the metal surface makes robust detection of the laser stripe in the image difficult; we overcome this by using color information and dynamic programming. A motor stage rotates the fastener so that the full 3D shape of its surface can be recovered.
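
The camera-plus-slit-laser setup described above reduces, per pixel on the detected laser stripe, to intersecting a back-projected camera ray with the calibrated laser plane. A minimal sketch of that triangulation step, with illustrative intrinsics and plane parameters (not the paper's calibration values):

```python
def triangulate_laser_point(u, v, fx, fy, cx, cy, n, d):
    """Intersect the camera ray through pixel (u, v) with the laser plane.

    fx, fy, cx, cy: pinhole intrinsics; the plane is n . X = d in the
    camera frame. Returns the 3D point on the laser sheet.
    """
    # Back-project the pixel to a ray direction in camera coordinates.
    ray = ((u - cx) / fx, (v - cy) / fy, 1.0)
    # Scale the ray so the point satisfies the plane equation n . (t*ray) = d.
    denom = sum(ni * ri for ni, ri in zip(n, ray))
    t = d / denom
    return tuple(t * r for r in ray)

# Example: laser sheet facing the camera 0.5 m away; principal-point pixel.
point = triangulate_laser_point(320, 240, 800.0, 800.0, 320, 240,
                                (0.0, 0.0, 1.0), 0.5)
```

Sweeping this over every stripe pixel, frame by frame as the motor stage rotates the part, yields the surface point cloud.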

A Study on Measurement and Control of Position and Pose of a Mobile Robot using Kalman Filter and Lane Detecting Filter in Monocular Vision (단일 비전에서 칼만 필터와 차선 검출 필터를 이용한 모빌 로봇 주행 위치.자세 계측 제어에 관한 연구)

  • 이용구;송현승;노도환
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.81-81
    • /
    • 2000
  • We use a camera to emulate the human vision system for measurement, which requires knowledge of the camera parameters. These consist of internal and external parameters. By fixing the scale factor and focal length among the internal parameters, we can acquire the external parameters, and we want to use these parameters in an automatically driven vehicle. With respect to such a vehicle, the external parameters are the important ones, and they can be acquired once the focal length and scale factor are fixed. To obtain lane coordinates in the image, we propose a lane detection filter. After detecting the lanes, we find the vanishing point, and from it the y-axis rotation component (${\beta}$). Using these parameters, we can find the x-axis translation component ($X_o$). Before a stepping motor rotates to drive the y-axis rotation component (${\beta}$) to zero, we estimate the image coordinates of the lane at time (t+1). Feeding this point to a Kalman filter, we then calculate new parameters that minimize the error.
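
The Kalman-filter step in the abstract can be illustrated with a scalar filter tracking a noisy lane measurement; the state model and noise values below are illustrative placeholders, not the paper's:

```python
def kalman_step(x, p, z, q=0.01, r=0.5):
    """One predict/update cycle of a scalar Kalman filter.

    x, p: prior state (e.g. lane offset) and its variance; z: the new
    measurement; q, r: process and measurement noise (assumed values).
    """
    # Predict: constant-state model, so only the variance grows.
    p = p + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Filtering a few noisy observations of a lane offset near 1.0.
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:
    x, p = kalman_step(x, p, z)
```

The estimate moves toward the measurements while the variance shrinks, which is the behavior the paper relies on when predicting the lane position at (t+1).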


Point Cloud Generation Method Based on Lidar and Stereo Camera for Creating Virtual Space (가상공간 생성을 위한 라이다와 스테레오 카메라 기반 포인트 클라우드 생성 방안)

  • Lim, Yo Han;Jeong, In Hyeok;Lee, San Sung;Hwang, Sung Soo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.11
    • /
    • pp.1518-1525
    • /
    • 2021
  • Due to the growth of the VR industry and the rise of the digital twin industry, the importance of implementing 3D data identical to real space is increasing. However, this requires expert personnel and a huge amount of time. In this paper, we propose a system that generates point cloud data with the same shape and color as a real space, just by scanning the space. The proposed system integrates 3D geometric information from lidar and color information from a stereo camera into one point cloud. Since the number of 3D points generated by lidar is not enough to express a real space with good quality, some of the pixels of the 2D image generated by the camera are mapped to the correct 3D coordinates to increase the number of points. Additionally, to minimize storage, overlapping points are filtered out so that only one point exists at the same 3D coordinates. Finally, the 6DoF pose information generated from the lidar point cloud is replaced with the one generated from the camera image to position the points more accurately. Experimental results show that the proposed system easily and quickly generates point clouds very similar to the scanned space.
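
The overlap-filtering rule above (one point per 3D coordinate) amounts to quantizing coordinates to a grid and keeping one colored point per cell. A sketch with an assumed voxel size; the quantization resolution and first-point-wins policy are illustrative choices, not taken from the paper:

```python
def colorize_and_dedup(points, colors, voxel=0.01):
    """Merge colored 3D points, keeping one point per voxel cell.

    points: list of (x, y, z) in metres; colors: matching (r, g, b).
    Quantizing coordinates to `voxel` metres filters out overlapping
    points so only one survives at (approximately) the same location.
    """
    cloud = {}
    for p, c in zip(points, colors):
        key = tuple(round(v / voxel) for v in p)
        cloud.setdefault(key, (p, c))  # first point in a cell wins
    return list(cloud.values())

# Two near-duplicate points collapse into one; the distant one survives.
pts = [(0.0, 0.0, 1.0), (0.001, 0.0, 1.0), (0.5, 0.0, 1.0)]
cols = [(255, 0, 0)] * 3
merged = colorize_and_dedup(pts, cols)
```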

Estimating the Real-world Location of an Observer for an Adaptive Parallax Barrier (적응적 패럴랙스 베리어를 위한 사용자 위치 추적 방법)

  • Kang, Seok-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.12
    • /
    • pp.1492-1499
    • /
    • 2019
  • This paper proposes how to track the position of the observer in order to control the viewing zone using an adaptive parallax barrier. The pose is estimated using a Constrained Local Model based on the shape model and landmarks, for robust eye-distance measurement across face poses. The camera's correlation converts distance and horizontal location into centimeters. The pixel pitch of the adaptive parallax barrier is adjusted according to the position of the observer's eyes, and the barrier is moved to adjust the viewing area. We track the observer in the range of 60 cm to 490 cm, and measure the error, measurable range, and fps according to the resolution of the camera image. As a result, the observer's position can be measured within an average absolute error of 3.1642 cm, and the measurable range was about 278 cm at 320×240, about 488 cm at 640×480, and about 493 cm at 1280×960, depending on the image resolution.
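
Converting a detected eye separation in pixels to an observer distance follows the standard pinhole relation Z = f·X/x. A sketch; the focal length and interpupillary distance below are assumed values, not the paper's calibration:

```python
def observer_distance_cm(eye_px, focal_px, ipd_cm=6.3):
    """Distance from the screen camera to the observer (pinhole model).

    eye_px: pixel distance between the two detected eyes; focal_px:
    camera focal length in pixels; ipd_cm: assumed interpupillary
    distance. Pinhole geometry gives Z = f * X / x.
    """
    return focal_px * ipd_cm / eye_px

# At an assumed f = 550 px, eyes detected 35 px apart -> 99 cm away.
d = observer_distance_cm(35, 550)
```

This also explains the resolution dependence reported above: at higher resolutions the eyes remain several pixels apart at greater distances, extending the measurable range.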

Fall Detection Based on 2-Stacked Bi-LSTM and Human-Skeleton Keypoints of RGBD Camera (RGBD 카메라 기반의 Human-Skeleton Keypoints와 2-Stacked Bi-LSTM 모델을 이용한 낙상 탐지)

  • Shin, Byung Geun;Kim, Uung Ho;Lee, Sang Woo;Yang, Jae Young;Kim, Wongyum
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.491-500
    • /
    • 2021
  • In this study, we propose a method for detecting fall behavior using MS Kinect v2 RGBD camera-based human-skeleton keypoints and a 2-Stacked Bi-LSTM model. In previous studies, skeletal information was extracted from RGB images using a deep learning model such as OpenPose, and recognition was then performed using a recurrent neural network model such as LSTM or GRU. The proposed method receives skeletal information directly from the camera, extracts two time-series features, acceleration and distance, and then recognizes the fall behavior using the 2-Stacked Bi-LSTM model. A central joint was obtained from the major skeletal points such as the shoulder, spine, and pelvis, and the movement acceleration of this joint and its distance from the floor were proposed as features. The extracted features were compared across models such as Stacked LSTM and Bi-LSTM, and improved detection performance over existing studies based on GRU and LSTM was demonstrated through experiments.
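
The two central-joint features named above, movement acceleration and distance from the floor, can be sketched from a joint trajectory with finite differences. The frame rate and toy trajectory are illustrative assumptions, not the paper's data:

```python
def fall_features(track, dt=1 / 30):
    """Per-frame features from a central-joint trajectory.

    track: list of (x, y, z) positions of the central joint, with z as
    height above the floor. Returns (acceleration magnitude, floor
    distance) pairs, the two time-series features fed to the model.
    """
    feats = []
    for i in range(2, len(track)):
        # Second finite difference approximates acceleration per axis.
        acc = [
            (track[i][k] - 2 * track[i - 1][k] + track[i - 2][k]) / dt ** 2
            for k in range(3)
        ]
        mag = sum(a * a for a in acc) ** 0.5
        feats.append((mag, track[i][2]))
    return feats

# A joint dropping from 1.0 m toward the floor over a few frames.
track = [(0, 0, 1.0), (0, 0, 0.9), (0, 0, 0.6), (0, 0, 0.2)]
feats = fall_features(track)
```

Sequences of such pairs are what a stacked recurrent model like the 2-Stacked Bi-LSTM would consume.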

Segmentation of Polygons with Different Colors and its Application to the Development of Vision-based Tangram Puzzle Game (다른 색으로 구성된 다각형들의 분할과 이를 이용한 영상 인식 기반 칠교 퍼즐 놀이 개발)

  • Lee, Jihye;Yi, Kang;Kim, Kyungmi
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.12
    • /
    • pp.1890-1900
    • /
    • 2017
  • The tangram game consists of seven polygonal pieces such as triangles, squares, and a parallelogram. Typical image processing methods for object recognition may suffer from the side thickness and shadows of the puzzle pieces, which depend on the pose of the 3D-shaped pieces and the direction of the light sources. In this paper, we propose an image processing method that recognizes simple convex polygon-shaped objects irrespective of the thickness and pose of the puzzle objects. Our key algorithm for removing the thick sides of the puzzle pieces is based on morphological operations followed by logical operations with the edge image and the background image. Using the proposed object recognition method, we implemented a stable tangram game application designed for tablet computers with a front camera. In experiments, the recognition rate is about 86 percent and the recognition time is about 1 ms on average, showing that the proposed algorithm recognizes tangram blocks quickly and accurately.
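
The morphological side-stripping step can be illustrated with a plain binary erosion, which peels a region's border away; the 3×3 structuring element and toy image below are assumptions for illustration, not the paper's exact pipeline:

```python
def erode(img, it=1):
    """Binary erosion with a 3x3 structuring element.

    img: list of lists of 0/1. Each erosion removes the one-pixel
    border of a region, which is how a thick puzzle-piece side can be
    peeled away before the polygon itself is classified.
    """
    h, w = len(img), len(img[0])
    for _ in range(it):
        out = [[0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if all(img[y + dy][x + dx]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                    out[y][x] = 1
        img = out
    return img

# A 4x4 block of ones erodes to its 2x2 interior.
square = [[1 if 1 <= y <= 4 and 1 <= x <= 4 else 0 for x in range(6)]
          for y in range(6)]
shrunk = erode(square)
```

In the paper's method the eroded mask is then combined with the edge and background images by logical operations to isolate the top face of each piece.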

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Choi, Seong-Gu;Rho, Do-Hwan
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.30-32
    • /
    • 2004
  • With 3-D vision measuring, camera calibration is necessary to calculate parameters accurately. Camera calibration has developed broadly into two categories: the first establishes reference points in space, and the second uses a grid-type frame and a statistical method. However, the former makes it difficult to set up reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration using the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. An advantage of this algorithm is that it can estimate the distance to the object, which makes the proposed calibration method applicable to distance estimation in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error as affected by the scale factor and different line widths, and experimentally found an average scale factor that yields the least distance error for each image; this average scale factor fluctuates only slightly and decreases the distance error. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible, advancing camera calibration one more step from static environments toward real-world use such as autonomous land vehicles.
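
The distance-estimation claim rests on the pinhole relation between a grid line's real width and its imaged width; the paper's perspective ratio of several line widths refines this basic relation. A sketch with assumed values:

```python
def distance_from_line_width(width_px, width_mm, focal_px):
    """Estimate target distance from the imaged width of a grid line.

    Pinhole model: Z = f * W / w, with focal length f in pixels, real
    line width W, and imaged width w in pixels. The values used below
    are illustrative, not the paper's calibration data.
    """
    return focal_px * width_mm / width_px

# A 20 mm line imaged 8 px wide by an assumed f = 800 px camera.
z = distance_from_line_width(8, 20, 800)  # distance in mm
```

Comparing the ratios of several known line widths in one image is what lets the method separate distance from the other calibration parameters.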


A Study on the Automatic Lane Keeping Control Method of a Vehicle Based upon a Perception Net

  • Ahn, Doo-Sung;Choi, Jae-Weon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.160.3-160
    • /
    • 2001
  • The objective of this research is to monitor and control the vehicle motion in order to remove existing safety risks based upon human-machine cooperative vehicle control. A new control method is proposed to control the steering wheel of the vehicle to keep the lane. The desired steering wheel angle to control the vehicle motion can be calculated at every sample step based upon the vehicle dynamics and the current and estimated pose of the vehicle. The vehicle pose and the road curvature were calculated by geometrically fusing sensor data from the camera image, tachometer, and steering wheel encoder through the Perception Net, where not only the state variables but also the corresponding uncertainties were propagated in ...


Development of a Robot arm capable of recognizing 3-D object using stereo vision

  • Kim, Sungjin;Park, Seungjun;Park, Hongphyo;Won, Sangchul
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.128.6-128
    • /
    • 2001
  • In this paper, we present a methodology of sensing and control for a robot system designed to grasp an object and move it to a target point. A stereo vision system is employed to determine the depth map, which represents the distance from the camera. In the stereo vision system we use a center-referenced projection to represent the discrete match space for stereo correspondence. This center-referenced disparity space contains new occlusion points in addition to the match points, which we exploit to create a concise representation of correspondence and occlusion. From the depth map we then find the target object's pose and position in 3-D space using model-based recognition.
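
Once stereo correspondence is solved, each matched point's depth follows the standard disparity-to-depth relation Z = f·B/d. A sketch with illustrative camera values (not from the paper):

```python
def depth_from_disparity(d_px, focal_px, baseline_m):
    """Depth of a matched stereo point: Z = f * B / d.

    d_px: disparity between left and right image columns; focal_px:
    focal length in pixels; baseline_m: distance between the cameras.
    """
    if d_px <= 0:
        # Zero disparity means the point is at infinity (or occluded).
        return float("inf")
    return focal_px * baseline_m / d_px

# Assumed f = 700 px and 10 cm baseline: 35 px disparity -> about 2 m.
z = depth_from_disparity(35, 700, 0.10)
```

Applying this per matched pixel produces the depth map from which the object's pose and position are recognized.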


A Study on the automatic Lane keeping control method of a vehicle based upon a perception net (퍼셉션 넷에 기반한 차량의 자동 차선 위치 제어에 관한 연구)

  • 부광석;정문영
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.257-257
    • /
    • 2000
  • The objective of this research is to monitor and control the vehicle motion in order to remove existing safety risks based upon human-machine cooperative vehicle control. A predictive control method is proposed to control the steering wheel of the vehicle to keep the lane. The desired steering wheel angle to control the vehicle motion can be calculated at every sample step based upon the vehicle dynamics and the current and estimated pose of the vehicle. The vehicle pose and the road curvature were calculated by geometrically fusing sensor data from the camera image, tachometer, and steering wheel encoder through the Perception Net, where not only the state variables but also the corresponding uncertainties were propagated forward and backward so as to satisfy the given constraint conditions, maintain consistency, reduce the uncertainties, and guarantee robustness. A series of experiments was conducted to evaluate the control performance, in which a car-like robot was used to avoid safety problems. As a result, the robot kept a given lane of arbitrary shape very well at moderate speed.
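
Computing a desired steering angle from the vehicle pose and road curvature can be sketched as curvature feedforward (Ackermann geometry) plus feedback on lateral offset and heading error. The gains and wheelbase below are illustrative assumptions, not values from the paper:

```python
import math

def steering_angle(offset_m, heading_err, curvature, wheelbase=2.5,
                   k_off=0.5, k_head=1.0):
    """Steering command for lane keeping (simplified sketch).

    The feedforward term follows the road curvature via Ackermann
    geometry, delta = atan(L * kappa); the feedback terms correct the
    lateral offset and heading error estimated from the fused sensors.
    """
    feedforward = math.atan(wheelbase * curvature)
    return feedforward - k_off * offset_m - k_head * heading_err

# Centered on a straight lane -> zero steering command.
delta = steering_angle(0.0, 0.0, 0.0)
```

Evaluating this at every sample step, with pose and curvature supplied by the Perception Net fusion, closes the lane-keeping loop.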
