• Title/Summary/Keyword: camera vision

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2005.06a / pp.383-388 / 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, based on cues such as stereopsis, motion parallax, and blurring phenomena. Among these, depth from lens translation is a shape-from-motion technique that uses feature points: it derives depth from the correspondence of feature points detected across images and from information on their motion. Approaches based on motion vectors, however, suffer from occlusion and missing-part problems, and image blur is ignored during feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. We therefore first discuss the optical properties of the camera system, since image blur varies with camera parameter settings. The camera model integrates a thin-lens model, which explains the light and optical properties, with a perspective projection model, which explains depth from lens translation. Depth from lens translation is then computed from feature points detected at the edges of the image blur; these points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments with sequences of real and synthetic images compare the presented method with conventional depth from lens translation; the results demonstrate the validity of the proposed method and its applicability to depth estimation.
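
The sequential SVD factorization itself is not spelled out in the abstract. Below is a minimal sketch of the rank-3 SVD factorization that shape-from-motion methods of this kind build on (in the Tomasi-Kanade style), assuming centered feature tracks under an orthographic camera; it illustrates the idea rather than the authors' exact algorithm.

```python
import numpy as np

def factorize_shape_motion(W):
    """Factor a 2F x P matrix of feature tracks into motion (2F x 3)
    and shape (3 x P) via a rank-3 SVD, up to an affine ambiguity."""
    W = W - W.mean(axis=1, keepdims=True)        # remove per-row translation
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                # camera motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3]         # 3D shape factor
    return M, S

# Demo: two orthographic "frames" of a random rigid point set.
rng = np.random.default_rng(0)
S_true = rng.standard_normal((3, 20))            # 20 scene points
R = np.linalg.qr(rng.standard_normal((3, 3)))[0] # random rotation
W = np.vstack([R[:2] @ S_true, R[1:] @ S_true])  # stacked 2D tracks (4 x 20)
M, S = factorize_shape_motion(W)
Wc = W - W.mean(axis=1, keepdims=True)
print(np.abs(Wc - M @ S).max())                  # ~0: exact rank-3 fit
```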

Real-time 3D Volumetric Model Generation using Multiview RGB-D Camera (다시점 RGB-D 카메라를 이용한 실시간 3차원 체적 모델의 생성)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Kwon, Soon-Chul;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.25 no.3 / pp.439-448 / 2020
  • In this paper, we propose a modified optimization algorithm for point cloud matching across multi-view RGB-D cameras. In computer vision, accurately estimating camera positions is essential. The 3D model generation methods proposed in previous research require either a large number of cameras or expensive 3D cameras, and methods that obtain the cameras' extrinsic parameters from 2D images suffer from large errors. In this paper, we propose a matching technique for generating a 3D point cloud and mesh model that provides an omnidirectional free viewpoint using eight low-cost RGB-D cameras. The method applies depth-map-based function optimization together with the RGB images and obtains coordinate transformation parameters that can generate a high-quality 3D model without requiring initial parameters.
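
The paper's depth-map-based function optimization is not reproduced here. As a hedged sketch of the core sub-step that multi-camera point cloud registration builds on, the standard SVD (Kabsch) solution for the rigid transform aligning two sets of corresponding points looks like this:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t (Kabsch/SVD solution).
    src, dst: 3 x N arrays of corresponding 3D points."""
    mu_s = src.mean(axis=1, keepdims=True)
    mu_d = dst.mean(axis=1, keepdims=True)
    H = (src - mu_s) @ (dst - mu_d).T                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) # guard against reflection
    R = Vt.T @ D @ U.T
    return R, mu_d - R @ mu_s

# Demo: recover a known rotation and translation from 100 correspondences.
rng = np.random.default_rng(0)
src = rng.standard_normal((3, 100))
Q = np.linalg.qr(rng.standard_normal((3, 3)))[0]
R_true = Q * np.sign(np.linalg.det(Q))                 # force a proper rotation
dst = R_true @ src + np.array([[0.1], [0.5], [-0.2]])
R, t = rigid_transform(src, dst)
print(np.allclose(R @ src + t, dst))                   # True
```

A full multi-camera pipeline would run this kind of alignment inside an optimization over all eight cameras with depth-based correspondences, which is where the paper's contribution lies.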

Development of an intelligent camera for multiple body temperature detection (다중 체온 감지용 지능형 카메라 개발)

  • Lee, Su-In;Kim, Yun-Su;Seok, Jong-Won
    • Journal of IKEEE / v.26 no.3 / pp.430-436 / 2022
  • In this paper, we propose an intelligent camera for multiple body temperature detection. The proposed camera combines an optical sensor (4056×3040) and a thermal sensor (640×480) and detects abnormal symptoms by analyzing a person's facial expression and body temperature in the acquired images. The optical and thermal cameras operate simultaneously: an object is detected in the optical image, and the facial region and expression are analyzed from that object. The coordinates of the facial region in the optical image are then mapped to the thermal image, where the maximum temperature within the region is measured and displayed on the screen. Abnormal symptoms are determined from the three analyzed facial expressions (neutral, happy, sad) and the body temperature values. To evaluate the performance of the proposed camera, the optical image processing stages were tested on the Caltech, WIDER FACE, and CK+ datasets for three algorithms (object detection, facial region detection, and expression analysis), yielding accuracy scores of 91%, 91%, and 84%, respectively.
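
The optical-to-thermal coordinate mapping is only described at a high level. The sketch below assumes a pre-calibrated 3x3 homography between the sensors (a hypothetical scale-only one in the demo) to carry a face box from the 4056x3040 optical frame into the 640x480 thermal frame and read off the peak temperature:

```python
import numpy as np

def max_temp_in_face(thermal, H, box):
    """Map an optical-image face box through homography H into the
    thermal image and return the peak temperature inside it."""
    x0, y0, x1, y1 = box
    corners = np.array([[x0, y0, 1.0], [x1, y0, 1.0],
                        [x1, y1, 1.0], [x0, y1, 1.0]]).T
    p = H @ corners
    p = p[:2] / p[2]                                 # dehomogenize
    h, w = thermal.shape
    u0, v0 = np.maximum(p.min(axis=1).astype(int), 0)
    u1 = min(int(p[0].max()) + 1, w)
    v1 = min(int(p[1].max()) + 1, h)
    return thermal[v0:v1, u0:u1].max()

# Hypothetical scale-only mapping; a real system would calibrate H.
H = np.diag([640 / 4056, 480 / 3040, 1.0])
thermal = 30.0 + 8.0 * np.random.default_rng(0).random((480, 640))  # fake deg C
print(max_temp_in_face(thermal, H, (1800, 1200, 2300, 1800)))
```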

Development of Visual Inspection System for Minute Needle Probe (미세 탐침의 검사 시스템 개발)

  • Kang, Su-Min;Park, Se-Hyuk;Huh, Kyung-Moo
    • Proceedings of the KIEE Conference / 2008.04a / pp.123-124 / 2008
  • The appearance inspection of various electronic products and parts has traditionally been performed by human eyesight. However, visual inspection by humans cannot produce uniform results, because the outcome varies with the physical and mental condition of the inspector. Machine vision inspection systems are therefore now used in many appearance inspection fields in place of human inspectors. In this paper, we propose such an inspection system, which secures the objectivity of the inspection. The system was developed using only a PC, a CCD camera, and Visual C++, so that it can be deployed in a typical workplace.
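
As an illustrative sketch of the kind of PC-plus-CCD-camera inspection loop described (using OpenCV in Python rather than the authors' Visual C++, and with hypothetical tolerance values): binarize a probe image and pass or fail it on a measured width, with a synthetic frame standing in for a camera capture.

```python
import cv2
import numpy as np

def inspect_probe(gray, min_area=50, width_tol=(4, 12)):
    """Toy inspection: binarize, find the probe contour, and pass/fail
    it on the measured tip width in pixels (hypothetical spec)."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return False, "no probe found"
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    ok = width_tol[0] <= min(w, h) <= width_tol[1]
    return ok, f"width={min(w, h)}px"

# Synthetic frame standing in for a CCD capture: a thin bright needle.
frame = np.zeros((240, 320), np.uint8)
cv2.line(frame, (20, 120), (300, 120), 255, thickness=6)
print(inspect_probe(frame))
```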

Scene Recognition based Autonomous Robot Navigation robust to Dynamic Environments (동적 환경에 강인한 장면 인식 기반의 로봇 자율 주행)

  • Kim, Jung-Ho;Kweon, In-So
    • The Journal of Korea Robotics Society / v.3 no.3 / pp.245-254 / 2008
  • Recently, many vision-based navigation methods have been introduced as intelligent robot applications. However, most of these methods focus mainly on finding the database image that corresponds to a query image. Thus, if the environment changes, for example when objects move, the robot is unlikely to find consistent corresponding points with any of the database images. To solve these problems, we propose a novel navigation strategy that uses fast motion estimation and a practical scene recognition scheme prepared for the kidnapping problem, defined as the problem of re-localizing a mobile robot after it has undergone an unknown motion or visual occlusion. The algorithm is based on camera motion estimation to plan the robot's next movement and an efficient outlier rejection algorithm for scene recognition. Experimental results demonstrate the capability of the vision-based autonomous navigation in dynamic environments.
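
The efficient outlier rejection algorithm is not detailed in the abstract. A generic RANSAC-style rejection of bad feature correspondences, which plays the same role, might look like the following sketch (with a deliberately simple 2D translation model):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=2.0, seed=0):
    """Robustly estimate a 2D translation dst ~ src + t from noisy
    matches, rejecting outliers. src, dst: N x 2 matched points."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))                 # one match fixes t
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + t), axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)       # refit on the inlier set
    return t, best

# Demo: 50 matches, 10 of them grossly wrong.
rng = np.random.default_rng(1)
src = rng.random((50, 2)) * 100
dst = src + np.array([5.0, -3.0])
dst[:10] += rng.random((10, 2)) * 40 + 10          # gross outliers
t, inliers = ransac_translation(src, dst)
print(t.round(2), int(inliers.sum()))              # ~[5, -3], ~40 inliers
```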

Direct Depth and Color-based Environment Modeling and Mobile Robot Navigation (스테레오 비전 센서의 깊이 및 색상 정보를 이용한 환경 모델링 기반의 이동로봇 주행기술)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.3 / pp.194-202 / 2008
  • This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environment modeling, we directly use the depth and color information of image pixels as visual features. Furthermore, only the depth and color information along the horizontal centerline of the image, through which the optical axis passes, is used. The advantage of this choice is that a measure between the model and the sensing data can easily be built on the horizontal centerline alone, since the vertical working volume between the model and the sensing data changes with robot motion. The indoor environment can therefore be mapped in a compact and efficient representation. Based on such nodes and the sensing data, we also suggest a method for estimating the mobile robot's position with a random-sampling stochastic algorithm. Through basic real-world experiments, we show that the proposed method can serve as an effective visual navigation algorithm.
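
The random-sampling stochastic positioning algorithm is only named in the abstract. Purely as a toy Monte Carlo (particle-filter-style) sketch of the idea, with a hypothetical likelihood standing in for the centerline depth/color match score:

```python
import numpy as np

def localize(particles, weigh, n_iter=10, noise=0.05, seed=0):
    """Toy Monte Carlo localization: weight pose hypotheses by the
    measurement model, resample in proportion, and perturb.
    particles: N x 3 array of (x, y, theta) hypotheses."""
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        w = np.array([weigh(p) for p in particles])
        w = w / w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx] + rng.normal(0, noise, particles.shape)
    return particles.mean(axis=0)                   # pose estimate

# Hypothetical likelihood: in the paper this would score how well the
# centerline depth/color predicted at pose p matches the sensed data.
true_pose = np.array([1.0, 2.0, 0.3])
weigh = lambda p: np.exp(-np.sum((p - true_pose) ** 2) / 0.1)
particles = np.random.default_rng(1).uniform(-3, 3, (300, 3))
print(localize(particles, weigh).round(2))          # converges near true_pose
```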

Optimal algorithm of FOV for solder joint inspection using neural network (신경회로망을 이용한 납땜 검사 FOV의 최적화 알고리즘)

  • 오제휘;차영엽
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 1997.10a / pp.1549-1552 / 1997
  • In this paper, an optimal algorithm that determines the FOVs is proposed, based on Kohonen's Self-Organizing Map (KSOM). An FOV, which stands for "field of view", is the maximum area a camera can see at once, and it influences the total inspection time of the vision system. We therefore design an algorithm with a KSOM, which maps an N-dimensional input space onto a one- or two-dimensional lattice of output-layer neurons, to optimize the number and locations of the FOVs in place of the former sequential method. We then demonstrate the algorithm through computer simulation using real PCB data.
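
In the spirit of the abstract, a small sketch of a one-dimensional Kohonen SOM pulling a chain of FOV centers toward component coordinates follows; the PCB data here are randomly generated and the learning parameters are illustrative, not the authors' settings.

```python
import numpy as np

def som_fov_centers(points, n_nodes=6, iters=2000, seed=0):
    """1D Kohonen SOM: adapt a chain of FOV centers toward component
    coordinates so that nearby components share a field of view."""
    rng = np.random.default_rng(seed)
    nodes = points[rng.choice(len(points), n_nodes)]     # init on the data
    for t in range(iters):
        lr = 0.5 * (1 - t / iters)                       # decaying rate
        sigma = max(n_nodes / 2 * (1 - t / iters), 0.5)  # shrinking radius
        x = points[rng.integers(len(points))]            # random sample
        winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
        h = np.exp(-((np.arange(n_nodes) - winner) ** 2) / (2 * sigma**2))
        nodes += lr * h[:, None] * (x - nodes)           # pull neighborhood
    return nodes

pcb = np.random.default_rng(2).uniform(0, 100, (80, 2))  # fake joint locations
print(som_fov_centers(pcb).round(1))                     # candidate FOV centers
```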

The compensation of kinematic differences of a robot using image information (화상정보를 이용한 로봇기구학의 오차 보정)

  • Lee, Young-Jin;Lee, Min-Chul;Ahn, Chul-Ki;Son, Kwon;Lee, Jang-Myung
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 1997.10a / pp.1840-1843 / 1997
  • The task environment of a robot is changing rapidly, and the tasks themselves are becoming more complicated due to the current industrial trend toward multi-product, small-lot-size production. A convenient user-interfaced off-line programming (OLP) system is being developed to overcome the difficulty of teaching robot tasks. Using the OLP system, operators can easily teach robot tasks off-line and verify their feasibility through simulation prior to on-line execution. However, some task errors are inevitable because of kinematic differences between the robot model in the OLP system and the actual robot. Three calibration methods using image information are proposed to compensate for these kinematic differences: a relative position vector method, a three-point compensation method, and a baseline compensation method. In the calibration experiments, a vision system with one monochrome camera is used to compensate for the kinematic differences.
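
The three calibration methods are only named in the abstract. Purely as an illustration of the simplest flavor, a relative-position-vector style correction might subtract the offset observed on a reference marker from subsequent commanded targets (all numbers hypothetical):

```python
import numpy as np

def compensate(target_model, marker_model, marker_measured):
    """Relative-position-vector style correction (illustrative): shift a
    commanded target by the model error observed at a reference marker."""
    offset = marker_measured - marker_model    # OLP model vs. vision (mm)
    return target_model + offset

marker_model = np.array([200.0, 50.0, 30.0])     # where OLP expects the marker
marker_measured = np.array([203.1, 48.7, 30.4])  # where the camera sees it
print(compensate(np.array([250.0, 60.0, 30.0]), marker_model, marker_measured))
```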

Implementation and Experimentation of Tracking Control of a Moving Object for Humanoid Robot Arms ROBOKER by Stereo Vision (스테레오 비전정보를 사용한 휴머노이드 로봇 팔 ROBOKER의 동적 물체 추종제어 구현 및 실험)

  • Lee, Woon-Kyu;Kim, Dong-Min;Choi, Ho-Jin;Kim, Jeong-Seob;Jung, Seul
    • Journal of Institute of Control, Robotics and Systems / v.14 no.10 / pp.998-1004 / 2008
  • In this paper, a visual servoing control technique for humanoid robot arms is implemented for tracking a moving object. An embedded time-delayed controller is designed on an FPGA (field-programmable gate array) chip and implemented to control the humanoid robot arms. The position of the moving object is detected by a stereo vision camera and converted to joint commands through inverse kinematics. The robot arm then performs visual servoing control to track the moving object in real time. Experimental studies are conducted, and the results demonstrate the feasibility of the visual feedback control method for a moving-object tracking task by the humanoid robot arms, called the ROBOKER.
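
The ROBOKER kinematics are not given in the abstract. A minimal sketch of the "tracked position to joint commands" step, for a hypothetical planar 2-link arm with closed-form inverse kinematics:

```python
import numpy as np

def ik_2link(x, y, l1=0.3, l2=0.25):
    """Closed-form inverse kinematics of a planar 2-link arm: joint
    angles placing the end effector at (x, y), elbow-down solution."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

# Target as it might come from the stereo tracker (meters, arm base frame).
q1, q2 = ik_2link(0.35, 0.20)
print(np.degrees([q1, q2]).round(1))   # joint commands in degrees
```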

Vision-based hand gesture recognition system for object manipulation in virtual space (가상 공간에서의 객체 조작을 위한 비전 기반의 손동작 인식 시스템)

  • Park, Ho-Sik;Jung, Ha-Young;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the IEEK Conference / 2005.11a / pp.553-556 / 2005
  • We present a vision-based hand gesture recognition system for object manipulation in virtual space. Most conventional hand gesture recognition systems use simple methods for hand detection, such as background subtraction under assumed static observation conditions, which are not robust against camera motion, illumination changes, and so on. We therefore propose a statistical method that detects and recognizes hand regions in images using their geometrical structure. In addition, our hand tracking system employs multiple cameras to reduce occlusion problems, and non-synchronous multiple observations enhance the system's scalability. Experimental results show the effectiveness of our method.
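
The statistical hand detection method is not specified beyond the abstract. A rough sketch of one common statistical formulation, a Gaussian skin-color model in normalized (r, g) chromaticity, follows; the mean and covariance here are hypothetical and would normally be fit from labeled training pixels.

```python
import numpy as np

def skin_mask(rgb, mean, cov_inv, thresh=4.0):
    """Flag pixels whose normalized (r, g) chromaticity lies within a
    Mahalanobis distance of a learned skin-color Gaussian."""
    s = rgb.sum(axis=2, keepdims=True).clip(min=1)
    rg = (rgb / s)[..., :2] - mean                        # centered chromaticity
    d2 = np.einsum("...i,ij,...j->...", rg, cov_inv, rg)  # Mahalanobis^2
    return d2 < thresh

# Hypothetical skin model parameters (illustrative only).
mean = np.array([0.45, 0.31])
cov_inv = np.linalg.inv(np.array([[0.002, 0.0], [0.0, 0.001]]))
frame = np.random.default_rng(0).integers(0, 256, (120, 160, 3)).astype(float)
print(skin_mask(frame, mean, cov_inv).mean())             # fraction flagged
```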
