• Title/Summary/Keyword: 2D vision

Search Result 619

Changes in Visual Function After Viewing an Anaglyph 3D Image (Anaglyph 3D입체 영상 시청 후의 시기능 변화)

  • Lee, Wook-Jin; Kwak, Ho-Won; Son, Jeong-Sik; Kim, In-Su; Yu, Dong-Sik
    • Journal of Korean Ophthalmic Optics Society / v.16 no.2 / pp.179-186 / 2011
  • Purpose: This study aimed to compare and assess changes in visual function after viewing an anaglyph 3D image. Methods: Visual functions were examined before and after viewing a 2D image and an anaglyph 3D image with red-green glasses in seventy college students (mean age 22.29±2.19 years). Visual function tests included the von Graefe phoria test, accommodative amplitude by minus-lens addition, negative and positive relative accommodation (NRA, PRA), negative and positive relative convergence (NRC, PRC), accommodative facility, and vergence facility. Results: Near exophoria and accommodative amplitude were reduced after viewing the 3D image, and although the related changes were small, NRC tended to increase and PRC to decrease at near. There were no significant changes in NRA and PRA, while accommodative and vergence facility improved. Conclusions: Changes in visual function were greater for the 3D image than for the 2D image, and greater at near than at distance. In particular, the improvement in accommodative and vergence facility could be related to the accommodation and vergence shifts required to achieve stereopsis in the 3D image. These results indicate that viewing an anaglyph 3D image may, to some extent, have an effect similar to anaglyph-based vision training.

Application of Stereo Vision for Shape Measurement of Free-form Surface using Shape-from-shading (자유곡면의 형상 측정에서 shape-from-shading을 접목한 스테레오 비전의 적용)

  • Yang, Young-Soo; Bae, Kang-Yul
    • Journal of the Korean Society of Manufacturing Process Engineers / v.16 no.5 / pp.134-140 / 2017
  • Shape-from-shading (SFS) and stereo vision algorithms can be used with imaging techniques to measure the shape of an object for effective non-contact sensing. SFS algorithms can reconstruct 3D information from 2D image data, offering relatively comprehensive information. Meanwhile, a stereo vision algorithm needs several feature points or lines to extract 3D information from two 2D images. However, to measure the size of an object with a freeform surface, the two algorithms need additional information, such as boundary conditions and grids, respectively. In this study, a stereo vision scheme that uses the depth information obtained by shape-from-shading as matching patterns was proposed to measure the size of an object with a freeform surface. The feasibility of the scheme was demonstrated with an experiment in which images of an object were acquired by a CCD camera at two positions, processed by SFS, and finally by stereo matching. The experimental results showed that the proposed scheme could recognize the size and shape of the freeform surface fairly well.
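A minimal sketch of the underlying idea, assuming OpenCV with two grayscale views named left.png and right.png: a shading-derived cue is computed for each view and block matching is run on those cue maps instead of the raw images. The file names and the simple gradient-based shading proxy are illustrative, not the authors' SFS implementation.

```python
import cv2
import numpy as np

# Two views of the object acquired at different camera positions (hypothetical file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

def shading_cue(img):
    """Crude shape-from-shading proxy: surface-slant magnitude from image gradients.
    Real SFS solves a reflectance-map problem; this stand-in only provides texture-like
    patterns on otherwise featureless freeform surfaces."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=5)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=5)
    slant = np.sqrt(gx * gx + gy * gy)
    return cv2.normalize(slant, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Block matching on the shading-derived patterns instead of the raw images.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(shading_cue(left), shading_cue(right)).astype(np.float32) / 16.0

# Disparity is inversely proportional to depth; with calibration (focal length f,
# baseline b) the depth is f * b / disparity.
print("disparity range:", disparity.min(), disparity.max())
```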

Path finding via VRML and VISION overlay for Autonomous Robotic (로봇의 위치보정을 통한 경로계획)

  • Sohn, Eun-Ho; Park, Jong-Ho; Kim, Young-Chul; Chong, Kil-To
    • Proceedings of the KIEE Conference / 2006.10c / pp.527-529 / 2006
  • In this paper, we find a robot's path using the Virtual Reality Modeling Language (VRML) and a vision overlay. To correct the robot's path, we describe a method for localizing a mobile robot in its working environment using a vision system and VRML. The robot identifies landmarks in the environment using image processing and neural network pattern matching techniques, and then performs self-positioning with the vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D vision scene is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the overlay of the 2-D and VRML scenes. The method successfully defines the robot's path.
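The abstract only says a "well-known localization algorithm" is used; as one generic possibility, the sketch below shows least-squares trilateration from ranges to known landmarks with NumPy. The landmark coordinates and measured ranges are made-up illustration data, not the paper's.

```python
import numpy as np

# Known 2-D landmark positions in the VRML/world frame (illustrative values).
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
# Ranges to each landmark as measured by the vision system (illustrative values).
ranges = np.array([2.24, 2.83, 2.24])

def trilaterate(landmarks, ranges):
    """Linearize ||p - l_i||^2 = r_i^2 against the first landmark and solve by least squares."""
    l0, r0 = landmarks[0], ranges[0]
    A = 2.0 * (landmarks[1:] - l0)
    b = (r0 ** 2 - ranges[1:] ** 2
         + np.sum(landmarks[1:] ** 2, axis=1) - np.sum(l0 ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

print("estimated robot position:", trilaterate(landmarks, ranges))
```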


f-MRI with Three-Dimensional Visual Stimulation (삼차원 시각 자극을 이용한 f-MRI 연구)

  • Kim C.Y.; Park H.J.; Oh S.J.; Ahn C.B.
    • Investigative Magnetic Resonance Imaging / v.9 no.1 / pp.24-29 / 2005
  • Purpose: Instead of conventional two-dimensional (2-D) visual stimuli, three-dimensional (3-D) visual stimuli with stereoscopic vision were employed for functional Magnetic Resonance Imaging (f-MRI). In this paper, f-MRI with 3-D visual stimuli is investigated in comparison with f-MRI with 2-D visual stimuli. Materials and Methods: The anaglyph, which generates stereoscopic vision by viewing color-coded images with red-blue glasses, was used for the 3-D visual stimuli; two-dimensional visual stimuli were also used for comparison. f-MRI experiments were performed on healthy volunteers with 2-D and 3-D visual stimuli on a 3.0 Tesla MRI system. Results: The occipital lobes were activated by the 3-D visual stimuli similarly to the f-MRI with conventional 2-D visual stimuli. The regions activated by the 3-D visual stimuli were, however, about 18% larger than those activated by the 2-D stimuli. Conclusion: Stereoscopic vision is the basis of three-dimensional human perception. In this paper, 3-D visual stimuli were applied using the anaglyph, and f-MRI was performed with 2-D and 3-D visual stimuli on a 3.0 Tesla whole-body MRI system. The larger occipital activation for the 3-D stimuli is attributed to the more complex character of 3-D human vision compared to 2-D vision. f-MRI with 3-D visual stimuli may be useful in various fields that rely on 3-D human vision, such as virtual reality, 3-D displays, and 3-D multimedia content.
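For reference, a red-blue anaglyph of the kind used for the 3-D stimuli can be composed from a stereo pair as sketched below with OpenCV; the file names left.png and right.png are hypothetical.

```python
import cv2
import numpy as np

# Stereo pair for the left and right eyes (hypothetical file names).
left = cv2.imread("left.png")    # BGR
right = cv2.imread("right.png")  # BGR

left_gray = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
right_gray = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

# Red-blue anaglyph: the left-eye image drives the red channel,
# the right-eye image drives the blue channel (OpenCV stores channels as B, G, R).
# Copying right_gray into the green channel as well would give the red-cyan variant.
anaglyph = np.zeros_like(left)
anaglyph[:, :, 2] = left_gray    # red
anaglyph[:, :, 0] = right_gray   # blue

cv2.imwrite("anaglyph.png", anaglyph)
```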


Industrial Bin-Picking Applications Using Active 3D Vision System (능동 3D비전을 이용한 산업용 로봇의 빈-피킹 공정기술)

  • Tae-Seok Jin
    • Journal of the Korean Society of Industry Convergence / v.26 no.2_2 / pp.249-254 / 2023
  • The use of robots in automated factories requires accurate bin picking to ensure that objects are correctly identified and selected. For atypical objects with multiple reflections from their surfaces, this is a challenging task. In this paper, we developed a random 3D bin-picking system by integrating a low-cost vision system with the robot system. The vision system identifies the position and posture of candidate parts, and the robot system then checks whether one of the candidate parts is pickable; if so, the robot picks up that part and places it accurately at the target location.
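A minimal sketch of the pick-candidate validation loop described above. The data structure, workspace limits, tilt limit, and function names are illustrative assumptions; the abstract does not specify which checks the actual system performs.

```python
from dataclasses import dataclass
import math

@dataclass
class Candidate:
    """Position (m) and posture (tilt from vertical, rad) reported by the vision system."""
    x: float
    y: float
    z: float
    tilt: float

# Illustrative limits: reachable workspace box and the largest tilt the gripper tolerates.
WORKSPACE = ((-0.4, 0.4), (-0.4, 0.4), (0.0, 0.5))
MAX_TILT = math.radians(30)

def is_pickable(c: Candidate) -> bool:
    inside = all(lo <= v <= hi for v, (lo, hi) in zip((c.x, c.y, c.z), WORKSPACE))
    return inside and abs(c.tilt) <= MAX_TILT

def select_part(candidates):
    """Return the first candidate that passes the checks, mirroring the
    'validate, then pick and place' flow in the abstract."""
    for c in candidates:
        if is_pickable(c):
            return c
    return None

parts = [Candidate(0.6, 0.1, 0.20, 0.1), Candidate(0.1, -0.2, 0.15, 0.3)]
print("picked:", select_part(parts))
```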

Road marking classification method based on intensity of 2D Laser Scanner (신호세기를 이용한 2차원 레이저 스캐너 기반 노면표시 분류 기법)

  • Park, Seong-Hyeon; Choi, Jeong-hee; Park, Yong-Wan
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.5 / pp.313-323 / 2016
  • With the development of autonomous vehicles, there has been active research on advanced driver assistance systems for road marking detection using vision sensors and 3D laser scanners. However, vision sensors have the weakness that detection is difficult in situations involving severe illumination variance, such as at night, inside a tunnel, or in a shaded area, and processing time is long because of the large amount of data from both the vision sensor and the 3D laser scanner. Accordingly, this paper proposes a road marking detection and classification method using a single 2D laser scanner. The method detects and classifies road markings based on accumulated distance and intensity data acquired through the 2D laser scanner. Experiments using a real autonomous vehicle in a real environment showed that calculation time decreased in comparison with the 3D laser scanner-based method, demonstrating the feasibility of road marking classification using a single 2D laser scanner.
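A minimal sketch of the core idea of separating painted markings from asphalt by laser return intensity in accumulated 2D scans, assuming NumPy. The sample points, fixed threshold, and width heuristic are illustrative; the paper's accumulation and classification steps are more involved.

```python
import numpy as np

# Accumulated scan points: each row is (x, y) in metres plus return intensity,
# gathered over successive 2D scans as the vehicle moves (illustrative values).
points = np.array([
    [1.2, 0.1, 35.0],
    [1.3, 0.1, 180.0],   # painted marking: high retro-reflective intensity
    [1.4, 0.1, 190.0],
    [1.5, 0.1, 40.0],
])

INTENSITY_THRESHOLD = 120.0  # illustrative; real systems calibrate per range/angle

marking_mask = points[:, 2] > INTENSITY_THRESHOLD
marking_points = points[marking_mask, :2]

# The extent of the high-intensity segment gives a crude cue for the marking type
# (e.g., narrow lane line vs. wide stop line).
width = np.ptp(marking_points[:, 0]) if len(marking_points) else 0.0
print("marking points:", marking_points, "approx width:", width)
```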

Stereo matching algorithm based on systolic array architecture using edges and pixel data (에지 및 픽셀 데이터를 이용한 어레이구조의 스테레오 매칭 알고리즘)

  • Jung, Woo-Young; Park, Sung-Chan; Jung, Hong
    • Proceedings of the KIEE Conference / 2003.11c / pp.777-780 / 2003
  • Researchers have long tried to create a vision system like the human eye, and many studies have produced notable results; among these, stereo vision is the most similar to the human eye. It is the process of recreating 3-D spatial information from a pair of 2-D images. In this paper, we have designed a stereo matching algorithm based on a systolic array architecture using edges and pixel data. The resulting vision system addresses some problems of previous stereo vision systems: it decreases noise and improves the matching rate by using edges and pixel data, and it improves processing speed by using a highly integrated single-chip FPGA and compact modules. The system can be applied to robot vision, autonomous vehicles, and artificial satellites.
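The sketch below illustrates in software the kind of combined cost such an architecture evaluates: a sum-of-absolute-differences pixel cost plus an edge-consistency term per disparity, using OpenCV/NumPy. The window size, edge weight, disparity range, and synthetic input are illustrative; the paper's systolic-array hardware is not reproduced here.

```python
import cv2
import numpy as np

def match(left, right, max_disp=16, win=5, edge_weight=0.5):
    """Winner-take-all disparity from a pixel SAD cost plus an edge-consistency cost."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    # Edge strength via horizontal Sobel on each image.
    le = np.abs(cv2.Sobel(left, cv2.CV_32F, 1, 0, ksize=3))
    re = np.abs(cv2.Sobel(right, cv2.CV_32F, 1, 0, ksize=3))
    h, w = left.shape
    cost = np.full((h, w, max_disp), np.inf, dtype=np.float32)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :w - d])     # pixel cost
        ediff = np.abs(le[:, d:] - re[:, :w - d])          # edge cost
        # Aggregate the combined cost over a win x win window (the SAD step).
        cost[:, d:, d] = cv2.blur(diff + edge_weight * ediff, (win, win))
    return np.argmin(cost, axis=2)

left = (np.random.rand(64, 96) * 255).astype(np.uint8)
right = np.roll(left, -4, axis=1)   # synthetic pair with a 4-pixel shift
print("median disparity:", np.median(match(left, right)))
```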


Effective Real Time Tracking System using Stereo Vision

  • Lee, Hyun-Jin; Kuc, Tae-Young
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.70.1-70 / 2001
  • Recently, research on visual control has become more essential in robotic applications, and acquiring 3D information from 2D images is becoming more important with the development of vision systems. For this application, we propose an effective way of controlling a stereo vision tracking system for target tracking and for calculating the distance between the target and the camera. In this paper we present an improved controller using a dual-loop visual servo, which is more effective than a single-loop visual servo for a stereo vision tracking system. Speed and accuracy are important for realizing real-time tracking. However, vision processing is too slow to track an object in real time using only vision feedback data, so we use additional feedback data from the controller parts, which offer state feedback ...
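A minimal simulation of the dual-loop idea: a slow vision loop refreshes the target estimate while a fast inner loop servos on encoder (state) feedback between vision updates. The rates, gain, and 1-D plant below are illustrative assumptions, not the authors' controller.

```python
# Dual-loop visual servo sketch: outer (vision) loop at 10 Hz, inner (state-feedback)
# loop at 1 kHz. A single-loop controller could only act on each new vision sample.
dt_inner = 0.001                 # inner-loop period [s]
vision_period = 0.1              # outer-loop (vision) period [s]
kp = 8.0                         # proportional gain of the inner loop

target_true = 1.0                # actual target position seen by the camera
target_est = 0.0                 # target estimate held between vision updates
position = 0.0                   # current axis position from encoder feedback

steps_per_vision = int(vision_period / dt_inner)
for step in range(1000):         # simulate one second
    if step % steps_per_vision == 0:
        target_est = target_true           # new vision measurement arrives
    error = target_est - position          # fast inner loop on state feedback
    position += kp * error * dt_inner

print(f"final position: {position:.3f} (target {target_true})")
```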


Recognition and Machining for Large 2D Object using Robot Vision (로봇 비젼을 이용한 대형 2차원 물체의 인식과 가공)

  • Cho, Che-Seung; Chung, Byeong-Mook
    • Journal of the Korean Society for Precision Engineering / v.16 no.2 s.95 / pp.68-73 / 1999
  • Generally, most machining processes are carried out according to the dimensions of a drawing made by CAD. However, in the machining of 2D objects, a sample is often given without a drawing because of the simplicity of the shape. To cut the same shape as the given sample, this paper proposes a method to extract geometric information about a large sample using robot vision and to produce a dimensional drawing for machining. Because the resolution of one frame in the vision system is too low, the camera must be set up according to the desired resolution, and images must be captured while moving along the contour. The overall outline can then be composed from the sequentially captured images. In the experiment, we compared the product after cutting with the original sample and found that the sizes of the two objects coincided within the allowed error bound.
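A minimal sketch of composing the overall outline from frames captured at known robot positions along the contour, assuming OpenCV/NumPy, a fixed mm-per-pixel scale, and synthetic frames so the example runs end to end; the paper's camera setup and stitching details are not reproduced.

```python
import cv2
import numpy as np

MM_PER_PIXEL = 0.5               # illustrative camera resolution at the working distance
FRAME_H, FRAME_W = 240, 320

def edge_frame(image):
    """Extract the local contour from one captured frame."""
    return cv2.Canny(image, 50, 150)

# (frame, robot x/y position in mm) pairs captured while moving along the contour.
captures = [
    ((np.random.rand(FRAME_H, FRAME_W) * 255).astype(np.uint8), (0.0, 0.0)),
    ((np.random.rand(FRAME_H, FRAME_W) * 255).astype(np.uint8), (120.0, 0.0)),
]

# Paste each frame's edges into a large canvas at the offset given by the robot pose,
# so the sequentially captured images compound into one overall outline.
canvas = np.zeros((1000, 1000), dtype=np.uint8)
for frame, (x_mm, y_mm) in captures:
    edges = edge_frame(frame)
    col = int(x_mm / MM_PER_PIXEL)
    row = int(y_mm / MM_PER_PIXEL)
    canvas[row:row + FRAME_H, col:col + FRAME_W] = np.maximum(
        canvas[row:row + FRAME_H, col:col + FRAME_W], edges)

cv2.imwrite("outline.png", canvas)
```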


Stereo Vision-Based 3D Pose Estimation of Product Labels for Bin Picking (빈피킹을 위한 스테레오 비전 기반의 제품 라벨의 3차원 자세 추정)

  • Udaya, Wijenayake; Choi, Sung-In; Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems / v.22 no.1 / pp.8-16 / 2016
  • In the field of computer vision and robotics, bin picking is an important application area in which object pose estimation is necessary. Different approaches, such as 2D feature tracking and 3D surface reconstruction, have been introduced to estimate the object pose accurately. We propose a new approach that uses both 2D image features and 3D surface information to identify the target object and estimate its pose accurately. First, we introduce a label detection technique using Maximally Stable Extremal Regions (MSERs), in which the label detection results are used to identify the target objects individually. Then, the 2D image features in the detected label areas are used to generate 3D surface information. Finally, the 3D position and orientation of the target objects are calculated from the 3D surface information.
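The MSER-based label detection step can be sketched with OpenCV as below; the subsequent stereo matching and triangulation are only indicated in comments, and the image name and size filter are illustrative assumptions.

```python
import cv2

# Grayscale image of the bin from one camera of the stereo pair (hypothetical file name).
gray = cv2.imread("bin_left.png", cv2.IMREAD_GRAYSCALE)

# Detect Maximally Stable Extremal Regions; printed product labels tend to form stable regions.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

# Keep roughly label-sized bounding boxes (illustrative size filter).
label_boxes = [(x, y, w, h) for (x, y, w, h) in bboxes if 40 < w < 400 and 15 < h < 150]

# For each detected label area, 2D features would next be matched against the second
# view and triangulated (e.g., ORB matches + cv2.triangulatePoints) to recover the
# 3D surface and hence the label's 3D position and orientation.
orb = cv2.ORB_create()
for (x, y, w, h) in label_boxes:
    patch = gray[y:y + h, x:x + w]
    keypoints, descriptors = orb.detectAndCompute(patch, None)
    print(f"label at ({x},{y}) size {w}x{h}: {len(keypoints)} features")
```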