• Title/Summary/Keyword: $360^{\circ}$ Lens Distortion

Search Results: 7

The Development of Small 360° Oral Scanner Lens Module (소형 360° 구강 스캐너 렌즈 모듈 개발)

  • Kwak, Dong-Hoon;Lee, Sun-Gu;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.22 no.3
    • /
    • pp.858-861
    • /
    • 2018
  • In this paper, we propose the development of a small $360^{\circ}$ oral scanner lens module. The proposed module consists of a small $360^{\circ}$ high-resolution (4-megapixel) lens optical system, a 15 mm image sensor unit, and the external housing of the small $360^{\circ}$ oral scanner lens. The small $360^{\circ}$ high-resolution lens optical system comprises nine lenses in total, with an outer lens diameter of no more than 15 mm so that it can be used by patients from childhood through adulthood. Light gathered by the optical system is bent $90^{\circ}$ so that the image is delivered to the image sensor. The 15 mm image sensor unit reads out the image array through the column and row addresses of the image sensor and sends the converted values to the ISP (Image Signal Processor) on the embedded board. The external housing was designed to hold the developed lens firmly in place. In performance tests by an authorized testing agency, the optical resolving power of the $360^{\circ}$ lens was 30% or more at 150 cycles/mm, the viewing angle was $360^{\circ}$ in the horizontal direction and $42^{\circ}{\sim}85^{\circ}$ in the vertical direction, and the lens distortion rate was 5% or less, results on par with the world's highest level.
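The 5% figure above refers to the standard definition of optical distortion: the percentage deviation of the actual image height from the ideal (paraxial) image height. A minimal Python sketch of that calculation (the formula is the conventional definition; the sample heights are illustrative, not measurements from the paper):

```python
def distortion_rate(actual_height_mm, ideal_height_mm):
    """Optical distortion in percent: deviation of the actual image
    height from the ideal (paraxial) image height."""
    return (actual_height_mm - ideal_height_mm) / ideal_height_mm * 100.0

# Illustrative check against a 5% spec: a field point whose ideal
# image height is 5.00 mm actually lands at 4.80 mm (-4% distortion).
rate = distortion_rate(4.80, 5.00)
print(abs(rate) <= 5.0)  # prints True: within the 5% spec
```

A negative rate indicates barrel distortion (image compressed toward the axis); a positive rate indicates pincushion distortion.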

Development of 360° Omnidirectional IP Camera with High Resolution of 12Million Pixels (1200만 화소의 고해상도 360° 전방위 IP 카메라 개발)

  • Lee, Hee-Yeol;Lee, Sun-Gu;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.21 no.3
    • /
    • pp.268-271
    • /
    • 2017
  • In this paper, we propose the development of a high-resolution $360^{\circ}$ omnidirectional IP camera with 12 million pixels. The proposed camera consists of a lens unit with a $360^{\circ}$ omnidirectional viewing angle and a 12-megapixel high-resolution IP camera unit. The lens unit adopts an isochronous lens design method and a catadioptric facet production method to obtain images free of the peripheral distortion that is unavoidable with a fisheye lens. The 12-megapixel high-resolution IP camera unit consists of a CMOS sensor & ISP unit, a DSP unit, and an I/O unit; it converts the image input to the camera into a digital image, performs image distortion correction, image correction, and image compression, and then transmits the result to an NVR (Network Video Recorder). To evaluate the performance of the proposed camera, the 12.3-megapixel effective image output, the $360^{\circ}$ omnidirectional lens angle of view, and compliance with electromagnetic certification standards were measured.

A Study on Effective Stitching Technique of 360° Camera Image (360° 카메라 영상의 효율적인 스티칭 기법에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.16 no.2
    • /
    • pp.335-341
    • /
    • 2018
  • This study examines effective stitching techniques for video recorded with a dual-lens $360^{\circ}$ camera composed of two fisheye lenses. First, problems were identified in the results of stitching with the camera's bundled program. The study then comparatively analyzed the results of stitching with the professional programs Autopano Video Pro and Autopano Giga, looking for a stitching technique that is more efficient and closer to perfect. The problems of the bundled program turned out to be horizontal and vertical distortion, exposure and color mismatch, and uneven stitching lines. The horizontal and vertical problems could be solved with the Automatic Horizon and Verticals Tool of Autopano Video Pro and Autopano Giga, the exposure and color problems with the Levels, Color and Edit Color Anchors functions, and the stitching-line problems with the Mask function. Based on this study, it is hoped that near-perfect $360^{\circ}$ VR video content can be produced through efficient stitching of video recorded with dual-lens $360^{\circ}$ cameras in the future.

The Performance Analysis and Design of Selling Spectacle Lenses in Domestic Market (국내 시판 안경렌즈의 성능 분석 및 설계)

  • Kim, Se-Jin;Lim, Hyeon-Seon
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.15 no.4
    • /
    • pp.355-360
    • /
    • 2010
  • Purpose: To analyze the performance of spectacle lenses sold in the domestic market and to design an optimized spectacle lens with corrected aberrations. Methods: Center thickness, radius of curvature, and aspheric surface coefficients were measured for ${\pm}$5.00D spherical and aspheric lenses of refractive index 1.6 from four different companies. Three instruments were used: an ID-F150 (Mitutoyo) for center thickness, a FOCOVISION (SR-2, Automation Robotics) for radius, and a PGI 1240S (Taylor Hobson) for aspheric surface coefficients. A lens was then designed with a distance of 27 mm from the lens rear surface to the center of rotation of the eye, a pupil diameter of 4 mm, and low aberration over a $30^{\circ}$ central visual field; shortening the axial distance relative to the measured lenses improves the cosmetic appearance. The lens design tool was CODE V (Optical Research Associates). Results: The -5.00D aspheric lens showed somewhat higher astigmatism and distortion than the spherical lens, but its shorter axial height and reduced edge thickness gave it a cosmetic advantage. The +5.00D aspheric lens likewise showed improved distortion performance and the same cosmetic merits of shorter axial height and reduced edge thickness as the (-) lens. Through an optimization process, a front-surface aspheric lens achieved good astigmatism and distortion performance as well as cosmetic advantages compared with the measured spherical lenses. Conclusions: The design trend for domestic aspheric lenses is to reduce axial height and thickness for cosmetic merit rather than to improve aberration performance. Using optimization design theory, a front-surface aspheric spectacle lens was designed that improves aberration performance and cosmetic merit at the same time compared with the measured lenses. A rear-surface aspheric design is expected to perform even better than the front-surface design.
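The cosmetic merits discussed here (lower axial height, thinner edges) come down to surface sagitta: a flatter surface has less sag at the lens edge. A small Python sketch of the standard spherical-sagitta and surface-power relations (textbook formulas; the numbers are illustrative, not the paper's measured values):

```python
import math

def surface_radius_mm(power_diopters, n=1.6):
    """Radius of curvature (mm) of a single refracting surface from
    the surface power formula P = (n - 1) / r, with r in meters."""
    return (n - 1) / power_diopters * 1000.0

def sagitta_mm(radius_mm, semi_diameter_mm):
    """Sag of a spherical surface at a given semi-diameter:
    s = r - sqrt(r^2 - h^2)."""
    return radius_mm - math.sqrt(radius_mm**2 - semi_diameter_mm**2)

# Illustrative: a -5.00 D surface in n = 1.6 material on a 60 mm blank.
r = abs(surface_radius_mm(-5.0))      # ~120 mm radius of curvature
print(round(sagitta_mm(r, 30.0), 2))  # -> 3.81 mm of sag at the edge
```

Reducing this sag, for example by flattening the surface aspherically, is what thins the lens edge, at the cost of the aberration changes the paper measures.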

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.8
    • /
    • pp.868-874
    • /
    • 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on extracting obstacle features with Lucas-Kanade optical flow (LKOF) motion detection from images obtained through fish-eye lenses mounted on robots. Omni-directional image sensors suffer from distortion because they use a fish-eye lens or mirror, but they enable real-time image processing for mobile robots because all information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points of the obstacles, which yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-facing fish-eye lenses. Second, we remove the feature points of the floor surface using a histogram filter and label the extracted obstacle candidates. Third, we estimate the locations of obstacles from motion vectors using LKOF. Finally, we estimate the robot position using an Extended Kalman Filter based on the obstacle positions obtained by LKOF and create a map. We confirm the reliability of the mapping algorithm by comparing maps obtained with the proposed algorithm against real maps.
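The motion-vector step (LKOF) solves, at each feature point, a small least-squares system built from image gradients. A self-contained NumPy sketch of one Lucas-Kanade step (an illustrative single-window version, not the authors' implementation, which would typically use a pyramidal tracker):

```python
import numpy as np

def lucas_kanade_step(prev_img, curr_img, x, y, win=7):
    """Estimate the optical-flow vector (dx, dy) at pixel (x, y) by
    solving the Lucas-Kanade normal equations over a win x win window."""
    h = win // 2
    p = prev_img[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    c = curr_img[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Iy, Ix = np.gradient(p)   # spatial gradients (rows = y, cols = x)
    It = c - p                # temporal gradient between the two frames
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solve
    return flow               # [dx, dy] in pixels
```

In the paper's pipeline, vectors like these, computed only at obstacle feature points, feed the obstacle position estimates and then the EKF update.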

Coordinates Transformation and Correction Techniques of the Distorted Omni-directional Image (왜곡된 전 방향 영상에서의 좌표 변환 및 보정)

  • Cha, Sun-Hee;Park, Young-Min;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.1
    • /
    • pp.816-819
    • /
    • 2005
  • This paper proposes a coordinate correction technique that uses a 3D parabolic coordinate transformation function and a BP (Back Propagation) neural network to solve the spatial distortion caused by a catadioptric camera. Although a catadioptric camera can capture an omni-directional image covering all $360^{\circ}$, the external shape of the mirror itself distorts the image. Accordingly, to recover ideal distance coordinates in 3D space from the distorted image, we use a coordinate transformation function based on the focus of the parabolic mirror and the parabolic projection of the input image. The residual error of this transformation is corrected by a BP neural network algorithm.
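The abstract does not spell out the transformation function; for orientation, the standard single-viewpoint model for a paraboloidal mirror (assumed here for illustration, e.g. the Geyer-Daniilidis unified model) maps a scene point $(X, Y, Z)$, expressed relative to the mirror focus with $Z$ along the mirror axis, to orthographic image coordinates:

```latex
x = \frac{2pX}{\rho + Z}, \qquad
y = \frac{2pY}{\rho + Z}, \qquad
\rho = \sqrt{X^2 + Y^2 + Z^2},
```

where $p$ is the focal parameter of the paraboloid (the sign of $Z$ depends on the chosen axis orientation). Inverting this mapping recovers ideal distance coordinates from image positions; the residual deviations, from mirror form error and misalignment, are what the BP neural network is trained to correct.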

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.1
    • /
    • pp.70-77
    • /
    • 2014
  • This paper proposes a novel localization algorithm based on ego-motion, using Lucas-Kanade optical flow and warped images obtained through fish-eye lenses mounted on robots. The omnidirectional image sensor is desirable for real-time view-based recognition because all information around the robot is obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether captured via a mirror reflection or assembled from multiple camera images, is essential because it is difficult to extract information from the raw image. The core of the proposed algorithm may be summarized as follows: First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-facing fish-eye lenses. Second, we extract motion vectors from the preprocessed images using Lucas-Kanade optical flow. Third, we estimate the robot position and angle with an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the localization algorithm by comparing the positions and angles estimated by the proposed algorithm with those measured by a Global Vision Localization System.
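The vanishing-point step can be illustrated with a small RANSAC estimator: each optical-flow vector defines a line (a point plus a direction), and the vanishing point is the intersection best supported by those lines. A self-contained NumPy sketch (an illustrative reconstruction, not the authors' code):

```python
import numpy as np

def ransac_vanishing_point(points, directions, iters=200, thresh=2.0, seed=0):
    """Estimate a vanishing point as the intersection best supported by
    the lines defined by optical-flow vectors (point + direction)."""
    rng = np.random.default_rng(seed)
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    # Line through point p with direction d has normal n = (-dy, dx)
    # and satisfies n . q = n . p for every point q on the line.
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)
    c = np.sum(n * points, axis=1)
    best_vp, best_support = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        A = np.stack([n[i], n[j]])
        if abs(np.linalg.det(A)) < 1e-9:
            continue  # nearly parallel pair, no stable intersection
        vp = np.linalg.solve(A, np.array([c[i], c[j]]))
        support = int(np.sum(np.abs(n @ vp - c) < thresh))  # inlier count
        if support > best_support:
            best_vp, best_support = vp, support
    return best_vp
```

The vector directions and this vanishing point then feed the ego-motion estimate of the robot's position and angle.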