• Title/Summary/Keyword: Camera lens distortion


Image Distortion Compensation for Improved Gait Recognition (보행 인식 시스템 성능 개선을 위한 영상 왜곡 보정 기법)

  • Jeon, Ji-Hye;Kim, Dae-Hee;Yang, Yoon-Gi;Paik, Joon-Ki;Lee, Chang-Su
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.4 / pp.97-107 / 2009
  • In image-based gait recognition systems, physical factors such as the camera angle and lens distortion, and environmental factors such as illumination, determine recognition performance. In this paper we present a robust gait recognition method that compensates for various types of image distortion. The proposed method is compared with an existing gait recognition algorithm, taking both physical and environmental distortion factors in the input image into account. More specifically, we first present an efficient compensation algorithm for image distortion based on the projective transform, and test its feasibility by comparing recognition performance with and without the compensation process. The proposed method yields universal gait data that are invariant to both distance and environment. The compensated data improved the gait recognition rate by about 41.5% for indoor images and about 55.5% for outdoor images. The proposed method can be used effectively in database (DB) construction and in searching for and tracking specific objects.
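
The projective-transform compensation this abstract describes amounts to applying a 3x3 homography to image points. A minimal sketch (the matrix values below are illustrative, not taken from the paper):

```python
def apply_homography(H, x, y):
    """Apply a 3x3 projective transform H (nested lists) to point (x, y)."""
    xh = H[0][0]*x + H[0][1]*y + H[0][2]
    yh = H[1][0]*x + H[1][1]*y + H[1][2]
    w  = H[2][0]*x + H[2][1]*y + H[2][2]
    return xh / w, yh / w  # homogeneous divide

# Illustrative perspective warp: a point lower in the image is pulled upward.
H = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.5, 1.0]]
```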

Active 3D Shape Acquisition on a Smartphone (스마트폰에서의 능동적 3차원 형상 취득 기법)

  • Won, Jae-Hyun;Yoo, Jin-Woo;Park, In-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.27-34 / 2011
  • In this paper, we propose an active 3D shape acquisition method based on photometric stereo, using the camera and flash of a smartphone. Two smartphones are used as master and slave: the slave projects illumination from different locations while the master captures the images and runs the photometric stereo algorithm to reconstruct the 3D shape. To reduce error, the smartphone's camera is calibrated to overcome the effects of lens distortion and the nonlinear camera sensor response. We apply the 5-point algorithm to estimate the pose between the smartphone cameras, and then estimate the lighting direction vectors to run the photometric stereo algorithm. Experimental results show that the proposed system enables a smartphone to be used as a low-cost, high-quality 3D camera.
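
The core of classical photometric stereo, as used here, is solving I = L·g per pixel, where L stacks the (known) light directions and g is the surface normal scaled by albedo. A minimal sketch for the three-light Lambertian case (the paper's exact pipeline, with pose and lighting estimation, is more involved):

```python
def solve3(A, b):
    """Solve the 3x3 linear system A x = b via Cramer's rule."""
    def det(M):
        return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
              - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
              + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
    D = det(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]  # replace column i with b
        xs.append(det(Ai) / D)
    return xs

def normal_from_photometric_stereo(L, I):
    """Recover (unit normal, albedo) from 3 intensities I under 3 known
    unit light directions L (rows), assuming a Lambertian surface."""
    g = solve3(L, I)                       # g = albedo * normal
    albedo = sum(v*v for v in g) ** 0.5
    return [v / albedo for v in g], albedo
```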

Beach Profile Estimation Using a Photogrammetry (사진측정법을 이용한 해빈단면의 추정)

  • Kim, Baeck-Oon;Park, Yong-Ahn;Oh, Im-Sang;Khim, Boo-Keun;Choi, Kyung-Sik
    • The Sea: Journal of the Korean Society of Oceanography / v.3 no.4 / pp.228-233 / 1998
  • This study presents a close-range photogrammetry technique applicable to beach profile estimation using a non-metric camera. Based on the analysis of oblique video images in which the video camera was installed on a horizontal plane with a fixed field of view, a new photograph-analysis equation was developed with the following considerations: (1) the camera is allowed to rotate about its optical axis, and (2) a simple error model is adopted to correct lens distortion and other systematic errors associated with the non-metric camera, which improves the accuracy of non-metric imagery. To test the modified technique, photographs of the beach were taken near Donghae City in February 1998. In addition, beach profiles were surveyed with a conventional dumpy level and surveying staff. The RMS error between the estimated and measured beach profiles is less than 10 cm in elevation.
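
Consideration (1), rotation about the optical axis, is an in-plane (swing) rotation of image coordinates by an angle kappa. A minimal sketch of that correction step (the paper's full analysis equation is not reproduced here):

```python
import math

def rotate_about_optical_axis(x, y, kappa):
    """Rotate image coordinates (x, y) by swing angle kappa (radians)
    about the optical axis, i.e. about the image origin."""
    c, s = math.cos(kappa), math.sin(kappa)
    return c*x - s*y, s*x + c*y
```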


Automatic Target Recognition for Camera Calibration (카메라 캘리브레이션을 위한 자동 타겟 인식)

  • Kim, Eui Myoung;Kwon, Sang Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.6 / pp.525-534 / 2018
  • Camera calibration is the process of determining parameters such as the focal length of a camera, the position of the principal point, and lens distortions. For this purpose, images of checkerboards have mainly been used. For automatic target recognition in checkerboard images, existing studies had limitations: the user had to understand the input parameters for recognizing the target, or the entire checkerboard had to appear in the image. In this study, a methodology for automatic target recognition is proposed in which the index of each target can be assigned automatically even if only part of the checkerboard is captured, by using rectangles containing eight blobs, four at the center and four at the outer portion of the checkerboard. In addition, no input parameters are needed. Three conditions are used to automatically extract the center point of a checkerboard target: the distortion of the black-and-white pattern, the frequency of edge changes, and the ratio of black to white pixels. The direction and numbering of the checkerboard targets are determined from the blobs. In experiments on two types of checkerboards, checkerboard targets in 36 images were recognized automatically within a minute.
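
One of the three extraction conditions, the black-to-white pixel ratio, can be sketched simply: at an X-junction of a checkerboard, a window around the true center contains roughly equal black and white pixels. The threshold and tolerance below are illustrative assumptions, not values from the paper:

```python
def is_checker_center(window, thresh=128, tol=0.2):
    """Check the black/white pixel-ratio condition for a candidate
    checkerboard target center. `window` is a 2D list of gray values."""
    flat = [p for row in window for p in row]
    black = sum(1 for p in flat if p < thresh)
    ratio = black / len(flat)
    return abs(ratio - 0.5) <= tol  # ~50% black at a corner junction
```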

Developing a first-person horror game using Unreal Engine and an action camera perspective (언리얼엔진과 액션 카메라 시점을 활용한 1인칭 공포 게임 개발)

  • Nam-Young Kim;Young-Min Joo;Won-Whoi Huh
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.75-81 / 2024
  • This paper focuses on developing a first-person 3D game that delivers extreme fear to players through realistic camera direction that exploits the characteristics of action cameras. As a new camera production technique, we introduce perspective distortion using a wide-angle lens and camera shake during movement to provide greater immersion than existing games. The theme of the game is a horror room escape. The player starts with a firearm, but to address the concern that firearms would make the game too easy, the player's use of firearms is constrained by burdens such as pursuing monsters and a limited supply of magazines. The significance of this paper is that we developed a new type of 3D game that maximizes the fear effect on players through realistic production.
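
The wide-angle perspective distortion mentioned here comes from widening the camera's field of view, which in the standard pinhole model shortens the effective focal length. A generic, engine-agnostic sketch of that relation (not code from the game):

```python
import math

def focal_from_hfov(width_px, hfov_deg):
    """Pinhole-camera focal length (in pixels) for a given image width
    and horizontal field of view; wider FOV -> shorter focal length ->
    stronger perspective distortion at the frame edges."""
    return (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
```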

Coordinates Transformation and Correction Techniques of the Distorted Omni-directional Image (왜곡된 전 방향 영상에서의 좌표 변환 및 보정)

  • Cha, Sun-Hee;Park, Young-Min;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.816-819 / 2005
  • This paper proposes a coordinate correction technique using a 3D parabolic coordinate transformation function and a BP (back-propagation) neural network in order to solve the spatial distortion problem caused by using a catadioptric camera. Although a catadioptric camera can obtain an omni-directional image covering all 360 degrees, the shape of the mirror itself distorts the image. Accordingly, to obtain ideal distance-coordinate information from the distorted image in 3D space, we use a coordinate transformation function based on the focus of the parabolic mirror and the coordinates projected from the input image onto the parabolic surface. The residual error of this process is corrected by the BP neural network algorithm.
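
The paper's exact transformation function is not given in the abstract; a common closed form for parabolic-mirror catadioptric cameras is the unified central catadioptric model with mirror parameter xi = 1, sketched here as a forward projection and its inverse lifting:

```python
def parabolic_project(X, Y, Z):
    """Project a 3D direction onto the image plane under the unified
    catadioptric model with xi = 1 (paraboloid mirror)."""
    rho = (X*X + Y*Y + Z*Z) ** 0.5
    return X / (Z + rho), Y / (Z + rho)

def parabolic_lift(u, v):
    """Inverse mapping: lift an image point back to a unit 3D direction."""
    d = u*u + v*v + 1.0
    return 2.0*u/d, 2.0*v/d, (1.0 - u*u - v*v)/d
```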


Development of PKNU3: A small-format, multi-spectral, aerial photographic system

  • Lee Eun-Khung;Choi Chul-Uong;Suh Yong-Cheol
    • Korean Journal of Remote Sensing / v.20 no.5 / pp.337-351 / 2004
  • Our laboratory developed the compact, multi-spectral, automatic aerial photographic system PKNU3 to allow greater flexibility in geological and environmental data collection. The PKNU3 system consists of a color-infrared spectral camera capable of simultaneous photography in the visible and near-infrared bands; a thermal infrared camera; two computers, each with 80 gigabytes of storage for images; an MPEG board that compresses and transfers data to the computers in real time; and the capability of using a helicopter platform. Before actual aerial photographic testing of PKNU3, we tested each sensor, analyzing the lens distortion, the sensitivity of the CCD in each band, and the thermal response of the thermal infrared sensor. As of September 2004, PKNU3 development had reached the second phase of testing. In two aerial photographic tests, R, G, B, and IR images were taken simultaneously, and images with an overlap rate of 70% were obtained using automatic recording at 1-s intervals. Further study is warranted to enhance the system with the addition of gyroscope and IMU units. We evaluated the PKNU3 system as a method of environmental remote sensing by comparing chlorophyll images derived from PKNU3 photographs. This evaluation was supported by an existing study that found a modest improvement in the linear fit between chlorophyll measurements and the RVI, NDVI, and SAVI images derived from photographs taken by a Duncantech MS3100, which has the same spectral configuration as the MS4000 used in the PKNU3 system.
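
The vegetation indices compared above (RVI, NDVI, SAVI) have standard band-ratio definitions over near-infrared and red reflectance; a minimal sketch (the SAVI soil-adjustment factor L = 0.5 is the conventional default, not necessarily the value used in the study):

```python
def rvi(nir, red):
    """Ratio Vegetation Index."""
    return nir / red

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil factor L."""
    return (1.0 + L) * (nir - red) / (nir + red + L)
```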

Wide-angle Optical Module Design for Mobile Phone Camera Using Recursive Numerical Computation Method (재귀적 수치 계산법을 적용한 모바일 폰용 광각 광학계 설계)

  • Kyu Haeng Lee;Sung Min Park;Kye Jin Jeon
    • Korean Journal of Optics and Photonics / v.35 no.4 / pp.164-169 / 2024
  • We applied recursive numerical computation to create a basic design of a camera optical module for mobile phones. To enhance resolution performance over a 38-degree field of view, we constructed the optical system with six aspherical lenses. To keep the module suitable for compact mobile phones, we limited the overall length of the design to 5 mm. Using the data obtained from the basic design, we carried out an optimization design with the Zemax design tool. The optimized optical system achieved a modulation transfer function of more than 19% for a 280 lines/mm pattern and image distortion within 1.0% for all wavelengths. This paper verifies the feasibility of using recursive numerical computation for the basic design of a compact mobile phone camera.

Multi-camera Calibration Method for Optical Motion Capture System (광학식 모션캡처를 위한 다중 카메라 보정 방법)

  • Shin, Ki-Young;Mun, Joung-H.
    • Journal of the Korea Society of Computer and Information / v.14 no.6 / pp.41-49 / 2009
  • In this paper, a multi-camera calibration algorithm for optical motion capture systems is proposed. The algorithm performs a first calibration using the DLT (direct linear transformation) method and a 3-axis calibration frame with 7 optical markers, followed by a second calibration performed by waving a wand of known length (a so-called wand dance) throughout the desired calibration volume. The first calibration yields not only the camera parameters but also the radial lens distortion parameters; these serve as the initial solution for the optimization in the second calibration, whose objective function minimizes the difference in distance between real markers and reconstructed markers. To verify the proposed algorithm, re-projection errors were calculated, the distances among markers in the 3-axis frame and on the wand were computed, and the algorithm was compared with a commercial motion capture system. In the 3D reconstruction of the 3-axis frame, the average error was 1.7042 mm for the commercial system and 0.8765 mm for the proposed algorithm, a reduction of 51.4 percent relative to the commercial system. For the distance between markers on the wand, the average error was 1.8897 mm for the commercial system and 2.0183 mm for the proposed algorithm.
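
Two ingredients of this pipeline can be sketched compactly: the radial lens distortion model estimated in the first calibration (a two-parameter polynomial form is a common choice, not necessarily the paper's exact model) and the wand-length term minimized in the second:

```python
def distort_radial(x, y, k1, k2):
    """Apply two-parameter polynomial radial distortion to
    normalized image coordinates (x, y)."""
    r2 = x*x + y*y
    f = 1.0 + k1*r2 + k2*r2*r2
    return x*f, y*f

def wand_length_error(p1, p2, known_len):
    """Objective term of the wand calibration: deviation of the
    reconstructed 3D marker distance from the known wand length."""
    d = sum((a - b)**2 for a, b in zip(p1, p2)) ** 0.5
    return abs(d - known_len)
```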

Vision-based Mobile Robot Localization and Mapping using fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee Jong-Shill;Min Hong-Ki;Hong Seung-Hong
    • Journal of the Institute of Convergence Signal Processing / v.5 no.4 / pp.256-262 / 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is attached to the robot. These features are used in map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and then labeling and convex-hull techniques are used to segment the ceiling and wall regions of the calibrated image. In the initial map-building process, features are calculated for each segmented region and stored in the map database. Features are continuously calculated for sequential input images and matched to the map; when some features are not matched, they are added to the map. This map matching and updating process continues until map building is finished. Localization is used both during map building and when searching for the location of the robot on the map: the features calculated at the robot's position are matched to the existing map to estimate the real position of the robot, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50㎡ region, the positioning accuracy is ±13 cm, and the error in the robot's heading angle is ±3 degrees.
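
The radial-distortion removal in the preprocessing step can be sketched with the equidistant fisheye model, a common assumption for fisheye lenses (the abstract does not state which model the authors used): a fisheye image point at radius r = f·theta is remapped to the perspective radius r = f·tan(theta).

```python
import math

def fisheye_to_pinhole(u, v, f):
    """Remap a point from an equidistant fisheye image (r = f*theta)
    to the equivalent pinhole/perspective image (r = f*tan(theta)).
    (u, v) are image coordinates relative to the principal point."""
    r_d = math.hypot(u, v)
    if r_d == 0.0:
        return 0.0, 0.0
    theta = r_d / f                 # incidence angle from the optical axis
    s = f * math.tan(theta) / r_d   # radial rescaling factor
    return u * s, v * s
```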
