• Title/Summary/Keyword: Fisheye Image

Search Result 60

A Hardware Design for Realtime Correction of a Barrel Distortion Using the Nearest Pixels on a Corrected Image (보정 이미지의 최 근접 좌표를 이용한 실시간 방사 왜곡 보정 하드웨어 설계)

  • Song, Namhun;Yi, Joonhwan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.12
    • /
    • pp.49-60
    • /
    • 2012
  • In this paper, we propose a hardware design for barrel distortion correction that uses the nearest coordinates in the corrected image. Because it uses the nearest distances in the corrected image rather than adjacent distances in the distorted image, picture quality is improved over the whole image area and the staircase artifact in the outer region is removed. However, implementing bilinear interpolation increases the number of required arithmetic operations. To address this, a look-up table (LUT) structure is proposed and the coordinate rotation digital computer (CORDIC) algorithm is applied. Synthesis results using Design Compiler show that the design implementing the entire interpolation process in hardware achieves higher throughput than the previous design, and that, for a rear-view camera, the design combining a LUT with hardware can be smaller than the fully hardware implementation.
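
The inverse-mapping scheme the abstract describes (for each pixel of the corrected image, find its non-integer source position in the distorted image and bilinearly interpolate the four nearest pixels) can be sketched in software. This is a minimal illustration using a generic polynomial radial model with illustrative coefficients k1, k2; it is not the paper's hardware design or its LUT/CORDIC optimizations:

```python
import numpy as np

def undistort_barrel(img, k1=0.1, k2=0.0):
    """Correct barrel distortion by inverse mapping: for every pixel of the
    corrected image, locate its (non-integer) source position in the
    distorted image and bilinearly interpolate the four nearest pixels.
    k1, k2 are illustrative radial-distortion coefficients."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    xn, yn = (xs - cx) / cx, (ys - cy) / cy              # normalized coords
    r2 = xn * xn + yn * yn
    scale = 1.0 + k1 * r2 + k2 * r2 * r2                 # polynomial radial model
    xd, yd = xn * scale * cx + cx, yn * scale * cy + cy  # source coordinates
    x0 = np.clip(np.floor(xd).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(yd).astype(int), 0, h - 2)
    fx, fy = np.clip(xd - x0, 0.0, 1.0), np.clip(yd - y0, 0.0, 1.0)
    return (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)
```

With k1 = k2 = 0 the mapping is the identity; a LUT such as the paper proposes would precompute (x0, y0, fx, fy) per pixel so the per-frame work reduces to the four multiply-accumulates.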

Improved Image Restoration Algorithm about Vehicle Camera for Corresponding of Harsh Conditions (가혹한 조건에 대응하기 위한 차량용 카메라의 개선된 영상복원 알고리즘)

  • Jang, Young-Min;Cho, Sang-Bock;Lee, Jong-Hwa
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.2
    • /
    • pp.114-123
    • /
    • 2014
  • A vehicle black box (Event Data Recorder, EDR) only captures the general road environment. In addition, a typical EDR has difficulty recording images under sudden illumination changes, and its lens introduces severe distortion, so it often fails to provide clues about the circumstances of an accident. To solve these problems, we first estimate the Normalized Luminance Descriptor (NLD) and Normalized Contrast Descriptor (NCD) and correct illumination changes using the Normalized Image Quality (NIQ) measure. Second, we correct lens distortion using the Field of View (FOV) model based on the design of the fisheye lens. Finally, we propose an integrated algorithm that applies the two corrections, gamma correction and lens correction, in parallel.
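
The FOV fisheye model mentioned above maps image radii with a single parameter ω (the lens field of view). A minimal sketch of the standard forward and inverse radial mappings of that model; ω is a per-lens calibration parameter, not a value from the paper:

```python
import math

def fov_distort(r_u, omega):
    """FOV fisheye model: distorted radius from undistorted radius."""
    return math.atan(2.0 * r_u * math.tan(omega / 2.0)) / omega

def fov_undistort(r_d, omega):
    """Inverse mapping: undistorted radius from distorted radius."""
    return math.tan(r_d * omega) / (2.0 * math.tan(omega / 2.0))
```

Lens correction then resamples the image by applying this radius mapping around the distortion center.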

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.5
    • /
    • pp.35-44
    • /
    • 2007
  • Since an omnidirectional camera system with a very large field of view can capture much information about the environment from only a few images, research on calibration and 3D reconstruction using omnidirectional images has been active. Most line segments of man-made objects are projected to contours under the omnidirectional camera model; therefore, corresponding contours across image sequences are useful for computing camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between epipolar planes and the back-projected vectors of each corresponding point. The final parameters are then computed by minimizing the distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that our algorithm achieves precise contour matching and camera motion estimation.
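
The first minimization step scores how far each back-projected ray lies from its epipolar plane. A sketch of that angular error for a single correspondence, assuming an essential matrix E and unit-sphere ray vectors x1, x2 (the names and this exact formulation are illustrative, not taken from the paper):

```python
import numpy as np

def epipolar_angular_error(E, x1, x2):
    """Angle between ray x2 and the epipolar plane whose normal is E @ x1.
    The error is zero exactly when the epipolar constraint x2^T E x1 = 0 holds."""
    n = E @ x1
    n = n / np.linalg.norm(n)        # epipolar-plane normal
    v = x2 / np.linalg.norm(x2)      # back-projected viewing ray
    # angle(plane, ray) = 90 degrees minus angle(normal, ray)
    return abs(np.pi / 2.0 - np.arccos(np.clip(n @ v, -1.0, 1.0)))
```

Coarse rotation and translation follow from summing this error over all corresponding contour points and minimizing.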

Using Omnidirectional Images for Semi-Automatically Generating IndoorGML Data

  • Claridades, Alexis Richard;Lee, Jiyeong;Blanco, Ariel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.5
    • /
    • pp.319-333
    • /
    • 2018
  • As human beings spend more time indoors, and with the growing complexity of indoor spaces, more focus is given to indoor spatial applications and services. 3D topological networks are used for various spatial applications that involve indoor navigation, such as emergency evacuation, indoor positioning, and visualization. Manually generating indoor network data is impractical and prone to errors, yet current automated methods need expensive sensors or datasets that are difficult and costly to obtain and process. In this research, a methodology for semi-automatically generating a 3D indoor topological model based on IndoorGML (Indoor Geographic Markup Language) is proposed. The concept of a Shooting Point is defined to accommodate the use of omnidirectional images in generating IndoorGML data. Omnidirectional images were captured at selected Shooting Points in the building using a fisheye camera lens and rotator, and indoor spaces were then identified using image processing implemented in Python. Relative positions of spaces obtained from CAD (Computer-Assisted Drawing) were used to generate 3D node-relation graphs representing adjacency, connectivity, and accessibility in the study area. Subspacing is performed to more accurately depict large indoor spaces and actual pedestrian movement. Since the images provide very realistic visualization, the topological relationships were used to link them to produce an indoor virtual tour.
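
The node-relation graph described above stores topological relations (adjacency, connectivity) between identified spaces and supports navigation queries. A minimal sketch with hypothetical rooms; the real graph is built from the CAD positions and the spaces detected in the omnidirectional images:

```python
from collections import deque

# Hypothetical rooms and doors, for illustration only.
rooms = ["R101", "R102", "Corridor"]
adjacency = {            # shared walls (topological adjacency)
    "R101": {"R102", "Corridor"},
    "R102": {"R101", "Corridor"},
    "Corridor": {"R101", "R102"},
}
connectivity = {         # doors: adjacency that also permits passage
    "R101": {"Corridor"},
    "R102": {"Corridor"},
    "Corridor": {"R101", "R102"},
}

def connected_path(start, goal, graph):
    """Breadth-first search over the node-relation graph (navigation query)."""
    seen, queue = {start}, deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in sorted(graph[path[-1]] - seen):
            seen.add(nxt)
            queue.append(path + [nxt])
    return None
```

IndoorGML's dual-space idea is visible here: rooms become nodes, and doors become the connectivity edges a route can traverse.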

Development of 360° Omnidirectional IP Camera with High Resolution of 12Million Pixels (1200만 화소의 고해상도 360° 전방위 IP 카메라 개발)

  • Lee, Hee-Yeol;Lee, Sun-Gu;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.21 no.3
    • /
    • pp.268-271
    • /
    • 2017
  • In this paper, we propose the development of a high-resolution 360° omnidirectional IP camera with 12 million pixels. The proposed camera consists of a lens unit with a 360° omnidirectional viewing angle and a 12-megapixel high-resolution IP camera unit. The lens unit adopts the isochronous lens design method and the catadioptric facet production method to obtain images without the peripheral distortion that is inevitably generated by a fisheye lens. The 12-megapixel IP camera unit consists of a CMOS sensor & ISP unit, a DSP unit, and an I/O unit; it converts the image input to the camera into a digital image, performs distortion correction, image correction, and image compression, and then transmits the result to the NVR (Network Video Recorder). To evaluate the performance of the proposed camera, the 12.3-million-pixel image efficiency, the 360° omnidirectional lens angle of view, and the electromagnetic certification standard were measured.

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.125-132
    • /
    • 2008
  • Omnidirectional camera systems with a wide viewing angle are widely used in surveillance and robotics. Most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have already been established among views. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translation and rotation, by using the epipolar constraint on the matched feature points. After choosing the interest points adjacent to two or more contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are then estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inversely projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.

A Study on Effective Stitching Technique of 360° Camera Image (360° 카메라 영상의 효율적인 스티칭 기법에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence
    • /
    • v.16 no.2
    • /
    • pp.335-341
    • /
    • 2018
  • This study examines effective stitching techniques for video recorded with a dual-lens 360° camera composed of two fisheye lenses. First, the study identified problems in the results of stitching with the camera's bundled program. It then compared those results with the output of the professional stitching programs Autopano Video Pro and Autopano Giga, looking for a stitching technique that is more efficient and closer to seamless. The problems of the bundled program turned out to be horizontal and vertical distortion, exposure and color mismatch, and uneven stitching lines. The horizontal and vertical problems could be solved with the Automatic Horizon and Verticals Tool of Autopano Video Pro and Autopano Giga, the exposure and color problems with Levels, Color, and Edit Color Anchors, and the stitching-line problem with the Mask function. It is hoped that, based on this study, near-seamless 360° VR video content can be produced through efficient stitching of video recorded with dual-lens 360° cameras.

Fast Light Source Estimation Technique for Effective Synthesis of Mixed Reality Scene (효과적인 혼합현실 장면 생성을 위한 고속의 광원 추정 기법)

  • Shin, Seungmi;Seo, Woong;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.89-99
    • /
    • 2016
  • One of the fundamental elements in developing mixed reality applications is to effectively analyze the environmental lighting and apply it to image synthesis. In particular, interactive applications must process dynamically varying light sources in real time and reflect them properly in the rendered results. Previous related works are often inappropriate for this because they are usually designed to synthesize photorealistic images, generating too many (often exponentially increasing) light sources or incurring too heavy a computational cost. In this paper, we present a fast light source estimation technique that searches on the fly for the primary light sources in a sequence of video images taken by a camera equipped with a fisheye lens. In contrast to previous methods, our technique can adjust the number of found light sources to approximately the size a user specifies. It can thus be used effectively in Phong-illumination-model-based direct illumination or in soft shadow generation through light sampling over area lights.
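
A deliberately simple sketch of the underlying idea of extracting a few primary light directions from a fisheye frame: average the luminance over a coarse grid, take the brightest cells, and convert each cell center to a 3D direction under an equidistant fisheye model with a 180° field of view. The grid size, fisheye model, and FOV here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def estimate_lights(luma, n_lights=4, cell=16):
    """Return up to n_lights (direction, intensity) pairs from a fisheye
    luminance image by picking the brightest coarse grid cells."""
    h, w = luma.shape
    gh, gw = h // cell, w // cell
    grid = luma[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell).mean(axis=(1, 3))
    order = np.argsort(grid.ravel())[::-1][:n_lights]
    lights = []
    for i in order:
        gy, gx = divmod(int(i), gw)
        py, px = gy * cell + cell / 2.0, gx * cell + cell / 2.0  # cell center
        dx, dy = px - w / 2.0, py - h / 2.0
        r = np.hypot(dx, dy)
        theta = (r / (min(h, w) / 2.0)) * (np.pi / 2.0)  # equidistant, 180° FOV
        phi = np.arctan2(dy, dx)
        direction = (np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta))
        lights.append((direction, float(grid.ravel()[i])))
    return lights
```

Exposing n_lights is what lets a caller bound the number of sources, in the spirit of the user-specified size the abstract mentions.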

Use of Unmanned Aerial Vehicle for Multi-temporal Monitoring of Soybean Vegetation Fraction

  • Yun, Hee Sup;Park, Soo Hyun;Kim, Hak-Jin;Lee, Wonsuk Daniel;Lee, Kyung Do;Hong, Suk Young;Jung, Gun Ho
    • Journal of Biosystems Engineering
    • /
    • v.41 no.2
    • /
    • pp.126-137
    • /
    • 2016
  • Purpose: The overall objective of this study was to evaluate the vegetation fraction of soybeans grown under different cropping conditions using an unmanned aerial vehicle (UAV) equipped with a red, green, and blue (RGB) camera. Methods: Test plots were prepared with different cropping treatments, i.e., soybean single-cropping with and without herbicide application, and soybean and barley cover cropping with and without herbicide application. The UAV flights were manually controlled using a remote flight controller on the ground, with 2.4 GHz radio frequency communication. For image pre-processing, the acquired images were corrected and georeferenced using a fisheye distortion removal function, and ground control points were collected using Google Maps. Tarpaulin panels of different colors were used to calibrate the multi-temporal images by converting the RGB digital number values into RGB reflectance, using a linear regression method. Excess Green (ExG) vegetation indices for each of the test plots were compared with the M-statistic method in order to quantitatively evaluate the greenness of soybean fields under the different cropping systems. Results: The reflectance calibration methods used in the study showed high coefficients of determination, ranging from 0.8 to 0.9, indicating the feasibility of a linear regression fitting method for monitoring multi-temporal RGB images of soybean fields. As expected, the ExG vegetation indices changed with the soybean growth stages, showing clear differences among the test plots with different cropping treatments in the early season of < 60 days after sowing (DAS). With the M-statistic method, the test plots under different treatments could be discriminated in the early season of < 41 DAS, showing values of M > 1.
Conclusion: Therefore, multi-temporal images obtained with a UAV and an RGB camera can be applied to quantify overall vegetation fraction and crop growth status, and this information can help determine proper treatments for the vegetation fraction.
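
The Excess Green (ExG) index used above is a simple function of the calibrated RGB reflectances. A minimal sketch using the common formulation on chromatic (sum-normalized) coordinates; the paper may normalize differently:

```python
def excess_green(r, g, b):
    """ExG = 2g - r - b on chromatic coordinates (r + g + b = 1);
    higher values indicate greener, more vegetated pixels."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2.0 * gn - rn - bn
```

Averaged over a plot, ExG gives the greenness score that the M-statistic then compares between treatments.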

The Relationship between Temperature Patterns and Urban Morfometri in the Jakarta City, Indonesia

  • Maru, Rosmini;Ahmad, Shaharuddin
    • Asian Journal of Atmospheric Environment
    • /
    • v.9 no.2
    • /
    • pp.128-136
    • /
    • 2015
  • The Sky View Factor (SVF) is one of the urban morphometry parameters that affect the Urban Heat Island (UHI). An SVF analysis was conducted in the city of Jakarta to investigate the relationship between urban temperature and urban morphometry. Jakarta is among the most populous cities in the world, with a surrounding area of 66,152 km² and a total population of around 23 million people, the sixth highest in the world today. SVF measurements were made by taking pictures at six stations with different morphological characteristics, namely (1) the narrow street at Apartment Cempaka Mas (JS ITC), (2) the wide road at Apartment Cempaka Mas (JL ITC), (3) in front of Kanisius College (DKK), (4) in front of the office of the Journalists of Indonesia (DKWI), (5) Utan Kayu (UK), and (6) Tambun (TB). The SVF value is obtained from the photographic image. Pictures were taken at each location using a Nikon D90 camera with a Nikon Fisheye Nikkor 10.5 mm 1:2.8 G ED lens and were further processed with the Global Mapper program. The SVF derived for the six stations varies from 0.21 to 0.78. Temperature was measured during daylight hours from 06:00 to 18:00 Western Indonesia Time (WIB). Measurements were performed on three kinds of days, namely working days (HK), regular holidays (HCB), and national holidays (HCN). The results show that the highest average temperature, 33.32°C, occurred at the UK station (SVF = 0.45) on regular holidays, while the lowest average temperature, 31.22°C, occurred at the JL ITC station (SVF = 0.42), also on regular holidays. The maximum temperature of 38.4°C occurred at the Utan Kayu station (SVF = 0.45) at 11:00 on regular holidays, and the minimum temperature of 24.5°C occurred at the Tambun station (SVF = 0.78) at 06:00 in the morning on regular and national holidays.
In general, the results show that areas with a large SVF have lower temperatures than areas with a smaller SVF. Although SVF is not the only factor that matters, this research indicates an increase of temperature in the city of Jakarta; serious mitigation efforts by the government or society are therefore necessary.
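
Deriving SVF from a fisheye photograph amounts to classifying sky versus obstruction pixels and measuring the sky fraction within the image circle. A deliberately simple sketch; it ignores the solid-angle weighting that a full SVF computation (e.g., in dedicated SVF tools) applies:

```python
import numpy as np

def sky_view_factor(sky_mask):
    """Approximate SVF from a binary fisheye image (True = sky pixel).
    Returns the sky fraction inside the inscribed image circle."""
    h, w = sky_mask.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (ys - cy) ** 2 + (xs - cx) ** 2 <= (min(h, w) / 2.0) ** 2
    return sky_mask[inside].mean()
```

A fully open site would score near 1.0 and a fully obstructed one near 0.0, bracketing the 0.21 to 0.78 range reported for the six stations.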