• Title/Summary/Keyword: FOV Model


Distortion Center Estimation using FOV Model and 2D Pattern (FOV 모델과 2D 패턴을 이용한 왜곡 중심 추정 기법)

  • Seo, Jeong-Goo; Kang, Euiseon
    • The Journal of the Korea Contents Association / v.13 no.8 / pp.11-19 / 2013
  • This paper presents a simple method for estimating the center of distortion and correcting the radial distortion of a fish-eye lens. When the image center does not coincide with the optical center of the lens, the FOV model loses accuracy because the distortion is corrected without an estimated distortion center. We propose a method that accurately estimates the distortion center using the FOV model and a 2D pattern captured with a wide-angle lens. Our method selects the distortion center that minimizes the error between straight pattern lines and their distorted curves under the FOV model. Experimental results on synthetic and real data are presented. (A minimal sketch of the underlying idea follows below.)
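
The FOV model referenced above maps a (normalized) distorted radius r_d to an undistorted radius r_u via r_u = tan(r_d * omega) / (2 * tan(omega / 2)), where omega is the single model parameter. The Python sketch below shows how a distortion center could be searched by minimizing a line-straightness error; the grid search, the scoring function, and the normalization scale are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def undistort_radius(r_d, omega):
        """FOV model: map a (normalized) distorted radius r_d to an undistorted radius."""
        return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))

    def undistort_points(pts, center, omega, scale):
        """Undistort 2D pixel points (N x 2) about a candidate distortion center.

        Radii are normalized by `scale` (e.g. half the image width) before the
        FOV model is applied, then mapped back to pixel units.
        """
        v = (pts - center) / scale
        r_d = np.linalg.norm(v, axis=1) + 1e-12
        r_u = undistort_radius(r_d, omega)
        return center + v * (r_u / r_d)[:, None] * scale

    def straightness_error(lines, center, omega, scale):
        """Sum over pattern lines of the deviation from collinearity after undistortion."""
        err = 0.0
        for pts in lines:
            q = undistort_points(pts, center, omega, scale)
            q = q - q.mean(axis=0)
            err += np.linalg.svd(q, compute_uv=False)[-1]  # smallest singular value
        return err

    def estimate_center(lines, omega, image_size, window=40, step=4):
        """Brute-force search (illustrative only) for the center minimizing the error."""
        h, w = image_size
        scale = w / 2.0
        best, best_c = np.inf, None
        for cy in range(h // 2 - window, h // 2 + window + 1, step):
            for cx in range(w // 2 - window, w // 2 + window + 1, step):
                e = straightness_error(lines, np.array([cx, cy], float), omega, scale)
                if e < best:
                    best, best_c = e, (cx, cy)
        return best_c, best

Here `lines` is a list of (N x 2) point arrays, each sampled along one imaged straight line of the 2D pattern; a correct center and omega make every set collinear after undistortion, which is what the smallest singular value measures.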

Detection Performance Analysis of the Telescope considering Pointing Angle Command Error (지향각 명령 오차를 고려한 망원경 탐지 성능 분석)

  • Lee, Hojin; Lee, Sangwook
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.237-243 / 2017
  • In this paper, the detection performance of an electro-optical telescope that observes and surveils space objects, including artificial satellites, is analyzed. For the Modeling & Simulation (M&S) based analysis, a satellite orbit model, a telescope model, and an atmospheric model are constructed, and a detection scenario observing a satellite is organized. Based on this scenario, the pointing accuracy is analyzed as a function of the Field of View (FOV), one of the key parameters of the telescope, while accounting for the pointing angle command error. Building on that result, the detection possibility as a function of the detector pixel count and the telescope FOV is analyzed, with detection decided by a Signal-to-Noise Ratio (SNR) criterion. The results show that pointing accuracy improves with a larger FOV, whereas the detection probability increases with a smaller FOV and a higher pixel count. Therefore, major specifications of the telescope, such as FOV and pixel count, should be determined by considering the results of the M&S-based analysis performed in this paper together with the operational circumstances. (A simple Monte Carlo sketch of this trade-off is given below.)
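
As a rough illustration of the trade-off described above, the probability that a target stays inside the FOV can be estimated by sampling a Gaussian pointing angle command error and comparing it with the half-FOV. The error magnitude and FOV values below are arbitrary assumptions for illustration, not figures from the paper.

    import numpy as np

    def in_fov_probability(fov_deg, pointing_sigma_deg, n_samples=100_000, seed=0):
        """Monte Carlo estimate of P(target inside FOV) under Gaussian pointing error.

        Pointing error is modeled as independent Gaussian errors in two angular
        axes; the target is 'in FOV' if the total angular offset is within FOV/2.
        """
        rng = np.random.default_rng(seed)
        err = rng.normal(0.0, pointing_sigma_deg, size=(n_samples, 2))
        offset = np.linalg.norm(err, axis=1)
        return np.mean(offset <= fov_deg / 2.0)

    # Example (assumed numbers): a larger FOV tolerates a given command error better.
    for fov in (0.5, 1.0, 2.0):
        print(fov, in_fov_probability(fov, pointing_sigma_deg=0.3))

The opposite effect on detection, that a smaller FOV concentrates the target signal on fewer, less background-limited pixels, is what the SNR-based part of the paper's analysis captures.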

Wide FOV Panorama Image Acquisition Method (광각 파노라마 영상획득 방법)

  • Kim, Soon-Cheol; Yi, Soo-Yeong
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.3 / pp.2117-2122 / 2015
  • A wide FOV (field of view) is required to capture more visual information in a single image. Wide-FOV imaging systems have many industrial applications, such as surveillance, security, tele-conferencing, and mobile robots. To obtain a wide-FOV panoramic image, this paper proposes an imaging system with a hyperbolic cylinder mirror. Because the horizontal FOV is generally more important than the vertical FOV, the mirror is designed to have a hyperbolic curve in its horizontal cross-section while acting as a planar mirror along the vertical axis. The imaging model of the proposed system is derived by ray tracing, and the hyperbolic cylinder mirror is implemented. The wide-FOV imaging performance is verified by experiments. The system is cost-effective and can acquire a wide panoramic image with a 210-degree horizontal FOV in real time without extra image processing. (A minimal ray-tracing sketch of the horizontal cross-section is given below.)
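
The horizontal cross-section of a hyperbolic mirror has the familiar focal property: a scene ray aimed at one focus reflects through the other, so placing the camera's pinhole at the other focus gives a single-viewpoint mapping between camera-ray angle and scene azimuth. The sketch below tabulates that mapping for an assumed hyperbola; the parameters a and b, the camera placement, and the parameter range are illustrative and not taken from the paper.

    import numpy as np

    def azimuth_mapping(a, b, u_max=2.0, n=721):
        """Camera-ray angle vs. scene azimuth for a hyperbolic mirror cross-section.

        The reflective branch is x^2/a^2 - y^2/b^2 = 1 (x > 0), foci at (+-c, 0)
        with c = sqrt(a^2 + b^2). The camera pinhole is assumed at the outer focus
        (-c, 0); by the focal property, a scene ray aimed at the inner focus
        (+c, 0) reflects through the pinhole.
        """
        c = np.hypot(a, b)
        u = np.linspace(-u_max, u_max, n)                # parameter along the branch
        px, py = a * np.cosh(u), b * np.sinh(u)          # mirror surface points
        cam_angle = np.arctan2(py, px + c)               # ray angle seen at the pinhole
        scene_azimuth = np.arctan2(py, px - c)           # direction the scene ray came from
        return np.degrees(cam_angle), np.degrees(scene_azimuth)

    # Example with assumed mirror parameters a = 1.0, b = 1.5: a modest camera FOV
    # is stretched into a much wider panoramic azimuth coverage.
    cam, scene = azimuth_mapping(1.0, 1.5)
    cam_fov = np.ptp(cam)
    panorama_fov = np.degrees(np.ptp(np.unwrap(np.radians(scene))))
    print(f"camera uses ~{cam_fov:.0f} deg, panorama covers ~{panorama_fov:.0f} deg")

Because the mirror is a hyperbolic cylinder, this mapping applies only to the horizontal axis; the vertical axis behaves like an ordinary planar-mirror reflection, as the paper states.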

Comparative study on quality of scanned images from varying materials and surface conditions of standardized model for dental scanner evaluation (치과용 스캐너 평가를 위한 국제표준모델의 재료 및 표면 상태에 따른 스캔 영상 결과물 비교 연구)

  • Park, Ju-Hee; Seol, Jeong-Hwan; Lee, Jun Jae; Lee, Seung-Pyo; Lim, Young-Jun
    • Journal of Dental Rehabilitation and Applied Science / v.34 no.2 / pp.104-115 / 2018
  • Purpose: The purpose of this study is to evaluate the image acquisition ability of intraoral scanners by analyzing the completeness of images scanned from a standardized model, and to identify problems with the model. Materials and Methods: Cast models and 3D-printed models were prepared according to the international standards set by ISO 12836 and ANSI/ADA no. 132, and were then scanned with a model scanner and two intraoral scanners (TRIOS3 and CS3500). The image acquisition performance of the scanners was classified into three grades, and the procedure was repeated with varying surface conditions of the models. Results: The model scanner produced the most accurate images for all models. The CS3500 showed good image reproducibility for angled structures, while the TRIOS3 showed good reproducibility for rounded structures. Regarding model materials, the improved plaster model reproduced scan images best regardless of the scanner used. For the 3D-printed model, a powdered surface resulted in higher image quality. Conclusion: When structures beyond the FOV (field of view) are scanned in standardized models (following ISO 12836 and ANSI/ADA no. 132), the lack of reference points for distinguishing different faces confuses the scanning and matching process, resulting in inaccurate images. These results imply the need for a new standard model that is not confined to simple pattern repetition and symmetric structure.

Star Detectability Analysis of Daytime Star Sensor (주간 활용 별센서의 별 감지가능성 분석)

  • Nah, Ja-Kyoung; Yi, Yu; Kim, Yong-Ha
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.33 no.9 / pp.89-96 / 2005
  • This paper discusses daytime atmospheric conditions and the possibility of daytime star detection, with the goal of using a star sensor for daytime navigation. To estimate daytime atmospheric data, we use the standard atmospheric model LOWTRAN 7, from which the atmospheric transmittance and the background sky radiance are calculated. Assuming a star sensor with an optical filter to reduce the background radiation, different separation angles between the star sensor and the Sun are set up to express the effect of solar radiation. With respect to the field of view (FOV) of the star sensor, the variation of the sky background radiation and the density of detectable stars are analyzed. In addition, the integration time required to achieve a given signal-to-noise ratio and the number of background-induced electrons in the charge-coupled device (CCD), which limit the daylight application of the star sensor, are calculated. (A simple SNR/integration-time sketch is given below.)
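
For a point source against a bright sky, a common CCD signal-to-noise expression is SNR = S*t / sqrt(S*t + n_pix*(B*t + D*t + R^2)), with S the star signal rate, B the per-pixel sky background rate, D the dark current, R the read noise, and n_pix the pixels in the photometric aperture; solving this for the integration time t that reaches a required SNR is the kind of calculation the paper describes. The sketch below uses this generic formula with purely illustrative rates, not the paper's values.

    import numpy as np

    def snr(t, s, b, dark, read, n_pix):
        """CCD SNR for a point source: signal over shot, background, dark and read noise."""
        signal = s * t
        noise = np.sqrt(s * t + n_pix * (b * t + dark * t + read ** 2))
        return signal / noise

    def integration_time(snr_req, s, b, dark, read, n_pix):
        """Smallest t with snr(t) >= snr_req, from the quadratic obtained by squaring."""
        # snr_req^2 * (s*t + n_pix*(b + dark)*t + n_pix*read^2) = s^2 * t^2
        a2 = s ** 2
        a1 = -snr_req ** 2 * (s + n_pix * (b + dark))
        a0 = -snr_req ** 2 * n_pix * read ** 2
        return (-a1 + np.sqrt(a1 ** 2 - 4 * a2 * a0)) / (2 * a2)

    # Illustrative (assumed) rates in electrons/s: faint star against a bright daytime sky.
    t = integration_time(snr_req=5.0, s=2.0e4, b=5.0e5, dark=10.0, read=15.0, n_pix=9)
    print(f"required integration time ~{t * 1e3:.1f} ms, "
          f"SNR check: {snr(t, 2.0e4, 5.0e5, 10.0, 15.0, 9):.2f}")

The background rate B is where the narrow optical filter and the Sun separation angle enter: both reduce B and therefore shorten the required integration time.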

A Surveillance System Combining Model-based Multiple Person Tracking and Non-overlapping Cameras (모델기반 다중 사람추적과 다수의 비겹침 카메라를 결합한 감시시스템)

  • Lee, Youn-Mi; Lee, Kyoung-Mi
    • Journal of KIISE: Computing Practices and Letters / v.12 no.4 / pp.241-253 / 2006
  • In modern societies, monitoring systems are required to automatically detect and track people across several cameras scattered over a wide area. Combining multiple cameras with non-overlapping views and a tracking technique, we propose a method that automatically tracks target persons in one camera and transfers the tracking information to the other networked cameras through a server, so that the target persons are tracked continuously across cameras. We use a person model to detect and re-identify the corresponding person and to transfer that person's tracking information. The movement of a tracked person is defined with respect to the FOV lines of the networked cameras, and each tracked person is assigned one of six statuses. The proposed system was tested in several indoor scenarios, achieving an average tracking rate of 91.2% and an average status rate of 96%. (A minimal hand-off sketch is given below.)
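
A camera hand-off of this kind can be organized around a per-person appearance model and a status that changes when the person crosses a camera's FOV line. The abstract does not list the paper's six statuses, so the enumeration, the matching score, and the hand-off logic below are purely hypothetical illustrations of the idea, not the authors' design.

    from dataclasses import dataclass
    from enum import Enum, auto
    import numpy as np

    class Status(Enum):
        """Hypothetical track statuses; the paper's own six statuses are not given in the abstract."""
        ENTERING = auto()
        VISIBLE = auto()
        LEAVING = auto()
        OUT_OF_VIEW = auto()
        HANDED_OFF = auto()
        LOST = auto()

    @dataclass
    class Track:
        person_id: int
        appearance: np.ndarray            # e.g. a color histogram used as the person model
        status: Status = Status.VISIBLE

    def appearance_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Simple histogram distance used to re-identify a person in another camera."""
        return float(np.sum(np.abs(a / a.sum() - b / b.sum())))

    def hand_off(track: Track, candidates: list, threshold: float = 0.3):
        """Match an exiting track against new detections in a non-overlapping camera."""
        track.status = Status.HANDED_OFF
        best = min(candidates,
                   key=lambda c: appearance_distance(track.appearance, c.appearance),
                   default=None)
        if best is not None and appearance_distance(track.appearance, best.appearance) < threshold:
            best.person_id = track.person_id   # identity carried over between cameras
            best.status = Status.VISIBLE
            return best
        track.status = Status.LOST
        return None

In the paper's setup, the server plays the role of the hand-off step: it receives the tracking information from the camera the person is leaving and forwards it to the cameras the person may enter next.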

Measurement of Construction Material Quantity through Analyzing Images Acquired by Drone And Data Augmentation (드론 영상 분석과 자료 증가 방법을 통한 건설 자재 수량 측정)

  • Moon, Ji-Hwan; Song, Nu-Lee; Choi, Jae-Gab; Park, Jin-Ho; Kim, Gye-Young
    • KIPS Transactions on Software and Data Engineering / v.9 no.1 / pp.33-38 / 2020
  • This paper proposes a technique for counting construction materials by analyzing images acquired by a drone. The proposed technique uses the drone log, which contains drone and camera information, an R-CNN that predicts the construction material type and the material pile area, and photogrammetry to count the construction materials. Existing work suffers from large errors in detecting construction materials and predicting the pile area because of a lack of training data. To reduce these errors and improve prediction stability, this paper augments the training data, but uses only rotated images for augmentation to prevent overfitting of the training model. For the quantity calculation, we use the drone log, which contains drone and camera information such as yaw and FOV, and the R-CNN model to find the pile of building materials in the image and predict its type. All of this information is then combined and applied to the formula proposed in the paper to calculate the actual quantity of the material pile. The superiority of the proposed method is demonstrated through experiments. (A minimal ground-footprint sketch is given below.)
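
The photogrammetric step above boils down to converting pixel areas into ground areas from the drone's altitude and camera FOV. The paper's own formula is not reproduced in the abstract, so the flat-ground, nadir-looking ground-sample-distance calculation below is a generic stand-in; the unit footprint and all numbers are assumptions.

    import math

    def ground_sample_distance(altitude_m, fov_deg, image_width_px):
        """Meters per pixel for a nadir-looking camera over flat ground (assumed geometry)."""
        footprint_width_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
        return footprint_width_m / image_width_px

    def pile_ground_area(pixel_area, altitude_m, fov_deg, image_width_px):
        """Convert a detected pile's pixel area into square meters."""
        gsd = ground_sample_distance(altitude_m, fov_deg, image_width_px)
        return pixel_area * gsd ** 2

    def estimate_count(pixel_area, unit_area_m2, altitude_m, fov_deg, image_width_px):
        """Rough material count: pile ground area divided by one unit's footprint (illustrative)."""
        area = pile_ground_area(pixel_area, altitude_m, fov_deg, image_width_px)
        return area / unit_area_m2

    # Example with assumed numbers: 50 m altitude, 84 deg horizontal FOV, 4000 px wide image.
    print(estimate_count(pixel_area=120_000, unit_area_m2=0.5,
                         altitude_m=50.0, fov_deg=84.0, image_width_px=4000))

In the paper the geometry is taken from the drone log (including yaw) rather than assumed to be nadir-looking, and the pixel area comes from the R-CNN detection of the pile.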

Image Data Loss Minimized Geometric Correction for Asymmetric Distortion Fish-eye Lens (비대칭 왜곡 어안렌즈를 위한 영상 손실 최소화 왜곡 보정 기법)

  • Cho, Young-Ju; Kim, Sung-Hee; Park, Ji-Young; Son, Jin-Woo; Lee, Joong-Ryoul; Kim, Myoung-Hee
    • Journal of the Korea Society for Simulation / v.19 no.1 / pp.23-31 / 2010
  • Because a fish-eye lens provides a very wide angle, with a field of view over 180 degrees, using a minimum number of cameras, many vehicles are adopting such camera systems. To use the camera not only as a viewing system but also as a sensor, camera calibration must be performed first, and a geometric correction of the radial distortion is needed to provide images for driver assistance. In this paper, we introduce a geometric correction technique that minimizes image data loss for a vehicle fish-eye lens with a field of view over 180° and an asymmetric distortion. Geometric correction is a process in which a camera model with a distortion model is established and a corrected view is generated after the camera parameters are obtained through calibration. First, the FOV model is used as the distortion model to represent the asymmetric distortion. Because the horizontal view of the vehicle fish-eye lens is asymmetrically wide for the driver, we unify the axis ratio and estimate the parameters with a non-linear optimization algorithm. Finally, we create the corrected view by backward mapping and provide a function to optimize the ratio between the horizontal and vertical axes. This minimizes image data loss and improves visual perception when the input image is undistorted through a perspective projection. (A minimal backward-mapping sketch follows below.)
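
Backward mapping builds the corrected image by walking over the output pixels, projecting each through the perspective model, distorting that ray with the FOV model, and sampling the source fish-eye image there. The sketch below follows that generic recipe; the centered distortion center, the axis-ratio handling, and the parameter values are assumptions, not the paper's calibrated asymmetric model.

    import numpy as np

    def distort_radius(r_u, omega):
        """FOV model forward distortion: undistorted radius -> distorted radius."""
        return np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega

    def undistort_backward(src, omega, f_px, axis_ratio=1.0):
        """Generate a perspective-corrected view of a fish-eye image by backward mapping.

        src        : H x W (grayscale) fish-eye image, distortion center assumed at the middle
        omega      : FOV model parameter (radians)
        f_px       : focal length of the virtual perspective camera, in pixels
        axis_ratio : extra horizontal scale mimicking the asymmetric-axis handling (assumed)
        """
        h, w = src.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        ys, xs = np.mgrid[0:h, 0:w]
        # Normalized perspective ray for each output pixel.
        x_n = (xs - cx) / f_px * axis_ratio
        y_n = (ys - cy) / f_px
        r_u = np.hypot(x_n, y_n) + 1e-12
        r_d = distort_radius(r_u, omega)
        # Source coordinates in the fish-eye image (nearest-neighbour sampling).
        sx = np.clip(np.round(cx + x_n * (r_d / r_u) * f_px), 0, w - 1).astype(int)
        sy = np.clip(np.round(cy + y_n * (r_d / r_u) * f_px), 0, h - 1).astype(int)
        return src[sy, sx]

Tuning `axis_ratio` (or, more generally, separate horizontal and vertical output scales) is where the data-loss trade-off shows up: the wider the corrected horizontal range, the less of the output frame is wasted on the heavily stretched periphery.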

Improved Image Restoration Algorithm about Vehicle Camera for Corresponding of Harsh Conditions (가혹한 조건에 대응하기 위한 차량용 카메라의 개선된 영상복원 알고리즘)

  • Jang, Young-Min; Cho, Sang-Bock; Lee, Jong-Hwa
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.114-123 / 2014
  • A vehicle black box (event data recorder, EDR) only captures the general surroundings of the road, and a typical EDR has difficulty capturing usable images under sudden illumination changes; the lens also introduces severe distortion. As a result, a typical EDR may fail to provide clues about the circumstances of an accident. To solve this problem, we first estimate the Normalized Luminance Descriptor (NLD) and Normalized Contrast Descriptor (NCD) values and correct the illumination change using the Normalized Image Quality (NIQ) measure. Second, we correct the lens distortion using the Field of View (FOV) model, based on the design method of the fish-eye lens. Finally, we propose an integrated algorithm that applies the two corrections, gamma correction and lens correction, in parallel. (A minimal gamma-correction sketch is given below.)
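
The illumination-side correction amounts to choosing a gamma from a luminance statistic and applying it as a lookup table. Since the NLD/NCD/NIQ formulas are not given in the abstract, the mean-luminance-driven gamma choice below is only an assumed placeholder for that step, not the paper's measure.

    import numpy as np

    def gamma_from_luminance(img):
        """Pick a gamma from the mean normalized luminance (assumed heuristic, not NIQ).

        Dark frames (mean < 0.5) get gamma < 1 to brighten; bright frames get gamma > 1.
        """
        mean_lum = img.astype(np.float64).mean() / 255.0
        return float(np.clip(np.log(0.5) / np.log(mean_lum + 1e-6), 0.3, 3.0))

    def gamma_correct(img, gamma):
        """Apply gamma correction with an 8-bit lookup table."""
        lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
        return lut[img]

    # Example on a synthetic dark frame (assumed data): correction brightens it toward mid-gray.
    frame = (np.random.rand(480, 640) * 80).astype(np.uint8)
    corrected = gamma_correct(frame, gamma_from_luminance(frame))
    print(frame.mean(), corrected.mean())

The lens-correction half of the pipeline is the FOV-model undistortion sketched for the previous entry; the paper's point is that the two corrections run in parallel on the same frame.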

Methodology for Generating UAV's Effective Flight Area that Satisfies the Required Spatial Resolution (요구 공간해상도를 만족하는 무인기의 유효 비행 영역 생성 방법)

  • Ji Won Woo; Yang Gon Kim; Jung Woo An; Sang Yun Park; Gyeong Rae Nam
    • Journal of Advanced Navigation Technology / v.28 no.4 / pp.400-407 / 2024
  • The role of unmanned aerial vehicles (UAVs) in modern warfare is increasingly significant, making their capacity for autonomous missions essential. Autonomous target detection and identification based on captured images is therefore crucial, yet the effectiveness of AI models depends on image sharpness. This study describes how to determine the field of view (FOV) of the camera and the flight position of the UAV so that a required spatial resolution is satisfied. First, the size of the acquisition area is derived from the relative position of the UAV and the FOV of the camera. From this, the paper calculates the area that can satisfy the required spatial resolution, and then the relative UAV position and camera FOV that achieve it. Furthermore, the paper proposes a method for calculating the effective range of UAV positions that satisfies the required spatial resolution, centered on the coordinate to be photographed. This range is then organized into a tabular format that can be used for mission planning. (A minimal spatial-resolution sketch is given below.)
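
The core relationship is that the ground footprint of the camera grows with slant range and FOV while the pixel count is fixed, so the spatial resolution (ground size of one pixel) is roughly 2 * R * tan(FOV/2) / N. Inverting this gives the maximum slant range, and hence an effective flight region, for a required resolution. The sketch below uses this simplified flat-ground relation with assumed numbers; it is not the paper's tabulated result.

    import math

    def spatial_resolution(slant_range_m, fov_deg, n_pixels):
        """Approximate ground size of one pixel (m) across the FOV at a given slant range."""
        return 2.0 * slant_range_m * math.tan(math.radians(fov_deg) / 2.0) / n_pixels

    def max_slant_range(required_res_m, fov_deg, n_pixels):
        """Largest slant range that still satisfies the required spatial resolution."""
        return required_res_m * n_pixels / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

    def effective_altitude_band(required_res_m, fov_deg, n_pixels, elevation_deg, min_alt_m):
        """Altitude band (above the target) usable for the required resolution.

        Assumes the UAV looks at the target at a fixed elevation angle; the lower bound
        is an operational minimum altitude and the upper bound comes from the resolution.
        """
        r_max = max_slant_range(required_res_m, fov_deg, n_pixels)
        max_alt_m = r_max * math.sin(math.radians(elevation_deg))
        return (min_alt_m, max_alt_m) if max_alt_m >= min_alt_m else None

    # Example (assumed numbers): 0.1 m resolution, 20 deg FOV, 1920 px, 45 deg look-down angle.
    print(effective_altitude_band(0.1, 20.0, 1920, elevation_deg=45.0, min_alt_m=100.0))

Sweeping this kind of calculation over candidate FOVs and look angles, centered on the target coordinate, yields the tabular effective flight area the paper proposes for mission planning.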