• Title/Summary/Keyword: Coordinate Transformation Calculation (좌표변환계산)

Flesh Tone Balance Algorithm for AWB of Facial Pictures (인물 사진을 위한 자동 톤 균형 알고리즘)

  • Bae, Tae-Wuk;Lee, Sung-Hak;Lee, Jung-Wook;Sohng, Kyu-Ik
    • The Journal of Korean Institute of Communications and Information Sciences, v.34 no.11C, pp.1040-1048, 2009
  • This paper proposes an automatic flesh tone balance algorithm for pictures of people. General white balance algorithms focus on neutral regions, but other objects can serve as the reference if their spectral reflectance is known; in this paper, the human face is the reference for white balance. For the experiment, the transfer characteristic of the image sensor is first analyzed, and the camera output RGB for the average face chromaticity under standard illumination is calculated. Second, the output ratio of the image is adjusted so that the RGB ratio of a face region photographed under unknown illumination matches the precomputed reference ratio. The input tristimulus values XYZ are calculated from the camera output RGB by the camera transfer matrix, and then transformed to the standard color space (sRGB) using the sRGB transfer matrix. For display, the RGB data are encoded as eight-bit values after gamma correction. The algorithm is applied to an average face color, namely the light skin color of the Macbeth color chart, and to the average of various face colors that were actually measured.
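
As a rough illustration of the color pipeline described above (white balance gains from a face reference, camera RGB to XYZ via the camera transfer matrix, XYZ to sRGB, gamma correction, 8-bit encoding), a minimal NumPy sketch follows. The camera matrix, face chromaticities, and gains are placeholder values, not the sensor characterization from the paper.

```python
import numpy as np

# Hypothetical camera transfer matrix (camera RGB -> XYZ); in the paper this
# comes from characterizing the image sensor, so these values are placeholders.
M_CAM = np.array([[0.49, 0.31, 0.20],
                  [0.18, 0.81, 0.01],
                  [0.00, 0.01, 0.99]])

# Standard XYZ -> linear sRGB matrix (IEC 61966-2-1, D65).
M_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                   [-0.9689,  1.8758,  0.0415],
                   [ 0.0557, -0.2040,  1.0570]])

def srgb_gamma(linear):
    """Piecewise sRGB gamma correction."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)

def camera_rgb_to_srgb8(rgb, gains):
    """Apply per-channel face-based white balance gains, convert
    camera RGB -> XYZ -> sRGB, then gamma-encode to 8 bits."""
    balanced = rgb * gains      # scale so the face region matches the reference ratio
    xyz = balanced @ M_CAM.T    # camera transfer matrix
    srgb_lin = xyz @ M_SRGB.T   # standard color space
    return np.round(srgb_gamma(srgb_lin) * 255).astype(np.uint8)

# Gains that map the observed face RGB ratio to the reference ratio
face_obs = np.array([0.55, 0.40, 0.33])  # hypothetical face patch, unknown light
face_ref = np.array([0.50, 0.42, 0.36])  # hypothetical reference, standard light
gains = face_ref / face_obs
print(camera_rgb_to_srgb8(np.array([0.6, 0.5, 0.4]), gains))
```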

Direct Measurement of Distortion of Optical System of Lithography (노광 광학계의 왜곡수차 측정에 관한 연구)

  • Joo, WonDon;Lee, JiHoon;Chae, SungMin;Kim, HyeJung;Jung, Mee Suk
    • Korean Journal of Optics and Photonics, v.23 no.3, pp.97-102, 2012
  • In general, one method used to measure distortion is to use the full image of a regular pattern. Because of its low accuracy, however, this method is mainly used for optical systems such as cameras. In order to measure distortion with an accuracy better than 1 um, one can measure the exact position of a mask image, which requires a high-accuracy stage with a laser encoder. In this paper, we investigate high-accuracy distortion measurement with a simple manual stage. The main idea is to split and measure the mask image with overlapping areas using a CCD or CMOS sensor, and then obtain the exact position of the mask image by integrating the adjacent split images. We use the Canny edge detection method to obtain the position information of the mask image, and we present a process to calculate distortion exactly using coordinate transformations and a least squares method.
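
As a sketch of the final fitting step, the snippet below estimates a single radial distortion coefficient by least squares from ideal versus measured mask-point positions. The one-coefficient radial model, the point data, and the noise level are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Ideal (designed) mask-point positions (mm). In the paper the measured
# positions come from Canny edge detection on the stitched split images;
# here they are synthesized with a known distortion plus noise.
ideal = np.array([[10.0,  0.0], [0.0, 10.0], [-10.0,  0.0],
                  [ 7.1,  7.1], [-7.1, -7.1], [ 0.0, -10.0]])
rng = np.random.default_rng(0)
k_true = -2.0e-5
r2 = np.sum(ideal**2, axis=1, keepdims=True)       # squared field height
measured = ideal * (1.0 + k_true * r2) + rng.normal(0, 1e-4, ideal.shape)

# Radial model: measured = ideal * (1 + k * r^2).
# The residual (measured - ideal) = k * (ideal * r^2) is linear in k,
# so least squares gives k in closed form.
A = (ideal * r2).ravel()
b = (measured - ideal).ravel()
k_est = np.dot(A, b) / np.dot(A, A)

# Distortion in percent at the largest field height.
print(f"estimated k = {k_est:.3e}, distortion = {100 * k_est * r2.max():.4f}%")
```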

A Driver's Condition Warning System using Eye Aspect Ratio (눈 영상비를 이용한 운전자 상태 경고 시스템)

  • Shin, Moon-Chang;Lee, Won-Young
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.15 no.2, pp.349-356, 2020
  • This paper introduces the implementation of a driver's condition warning system using the eye aspect ratio (EAR) to prevent car accidents. The proposed system consists of a camera to detect the eyes, a Raspberry Pi that processes the eye information from the camera, and a buzzer and vibrator to warn the driver. To detect and recognize the driver's eyes, the histogram of oriented gradients and deep-learning-based face landmark estimation are used. The system first calculates the driver's eye aspect ratio from six coordinates around the eye, and then obtains the EAR values for the opened and closed eyes. These two values are used to calculate the threshold that determines the eye state. Because the threshold is adapted to the driver's own eye aspect ratio, the system can use the optimal threshold for determining the driver's condition. In addition, the system synthesizes an input image from gray-scale and LAB model images to operate in low lighting conditions.
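
The eye-aspect-ratio computation from six landmarks has a standard closed form, sketched below together with an adaptive threshold; the midpoint rule used for the threshold is an assumption, since the abstract only states that the threshold is derived from the open-eye and closed-eye values.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks p1..p6 ordered around the eye.
    EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

# Calibration: EAR measured once with eyes open and once with eyes closed.
ear_open, ear_closed = 0.32, 0.12  # hypothetical per-driver values

# Adaptive threshold between the two states (midpoint rule is an assumption;
# the paper derives the threshold from both values rather than a fixed constant).
threshold = (ear_open + ear_closed) / 2.0

def eyes_closed(eye_landmarks):
    """True when the current EAR falls below the driver-specific threshold."""
    return eye_aspect_ratio(eye_landmarks) < threshold
```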

Extracting the Point of Impact from Simulated Shooting Target based on Image Processing (영상처리 기반 모의 사격 표적지 탄착점 추출)

  • Lee, Tae-Guk;Lim, Chang-Gyoon;Kim, Kang-Chul;Kim, Young-Min
    • Journal of Internet Computing and Services, v.11 no.1, pp.117-128, 2010
  • There have been many studies of simulated shooting training systems that replace real military and police shooting training. In this paper, we propose a method that extracts the point of impact from a simulated shooting target based on image processing, instead of a sensor-based approach. The point of impact is extracted by analyzing the image captured by the camera on the muzzle of the gun, and the final shooting result is calculated by mapping the target to the coordinates of the point of impact. The recognition system is divided into recognizing the projection zone, extracting the point of impact within the projection zone, and calculating the shooting result from the point of impact. We find the vertices of the projection zone after converting the captured image to a binary image, and extract the point of impact within it. We present the extraction process step by step and provide experiments to validate the results. The experiments show that the exact vertices of the projection area and the point of impact are found, and the conversion to the final result is displayed on the interface.
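
A minimal OpenCV sketch of the projection-zone steps (binarization, vertex extraction, mapping an impact point onto target coordinates via a homography) is given below; the threshold value, the four-corner assumption, and the vertex ordering are illustrative.

```python
import cv2
import numpy as np

def find_projection_zone(frame_gray):
    """Binarize the camera frame and return the vertices of the largest
    contour, assumed to be the (roughly quadrilateral) projection zone."""
    _, binary = cv2.threshold(frame_gray, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    zone = max(contours, key=cv2.contourArea)
    approx = cv2.approxPolyDP(zone, 0.02 * cv2.arcLength(zone, True), True)
    return approx.reshape(-1, 2).astype(np.float32)  # expected shape (4, 2)

def impact_to_target(vertices, impact_xy, target_size=(1000, 1000)):
    """Map an impact point from camera coordinates onto target coordinates
    via the homography defined by the zone's four vertices. The vertices
    must be ordered consistently with dst (here: TL, TR, BR, BL)."""
    w, h = target_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(vertices, dst)
    pt = cv2.perspectiveTransform(np.float32([[impact_xy]]), H)
    return pt[0, 0]
```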

Camera and LiDAR Sensor Fusion for Improving Object Detection (카메라와 라이다의 객체 검출 성능 향상을 위한 Sensor Fusion)

  • Lee, Jongseo;Kim, Mangyu;Kim, Hakil
    • Journal of Broadcast Engineering, v.24 no.4, pp.580-591, 2019
  • This paper focuses on improving object detection performance using a camera and LiDAR on autonomous vehicle platforms by fusing the objects detected by the individual sensors through a late fusion approach. For object detection with the camera sensor, the YOLOv3 model was employed as a one-stage detector, and the distance of the detected objects was estimated from the perspective projection matrix. Object detection with the LiDAR was based on the K-means clustering method. Camera-LiDAR calibration was carried out with PnP-RANSAC in order to calculate the rotation and translation matrices between the two sensors. For sensor fusion, the intersection over union (IoU) on the image plane and the distance and angle in world coordinates were estimated, and these three attributes (IoU, distance, and angle) were fused using logistic regression. The performance evaluation in the sensor fusion scenario showed an effective 5% improvement in object detection performance compared to the use of a single sensor.
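
A compact sketch of the late-fusion step follows: an image-plane IoU plus distance and angle differences fed into a logistic regression. The training rows are synthetic and the feature definitions are assumptions consistent with the abstract, not the paper's trained model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Training data: one row per camera/LiDAR detection pair with features
# (image-plane IoU, |distance difference| in m, |angle difference| in deg)
# and a label for whether the pair is the same object. Values are synthetic.
X = np.array([[0.80, 0.3,  1.0], [0.70, 0.5,  2.0], [0.10, 4.0, 15.0],
              [0.05, 6.0, 20.0], [0.90, 0.2,  0.5], [0.20, 3.0, 10.0]])
y = np.array([1, 1, 0, 0, 1, 0])
clf = LogisticRegression().fit(X, y)

# Fuse a new camera/LiDAR pair: probability that both sensors see one object.
pair = np.array([[iou((0, 0, 10, 10), (1, 1, 11, 11)), 0.4, 1.5]])
print(clf.predict_proba(pair)[0, 1])
```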

The Cross-Sectional Characteristic and Spring-Neap Variation of Residual Current and Net Volume Transport at the Yeomha Channel (경기만 염하수로에서의 잔차류 및 수송량의 대조-소조 변동과 단면 특성)

  • Lee, Dong Hwan;Yoon, Byung Il;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers, v.29 no.5, pp.217-227, 2017
  • The objective of this study is to estimate the net volume transport and the residual flow, which vary in space and time, in the southern part of the Yeomha channel, Gyeonggi Bay. Cross-sectional observations were conducted at the mid-part (Line 2) and the southern end (Line 1) of the Yeomha channel for 13 hours during neap and spring tides, respectively. The Lagrangian flux is calculated as the sum of the Eulerian flux and the Stokes drift, and the residual flow is calculated using a least squares method. It is necessary to unify the spatial extent of the observed cross-section and the averaging time over the tidal cycle. In order to unify a cross-sectional area with such large vertical tidal variation, the data were converted into a sigma coordinate system, horizontally and vertically, for every hour. The converted sigma coordinate system shows an estimated error of 3~5% compared with the z-level coordinate system, which indicates that it poses no problem for analyzing the data. As a result, the cross-sectional residual flow shows a southward pattern at Line 2 during both spring and neap tides, with spatial fluctuation: it flows northward along the main channel axis and southward at both ends of the waterway. It was confirmed that the residual flow characteristics at Line 2 were changed by the net pressure due to the sea level difference. The analysis of the net volume transport showed southward transport of $576m^3s^{-1}$ and $67m^3s^{-1}$ at Line 2 during spring and neap tide, respectively. In contrast, at the control Line 1 the transport was northward, at $359m^3s^{-1}$ and $248m^3s^{-1}$. Based on the difference between the two observation lines, the net volume transport out of the intertidal zone between Yeongjong Island and Ganghwa Island is estimated to be about $935m^3s^{-1}$ at spring tide and about $315m^3s^{-1}$ at neap tide. In other words, the difference in pressure gradient and Stokes drift between spring and neap tide is the main cause of the variation in residual current and net volume transport.
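
The z-level to sigma-level conversion that unifies the cross-section can be sketched as below: each hourly velocity profile is re-interpolated onto fractional-depth levels so that profiles observed at different tidal stages share one vertical grid. The profile values are illustrative, not the observed data.

```python
import numpy as np

def to_sigma_levels(z_below_surface, v, water_depth, n_sigma=10):
    """Interpolate a velocity profile from z-levels onto sigma levels.
    z_below_surface : measurement depths below the instantaneous surface (m)
    v               : velocities at those depths (m/s)
    water_depth     : instantaneous water-column thickness, i.e.
                      bottom depth plus tidal elevation (m)
    sigma runs from 0 (surface) to 1 (bottom), so profiles taken at
    different tidal stages become comparable level by level."""
    sigma = np.linspace(0.0, 1.0, n_sigma)
    return sigma, np.interp(sigma * water_depth, z_below_surface, v)

# One hourly profile (illustrative): the tide has raised the column to 12.5 m.
z = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
v = np.array([0.60, 0.55, 0.48, 0.40, 0.28, 0.10])
sigma, v_sigma = to_sigma_levels(z, v, water_depth=12.5)
print(np.c_[sigma, v_sigma])
```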

Improvement of 2-pass DInSAR-based DEM Generation Method from TanDEM-X bistatic SAR Images (TanDEM-X bistatic SAR 영상의 2-pass 위성영상레이더 차분간섭기법 기반 수치표고모델 생성 방법 개선)

  • Chae, Sung-Ho
    • Korean Journal of Remote Sensing, v.36 no.5_1, pp.847-860, 2020
  • The 2-pass DInSAR (Differential Interferometric SAR) processing steps for DEM generation consist of co-registration of the SAR image pair, interferogram generation, phase unwrapping, calculation of DEM errors, and geocoding. The procedure is complicated, and the accuracy of data processing at each step affects the quality of the finally generated DEM. In this study, we developed an improved method for enhancing the performance of 2-pass DInSAR-based DEM generation from TanDEM-X bistatic SAR images. The developed method can significantly reduce both the error in the unwrapped phase image and the error that may occur during the geocoding step. The performance of the developed algorithm was analyzed by comparing the vertical accuracy (root mean square error, RMSE) of the existing method and the newly proposed method, using ground control points (GCPs) generated from a GPS survey. The vertical accuracy of the DInSAR-based DEM generated without correction of the unwrapped phase error and geocoding error is 39.617 m, whereas the vertical accuracy of the DEM generated with the proposed method is 2.346 m, confirming that DEM accuracy is improved by the proposed correction. Through the proposed 2-pass DInSAR-based DEM generation method, the SRTM DEM error observed by DInSAR was compensated relative to the SRTM 30 m DEM (vertical accuracy 5.567 m) used as a reference. This made it possible to generate a DEM with about 5 times better spatial resolution and about 2.4 times better vertical accuracy. In addition, the DEM generated with the proposed method was matched in spatial resolution with the SRTM 30 m DEM and the TanDEM-X 90 m DEM, and the vertical accuracies were compared; the results confirmed improvements of about 1.7 and 1.6 times, respectively, showing that more accurate DEM generation is possible with the proposed method. If the method derived in this study is used to continuously update DEMs for regions with frequent morphological changes, it will be possible to update them effectively in a short time and at low cost.
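
The DEM-error step rests on the standard phase-to-height relation, sketched below. The bistatic-versus-repeat-pass factor and all geometry values are stated assumptions for illustration, not constants taken from the paper.

```python
import numpy as np

def phase_to_height_error(phi_unw, wavelength, slant_range, incidence_deg,
                          b_perp, bistatic=True):
    """Convert unwrapped differential phase to a DEM height correction using
    the standard phase-to-height relation
        dh = (lambda * R * sin(theta) / (k * pi * B_perp)) * phi_unw
    with k = 2 assumed for single-pass bistatic pairs (one-way path
    difference, as in TanDEM-X) and k = 4 for repeat-pass monostatic pairs.
    All geometry values used below are placeholders, not the paper's."""
    k = 2.0 if bistatic else 4.0
    theta = np.deg2rad(incidence_deg)
    return (wavelength * slant_range * np.sin(theta)
            / (k * np.pi * b_perp)) * phi_unw

# Example: X-band (3.1 cm wavelength), TanDEM-X-like viewing geometry.
dh = phase_to_height_error(phi_unw=1.5, wavelength=0.031,
                           slant_range=600e3, incidence_deg=35.0,
                           b_perp=200.0)
print(f"height correction: {dh:.2f} m")
```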

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society, v.26 no.3, pp.123-131, 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles acts as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first stores light color and the second stores light direction information. Each frame goes through two steps. The first step updates the particle information required for 3D texture initialization and rendering using a compute shader. The particle position is converted to the sampling coordinates of the 3D texture; based on these coordinates, the first 3D texture accumulates the color sum of the particle lights affecting the corresponding voxel, and the second accumulates the sum of the direction vectors from the voxel to the particle lights. The second step operates in the general rendering pipeline. From the world position of the polygon to be rendered, the exact sampling coordinates of the 3D texture updated in the first step are calculated. Since the sampling coordinates correspond 1:1 to the size of the 3D texture and the size of the game world, the world coordinates of the pixel are used as the sampling coordinates. Lighting is then carried out based on the sampled color and the light direction vector. The 3D texture corresponds 1:1 to the actual game world and assumes a minimum unit of 1 m, but in areas smaller than 1 m, staircase artifacts occur due to the resolution limit; interpolation and supersampling are performed during texture sampling to reduce these artifacts. Measurements of the time taken to render a frame showed 146 ms for the forward lighting pipeline and 46 ms for the deferred lighting pipeline with 262,144 particles, and 214 ms for the forward pipeline and 104 ms for the deferred pipeline with 1,024,766 particle lights.
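
A CPU-side NumPy illustration of the first (compute-shader) step is sketched below: particle world positions are converted to 3D-texture sampling coordinates, and per-voxel color sums and direction sums are accumulated. The 1:1 world-to-voxel mapping follows the abstract; updating only the containing voxel (rather than each light's full range of influence) and the grid size are simplifications.

```python
import numpy as np

GRID = 64  # 3D texture resolution; world maps 1:1 at 1 m per voxel (assumed)

def update_light_textures(positions, colors):
    """positions: (N, 3) particle world positions in [0, GRID) metres
    colors   : (N, 3) particle light colors.
    Returns the color-sum texture and the direction-sum texture."""
    color_tex = np.zeros((GRID, GRID, GRID, 3), dtype=np.float32)
    dir_tex = np.zeros((GRID, GRID, GRID, 3), dtype=np.float32)

    voxel = positions.astype(int)   # world position -> sampling coordinate (1:1)
    centers = voxel + 0.5           # voxel centres in world space
    dirs = positions - centers      # direction from voxel towards each light
    norms = np.linalg.norm(dirs, axis=1, keepdims=True)
    dirs = np.divide(dirs, norms, out=np.zeros_like(dirs), where=norms > 0)

    # Accumulate sums per voxel; np.add.at handles repeated voxel indices.
    idx = (voxel[:, 0], voxel[:, 1], voxel[:, 2])
    np.add.at(color_tex, idx, colors)
    np.add.at(dir_tex, idx, dirs)
    return color_tex, dir_tex

rng = np.random.default_rng(1)
pos = rng.uniform(0, GRID, size=(100_000, 3)).astype(np.float32)
col = rng.uniform(0, 1, size=(100_000, 3)).astype(np.float32)
color_tex, dir_tex = update_light_textures(pos, col)
```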

Dimensional Quality Assessment for Assembly Part of Prefabricated Steel Structures Using a Stereo Vision Sensor (스테레오 비전 센서 기반 프리팹 강구조물 조립부 형상 품질 평가)

  • Jonghyeok Kim;Haemin Jeon
    • Journal of the Computational Structural Engineering Institute of Korea, v.37 no.3, pp.173-178, 2024
  • This study presents a technique for assessing the dimensional quality of assembly parts in prefabricated steel structures (PSS) using a stereo vision sensor. The stereo vision system captures images and point cloud data of the assembly area, and image processing algorithms such as fuzzy-based edge detection and Hough transform-based circular bolt hole detection are applied to identify bolt hole locations. The 3D center positions of the bolt holes are determined by correlating the 3D real-world position information from the depth images with the extracted bolt hole positions. Principal component analysis (PCA) is then employed to calculate coordinate axes for precise measurement of the distances between bolt holes, even when the orientations of the sensor and the structure differ. The bolt holes are sorted based on their 2D positions, and the distances between the sorted bolt holes are calculated to assess the dimensional quality of the assembly part. Comparison with the actual drawing data confirms the measurement accuracy, with an absolute error of 1 mm and a relative error within 4% based on the median criterion.
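
The detection and measurement chain might look roughly like the OpenCV/NumPy sketch below: Hough-transform circle detection for bolt holes, PCA for a plate-aligned coordinate frame, and distances between sorted holes. All parameter values are illustrative.

```python
import cv2
import numpy as np

def detect_bolt_holes(gray):
    """Detect circular bolt holes with a Hough transform.
    Radius bounds and accumulator parameters are illustrative."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=30)
    return circles[0, :, :2] if circles is not None else np.empty((0, 2))

def pca_axes(points_3d):
    """PCA on the 3D bolt-hole centres: the principal axes give a
    plate-aligned coordinate frame, so hole spacing can be measured even
    when the sensor and the structure are oriented differently."""
    mean = points_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(points_3d - mean, full_matrices=False)
    return mean, vt  # origin and row-wise principal axes

def hole_distances(points_3d):
    """Project holes onto the PCA plane, sort by the in-plane coordinates,
    and return distances between consecutive sorted holes."""
    mean, axes = pca_axes(points_3d)
    uv = (points_3d - mean) @ axes[:2].T   # 2D coordinates in the plate plane
    order = np.lexsort((uv[:, 1], uv[:, 0]))
    sorted_pts = points_3d[order]
    return np.linalg.norm(np.diff(sorted_pts, axis=0), axis=1)
```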

Automated Image Matching for Satellite Images with Different GSDs through Improved Feature Matching and Robust Estimation (특징점 매칭 개선 및 강인추정을 통한 이종해상도 위성영상 자동영상정합)

  • Ban, Seunghwan;Kim, Taejung
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1257-1271, 2022
  • Recently, many Earth observation optical satellites have been developed as demand for them has increased, so rapid preprocessing of satellite data has become one of the most important problems for the active utilization of satellite images. Satellite image matching is a technique by which two images are transformed and represented in one specific coordinate system; it is used for aligning different bands or correcting the relative position error between two satellite images. In this paper, we propose an automatic image matching method for satellite images with different ground sampling distances (GSDs). Our method is based on improved feature matching and robust estimation of the transformation between satellite images. The proposed method consists of five processes: calculation of the overlapping area, improved feature detection, feature matching, robust estimation of the transformation, and image resampling. For feature detection, we extract the overlapping areas and resample them to equalize their GSDs. For feature matching, we use Oriented FAST and Rotated BRIEF (ORB) to improve matching performance. We performed image registration experiments with KOMPSAT-3A and RapidEye images and verified the performance of the proposed method both qualitatively and quantitatively. The reprojection errors of the image matching were in the range of 1.277 to 1.608 pixels with respect to the GSD of the RapidEye images. Finally, we confirmed the feasibility of matching satellite images with heterogeneous GSDs through the proposed method.
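
A minimal OpenCV sketch of the matching chain (ORB features, cross-checked matching, RANSAC-robust estimation) follows; parameter values are illustrative, and the affine model is one plausible choice for the transformation the paper estimates robustly.

```python
import cv2
import numpy as np

def match_satellite_images(img_ref, img_tgt):
    """Sketch of the matching chain: ORB features, cross-checked brute-force
    matching, then RANSAC-robust estimation of an affine transformation.
    Inputs are assumed to be overlapping areas already resampled to a
    common GSD, as described in the abstract."""
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(img_ref, None)
    kp2, des2 = orb.detectAndCompute(img_tgt, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robust estimation: RANSAC rejects outlier correspondences.
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return A, int(inliers.sum())

# Usage: A maps reference-image pixels into the target image; resampling
# onto the target grid is then cv2.warpAffine(img_ref, A, (width, height)).
```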