• Title/Summary/Keyword: camera coordinate transform

Search results: 31

A Study on Iris Recognition by Iris Feature Extraction from Polar Coordinate Circular Iris Region (극 좌표계 원형 홍채영상에서의 특징 검출에 의한 홍채인식 연구)

  • Jeong, Dae-Sik;Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.3 / pp.48-60 / 2007
  • In previous research on iris feature extraction, the original iris image is transformed into a rectangular one by stretching and interpolation, which distorts the iris patterns and consequently reduces recognition accuracy. We therefore propose a method that extracts iris features in polar coordinates, without distorting the iris patterns. Our proposed method has three strengths compared with previous research. First, we extract iris features directly from the polar coordinate circular iris image. Although this requires slightly more processing time, there is no degradation of recognition accuracy, and we compare the recognition performance of the polar coordinate representation with the rectangular type using Hamming distance, cosine distance, and Euclidean distance. Second, the center of the pupil generally differs from that of the iris due to camera angle, head position, and the user's gaze direction. We therefore propose an iris feature detection method based on the polar coordinate circular iris region that uses the pupil and iris positions and radii simultaneously. Third, we address the overlapped points that arise from the polar coordinate circular method: each overlapped point would otherwise be extracted from the same position of the iris region. To overcome this problem, we modify the Gabor filter's size and frequency on the first track in order to account for the low-frequency iris patterns caused by overlapped points. Experimental results showed an EER of 0.29% and d' of 5.9 with the conventional rectangular image, and an EER of 0.16% and d' of 6.4 with the proposed method.
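The Hamming distance comparison mentioned above is typically computed over circularly shifted iris codes to tolerate eye rotation between captures. A minimal sketch on synthetic 1-D codes (hypothetical; the paper's matcher operates on 2-D codes produced by Gabor filtering):

```python
import numpy as np

def hamming_distance(code_a, code_b, max_shift=8):
    """Minimum fractional Hamming distance over circular bit shifts.

    Shifting the code along the angular axis compensates for eye
    rotation between the enrollment and verification captures.
    """
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(code_b, s)
        best = min(best, float(np.mean(code_a != shifted)))
    return best

# Two codes that differ only by a circular shift should match exactly.
rng = np.random.default_rng(0)
code = rng.integers(0, 2, 512)
rotated = np.roll(code, 3)
print(hamming_distance(code, rotated))  # 0.0 for a pure shift
```

An independent impostor code would score near 0.5, so a threshold between the two separates genuine and impostor comparisons.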

Development of a Data Reduction algorithm for Optical Wide Field Patrol

  • Park, Sun-Youp;Keum, Kang-Hoon;Lee, Seong-Whan;Jin, Ho;Park, Yung-Sik;Yim, Hong-Suh;Jo, Jung Hyun;Moon, Hong-Kyu;Bae, Young-Ho;Choi, Jin;Choi, Young-Jun;Park, Jang-Hyun;Lee, Jung-Ho
    • Journal of Astronomy and Space Sciences / v.30 no.3 / pp.193-206 / 2013
  • The detector subsystem of the Optical Wide-field Patrol (OWL) network efficiently acquires the position and time information of moving objects such as artificial satellites through its chopper system, which consists of 4 blades in front of the CCD camera. Using this system, it is possible to get more position data with the same exposure time by changing the streaks of the moving objects into many pieces with the fast rotating blades during sidereal tracking. At the same time, the time data from the rotating chopper can be acquired by the time tagger connected to the photo diode. To analyze the orbits of the targets detected in the image data of such a system, a sequential procedure of determining the positions of separated streak lines was developed that involved calculating the World Coordinate System (WCS) solution to transform the positions into equatorial coordinate systems, and finally combining the time log records from the time tagger with the transformed position data. We introduce this procedure and the preliminary results of the application of this procedure to the test observation images.
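The WCS step described above maps streak pixel positions to equatorial coordinates. A minimal linear sketch with hypothetical header values (a real solution, like the OWL pipeline's, adds a tangent-plane projection and distortion terms):

```python
import numpy as np

# Minimal linear WCS sketch (hypothetical values): pixel -> equatorial.
# Only the reference point and CD matrix of a FITS-style WCS are used.
CRPIX = np.array([512.0, 512.0])   # reference pixel
CRVAL = np.array([150.0, 30.0])    # RA, Dec at the reference (deg)
CD = np.array([[-2.8e-4, 0.0],     # deg per pixel (plate scale)
               [0.0, 2.8e-4]])

def pix_to_world(xy):
    """Apply the linear WCS: world = CRVAL + CD @ (pixel - CRPIX)."""
    return CRVAL + CD @ (np.asarray(xy) - CRPIX)

ra, dec = pix_to_world([600.0, 400.0])
```

Each separated streak endpoint would be pushed through such a mapping before being combined with the chopper's time-tagger log.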

A Comparison of System Performances Between Rectangular and Polar Exponential Grid Imaging System (POLAR EXPONENTIAL GRID와 장방형격자 영상시스템의 영상분해도 및 영상처리능력 비교)

  • Jae Kwon Eem
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.2 / pp.69-79 / 1994
  • The conventional machine vision system, which has a uniform rectangular grid, requires a tremendous amount of computation for processing and analyzing an image, especially for 2-D image transformations such as scaling and rotation and for the 3-D recovery problem typical of robot application environments. In this study, an imaging system with nonuniformly distributed image sensors simulating the human visual system, referred to as the Polar Exponential Grid (PEG), is compared with the existing conventional uniform rectangular grid system in terms of image resolution and computational complexity. By mimicking the geometric structure of the PEG sensor cell, we obtained PEG-like images using computer simulation. With the images obtained from the simulation, the image resolution of the two systems is compared, and some basic image processing tasks such as image scaling and rotation are implemented on the PEG sensor system to examine its performance. Furthermore, the Fourier transform of a PEG image is described and implemented from an image analysis point of view. Also, the range and heading-angle measurement errors usually encountered in 3-D coordinate recovery with a stereo camera system are calculated for the PEG sensor system and compared with those obtained from the uniform rectangular grid system. In fact, the PEG imaging system not only reduces the computational requirements but also has a scale and rotational invariance property in the Fourier spectrum. Hence the PEG system provides a more suitable image coordinate system for image scaling, rotation, and recognition problems. The range and heading-angle measurement errors with the PEG system are less than those of the uniform rectangular grid system over the practical measurement range.

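The scale and rotation invariance noted above follows directly from the log-polar mapping: in (ln r, θ) coordinates, scaling and rotating the scene become pure shifts. A small sketch of that property:

```python
import math

# In a polar-exponential (log-polar) grid a point is indexed by
# (u, v) = (ln r, theta).  Scaling the scene by s shifts u by ln s;
# rotating by phi shifts v by phi.  This is why the PEG system gains
# scale/rotation invariance in the Fourier spectrum.
def to_log_polar(x, y):
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)

u1, v1 = to_log_polar(3.0, 4.0)   # original point, r = 5
u2, v2 = to_log_polar(6.0, 8.0)   # same point scaled by 2, r = 10
# u2 - u1 == ln 2 while v is unchanged: scaling is a shift along u.
```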

The Road Traffic Sign Recognition and Automatic Positioning for Road Facility Management (도로시설물 관리를 위한 교통안전표지 인식 및 자동위치 취득 방법 연구)

  • Lee, Jun Seok;Yun, Duk Geun
    • International Journal of Highway Engineering / v.15 no.1 / pp.155-161 / 2013
  • PURPOSES: This study develops road traffic sign recognition and automatic positioning for road facility management. METHODS: We installed a GPS, IMU, DMI, camera, and laser sensor on a van and surveyed the vehicle position, forward-view images, and point clouds of traffic signs. To acquire traffic sign positions automatically, traffic sign recognition software was developed that logs the traffic sign type and approximate position, and this study suggests a methodology to transform the laser point cloud into the map coordinate system with a 3D axis rotation algorithm. RESULTS: The results show that the traffic sign recognition ratio is 92.98% on a clear day and 80.58% on a cloudy day. To obtain exact traffic sign positions, this study examined the point differences against road surveying results. The RMSE is 0.227 m and the average error is 1.51 m, which reflects the GPS positioning error. Including these errors, the traffic sign position can be registered within 1.51 m. CONCLUSIONS: As a result of this study, we can automatically survey the traffic sign type and position data, and analyze road safety and speed limit consistency, which can be used in a traffic sign DB.
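The 3D axis rotation that maps laser points into the map frame can be sketched as follows (hypothetical pose values and axis convention; the paper does not specify its rotation order):

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll); angles in radians.

    One common convention for a 3D axis rotation; the authors' actual
    axis order is an assumption here.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Transform sensor-frame laser points into the map frame using the
# vehicle pose from GPS/IMU (all numbers are made up).
points = np.array([[10.0, 0.0, 2.0]])         # laser points, sensor frame
vehicle_pos = np.array([100.0, 200.0, 50.0])  # map position from GPS
R = rotation_matrix(0.0, 0.0, np.pi / 2)      # heading 90 degrees
map_points = points @ R.T + vehicle_pos
```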

Measurement and Algorithm Calculation of Maxillary Positioning Change by Use of an Optoelectronic Tracking System Marker in Orthognathic Surgery (악교정수술에서 광전자 포인트 마커를 이용한 상악골 위치 변화의 계측 및 계산 방법 연구)

  • Park, Jong-Woong;Kim, Soung-Min;Eo, Mi-Young;Park, Jung-Min;Myoung, Hoon;Lee, Jong-Ho;Kim, Myung-Jin
    • Maxillofacial Plastic and Reconstructive Surgery / v.33 no.3 / pp.233-240 / 2011
  • Purpose: To apply a computer-assisted navigation system to orthognathic surgery, a simple and efficient measurement algorithm based on affine transformation was designed, and a method of improving accuracy and reducing errors in orthognathic surgery by use of an optical tracking camera was studied. Methods: A total of 5 points on one surgical splint were measured and tracked by the Polaris Vicra® (Northern Digital Inc., Ontario, Canada) optical tracking system in two cases. In the first case, the transformation matrix was applied to the pre- and postoperative situations; in the second, an affine transformation was applied only to the postoperative situation. In each situation, the predicted measurement value was mapped to the final measurement value via the affine transformation algorithm, and the expected coordinates calculated from the model were compared with those of the patient in the operating room. Results: The mean measurement error was 1.027±0.587 using the affine transformation in the pre- and postoperative situations, and the average value for the postoperative-only situation was 0.928±0.549. The farther a coordinate region was from the reference coordinates that constitute the transformation matrices, the larger the measurement error calculated from the affine transformation algorithm. Conclusion: Most of the error arose mainly from the measuring process and a lack of reproducibility; the affine transformation computed from postoperative measurements with the optical tracking system, between model surgery and patient surgery, can be selected so as to minimize this error. To reduce coordinate calculation errors, a minimum number of transformation matrices must be used, and the reference points that determine the affine transformation must be close to the area where coordinates are measured and calculated, as well as scattered.
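The affine transformation between predicted and measured coordinates can be estimated by least squares from point correspondences such as the 5 splint markers. A minimal sketch on synthetic data (not the authors' implementation):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points to dst.

    Solves dst = src @ A.T + t for a 3x3 matrix A and translation t;
    at least four non-coplanar correspondences are needed in 3D.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coords
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = params[:3].T, params[3]
    return A, t

# Synthetic check: recover a known rotation + offset from 5 markers.
rng = np.random.default_rng(1)
src = rng.normal(size=(5, 3))
A_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
dst = src @ A_true.T + t_true
A, t = fit_affine(src, dst)
```

On noisy measurements, lstsq returns the transform that minimizes the squared registration error, which is the sense in which the paper's transform "can be selected as minimizing the difference error."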

A Study on Multi-modal Near-IR Face and Iris Recognition on Mobile Phones (휴대폰 환경에서의 근적외선 얼굴 및 홍채 다중 인식 연구)

  • Park, Kang-Ryoung;Han, Song-Yi;Kang, Byung-Jun;Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.2 / pp.1-9 / 2008
  • As the security requirements of mobile phones have increased, there has been extensive research using a single biometric feature (e.g., an iris, a fingerprint, or a face image) for authentication. Due to the limitations of uni-modal biometrics, we propose a method that combines face and iris images in order to improve accuracy in mobile environments. This paper presents four advantages and contributions over previous research. First, in order to capture both face and iris images quickly and simultaneously, we use the built-in conventional megapixel camera of a mobile phone, revised to capture NIR (near-infrared) face and iris images. Second, in order to increase the authentication accuracy of face and iris, we propose a score-level fusion method based on an SVM (Support Vector Machine). Third, to reduce the classification complexity of the SVM and the intra-class variation of the face and iris data, we normalize the input face and iris data, respectively. For the face, an NIR illuminator and an NIR-passing filter on the camera are used to reduce the illumination variance caused by environmental visible lighting, and the consequently saturated facial region caused by the NIR illuminator is normalized by a low-cost logarithmic algorithm suited to mobile phones. For the iris, an image transform into polar coordinates and iris code shifting are used to obtain robust identification accuracy irrespective of the image capturing conditions. Fourth, to increase the processing speed on the mobile phone, we use integer-based face and iris authentication algorithms. Experiments were performed with face and iris images captured by the megapixel camera of a mobile phone. They showed that the authentication accuracy using the SVM was better than that of uni-modal (face or iris), SUM, MAX, MIN, and weighted SUM rules.
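The SVM score-level fusion can be sketched as follows, using synthetic face/iris match scores (the score distributions and kernel settings here are assumptions, not the paper's):

```python
import numpy as np
from sklearn.svm import SVC

# Score-level fusion sketch: each sample is a pair
# (face_score, iris_score); an SVM learns the genuine/impostor
# boundary instead of applying a fixed SUM/MAX/MIN rule.
rng = np.random.default_rng(0)
genuine = rng.normal([0.80, 0.85], 0.08, size=(200, 2))
impostor = rng.normal([0.45, 0.40], 0.08, size=(200, 2))
X = np.vstack([genuine, impostor])
y = np.array([1] * 200 + [0] * 200)

clf = SVC(kernel="rbf").fit(X, y)
accept = clf.predict([[0.82, 0.88]])[0]   # near the genuine cluster
reject = clf.predict([[0.40, 0.42]])[0]   # near the impostor cluster
```

Because the SVM boundary adapts to the joint score distribution, it can outperform fixed combination rules when the two modalities have different reliabilities.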

Development of Real-Time Image Processing Algorithm on the Positions of Multi-Object in an Image Plane (한 이미지 평면에서 다물체 위치의 실시간 화상처리 알고리즘 개발)

  • Jang, W.S.;Kim, K.S.;Lee, S.M.
    • Journal of the Korean Society for Nondestructive Testing / v.22 no.5 / pp.523-531 / 2002
  • This study concentrates on the development of a high-speed multi-object image processing algorithm that runs in real time. Recently, the use of vision systems has been increasing rapidly in inspection and robot position control. To apply a vision system, it is necessary to relate the physical coordinates of an object to the image information acquired by a CCD camera. Thus, to apply the vision system to inspection and robot position control in real time, we have to know the position of the object in the image plane. In particular, when a rigid body uses multiple cues to identify its shape, the position of each cue must be calculated in the image plane at the same time. To solve these problems, an image processing algorithm for the positions of multiple cues is developed.
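Assuming the cues appear as separable bright regions, each cue's image-plane position can be taken as the centroid of its pixel region; a minimal sketch (an assumed approach, not the paper's algorithm):

```python
import numpy as np

def cue_centroids(labels, n_cues):
    """Return the (row, col) centroid of each labeled cue region.

    `labels` is an integer image where pixels of cue k carry value k
    (a labeling step is assumed to have run beforehand).
    """
    out = []
    for k in range(1, n_cues + 1):
        rows, cols = np.nonzero(labels == k)
        out.append((rows.mean(), cols.mean()))
    return out

# Two square cues in a 20x20 label image.
labels = np.zeros((20, 20), dtype=int)
labels[2:5, 2:5] = 1       # cue 1, centered at (3, 3)
labels[10:15, 10:15] = 2   # cue 2, centered at (12, 12)
centroids = cue_centroids(labels, 2)
```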

High Accurate Cup Positioning System for a Coffee Printer (커피 프린터를 위한 커피 잔 정밀 측위 시스템)

  • Kim, Heeseung;Lee, Jaesung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.10 / pp.1950-1956 / 2017
  • In the food-printing field, a precise positioning technique for the printing object is very important. In this paper, we propose a cup positioning method for a latte-art printer based on image processing. A camera sensor is installed on the upper side of the printer, and the image obtained from it is projectively transformed into a top-view image. The edge lines of the image are then detected, and the coordinates of the center and the radius of the cup are found through a circular Hough transform. The performance evaluation shows that the image processing time is 0.1-0.125 s and the cup detection rate is 92.26%. This means that a cup is detected almost perfectly without affecting the overall latte-art printing time. The center coordinates and radius values of the cups detected by the proposed method show very small errors, less than 1.5 mm on average. Therefore, the problem of printing position error appears to be solved.
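The circular Hough transform used for cup detection can be sketched in a few lines: each edge pixel votes for all candidate centers and radii consistent with it, and the accumulator peak is the detected circle. A toy version on synthetic edges (production systems typically use OpenCV's `cv2.HoughCircles` instead):

```python
import numpy as np

def hough_circle(edge_points, radii, shape):
    """Tiny circular Hough transform sketch (not the paper's code).

    Every edge point (x, y) votes for all centers (a, b) at each
    candidate radius r with (x-a)^2 + (y-b)^2 = r^2; the accumulator
    peak gives the detected radius and center.
    """
    acc = np.zeros((len(radii), *shape), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for x, y in edge_points:
        for i, r in enumerate(radii):
            a = np.round(x - r * np.cos(thetas)).astype(int)
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < shape[0]) & (b >= 0) & (b < shape[1])
            np.add.at(acc, (i, a[ok], b[ok]), 1)
    i, a, b = np.unravel_index(acc.argmax(), acc.shape)
    return radii[i], (a, b)

# Synthetic edge ring: circle of radius 10 centered at (30, 40).
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
edges = np.column_stack([30 + 10 * np.cos(t), 40 + 10 * np.sin(t)])
r, center = hough_circle(edges, radii=[8, 10, 12], shape=(64, 64))
```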

A Surface Image Velocimetry Algorithm for Analyzing Swaying Images (흔들리는 영상 분석을 위한 표면 영상 유속계 알고리듬)

  • Yu, Kwonk-Yu;Yoon, Byung-Man;Jung, Beom-Seok
    • Journal of Korea Water Resources Association / v.41 no.8 / pp.855-862 / 2008
  • Surface Image Velocimetry (SIV) is an instrument that measures water surface velocity using image processing techniques. To improve its measurement accuracy, it is essential to obtain high-quality images with low skew. A truck-mounted SIV system is a good way to obtain such images, since its crane provides a high vantage point. However, images taken with a truck-mounted SIV sway due to movement of the crane and camera in the wind, so to analyze them it is necessary to compensate for the sway. The present study develops an algorithm to analyze swaying images by combining common image processing techniques with coordinate transform techniques. The system follows the traces of selected fixed points and calculates the displacement of the video camera. By subtracting the average velocity of the fixed points from that of the grid points, the velocity field of the flow can be corrected. To evaluate the system's performance, two image sets were used, one without sway and one with sway. A comparison of their results showed very close agreement, with an error of around 6%.
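The sway correction described above amounts to subtracting the fixed points' mean apparent velocity, which is pure camera motion, from the measured grid velocities; a minimal sketch with made-up numbers:

```python
import numpy as np

# Sway-correction sketch (hypothetical values): fixed bank points do
# not move, so their apparent velocity is entirely camera sway.
grid_vel = np.array([[1.2, 0.1],    # measured velocities (m/s) at
                     [1.4, 0.2],    # the analysis grid points
                     [1.1, 0.0]])
fixed_vel = np.array([[0.2, 0.1],   # apparent velocities of fixed
                      [0.2, 0.1]])  # reference points

camera_motion = fixed_vel.mean(axis=0)
flow = grid_vel - camera_motion      # corrected flow velocities
```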

Development of a real-time surface image velocimeter using an android smartphone (스마트폰을 이용한 실시간 표면영상유속계 개발)

  • Yu, Kwonkyu;Hwang, Jeong-Geun
    • Journal of Korea Water Resources Association / v.49 no.6 / pp.469-480 / 2016
  • The present study aims to develop a real-time surface image velocimeter (SIV) using an Android smartphone that can measure river surface velocity with its built-in sensors and processors. The SIV system first determines the location of the site using the phone's GPS. It also measures the pitch and roll angles of the device with its orientation sensors to determine the coordinate transform from real-world coordinates to image coordinates; the only parameter to be entered is the height of the phone above the water surface. After setup, the phone's camera takes a series of images. With the help of OpenCV, an open-source computer vision library, we split the video into frames and analyzed them to obtain the water surface velocity field. The image processing algorithm, similar to the traditional STIV (Spatio-Temporal Image Velocimetry), is based on a correlation analysis of spatio-temporal images. The SIV system can measure an instantaneous (1-second-averaged) velocity field once every 11 seconds; averaging these instantaneous measurements over a sufficient period yields the mean velocity field. A series of tests performed in an experimental flume showed that the measurement system is highly effective and convenient. Compared with measurements by a traditional propeller velocimeter, the system showed a maximum error of 13.9% and an average error of less than 10%.
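The correlation analysis underlying STIV can be sketched on a 1-D intensity profile: the frame-to-frame shift that maximizes the correlation gives the displacement, and hence the velocity (synthetic data and hypothetical calibration values below):

```python
import numpy as np

def shift_between(profile_a, profile_b):
    """Pixel shift that best aligns profile_b with profile_a,
    found by maximizing the circular cross-correlation."""
    n = len(profile_a)
    corr = [np.dot(profile_a, np.roll(profile_b, -s)) for s in range(n)]
    return int(np.argmax(corr))

rng = np.random.default_rng(2)
frame0 = rng.normal(size=256)     # tracer texture along a search line
frame1 = np.roll(frame0, 5)       # water advected 5 px in one frame

px_per_m, fps = 100.0, 30.0       # hypothetical calibration
shift = shift_between(frame0, frame1)
velocity = shift / px_per_m * fps  # surface velocity in m/s
```

A full STIV implementation measures this along many search lines and averages over time, which is what makes the 1-second-averaged fields above possible.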