• Title/Summary/Keyword: Camera sensor

Fingerprint Segmentation and Ridge Orientation Estimation with a Mobile Camera for Fingerprint Recognition (모바일 카메라를 이용한 지문인식을 위한 지문영역 추출 및 융선방향 추출 알고리즘)

  • Lee Chulhan;Lee Sanghoon;Kim Jaihie;Kim Sung-Jae
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.6
    • /
    • pp.89-98
    • /
    • 2005
  • Fingerprint segmentation and ridge orientation estimation algorithms for images from a mobile camera are proposed. Fingerprint images from a mobile camera are quite different from those obtained with conventional touch-based sensors such as optical, capacitive, and thermal sensors. For example, images from a mobile camera are in color, and the backgrounds or non-finger regions vary considerably depending on when and where the image is captured. Also, the contrast between ridges and valleys in a mobile camera image is lower than in a touch-based sensor image. To segment the fingerprint region, we first detect an initial region using color and texture information. A look-up table (LUT) is used to model the color distribution of fingerprint images, built from manually segmented images, and frequency information is extracted to discriminate between in-focus fingerprint regions and out-of-focus background regions. Starting from the detected initial region, a region-growing algorithm is executed to segment the final fingerprint region. In fingerprint orientation estimation, gradient-based methods are very sensitive to outliers caused by scars and camera noise. To solve this problem, we propose a robust regression method that removes outliers iteratively and effectively. In the experiments, we evaluated the proposed fingerprint segmentation algorithm using 600 manually segmented images and compared the orientation algorithms in terms of recognition accuracy.
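
The robust orientation step summarized in this abstract lends itself to a short illustration. The sketch below estimates a block's ridge orientation from image gradients and iteratively discards the gradient votes that deviate most from the current estimate; the gradient operator, the rejection quantile, and the iteration count are assumptions, not the authors' exact formulation.

```python
import numpy as np

def block_orientation_robust(block, iterations=3, keep_ratio=0.9):
    """Dominant ridge orientation (radians) of one image block, discarding
    the gradient votes that deviate most from the running estimate."""
    gy, gx = np.gradient(block.astype(float))
    # Doubled-angle representation: each pixel votes with (gxx - gyy, 2*gxy).
    vx = (gx * gx - gy * gy).ravel()
    vy = (2.0 * gx * gy).ravel()
    keep = np.ones(vx.size, dtype=bool)
    theta = 0.0
    for _ in range(iterations):
        theta2 = np.arctan2(vy[keep].sum(), vx[keep].sum())   # doubled angle
        # Angular residual of every vote against the current estimate.
        res = np.abs(np.angle(np.exp(1j * (np.arctan2(vy, vx) - theta2))))
        # Drop the presumed outliers (e.g. scars, camera noise): keep only
        # the votes with the smallest residuals.
        cutoff = np.quantile(res[keep], keep_ratio)
        keep = res <= cutoff
        theta = 0.5 * theta2
    return theta
```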

Personalized Cooling Management System with Thermal Imaging Camera (열화상 카메라를 적용한 개인 맞춤형 냉각관리 시스템)

  • Lee, Young-Ji;Lee, Joo-Hyun;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.25 no.4
    • /
    • pp.782-785
    • /
    • 2021
  • In this paper, we propose a personalized cooling management system using a thermal imaging camera. The proposed equipment uses the thermal imaging camera to control the amount of cold air and the system according to the difference between the user's skin temperature before and after the procedure. When the skin temperature is abnormally low, the cold-air supply is cut off to prevent the possibility of a safety accident. Replacing a skin temperature sensor with thermal-camera temperature measurement is economical, and the measurement can be visualized because the temperature can be checked in the thermal image. In addition, for the safety of the system, the proposed equipment improves the sensitivity of the sensor that measures the distance to the skin by using a dual laser pointer to calculate the focal length. To evaluate the performance of the proposed equipment, it was tested by an externally accredited testing institute. First, the measured temperature range was -100℃~-160℃, wider than the -150℃~-160℃ of the best equipment currently used in the field (cryo generation/USA), and the error was measured at ±3.2%~±3.5%, better than the field's best of ±5% (CRYOTOP/China). Second, the measured distance accuracy was below ±4.0%, again superior to the ±5% of CRYOTOP/China. Third, nitrogen consumption was confirmed to be at most 0.15 L/min, superior to the field's best of 6 L/min (POLAR BEAR/USA). Therefore, the performance of the proposed personalized cooling management system with a thermal imaging camera was judged to be excellent.
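
As a purely illustrative companion to this abstract, the following minimal sketch shows the kind of safety cut-off logic it implies: read a skin temperature from the thermal frame and stop the cold-air supply when it drops below a threshold. The threshold value, the region-of-interest handling, and the `valve` interface are hypothetical and are not taken from the paper.

```python
import numpy as np

SKIN_TEMP_CUTOFF_C = 5.0  # assumed safety threshold, not from the paper

def skin_temperature(thermal_frame_c, roi):
    """Median temperature (deg C) inside a region of interest (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    return float(np.median(thermal_frame_c[y0:y1, x0:x1]))

def control_step(thermal_frame_c, roi, valve):
    """One control cycle: cut the supply if the skin is abnormally cold."""
    temp = skin_temperature(thermal_frame_c, roi)
    if temp < SKIN_TEMP_CUTOFF_C:
        valve.close()               # cut off the cold-air supply (hypothetical API)
    else:
        valve.set_flow_for(temp)    # modulate flow from the measured temperature
    return temp
```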

Study on object detection and distance measurement functions with Kinect for windows version 2 (키넥트(Kinect) 윈도우 V2를 통한 사물감지 및 거리측정 기능에 관한 연구)

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.6
    • /
    • pp.1237-1242
    • /
    • 2017
  • Computer vision is becoming more interesting as new imaging sensors provide capabilities that, combined with artificial intelligence techniques, allow a system to better understand its surrounding environment by imitating the human visual system. In this paper, we carried out experiments with the Kinect camera, a depth sensor, on object detection and distance measurement, functions that are essential in computer vision for unmanned or manned vehicles, robots, drones, etc. The Kinect camera is used here to estimate the position of objects in its field of view and to measure the distance from them to its depth sensor accurately, checking whether a detected object is a real object in order to reduce processing time by ignoring pixels that are not part of a real object. Tests showed promising results with this low-cost range sensor, which can be used for object detection and distance measurement, fundamental functions for further processing in computer vision applications.
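
A simplified sketch of the distance-measurement idea in this abstract is given below, assuming the Kinect depth frame is already available as a NumPy array of millimetre values in which 0 marks invalid pixels. The object is taken to be a sufficiently large connected region of nearby valid pixels; the range threshold and the minimum blob size are assumptions.

```python
import numpy as np
from scipy import ndimage

def nearest_object_distance_mm(depth_mm, max_range_mm=2000, min_pixels=500):
    """Return (distance in mm, boolean mask) of the nearest sizeable object,
    or None if nothing qualifies."""
    valid = depth_mm > 0                       # ignore pixels with no depth reading
    near = valid & (depth_mm < max_range_mm)   # candidate object pixels
    labels, n = ndimage.label(near)            # connected components
    best = None
    for i in range(1, n + 1):
        mask = labels == i
        if mask.sum() < min_pixels:            # too small to be a real object
            continue
        d = float(np.median(depth_mm[mask]))   # robust distance of the blob
        if best is None or d < best[0]:
            best = (d, mask)
    return best
```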

Camera Calibration Using Neural Network with a Small Amount of Data (소수 데이터의 신경망 학습에 의한 카메라 보정)

  • Do, Yongtae
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.3
    • /
    • pp.182-186
    • /
    • 2019
  • When a camera is employed for 3D sensing, accurate camera calibration is vital because it is a prerequisite for the subsequent steps of the sensing process. Camera calibration is usually performed through complex mathematical modeling and geometric analysis. In contrast, data learning using an artificial neural network can establish the transformation relation between 3D space and the 2D camera image without explicit camera modeling. However, a neural network requires a large amount of accurate data for its learning, and collecting extensive data accurately in practice requires a significant amount of time and work with a precise system setup. In this study, we propose a two-step neural calibration method that is effective when only a small amount of learning data is available. In the first step, the camera projection transformation matrix is determined using the limited available data. In the second step, the transformation matrix is used to generate a large amount of synthetic data, and the neural network is trained on the generated data. Results of a simulation study have shown that the proposed method is valid and effective.
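
The two-step method in this abstract can be sketched as follows: estimate a 3x4 projection matrix from the few measured 3D-2D pairs (here with the standard DLT formulation, which is an assumption) and then use it to synthesize a large training set for a small neural network. The sampling box, the network size, and the 3D-to-2D mapping direction are likewise assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def dlt_projection_matrix(pts3d, pts2d):
    """Least-squares 3x4 projection matrix from >= 6 point correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 4)            # right null-space vector as 3x4 P

def synthesize(P, n=5000, lo=(-1.0, -1.0, 1.0), hi=(1.0, 1.0, 3.0)):
    """Sample synthetic 3D points in a box and project them with P."""
    pts3d = np.random.uniform(lo, hi, size=(n, 3))
    homog = np.hstack([pts3d, np.ones((n, 1))])
    proj = homog @ P.T
    return pts3d, proj[:, :2] / proj[:, 2:3]

# Step 1: matrix from the small measured set; step 2: train on synthetic data.
# (measured_3d and measured_2d are placeholders for the few calibration points.)
# P = dlt_projection_matrix(measured_3d, measured_2d)
# X3, X2 = synthesize(P)
# net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X3, X2)
```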

An Efficient Implementation of Key Frame Extraction and Sharing in Android for Wireless Video Sensor Network

  • Kim, Kang-Wook
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.9
    • /
    • pp.3357-3376
    • /
    • 2015
  • Wireless sensor networks are an important research topic that has attracted a lot of attention in recent years. However, most of this interest has focused on wireless sensor networks that gather scalar data such as temperature, humidity, and vibration. Scalar data are insufficient for diverse applications such as video surveillance, target recognition, and traffic monitoring, whereas camera sensors in a wireless sensor network can collect video data that are rich in information and provide important visual context. For this reason, video sensor networks have continued to gain interest over the past few years. However, how to efficiently store the massive data that reflect the environmental state at different times in a video sensor network, and how to quickly search them for information of interest, are challenging issues in current research, especially when the sensor network environment is complicated. Therefore, in this paper, we propose a fast algorithm for extracting key frames from video and describe the design and implementation of key frame extraction and sharing in Android for a wireless video sensor network.
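
The abstract does not spell out the extraction algorithm, so the sketch below uses a common colour-histogram-difference criterion purely as an illustration: a frame becomes a key frame when its histogram differs enough from the previous key frame. The histogram binning and the Bhattacharyya-distance threshold are assumptions.

```python
import cv2

def extract_key_frames(video_path, threshold=0.4):
    """Keep a frame whenever its HSV histogram differs enough from the last key frame."""
    cap = cv2.VideoCapture(video_path)
    key_frames, last_hist = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if last_hist is None or cv2.compareHist(
                last_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            key_frames.append(frame)      # sufficiently different: keep it
            last_hist = hist
    cap.release()
    return key_frames
```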

An Adaptive Colorimetry Analysis Method of Image using a CIS Transfer Characteristic and SGL Functions (CIS의 전달특성과 SGL 함수를 이용한 적응적인 영상의 Colorimetry 분석 기법)

  • Lee, Sung-Hak;Lee, Jong-Hyub;Sohng, Kyu-Ik
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.5
    • /
    • pp.641-650
    • /
    • 2010
  • Color image sensors (CIS) produce color images through an image sensor and image signal processing. Image sensors that convert light into electrical signals are divided into CMOS image sensors and CCD image sensors according to the signal-charge transfer method. In general, a CIS produces RGB output signals from the tristimulus XYZ values of the scene through image signal processing. This paper presents an adaptive colorimetric analysis method for obtaining chromaticity and luminance using a CIS under various environments. An image sensor used as a colorimeter is characterized based on the CIE standard colorimetric observer. We use the method of least squares to derive a colorimetric characterization matrix between camera RGB output signals and CIE XYZ tristimulus values. We first perform camera characterization in a standard environment, then derive an SGL (shutter-gain-level) function, which describes the relationship between luminance and the auto-exposure (AE) characteristic of the CIS, and read the status of the auto-white-balance (AWB) function. The CIS can then be applied to measure luminance and chromaticity from the camera outputs and AE register values without any preprocessing. Camera RGB outputs, register values, and the camera's photoelectric characteristic are used to analyze colorimetric results such as chromaticity and luminance for real scenes. Experimental results show that the proposed method is valid in terms of measurement performance. The proposed method can be applied to various fields such as display surveillance systems or security systems.
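
The least-squares characterization step described in this abstract can be illustrated in a few lines: fit a 3x3 matrix M so that XYZ ≈ M·RGB over a set of training patches (e.g. a colour chart). Linearised RGB values are assumed, and the paper's SGL/AE handling is not reproduced here.

```python
import numpy as np

def characterization_matrix(rgb, xyz):
    """rgb, xyz: (N, 3) arrays of camera outputs and measured tristimulus values.
    Returns M such that xyz ≈ M @ rgb for a column vector rgb."""
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)   # solves rgb @ M ≈ xyz row-wise
    return M.T

def rgb_to_xyz(M, rgb_pixel):
    """Apply the fitted characterization matrix to one RGB triple."""
    return M @ np.asarray(rgb_pixel, float)
```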

Development of a shape measuring system by hand-eye robot (Hand-Eye Robot에 의한 형상계측 시스템의 개발)

  • 정재문;김선일;양윤모
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1990.10a
    • /
    • pp.586-590
    • /
    • 1990
  • In this paper, we describe a shape measuring technique and system using a non-contact sensor composed of a slit-ray projector and a solid-state camera. To improve accuracy and prevent measurement blind spots, the sensor is attached to the end of a robot, and a measurement is taken after each step of motion. By patching these measurements together, the complete measurement data set is constructed. Calibration between the sensor and world coordinates is performed with a specific calibration block using the transformation matrix method. The experimental results were satisfactory.

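The patching step described in this abstract can be illustrated as follows, assuming a 4x4 homogeneous-matrix convention: each slit-ray measurement, expressed in the sensor frame, is mapped into the world frame using the robot pose at capture time (world ← end-effector) together with the fixed hand-eye transform (end-effector ← sensor), and the per-step patches are then stacked.

```python
import numpy as np

def to_world(points_sensor, T_world_ee, T_ee_sensor):
    """points_sensor: (N, 3) points in the sensor frame; T_*: 4x4 homogeneous matrices."""
    homog = np.hstack([points_sensor, np.ones((len(points_sensor), 1))])
    return (T_world_ee @ T_ee_sensor @ homog.T).T[:, :3]

def merge_patches(patches, poses, T_ee_sensor):
    """Transform every per-step patch into the world frame and stack them."""
    return np.vstack([to_world(p, T, T_ee_sensor) for p, T in zip(patches, poses)])
```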

Development of Mirror-based touchless fingerprint sensor (거울을 이용한 비접촉식 지문 센서 개발)

  • Choi, Hee-Seung;Choi, Kyung-Taek;Kim, Jai-Hie
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.231-232
    • /
    • 2007
  • This paper introduces a new touchless fingerprint sensor. Two mirrors are used to capture the side fingerprint images, which cannot be captured with a single camera. We also propose techniques that address the image contrast, nonuniform illumination, and depth-of-field (DOF) problems. This new sensor brings new challenges to the field of fingerprint recognition.

Development and Application of a Profile Measurement Sensor for Remote Laser Welding Robots (원격 레이저 용접 로봇을 위한 형상 측정 센서의 개발과 응용)

  • Kim, Chang-Hyun;Choi, Tae-Yong;Lee, Ju-Jang;Suh, Jeong;Park, Kyoung-Taik;Kang, Hee-Shin
    • Laser Solutions
    • /
    • v.12 no.2
    • /
    • pp.11-16
    • /
    • 2009
  • A new profile measurement sensor was developed for remote laser welding robots. The profile sensor uses a stripe laser and a vision camera. A simple sensor-guided control scheme using the developed sensor is also introduced. The sensor can be used to guide the welding head in remote welding applications, where the working distance reaches 450 mm. In experiments, profile measurement and seam tracking were carried out using the developed sensor.

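A simplified sketch of extracting a profile from a single laser-stripe image, as such a sensor typically does, is shown below: for each image column, take the intensity-weighted row of the laser response. Treating the first channel as the laser channel and the fixed intensity threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def stripe_profile(image, threshold=50):
    """image: (H, W) or (H, W, 3) array; returns a per-column sub-pixel row index,
    NaN where no stripe is found."""
    gray = image[..., 0].astype(float) if image.ndim == 3 else image.astype(float)
    rows = np.arange(gray.shape[0])[:, None]
    weights = np.where(gray > threshold, gray, 0.0)      # keep only bright laser pixels
    col_sum = weights.sum(axis=0)
    centre = (weights * rows).sum(axis=0) / np.where(col_sum > 0, col_sum, 1.0)
    return np.where(col_sum > 0, centre, np.nan)
```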

A sensor controller for map building of home service robot using low cost PSD sensor (저가형 PSD센서를 이용한 홈서비스 로봇의 Map building용 센서 제어시스템)

  • Hyun, Wong-Keun;Lee, Chang-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.10
    • /
    • pp.1897-1904
    • /
    • 2006
  • A home service robot must recognize its indoor environment and build a map of it, including components of the house such as furniture and chairs. Previous researchers have developed indoor map-building systems using a CCD camera and an ultrasonic sensor. Those systems have some problems: (1) with a CCD camera system, the distance resolution changes according to the number of pixels, and (2) with an ultrasonic sensor, the measured distance can be degraded when the signal strikes rubber, because the sound energy is absorbed. This paper presents an intelligent sensor controller module developed using a low-cost optical PSD (position sensitive detector) sensor. To reduce the switching noise from the beam power module and the diffused-reflection noise, we propose a heuristic soft filter. The performance of the developed system was compared with that of an ultrasonic sensor system by detecting indoor walls. Experiments are presented to demonstrate the validity of the developed system.
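
The "heuristic soft filter" is not specified in this abstract, so the sketch below shows one simple possibility purely as an illustration: a sliding-window median filter followed by exponential smoothing, to suppress the switching-noise and diffused-reflection spikes mentioned above. The window size and smoothing factor are assumptions.

```python
import numpy as np

def soft_filter(readings, window=5, alpha=0.3):
    """readings: 1-D array of raw PSD distance samples; returns a smoothed copy."""
    readings = np.asarray(readings, float)
    # Median over a sliding window removes isolated spikes.
    pad = window // 2
    padded = np.pad(readings, pad, mode='edge')
    med = np.array([np.median(padded[i:i + window]) for i in range(len(readings))])
    # Exponential smoothing damps the remaining high-frequency noise.
    out = np.empty_like(med)
    out[0] = med[0]
    for i in range(1, len(med)):
        out[i] = alpha * med[i] + (1 - alpha) * out[i - 1]
    return out
```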