• Title/Summary/Keyword: Camera sensor

Search Result 1,274

EM Development of Dual Head Star Tracker for STSAT-2 (과학기술위성2호의 이중 머리 별 추적기 개발)

  • Sin, Il-Sik;Lee, Seong-Ho;Yu, Chang-Wan;Nam, Myeong-Ryong
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.34 no.2 / pp.96-100 / 2006
  • We developed the Dual Head Star Tracker (DHST) to obtain attitude information for the Science and Technology Satellite 2 (STSAT-2). Because most star sensors have only one camera head, star recognition becomes impossible when that camera points toward the sun or the earth. We therefore designed the DHST, which can acquire star images from two directions simultaneously: even if star recognition fails on the image from one camera, stars can still be recognized in the image from the other camera. In this paper, we introduce the engineering model (EM) of the DHST and propose a star recognition algorithm and a star tracking algorithm.
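
The dual-head fallback described in this abstract can be illustrated with a short sketch. The toy recognize_stars routine below (brightness thresholds, synthetic frames) is a hypothetical stand-in for the DHST's actual star identification algorithm, shown only to make the fallback logic concrete.

```python
import numpy as np

def recognize_stars(image, blind_thresh=0.5, star_thresh=0.9):
    """Toy star 'recognition': reject a saturated (sun/earth) frame, otherwise
    return pixel coordinates of bright spots. Thresholds are illustrative."""
    if image.mean() > blind_thresh:           # frame dominated by a bright body
        return None
    ys, xs = np.nonzero(image > star_thresh)  # candidate star pixels
    return list(zip(xs.tolist(), ys.tolist())) or None

def stars_from_dual_head(image_a, image_b):
    """Use head A if it yields stars, otherwise fall back to head B."""
    return recognize_stars(image_a) or recognize_stars(image_b)

# Example: head A stares at the earth (all bright), head B sees two stars.
head_a = np.ones((64, 64))
head_b = np.zeros((64, 64)); head_b[10, 20] = head_b[40, 50] = 1.0
print(stars_from_dual_head(head_a, head_b))   # -> [(20, 10), (50, 40)]
```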

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a map-based localization procedure for mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited individual sensing capability is advantageous, because the sensors complement and cooperate with each other to yield better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from the camera images, which serve as natural landmark points in the self-localization process. With the laser structured light sensor, on the other hand, it utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data taken at a constant height above the navigation floor. Although either feature group alone is sometimes sufficient to localize the robot, the features from both sensors are used and fused simultaneously to achieve reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments was performed, and the experimental results are discussed in detail.
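
The reliability-weighted Bayesian update described in this abstract can be sketched as follows. The 1-D pose grid, Gaussian likelihood models, and reliability weights are illustrative assumptions, not the paper's experimentally predefined reliability functions.

```python
import numpy as np

def gaussian_likelihood(measured, predicted, sigma):
    """Likelihood of a measurement given the value predicted at each pose."""
    return np.exp(-0.5 * ((measured - predicted) / sigma) ** 2)

def fuse_posterior(prior, likelihoods, reliabilities):
    """Bayes update with several sensors; each likelihood is tempered by a
    reliability weight in [0, 1] (weight 0 ignores that sensor entirely)."""
    post = prior.copy()
    for lik, w in zip(likelihoods, reliabilities):
        post *= lik ** w               # reliability-weighted likelihood
    return post / post.sum()           # normalize to a probability distribution

# 1-D toy example: candidate robot positions along a corridor.
poses = np.linspace(0.0, 10.0, 201)
prior = np.ones_like(poses) / poses.size                   # uninformative prior
vision_lik = gaussian_likelihood(3.2, poses, sigma=0.8)    # noisy vision cue
laser_lik  = gaussian_likelihood(2.9, poses, sigma=0.2)    # sharper laser cue
post = fuse_posterior(prior, [vision_lik, laser_lik], reliabilities=[0.6, 0.9])
print("fused pose estimate:", poses[np.argmax(post)])
```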

A Study on the Vision Sensor Using Scanning Beam for Welding Process Automation (용접자동화를 위한 주사빔을 이용한 시각센서에 관한 연구)

  • You, Won-Sang;Na, Suck-Joo
    • Transactions of the Korean Society of Mechanical Engineers A / v.20 no.3 / pp.891-900 / 1996
  • A vision sensor based on optical triangulation with a laser as an auxiliary light source can detect not only the seam position but also the shape of the seam. In this study, a vision sensor using a scanning laser beam was investigated. To design a vision sensor that accounts for the reflectivity of the sensed object and satisfies the desired resolution and measuring range, the equation of the focused laser beam, which has a Gaussian irradiance profile, was first formulated; secondly, the image forming sequence was described; and thirdly, the relation between a displacement on the measured surface and the corresponding displacement on the camera plane was formulated. From these, the focused beam diameter over the measuring range could be determined and the influence of the relative location of the laser and the camera plane could be estimated. The measuring range and resolution of the vision sensor, which was based on the Scheimpflug condition, could also be calculated. Based on these results, a vision sensor was developed and an adequate calibration technique was proposed. An image processing algorithm that recognizes the center of the joint and its shape information was also investigated. Using the developed vision sensor and image processing algorithm, the shape information of vee, butt, and lap joints was extracted.
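
A minimal sketch of the optical triangulation relation between surface range and image-plane displacement follows. It assumes a simplified parallel-axis geometry rather than the Scheimpflug arrangement analysed in the paper; the baseline and focal length values are illustrative.

```python
# Simplified laser triangulation geometry (an assumption for illustration):
# the laser beam runs along the z-axis and the camera, offset by a baseline b,
# views it through a lens of focal length f whose optical axis is parallel
# to the beam.

def image_offset(z, baseline, focal_len):
    """Lateral image-plane offset x' (same units as focal_len) of the laser
    spot when the illuminated surface lies at range z."""
    return focal_len * baseline / z

def range_from_offset(x_img, baseline, focal_len):
    """Invert the triangulation relation to recover range from the offset."""
    return focal_len * baseline / x_img

b, f = 0.10, 0.025                     # 100 mm baseline, 25 mm lens (metres)
for z in (0.3, 0.5, 1.0):
    x = image_offset(z, b, f)
    print(f"z = {z:.2f} m -> x' = {x*1e3:.2f} mm "
          f"-> recovered {range_from_offset(x, b, f):.2f} m")
```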

An Occupant Sensing System Using Single Video Camera and Ultrasonic Sensor for Advanced Airbag (단일 비디오 카메라와 초음파센서를 이용한 스마트 에어백용 승객 감지 시스템)

  • Bae, Tae-Wuk;Lee, Jong-Won;Ha, Su-Young;Kim, Young-Choon;Ahn, Sang-Ho;Sohng, Kyu-Ik
    • Journal of Korea Multimedia Society / v.13 no.1 / pp.66-75 / 2010
  • We propose an occupant sensing system using a single video camera and an ultrasonic sensor for an advanced airbag. To detect the occupant's form and the face position in real time, we use skin color and motion information. Candidate face block images are generated by thresholding the color-difference signal corresponding to skin color and the difference between the luminance signals of the current and previous images, which provides the motion information. The face is then detected by morphological operations and labeling. At night, when color and luminance information are unavailable, the face is detected by thresholding the luminance signal obtained under infra-red LED illumination instead of the color-difference signal. To evaluate the performance of the proposed occupant detection system, we performed various experiments with the IEEE camera, ultrasonic sensor, and infra-red LEDs installed in a vehicle jig.
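
A rough sketch of the candidate-detection chain described above (skin-colour thresholding, frame differencing, morphology, labeling) is shown below using OpenCV. The colour thresholds, kernel size, and minimum blob area are illustrative values, not the paper's calibrated parameters.

```python
import cv2
import numpy as np

def face_candidates(curr_bgr, prev_bgr, min_area=500):
    """Return bounding boxes of candidate face blocks found where skin colour
    and inter-frame motion overlap (illustrative thresholds)."""
    # Skin mask from the chrominance (Cr/Cb) channels.
    ycrcb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Motion mask from the luminance difference of consecutive frames.
    g_curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    g_prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    _, motion = cv2.threshold(cv2.absdiff(g_curr, g_prev), 25, 255,
                              cv2.THRESH_BINARY)

    # Combine, clean up with morphology, then label connected components.
    mask = cv2.bitwise_and(skin, motion)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```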

Geometric calibration of digital photogrammetric camera in Sejong Test-bed (세종 테스트베드에서 항측용 디지털카메라의 기하학적 검정)

  • Seo, Sang-Il;Won, Jae-Ho;Lee, Jae-One;Park, Byoung-Uk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.2 / pp.181-188 / 2012
  • Recently, sensors such as digital photogrammetric cameras, airborne LiDAR, and GPS/INS have been used to acquire various kinds of spatial information in the field of aerial surveying. In addition, direct georeferencing technology combining a digital photogrammetric camera with GPS/INS has been widely utilized. However, the sensor calibration that must be performed for each combination of sensors raises problems. Above all, boresight calibration of the integrated sensors is a critical element of the mapping process when direct georeferencing or GPS/INS-assisted aerotriangulation is used. Establishing a national test-bed in Sejong-si for aerial sensor calibration is essential to solve this problem. Accurate calibration through GPS/INS-integrated aerotriangulation of aerial imagery was necessary to determine the system parameters and to evaluate systematic errors. In addition, an efficient direct georeferencing method for determining the exterior orientation parameters was investigated, and the geometric accuracy of the integrated sensors was assessed.
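
For context, the standard direct-georeferencing relation that boresight calibration serves can be sketched as below. The rotation matrices, lever-arm vector, and scale factor in the example are illustrative placeholders, not values from the Sejong test-bed.

```python
import numpy as np

def direct_georeference(r_gnss, R_body_to_map, R_cam_to_body, lever_arm,
                        x_img, scale):
    """Ground coordinates of an image point under direct georeferencing:
    r_ground = r_GNSS + R_body->map @ (scale * R_cam->body @ x_img + lever_arm).
    R_cam->body is the boresight rotation that the test-bed calibrates."""
    return r_gnss + R_body_to_map @ (scale * (R_cam_to_body @ x_img) + lever_arm)

# Toy example with identity attitudes and a nadir-pointing camera.
r_gnss = np.array([200000.0, 450000.0, 1500.0])      # platform position (m)
x_img  = np.array([0.01, -0.02, -0.12])              # image vector (m), f = 120 mm
ground = direct_georeference(r_gnss, np.eye(3), np.eye(3),
                             lever_arm=np.array([0.0, 0.0, -0.3]),
                             x_img=x_img, scale=12000.0)  # ~flying height / f
print(ground)
```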

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.13-16 / 2015
  • Purpose: We conducted a study on reconstructing the shape of the head in 3D using a ToF depth sensor. A time-of-flight (ToF) camera is a range imaging camera that resolves distance based on the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. This is the safest way of measuring the head shape of plagiocephaly patients in 3D. The texture, appearance, and size of the head were reconstructed from the measured data, and the SDF method was used for a precise reconstruction. Materials and Methods: To generate a precise model, a mesh was generated using marching cubes and the SDF. Results: The ground truth was determined by measuring each of the 10 participants three times, and the corresponding parts of the 3D models created in this experiment were measured as well. The actual head circumference and the reconstructed model were measured according to the layer 3 standard, and the measurement errors were calculated. As a result, we obtained accurate results with an average error of 0.9 cm (standard deviation 0.9, minimum 0.2, maximum 1.4). Conclusion: The suggested method was able to complete the 3D model while minimizing errors, and the model is very effective for quantitative and objective evaluation. However, because the measurements were made according to the layer 3 standard, the measured range somewhat lacks the 3D information needed to manufacture protective helmets. The measurement range will therefore need to be widened, by scanning the entire head, to enable the production of more precise and effective protective helmets in the future.
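
The SDF-to-mesh step mentioned in the abstract can be sketched as follows; a synthetic sphere SDF stands in for the fused ToF head data, and scikit-image's marching cubes is used in place of the authors' implementation.

```python
import numpy as np
from skimage import measure   # provides marching_cubes

# Build a signed distance volume (here a synthetic sphere standing in for
# the fused depth data) and extract a triangle mesh at the zero level set.
n = 64
grid = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.6          # signed distance to a sphere

verts, faces, normals, values = measure.marching_cubes(sdf, level=0.0)
print(f"mesh with {len(verts)} vertices and {len(faces)} triangles")
```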

Development of a Photoplethysmographic method using a CMOS image sensor for Smartphone (스마트폰의 CMOS 영상센서를 이용한 광용적맥파 측정방법 개발)

  • Kim, Ho Chul;Jung, Wonsik;Lee, Kwonhee;Nam, Ki Chang
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.6 / pp.4021-4030 / 2015
  • Like the ECG, the pulse wave is a physiological response mediated by the autonomic nervous system. It is relatively convenient because the signal can be measured simply by applying a sensor to a finger, so it can be usefully employed in the field of u-Healthcare. The objectives of this study are to acquire the PPG (photoplethysmography) signal, a non-invasive way of measuring the pulse wave, using the CMOS image sensor of a smartphone camera, to develop a portable system that judges whether the user is under stress, and to confirm its applicability in the field of u-Healthcare. The PPG was acquired and analyzed from the image data of the smartphone camera without any separate sensor. From the same image data, HRV (heart rate variability) and a stress index were provided to users using only the smartphone, without separate host equipment. In addition, the reliability and accuracy of the acquired data were improved by developing an additional hardware device. These experiments confirm that measuring the heart rate through the PPG, and computing a stress index from the smartphone camera images, are both possible. Because this study used a smartphone camera rather than a commercialized product or standardized sensor, the resolution is lower than that obtained with a commercial external sensor. Despite this disadvantage, the system can be usefully employed as a u-Healthcare device, since promising data can be obtained by adding an external device and optimizing the algorithm to improve the reliability of the results.
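
A minimal sketch of turning a sequence of fingertip camera frames into a PPG trace and a heart-rate estimate is given below. The synthetic frames and the FFT-peak heart-rate estimator are illustrative assumptions, not the processing chain or stress index developed in the paper.

```python
import numpy as np

def ppg_from_frames(frames):
    """PPG trace as the mean red-channel intensity of each fingertip frame."""
    return np.array([f[..., 0].mean() for f in frames])   # frames are RGB

def heart_rate_bpm(ppg, fps):
    """Estimate heart rate from the dominant spectral peak in 0.7-3.0 Hz."""
    ppg = ppg - ppg.mean()
    freqs = np.fft.rfftfreq(ppg.size, d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(ppg))
    band = (freqs >= 0.7) & (freqs <= 3.0)                 # 42-180 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic stand-in for 10 s of 30 fps fingertip video with a 1.2 Hz pulse.
fps, t = 30, np.arange(300) / 30.0
frames = [np.full((64, 64, 3), 128.0) + 5.0 * np.sin(2 * np.pi * 1.2 * ti)
          for ti in t]
print(f"{heart_rate_bpm(ppg_from_frames(frames), fps):.1f} bpm")   # ~72 bpm
```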

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.1-7 / 2011
  • A depth camera measures range information of the scene in real time using Time-of-Flight (ToF) technology. The measured depth data are then regularized and provided as a depth image. This depth image is combined with stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the ToF depth image must first be corrected because of the technical limitations of the ToF depth camera. The corrected depth image is then combined with the color images in various ways to obtain a high-resolution depth map. In this paper, we introduce the principle of and various techniques for sensor fusion between multi-view cameras and depth cameras for high-quality depth generation.
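
One simple way to combine a low-resolution ToF depth map with a high-resolution image, in the spirit of the fusion described above, is joint bilateral upsampling; the naive sketch below omits the inter-camera calibration and noise-correction steps and uses illustrative filter parameters.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, gray_hr, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res depth map to the resolution of gray_hr, weighting
    each nearby low-res depth sample by its spatial distance and by the
    similarity of the guiding high-res intensities (naive, loop-based sketch).
    gray_hr is assumed to be a float image normalized to [0, 1]."""
    H, W = gray_hr.shape
    h_lr, w_lr = depth_lr.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale            # position on low-res grid
            y0 = min(int(round(yl)), h_lr - 1)
            x0 = min(int(round(xl)), w_lr - 1)
            w_sum = d_sum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y0 + dy, x0 + dx
                    if 0 <= yy < h_lr and 0 <= xx < w_lr:
                        # Spatial weight in low-res coordinates.
                        ws = np.exp(-((yy - yl) ** 2 + (xx - xl) ** 2)
                                    / (2 * sigma_s ** 2))
                        # Range weight from the guiding high-res image.
                        gy = min(int(yy * scale), H - 1)
                        gx = min(int(xx * scale), W - 1)
                        wr = np.exp(-(gray_hr[y, x] - gray_hr[gy, gx]) ** 2
                                    / (2 * sigma_r ** 2))
                        w_sum += ws * wr
                        d_sum += ws * wr * depth_lr[yy, xx]
            out[y, x] = d_sum / w_sum if w_sum > 0 else depth_lr[y0, x0]
    return out

# Usage (hypothetical inputs):
# depth_hr = joint_bilateral_upsample(tof_depth, gray_image, scale=4)
```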

Satellite Camera Focus Mechanism Design and Verification (위성용 전자광학카메라의 초점제어시스템 설계 및 검증)

  • Park, Jong-Euk;Lee, Kijun
    • Korean Journal of Remote Sensing / v.34 no.2_1 / pp.227-236 / 2018
  • A focus control mechanism may be required in a multi-purpose camera to acquire better-quality images. A good image must first be acquired through the hardware system, including the optics and the image sensor, before post-correction can further improve image quality. For a high-resolution satellite camera, focus control is not a necessity, unlike in a normal camera, because the optical system is fixed, but it may still be required for various reasons. Although the basic focus control method for a satellite electro-optical camera uses a motor, focus control using thermal control can be a good alternative because of its advantages in design, installation, operation, contamination control, high reliability, and so on. In this paper, we describe the design method and implementation results of a focus control mechanism using the temperature sensors and heaters installed on the telescope structure. In the proposed method, the measured temperature information is converted into filtered temperature data by a Kalman filter, and the filtered data are used by a PI controller for thermal focus control.
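
The Kalman-filter-plus-PI structure described in the abstract can be sketched as a scalar filter feeding a PI heater-duty controller. The gains, noise covariances, and the crude thermal plant model below are illustrative assumptions, not the satellite system's parameters.

```python
import numpy as np

class ScalarKalman:
    """1-D Kalman filter smoothing noisy telescope-temperature readings."""
    def __init__(self, x0=20.0, p0=1.0, q=1e-4, r=0.05):
        self.x, self.p, self.q, self.r = x0, p0, q, r
    def update(self, z):
        self.p += self.q                     # predict (constant-temperature model)
        k = self.p / (self.p + self.r)       # Kalman gain
        self.x += k * (z - self.x)           # correct with the new reading
        self.p *= (1.0 - k)
        return self.x

class PIController:
    """PI law driving the heater toward the temperature set-point."""
    def __init__(self, kp=0.8, ki=0.05, setpoint=21.5):
        self.kp, self.ki, self.sp, self.acc = kp, ki, setpoint, 0.0
    def command(self, temp):
        err = self.sp - temp
        self.acc += err
        return np.clip(self.kp * err + self.ki * self.acc, 0.0, 1.0)  # duty 0..1

# Toy closed loop: a slowly cooling structure warmed by the heater.
kf, pi, temp = ScalarKalman(), PIController(), 20.0
for step in range(200):
    measured = temp + np.random.normal(0.0, 0.2)   # noisy sensor reading
    duty = pi.command(kf.update(measured))         # filter, then control
    temp += 0.05 * duty - 0.01 * (temp - 19.0)     # crude thermal plant model
print(f"final structure temperature ~ {temp:.2f} C")
```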

Implementation of Real-Time Security System by using Dual Camera (이중카메라를 이용한 실시간 도난방지 시스템의 구현)

  • Lee, Kwang-Hyoung;Jung, Young-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.1 / pp.158-164 / 2009
  • A real-time security system using a web camera must respond in real time by classifying moving objects and analyzing their behavior. However, when detecting moving objects in real-time camera images, it is difficult to detect movement correctly under noise, changing lighting conditions, and occlusion. This paper proposes a real-time security system that uses a dual camera and an ultrasonic sensor as an improved method for correctly detecting the movement of a specific object. We improve the tracking performance by using the ultrasonic sensor as a measurement of position change, and verify through experiments that the information exchanged between the overhead camera and the front camera allows a specific object to be traced continuously. The experimental results show that the object recognition rate was 97.4% and that correct tracking could be maintained even when the object was occluded.
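
A toy sketch of keeping a track alive through occlusion by falling back to the ultrasonic range measurement is shown below; the fusion rule and coordinate handling are illustrative, not the paper's dual-camera implementation.

```python
def track_position(cam_detection, ultra_range, last_state):
    """Keep a track alive across occlusions: use the camera fix when the
    object is visible, otherwise fall back to the ultrasonic range along the
    last known bearing (illustrative fusion rule, not the paper's method)."""
    if cam_detection is not None:                  # camera sees the object
        return {"pos": cam_detection, "source": "camera"}
    if ultra_range is not None and last_state is not None:
        # Scale the last known direction to the newly measured range.
        x, y = last_state["pos"]
        norm = (x * x + y * y) ** 0.5 or 1.0
        return {"pos": (x / norm * ultra_range, y / norm * ultra_range),
                "source": "ultrasonic"}
    return last_state                              # nothing new: coast

# Example: the object is occluded in the second step, so the ultrasonic
# sensor carries the track until the camera reacquires it.
state = None
for cam, rng in [((1.0, 2.0), 2.2), (None, 2.6), ((1.3, 2.4), 2.7)]:
    state = track_position(cam, rng, state)
    print(state["source"], tuple(round(v, 2) for v in state["pos"]))
```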