• Title/Summary/Keyword: Laser range sensor


A Distance Measurement System Using a Laser Pointer and a Monocular Vision Sensor (레이저포인터와 단일카메라를 이용한 거리측정 시스템)

  • Jeon, Yeongsan;Park, Jungkeun;Kang, Taesam;Lee, Jeong-Oog
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.41 no.5
    • /
    • pp.422-428
    • /
    • 2013
  • Recently, many unmanned aerial vehicle (UAV) studies have focused on small UAVs, because they are cost-effective and suitable for dangerous indoor environments where human entry is limited. Map building through distance measurement is a key technology for the autonomous flight of small UAVs. In much research on unmanned systems, distance is measured using laser range finders or stereo vision sensors. Although a laser range finder provides accurate distance measurements, it has the disadvantage of high cost. Calculating distance with a stereo vision sensor is straightforward; however, the sensor is large and heavy, which is unsuitable for small UAVs with limited payload. This paper suggests a low-cost distance measurement system using a laser pointer and a monocular vision sensor. A method to measure distance with the suggested system is explained, and map-building experiments are conducted with these distance measurements. The experimental results are compared to the actual data, and the reliability of the suggested system is verified.
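The triangulation behind such a laser-pointer/monocular-camera rangefinder can be sketched as follows. This is a minimal illustration, not the paper's method: the function name and all calibration constants (baseline, angular resolution, offset) are hypothetical values that a real rig would have to be calibrated for.

```python
import math

# Hypothetical calibration constants -- measure these for a real rig.
BASELINE_M = 0.06        # lateral offset between laser and camera axis (m)
RAD_PER_PIXEL = 0.0011   # angular resolution of the camera (rad/pixel)
ANGLE_OFFSET = 0.0005    # residual laser/camera misalignment (rad)

def distance_from_dot(pixels_from_center: float) -> float:
    """Triangulate range from the laser dot's pixel offset relative to
    the image center: the farther the target, the closer the dot sits
    to the center, so distance falls as the offset grows."""
    theta = pixels_from_center * RAD_PER_PIXEL + ANGLE_OFFSET
    return BASELINE_M / math.tan(theta)
```

With this geometry the dot's pixel offset alone determines range, which is why a single camera plus a cheap laser pointer can stand in for a laser range finder at short distances.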

Design of range measurement systems using a sonar and a camera (초음파 센서와 카메라를 이용한 거리측정 시스템 설계)

  • Moon, Chang-Soo;Do, Yong-Tae
    • Journal of Sensor Science and Technology
    • /
    • v.14 no.2
    • /
    • pp.116-124
    • /
    • 2005
  • In this paper, range measurement systems are designed using an ultrasonic sensor and a camera. An ultrasonic sensor provides the range to a target quickly and simply, but its low resolution is a disadvantage. We tackle this problem by employing a camera. Instead of using a stereoscopic sensor, which is widely used for 3D sensing but requires computationally intensive stereo matching, the range is measured by focusing and by structured lighting. For focusing, a straightforward focus measure named MMDH (min-max difference in histogram) is proposed and compared with existing techniques. In the structured-lighting method, light stripes projected by a beam projector are used; compared to systems using a laser beam projector, the designed system can be constructed easily on a low budget. The system equation is derived by analysing the sensor geometry. A sensing scenario using the designed systems has two steps. First, when better accuracy is required, measurements from ultrasonic sensing and camera focusing are fused by MLE (maximum likelihood estimation). Second, when the target is in a range of particular interest, a range map of the target scene is obtained using the structured-lighting technique. In experiments, the designed systems showed measurement accuracy of up to approximately 0.3 mm.
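For two independent range measurements with Gaussian errors, the MLE fusion the abstract mentions reduces to inverse-variance weighting. The sketch below is a generic illustration of that step (the function name and the variance values are illustrative, not taken from the paper):

```python
def fuse_mle(z_sonar, var_sonar, z_focus, var_focus):
    """Maximum-likelihood fusion of two independent Gaussian range
    measurements: weight each by the inverse of its variance.
    Returns the fused estimate and its (smaller) variance."""
    w_sonar, w_focus = 1.0 / var_sonar, 1.0 / var_focus
    fused = (w_sonar * z_sonar + w_focus * z_focus) / (w_sonar + w_focus)
    fused_var = 1.0 / (w_sonar + w_focus)
    return fused, fused_var
```

The fused variance is always smaller than either input variance, which is the formal sense in which combining the coarse ultrasonic reading with the camera's focus measure improves accuracy.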

Real Time Linux System Design (리얼 타임 리눅스 시스템 설계)

  • Lee, Ah Ri;Hong, Seon Hack
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.10 no.2
    • /
    • pp.13-20
    • /
    • 2014
  • In this paper, we implemented object scanning with nxtOSEK, an open-source platform. nxtOSEK consists of the leJOS NXJ C/Assembly device driver source code, the TOPPERS/ATK (Automotive Real-Time Kernel) and TOPPERS/JSP real-time operating system source code, which includes the ARM7-specific porting part, and glue code that makes them work together. nxtOSEK provides ANSI C through the GCC tool chain and a C API, and supports real-time multitasking. We experimented with 3D scanning using an ultrasonic sensor and a laser sensor built directly from a laser diode module, measuring the scanned object by obtaining the x, y, and z coordinates of every point it scans. The laser module measures 6 × 10 mm, requires 5 V / 5 mW, and emits laser light at a wavelength of around 650 nm. For detecting the object, we used a beacon detection algorithm: as the laser light swept the object, the photodiode monitored the ambient light in real time at 10 ms intervals. The 3D scanning platform communicates with the host platform via the Bluetooth protocol, and the results are displayed with the DPlot graphic tool. We thereby enhanced the functionality of the 3D scanner, comparing image scanning with the laser sensor modules against the ultrasonic sensor.

Automated texture mapping for 3D modeling of objects with complex shapes --- a case study of archaeological ruins

  • Fujiwara, Hidetomo;Nakagawa, Masafumi;Shibasaki, Ryosuke
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.1177-1179
    • /
    • 2003
  • Recently, ground-based laser profilers have been used to acquire 3D spatial information of archaeological objects. However, it is very difficult to measure complicated objects because of their relatively low resolution. On the other hand, texture mapping can complement the low resolution and generate a 3D model with higher fidelity. But constructing a textured 3D model is very costly: it demands a great deal of labor, the work depends on the editor's experience and skills, and data accuracy can be lost during the editing work. In this research, using a laser profiler and a non-calibrated digital camera, a method is proposed for the automatic generation of a 3D model by integrating these data. First, region segmentation is applied to the laser range data to extract geometric features of the object; information such as plane normal vectors, distances from the sensor, and the sun direction is used in this processing. Next, image segmentation is applied to the digital camera images of the same object. Then, geometrical relations are determined by matching the features extracted from the laser range data with those from the digital camera images. By projecting the digital camera image onto the surface data reconstructed from the laser range image, the 3D texture model is generated automatically.


A Study on Stability Improvement of High Energy Laser Beam Wavefront Correction System

  • Jung, Jongkyu;Lee, Sooman
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.2
    • /
    • pp.1-7
    • /
    • 2018
  • Adaptive optics, which compensates for optical wavefront distortion due to atmospheric turbulence, has recently been used in systems that improve beam quality by eliminating aberrations of a high-power laser beam wavefront. However, unseen modes, which cannot be measured by the wavefront sensor, increase the instability of the wavefront compensator in the adaptive optics system. To improve this instability, a mathematical method that limits the number of singular values is used when generating the command matrix involved in producing the drive commands of the wavefront compensator. In the past, however, the limiting range of the singular values was determined solely by experimental methods. In this paper, we propose a criterion for determining the limiting range of the singular values, using the driving characteristics and the correlation of the wavefront compensator's actuators, and we demonstrate its performance experimentally.
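The singular-value limiting described above amounts to truncating the reciprocal singular values when building the pseudo-inverse command matrix. The sketch below illustrates that idea with a simple condition-number cutoff; the function name, the threshold, and the criterion itself are illustrative stand-ins (the paper derives its limit from the actuators' driving characteristics and correlation instead):

```python
def limit_singular_values(sigmas, max_condition=1e3):
    """Reciprocal singular values used to assemble a pseudo-inverse
    command matrix. Singular values whose ratio to the largest one
    exceeds max_condition correspond to poorly observed (unseen)
    modes; their reciprocals are zeroed so noise in those modes
    cannot drive the actuators unstably."""
    s_max = max(sigmas)
    return [1.0 / s if s_max / s <= max_condition else 0.0
            for s in sigmas]
```

Zeroing a reciprocal removes that mode from the compensator's command entirely, trading a little correction fidelity for stability.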

A Data Fusion Method of Odometry Information and Distance Sensor for Effective Obstacle Avoidance of an Autonomous Mobile Robot (자율이동로봇의 효율적인 충돌회피를 위한 오도메트리 정보와 거리센서 데이터 융합기법)

  • Seo, Dong-Jin;Ko, Nak-Yong
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.57 no.4
    • /
    • pp.686-691
    • /
    • 2008
  • This paper proposes the concept of "virtual sensor data" and its application to real-time obstacle avoidance. The virtual sensor data is a virtual distance that accounts for the movement of the obstacle as well as that of the robot. In practical application, the virtual sensor data is calculated from the odometry data and the range sensor data. It can be used in any method that relies on distance data for collision avoidance. Since the virtual sensor data considers the movement of both the robot and the obstacle, methods utilizing it result in smoother and safer collision-free motion.
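One plausible reading of the "virtual distance" idea is a measured range shrunk by the closing motion of robot and obstacle over a short look-ahead horizon, so an approaching obstacle appears nearer than the raw reading. The sketch below is my own minimal interpretation for illustration, not the paper's actual formulation; every name and value in it is assumed:

```python
def virtual_range(measured_range, robot_speed, obstacle_speed, horizon):
    """A 'virtual' distance: reduce the raw range reading by the
    distance the robot and the obstacle will close over a short
    look-ahead horizon (speeds in m/s toward each other, horizon
    in seconds). Clamped at zero so it stays a valid distance."""
    closing = robot_speed + obstacle_speed  # positive = approaching
    return max(0.0, measured_range - closing * horizon)
```

Because the output is still just a distance, it can be dropped into any distance-based avoidance method unchanged, which matches the abstract's claim of general applicability.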

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capability is advantageous, as the sensors complement and cooperate with each other to obtain better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using the monocular vision, the robot utilizes image features consisting of vertical edge lines from the input camera images, which serve as natural landmark points in the self-localization process. With the laser structured light sensor, it utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the mobile robot, all features from the two sensors are used and fused simultaneously, in terms of information, for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
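The core fusion step can be sketched generically: per-pose likelihoods from each sensor are combined under a conditional-independence assumption, with each likelihood tempered by a predefined reliability weight, then renormalized. This is a schematic illustration only; the function name and the reliability values are assumptions, and the paper's actual reliability functions are determined experimentally.

```python
def bayes_fuse(prior, lik_vision, lik_laser,
               rel_vision=0.6, rel_laser=0.9):
    """Fuse per-pose likelihoods from two sensors into a posterior.
    Each sensor's likelihood is raised to its reliability weight
    (a simple way to down-weight a less trusted sensor), multiplied
    with the prior, and the result is normalized."""
    post = [p * (lv ** rel_vision) * (ll ** rel_laser)
            for p, lv, ll in zip(prior, lik_vision, lik_laser)]
    z = sum(post)
    return [p / z for p in post]
```

When the two sensors agree, the posterior sharpens; when one is unreliable in the current environment, its lower weight keeps it from dragging the estimate off, which is the practical payoff of reliability-weighted fusion.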

Sensor Model Design of Range Sensor Based Probabilistic Localization for the Autonomous Mobile Robot (자율 주행 로봇의 확률론적 자기 위치 추정기법을 위해 거리 센서를 이용한 센서 모델 설계)

  • Kim, Kyung-Rock;Chung, Woo-Jin;Kim, Mun-Sang
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.27-29
    • /
    • 2004
  • This paper presents a sensor model design based on the Monte Carlo Localization method. First, we define the measurement error of each sample using a map-matching method with 2-D laser scanners and a pre-constructed grid map of the environment. Second, samples are assigned probabilities according to their matching errors, using a Gaussian probability density function that accounts for the samples' convergence. Simulation using real environment data shows good localization results with the designed sensor model.
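The weighting step described above can be sketched as follows: each particle's map-matching error is scored with a zero-mean Gaussian density, and the scores are normalized into importance weights. The function names and the sigma value are illustrative assumptions, not taken from the paper:

```python
import math

def sample_weight(matching_error, sigma=0.05):
    """Importance weight of one particle: a zero-mean Gaussian
    density evaluated at its map-matching error, so small errors
    yield large weights (sigma is an assumed tuning parameter)."""
    return (math.exp(-0.5 * (matching_error / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

def normalize(weights):
    """Scale raw weights so they sum to one, ready for resampling."""
    z = sum(weights)
    return [w / z for w in weights]
```

Particles that match the grid map well dominate after normalization, so resampling concentrates the particle set around the true pose.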


Automatic Sweep Flattening for Wavelength Sweeping Laser of SS-OCT (SS-OCT용 파장 스위핑 레이저를 위한 자동 스위프 평탄화)

  • Eom, Jinseob
    • Journal of Sensor Science and Technology
    • /
    • v.26 no.1
    • /
    • pp.44-49
    • /
    • 2017
  • In this paper, automatic sweep flattening for the wavelength-swept laser of an SS-OCT system is implemented. In performance tests on the laser, a 50 nm flat sweeping range, a ±0.5 dB fluctuation range, a required time of 22 s, and 10 mW average optical power were obtained. This shows that the automated process can replace the inconvenient manual operation currently used for polarization control of the sweeping laser. It also eliminates the cost of the optical spectrum analyzer otherwise needed for sweep monitoring.

Human Legs Stride Recognition and Tracking based on the Laser Scanner Sensor Data (레이저센서 데이터융합기반의 복수 휴먼보폭 인식과 추적)

  • Jin, Taeseok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.3
    • /
    • pp.247-253
    • /
    • 2019
  • In this paper, we present a new method for real-time tracking of humans walking around a laser sensor system. The method converts range data in r-θ coordinates to a 2D image in x-y coordinates. Human tracking is then performed using features of the human walking pattern together with the input range data. The laser-sensor-based human tracking method has the advantage of simplicity over conventional methods that extract the human face from vision data. In our method, the problem of estimating the 2D positions and orientations of two walking humans at ankle level is formulated based on a moving-trajectory algorithm. In addition, the proposed tracking system employs an HMM to robustly track humans in case of occlusions. Experimental results using a real system demonstrate the usefulness of the proposed method.
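The r-θ to x-y conversion that precedes the tracking step is standard polar-to-Cartesian projection of a scan. A minimal sketch (the function name and scan parameters are illustrative, not from the paper):

```python
import math

def scan_to_xy(ranges, angle_min, angle_step):
    """Convert one laser scan from polar (r, theta) readings to
    Cartesian x-y points: beam i has angle angle_min + i*angle_step,
    and its range r projects to (r*cos(theta), r*sin(theta))."""
    points = []
    for i, r in enumerate(ranges):
        theta = angle_min + i * angle_step
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

Once the scan lives in x-y coordinates it can be rasterized into a 2D image, after which leg-pair features of the walking pattern can be detected with ordinary image-processing operations.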