• Title/Summary/Keyword: 2d laser sensor


Laser 거리센서를 이용한 PCB에서의 납 도포상태검사 (Solder Paste Inspection of PCB using Laser Sensor)

  • 오승용;최경진;이용현;박종국
    • 대한전기학회:학술대회논문집
    • /
    • 대한전기학회 2003년도 학술회의 논문집 정보 및 제어부문 A
    • /
    • pp.291-294
    • /
    • 2003
  • In this paper, 2D and 3D inspection algorithms for printed solder paste on a PCB are introduced. The aim of the inspection is to detect errors such as excess solder, insufficient solder, and missing solder. A laser distance sensor is used for the inspection. For the 2D inspection, a laser image created by normalizing the laser data to the range 0-255 is used; the reference image is generated from the Gerber file, and image-processing algorithms carry out the 2D inspection. By adding the thickness of the metal stencil to the laser image, the solder volume can be calculated and the 3D inspection carried out.

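The 2D step above reduces raw laser heights to an 8-bit image, and the 3D step integrates height plus stencil thickness into a volume. Below is a minimal sketch of one plausible reading of those two operations; the function names, units, and the pad mask are assumptions, not taken from the paper.

```python
import numpy as np

def height_map_to_laser_image(height_map_mm):
    """Normalize raw laser heights into an 8-bit 'laser image' for the 2D inspection step."""
    h = np.asarray(height_map_mm, dtype=float)
    lo, hi = h.min(), h.max()
    if hi == lo:                                  # perfectly flat data: avoid division by zero
        return np.zeros_like(h, dtype=np.uint8)
    return np.round(255.0 * (h - lo) / (hi - lo)).astype(np.uint8)

def paste_volume_mm3(height_map_mm, pad_mask, stencil_thickness_mm, pixel_area_mm2):
    """Rough paste volume over one pad: integrate (height + stencil thickness) over pad pixels."""
    h = np.asarray(height_map_mm, dtype=float) + stencil_thickness_mm
    return float(h[pad_mask].sum() * pixel_area_mm2)
```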

선편광된 10 GHz 선폭의 1 kW급 20/400-㎛ 이터븀 첨가 광섬유 레이저 (Linearly Polarized 1-kW 20/400-㎛ Yb-doped Fiber Laser with 10-GHz Linewidth)

  • 정예지;정민완;이강인;김태우;김재인;이용수;조준용
    • 한국광학회지
    • /
    • Vol. 32, No. 3
    • /
    • pp.120-125
    • /
    • 2021
  • In this study, a linearly polarized, high-power ytterbium-doped fiber laser with a master oscillator power amplifier (MOPA) architecture was developed for multi-wavelength beam combining. To suppress stimulated Brillouin scattering (SBS), a seed laser with a linewidth of about 10 GHz was implemented by phase modulation with a pseudo-random binary sequence (PRBS) signal whose bit length was optimized, and the seed was amplified through three amplification stages. In the main amplifier, a polarization-maintaining (PM) ytterbium-doped fiber with core and cladding diameters of 20 ㎛ and 400 ㎛, respectively, was used to raise the mode-instability (MI) threshold, and the fiber was coiled in a spiral groove about 9-12 cm in diameter. As a result, a laser output of 1.004 kW was obtained with a slope efficiency of 83.7% with respect to the launched pump power. The beam quality (M²) and polarization extinction ratio (PER) were measured to be 1.12 and 21.5 dB, respectively. Moreover, the peak-intensity ratio between the Rayleigh signal and the SBS signal in the backward spectrum was observed to be 2.36 dB, confirming that SBS was mitigated in the laser. In addition, no degradation of slope efficiency or beam quality was observed as the output power increased, confirming that mode instability did not occur.
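The PRBS phase modulation used to broaden the seed linewidth can be illustrated with a linear-feedback shift register. The sketch below generates a PRBS7 stream and a binary (0, π) phase drive; the paper's actual PRBS order, bit rate, and modulation depth are not given here, so these values are purely illustrative.

```python
import math

def prbs7(n_bits, seed=0x7F):
    """PRBS7 bit stream from a linear-feedback shift register (polynomial x^7 + x^6 + 1)."""
    state = seed & 0x7F
    bits = []
    for _ in range(n_bits):
        newbit = ((state >> 6) ^ (state >> 5)) & 1   # feedback taps at bits 7 and 6
        state = ((state << 1) | newbit) & 0x7F
        bits.append(newbit)
    return bits

# Binary (0, pi) phase drive derived from the bit stream for the seed's phase modulator.
phase_drive = [0.0 if b == 0 else math.pi for b in prbs7(128)]
```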

신호세기를 이용한 2차원 레이저 스캐너 기반 노면표시 분류 기법 (Road marking classification method based on intensity of 2D Laser Scanner)

  • 박성현;최정희;박용완
    • 대한임베디드공학회논문지
    • /
    • Vol. 11, No. 5
    • /
    • pp.313-323
    • /
    • 2016
  • With the development of autonomous vehicles, there has been active research on advanced driver-assistance systems that detect road markings using vision sensors and 3D laser scanners. However, a vision sensor has the weakness that detection is difficult under severe illumination variance, such as at night, inside a tunnel, or in a shaded area, and the processing time is long because of the large amount of data from both the vision sensor and the 3D laser scanner. Accordingly, this paper proposes a road-marking detection and classification method that uses a single 2D laser scanner. The method detects and classifies road markings based on accumulated distance and intensity data acquired by the 2D laser scanner. Experiments with a real autonomous vehicle in a real environment showed that computation time decreased compared with the 3D laser scanner-based method, demonstrating the feasibility of road-marking type classification with a single 2D laser scanner.
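Painted markings return much higher intensity than bare asphalt, which is the cue this entry exploits. Here is a minimal sketch of intensity-based labeling of accumulated scan points; the fixed threshold and the array layout are assumptions, and the paper derives its own classification criterion.

```python
import numpy as np

def split_marking_points(xy, intensity, intensity_threshold=0.6):
    """Label accumulated 2D-laser ground returns as road marking vs. asphalt by intensity.
    xy: (N, 2) accumulated points, intensity: (N,) reflectance scaled to [0, 1]."""
    xy = np.asarray(xy, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    is_marking = intensity > intensity_threshold   # paint reflects far more than asphalt
    return xy[is_marking], xy[~is_marking]
```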

CALOS : 주행계 추정을 위한 카메라와 레이저 융합 (CALOS : Camera And Laser for Odometry Sensing)

  • 복윤수;황영배;권인소
    • 로봇학회논문지
    • /
    • Vol. 1, No. 2
    • /
    • pp.180-187
    • /
    • 2006
  • This paper presents a new sensor system, CALOS, for motion estimation and 3D reconstruction. A 2D laser sensor provides accurate depth information within a single plane, not for the whole 3D structure. Conversely, CCD cameras provide a projected image of the whole 3D scene, but no depth. To overcome these limitations, we combine the two types of sensors: the laser sensor and the CCD cameras. We develop a motion estimation scheme appropriate for this sensor system. In the proposed scheme, the motion between two frames is estimated using three points from the scan data and their corresponding image points, and is then refined by non-linear optimization. We validate the accuracy of the proposed method by 3D reconstruction using real images. The results show that the proposed system can be a practical solution for motion estimation as well as 3D reconstruction.

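The abstract describes estimating motion from three scan points and their image correspondences and then refining it by non-linear optimization. The sketch below is a generic reprojection-error refinement using SciPy; the parameterization and cost are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(pts_laser, pts_image, K, pose0=np.zeros(6)):
    """Refine a 6-DOF motion (rotation vector + translation) by minimizing the
    reprojection error of laser points against their matched image points."""
    P = np.asarray(pts_laser, dtype=float)        # (N, 3) points measured by the 2D scan
    uv = np.asarray(pts_image, dtype=float)       # (N, 2) matched pixel coordinates
    K = np.asarray(K, dtype=float)                # 3x3 camera intrinsic matrix

    def residuals(pose):
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        cam = P @ R.T + pose[3:]                  # transform points into the camera frame
        proj = cam @ K.T
        return (proj[:, :2] / proj[:, 2:3] - uv).ravel()

    return least_squares(residuals, pose0).x
```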

3D Spreader Position Information by the CCD Cameras and the Laser Distance Measuring Unit for ATC

  • Bae, Dong-Suk;Lee, Jung-Jae;Lee, Bong-Ki;Lee, Jang-Myung
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2004년도 ICCAS
    • /
    • pp.1679-1684
    • /
    • 2004
  • This paper introduces a novel approach that provides three-dimensional information on the movement of a spreader by using two CCD cameras and a laser distance sensor, which enables an ALS (Automatic Landing System) to be used for yard cranes at a harbor. Until now, 2D laser scanners or laser distance-measuring units have been used as corner detectors for the geometric matching between the spreader and a container; these provide only 2D information, which is not enough for the accurate and fast ALS required today. In addition to this deficiency in performance, such systems are too expensive to be adopted widely for the ALS. To overcome these defects, a novel method is proposed that acquires the three-dimensional information on the movement of a spreader, including its skew and sway angles, from two CCD cameras and a laser distance sensor. To show the efficiency of the proposed algorithm, real experiments demonstrate the improvement in distance-measurement accuracy obtained by fusing the sensory information of the CCD cameras and the laser distance sensor.

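One common way to combine a camera-derived distance with a laser reading, as in this entry's fusion experiment, is inverse-variance weighting. The sketch below is a generic fusion rule for illustration only; the paper's actual fusion scheme and noise models are not reproduced here.

```python
def fuse_distances(d_camera, var_camera, d_laser, var_laser):
    """Combine two distance estimates by weighting each with the inverse of its variance."""
    w_cam, w_las = 1.0 / var_camera, 1.0 / var_laser
    fused = (w_cam * d_camera + w_las * d_laser) / (w_cam + w_las)
    return fused, 1.0 / (w_cam + w_las)           # fused estimate and its variance

# Example: a coarse stereo estimate (variance 0.25 m^2) refined by a precise laser reading.
print(fuse_distances(12.4, 0.25, 12.15, 0.01))
```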

평면 구조물의 단일점 일치를 이용한 2차원 레이저 거리감지센서의 자동 캘리브레이션 (Autonomous Calibration of a 2D Laser Displacement Sensor by Matching a Single Point on a Flat Structure)

  • 정지훈;강태선;신현호;김수종
    • 제어로봇시스템학회논문지
    • /
    • Vol. 20, No. 2
    • /
    • pp.218-222
    • /
    • 2014
  • In this paper, we introduce an autonomous calibration method for a 2D laser displacement sensor (e.g. a laser vision sensor or laser range finder) that matches a single point on a flat structure. Many arc-welding robots carry a 2D laser displacement sensor to expand their applications by recognizing their environment (e.g. the base metal and the seam). In such systems, the sensing data must be transformed into the robot's coordinates, and the geometric relation (i.e. rotation and translation) between the robot coordinates and the sensor coordinates must be known for this transformation. Calibration means inferring this geometric relation between the sensor and the robot. Generally, matching three or more points is required to infer it. We introduce a novel method that calibrates using only one point match, together with a specific flat structure (a circular hole) that makes this possible. By moving the robot to a specific pose, the rotation component of the calibration result is fixed as a constant, so only a single point is needed. The flat structure can be installed easily at a manufacturing site because it has almost no volume (i.e. it is essentially a 2D structure). The calibration process is fully autonomous and needs no manual operation. The robot carrying the sensor moves to the specific pose by sensing features of the circular hole, such as the chord length and the chord's center position. We show the precision of the proposed method through repeated experiments in various situations. Furthermore, we applied the result of the proposed method to sensor-based seam tracking with a robot and report the deviation of the robot's TCP (Tool Center Point) trajectory; this experiment confirms that the proposed method ensures precision.
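Once the rotation between the sensor and robot frames is fixed (by moving the robot to a specific pose, as the abstract describes), a single matched point determines the translation directly. A short sketch of that step and of the chord features mentioned above; the frame conventions and the endpoint choice are assumptions.

```python
import numpy as np

def translation_from_single_point(p_robot, p_sensor, R_sensor_to_robot):
    """With the sensor-to-robot rotation R already fixed, one matched point gives the
    translation directly:  p_robot = R @ p_sensor + t  =>  t = p_robot - R @ p_sensor."""
    return np.asarray(p_robot, float) - np.asarray(R_sensor_to_robot, float) @ np.asarray(p_sensor, float)

def chord_features(scan_xy_on_hole):
    """Chord center and length where the scan line crosses the circular hole,
    assuming the first and last returned points are the chord endpoints."""
    pts = np.asarray(scan_xy_on_hole, dtype=float)
    a, b = pts[0], pts[-1]
    return (a + b) / 2.0, float(np.linalg.norm(b - a))
```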

자율주행을 위한 센서 데이터 융합 기반의 맵 생성 (Map Building Based on Sensor Fusion for Autonomous Vehicle)

  • 강민성;허수정;박익현;박용완
    • 한국자동차공학회논문집
    • /
    • Vol. 22, No. 6
    • /
    • pp.14-22
    • /
    • 2014
  • An autonomous vehicle requires technology for generating maps by recognizing its surrounding environment. The vehicle's environment can be recognized using distance information from a 2D laser scanner and color information from a camera, and this sensor information is used to generate 2D or 3D maps. A 2D map is used mostly for generating routes, because it contains information about only a single cross-section. In contrast, a 3D map also contains height values, so it can be used not only for generating routes but also for identifying the space the vehicle can access. Nevertheless, an autonomous vehicle using 3D maps has difficulty recognizing its environment in real time. Accordingly, this paper proposes a technology for generating 2D maps that guarantees real-time recognition. The proposed technology uses only the color information that remains after removing the height values from 3D maps generated by fusing 2D laser scanner and camera data.
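A minimal sketch of the final flattening step described above: project fused, colored 3D points onto a 2D grid and keep only the color. The cell size, grid extent, and the overwrite policy are placeholders, not the paper's values.

```python
import numpy as np

def flatten_to_2d_color_map(points_xyz, colors_rgb, cell_size=0.1, grid_shape=(400, 400)):
    """Project fused (laser + camera) 3D points onto a 2D grid, keeping only color."""
    pts = np.asarray(points_xyz, dtype=float)
    rgb = np.asarray(colors_rgb, dtype=np.uint8)
    grid = np.zeros(grid_shape + (3,), dtype=np.uint8)
    ix = np.floor(pts[:, 0] / cell_size).astype(int) + grid_shape[0] // 2
    iy = np.floor(pts[:, 1] / cell_size).astype(int) + grid_shape[1] // 2
    ok = (ix >= 0) & (ix < grid_shape[0]) & (iy >= 0) & (iy < grid_shape[1])
    grid[ix[ok], iy[ok]] = rgb[ok]                # later points overwrite earlier ones
    return grid
```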

CCD카메라와 레이저 센서를 조합한 지능형 로봇 빈-피킹에 관한 연구 (A Study on Intelligent Robot Bin-Picking System with CCD Camera and Laser Sensor)

  • 김진대;이재원;신찬배
    • 한국정밀공학회지
    • /
    • Vol. 23, No. 11
    • /
    • pp.58-67
    • /
    • 2006
  • Because of the varied signal processing and complicated mathematical analysis involved, it is not easy to accomplish 3D bin-picking with a non-contact sensor. Solving these difficulties requires a reliable signal-processing algorithm and a good sensing device. In this research, a 3D laser scanner and a CCD camera are used as the sensing devices, and with them we develop a two-step bin-picking method and a reliable algorithm for recognizing 3D bin objects. In the proposed bin-picking, the problem is first reduced to 2D initial recognition with the CCD camera, followed by 3D pose detection with the laser scanner. To obtain correct motion in the robot base frame, hand-eye calibration between the robot's end effector and the sensing devices must also be carried out; we examine an auto-calibration technique in this sensor-calibration step. A new thinning algorithm and a constrained Hough transform are also studied for robustness in real-environment use. The experimental results show robust bin-picking operation on non-aligned 3D hole objects.
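For the 2D initial-recognition step, circular hole candidates can be located with a standard Hough circle transform, for example via OpenCV. The parameters below are placeholders, and the paper's thinning algorithm and constrained Hough transform are not reproduced.

```python
import cv2
import numpy as np

def find_candidate_holes(gray_image):
    """Locate circular hole candidates in the CCD image with a Hough circle transform."""
    blurred = cv2.GaussianBlur(gray_image, (5, 5), 1.5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=40, minRadius=10, maxRadius=80)
    if circles is None:
        return []
    return [(int(x), int(y), int(r)) for x, y, r in np.round(circles[0])]
```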

Capturing Distance Parameters Using a Laser Sensor in a Stereoscopic 3D Camera Rig System

  • Chung, Wan-Young;Ilham, Julian;Kim, Jong-Jin
    • 센서학회지
    • /
    • Vol. 22, No. 6
    • /
    • pp.387-392
    • /
    • 2013
  • Camera rigs for shooting 3D video are classified as manual, motorized, or fully automatic. Even with an automatic camera rig, the process of stereoscopic 3D (S3D) video capture is very complex and time-consuming. One of the key time-consuming operations is capturing the distance parameters: the near distance, far distance, and convergence distance. Traditionally these distances are measured with a tape measure or by triangular indirect measurement, and both methods take a long time for every scene shot. In our study, a compact laser distance-sensing system with long-range sensitivity was developed. The system is small enough to be installed on top of a camera, and its measuring accuracy is within 2% even at a range of 50 m. With this laser distance-sensing system, the shooting time of an automatic camera rig can be reduced significantly, to less than a minute.
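The "triangular indirect measurement" that the laser sensor replaces can be illustrated with basic plane geometry: sight the subject from both ends of a known baseline and intersect the rays. A small sketch under that assumption; the example numbers are arbitrary.

```python
import math

def triangulated_distance(baseline_m, angle_a_deg, angle_b_deg):
    """Perpendicular distance to a subject sighted from both ends of a known baseline:
    d = baseline * sin(A) * sin(B) / sin(A + B)."""
    a, b = math.radians(angle_a_deg), math.radians(angle_b_deg)
    return baseline_m * math.sin(a) * math.sin(b) / math.sin(a + b)

# Example: a 2 m baseline with 80-degree sight angles at both ends gives roughly 5.67 m.
print(round(triangulated_distance(2.0, 80.0, 80.0), 2))
```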

멀티 라인 레이저 비전 센서를 이용한 고속 3차원 계측 및 모델링에 관한 연구 (Development of multi-line laser vision sensor and welding application)

  • 성기은;이세헌
    • 한국정밀공학회:학술대회논문집
    • /
    • 한국정밀공학회 2002년도 춘계학술대회 논문집
    • /
    • pp.169-172
    • /
    • 2002
  • A laser vision sensor measures range data using a laser light source. Such a sensor generally uses a patterned laser shaped as a single line, but a single-line sensor cannot satisfy the trend toward faster and more precise processing. The sensor's sampling rate increases as the image-processing time is reduced; however, the sampling rate cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple range profiles can be measured from one image. With a camera of the same frame rate, the number of 2D range profiles per second is directly proportional to the number of laser lines; for example, a vision sensor using 5 laser lines can sample 150 profiles per second under the best conditions.

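The closing arithmetic, that the profile rate scales with the number of projected lines, can be written down directly; the 30 fps camera limit and the 5-line example come from the abstract above.

```python
def profile_rate(camera_fps, num_laser_lines):
    """Each camera frame yields one range profile per projected laser line,
    so the profile rate scales linearly with the number of lines."""
    return camera_fps * num_laser_lines

print(profile_rate(30, 1))   # 30 profiles/s for a conventional single-line sensor
print(profile_rate(30, 5))   # 150 profiles/s, the abstract's best-case figure for 5 lines
```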