• Title/Summary/Keyword: Camera sensor


A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 자동추적장치에 관한 연구)

  • 전진환;조택동;양상민
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1105-1109
    • /
    • 1995
  • A CCD camera integrated into a vision system was used to realize an automatic seam-tracking system, and the 3-D information needed to generate the torch path was obtained using a laser slit beam. An Adaptive Hough transformation was used to extract the laser stripe and obtain the welding-specific point. Although the basic Hough transformation takes too much time for on-line image processing, it tends to be robust to noise such as spatter. For that reason, it was extended into the Adaptive Hough transformation to give an on-line processing ability for locating the welding-specific point. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled so as to acquire the minimum image data needed for sensing the weld line, hence the image processing time is reduced. A fuzzy controller is adopted to control the camera angle.

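A minimal sketch of the stripe-extraction step, using OpenCV's probabilistic Hough transform; the adaptive part of the paper's transform (narrowing the parameter search around the previous frame's result so it runs on-line) is only indicated by a comment, and the file name and threshold value are assumptions:

```python
import cv2
import numpy as np

# Hypothetical input: a grayscale frame containing the laser stripe.
frame = cv2.imread("stripe.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "stripe.png not found"

# Isolate the bright stripe; the threshold value is an assumption.
_, binary = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)

# A probabilistic Hough transform stands in for the paper's Adaptive
# Hough transformation, which narrows the (rho, theta) search around
# the previous frame's result so it can run on-line.
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)

if lines is not None:
    # The welding-specific point would be located where the detected
    # stripe segments break; here we just report the first segment.
    x1, y1, x2, y2 = lines[0][0]
    print("stripe segment:", (x1, y1), (x2, y2))
```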

Forward Collision Warning System based on Radar driven Fusion with Camera (레이더/카메라 센서융합을 이용한 전방차량 충돌경보 시스템)

  • Moon, Seungwuk;Moon, Il Ki;Shin, Kwangkeun
    • Journal of Auto-vehicle Safety Association
    • /
    • v.5 no.1
    • /
    • pp.5-10
    • /
    • 2013
  • This paper describes a Forward Collision Warning (FCW) system based on radar-driven fusion with a camera. The objective of the FCW system is to provide an appropriate alert that satisfies the US-NCAP evaluation scenarios and driver acceptance. For this purpose, this paper proposes a data fusion algorithm and a collision warning algorithm. The data fusion algorithm generates fusion-target information depending on the confidence of the camera sensor. The collision warning algorithm calculates warning indexes and determines an appropriate alert timing using analysis results from manual driving data. The FCW system with the proposed data fusion and collision warning algorithms was evaluated in US-NCAP scenarios and real-road driving. It is shown that the proposed FCW system can improve the accuracy of the alarm timing and reduce false alarms on real roads.
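
The abstract does not give the paper's warning indexes, so the sketch below uses a plain time-to-collision (TTC) index as a stand-in; the function name and threshold are assumptions, not the alert timing tuned from manual driving data:

```python
def ttc_alert(range_m: float, range_rate_mps: float,
              ttc_threshold_s: float = 2.5) -> bool:
    """Fire an alert when time-to-collision drops below a threshold.

    range_m        : distance to the fused forward target (m)
    range_rate_mps : closing rate, negative when approaching (m/s)
    The 2.5 s threshold is an assumption, not the paper's tuned value.
    """
    if range_rate_mps >= 0.0:        # not closing: no collision course
        return False
    ttc = range_m / -range_rate_mps  # seconds until contact
    return ttc < ttc_threshold_s

# Example: 30 m ahead, closing at 15 m/s -> TTC = 2.0 s -> alert fires.
print(ttc_alert(30.0, -15.0))  # True
```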

Implementation of a Dashcam System using a Rotating Camera (회전 카메라를 이용한 블랙박스 시스템 구현)

  • Kim, Kiwan;Koo, Sung-Woo;Kim, Doo Yong
    • Journal of the Semiconductor & Display Technology
    • /
    • v.19 no.4
    • /
    • pp.34-38
    • /
    • 2020
  • In this paper, we implement a dashcam system capable of shooting 360 degrees using a Raspberry Pi, shock sensors, distance sensors, and a camera rotated by a servo motor. If the distance sensor detects an object approaching the vehicle, the camera rotates to record a video. In the event of an external shock, videos and images are stored on a server so that the cause of the vehicle's accident can be analyzed and the user is prevented from forging or tampering with them. We also implement functions that transmit a message with the location and intensity of the impact when an accident occurs and send the vehicle information to an insurance authority by linking the system with a smart device. The advantage is that the authority can analyze the transmitted message and provide accident-handling information, improving the user's safety and convenience.
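
A rough sketch of the trigger-and-rotate logic on a Raspberry Pi, assuming the gpiozero library and made-up GPIO pin numbers; the real system would pick a bearing from several distance sensors before rotating:

```python
from gpiozero import DistanceSensor, Servo

# GPIO pin numbers are assumptions for illustration.
sensor = DistanceSensor(echo=24, trigger=23, max_distance=2.0)
servo = Servo(18)

def on_object_approach() -> None:
    """Swing the camera toward an approaching object and record.

    The paper's system selects a bearing from its distance sensors;
    this sketch only shows the single trigger-and-rotate step.
    """
    if sensor.distance < 0.5:   # metres; trigger radius is an assumption
        servo.value = 1.0       # rotate the mount toward the object
        # start_recording() would be called here (hypothetical helper)

on_object_approach()
```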

Development of Camera Module for Vehicle Safety Support (차량 안전 지원용 카메라 모듈 개발)

  • Shin, Seong-Yoon;Cho, Seung-Pyo;Shin, Kwang-Seong;Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.672-673
    • /
    • 2022
  • In this paper, we discuss a camera that is fixed to the same view as a TOF sensor and can be installed horizontally along the vehicle's direction of travel. The camera uses a 1280×720 resolution to improve object recognition accuracy, outputs images at 30 fps, and can accept a wide-angle fisheye lens of 180° or more.

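For illustration, a capture configuration matching the stated output format, assuming an OpenCV-accessible camera at device index 0; correcting the 180° fisheye distortion (e.g. with cv2.fisheye) is not shown:

```python
import cv2

cap = cv2.VideoCapture(0)   # device index 0 is an assumption

# Match the module's stated output: 1280x720 at 30 fps.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

ok, frame = cap.read()
if ok:
    print("frame shape:", frame.shape)   # (720, 1280, 3) if supported
cap.release()
```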

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.298-304
    • /
    • 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (Φ, Δ) and a camera calibration matrix (K). The LRF disparity map can be generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions can be compensated using the LRF disparity map. The disparity map resulting from this compensation process is the multi-sensor fusion disparity map, which is used to refine the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.

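The projection step described above can be sketched as follows, with the extrinsics written as a rotation R and translation t standing in for the paper's (Φ, Δ); the intrinsic matrix values are made up for the toy example:

```python
import numpy as np

def project_lrf_points(points_lrf: np.ndarray, R: np.ndarray,
                       t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 LRF points into image pixel coordinates.

    R, t : extrinsic rotation/translation from camera-LRF calibration
    K    : 3x3 camera intrinsic (calibration) matrix
    Returns Nx2 pixel coordinates.
    """
    cam = R @ points_lrf.T + t.reshape(3, 1)   # LRF frame -> camera frame
    pix = K @ cam                              # perspective projection
    return (pix[:2] / pix[2]).T                # divide by depth

# Toy example with identity extrinsics and a made-up K.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.1, 0.0, 2.0]])              # one point 2 m ahead
print(project_lrf_points(pts, np.eye(3), np.zeros(3), K))  # ~[360, 240]
```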

Development of Vision Algorithm Contents Using an AI Camera Block (AI Camera Block을 사용한 비전 알고리즘 콘텐츠 개발)

  • Lim, Tae Yoon;An, Jae-Yong;Oh, Junhyeok;Kim, Dong-Yeon;Won, JinSub;Hwang, Jun Ho;Do, Youngchae;Woo, Deok Ha;Lee, Seok
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.840-843
    • /
    • 2019
  • As the IoT industry has developed, smart toys that combine conventional toys with IoT technology have been gaining attention. Unlike conventional passive toys, smart toys support interaction between toys and, by using electronic sensors, can provide coding-based content to the children who use them. Existing smart toys stimulate curiosity at first, but interest tends to drop once users become familiar with them. In this paper, to increase the fun factor of existing smart toys and to enable the development of diverse content, we developed new content using an AI camera block that integrates Artificial Intelligence (AI) functions into a smart toy.

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks that modulate input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on inducing failures in obstacle detection by lowering the confidence score of the object recognition model, but they have the limitation that the attack works only on the target model. For attacks on the sensor fusion stage, errors in the vision tasks after fusion can cascade, and this risk needs to be considered. In addition, an attack on LiDAR point cloud data, which is hard to judge visually, makes it difficult to determine whether an attack has occurred. In this study, we propose an image scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. In attack-performance experiments over different scaling sizes, the attack induced fusion errors at an average rate of more than 77%.
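
A minimal sketch of the point scaling operation, assuming the attack simply multiplies the LiDAR coordinates by a factor; the paper's full pipeline (image-scaling algorithm, size sweep, LCCNet evaluation) is not reproduced here:

```python
import numpy as np

def scaling_attack(points: np.ndarray, scale: float) -> np.ndarray:
    """Scale an Nx3 LiDAR point cloud about the origin.

    Coordinates scaled away from 1.0 shift the cloud's projection into
    the camera frame, which should degrade the extrinsics predicted by
    a calibration model such as LCCNet.
    """
    return points * scale

# The 1.1 factor is an illustrative size; the paper sweeps several
# sizes and reports fusion errors induced at an average rate over 77%.
cloud = np.random.rand(1000, 3) * 50.0     # stand-in point cloud
attacked = scaling_attack(cloud, 1.1)
```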

Map Building Based on Sensor Fusion for Autonomous Vehicle (자율주행을 위한 센서 데이터 융합 기반의 맵 생성)

  • Kang, Minsung;Hur, Soojung;Park, Ikhyun;Park, Yongwan
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.6
    • /
    • pp.14-22
    • /
    • 2014
  • An autonomous vehicle requires technology for generating maps by recognizing the surrounding environment. The vehicle's environment can be recognized using distance information from a 2D laser scanner and color information from a camera. Such sensor information is used to generate 2D or 3D maps. A 2D map is used mostly for generating routes, because it contains information about only one plane. In contrast, a 3D map also contains height values, and can therefore be used not only for generating routes but also for finding the space accessible to the vehicle. Nevertheless, an autonomous vehicle using 3D maps has difficulty recognizing its environment in real time. Accordingly, this paper proposes a technique for generating 2D maps that guarantees real-time recognition: it keeps only the color information, removing the height values from 3D maps generated by fusing 2D laser scanner and camera data.
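
A minimal sketch of the flattening step, assuming the 3D map is an array of colored points; the grid size and cell resolution are made-up values:

```python
import numpy as np

def flatten_to_2d(points: np.ndarray, colors: np.ndarray,
                  cell: float = 0.1, size: int = 200) -> np.ndarray:
    """Drop height values from a colored 3D map to get a 2D color map.

    points : Nx3 (x, y, z) fused laser/camera points
    colors : Nx3 RGB values sampled from the camera image
    cell   : grid resolution in metres; cell and size are assumptions
    """
    grid = np.zeros((size, size, 3), dtype=np.uint8)
    ix = (points[:, 0] / cell).astype(int) + size // 2
    iy = (points[:, 1] / cell).astype(int) + size // 2
    ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
    grid[iy[ok], ix[ok]] = colors[ok]   # z (height) is simply discarded
    return grid
```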

A Method for Quantitative Measurement of Lateral Flow Immunoassay Using Color Camera (컬러 카메라를 이용한 측면유동 면역 어세이 정량분석 방법)

  • Park, Jongwon
    • Journal of Biomedical Engineering Research
    • /
    • v.35 no.1
    • /
    • pp.1-7
    • /
    • 2014
  • Among semi-quantitative or fully quantitative lateral flow assay readers, image sensor-based instruments have been widely used because of their simple setup, cheap sensor price, and compact equipment size. In all previous approaches, monochrome CCD or CMOS cameras were used for lateral flow assay imaging, in which the overall intensities of all colors were considered to estimate the analyte content, even though the analyte-related color information is limited to a narrow wavelength range. In the present work, we introduce a color CCD camera as the sensor, together with a color decomposition method, to improve the sensitivity of a quantitative biosensor system based on the lateral flow assay. The proposed setup and image processing method were first applied to quantify imitatively dispensed particles on the surface of a porous membrane, and the measurement result was compared with that of a monochrome CCD. A compensation method was also proposed for different illumination conditions. Finally, the color decomposition method was applied to a commercially available lateral flow immunochromatographic assay for the diagnosis of myocardial infarction. The measurement sensitivity using the color image sensor is significantly improved: the slopes of the linear curve fits are enhanced from 0.0026 to 0.0040 and from 0.0802 to 0.1141 for myoglobin and creatine kinase (CK)-MB detection, respectively.
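
A minimal sketch of the color decomposition idea, assuming a reddish test line so the green channel carries most of the absorbance contrast; the file name and region-of-interest bounds are placeholders:

```python
import cv2

# Hypothetical strip image; OpenCV loads it in BGR channel order.
img = cv2.imread("strip.png")
assert img is not None, "strip.png not found"
blue, green, red = cv2.split(img)

# A reddish test line absorbs mostly in the green band, so the green
# channel is used; the ROI bounds are placeholders for the test line.
roi = green[40:60, 10:110]
signal = 255 - roi.mean()   # darker line -> larger signal
print("test-line signal:", signal)
```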

Cylindrical Object Recognition using Sensor Data Fusion (센서데이터 융합을 이용한 원주형 물체인식)

  • Kim, Dong-Gi;Yun, Gwang-Ik;Yun, Ji-Seop;Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.8
    • /
    • pp.656-663
    • /
    • 2001
  • This paper presents a sensor fusion method to recognize a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern of light onto the object surface. The 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses an ultrasonic transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates (a matched filter). The distance of flight is calculated by simply multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and radius of cylindrical objects, we use statistical sensor fusion. Experimental results show that the fused data increase the reliability of the object recognition.

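A minimal sketch of the matched-filter time-of-flight estimate described above; the sampling rate and template are assumed inputs, and whether the distance must be halved for a round trip depends on the transducer setup:

```python
import numpy as np

def tof_matched_filter(rx: np.ndarray, template: np.ndarray,
                       fs: float, c: float = 343.0):
    """Estimate time of flight by peak cross-correlation.

    rx       : received ultrasonic signal, sampled at fs Hz
    template : stored pulse template (the matched filter)
    Returns (distance_m, peak_amplitude); the peak amplitude is what
    the paper uses to estimate the face angle to the object.
    """
    corr = np.correlate(rx, template, mode="valid")
    lag = int(np.argmax(np.abs(corr)))
    tof = lag / fs                    # seconds from start of capture
    distance = tof * c                # per the abstract; halve this if
                                      # Tx/Rx are co-located (round trip)
    return distance, float(np.abs(corr[lag]))
```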