• Title/Summary/Keyword: Camera sensor

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.28-34 / 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields-of-view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides distance information between the sensor and nearby objects in the scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking; driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, a depthmap corresponding to the RGB image must be processed and generated. Experimental results are provided to validate the proposed approach.
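
The core registration step described above can be sketched in a few lines: project each LIDAR range sample into the camera image through intrinsics and extrinsics, and write its depth into a depthmap aligned with the RGB image. The calibration values below are illustrative placeholders, not the paper's.

```python
# A minimal sketch of LIDAR-to-camera registration, assuming hypothetical
# intrinsics K and extrinsics (R, t); not the paper's actual calibration.
import numpy as np

K = np.array([[700.0, 0.0, 320.0],    # assumed pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # assumed LIDAR-to-camera rotation
t = np.array([0.0, -0.05, 0.0])       # assumed translation (metres)

def lidar_scan_to_depthmap(angles, ranges, width=640, height=480):
    """Project a planar LIDAR scan (angle/range pairs) into a sparse depthmap."""
    # Polar -> Cartesian in the LIDAR frame (scan plane: y = 0).
    pts = np.stack([ranges * np.cos(angles),
                    np.zeros_like(ranges),
                    ranges * np.sin(angles)], axis=1)
    cam = pts @ R.T + t                # transform into the camera frame
    cam = cam[cam[:, 2] > 0]           # keep points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]        # perspective division
    depth = np.zeros((height, width), dtype=np.float32)
    for (u, v), z in zip(uv, cam[:, 2]):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < width and 0 <= vi < height:
            depth[vi, ui] = z          # sparse depth sample at the image pixel
    return depth
```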

Reduction of Radiated Emission of an Infrared Camera Using a Spread Spectrum Clock Generator

  • Choi, Bongjun;Lee, Yongchun;Yoon, Juhyun;Kim, Eunjun
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.27 no.12 / pp.1097-1104 / 2016
  • It is difficult for an infrared camera to satisfy the RE-102 limit of MIL-STD-461. In particular, UAV electronics often do not use shielded cables, which makes the electromagnetic compatibility standard hard to meet. In the RE-102 test of an IR camera for a UAV, radiated noise exceeding 30 dBuV/m was observed in the range of 50 MHz to 200 MHz. A PCB EM scan showed that the peak noise was caused by harmonics of the digital control clock. By applying a spread spectrum clock generator (SSCG) with a 3 % down-spreading method to the camera control clock, the radiated noise was reduced by up to 22.9 dBuV/m.
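
For intuition, the following small simulation (not from the paper) compares the peak spectral level of a fixed-frequency clock with that of a 3 % down-spread clock; the sample rate, duration, and modulation rate are assumed values.

```python
# A numerical illustration of why a down-spread SSCG lowers peak harmonics:
# modulating the clock frequency smears each harmonic's energy over a band,
# reducing its peak amplitude without changing the average power.
import numpy as np

fs = 1e9                  # simulation sample rate (assumed)
f0 = 50e6                 # nominal control-clock frequency (assumed)
fm = 30e3                 # spread-modulation rate (assumed)
spread = 0.03             # 3 % down-spread, as in the paper
t = np.arange(0, 1e-3, 1 / fs)

tri = 2 * np.abs((t * fm) % 1.0 - 0.5)          # triangular profile in [0, 1]
f_inst = f0 * (1 - spread * tri)                # down-spread instantaneous freq.
clk_ss = np.sign(np.sin(2 * np.pi * np.cumsum(f_inst) / fs))
clk_fix = np.sign(np.sin(2 * np.pi * f0 * t))   # unmodulated reference clock

for name, sig in [("fixed", clk_fix), ("spread", clk_ss)]:
    spec = 20 * np.log10(np.abs(np.fft.rfft(sig)) / len(sig) + 1e-12)
    print(name, "peak spectral level: %.1f dB" % spec.max())
```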

Distance measurement System from detected objects within Kinect depth sensor's field of view and its applications

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.279-282 / 2017
  • The Kinect depth sensor, a depth camera developed by Microsoft as a natural user interface for games, has emerged as a very useful tool in the computer vision field. In this paper, taking advantage of the Kinect depth sensor and its high frame rate, we developed a distance measurement system using the Kinect camera and tested it for unmanned vehicles, which need vision systems that perceive the surrounding environment, as humans do, in order to detect objects in their path. The Kinect depth sensor is used to detect objects in its field of view and to measure the distance between those objects and the vision sensor. Each detected object is verified to determine whether it is a real object or pixel noise, which reduces processing time by ignoring pixels that are not part of a real object. Using depth segmentation techniques together with the OpenCV library for image processing, we identify the objects present within the Kinect camera's field of view and measure their distances to the sensor. Tests show promising results: the system can also serve autonomous vehicles equipped with the Kinect camera as a low-cost range sensor, triggering further processing, depending on the application, once they come within a certain distance of detected objects.
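
As a rough sketch of the pipeline described above (not the authors' code), depth-range segmentation with OpenCV can detect blobs, reject pixel noise by area, and report a median distance per object; the depth format and thresholds below are assumptions.

```python
# A minimal sketch of depth-based object detection and distance measurement,
# assuming a 16-bit depth frame in millimetres such as a Kinect delivers.
import cv2
import numpy as np

def measure_objects(depth_mm, near=500, far=4000, min_area=500):
    """Segment objects in a depth range and return their distances in metres."""
    # Keep only pixels inside the assumed working range of the sensor.
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        if cv2.contourArea(c) < min_area:    # discard pixel noise / tiny blobs
            continue
        blob = np.zeros_like(mask)
        cv2.drawContours(blob, [c], -1, 255, -1)
        d = depth_mm[blob == 255]
        d = d[d > 0]                         # ignore invalid (zero) readings
        if d.size:
            results.append((cv2.boundingRect(c), float(np.median(d)) / 1000.0))
    return results  # [(bounding box, distance in metres), ...]
```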


Implementation and Control of Crack Tracking Robot Using Force Control: Crack Detection by Laser and Camera Sensor Using Neural Network

  • Cho Hyun Taek;Jung Seul
    • Journal of Institute of Control, Robotics and Systems / v.11 no.4 / pp.290-296 / 2005
  • This paper presents the implementation of a crack tracking mobile robot, built to track cracks on the pavement. To track cracks, they must first be detected by the laser and camera sensors: the laser projects a line onto the pavement to detect discontinuities on the surface, and the camera captures an image to find the crack position. The robot is then commanded to follow the crack. To detect the crack position correctly, a neural network is used to minimize the positional error of the detected crack position obtained by the transformation from the 2-dimensional image to 3-dimensional coordinates.
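
The 2-D-to-3-D correction step could look like the following hypothetical sketch, where a small neural network is fitted to calibration pairs of image coordinates and measured pavement coordinates; the data and network size here are invented for illustration.

```python
# A hedged sketch of learning an image-to-world mapping with a small neural
# network; training pairs and architecture are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical calibration data: pixel coordinates of detected crack points
# and the corresponding measured 3-D positions on the pavement (metres).
rng = np.random.default_rng(0)
pixels = rng.random((200, 2)) * [640, 480]
xyz = np.column_stack([pixels[:, 0] * 0.002,   # stand-in ground-truth mapping
                       pixels[:, 1] * 0.002,
                       np.zeros(200)])          # flat pavement assumed

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(pixels, xyz)                    # learn the 2-D -> 3-D transformation

crack_xyz = net.predict([[320, 240]])   # 3-D crack position for an image point
print(crack_xyz)
```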

Pseudo-RGB-based Place Recognition through Thermal-to-RGB Image Translation

  • Seunghyeon Lee;Taejoo Kim;Yukyung Choi
    • The Journal of Korea Robotics Society / v.18 no.1 / pp.48-52 / 2023
  • Many studies have been conducted to ensure that visual place recognition is reliable in various environments, including edge cases. However, existing approaches use visible imaging sensors (RGB cameras), which, as is widely known, are greatly influenced by illumination changes. Thus, in this paper, we use an invisible imaging sensor, a long-wavelength infrared (LWIR) camera, instead of RGB; it is shown to be more reliable in low-light and highly noisy conditions. In addition, although the camera sensor used is an LWIR camera, the thermal image is translated into an RGB image, so the proposed method is highly compatible with existing algorithms and databases. We demonstrate that the proposed method outperforms the baseline method by about 0.19 in recall.
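
Structurally, the pipeline reads as below; the translation network and descriptor extractor are hypothetical stand-ins for the paper's learned models, included only to show why the method stays compatible with RGB-based databases.

```python
# A structural sketch of thermal-to-RGB place recognition; the two stand-in
# functions are placeholders, not the paper's actual networks.
import numpy as np

def thermal_to_rgb(thermal):
    """Stand-in for a learned LWIR-to-RGB translation network (assumption)."""
    return np.repeat(thermal[..., None], 3, axis=2)   # naive placeholder

def global_descriptor(rgb):
    """Stand-in for an RGB place descriptor (e.g. a CNN embedding)."""
    return rgb.mean(axis=(0, 1))                      # toy 3-D descriptor

def recognize(thermal_query, db_descriptors):
    """Translate the thermal query to pseudo-RGB, then match it against an
    existing RGB descriptor database; the translation step is what makes the
    method compatible with RGB-based pipelines."""
    q = global_descriptor(thermal_to_rgb(thermal_query))
    dists = np.linalg.norm(db_descriptors - q, axis=1)
    return int(np.argmin(dists))                      # best-matching place index
```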

Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee;Koo, Jungmo;Kim, Jinki;Myung, Hyun
    • Advances in robotics research / v.2 no.2 / pp.129-140 / 2018
  • With the increasing demand for human pose estimation in areas such as human-computer interaction and human activity recognition, there have been numerous approaches to detecting the 2D poses of people in images more efficiently. Despite many years of research, estimating human poses from images still often fails to produce satisfactory results. In this study, we propose a robust 2D human upper-body pose estimation method using an RGB camera sensor. Our method is efficient and cost-effective, since an RGB camera sensor is economical compared to the more commonly used high-priced sensors. For the estimation of upper-body joint positions, semantic segmentation with a fully convolutional network is exploited: from the acquired RGB images, joint heatmaps are used to accurately estimate the coordinates of each joint. The network architecture is designed to learn and detect the joint locations via a sequential prediction process. The proposed method was tested and validated for efficient estimation of the human upper-body pose, and the results reveal the potential of a simple RGB camera sensor for human pose estimation applications.
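
The heatmap-decoding step the abstract alludes to is commonly implemented as a per-joint argmax; a minimal sketch follows, with the network's output shape assumed.

```python
# Decoding joint coordinates from per-joint heatmaps: the network outputs one
# confidence map per joint, and the joint location is the map's peak.
import numpy as np

def heatmaps_to_joints(heatmaps):
    """heatmaps: (num_joints, H, W) array of per-joint confidence maps."""
    joints = []
    for hm in heatmaps:
        v, u = np.unravel_index(np.argmax(hm), hm.shape)  # peak location
        joints.append((u, v, hm[v, u]))                   # (x, y, confidence)
    return joints

# Usage with a dummy network output: 7 upper-body joints on a 64x48 grid.
fake_output = np.random.rand(7, 64, 48)
print(heatmaps_to_joints(fake_output))
```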

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing (수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법)

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology / v.18 no.3 / pp.226-233 / 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark, for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method takes a distinctive approach to the pose estimation problem: we combine the extrinsic parameters from known and unknown 3-D (three-dimensional) feature points and the inertial estimate of the camera's 6-DOF (degrees of freedom) pose into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to neatly solve the given optimization problem. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
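
The SVD solve at the heart of the method can be sketched as an ordinary least-squares problem; the matrix sizes below are assumptions, since the paper's exact stacking of constraints is not reproduced here.

```python
# Once the feature and inertial constraints are stacked into one linear
# inhomogeneous system A x = b, the pose parameters are recovered with an
# SVD-based least-squares solve. A and b here are illustrative stand-ins.
import numpy as np

def solve_pose_svd(A, b):
    """Minimum-norm least-squares solution of A x = b via the pseudo-inverse."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > 1e-10, 1.0 / s, 0.0)   # guard tiny singular values
    return Vt.T @ (s_inv * (U.T @ b))

A = np.random.rand(12, 6)    # assumed: 12 stacked equations, 6 pose unknowns
x_true = np.random.rand(6)
b = A @ x_true
print(np.allclose(solve_pose_svd(A, b), x_true))  # True: pose recovered
```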

Flicker-Free Spatial-PSK Modulation for Vehicular Image-Sensor Systems Based on Neural Networks

  • Nguyen, Trang;Hong, Chang Hyun;Islam, Amirul;Le, Nam Tuan;Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.8 / pp.843-850 / 2016
  • This paper introduces a novel modulation scheme for vehicular communication that takes advantage of the LED lights already available on a car. Our proposed 2-Phase Shift Keying (2-PSK) is a spatial modulation approach in which a pair of LED light sources on a car (either the rear LEDs or the front LEDs) is used as a transmitter. A typical low-frame-rate camera (no greater than 30 fps), whether global shutter or rolling shutter, can be used as the receiver. The modulation scheme is part of our Image Sensor Communication proposal recently submitted to IEEE 802.15.7r1 (TG7r1). A neural network approach is also applied to improve the performance of LED detection and decoding under noisy conditions. Finally, analysis and experimental results are presented to indicate the performance of our system.
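
A toy illustration of the spatial 2-PSK idea (not the TG7r1 proposal itself): each bit is carried by the relative phase between the two LED waveforms, so the pair's combined intensity stays constant and the light is flicker-free.

```python
# Spatial 2-PSK over an LED pair: a bit flips the relative phase between the
# left and right LED subcarriers; the receiver correlates the two waveforms.
import numpy as np

def modulate(bits, samples_per_bit=100, cycles_per_bit=4):
    t = np.arange(samples_per_bit) / samples_per_bit
    carrier = np.sign(np.sin(2 * np.pi * cycles_per_bit * t))  # square subcarrier
    left, right = [], []
    for b in bits:
        left.append(carrier)
        right.append(carrier if b == 0 else -carrier)  # bit = relative phase
    return np.concatenate(left), np.concatenate(right)

def demodulate(left, right, samples_per_bit=100):
    bits = []
    for i in range(len(left) // samples_per_bit):
        seg = slice(i * samples_per_bit, (i + 1) * samples_per_bit)
        corr = np.dot(left[seg], right[seg])   # correlation sign gives the phase
        bits.append(0 if corr > 0 else 1)
    return bits

l, r = modulate([1, 0, 1, 1])
print(demodulate(l, r))  # [1, 0, 1, 1]
```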

Flip Chip Interconnection Method Applied to Small Camera Module

  • Segawa, Masao;Ono, Michiko;Karasawa, Jun;Hirohata, Kenji;Aoki, Makoto;Ohashi, Akihiro;Sasaki, Tomoaki;Kishimoto, Yasukazu
    • Proceedings of the International Microelectronics And Packaging Society Conference / 2000.10a / pp.39-45 / 2000
  • A small camera module fabricated using bare-chip bonding methods is utilized to realize advanced mobile devices. One of the driving forces is the TOG (Tape On Glass) bonding method, which reduces the packaging size of the image sensor chip. The TOG module is a new, thinner and smaller image sensor module using a flip chip interconnection method with ACP (Anisotropic Conductive Paste). The TOG production process was established by determining the optimum bonding conditions for both the optical glass bonding and the bonding of the image sensor chip to the flexible PCB; bonding conditions with sufficient bonding margins were studied. Another bonding method is the flip chip bonding method for the DSP (Digital Signal Processor) chip. A new ACP was developed to enable a short resin curing time of 10 s. The bonding mechanism of the resin curing method was evaluated using FEM analysis. By using these flip chip bonding techniques, a small camera module was realized.


Development of a Portable Multi-sensor System for Geo-referenced Images and its Accuracy Evaluation

  • Lee, Ji-Hun;Choi, Kyoung-Ah;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.6 / pp.637-643 / 2010
  • In this study, we developed a portable multi-sensor system, consisting of a video camera, a GPS/MEMS IMU, and a UMPC, to acquire video images and position/attitude data. We performed image georeferencing based on bundle adjustment without ground control points using the acquired data, and then evaluated the effectiveness of our system through accuracy verification. The experimental results showed that the RMSE of the relative coordinates of the ground points obtained from our system was several centimeters. Our system can thus be efficiently utilized to obtain 3D models of objects and their relative coordinates. In the future, we plan to improve the accuracy of absolute coordinates through rigorous calibration of the system and camera.
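
Direct geo-referencing of a single pixel, given a GPS position and an IMU attitude, can be sketched by intersecting the pixel's viewing ray with a ground plane; the full system refines these poses with bundle adjustment, and all numbers below are illustrative assumptions.

```python
# Geo-referencing a pixel with direct sensor orientation: GPS gives the camera
# position, the IMU gives its attitude, and the viewing ray is intersected
# with a flat ground plane. Intrinsics and poses are assumed values.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # assumed camera intrinsics

def georeference_pixel(u, v, cam_pos, R_world_cam, ground_z=0.0):
    """Return world coordinates of pixel (u, v) on the plane z = ground_z."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R_world_cam @ ray_cam                   # rotate into world frame
    s = (ground_z - cam_pos[2]) / ray_world[2]          # scale to hit the plane
    return cam_pos + s * ray_world

cam_pos = np.array([0.0, 0.0, 50.0])   # assumed GPS position, 50 m above ground
R = np.diag([1.0, -1.0, -1.0])         # assumed nadir-looking attitude from IMU
print(georeference_pixel(320, 240, cam_pos, R))  # ground point below the camera
```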