• Title/Summary/Keyword: LIDAR sensor

Search results: 108

Scaling attack for Camera-Lidar calibration model (카메라-라이다 정합 모델에 대한 스케일링 공격)

  • Yi-JI IM;Dae-Seon Choi
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.298-300 / 2023
  • In autonomous driving and robot navigation systems, most designs are based on MSF (Multi-Sensor Fusion) to improve object recognition performance. Registering the information coming from each sensor is therefore a prerequisite for an accurate MSF algorithm. Various prior studies have carried out attacks on 2D data; since autonomous driving must handle 3D data, this work conducts an attack on 3D data that prior studies have not addressed. This study proposes an attack method that degrades the accuracy of a camera-LiDAR registration model using a scaling attack. The proposed method applies a scaling attack to the input LiDAR point cloud, targeting the downscaling stage. Experimental results show that attacking the input data increased the mean squared translation error by more than 56% and the mean quaternion angle error by more than 98% compared to before the attack. When the attack was applied for different downscaling sizes and algorithms, it was most effective when downscaling to 10×20 with the lanczos4 algorithm.
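
The abstract targets the downscaling step of the calibration pipeline. As a hedged illustration only (the paper's exact pipeline and perturbation method are not given here), the sketch below shows how a LiDAR range image might be downscaled with OpenCV's Lanczos interpolation, the stage that classic image-scaling attacks exploit: because Lanczos resampling weights only a small neighborhood of source pixels, a perturbation concentrated on exactly those pixels dominates the downscaled output. All function and variable names are illustrative assumptions.

```python
import numpy as np
import cv2  # OpenCV; INTER_LANCZOS4 corresponds to the "lanczos4" algorithm named in the abstract

def project_to_range_image(points, height=64, width=512):
    """Hypothetical helper: project an (N, 3) LiDAR point cloud to a 2D range image.
    The calibration model's actual projection is not specified in the abstract."""
    azimuth = np.arctan2(points[:, 1], points[:, 0])
    elevation = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
    rng = np.linalg.norm(points, axis=1)
    cols = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    rows = ((elevation + np.pi / 6) / (np.pi / 3) * (height - 1)).clip(0, height - 1).astype(int)
    image = np.zeros((height, width), dtype=np.float32)
    image[rows, cols] = rng
    return image

# Downscaling stage targeted by the attack: only a sparse subset of source pixels
# contributes to each output pixel under Lanczos resampling, so a small perturbation
# placed on those pixels survives downscaling while staying inconspicuous at full size.
points = np.random.rand(10000, 3) * 50.0          # stand-in point cloud
range_image = project_to_range_image(points)
downscaled = cv2.resize(range_image, (20, 10), interpolation=cv2.INTER_LANCZOS4)
print(downscaled.shape)  # (10, 20), the most effective size reported in the abstract
```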

High Resolution InSAR Phase Simulation using DSM in Urban Areas (도심지역 DSM을 이용한 고해상도 InSAR 위상 시뮬레이션)

  • Yoon, Geun-Won;Kim, Sang-Wan;Lee, Yong-Woong;Lee, Dong-Cheon;Won, Joong-Sun
    • Korean Journal of Remote Sensing / v.27 no.2 / pp.181-190 / 2011
  • Since the radar satellite missions TerraSAR-X and COSMO-SkyMed were launched in 2007, the spatial resolution of spaceborne SAR (Synthetic Aperture Radar) images has reached about 1 meter in spotlight mode. In 2011, the first Korean SAR satellite, KOMPSAT-5, will be launched, also operating at X-band with a highest spatial resolution of 1 m. The improved spatial resolution of state-of-the-art SAR sensors makes it attractive to extend InSAR (Interferometric SAR) analysis to urban monitoring. However, shadow and layover are more prominent in urban areas because of building structures and the inherent side-looking geometry of the SAR system. To date, most conventional algorithms do not consider the return signals from building facades during InSAR phase and SAR intensity simulation. In this study, a new algorithm introducing multi-scattering in the layover region is proposed for phase and intensity simulation, using a precise LIDAR DSM (Digital Surface Model) of urban areas. The InSAR phases simulated by the proposed method are compared with TerraSAR-X spotlight data. As a result, the two InSAR phases match well, even in layover areas. This study can be applied to urban monitoring with high-resolution SAR data, in terms of change detection and displacement monitoring at the scale of individual buildings.
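
For orientation only, the sketch below computes the standard single-scattering topographic phase term that such simulators start from; the paper's contribution, multi-scattering in layover regions, is not reproduced here, and the wavelength, baseline, and geometry values are illustrative assumptions rather than those of the TerraSAR-X scene.

```python
import numpy as np

def topographic_phase(dsm_height_m, wavelength_m=0.031, b_perp_m=150.0,
                      slant_range_m=6.2e5, incidence_rad=np.deg2rad(33.0)):
    """Single-scattering InSAR topographic phase from a DSM height grid.

    phi = -4*pi/lambda * (B_perp * h) / (R * sin(theta))
    The X-band wavelength (~3.1 cm), perpendicular baseline, slant range, and
    incidence angle are placeholder values, not the paper's acquisition geometry.
    """
    return (-4.0 * np.pi / wavelength_m) * (b_perp_m * dsm_height_m) / (
        slant_range_m * np.sin(incidence_rad))

dsm = np.random.uniform(0.0, 60.0, size=(256, 256))    # stand-in urban DSM (meters)
phase = np.angle(np.exp(1j * topographic_phase(dsm)))  # wrap to (-pi, pi]
print(phase.min(), phase.max())
```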

Attention based Feature-Fusion Network for 3D Object Detection (3차원 객체 탐지를 위한 어텐션 기반 특징 융합 네트워크)

  • Sang-Hyun Ryoo;Dae-Yeol Kang;Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology / v.27 no.2 / pp.190-196 / 2023
  • Recently, with the development of LIDAR technology, which can measure the distance to objects, interest in LIDAR-based 3D object detection networks has grown. Previous networks produce inaccurate localization results because of spatial information loss during voxelization and downsampling. In this study, we propose an attention-based fusion method and a camera-LIDAR fusion system to obtain high-level features and high positional accuracy. First, by introducing an attention mechanism into Voxel-RCNN, a grid-based 3D object detection network, the multi-scale sparse 3D convolution features are fused effectively to improve 3D object detection performance. In addition, we propose a late-fusion mechanism that combines the outputs of the 3D and 2D object detection networks to remove false positives. Comparative experiments with existing algorithms are performed on the KITTI dataset, which is widely used in the autonomous driving field. The proposed method improves performance in both 2D object detection on BEV and 3D object detection. In particular, precision improved by about 0.54% for the car moderate class compared to Voxel-RCNN.
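
The late-fusion step described in the abstract, suppressing 3D detections that no 2D detection supports, might look like the hedged sketch below; the box formats, IoU threshold, and the omitted 3D-to-image projection are assumptions, not the paper's settings.

```python
import numpy as np

def iou_2d(box_a, box_b):
    """IoU of two axis-aligned image boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def late_fusion(boxes_3d_projected, boxes_2d, iou_threshold=0.5):
    """Keep a 3D detection only if its image projection overlaps some 2D detection.

    boxes_3d_projected: (x1, y1, x2, y2) image boxes obtained by projecting each
    3D box's corners with the camera calibration (projection not shown here).
    """
    kept = []
    for i, proj in enumerate(boxes_3d_projected):
        if any(iou_2d(proj, b2d) >= iou_threshold for b2d in boxes_2d):
            kept.append(i)
    return kept

# Toy usage: the second 3D detection has no 2D support and is dropped as a false positive.
print(late_fusion([(100, 80, 180, 200), (400, 50, 450, 120)],
                  [(95, 85, 185, 205)]))  # -> [0]
```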

Evaluation of Applicability for 3D Scanning of Abandoned or Flooded Mine Sites Using Unmanned Mobility (무인 이동체를 이용한 폐광산 갱도 및 수몰 갱도의 3차원 형상화 위한 적용성 평가)

  • Soolo Kim;Gwan-in Bak;Sang-Wook Kim;Seung-han Baek
    • Tunnel and Underground Space / v.34 no.1 / pp.1-14 / 2024
  • An image-reconstruction technology is proposed that deploys unmanned mobile platforms equipped with high-speed LiDAR (Light Detection And Ranging) to reconstruct the shape of abandoned mines. Unmanned platforms are particularly useful in abandoned mines fraught with operational difficulties, including obstacles, sludge, flooded sections, and narrow tunnels with diameters of 1.5 m or more. For real abandoned mine sites, quadruped robots, quadcopter drones, and underwater drones are deployed on land, in the air, and in water-filled sections, respectively. In addition to scanning the mine with 2D solid-state LiDAR sensors, rotating the sensor at an inclination angle increases the efficiency of simultaneously reconstructing the mineshaft shape and detecting obstacles. Sensor and robot posture were used to compute rotation matrices that transform the solid-state LiDAR data into geographical coordinates. A quadruped robot then scanned an actual site to reconstruct the tunnel shape. Finally, the optimal elements necessary to increase utility in real-world sites were identified and proposed.
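
As a hedged illustration of the rotation-matrix step mentioned in the abstract, the sketch below composes a sensor-mount rotation with the robot's attitude to place 2D solid-state LiDAR points in world coordinates; the axis conventions, mount inclination, and pose source are assumptions, not the paper's configuration.

```python
import numpy as np

def rot_x(a): return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
def rot_y(a): return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
def rot_z(a): return np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])

def georeference(scan_xy, mount_angle, incline, roll, pitch, yaw, robot_position):
    """Map 2D LiDAR points (in the scan plane) to world coordinates.

    scan_xy: (N, 2) points from the 2D solid-state LiDAR.
    mount_angle: current rotation of the inclined mount about the robot's vertical axis.
    incline: fixed inclination of the sensor relative to the mount (assumed here).
    roll, pitch, yaw, robot_position: robot posture from its state estimator.
    """
    points_sensor = np.column_stack([scan_xy, np.zeros(len(scan_xy))])   # scan plane -> 3D
    r_mount = rot_z(mount_angle) @ rot_x(incline)                        # sensor -> robot body
    r_body = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)                     # robot body -> world
    return (r_body @ (r_mount @ points_sensor.T)).T + np.asarray(robot_position)

scan = np.random.rand(100, 2) * 5.0
world = georeference(scan, mount_angle=0.3, incline=np.deg2rad(15),
                     roll=0.0, pitch=0.02, yaw=1.1, robot_position=[10.0, 2.0, 0.5])
print(world.shape)  # (100, 3)
```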

A Study on the interface of information processing system on Human enhancement fire fighting helmet (휴먼 증강 소방헬멧 정보처리 시스템 인터페이스 연구)

  • Park, Hyun-Ju;Lee, Kam-Yeon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.497-498 / 2018
  • In a fire scene, it is difficult to see even 1 m ahead because of power failure, smoke, and toxic gas, even with a thermal imaging camera and a xenon searchlight. Analysis of smoke particles at fire scenes shows that even when the particles are 5 μm or smaller, a conventional thermal imaging camera has difficulty providing a forward view once the viewing distance exceeds 1 m. In the case of black smoke with particles of 5 μm or more, chemical materials, gas, and water molecules are mixed together, so a space-penetrating sensing technology using multiple sensors rather than a single sensor is required. Firefighters need smoke-penetrating detection technology and spatial information visualization for a safe forward view. In this paper, we design the interface of an information processing system with a 32-bit CPU core and peripheral circuits. We also implemented and simulated the interface with a LiDAR sensor. This provides an interface that can be used to implement the information processing system of a human-enhancement fire helmet in the future.
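
The abstract does not specify the LiDAR interface itself, so the sketch below is purely a hypothetical example of the kind of peripheral-interface logic such a 32-bit system might need: parsing distance frames from a serial-connected LiDAR module. The frame layout, byte order, and checksum are invented for illustration.

```python
import struct

FRAME_HEADER = 0x59  # hypothetical fixed header byte for a serial LiDAR module

def parse_frame(frame: bytes):
    """Parse one hypothetical 9-byte frame: two header bytes, distance (cm, little-endian
    uint16), signal strength (uint16), reserved (uint16), and a checksum equal to the low
    byte of the sum of the first 8 bytes. Returns (distance_cm, strength) or None."""
    if len(frame) != 9 or frame[0] != FRAME_HEADER or frame[1] != FRAME_HEADER:
        return None
    if (sum(frame[:8]) & 0xFF) != frame[8]:
        return None  # checksum mismatch
    distance, strength, _reserved = struct.unpack_from("<HHH", frame, 2)
    return distance, strength

# Toy usage with a hand-built frame.
payload = bytes([FRAME_HEADER, FRAME_HEADER]) + struct.pack("<HHH", 123, 456, 0)
frame = payload + bytes([sum(payload) & 0xFF])
print(parse_frame(frame))  # (123, 456)
```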


People Count For Managing Hospital Facilities (병원시설의 출입 인원 관리를 위한 새로운 인원 계수 방법)

  • Ryoo, Yun-Kyoo
    • Journal of the Health Care and Life Science / v.8 no.2 / pp.121-125 / 2020
  • People counting has long been of interest for managing facilities efficiently, whether to maximize energy savings by identifying the congestion level or usage of a specific facility, or to implement automatic power saving by counting the people entering and leaving a specific place such as a restroom. Counting people by image processing is very expensive and has the disadvantage of being strongly affected by the lighting environment. Area sensors have the disadvantage of counting two people as one when they pass close together, for example arm in arm. To overcome the existing methods, which are expensive, affected by lighting, or inaccurate in certain cases, this paper proposes a new people counting method based on the LiDAR principle. Accurately counting the number of people entering a hospital not only helps manage hospital facilities but also helps establish effective quarantine measures at a time when COVID-19 is prevalent.
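
A hedged sketch of one common way such a counter can work, assuming two distance-measuring beams across a doorway whose triggering order gives the walking direction; the paper's actual sensor arrangement and logic are not described in the abstract, so the thresholds and names here are illustrative.

```python
from collections import deque

class TwoBeamCounter:
    """Count entries/exits from two LiDAR distance beams spanning a doorway.

    A beam is 'blocked' when its measured distance drops well below the empty-doorway
    distance. The order in which beams A and B are blocked gives the direction.
    """
    def __init__(self, empty_distance_cm=250, block_ratio=0.6):
        self.threshold = empty_distance_cm * block_ratio
        self.events = deque(maxlen=2)   # recent beam-block events, e.g. ['A', 'B']
        self.inside = 0

    def update(self, dist_a_cm, dist_b_cm):
        for name, dist in (("A", dist_a_cm), ("B", dist_b_cm)):
            if dist < self.threshold and (not self.events or self.events[-1] != name):
                self.events.append(name)
        if list(self.events) == ["A", "B"]:
            self.inside += 1            # A then B: person walked inward
            self.events.clear()
        elif list(self.events) == ["B", "A"]:
            self.inside -= 1            # B then A: person walked outward
            self.events.clear()
        return self.inside

counter = TwoBeamCounter()
for a, b in [(250, 250), (80, 250), (90, 70), (250, 250)]:  # someone walks in past A then B
    occupancy = counter.update(a, b)
print(occupancy)  # 1
```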

Physical Offset of UAVs Calibration Method for Multi-sensor Fusion (다중 센서 융합을 위한 무인항공기 물리 오프셋 검보정 방법)

  • Kim, Cheolwook;Lim, Pyeong-chae;Chi, Junhwa;Kim, Taejung;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1125-1139 / 2022
  • In an unmanned aerial vehicle (UAV) system, a physical offset can exist between the global positioning system/inertial measurement unit (GPS/IMU) sensor and an observation sensor such as a hyperspectral sensor or a LiDAR sensor. This physical offset causes a misalignment between images along the flight direction. In particular, in a multi-sensor system where observation sensors are replaced regularly, obtaining new calibration parameters each time is costly. In this study, we establish a precise sensor model equation that can be applied to multiple sensors in common and propose an independent physical offset estimation method. The proposed method consists of three steps. First, we define an appropriate rotation matrix for our system and an initial sensor model equation for direct georeferencing. Next, an observation equation for physical offset estimation is established by extracting corresponding points between ground control points and the data observed by the sensor. Finally, the physical offset is estimated from the observations, and the precise sensor model equation is obtained by applying the estimated parameters to the initial sensor model equation. Datasets from four regions with different latitudes and longitudes (Jeon-ju, Incheon, Alaska, Norway) were compared to analyze the effect of the calibration parameters. We confirmed that the misalignment between images was corrected after applying the physical offset in the sensor model equation. Absolute position accuracy was analyzed for the Incheon dataset against ground control points: the root mean square error (RMSE) in the X and Y directions was 0.12 m for the hyperspectral image and 0.03 m for the point cloud. Furthermore, the relative position accuracy of a specific point between the adjusted point cloud and the hyperspectral images was 0.07 m, confirming that precise data mapping is possible without ground control points through the proposed estimation method and showing the potential for multi-sensor fusion. From this study, we expect that a flexible multi-sensor platform can be operated economically through the independent parameter estimation method.
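
The direct-georeferencing form that such a sensor model typically takes can be sketched as below, mapping a point in sensor coordinates through a boresight rotation and lever arm (the physical offset) and then the GPS/IMU attitude; the specific matrices and offset values of the paper are not reproduced, and all names and numbers here are assumptions.

```python
import numpy as np

def rot_zyx(roll, pitch, yaw):
    """Rotation matrix from body to mapping frame built from IMU roll/pitch/yaw (radians)."""
    cr, sr, cp, sp, cy, sy = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def direct_georeference(x_sensor, gps_position, imu_rpy, boresight_rpy, lever_arm):
    """X_ground = X_GPS + R_imu @ (R_boresight @ x_sensor + lever_arm).

    x_sensor: 3D vector of the observed point in the sensor frame (e.g. a LiDAR return).
    boresight_rpy, lever_arm: the physical-offset parameters such a method estimates.
    """
    r_imu = rot_zyx(*imu_rpy)
    r_bore = rot_zyx(*boresight_rpy)
    return np.asarray(gps_position) + r_imu @ (r_bore @ np.asarray(x_sensor) + np.asarray(lever_arm))

ground = direct_georeference(x_sensor=[0.0, 0.0, 120.0],
                             gps_position=[312000.0, 4141000.0, 150.0],
                             imu_rpy=[0.01, -0.02, 1.5708],
                             boresight_rpy=[0.001, 0.002, -0.0005],
                             lever_arm=[0.10, -0.05, 0.20])
print(ground)
```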

Indoor Location and Pose Estimation Algorithm using Artificial Attached Marker (인공 부착 마커를 활용한 실내 위치 및 자세 추정 알고리즘)

  • Ahn, Byeoung Min;Ko, Yun-Ho;Lee, Ji Hong
    • Journal of Korea Multimedia Society / v.19 no.2 / pp.240-251 / 2016
  • This paper presents a real-time indoor location and pose estimation method that utilizes simple artificial markers and image analysis techniques for warehouse automation. Conventional indoor localization methods cannot work robustly in warehouses, where severe environmental changes usually occur due to the movement of stocked goods. To overcome this problem, the proposed framework places artificial markers with different interior patterns at predefined positions on the warehouse floor. The proposed algorithm obtains marker candidate regions from a captured image by a simple binarization and labeling procedure. It then extracts the interior pattern information from each candidate region to decide whether the candidate region is a true marker. The extracted interior pattern information and the outer boundary of the marker are used to estimate the location and heading angle of the localization system. Experimental results show that the proposed localization method provides performance almost equivalent to that of a conventional method using an expensive LIDAR sensor and the AMCL algorithm.
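
The binarization-and-labeling step the abstract mentions might be sketched with OpenCV as below; the threshold choice, size filters, and squareness check are illustrative assumptions rather than the paper's parameters, and pattern decoding is left as a stub.

```python
import cv2
import numpy as np

def find_marker_candidates(gray, min_area=400, max_area=20000):
    """Return bounding boxes of connected components that could be floor markers."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, labels, stats, _centroids = cv2.connectedComponentsWithStats(binary)
    candidates = []
    for i in range(1, num):                         # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area and 0.7 <= w / h <= 1.3:  # roughly square blobs
            candidates.append((x, y, w, h))
    return candidates

def decode_interior_pattern(gray, box):
    """Stub: the paper decodes each candidate's interior pattern to reject non-markers
    and to identify which predefined floor position it marks; details are not given."""
    x, y, w, h = box
    return cv2.mean(gray[y:y + h, x:x + w])[0]       # placeholder statistic

image = cv2.imread("floor.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if image is not None:
    for box in find_marker_candidates(image):
        print(box, decode_interior_pattern(image, box))
```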

Cluster-Based Spin Images for Characterizing Diffuse Objects in 3D Range Data

  • Lee, Heezin;Oh, Sangyoon
    • Journal of Sensor Science and Technology / v.23 no.6 / pp.377-382 / 2014
  • Detecting and segmenting diffuse targets in laser ranging data is a critical problem for tactical reconnaissance. In this study, we propose a new method that facilitates the characterization of diffuse irregularly shaped objects using "spin images," i.e., local 2D histograms of laser returns oriented in 3D space, and a clustering process. The proposed "cluster-based spin imaging" method resolves the problem of using standard spin images for diffuse targets and it eliminates much of the computational complexity that characterizes the production of conventional spin images. The direct processing of pre-segmented laser points, including internal points that penetrate through a diffuse object's topmost surfaces, avoids some of the requirements of the approach used at present for spin image generation, while it also greatly reduces the high computational time overheads incurred by searches to find correlated images. We employed 3D airborne range data over forested terrain to demonstrate the effectiveness of this method in discriminating the different geometric structures of individual tree clusters. Our experiments showed that cluster-based spin images have the potential to separate classes in terms of different ages and portions of tree crowns.
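
For context, a standard spin image at an oriented point (p, n) bins every neighboring point x by its radial distance α from the axis through p along n and its signed height β along n; the hedged sketch below shows that baseline computation (not the paper's cluster-based variant), with bin sizes chosen arbitrarily.

```python
import numpy as np

def spin_image(points, p, n, alpha_max=5.0, beta_max=5.0, bins=32):
    """Baseline spin image: 2D histogram of (alpha, beta) coordinates of points
    around the oriented point (p, n), where
        alpha = sqrt(||x - p||^2 - (n . (x - p))^2)   (distance from the axis)
        beta  = n . (x - p)                            (signed height along the normal)
    """
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    d = np.asarray(points, dtype=float) - np.asarray(p, dtype=float)
    beta = d @ n
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))
    hist, _, _ = np.histogram2d(alpha, beta, bins=bins,
                                range=[[0.0, alpha_max], [-beta_max, beta_max]])
    return hist

cloud = np.random.rand(5000, 3) * 10.0          # stand-in airborne range data
img = spin_image(cloud, p=cloud[0], n=[0.0, 0.0, 1.0])
print(img.shape, img.sum())
```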

Generation of 3D Campus Models using Multi-Sensor Data (다중센서데이터를 이용한 캠퍼스 3차원 모델의 구축)

  • Choi Kyoung-Ah;Kang Moon-Kwon;Shin Hyo-Sung;Lee Im-Pyeong
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2006.04a / pp.205-210 / 2006
  • With the development of recent technologies such as telematics, LBS, and ubiquitous computing, applications of 3D GIS have rapidly increased. Since 3D GIS is mainly based on urban models consisting of realistic digital models of the objects in an urban area, demand for urban models and their continuous updating is expected to increase drastically. The purpose of this study is therefore to propose more efficient and precise methods to construct urban models, together with experimental verification. Applying the proposed methods, terrain and detailed building models were constructed for an area of 270,600 m² containing 23 buildings at the University of Seoul. Airborne imagery and LIDAR data were used for the terrain model, while ground imagery was mainly used for the building models. The generated models were found to reflect the correct geometry of the buildings and the terrain surface. However, the building surface textures, generated automatically using a projective transformation, were not well constructed because they were blotted out and shaded by objects such as trees, nearby buildings, and other obstacles. Consequently, the texture extraction algorithms should be improved to construct more realistic 3D models. Furthermore, building interiors should be modeled for various potential applications in the future.
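
The projective-transformation texture step mentioned in the abstract can be illustrated, under assumed correspondences, as rectifying a building facade from a ground photo with a homography; the corner coordinates, file names, and output size below are placeholders.

```python
import cv2
import numpy as np

def extract_facade_texture(photo, facade_corners_px, out_size=(512, 1024)):
    """Warp the quadrilateral facade region of a ground photo to a rectangular texture.

    facade_corners_px: four image points (top-left, top-right, bottom-right, bottom-left)
    of the facade as seen in the photo; in practice these come from projecting the
    building model's corners with the camera parameters.
    """
    w, h = out_size
    src = np.float32(facade_corners_px)
    dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    homography = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(photo, homography, (w, h))

photo = cv2.imread("ground_photo.jpg")              # hypothetical ground image
if photo is not None:
    texture = extract_facade_texture(photo, [(820, 240), (1460, 310), (1430, 980), (790, 900)])
    cv2.imwrite("facade_texture.png", texture)
```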
