• Title/Summary/Keyword: stereo sensor


Development of High-Sensitivity Detection Sensor and Module for Spatial Distribution Measurement of Multi Gamma Sources (다종 감마선 공간분포 측정을 위한 고감도 검출센서 및 탐지모듈 개발)

  • Hwang, Young-Gwan;Lee, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.10a / pp.705-707 / 2017
  • Stereo-based spatial radiation detection devices can obtain not only the spatial distribution of a radiation source but also the distance from the detection device to the source, and thus provide more useful information about the source than existing radiation imaging devices. To provide fast information on the spectrum and type of a gamma-ray source, a high-sensitivity detection sensor is required, together with a technique that resolves the saturation phenomenon at high dose rates. In this paper, we constructed a high-sensitivity sensor for measuring the spatial distribution of multiple gamma-ray sources, using a detection module with improved functions that resolves saturation at high doses, and conducted research to extend the measurement range of a single detector. The results of this work improve gamma-ray detection performance.


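The entry above leans on stereo geometry to recover the distance from the detector to a source. As a hedged illustration only (not the authors' detection-module design), the standard rectified-stereo relation Z = f·B/d ties range to image disparity; all names and numbers below are hypothetical.

```python
# Minimal sketch of the rectified-stereo range relation Z = f * B / d.
# Values are illustrative only; they are not taken from the paper.
def stereo_range(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Range to the imaged source given the disparity between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite range")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    # e.g. 800 px focal length, 0.5 m baseline, 8 px disparity -> 50 m
    print(f"estimated range: {stereo_range(800.0, 0.5, 8.0):.1f} m")
```
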
Estimation of the Dimensions of Horticultural Products and the Mean Plant Height of Plug Seedlings Using Three-Dimensional Images (3차원 영상을 이용한 원예산물의 크기와 플러그묘의 평균초장 추정)

  • Jang, Dong Hwa;Kim, Hyeon Tae;Kim, Yong Hyeon
    • Journal of Bio-Environment Control / v.28 no.4 / pp.358-365 / 2019
  • This study was conducted to estimate the dimensions of horticultural products and the mean plant height of plug seedlings using three-dimensional (3D) images. Two types of cameras, a ToF camera and a stereo-vision camera, were used to acquire 3D images of horticultural products and plug seedlings. The errors calculated from the ToF images for the dimensions of horticultural products and the mean height of plug seedlings were lower than those obtained from the stereo-vision images. A new indicator was defined for determining the mean plant height of plug seedlings. Except for watermelon with tap, the errors in the circumference and height of the horticultural products were 0.0-3.0% and 0.0-4.7%, respectively. The error in the mean plant height of the plug seedlings was 0.0-5.5%. The results revealed that 3D images can be used to accurately estimate the dimensions of horticultural products and the plant height of plug seedlings. Moreover, our method is potentially applicable for segmenting objects and removing outliers from point cloud data based on 3D images of horticultural crops.

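The entry above defines a new indicator for the mean plant height of plug seedlings from 3D images, but the abstract does not give its formula. The snippet below is only a generic numpy sketch of one plausible point-cloud approach (subtract the tray level and average the highest canopy points); the top_fraction parameter, the tray_z reference, and the synthetic cloud are assumptions, not the paper's indicator.

```python
import numpy as np

def mean_plant_height(points_xyz: np.ndarray, tray_z: float,
                      top_fraction: float = 0.1) -> float:
    """Rough mean plant height from a segmented seedling point cloud (z up, metres).

    points_xyz   : (N, 3) points belonging to the seedlings.
    tray_z       : z level of the plug-tray surface (reference plane).
    top_fraction : fraction of the highest points averaged as the canopy level.
    """
    z = points_xyz[:, 2]
    z = z[z > tray_z]                                   # keep points above the tray
    top = np.sort(z)[-max(1, int(len(z) * top_fraction)):]
    return float(top.mean() - tray_z)

# Synthetic example only: a 30 x 30 cm tray area with points up to 10 cm tall.
rng = np.random.default_rng(0)
cloud = rng.uniform([0.0, 0.0, 0.02], [0.3, 0.3, 0.10], size=(5000, 3))
print(f"mean plant height ~ {mean_plant_height(cloud, tray_z=0.0) * 100:.1f} cm")
```
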
Building Height Extraction using Triangular Vector Structure from a Single High Resolution Satellite Image (삼각벡터구조를 이용한 고해상도 위성 단영상에서의 건물 높이 추출)

  • Kim, Hye-Jin;Han, Dong-Yeob;Kim, Yong-Il
    • Korean Journal of Remote Sensing / v.22 no.6 / pp.621-626 / 2006
  • Today's commercial high-resolution satellite imagery, such as IKONOS and QuickBird, offers the potential to extract useful spatial information for geographic database construction and GIS applications. Extraction of 3D building information from high-resolution satellite imagery is one of the most active research topics. Many previous works extracted 3D information based on stereo analysis, including sensor modelling; in practice, however, it is not easy to obtain stereo pairs of high-resolution satellite images. For a single image, most studies manually extracted roof-bottom point pairs or shadow lengths and applied them to sensor models together with a DEM, an approach that is not well suited to densely built-up areas. We aim to extract 3D building information from a single satellite image in a simple and practical way. To measure as many buildings as possible, this paper suggests a new way to extract building height using a triangular vector structure that consists of a building bottom point, its corresponding roof point, and a shadow end point. The proposed method increases the number of measurable buildings, decreases digitizing error, and improves computational efficiency.

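The triangular vector structure in the entry above links a building's bottom point, roof point, and shadow end point. As a hedged sketch of only the simplest underlying relation (flat ground, fully visible shadow, sun elevation known from the image metadata), height follows from shadow length; the real method also has to handle the sun and sensor azimuth geometry, which is omitted here, and the example numbers are made up.

```python
import math

def height_from_shadow(shadow_length_m: float, sun_elevation_deg: float) -> float:
    """Height implied by a shadow on flat ground: h = L * tan(sun elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# Illustrative numbers only: a 25 m shadow under a 40 degree sun elevation.
print(f"h ~ {height_from_shadow(25.0, 40.0):.1f} m")   # ~ 21.0 m
```
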
High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.1-7 / 2011
  • The depth camera measures range information of a scene in real time using Time-of-Flight (ToF) technology. The measured depth data are then regularized and provided as a depth image. This depth image is combined with stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the ToF depth image must be corrected first because of the technical limitations of the ToF depth camera. The corrected depth image is then combined with the color images in various ways to obtain a high-resolution depth map of the scene. In this paper, we introduce the principle of, and various techniques for, sensor fusion for high-quality depth generation using multiple color cameras together with depth cameras.

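The entry above fuses a low-resolution ToF depth image with high-resolution color/multi-view images. The snippet below is a plain, unoptimized numpy sketch of one common ingredient of such fusion, joint bilateral upsampling guided by a grayscale image; it is not the paper's method, and the window radius and sigma values are arbitrary assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr: np.ndarray, guide_hr: np.ndarray,
                             radius: int = 3, sigma_s: float = 2.0,
                             sigma_r: float = 0.1) -> np.ndarray:
    """Upsample a low-resolution depth map to the resolution of a grayscale
    guide image (values in [0, 1]) with a joint bilateral filter.
    Boundary handling is left to np.roll's wrap-around for brevity."""
    H, W = guide_hr.shape
    # nearest-neighbour upscale of the depth to the guide resolution
    ys = np.arange(H) * depth_lr.shape[0] // H
    xs = np.arange(W) * depth_lr.shape[1] // W
    depth_up = depth_lr[np.ix_(ys, xs)].astype(np.float64)

    out = np.zeros((H, W))
    weights = np.zeros((H, W))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_d = np.roll(depth_up, (dy, dx), axis=(0, 1))
            shifted_g = np.roll(guide_hr, (dy, dx), axis=(0, 1))
            w_spatial = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
            w_range = np.exp(-((guide_hr - shifted_g) ** 2) / (2.0 * sigma_r ** 2))
            w = w_spatial * w_range
            out += w * shifted_d
            weights += w
    return out / weights
```

The range term weights neighbours by guide-intensity similarity, which keeps depth edges aligned with color edges and is the usual reason such color/depth fusion produces sharper high-resolution depth maps.
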
An analysis of Electro-Optical Camera (EOC) on KOMPSAT-1 during mission life of 3 years

  • Baek Hyun-Chul;Yong Sang-Soon;Kim Eun-Kyou;Youn Heong-Sik;Choi Hae-Jin
    • Proceedings of the KSRS Conference / 2004.10a / pp.512-514 / 2004
  • The Electro-Optical Camera (EOC) is a high-spatial-resolution visible imaging sensor that collects image data of the Earth's sunlit surface and is the primary payload on KOMPSAT-1. The purpose of the EOC payload is to provide high-resolution visible imagery to support cartography of the Korean Peninsula. The EOC is a push-broom-scanned sensor incorporating a single nadir-looking telescope. At the nominal altitude of 685 km, with the spacecraft in a nadir-pointing attitude, the EOC collects data with a ground sample distance of approximately 6.6 m and a swath width of around 17 km. The EOC is designed to operate with a duty cycle of up to 2 minutes (contiguous) per orbit over the mission lifetime of 3 years, with programmable gain/offset functions. The EOC has no pointing mechanism of its own; pointing is accomplished by rolling the spacecraft to the right or left as needed. Under nominal operating conditions, the spacecraft can be rolled to angles of 15 to 30 degrees on either side to support the collection of stereo data. In this paper, the status of the EOC, including temperature, dark calibration, cover operation, and thermal control, is checked and analyzed using continuously monitored state-of-health (SOH) data and image data acquired over the 3-year mission life. The results of the analysis confirm that the EOC remains alive and can continue operating beyond its design mission life.


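Two of the figures quoted in the EOC entry above can be cross-checked with elementary geometry. The short calculation below is only an illustrative consistency check using the numbers from the abstract (685 km altitude, ~6.6 m GSD, ~17 km swath, rolls of up to 30 degrees); it is not data from the paper's analysis.

```python
import math

altitude_km, gsd_m, swath_km = 685.0, 6.6, 17.0

# A ~17 km swath at ~6.6 m ground sample distance implies roughly
# 17000 / 6.6 ~ 2576 detector columns in the push-broom line array.
print(round(swath_km * 1000.0 / gsd_m))                     # ~2576

# Rolling the spacecraft by 30 degrees shifts the imaged ground strip by
# about h * tan(roll) cross-track (flat-Earth approximation), which is the
# lateral offset exploited for stereo collection.
print(round(altitude_km * math.tan(math.radians(30.0))))    # ~395 km
```
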
JAXA'S EARTH OBSERVING PROGRAM

  • Shimoda, Haruhisa
    • Proceedings of the KSRS Conference / v.1 / pp.7-10 / 2006
  • Four programs, i.e., TRMM, ADEOS-2, ASTER, and ALOS, are under way in Japan's Earth observation program. TRMM and ASTER are operating well, and TRMM operations will be continued to 2009. ADEOS-2 failed, but AMSR-E on Aqua is operating. ALOS (Advanced Land Observing Satellite) was successfully launched on 24 January 2006. ALOS carries three instruments: PRISM (Panchromatic Remote Sensing Instrument for Stereo Mapping), AVNIR-2 (Advanced Visible and Near Infrared Radiometer), and PALSAR (Phased Array L-band Synthetic Aperture Radar). PRISM is a three-line panchromatic push-broom scanner with a 2.5 m IFOV. AVNIR-2 is a 4-channel multispectral scanner with a 10 m IFOV. PALSAR is a fully polarimetric active phased-array SAR with many observation modes, including a full polarimetric mode and a ScanSAR mode. After the unfortunate loss of ADEOS-2, JAXA still has plans for further Earth observation programs. The next generation of satellites will be launched in the 2008-2012 timeframe: GOSAT (Greenhouse Gas Observation Satellite), GCOM-W and GCOM-C (the ADEOS-2 follow-on), and the GPM (Global Precipitation Mission) core satellite. GOSAT will carry two instruments, a greenhouse gas sensor and a cloud/aerosol imager; the main sensor is a Fourier transform spectrometer (FTS) covering the 0.76 to 15 µm region with 0.2 to 0.5 cm⁻¹ resolution. GPM is a joint project with NASA and will carry two instruments; JAXA will develop the DPR (Dual-frequency Precipitation Radar), a follow-on of the PR on TRMM. Another project is EarthCARE, a joint project with ESA, for which JAXA will provide the CPR (Cloud Profiling Radar). Discussions on future Earth observation programs, including an ALOS follow-on, have been started.


A Study on the Fusion of DEM Generated from Images of Optical Satellite and SAR (광학 위성영상과 SAR 위성영상의 DEM 융합에 관한 연구)

  • Yeu, Bock-Mo;Hong, Jae-Min;Jin, Kyeong-Hyeok;Yoon, Chang-Rak
    • Proceedings of the Korean Society for Geospatial Information Science Conference / 2002.11a / pp.58-65 / 2002
  • The most widespread techniques for DEM generation are stereoscopy for optical sensor images and interferometry for SAR images. These techniques suffer from certain sensor and processing limitations, which can be overcome by the synergetic use of both sensors and of the respective DEMs. In this paper, different strategies for fusing SAR and optical data are combined to derive high-quality DEM products. The multiresolution wavelet transform, which takes advantage of the complementary properties of SAR and stereo-optical DEMs, is applied in the fusion process. Exploiting the fact that the errors of the two DEMs are of a different nature, the affected parts of one DEM are filtered in the wavelet domain and replaced by those of the counterpart. The approach is tested with a pair of SPOT and ERS DEMs, resulting in a remarkable improvement of the fused DEM. For the analysis of the results, the reference DEM is generated from a 1:5,000 digital base map.


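The entry above fuses an optical (stereo) DEM and a SAR (interferometric) DEM in the multiresolution wavelet domain. The sketch below, using PyWavelets, only shows the mechanics with a deliberately simple fusion rule (keep the coarse approximation of one DEM and average the detail sub-bands); the paper instead filters and replaces bands according to the different error behaviour of the two sensors, which is not reproduced here.

```python
import numpy as np
import pywt  # PyWavelets

def fuse_dems(dem_optical: np.ndarray, dem_sar: np.ndarray,
              wavelet: str = "db2", level: int = 3) -> np.ndarray:
    """Fuse two co-registered DEMs of identical shape in the wavelet domain."""
    co = pywt.wavedec2(dem_optical, wavelet, level=level)
    cs = pywt.wavedec2(dem_sar, wavelet, level=level)

    fused = [co[0]]                    # coarse approximation from the optical DEM
    for (oh, ov, od), (sh, sv, sd) in zip(co[1:], cs[1:]):
        fused.append(((oh + sh) / 2.0, (ov + sv) / 2.0, (od + sd) / 2.0))

    out = pywt.waverec2(fused, wavelet)
    return out[:dem_optical.shape[0], :dem_optical.shape[1]]   # trim padding
```
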
The Obstacle Size Prediction Method Based on YOLO and IR Sensor for Avoiding Obstacle Collision of Small UAVs (소형 UAV의 장애물 충돌 회피를 위한 YOLO 및 IR 센서 기반 장애물 크기 예측 방법)

  • Uicheon Lee;Jongwon Lee;Euijin Choi;Seonah Lee
    • Journal of Aerospace System Engineering / v.17 no.6 / pp.16-26 / 2023
  • With the growing demand for unmanned aerial vehicles (UAVs), various collision avoidance methods have been proposed, mainly using LiDAR and stereo cameras. However, it is difficult to apply these sensors to small UAVs because of their weight or the lack of mounting space. Recently proposed methods use a combination of object recognition models and distance sensors, but they lack information on obstacle size, which complicates distance determination and obstacle localization during early-stage collision avoidance. We propose a method for estimating obstacle size using a monocular camera with YOLO and an infrared sensor. Our experimental results confirmed an accuracy of 86.39% within a distance of 40 cm. In addition, the proposed method was applied to a small UAV to confirm whether obstacle collisions could be avoided.

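The method in the entry above combines a YOLO bounding box with an IR range reading to recover obstacle size. Under a simple pinhole-camera model this reduces to scaling the box by distance over focal length; the function below is a hedged sketch of that relation only, and the focal length and box values in the example are made up, not the paper's.

```python
def obstacle_size_m(bbox_w_px: float, bbox_h_px: float,
                    distance_m: float, focal_px: float) -> tuple[float, float]:
    """Pinhole-model size estimate: real extent = pixel extent * distance / focal length.

    bbox_w_px, bbox_h_px : YOLO bounding-box width and height in pixels.
    distance_m           : range to the obstacle reported by the IR sensor.
    focal_px             : camera focal length expressed in pixels.
    """
    return (bbox_w_px * distance_m / focal_px,
            bbox_h_px * distance_m / focal_px)

# Illustrative numbers only: a 120 x 200 px box at 0.4 m with a 600 px focal length.
w, h = obstacle_size_m(120, 200, 0.4, 600)
print(f"~{w * 100:.0f} cm x {h * 100:.0f} cm")   # ~8 cm x 13 cm
```
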
Development of a Satellite Image Preprocessing System for Obtaining 3-D Positional Information -Focused on KOMPSAT and SPOT Imagery- (3차원 위치정보를 취득하기 위한 위성영상처리 시스템 개발 - KOMPSAT 및 SPOT영상을 중심으로 -)

  • 유환희;김동규;진경혁;우해인
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.19 no.3 / pp.291-300 / 2001
  • In this paper, we developed a satellite image processing system for obtaining 3-D positional information, which is composed of five processing modules. The Data Process module reads and processes the header file to generate data files, and then calculates the orbital parameters and sensor attitudes required for deriving 3-D positional information. The 3D Process module calculates the 3-D positional information, and the Dialog Process module corrects the time of the image frame center using a single image or stereo images before the 3D Process module is run. With this system, we expect to obtain 3-D positional information efficiently and economically, using only the header file and a minimum number of GCPs (1-2 points), compared with existing commercial software packages.


Visual Sensing of the Light Spot of a Laser Pointer for Robotic Applications

  • Park, Sung-Ho;Kim, Dong Uk;Do, Yongtae
    • Journal of Sensor Science and Technology / v.27 no.4 / pp.216-220 / 2018
  • In this paper, we present visual sensing techniques that can be used to teach a robot using a laser pointer. The light spot of an off-the-shelf laser pointer is detected and its movement is tracked over consecutive images from a camera. The three-dimensional position of the spot is calculated using stereo cameras. The light spot in the image is detected based on its color, brightness, and shape. The detection results in a binary image, and morphological processing steps are performed on this image to refine the detection. The movement of the laser spot is tracked using two methods. The first is a simple method that specifies a region of interest (ROI) centered at the current location of the light spot and finds the spot within the ROI in the next image, under the assumption that the spot does not move far between two consecutive images. The second method uses a Kalman filter, which has been widely employed in trajectory estimation problems. In our simulation study of various cases, Kalman filtering mostly showed better results. However, fitting the system model of the filter to the pattern of the spot movement remains a problem.
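
The entry above compares ROI-based re-detection with Kalman filtering for tracking the laser spot. The snippet below is a generic constant-velocity Kalman filter for a 2-D image point, given only as a sketch of the second approach; the state layout, noise levels, and synthetic measurements are assumptions, not the authors' tuning.

```python
import numpy as np

def make_cv_model(dt: float = 1.0, q: float = 1e-2, r: float = 2.0):
    """Constant-velocity model for a 2-D point; state is (x, y, vx, vy)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    return F, H, q * np.eye(4), r * np.eye(2)

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle; z is the measured spot centroid in pixels."""
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
    x = x + K @ (z - H @ x)                             # update with innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Synthetic example: a spot drifting ~3 px/frame to the right with pixel noise.
F, H, Q, R = make_cv_model()
x, P = np.zeros(4), np.eye(4) * 100.0
rng = np.random.default_rng(1)
for t in range(10):
    z = np.array([100.0 + 3.0 * t, 50.0]) + rng.normal(0.0, 1.0, 2)
    x, P = kf_step(x, P, z, F, H, Q, R)
print("estimated [x, y, vx, vy]:", np.round(x, 1))
```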