• Title/Summary/Keyword: Vehicle sensing data


Development of 3D Crop Segmentation Model in Open-field Based on Supervised Machine Learning Algorithm (지도학습 알고리즘 기반 3D 노지 작물 구분 모델 개발)

  • Jeong, Young-Joon;Lee, Jong-Hyuk;Lee, Sang-Ik;Oh, Bu-Yeong;Ahmed, Fawzy;Seo, Byung-Hun;Kim, Dong-Su;Seo, Ye-Jin;Choi, Won
    • Journal of The Korean Society of Agricultural Engineers / v.64 no.1 / pp.15-26 / 2022
  • A 3D open-field farm model developed from UAV (Unmanned Aerial Vehicle) data could make crop monitoring easier and could also serve as an important dataset for fields such as remote sensing and precision agriculture. It is essential to separate crops from non-crop areas automatically, because manual labeling is extremely laborious and not appropriate for continuous monitoring. We therefore built a 3D open-field farm model from UAV images and developed a crop segmentation model using a supervised machine learning algorithm. We compared the performance of models trained on different data features, such as color and geographic coordinates, using two supervised learning algorithms: SVM (Support Vector Machine) and KNN (K-Nearest Neighbors). The best model was trained on two-dimensional data, ExGR (Excess Green minus Excess Red) and the z coordinate, using the KNN algorithm; its accuracy, precision, recall, and F1 score were 97.85%, 96.51%, 88.54%, and 92.35%, respectively. We also compared our model with similar previous work: our approach showed slightly better accuracy and detected actual crops better, although it sometimes classified actual non-crop points (e.g. weeds) as crops.
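The (ExGR, z) feature plus KNN classification described above can be sketched as follows. This is a minimal illustration with invented training values and labels, not the paper's dataset; the KNN is implemented by hand to keep the sketch self-contained:

```python
import numpy as np

def exgr(r, g, b):
    """Excess Green minus Excess Red: (2g - r - b) - (1.4r - g), on chromatic coordinates."""
    total = r + g + b
    rn, gn, bn = r / total, g / total, b / total
    return (2 * gn - rn - bn) - (1.4 * rn - gn)

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training points (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return int(np.bincount(nearest).argmax())

# Hypothetical training set: (ExGR, z) features with crop (1) / non-crop (0) labels
X_train = np.array([
    [exgr(60, 180, 50), 0.6],    # green, elevated -> crop
    [exgr(70, 190, 60), 0.5],    # green, elevated -> crop
    [exgr(120, 110, 100), 0.0],  # brownish, ground level -> soil
    [exgr(130, 120, 110), 0.1],  # brownish, ground level -> soil
])
y_train = np.array([1, 1, 0, 0])

# Classify a new point: strongly green, 0.4 m above ground
label = knn_predict(X_train, y_train, np.array([exgr(55, 200, 45), 0.4]))
print("crop" if label == 1 else "non-crop")
```

Adding the z coordinate lets the classifier separate green weeds near ground level from elevated crop canopy, which is the intuition behind the paper's best feature pair.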

Construction and Experiment of an Educational Radar System (교육용 레이다 시스템의 제작 및 실험)

  • Ji, Younghun;Lee, Hoonyol
    • Korean Journal of Remote Sensing / v.30 no.2 / pp.293-302 / 2014
  • Radar systems are used in remote sensing mainly as space-borne, airborne, and ground-based Synthetic Aperture Radar (SAR), scatterometers, and Doppler radar. These systems are composed of expensive equipment and require professional expertise to operate. Because ordinary universities and institutions have limited opportunities to gain hands-on experience with radar and SAR systems, it is difficult to learn and exercise the essential principles of radar hardware needed to understand and develop new application fields. To overcome these difficulties, this paper presents the construction and experiments of a low-cost educational radar system based on the blueprints of the MIT Cantenna system. The radar was operated in three modes. First, the velocity of moving cars was measured in Doppler radar mode. Second, the ranges of two moving targets were measured in ranging mode. Lastly, 2D images were constructed in GB-SAR mode to enhance the azimuth resolution. Additionally, we simulated SAR raw data to compare the Deramp-FFT and ω-k algorithms and to analyze the effect of antenna positional error on SAR focusing. We expect the system can be further developed into a light-weight SAR system onboard an unmanned aerial vehicle by adding a higher sampling frequency, I/Q acquisition, and a more stable circuit design.
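The Doppler-mode velocity measurement above reduces to the standard relation v = f_d · c / (2 f₀). A minimal sketch, assuming the 2.4 GHz carrier of the MIT Cantenna design (the example Doppler shift is invented):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_velocity(f_doppler_hz, f_carrier_hz=2.4e9):
    """Radial target speed from a measured Doppler shift: v = f_d * c / (2 * f0)."""
    return f_doppler_hz * C / (2.0 * f_carrier_hz)

# A 160 Hz Doppler shift at 2.4 GHz corresponds to roughly 10 m/s (36 km/h)
v = doppler_velocity(160.0)
print(round(v, 2))
```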

Estimate and Analysis of Planetary Boundary Layer Height (PBLH) using a Mobile Lidar Vehicle system (이동형 차량탑재 라이다 시스템을 활용한 경계층고도 산출 및 분석)

  • Nam, Hyoung-Gu;Choi, Won;Kim, Yoo-Jun;Shim, Jae-Kwan;Choi, Byoung-Choel;Kim, Byung-Gon
    • Korean Journal of Remote Sensing / v.32 no.3 / pp.307-321 / 2016
  • Planetary Boundary Layer Height (PBLH) is a major input parameter for weather forecasting and atmospheric diffusion models. To estimate the sub-grid-scale variability of PBLH, PBLH data must be monitored with high spatio-temporal resolution. Accordingly, we introduce a LIdar observation VEhicle (LIVE) and analyze the PBLH derived from its onboard lidar. PBLH estimated from LIVE correlates well with estimates from both the WRF model (R² = 0.68) and radiosonde (R² = 0.72). However, the lidar-derived PBLH tends to be overestimated relative to both WRF and radiosonde, because the lidar appears to report the height of the Residual Layer (RL) as the PBLH when the true PBLH lies below the overlap height (< 300 m). PBLH from the lidar at 10-min time resolution shows a typical diurnal variation: it grows after sunrise and reaches its maximum about two hours after solar culmination. The average growth rate of PBLH during the analysis period (2014/06/26-30) was 1.79 (-2.9 to 5.7) m min⁻¹. In addition, the lidar signal measured while LIVE was moving showed very low noise compared with stationary observation. The PBLH from the moving LIVE was 1065 m, similar to the value (1150 m) derived from the radiosonde launched at Sokcho. This study suggests that LIVE can observe continuous and reliable PBLH with high resolution in both stationary and mobile operation.
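A common way to retrieve PBLH from a lidar profile (not necessarily the exact retrieval used by LIVE) is the gradient method: the PBLH is taken as the height of the strongest negative vertical gradient of the backscatter signal, since aerosol load drops sharply at the top of the boundary layer. A sketch on a synthetic profile:

```python
import numpy as np

def pbl_height(heights_m, backscatter):
    """Gradient method: PBLH = height of the strongest negative vertical
    gradient of the (range-corrected) backscatter profile."""
    grad = np.gradient(backscatter, heights_m)
    return heights_m[np.argmin(grad)]

# Synthetic profile: high aerosol load up to ~1000 m, sharp sigmoid drop above
z = np.arange(100.0, 3000.0, 25.0)
profile = 1.0 / (1.0 + np.exp((z - 1000.0) / 50.0))

print(pbl_height(z, profile))  # near 1000 m
```

Real retrievals additionally smooth the profile and exclude the region below the overlap height, which is exactly where the abstract notes the residual-layer misdetection occurs.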

A Moving Path Control of an Automatic Guided Vehicle Using Relative Distance Fingerprinting (상대거리 지문 정보를 이용한 무인이송차량의 주행 경로 제어)

  • Hong, Youn Sik;Kim, Da Jung;Hong, Sang Hyun
    • KIPS Transactions on Computer and Communication Systems / v.2 no.10 / pp.427-436 / 2013
  • In this paper, a method of controlling the moving path of an automatic guided vehicle (AGV) in an indoor environment through the recognition of marker images with vision sensors is presented. The existing AGV control system, based on infrared sensors and landmarks, faces two critical problems. First, a crematorium has many windows that let a great deal of sunlight into the main hall where the AGVs move, and refraction and/or reflection of this sunlight disturbs the correct recognition of landmarks. Second, a crematorium is a narrow indoor environment compared with typical industrial sites; in particular, when an AGV turns to enter the designated furnace, the rotating space is too narrow for the guidance sensors to provide the information needed to estimate its location. To handle such situations, where sensing data cannot be accessed in a WSN environment, the relative distance from a marker to the AGV is used as a fingerprint for location estimation. Compared with the existing fingerprinting method based on RSS, the proposed method can yield more reliable location estimates. Our experimental results demonstrate the correctness and applicability of the proposed method. In addition, the approach will be applied to the AGV system in a crematorium so that it can transport a body safely from the loading place to its designated destination.
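Fingerprint-based localization, whether the fingerprints are RSS values or, as here, relative distances to markers, boils down to matching a measured vector against a pre-surveyed database. A minimal sketch with an invented database of marker-distance fingerprints:

```python
import numpy as np

# Hypothetical fingerprint database: surveyed grid positions (x, y) mapped to
# the vector of distances (m) from that position to three fixed markers
fingerprints = {
    (0.0, 0.0): [5.0, 7.1, 9.4],
    (1.0, 0.0): [4.1, 6.3, 8.6],
    (0.0, 1.0): [4.5, 6.2, 9.0],
    (1.0, 1.0): [3.6, 5.4, 8.1],
}

def locate(measured):
    """Return the surveyed position whose stored fingerprint is closest
    (Euclidean distance) to the measured marker distances."""
    return min(fingerprints,
               key=lambda p: np.linalg.norm(np.array(fingerprints[p]) - measured))

pos = locate(np.array([3.7, 5.5, 8.0]))
print(pos)  # closest grid cell
```

Distance fingerprints measured by vision are more stable than RSS, which fluctuates with multipath and interference; that stability is the basis of the paper's reliability claim.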

Leader - Follower based Formation Guidance Law and Autonomous Formation Flight Test of Multiple MAVs (편대 유도 법칙 및 초소형 비행체의 자동 편대 비행 구현)

  • You, Dong-Il;Shim, Hyun-Chul
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.39 no.2 / pp.121-127 / 2011
  • This paper presents an autonomous formation flight algorithm for micro aerial vehicles (MAVs) and its flight test results. Since MAVs have severe limits on payload and flight time, a formation of MAVs can alleviate the mission load of each vehicle by sharing tasks or coverage areas. The proposed formation guidance law is designed using a nonlinear dynamic inversion method based on the 'Leader-Follower' formation geometric relationship. Each vehicle senses the others in the formation by sharing vehicle states over a high-speed radio data link. The designed formation law was simulated with MAV flight data to verify its robustness against sensor noise. A series of test flights was performed to validate the proposed formation guidance law. The results show that the proposed formation flight algorithm with inter-vehicle communication is feasible and yields satisfactory results.
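The geometric core of a leader-follower scheme is computing the follower's desired position as an offset in the leader's body frame, rotated into the world frame; the guidance law then drives the tracking error to zero (here only the geometry is sketched, with invented offsets, not the paper's dynamic inversion controller):

```python
import math

def follower_setpoint(leader_x, leader_y, leader_heading_rad, d_back, d_side):
    """Desired follower position: an offset d_back behind and d_side to the
    right of the leader, expressed in the leader's body frame and rotated
    into the world frame by the leader's heading."""
    c, s = math.cos(leader_heading_rad), math.sin(leader_heading_rad)
    # body-frame offset (-d_back, -d_side) rotated by the heading angle
    dx = -d_back * c + d_side * s
    dy = -d_back * s - d_side * c
    return leader_x + dx, leader_y + dy

# Leader at the origin heading along +x; follower 10 m behind, 5 m to the right
x, y = follower_setpoint(0.0, 0.0, 0.0, 10.0, 5.0)
print(x, y)
```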

Evaluation of Rededge-M Camera for Water Color Observation after Image Preprocessing (영상 전처리 수행을 통한 Rededge-M 카메라의 수색 관측에의 활용성 검토)

  • Kim, Wonkook;Roh, Sang-Hyun;Moon, Yongseon;Jung, Sunghun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.3 / pp.167-175 / 2019
  • Water color analysis allows non-destructive estimation of the abundance of optically active constituents in a water body. Recently, there has been an increasing need for light-weight multispectral cameras that can be integrated with low-altitude unmanned platforms, such as drones, autonomous vehicles, and heli-kites, as an alternative to spectroradiometer-based water color analysis. This study performs preprocessing for the Micasense Rededge-M camera, which has recently received growing attention from the earth observation community for its handiness and applicability to local environment monitoring, and investigates the applicability of Rededge-M data to water color analysis. Vignette correction and band alignment were applied to the radiometric image data from the Rededge-M; the sky, water, and solar radiation measurements essential for water color analysis, and the resultant remote sensing reflectance, were validated against an independent hyperspectral instrument, TriOS RAMSES. The experiment shows that the Rededge-M generally satisfies the basic performance criteria for water color analysis, although noticeable differences from RAMSES are observed in the blue (475 nm) and near-infrared (840 nm) bands.
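The remote sensing reflectance compared against RAMSES is conventionally computed from above-water measurements as Rrs = (Lt − ρ·Lsky) / Ed, where ρ is the sea-surface reflectance factor. A sketch of that standard formula (the radiance and irradiance values below are invented; the paper does not state which ρ it uses, so the common low-wind value 0.028 is assumed):

```python
def remote_sensing_reflectance(L_total, L_sky, E_d, rho=0.028):
    """Above-water Rrs (sr^-1): total water-viewing radiance minus the
    surface-reflected sky radiance, divided by downwelling irradiance."""
    return (L_total - rho * L_sky) / E_d

# Hypothetical single-band values: radiances in W m^-2 sr^-1 nm^-1,
# irradiance in W m^-2 nm^-1
rrs = remote_sensing_reflectance(L_total=0.05, L_sky=0.50, E_d=1.2)
print(round(rrs, 4))
```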

Automatic Generation of Clustered Solid Building Models Based on Point Cloud (포인트 클라우드 데이터 기반 군집형 솔리드 건물 모델 자동 생성 기법)

  • Kim, Han-gyeol;Hwang, YunHyuk;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.36 no.6_1 / pp.1349-1365 / 2020
  • In recent years, in the fields of smart cities and digital twins, research on model generation based on point clouds has been increasing, owing to the advantage of acquiring actual 3D coordinates. There is also a growing demand for solid models whose shape and texture can be easily modified. In this paper, we propose a method to create a clustered solid building model from point cloud data. The proposed method consists of five steps. First, ground points are removed through planarity analysis of the point cloud. Second, building areas are extracted from the ground-removed point cloud. Third, the detailed structural areas of the buildings are extracted. Fourth, 3D building model shapes with added 3D coordinate information are created from the extracted areas. Finally, a 3D solid building model is created by applying texture to the building model shapes. To verify the proposed method, we experimented with point clouds extracted from unmanned aerial vehicle images using commercial software. As a result, 3D building shapes with a position error of about 1 m relative to the point cloud were created for all buildings above a certain height. In addition, we confirmed that textured 3D models were generated with a resolution within twice that of the original image.
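The first step, separating ground points from building points, can be illustrated with a simple plane-fit-and-threshold sketch. This is a toy stand-in for the paper's planarity analysis, on an invented five-point cloud; a robust two-pass fit is used so one tall outlier cannot drag the plane upward:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = ax + by + c through an Nx3 point array."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def split_ground(points, threshold=0.2):
    """Label points within `threshold` metres of the fitted ground plane.
    Two-pass: seed the fit with the lowest half of the points, then
    classify every point by its vertical distance to that plane."""
    low = points[points[:, 2] <= np.median(points[:, 2])]
    a, b, c = fit_plane(low)
    dist = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
    return dist <= threshold

# Synthetic cloud: four near-flat ground points plus one 10 m "building" point
pts = np.array([[0, 0, 0.0], [1, 0, 0.05], [0, 1, -0.05], [1, 1, 0.0], [0.5, 0.5, 10.0]])
print(split_ground(pts))
```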

Comparison of Reflectance and Vegetation Index Changes by Type of UAV-Mounted Multi-Spectral Sensors (무인비행체 탑재 다중분광 센서별 반사율 및 식생지수 변화 비교)

  • Lee, Kyung-do;Ahn, Ho-yong;Ryu, Jae-hyun;So, Kyu-ho;Na, Sang-il
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.947-958 / 2021
  • This study was conducted to provide basic data for crop monitoring by comparing and analyzing changes in reflectance and vegetation index across multispectral sensors mounted on unmanned aerial vehicles. Four UAV-mounted multispectral sensors (RedEdge-MX, S110 NIR, Sequoia, and P4M) acquired aerial images on September 14 and 15, 2020, once in the morning and once in the afternoon, for a total of four flights, and reflectance and vegetation indices were calculated and compared. For reflectance, the time-series coefficient of variation of all sensors averaged about 10% or more, indicating a limit to its direct use. The coefficient of variation of the vegetation index for the crop plots averaged 1.2 to 3.6% in plots with thick, highly vital vegetation, i.e. within 5%. However, this was higher than the coefficient of variation on a clear day, presumably because weather conditions such as cloud cover differed between the morning and afternoon flights during the experiment; it is therefore necessary to establish and implement UAV flight plans that account for the light environment. Comparing NDVI across the UAV multispectral sensors, the RedEdge-MX sensors could be used together without special correction of NDVI values, even when several sensors of the same type are used, provided the light environment is stable. The RedEdge-MX, P4M, and Sequoia sensors showed linear relationships with one another, but supplementary experiments are needed to evaluate joint utilization through offset correction between their vegetation indices.
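The two quantities being compared here, NDVI and the time-series coefficient of variation, are both one-liners. A sketch with invented band reflectances for one plot across four acquisitions:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def coefficient_of_variation(values):
    """CV (%) = 100 * sample std / mean, used to rate repeat-acquisition stability."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Hypothetical NDVI of one crop plot over four acquisitions (two days, AM/PM)
readings = [ndvi(0.45, 0.06), ndvi(0.44, 0.07), ndvi(0.46, 0.06), ndvi(0.43, 0.07)]
print(round(coefficient_of_variation(readings), 2))  # CV in percent
```

A CV of a few percent for dense vegetation, versus 10% or more for raw reflectance, is the pattern the abstract reports.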

Normalized Digital Surface Model Extraction and Slope Parameter Determination through Region Growing of UAV Data (무인항공기 데이터의 영역 확장법 적용을 통한 정규수치표면모델 추출 및 경사도 파라미터 설정)

  • Yeom, Junho;Lee, Wonhee;Kim, Taeheon;Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.6 / pp.499-506 / 2019
  • An NDSM (Normalized Digital Surface Model) is key information for the detailed analysis of remote sensing data. Although an NDSM can be obtained simply by subtracting a DTM (Digital Terrain Model) from a DSM (Digital Surface Model), in the case of UAV (Unmanned Aerial Vehicle) data it is difficult to derive an accurate DTM, because the high resolution of UAV data captures a large number of complex objects on the ground, such as vegetation and urban structures. In this study, the RGB-based vegetation index ExG (Excess Green) was used to extract initial seed points with low ExG values for region growing, so that a DTM can be generated cost-effectively from high-resolution UAV data. Local window analysis was applied to avoid extracting erroneous seed points from locally low ExG values. Using the DSM values of the seed points, region growing was applied to merge neighboring terrain pixels. Slope criteria were adopted for the region-growing process, and seed points were accepted as terrain points when the resulting segment was larger than 0.25 m². Various slope criteria were tested to derive the optimal value for UAV-based NDSM generation. Finally, the extracted terrain points were evaluated, and interpolation over them produced the NDSM. The proposed method was applied to an agricultural area to extract the above-ground heights of crops and to check its feasibility for agricultural monitoring.
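The slope-criterion region growing on the DSM can be sketched compactly. The toy 3×3 DSM, seed location, and slope threshold below are invented; the idea is that terrain cells merge with 4-neighbours whose height change per cell stays under the slope limit, so elevated crop cells are excluded:

```python
import numpy as np

def excess_green(r, g, b):
    """ExG = 2g - r - b on chromatic coordinates; low values suggest bare terrain."""
    total = r + g + b
    return (2 * g - r - b) / total

def grow_terrain(dsm, seed, slope_max=0.3, cell=1.0):
    """Grow a terrain region from `seed` on a DSM grid, merging 4-neighbours
    whose height difference per cell width stays within `slope_max` (rise/run)."""
    terrain = {seed}
    frontier = [seed]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < dsm.shape[0] and 0 <= nc < dsm.shape[1]
                    and (nr, nc) not in terrain
                    and abs(dsm[nr, nc] - dsm[r, c]) / cell <= slope_max):
                terrain.add((nr, nc))
                frontier.append((nr, nc))
    return terrain

# Toy DSM (m): near-flat ground with a 2 m crop cell in the centre
dsm = np.array([[10.0, 10.1, 10.0],
                [10.1, 12.0, 10.1],
                [10.0, 10.1, 10.0]])
region = grow_terrain(dsm, seed=(0, 0))
print(sorted(region))  # every cell except the 12.0 m crop cell
```

Interpolating the DSM heights over the grown terrain cells then yields the DTM, and DSM minus DTM gives the NDSM, i.e. the crop heights.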

Development of Android-Based Photogrammetric Unmanned Aerial Vehicle System (안드로이드 기반 무인항공 사진측량 시스템 개발)

  • Park, Jinwoo;Shin, Dongyoon;Choi, Chuluong;Jeong, Hohyun
    • Korean Journal of Remote Sensing / v.31 no.3 / pp.215-226 / 2015
  • Normally, aerial photography using a UAV navigates and is remotely controlled over a radio frequency (RF) modem link in the roughly 430 MHz band between the UAV and the ground control system. This existing method has a communication range of only 1-2 km with frequent line crossings, and since the wireless link carries information on a radio wave whose signal strength is limited to 10 mW, long-distance communication is restricted. The purpose of this research is to use the communication technologies of a smart camera, such as long-term evolution (LTE), Bluetooth, and Wi-Fi, together with other communication modules and cameras that can transfer data, to design and develop an automatic shooting system that acquires images from the UAV at the necessary locations. We conclude that the Android-based UAV imaging and communication module system can not only capture images with a single smart camera but also connect the UAV system and the ground control system. Moreover, using the UAV system's GPS, gyroscope, accelerometer, and magnetic sensors, real-time 3D position and attitude information can be obtained, allowing the real-time position of the UAV to be used and corrections to be made through aerial triangulation.