• Title/Summary/Keyword: vehicle-mounted camera

An Efficient Pedestrian Recognition Method based on PCA Reconstruction and HOG Feature Descriptor (PCA 복원과 HOG 특징 기술자 기반의 효율적인 보행자 인식 방법)

  • Kim, Cheol-Mun;Baek, Yeul-Min;Kim, Whoi-Yul
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.10 / pp.162-170 / 2013
  • In recent years, interest in and the need for Pedestrian Protection Systems (PPS), which are mounted on vehicles to improve traffic safety, have been increasing. In this paper, we propose a pedestrian candidate window extraction method and a unit-cell, histogram-based HOG descriptor calculation method. At the candidate window extraction stage, the brightness ratio between the pedestrian and its surrounding region, vertical edge projection, an edge factor, and the PCA reconstruction image are used. Dalal's HOG requires pixel-based histogram calculation with Gaussian weighting and trilinear interpolation on overlapping blocks, whereas our method applies Gaussian down-weighting, computes the histogram on a per-cell basis, and then combines each histogram with those of adjacent cells, so it can be computed faster than Dalal's method. Our PCA reconstruction error based candidate window extraction efficiently rejects background using the differences around the pedestrian's head and shoulder area. The proposed method improves detection speed compared to conventional HOG using only the image, without any prior information from camera calibration or a depth map obtained from stereo cameras.
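A minimal sketch of the PCA reconstruction error test sketched in the abstract above, assuming a pedestrian subspace has already been trained; the window size, component count, and random data below are illustrative stand-ins, not the authors' model:

```python
import numpy as np

def pca_reconstruction_error(window, mean, components):
    """Reconstruction error of a candidate window against a pedestrian PCA
    subspace; large errors suggest the window is background."""
    x = window.astype(np.float64).ravel() - mean   # center the vectorized window
    coeffs = components @ x                        # project onto the k eigenvectors (k x d)
    recon = components.T @ coeffs                  # back-project into image space
    return np.linalg.norm(x - recon)

# Toy usage with random stand-ins for a trained 64x32 pedestrian subspace.
rng = np.random.default_rng(0)
d, k = 64 * 32, 16
mean = rng.random(d)
basis, _ = np.linalg.qr(rng.standard_normal((d, k)))   # orthonormal columns
err = pca_reconstruction_error(rng.random((64, 32)), mean, basis.T)
print(f"reconstruction error: {err:.2f}")              # threshold to reject background windows
```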

Case Study: Cost-effective Weed Patch Detection by Multi-Spectral Camera Mounted on Unmanned Aerial Vehicle in the Buckwheat Field

  • Kim, Dong-Wook;Kim, Yoonha;Kim, Kyung-Hwan;Kim, Hak-Jin;Chung, Yong Suk
    • KOREAN JOURNAL OF CROP SCIENCE / v.64 no.2 / pp.159-164 / 2019
  • Weed control is a crucial practice not only in organic farming but also in modern agriculture in general, because weeds can cause losses in crop yield. Weeds are typically distributed heterogeneously in patches across the field, and these patches vary in size, shape, and density. It would therefore be more efficient to spray chemicals on these patches rather than uniformly over the whole field, which can pollute the environment and be cost-prohibitive. In this sense, weed detection can benefit sustainable agriculture. Studies on detecting weed patches with remote sensing technologies can be classified into methods using image segmentation based on morphology and methods using vegetation indices based on the wavelength of light. In this study, the latter methodology was used to detect weed patches. It was found that the vegetation-index approach was easier to operate, as it did not need any sophisticated algorithm to differentiate weeds from crop and soil, compared with the former method. Consequently, we demonstrated that the current vegetation-index method is accurate enough to detect weed patches and will be useful for farmers to control weeds more precisely with minimal use of chemicals.
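As a rough illustration of the vegetation-index route to weed patch detection, the sketch below computes NDVI from NIR and red reflectance bands and thresholds it; the band values and thresholds are illustrative and would need tuning for the buckwheat field in the study:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

def weed_patch_mask(nir, red, lower=0.3, upper=0.7):
    """Boolean mask of pixels whose NDVI falls in a band taken here to indicate
    weed patches (illustrative thresholds, tuned per field and crop)."""
    v = ndvi(nir, red)
    return (v >= lower) & (v <= upper)

# Example on a synthetic 2x3 tile of reflectance values.
nir = np.array([[0.45, 0.60, 0.20], [0.55, 0.30, 0.65]])
red = np.array([[0.20, 0.10, 0.18], [0.22, 0.12, 0.08]])
print(weed_patch_mask(nir, red))
```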

A study on measurement and compensation of automobile door gap using optical triangulation algorithm (광 삼각법 측정 알고리즘을 이용한 자동차 도어 간격 측정 및 보정에 관한 연구)

  • Kang, Dong-Sung;Lee, Jeong-woo;Ko, Kang-Ho;Kim, Tae-Min;Park, Kyu-Bag;Park, Jung Rae;Kim, Ji-Hun;Choi, Doo-Sun;Lim, Dong-Wook
    • Design & Manufacturing
    • /
    • v.14 no.1
    • /
    • pp.8-14
    • /
    • 2020
  • In general, automotive parts on an assembly line are mounted automatically by robots. At such production sites, quality problems such as misalignment of parts (doors, trunks, roofs, etc.) with the vehicle body, or collisions between assembly robots and components, often occur. To address these problems, part quality is inspected manually with mechanical jig devices outside the automated production line. Machine vision is the most widely used automotive inspection technology; it covers surface inspection such as mounting-hole spacing, defect detection, and body-panel dents and bends. It is also used for guidance, providing position information to the robot controller so that the robot's path can be adjusted, improving process productivity and manufacturing flexibility. The most difficult measurement task is to characterize surfaces and the relative position between parts from stored images of the part to be measured as it enters the field of view of a camera mounted beside or above it. A problem for machine vision devices on automobile production lines is that lighting conditions inside the factory change severely through the exterior windows of the assembly plant, with time of day and weather (morning versus evening, rainy versus sunny days). In addition, since vehicle body parts are made of sheet steel, light reflection is severe, so the quality of the captured image changes greatly even with small lighting changes. In this study, the distance between the car body and the door is acquired by a measuring device that combines a laser slit light source with an LED pattern light source. The result is transferred to the articulated assembly robot, which adjusts the angle and step so that the parts are assembled at the optimal position.
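A minimal sketch of laser triangulation under a simplified geometry (laser beam parallel to the optical axis at a known lateral baseline), not the paper's slit-light and LED-pattern calibration; the focal length, baseline, and pixel offsets are illustrative:

```python
import numpy as np

def triangulate_depth(pixel_offset, focal_length_px, baseline_mm):
    """Depth of a laser spot by similar triangles: Z = f * b / u, where u is the
    spot's pixel offset from the principal point, f the focal length in pixels,
    and b the laser-camera baseline."""
    u = np.asarray(pixel_offset, dtype=np.float64)
    return focal_length_px * baseline_mm / u

# Spot offsets sampled along one image row of the slit profile (illustrative).
offsets = np.array([180.0, 182.5, 179.0, 181.2])
depths = triangulate_depth(offsets, focal_length_px=1400.0, baseline_mm=120.0)
print(depths)   # mm; discontinuities in the profile reveal the gap and step
```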

Cognitive and Behavioral Effects of Augmented Reality Navigation System (증강현실 내비게이션의 인지적.행동적 영향에 관한 연구)

  • Kim, Kyong-Ho;Cho, Sung-Ik;Lee, Jae-Sik;Wohn, Kwang-Yun
    • Journal of the Korea Society for Simulation / v.18 no.4 / pp.9-20 / 2009
  • Navigation systems providing route guidance and traffic information are among the most widely used driver-support systems these days. Most navigation systems are based on the 2D map paradigm, so the information is abstracted and encoded from the real world. As a result, the driver bears the cognitive burden of interpreting and translating the abstracted information back into real-world information. As a new concept, the augmented-reality navigation system (AR navigation) has recently been suggested. It provides navigational guidance by superimposing graphical information in real time on the real image captured by a camera mounted on the vehicle. The ultimate goal of a navigation system, whether based on the abstracted graphic paradigm or the realistic image paradigm, is to assist the driving task with the least driving workload. In this paper, we describe experimental comparative studies on how map navigation and AR navigation affect driving tasks. From the results of this research, we obtained basic knowledge about the two navigation paradigms. On the basis of this knowledge, we intend to find the optimal navigation system design that supports the driving task most effectively, by analyzing the characteristics of driving tasks and navigational information from the human-vehicle interface point of view.
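The overlay step in AR navigation can be pictured as a pinhole projection of a route point into the camera frame; this is a generic sketch, not the authors' rendering pipeline, and the intrinsic matrix and pose below are illustrative assumptions:

```python
import numpy as np

def project_waypoint(point_vehicle, K, R, t):
    """Project a 3-D route point given in vehicle coordinates into pixel
    coordinates with the pinhole model x ~ K (R X + t); returns None if the
    point lies behind the camera."""
    p_cam = R @ np.asarray(point_vehicle, dtype=np.float64) + t
    if p_cam[2] <= 0:
        return None
    uv = K @ (p_cam / p_cam[2])
    return uv[:2]

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])               # illustrative camera intrinsics
R, t = np.eye(3), np.zeros(3)                 # camera assumed aligned with the vehicle frame
print(project_waypoint([1.5, -0.3, 20.0], K, R, t))   # guidance arrow anchor ~20 m ahead
```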

Estimation of Fresh Weight, Dry Weight, and Leaf Area Index of Soybean Plant using Multispectral Camera Mounted on Rotor-wing UAV (회전익 무인기에 탑재된 다중분광 센서를 이용한 콩의 생체중, 건물중, 엽면적 지수 추정)

  • Jang, Si-Hyeong;Ryu, Chan-Seok;Kang, Ye-Seong;Jun, Sae-Rom;Park, Jun-Woo;Song, Hye-Young;Kang, Kyeong-Suk;Kang, Dong-Woo;Zou, Kunyan;Jun, Tae-Hwan
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.4 / pp.327-336 / 2019
  • Soybean is one of the most important crops; its grains have a high protein content and are consumed in various forms of food. Soybean plants are generally cultivated in the field, and their yield and quality are strongly affected by climate change. Recently, abnormal climate conditions, including heat waves and heavy rainfall, have occurred frequently, increasing the risk in farm management. Real-time techniques for assessing the growth and quality of soybean would reduce crop losses in terms of both quantity and quality. The objective of this work was to develop a simple model to estimate the growth of soybean plants using a multispectral sensor mounted on a rotor-wing unmanned aerial vehicle (UAV). The soybean growth model was developed using simple linear regression analysis with three phenotypic variables (fresh weight, dry weight, and leaf area index) and two types of vegetation indices (VIs). The accuracy and precision of the LAI model using GNDVI (R2 = 0.789, RMSE = 0.73 ㎡/㎡, RE = 34.91%) were greater than those of the model using NDVI (R2 = 0.587, RMSE = 1.01 ㎡/㎡, RE = 48.98%). The accuracy and precision based on the simple ratio indices, RRVI (R2 = 0.760, RMSE = 0.78 ㎡/㎡, RE = 37.26%) and GRVI (R2 = 0.828, RMSE = 0.66 ㎡/㎡, RE = 31.59%), were better than those based on the normalized vegetation indices. The outcome of this study could aid the production of soybeans with high and uniform quality when a variable-rate fertilization system is introduced to cope with adverse climate conditions.
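A minimal sketch of the simple-linear-regression approach described above, regressing LAI on plot-mean GNDVI; the reflectance and LAI values are synthetic stand-ins, not the study's measurements:

```python
import numpy as np

def gndvi(nir, green, eps=1e-6):
    """Green NDVI: (NIR - Green) / (NIR + Green)."""
    nir = np.asarray(nir, dtype=np.float64)
    green = np.asarray(green, dtype=np.float64)
    return (nir - green) / (nir + green + eps)

# Plot-mean band reflectances and measured leaf area index (synthetic values).
nir   = np.array([0.52, 0.58, 0.62, 0.66, 0.70])
green = np.array([0.21, 0.17, 0.15, 0.12, 0.09])
lai   = np.array([1.1, 2.0, 2.6, 3.4, 4.2])

vi = gndvi(nir, green)
slope, intercept = np.polyfit(vi, lai, 1)      # simple linear regression
pred = slope * vi + intercept
rmse = np.sqrt(np.mean((pred - lai) ** 2))
r2 = 1.0 - np.sum((lai - pred) ** 2) / np.sum((lai - lai.mean()) ** 2)
print(f"LAI = {slope:.2f} * GNDVI + {intercept:.2f}  (R2 = {r2:.3f}, RMSE = {rmse:.2f})")
```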

Diurnal Change of Reflectance and Vegetation Index from UAV Image in Clear Day Condition (청천일 무인기 영상의 반사율 및 식생지수 일주기 변화)

  • Lee, Kyung-do;Na, Sang-il;Park, Chan-won;Hong, Suk-young;So, Kyu-ho;Ahn, Ho-yong
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.735-747 / 2020
  • Recent advances in UAV (Unmanned Aerial Vehicle) technology provide new opportunities for estimating crop condition using high-resolution imagery. We analyzed the diurnal change of reflectance and NDVI (Normalized Difference Vegetation Index) in UAV imagery for crop monitoring under clear-day conditions. Multi-spectral images were obtained from a 5-band multi-spectral camera mounted on a rotary-wing UAV. Reflectance was derived by the direct method using down-welling irradiance measurements. Reflectance from the UAV imagery over the calibration tarp, concrete, and crop experimental sites was neither stable over time nor reproducible from day to day. However, the CV (Coefficient of Variation) of diurnal NDVI over the crop experimental sites was less than 5%. When NDVI was compared at similar times on two days, the daily mean error ratio differed by 0.62 to 3.97%. Therefore, NDVI derived from UAV imagery is considered usable for time-series crop monitoring.
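The diurnal-stability check comes down to the coefficient of variation over repeated observations; a minimal sketch with illustrative NDVI values:

```python
import numpy as np

def coefficient_of_variation(values):
    """CV in percent: 100 * sample standard deviation / mean."""
    v = np.asarray(values, dtype=np.float64)
    return 100.0 * v.std(ddof=1) / v.mean()

# NDVI of one crop plot observed at several times on a clear day (illustrative).
ndvi_series = [0.71, 0.69, 0.72, 0.70, 0.68]
print(f"diurnal NDVI CV: {coefficient_of_variation(ndvi_series):.2f}%")
```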

Integrating UAV Remote Sensing with GIS for Predicting Rice Grain Protein

  • Sarkar, Tapash Kumar;Ryu, Chan-Seok;Kang, Ye-Seong;Kim, Seong-Heon;Jeon, Sae-Rom;Jang, Si-Hyeong;Park, Jun-Woo;Kim, Suk-Gu;Kim, Hyun-Jin
    • Journal of Biosystems Engineering / v.43 no.2 / pp.148-159 / 2018
  • Purpose: Unmanned air vehicle (UAV) remote sensing was applied to test various vegetation indices and build prediction models of rice protein content for monitoring grain quality and proper management practice. Methods: Image acquisition was carried out using NIR (Green, Red, NIR), RGB, and RE (Blue, Green, Red-edge) cameras mounted on the UAV. Sampling was done synchronously at the geo-referenced points, and GPS locations were recorded. Paddy samples were air-dried to 15% moisture content, dehulled, and milled to 92% milling yield, and the protein content was measured by near-infrared spectroscopy. Results: Considering all 54 samples, the artificial neural network showed better performance, with $R^2$ (coefficient of determination) of 0.740, NSE (Nash-Sutcliffe model efficiency coefficient) of 0.733, and RMSE (root mean square error) of 0.187%, than the models developed by PR (polynomial regression), SLR (simple linear regression), and PLSR (partial least square regression). PLSR calibration models showed results similar to PR, with $R^2$ of 0.663 and RMSE of 0.169% for cloud-free samples, and $R^2$ of 0.491 and RMSE of 0.217% for cloud-shadowed samples; however, the validation models performed poorly. This study revealed a highly significant correlation between NDVI (normalized difference vegetation index) and protein content in rice. For the cloud-free samples, the SLR models showed $R^2 = 0.553$ and RMSE = 0.210%; for the cloud-shadowed samples, $R^2 = 0.479$ and RMSE = 0.225%. Conclusion: There is a significant correlation between the spectral bands and grain protein content. Artificial neural networks have a strong advantage in fitting nonlinear problems when a sigmoid activation function is used in the hidden layer. Quantitatively, the neural network model obtained the more precise result, with a mean absolute relative error (MARE) of 2.18% and an RMSE of 0.187%.
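A minimal sketch of a neural network with a sigmoid hidden layer regressing protein content on NDVI, in the spirit of the model described above; the data are synthetic and scikit-learn's MLPRegressor stands in for the authors' network:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic NDVI -> protein pairs standing in for the 54 geo-referenced samples.
rng = np.random.default_rng(1)
ndvi = rng.uniform(0.5, 0.9, size=(54, 1))
protein = 9.0 - 3.0 * ndvi[:, 0] + rng.normal(0.0, 0.15, 54)   # % protein, illustrative

model = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",  # sigmoid hidden layer
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(ndvi, protein)

rmse = np.sqrt(mean_squared_error(protein, model.predict(ndvi)))
print(f"training RMSE: {rmse:.3f}% protein")
```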

Accuracy Assessment of Feature Collection Method with Unmanned Aerial Vehicle Images Using Stereo Plotting Program StereoCAD (수치도화 프로그램 StereoCAD를 이용한 무인 항공영상의 묘사 정확도 평가)

  • Lee, Jae One;Kim, Doo Pyo
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.2 / pp.257-264 / 2020
  • Vectorization is currently the main method of feature collection (extraction) in digital mapping with UAV photogrammetry. However, this method is time-consuming and prone to gross elevation errors when heights are extracted from a DSM (Digital Surface Model), because three-dimensional feature coordinates are vectorized separately: planimetric information from an orthophoto and height from a DSM. Consequently, the demand for a stereo plotting method capable of acquiring three-dimensional spatial information simultaneously is increasing. However, this method requires expensive equipment, a Digital Photogrammetry Workstation (DPW), and the technology itself is still incomplete. In this paper, we evaluated the accuracy of a low-cost stereo plotting system, Menci's StereoCAD, by analyzing its three-dimensional spatial information acquisition. Images were taken with an FC 6310 camera mounted on a Phantom4 pro at 90 m altitude with a Ground Sample Distance (GSD) of 3 cm. The accuracy analysis was performed by comparing coordinate differences between the ground survey and the stereo plotting results at check points, and also at corner points by layer. The results showed that the Root Mean Square Error (RMSE) at the check points was 0.048 m for the horizontal and 0.078 m for the vertical coordinates; across the layers, it ranged from 0.104 m to 0.127 m for the horizontal and from 0.086 m to 0.092 m for the vertical coordinates. In conclusion, the results showed that a 1:1,000 digital topographic map can be generated using a stereo plotting system with UAV images.
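The accuracy figures above are component-wise RMSEs of plotted coordinates against ground-surveyed ones; a minimal sketch with made-up check-point coordinates:

```python
import numpy as np

def rmse_per_axis(measured, reference):
    """Component-wise RMSE between plotted and ground-surveyed coordinates."""
    d = np.asarray(measured, dtype=np.float64) - np.asarray(reference, dtype=np.float64)
    return np.sqrt(np.mean(d ** 2, axis=0))

# (E, N, H) of check points from stereo plotting vs. ground survey (illustrative).
plotted  = np.array([[100.03, 200.04, 50.07],
                     [150.02, 250.05, 52.08],
                     [180.04, 230.03, 51.09]])
surveyed = np.array([[100.00, 200.00, 50.00],
                     [150.00, 250.00, 52.00],
                     [180.00, 230.00, 51.00]])

e_rmse, n_rmse, h_rmse = rmse_per_axis(plotted, surveyed)
print(f"horizontal RMSE: {np.hypot(e_rmse, n_rmse):.3f} m, vertical RMSE: {h_rmse:.3f} m")
```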

Edge Response Analysis of UAV-Images Using a Slanted Target (경사 타겟을 이용한 무인항공영상의 경계반응 분석)

  • Lee, Jae One;Sung, Sang Min
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.4 / pp.317-325 / 2020
  • UAV (Unmanned Aerial Vehicle) photogrammetry has recently emerged as a means of obtaining highly precise and rapid spatial information owing to its cost-effectiveness and high efficiency. However, current procedures and regulations for quantitative quality verification and certification of UAV images are insufficient. In addition, the current image quality verification method relies only on GSD (Ground Sample Distance) analysis and does not include MTF (Modulation Transfer Function) or edge response analysis, which can evaluate the degree of contrast as well as image resolution. Therefore, in this study, edge response analysis using a slanted-edge target was performed along with GSD analysis to confirm the necessity of edge response analysis in UAV image quality assessment. Furthermore, a Matlab GUI-based software tool was developed to help streamline the edge response analysis. As a result, we confirmed the need for edge response analysis, since images with the same GSD produced significantly different edge responses. Additionally, we found that the edge response quality of UAV images is proportional to the performance of the camera mounted on the UAV.
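A much-simplified one-dimensional version of slanted-edge analysis (no sub-pixel binning across the slant, unlike the full method): differentiate the edge spread function, window it, and take the FFT magnitude as the MTF. The edge profile below is synthetic:

```python
import numpy as np

def mtf_from_edge_profile(esf):
    """Edge spread function (ESF) -> line spread function (LSF) by numerical
    differentiation, then FFT magnitude normalized to 1 at zero frequency."""
    esf = np.asarray(esf, dtype=np.float64)
    lsf = np.gradient(esf)
    lsf *= np.hanning(lsf.size)              # suppress noise at the profile ends
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Synthetic blurred edge profile sampled across the slanted target.
x = np.linspace(-5, 5, 101)
esf = 0.5 * (1 + np.tanh(x / 1.2))           # smooth dark-to-bright transition
mtf = mtf_from_edge_profile(esf)
freq = np.fft.rfftfreq(101, d=1.0)           # cycles per pixel
print(f"MTF50 ~ {freq[np.argmax(mtf < 0.5)]:.3f} cycles/pixel")
```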

An Experimental Study on Assessing Precision and Accuracy of Low-cost UAV-based Photogrammetry (저가형 UAV 사진측량의 정밀도 및 정확도 분석 실험에 관한 연구)

  • Yun, Seonghyeon;Lee, Hungkyu;Choi, Woonggyu;Jeong, Woochul;Jo, Eonjeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.207-215 / 2022
  • This research focused on assessing the precision and accuracy of UAV (Unmanned Aerial Vehicle)-derived 3-D surveying coordinates. To this end, a highly precise and accurate testing control network was established by a GNSS (Global Navigation Satellite System) campaign and its network adjustment. The coordinates of the ground control points and the check points were estimated within 1 cm accuracy at the 95% confidence level. An FC330 camera mounted on a DJI Phantom 4 took aerial photos of an experimental area seven times, and the images were then processed with two widely used software packages. To evaluate the precision and accuracy of the aerial surveys, the 3-D coordinates of the ten check points automatically extracted by the software were compared with the GNSS solutions. At the 95% confidence level, the standard deviations of the two packages' results are within 1 cm, 2 cm, and 4 cm for the north-south, east-west, and height directions, and the RMSE (Root Mean Square Error) is within 9 cm and 8 cm for the horizontal and vertical components, respectively. Interestingly, the standard deviation is much smaller than the RMSE. An F-ratio test was performed to confirm the statistical difference between the two software processing results. For the standard deviation and RMSE of most positional components, with the exception of the height RMSE, the null hypothesis of the one-tailed test was rejected. This indicates that UAV photogrammetry results can differ statistically depending on the processing software.
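The distinction between precision (standard deviation about the mean) and accuracy (RMSE about the reference) and a one-tailed F-ratio test can be sketched as follows; the repeated heights and the test setup are illustrative, not the paper's data or exact procedure:

```python
import numpy as np
from scipy.stats import f

def precision_and_accuracy(estimates, truth):
    """Standard deviation (precision, spread about the mean) and RMSE
    (accuracy, spread about the reference value) of repeated estimates."""
    estimates = np.asarray(estimates, dtype=np.float64)
    return estimates.std(ddof=1), np.sqrt(np.mean((estimates - truth) ** 2))

# Heights of one check point from seven repeated flights, two packages (illustrative).
sw_a = np.array([52.031, 52.028, 52.035, 52.030, 52.027, 52.033, 52.029])
sw_b = np.array([52.080, 52.074, 52.086, 52.078, 52.082, 52.076, 52.084])
truth = 52.000

for name, h in (("A", sw_a), ("B", sw_b)):
    sd, rm = precision_and_accuracy(h, truth)
    print(f"software {name}: std = {sd * 100:.1f} cm, RMSE = {rm * 100:.1f} cm")

# One-tailed F-ratio test on the two variance estimates.
ratio = max(sw_a.var(ddof=1), sw_b.var(ddof=1)) / min(sw_a.var(ddof=1), sw_b.var(ddof=1))
p = 1 - f.cdf(ratio, dfn=len(sw_a) - 1, dfd=len(sw_b) - 1)
print(f"F = {ratio:.2f}, one-tailed p = {p:.3f}")
```

With a systematic offset in the data, the standard deviation stays small while the RMSE grows, which mirrors the observation in the abstract that precision can be much better than accuracy.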