• Title/Summary/Keyword: camera image


Use of Unmanned Aerial Vehicle for Multi-temporal Monitoring of Soybean Vegetation Fraction

  • Yun, Hee Sup;Park, Soo Hyun;Kim, Hak-Jin;Lee, Wonsuk Daniel;Lee, Kyung Do;Hong, Suk Young;Jung, Gun Ho
    • Journal of Biosystems Engineering
    • /
    • v.41 no.2
    • /
    • pp.126-137
    • /
    • 2016
  • Purpose: The overall objective of this study was to evaluate the vegetation fraction of soybeans grown under different cropping conditions using an unmanned aerial vehicle (UAV) equipped with a red, green, and blue (RGB) camera. Methods: Test plots were prepared under four cropping treatments: soybean single-cropping and soybean-barley cover cropping, each with and without herbicide application. The UAV flights were manually controlled with a remote flight controller on the ground, using 2.4 GHz radio frequency communication. For pre-processing, the acquired images were corrected with a fisheye distortion removal function and georeferenced using ground control points collected from Google Maps. Tarpaulin panels of different colors were used to calibrate the multi-temporal images, converting the RGB digital number values into RGB reflectance by linear regression. Excess Green (ExG) vegetation indices for the test plots were then compared using the M-statistic in order to quantitatively evaluate the greenness of soybean fields under the different cropping systems. Results: The reflectance calibration showed high coefficients of determination, ranging from 0.8 to 0.9, indicating that a linear regression fitting method is feasible for monitoring multi-temporal RGB images of soybean fields. As expected, the ExG indices changed with soybean growth stage, showing clear differences among the test plots with different cropping treatments in the early season of < 60 days after sowing (DAS). With the M-statistic, the test plots under different treatments could be discriminated in the early season of < 41 DAS, showing values of M > 1. 
Conclusion: Multi-temporal images obtained with a UAV and an RGB camera can therefore be applied to quantify overall vegetation fraction and crop growth status, and this information can contribute to determining proper treatments for the vegetation fraction.
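The two measures in this abstract can be sketched briefly. This is a minimal illustration, not the authors' implementation: ExG is computed on normalized chromatic coordinates (ExG = 2g - r - b), and the M-statistic is the standard separability measure M = |μ₁ - μ₂| / (σ₁ + σ₂), where M > 1 indicates good separation between two treatment classes.

```python
import numpy as np

def excess_green(rgb):
    """ExG = 2g - r - b on normalized chromatic coordinates.
    `rgb` is an array of pixels with the last axis holding (R, G, B)."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

def m_statistic(a, b):
    """Separability of two index distributions: M = |mu_a - mu_b| / (sigma_a + sigma_b).
    M > 1 is the usual threshold for well-separated classes."""
    return abs(np.mean(a) - np.mean(b)) / (np.std(a) + np.std(b))
```

Vegetation pixels (green-dominant) yield positive ExG, while bare-soil pixels yield values near or below zero, which is what makes the index usable as a vegetation-fraction proxy.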

A New Algorithm for the Interpretation of Joint Orientation Using Multistage Convergent Photographing Technique (수렴다중촬영기법을 이용한 새로운 절리방향 해석방법)

  • 김재동;김종훈
    • Tunnel and Underground Space
    • /
    • v.13 no.6
    • /
    • pp.486-494
    • /
    • 2003
  • When joint orientations are measured on a rock exposure, it is often difficult for the surveyor to reach the target joints or to set up scanlines on the slope. To overcome these limitations, a new algorithm was developed in this study to interpret joint orientation by analyzing images of the rock slope. As a method of arranging multiple images of a rock slope, a multistage convergent photographing system was introduced to overcome the restriction on photographing direction imposed by existing methods such as the parallel stereophotogrammetric system, and to extend the range of image measurement, i.e., the overlapping area between an image pair, to the maximum extent. To determine the camera parameters of the perspective projection equation, which are the main elements of the analysis method, a new method was developed that uses three ground control points and a single ground guide point. This method is very simple compared with existing methods that require many ground control points and a complicated analysis process. With it, the global coordinates of a specific point on a rock slope can be computed, and the orientation of a joint can then be calculated from the normal vector of the joint surface, which is derived from the global coordinates of several points on that surface analyzed from the images.
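The final step described above, recovering a joint's orientation from the normal vector of a plane fitted to several surface points, can be sketched as follows. This is a generic least-squares plane fit via SVD, not the paper's own code; it assumes global coordinates with x = East, y = North, z = up, and reports orientation as dip and dip direction.

```python
import numpy as np

def joint_orientation(points):
    """Fit a plane to >= 3 global-coordinate points (x=East, y=North, z=up)
    and return (dip, dip_direction) of the joint in degrees."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    if n[2] < 0:                      # orient the normal upward
        n = -n
    dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
    # Horizontal projection of the upward normal points toward the downdip azimuth.
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip, dip_dir
```

For example, points lying on the plane z = -x dip 45° toward the east (azimuth 090°).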

Development of Road Surface Management System using Digital Imagery (수치영상을 이용한 도로 노면관리시스템 개발)

  • Seo, Dong-Ju
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.10 no.1
    • /
    • pp.35-46
    • /
    • 2007
  • In this study, digital imagery was used to examine asphalt concrete pavements. Using image information filmed with a video camera fixed on a car travelling along the road at a constant speed, a road surface management system that can obtain road surface information (crack, rutting, IRI) was developed in the object-oriented language Delphi. The system was designed to improve visualization through animations and graphs. After analyzing the accuracy of the 3-D road surface coordinates determined by multiple image orientation and the bundle adjustment method, the average standard errors were 0.0427 m in the X direction, 0.0527 m in the Y direction, and 0.1539 m in the Z direction. This was found to be good enough for practical use with maps at scales of 1/1,000 or smaller, which are currently produced and used in Korea, and with GIS data. According to an accuracy analysis of crack widths at 12 spots using a digital video camera, the standard error was ±0.256 mm, which is considered high precision. To obtain rutting information, physically measured cross sections at 4 spots were compared with cross sections generated from the digital images; although the maximum error was 10.88 mm, the method's practicality lies in its work efficiency.


The Tunnel Lane Positioning System of an Autonomous Vehicle Using LED Lighting (LED 조명을 이용한 자율주행차용 터널 차로측위 시스템)

  • Jeong, Jae hoon;Lee, Dong heon;Byun, Gi-sig;Cho, Hyung rae;Cho, Yoon ho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.16 no.1
    • /
    • pp.186-195
    • /
    • 2017
  • Recently, autonomous vehicles have been studied actively. Various technologies such as ITS, Connected Car, V2X, and ADAS are being developed to realize such autonomous driving. Among them, it is particularly important for a vehicle to recognize where it is on the road in order to change lanes and drive to its destination. Generally, this is done through GPS and camera image processing. However, GPS positioning is unreliable in shaded areas such as tunnels, and camera image processing is limited by the condition of the road lanes and the surrounding environment. This paper proposes installing LED lighting for autonomous vehicles in tunnels, which are GPS-shaded areas. After constructing a simulated tunnel LED lighting environment that illuminates each lane with light of a different color temperature, we show that the current lane of an autonomous vehicle can be determined by analyzing the color temperature. Based on these results, a lane positioning technique using tunnel LED lighting is proposed.
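The lane-identification idea above rests on estimating a correlated color temperature (CCT) from camera pixels and mapping it to a lane. A common way to do this, sketched below with no claim that it matches the paper's method, is to convert linear sRGB to CIE XYZ, take the (x, y) chromaticity, and apply McCamy's cubic approximation; the lane-to-CCT bands here are purely hypothetical.

```python
def rgb_to_cct(r, g, b):
    """Approximate CCT (kelvin) of a pixel: linear sRGB (D65) -> XYZ ->
    chromaticity -> McCamy's approximation. Inputs assumed linear, in [0, 1]."""
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    s = X + Y + Z
    x, y = X / s, Y / s
    n = (x - 0.3320) / (0.1858 - y)   # McCamy's intermediate variable
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

def lane_from_cct(cct, bands):
    """Pick the lane whose nominal CCT band contains the measurement.
    `bands` maps lane id -> (low, high) in kelvin (hypothetical values)."""
    for lane, (lo, hi) in bands.items():
        if lo <= cct <= hi:
            return lane
    return None
```

A neutral white pixel comes out near 6500 K, the sRGB D65 white point, which is a quick sanity check on the conversion.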

Active Object Tracking System based on Stereo Vision (스테레오 비젼 기반의 능동형 물체 추적 시스템)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.4
    • /
    • pp.159-166
    • /
    • 2016
  • In this paper, an active object tracking system based on a pan/tilt-embedded stereo camera is suggested and implemented. In the proposed system, the face area of a target is first detected from the input stereo image using a YCbCr color model and a phase-type correlation scheme; then, using this data together with the geometric information of the tracking system, the distance and 3D information of the target are extracted effectively in real time. Based on these extracted data, the pan/tilt-embedded stereo camera is controlled adaptively, so the proposed system can track the target under various target conditions. Experiments on 480 frames of test input stereo images show that the standard deviation between the measured and estimated target heights and the error ratio between the measured and computed 3D coordinates of the target are kept very low, at 1.03 and 1.18% on average, respectively. These good experimental results suggest the possibility of implementing a new real-time intelligent stereo target tracking and surveillance system using the proposed scheme.
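The core geometric relation behind distance extraction from a rectified stereo pair is the standard triangulation formula Z = f·B/d (focal length in pixels, baseline in metres, disparity in pixels). A minimal sketch of just this relation, not the paper's full pan/tilt pipeline:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in metres;
    disparity_px: horizontal pixel shift of the point between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relation: halving the disparity doubles the estimated depth, so depth precision degrades quadratically with distance.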

Influence of Iodinated Contrast Media and Paramagnetic Contrast Media on Changes in Uptake Counts of 99mTc

  • Cho, Jae-Hwan;Lee, Jin-Hyeok;Park, Cheol-Soo;Lee, Sun-Yeob;Lee, Jin;Moon, Deog-Hwan;Lee, Hae-Kag
    • Journal of Magnetics
    • /
    • v.19 no.3
    • /
    • pp.248-254
    • /
    • 2014
  • The purpose of this study is to determine how the uptake counts of technetium (99mTc), a radioisotope used in the human body, are affected when computed tomography (CT), magnetic resonance imaging (MRI), and isotope examinations are performed consecutively. The 99mTc isotope, iodinated contrast media for CT, and paramagnetic contrast media for magnetic resonance (MR) were used as experimental materials. First, 99mTc was added to 4 cc of normal saline in a test tube. Then, 2 cc of the CT contrast media Iopamidol® and Dotarem® were each diluted with 2 cc of normal saline, and 2 cc of the MRI contrast media Primovist® and Gadovist® were each diluted with 2 cc of normal saline. Each prepared contrast medium totaled 4 cc and included 10 mCi of 99mTc. A gamma camera with an LEHR (low energy high resolution) collimator and a pin-hole collimator was used for image acquisition. Image acquisition was repeated a total of 6 times, 120 frames were obtained, and the uptake counts of 99mTc were measured. With the LEHR collimator, lower uptake counts were measured for all contrast media than for the normal saline reference; the lowest counts occurred with the MRI contrast medium Gadovist®. With the pin-hole collimator, however, higher uptake counts were measured for all contrast media except Iopamidol® than for the normal saline reference, with the highest counts occurring for the MRI contrast medium Primovist®. When performing gamma camera examinations using contrast media and 99mTc, it is considered important to check these changes in uptake counts in order to improve diagnostic value.

Information Hiding Technique in Smart Phone for the Implementation of GIS Web-Map Service (GIS 웹 맵 서비스 구현을 위한 스마트 폰에서의 정보은닉 기법)

  • Kim, Jin-Ho;Seo, Yong-Su;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.5
    • /
    • pp.710-721
    • /
    • 2010
  • Recently, with the advancement of embedded technology for mobile devices, a new kind of service, the mash-up, has appeared: a service or application combining a multimedia content creation tool or device with a web-GIS (geographic information system) service in the mobile environment. Such services are easy for casual users and can be applied in various ways, so they are actively served in the Web 2.0 environment. However, in a mash-up service, the generated multimedia contents linked with a web map are a new type of content that includes the user's movement routes in space, such as GPS coordinates. Thus, there is no protection for the intellectual property created by GIS web-map service users or for their privacy. In this paper, we propose a location and user information hiding scheme for GIS web-map services. The scheme embeds location and user information into a picture taken by the camera module of a mobile phone. This not only protects the user's privacy but also provides a way to trace an illegal photographer who peeps at people through a hidden camera. We also implemented the proposed scheme on a mobile smart phone. To minimize the error in the location coordinate values under content-manipulation attacks, the GPS information is embedded into the chrominance signal of the content, weighting each digit of the binary GPS coordinate value. To trace illegal photographers, user information such as the mobile phone's serial number, phone number, and photographing date is embedded into the frequency spectrum of the content's luminance signal. In the experimental results, we confirmed that the error of the extracted information under various image processing attacks is within a reliable tolerance, and that after a file-format translation attack the embedded information could be extracted from the attacked content without damage. Using the similarity between the extracted and original templates, we could also extract the whole information from chrominance signals damaged by various image processing attacks.
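To make the embedding idea concrete, here is a deliberately simplified sketch: hiding a bit string in the least significant bits of a chrominance channel. The paper's actual scheme weights GPS digits and embeds into the frequency spectrum, which is not reproduced here; this LSB version only illustrates the embed/extract round trip.

```python
def embed_bits(channel, bits):
    """Embed a bit string into the LSBs of a chrominance channel (list of ints).
    Minimal LSB sketch only; not the paper's digit-weighted spectral method."""
    out = list(channel)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | int(bit)   # overwrite the lowest bit
    return out

def extract_bits(channel, n):
    """Recover the first n embedded bits from the channel's LSBs."""
    return "".join(str(v & 1) for v in channel[:n])
```

LSB embedding changes each carrier value by at most 1, so the visual impact on the chrominance signal is negligible, though unlike a spectral method it does not survive recompression.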

Object Detection Method on Vision Robot using Sensor Fusion (센서 융합을 이용한 이동 로봇의 물체 검출 방법)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.249-254
    • /
    • 2007
  • A mobile robot with various types of sensors and a wireless camera is introduced. We show that this mobile robot can detect objects well by combining the results of active sensors and an image processing algorithm. First, active sensors such as infrared and ultrasonic sensors are employed together to detect objects, and the distance between the object and the robot is calculated in real time from the sensor outputs; the difference between the measured and calculated values is less than 5%. We focus on detecting the object region well with the image processing algorithm because it gives robots the ability to work for humans. This paper suggests an effective visual detection system for moving objects using specified color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide the existence of a moving object, and uses shape information and a signature algorithm to segment objects from the background regardless of shape changes. Weighting values are applied to the results from each sensor and the camera, and the final results are combined into a single value representing the probability of an object within the limited distance. The sensor fusion technique improves the detection rate by at least 7% over techniques using an individual sensor.
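The fusion step, combining weighted per-sensor results into one probability, can be sketched as a normalized weighted average. The abstract does not give the actual weights or fusion rule, so the sensor names and weights below are hypothetical:

```python
def fuse(detections, weights):
    """Combine per-sensor detection scores in [0, 1] into one probability
    by normalized weighted averaging. `detections` maps sensor -> score,
    `weights` maps sensor -> relative trust (illustrative values only)."""
    total_w = sum(weights[s] for s in detections)
    return sum(weights[s] * p for s, p in detections.items()) / total_w
```

Giving the camera a larger weight, as below, lets visual evidence dominate while the range sensors still pull the estimate toward their readings.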

On the Experimental Modeling of Focal Plane Compensation Device for Image Stabilization of Small Satellite (소형위성 광학탑재체의 영상안정화를 위한 초점면부 보정장치의 실험적 모델링에 관한 연구)

  • Kang, Myoung-Soo;Hwang, Jai-Hyuk;Bae, Jae-Sung;Park, Jean-Ho
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.43 no.8
    • /
    • pp.757-764
    • /
    • 2015
  • Mathematical modeling of the focal plane compensation device in a small earth-observation satellite camera has been conducted experimentally for the compensation of micro-vibration disturbances. PZT actuators are used as the control actuators of the compensation device. Because of the hysteresis of PZT actuators, it is quite difficult to build an analytical model; therefore, the compensation device is assumed to be a second-order linear system and modeled using the MATLAB System Identification Toolbox. It was found that four linear models of the compensation device are needed to keep the error within 10% over the input frequency range of 0-50 Hz; these models accurately describe the dynamics of the device in the four sub-domains into which that range is divided. Micro-vibration disturbances can then be compensated by a feedback control strategy that switches among the four models according to the input frequency.
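The modeling strategy above, a second-order linear approximation per frequency band plus switching, can be sketched as follows. The transfer-function form K·ωn² / (s² + 2ζωn·s + ωn²) is the standard second-order model; the parameter values and band boundaries below are illustrative, not the identified ones from the paper.

```python
import math

def second_order_gain(freq_hz, k, wn, zeta):
    """Steady-state magnitude of K*wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
    at input frequency freq_hz (k: DC gain, wn: natural freq rad/s, zeta: damping)."""
    w = 2 * math.pi * freq_hz
    re = wn**2 - w**2          # real part of the denominator at s = jw
    im = 2 * zeta * wn * w     # imaginary part
    return k * wn**2 / math.hypot(re, im)

def select_model(freq_hz, models):
    """Switch among band-specific identified models, as done over 0-50 Hz.
    `models` maps (low, high) Hz bands to model parameters (hypothetical bands)."""
    for (lo, hi), params in models.items():
        if lo <= freq_hz < hi:
            return params
    raise ValueError("frequency outside the modeled range")
```

At zero frequency the magnitude reduces to the DC gain K, a quick consistency check on the formula.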

Accuracy of Parcel Boundary Demarcation in Agricultural Area Using UAV-Photogrammetry (무인 항공사진측량에 의한 농경지 필지 경계설정 정확도)

  • Sung, Sang Min;Lee, Jae One
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.1
    • /
    • pp.53-62
    • /
    • 2016
  • In recent years, UAV photogrammetry based on an ultra-light UAS (unmanned aerial system) fitted with a low-cost compact navigation device and camera has attracted great attention through the fast and accurate acquisition of geo-spatial data. In particular, UAV photogrammetry is gradually replacing traditional aerial photogrammetry because it can produce DEMs (digital elevation models) and orthophotos rapidly, owing to the collection of large amounts of high-resolution imagery with a low-cost camera and image processing software incorporating computer vision techniques. With these advantages, UAV photogrammetry has been applied to large-scale mapping and cadastral surveying, which require accurate position information. This paper presents the results of an accuracy test using images with 4 cm GSD from a fixed-wing UAS to demarcate parcel boundaries in an agricultural area. The accuracy of the boundary points extracted from the UAS orthoimage was within 8 cm of terrestrial cadastral surveying, which satisfies the tolerance limit of distance error in cadastral surveying at a scale of 1:500. The area deviation was also negligibly small, about 0.2% (3.3 m²), against the true area of 1,969 m² from cadastral surveying. UAV photogrammetry is therefore a promising technology for demarcating parcel boundaries.
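The area comparison reported above (a 3.3 m² deviation against 1,969 m², about 0.2%) can be reproduced with the shoelace formula over digitized boundary vertices. A minimal sketch, with the helper names chosen here for illustration:

```python
def polygon_area(points):
    """Parcel area by the shoelace formula; `points` are (x, y) boundary
    vertices in metres, in order around the polygon."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def area_deviation_pct(measured, reference):
    """Percent deviation of a photogrammetric area against the cadastral area."""
    return abs(measured - reference) / reference * 100.0
```

Plugging in the paper's figures, 3.3 m² over 1,969 m² gives roughly 0.17%, consistent with the reported "about 0.2%".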