• Title/Summary/Keyword: Drone image

Target Latitude and Longitude Detection Using UAV Rotation Angle (UAV의 회전각을 이용한 목표물 위경도 탐지 방법)

  • Shin, Kwang-Seong;Jung, Nyum;Youm, Sungkwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.1
    • /
    • pp.107-112
    • /
    • 2020
  • Recently, as the fields in which drones are used have diversified, they are actively employed not only for surveying but also for search and rescue work. In these applications it is very important to know the location of the target or of the UAV. This paper proposes a target detection method using images taken from a drone. The proposed method locates the target in the drone image by image comparison and calculates the target's latitude and longitude. Using the characteristics of a pinhole camera, the actual ground distance corresponding to a distance in the image is computed, which yields the exact latitude and longitude of the target. Experiments confirmed that the proposed method accurately identifies the latitude and longitude of the target.
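The pinhole-camera step described in this abstract can be sketched as follows. The focal length and pixel pitch below are hypothetical placeholders (the paper's camera parameters are not given here); the idea is to derive a ground distance per pixel from altitude, then offset the drone's latitude and longitude along the rotation angle:

```python
import math

def ground_distance(pixel_dist, altitude_m, focal_mm, pixel_pitch_um):
    """Pinhole model: ground distance (m) corresponding to a pixel distance.
    GSD (m/pixel) = altitude * pixel_pitch / focal_length."""
    gsd = altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)
    return pixel_dist * gsd

def offset_latlon(lat, lon, dist_m, heading_deg):
    """Approximate target lat/lon from drone position, ground distance,
    and rotation angle (heading, degrees clockwise from north)."""
    north = dist_m * math.cos(math.radians(heading_deg))
    east = dist_m * math.sin(math.radians(heading_deg))
    dlat = north / 111320.0                                  # m per degree latitude
    dlon = east / (111320.0 * math.cos(math.radians(lat)))   # shrinks with latitude
    return lat + dlat, lon + dlon
```

For example, at 50 m altitude with a 5 mm lens and 10 µm pixels, a 100-pixel distance in the image corresponds to 10 m on the ground.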

Comparison of Deep-Learning Algorithms for the Detection of Railroad Pedestrians

  • Fang, Ziyu;Kim, Pyeoungkee
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.1
    • /
    • pp.28-32
    • /
    • 2020
  • Railway transportation is the main land-based transportation in most countries, so railway-transportation safety has always been a key issue for many researchers. Pedestrian accidents are the main cause of railway-transportation casualties. In this study, we conduct experiments to determine which of the latest convolutional neural network models and algorithms are appropriate for building a pedestrian railroad accident prevention system. While a drone cruises along a pre-specified path and altitude, the real-time status around the rail is recorded, and the image information is transmitted back to the server in time. The images are then analyzed to determine whether pedestrians are present around the railroad; if so, a speed-deceleration order is immediately sent to the train driver, reducing the number of pedestrian railroad accidents. This is the first part of an envisioned drone-based intelligent security system, which can effectively address the problem of an insufficient manual police force.
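Comparisons of detection models like the one this paper describes are conventionally scored with intersection-over-union (IoU) between predicted and ground-truth boxes. The abstract does not state its evaluation protocol, so the following is a generic, minimal sketch of that standard metric:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_true_positive(pred, truth, threshold=0.5):
    """A predicted pedestrian box counts as correct if it overlaps
    the ground-truth box with IoU above the threshold."""
    return iou(pred, truth) >= threshold
```

Averaging such true-positive decisions over a labeled test set gives the precision/recall figures used to rank competing detectors.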

Automated 3D Model Reconstruction of Disaster Site Using Aerial Imagery Acquired By Drones

  • Kim, Changyoon;Moon, Hyounseok;Lee, Woosik
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.671-672
    • /
    • 2015
  • Due to the harsh conditions of disaster areas, understanding the current state of collapsed buildings, terrain, and other infrastructure is a critical issue for disaster managers. However, because of the difficulty of acquiring geographical information from disaster sites, such as their large extent and the limited capabilities of rescue workers, a comprehensive investigation of the locations of survivors buried under the remains of buildings is not an easy task. To overcome these circumstances, this study uses an unmanned aerial vehicle, commonly known as a drone, to effectively acquire current image data from large disaster areas. A framework for 3D model reconstruction of a disaster site using aerial imagery acquired by drones is also presented. The proposed methodology is expected to assist rescue workers and disaster managers in achieving rapid and accurate identification of survivors under collapsed buildings.

Research on the Design of Drone System for Field Support Using AR Smart Glasses Technology (AR스마트안경 기술을 접목한 현장 지원용 드론(Drone)시스템 설계에 대한 연구)

  • Lee, Kyung-Hwan;Jeong, Jin-Kuk;Ryu, Gab-Sang
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.4
    • /
    • pp.27-32
    • /
    • 2020
  • High-resolution images taken by drones are being used for a variety of purposes, including monitoring. The management of agricultural facilities, however, still relies mostly on manual surveys. Surveying agricultural facilities and their surrounding environment involves legal and environmental constraints that make some areas inaccessible to humans. In addition, in areas where information such as 3D maps and satellite maps is outdated or not provided, human investigation is unavoidable and costs a great deal of time and money. The purpose of this research is to design and develop a drone system for field support incorporating AR smart glasses technology for the maintenance and management of agricultural facilities, improving on the difficulties of using existing drones. We also suggest ways to secure the safety of personal information, in order to prevent the damage that can be caused by exposure of personal information through video shooting.

Discriminant analysis to detect fire blight infection on pear trees using RGB imagery obtained by a rotary wing drone

  • Kim, Hyun-Jung;Noh, Hyun-Kwon;Kang, Tae-Hwan
    • Korean Journal of Agricultural Science
    • /
    • v.47 no.2
    • /
    • pp.349-360
    • /
    • 2020
  • Fire blight is a contagious disease affecting apples, pears, and some other members of the family Rosaceae. Because of its extremely strong infectivity, once an orchard is confirmed to be infected, all trees within 100 m must be buried, and the site is prohibited from cultivating any fruit trees for 5 years. In South Korea, fire blight was confirmed for the first time in the Ansung area in 2015, and infections are still identified every year. Traditional approaches to detecting fire blight are expensive and time-consuming; in addition, the inspectors themselves can transmit the pathogen. It is therefore necessary to develop a remote, unmanned monitoring system for fire blight to prevent the spread of the disease. This study was conducted to detect fire blight on pear trees using discriminant analysis with color information collected from a rotary-wing drone. Images of infected trees were obtained at a pear orchard in Cheonan using an RGB camera attached to a rotary-wing drone at an altitude of 4 m, and also using a smartphone RGB camera on the ground. RGB and Lab color spaces and discriminant analysis were used to develop the image processing algorithm. The proposed method achieved an accuracy of approximately 75%, although the system still has many flaws to be improved.
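The discriminant analysis over color features that this paper describes can be illustrated, in deliberately simplified form, as a minimum-distance classifier over per-class color means. The color triples below are made-up placeholders, not the paper's data, and a real implementation would use a full linear discriminant with pooled covariance:

```python
def fit_nearest_mean(samples):
    """samples: dict mapping class label -> list of color feature tuples.
    Returns the per-class mean feature vector."""
    means = {}
    for label, feats in samples.items():
        n = len(feats)
        means[label] = tuple(sum(f[i] for f in feats) / n
                             for i in range(len(feats[0])))
    return means

def classify(means, feat):
    """Assign feat to the class with the nearest mean (squared Euclidean)."""
    def d2(m):
        return sum((a - b) ** 2 for a, b in zip(m, feat))
    return min(means, key=lambda label: d2(means[label]))
```

Training on (hypothetical) green "healthy" and brown "infected" pixel colors, a query pixel is labeled by whichever class mean it lies closer to.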

Design of a GCS System Supporting Vision Control of Quadrotor Drones (쿼드로터드론의 영상기반 자율비행연구를 위한 지상제어시스템 설계)

  • Ahn, Heejune;Hoang, C. Anh;Do, T. Tuan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.10
    • /
    • pp.1247-1255
    • /
    • 2016
  • The safety and autonomous flight functions of micro UAVs, or drones, are crucial to their commercial application. The need to build one's own stable drone is still a non-trivial obstacle for researchers who want to focus on intelligence functions such as vision and navigation algorithms. This paper presents a GCS built from a commercial drone, off-the-shelf hardware platforms, and open-source software. The system follows a modular architecture and is currently composed of communication, UI, and image processing modules. In particular, a lane-keeping algorithm is designed and verified through testing at a sports stadium. The lane-keeping algorithm estimates the drone's position and heading in the lane using the Hough transform for line detection, a RANSAC vanishing-point algorithm for selecting the desired lines, and a tracking algorithm for line stability. The flight of the drone is controlled by 'forward', 'stop', 'clock-rotate', and 'counter-clock-rotate' commands. The implemented system can fly straight and mildly curved lanes at 2-3 m/s.
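The vanishing-point step in the pipeline above can be sketched in pure Python. This toy version intersects Hough lines given in slope-intercept form and averages the intersections; it omits the paper's RANSAC outlier rejection and line tracking, so it is only a sketch of the geometry involved:

```python
def intersect(l1, l2):
    """Intersection of two lines given as (m, b) for y = m*x + b.
    Returns None for parallel lines."""
    m1, b1 = l1
    m2, b2 = l2
    if m1 == m2:
        return None
    x = (b2 - b1) / (m1 - m2)
    return (x, m1 * x + b1)

def vanishing_point(lines):
    """Estimate the vanishing point as the mean pairwise intersection
    of the detected lane lines (no outlier rejection)."""
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            p = intersect(lines[i], lines[j])
            if p is not None:
                pts.append(p)
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

For lane lines that genuinely converge, all pairwise intersections coincide at the vanishing point; the drone's heading offset is then proportional to the vanishing point's horizontal distance from the image center.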

Machine learning based radar imaging algorithm for drone detection and classification (드론 탐지 및 분류를 위한 레이다 영상 기계학습 활용)

  • Moon, Min-Jung;Lee, Woo-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.5
    • /
    • pp.619-627
    • /
    • 2021
  • Recent advances in low-cost, light-weight drones have extended their application areas in both the military and private sectors. Accordingly, surveillance against unfriendly drones has become an important issue. Drone detection and classification techniques have long been emphasized in order to prevent attacks or accidents by commercial drones in urban areas. Most commercial drones have small sizes and low reflectivity, and hence typical sensors that use acoustic, infrared, or radar signals exhibit limited performance. Recently, artificial intelligence algorithms have been actively exploited to enhance radar image identification performance. In this paper, we adopt machine learning algorithms for high-resolution radar imaging in drone detection and classification applications. For this purpose, simulations are carried out against commercial drone models and compared with experimental data obtained through high-resolution radar field tests.

Study of Machine Learning based on EEG for the Control of Drone Flight (뇌파기반 드론제어를 위한 기계학습에 관한 연구)

  • Hong, Yejin;Cho, Seongmin;Cha, Dowan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.249-251
    • /
    • 2022
  • In this paper, we present machine learning to control drone flight using EEG signals. We defined takeoff, forward, backward, left movement, and right movement as control targets, and measured EEG signals from the frontal lobe at Fp1 and Fp2 with a two-channel dry electrode (NeuroNicle FX2) at a 250 Hz sampling rate. The collected data were band-pass filtered with 6-20 Hz cutoff frequencies. We measured the motor imagery of the action associated with each control target for 5.19 seconds. Using Matlab's Classification Learner on the measured EEG signals, a triple-layer neural network, a logistic regression kernel, and a nonlinear polynomial Support Vector Machine (SVM) were trained; the logistic regression kernel was confirmed to have the highest per-class True Positive Rate (TPR) for takeoff, forward, backward, left movement, and right movement of the drone.
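The 6-20 Hz band-pass step in this pipeline can be sketched crudely with a cascade of first-order smoothers: low-pass at the upper cutoff, then subtract a low-pass at the lower cutoff. The paper does not specify its filter design, and a real EEG system would use a proper IIR/FIR design (e.g. Butterworth), so this is only an illustration of the idea:

```python
import math

def bandpass(signal, fs, low_hz, high_hz):
    """Crude band-pass: keep content between low_hz and high_hz.
    signal: list of samples, fs: sampling rate in Hz."""
    def lowpass(x, fc):
        # single-pole low-pass with cutoff fc
        a = 1.0 / (1.0 + fs / (2 * math.pi * fc))
        y, out = 0.0, []
        for s in x:
            y += a * (s - y)
            out.append(y)
        return out

    smoothed = lowpass(signal, high_hz)   # remove content above high_hz
    baseline = lowpass(smoothed, low_hz)  # slow trend below low_hz
    return [a - b for a, b in zip(smoothed, baseline)]
```

A constant (0 Hz) input is rejected once the filter settles, since it falls below the 6 Hz lower cutoff, while components inside the 6-20 Hz band pass through attenuated but nonzero.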

Research of the Delivery Autonomy and Vision-based Landing Algorithm for Last-Mile Service using a UAV (무인기를 이용한 Last-Mile 서비스를 위한 배송 자동화 및 영상기반 착륙 알고리즘 연구)

  • Hanseob Lee;Hoon Jung
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.2
    • /
    • pp.160-167
    • /
    • 2023
  • This study focuses on the development of a Last-Mile delivery service that uses unmanned vehicles to deliver goods directly to the end consumer: drones perform autonomous delivery missions, and an image-based precision landing algorithm enables handoff to a robot in an intermediate facility. As the logistics market continues to grow rapidly, parcel volumes increase exponentially each year. However, due to low delivery fees, the workload of delivery personnel is increasing, resulting in a decrease in the quality of delivery services. To address this issue, the research team conducted a study on a Last-Mile delivery service using unmanned vehicles, and in this paper investigates the technologies necessary for drone-based goods transportation. The flight scenario begins with the drone carrying the goods from a pickup location to the rooftop of the building containing the final delivery destination. There is a handoff facility on the rooftop, and the drone must land accurately on a marker on the roof. The mission is complete once the goods are delivered and the drone returns to its original location. The research team developed a mission planning algorithm to perform this scenario automatically and constructed an algorithm that recognizes the marker through a camera sensor and achieves a precision landing. The performance of the developed system has been verified through multiple trial operations within ETRI.
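The precision-landing correction described above can be illustrated with a toy proportional controller that turns the marker's pixel offset from the image center into a horizontal velocity command. The gain value and the down-facing camera orientation are assumptions for illustration, not the paper's parameters:

```python
def landing_command(marker_px, image_size, gain=0.005):
    """Map the marker's pixel offset from the image center to a
    (v_forward, v_right) velocity command, assuming a down-facing camera.
    gain is in (m/s) per pixel of offset."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    err_x = marker_px[0] - cx  # + means marker is right of center
    err_y = marker_px[1] - cy  # + means marker is toward the image bottom (behind)
    return (-gain * err_y, gain * err_x)
```

When the marker sits at the image center the command is zero and the drone can descend; any offset produces a velocity toward the marker, re-centering it before the next descent step.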

Position Recognition and Indoor Autonomous Flight of a Small Quadcopter Using Distributed Image Matching (분산영상 매칭을 이용한 소형 쿼드콥터의 실내 비행 위치인식과 자율비행)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.23 no.2_2
    • /
    • pp.255-261
    • /
    • 2020
  • We consider the problem of autonomously flying a quadcopter in indoor environments. Navigation in indoor settings poses two major issues: first, real-time recognition of the markers captured by the camera; second, combining the distributed images to determine the position and orientation of the quadcopter in the indoor environment. We autonomously fly a miniature RC quadcopter in small known environments using an on-board camera as the only sensor. We use an algorithm that combines data-driven image classification with image-combining techniques on the images captured by the camera to achieve real-time 3D localization and navigation.
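The data-driven localization idea in this abstract can be sketched as a nearest-neighbor lookup from image descriptors to stored poses. The descriptors and poses below are invented placeholders; a real system would use learned or engineered image features rather than raw 3-vectors:

```python
def localize(database, query_desc):
    """database: list of (descriptor, pose) pairs, pose = (x, y, yaw).
    Return the stored pose whose descriptor is closest to the query
    under squared Euclidean distance."""
    def d2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    best = min(database, key=lambda entry: d2(entry[0], query_desc))
    return best[1]
```

A flight controller would then fuse successive matched poses (e.g. with a filter) rather than trusting a single lookup, but the lookup itself captures the "classify the image, recover the position" structure.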