• Title/Summary/Keyword: vehicle detection algorithm

501 search results

Optical Flow Based Collision Avoidance of Multi-Rotor UAVs in Urban Environments

  • Yoo, Dong-Wan;Won, Dae-Yeon;Tahk, Min-Jea
    • International Journal of Aeronautical and Space Sciences / v.12 no.3 / pp.252-259 / 2011
  • This paper is focused on dynamic modeling, control system design, and vision-based collision avoidance for multi-rotor unmanned aerial vehicles (UAVs). Multi-rotor UAVs are rotary-wing UAVs with multiple rotors. They can be utilized in various military situations, such as surveillance and reconnaissance, and for obtaining visual information from steep terrain or disaster sites. A quad-rotor model is introduced together with its control system, which is designed around a proportional-integral-derivative controller and a vision-based collision avoidance controller. For a UAV to navigate safely in cluttered areas such as buildings and offices, a collision avoidance algorithm must run on the UAV's hardware, covering obstacle detection, avoidance maneuvering, and so on. The optical flow method, one of the vision-based collision avoidance techniques, is introduced, and the multi-rotor UAV's collision avoidance is simulated in various virtual environments to demonstrate its avoidance performance.
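The balance strategy behind optical-flow avoidance can be sketched in a few lines: flow magnitudes grow as obstacles approach, so yawing away from the side with the larger summed flow keeps the vehicle centered between obstacles. The Python sketch below is illustrative only; the 1-D per-column flow array, the gain, and the normalization are assumptions, not details from the paper.

```python
def balance_yaw_command(flow_mag, gain=0.5):
    """Yaw-rate command from a 1-D list of per-column optical-flow magnitudes.

    A positive command means yaw right, i.e. away from larger left-side flow."""
    mid = len(flow_mag) // 2
    left = sum(flow_mag[:mid])
    right = sum(flow_mag[mid:])
    total = left + right
    if total == 0:
        return 0.0
    # Normalized left-right flow imbalance drives the yaw rate.
    return gain * (left - right) / total

# An obstacle close on the left produces large left-side flow -> yaw right.
cmd = balance_yaw_command([4.0, 3.0, 2.0, 1.0, 0.5, 0.5])
```

With balanced flow on both sides the command is zero, so the vehicle flies straight.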

De-blurring Algorithm for Performance Improvement of Searching a Moving Vehicle on Fisheye CCTV Image (어안렌즈사용 CCTV이미지에서 차량 정보 수집의 성능개선을 위한 디블러링 알고리즘)

  • Lee, In-Jung
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.4C / pp.408-414 / 2010
  • When collecting traffic information from CCTV images, a detection zone must be set up in the image area while the pan-tilt system is on duty. Automating the detection zone with a pan-tilt system is difficult because of mechanical error, so a camera fitted with a fisheye lens or a convex mirror is needed to capture wide-area images. This introduces problems of its own: reduced system speed and image distortion. The distortion is caused by the occlusion of angled rays, similar to a shaken snapshot from a digital camera. In this paper, we propose two de-blurring methods to overcome the distortion: image segmentation by a nonlinear diffusion equation, and deformation of selected segmented areas. With these de-blurring methods, the de-blurred image shows a 15 dB increase in PSNR, and the detection rate for collecting traffic information improves by more than 5% over the distorted images.
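Segmentation by a nonlinear diffusion equation, as mentioned above, typically builds on Perona-Malik diffusion, which smooths flat regions while the edge-stopping function preserves strong gradients. A minimal NumPy sketch of one explicit diffusion step follows; the kappa and dt values are illustrative, not the paper's settings.

```python
import numpy as np

def perona_malik_step(img, kappa=10.0, dt=0.2):
    """One explicit step of Perona-Malik nonlinear diffusion.

    The edge-stopping function g = exp(-(|grad I| / kappa)^2) lets diffusion
    act in flat regions while nearly stopping at strong edges."""
    # 4-connected neighbor differences with replicated borders.
    n = np.roll(img, -1, axis=0); n[-1, :] = img[-1, :]
    s = np.roll(img, 1, axis=0);  s[0, :] = img[0, :]
    e = np.roll(img, -1, axis=1); e[:, -1] = img[:, -1]
    w = np.roll(img, 1, axis=1);  w[:, 0] = img[:, 0]
    dn, ds, de, dw = n - img, s - img, e - img, w - img
    g = lambda d: np.exp(-(d / kappa) ** 2)
    # Explicit update; dt <= 0.25 keeps the 4-neighbor scheme stable.
    return img + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
```

Iterating this step produces the piecewise-smooth image that region grouping can then segment.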

Real time instruction classification system

  • Sang-Hoon Lee;Dong-Jin Kwon
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.212-220 / 2024
  • With the recent advancement of society, AI technology has made significant strides, especially in computer vision and voice recognition. This study introduces a system that leverages these technologies to recognize users through a camera and relay commands within a vehicle based on voice commands. The system uses the YOLO (You Only Look Once) machine learning algorithm, widely used for object recognition, to identify specific users. For voice command recognition, a machine learning model based on spectrogram analysis is employed to identify specific commands. This design aims to enhance security and convenience by preventing anyone other than registered users from accessing vehicles and IoT devices. Camera input is converted into YOLO inputs to determine whether a person is present. Voice data is collected through a microphone embedded in the device or computer and converted into spectrogram data to serve as input for the voice recognition model. Both the camera images and the voice data are run through pre-trained models, enabling the recognition of simple commands within a limited space based on the inference results. This study demonstrates the feasibility of a device management system for a confined space that enhances security and user convenience through a simple real-time model. Our work aims to provide practical solutions in application fields such as smart homes and autonomous vehicles.
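The spectrogram front end for voice commands can be sketched as a plain short-time FFT over windowed frames. The window length, hop size, and sampling rate below are illustrative choices, not the paper's configuration.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time FFT with a Hann window.

    Rows are time frames; columns are one-sided frequency bins."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

# A 440 Hz tone sampled at 8 kHz should peak near bin 440 / (8000/256) ~ 14.
t = np.arange(8000) / 8000.0
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

The resulting 2-D array is what a classifier would consume as its input feature map.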

Convergence CCTV camera embedded with Deep Learning SW technology (딥러닝 SW 기술을 이용한 임베디드형 융합 CCTV 카메라)

  • Son, Kyong-Sik;Kim, Jong-Won;Lim, Jae-Hyun
    • Journal of the Korea Convergence Society / v.10 no.1 / pp.103-113 / 2019
  • A license plate recognition camera is a dedicated device designed to acquire images of a target vehicle and recognize the letters and numbers on its license plate. It is mostly used as part of a system combined with a server and an image analysis module rather than on its own. However, building such a system is costly, because it requires a server for managing and analyzing the captured images and an image analysis module for extracting and recognizing the plate's numbers and characters. In this study, we develop an embedded convergent camera (edge-based) that extends the camera's function so that license plate recognition and the security CCTV function are both performed within the camera. The camera is equipped with a high-resolution 4K IP sensor for clear image acquisition and fast data transmission. It extracts the license plate area by applying YOLO, deep learning software for multi-object recognition based on an open-source neural network algorithm, and then detects the numbers and characters on the plate. We verified the detection and recognition accuracy and confirmed that this camera can successfully perform both the CCTV security function and the license plate recognition function.
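Extracting the plate region from a YOLO detection amounts to converting the network's normalized (center, size) box into pixel bounds before handing the crop to character recognition. A minimal sketch follows; the box format and the 4K frame size are assumptions for illustration, not details from the paper.

```python
def yolo_box_to_crop(box, img_w, img_h):
    """Convert a YOLO-style normalized (cx, cy, w, h) box to integer
    pixel bounds (x0, y0, x1, y1), clipped to the image frame."""
    cx, cy, w, h = box
    x0 = max(int((cx - w / 2) * img_w), 0)
    y0 = max(int((cy - h / 2) * img_h), 0)
    x1 = min(int((cx + w / 2) * img_w), img_w)
    y1 = min(int((cy + h / 2) * img_h), img_h)
    return x0, y0, x1, y1

# A plate detected at the frame center, a quarter of the 4K frame wide
# and an eighth of it tall.
crop = yolo_box_to_crop((0.5, 0.5, 0.25, 0.125), 3840, 2160)
```

The character recognizer then only ever sees the small crop, which is what keeps the whole pipeline light enough for an edge device.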

Research of the Face Extract Algorithm from Road Side Images Obtained by vehicle (차량에서 획득된 도로 주변 영상에서의 얼굴 추출 방안 연구)

  • Rhee, Soo-Ahm;Kim, Tae-Jung;Kim, Moon-Gie;Yun, Duk-Geun;Sung, Jung-Gon
    • Journal of Korean Society for Geospatial Information Science / v.16 no.1 / pp.49-55 / 2008
  • Face extraction is very important for providing images of roads and roadsides without privacy problems. For face extraction from roadside images, we detected skin-color areas using the HSI and YCrCb color models; using the two models together gave efficient skin-color detection. We used connectivity and intensity differences to group the skin-color regions, then applied shape conditions (aspect ratio, area, count, and an oval condition) to determine face candidate regions. Finally, we applied a threshold to each region and classified it as a face if the dark part exceeded 5% of the whole region. In the experiment, 28 of the 38 faces posing a privacy problem were extracted. Faces were missed because of shadows on the face and background objects, and objects with colors similar to skin were falsely extracted. Adjusting the thresholds is needed for improvement.
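The YCrCb skin-color test used above can be sketched as a per-pixel threshold on the chrominance channels. The RGB-to-YCrCb coefficients below are the standard conversion, but the Cr/Cb ranges are common illustrative values, not the paper's exact thresholds.

```python
import numpy as np

def skin_mask_ycrcb(img_rgb):
    """Boolean skin mask from YCrCb chrominance thresholds.

    The Cr/Cb ranges are illustrative defaults often quoted for skin
    detection, not tuned values from the paper."""
    r = img_rgb[..., 0].astype(float)
    g = img_rgb[..., 1].astype(float)
    b = img_rgb[..., 2].astype(float)
    cr = 128 + 0.5 * r - 0.419 * g - 0.081 * b
    cb = 128 - 0.169 * r - 0.331 * g + 0.5 * b
    return (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)

# A typical skin tone passes; saturated green foliage does not.
is_skin = skin_mask_ycrcb(np.array([[[200, 140, 110]]], dtype=float))[0, 0]
is_green_skin = skin_mask_ycrcb(np.array([[[0, 255, 0]]], dtype=float))[0, 0]
```

Connected skin pixels would then be grouped and filtered by the shape conditions the abstract lists.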

Case Study: Cost-effective Weed Patch Detection by Multi-Spectral Camera Mounted on Unmanned Aerial Vehicle in the Buckwheat Field

  • Kim, Dong-Wook;Kim, Yoonha;Kim, Kyung-Hwan;Kim, Hak-Jin;Chung, Yong Suk
    • KOREAN JOURNAL OF CROP SCIENCE / v.64 no.2 / pp.159-164 / 2019
  • Weed control is a crucial practice not only in organic farming but also in modern agriculture, because weeds can reduce crop yield. In general, weeds are distributed heterogeneously in patches across the field, and these patches vary in size, shape, and density. It is therefore more efficient to spray chemicals on these patches than to spray the field uniformly, which pollutes the environment and is cost prohibitive; in this sense, weed detection benefits sustainable agriculture. Studies have detected weed patches in the field using remote sensing technologies, which can be classified into methods using image segmentation based on morphology and methods using vegetative indices based on the wavelength of light. In this study, the latter approach was used to detect the weed patches. The vegetative indices proved easier to operate, as they need no sophisticated algorithm to differentiate weeds from crop and soil, unlike the morphological method. Consequently, we demonstrated that the vegetative index method is accurate enough to detect weed patches and will help farmers control weeds more precisely with minimal use of chemicals.
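A typical vegetative index of the kind used here is NDVI, computed per pixel from near-infrared and red reflectance; thresholding it separates vegetation from bare soil, after which vegetated patches outside the crop rows can be flagged as weeds. The reflectance values below are illustrative, not measurements from the study.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance.

    Ranges from -1 to 1; healthy vegetation scores high because leaves
    reflect strongly in NIR and absorb red."""
    return (nir - red) / (nir + red)

# Dense canopy reflects strongly in NIR; bare soil does not.
veg = ndvi(0.50, 0.08)   # high NDVI -> vegetation
soil = ndvi(0.25, 0.20)  # near zero -> bare soil
```

A single threshold between the two values is enough to build a vegetation mask from a multi-spectral image.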

Semantic Object Detection based on LiDAR Distance-based Clustering Techniques for Lightweight Embedded Processors (경량형 임베디드 프로세서를 위한 라이다 거리 기반 클러스터링 기법을 활용한 의미론적 물체 인식)

  • Jung, Dongkyu;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.10 / pp.1453-1461 / 2022
  • The accuracy of peripheral object recognition algorithms that use 3D sensors such as LiDAR in autonomous vehicles has improved through many studies, but these algorithms require high-performance hardware and complex structures. Such an algorithm places a large load on the main processor of an autonomous vehicle, which must run and manage many processes while driving. To reduce this load while still exploiting the advantages of 3D sensor data, we propose 2D-data-based recognition using ROIs generated by extracting physical properties from the 3D sensor data. In an environment where the brightness of the base image was reduced by 50%, the proposed method showed 5.3% higher accuracy and 28.57% shorter processing time than the existing 2D-based model. On the base image, it trades 2.46% lower accuracy relative to the 3D-based model for a 6.25% reduction in processing time.
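The distance-based clustering in the title can be sketched as a greedy single-link pass over 2-D LiDAR returns: any point within a distance threshold of a cluster member joins that cluster, and each resulting cluster's bounding box becomes an ROI for the 2D recognizer. The eps value and the O(n²) scan are illustrative simplifications, not the paper's implementation.

```python
def distance_cluster(points, eps=0.5):
    """Greedy single-link clustering of (x, y) points.

    Points closer than eps to any member of a cluster join that cluster.
    Returns one integer label per point."""
    labels = [-1] * len(points)
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        stack = [i]
        while stack:
            j = stack.pop()
            xj, yj = points[j]
            for k, (xk, yk) in enumerate(points):
                # Squared distance avoids a sqrt in the inner loop.
                if labels[k] == -1 and (xj - xk) ** 2 + (yj - yk) ** 2 <= eps ** 2:
                    labels[k] = next_label
                    stack.append(k)
        next_label += 1
    return labels

# Two well-separated point groups yield two clusters.
labels = distance_cluster([(0, 0), (0.3, 0), (5, 5), (5.2, 5.1)])
```

On an embedded processor a spatial grid would replace the inner scan, but the labeling logic stays the same.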

VERTICAL OZONE DENSITY PROFILING BY UV RADIOMETER ONBOARD KSR-III

  • Hwang Seung-Hyun;Kim Jhoon;Lee Soo-Jin;Kim Kwang-Soo;Ji Ki-Man;Shin Myung-Ho;Chung Eui-Seung
    • Bulletin of the Korean Space Science Society / 2004.10b / pp.372-375 / 2004
  • The UV radiometer payload was launched successfully aboard KSR-III from the west coast of the Korean Peninsula on 28 Nov 2002. KSR-III was Korea's third-generation sounding rocket, developed with a liquid propulsion engine system as an intermediate step toward a larger space launch vehicle. The UV radiometer consists of UV- and visible-band optical phototubes that measure direct solar attenuation during the rocket's ascent. For UV detection, four sensor channels were installed in the electronics payload section, with center wavelengths of 255, 290, and 310 nm; the 450 nm channel was used as a reference to correct for the rocket's attitude during flight. The transmission characteristics of all channels were calibrated precisely before the flight test at the optical laboratory of KARI (Korea Aerospace Research Institute). During the 231 s flight, the onboard data were telemetered to the ground station in real time, and the ozone column density was calculated from the raw telemetry. From the calculated column density and the sensor calibration data, the vertical ozone profile over the Korean Peninsula was obtained. Our results agree reasonably with various observations, including ground Umkehr measurements at the Yonsei site, ozonesonde data at the Pohang site, and the HALOE and POAM satellite measurements. A sensitivity analysis of the retrieval algorithm's parameters was performed, identifying the algorithm's significant error sources.
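Retrieving a column density from direct-sun attenuation rests on the Beer-Lambert law, I = I0·exp(-σN), inverted for N. The sketch below shows only that inversion; the cross-section is an order-of-magnitude illustration for the ozone Hartley band, not calibration data from the flight.

```python
import math

def ozone_column(i_measured, i_top, cross_section):
    """Slant ozone column (molecules/cm^2) from Beer-Lambert attenuation.

    I = I0 * exp(-sigma * N)  =>  N = ln(I0 / I) / sigma."""
    return math.log(i_top / i_measured) / cross_section

# Half the top-of-atmosphere intensity surviving, with an illustrative
# Hartley-band cross section on the order of 1e-18 cm^2.
n = ozone_column(0.5, 1.0, 1e-18)
```

Repeating this at successive altitudes during ascent, then differentiating the column with height, yields the vertical profile.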

A study on the imputation solution for missing speed data on UTIS by using adaptive k-NN algorithm (적응형 k-NN 기법을 이용한 UTIS 속도정보 결측값 보정처리에 관한 연구)

  • Kim, Eun-Jeong;Bae, Gwang-Soo;Ahn, Gye-Hyeong;Ki, Yong-Kul;Ahn, Yong-Ju
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.13 no.3 / pp.66-77 / 2014
  • UTIS (Urban Traffic Information System) collects link travel times directly in urban areas using probe vehicles, so it can estimate link travel speed more accurately than other traffic detection systems. However, UTIS data include missing values caused by a lack of probe vehicles and RSEs on the road network, system failures, and other factors. In this study, we propose a new model, based on the k-NN algorithm, for imputing missing data to provide more accurate travel time information. The new model is an adaptive k-NN that flexibly adjusts the number of nearest neighbors (NN) depending on the distribution of candidate objects. The evaluation indicates that the new model successfully imputes missing speed data and significantly reduces the imputation error compared with other models (ARIMA, etc.). We plan to apply the new imputation model at the UTIS Central Traffic Information Center to improve the traffic information service.
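An adaptive k-NN imputation of this kind can be sketched as follows: only observations within a time radius qualify as candidates, so the effective neighbor count shrinks automatically when probe data are sparse. The radius, k_max, and inverse-distance weighting below are illustrative choices, not the paper's exact design.

```python
def adaptive_knn_impute(target_time, observed, k_max=5, radius=15.0):
    """Impute a missing speed at target_time from (time, speed) observations.

    The neighbor count adapts: only observations within `radius` minutes
    are candidates, and at most the k_max nearest are averaged."""
    candidates = sorted(
        (abs(t - target_time), v)
        for t, v in observed
        if abs(t - target_time) <= radius
    )
    if not candidates:
        return None  # nothing close enough to impute from
    neighbors = candidates[:k_max]
    # Inverse-distance weighting; a tiny epsilon avoids division by zero.
    weights = [1.0 / (d + 1e-6) for d, _ in neighbors]
    return sum(w * v for w, (_, v) in zip(weights, neighbors)) / sum(weights)

# Speeds observed at minutes 0, 5, 10, and 60; impute minute 7.
obs = [(0, 40.0), (5, 42.0), (10, 44.0), (60, 80.0)]
speed = adaptive_knn_impute(7, obs)
```

The distant observation at minute 60 is excluded by the radius, so the imputed value stays inside the local speed range.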

Road Surface Damage Detection based on Object Recognition using Fast R-CNN (Fast R-CNN을 이용한 객체 인식 기반의 도로 노면 파손 탐지 기법)

  • Shim, Seungbo;Chun, Chanjun;Ryu, Seung-Ki
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.18 no.2 / pp.104-113 / 2019
  • Road management institutes spend heavily on repairing road surface damage. Such damage is inevitable due to natural factors and aging, but maintenance technologies are needed for efficient repair of broken roads, and various technologies have been developed and applied to meet this demand. Recently, maintenance technology for road surface damage repair has been developed using image information collected by black-box cameras installed in vehicles. There are various methods for extracting the damaged region; here we discuss the deep-neural-network image recognition approaches that have been actively studied recently. In this paper, we introduce a new neural network that estimates road damage and its location in the image using a region-based convolutional neural network algorithm. To develop the algorithm, about 600 images were collected through actual driving; the network was then trained and compared with the existing model, achieving 10.67% accuracy.
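Region-based detectors like the one above are scored against ground truth with intersection-over-union (IoU): a predicted box counts as a correct detection when its IoU with a labeled damage region exceeds a threshold. A minimal sketch of the metric:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned (x0, y0, x1, y1) boxes,
    the standard overlap score for judging region-based detections."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap at all.
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A detection shifted by half its width overlaps its ground truth 1/3.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Sweeping the confidence threshold and counting matches at, say, IoU >= 0.5 is how accuracy figures like the one above are typically computed.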