• Title/Abstract/Keyword: Detection accuracy

3,981 search results (processing time: 0.032 s)

Optical Flow Measurement Based on Boolean Edge Detection and Hough Transform

  • Chang, Min-Hyuk;Kim, Il-Jung;Park, Jong an
    • International Journal of Control, Automation, and Systems / Vol. 1, No. 1 / pp.119-126 / 2003
  • The problem of tracking moving objects in a video stream is discussed in this paper. We review the popular optical flow technique for moving object detection. Optical flow finds the velocity vector at each pixel of the video scene; however, optical-flow-based methods require complex computations and are sensitive to noise. In this paper, we propose a new method based on the Combinatorial Hough Transform (CHT) and voting accumulation that improves accuracy and reduces computation time. Further, we apply a Boolean-based edge detector. Edge detection and segmentation are used to extract the moving objects in the image sequences and to reduce the computation time of the CHT. The Boolean-based edge detector provides accurate and very thin edges, and the difference of two such edge maps gives better localization of moving objects. The simulation results show that the proposed method improves the accuracy of the computed optical flow vectors and extracts moving-object information more accurately. Edge detection and segmentation accurately find the locations and areas of the real moving objects, so extracting motion information is easy and accurate; the CHT- and voting-accumulation-based optical flow measures the flow vectors accurately, and the direction of moving objects is also measured accurately.
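One step the abstract describes, differencing two thin binary edge maps to localize a moving object, can be sketched as follows. This is an illustrative assumption, not the authors' implementation; the function names and list-of-lists image format are invented for the sketch.

```python
# Sketch (assumed, not the paper's code): XOR two binary edge maps so that
# pixels where the edges changed between frames mark the moving object, then
# take a bounding box over the changed pixels to localize it.

def edge_difference(edges_prev, edges_curr):
    """Per-pixel XOR of two binary edge maps (lists of 0/1 rows)."""
    return [[a ^ b for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(edges_prev, edges_curr)]

def bounding_box(diff):
    """(top, left, bottom, right) of all set pixels, or None if none are set."""
    pts = [(r, c) for r, row in enumerate(diff)
           for c, v in enumerate(row) if v]
    if not pts:
        return None
    rows, cols = [p[0] for p in pts], [p[1] for p in pts]
    return min(rows), min(cols), max(rows), max(cols)
```

An edge that moves one pixel to the right leaves set pixels at both its old and new positions, so the box brackets the motion.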

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu;Chung, TaeChoong;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 8 / pp.3981-4004 / 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring the actual distances between them by automatic detection and perspective transformation. The dimensions of the real space are known in advance. First, the corner points of the region of interest are detected from an image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to the real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to their real-world locations. Removing distortion from the image using camera calibration improves the accuracy in most cases. The deep learning framework Darknet is used for detection, with the necessary modifications to integrate perspective transformation, camera calibration, un-distortion, etc. Experiments are performed with two types of cameras, one with barrel and the other with pincushion distortion. The results show that the difference between the calculated distances and those measured in real space with measuring tapes is very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any camera that has a fixed pose or is in motion, and using more points significantly enhances the accuracy of real-world mapping even without camera calibration. Perspective transformation also increases object detection efficiency by normalizing the apparent sizes of all objects.
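The geometric core of this abstract, mapping image points through a four-point perspective (homography) transform into a real-space frame and measuring distances there, can be sketched as below. This is an illustrative stand-in for the OpenCV-style warp-perspective step; the function names and the plain Gaussian-elimination solver are assumptions made to keep the sketch self-contained.

```python
# Hedged sketch: fit a 3x3 homography from four corner correspondences
# (image corners -> known real-space corners), then project object points
# into real-space coordinates and measure distances between them.

import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """3x3 homography mapping the 4 src points onto the 4 dst points (h33 = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def project(H, pt):
    """Apply the homography to an image point, returning real-space coords."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def real_distance(H, p, q):
    """Distance between two image points after mapping to real space."""
    (x1, y1), (x2, y2) = project(H, p), project(H, q)
    return math.hypot(x2 - x1, y2 - y1)
```

With the four ROI corners detected automatically, `homography` plays the role of the warp-perspective setup and `real_distance` the role of the tape measure.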

도로의 높낮이 변화와 초목이 존재하는 환경에서의 비전 센서 기반 (Vision-sensor-based Drivable Area Detection Technique for Environments with Changes in Road Elevation and Vegetation)

  • 이상재;현종길;권연수;심재훈;문병인
    • 센서학회지 / Vol. 28, No. 2 / pp.94-100 / 2019
  • Drivable area detection is a major task in advanced driver assistance systems, and several studies have proposed vision-sensor-based approaches to it. However, conventional drivable area detection methods that use vision sensors are not suitable for environments with changes in road elevation; in addition, when the boundary between the road and vegetation is unclear, a vegetation area may wrongly be judged drivable. Therefore, this study proposes an accurate method of detecting drivable areas in environments in which the road elevation changes and vegetation exists. Experimental results show that, compared to the conventional method, the proposed method improves the average accuracy and recall of drivable area detection on the KITTI vision benchmark suite by 3.42%p and 8.37%p, respectively. In addition, when the proposed vegetation area removal method is applied, the average accuracy and recall are further improved by 6.43%p and 9.68%p, respectively.

3차원 포인트 클라우드 데이터를 활용한 객체 탐지 기법인 PointNet과 RandLA-Net (PointNet and RandLA-Net Algorithms for Object Detection Using 3D Point Clouds)

  • 이동건;지승환;박본영
    • 대한조선학회논문집 / Vol. 59, No. 5 / pp.330-337 / 2022
  • Research on object detection algorithms using 2D data has already progressed to the level of commercialization and is being applied in various manufacturing industries. Although object detection using 2D data has the advantage of efficiency, there are technical limitations to accurate data generation and analysis: because 2D data has only two axes and no sense of depth, ambiguity arises when it is approached from a practical point of view. Advanced countries such as the United States are leading 3D data collection and research using 3D laser scanners. Existing processing and detection algorithms such as ICP and RANSAC show high accuracy but suffer from slow processing of large-scale point cloud data. In this study, PointNet, a representative technique for detecting objects in widely used 3D point cloud data, is analyzed and described, and RandLA-Net, which overcomes PointNet's limitations in performance and object prediction accuracy, is then described, providing a review of detection technology using point cloud data.

9축센서 기반의 도로시설물 충돌감지 알고리즘 (Collision Detection Algorithm using a 9-axis Sensor in Road Facility)

  • 홍기현;이병문
    • 한국멀티미디어학회논문지 / Vol. 25, No. 2 / pp.297-310 / 2022
  • Road facilities such as CCTV poles carry a potential risk of collision accidents with cars. A collision detection algorithm installed in the facility allows a collision accident to be known remotely. Most collision detection algorithms focus simply on whether a collision has occurred, because they measure only acceleration data from a 3-axis sensor. However, detailed information such as sensor malfunction, collision direction, and collision strength then remains unknown without witnessing the accident. Therefore, in this paper we propose an enhanced detection algorithm that obtains the collision direction and the collision strength from the tilt of the facility after the accident, using a 9-axis sensor. To confirm the performance of the algorithm, an accuracy evaluation was conducted over the data measurement cycle and the invocation cycle of the detection algorithm. As a result, the proposed algorithm achieved 100% accuracy for 50 weak collisions and 50 strong collisions at a 9-axis data measurement cycle of 10 ms and an invocation cycle of 1,000 ms. In conclusion, the proposed algorithm is expected to provide more reliable and detailed information than existing algorithms.
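A minimal sketch of the kind of logic this abstract describes: an impact trigger on the 3-axis acceleration magnitude, plus a direction and a weak/strong label inferred from the post-impact tilt. The thresholds, the direction convention, and the function names are illustrative assumptions, not the paper's values.

```python
# Illustrative sketch (not the paper's code): detect a collision from
# acceleration magnitude, then derive direction and weak/strong strength
# from the facility's tilt after impact.

import math

G = 9.81                       # gravity, m/s^2
ACCEL_THRESHOLD = 3.0 * G      # impact trigger (assumed value)
STRONG_TILT_DEG = 15.0         # tilt separating weak from strong (assumed)

def detect_collision(ax, ay, az):
    """True if the 3-axis acceleration magnitude exceeds the trigger."""
    return math.sqrt(ax * ax + ay * ay + az * az) > ACCEL_THRESHOLD

def tilt_after_impact(roll_deg, pitch_deg):
    """Collision direction (bearing of the lean, degrees) and strength label
    from the post-impact roll/pitch reported by the 9-axis sensor fusion."""
    tilt = math.hypot(roll_deg, pitch_deg)
    direction = math.degrees(math.atan2(roll_deg, pitch_deg)) % 360
    strength = "strong" if tilt >= STRONG_TILT_DEG else "weak"
    return direction, strength
```

In this sketch the 9-axis sensor's extra gyroscope/magnetometer channels matter only insofar as they yield a stable roll/pitch estimate after the impact.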

Traffic Accident Detection Based on Ego Motion and Object Tracking

  • Kim, Da-Seul;Son, Hyeon-Cheol;Si, Jong-Wook;Kim, Sung-Young
    • 한국정보기술학회 영문논문지 / Vol. 10, No. 1 / pp.15-23 / 2020
  • In this paper, we propose a new method to detect traffic accidents in video from vehicle-mounted cameras (vehicle black boxes). We use the distance between vehicles to determine whether an accident has occurred, and we calculate the position of each vehicle with object detection and tracking. However, in a crowded road environment it is difficult to decide whether an accident has occurred because of vehicles parked at the edge of the road: moving and stopped vehicles are mixed on a typical downtown road, so it is not easy to discriminate accidents from non-accidents. In this paper, we increase the accuracy of vehicle accident detection by using not only the motion of the surrounding vehicles but also the ego-motion as input to a Recurrent Neural Network (RNN), improving the accuracy of accident detection compared to the previous method.
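The inter-vehicle distance cue described above can be sketched as follows; the ego-motion compensation scheme and all names are illustrative assumptions, and the RNN classifier that consumes these features is omitted.

```python
# Hedged sketch: put tracked vehicle positions into a fixed frame by
# accumulating ego-motion, then compute the smallest inter-vehicle gap
# over a trajectory as one input feature for an accident classifier.

import math

def compensate_ego(track, ego_displacements):
    """Add accumulated ego displacement so camera-frame positions become
    positions in a (roughly) fixed world frame."""
    out, ex, ey = [], 0.0, 0.0
    for (x, y), (dx, dy) in zip(track, ego_displacements):
        ex += dx
        ey += dy
        out.append((x + ex, y + ey))
    return out

def min_gap(track_a, track_b):
    """Smallest distance between two vehicles over paired trajectory steps."""
    return min(math.hypot(xa - xb, ya - yb)
               for (xa, ya), (xb, yb) in zip(track_a, track_b))
```

A parked car compensated into the fixed frame stays put while the ego frame moves past it, which is exactly the distinction the abstract says raw camera-frame motion blurs.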

Automatic Detection of Dead Trees Based on Lightweight YOLOv4 and UAV Imagery

  • Yuanhang Jin;Maolin Xu;Jiayuan Zheng
    • Journal of Information Processing Systems / Vol. 19, No. 5 / pp.614-630 / 2023
  • Dead trees significantly impact forest production and the ecological environment and constrain the sustainable development of forests. A lightweight YOLOv4 dead-tree detection algorithm based on unmanned aerial vehicle images is proposed to address the limitations of current dead-tree detection, which relies mainly on inefficient, unsafe, and easy-to-miss manual inspection. An improved logarithmic transformation method was developed in data pre-processing to reveal tree features in shadows. In the model structure, the original CSPDarkNet-53 backbone feature extraction network was replaced by MobileNetV3, and some of the standard convolutional blocks in the original extraction network were replaced by depthwise separable convolution blocks. The ReLU6 activation function replaced the original LeakyReLU activation function to make the network more robust to low-precision computation. The K-means++ clustering method was also integrated to generate anchor boxes better suited to the dataset. The experimental results show that the improved algorithm achieved an accuracy of 97.33%, higher than other methods, and a detection speed higher than that of YOLOv4, improving the efficiency and accuracy of the detection process.
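The anchor-generation step named in the abstract can be sketched with K-means++-style seeding over (width, height) pairs under the 1 - IoU distance commonly used for YOLO anchor clustering. The dataset boxes, the deterministic first seed, and the iteration count are assumptions for illustration.

```python
# Hedged sketch: cluster ground-truth box sizes into k anchor boxes.
# Distance is 1 - IoU of boxes aligned at a common origin, as is usual
# for YOLO anchors; seeding follows the K-means++ D^2 rule, except the
# first center is fixed to boxes[0] for determinism.

import random

def iou_wh(a, b):
    """IoU of two boxes aligned at a common corner, given as (w, h)."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeanspp_anchors(boxes, k, iters=10, seed=0):
    rng = random.Random(seed)
    centers = [boxes[0]]
    while len(centers) < k:          # K-means++ seeding: sample prop. to D^2
        d2 = [min((1 - iou_wh(b, c)) ** 2 for c in centers) for b in boxes]
        r, acc = rng.random() * sum(d2), 0.0
        for b, w in zip(boxes, d2):
            acc += w
            if acc >= r:
                centers.append(b)
                break
    for _ in range(iters):           # Lloyd refinement under 1 - IoU
        groups = [[] for _ in centers]
        for b in boxes:
            i = min(range(k), key=lambda i: 1 - iou_wh(b, centers[i]))
            groups[i].append(b)
        centers = [
            (sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return sorted(centers)
```

Seeding by squared distance pushes the initial centers toward box sizes unlike those already chosen, which is why K-means++ tends to cover small and large objects with fewer wasted anchors than plain K-means.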

Automatic Detection of Congestive Heart Failure and Atrial Fibrillation with Short RR Interval Time Series

  • Yoon, Kwon-Ha;Nam, Yunyoung;Thap, Tharoeun;Jeong, Changwon;Kim, Nam Ho;Ko, Joem Seok;Noh, Se-Eung;Lee, Jinseok
    • Journal of Electrical Engineering and Technology / Vol. 12, No. 1 / pp.346-355 / 2017
  • Atrial fibrillation (AF) and congestive heart failure (CHF) are increasingly widespread, costly, and deadly diseases associated with significant morbidity and mortality. In this study, we analyzed three statistical methods for automatic detection of AF and CHF based on the randomness, variability, and complexity of the heart-beat interval, i.e., the RR interval (RRI) time series. Specifically, we used short RRI time series of 16 beats and employed the normalized root mean square of successive RR differences (RMSSD), the sample entropy, and the Shannon entropy. The detection performance was analyzed using four large, well-documented databases: the MIT-BIH Atrial Fibrillation (n=23), MIT-BIH Normal Sinus Rhythm (n=18), BIDMC Congestive Heart Failure (n=13), and Congestive Heart Failure RRI (n=25) databases. Using thresholds obtained from Receiver Operating Characteristic (ROC) curves, we found that the normalized RMSSD provided the highest accuracy. The overall sensitivity, specificity, and accuracy for AF and CHF were 0.8649, 0.9331, and 0.9104, respectively. Regarding CHF detection, the detection rate for CHF (NYHA III-IV) was 0.9113 while that for CHF (NYHA I-II) was 0.7312, showing that CHF of higher severity is detected at a higher rate than CHF of lower severity. For the clinical 24-hour data (n=42), the overall sensitivity, specificity, and accuracy for AF and CHF were 0.8809, 0.9406, and 0.9108, respectively, using the normalized RMSSD.
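The normalized RMSSD feature on a 16-beat RRI window can be sketched as below. The decision rule and its threshold are toy assumptions for illustration (the paper derives its thresholds from ROC analysis), but the feature itself is the standard definition.

```python
# Minimal sketch (assumed implementation): normalized RMSSD on a short
# RR-interval series, plus a toy threshold rule in the spirit of the
# abstract's irregular-rhythm detection.

import math

def normalized_rmssd(rri):
    """RMSSD of successive RR differences, normalized by the mean RR interval."""
    diffs = [b - a for a, b in zip(rri, rri[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return rmssd / (sum(rri) / len(rri))

def classify(rri, threshold=0.1):
    """Toy rule: high beat-to-beat variability suggests an irregular rhythm
    such as AF; the threshold here is assumed, not the paper's ROC value."""
    return "irregular" if normalized_rmssd(rri) > threshold else "regular"
```

Normalizing by the mean interval makes the feature insensitive to overall heart rate, so a fast but steady rhythm scores low while an erratic one scores high.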

영상분류에 의한 하우스재배지 탐지 활용성 분석 (Analyzing the Applicability of Greenhouse Detection Using Image Classification)

  • 성증수;이성순;백승희
    • 한국측량학회지 / Vol. 30, No. 4 / pp.397-404 / 2012
  • In the Jeju region, where agriculture and tourism are the main industries, conversion from open-field to greenhouse cultivation is actively underway to increase income, so the status of greenhouse cultivation areas must be monitored continuously. This study therefore sought to present an effective image classification method for greenhouse detection using high-resolution satellite imagery. Greenhouse areas were classified by applying supervised classification and rule-based classification to Formosat-2 satellite imagery, and the two results were combined to explore ways of improving detection accuracy. The accuracy of each classification method was computed by comparison with visual interpretation results. The study found that, among the supervised classification methods, the Mahalanobis distance method gave the best detection results, and that combining the supervised and rule-based classification results improved detection accuracy. If further research is conducted on the procedure for combining supervised and rule-based classification results, efficient detection of greenhouse cultivation areas is expected to become possible.
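A minimal sketch of a Mahalanobis-distance classifier of the kind the study found most accurate for supervised classification. The two-band setup, the diagonal covariance, and all class statistics are simplifying assumptions for illustration.

```python
# Hedged sketch: assign each pixel (a vector of band values) to the class
# whose training statistics give the smallest Mahalanobis distance.
# Diagonal covariance keeps the sketch short; the full method uses the
# inverse covariance matrix of each class.

def mahalanobis_sq(x, mean, var):
    """Squared Mahalanobis distance under a diagonal covariance (variances)."""
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var))

def classify_pixel(x, classes):
    """classes maps name -> (mean vector, variance vector); pick the nearest."""
    return min(classes, key=lambda name: mahalanobis_sq(x, *classes[name]))
```

Unlike minimum-distance classification, dividing by each class's variance means a spectrally tight class (such as bright greenhouse roofs) claims only pixels very close to its mean.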

가상 데이터를 활용한 번호판 문자 인식 및 차종 인식 시스템 제안 (Proposal for License Plate Recognition Using Synthetic Data and Vehicle Type Recognition System)

  • 이승주;박구만
    • 방송공학회논문지 / Vol. 25, No. 5 / pp.776-788 / 2020
  • This paper proposes a vehicle type recognition and license plate character recognition system using deep learning. Existing systems extracted the license plate region through image processing and recognized its characters with a DNN, but their recognition rate drops when the environment changes. The proposed system therefore focuses on real-time detection and on the accuracy drop under environmental change: it uses YOLO v3, a 1-stage object detection method, and can recognize vehicle types and license plate characters in real time with a single RGB camera. For training data, real data were used for vehicle type recognition and license plate region detection, while only synthetic data were used for license plate character recognition. The per-module accuracy was 96.39% for vehicle type detection, 99.94% for license plate detection, and 79.06% for license plate character recognition. In addition, accuracy was also measured using YOLO v3 tiny, a lightweight version of the YOLO v3 network.