• Title/Summary/Keyword: object detection

Search results: 2,430

U2Net-based Single-pixel Imaging Salient Object Detection

  • Zhang, Leihong;Shen, Zimin;Lin, Weihong;Zhang, Dawei
    • Current Optics and Photonics
    • /
    • v.6 no.5
    • /
    • pp.463-472
    • /
    • 2022
  • At certain wavelengths, single-pixel imaging is considered a solution that achieves high-quality imaging at reduced cost. However, imaging complex scenes is overhead-intensive for single-pixel systems, so low efficiency and high consumption are the biggest obstacles to their practical application. Improving efficiency to reduce this overhead addresses the problem. Salient object detection is commonly used as a pre-processing step in computer vision tasks, mimicking human attention in complex natural scenes to reduce overhead and improve efficiency by focusing on information-rich regions. In this paper, we therefore explore salient object detection built on single-pixel imaging, and propose a scheme that reconstructs images from Fourier bases and applies the U2Net model for salient object detection.
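The Fourier-basis reconstruction the abstract mentions can be sketched in a few lines: each "single-pixel" measurement is the inner product of the scene with a shifted sinusoidal pattern, and four phase-shifted measurements per spatial frequency yield one complex Fourier coefficient. This is a minimal numpy simulation under the standard four-step phase-shifting scheme, not the paper's implementation; the function name and pattern parameterization are illustrative assumptions.

```python
import numpy as np

def fourier_single_pixel(scene, freqs):
    """Simulate four-step phase-shifting Fourier single-pixel imaging.
    scene: 2-D array (the object); freqs: list of (u, v) spatial frequencies.
    Returns the image reconstructed from the measured spectrum samples."""
    h, w = scene.shape
    y, x = np.mgrid[0:h, 0:w]
    spectrum = np.zeros((h, w), dtype=complex)
    for u, v in freqs:
        phase = 2 * np.pi * (u * x / w + v * y / h)
        # one bucket-detector measurement per phase-shifted pattern
        I = [np.sum(scene * (0.5 + 0.5 * np.cos(phase + k * np.pi / 2)))
             for k in range(4)]
        # (I0 - I2) + j(I1 - I3) recovers the DFT coefficient at (u, v)
        spectrum[v, u] = (I[0] - I[2]) + 1j * (I[1] - I[3])
    return np.real(np.fft.ifft2(spectrum))
```

Sampling only low frequencies is what makes the method cheap in practice; sampling the full grid reconstructs the scene exactly.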

A Fire Detection System based on YOLOv5 using Web Camera (웹카메라를 이용한 YOLOv5 기반 화재 감지 시스템)

  • Park, Dae-heum;Jang, Si-woong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.69-71
    • /
    • 2022
  • Today, the AI market is very large due to advances in AI, and among its most mature applications is image detection; accordingly, many object detection models build on YOLOv5. However, most object detection work focuses on stereotyped objects. To recognize unstructured data such as fire, the object can be recognized by learning and filtering. Therefore, in this paper, a fire monitoring system using YOLOv5 was designed to detect and analyze fires in unstructured data, and ways to improve the fire object detection model are suggested.

  • PDF

RFID Tag Detection on a Water Content Using a Back-propagation Learning Machine

  • Jo, Min-Ho;Lim, Chang-Gyoon;Zimmers, Emory W.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.1 no.1
    • /
    • pp.19-31
    • /
    • 2007
  • An RFID tag is detected by an RFID antenna, and the information on the detected tag is read by an RFID reader. Tag detection by the reader is very important at the deployment stage and is influenced by factors such as the tag's orientation on the target object, the speed of the conveyor moving the object, and the object's contents. Water content absorbs radio waves at high frequencies, typically around 900 MHz, resulting in unstable tag signal power. Currently, finding the best conditions for the factors influencing detection requires very time-consuming work at deployment, so a quick and simple tag detection scheme is needed to replace the trial-and-error experimental method. This paper proposes a back-propagation learning-based RFID tag detection prediction scheme, which is intelligent, easy to use, and saves time and cost. Simulation results demonstrate high prediction accuracy for tag detection on objects with water content, comparable with the current method while saving time and cost.
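The back-propagation predictor described above can be sketched as a small two-layer network mapping deployment factors (for example water content, conveyor speed, tag angle) to a detection probability. Everything here, the layer size, learning rate, epoch count, and the `train_tag_detector` name, is an illustrative assumption rather than the paper's configuration.

```python
import numpy as np

def train_tag_detector(X, y, hidden=8, lr=0.5, epochs=2000, seed=0):
    """Back-propagation sketch: a tiny two-layer sigmoid network predicting
    tag detection (1) vs. miss (0) from deployment-factor vectors."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1)                      # forward pass
        p = sig(h @ W2)
        err = p - y                          # dL/dlogit for cross-entropy
        gW2 = h.T @ err / len(X)             # backward pass
        gW1 = X.T @ ((err @ W2.T) * h * (1.0 - h)) / len(X)
        W2 -= lr * gW2
        W1 -= lr * gW1
    return lambda Xn: sig(sig(Xn @ W1) @ W2)
```

Trained on logged deployment trials, such a predictor replaces part of the trial-and-error search: candidate factor settings can be scored in milliseconds instead of re-running physical experiments.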

Yolo based Light Source Object Detection for Traffic Image Big Data Processing (교통 영상 빅데이터 처리를 위한 Yolo 기반 광원 객체 탐지)

  • Kang, Ji-Soo;Shim, Se-Eun;Jo, Sun-Moon;Chung, Kyungyong
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.8
    • /
    • pp.40-46
    • /
    • 2020
  • As interest in traffic safety increases, research on autonomous driving, which reduces the incidence of traffic accidents, is growing. Object recognition and detection are essential for autonomous driving, so research on both using traffic image big data is being actively conducted to determine road conditions. However, because most existing studies use only daytime data, it is difficult to recognize objects on night roads. In particular, for light source objects, daytime features cannot be used as-is due to light smearing and whitening. Therefore, this study proposes YOLO-based light source object detection for traffic image big data processing. The proposed method applies color model transitions to night traffic images, extracts object characteristics through image processing, and determines the object group from those characteristics. A deep learning model trained on the candidate group data can increase the recognition rate of light source detection on night roads.
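The "color model transition" step can be illustrated with the simplest such transition, RGB to HSV: the HSV value channel (the per-pixel maximum over R, G, B) isolates bright light-source candidates that smear and whiten in plain RGB night frames. A toy sketch only; the threshold and function name are assumptions, not the paper's pipeline.

```python
import numpy as np

def light_source_candidates(rgb, v_thresh=200):
    """Threshold the HSV value channel of an RGB night frame to get a
    boolean mask of light-source candidate pixels."""
    v = rgb.max(axis=2)        # HSV value channel = max over R, G, B
    return v >= v_thresh
```

In a full pipeline the mask's connected components would then be grouped and fed to the detector as candidate light-source regions.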

A Study on the Comparison of 2-D Circular Object Tracking Algorithm Using Vision System (비젼 시스템을 이용한 2-D 원형 물체 추적 알고리즘의 비교에 관한 연구)

  • Han, Kyu-Bum;Kim, Jung-Hoon;Baek, Yoon-Su
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.7
    • /
    • pp.125-131
    • /
    • 1999
  • In this paper, algorithms that can track a two-dimensional moving circular object using a simple vision system are described. To track a moving object, finding the object's feature points, such as its centroid, corner points, and area, is indispensable. Under the assumption of a two-dimensional circular moving object, the centroid of the circle is computed from three points on its circumference. Different algorithms for computing the three edge points, the simple x-directional detection method, the stick method, and the T-shape method, are suggested. Through computer simulation and experiments, the three algorithms are compared in terms of detection accuracy and computational time efficiency.

  • PDF
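The centroid-from-three-circumference-points step shared by all three edge-detection variants reduces to a small linear solve: the center is equidistant from the three points, which gives two perpendicular-bisector equations. The `circle_center` name is illustrative.

```python
import numpy as np

def circle_center(p1, p2, p3):
    """Circumcenter of the circle through three edge points on the object's
    circumference. |c-p1|^2 = |c-p2|^2 and |c-p1|^2 = |c-p3|^2 give a
    2x2 linear system in the center coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = np.array([[x2 - x1, y2 - y1],
                  [x3 - x1, y3 - y1]], dtype=float)
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)   # raises LinAlgError if points are collinear
```

For example, the points (7, 3), (2, 8), (-3, 3) all lie on the radius-5 circle centered at (2, 3), which the solve recovers.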

Attention based Feature-Fusion Network for 3D Object Detection (3차원 객체 탐지를 위한 어텐션 기반 특징 융합 네트워크)

  • Sang-Hyun Ryoo;Dae-Yeol Kang;Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.2
    • /
    • pp.190-196
    • /
    • 2023
  • Recently, following the development of LiDAR technology, which can measure distance to objects, interest in LiDAR-based 3D object detection networks has been growing. Previous networks produce inaccurate localization due to spatial information loss during voxelization and downsampling. In this study, we propose an attention-based fusion method and a camera-LiDAR fusion system to acquire high-level features and high positional accuracy. First, by introducing attention into Voxel-RCNN, a grid-based 3D object detection network, the multi-scale sparse 3D convolution features are fused effectively to improve 3D detection performance. Additionally, we propose a late-fusion mechanism that combines the outputs of the 3D and 2D object detection networks to remove false positives. Comparative experiments with existing algorithms are performed on the KITTI dataset, which is widely used in autonomous driving. The proposed method improves both 2D object detection on BEV and 3D object detection; in particular, precision improved by about 0.54% for the car moderate class compared to Voxel-RCNN.
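The core idea of attention-based feature fusion, weighting competing feature sources by learned or computed importance rather than simply concatenating them, can be shown generically: each source's global response becomes a per-channel softmax weight. This is a generic sketch of attention fusion, not the paper's Voxel-RCNN module; the pooling choice and function name are assumptions.

```python
import numpy as np

def attention_fuse(feat_a, feat_b):
    """Attention-weighted fusion of two feature maps of shape (C, H, W):
    global-average responses are turned into a per-channel softmax over the
    two sources, so the more informative source dominates each channel."""
    scores = np.stack([feat_a.mean(axis=(1, 2)),
                       feat_b.mean(axis=(1, 2))])      # (2, C)
    e = np.exp(scores - scores.max(axis=0))            # stable softmax
    w = e / e.sum(axis=0)
    return w[0][:, None, None] * feat_a + w[1][:, None, None] * feat_b
```

In a trained network the scores would come from learned projections instead of raw means, but the weighting-then-summing structure is the same.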

Moving Object Detection and Tracking in Image Sequence with complex background (복잡한 배경을 가진 영상 시퀀스에서의 이동 물체 검지 및 추적)

  • 정영기;호요성
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.615-618
    • /
    • 1999
  • In this paper, an object detection and tracking algorithm is presented that is robust for image sequences with complex backgrounds. The proposed algorithm is composed of three parts: moving object detection, object tracking, and motion analysis. Moving object detection is implemented using a temporal median background method suitable for real-time applications. In the motion analysis, we propose a new technique for removing temporal clutter, such as a swaying plant or a light reflection from a background object. In addition, we design a multiple-vehicle tracking system based on Kalman filtering. Computer simulation shows the robustness of the proposed scheme on MPEG-7 test image sequences.

  • PDF
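The temporal median background method is simple enough to sketch directly: the background estimate is the per-pixel median over a buffer of recent frames, and pixels that deviate from it are flagged as moving. A minimal numpy version, with a hypothetical threshold value.

```python
import numpy as np

def median_background_mask(frames, current, thresh=25):
    """Temporal-median background subtraction. frames: list of past
    grayscale frames; current: the frame to segment. Returns a boolean
    foreground mask of pixels deviating from the median background."""
    background = np.median(np.stack(frames), axis=0)   # per-pixel median
    return np.abs(current.astype(float) - background) > thresh
```

The median's robustness to outliers is what makes it suitable here: a moving object passing through a pixel briefly does not contaminate the background estimate the way a running mean would.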

Object Detection with LiDAR Point Cloud and RGBD Synthesis Using GNN

  • Jung, Tae-Won;Jeong, Chi-Seo;Lee, Jong-Yong;Jung, Kye-Dong
    • International journal of advanced smart convergence
    • /
    • v.9 no.3
    • /
    • pp.192-198
    • /
    • 2020
  • The 3D point cloud is a key technology for object detection in virtual and augmented reality. To apply object detection in various areas, 3D information, and even color information, must be obtained more easily. In general, a 3D point cloud is acquired using an expensive scanner device, but 3D and characteristic information such as RGB and depth can be easily obtained on a mobile device. A GNN (Graph Neural Network) can perform object detection based on these characteristics. In this paper, we generate RGB and RGBD inputs by extracting basic and characteristic information from the KITTI dataset, which is often used in 3D point cloud object detection. We build an RGB-GNN combining intensity (i), the most widely used LiDAR characteristic, with color information obtainable from mobile devices, and compare and analyze object detection accuracy against an RGBD-GNN that additionally characterizes depth.
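The GNN building block underlying such point-cloud detectors is message passing on a k-nearest-neighbour graph: each point aggregates features from its spatial neighbours. A single max-aggregation step is sketched below; the RGB/RGBD variants in the abstract would simply add colour and depth channels to `feats`. Function name and aggregation choice are illustrative.

```python
import numpy as np

def gnn_message_pass(points, feats, k=2):
    """One max-aggregation message-passing step on a k-NN graph of 3-D
    points (N, 3) with per-point features (N, F). Each point keeps the
    element-wise max of its own and its neighbours' features."""
    # pairwise Euclidean distances between all points
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nbrs = np.argsort(d, axis=1)[:, 1:k + 1]   # k nearest, excluding self
    return np.maximum(feats, feats[nbrs].max(axis=1))
```

Stacking several such steps (with learned per-edge transforms in a real network) lets geometric and colour evidence propagate across the cloud before per-point detection heads are applied.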

Dense Optical flow based Moving Object Detection at Dynamic Scenes (동적 배경에서의 고밀도 광류 기반 이동 객체 검출)

  • Lim, Hyojin;Choi, Yeongyu;Nguyen Khac, Cuong;Jung, Ho-Youl
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.5
    • /
    • pp.277-285
    • /
    • 2016
  • Moving object detection has been an emerging research field in various advanced driver assistance systems (ADAS) and surveillance systems. In this paper, we propose two optical flow based moving object detection methods for dynamic scenes. Both methods consist of three successive steps: pre-processing, foreground segmentation, and post-processing. The two methods share the pre-processing and post-processing steps but differ in foreground segmentation. Pre-processing mainly computes an optical flow map in which each pixel holds the amplitude of its motion vector: dense optical flow is estimated with the Farneback technique, and the motion amplitude, normalized into the range 0 to 255, is assigned to each pixel of the map. In the foreground segmentation step, moving object and background are classified using this map. Here we propose two algorithms. One is Gaussian mixture model (GMM) based background subtraction applied to the optical flow map. The other is adaptive-thresholding based foreground segmentation, which classifies each pixel into object or background by updating the threshold value column by column. Through simulations, we show that both optical flow based methods achieve good object detection performance in dynamic scenes.
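The pre-processing and the adaptive-thresholding variant above can be sketched in numpy. In practice the flow components would come from a dense estimator such as OpenCV's `calcOpticalFlowFarneback`; here they are taken as given, and the per-column threshold rule (mean plus one standard deviation) is an illustrative assumption.

```python
import numpy as np

def flow_magnitude_map(u, v):
    """Optical-flow map: per-pixel motion amplitude normalized to 0..255,
    as in the pre-processing step. u, v are the flow components."""
    mag = np.hypot(u, v)
    span = mag.max() - mag.min()
    if span == 0:
        return np.zeros(mag.shape, dtype=np.uint8)
    return ((mag - mag.min()) / span * 255).astype(np.uint8)

def columnwise_segmentation(flow_map, k=1.0):
    """Adaptive thresholding updated column by column: a pixel is
    foreground if it exceeds its column's mean + k * std, so ego-motion
    that varies across the image needs no single global threshold."""
    mask = np.zeros(flow_map.shape, dtype=bool)
    for c in range(flow_map.shape[1]):
        col = flow_map[:, c].astype(float)
        mask[:, c] = col > col.mean() + k * col.std()
    return mask
```

Columns dominated by uniform background motion yield a threshold no pixel exceeds, while columns containing a distinctly moving object flag exactly the outlier pixels.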

Object Detection using Multiple Color Normalization and Moving Color Information (다중색상정규화와 움직임 색상정보를 이용한 물체검출)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.721-728
    • /
    • 2005
  • This paper suggests an effective detection system for moving objects using specified color and motion information. The proposed system includes an object extraction and definition process that uses MCN (Multiple Color Normalization) and MCWUPC (Moving Color Weighted Unmatched Pixel Count) computation to decide whether a moving object exists, together with an object segmentation technique using signature information to extract the objects exactly with high probability. Finally, a real-time detection system is implemented to verify the effectiveness of the technique; experiments show that the success rate of object tracking is more than 89% over 120 image frames.