• Title/Summary/Keyword: single-image detection

Combining Shape and SIFT Features for 3-D Object Detection and Pose Estimation (효과적인 3차원 객체 인식 및 자세 추정을 위한 외형 및 SIFT 특징 정보 결합 기법)

  • Tak, Yoon-Sik;Hwang, Een-Jun
    • The Transactions of The Korean Institute of Electrical Engineers / v.59 no.2 / pp.429-435 / 2010
  • Three-dimensional (3-D) object detection and pose estimation from a single-view query image has been an important issue in various fields such as medical applications, robot vision, and manufacturing automation. However, most of the existing methods are not appropriate for a real-time environment, since object detection and pose estimation require extensive information and computation. In this paper, we present a fast 3-D object detection and pose estimation scheme based on surrounding camera view-changed images of objects. Our scheme has two parts. First, we detect images similar to the query image in the database based on the shape feature and calculate candidate poses. Second, we perform accurate pose estimation for the candidate poses using the scale invariant feature transform (SIFT) method. We carried out extensive experiments on our prototype system and achieved excellent performance; we report some of the results.
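
The second stage of this scheme, SIFT-based verification of candidate views, can be illustrated with a standard feature-matching routine. The sketch below is not the authors' implementation; it only shows how a query image could be scored against one candidate database view using OpenCV's SIFT detector and a ratio test (function names, the ratio value, and the scoring rule are illustrative assumptions).

```python
# Hypothetical sketch: score a query image against a candidate database view
# with SIFT matching, in the spirit of the paper's pose-refinement stage.
import cv2

def match_candidate(query_img, candidate_img, ratio=0.75):
    """Return the number of good SIFT matches between two grayscale images."""
    sift = cv2.SIFT_create()
    _, des_q = sift.detectAndCompute(query_img, None)
    _, des_c = sift.detectAndCompute(candidate_img, None)
    if des_q is None or des_c is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_q, des_c, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good)

# The candidate view with the most good matches would be taken as the estimated pose.
```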

Hand Raising Pose Detection in the Images of a Single Camera for Mobile Robot (주행 로봇을 위한 단일 카메라 영상에서 손든 자세 검출 알고리즘)

  • Kwon, Gi-Il
    • The Journal of Korea Robotics Society / v.10 no.4 / pp.223-229 / 2015
  • This paper proposes a novel method for detecting hand raising poses in images acquired from a single camera attached to a mobile robot that navigates unknown dynamic environments. Due to unconstrained illumination, a high level of variance in human appearance, and unpredictable backgrounds, detecting hand raising gestures in an image acquired from a camera attached to a mobile robot is very challenging. The proposed method first detects faces to determine the region of interest (ROI), and in this ROI we detect hands using a HOG-based hand detector. Using the color distribution of the face region, we evaluate each candidate in the detected hand region. To deal with cases of failure in face detection, we also use a HOG-based hand raising pose detector. Unlike other hand raising pose detection systems, we evaluate our algorithm with images acquired from the camera and images obtained from the Internet that contain unknown backgrounds and unconstrained illumination. The level of variance in hand raising poses in these images is very high. Our experimental results show that the proposed method robustly detects hand raising poses in complex backgrounds and unknown lighting conditions.
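
As a rough illustration of the pipeline described above (face detection to define an ROI, then a hand detector inside that ROI), the following sketch uses OpenCV's stock Haar face cascade; the HOG-based hand detector is left as a placeholder because the authors' trained model is not available, and the ROI geometry is an assumption.

```python
# Hypothetical sketch of the pipeline: detect faces, form an ROI around/above
# each face, and pass that ROI to a (placeholder) hand detector.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def hand_search_rois(frame_gray):
    """Yield image regions in which a raised hand would be searched for."""
    faces = face_cascade.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Assumption: a raised hand appears beside and above the detected face.
        x0, y0 = max(0, x - 2 * w), max(0, y - 2 * h)
        yield frame_gray[y0:y + h, x0:x + 3 * w]
        # A trained HOG/SVM hand detector would be run on each yielded ROI.
```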

Realization of Object Detection Algorithm and Eight-channel LiDAR sensor for Autonomous Vehicles (자율주행자동차를 위한 8채널 LiDAR 센서 및 객체 검출 알고리즘의 구현)

  • Kim, Ju-Young;Woo, Seong Tak;Yoo, Jong-Ho;Park, Young-Bin;Lee, Joong-Hee;Cho, Hyun-Chang;Choi, Hyun-Yong
    • Journal of Sensor Science and Technology / v.28 no.3 / pp.157-163 / 2019
  • The LiDAR sensor, which is widely regarded as one of the most important sensors, has recently undergone active commercialization owing to the significant growth in the production of ADAS and autonomous vehicle components. LiDAR sensor technology involves radiating a laser beam at a particular angle and acquiring a three-dimensional image by measuring the elapsed time of the laser beam that returns after being reflected. The LiDAR sensor has been incorporated and utilized in various devices such as drones and robots. This study focuses on object detection and recognition by employing sensor fusion. Object detection and recognition can be executed as a single function by incorporating sensors capable of recognition, such as image sensors, optical sensors, and propagation sensors. However, a single sensor has limitations with respect to object detection and recognition, and such limitations can be overcome by employing multiple sensors. In this paper, the performance of an eight-channel scanning LiDAR was evaluated and an object detection algorithm based on it was implemented. Furthermore, object detection characteristics during daytime and nighttime in a real road environment were verified. The experimental results corroborate that an excellent detection performance of 92.87% can be achieved.
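
The abstract does not spell out the detection algorithm; a common baseline for finding object candidates in a sparse LiDAR point cloud is Euclidean clustering, sketched below with scikit-learn's DBSCAN. The parameters and the preprocessing steps mentioned in the comments are illustrative assumptions, not the paper's method.

```python
# Hypothetical baseline: cluster LiDAR returns into object candidates with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_objects(points_xyz, eps=0.5, min_points=10):
    """points_xyz: (N, 3) array of LiDAR returns in metres.
    Returns a list of (N_i, 3) arrays, one per detected object candidate."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(points_xyz)
    return [points_xyz[labels == k] for k in set(labels) if k != -1]

# An 8-channel scan would first be converted to Cartesian coordinates and
# ground points removed before the remaining returns are passed to detect_objects().
```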

Visual Attention Detection By Adaptive Non-Local Filter

  • Anh, Dao Nam
    • IEIE Transactions on Smart Processing and Computing / v.5 no.1 / pp.49-54 / 2016
  • Considering global and local factors of a set of features in a given single image or in multiple images is a common approach in image processing. This paper introduces an application of an adaptive version of the non-local filter, whose original version searches for non-local similarity in order to remove noise. Since most images contain texture patterns in both foreground and background, extraction of salient regions with texture is a challenging task. Aiming at the detection of visual attention regions in images with texture, we present a contrast analysis of image patches located throughout the whole image, not only nearby, with the assistance of the adaptive filter for estimating non-local divergence. The method allows extraction of salient regions with texture in images of wildlife. Experimental results on a benchmark demonstrate the ability of the proposed method to deal with this challenge.
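
A minimal sketch of the underlying idea follows: each patch is compared against patches sampled from anywhere in the image (not just its neighbourhood), and the accumulated dissimilarity is used as a saliency score. This is an assumption-laden simplification for illustration, not the adaptive non-local filter itself; patch size and sample count are arbitrary.

```python
# Hypothetical sketch: score each patch by its mean dissimilarity to randomly
# sampled non-local patches; high non-local divergence ~ visually salient.
import numpy as np

def nonlocal_saliency(gray, patch=8, samples=200, rng=np.random.default_rng(0)):
    """gray: 2-D grayscale image array; returns a coarse per-patch saliency map."""
    h, w = gray.shape
    ys, xs = np.arange(0, h - patch, patch), np.arange(0, w - patch, patch)
    saliency = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            p = gray[y:y + patch, x:x + patch].astype(float)
            ry = rng.integers(0, h - patch, samples)
            rx = rng.integers(0, w - patch, samples)
            dists = [np.abs(p - gray[a:a + patch, b:b + patch]).mean()
                     for a, b in zip(ry, rx)]
            saliency[i, j] = np.mean(dists)
    return saliency
```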

Footstep Detection and Classification Algorithms based Seismic Sensor (진동센서 기반 걸음걸이 검출 및 분류 알고리즘)

  • Kang, Youn Joung;Lee, Jaeil;Bea, Jinho;Lee, Chong Hyun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.1 / pp.162-172 / 2015
  • In this paper, we propose an adaptive footstep detection algorithm and an algorithm for classifying the activities of the detected footsteps. The proposed algorithms can detect and classify whole movements as well as individual and irregular activities, since they do not rely on the continuous footstep signals used by most previous research. For classifying movement, we use feature vectors obtained from the frequency spectrum of the FFT, the CWT, an AR model, and the AR spectrogram image. With an SVM classifier, we obtain a classification accuracy for single-footstep activities of over 90% when feature vectors based on the AR spectrogram image are used.
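
A hedged sketch of the classification stage is shown below: frequency-domain features from windowed seismic segments fed to an SVM. The paper also uses CWT and AR-spectrogram image features, which are omitted here; the window handling, number of retained bins, and kernel choice are illustrative assumptions.

```python
# Hypothetical sketch: FFT magnitude features + SVM for footstep activity classification.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fft_features(window, n_bins=64):
    """window: 1-D seismic signal segment; returns its low-frequency magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    return spectrum[:n_bins]

def train_classifier(windows, labels):
    """windows: list of equal-length 1-D segments; labels: activity label per segment."""
    X = np.array([fft_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```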

The Tracing Algorithm for Center Pixel of Character Image and the Design of Neural Chip (문자영상의 중심화소 추적 알고리즘 및 신경칩 설계)

  • 고휘진;여진경;정호선
    • Journal of the Korean Institute of Telematics and Electronics B / v.29B no.8 / pp.35-43 / 1992
  • We present a tracing algorithm for the center pixel of a character image. The character image is read by a scanner device. By performing the tracing process, it is possible to detect feature points such as branch points and strokes in four directions, so the tracing process covers both the thinning and the feature point detection steps and improves the processing time. Using the suggested tracing algorithm instead of thinning, the usual preprocessing step of character recognition, increases speed by up to 5 times. The preprocessing chip has been designed using a single-layer perceptron algorithm.
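
The tracing algorithm itself is not reproduced here. As a point of comparison, branch points on a one-pixel-wide (thinned) character are conventionally found by counting 0-to-1 transitions among the 8 neighbours; this is the conventional post-thinning step that the tracing approach folds into its scan. A minimal sketch of that conventional check, under the assumption of a binary skeleton and interior pixels:

```python
# Conventional branch-point test on a binary skeleton (crossing-number style),
# shown only as the baseline the tracing algorithm is designed to avoid.
import numpy as np

def is_branch_point(skel, y, x):
    """skel: binary (0/1) skeleton image; (y, x): interior pixel coordinates."""
    if skel[y, x] == 0:
        return False
    # 8 neighbours in clockwise order so transitions can be counted cyclically.
    n = [skel[y - 1, x], skel[y - 1, x + 1], skel[y, x + 1], skel[y + 1, x + 1],
         skel[y + 1, x], skel[y + 1, x - 1], skel[y, x - 1], skel[y - 1, x - 1]]
    transitions = sum((n[i] == 0) and (n[(i + 1) % 8] == 1) for i in range(8))
    return transitions >= 3  # three or more strokes meet at this pixel
```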

Detection Range Improvement of Radiation Sensor for Radiation Contamination Distribution Imaging (방사선 오염분포 영상화를 위한 방사선 센서의 탐지 범위 개선에 관한 연구)

  • Song, Keun-Young;Hwang, Young-Gwan;Lee, Nam-Ho;Na, Jun-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.12 / pp.1535-1541 / 2019
  • To carry out safe and rapid decontamination in radiological accident areas, various information about the radiation sources must be acquired. In particular, determining the location and distribution of radiation sources is essential for the rapid follow-up and removal of contaminants as well as for minimizing harm to workers. A radiation distribution detection device is used to obtain the position and distribution information of the radiation source. In such a device, the detection sensor unit is generally composed of a single sensor, and the detection range is limited owing to the physical characteristics of that single sensor. We applied a calibration detector to control the detection sensitivity of a single sensor for radiation detection and improved the limited detection range of the radiation dose rate. A gamma irradiation test confirmed the improvement of the radiation distribution detection range.

Learning of Rules for Edge Detection of Image using Fuzzy Classifier System (퍼지 분류가 시스템을 이용한 영상의 에지 검출 규칙 학습)

  • 정치선;반창봉;심귀보
    • Journal of the Korean Institute of Intelligent Systems / v.10 no.3 / pp.252-259 / 2000
  • In this paper, we propose a Fuzzy Classifier System (FCS) to find a set of fuzzy rules that can carry out edge detection of an image. The FCS is based on a fuzzy logic system combined with machine learning; therefore, the antecedent and consequent of a classifier in the FCS are the same as those of a fuzzy rule. There are two different approaches, the Michigan and the Pittsburgh approach, to acquiring appropriate fuzzy rules by evolutionary computation. In this paper, we use the Michigan style, in which a single fuzzy if-then rule is coded as an individual. The FCS also employs genetic algorithms to generate new rules and to modify rules when the performance of the system needs to be improved. The proposed method is evaluated by applying it to edge detection of a gray-level image, a preprocessing step in computer vision. The differences between the average gray levels of the vertical and horizontal arrays of neighborhood pixels are represented as fuzzy sets, and the center pixel is then decided to be an edge pixel or not using fuzzy if-then rules. We compare the resulting image with a conventional edge image obtained by another edge detection method such as Sobel edge detection.
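
To make the decision step concrete, here is a hedged sketch of how one fuzzy if-then rule over the vertical and horizontal gray-level differences might classify the centre pixel. The membership function, its parameters, and the firing threshold are illustrative; in the paper the rule set is learned by the genetic algorithm rather than written by hand.

```python
# Hypothetical sketch: a single hand-written fuzzy rule deciding edge / not-edge
# from the average gray-level differences of the neighbour arrays.
import numpy as np

def mu_large(d, low=10.0, high=60.0):
    """Membership degree for 'difference is LARGE' (illustrative ramp function)."""
    return float(np.clip((d - low) / (high - low), 0.0, 1.0))

def is_edge(window3x3, threshold=0.5):
    """window3x3: 3x3 gray-level neighbourhood around the centre pixel."""
    w = window3x3.astype(float)
    dv = abs(w[:, 0].mean() - w[:, 2].mean())  # vertical arrays: left vs right column
    dh = abs(w[0, :].mean() - w[2, :].mean())  # horizontal arrays: top vs bottom row
    # Rule: IF vertical difference is LARGE OR horizontal difference is LARGE THEN edge.
    firing = max(mu_large(dv), mu_large(dh))
    return firing >= threshold
```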

Lane Departure Detection Using a Partial Top-view Image (부분 top-view 영상을 이용한 차선 이탈 검출)

  • Park, Han-dong;Oh, Jeong-su
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.8 / pp.1553-1559 / 2017
  • This paper proposes a lane departure detection algorithm using a single camera mounted in front of a vehicle. The proposed algorithm generates a partial top-view image for a small ROI (region of interest) designated in the top-view space from the image acquired by the camera, detects lanes in the small partial top-view image, and makes a decision on lane departure by checking the overlap between a pre-assigned virtual vehicle and the detected lanes. The proposed algorithm also includes the removal of lines caused by road symbols (noise) that disturb lane departure detection between lanes, and the prediction of lost lanes using lane information from previous frames. In lane departure detection tests using real road videos, the proposed algorithm makes the right decision in 99.0% of lane keeping conditions and 94.7% of lane departure conditions.
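
A hedged sketch of the geometric part of the pipeline follows: warping a small road ROI to a partial top view with a perspective transform and then detecting lane segments with a Hough transform. The ROI corner points, image size, and thresholds are placeholders, not the paper's calibration.

```python
# Hypothetical sketch: partial top-view warp of a road ROI, then lane-line detection.
import cv2
import numpy as np

def partial_top_view(frame, src_pts, size=(200, 300)):
    """src_pts: four image-plane corners of the road ROI (placeholder calibration)."""
    dst_pts = np.float32([[0, 0], [size[0], 0], [size[0], size[1]], [0, size[1]]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, size)

def detect_lane_segments(top_view):
    gray = cv2.cvtColor(top_view, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    # Probabilistic Hough transform; near-vertical segments are lane candidates.
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                           minLineLength=40, maxLineGap=10)

# Lane departure would then be decided by checking whether the detected segments
# overlap a pre-assigned virtual-vehicle box in the top-view image.
```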

Comparison of Two Methods for Stationary Incident Detection Based on Background Image

  • Ghimire, Deepak;Lee, Joonwhoan
    • Smart Media Journal / v.1 no.3 / pp.48-55 / 2012
  • In general, background subtraction based methods are used to detect moving objects in visual tracking applications. In this paper, we employ a background subtraction based scheme to detect temporarily stationary objects. We propose two schemes for stationary object detection and compare them in terms of detection performance and computational complexity. In the first approach we use a single background, and in the second approach we use dual backgrounds generated with different learning rates in order to detect temporarily stopped objects. Finally, we use normalized cross correlation (NCC) based image comparison to monitor and track the detected stationary object in a video scene. The proposed method is robust to partial occlusion, short-term full occlusion, and illumination changes, and it can operate in real time.
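
A minimal sketch of the dual-background idea, assuming two OpenCV MOG2 subtractors with fast and slow learning rates: pixels already absorbed into the fast (short-term) background but still foreground in the slow (long-term) background are candidate temporarily-stationary regions, and a detected region can later be re-checked with normalized cross correlation. The learning rates and NCC threshold are illustrative, not the paper's values.

```python
# Hypothetical sketch: dual-background stationary object detection with an NCC re-check.
import cv2

bg_slow = cv2.createBackgroundSubtractorMOG2()  # long-term background model
bg_fast = cv2.createBackgroundSubtractorMOG2()  # short-term background model

def stationary_mask(frame):
    """Foreground in the slow model but background in the fast model ~ stopped object."""
    mask_slow = bg_slow.apply(frame, learningRate=0.001)
    mask_fast = bg_fast.apply(frame, learningRate=0.02)
    return cv2.bitwise_and(mask_slow, cv2.bitwise_not(mask_fast))

def still_present(frame_gray, template_gray, min_ncc=0.9):
    """NCC check that a previously detected stationary object is still in place."""
    score = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCORR_NORMED).max()
    return score > min_ncc
```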
