• Title/Summary/Keyword: object clustering

Computer Vision System for Analysis of Geometrical Characteristics of Agricultural Products and Microscopic Particles (I) -Algorithms for Automatic Threshold Selection- (농산물 및 미립자의 기하학적 특성 분석을 위한 컴퓨터 시각 시스템(I) -자동(自動) 문턱값 설정(設定) 알고리즘-)

  • Lee, J.W.;Noh, S.H.
    • Journal of Biosystems Engineering / v.17 no.2 / pp.132-142 / 1992
  • The main objective of this paper is to evaluate and modify existing algorithms for automatic threshold selection. Four existing algorithms were evaluated quantitatively using test images of coffee droplets and an apple. The test images differed in the area ratio of the object to the whole image, in the average gray values of the object and the background, and in the S/N ratio of the added Gaussian noise. The results showed that the Histogram Clustering Method and the Maximum Entropy Method performed better than the Moment Preserving Method and the Simple Image Statistic Method for automatic thresholding.
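
Since this entry only names the thresholding methods compared, here is a minimal sketch of a Kapur-style maximum-entropy threshold selector in plain NumPy. It is an illustrative stand-in, not the authors' implementation; the function name and the 256-bin histogram are assumptions.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Kapur-style maximum-entropy threshold from a 256-bin gray histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1          # within-class distributions
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:                      # maximize total entropy
            best_h, best_t = h0 + h1, t
    return best_t
```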

Confidence-based Background Subtraction Algorithm for Moving Cameras (움직이는 카메라를 위한 신뢰도 기반의 배경 제거 알고리즘)

  • Mun, Hyeok;Lee, Bok Ju;Choi, Young Kyu
    • Journal of the Semiconductor & Display Technology / v.16 no.4 / pp.30-35 / 2017
  • Moving object segmentation from a nonstationary camera is a difficult problem because both the camera and the object move. In this paper, we propose a new confidence-based background subtraction technique for a moving camera. The method clusters motion vectors and generates an adaptive multi-homography from each pair of adjacent video frames. The main innovation is the use of confidence images for the foreground and background motion groups. Experimental results show that our confidence-based approach robustly detects moving targets in sequences taken by a freely moving camera.
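
As a rough sketch of the frame-to-frame alignment such moving-camera background subtraction relies on, the Python/OpenCV snippet below estimates a single background homography from tracked features and differences the motion-compensated frames. The paper's adaptive multi-homography and confidence images are not reproduced, and all names and thresholds here are illustrative.

```python
import cv2
import numpy as np

def motion_compensated_diff(prev_gray, curr_gray, diff_thresh=30):
    """Rough foreground mask for a moving camera: track sparse features,
    fit a background homography with RANSAC, warp the previous frame onto
    the current one, and threshold the absolute difference."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(prev_gray, H,
                                 (curr_gray.shape[1], curr_gray.shape[0]))
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```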

Position Clustering of Moving Object based on Global Color Model (글로벌 칼라기반의 이동물체 위치 클러스터링)

  • Jin, Tae-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.05a / pp.868-871 / 2009
  • We propose a global-color-model-based method for tracking the motions of multiple humans using a networked multiple-camera system in an intelligent space, a human-robot coexistent system. An intelligent space is a space in which many intelligent devices, such as computers and sensors (color CCD cameras, for example), are distributed; human beings can be part of it as well. One of the main goals of an intelligent space is to assist humans and to provide various services for them. To be capable of this, the intelligent space must be able to perform various human-related tasks, one of which is to identify and track multiple objects seamlessly.

Building Change Detection Using Deep Learning for Remote Sensing Images

  • Wang, Chang;Han, Shijing;Zhang, Wen;Miao, Shufeng
    • Journal of Information Processing Systems / v.18 no.4 / pp.587-598 / 2022
  • To increase building change recognition accuracy, we present a deep learning-based building change detection method using remote sensing images. In the proposed approach, we create a difference image (DI) by merging pixel-level and object-level information from multitemporal remote sensing images, and a frequency-domain saliency technique is used to generate the DI saliency map. The fuzzy C-means clustering technique pre-classifies a coarse change detection map by thresholding the DI saliency map. We then extract neighborhood features of the unchanged and changed (building) pixels from the pixel-level and object-level feature images and use them as deep neural network (DNN) training samples. The trained DNNs are then used to identify changes in the DI. The suggested strategy was evaluated and compared with current detection methods on two datasets. The results suggest that our proposed technique can detect more building change information and improve change detection accuracy.
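
The fuzzy C-means pre-classification step lends itself to a short sketch. The plain-NumPy code below clusters difference-image pixel values into "changed"/"unchanged" groups; it is a one-dimensional illustration under my own naming, not the paper's pipeline (the saliency map and DNN stages are omitted).

```python
import numpy as np

def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=50, seed=0):
    """Plain-NumPy fuzzy C-means on 1-D data (e.g. difference-image pixels).
    Returns cluster centers and the membership matrix U of shape (c, N)."""
    rng = np.random.default_rng(seed)
    x = values.reshape(1, -1).astype(float)                  # shape (1, N)
    u = rng.random((c, x.shape[1]))
    u /= u.sum(axis=0, keepdims=True)                        # memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um @ x.T) / um.sum(axis=1, keepdims=True) # shape (c, 1)
        dist = np.abs(x - centers) + 1e-9                    # shape (c, N)
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0, keepdims=True)             # FCM membership update
    return centers.ravel(), u

# Pre-classify a difference image: the cluster with the larger center is
# taken as "changed", the other as "unchanged".
# di = np.abs(img_t2.astype(float) - img_t1.astype(float))
# centers, u = fuzzy_cmeans_1d(di.ravel())
# changed = (u.argmax(axis=0) == centers.argmax()).reshape(di.shape)
```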

An Object Detection and Tracking System using Fuzzy C-means and CONDENSATION (Fuzzy C-means와 CONDENSATION을 이용한 객체 검출 및 추적 시스템)

  • Kim, Jong-Ho;Kim, Sang-Kyoon;Hang, Goo-Seun;Ahn, Sang-Ho;Kang, Byoung-Doo
    • Journal of Korea Society of Industrial Information Systems / v.16 no.4 / pp.87-98 / 2011
  • Detecting and tracking a moving object in video are basic and necessary preprocessing steps in many video systems such as object recognition, context awareness, and intelligent visual surveillance. In this paper, we propose a method that detects a moving object quickly and accurately even when the background and lighting change in real time, and that remains robust when the target object is partially occluded by other objects. For effective detection, an Eigen-space and FCM are combined, and the CONDENSATION algorithm is used to track the detected object robustly. First, training data collected from background images are linearly transformed using Principal Component Analysis (PCA). Second, an Eigen-background is constructed from the principal components that best discriminate object from background. Next, an object is detected with FCM applied to the convolution of the Eigen-vectors from the previous step with the input image. Finally, the object is tracked by feeding the coordinates of the detected object into the CONDENSATION algorithm. Images containing several moving objects at the same time are collected and used as training data so that the system adapts to changes of light and background under a fixed camera. Test results show that the proposed method detects objects robustly under changes of light and background and under partial object motion.
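
The Eigen-background step described above can be sketched with plain NumPy: build a PCA basis from background frames and flag pixels with a large reconstruction error. This is a simplified stand-in with hypothetical names and thresholds; the FCM and CONDENSATION stages are not reproduced.

```python
import numpy as np

def build_eigen_background(bg_frames, k=10):
    """Build an eigen-background from a stack of grayscale background frames
    of shape (N, H, W) using PCA; keep the top-k principal components."""
    X = bg_frames.reshape(len(bg_frames), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                       # mean image, k eigen-backgrounds

def foreground_mask(frame, mean, components, thresh=25):
    """Project a frame onto the eigen-background space; pixels with a large
    reconstruction error are treated as foreground candidates."""
    x = frame.reshape(-1).astype(float) - mean
    recon = components.T @ (components @ x) + mean
    err = np.abs(frame.reshape(-1).astype(float) - recon)
    return (err > thresh).reshape(frame.shape)
```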

A Statistical Image Segmentation Method in the Hierarchical Image Structure (계층적 영상구조에서 통계적 방법에 의한 영상분할)

  • 최성진
    • Journal of Broadcast Engineering / v.1 no.2 / pp.165-175 / 1996
  • In this paper, an image segmentation method based on a hierarchical pyramid image structure of reduced-resolution versions of the image is presented to address the problems of conventional methods. The method performs object detection and delineation by a statistical approach. For object detection, the IFSVR (inverse father-son variance ratio) and FSVR (father-son variance ratio) methods are proposed to solve the clustering-validity problem that arises in the hierarchical pyramid image structure; an optimal object pixel is detected at some level by these methods. For object delineation, an iterative top-down traversal algorithm is proposed that propagates the optimal object pixel to levels of higher resolution. Computer simulations investigate the results of the proposed statistical methods and the object traversal method for both binary and real images. The simulation results indicate that the proposed segmentation methods based on the hierarchical pyramid image structure have useful properties and deserve consideration as an alternative to existing segmentation methods. The computation required is O(log n) for an n×n input image.
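
The father-son pyramid underlying this method is easy to illustrate: each pixel at a coarser level is the mean of its four sons one level below. The NumPy sketch below builds such a reduced-resolution pyramid; the IFSVR/FSVR statistics and the top-down traversal themselves are not reproduced, and the function name is an assumption.

```python
import numpy as np

def mean_pyramid(img, levels=4):
    """Build a reduced-resolution pyramid by 2x2 block averaging, so each
    'father' pixel at level l is the mean of its four 'sons' at level l-1."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2   # crop to even size
        a = a[:h, :w]
        pyr.append(a.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr
```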

Extraction of paddy field in Jaeryeong, North Korea by object-oriented classification with RapidEye NDVI imagery (RapidEye 위성영상의 시계열 NDVI 및 객체기반 분류를 이용한 북한 재령군의 논벼 재배지역 추출 기법 연구)

  • Lee, Sang-Hyun;Oh, Yun-Gyeong;Park, Na-Young;Lee, Sung Hack;Choi, Jin-Yong
    • Journal of The Korean Society of Agricultural Engineers / v.56 no.3 / pp.55-64 / 2014
  • As the use of high-resolution satellite imagery for land use classification has become widespread, object-oriented classification has been adopted as an affordable alternative to conventional statistical classification. The aim of this study is to extract paddy field areas using object-oriented classification with time-series NDVI from high-resolution satellite images; RapidEye satellite images of Jaeryung-gun in North Korea were used. For the object-oriented classification, objects were created by setting scale and color factors, and three land use categories (paddy field, forest, and water bodies) were then extracted from the objects by applying the variation of the time-series NDVI. Objects not covered by this extraction were classified into six categories using unsupervised classification by clustering analysis. Finally, unsuitable paddy field areas were screened out using topographic factors such as elevation and slope. As a result, about 33.6 % of the total area (32313.1 ha) was classified as paddy field (10847.9 ha), and 851.0 ha was classified as unsuitable paddy field based on the topographic factors. The user accuracy of the paddy field classification was 83.3 %, and about 60.0 % of the total paddy fields were classified from the time-series NDVI before the unsupervised classification. The other land covers were classified as upland (5255.2 ha), forest (10961.0 ha), residential area and bare land (3309.6 ha), and lake and river (1784.4 ha) by this object-oriented classification.
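
As a small illustration of the time-series NDVI feature used above, the NumPy sketch below computes NDVI from red/NIR bands and the mean NDVI trajectory of each segmented object. Band handling and names are assumptions, and the paper's object segmentation and rule set are not reproduced.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), with a small epsilon to avoid /0."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)

def object_mean_ndvi(ndvi_stack, labels):
    """Mean NDVI time series per object: ndvi_stack has shape (T, H, W),
    labels is an (H, W) object-id map from a prior segmentation step."""
    ids = np.unique(labels)
    return {i: ndvi_stack[:, labels == i].mean(axis=1) for i in ids}
```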

Camera and LiDAR Sensor Fusion for Improving Object Detection (카메라와 라이다의 객체 검출 성능 향상을 위한 Sensor Fusion)

  • Lee, Jongseo;Kim, Mangyu;Kim, Hakil
    • Journal of Broadcast Engineering / v.24 no.4 / pp.580-591 / 2019
  • This paper focuses on improving object detection performance with a camera and LiDAR on autonomous vehicle platforms by fusing the objects detected by the individual sensors through a late fusion approach. For camera-based detection, the YOLOv3 model was employed as a one-stage detector, and the distance of the detected objects was estimated from the perspective matrix formulation. LiDAR-based detection is based on K-means clustering. Camera-LiDAR calibration was carried out with PnP-RANSAC to compute the rotation and translation matrix between the two sensors. For sensor fusion, the intersection over union (IoU) on the image plane and the distance and angle in world coordinates were estimated, and these three attributes (IoU, distance, and angle) were fused using logistic regression. The performance evaluation in the sensor fusion scenario showed a 5% improvement in object detection performance compared to using a single sensor.
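
Two of the building blocks mentioned above, K-means clustering of LiDAR returns and image-plane IoU, can be sketched briefly with scikit-learn and NumPy. The calibration, YOLOv3 detector, and logistic-regression fusion are omitted, and the fixed cluster count k is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_lidar_points(points_xyz, k=5):
    """Group LiDAR returns (N x 3 array) into k object candidates with
    K-means; returns one cluster label per point."""
    return KMeans(n_clusters=k, n_init=10).fit_predict(points_xyz)

def iou(box_a, box_b):
    """Intersection-over-union of two image-plane boxes (x1, y1, x2, y2),
    used here as one of the fusion features."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)
```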

Stereo Vision-based Visual Odometry Using Robust Visual Feature in Dynamic Environment (동적 환경에서 강인한 영상특징을 이용한 스테레오 비전 기반의 비주얼 오도메트리)

  • Jung, Sang-Jun;Song, Jae-Bok;Kang, Sin-Cheon
    • The Journal of Korea Robotics Society / v.3 no.4 / pp.263-269 / 2008
  • Visual odometry is a popular approach to estimating robot motion using a monocular or stereo camera. This paper proposes a novel visual odometry scheme using a stereo camera for robust estimation of 6-DOF motion in dynamic environments. False feature matches and the uncertainty of the depth information provided by the camera generate outliers that deteriorate the estimation; these outliers are removed by analyzing the magnitude histogram of the motion vectors of corresponding features and by the RANSAC algorithm. Features extracted from a dynamic object such as a human also make the motion estimation inaccurate. To eliminate this effect, dynamic-object candidates are generated by clustering the 3D positions of the features, and each candidate is checked against the standard deviation of its features to decide whether it is a real dynamic object. The accuracy and practicality of the proposed scheme are verified by several experiments and by comparisons with both IMU- and wheel-based odometry. The proposed scheme is shown to work well when wheel slip occurs or dynamic objects are present.
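
To illustrate the dynamic-object filtering idea, the sketch below clusters 3D feature positions and applies a simple per-cluster motion statistic. DBSCAN is my own illustrative choice of clustering algorithm, the mean-residual test is only a stand-in for the paper's standard-deviation check, and all thresholds are placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dynamic_object_candidates(pts3d, residual_mag, eps=0.5, motion_thresh=0.2):
    """Cluster 3-D feature positions and flag clusters whose features still
    show large motion after ego-motion compensation as dynamic-object
    candidates. pts3d: (N, 3); residual_mag: (N,) residual motion per feature."""
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(pts3d)
    candidates = []
    for lbl in set(labels.tolist()) - {-1}:        # -1 marks DBSCAN noise
        idx = labels == lbl
        if residual_mag[idx].mean() > motion_thresh:
            candidates.append(lbl)
    return labels, candidates
```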

The Object Recognition Using Multi-Sonar Sensor and Neural Networks (복수 초음파센서와 신경망을 이용한 형상인식)

  • Kim, Dong-Gi;O, Tae-Gyun;Gang, Lee-Seok
    • Transactions of the Korean Society of Mechanical Engineers A / v.24 no.11 / pp.2875-2882 / 2000
  • Ultrasonic sensors are typically used in navigation systems for environment modeling, obstacle avoidance, and map building. In this paper, we develop an object classification method using the range data of ultrasonic sensors. A characterization of the sonar scan is described that allows planes, corners, edges, and cylindrical and rectangular pillars to be differentiated by processing the scanned data from three sonars. To use the ultrasonic sensor data as input to the neural network, we introduce a clustering, thresholding, and bit-operation algorithm for the raw data. After repeated training of the neural network, the performance of the proposed method was measured through experiments, and its recognition ranges were investigated. The experiments showed that the proposed method recognized the objects with an accuracy of 78%.
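
A minimal sketch of the final classification stage using scikit-learn's MLPClassifier follows. The feature extraction (clustering, thresholding, bit operations) and the real sonar data are not reproduced, so the arrays below are clearly labeled placeholders and the network size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder features: one row per preprocessed three-sonar scan (standing
# in for the paper's clustering/threshold/bit-operation features); placeholder
# labels over five shape classes (plane, corner, edge, cylinder, rectangle).
rng = np.random.default_rng(0)
X_train = rng.random((200, 24))
y_train = rng.integers(0, 5, 200)

# Small multilayer perceptron classifier trained on the placeholder data.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict(X_train[:5]))        # classify a few scans
```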