• Title/Abstract/Keyword: image clustering

Search results: 600 (processing time: 0.021 s)

THE MODIFIED UNSUPERVISED SPECTRAL ANGLE CLASSIFICATION (MUSAC) OF HYPERION, HYPERION-FLAASH AND ETM+ DATA USING UNIT VECTOR

  • Kim, Dae-Sung;Kim, Yong-Il
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2005년도 Proceedings of ISRS 2005
    • /
    • pp.134-137
    • /
    • 2005
  • Unsupervised spectral angle classification (USAC) is an algorithm that extracts ground-object information using the minimum 'spectral angle' in place of the 'spectral Euclidean distance' in the clustering process. In this study, our algorithm uses the unit vector instead of the spectral distance to compute the cluster means in the unsupervised classification. The proposed algorithm (MUSAC) is applied to Hyperion and ETM+ data, and the results are compared with K-Means and the former USAC algorithm (FUSAC). USAC clearly classifies water and dark forest areas and produces more accurate results than K-Means. Atmospheric correction was applied to the Hyperion data (Hyperion-FLAASH) to obtain more accurate results, but it had no effect on the accuracy. We therefore anticipate that the spectral angle can be one of the most accurate classifiers not only for multispectral images but also for hyperspectral images. Furthermore, the cluster unit vector can be an efficient technique for determining each cluster mean in USAC.

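The abstract above replaces the spectral Euclidean distance with the spectral angle and builds each cluster mean from unit vectors. Below is a minimal sketch of that idea; the array layout, initialization, and iteration count are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def unit(v, axis=-1):
    """Normalize vectors to unit length along the given axis."""
    return v / (np.linalg.norm(v, axis=axis, keepdims=True) + 1e-12)

def spectral_angle_kmeans(pixels, k, n_iter=20, seed=0):
    """Cluster spectra by spectral angle; cluster means are built from unit vectors.

    pixels : (N, B) float array of N spectra with B bands.
    """
    rng = np.random.default_rng(seed)
    centers = unit(pixels[rng.choice(len(pixels), size=k, replace=False)])
    u = unit(pixels)
    for _ in range(n_iter):
        # spectral angle = arccos of the dot product of unit spectra,
        # so the smallest angle corresponds to the largest cosine
        labels = np.argmax(np.clip(u @ centers.T, -1.0, 1.0), axis=1)
        for j in range(k):
            members = u[labels == j]
            if len(members):
                # cluster mean as the renormalized mean of member unit vectors
                centers[j] = unit(members.mean(axis=0))
    return labels, centers
```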

거리 사상 함수 및 RBF 네트워크의 2단계 알고리즘을 적용한 서류 레이아웃 분할 방법 (A Two-Stage Document Page Segmentation Method using Morphological Distance Map and RBF Network)

  • 신현경
    • 한국정보과학회논문지:소프트웨어및응용
    • /
    • Vol. 35, No. 9
    • /
    • pp.547-553
    • /
    • 2008
  • In this paper, we propose a two-stage document page segmentation method. The first stage is a top-down region extraction that partitions the given image data into rectangular regions using a morphology-based distance map. The preliminary result obtained through the distance map then serves as the input to the second stage, which improves performance. In the second stage, machine-learning theory is applied: an RBF neural network following a statistical model is chosen, and its hidden layer is designed using a data clustering technique that exploits the self-organizing nature of the Kohonen network. We show that a network trained on region data extracted from 300 images improves the preliminary results produced in the first stage.
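As a rough illustration of the two-stage idea above, the sketch below computes a morphological distance map to carve a binarized page into rectangular blocks, then picks RBF hidden-unit centres by clustering region features. K-means stands in for the Kohonen/SOM-based clustering used in the paper, and the threshold and cluster count are invented example values.

```python
from scipy import ndimage
from sklearn.cluster import KMeans

def stage1_regions(ink_mask, dist_thresh=15):
    """Stage 1: split a binarized page (True = ink) into rectangular regions
    using a morphological distance map."""
    # distance from every background pixel to the nearest ink pixel
    dist = ndimage.distance_transform_edt(~ink_mask)
    blocks = dist < dist_thresh                # pixels close to ink form text blocks
    labels, _ = ndimage.label(blocks)
    return ndimage.find_objects(labels)        # list of rectangular (row, col) slices

def stage2_rbf_centres(region_features, n_hidden=10):
    """Stage 2 (sketch): choose RBF hidden-unit centres by clustering region
    features; K-means stands in for the paper's Kohonen/SOM-based clustering."""
    return KMeans(n_clusters=n_hidden, n_init=10).fit(region_features).cluster_centers_
```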

피라미드 영상과 퍼지이론을 이용한 폐부 혈관의 검출에 관한 연구 (A Study on the Detection of Pulmonary Blood Vessel Using Pyramid Images and Fuzzy Theory)

  • 황준현;박광석;민병구
    • 대한의용생체공학회:의공학회지
    • /
    • Vol. 12, No. 2
    • /
    • pp.99-106
    • /
    • 1991
  • For the automatic detection of pulmonary blood vessels, a new algorithm is proposed that exploits the fact that humans recognize patterns in order of their size. The method simulates this human recognition process with pyramid images. For vessel detection using the multilevel image, large and wide vessels are detected at the most compressed level, followed by the detection of small and narrow vessels at the less compressed levels, using the FCM (fuzzy c-means) clustering algorithm, which groups similar data together. Because the proposed algorithm detects blood vessels in order of their size, there is no need to consider the parameter variations and branch points that must be handled in other detection algorithms. The proposed algorithm is therefore well suited to detecting patterns whose size changes successively, such as pulmonary blood vessels.

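The coarse-to-fine detection described above rests on two ingredients: an image pyramid and fuzzy c-means (FCM) clustering. The sketch below implements both in a minimal form; the one-dimensional intensity feature, simple 2x downsampling, and parameter values are assumptions for illustration only.

```python
import numpy as np

def fcm(x, c, m=2.0, n_iter=50, seed=0):
    """Tiny fuzzy c-means on a 1-D feature (e.g. pixel intensity).
    Returns memberships u of shape (c, N) and the c cluster centres."""
    x = np.asarray(x, float).reshape(1, -1)               # (1, N)
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.shape[1]))
    u /= u.sum(axis=0)
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x.T) / um.sum(axis=1, keepdims=True)  # (c, 1)
        d = np.abs(centers - x) + 1e-12                        # (c, N) distances
        u = 1.0 / (d ** (2.0 / (m - 1.0)))                     # fuzzy memberships
        u /= u.sum(axis=0)
    return u, centers

def gaussian_pyramid(img, levels=3):
    """Coarse-to-fine pyramid: large vessels are found at the coarsest level first."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(pyr[-1][::2, ::2])                          # simple 2x downsampling
    return pyr[::-1]                                           # coarsest level first
```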

DeepCleanNet: Training Deep Convolutional Neural Network with Extremely Noisy Labels

  • Olimov, Bekhzod;Kim, Jeonghong
    • 한국멀티미디어학회논문지
    • /
    • Vol. 23, No. 11
    • /
    • pp.1349-1360
    • /
    • 2020
  • In recent years, Convolutional Neural Networks (CNNs) have been successfully applied to different computer vision tasks. Since CNN models are representatives of supervised learning algorithms, they demand a large amount of data to train the classifiers, so obtaining data with correct labels is imperative for attaining state-of-the-art performance. However, labelling datasets is a tedious and expensive process, and real-life datasets therefore often contain incorrect labels. Although the issue of poorly labelled datasets has been studied before, we have noticed that existing methods are very complex and hard to reproduce. In this work we propose DeepCleanNet, a considerably simpler system that achieves competitive results compared to existing methods. We use the K-means clustering algorithm to select data with correct labels and train a deep CNN model on the resulting dataset. The technique achieves competitive results in both the training and validation stages. We conducted experiments using the MNIST database of handwritten digits with 50% corrupted labels and achieved increases of up to 10% and 20% in training and validation set accuracy scores, respectively.
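A plausible reading of the K-means-based selection step described above is sketched below: samples are kept only if their (possibly corrupted) label agrees with the majority label of the cluster they fall into. The selection rule and cluster count are assumptions; the actual DeepCleanNet procedure may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_clean_samples(features, noisy_labels, n_clusters=10):
    """Return a boolean mask of presumably clean samples.

    features     : (N, D) feature array (e.g. flattened or embedded images).
    noisy_labels : (N,) integer array of possibly corrupted labels.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(features)
    keep = np.zeros(len(features), dtype=bool)
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        if len(idx) == 0:
            continue
        # the dominant label within a cluster is taken as its "true" class
        majority = np.bincount(noisy_labels[idx]).argmax()
        keep[idx] = noisy_labels[idx] == majority
    return keep
```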

강인한 특징 추출에 기반한 대상물체 검출 (Target Object Detection Based on Robust Feature Extraction)

  • 장석우;허문행
    • 한국산학기술학회논문지
    • /
    • Vol. 15, No. 12
    • /
    • pp.7302-7308
    • /
    • 2014
  • Accurately detecting only the target object that a user wants in a complex, unconstrained natural environment is an important but very difficult problem in computer vision and image processing. In this paper, we propose a new method that robustly detects the target object in various environments where reflections are present. The proposed method first captures the target object with a stereo camera and then extracts the line and corner features that best represent the object. Next, reflected features that do not actually exist are effectively removed from the captured left and right images using a homographic transformation. Finally, only the real features that remain after removing the reflected ones are clustered, so that the target object alone is robustly detected. Our experimental results show that the proposed algorithm detects the target object more robustly than existing algorithms in natural environments containing reflections.
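To make the pipeline concrete, the sketch below removes features that violate the left-right homography (treated as reflections) and clusters the survivors. Matched feature points are assumed to be given, DBSCAN is used as a generic stand-in for the clustering step, and the thresholds are example values; this is not the authors' code.

```python
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_target(left_pts, right_pts, err_thresh=3.0, eps=25.0):
    """Remove reflected (non-physical) features via the left-right homography,
    then cluster the surviving points to localize the target.

    left_pts, right_pts : (N, 2) float32 arrays of matched feature points.
    """
    H, _ = cv2.findHomography(left_pts, right_pts, cv2.RANSAC, err_thresh)
    proj = cv2.perspectiveTransform(left_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
    # real scene features should roughly obey the homography; reflections do not
    real = np.linalg.norm(proj - right_pts, axis=1) < err_thresh
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(right_pts[real])
    return right_pts[real], labels
```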

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence
    • /
    • Vol. 9, No. 3
    • /
    • pp.232-238
    • /
    • 2020
  • In this paper, we study an aerial-object detection and position-estimation algorithm for the safety of UAVs flying beyond visual line of sight (BVLOS). We use a vision sensor and LiDAR to detect objects: the CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point-cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can degrade in specific situations depending on the characteristics of that sensor, and when the single-sensor detection result is missing or false, the detection accuracy must be complemented. To complement the accuracy of single-sensor detection, we use a Kalman filter and fuse the results of the individual sensors, improving detection accuracy. We estimate the 3D position of the object from its pixel position and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm through simulations using the Gazebo simulator.
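Below is a minimal per-axis constant-velocity Kalman filter that can fuse camera- and LiDAR-derived position measurements by calling update() once per sensor each frame. The state model, time step, and noise values are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2-state (position, velocity) Kalman filter for one axis."""

    def __init__(self, dt=0.05, q=1.0):
        self.x = np.zeros(2)                                   # [position, velocity]
        self.P = np.eye(2) * 10.0                              # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])             # constant-velocity model
        self.Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                               [dt**3 / 2, dt**2]])            # process noise
        self.H = np.array([[1.0, 0.0]])                        # we only measure position

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, r):
        """Fuse one position measurement z with variance r; the camera and the
        LiDAR can each call this with their own measurement noise level."""
        S = self.H @ self.P @ self.H.T + r
        K = self.P @ self.H.T / S
        self.x = self.x + (K * (z - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```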

무인차량 자율주행을 위한 레이다 영상의 정지물체 너비추정 기법 (Width Estimation of Stationary Objects using Radar Image for Autonomous Driving of Unmanned Ground Vehicles)

  • 김성준;양동원;김수진;정영헌
    • 한국군사과학기술학회지
    • /
    • Vol. 18, No. 6
    • /
    • pp.711-720
    • /
    • 2015
  • Recently, many studies of radar systems mounted on ground vehicles for autonomous driving, SLAM (simultaneous localization and mapping), and collision avoidance have been reported. Since an object may occupy several pixels in a close-range radar application, the width of an object can be estimated automatically by various signal-processing techniques. In this paper, we develop an algorithm to estimate obstacle width from radar images. The proposed method consists of five steps: 1) background clutter reduction, 2) local peak pixel detection, 3) region growing, 4) contour extraction, and 5) width calculation. To validate the method, we performed width estimation on real data of two cars acquired with a commercial radar system, the I200 manufactured by Navtech. The results verify that the proposed method can estimate the widths of targets.
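The five-step chain listed above can be sketched roughly as follows; dilation-based growing stands in for the paper's region-growing step, and the thresholds and range-cell size are made-up example values.

```python
import numpy as np
from scipy import ndimage

def estimate_widths(radar_img, clutter_db=10.0, peak_db=20.0, cell_size_m=0.25):
    """Rough sketch of the five-step width-estimation chain on a radar intensity image."""
    img = np.where(radar_img > clutter_db, radar_img, 0.0)                    # 1) clutter reduction
    peaks = (img == ndimage.maximum_filter(img, size=5)) & (img > peak_db)    # 2) local peak pixels
    grown = ndimage.binary_dilation(peaks, iterations=3) & (img > clutter_db) # 3) region growing
    labels, _ = ndimage.label(grown)
    widths = []
    for sl in ndimage.find_objects(labels):                                   # 4) object extent
        cols = sl[1]
        widths.append((cols.stop - cols.start) * cell_size_m)                 # 5) width in metres
    return widths
```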

Cosmological parameter constraints from galaxy-galaxy lensing with the Deep Lens Survey

  • Yoon, Mijin;Jee, Myungkook James
    • 천문학회보
    • /
    • Vol. 42, No. 2
    • /
    • pp.54.3-55
    • /
    • 2017
  • The Deep Lens Survey (DLS), a precursor to the Large Synoptic Survey Telescope (LSST), is a 20 deg² survey carried out with NOAO's Blanco and Mayall telescopes. DLS is unique in its depth, reaching down to ~27th mag in the BVRz bands. This enables a broad redshift baseline and is optimal for investigating the cosmological evolution of large-scale structure. Galaxy-galaxy lensing is a powerful tool for estimating the averaged matter distribution around lens galaxies by measuring the shape distortions of background galaxies. The galaxy-galaxy lensing signal is sensitive not only to galaxy halo properties but also to the cosmological environment on large scales. In this study, we measure galaxy-galaxy lensing and galaxy clustering, which together place strong constraints on the cosmological parameters. We obtain significant galaxy-galaxy lensing signals out to ~20 Mpc while tightly controlling systematics. The B-mode signals are consistent with zero. Our lens-source flip test indicates that only minimal systematic errors are present in the DLS photometric redshifts. Shear calibration is performed using high-fidelity galaxy image simulations. We demonstrate that the overall shape of the galaxy-galaxy lensing signal is well described by a halo model comprising central and non-central halo contributions. Finally, we present our preliminary constraints on the matter density and normalization parameters.

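For readers unfamiliar with the estimator, the sketch below stacks the tangential ellipticity of source galaxies in radial bins around lens galaxies, which is the basic galaxy-galaxy lensing measurement described above. The weighting, photo-z cuts, and shear calibration used in the DLS analysis are omitted, and a flat projected geometry is assumed.

```python
import numpy as np

def tangential_shear_profile(lens_xy, src_xy, e1, e2, r_bins):
    """Average tangential ellipticity of sources in radial bins around lenses.

    lens_xy : (L, 2) lens positions; src_xy : (S, 2) source positions;
    e1, e2  : (S,) source ellipticity components; r_bins : bin edges.
    """
    gt_sum = np.zeros(len(r_bins) - 1)
    n_sum = np.zeros(len(r_bins) - 1)
    for lx, ly in lens_xy:
        dx, dy = src_xy[:, 0] - lx, src_xy[:, 1] - ly
        r = np.hypot(dx, dy)
        phi = np.arctan2(dy, dx)
        gt = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))   # tangential component
        idx = np.digitize(r, r_bins) - 1
        for b in range(len(r_bins) - 1):
            sel = idx == b
            gt_sum[b] += gt[sel].sum()
            n_sum[b] += sel.sum()
    return gt_sum / np.maximum(n_sum, 1)                      # mean profile per bin
```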

Improving data reliability on oligonucleotide microarray

  • Yoon, Yeo-In;Lee, Young-Hak;Park, Jin-Hyun
    • 한국생물정보학회:학술대회논문집
    • /
    • 한국생물정보시스템생물학회 2004년도 The 3rd Annual Conference for The Korean Society for Bioinformatics Association of Asian Societies for Bioinformatics 2004 Symposium
    • /
    • pp.107-116
    • /
    • 2004
  • The advent of microarray technologies provides an opportunity to monitor the expression of tens of thousands of genes simultaneously. Such microarray data can be deteriorated by experimental errors and image artifacts, which generate non-negligible outliers estimated at about 15% of typical microarray data. It is therefore important to detect and correct these faulty probes prior to high-level data analysis such as classification or clustering. In this paper, we propose a systematic procedure for detecting faulty probes and properly correcting them in GeneChip arrays, based on multivariate statistical approaches. Principal component analysis (PCA), one of the most widely used multivariate statistical approaches, is applied to construct a statistical correlation model with 20 pairs of probes for each gene. The faulty probes are identified by inspecting the squared prediction error (SPE) of each probe under the PCA model, and the outlying probes are then reconstructed by an iterative optimization approach that minimizes the SPE. We used public data from the gene chip project on human fibroblast cells. Through this application study, the proposed approach showed good performance for probe correction without removing faulty probes, which is desirable from the viewpoint of making maximum use of the data.

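A compressed sketch of the PCA/SPE idea above: fit a PCA model, flag entries with a large squared prediction error, and replace them with their PCA reconstruction. The single projection step here is a simplification of the paper's iterative SPE-minimizing reconstruction, and the component count and quantile threshold are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def flag_and_correct_probes(X, n_components=3, spe_quantile=0.95):
    """Flag probe intensities with large squared prediction error (SPE) under a
    PCA model and replace them by their PCA reconstruction.

    X : (arrays x probes) intensity matrix.
    """
    pca = PCA(n_components=n_components).fit(X)
    recon = pca.inverse_transform(pca.transform(X))
    spe = (X - recon) ** 2                      # per-entry squared prediction error
    faulty = spe > np.quantile(spe, spe_quantile)
    X_corr = np.where(faulty, recon, X)         # correct only the flagged entries
    return X_corr, faulty
```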

영상처리 기법을 통한 RBFNN 패턴 분류기 기반 개선된 지문인식 시스템 설계 (Design of Fingerprints Identification Based on RBFNN Using Image Processing Techniques)

  • 배종수;오성권;김현기
    • 전기학회논문지
    • /
    • Vol. 65, No. 6
    • /
    • pp.1060-1069
    • /
    • 2016
  • In this paper, we introduce a fingerprint recognition system based on a radial basis function neural network (RBFNN). Fingerprints are classified into four types (whorl, arch, right loop, and left loop). Preprocessing methods such as the fast Fourier transform, normalization, ridge-direction calculation, Gabor filtering, binarization, and a rotation algorithm are used to extract features from the fingerprint images, and these features are then used as inputs to the network. The RBFNN uses fuzzy C-means (FCM) clustering in the hidden layer, and polynomial functions such as linear, quadratic, and modified quadratic are defined as the connection weights of the network. A particle swarm optimization (PSO) algorithm optimizes a number of essential parameters needed to improve the accuracy of the RBFNN, including the number of clusters and the fuzzification coefficient used in the FCM algorithm and the polynomial orders of the network. The performance of the proposed fingerprint recognition system is evaluated on fingerprint data sets collected with the Anguli program.
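The sketch below shows the overall RBFNN structure described above in a reduced form: hidden-unit centres come from clustering (K-means as a stand-in for FCM), Gaussian activations replace the fuzzy memberships, and a least-squares linear readout replaces the PSO-tuned polynomial weights. Here y is assumed to be a one-hot label matrix for the four fingerprint classes, and the width parameter is an example value.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbfnn(X, y, n_hidden=8, width=1.0):
    """Fit a simple RBF network: clustered centres, Gaussian hidden units,
    linear output weights solved by least squares."""
    centers = KMeans(n_clusters=n_hidden, n_init=10).fit(X).cluster_centers_
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-(d / width) ** 2)                   # Gaussian hidden activations
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # linear readout weights
    return centers, W

def predict_rbfnn(X, centers, W, width=1.0):
    """Class scores for new samples; argmax over columns gives the predicted type."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2) @ W
```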