• Title/Summary/Keyword: Unsupervised


Unsupervised Image Classification for Large Remotely-sensed Imagery using Region-growing Segmentation

  • Lee, Sang-Hoon
    • Proceedings of the KSRS Conference / v.1 / pp.188-190 / 2006
  • A multistage hierarchical clustering technique, an unsupervised method, is suggested in this paper for classifying large remotely-sensed imagery. The multistage algorithm consists of two stages. The local segmentor of the first stage performs region-growing segmentation by employing the hierarchical clustering procedure of CN-chain, with the restriction that pixels in a cluster must be spatially contiguous (see the region-growing sketch after this record). This stage uses a sliding-window strategy with boundary blocking to alleviate the memory problem posed by enormous data volumes. The global segmentor of the second stage has no spatial constraints on merging and classifies the segments resulting from the previous stage. The experimental results show that the proposed approach efficiently performs segmentation for images of very large size and an extensive number of bands.

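The paper's own pipeline (CN-chain clustering, sliding windows with boundary blocking) is not detailed in the abstract, but the first-stage idea it builds on, growing regions by merging spectrally similar and spatially contiguous pixels, can be sketched roughly as follows. The single-band input, 4-neighbour contiguity rule, and fixed threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import deque

def region_grow(image, threshold=10.0):
    """Label spatially contiguous regions of spectrally similar pixels.

    A minimal single-band sketch: a pixel joins a region when its value is
    within `threshold` of the region's running mean and it touches the region
    (4-neighbour contiguity). It illustrates only the contiguity constraint,
    not the paper's CN-chain / sliding-window algorithm.
    """
    rows, cols = image.shape
    labels = np.full((rows, cols), -1, dtype=int)
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] != -1:
                continue
            # Start a new region from this unlabelled seed pixel.
            queue = deque([(r, c)])
            labels[r, c] = next_label
            total, count = float(image[r, c]), 1
            while queue:
                i, j = queue.popleft()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and labels[ni, nj] == -1:
                        if abs(image[ni, nj] - total / count) <= threshold:
                            labels[ni, nj] = next_label
                            total += float(image[ni, nj])
                            count += 1
                            queue.append((ni, nj))
            next_label += 1
    return labels

# Example: a tiny synthetic image with two flat areas yields two region labels.
img = np.array([[1, 1, 50], [2, 2, 52], [1, 3, 51]], dtype=float)
print(region_grow(img, threshold=10.0))
```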

Study of an algorithm for intelligent digital protective relaying (지능형 디지탈 보호계전 알고리즘 연구)

  • 신현익;이성환;강신준;김정한;김상철
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1996.10b / pp.343-346 / 1996
  • A new method for on-line induction motor fault detection is presented in this paper. The system utilizes an unsupervised-learning clustering algorithm, Dignet, proposed by Thomopoulos et al., to learn the spectral characteristics of a good motor operating on-line. After a sufficient training period, Dignet signals a one-phase ground fault, or a potential failure condition, when a new cluster is formed and persists for some time (see the sketch below). Since a fault condition is found by comparison to a prior condition of the machine, on-line failure prediction is possible with this system without requiring information on the motor or load characteristics.

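Dignet's internal mechanics are not given in the abstract, so the sketch below substitutes a generic leader-follower online clustering rule to illustrate the monitoring idea: spectral feature vectors from a healthy motor settle into known clusters, and a new cluster that appears later and keeps receiving samples is flagged as a potential fault. The radius, persistence count, and synthetic features are assumptions for illustration only.

```python
import numpy as np

class OnlineFaultMonitor:
    """Leader-follower online clustering as a stand-in for Dignet.

    A feature vector (e.g. current-spectrum magnitudes) within `radius` of an
    existing cluster centre updates that centre; otherwise a new cluster is
    opened. A cluster first seen after the training phase that keeps receiving
    samples is reported as a potential fault.
    """

    def __init__(self, radius=1.0, persistence=20):
        self.radius = radius
        self.persistence = persistence
        self.centres, self.counts, self.known = [], [], []

    def observe(self, x, training=False):
        x = np.asarray(x, dtype=float)
        if self.centres:
            d = [np.linalg.norm(x - c) for c in self.centres]
            k = int(np.argmin(d))
            if d[k] <= self.radius:
                self.counts[k] += 1
                self.centres[k] += (x - self.centres[k]) / self.counts[k]
                if training:
                    self.known[k] = True
                # An unfamiliar cluster that persists signals a possible fault.
                return (not self.known[k]) and self.counts[k] >= self.persistence
        self.centres.append(x.copy())
        self.counts.append(1)
        self.known.append(training)
        return False

monitor = OnlineFaultMonitor(radius=0.5, persistence=5)
rng = np.random.default_rng(0)
for _ in range(100):                       # healthy training data
    monitor.observe(rng.normal(0.0, 0.1, size=8), training=True)
for _ in range(10):                        # shifted spectrum after a fault
    if monitor.observe(rng.normal(2.0, 0.1, size=8)):
        print("potential fault: persistent unfamiliar cluster")
        break
```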

Unsupervised Cluster Estimation using Subtractive HyperBox Algorithm (차감 HyperBox 알고리듬을 이용한 Unsupervised 클러스터 추정)

  • Moon, Seong-Hwan;Choi, Byeong-Geol;Kang, Hun
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1997.10a / pp.87-90 / 1997
  • Subtractive clustering, a variant of the Mountain Method, is an effective algorithm for estimating clusters because its computation is simple and, unlike conventional clustering methods, it does not require the number of initial cluster centers to be specified. In addition, the number of clusters can be varied according to the value of the parameter that determines cluster spacing. However, this parameter may cause multiple cluster centers to arise within the same group (class). In this paper, the Subtractive HyperBox algorithm is used to reduce the influence of this parameter: by determining the boundary of the group to which each generated cluster center belongs, only one cluster is produced per group. After sequential clustering, the results are compared with those of the subtractive clustering algorithm (the baseline subtractive step is sketched below).

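The HyperBox extension is the paper's contribution and is not reproduced here; the baseline it modifies, Chiu-style subtractive clustering, is standard and is sketched below. The neighbourhood radii and the stopping ratio are commonly quoted defaults, not values from the paper.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, eps=0.15):
    """Chiu-style subtractive clustering on data roughly scaled to [0, 1].

    Each point receives a potential reflecting the density of its neighbours;
    the highest-potential point becomes a cluster centre, its influence is
    subtracted from all points, and the process repeats until the remaining
    peak potential drops below `eps` times the first peak. No number of
    clusters is fixed in advance: it follows from the neighbourhood radius.
    """
    X = np.asarray(X, dtype=float)
    rb = rb if rb is not None else 1.5 * ra
    alpha, beta = 4.0 / ra**2, 4.0 / rb**2
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    potential = np.exp(-alpha * sq).sum(axis=1)

    centres = []
    first_peak = potential.max()
    while True:
        k = int(np.argmax(potential))
        if potential[k] < eps * first_peak:
            break
        centres.append(X[k])
        # Subtract the chosen centre's influence so nearby points lose potential.
        potential = potential - potential[k] * np.exp(-beta * sq[k])
    return np.array(centres)

# Two well-separated blobs should yield roughly two centres.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.03, (30, 2)), rng.normal(0.8, 0.03, (30, 2))])
print(subtractive_clustering(data, ra=0.3))
```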

A Neural Network for Concept Learning: Recognitron (개념 학습에 의한 신경 회로망 컴퓨터)

  • Lee, Ki-Han;Whang, Hee-Yoong;Kim, Choon-Suk
    • Proceedings of the KIEE Conference / 1989.07a / pp.495-499 / 1989
  • A concept is the set of selected neurons in a stable state of a neural network. The Recognitron uses a parallel feedback structure to support concept learning. A number of clusters can exist in response to a given input, each of which makes up a selective neuron. There are supervised and unsupervised learning methods for concept learning; in this paper, unsupervised learning is chosen. Also, a new notion called relaxational learning is introduced to stop runaway weights (a standard alternative remedy is sketched below for comparison).

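The abstract does not spell out relaxational learning, so no attempt is made to reproduce it. For comparison only, the textbook remedy for runaway weights in unsupervised Hebbian learning is Oja's rule, which adds a decay term that keeps the weight norm bounded; a minimal sketch:

```python
import numpy as np

def oja_update(w, x, eta=0.01):
    """One Oja's-rule step: Hebbian growth plus a decay term that keeps the
    weight vector bounded (its norm converges towards 1)."""
    y = float(w @ x)
    return w + eta * y * (x - y * w)

rng = np.random.default_rng(0)
w = rng.normal(size=4)
for _ in range(2000):
    x = rng.normal(size=4) * np.array([3.0, 1.0, 0.5, 0.2])  # anisotropic data
    w = oja_update(w, x)
print(np.linalg.norm(w))   # stays near 1 instead of diverging
```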

Neural Learning Algorithms for Independent Component Analysis

  • Choi, Seung-Jin
    • Journal of IKEEE / v.2 no.1 s.2 / pp.24-33 / 1998
  • Independent component analysis (ICA) is a new statistical method for extracting statistically independent components from their linear instantaneous mixtures, which are generated by an unknown linear generative model. The recognition model is learned in an unsupervised manner so that the signals recovered by the recognition model become possibly scaled estimates of the original source signals. This paper addresses the neural learning approach to ICA. As recognition models, a linear feedforward network and a linear feedback network are considered. Associated learning algorithms for both networks are derived from maximum likelihood and information-theoretic approaches, using the natural Riemannian gradient [1] (the feedforward update is sketched below). Theoretical results are confirmed by extensive computer simulations.

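For the feedforward recognition model y = W x, the natural-gradient maximum-likelihood update takes the well-known form W <- W + eta (I - phi(y) y^T) W. The sketch below applies that standard rule with a tanh score function for super-Gaussian sources; the mixing matrix, learning rate, and nonlinearity are illustrative choices rather than the paper's settings.

```python
import numpy as np

def natural_gradient_ica(X, eta=0.01, epochs=50, seed=0):
    """Separate linearly mixed signals with the natural-gradient ICA rule
    W <- W + eta * (I - phi(y) y^T) W, where y = W x and phi = tanh
    (a common score function for super-Gaussian sources)."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    W = np.eye(n) + 0.01 * rng.normal(size=(n, n))
    I = np.eye(n)
    for _ in range(epochs):
        for x in X.T:                      # one online update per sample
            y = W @ x
            W += eta * (I - np.outer(np.tanh(y), y)) @ W
    return W

# Demo: two super-Gaussian (Laplacian) sources mixed by an unknown matrix.
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 2000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # illustrative mixing matrix (unknown to ICA)
W = natural_gradient_ica(A @ S, eta=0.005, epochs=20)
print(np.round(W @ A, 2))                  # close to a scaled permutation matrix
```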

Unsupervised Classification of Multiple Attributes via Autoassociative Neural Network

  • Kamioka, Reina;Kurata, Kouji;Hiraoka, Kazuyuki;Mishima, Taketoshi
    • Proceedings of the IEEK Conference / 2002.07b / pp.798-801 / 2002
  • This paper proposes unsupervised classification of multiple attributes via a five-layer autoassociative neural network with a bottleneck layer. In conventional methods, high-dimensional data are compressed into low-dimensional data at the bottleneck layer and feature extraction is then performed (Fig. 1). In contrast, in the proposed method, analog data are compressed into digital data. Furthermore, the bottleneck layer is divided into two segments so that each attribute, which is a discrete value, is extracted in the corresponding segment (Fig. 2) (see the structural sketch below).

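A rough structural sketch of the architecture described above: a five-layer autoassociative (encoder, bottleneck, decoder) network trained to reproduce its input, with the bottleneck split into two segments, one per attribute. Layer widths, activations, and the use of PyTorch are assumptions for illustration; the paper's analog-to-digital compression of the code is not reproduced.

```python
import torch
import torch.nn as nn

class SplitBottleneckAutoencoder(nn.Module):
    """Five-layer autoassociative network: input -> hidden -> bottleneck ->
    hidden -> output, trained to reproduce its input. The bottleneck is split
    into two segments so that each can specialise on one attribute."""

    def __init__(self, n_in=16, n_hidden=32, seg1=2, seg2=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh(),
                                     nn.Linear(n_hidden, seg1 + seg2), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(seg1 + seg2, n_hidden), nn.Tanh(),
                                     nn.Linear(n_hidden, n_in))
        self.seg1 = seg1

    def forward(self, x):
        code = self.encoder(x)
        attr1, attr2 = code[:, :self.seg1], code[:, self.seg1:]  # two attribute codes
        return self.decoder(code), attr1, attr2

# Unsupervised training step: reconstruct the input, then read attributes
# off the two bottleneck segments.
model = SplitBottleneckAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 16)                      # stand-in data batch
recon, a1, a2 = model(x)
opt.zero_grad()
loss = nn.functional.mse_loss(recon, x)
loss.backward()
opt.step()
```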

Language Model Adaptation Based on Topic Probability of Latent Dirichlet Allocation

  • Jeon, Hyung-Bae;Lee, Soo-Young
    • ETRI Journal / v.38 no.3 / pp.487-493 / 2016
  • Two new methods are proposed for the unsupervised adaptation of a language model (LM) with a single sentence for automatic transcription tasks. In the training phase, training documents are clustered by latent Dirichlet allocation (LDA), and a domain-specific LM is then trained for each cluster. In the test phase, an adapted LM is presented as a linear mixture of the trained domain-specific LMs (see the mixing sketch below). Unlike previous adaptation methods, the proposed methods fully utilize the trained LDA model to estimate the weight values assigned to the domain-specific LMs; the clustering and weight-estimation algorithms of the trained LDA model are therefore reliable. In continuous speech recognition benchmark tests, the proposed methods outperform other unsupervised LM adaptation methods based on latent semantic analysis, non-negative matrix factorization, and LDA with n-gram counting.
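
The adaptation step itself is a weighted sum: the probability of a word is mixed across the domain-specific LMs, with the LDA topic posterior of the sentence serving as the mixture weights. A minimal sketch of that mixing, with toy unigram models and hand-set weights standing in for the trained n-gram LMs and the real LDA inference:

```python
import numpy as np

def adapted_lm_prob(word, history, domain_lms, topic_weights):
    """Linear mixture of domain-specific LMs: P(w|h) = sum_k lambda_k * P_k(w|h).

    `topic_weights` plays the role of the LDA topic posterior for the current
    sentence; `domain_lms` are per-cluster models exposing prob(word, history).
    Both are stand-ins for the trained components described above."""
    probs = np.array([lm.prob(word, history) for lm in domain_lms])
    lam = np.asarray(topic_weights, dtype=float)
    lam = lam / lam.sum()                      # weights must form a distribution
    return float(lam @ probs)

class UnigramLM:
    """Toy domain LM: relative word frequencies, ignoring the history."""
    def __init__(self, counts):
        total = sum(counts.values())
        self.p = {w: c / total for w, c in counts.items()}
    def prob(self, word, history=None):
        return self.p.get(word, 1e-6)          # tiny floor for unseen words

sports = UnigramLM({"game": 5, "score": 3, "market": 1})
finance = UnigramLM({"market": 6, "score": 1, "game": 1})
# Suppose LDA assigns the sentence 80% to the finance topic, 20% to sports.
print(adapted_lm_prob("market", None, [sports, finance], [0.2, 0.8]))
```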

Study on Application of Neural Network for Unsupervised Training of Remote Sensing Data (신경망을 이용한 원격탐사자료의 군집화 기법 연구)

  • 김광은;이태섭;채효석
    • Spatial Information Research / v.2 no.2 / pp.175-188 / 1994
  • A competitive learning network is proposed as an unsupervised training method for remote sensing data, and its performance and computational requirements are compared with conventional clustering techniques such as Sequential and K-Means clustering (a minimal winner-take-all sketch follows below). An airborne remote sensing data set was used to study the performance of these classifiers. The proposed algorithm required a little more computational time than the conventional techniques; however, the performance of the competitive learning network was found to be slightly better than that of the Sequential and K-Means clustering techniques.

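Winner-take-all competitive learning in its simplest form moves only the weight vector closest to each input towards that input, and the final weight vectors serve as cluster centres. The sketch below uses that standard rule; the number of units, learning-rate schedule, and synthetic data are illustrative assumptions.

```python
import numpy as np

def competitive_learning(X, n_units=3, eta=0.1, epochs=20, seed=0):
    """Winner-take-all competitive learning: for each sample, the closest
    weight vector (the winner) is moved towards the sample while all other
    units stay unchanged. The final weight vectors act as cluster centres."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    for epoch in range(epochs):
        lr = eta * (1.0 - epoch / epochs)          # simple decaying rate
        for x in X[rng.permutation(len(X))]:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            W[winner] += lr * (x - W[winner])
    return W

# Assign each sample to its nearest learned centre to obtain cluster labels.
rng = np.random.default_rng(2)
data = np.vstack([rng.normal(m, 0.2, (50, 4)) for m in (0.0, 1.0, 2.0)])
centres = competitive_learning(data, n_units=3)
labels = np.argmin(np.linalg.norm(data[:, None, :] - centres[None], axis=2), axis=1)
print(np.bincount(labels))
```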

Automatic Categorization of Clusters in Unsupervised Classification

  • Jeon, Dong-Keun
    • The Journal of the Acoustical Society of Korea / v.15 no.1E / pp.29-33 / 1996
  • Categorization of clusters is necessary when unsupervised classification is used for remote sensing image classification, and it is desirable that this categorization be performed automatically, because manual categorization is highly time-consuming. In this paper, several automatic categorization methods are proposed and evaluated: (a) the maximum number method, which assigns the target cluster to the category occupying the largest area of that cluster; (b) the maximum percentage method, which assigns the target cluster to the category showing the maximum percentage within the category in that cluster; (c) the minimum distance method, which assigns the target cluster to the category at minimum distance from that cluster; and (d) the element ratio matching method, which assigns local regions to the category with the most similar element ratio for that region. The experiments confirmed that the result of the minimum distance method was almost the same as that produced by a human operator (the minimum distance rule is sketched below).

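Of the four rules, the minimum distance method agreed best with manual categorization. A minimal sketch of that rule, assuming each cluster and each reference category is summarised by a mean feature vector (the example signatures are made up):

```python
import numpy as np

def categorize_clusters(cluster_means, category_means):
    """Minimum distance method: each unsupervised cluster is assigned to the
    category whose reference mean vector is closest to the cluster mean."""
    cluster_means = np.asarray(cluster_means, dtype=float)
    category_means = np.asarray(category_means, dtype=float)
    d = np.linalg.norm(cluster_means[:, None, :] - category_means[None, :, :], axis=2)
    return d.argmin(axis=1)                  # category index for every cluster

# Three clusters against two known categories (e.g. water vs vegetation signatures).
clusters = [[0.1, 0.2], [0.8, 0.9], [0.15, 0.25]]
categories = [[0.0, 0.2], [0.9, 0.9]]
print(categorize_clusters(clusters, categories))   # -> [0 1 0]
```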

Performance of Pseudomorpheme-Based Speech Recognition Units Obtained by Unsupervised Segmentation and Merging (비교사 분할 및 병합으로 구한 의사형태소 음성인식 단위의 성능)

  • Bang, Jeong-Uk;Kwon, Oh-Wook
    • Phonetics and Speech Sciences / v.6 no.3 / pp.155-164 / 2014
  • This paper proposes a new method to determine the recognition units for large vocabulary continuous speech recognition (LVCSR) in Korean by applying unsupervised segmentation and merging. In the proposed method, a text sentence is segmented into morphemes and position information is added to the morphemes. Submorpheme units are then obtained by splitting the morpheme units through maximization of posterior probability terms, which are computed from the morpheme frequency distribution, the morpheme length distribution, and the morpheme frequency-of-frequency distribution. Finally, the recognition units are obtained by sequentially merging the submorpheme pair with the highest frequency (see the pair-merging sketch below). Computer experiments are conducted using a Korean LVCSR system with a 100k-word vocabulary and a trigram language model obtained from a 300-million-eojeol (word phrase) corpus. The proposed method is shown to reduce the out-of-vocabulary rate to 1.8% and to reduce the syllable error rate by a relative 14.0%.
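
The merging stage described above, sequentially fusing the most frequent adjacent submorpheme pair, resembles frequency-driven pair merging as used in byte-pair encoding. A rough sketch under that reading, on a toy corpus of romanised units; the splitting stage and the posterior-probability criterion are not reproduced:

```python
from collections import Counter

def merge_most_frequent_pairs(corpus, n_merges=10):
    """Repeatedly fuse the most frequent adjacent unit pair into a single
    recognition unit, in the spirit of the merging stage described above."""
    corpus = [list(sent) for sent in corpus]
    for _ in range(n_merges):
        pairs = Counter()
        for sent in corpus:
            pairs.update(zip(sent, sent[1:]))
        if not pairs:
            break
        (a, b), freq = pairs.most_common(1)[0]
        if freq < 2:                           # nothing left worth merging
            break
        merged = []
        for sent in corpus:
            out, i = [], 0
            while i < len(sent):
                if i + 1 < len(sent) and sent[i] == a and sent[i + 1] == b:
                    out.append(a + b)          # fuse the pair into one unit
                    i += 2
                else:
                    out.append(sent[i])
                    i += 1
            merged.append(out)
        corpus = merged
    return corpus

# Toy submorpheme sequences; frequent pairs such as ("ha", "da") get fused.
toy = [["na", "nun", "ha", "da"], ["ha", "da", "ga"], ["na", "nun", "ga", "da"]]
print(merge_most_frequent_pairs(toy, n_merges=3))
```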