• Title/Summary/Keyword: subspace clustering

Isolated Word Recognition Using k-clustering Subspace Method and Discriminant Common Vector (k-clustering 부공간 기법과 판별 공통벡터를 이용한 고립단어 인식)

  • Nam, Myung-Woo
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.42 no.1
    • /
    • pp.13-20
    • /
    • 2005
  • In this paper, Korean isolated words are recognized using the common vector extraction method (CVEM) proposed by M. Bilginer et al. CVEM extracts the common properties of training voice signals easily, requires no complex computation, and achieves high recognition accuracy. However, CVEM has two shortcomings: it cannot handle a large number of training voices, and the extracted common vectors carry no discriminant information. Obtaining optimal common vectors for a given voice class requires training on a wide variety of voices, so CVEM cannot maintain consistently high recognition accuracy, and the absence of discriminant information among the common vectors can become a source of critical errors. To address these problems and improve the recognition rate, a k-clustering subspace method and a discriminant common vector extraction method (DCVEM) are proposed. Various experiments on a voice signal database built by ETRI confirm the validity of the proposed methods: the results show improved performance, and the proposed methods resolve the problems of CVEM without additional computational burden.
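
The common vector idea underlying this entry can be sketched numerically: for one word class, the common vector is the component of any training vector orthogonal to the subspace spanned by the class's difference vectors. A minimal sketch assuming generic feature vectors, not the paper's speech features or its k-clustering and discriminant extensions:

```python
import numpy as np

def common_vector(class_vectors):
    """Common Vector Approach (CVA) sketch: the common vector of a class is the
    component of any training vector orthogonal to the class's difference subspace."""
    A = np.asarray(class_vectors, dtype=float)   # shape (m, n) with m < n
    diffs = A[1:] - A[0]                         # difference vectors b_i = a_i - a_1
    U, s, _ = np.linalg.svd(diffs.T, full_matrices=False)
    B = U[:, s > 1e-10]                          # orthonormal basis of the difference subspace
    return A[0] - B @ (B.T @ A[0])               # remove the within-class variation

# Toy usage: four feature vectors of dimension 10 for one word class
rng = np.random.default_rng(0)
shared = rng.normal(size=10)
samples = [shared + 0.1 * rng.normal(size=10) for _ in range(4)]
print(common_vector(samples))
```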

A Big Data Analysis by Between-Cluster Information using k-Modes Clustering Algorithm (k-Modes 분할 알고리즘에 의한 군집의 상관정보 기반 빅데이터 분석)

  • Park, In-Kyoo
    • Journal of Digital Convergence
    • /
    • v.13 no.11
    • /
    • pp.157-164
    • /
    • 2015
  • This paper describes subspace clustering of categorical data for convergence and integration. Because conventional evaluation measures are designed for numerical data, they face limitations on categorical data arising from the absence of ordering, high dimensionality, and sparse frequencies. Hence, a conditional entropy measure is proposed to approximate the cohesion among attributes within each cluster. We also propose a new objective function that reflects an optimal clustering, minimizing within-cluster dispersion while enhancing between-cluster separation. We performed experiments on five real-world datasets, comparing the proposed algorithm with four competing algorithms using three evaluation metrics: accuracy, F-measure, and adjusted Rand index. The experiments show that the proposed algorithm outperforms the compared algorithms on all considered metrics.
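
For background on the entry above, here is a minimal sketch of the standard k-Modes step (matching dissimilarity plus per-attribute mode update) on categorical data; the paper's conditional-entropy cohesion measure and between-cluster objective are not reproduced here:

```python
import numpy as np

def k_modes(X, k, n_iter=10, seed=0):
    """Plain k-Modes: matching (Hamming) dissimilarity and per-attribute mode update."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X)
    modes = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each object to the mode with the fewest mismatching attributes
        dist = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = dist.argmin(axis=1)
        # Update each mode attribute-wise to the most frequent category of its members
        for c in range(k):
            members = X[labels == c]
            if len(members):
                modes[c] = [max(set(col), key=list(col).count) for col in members.T]
    return labels, modes

# Toy categorical data: six objects described by three categorical attributes
X = np.array([["a", "x", "p"], ["a", "x", "q"], ["a", "y", "p"],
              ["b", "z", "r"], ["b", "z", "r"], ["b", "y", "r"]])
labels, modes = k_modes(X, k=2)
print(labels)
print(modes)
```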

A Short Note on Empirical Penalty Term Study of BIC in K-means Clustering Inverse Regression

  • Ahn, Ji-Hyun;Yoo, Jae-Keun
    • Communications for Statistical Applications and Methods
    • /
    • v.18 no.3
    • /
    • pp.267-275
    • /
    • 2011
  • In recent studies, a Bayesian information criterion (BIC) has been proposed to determine the structural dimension of the central subspace through sliced inverse regression (SIR) with high-dimensional predictors. The BIC may also be useful in K-means clustering inverse regression (KIR) with high-dimensional predictors. However, directly applying the BIC to KIR may be problematic, because the slicing scheme in SIR is not the same as that of KIR. In this paper, we present an empirical study of penalty terms for the BIC in KIR to identify the most appropriate one. Numerical studies and a real data analysis are presented.
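
To make the dimension-selection problem concrete, here is a hedged sketch of K-means clustering inverse regression with a generic BIC-type criterion. The fit term and the `penalty` function are placeholders standing in for the empirical penalty terms the paper studies, not the authors' final choice:

```python
import numpy as np
from sklearn.cluster import KMeans

def kir_dimension(X, y, n_clusters=5, penalty=lambda n, p, d: d * np.log(n)):
    """Cluster the response with k-means, form slice means of standardized predictors,
    and pick the structural dimension d maximizing a fit-minus-penalty criterion."""
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L).T          # standardized predictors
    slices = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        y.reshape(-1, 1) if y.ndim == 1 else y)
    # Weighted covariance of slice means (the kernel matrix of inverse regression)
    M = np.zeros((p, p))
    for s in range(n_clusters):
        w = np.mean(slices == s)
        m = Z[slices == s].mean(0)
        M += w * np.outer(m, m)
    eigvals = np.sort(np.linalg.eigvalsh(M))[::-1]
    # Generic BIC-type criterion (assumed form): leading-eigenvalue "signal" minus a penalty
    crit = [n * eigvals[:d].sum() - penalty(n, p, d) for d in range(1, p + 1)]
    return int(np.argmax(crit)) + 1

# Toy single-index model: y depends on X only through one direction
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = np.sin(X[:, 0] + X[:, 1]) + 0.1 * rng.normal(size=300)
print(kir_dimension(X, y))
```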

System identification of a super high-rise building via a stochastic subspace approach

  • Faravelli, Lucia;Ubertini, Filippo;Fuggini, Clemente
    • Smart Structures and Systems
    • /
    • v.7 no.2
    • /
    • pp.133-152
    • /
    • 2011
  • System identification is a fundamental step towards the application of structural health monitoring and damage detection techniques. In this respect, the development of advanced identification strategies is a priority for obtaining reliable and repeatable baseline modal parameters of an undamaged structure, to be adopted as references for future structural health assessments. The paper presents the identification of the modal parameters of the Guangzhou New Television Tower, China, using a data-driven stochastic subspace identification (SSI-data) approach complemented with an appropriate automatic mode selection strategy which proved to be successful in previous literature studies. This well-known approach is based on a clustering technique that is adopted to discriminate structural modes from spurious noise modes. The method is applied to the acceleration measurements made available within task I of the ANCRiSST benchmark problem, which cover 24 hours of continuous monitoring of the structural response under ambient excitation. These records are subdivided into a convenient number of data sets, and the variability of the modal parameter estimates with ambient temperature and mean wind velocity is pointed out. Both 10-minute and 1-hour long records are considered for this purpose. A comparison with finite element model predictions is finally carried out, using the structural matrices provided within the benchmark, in order to check that all the structural modes contained in the considered frequency interval are effectively identified via SSI-data.
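
The automatic mode selection step mentioned above can be illustrated with a small sketch: pole estimates obtained at different model orders are grouped by hierarchical clustering of their frequency-damping distances, and small clusters are discarded as spurious. This is a generic illustration with toy data, not the benchmark measurements or the exact distance measure used in the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_physical_modes(freqs, damps, dist_cut=0.02, min_size=5):
    """Cluster pole estimates (frequency in Hz, damping ratio) collected over many
    model orders; clusters with enough members are taken as physical modes."""
    X = np.column_stack([freqs, damps])
    # Relative frequency dominates the distance; damping differences contribute weakly
    scaled = np.column_stack([freqs / freqs.mean(), damps])
    Z = linkage(scaled, method="average")
    labels = fcluster(Z, t=dist_cut, criterion="distance")
    modes = []
    for lab in np.unique(labels):
        members = X[labels == lab]
        if len(members) >= min_size:          # spurious poles rarely repeat across orders
            modes.append(members.mean(axis=0))
    return np.array(modes)

# Toy pole estimates: two physical modes plus scattered numerical poles
rng = np.random.default_rng(2)
f = np.concatenate([0.11 + 0.001 * rng.normal(size=20),
                    0.37 + 0.002 * rng.normal(size=20),
                    rng.uniform(0.05, 0.6, size=15)])
z = np.concatenate([0.01 + 0.001 * rng.normal(size=40), rng.uniform(0, 0.1, size=15)])
print(select_physical_modes(f, z))
```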

Mining of Subspace Contrasting Sample Groups in Microarray Data (마이크로어레이 데이터의 부공간 대조 샘플집단 마이닝)

  • Lee, Kyung-Mi;Lee, Keon-Myung
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.5
    • /
    • pp.569-574
    • /
    • 2011
  • In this paper, we introduce the subspace contrasting group identification problem and propose an algorithm to solve it. To identify contrasting groups, the algorithm first determines two groups whose attribute values fall within one of the contrasting ranges specified by the analyst, and then searches for contrasting groups while increasing the dimension of the subspaces with an association rule mining strategy. Because the dimension of microarray data is likely to be in the tens of thousands, it is burdensome to find all contrasting groups over all possible subspaces by query generation. The proposed method is therefore very useful in that it finds those contrasting groups without the analyst's involvement.
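
A hedged sketch of the bottom-up search this entry describes: analyst-specified "low" and "high" ranges define two candidate groups per attribute, and subspaces are grown Apriori-style, keeping only those whose contrasting groups stay above a minimum size. The attribute names, ranges, and thresholds below are hypothetical:

```python
from itertools import combinations
import numpy as np
import pandas as pd

def contrasting_subspaces(df, low, high, min_size=3, max_dim=3):
    """Find attribute subspaces in which the 'low' group and the 'high' group
    both contain at least min_size samples (Apriori-style growth)."""
    def group_sizes(attrs):
        lo = np.all([df[a].between(*low[a]) for a in attrs], axis=0)
        hi = np.all([df[a].between(*high[a]) for a in attrs], axis=0)
        return lo.sum(), hi.sum()

    frequent = [(a,) for a in df.columns if min(group_sizes((a,))) >= min_size]
    result = list(frequent)
    for dim in range(2, max_dim + 1):
        candidates = {tuple(sorted(set(x) | set(y)))
                      for x, y in combinations(frequent, 2)
                      if len(set(x) | set(y)) == dim}
        frequent = [c for c in candidates if min(group_sizes(c)) >= min_size]
        result.extend(frequent)
    return result

# Toy expression matrix (samples x genes) with hypothetical contrasting ranges
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.uniform(0, 10, size=(30, 4)), columns=list("ABCD"))
low = {g: (0, 3) for g in df.columns}     # "under-expressed" range (assumed)
high = {g: (7, 10) for g in df.columns}   # "over-expressed" range (assumed)
print(contrasting_subspaces(df, low, high))
```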

High-Dimensional Clustering Technique using Incremental Projection (점진적 프로젝션을 이용한 고차원 클러스터링 기법)

  • Lee, Hye-Myung;Park, Young-Bae
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.568-576
    • /
    • 2001
  • Most clustering algorithms tend to degenerate rapidly in high-dimensional spaces. Moreover, high-dimensional data often contain a significant amount of noise, which causes additional ineffectiveness of the algorithms. It is therefore necessary to develop algorithms adapted to the structure and characteristics of high-dimensional data. In this paper, we propose a clustering algorithm, CLIP, that uses incremental projection. CLIP is designed to overcome the efficiency and effectiveness problems of high-dimensional clustering: it is based on clustering in each one-dimensional subspace, and it uses incremental projection to recover high-dimensional clusters while significantly reducing the computational cost. To evaluate the performance of CLIP, we demonstrate its efficiency and effectiveness through a series of experiments on synthetic data sets.
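
The idea behind CLIP, clustering each one-dimensional projection and intersecting the results to recover high-dimensional clusters, can be sketched roughly as follows; the 1-D clustering here uses a simple histogram-gap rule rather than the paper's actual procedure:

```python
import numpy as np

def one_dim_clusters(x, bins=20):
    """Label dense 1-D regions: consecutive non-empty histogram bins form one region."""
    hist, edges = np.histogram(x, bins=bins)
    region_id = np.cumsum(hist == 0)                    # increments at every empty bin
    bin_idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    _, labels = np.unique(region_id[bin_idx], return_inverse=True)
    return labels

def clip_like(X, bins=20):
    """Cluster each dimension separately, then intersect the 1-D memberships so that a
    high-dimensional cluster corresponds to a combination of dense 1-D intervals."""
    combined = one_dim_clusters(X[:, 0], bins)
    for j in range(1, X.shape[1]):
        lj = one_dim_clusters(X[:, j], bins)
        _, combined = np.unique(combined * (lj.max() + 1) + lj, return_inverse=True)
    return combined

# Toy data: two well-separated blobs in 3-D
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, size=(50, 3)), rng.normal(5, 0.3, size=(50, 3))])
print(np.bincount(clip_like(X)))
```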

Genetic Design of Granular-oriented Radial Basis Function Neural Network Based on Information Proximity (정보 유사성 기반 입자화 중심 RBF NN의 진화론적 설계)

  • Park, Ho-Sung;Oh, Sung-Kwun;Kim, Hyun-Ki
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.2
    • /
    • pp.436-444
    • /
    • 2010
  • In this study, we introduce and discuss the concept of granular-oriented radial basis function neural networks (GRBF NNs). In contrast to the typical architectures encountered in radial basis function neural networks (RBF NNs), our main objective is to develop a design strategy for GRBF NNs as follows: (a) The architecture of the network fully reflects the structure encountered in the training data, which are granulated with the aid of clustering techniques. More specifically, the output space is granulated with the use of K-Means clustering, while the information granules in the multidimensional input space are formed by using a so-called context-based Fuzzy C-Means, which takes into account the structure already formed in the output space. (b) The innovative development facet of the network involves a dynamic reduction of the dimensionality of the input space, in which the information granules are formed in a subspace of the overall input space obtained by selecting a suitable subset of input variables so that this subspace retains the structure of the entire space. As this search is of a combinatorial character, we use the technique of genetic optimization to determine the optimal input subspaces. A series of numeric studies exploiting some nonlinear process data and a dataset from the machine learning repository provides detailed insight into the nature of the algorithm and its parameters, as well as some comparative analysis.
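
The context-based Fuzzy C-Means step in point (a) above can be sketched as follows: the output space is granulated with K-Means, and each sample's input-space memberships are constrained to sum to its context value rather than to one. This follows the standard conditional FCM update (Pedrycz), with toy data, a crisp context, and parameter choices picked here purely for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def conditional_fcm(X, f, c=3, m=2.0, n_iter=50, seed=0):
    """Conditional FCM: the memberships of sample k across the c clusters sum to the
    context value f[k] (obtained from granulation of the output space), not to 1."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)].astype(float)   # prototypes
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U = f[:, None] / ratio.sum(axis=2)                       # rows of U sum to f
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]                 # prototype update
    return U, V

# Toy regression data: granulate the output with K-Means, then form input granules
rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, size=(200, 4))
y = X[:, 0] ** 2 + 0.3 * X[:, 1] + 0.05 * rng.normal(size=200)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(y.reshape(-1, 1))
f = (km.labels_ == 0).astype(float)   # crisp context: 1 inside one output granule, else 0
U, V = conditional_fcm(X, f)
print(V)
```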

Independent Feature Subspace Analysis for Gene Expression Data (유전자 발현 데이터의 독립 특징 부공간 해석)

  • Kim, Heijin;Park, Seungjin;Bang, Sung-Yang
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2002.10c
    • /
    • pp.739-742
    • /
    • 2002
  • This paper addresses a new statistical method, IFSAcycle, an unsupervised learning method for analyzing cell cycle-related gene expression data. IFSAcycle is based on independent feature subspace analysis (IFSA) [3], which generalizes independent component analysis (ICA). Experimental results show the usefulness of IFSAcycle: (1) the ability to assign genes to multiple coexpression pattern groups; (2) the capability to cluster key genes that determine each critical point of the cell cycle.
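
To make the feature-subspace idea concrete, here is a rough, hedged sketch: independent components are estimated with FastICA, grouped into fixed-size feature subspaces, and each gene is assigned to every subspace whose projection energy exceeds a threshold, which is what permits multiple coexpression-group membership. The grouping rule and threshold are illustrative assumptions, not the IFSA estimation procedure of the paper:

```python
import numpy as np
from sklearn.decomposition import FastICA

def subspace_memberships(expr, n_components=6, subspace_size=2, thresh=1.5):
    """expr: genes x conditions matrix. Returns a boolean genes x subspaces matrix
    where True means the gene's energy in that feature subspace is high."""
    ica = FastICA(n_components=n_components, random_state=0)
    S = ica.fit_transform(expr)                        # per-gene component loadings
    S = (S - S.mean(0)) / S.std(0)
    # Group consecutive components into feature subspaces and take the L2 energy
    groups = [range(i, i + subspace_size) for i in range(0, n_components, subspace_size)]
    energy = np.column_stack([np.linalg.norm(S[:, list(g)], axis=1) for g in groups])
    return energy > thresh                             # genes may belong to several groups

# Toy expression data: 300 genes x 20 conditions with two planted, overlapping patterns
rng = np.random.default_rng(6)
expr = rng.normal(size=(300, 20))
expr[:60] += np.sin(np.linspace(0, 6, 20))             # pattern shared by genes 0-59
expr[40:100] += np.cos(np.linspace(0, 6, 20))          # overlaps genes 40-59
members = subspace_memberships(expr)
print(members.sum(axis=0))                             # number of genes per subspace
```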

More on directional regression

  • Kim, Kyongwon;Yoo, Jae Keun
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.5
    • /
    • pp.553-562
    • /
    • 2021
  • Directional regression (DR; Li and Wang, 2007) is well known as an exhaustive sufficient dimension reduction method, and it performs well in complex regression models with linear and nonlinear trends. However, DR has not been extended much to date, so we extend it to accommodate multivariate regression and large p-small n regression. We propose three versions of DR for multivariate regression and discuss how DR is applicable in the latter regression case. Numerical studies confirm that DR is robust to the number of clusters and to the choice between hierarchical-clustering and pooled DR.
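
One ingredient mentioned above, replacing univariate slicing with clustering when the response is multivariate, can be sketched as follows. For brevity the sketch plugs the cluster-based slices into a SIR-type slice-mean estimator rather than the full directional regression candidate matrix, and it uses k-means where the paper also considers hierarchical clustering:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_sliced_directions(X, Y, n_slices=6, d=1):
    """Slice a multivariate response Y by k-means, then estimate d sufficient
    directions from the weighted covariance of the slice means (SIR-type step)."""
    n, p = X.shape
    L = np.linalg.cholesky(np.cov(X.T))
    Z = (X - X.mean(0)) @ np.linalg.inv(L).T          # standardized predictors
    slices = KMeans(n_clusters=n_slices, n_init=10, random_state=0).fit_predict(Y)
    M = np.zeros((p, p))
    for s in range(n_slices):
        w = np.mean(slices == s)
        m = Z[slices == s].mean(0)
        M += w * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)
    beta_z = vecs[:, np.argsort(vals)[::-1][:d]]      # leading eigenvectors
    return np.linalg.solve(L.T, beta_z)               # map back to the original X scale

# Toy bivariate response driven by one direction of X
rng = np.random.default_rng(7)
X = rng.normal(size=(400, 5))
index = X[:, 0] - X[:, 1]
Y = np.column_stack([np.exp(index) + 0.1 * rng.normal(size=400),
                     index ** 2 + 0.1 * rng.normal(size=400)])
print(cluster_sliced_directions(X, Y, d=1).ravel())
```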

Multi-Radial Basis Function SVM Classifier: Design and Analysis

  • Wang, Zheng;Yang, Cheng;Oh, Sung-Kwun;Fu, Zunwei
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.6
    • /
    • pp.2511-2520
    • /
    • 2018
  • In this study, a Multi-Radial Basis Function Support Vector Machine (Multi-RBF SVM) classifier is introduced based on a composite kernel function. In the proposed classifier, the input space is divided into several local subsets for extremely nonlinear classification tasks. Each local subset is expressed as a nonlinear classification subspace and mapped into feature space by a kernel function. The composite kernel function employs a dual RBF structure. By capturing the nonlinear distribution knowledge of the local subsets, the training data are mapped into a higher-dimensional feature space, and the Multi-SVM classifier is then realized with the composite kernel function through an optimization procedure similar to that of the conventional SVM classifier. The original training data set is partitioned using unsupervised learning methods; in this study, three clustering methods are considered: Affinity Propagation (AP), Hard C-Means (HCM), and the Iterative Self-Organizing Data Analysis Technique Algorithm (ISODATA). Experimental results on benchmark machine learning datasets show that the proposed method improves classification performance efficiently.
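
A minimal sketch of the composite-kernel idea described above: the training data are partitioned by a clustering method (plain KMeans is used here as a stand-in for AP, HCM, or ISODATA), each local subset contributes an RBF kernel whose width reflects the subset's spread, and the composite kernel, here a sum of the local RBF kernels, is passed to a standard SVM. The dataset and parameter choices are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy nonlinear classification problem
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Partition the training data (stand-in for AP / HCM / ISODATA in the paper)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_tr)

# One RBF width per local subset, taken from the subset's average spread
gammas = []
for c in range(km.n_clusters):
    subset = X_tr[km.labels_ == c]
    gammas.append(1.0 / (2.0 * subset.var(axis=0).sum() + 1e-12))

def composite_rbf(A, B):
    """Composite kernel: sum of RBF kernels using the per-subset widths."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return sum(np.exp(-g * sq) for g in gammas)

clf = SVC(kernel=composite_rbf, C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```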