• Title/Summary/Keyword: subspace


Forward Backward PAST (Projection Approximation Subspace Tracking) Algorithm for the Better Subspace Estimation Accuracy

  • Lim, Jun-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.1E
    • /
    • pp.25-29
    • /
    • 2008
  • The projection approximation subspace tracking (PAST) algorithm is one of the attractive subspace tracking algorithms because it estimates the signal subspace adaptively and continuously. Furthermore, its computational complexity is relatively low. However, the algorithm still has room for improvement in subspace estimation accuracy. In this paper, we propose a new algorithm that improves the subspace estimation accuracy by using a normally ordered input vector and a reversely ordered input vector simultaneously.
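The forward-backward idea described above can be sketched in a few lines. The following is a minimal NumPy illustration of the standard PAST recursion, with a forward-backward variant that also feeds the reversed, conjugated snapshot; all function names, variable names, and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def past_update(W, P, x, beta=0.97):
    """One PAST step: adapt the n x r basis W toward the signal subspace.

    W : current subspace basis estimate, P : r x r inverse-correlation state,
    x : new snapshot, beta : forgetting factor.
    """
    y = W.conj().T @ x                      # project snapshot onto current basis
    h = P @ y
    g = h / (beta + y.conj() @ h)           # RLS-style gain vector
    P = (P - np.outer(g, h.conj())) / beta  # update inverse correlation of y
    e = x - W @ y                           # projection-approximation error
    W = W + np.outer(e, g.conj())           # pull the basis toward the snapshot
    return W, P

def fb_past_update(W, P, x):
    """Forward-backward variant: use x and its reversed, conjugated copy."""
    W, P = past_update(W, P, x)
    return past_update(W, P, np.conj(x[::-1]))
```

For data whose covariance is close to Toeplitz, e.g. snapshots built from complex exponentials, the reversed and conjugated vector lies in the same signal subspace as the original, which is what allows the backward step to supply extra information about that subspace.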

ON MULTI SUBSPACE-HYPERCYCLIC OPERATORS

  • Moosapoor, Mansooreh
    • Communications of the Korean Mathematical Society
    • /
    • v.35 no.4
    • /
    • pp.1185-1192
    • /
    • 2020
  • In this paper, we introduce and investigate multi subspace-hypercyclic operators and prove that multi-hypercyclic operators are multi subspace-hypercyclic. We show that if $T$ is M-hypercyclic or multi M-hypercyclic, then $T^n$ is multi M-hypercyclic for any natural number $n$, and we use this result to construct examples of multi subspace-hypercyclic operators. We prove that multi M-hypercyclic operators have somewhere dense orbits in M. We show that analytic Toeplitz operators cannot be multi subspace-hypercyclic. We also state a sufficient condition for coanalytic Toeplitz operators to be multi subspace-hypercyclic.

Optimization of Random Subspace Ensemble for Bankruptcy Prediction (재무부실화 예측을 위한 랜덤 서브스페이스 앙상블 모형의 최적화)

  • Min, Sung-Hwan
    • Journal of Information Technology Services
    • /
    • v.14 no.4
    • /
    • pp.121-135
    • /
    • 2015
  • Ensemble classification utilizes multiple classifiers instead of a single classifier. Recently, ensemble classifiers have attracted much attention in the data mining community. Ensemble learning techniques have proven very useful for improving prediction accuracy. Bagging, boosting, and random subspace are the most popular ensemble methods. In random subspace, each base classifier is trained on a randomly chosen feature subspace of the original feature space. The outputs of the different base classifiers are usually aggregated by a simple majority vote. In this study, we applied the random subspace method to the bankruptcy prediction problem. Moreover, we proposed a method for optimizing the random subspace ensemble: a genetic algorithm was used to optimize the classifier subset of the random subspace ensemble for bankruptcy prediction. This paper applied the proposed genetic-algorithm-based random subspace ensemble model to the bankruptcy prediction problem using a real data set and compared it with other models. Experimental results showed that the proposed model outperformed the other models.
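As a concrete illustration of the random subspace scheme described above, the sketch below trains each base classifier on a randomly drawn feature subset and aggregates predictions by majority vote. A nearest-centroid classifier stands in for whatever base learner one would actually use, and all parameter values are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def train_random_subspace(X, y, n_estimators=15, subspace_frac=0.5, seed=0):
    """Train an ensemble where each member sees only a random feature subset."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    k = max(1, int(subspace_frac * n_feat))
    classes = np.unique(y)
    ensemble = []
    for _ in range(n_estimators):
        feats = rng.choice(n_feat, size=k, replace=False)     # random subspace
        # nearest-centroid base learner: one centroid per class, subset features
        centroids = np.stack([X[y == c][:, feats].mean(axis=0) for c in classes])
        ensemble.append((feats, centroids))
    return classes, ensemble

def predict_majority(classes, ensemble, X):
    """Aggregate the base classifiers' outputs by simple majority vote."""
    votes = np.zeros((X.shape[0], len(classes)), dtype=int)
    for feats, centroids in ensemble:
        d = np.linalg.norm(X[:, None, feats] - centroids[None], axis=2)
        votes[np.arange(X.shape[0]), d.argmin(axis=1)] += 1
    return classes[votes.argmax(axis=1)]
```

The GA-based optimization step the abstract mentions would then search over which ensemble members to keep, scoring candidate subsets on held-out data.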

Interference Suppression Using Principal Subspace Modification in Multichannel Wiener Filter and Its Application to Speech Recognition

  • Kim, Gi-Bak
    • ETRI Journal
    • /
    • v.32 no.6
    • /
    • pp.921-931
    • /
    • 2010
  • It has been shown that the principal subspace-based multichannel Wiener filter (MWF) provides better performance than the conventional MWF for suppressing interference in the case of a single target source. It can efficiently estimate the target speech component in the principal subspace, which estimates the acoustic transfer function up to a scaling factor. However, as the input signal-to-interference ratio (SIR) becomes lower, larger errors are incurred in the estimation of the acoustic transfer function by the principal subspace method, degrading the interference suppression performance. To alleviate this problem, a principal subspace modification method was proposed in previous work; it reduces the estimation error of the acoustic transfer function vector at low SIRs. In this work, a frequency-band-dependent interpolation technique is further employed for the principal subspace modification. A speech recognition test is also conducted using the Sphinx-4 system and demonstrates the practical usefulness of the proposed method as front-end processing for a speech recognizer in a distant-talking, interferer-present environment.
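The principal-subspace step referred to above, estimating the target's acoustic transfer function up to a scaling factor, reduces in the single-target (rank-1) case to taking the dominant eigenvector of the speech-only covariance. The sketch below shows only that step under idealized assumptions (known noise covariance, hypothetical function name); the paper's modification and interpolation techniques are not reproduced here.

```python
import numpy as np

def principal_atf(R_noisy, R_noise):
    """Estimate the target acoustic transfer function (up to scale) as the
    principal eigenvector of the speech-only spatial covariance.

    R_noisy : covariance of the microphone signals (speech + noise)
    R_noise : covariance of the noise/interference alone
    """
    R_speech = R_noisy - R_noise          # speech-only covariance estimate
    w, V = np.linalg.eigh(R_speech)       # eigenvalues in ascending order
    return V[:, -1]                       # eigenvector of the largest eigenvalue
```

At low SIR the subtraction `R_noisy - R_noise` becomes error-dominated, which is exactly the regime where the paper's principal subspace modification is applied.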

Orthonormalized Forward Backward PAST (Projection Approximation Subspace Tracking) Algorithm (직교성 전후방 PAST (Projection Approximation Subspace Tracking) 알고리즘)

  • Lim, Jun-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.514-519
    • /
    • 2009
  • The projection approximation subspace tracking (PAST) algorithm is one of the attractive subspace tracking algorithms because it estimates the signal subspace adaptively and continuously. Furthermore, its computational complexity is relatively low. However, the algorithm still has room for improvement in subspace estimation accuracy. FB-PAST (Forward-Backward PAST) is one such improvement. In this paper, we propose a new algorithm that improves the orthogonality of FB-PAST.

Note on the estimation of informative predictor subspace and projective-resampling informative predictor subspace (다변량회귀에서 정보적 설명 변수 공간의 추정과 투영-재표본 정보적 설명 변수 공간 추정의 고찰)

  • Yoo, Jae Keun
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.5
    • /
    • pp.657-666
    • /
    • 2022
  • An informative predictor subspace is useful for estimating the central subspace when the conditions required by usual sufficient dimension reduction methods fail. Recently, for multivariate regression, Ko and Yoo (2022) newly defined a projective-resampling informative predictor subspace, instead of the informative predictor subspace, by adopting the projective-resampling method (Li et al., 2008). The new space is contained in the informative predictor subspace but contains the central subspace. In this paper, a method to directly estimate the informative predictor subspace is proposed and compared with the method of Ko and Yoo (2022) through theoretical aspects and numerical studies. The numerical studies confirm that the Ko-Yoo method estimates the central subspace better than the proposed method and is more efficient in the sense that it has less variation in the estimation.

Tutorial: Dimension reduction in regression with a notion of sufficiency

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.2
    • /
    • pp.93-103
    • /
    • 2016
  • In the paper, we discuss dimension reduction of predictors $\mathbf{X} \in \mathbb{R}^p$ in a regression of $Y \mid \mathbf{X}$ with a notion of sufficiency that is called sufficient dimension reduction. In sufficient dimension reduction, the original predictors $\mathbf{X}$ are replaced by a lower-dimensional linear projection without loss of information on selected aspects of the conditional distribution. Depending on the aspects, the central subspace, the central mean subspace and the central $k$th-moment subspace are defined and investigated as primary interests. Then the relationships among the three subspaces and the changes in the three subspaces under non-singular transformations of $\mathbf{X}$ are studied. We discuss two conditions that guarantee the existence of the three subspaces; these constrain the marginal distribution of $\mathbf{X}$ and the conditional distribution of $Y \mid \mathbf{X}$. A general approach to estimate them is also introduced, along with an explanation of the conditions commonly assumed in most sufficient dimension reduction methodologies.
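One classical estimator in this family is sliced inverse regression (SIR; Li, 1991), which recovers directions of the central subspace from slice means of the standardized predictors. The sketch below is a generic SIR illustration under standard assumptions (full-rank predictor covariance, the linearity condition), not the specific methodology of this tutorial.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, d=1):
    """Sliced inverse regression: estimate d basis directions of the
    central subspace for the regression of y on X."""
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X.T)
    # standardize the predictors: Z = (X - mu) @ cov^{-1/2}
    w, V = np.linalg.eigh(cov)
    inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    Z = (X - mu) @ inv_sqrt
    # slice observations by y and form the weighted slice-mean covariance
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = sum(len(s) / n * np.outer(Z[s].mean(axis=0), Z[s].mean(axis=0))
            for s in slices)
    _, U = np.linalg.eigh(M)
    # top-d eigenvectors, mapped back to the original predictor scale
    B = inv_sqrt @ U[:, -d:]
    return B / np.linalg.norm(B, axis=0)
```

The span of the returned columns estimates (a subspace of) the central subspace; replacing the slice-mean matrix with other kernel matrices yields estimators of the central mean or higher-moment subspaces.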

ON SUBSPACE-SUPERCYCLIC SEMIGROUP

  • El Berrag, Mohammed;Tajmouati, Abdelaziz
    • Communications of the Korean Mathematical Society
    • /
    • v.33 no.1
    • /
    • pp.157-164
    • /
    • 2018
  • A $C_0$-semigroup $\tau = (T_t)_{t \geq 0}$ on a Banach space $X$ is called subspace-supercyclic for a subspace $M$ if $\mathbb{C}\,\mathrm{Orb}(\tau, x) \cap M = \{\lambda T_t x : \lambda \in \mathbb{C},\ t \geq 0\} \cap M$ is dense in $M$ for some vector $x \in M$. In this paper we characterize the notion of a subspace-supercyclic $C_0$-semigroup. We also provide a subspace-supercyclicity criterion for $C_0$-semigroups and offer two equivalent conditions for this criterion.

APPROXIMATION PROPERTIES OF PAIRS OF SUBSPACES

  • Lee, Keun Young
    • Bulletin of the Korean Mathematical Society
    • /
    • v.56 no.3
    • /
    • pp.563-568
    • /
    • 2019
  • This study is concerned with the approximation properties of pairs. For $\lambda \geq 1$, we prove that given a Banach space $X$ and a closed subspace $Z_0$, if the pair $(X, Z_0)$ has the $\lambda$-bounded approximation property ($\lambda$-BAP), then for every ideal $Z$ containing $Z_0$, the pair $(Z, Z_0)$ has the $\lambda$-BAP; further, if $Z$ is a closed subspace of $X$ and the pair $(X, Z)$ has the $\lambda$-BAP, then for every separable subspace $Y_0$ of $X$, there exists a separable closed subspace $Y$ containing $Y_0$ such that the pair $(Y, Y \cap Z)$ has the $\lambda$-BAP. We also prove that if $Z$ is a separable closed subspace of $X$, then the pair $(X, Z)$ has the $\lambda$-BAP if and only if for every separable subspace $Y_0$ of $X$, there exists a separable closed subspace $Y$ containing $Y_0 \cup Z$ such that the pair $(Y, Z)$ has the $\lambda$-BAP.

Improving an Ensemble Model Using Instance Selection Method (사례 선택 기법을 활용한 앙상블 모형의 성능 개선)

  • Min, Sung-Hwan
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.39 no.1
    • /
    • pp.105-115
    • /
    • 2016
  • Ensemble classification combines individually trained classifiers to yield more accurate predictions than individual models. Ensemble techniques are very useful for improving the generalization ability of classifiers. The random subspace ensemble technique is a simple but effective method for constructing ensemble classifiers; each base classifier is trained on a randomly drawn subset of the original features. The instance selection technique selects critical instances while removing irrelevant and noisy instances from the original dataset. Instance selection and random subspace methods are both well known in the field of data mining and have proven very effective in many applications. However, few studies have focused on integrating the two. Therefore, this study proposed a new hybrid ensemble model that integrates instance selection and random subspace techniques using genetic algorithms (GAs) to improve the performance of a random subspace ensemble model. GAs are used to select optimal (or near-optimal) instances, which are used as input data for the random subspace ensemble model. The proposed model was applied to both Kaggle credit data and corporate credit data, and the results were compared with those of other models in terms of classification accuracy, level of diversity, and average classification rate of the base classifiers in the ensemble. The experimental results demonstrated that the proposed model outperformed the other models, including the single model, the instance selection model, and the original random subspace ensemble model.
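A GA-based instance selection step of the kind described can be sketched as follows: the chromosome is a binary mask over training instances, and fitness is the validation accuracy of a model trained on the selected instances. Everything below, the operators, rates, and the caller-supplied fitness function, is an illustrative assumption rather than the paper's implementation.

```python
import numpy as np

def ga_select_instances(X, y, X_val, y_val, fitness, pop=20, gens=30, seed=0):
    """Tiny GA over binary instance masks.

    fitness(X_sel, y_sel, X_val, y_val) -> float scores a candidate subset,
    e.g. validation accuracy of a classifier trained on (X_sel, y_sel).
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    P = rng.random((pop, n)) < 0.5                      # random initial masks
    for _ in range(gens):
        scores = np.array([fitness(X[m], y[m], X_val, y_val) for m in P])
        # tournament selection: best of 3 random candidates, pop times
        idx = [max(rng.choice(pop, 3, replace=False), key=lambda i: scores[i])
               for _ in range(pop)]
        P = P[idx]
        # one-point crossover on consecutive pairs
        for i in range(0, pop - 1, 2):
            c = rng.integers(1, n)
            P[i, c:], P[i + 1, c:] = P[i + 1, c:].copy(), P[i, c:].copy()
        P ^= rng.random((pop, n)) < 0.01                # bit-flip mutation
    scores = np.array([fitness(X[m], y[m], X_val, y_val) for m in P])
    return P[scores.argmax()]                           # best instance mask
```

In the hybrid model the abstract describes, the instances kept by the best mask would then be fed to the random subspace ensemble as its training data.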