• Title/Summary/Keyword: hypersphere

Search Result 30

A NEW DESCRIPTION OF SPHERICAL IMAGES ASSOCIATED WITH MINIMAL CURVES IN THE COMPLEX SPACE ℂ4

  • Yilmaz, Suha;Unluturk, Yasin
    • Honam Mathematical Journal
    • /
    • v.44 no.1
    • /
    • pp.121-134
    • /
    • 2022
  • In this study, we obtain the spherical images of minimal curves in the complex space ℂ4, which are obtained by translating the Cartan frame vector fields to the centre of the hypersphere, and present their properties, such as becoming isotropic cubics, pseudo helices, and spherical involutes. We also examine minimal curves characterized by a system of differential equations.

Prototype-Based Classification Using Class Hyperspheres (클래스 초월구를 이용한 프로토타입 기반 분류)

  • Lee, Hyun-Jong;Hwang, Doosung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.10
    • /
    • pp.483-488
    • /
    • 2016
  • In this paper, we propose prototype-based classification learning using the nearest-neighbor rule. The nearest-neighbor rule is applied to segment the class area of all the training data with hyperspheres, where each hypersphere must cover only data from the same class. The radius of a hypersphere is the midpoint of the two distances to the farthest same-class point and the nearest other-class point. We then transform the prototype selection problem into a set covering problem in order to determine the smallest set of prototypes that covers all the training data. The proposed prototype selection method is designed as a greedy algorithm and can process a large-scale training set in parallel. The prediction rule is the nearest-neighbor rule, with the set of prototypes as the new training data. In experiments, the generalization performance of the proposed method is superior to that of existing methods.
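
As an illustration only (not the authors' code, and all names are ours), the midpoint radius rule and the greedy set-cover selection described above can be sketched as follows:

```python
import numpy as np

def greedy_hypersphere_prototypes(X, y):
    """Sketch of hypersphere-based prototype selection via greedy set cover.

    Each candidate hypersphere is centred on a training point; its radius is
    the midpoint between the distance to the farthest same-class point lying
    inside the class boundary and the distance to the nearest other-class
    point, so the sphere covers only same-class data.
    """
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    cover = []
    for i in range(n):
        d_other = D[i][y != y[i]].min()        # nearest other-class point
        same = D[i][y == y[i]]
        d_same = same[same < d_other].max()    # farthest same-class point inside
        r = (d_same + d_other) / 2.0           # midpoint radius
        cover.append(set(np.where((y == y[i]) & (D[i] <= r))[0]))
    # Greedy set cover: repeatedly pick the sphere covering the most
    # still-uncovered training points.
    uncovered, prototypes = set(range(n)), []
    while uncovered:
        best = max(range(n), key=lambda j: len(cover[j] & uncovered))
        prototypes.append(best)
        uncovered -= cover[best]
    return prototypes
```

Because every sphere covers at least its own centre, the loop always terminates with every training point covered by some selected prototype.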

A class of compact submanifolds with constant mean curvature

  • Jang, Changrim
    • Bulletin of the Korean Mathematical Society
    • /
    • v.34 no.2
    • /
    • pp.155-171
    • /
    • 1997
  • Let $M^n$ be a connected submanifold of a Euclidean space $E^m$, equipped with the induced metric. Denote by $\Delta$ the Laplacian operator of $M^n$ and by $x$ the position vector. A well-known theorem of T. Takahashi [13] says that $\Delta x = \lambda x$ for some constant $\lambda$ if and only if $M^n$ is either a minimal submanifold of $E^m$ or a minimal submanifold in a hypersphere of $E^m$. In [9], O. Garay studied the hypersurfaces $M^n$ in $E^{n+1}$ satisfying $\Delta x = Dx$, where $D$ is a diagonal matrix, and he classified such hypersurfaces. Garay's condition can be seen as a generalization of T. Takahashi's.


KMSVDD: Support Vector Data Description using K-means Clustering (KMSVDD: K-means Clustering을 이용한 Support Vector Data Description)

  • Kim, Pyo-Jae;Chang, Hyung-Jin;Song, Dong-Sung;Choi, Jin-Young
    • Proceedings of the KIEE Conference
    • /
    • 2006.04a
    • /
    • pp.90-92
    • /
    • 2006
  • The conventional Support Vector Data Description (SVDD) method is limited in training on large data sets, since its training time grows exponentially with the number of training data. In this paper, we propose an SVDD algorithm that uses the K-means clustering algorithm to speed up training. Similarly to existing decomposition methods, the proposed algorithm sub-groups the training data region with the K-means clustering algorithm and then trains each sub-group individually, thereby reducing the amount of computation. This sub-grouping step trains on small regions of data gathered around each center point without harming the characteristic of SVDD, namely enclosing the training data with a hypersphere, and thus enables faster training than conventional SVDD with no difference in training accuracy. We verify the effect through simulations on various data sets.
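
The sub-grouping idea can be illustrated with a toy sketch. This is not the paper's implementation: the per-group SVDD training is replaced here by a crude enclosing hypersphere per k-means subgroup, and all names are ours.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means with deterministic farthest-point initialisation."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def subgroup_spheres(X, k):
    """One enclosing hypersphere per k-means subgroup -- a stand-in for
    training an SVDD independently on each subgroup."""
    labels, centers = kmeans(X, k)
    radii = np.array([np.linalg.norm(X[labels == j] - centers[j], axis=1).max()
                      for j in range(k)])
    return centers, radii

def is_inlier(x, centers, radii):
    """Accept a point if it falls inside any subgroup's hypersphere."""
    return bool((np.linalg.norm(centers - x, axis=1) <= radii).any())
```

Each subgroup is small, so fitting its describing sphere is cheap, and the subgroups can be processed in parallel, which is the source of the speed-up the abstract describes.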


A Study on the Optimal Mahalanobis Distance for Speech Recognition

  • Lee, Chang-Young
    • Speech Sciences
    • /
    • v.13 no.4
    • /
    • pp.177-186
    • /
    • 2006
  • In an effort to enhance the quality of feature vector classification and thereby reduce the recognition error rate of speaker-independent speech recognition, we employ the Mahalanobis distance in the calculation of the similarity measure between feature vectors. It is assumed that the metric matrix of the Mahalanobis distance is diagonal for the sake of cost reduction in memory and calculation time. We propose that the diagonal elements be given in terms of the variances of the feature vector components. Geometrically, this prescription tends to redistribute the set of data in the shape of a hypersphere in the feature vector space. The idea is applied to speech recognition by hidden Markov models with fuzzy vector quantization. The results show that recognition is improved by an appropriate choice of the relevant adjustable parameter. The Viterbi score difference of the two winners in the recognition test shows that the general behavior is in accord with that of the recognition error rate.
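
The diagonal-metric prescription can be written down directly; a minimal sketch (variable names are ours, not the paper's):

```python
import numpy as np

def diagonal_mahalanobis(u, v, variances):
    """Mahalanobis distance with a diagonal metric matrix: each squared
    component difference is weighted by the inverse of that component's
    variance."""
    return float(np.sqrt(np.sum((u - v) ** 2 / variances)))

# Equivalently, dividing each feature by its standard deviation rescales
# the data cloud toward a hypersphere in feature space, after which the
# ordinary Euclidean distance gives the same similarity ranking.
X = np.random.default_rng(0).normal(scale=[3.0, 0.5], size=(1000, 2))
Xw = X / X.std(axis=0)   # per-component variance is now ~1
```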


Perceptron-like SOM : Generalization of SOM (퍼셉트론 형태의 SOM : SOM의 일반화)

  • Song, Geun-Bae;Lee, Haing-Sei
    • The Transactions of the Korea Information Processing Society
    • /
    • v.7 no.10
    • /
    • pp.3098-3104
    • /
    • 2000
  • This paper defines a perceptron-like self-organizing map (PSOM) and shows that PSOM is equivalent to Kohonen's self-organizing map (SOM) if the target values of the output neurons of PSOM are selected properly. This fact implies that PSOM is a generalized SOM algorithm. This paper also shows that if clustering is restricted to vector sets distributed on a hypersphere with unit radius, SOM and dot-product SOM (DSOM) are equivalent algorithms. Therefore we conclude that DSOM is a special case of SOM, which is in turn a special case of PSOM.
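
The unit-radius equivalence rests on the identity ‖x − w‖² = 2 − 2⟨x, w⟩ for unit vectors, so the nearest-weight winner and the largest-dot-product winner coincide. A quick numerical check (our own sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Random data and weight vectors, projected onto the unit hypersphere.
X = rng.normal(size=(100, 5)); X /= np.linalg.norm(X, axis=1, keepdims=True)
W = rng.normal(size=(10, 5));  W /= np.linalg.norm(W, axis=1, keepdims=True)

# For unit vectors, ||x - w||^2 = 2 - 2<x, w>, so minimising the Euclidean
# distance (SOM winner) and maximising the dot product (DSOM winner)
# select the same weight vector for every input.
som_winner = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=-1), axis=1)
dsom_winner = np.argmax(X @ W.T, axis=1)
```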


2-TYPE SURFACES AND QUADRIC HYPERSURFACES SATISFYING ⟨∆x, x⟩ = const.

  • Jang, Changrim;Jo, Haerae
    • East Asian mathematical journal
    • /
    • v.33 no.5
    • /
    • pp.571-585
    • /
    • 2017
  • Let M be a connected n-dimensional submanifold of a Euclidean space $E^{n+k}$ equipped with the induced metric, and let $\Delta$ be its Laplacian. If the position vector x of M is decomposed as a sum of three vectors $x = x_1 + x_2 + x_0$, where the two vectors $x_1$ and $x_2$ are non-constant eigenvectors of the Laplacian, i.e., $\Delta x_i = \lambda_i x_i$, $i = 1, 2$ ($\lambda_i \in \mathbb{R}$), and $x_0$ is a constant vector, then M is called a 2-type submanifold. In this paper we show that if a 2-type surface M in $E^3$ satisfies $\langle \Delta x, x - x_0 \rangle = c$ for a constant c, where $\langle\,,\,\rangle$ is the usual inner product in $E^3$, then M is an open part of a circular cylinder. We also show that if a quadric hypersurface M in a Euclidean space satisfies $\langle \Delta x, x \rangle = c$ for a constant c, then it is one of a minimal quadric hypersurface, a generalized cone, a hypersphere, or a spherical cylinder.

Recursive Fuzzy Partition of Pattern Space for Automatic Generation of Decision Rules (결정규칙의 자동생성을 위한 패턴공간의 재귀적 퍼지분할)

  • 김봉근;최형일
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.5 no.2
    • /
    • pp.28-43
    • /
    • 1995
  • This paper concerns the automatic generation of fuzzy rules which can be used for pattern classification. The feature space is recursively subdivided into hyperspheres, and each hypersphere is represented by its centroid and bounding distance. Fuzzy rules are then generated based on the constructed hyperspheres. The resulting fuzzy rules have very simple premise parts, and they can be organized into a hierarchical structure so that the classification process can be implemented very rapidly. The experimental results show that the suggested method works very well compared to other methods.
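
A toy sketch of the hypersphere representation (centroid plus bounding distance) and a fuzzy membership built on it. The split heuristic and membership function below are our own illustrative choices, not the paper's:

```python
import numpy as np

def build_spheres(X, y, max_depth=5):
    """Wrap the data in a hypersphere (centroid + bounding distance); if it
    is class-pure or depth runs out, emit a rule, otherwise split the points
    along the axis of largest spread and recurse."""
    centroid = X.mean(axis=0)
    radius = np.linalg.norm(X - centroid, axis=1).max()
    if len(np.unique(y)) == 1 or max_depth == 0:
        return [(centroid, radius, np.bincount(y).argmax())]
    axis = X.var(axis=0).argmax()
    mask = X[:, axis] <= np.median(X[:, axis])
    if mask.all() or not mask.any():      # degenerate split: stop here
        return [(centroid, radius, np.bincount(y).argmax())]
    return (build_spheres(X[mask], y[mask], max_depth - 1)
            + build_spheres(X[~mask], y[~mask], max_depth - 1))

def classify(x, rules):
    """Fuzzy-flavoured decision: membership decays linearly with distance
    from each sphere's centroid relative to its bounding radius."""
    scores = {}
    for c, r, label in rules:
        mu = max(0.0, 1.0 - np.linalg.norm(x - c) / (r + 1e-12))
        scores[label] = max(scores.get(label, 0.0), mu)
    return max(scores, key=scores.get)
```

Each `(centroid, radius, label)` triple plays the role of one rule with a very simple premise, matching the structure the abstract describes.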


SPACE CURVES SATISFYING $\Delta$H = AH

  • Kim, Dong-Soo;Chung, Hei-Sun
    • Bulletin of the Korean Mathematical Society
    • /
    • v.31 no.2
    • /
    • pp.193-200
    • /
    • 1994
  • Let $x : M^n \to E^m$ be an isometric immersion of a manifold $M^n$ into the Euclidean space $E^m$ and $\Delta$ the Laplacian of $M^n$ defined by $-\mathrm{div} \circ \mathrm{grad}$. The family of such immersions satisfying the condition $\Delta x = \lambda x$, $\lambda \in \mathbb{R}$, is characterized by a well-known result of Takahashi ([8]): they are either minimal in $E^m$ or minimal in some Euclidean hypersphere. As a generalization of Takahashi's result, many authors ([3,6,7]) studied the hypersurfaces $M^n$ in $E^{n+1}$ satisfying $\Delta x = Ax + b$, where $A$ is a square matrix and $b$ is a vector in $E^{n+1}$, and they proved independently that such hypersurfaces are either minimal in $E^{n+1}$, hyperspheres, or spherical cylinders. Since $\Delta x = -nH$, the submanifolds mentioned above satisfy $\Delta H = \lambda H$ or $\Delta H = AH$, where $H$ is the mean curvature vector field of $M$. The family of hypersurfaces satisfying $\Delta H = \lambda H$ was explored for some cases in [4]. In this paper, we classify space curves $x : \mathbb{R} \to E^3$ satisfying $\Delta x = Ax + b$ or $\Delta H = AH$, and find conditions for such curves to be equivalent.
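
For curves ($n = 1$) the condition $\Delta x = \lambda x$ can be checked by hand. As a worked instance (ours, not taken from the paper), a unit-speed circle of radius $r$ in $E^3$ satisfies Takahashi's condition:

```latex
x(s) = \bigl(r\cos(s/r),\; r\sin(s/r),\; 0\bigr), \qquad
\Delta x = -\frac{d^2 x}{ds^2} = \frac{1}{r^2}\,x,
```

so $\lambda = 1/r^2 > 0$, and the circle indeed lies (minimally) in the hypersphere of radius $r$ centred at the origin, in accordance with Takahashi's theorem.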


KCYP data analysis using Bayesian multivariate linear model (베이지안 다변량 선형 모형을 이용한 청소년 패널 데이터 분석)

  • Lee, Insun;Lee, Keunbaik
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.6
    • /
    • pp.703-724
    • /
    • 2022
  • Although longitudinal studies mainly produce multivariate longitudinal data, most existing statistical models analyze univariate longitudinal data and are limited in their ability to explain complex correlations properly. Therefore, this paper describes various methods of modeling the covariance matrix to explain complex correlations, among them the modified Cholesky decomposition, the modified Cholesky block decomposition, and the hypersphere decomposition. We review these methods and analyze Korean Children and Youth Panel (KCYP) data using the Bayesian method. The KCYP data are multivariate longitudinal data with three response variables: school adaptation, academic achievement, and dependence on mobile phones. Assuming that the correlation structure and the innovation standard deviation structure are different, several models are compared. In the most suitable model, all explanatory variables are significant for school adaptation and academic achievement, and only household income appears insignificant when dependence on mobile phones is the response variable.
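
Of the three covariance parametrisations mentioned, the hypersphere decomposition writes each row of the Cholesky factor of the correlation matrix as a unit vector parametrised by angles, which guarantees a valid correlation matrix for any angle values. A generic sketch (not the paper's exact model; the function name is ours):

```python
import numpy as np

def corr_from_angles(theta):
    """Hypersphere decomposition sketch: build a correlation matrix
    R = L L^T whose Cholesky rows are unit vectors parametrised by
    angles in (0, pi). `theta` is an (n x n) array whose strictly
    lower-triangular entries are used."""
    n = theta.shape[0]
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i in range(1, n):
        prod = 1.0
        for j in range(i):
            L[i, j] = np.cos(theta[i, j]) * prod
            prod *= np.sin(theta[i, j])
        L[i, i] = prod          # each row of L has unit Euclidean norm
    return L @ L.T
```

Because every row of L lies on the unit hypersphere, R automatically has unit diagonal and is positive definite, which is why this parametrisation is convenient for unconstrained (e.g. Bayesian) covariance modelling.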