• Title/Summary/Keyword: High Dimensionality Data


Data-Compression-Based Resource Management in Cloud Computing for Biology and Medicine

  • Zhu, Changming
    • Journal of Computing Science and Engineering / v.10 no.1 / pp.21-31 / 2016
  • With the application and development of biomedical techniques such as next-generation sequencing, mass spectrometry, and medical imaging, the amount of biomedical data has been growing explosively. Processing such data raises the problems of big data, highly intensive computation, and high-dimensional data. Fortunately, cloud computing offers significant advantages in resource allocation, data storage, computation, and sharing, and provides a way to address the big data problems of biomedical research. To improve the efficiency of resource management in cloud computing, this paper proposes a clustering method and adopts radial basis functions (RBFs) to compress comprehensive biological and medical data sets with high quality, storing the compressed data under cloud resource management. Experiments validate that with such data-compression-based resource management in cloud computing, large biological and medical data sets can be stored using less capacity. Furthermore, by inverting the radial basis function operation, the compressed data can be reconstructed with high accuracy.
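
The abstract describes compressing records with radial basis functions and reconstructing them by the reverse operation, but does not give the exact formulation. The following is a minimal illustrative sketch (NumPy only) of that general idea: a long record is approximated by the weights of a few Gaussian RBF centers and later reconstructed by evaluating the expansion. All function names, the evenly spaced centers, and the parameter choices are my own assumptions, not the paper's method.

```python
import numpy as np

def gaussian_rbf(x, centers, width):
    """Design matrix of Gaussian RBFs evaluated at positions x."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def compress(signal, n_centers=16):
    """Approximate a long 1-D record with n_centers RBF weights (lossy compression)."""
    x = np.linspace(0.0, 1.0, signal.size)
    centers = np.linspace(0.0, 1.0, n_centers)   # evenly spaced centers (stand-in for a clustering step)
    width = centers[1] - centers[0]
    phi = gaussian_rbf(x, centers, width)
    weights, *_ = np.linalg.lstsq(phi, signal, rcond=None)
    return weights, centers, width               # store these instead of the raw signal

def reconstruct(weights, centers, width, n_points):
    """Reverse operation: evaluate the fitted RBF expansion on the original grid."""
    x = np.linspace(0.0, 1.0, n_points)
    return gaussian_rbf(x, centers, width) @ weights

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * np.linspace(0, 1, 512)) + 0.05 * rng.standard_normal(512)
w, c, h = compress(signal, n_centers=16)
approx = reconstruct(w, c, h, signal.size)
print("compression ratio:", signal.size / w.size)
print("reconstruction RMSE:", np.sqrt(np.mean((signal - approx) ** 2)))
```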

GC-Tree: A Hierarchical Index Structure for Image Databases (GC-트리 : 이미지 데이타베이스를 위한 계층 색인 구조)

  • Cha, Guang-Ho
    • Journal of KIISE:Databases / v.31 no.1 / pp.13-22 / 2004
  • With the proliferation of multimedia data, there is an increasing need to support the indexing and retrieval of high-dimensional image data. Although there have been many efforts, the performance of existing multidimensional indexing methods is not satisfactory in high dimensions. Thus, dimensionality reduction and approximate solution methods have been tried to deal with the so-called curse of dimensionality, but these methods are inevitably accompanied by a loss of precision in query results. Therefore, vector-approximation-based methods such as the VA-file and the LPC-file were recently developed to preserve the precision of query results. However, the performance of vector-approximation-based methods depends largely on the size of the approximation file, and they lose the advantage of multidimensional indexing methods, which prune much of the search space. In this paper, we propose a new index structure called the GC-tree for efficient similarity search in image databases. The GC-tree is based on a special subspace partitioning strategy optimized for clustered high-dimensional images. It adaptively partitions the data space based on a density function and dynamically constructs an index structure. The resulting index structure adapts well to the strongly clustered distribution of high-dimensional images.
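
For context on the vector-approximation methods the abstract contrasts the GC-tree with, here is a rough, hypothetical sketch of VA-file-style filter-and-refine search: each coordinate is quantized to a few bits, coarse lower bounds prune most vectors, and only the survivors are checked against the exact data. This illustrates the contrasted idea only; it is not the GC-tree algorithm, and the candidate-set size is a heuristic rather than a proven bound.

```python
import numpy as np

def build_va_file(data, bits=4):
    """Quantize each coordinate into 2**bits uniform cells (the 'approximation file')."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    cells = np.clip(((data - lo) / (hi - lo + 1e-12) * (2 ** bits)).astype(int), 0, 2 ** bits - 1)
    return cells, lo, hi

def knn_filter_and_refine(query, data, cells, lo, hi, bits=4, k=5):
    """Filter candidates with cell-level lower bounds, then refine with exact distances."""
    step = (hi - lo) / (2 ** bits)
    cell_lo = lo + cells * step                       # lower corner of each vector's cell
    cell_hi = cell_lo + step                          # upper corner
    nearest = np.clip(query, cell_lo, cell_hi)        # closest point of each cell to the query
    lower_bounds = np.linalg.norm(nearest - query, axis=1)
    candidates = np.argsort(lower_bounds)[: 10 * k]   # keep a generous candidate set (heuristic)
    exact = np.linalg.norm(data[candidates] - query, axis=1)
    return candidates[np.argsort(exact)[:k]]

rng = np.random.default_rng(1)
data = rng.random((10_000, 64))                       # 64-dimensional image feature vectors
cells, lo, hi = build_va_file(data)
query = rng.random(64)
print(knn_filter_and_refine(query, data, cells, lo, hi))
```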

Measuring health activation among foreign students in South Korea: initial evaluation of the feasibility, dimensionality, and reliability of the Consumer Health Activation Index (CHAI)

  • Park, MJ;Jung, Hun Sik
    • International Journal of Advanced Culture Technology / v.8 no.3 / pp.192-197 / 2020
  • Foreign students in South Korea face important challenges when they try to maintain their health. As a measure of their motivation to actively build skills for overcoming those challenges, we evaluated the 10-item Consumer Health Activation Index (CHAI), testing its feasibility, dimensionality, and reliability. There were no missing data, there was no floor effect, and for the total scores the ceiling effect was trivial (< 2%). Results of the Kaiser-Meyer-Olkin test and Bartlett's test of sphericity indicated that the data were suitable for the detection of structure by factor analysis. The results of parallel analysis and the shape of the scree plot supported a two-factor solution. One factor had 3 items concerning "my doctor" and the other factor had the 7 remaining items. Reliability was high for the 10-item CHAI (alpha = 0.856), for the 3-item subscale (alpha = 0.838), and for the 7-item subscale (alpha = 0.857). Reliability could not be improved by deletion of any items. Use of the CHAI to gather data from these foreign students is feasible, and reliable results can be obtained whether one uses the total score from all 10 items or scores from the proposed 7-item and 3-item subscales.
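
The reliability figures quoted above are Cronbach's alpha values for the full scale and the two subscales. As a reference for how such coefficients are computed, the sketch below implements the standard Cronbach's alpha formula on synthetic responses; the 3-item/7-item grouping mirrors the split described in the abstract, but the data and variable names are made up.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 1))                              # a common "activation" factor
responses = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(200, 10))), 1, 5)

doctor_items = responses[:, :3]      # hypothetical 3-item "my doctor" subscale
other_items = responses[:, 3:]       # hypothetical 7-item subscale
print("alpha (10 items):", round(cronbach_alpha(responses), 3))
print("alpha (3 items): ", round(cronbach_alpha(doctor_items), 3))
print("alpha (7 items): ", round(cronbach_alpha(other_items), 3))
```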

A review of gene selection methods based on machine learning approaches (기계학습 접근법에 기반한 유전자 선택 방법들에 대한 리뷰)

  • Lee, Hajoung;Kim, Jaejik
    • The Korean Journal of Applied Statistics / v.35 no.5 / pp.667-684 / 2022
  • Gene expression data present the level of mRNA abundance for each gene, and analyses of gene expression have provided key ideas for understanding the mechanisms of diseases and developing new drugs and therapies. Nowadays, high-throughput technologies such as DNA microarrays and RNA sequencing enable the simultaneous measurement of thousands of gene expressions, giving rise to the characteristic of gene expression data known as high dimensionality. Because of this high dimensionality, learning models for gene expression data are prone to overfitting, and to address this issue, dimension reduction or feature selection techniques are commonly used as a preprocessing step. In particular, gene selection methods in the preprocessing step can remove irrelevant and redundant genes and identify important ones. Various gene selection methods have been developed in the context of machine learning. In this paper, we intensively review recent work on gene selection methods using machine learning approaches. In addition, the underlying difficulties of current gene selection methods and future research directions are discussed.
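
The review concerns gene (feature) selection as a preprocessing step for high-dimensional expression data. As one minimal, generic example of that step rather than any specific method from the review, the sketch below ranks genes by a univariate filter score with scikit-learn; the data are synthetic and the sizes are illustrative.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(3)
n_samples, n_genes = 100, 5000                 # many more genes than samples
X = rng.normal(size=(n_samples, n_genes))      # synthetic expression matrix
y = rng.integers(0, 2, size=n_samples)         # disease vs. control labels
X[:, :20] += y[:, None] * 1.5                  # make the first 20 genes informative

# Filter-type gene selection: keep the 50 genes with the highest mutual information with y.
selector = SelectKBest(score_func=mutual_info_classif, k=50)
X_reduced = selector.fit_transform(X, y)

selected = np.flatnonzero(selector.get_support())
print("reduced shape:", X_reduced.shape)
print("informative genes recovered:", np.intersect1d(selected, np.arange(20)).size, "of 20")
```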

Canonical Correlation Biplot

  • Park, Mi-Ra;Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods / v.3 no.1 / pp.11-19 / 1996
  • Canonical correlation analysis is a multivariate technique for identifying and quantifying the statistical relationship between two sets of variables. As with most multivariate techniques, a main objective of canonical correlation analysis is to reduce the dimensionality of the data set; it is particularly useful when high-dimensional data can be represented in a low-dimensional space. In this study, we construct statistical graphs for paired sets of multivariate data. Specifically, plots of the observations as well as the variables are proposed. We discuss the geometric interpretation and goodness of fit of the proposed plots and provide a numerical example.
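
A canonical correlation biplot shows observations and variables from both variable sets in one low-dimensional display. The paper's exact construction is not reproduced here; the following rough sketch uses scikit-learn's CCA to obtain two-dimensional canonical scores for the observations and correlation-based coordinates for the variables. The plotting choices and coordinate convention are my own assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
n = 200
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, 5)) + 0.3 * rng.normal(size=(n, 5))   # first variable set
Y = latent @ rng.normal(size=(2, 4)) + 0.3 * rng.normal(size=(n, 4))   # second variable set

cca = CCA(n_components=2)
Xs, Ys = cca.fit_transform(X, Y)          # canonical scores = observation coordinates

# Variable coordinates: correlations between original variables and the canonical scores.
x_coords = np.array([[np.corrcoef(X[:, j], Xs[:, d])[0, 1] for d in range(2)] for j in range(X.shape[1])])
y_coords = np.array([[np.corrcoef(Y[:, j], Ys[:, d])[0, 1] for d in range(2)] for j in range(Y.shape[1])])

plt.scatter(Xs[:, 0], Xs[:, 1], s=8, alpha=0.4, label="observations")
for a, b in x_coords:
    plt.arrow(0, 0, a, b, color="tab:red", head_width=0.03)     # variables of set 1
for a, b in y_coords:
    plt.arrow(0, 0, a, b, color="tab:green", head_width=0.03)   # variables of set 2
plt.title("Canonical correlation biplot (illustrative)")
plt.legend()
plt.show()
```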


Bayesian baseline-category logit random effects models for longitudinal nominal data

  • Kim, Jiyeong;Lee, Keunbaik
    • Communications for Statistical Applications and Methods / v.27 no.2 / pp.201-210 / 2020
  • Baseline-category logit random effects models have been used to analyze longitudinal nominal data. These models account for subject-specific variation using random effects. However, the random effects covariance matrix in the models needs to capture subject-specific variation as well as serial correlation of the nominal outcomes, so the covariance matrix must be heterogeneous and high-dimensional. It is therefore difficult to estimate because of its high dimensionality and the positive-definiteness constraint. In this paper, we exploit the modified Cholesky decomposition to estimate the high-dimensional heterogeneous random effects covariance matrix, and a Bayesian methodology is proposed to estimate the parameters of interest. The proposed methods are illustrated with real data from the McKinney Homeless Research Project.
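
The key device here is the modified Cholesky decomposition, which re-expresses a covariance matrix through unconstrained parameters (generalized autoregressive parameters and log innovation variances) so that any parameter values yield a positive-definite matrix. Below is a minimal numerical sketch of that reparameterization, using the common convention Sigma = T⁻¹ D T⁻ᵀ with T unit lower triangular; this notation and the example values are assumptions, not the paper's exact formulation.

```python
import numpy as np

def covariance_from_mcd(garp, log_iv):
    """Build a covariance matrix from unconstrained modified-Cholesky parameters.

    garp   : list of lists; garp[t-1] holds the generalized autoregressive
             parameters regressing component t+1 on components 1..t
    log_iv : log innovation variances, one per component
    Returns Sigma = T^{-1} D T^{-T}, positive definite by construction.
    """
    q = len(log_iv)
    T = np.eye(q)
    for t in range(1, q):
        T[t, :t] = -np.asarray(garp[t - 1])      # unit lower-triangular T
    D = np.diag(np.exp(log_iv))                  # positive innovation variances
    T_inv = np.linalg.inv(T)
    return T_inv @ D @ T_inv.T

# Any real-valued parameters give a valid covariance matrix:
garp = [[0.5], [0.3, 0.2], [0.1, 0.2, 0.4]]      # parameters for components 2, 3, 4
log_iv = [0.0, -0.1, 0.2, 0.1]
Sigma = covariance_from_mcd(garp, log_iv)
print("eigenvalues:", np.linalg.eigvalsh(Sigma))  # all positive -> positive definite
```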

A Novel Approach of Feature Extraction for Analog Circuit Fault Diagnosis Based on WPD-LLE-CSA

  • Wang, Yuehai;Ma, Yuying;Cui, Shiming;Yan, Yongzheng
    • Journal of Electrical Engineering and Technology / v.13 no.6 / pp.2485-2492 / 2018
  • The rapid development of large-scale integrated circuits has brought great challenges to circuit testing and diagnosis, and because of the lack of exact fault models, imprecise analog component tolerances, and various nonlinear factors, analog circuit fault diagnosis is still regarded as an extremely difficult problem. To address the difficulty of effectively extracting fault features from masses of raw data in the output signals of nonlinear, continuous analog circuits, a novel approach to feature extraction and dimension reduction for analog circuit fault diagnosis based on wavelet packet decomposition, locally linear embedding, and the clonal selection algorithm (WPD-LLE-CSA) is proposed. The proposed method can identify faulty components in complicated analog circuits with an accuracy above 99%. Compared with existing feature extraction methods, it significantly reduces the number of features in less time while maintaining a high diagnosis rate; the dimensionality reduction ratio is also discussed. Several groups of experiments are conducted to demonstrate the efficiency of the proposed method.
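
The WPD-LLE-CSA pipeline first expands each output signal with wavelet packet decomposition and then reduces dimensionality with locally linear embedding; the clonal selection step, which tunes parameters, is omitted here. Assuming the PyWavelets and scikit-learn packages, the rough sketch below shows those first two stages on synthetic signals; the wavelet, decomposition level, energy features, and neighbour counts are arbitrary choices, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.manifold import LocallyLinearEmbedding

def wpd_energy_features(signal, wavelet="db4", level=3):
    """Energy of each terminal node of the wavelet packet tree (one feature per node)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(np.square(node.data)) for node in nodes])

rng = np.random.default_rng(5)
# Synthetic "circuit responses": two fault classes with slightly different frequency content.
t = np.linspace(0, 1, 512)
signals, labels = [], []
for fault in (0, 1):
    for _ in range(60):
        f = 20 + 15 * fault + rng.normal(0, 1)
        signals.append(np.sin(2 * np.pi * f * t) + 0.2 * rng.standard_normal(t.size))
        labels.append(fault)

features = np.vstack([wpd_energy_features(s) for s in signals])    # WPD stage
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)        # LLE stage
embedded = lle.fit_transform(features)
print("feature matrix:", features.shape, "->", embedded.shape)
```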

Quantitative Analysis for Plasma Etch Modeling Using Optical Emission Spectroscopy: Prediction of Plasma Etch Responses

  • Jeong, Young-Seon;Hwang, Sangheum;Ko, Young-Don
    • Industrial Engineering and Management Systems / v.14 no.4 / pp.392-400 / 2015
  • Monitoring of plasma etch processes for fault detection is one of the hallmark procedures in semiconductor manufacturing. Optical emission spectroscopy (OES) has been considered a gold standard for modeling plasma etching processes for on-line diagnosis and monitoring. However, statistical quantitative methods for processing OES data are still lacking, and such methods are urgently needed to handle high-dimensional OES data and improve the quality of etched wafers. Therefore, we propose a robust relevance vector machine (RRVM) for regression with statistical quantitative features for modeling etch rate and uniformity in plasma etch processes using OES data. To deal effectively with the complexity of OES data, we identify seven statistical features to extract from the raw OES data, thereby reducing the data dimensionality. The experimental results demonstrate that the proposed approach is well suited to high-accuracy monitoring of plasma etch responses obtained from OES.
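
The approach reduces each high-dimensional OES spectrum to a handful of statistical summary features before regression. The abstract does not list the paper's seven features, so the sketch below uses a generic set of summary statistics purely to illustrate the kind of dimensionality reduction involved; the feature choices, spectrum sizes, and synthetic data are all mine.

```python
import numpy as np
from scipy import stats

def spectrum_features(spectrum):
    """Reduce one OES spectrum (intensity per wavelength bin) to summary statistics.

    These seven statistics are illustrative placeholders, not the paper's features.
    """
    return np.array([
        spectrum.mean(),
        spectrum.std(ddof=1),
        stats.skew(spectrum),
        stats.kurtosis(spectrum),
        spectrum.max(),
        np.median(spectrum),
        np.percentile(spectrum, 90) - np.percentile(spectrum, 10),  # spread of the bulk
    ])

rng = np.random.default_rng(6)
spectra = rng.gamma(shape=2.0, scale=1.0, size=(50, 3648))    # 50 wafers x 3648 wavelength bins
X = np.vstack([spectrum_features(s) for s in spectra])
print("dimensionality:", spectra.shape[1], "->", X.shape[1])   # per-wafer features
# X can then feed a regression model (e.g., a relevance vector machine) for etch rate/uniformity.
```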

A Feature Vector Selection Method for Cancer Classification

  • Yun, Zheng;Keong, Kwoh-Chee
    • Proceedings of the Korean Society for Bioinformatics Conference / 2005.09a / pp.23-28 / 2005
  • The high dimensionality and insufficient sample sizes of gene expression profiles and proteomic profiles make feature selection a critical step in efficiently building accurate models for cancer problems based on such data sets. In this paper, we use the Discrete Function Learning algorithm to find discriminatory feature vectors based on information theory. The target feature vectors contain all or most of the information (in terms of entropy) about the class attribute. Two data sets are selected to validate our approach: a leukemia-subtype gene expression data set and an ovarian cancer proteomic data set. The experimental results show that our method generalizes well when applied to these insufficient and high-dimensional data sets. Furthermore, the obtained classifiers are highly understandable and accurate.
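
The Discrete Function Learning algorithm searches for a feature subset whose mutual information with the class attribute approaches the class entropy H(Y), so that the subset carries all or most of the class information. The sketch below is a simplified greedy version of that information-theoretic idea on discretized data; it is an illustration of the principle, not the authors' exact algorithm, and the synthetic target is constructed so the greedy search can succeed.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a discrete 1-D array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def joint_labels(columns):
    """Collapse several discrete columns into one joint discrete variable."""
    _, inverse = np.unique(np.column_stack(columns), axis=0, return_inverse=True)
    return inverse.ravel()

def mutual_info(feature_cols, y):
    """I(X_S ; Y) = H(X_S) + H(Y) - H(X_S, Y) for a set S of discrete features."""
    xs = joint_labels(feature_cols)
    return entropy(xs) + entropy(y) - entropy(joint_labels([xs, y]))

def greedy_select(X, y, tol=1e-6):
    """Greedily add the feature with the largest MI gain until I(S;Y) ~ H(Y)."""
    target, selected, current = entropy(y), [], 0.0
    while current < target - tol and len(selected) < X.shape[1]:
        best_mi, best_j = -1.0, None
        for j in range(X.shape[1]):
            if j in selected:
                continue
            mi = mutual_info([X[:, k] for k in selected + [j]], y)
            if mi > best_mi:
                best_mi, best_j = mi, j
        if best_mi <= current + tol:
            break
        selected.append(best_j)
        current = best_mi
    return selected

rng = np.random.default_rng(7)
X = rng.integers(0, 3, size=(200, 30))                              # discretized expression levels
y = (X[:, 4] > 0).astype(int) * 2 + (X[:, 9] > 1).astype(int)       # class determined by genes 4 and 9
print("selected features:", greedy_select(X, y))                    # recovers {4, 9} (in some order)
```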


Negative binomial loglinear mixed models with general random effects covariance matrix

  • Sung, Youkyung;Lee, Keunbaik
    • Communications for Statistical Applications and Methods / v.25 no.1 / pp.61-70 / 2018
  • Modeling the random effects covariance matrix in generalized linear mixed models (GLMMs) is an issue in the analysis of longitudinal categorical data because the covariance matrix can be high-dimensional and its estimate must be positive-definite. To satisfy these constraints, we consider the autoregressive and moving average Cholesky decomposition (ARMACD) to model the covariance matrix. The ARMACD yields a more flexible decomposition of the covariance matrix that provides generalized autoregressive parameters, generalized moving average parameters, and innovation variances. In this paper, we analyze longitudinal count data with overdispersion using GLMMs. We propose negative binomial loglinear mixed models for longitudinal count data and also model the random effects covariance matrix using the ARMACD. Epilepsy data are analyzed using the proposed model.
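
The full model couples a negative binomial log-linear mean with subject-specific random effects whose covariance matrix is parameterized by the ARMACD, and it is fitted with Bayesian methods. A GLMM of that kind needs dedicated software; as a much simpler point of reference, the sketch below fits only the fixed-effects negative binomial log-linear part with statsmodels on synthetic overdispersed counts. The random effects, the ARMACD structure, and dispersion estimation are omitted, and the covariates are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 400
treatment = rng.integers(0, 2, size=n)            # e.g., treatment indicator
visit = rng.integers(0, 4, size=n)                # e.g., visit number within a subject
X = sm.add_constant(np.column_stack([treatment, visit]))

# Overdispersed counts: Poisson with a gamma-distributed multiplicative frailty.
mu = np.exp(0.5 + 0.4 * treatment - 0.1 * visit)
frailty = rng.gamma(shape=2.0, scale=0.5, size=n)
y = rng.poisson(mu * frailty)

# Negative binomial log-linear model (fixed effects only; dispersion alpha fixed at 1.0
# purely for illustration -- the paper instead uses a mixed model with ARMACD covariance).
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0))
result = model.fit()
print(result.summary())
```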