An Empirical Study on Dimension Reduction

  • Suh, Changhee; Lee, Hakbae
    • Journal of the Korean Data Analysis Society / v.20 no.6 / pp.2733-2746 / 2018
  • The two inverse regression estimation methods SIR and SAVE, which estimate the central subspace, are computationally easy and widely used. However, SIR and SAVE may perform poorly in finite samples and require strong assumptions (linearity and/or constant covariance conditions) on the predictors. The two non-parametric estimation methods MAVE and dMAVE perform much better in finite samples than SIR and SAVE and impose no strong requirements on the predictors or the response variable. MAVE estimates the central mean subspace, whereas dMAVE estimates the central subspace. This paper explores and compares these four dimension reduction methods and reviews the algorithm of each. An empirical study on simulated data shows that MAVE and dMAVE perform better than SIR and SAVE across different models and different distributional assumptions on the predictors. However, a real data example with a binary response demonstrates that SAVE outperforms the other methods.
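
As a rough illustration of the inverse-regression idea behind the first two methods, here is a minimal sketch of basic SIR in Python; the slice count, the simulated single-index data, and the helper name sir_directions are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Estimate central-subspace directions by sliced inverse regression."""
    n, p = X.shape
    # Standardize predictors: z = Sigma^(-1/2) (x - mean)
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Slice on the ordered response and average Z within each slice
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M span the standardized central subspace
    w, v = np.linalg.eigh(M)
    dirs = v[:, np.argsort(w)[::-1][:n_dirs]]
    return Sigma_inv_sqrt @ dirs  # back to the original predictor scale

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)  # one true direction
print(sir_directions(X, y).ravel())  # loads mainly on the first predictor
```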

Social Media Data Analysis Trends and Methods

  • Rokaya, Mahmoud; Al Azwari, Sanaa
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.358-368 / 2022
  • Social media is a window through which individuals, communities, and companies spread ideas and promote trends and products. With these opportunities come challenges and problems related to security, privacy, and rights. The data accumulated from social media has also become a fertile source for analytics, inference, and experimentation with new technologies in data science. This paper emphasizes methods of trend analysis, especially ensemble learning methods, which embrace cooperation between different learning methods rather than competition between them. We discuss the most important trends in ensemble learning, their applications in analysing social media data, and the most important anticipated future trends.
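
To make the cooperation idea concrete, below is a hedged sketch of a soft-voting ensemble on a synthetic classification task; the three base learners and the stand-in features are illustrative assumptions, not methods the survey prescribes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Stand-in for engineered social-media features (e.g., hashtag counts)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",  # average predicted probabilities across learners
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```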

Robustness of model averaging methods for the violation of standard linear regression assumptions

  • Lee, Yongsu; Song, Juwon
    • Communications for Statistical Applications and Methods / v.28 no.2 / pp.189-204 / 2021
  • In a regression analysis, a single best model is usually selected among several candidate models. However, it is often useful to combine candidate models to achieve better performance, especially from a prediction viewpoint. Model combining methods such as stacking and Bayesian model averaging (BMA) average over candidate models. When the candidate models include the true model, BMA is expected to outperform stacking; when they do not, stacking is known to outperform BMA. Since the two approaches have different properties, it is difficult to determine which is more appropriate in other situations. In particular, few studies compare stacking and BMA when regression model assumptions are violated. Therefore, this paper compares the performance of model averaging methods and a single best model in linear regression when standard linear regression assumptions are violated. Simulations compared the methods on data with and without outliers, and on data with errors from a non-normal distribution. The model averaging methods were also applied to water pollution data with strong multicollinearity among variables. The simulation studies showed that stacking tends to outperform BMA and standard linear regression analysis (including stepwise selection) in terms of risk (see (3.1)) and prediction error (see (3.2)) when typical linear regression assumptions are violated.
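
As a concrete sketch of the stacking side of the comparison, the snippet below combines a few candidate regressions through a meta-learner on outlier-contaminated data; the candidate set, the HuberRegressor meta-learner, and the simulated data are illustrative assumptions, not the paper's design.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import HuberRegressor, LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(size=300)
y[:15] += 10 * rng.normal(size=15)  # inject outliers, violating normal errors

stack = StackingRegressor(
    estimators=[
        ("ols", LinearRegression()),
        ("ridge", Ridge(alpha=1.0)),
        ("tree", DecisionTreeRegressor(max_depth=3, random_state=0)),
    ],
    final_estimator=HuberRegressor(),  # robust meta-learner for the weights
)
stack.fit(X, y)
print("stacked R^2:", stack.score(X, y))
```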

Data Visualization using Linear and Non-linear Dimensionality Reduction Methods

  • Kim, Junsuk; Youn, Joosang
    • Journal of the Korea Society of Computer and Information / v.23 no.12 / pp.21-26 / 2018
  • As large amounts of data can now be stored efficiently, methods for extracting meaningful features from big data have become important. In particular, techniques for converting high-dimensional data to low-dimensional representations are crucial for data visualization. In this study, principal component analysis (PCA, a linear dimensionality reduction technique) and Isomap (a non-linear dimensionality reduction technique) are introduced and applied to neural big data obtained by functional magnetic resonance imaging (fMRI). First, we investigate how well the physical properties of the stimuli are preserved after dimensionality reduction. We then compare the amount of residual variance to quantify the information left unexplained. As a result, the Isomap embedding retains more information than principal component analysis. Our results demonstrate that big data analysis needs to consider not only linear but also non-linear characteristics.
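
A small sketch of this comparison using scikit-learn's PCA and Isomap follows; the residual-variance measure here (one minus the squared correlation between pairwise distances before and after embedding) is one common definition and an assumption about the paper's exact formula, and swiss-roll data stand in for the fMRI responses.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=800, random_state=0)

def residual_variance(X_high, X_low):
    """1 - r^2 between pairwise distances in the original and embedded spaces."""
    d_high, d_low = pdist(X_high), pdist(X_low)
    r = np.corrcoef(d_high, d_low)[0, 1]
    return 1.0 - r ** 2

for name, model in [("PCA", PCA(n_components=2)),
                    ("Isomap", Isomap(n_components=2, n_neighbors=10))]:
    emb = model.fit_transform(X)
    print(name, "residual variance:", round(residual_variance(X, emb), 3))
```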

Review on statistical methods for protecting privacy and measuring risk of disclosure when releasing information for public use

  • Lee, Yonghee
    • Journal of the Korean Data and Information Science Society / v.24 no.5 / pp.1029-1041 / 2013
  • Recently, along with the emergence of big data, there are increasing demands for releasing information and microdata for public use, so protecting privacy and measuring the risk of disclosure for released databases have become important issues in government and the business sector as well as the academic community. This paper reviews statistical methods for protecting privacy and measuring the risk of disclosure when microdata or a data analysis server is released for public use.
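
As one concrete example of the kind of disclosure-risk measure such reviews cover, the sketch below counts records that are unique on a set of quasi-identifiers (k = 1 in k-anonymity terms); the variables and toy microdata are illustrative assumptions.

```python
import pandas as pd

# Toy microdata standing in for a released file
micro = pd.DataFrame({
    "age_band": ["30-39", "30-39", "40-49", "40-49", "30-39"],
    "region":   ["A", "A", "B", "B", "C"],
    "sex":      ["F", "M", "F", "F", "M"],
})
quasi_identifiers = ["age_band", "region", "sex"]

# k(x) = number of records sharing x's quasi-identifier combination
k = micro.groupby(quasi_identifiers)["age_band"].transform("size")
print("records at risk (k = 1):", int((k == 1).sum()), "of", len(micro))
```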

Multi-dimensional Categorical Data Analysis with Bayesian Networks

  • Kim, Yong-Chul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.2 / pp.169-174 / 2018
  • In general, the analysis of variance (ANOVA) for continuous data and the chi-square test for discrete data are used for statistical analysis of effects and associations. Multidimensional data require analysis of hierarchical structure, for which a statistical linear model is adopted; the structure of the linear model requires normality of the data. Multidimensional categorical data analysis methods are used for causal relations, interactions, and correlation analysis. In this paper, a Bayesian network model using probability distributions is proposed to simplify the analysis procedure and to analyze interactions and causal relationships in categorical data.
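
A minimal sketch of the core idea follows: a tiny Bayesian network over categorical variables is fitted by estimating probability tables from frequencies. The three-variable structure A -> C <- B and the toy data are illustrative assumptions, not the paper's model.

```python
import pandas as pd

data = pd.DataFrame({
    "A": ["hi", "hi", "lo", "lo", "hi", "lo", "hi", "lo"],
    "B": ["y",  "n",  "y",  "n",  "y",  "y",  "n",  "n"],
    "C": ["1",  "0",  "1",  "0",  "1",  "0",  "0",  "0"],
})

# P(A), P(B): marginal tables; P(C | A, B): conditional probability table
p_a = data["A"].value_counts(normalize=True)
p_b = data["B"].value_counts(normalize=True)
p_c_given_ab = data.groupby(["A", "B"])["C"].value_counts(normalize=True)
print(p_c_given_ab)

# The joint factorizes as P(A) P(B) P(C | A, B); e.g. P(A=hi, B=y, C=1):
print(p_a["hi"] * p_b["y"] * p_c_given_ab[("hi", "y", "1")])
```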

Model-Ship Correlation Study on the Powering Performance for a Large Container Carrier

  • Hwangbo, S.M.; Go, S.C.
    • Journal of Ship and Ocean Technology / v.5 no.4 / pp.44-50 / 2001
  • Large container carriers suffer from a lack of knowledge on reliable correlation allowances between model tests and full-scale trials, especially at the fully loaded condition. A careful full-scale sea trial with a full loading of containers both in holds and on deck was carried out to clarify this. Model test results were analyzed by different methods, but with the same measured data, to determine appropriate correlation factors for each analysis method. Even though the model test technique is without doubt one of the most reliable tools for predicting full-scale powering performance, the assumptions and simplifications applied in the course of data manipulation and analysis need feedback from sea trial data for fine tuning, the so-called correlation factor. The best correlation allowances at the fully loaded condition for both the 2-dimensional and 3-dimensional analysis methods were found through the careful sea trial results and the related study on large container carriers.

A study on principal component analysis using the penalty method

  • Park, Cheolyong
    • Journal of the Korean Data and Information Science Society / v.28 no.4 / pp.721-731 / 2017
  • In this study, principal component analysis methods using the Lasso penalty are introduced. There are two popular methods that apply the Lasso penalty to principal component analysis. The first finds an optimal linear combination vector as the coefficient vector obtained by regressing each principal component on the original data matrix with a Lasso penalty (an elastic net penalty in general). The second finds an optimal linear combination vector by minimizing, with a Lasso penalty, the residual matrix obtained from approximating the original matrix by its singular value decomposition. We review these two methods in detail and show that they are especially advantageous for data sets with more variables than cases. The methods are also compared in an application to a real data set using the R program: the crime data in Ahamad (1967), which has more variables than cases.
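
As a hedged illustration of Lasso-penalized principal components, the sketch below uses scikit-learn's SparsePCA, an l1-penalized variant related to, but not identical to, the two algorithms reviewed, on simulated data with more variables than cases.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
n, p = 20, 50                            # more variables than cases
X = rng.normal(size=(n, p))
X[:, :5] += 3 * rng.normal(size=(n, 1))  # five correlated signal variables

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)

# The Lasso penalty zeroes most loadings, easing interpretation when p > n
print("nonzero loadings, PCA:      ", int((np.abs(pca.components_) > 1e-8).sum()))
print("nonzero loadings, SparsePCA:", int((spca.components_ != 0).sum()))
```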

Forecasting Symbolic Candle Chart-Valued Time Series

  • Park, Heewon; Sakaori, Fumitake
    • Communications for Statistical Applications and Methods / v.21 no.6 / pp.471-486 / 2014
  • This study introduces a new type of symbolic data, the candle chart-valued time series. We aggregate four stock indices (open, close, highest, and lowest) into one data point to summarize a huge amount of data; that is, we treat a candle chart constructed from the open, close, highest, and lowest indices as a single symbolic observation over a long period. The proposed candle chart-valued time series effectively summarizes and visualizes a large set of stock indices, making changes in the indices easy to understand. We also propose novel approaches for modeling candle chart-valued time series based on a combination of two midpoints and two half ranges: between the highest and lowest indices, and between the open and close indices. Furthermore, we propose three types of sums of squares for estimating the candle chart-valued time series model. The proposed methods take into account information not only from ordinary data but also from the interval of each object, and thus perform effectively for time series modeling (e.g., forecasting a future stock index). To evaluate the proposed methods, we describe a real data analysis of the stock market indices of five major Asian countries. The results show that the proposed approaches outperform classical data analysis in forecasting future stock indices.
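
The representation described above is easy to sketch: each (open, close, high, low) candle is recoded as two midpoints and two half ranges. The sample OHLC values below are illustrative assumptions.

```python
import pandas as pd

ohlc = pd.DataFrame(
    {"open": [100.0, 103.0], "close": [104.0, 101.0],
     "high": [106.0, 105.0], "low": [99.0, 100.0]},
    index=pd.to_datetime(["2014-01-02", "2014-01-03"]),
)

symbolic = pd.DataFrame({
    "mid_hl":  (ohlc["high"] + ohlc["low"]) / 2,        # midpoint of high/low
    "half_hl": (ohlc["high"] - ohlc["low"]) / 2,        # half range of high/low
    "mid_oc":  (ohlc["open"] + ohlc["close"]) / 2,      # midpoint of open/close
    "half_oc": (ohlc["open"] - ohlc["close"]).abs() / 2,  # half range of open/close
})
print(symbolic)  # each row is one candle chart-valued observation
```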

Comparing Accuracy of Imputation Methods for Incomplete Categorical Data

  • Shin, Hyung-Won; Sohn, So-Young
    • Proceedings of the Korean Statistical Society Conference / 2003.05a / pp.237-242 / 2003
  • Various estimation methods have been developed for the imputation of categorical missing data, including the modal category method, logistic regression, and association rules. In this study, we propose two imputation methods (neural network fusion and voting fusion) that combine the results of individual imputation methods. A Monte Carlo simulation is used to compare their performance. Five factors are used to simulate the missing data: (1) the true model for the data, (2) data size, (3) noise size, (4) percentage of missing data, and (5) missing pattern. Overall, neural network fusion performed best, while voting fusion was better than the individual imputation methods, although inferior to neural network fusion. The results of an additional real data analysis confirm the simulation results.
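
The voting-fusion idea admits a very small sketch: each record's imputed category is the majority vote over the individual methods' outputs. The three toy imputers below are stand-ins for the paper's modal-category, logistic-regression, and association-rule methods.

```python
from collections import Counter

def voting_fusion(candidate_lists):
    """Fuse per-method imputations by majority vote, record by record."""
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*candidate_lists)]

modal    = ["A", "B", "A", "C"]  # imputations from a modal-category method
logistic = ["A", "B", "B", "C"]  # imputations from logistic regression
assoc    = ["B", "B", "A", "C"]  # imputations from association rules
print(voting_fusion([modal, logistic, assoc]))  # -> ['A', 'B', 'A', 'C']
```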
