• Title/Summary/Keyword: Methods selection


An Analysis of Selection Methods for Gifted Education Candidates Through Observations and Recommendations - Focused on the 2011 Admissions of University-Affiliated Science Education Institutes for the Gifted -

  • Kwon, Ern-Gun;Jo, In-Seo
    • East Asian mathematical journal
    • /
    • v.28 no.2
    • /
    • pp.215-232
    • /
    • 2012
  • The methods of selection through observations and recommendations were recently introduced into the admissions process of the science education institutes for the gifted attached to 25 universities. This paper itemizes these screening methods, analyzes their problems, and proposes plans for improvement. Selection through observations and recommendations produced positive results: students' everyday classroom activities and attitudes were reflected in the evaluation, and the cost of private lessons was reduced. However, the methods also showed problems that need to be corrected, particularly in the document screening and interviews: it was not easy to assess candidates' giftedness, creativity, and potential for development from the submitted documents and within the limited interview time. We offer several suggestions for improving the implemented selection methods. Opportunities for teachers' in-service training in observation- and recommendation-based selection need to be expanded. The interview needs to be strengthened and given the same weight as the document screening. To secure the continuity of gifted education, clear guidelines from the Ministry of Education, Science and Technology, along with cooperation among the education institutes for the gifted, are essential.

Differences by Selection Method for Exposure Factor Input Distribution for Use in Probabilistic Consumer Exposure Assessment

  • Kang, Sohyun;Kim, Jinho;Lim, Miyoung;Lee, Kiyoung
    • Journal of Environmental Health Sciences
    • /
    • v.48 no.5
    • /
    • pp.266-271
    • /
    • 2022
  • Background: The selection of input parameter distributions is an important component of probabilistic exposure assessment. Goodness-of-fit (GOF) methods are used to determine the distributions of exposure factors, but there are no clear guidelines for choosing an appropriate GOF method. Objectives: The outcomes of probabilistic consumer exposure assessment were compared using five different GOF methods for selecting input distributions: the chi-squared test, Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A-D), Akaike information criterion (AIC), and Bayesian information criterion (BIC). Methods: Individual exposures were estimated from the product usage factor combinations of 10,000 respondents. The distribution of individual exposure was taken as the true value of population exposure. Results: Among the five GOF methods, the probabilistic exposure distributions obtained with the A-D and K-S methods were the most similar to the individual exposure estimates. Comparing the 95th percentiles of the probabilistic distributions with the individual estimates for the 10 consumer products (CPs), the differences ranged from 0.73- to 1.92-fold for the A-D method and from 0.73- to 1.60-fold (excluding tire-shine spray) for the K-S method. Conclusions: The choice of GOF method produced significant differences in exposure assessment results; therefore, GOF methods for probabilistic consumer exposure assessment should be selected carefully.
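The core of the comparison above is scoring candidate distributions against sample data. As a minimal illustration of the K-S criterion only (not the authors' code; the function names and the normal-vs-lognormal candidate pair are my own choices), the sketch below fits two families to a sample and keeps the one with the smaller K-S distance:

```python
import math
import numpy as np

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF and `cdf`."""
    x = np.sort(data)
    n = len(x)
    f = np.array([cdf(v) for v in x])
    i = np.arange(1, n + 1)
    return max(np.max(i / n - f), np.max(f - (i - 1) / n))

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def select_distribution(sample):
    """Fit normal and lognormal by moments and keep the smaller K-S distance."""
    mu, sigma = sample.mean(), sample.std()
    lmu, lsigma = np.log(sample).mean(), np.log(sample).std()
    d_norm = ks_statistic(sample, lambda v: norm_cdf(v, mu, sigma))
    d_lnorm = ks_statistic(sample, lambda v: norm_cdf(math.log(v), lmu, lsigma))
    return ("normal", d_norm) if d_norm < d_lnorm else ("lognormal", d_lnorm)

# Skewed usage-factor-like sample: the lognormal family should win.
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=1.0, size=500)
family, dist = select_distribution(sample)
```

The A-D statistic differs only in weighting the tails more heavily, which is one reason the two tests can disagree at the 95th percentile.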

Nonlinear Feature Transformation and Genetic Feature Selection: Improving System Security and Decreasing Computational Cost

  • Taghanaki, Saeid Asgari;Ansari, Mohammad Reza;Dehkordi, Behzad Zamani;Mousavi, Sayed Ali
    • ETRI Journal
    • /
    • v.34 no.6
    • /
    • pp.847-857
    • /
    • 2012
  • Intrusion detection systems (IDSs) play an important role in system defense and security. Recently, most IDS methods have used transformed features, selected features, or original features. Both feature transformation and feature selection have their advantages. Neighborhood component analysis feature transformation and genetic feature selection (NCAGAFS) is proposed in this research. NCAGAFS is based on soft computing and data mining and combines the advantages of both transformation and selection: it transforms features via neighborhood component analysis and then chooses the best features with a classifier-based genetic feature selection method. This novel approach is verified on the KDD Cup99 dataset, where it demonstrates higher performance than other well-known methods under various classifiers.
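Genetic feature selection of the kind the abstract describes can be sketched with a plain bit-string GA. The toy version below is my own construction, not NCAGAFS: it uses a nearest-centroid classifier as the fitness function, a small synthetic dataset in which only features 0 and 1 carry signal, and a small per-feature penalty to favour compact subsets:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: features 0 and 1 carry the class signal, 2-5 are noise.
n = 100
X0 = rng.normal(0.0, 1.0, (n, 6)); X0[:, 1] += 3.0
X1 = rng.normal(0.0, 1.0, (n, 6)); X1[:, 0] += 3.0
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

def fitness(mask):
    """Nearest-centroid accuracy on the selected features, minus a
    small penalty per selected feature."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean() - 0.02 * mask.sum()

# Generational GA: tournament selection, uniform crossover, bit-flip mutation.
pop = rng.integers(0, 2, (24, 6))
best_mask, best_fit = None, -1.0
for gen in range(40):
    fits = np.array([fitness(m) for m in pop])
    i = int(fits.argmax())
    if fits[i] > best_fit:                      # keep the best ever seen
        best_fit, best_mask = fits[i], pop[i].copy()
    new = []
    for _ in range(len(pop)):
        a, b = rng.integers(0, len(pop), 2)
        p1 = pop[a] if fits[a] >= fits[b] else pop[b]
        a, b = rng.integers(0, len(pop), 2)
        p2 = pop[a] if fits[a] >= fits[b] else pop[b]
        child = np.where(rng.random(6) < 0.5, p1, p2)   # uniform crossover
        child ^= (rng.random(6) < 0.1).astype(child.dtype)  # mutation
        new.append(child)
    pop = np.array(new)
```

In the paper's pipeline the inputs to a search like this would be NCA-transformed features rather than raw columns.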

A Decision Support System for Supplier Selection in B2B E-procurement (전자조달을 위한 공급자 선택 지원 시스템의 개발)

  • 하성호;남미성
    • Korean Management Science Review
    • /
    • v.21 no.1
    • /
    • pp.113-129
    • /
    • 2004
  • Nowadays many enterprises build e-procurement systems. E-procurement is a Web-based procurement process, and its functionalities are considered important in B2B e-commerce. Buyers should select competent suppliers for successful e-procurement. Therefore, this study proposes a method using the analytic hierarchy process (AHP) for building a Web-based supplier selection system. In detail, the purposes of this study are (1) to review methods previously used when buyers select suppliers and to extract important selection criteria; (2) to explain the extended AHP method adopted by this study among supplier selection methods; (3) to describe the supplier selection steps using the extended AHP; and (4) to propose a decision support system embedding the methodology described above. The proposed system comprises three phases: the first evaluates suppliers at the enterprise level; the second evaluates them at each transaction level; and the third post-evaluates them.
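The details of the extended AHP are in the paper itself, but the basic AHP step it builds on, deriving criterion weights from a pairwise comparison matrix and checking its consistency, can be sketched as follows (the criteria names and the judgment values are illustrative, not taken from the study):

```python
import numpy as np

# Hypothetical supplier-selection criteria (names are illustrative only).
criteria = ["price", "quality", "delivery", "service"]

# Pairwise comparisons on Saaty's 1-9 scale: A[i, j] says how much more
# important criterion i is than criterion j; A[j, i] is the reciprocal.
A = np.array([
    [1.0,   3.0,   5.0, 7.0],
    [1/3.0, 1.0,   3.0, 5.0],
    [1/5.0, 1/3.0, 1.0, 3.0],
    [1/7.0, 1/5.0, 1/3.0, 1.0],
])

# Priority vector: principal eigenvector, normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
k = int(vals.real.argmax())
w = np.abs(vecs[:, k].real)
w = w / w.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1), divided by Saaty's
# random index for n = 4 (RI = 0.90); CR < 0.1 is conventionally acceptable.
n = len(A)
lambda_max = vals.real.max()
CR = ((lambda_max - n) / (n - 1)) / 0.90
```

In a full AHP hierarchy the same computation is repeated for the alternatives under each criterion, and the weight vectors are combined down the tree.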

Subset selection in multiple linear regression: An improved Tabu search

  • Bae, Jaegug;Kim, Jung-Tae;Kim, Jae-Hwan
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.40 no.2
    • /
    • pp.138-145
    • /
    • 2016
  • This paper proposes an improved tabu search method for subset selection in multiple linear regression models. Variable selection is a vital combinatorial optimization problem in multivariate statistics: the optimal subset of variables must be selected in order to reliably construct a multiple linear regression model. Its applications range widely from machine learning, time-series prediction, and multi-class classification to noise detection. Since this problem is NP-complete, finding the optimal solution becomes more difficult as the number of variables increases. Two typical metaheuristic methods have been developed to tackle the problem: the tabu search algorithm and a hybrid genetic and simulated annealing algorithm. However, both have shortcomings: the tabu search method requires a large amount of computing time, and the hybrid algorithm produces a less accurate solution. To overcome these shortcomings, we propose an improved tabu search algorithm that reduces the neighborhood moves and adopts an effective move search strategy. To evaluate the performance of the proposed method, comparative studies are performed on small literature data sets and on large simulation data sets. Computational results show that the proposed method outperforms the two metaheuristic methods in terms of computing time and solution quality.
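A baseline tabu search for this problem, the plain variant the authors improve on, reconstructed here from the standard algorithm rather than from their paper, flips one variable in or out per move and forbids recently flipped variables for a few iterations. BIC as the subset score is my choice for the sketch; any subset criterion would do:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic regression: 8 candidate predictors, only 0, 1, 2 are active.
n, p = 200, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 1.0 * X[:, 2] + rng.normal(0.0, 0.5, n)

def bic(mask):
    """BIC of the least-squares fit on the selected columns (plus intercept)."""
    k = int(mask.sum())
    Z = np.column_stack([np.ones(n), X[:, mask.astype(bool)]])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return n * np.log(float(resid @ resid) / n) + (k + 1) * np.log(n)

# Tabu search: the neighbourhood flips one variable; a recently flipped
# variable is tabu unless the move beats the best score so far (aspiration).
current = np.zeros(p, dtype=int)
best, best_score = current.copy(), bic(current)
tabu = np.zeros(p, dtype=int)   # iteration until which a flip stays forbidden
for it in range(50):
    cand, cand_score, cand_j = None, np.inf, -1
    for j in range(p):
        nb = current.copy()
        nb[j] ^= 1
        s = bic(nb)
        if (tabu[j] <= it or s < best_score) and s < cand_score:
            cand, cand_score, cand_j = nb, s, j
    current = cand                  # accept the best admissible move, even uphill
    tabu[cand_j] = it + 4           # tabu tenure of a few iterations
    if cand_score < best_score:
        best, best_score = current.copy(), cand_score
```

The paper's improvement targets exactly the expensive part of this loop: scoring every neighbour at every iteration.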

A Study on Feature Selection for kNN Classifier using Document Frequency and Collection Frequency (문헌빈도와 장서빈도를 이용한 kNN 분류기의 자질선정에 관한 연구)

  • Lee, Yong-Gu
    • Journal of Korean Library and Information Science Society
    • /
    • v.44 no.1
    • /
    • pp.27-47
    • /
    • 2013
  • This study investigated the classification performance of a kNN classifier using the feature selection methods based on document frequency(DF) and collection frequency(CF). The results of the experiments, which used HKIB-20000 data, were as follows. First, the feature selection methods that used high-frequency terms and removed low-frequency terms by the CF criterion achieved better classification performance than those using the DF criterion. Second, neither DF nor CF methods performed well when low-frequency terms were selected first in the feature selection process. Last, combining CF and DF criteria did not result in better classification performance than using the single feature selection criterion of DF or CF.
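The two frequency criteria compared above are straightforward to compute; the difference is whether repeated occurrences within one document count. A minimal sketch on a toy collection (the function name is my own):

```python
from collections import Counter

# Toy collection: each document is a list of tokens.
docs = [
    "the cat sat on the mat".split(),
    "the dog sat".split(),
    "cat and dog play".split(),
    "stocks fell on the news".split(),
]

# Document frequency (DF): number of documents containing the term.
df = Counter()
for d in docs:
    df.update(set(d))       # set() counts each term once per document

# Collection frequency (CF): total occurrences across the collection.
cf = Counter()
for d in docs:
    cf.update(d)

def select_features(k, criterion):
    """Keep the k highest-frequency terms under the given criterion."""
    return [t for t, _ in criterion.most_common(k)]
```

The study's finding is about which counter to rank by: selecting high-CF terms (and discarding low-CF ones) beat the analogous DF-based rule for the kNN classifier.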

Minimum Message Length and Classical Methods for Model Selection in Univariate Polynomial Regression

  • Viswanathan, Murlikrishna;Yang, Young-Kyu;WhangBo, Taeg-Keun
    • ETRI Journal
    • /
    • v.27 no.6
    • /
    • pp.747-758
    • /
    • 2005
  • The problem of selection among competing models has been a fundamental issue in statistical data analysis. Good fits to data can be misleading, since they can result from properties of the model that have nothing to do with it being a close approximation to the source distribution of interest (for example, overfitting). In this study we focus on the preference among models from a family of polynomial regressors. Three decades of research have spawned a number of plausible techniques for the selection of models, namely, Akaike's Finite Prediction Error (FPE) and Information Criterion (AIC), Schwarz's criterion (SCH), Generalized Cross Validation (GCV), Wallace's Minimum Message Length (MML), Minimum Description Length (MDL), and Vapnik's Structural Risk Minimization (SRM). The fundamental similarity between all these principles is their attempt to define an appropriate balance between the complexity of models and their ability to explain the data. This paper presents an empirical study of the above principles in the context of model selection, where the models under consideration are univariate polynomials. The paper includes a detailed empirical evaluation of the model selection methods on six target functions, with varying sample sizes and added Gaussian noise. The results from the study appear to provide strong evidence in support of the MML- and SRM-based methods over the other standard approaches (FPE, AIC, SCH, and GCV).
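Of the criteria listed, the classical penalized-likelihood ones are the easiest to demonstrate. The sketch below selects a univariate polynomial degree by Schwarz's criterion (BIC) on synthetic data; it illustrates the complexity-versus-fit trade-off all these principles share, not the MML or SRM machinery from the paper, and the target function and noise level are my own choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy samples from a cubic target on [-1, 1].
x = np.linspace(-1.0, 1.0, 80)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 3.0 * x**3 + rng.normal(0.0, 0.2, x.size)
n = x.size

def bic(degree):
    """Schwarz's criterion for a least-squares polynomial fit:
    n*log(RSS/n) + k*log(n), with k = degree + 1 coefficients."""
    coef = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coef, x) - y) ** 2))
    return n * np.log(rss / n) + (degree + 1) * np.log(n)

# Higher degrees always lower the RSS; the log(n) penalty per coefficient
# is what stops the criterion from rewarding overfitting.
best_degree = min(range(1, 9), key=bic)
```

AIC and FPE differ only in the penalty term; MML and SRM replace the penalty with message-length and capacity arguments, respectively.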


A study on bandwidth selection based on ASE for nonparametric density estimators

  • Kim, Tae-Yoon
    • Journal of the Korean Statistical Society
    • /
    • v.29 no.3
    • /
    • pp.307-313
    • /
    • 2000
  • Suppose we have a set of data X1, ..., Xn and employ a kernel density estimator to estimate the marginal density of X. In this article the bandwidth selection problem for the kernel density estimator is examined closely. In particular, the Kullback-Leibler method (a bandwidth selection method based on average square error (ASE)) is considered.
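The Kullback-Leibler selector mentioned above is, in practice, likelihood cross-validation: choose the bandwidth maximizing the leave-one-out log-likelihood of the kernel estimate. A minimal numpy sketch (my own construction, not the paper's procedure; the grid and sample are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 200)
n = data.size

def loo_log_likelihood(h):
    """Leave-one-out log-likelihood of a Gaussian-kernel density
    estimate with bandwidth h (the likelihood / K-L CV criterion)."""
    diff = data[:, None] - data[None, :]
    K = np.exp(-0.5 * (diff / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    np.fill_diagonal(K, 0.0)          # leave each point out of its own estimate
    f_loo = K.sum(axis=1) / (n - 1)
    return float(np.log(f_loo).sum())

# Grid search for the bandwidth maximizing the criterion.
bandwidths = np.linspace(0.05, 1.5, 30)
h_cv = max(bandwidths, key=loo_log_likelihood)

# Silverman's rule of thumb, a common reference point.
h_silverman = 1.06 * data.std() * n ** (-1 / 5)
```

ASE-based selectors replace the log-likelihood with a squared-error discrepancy between the estimate and the true density; the grid-search structure is the same.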


New Feature Selection Method for Text Categorization

  • Wang, Xingfeng;Kim, Hee-Cheol
    • Journal of information and communication convergence engineering
    • /
    • v.15 no.1
    • /
    • pp.53-61
    • /
    • 2017
  • The preferred feature selection methods for text classification are filter-based. In a common filter-based feature selection scheme, unique scores are assigned to features; these features are then sorted according to their scores, and the top-N features are added to the feature set. In this paper, we propose an improved global feature selection scheme in which this last step is modified to obtain a more representative feature set. The proposed method aims to improve the classification performance of global feature selection methods by creating a feature set that represents all classes almost equally. For this purpose, a local feature selection method is used to label features according to their discriminative power on classes; these labels are used while producing the feature sets. Experimental results obtained using the well-known 20 Newsgroups and Reuters-21578 datasets with the k-nearest neighbor algorithm and a support vector machine indicate that the proposed method improves classification performance in terms of the widely known $F_1$ metric.
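The modified last step, labeling each feature by the class it discriminates best and then drawing from each class in turn, can be sketched as below. The toy corpus and the difference-of-proportions local score are my own stand-ins; the paper's actual scoring functions may differ:

```python
from collections import Counter, defaultdict

# Toy labelled collection: (tokens, class).
corpus = [
    ("win match goal team".split(), "sports"),
    ("team win cup".split(), "sports"),
    ("election vote party".split(), "politics"),
    ("party vote win".split(), "politics"),
]

# Per-class document frequency of each term.
df = defaultdict(Counter)
n_docs = Counter()
for tokens, c in corpus:
    n_docs[c] += 1
    df[c].update(set(tokens))

classes = sorted(n_docs)

def score(term, c):
    """Local score: how much more often the term occurs in documents of
    class c than in documents of the other classes."""
    inside = df[c][term] / n_docs[c]
    others = sum(df[o][term] for o in classes if o != c)
    outside = others / sum(n_docs[o] for o in classes if o != c)
    return inside - outside

def balanced_select(k):
    """Round-robin over classes: each class contributes its strongest
    remaining term, so the final set represents all classes (k must not
    exceed the vocabulary size)."""
    vocab = {t for tokens, _ in corpus for t in tokens}
    ranked = {c: sorted(vocab, key=lambda t: score(t, c), reverse=True)
              for c in classes}
    chosen = []
    while len(chosen) < k:
        for c in classes:
            while ranked[c] and ranked[c][0] in chosen:
                ranked[c].pop(0)        # skip terms another class already took
            if ranked[c] and len(chosen) < k:
                chosen.append(ranked[c].pop(0))
    return chosen
```

A plain global top-N pick from the pooled scores could let one dominant class crowd out the others; the round-robin is what enforces the near-equal representation the abstract describes.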

Comparison of Feature Selection Methods in Support Vector Machines (지지벡터기계의 변수 선택방법 비교)

  • Kim, Kwangsu;Park, Changyi
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.1
    • /
    • pp.131-139
    • /
    • 2013
  • Support vector machines (SVMs) may perform poorly in the presence of noise variables; in addition, it is difficult to identify the importance of each variable in the resulting classifier. Feature selection can improve both the interpretability and the accuracy of an SVM. Most existing studies concern feature selection in the linear SVM through penalty functions that yield sparse solutions. Note, however, that nonlinear kernels are usually adopted in practice for classification accuracy, so feature selection is still desirable for nonlinear SVMs. In this paper, we compare the performances of nonlinear feature selection methods such as component selection and smoothing operator (COSSO) and kernel iterative feature extraction (KNIFE) on simulated and real data sets.
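COSSO and KNIFE themselves are involved; as a generic stand-in for feature selection with a nonlinear classifier (my own construction, not either method and not an SVM), the sketch below runs greedy backward elimination scored by leave-one-out 1-nearest-neighbour accuracy, on data whose two informative features are only nonlinearly separable (concentric rings):

```python
import numpy as np

rng = np.random.default_rng(5)

# Two classes separated nonlinearly in features 0 and 1 (inner disk vs
# outer ring); features 2-4 are pure noise.
n = 80
r = np.concatenate([rng.uniform(0.0, 1.0, n), rng.uniform(2.0, 3.0, n)])
theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta),
                     rng.normal(size=(2 * n, 3))])
y = np.array([0] * n + [1] * n)

def loo_knn_accuracy(mask):
    """Leave-one-out 1-NN accuracy on the selected features: a cheap
    nonlinear validation score standing in for an SVM's."""
    Xs = X[:, list(mask)]
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)         # a point cannot be its own neighbour
    return float((y[d.argmin(axis=1)] == y).mean())

# Greedy backward elimination: repeatedly drop the feature whose removal
# helps the score most (or hurts least), until every removal hurts.
features = set(range(X.shape[1]))
score = loo_knn_accuracy(features)
while len(features) > 1:
    s, j = max((loo_knn_accuracy(features - {j}), j) for j in sorted(features))
    if s >= score:
        features, score = features - {j}, s
    else:
        break
```

A linear penalty-based selector would fail here because neither informative feature separates the classes on its own axis; that is the situation the nonlinear methods in the paper are built for.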