• Title/Summary/Keyword: Information Criterion


Re-prioritizing of Prospective and Strategic Technologies for Future Agricultural Mechanization using AHP (AHP를 이용한 농업기계분야의 미래 유망 및 전략 기술에 대한 우선순위 재설정)

  • Cho, K.T.;Chang, D.I.;Shin, B.C.;Han, J.I.;Kim, J.Y.;Lee, J.I.
    • Journal of Biosystems Engineering
    • /
    • v.33 no.2
    • /
    • pp.142-148
    • /
    • 2008
  • This study focused on setting priorities for future core technologies in agricultural mechanization using the AHP (Analytic Hierarchy Process). A total of 23 technologies were selected by specialists. The evaluation criteria for priority setting were 'technology', 'marketability', and 'publicity'. Thirteen specialists in agricultural mechanization answered the AHP questionnaire. As a result, 'technology' was found to be the most important evaluation criterion, and 'feasibility' (under 'technology'), 'market growth' (under 'marketability'), and 'impact on other industries' (under 'publicity') were the most important sub-criteria within their respective criteria. The highest-priority technology was 'Development of portable safety evaluation system for fresh and convenient agricultural products'.
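The AHP weighting step behind this kind of priority setting can be sketched as follows: the priority weights are the principal eigenvector of a reciprocal pairwise comparison matrix, and a consistency ratio checks whether the judgments are coherent. The 3x3 judgment values below are purely illustrative, not taken from the paper:

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from the principal eigenvector of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                 # index of the dominant eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

# Hypothetical pairwise judgments for the paper's three criteria
# (technology vs. marketability vs. publicity); values are made up.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, lam_max = ahp_weights(A)
n = A.shape[0]
CI = (lam_max - n) / (n - 1)                 # consistency index
CR = CI / 0.58                               # random index RI = 0.58 for n = 3
```

A CR below 0.1 is the conventional threshold for accepting the judgments as consistent.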

New stop criterion using the absolute mean value of LLR difference for Turbo Codes (LLR 차의 절대 평균값을 이용한 터보부호의 새로운 반복중단 알고리즘)

  • Shim ByoungSup;Lee Wanbum;Jeong DaeHo;Lim SoonJa;Kim TaeHyung;Kim HwanYong
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.42 no.5 s.335
    • /
    • pp.39-46
    • /
    • 2005
  • It is well known that turbo codes perform better in the AWGN channel as the number of iterations and the interleaver size increase. However, larger iteration counts and interleaver sizes also require considerable decoding delay and computation. It is therefore important to devise an efficient criterion for stopping the iteration process, preventing unnecessary computation and decoding delay. This paper proposes an efficient stopping criterion for iterative decoding that uses the absolute mean of the LLR difference between consecutive iterations. Simulation results verify that the proposed stopping criterion reduces the average number of decoding iterations compared to conventional schemes, with negligible degradation in error performance.
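The stopping rule the abstract describes can be sketched like this: after each iteration, compare the new LLRs to the previous ones and stop once the mean absolute difference falls below a threshold. The decoder interface and threshold below are hypothetical stand-ins; a real turbo decoder would produce the LLRs via BCJR/SOVA half-iterations:

```python
import numpy as np

def decode_with_stop(llr_per_iter, threshold=0.1, max_iter=8):
    """Run iterative decoding, stopping early when the absolute mean
    LLR difference between consecutive iterations drops below `threshold`."""
    prev = None
    for it in range(1, max_iter + 1):
        llr = llr_per_iter(it)               # one decoding iteration (stand-in)
        if prev is not None and np.mean(np.abs(llr - prev)) < threshold:
            return it, llr                   # converged: stop early
        prev = llr
    return max_iter, prev

# Toy LLR trajectory converging geometrically toward +4
# (a stand-in for the extrinsic LLRs a real decoder would emit).
target = np.full(16, 4.0)
iters, final_llr = decode_with_stop(lambda it: target * (1.0 - 2.0 ** -it))
```

With this toy trajectory the per-iteration change is exactly 4·2^-it, so decoding halts as soon as that falls under the threshold rather than running all `max_iter` iterations.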

Identifying Statistically Significant Gene-Sets by Gene Set Enrichment Analysis Using Fisher Criterion (Fisher Criterion을 이용한 Gene Set Enrichment Analysis 기반 유의 유전자 집합의 검출 방법 연구)

  • Kim, Jae-Young;Shin, Mi-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.4
    • /
    • pp.19-26
    • /
    • 2008
  • Gene set enrichment analysis (GSEA) is a computational method that identifies statistically significant gene sets showing significant differences between two groups of microarray expression profiles and simultaneously uncovers their biological meaning in an elegant way by employing gene annotation databases such as Cytogenetic Band, KEGG pathways, and gene ontology. For gene set enrichment analysis, all the genes in a given dataset are first ranked by the signal-to-noise ratio between the groups, and further analyses then proceed from this ranking. Despite its impressive results in several previous studies, however, ranking genes by the signal-to-noise ratio makes it difficult to consider highly up-regulated and highly down-regulated genes at the same time as candidate significant genes, even though such patterns plausibly reflect situations arising in metabolic and signaling pathways. To deal with this problem, in this article we investigate a gene set enrichment analysis method that uses the Fisher criterion for gene ranking, and we evaluate its effects in leukemia-related pathway analyses.
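The contrast between the two ranking statistics can be shown in a few lines: the signed signal-to-noise ratio separates up- and down-regulated genes to opposite ends of the ranking, while the Fisher criterion, being a squared (non-negative) quantity, scores both equally. A minimal sketch, using the standard textbook forms of both statistics:

```python
import numpy as np

def snr(x1, x2):
    """Signed signal-to-noise ratio used for standard GSEA gene ranking."""
    return (x1.mean() - x2.mean()) / (x1.std(ddof=1) + x2.std(ddof=1))

def fisher_criterion(x1, x2):
    """Fisher criterion: squared mean difference over summed variances.
    Non-negative, so strongly up- and down-regulated genes rank alike."""
    return (x1.mean() - x2.mean()) ** 2 / (x1.var(ddof=1) + x2.var(ddof=1))

# Toy expression values for one up-regulated and one down-regulated gene.
up_gene = (np.array([5.0, 6.0, 7.0]), np.array([1.0, 2.0, 3.0]))
down_gene = (up_gene[1], up_gene[0])
```

Here `snr` gives the two genes opposite signs, whereas `fisher_criterion` gives them identical scores, which is exactly the symmetry the paper exploits.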

Study on the Reliability Evaluation Method of Components when Operating in Different Environments (이종 환경에서 운용되는 부품의 신뢰도 평가 방법 연구)

  • Hwang, Jeong Taek;Kim, Jong Hak;Jeon, Ju Yeon;Han, Jae Hyeon
    • Journal of the Korean Society of Safety
    • /
    • v.32 no.5
    • /
    • pp.115-121
    • /
    • 2017
  • This paper introduces the main modeling assumptions and data structures associated with right-censored data, and describes methodological ideas for analyzing such field failure data for components operating in different environments. The Kaplan-Meier method is the most popular method used for survival analysis. Together with the log-rank test, it provides an opportunity to estimate survival probabilities and to compare survival between groups. An important advantage of the Kaplan-Meier curve is that the method can take into account some types of censored data, particularly right-censoring. This non-parametric method was used to verify the equality of part lifetimes in different environments. We then performed life distribution analysis using parametric methods, simulating data from three distributions: exponential, normal, and Weibull. This allowed us to compare the estimates to the known true values and to quantify the reliability indices. The Akaike information criterion (AIC) was used to find a suitable lifetime distribution: the distribution with the smallest AIC value is selected as the best model of the failure data. Both the non-parametric and parametric methods were analyzed using R, a popular statistical program.
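The smallest-AIC selection rule the abstract describes can be sketched with closed-form maximum-likelihood fits. Only the exponential and normal candidates are shown here; the Weibull fit the paper also uses requires a numerical MLE and is omitted from this sketch:

```python
import numpy as np

def aic_exponential(x):
    """AIC of an exponential fit; the rate MLE is 1/mean (k = 1 parameter)."""
    lam = 1.0 / x.mean()
    loglik = len(x) * np.log(lam) - lam * x.sum()
    return 2 * 1 - 2 * loglik

def aic_normal(x):
    """AIC of a normal fit with MLE mean and variance (k = 2 parameters)."""
    var = x.var()
    loglik = -0.5 * len(x) * (np.log(2 * np.pi * var) + 1)
    return 2 * 2 - 2 * loglik

# Simulated lifetimes from an exponential law, mimicking the paper's
# simulate-then-recover check (true distribution known in advance).
rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500)
aics = {"exponential": aic_exponential(x), "normal": aic_normal(x)}
best = min(aics, key=aics.get)               # smallest AIC wins
```

Because the data really are exponential, the exponential model's AIC comes out clearly smaller, which is the "smallest AIC is the best model" rule in action.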

Region Classification and Image Compression Based on Region-Based Prediction (RBP) Model

  • Cassio-M.Yorozuya;Yu-Liu;Masayuki-Nakajima
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1998.06b
    • /
    • pp.165-170
    • /
    • 1998
  • This paper presents a new prediction method, the region-based prediction (RBP) model, in which the context used for prediction contains regions instead of individual pixels. RBP has a useful property: it can partition a cartoon image into two distinctive types of regions, one containing full-color backgrounds and the other containing boundaries, edges, and homo-chromatic areas. With the development of computer techniques, synthetic images created with CG (computer graphics) have become attractive. As with the general demand for data compression, it is imperative to efficiently compress synthetic images, such as CG-generated cartoon animation, for storage of finite capacity and transmission over narrow bandwidth. This paper applies a lossy compression method to full-color regions and a lossless compression method to homo-chromatic and boundary regions. Two criteria for partitioning are described: a constant criterion and a variable criterion. The latter, in the form of a linear function, gives a different classification threshold depending on the content of the image of interest. We carry out experiments by applying our method to a sequence of cartoon animation. Compared with the available image compression standard MPEG-1, our method gives superior results in both compression ratio and complexity.
Repetitive model refinement for structural health monitoring using efficient Akaike information criterion

  • Lin, Jeng-Wen
    • Smart Structures and Systems
    • /
    • v.15 no.5
    • /
    • pp.1329-1344
    • /
    • 2015
  • The stiffness of a structure is one of several structural signals that are useful indicators of the amount of damage that has been done to the structure. To accurately estimate the stiffness, an equation of motion containing a stiffness parameter must first be established by expansion as a linear series model, a Taylor series model, or a power series model. The model is then used in multivariate autoregressive modeling to estimate the structural stiffness and compare it to the theoretical value. Stiffness assessment for modeling purposes typically involves the use of one of three statistical model refinement approaches, one of which is the efficient Akaike information criterion (AIC) proposed in this paper. If a newly added component of the model results in a decrease in the AIC value, compared to the value obtained with the previously added component(s), it is statistically justifiable to retain this new component; otherwise, it should be removed. This model refinement process is repeated until all of the components of the model are shown to be statistically justifiable. In this study, this model refinement approach was compared with the two other commonly used refinement approaches: principal component analysis (PCA) and principal component regression (PCR) combined with the AIC. The results indicate that the proposed AIC approach produces more accurate structural stiffness estimates than the other two approaches.
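The keep-if-AIC-decreases refinement loop described in the abstract can be sketched on an ordinary linear regression, used here as a stand-in for the paper's series-model components (the candidate columns and data below are illustrative, not the paper's structural model):

```python
import numpy as np

def aic_ols(X, y):
    """AIC of a least-squares fit assuming Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), X.shape[1]
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * k - 2 * loglik

def refine(candidates, y):
    """Greedily add a candidate component only if it lowers the AIC,
    mirroring the paper's keep-if-AIC-decreases refinement rule."""
    X = np.ones((len(y), 1))                 # start from intercept only
    best, kept = aic_ols(X, y), []
    for j, c in enumerate(candidates):
        trial = np.column_stack([X, c])
        a = aic_ols(trial, y)
        if a < best:                         # retain the component
            X, best, kept = trial, a, kept + [j]
    return kept, best

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)                    # genuinely predictive component
junk = rng.normal(size=200)                  # irrelevant candidate
y = 2.0 * x1 + 0.1 * rng.normal(size=200)
kept, final_aic = refine([x1, junk], y)
```

The predictive component is retained because it lowers the AIC sharply, while a pure-noise column typically fails the test: its tiny likelihood gain does not offset the 2-per-parameter penalty.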

NEWLY DISCOVERED z ~ 5 QUASARS BASED ON DEEP LEARNING AND BAYESIAN INFORMATION CRITERION

  • Shin, Suhyun;Im, Myungshin;Kim, Yongjung;Jiang, Linhua
    • Journal of The Korean Astronomical Society
    • /
    • v.55 no.4
    • /
    • pp.131-138
    • /
    • 2022
  • We report the discovery of four quasars with M1450 ≳ -25.0 mag at z ~ 5 and a supermassive black hole mass measurement for one of them. They were selected as promising high-redshift quasar candidates via deep learning and the Bayesian information criterion, which are expected to be effective in discriminating quasars from late-type stars and high-redshift galaxies. The candidates were observed with the Double Spectrograph on the Palomar 200-inch Hale Telescope. They show clear Lyα breaks at about 7000-8000 Å, indicating that they are quasars at 4.7 < z < 5.6. For HSC J233107-001014, we measure the mass of its supermassive black hole (SMBH) using its C IV λ1549 emission line. The SMBH mass and Eddington ratio of the quasar are found to be ~10^8 M_⊙ and ~0.6, respectively. This suggests that this quasar possibly harbors a fast-growing SMBH near the Eddington limit despite its faintness (L_Bol < 10^46 erg s^-1). Our 100% quasar identification rate supports the high efficiency of our deep learning and Bayesian information criterion selection method, which can be applied to future surveys to enlarge the high-redshift quasar sample.
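The BIC comparison used in such candidate selection reduces to computing k·ln(n) − 2·ln(L) for each model family and taking the minimum. The log-likelihoods and parameter counts below are purely illustrative placeholders, not values from the paper:

```python
import numpy as np

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
    return k * np.log(n) - 2 * loglik

# Hypothetical template-fit log-likelihoods for one candidate,
# fit over n_bands photometric points (numbers made up for illustration).
n_bands = 7
models = {
    "quasar": bic(loglik=-3.1, k=2, n=n_bands),
    "late-type star": bic(loglik=-9.4, k=1, n=n_bands),
}
label = min(models, key=models.get)          # classification by smallest BIC
```

The BIC's ln(n) penalty means a better-fitting but more complex template must earn its extra parameters, which is what makes the criterion useful for separating quasars from contaminants.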

Robust varying coefficient model using L1 regularization

  • Hwang, Changha;Bae, Jongsik;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.4
    • /
    • pp.1059-1066
    • /
    • 2016
  • In this paper we propose a robust version of varying coefficient models, based on regularized regression with L1 regularization. We use the iteratively reweighted least squares procedure to solve the L1-regularized objective function of the varying coefficient model in locally weighted regression form. This provides efficient computation of the coefficient function estimates and variable selection for a given value of the smoothing variable. We present the generalized cross-validation function and an Akaike-information-type criterion for model selection. Applications of the proposed model are illustrated through artificial examples and a real example of predicting the effect of the input variables and the smoothing variable on the output.
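The iteratively reweighted least squares idea for an L1 penalty can be sketched on a plain linear model (the paper's actual objective is a locally weighted varying coefficient form; this minimal stand-in only shows the reweighting mechanism): each sweep majorizes |b_j| by b_j²/(|b_j_old| + ε), turning the L1 problem into a sequence of ridge solves.

```python
import numpy as np

def l1_irls(X, y, lam=1.0, n_iter=50, eps=1e-8):
    """L1-penalized least squares via iteratively reweighted ridge regression:
    the per-coordinate ridge weight lam / (|beta_old| + eps) approximates
    the L1 penalty and drives small coefficients toward zero."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS warm start
    for _ in range(n_iter):
        W = np.diag(lam / (np.abs(beta) + eps))
        beta = np.linalg.solve(X.T @ X + W, X.T @ y)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=200)       # only column 0 matters
beta = l1_irls(X, y, lam=2.0)
```

The two irrelevant coefficients are shrunk essentially to zero while the true coefficient survives nearly unshrunk, which is the variable-selection behavior the abstract refers to.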

Estimation of Optimal Mixture Number of GMM for Environmental Sounds Recognition (환경음 인식을 위한 GMM의 혼합모델 개수 추정)

  • Han, Da-Jeong;Park, Aa-Ron;Baek, Sung-June
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.2
    • /
    • pp.817-821
    • /
    • 2012
  • In this paper we applied an optimal mixture number estimation technique for the GMM (Gaussian mixture model), using BIC (Bayesian information criterion) and MDL (minimum description length) as model selection criteria for environmental sound recognition. In the experiment, we extracted 12 MFCC (mel-frequency cepstral coefficients) features from 9 kinds of environmental sounds, amounting to 27,747 data samples, and classified them with a GMM. BIC and MDL were applied to estimate the optimal number of mixtures for each environmental sound class. According to the experimental results, recognition performance is maintained while the computational complexity decreases by 17.8% with BIC and 31.7% with MDL. This shows that the computational complexity reduction by BIC and MDL is effective for environmental sound recognition using GMMs.
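Choosing the mixture count by BIC amounts to fitting GMMs over a range of component numbers and keeping the one with the smallest criterion value. A minimal sketch with scikit-learn on 1-D stand-in data (real MFCC features would be 12-dimensional, as in the paper; in its common two-part form, MDL equals BIC/2 and selects the same model, so only BIC is shown):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for per-class feature frames: a true 2-component mixture.
X = np.concatenate([rng.normal(-3, 1, 300),
                    rng.normal(3, 1, 300)]).reshape(-1, 1)

# Fit GMMs with 1..5 components and score each by BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
best_k = min(bics, key=bics.get)             # mixture count with smallest BIC
```

Because the data truly have two well-separated modes, BIC recovers k = 2: larger k barely improves the likelihood but pays the full parameter penalty.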

Adaptive Structure of Modular Wavelet Neural Network Using Growing and Pruning Algorithm (성장과 소거 알고리즘을 이용한 모듈화된 웨이블렛 신경망의 적응구조 설계)

  • Seo, Jae-Yong;Kim, Yong-Taek;Jo, Hyeon-Chan;Jeon, Hong-Tae
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.39 no.1
    • /
    • pp.16-23
    • /
    • 2002
  • In this paper, we propose a growing and pruning algorithm to design the optimal structure of a modular wavelet neural network (MWNN) with F-projection and a geometric growing criterion. The geometric growing criterion consists of an estimated error criterion, which considers local error, and an angle criterion, which attempts to assign wavelet functions that are nearly orthogonal to all other existing wavelet functions. These criteria provide a methodology by which a network designer can construct an MWNN according to his or her intention. The proposed growing algorithm increases the number of modules or the size of the modules of the MWNN. The pruning algorithm eliminates unnecessary nodes or modules from the constructed MWNN, overcoming the problems caused by the localized characteristic of the wavelet neural networks used as the modules of the MWNN. We apply the proposed algorithm for constructing the optimal structure of an MWNN to approximation problems of 1-D and 2-D functions, and evaluate its effectiveness.
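The angle criterion mentioned above can be sketched as a cosine test over activation vectors: a candidate hidden unit is accepted only if it is nearly orthogonal to every unit already in the network. The threshold value and vector-based interface below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def angle_criterion(existing, candidate, cos_max=0.5):
    """Accept a candidate unit only if the cosine between its activation
    vector and each existing unit's activations stays below `cos_max`,
    i.e. the new wavelet is nearly orthogonal to those already present.
    (`cos_max` is an illustrative threshold, not a value from the paper.)"""
    c = candidate / np.linalg.norm(candidate)
    for e in existing:
        if abs(np.dot(c, e / np.linalg.norm(e))) > cos_max:
            return False                     # too close to an existing unit
    return True
```

A unit parallel to an existing one is rejected as redundant, while an orthogonal one passes, which is how the criterion keeps growth from duplicating what the network already represents.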