• Title/Summary/Keyword: Error decision


An Experimental Study on the Design of the Korean Ballot Paper - Problems of the Regulations of the Public Official Election Act - (한국 투표용지 디자인에 관한 실험 연구 - 공직선거법 규정에 대한 문제제기 -)

  • Jung, Eui Tay;Hong, Jae Woo;Lee, Sang Hyeb;Lee, Eun Jung
    • Design Convergence Study
    • /
    • v.17 no.3
    • /
    • pp.91-108
    • /
    • 2018
  • Although ballot paper design can influence voting behavior, there has been little study of ballot paper design and the importance of information design. This study examines the possibility of errors arising in ballot paper design under the rules of the Public Official Election Act. To do this, we reviewed the regulations using a heuristic evaluation method and conducted an empirical experiment with closed groups. We found that (1) diverse forms of ballot paper can be produced, and (2) various fonts, sizes, and materials can be used. Accordingly, the regulations inevitably need to address (1) the use of chromaticity and images, (2) the application of universally designed typography, and (3) the margin for spacing between ballot boxes. Finally, this study suggests institutional measures for securing the validity and legitimacy of the decision-making process, in order to remove latent ambiguity in ballot paper design.

B-spline polynomials models for analyzing growth patterns of Guzerat young bulls in field performance tests

  • Ricardo Costa Sousa;Fernando dos Santos Magaco;Daiane Cristina Becker Scalez;Jose Elivalto Guimaraes Campelo;Clelia Soares de Assis;Idalmo Garcia Pereira
    • Animal Bioscience
    • /
    • v.37 no.5
    • /
    • pp.817-825
    • /
    • 2024
  • Objective: The aim of this study was to identify suitable polynomial regression for modeling the average growth trajectory and to estimate the relative development of the rib eye area, scrotal circumference, and morphometric measurements of Guzerat young bulls. Methods: A total of 45 recently weaned males, aged 325.8±28.0 days and weighing 219.9±38.05 kg, were evaluated. The animals were kept on Brachiaria brizantha pastures, received multiple supplementations, and were managed under uniform conditions for 294 days, with evaluations conducted every 56 days. The average growth trajectory was adjusted using ordinary polynomials, Legendre polynomials, and quadratic B-splines. The coefficient of determination, mean absolute deviation, mean square error, the value of the restricted likelihood function, Akaike information criteria, and consistent Akaike information criteria were applied to assess the quality of the fits. For the study of allometric growth, the power model was applied. Results: Ordinary polynomial and Legendre polynomial models of the fifth order provided the best fits. B-splines yielded the best fits in comparing models with the same number of parameters. Based on the restricted likelihood function, Akaike's information criterion, and consistent Akaike's information criterion, the B-splines model with six intervals described the growth trajectory of evaluated animals more smoothly and consistently. In the study of allometric growth, the evaluated traits exhibited negative heterogeneity (b<1) relative to the animals' weight (p<0.01), indicating the precocity of Guzerat cattle for weight gain on pasture. Conclusion: Complementary studies of growth trajectory and allometry can help identify when an animal's weight changes and thus assist in decision-making regarding management practices, nutritional requirements, and genetic selection strategies to optimize growth and animal performance.
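The polynomial-versus-B-spline comparison by information criteria that this abstract describes can be sketched in a few lines. The following is a minimal illustration on synthetic growth data; the ages, weights, knot positions, and parameter counts are invented for the example, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
# hypothetical ages (days on test) and average weights (kg); purely illustrative
age = np.linspace(0, 294, 40)
weight = 220 + 0.9 * age - 0.0008 * age**2 + rng.normal(0, 2, age.size)

def aic(y, yhat, k):
    """AIC from the residual sum of squares; k = number of fitted parameters."""
    n = y.size
    rss = float(np.sum((y - yhat) ** 2))
    return n * np.log(rss / n) + 2 * k

# ordinary polynomial of order 5 (6 parameters)
poly = np.polyfit(age, weight, 5)
aic_poly = aic(weight, np.polyval(poly, age), 6)

# quadratic B-spline (k=2) with interior knots splitting the test period
knots = [98.0, 196.0]
spline = LSQUnivariateSpline(age, weight, t=knots, k=2)
aic_spline = aic(weight, spline(age), len(knots) + 3)  # n_coef = n_knots + k + 1

print(f"AIC polynomial: {aic_poly:.1f}  AIC B-spline: {aic_spline:.1f}")
```

The model with the lower AIC would be preferred; the paper's actual comparison also uses the restricted likelihood and consistent AIC.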

Blockchain and AI-based big data processing techniques for sustainable agricultural environments (지속가능한 농업 환경을 위한 블록체인과 AI 기반 빅 데이터 처리 기법)

  • Yoon-Su Jeong
    • Advanced Industrial Science
    • /
    • v.3 no.2
    • /
    • pp.17-22
    • /
    • 2024
  • Recently, as ICT has been applied in diverse environments, it has become possible to analyze pests by crop, use robots for harvesting, and make predictions from big data in sustainable agriculture. However, sustainable agricultural environments still face constant demands to address resource depletion, a shrinking farming population, rising poverty, and environmental destruction. This paper proposes an artificial intelligence-based big data processing and analysis method to reduce production costs and increase crop efficiency in a sustainable agricultural environment. The proposed technique strengthens the security and reliability of data by processing crop big data combined with AI, and enables better decision-making and extraction of business value. It can drive innovative change across various industries and fields and promote the development of data-oriented business models. In experiments, the proposed technique produced accurate answers from only a small amount of labeled data; at farm sites where tagging each correct answer individually is impractical, it performed comparably to learning with a large amount of labeled data (error rate within 0.05).

Frequency Domain Double-Talk Detector Based on Gaussian Mixture Model (주파수 영역에서의 Gaussian Mixture Model 기반의 동시통화 검출 연구)

  • Lee, Kyu-Ho;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.4
    • /
    • pp.401-407
    • /
    • 2009
  • In this paper, we propose a novel method for cross-correlation-based double-talk detection (DTD) that employs a Gaussian mixture model (GMM) in the frequency domain. The proposed algorithm transforms the cross-correlation coefficient used in the time domain into 16 frequency-domain channels using the discrete Fourier transform (DFT). Seven of these channels are then selected as feature vectors for the GMM, and we identify three regions, far-end, double-talk, and near-end speech, by comparing likelihoods based on those feature vectors. The presented DTD algorithm efficiently detects double-talk regions without the voice activity detector required by conventional cross-correlation-based double-talk detection. The performance of the proposed algorithm is evaluated under various conditions and yields better results than conventional schemes; in particular, it shows robustness against detection errors caused by background noise or echo-path change, one of the key issues in practical DTD.
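The likelihood-comparison step of such a GMM-based detector can be sketched as follows. This is a minimal stand-in with synthetic 7-dimensional feature vectors, not the paper's DFT-derived features; the class means and the use of scikit-learn are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# one GMM per speech region, trained on synthetic 7-dim feature vectors
# (invented class means; the real features come from 16 DFT channels)
region_means = {"far-end": 0.0, "double-talk": 2.0, "near-end": 4.0}
models = {}
for name, mu in region_means.items():
    X = rng.normal(mu, 0.5, size=(200, 7))
    models[name] = GaussianMixture(n_components=2, random_state=0).fit(X)

def classify(x):
    """Pick the region whose GMM assigns the highest log-likelihood."""
    scores = {name: m.score_samples(x[None, :])[0] for name, m in models.items()}
    return max(scores, key=scores.get)

print(classify(np.full(7, 2.0)))
```

The detector in the paper applies this comparison frame by frame, which is what removes the need for a separate voice activity detector.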

Methodology for Variable Optimization in Injection Molding Process (사출 성형 공정에서의 변수 최적화 방법론)

  • Jung, Young Jin;Kang, Tae Ho;Park, Jeong In;Cho, Joong Yeon;Hong, Ji Soo;Kang, Sung Woo
    • Journal of Korean Society for Quality Management
    • /
    • v.52 no.1
    • /
    • pp.43-56
    • /
    • 2024
  • Purpose: The injection molding process, crucial for plastic shaping, encounters difficulties in sustaining product quality when replacing injection machines. Variations in machine types and outputs between different production lines or factories increase the risk of quality deterioration. In response, the study aims to develop a system that optimally adjusts conditions during the replacement of injection machines linked to molds. Methods: Utilizing a dataset of 12 injection process variables and 52 corresponding sensor variables, a predictive model is crafted using Decision Tree, Random Forest, and XGBoost. Model evaluation is conducted using an 80% training data and a 20% test data split. The dependent variable, classified into five characteristics based on temperature and pressure, guides the prediction model. Bayesian optimization, integrated into the selected model, determines optimal values for process variables during the replacement of injection machines. The iterative convergence of sensor prediction values to the optimum range is visually confirmed, aligning them with the target range. Experimental results validate the proposed approach. Results: Post-experiment analysis indicates the superiority of the XGBoost model across all five characteristics, achieving a combined high performance of 0.81 and a Mean Absolute Error (MAE) of 0.77. The study introduces a method for optimizing initial conditions in the injection process during machine replacement, utilizing Bayesian optimization. This streamlined approach reduces both time and costs, thereby enhancing process efficiency. Conclusion: This research contributes practical insights to the optimization literature, offering valuable guidance for industries seeking streamlined and cost-effective methods for machine replacement in injection molding.
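Bayesian optimization of a process variable toward a sensor target, as the abstract outlines, can be sketched with a Gaussian-process surrogate and the expected-improvement criterion. Everything below is hypothetical: the single variable, its range, and the sensor response are invented, and scikit-learn stands in for whatever tooling the authors used on top of their XGBoost predictor:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# hypothetical response of one sensor to one process variable
def sensor_response(x):
    return 50.0 + 10.0 * np.sin(x / 15.0)

target = 55.0
def objective(x):
    """Distance of the sensor value from its target; to be minimized."""
    return abs(sensor_response(x) - target)

rng = np.random.default_rng(0)
X = rng.uniform(180.0, 260.0, 5)[:, None]          # initial random settings
y = np.array([objective(v[0]) for v in X])
grid = np.linspace(180.0, 260.0, 200)[:, None]     # candidate settings

for _ in range(15):  # iterative refinement toward the target range
    gp = GaussianProcessRegressor(kernel=RBF(20.0), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

x_opt = float(X[np.argmin(y), 0])
print(f"suggested setting: {x_opt:.1f}, residual error: {objective(x_opt):.3f}")
```

In the paper this loop runs over twelve process variables against five sensor characteristics; the one-dimensional version only shows the surrogate-plus-acquisition mechanic.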

Application of ML algorithms to predict the effective fracture toughness of several types of concrete

  • Ibrahim Albaijan;Hanan Samadi;Arsalan Mahmoodzadeh;Hawkar Hashim Ibrahim;Nejib Ghazouani
    • Computers and Concrete
    • /
    • v.34 no.2
    • /
    • pp.247-265
    • /
    • 2024
  • Measuring the fracture toughness of concrete in laboratory settings is challenging due to various factors, such as complex sample preparation procedures, the requirement for precise instruments, potential sample failure, and the brittleness of the samples. Therefore, there is an urgent need to develop innovative and more effective tools to overcome these limitations. Supervised learning methods offer promising solutions. This study introduces seven machine learning algorithms for predicting concrete's effective fracture toughness (K-eff). The models were trained using 560 datasets obtained from the central straight notched Brazilian disc (CSNBD) test. The concrete samples used in the experiments contained micro silica and powdered stone, which are commonly used additives in the construction industry. The study considered six input parameters that affect concrete's K-eff, including concrete type, sample diameter, sample thickness, crack length, force, and angle of initial crack. All the algorithms demonstrated high accuracy on both the training and testing datasets, with R2 values ranging from 0.9456 to 0.9999 and root mean squared error (RMSE) values ranging from 0.000004 to 0.009287. After evaluating their performance, the gated recurrent unit (GRU) algorithm showed the highest predictive accuracy. The ranking of the applied models, from highest to lowest performance in predicting the K-eff of concrete, was as follows: GRU, LSTM, RNN, SFL, ELM, LSSVM, and GEP. In conclusion, it is recommended to use supervised learning models, specifically GRU, for precise estimation of concrete's K-eff. This approach allows engineers to save significant time and costs associated with the CSNBD test. This research contributes to the field by introducing a reliable tool for accurately predicting the K-eff of concrete, enabling efficient decision-making in various engineering applications.
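The supervised-learning workflow the abstract describes, training several regressors and ranking them by R2 and RMSE, can be sketched as below. The data are synthetic stand-ins for the six CSNBD inputs, and three scikit-learn models replace the paper's seven algorithms (GRU, LSTM, etc.), which require deep-learning tooling:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
# synthetic stand-ins for the six inputs (concrete type, diameter, thickness,
# crack length, force, initial-crack angle); not the paper's 560 CSNBD records
X = rng.uniform(0.0, 1.0, size=(560, 6))
k_eff = 0.5 * X[:, 3] + 0.3 * X[:, 4] - 0.2 * X[:, 5] + rng.normal(0, 0.01, 560)

Xtr, Xte, ytr, yte = train_test_split(X, k_eff, test_size=0.2, random_state=0)
models = {
    "linear": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
}
scores = {}
for name, model in models.items():
    pred = model.fit(Xtr, ytr).predict(Xte)
    scores[name] = (r2_score(yte, pred),
                    mean_squared_error(yte, pred) ** 0.5)  # (R2, RMSE)

# rank from highest to lowest R2, as the paper ranks its seven algorithms
for name, (r2, rmse) in sorted(scores.items(), key=lambda kv: -kv[1][0]):
    print(f"{name:18s} R2={r2:.4f} RMSE={rmse:.5f}")
```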

Improving Airline Pilot's Ability for Abnormal & Emergency Situation by Utilizing EBT (EBT를 활용한 민간항공 조종사의 비정상상황 대처능력 향상)

  • Sam-Seung Han;Hyeon-Deok Kim;Kyu-Wang Kim
    • Journal of Advanced Navigation Technology
    • /
    • v.28 no.4
    • /
    • pp.507-517
    • /
    • 2024
  • Analysis of air accident investigation data led experts from ICAO and IATA member countries to agree on the need for further improvements in flight safety and prompted a comprehensive review of pilot training. As a result of this review, emphasis was placed on developing pilots' technical as well as non-technical competencies related to CRM. In 2013, ICAO recommended that contracting states implement EBT to develop and strengthen pilots' competencies and improve flight safety. This study examines how Korean airlines utilize EBT training programs and how pilots manage threats and errors when encountering abnormal situations so as to secure and promote safety, the original purpose of operations, and seeks ways to improve and develop non-technical core competencies.

A Study on Forecasting Accuracy Improvement of Case Based Reasoning Approach Using Fuzzy Relation (퍼지 관계를 활용한 사례기반추론 예측 정확성 향상에 관한 연구)

  • Lee, In-Ho;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.67-84
    • /
    • 2010
  • In business, forecasting estimates what is expected to happen in the future in order to make managerial decisions and plans. Accurate forecasting is therefore critical for major managerial decision making and is the basis for many business strategies. But it is very difficult to make an unbiased and consistent estimate because of uncertainty and complexity in the future business environment. That is why a scientific forecasting model should support business decision making, and why we must strive to minimize the model's forecasting error, the difference between observation and estimate. Nevertheless, minimizing this error is not easy. Case-based reasoning is a problem-solving method that uses similar past cases to solve the current problem. To build successful case-based reasoning models, it is vital to retrieve not only the most similar but also the most relevant cases, and for this the measurement of similarity between cases is a key factor. It is especially difficult to measure distances when the cases contain symbolic data. The purpose of this study is to improve the forecasting accuracy of the case-based reasoning approach using fuzzy relation and composition. Two methods are adopted to measure the similarity between cases containing symbolic data: one derives the similarity matrix by binary logic (judging whether two symbolic values are identical), and the other derives it by fuzzy relation and composition. The study proceeds in the following order: data gathering and preprocessing, model building and analysis, validation analysis, and conclusion. First, in data gathering and preprocessing we collect a data set that includes categorical dependent variables. The data set is cross-sectional, and its independent variables include several qualitative variables expressed as symbolic data. The research data consist of financial ratios and the corresponding bond ratings of Korean companies; the ratings cover all bonds rated by one of the bond rating agencies in Korea. Our sample includes 1,816 companies whose commercial papers were rated in the period 1997~2000. Credit grades are the outputs and are classified into five rating categories (A1, A2, A3, B, C) according to credit level. Second, in model building and analysis we derive the similarity matrices by binary logic and by fuzzy composition to measure the similarity between cases containing symbolic data; the fuzzy compositions used are max-min, max-product, and max-average. The analysis is then carried out by the case-based reasoning approach with each derived similarity matrix. Third, in validation analysis we verify the model through a McNemar test based on hit ratio. Finally, we draw conclusions. As a result, the similarity measure using fuzzy relation and composition shows better forecasting performance than the binary-logic measure for similarity between symbolic data, but the differences in forecasting performance among the types of fuzzy composition are not statistically significant. The contribution of this study is to propose another methodology in which fuzzy relation and fuzzy composition can be applied to similarity measurement between symbolic data, the most important factor in building a case-based reasoning model.
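The fuzzy max-min composition at the core of the proposed similarity measure can be sketched as below. The similarity values over the five rating symbols are invented for illustration (binary logic would reduce this matrix to the identity):

```python
import numpy as np

def max_min_composition(R, S):
    """(R o S)[i, j] = max over k of min(R[i, k], S[k, j])."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

# hypothetical fuzzy similarity relation over the symbols A1, A2, A3, B, C:
# graded closeness between ratings instead of a 0/1 sameness judgment
R = np.array([
    [1.0, 0.8, 0.6, 0.3, 0.1],
    [0.8, 1.0, 0.8, 0.4, 0.2],
    [0.6, 0.8, 1.0, 0.6, 0.3],
    [0.3, 0.4, 0.6, 1.0, 0.6],
    [0.1, 0.2, 0.3, 0.6, 1.0],
])
composed = max_min_composition(R, R)
print(composed)
```

Max-product and max-average composition differ only in replacing the `min` inside the reduction with a product or an average; the composed matrix then feeds the case-retrieval step in place of the binary sameness matrix.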

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvements and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and support vector machines (SVM). Among such studies, DT ensembles have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown gains as remarkable as DT ensembles. Recently, several works have reported that ensemble performance can degrade when the classifiers of an ensemble are highly correlated, producing a multicollinearity problem that leads to performance degradation; they have also proposed differentiated learning strategies to cope with it. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but shows no remarkable improvement on stable ones. Unstable learning algorithms, such as decision tree learners, are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; ensembles of unstable learners can therefore guarantee some diversity among the classifiers.
By contrast, stable learning algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation produces a multicollinearity problem, which degrades ensemble performance. Kim's work (2009) compared bankruptcy prediction for Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that the stable NN and SVM have higher predictability than the unstable DT, while in ensemble learning the DT ensemble shows greater improvement than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically shows that the ensemble's performance degradation is due to multicollinearity, and it proposes that ensemble optimization is needed to cope with this problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization chooses a sub-ensemble from the original ensemble so as to guarantee classifier diversity. CO-NN uses a GA, which has been widely applied to various optimization problems, to handle the coverage optimization problem: the GA chromosomes are encoded as binary strings in which each bit indicates an individual classifier, the fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the standard measures of multicollinearity, is added to ensure diversity by removing highly correlated classifiers. We use Microsoft Excel and the GA software package Evolver.
Experiments on company failure prediction show that CO-NN effectively and stably enhances the performance of NN ensembles by choosing classifiers in light of their correlations. Classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN thereby outperforms a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies for dealing with data noise should be introduced in more advanced future research.
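The VIF screening that CO-NN uses to enforce diversity can be sketched as below, with invented classifier outputs in which two ensemble members are nearly identical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
base = rng.normal(size=500)
# outputs of four hypothetical NN ensemble members:
# members 0 and 1 are nearly identical (the multicollinearity case)
preds = np.column_stack([
    base + rng.normal(0, 0.05, 500),
    base + rng.normal(0, 0.05, 500),
    rng.normal(size=500),
    0.5 * base + rng.normal(0, 1.0, 500),
])

def vif(X):
    """VIF_j = 1 / (1 - R2_j), regressing column j on the other columns."""
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

v = vif(preds)
keep = v < 10.0  # a common rule-of-thumb VIF cutoff, used here for illustration
print(v.round(2), keep)
```

In CO-NN this check is folded into the GA fitness as a constraint, so the search itself avoids sub-ensembles whose members are collinear.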

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.39-54
    • /
    • 2013
  • The recent explosive growth of electronic commerce offers customers many advantageous purchase opportunities. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists to users; thus, the product recommender system has become one of the most popular tools for one-to-one marketing in online shopping stores. However, recommender systems that do not properly reflect users' preferences cause disappointment and wasted time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' preferences precisely. The research data are collected from a real-world online shopping store that deals in products from famous art galleries and museums in Korea. The data initially contain 5,759 transactions; 3,167 remain after deleting records with null values. We transform the categorical variables into dummy variables and exclude outliers. The proposed model consists of two steps. The first step predicts customers who are highly likely to purchase products in the online shopping store. In this step, we use logistic regression, decision trees, and artificial neural networks to predict such customers in each product group, performing these data mining techniques with SAS E-Miner software. We partition the data into modeling and validation sets for the logistic regression and decision trees, and into training, test, and validation sets for the artificial neural network model; the validation data set is the same for all experiments.
Then we combine the results of each predictor using multi-model ensemble techniques such as bagging and bumping. Bagging, short for "Bootstrap Aggregation," combines the outputs of several machine learning techniques to raise the performance and stability of prediction or classification, and is a special form of the averaging method. Bumping, short for "Bootstrap Umbrella of Model Parameters," keeps only the model with the lowest error value. The results show that bumping outperforms bagging and the other predictors except in the "Poster" product group, where the artificial neural network model performs best. In the second step, we use market basket analysis to extract association rules for co-purchased products. We extract thirty-one association rules according to Lift, Support, and Confidence values, setting the minimum transaction frequency to support an association at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%. We also exclude extracted rules with lift below 1, and finally obtain fifteen association rules after removing duplicates. Among the fifteen rules, eleven involve associations between products within the "Office Supplies" product group, one links the "Office Supplies" and "Fashion" product groups, and the other three link "Office Supplies" and "Home Decoration." Finally, the proposed product recommender system provides a list of recommendations to the appropriate customers. We test the usability of the proposed system with a prototype and real-world transaction and profile data, constructing the prototype with ASP, JavaScript, and Microsoft Access.
In addition, we survey user satisfaction with the recommended product lists from the proposed system and with randomly selected product lists. The survey participants are 173 users of MSN Messenger, Daum Café, and P2P services. We evaluate user satisfaction on a five-point Likert scale and perform a paired-sample t-test on the survey results. The results show that the proposed model outperforms the random selection model at the 1% statistical significance level, meaning that users were significantly more satisfied with the recommended product lists. The results also suggest that the proposed system may be useful in real-world online shopping stores.
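The bagging-versus-bumping composition step can be sketched as below, using synthetic purchase labels and scikit-learn decision trees in place of the paper's SAS E-Miner predictors:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "purchase" label
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.33, random_state=0)

# fit B models, each on a bootstrap resample of the training data
models, train_err = [], []
for b in range(10):
    idx = rng.integers(0, len(Xtr), len(Xtr))
    m = DecisionTreeClassifier(max_depth=3, random_state=b).fit(Xtr[idx], ytr[idx])
    models.append(m)
    train_err.append(1.0 - m.score(Xtr, ytr))  # error on the full training set

# bagging: aggregate all bootstrap models (majority vote here)
votes = np.mean([m.predict(Xte) for m in models], axis=0)
bagging_acc = float(np.mean((votes >= 0.5) == yte))

# bumping: keep only the single bootstrap model with the lowest error
best = models[int(np.argmin(train_err))]
bumping_acc = float(best.score(Xte, yte))
print(f"bagging: {bagging_acc:.3f}  bumping: {bumping_acc:.3f}")
```

Which of the two wins depends on the data, which matches the paper's finding that bumping dominated except in the "Poster" product group.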