• Title/Summary/Keyword: Tree Modeling


Automated Code Smell Detection and Refactoring using OCL (OCL을 이용한 자동화된 코드스멜 탐지와 리팩토링)

  • Kim, Tae-Woong;Kim, Tae-Gong
    • The KIPS Transactions:PartD / v.15D no.6 / pp.825-840 / 2008
  • Refactoring is a kind of software modification process that improves system quality internally while preserving system behavior externally. In such a modification process, deciding what should be improved in the existing source code takes precedence over everything else. Martin Fowler and Kent Beck proposed identifying code smells for this purpose, and several studies have addressed determining which refactoring should be applied to which targets by detecting code smells in code. However, these studies lack a precise description of the code smells and detect only a limited set of them. In addition, they leave ambiguity in behavior preservation, because the pre-conditions for behavior preservation are either embedded informally in the refactoring process or not formalized at all. Our study therefore specifies code smells precisely using OCL and proposes a framework that performs refactoring through the automatic detection of code smells with an OCL interpreter. Furthermore, we automatically detect the OCL-specified code smells in Java programs and verify the applicability and effectiveness of the framework by applying the refactoring process.
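
The idea of specifying a smell declaratively and letting an interpreter evaluate it can be illustrated in a few lines. The sketch below is not the authors' OCL framework; it uses a hypothetical toy metamodel, and a Python predicate stands in for an OCL invariant on parameter-list length.

```python
# Minimal sketch (not the authors' framework): expressing a "Long Parameter
# List" code smell as a declarative predicate over a toy class model, in the
# spirit of an OCL invariant evaluated by an interpreter.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Operation:          # hypothetical metamodel element
    name: str
    parameters: List[str] = field(default_factory=list)

@dataclass
class Class_:             # hypothetical metamodel element
    name: str
    operations: List[Operation] = field(default_factory=list)

# OCL-like constraint: context Operation inv: self.parameters->size() <= 4
def long_parameter_list(op: Operation, threshold: int = 4) -> bool:
    return len(op.parameters) > threshold

def detect_smells(model: List[Class_]):
    """Scan the model automatically and report every violating operation."""
    return [(c.name, o.name) for c in model for o in c.operations
            if long_parameter_list(o)]

if __name__ == "__main__":
    model = [Class_("Order", [Operation("create",
             ["id", "customer", "date", "items", "discount", "tax"])])]
    print(detect_smells(model))   # -> [('Order', 'create')]
```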

Prediction of Multi-Physical Analysis Using Machine Learning (기계학습을 이용한 다중물리해석 결과 예측)

  • Lee, Keun-Myoung;Kim, Kee-Young;Oh, Ung;Yoo, Sung-kyu;Song, Byeong-Suk
    • Journal of IKEEE / v.20 no.1 / pp.94-102 / 2016
  • This paper proposes a new prediction method to reduce the time and labor of repetitive multi-physics simulation. Obtaining exact results from the full simulation process requires complex modeling and a huge amount of time. Current multi-physics analysis focuses on the simulation method itself and the simulation environment to reduce time and labor. This paper instead proposes reducing simulation time and labor by training a machine learning algorithm on a data set built from simulation results. Comparing several machine learning algorithms, Gaussian process regression showed the best performance with fewer than 100 training samples, demonstrating that results similar to the simulation can be achieved through machine learning without a complex simulation process. Given a trained model, it is possible to predict the result after changing some features of the simulation model in just a few seconds. This new method helps to reduce simulation time and labor effectively because it can predict the results before further simulations are run.
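
As a rough illustration of the surrogate-modeling idea, the sketch below trains scikit-learn's Gaussian process regressor on a synthetic data set standing in for simulation results (the features, target function, and sample size are assumptions, not the paper's data) and then predicts new design points almost instantly.

```python
# Illustrative sketch (hypothetical data, not the paper's simulation set):
# a Gaussian process regressor as a surrogate for an expensive multi-physics
# simulation, trained on fewer than 100 "simulated" designs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(80, 3))             # <100 simulated designs
y_train = np.sin(X_train @ [3.0, 1.0, 2.0]) + 0.01 * rng.standard_normal(80)

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-4)
gpr.fit(X_train, y_train)

X_new = rng.uniform(0, 1, size=(5, 3))                 # changed model features
y_pred, y_std = gpr.predict(X_new, return_std=True)    # near-instant prediction
print(y_pred, y_std)                                   # mean and uncertainty
```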

A Study on Phoneme Likely Units to Improve the Performance of Context-dependent Acoustic Models in Speech Recognition (음성인식에서 문맥의존 음향모델의 성능향상을 위한 유사음소단위에 관한 연구)

  • 임영춘;오세진;김광동;노덕규;송민규;정현열
    • The Journal of the Acoustical Society of Korea / v.22 no.5 / pp.388-402 / 2003
  • In this paper, we carried out word, four-continuous-digit, continuous-speech, and task-independent word recognition experiments to verify the effectiveness of re-defined phoneme-likely units (PLUs) for phonetic-decision-tree-based HM-Net (Hidden Markov Network) context-dependent (CD) acoustic modeling in Korean. In the 48-PLU set, the phonemes /ㅂ/, /ㄷ/, and /ㄱ/ are separated into initial-sound, medial-vowel, and final-consonant variants, and the consonants /ㄹ/, /ㅈ/, and /ㅎ/ are separated into initial-sound and final-consonant variants according to their position within the syllable, word, and sentence. In this paper, therefore, we re-define 39 PLUs by unifying each phoneme across its separated initial-sound, medial-vowel, and final-consonant variants in the 48-PLU set, in order to construct the CD acoustic models effectively. In word recognition experiments with context-independent (CI) acoustic models, the 48 PLUs yielded, on average, 7.06% higher recognition accuracy than the 39 PLUs. However, in speaker-independent word recognition with the CD acoustic models, the 39 PLUs yielded, on average, 0.61% better recognition accuracy than the 48 PLUs. In the four-continuous-digit recognition experiments involving liaison phenomena, the 39 PLUs also yielded, on average, 6.55% higher recognition accuracy, and in continuous speech recognition experiments the 39 PLUs yielded, on average, 15.08% better recognition accuracy than the 48 PLUs. Finally, although both sets show lower recognition accuracy in the task-independent word recognition experiments owing to unknown contextual factors, the 39 PLUs yielded, on average, 1.17% higher recognition accuracy than the 48 PLUs. Through these experiments, we verified the effectiveness of the re-defined 39 PLUs compared with the 48 PLUs for constructing CD acoustic models.
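
The unification step can be pictured as a simple relabeling of position-dependent units onto a merged set; the labels in the sketch below are hypothetical placeholders, not the paper's exact PLU inventory.

```python
# Toy illustration (hypothetical labels, not the paper's PLU inventory):
# collapsing position-dependent variants of a phoneme into a single unit,
# as done when unifying the 48-PLU set into 39 PLUs.
MERGE_MAP = {
    # initial / final variants of the same consonant map to one unit
    "g_init": "g", "g_final": "g",
    "d_init": "d", "d_final": "d",
    "b_init": "b", "b_final": "b",
}

def relabel(plu_sequence):
    """Map position-dependent PLU labels onto the merged unit set."""
    return [MERGE_MAP.get(p, p) for p in plu_sequence]

print(relabel(["g_init", "a", "d_final"]))   # -> ['g', 'a', 'd']
```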

Context-Awareness Modeling Method using Timed Petri-nets (시간 페트리 넷을 이용한 상황인지 모델링 기법)

  • Park, Byung-Sung;Kim, Hag-Bae
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.4B / pp.354-361 / 2011
  • Increasing interest and technological advances in smart homes have led to active research on context-awareness services and on prediction algorithms such as Bayesian networks, tree-based structures, and genetic prediction algorithms. A context-awareness service provides automatic, customized services based on an individual user's patterns, which helps users improve their quality of life. However, such services are difficult to implement because coincident context information and exceptional cases must be handled. To overcome this problem, we propose an Intelligent Sequential Matching Algorithm (ISMA) and model the context-awareness service using a Timed Petri net (TPN), a Petri net extended with a time factor. An example scenario illustrates the effectiveness of the Timed Petri-net model, and the proposed algorithm improves the accuracy and reliability of prediction by an average of 4~6% over traditional approaches.
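
A timed Petri net adds a firing delay to each transition. The sketch below is a minimal, self-contained Python illustration of that idea on an assumed smart-home scenario; it is not the authors' ISMA implementation or their TPN model.

```python
# Minimal sketch of a timed Petri net: a transition fires only when its input
# places are marked, and its output tokens appear after a firing delay.
import heapq

class TimedPetriNet:
    def __init__(self):
        self.marking = {}            # place -> token count
        self.transitions = []        # (name, inputs, outputs, delay)

    def add_transition(self, name, inputs, outputs, delay):
        self.transitions.append((name, inputs, outputs, delay))

    def enabled(self, inputs):
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def run(self, horizon):
        clock, events = 0.0, []
        while clock <= horizon:
            # fire every enabled transition: consume tokens now, deliver later
            for name, ins, outs, delay in self.transitions:
                if self.enabled(ins):
                    for p in ins:
                        self.marking[p] -= 1
                    heapq.heappush(events, (clock + delay, name, outs))
            if not events:
                break
            clock, name, outs = heapq.heappop(events)
            for p in outs:
                self.marking[p] = self.marking.get(p, 0) + 1
            print(f"t={clock:.1f}: {name} fired")

# Hypothetical smart-home scenario: arriving home triggers lights, then heating.
net = TimedPetriNet()
net.marking = {"user_home": 1}
net.add_transition("turn_on_lights", ["user_home"], ["lights_on"], delay=2.0)
net.add_transition("start_heating", ["lights_on"], ["room_warm"], delay=5.0)
net.run(horizon=10.0)
```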

Forecasting of the COVID-19 pandemic situation of Korea

  • Goo, Taewan;Apio, Catherine;Heo, Gyujin;Lee, Doeun;Lee, Jong Hyeok;Lim, Jisun;Han, Kyulhee;Park, Taesung
    • Genomics & Informatics / v.19 no.1 / pp.11.1-11.8 / 2021
  • For the novel coronavirus disease 2019 (COVID-19), predictive modeling in the literature broadly uses susceptible-exposed-infected-recovered (SEIR)/SIR, agent-based, and curve-fitting models. Governments and legislative bodies rely on insights from prediction models to suggest new policies and to assess the effectiveness of enforced policies. Therefore, access to accurate outbreak prediction models is essential to obtain insights into the likely spread and consequences of infectious diseases. The objective of this study is to predict the future COVID-19 situation of Korea. Here, we employed the following models for this analysis: SEIR, local linear regression (LLR), negative binomial (NB) regression, segmented Poisson, a deep-learning-based long short-term memory model (LSTM), and a tree-based gradient boosting machine (GBM). After prediction, model performance was compared using relative mean squared errors (RMSE) for two sets of training data (January 20, 2020-December 31, 2020 and January 20, 2020-January 31, 2021) and testing data (January 1, 2021-February 28, 2021 and February 1, 2021-February 28, 2021). Except for the segmented Poisson model, the models predicted a decline in daily confirmed cases in the country in the near future. Comparison of RMSE values showed that LLR, GBM, SEIR, NB, and LSTM, in that order, performed well in forecasting the pandemic situation of the country. A good understanding of the epidemic dynamics would greatly enhance the control and prevention of COVID-19 and other infectious diseases. Therefore, with daily confirmed cases increasing since this year, these results could help in the pandemic response by informing decisions about planning, resource allocation, and social distancing policies.
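
As a toy illustration of the tree-based forecasting component, the sketch below fits scikit-learn's gradient boosting regressor on lagged values of a synthetic daily-case series and scores it with mean squared error; the series, lag length, and split date are assumptions, not the Korean surveillance data or the study's exact error metric.

```python
# Illustrative sketch (synthetic series): a tree-based GBM forecaster trained
# on lagged daily counts and evaluated on a held-out later period.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
days = np.arange(400)
cases = 300 + 100 * np.sin(days / 30) + rng.normal(0, 20, size=400)

def lag_features(series, n_lags=7):
    """Build a supervised data set where each target depends on the last n_lags days."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

X, y = lag_features(cases)
split = 340                                     # train on earlier days only
gbm = GradientBoostingRegressor(n_estimators=200, max_depth=3)
gbm.fit(X[:split], y[:split])

pred = gbm.predict(X[split:])
print("test MSE:", mean_squared_error(y[split:], pred))
```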

The Primary Process and Key Concepts of Economic Evaluation in Healthcare

  • Kim, Younhee;Kim, Yunjung;Lee, Hyeon-Jeong;Lee, Seulki;Park, Sun-Young;Oh, Sung-Hee;Jang, Suhyun;Lee, Taejin;Ahn, Jeonghoon;Shin, Sangjin
    • Journal of Preventive Medicine and Public Health / v.55 no.5 / pp.415-423 / 2022
  • Economic evaluations in healthcare are used to assess the economic efficiency of pharmaceuticals and of medical interventions such as diagnoses and medical procedures. This study introduces the main concepts of economic evaluation across its key steps: planning, outcome and cost calculation, modeling, cost-effectiveness results, uncertainty analysis, and decision-making. When planning an economic evaluation, we determine the study population, intervention, comparators, perspectives, time horizon, discount rates, and type of economic evaluation. In healthcare economic evaluations, outcomes include changes in mortality, the survival rate, life years, and quality-adjusted life years, while costs include medical, non-medical, and productivity costs. Model-based economic evaluations, including decision tree and Markov models, are mainly used to calculate the total costs and total effects. In cost-effectiveness or cost-utility analyses, cost-effectiveness is evaluated using the incremental cost-effectiveness ratio, which is the additional cost per one additional unit of effectiveness gained by an intervention compared with a comparator. All outcomes carry uncertainty owing to limited evidence, diverse methodologies, and unexplained variation, so researchers should review these uncertainties and confirm the robustness of their results. We hope to contribute to the establishment and dissemination of economic evaluation methodologies that reflect the Korean clinical and research environment and ultimately improve the rationality of healthcare policies.
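
The incremental cost-effectiveness ratio described above reduces to a one-line calculation; the figures in the sketch below are hypothetical and serve only to show the arithmetic.

```python
# Worked sketch of the incremental cost-effectiveness ratio (ICER); the cost
# and QALY figures are hypothetical, for illustration only.
def icer(cost_new, cost_comp, effect_new, effect_comp):
    """Additional cost per additional unit of effectiveness (e.g., per QALY)."""
    return (cost_new - cost_comp) / (effect_new - effect_comp)

# Hypothetical example: a new intervention costs 12,000 vs. 8,000 for the
# comparator and yields 1.5 vs. 1.2 QALYs -> ICER of about 13,333 per QALY.
print(icer(12_000, 8_000, 1.5, 1.2))
```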

Determination of the stage and grade of periodontitis according to the current classification of periodontal and peri-implant diseases and conditions (2018) using machine learning algorithms

  • Kubra Ertas;Ihsan Pence;Melike Siseci Cesmeli;Zuhal Yetkin Ay
    • Journal of Periodontal and Implant Science / v.53 no.1 / pp.38-53 / 2023
  • Purpose: The current Classification of Periodontal and Peri-Implant Diseases and Conditions, published and disseminated in 2018, involves some difficulties and causes diagnostic conflicts due to its criteria, especially for inexperienced clinicians. The aim of this study was to design a decision system based on machine learning algorithms by using clinical measurements and radiographic images in order to determine and facilitate the staging and grading of periodontitis. Methods: In the first part of this study, machine learning models were created using the Python programming language based on clinical data from 144 individuals who presented to the Department of Periodontology, Faculty of Dentistry, Süleyman Demirel University. In the second part, panoramic radiographic images were processed and classification was carried out with deep learning algorithms. Results: Using clinical data, the accuracy of staging with the tree algorithm reached 97.2%, while the random forest and k-nearest neighbor algorithms reached 98.6% accuracy. The best staging accuracy for processing panoramic radiographic images was provided by a hybrid network model algorithm combining the proposed ResNet50 architecture and the support vector machine algorithm. For this, the images were preprocessed, and high success was obtained, with a classification accuracy of 88.2% for staging. However, in general, it was observed that the radiographic images provided a low level of success, in terms of accuracy, for modeling the grading of periodontitis. Conclusions: The machine learning-based decision system presented herein can facilitate periodontal diagnoses despite its current limitations. Further studies are planned to optimize the algorithm and improve the results.
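
A minimal sketch of the tabular part of such a decision system is shown below; the clinical feature names, labels, and data are synthetic placeholders rather than the study's 144-patient dataset, and the random forest settings are assumptions.

```python
# Minimal sketch (synthetic data, hypothetical features): staging periodontitis
# from tabular clinical measurements with a random forest classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 144                                  # same order as the study's sample size
X = rng.normal(size=(n, 4))              # e.g., attachment loss, probing depth, bone loss, tooth loss
y = rng.integers(1, 5, size=n)           # stage I-IV encoded as labels 1..4

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("staging accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```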

Cost-Effectiveness Analysis of Home-Based Hospice-Palliative Care for Terminal Cancer Patients

  • Kim, Ye-seul;Han, Euna;Lee, Jae-woo;Kang, Hee-Taik
    • Journal of Hospice and Palliative Care / v.25 no.2 / pp.76-84 / 2022
  • Purpose: We compared cost-effectiveness parameters between inpatient and home-based hospice-palliative care services for terminal cancer patients in Korea. Methods: A decision-analytic Markov model was used to compare the cost-effectiveness of hospice-palliative care in an inpatient unit (inpatient-start group) and at home (home-start group). The model adopted a healthcare system perspective, with a 9-week horizon and a 1-week cycle length. The transition probabilities were calculated based on the reports from the Korean National Cancer Center in 2017 and Health Insurance Review & Assessment Service in 2020. Quality of life (QOL) was converted to the quality-adjusted life week (QALW). Modeling and cost-effectiveness analysis were performed with TreeAge software. The weekly medical cost was estimated to be 2,481,479 Korean won (KRW) for inpatient hospice-palliative care and 225,688 KRW for home-based hospice-palliative care. One-way sensitivity analysis was used to assess the impact of different scenarios and assumptions on the model results. Results: Compared with the inpatient-start group, the incremental cost of the home-start group was 697,657 KRW, and the incremental effectiveness based on QOL was 0.88 QALW. The incremental cost-effectiveness ratio (ICER) of the home-start group was 796,476 KRW/QALW. Based on one-way sensitivity analyses, the ICER was predicted to increase to 1,626,988 KRW/QALW if the weekly cost of home-based hospice doubled, but it was estimated to decrease to -2,898,361 KRW/QALW if death rates at home doubled. Conclusion: Home-based hospice-palliative care may be more cost-effective than inpatient hospice-palliative care. Home-based hospice appears to be affordable even if the associated medical expenditures double.
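
The structure of a weekly-cycle Markov cohort model can be sketched in a few lines; in the example below the transition probabilities and utility weights are hypothetical, and only the weekly costs are taken from the abstract.

```python
# Illustrative sketch of a weekly-cycle Markov cohort model like the one
# described above (the transition matrix and utilities are hypothetical, not
# the paper's TreeAge estimates).
import numpy as np

STATES = ["home_care", "inpatient_care", "death"]
# Hypothetical weekly transition matrix for the home-start strategy.
P = np.array([[0.80, 0.10, 0.10],
              [0.15, 0.70, 0.15],
              [0.00, 0.00, 1.00]])
weekly_cost = np.array([225_688, 2_481_479, 0])   # KRW per week (from the abstract)
weekly_qol  = np.array([0.60, 0.45, 0.00])        # hypothetical utility weights

cohort = np.array([1.0, 0.0, 0.0])                # everyone starts at home
total_cost = total_qalw = 0.0
for week in range(9):                              # 9-week horizon, 1-week cycles
    total_cost += cohort @ weekly_cost             # expected cost this cycle
    total_qalw += cohort @ weekly_qol              # expected QALW this cycle
    cohort = cohort @ P                            # advance the cohort one cycle

print(f"expected cost: {total_cost:,.0f} KRW, effectiveness: {total_qalw:.2f} QALW")
```

Running the same loop for a second strategy and feeding both totals into an ICER calculation mirrors the comparison reported in the abstract.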

Response Modeling for the Marketing Promotion with Weighted Case Based Reasoning Under Imbalanced Data Distribution (불균형 데이터 환경에서 변수가중치를 적용한 사례기반추론 기반의 고객반응 예측)

  • Kim, Eunmi;Hong, Taeho
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.29-45 / 2015
  • Response modeling is a well-known research issue for those who seek better performance in predicting customers' responses to marketing promotions. A response model reduces marketing cost by identifying prospective customers in a very large customer database and predicting the purchasing intention of the selected customers, whereas a promotion derived from an undifferentiated marketing strategy results in unnecessary cost. In addition, the big data environment has accelerated the development of response models with data mining techniques such as case-based reasoning (CBR), neural networks, and support vector machines. CBR is one of the major tools in business because it is simple and robust to apply to response modeling, even though it has not shown high performance compared with other machine learning techniques. Thus, many studies have tried to improve CBR for business data mining with enhanced algorithms or with the support of other techniques such as genetic algorithms, decision trees, and the Analytic Hierarchy Process (AHP). Ahn and Kim (2008) used logit, neural networks, and CBR to predict which customers would purchase the items promoted by a marketing department, and optimized the number of neighbors k for k-nearest-neighbor retrieval with a genetic algorithm to improve the performance of the integrated model. Hong and Park (2009) noted that an integrated approach combining CBR with logit, neural networks, and support vector machines (SVM) predicted customers' responses to marketing promotions better than each individual model. This paper presents an approach to predicting customers' responses to a marketing promotion with case-based reasoning, in which a different weight is applied to each feature. We fit a logit model on a database containing the promotion and purchasing data for bath soap, and the resulting coefficients were used as the feature weights for CBR. We empirically compared the performance of the proposed weighted-CBR model with neural networks and a pure CBR model and found that the weighted-CBR model outperformed the pure CBR model. Imbalanced data are a common problem when building classification models on real data, as in bankruptcy prediction, intrusion detection, fraud detection, churn management, and response modeling. Imbalanced data means that the number of instances in one class is remarkably small or large compared with the number of instances in the other classes. A classification model such as a response model has difficulty learning patterns from such data because it tends to ignore the small class while classifying the large class correctly. Sampling, which can be categorized into under-sampling and over-sampling, is one of the most representative approaches to the problems caused by an imbalanced data distribution. However, CBR is not sensitive to the data distribution because, unlike machine learning algorithms, it does not learn from the data.
In this study, we investigated the robustness of the proposed model while changing the ratio of responding to non-responding customers, because the customers who respond to a promotion are always a small fraction of the non-responders in the real world. We simulated the proposed model 100 times to validate its robustness with different response ratios under an imbalanced data distribution and found that the proposed CBR-based model outperformed the compared models on the imbalanced data sets. Our study is expected to improve the performance of response models for promotion programs with CBR under the imbalanced data distributions found in the real world.
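
One simple way to realize feature-weighted case retrieval is to rescale each feature by its weight before a nearest-neighbor search; the sketch below illustrates this with synthetic cases and assumed weights (in the study the weights come from logit coefficients).

```python
# Minimal sketch of weighted case-based reasoning: feature weights rescale the
# distance used by k-nearest-neighbor retrieval over past customer cases.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
X_cases = rng.normal(size=(500, 5))               # historical customer cases
y_cases = rng.integers(0, 2, size=500)            # 1 = responded to promotion
weights = np.array([0.9, 0.1, 0.5, 1.4, 0.3])     # hypothetical logit-derived weights

# Scaling each feature by its weight makes the Euclidean distance feature-weighted.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_cases * weights, y_cases)

new_customers = rng.normal(size=(3, 5))
print(knn.predict(new_customers * weights))        # predicted response
```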

An Analytical Approach Using Topic Mining for Improving the Service Quality of Hotels (호텔 산업의 서비스 품질 향상을 위한 토픽 마이닝 기반 분석 방법)

  • Moon, Hyun Sil;Sung, David;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.21-41 / 2019
  • Thanks to the rapid development of information technologies, the data available on the Internet have grown rapidly. In this era of big data, many studies have attempted to offer insights from data analysis. In the tourism and hospitality industry, many firms and studies have paid attention to online reviews on social media because of their large influence over customers. As tourism is an information-intensive industry, the effect of these information networks on social media platforms is more remarkable than in other types of media. However, there are limitations to the service-quality improvements that can be made based on opinions posted on social media platforms. Users represent their opinions as text, images, and so on, so the raw review data are unstructured, and the data sets are too large for people to extract new information and hidden knowledge unaided. To use them for business intelligence and analytics applications, big data techniques such as natural language processing and data mining are needed. This study suggests an analytical approach that yields insights directly from these reviews to improve the service quality of hotels. The proposed approach consists of topic mining, to extract the topics contained in the reviews, and decision tree modeling, to explain the relationship between topics and ratings. Topic mining is a method for finding groups of words that represent the documents in a collection. Among several topic mining methods, we adopted the Latent Dirichlet Allocation (LDA) algorithm, which is the most widely used. However, LDA alone is not enough to find insights that can improve service quality because it cannot relate topics to ratings. To overcome this limitation, we also use the Classification and Regression Tree (CART) method, a kind of decision tree technique. Through the CART method, we can find which topics are related to positive or negative ratings of a hotel and visualize the results. Therefore, this study investigates an analytical approach for improving hotel service quality from unstructured review data. Through experiments on four hotels in Hong Kong, we identify the strengths and weaknesses of each hotel's services and suggest improvements to aid customer satisfaction. From positive reviews, we find what these hotels should maintain; for example, compared with the other hotels, one hotel's positive reviews highlight its good location and room condition. From negative reviews, we find what they should change; for example, one hotel should improve the soundproofing of its rooms. These results show that our approach is useful for finding insights into the service quality of hotels. That is, from an enormous volume of review data, our approach can provide practical suggestions for hotel managers to improve their service quality. In the past, studies on improving service quality relied on surveys or interviews of customers, which are often costly and time-consuming and whose results may be distorted by biased sampling or untrustworthy answers. The proposed approach directly obtains honest feedback from customers' online reviews and draws insights through big data analysis.
It is therefore a useful tool for overcoming the limitations of surveys and interviews. Moreover, our approach can easily obtain service-quality information for other hotels or services in the tourism industry because it needs only open online reviews and ratings as input data. Furthermore, its performance will improve if other structured and unstructured data sources are added.
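
A compact sketch of the topic-mining-plus-CART pipeline is shown below; the reviews, ratings, and topic count are toy assumptions rather than the Hong Kong hotel corpus, and the tree depth is arbitrary.

```python
# Illustrative sketch (toy reviews): extract topics with Latent Dirichlet
# Allocation, then explain ratings from topic proportions with a CART-style
# regression tree.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.tree import DecisionTreeRegressor, export_text

reviews = ["great location near the station but the room was noisy",
           "clean room, friendly staff, excellent breakfast",
           "poor soundproofing and thin walls, could not sleep",
           "perfect location, helpful staff, tiny room"]
ratings = [3.0, 5.0, 2.0, 4.0]

counts = CountVectorizer(stop_words="english").fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_props = lda.fit_transform(counts)            # document-topic proportions

tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(topic_props, ratings)
print(export_text(tree, feature_names=["topic_0", "topic_1"]))  # topic-to-rating rules
```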