• Title/Summary/Keyword: default prediction

Dynamic Retry Adaptation Scheme to Improve Transmission of H.264 HD Video over 802.11 Peer-to-Peer Networks

  • Sinky, Mohammed; Lee, Ben; Lee, Tae-Wook; Kim, Chang-Gone; Shin, Jong-Keun
    • ETRI Journal / v.37 no.6 / pp.1096-1107 / 2015
  • This paper presents a dynamic retry adaptation scheme for H.264 HD video, called DRAS.264, which dynamically adjusts the retry limits of frames at the medium access control (MAC) layer according to the impact those frames have on the streamed H.264 HD video. DRAS.264 is further improved with a bandwidth estimation technique, better prediction of packet delays, and expanded results covering multi-slice video. Our study is performed using the Open Evaluation Framework for Video Over Networks as a simulation environment for various congestion scenarios. Results show improvements in average peak signal-to-noise ratios of up to 4.45 dB for DRAS.264 in comparison to the default MAC layer operation. Furthermore, the ability of DRAS.264 to prioritize data of H.264 bitstreams reduces error propagation during video playback, leading to noticeable visual improvements.
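
For illustration, here is a minimal sketch of the kind of per-frame retry-limit adaptation DRAS.264 performs at the MAC layer; the frame-importance weights, the delay-budget check, and the function names are hypothetical, not taken from the paper.

```python
# Hedged sketch of per-frame MAC retry-limit adaptation in the spirit of
# DRAS.264. The weights and the delay-budget cap are illustrative; the
# paper derives them from bandwidth estimation and slice importance.

FRAME_IMPORTANCE = {"I": 3, "P": 2, "B": 1}  # hypothetical relative weights

def retry_limit(frame_type: str, delay_budget_ms: float,
                est_tx_delay_ms: float, base_limit: int = 4) -> int:
    """Scale the retry limit by frame importance, capped by the number
    of retries the remaining delay budget can absorb."""
    importance = FRAME_IMPORTANCE.get(frame_type, 1)
    wanted = base_limit * importance
    affordable = int(delay_budget_ms // max(est_tx_delay_ms, 1e-3))
    return max(1, min(wanted, affordable))

print(retry_limit("I", delay_budget_ms=40.0, est_tx_delay_ms=2.5))  # -> 12
```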

Ensemble Learning for Solving Data Imbalance in Bankruptcy Prediction (기업부실 예측 데이터의 불균형 문제 해결을 위한 앙상블 학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.15 no.3 / pp.1-15 / 2009
  • In a classification problem, data imbalance occurs when the instances of one class greatly outnumber those of the other. Such data sets often cause a classifier to default to the majority class because of the skewed decision boundary, reducing its classification accuracy. This paper proposes Geometric Mean-based Boosting (GM-Boost) to resolve the data imbalance problem. Because GM-Boost is built on the geometric mean of the class-wise accuracies, its learning process accounts for both the majority and minority classes and reinforces learning on misclassified data. An empirical study on bankruptcy prediction for Korean companies shows that GM-Boost achieves higher classification accuracy than previous methods for imbalanced data, including under-sampling, over-sampling, and AdaBoost, and maintains robust learning performance regardless of the degree of data imbalance.
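
As a rough illustration of the idea, the sketch below shows one boosting round that weights both classes symmetrically through a geometric mean of the per-class accuracies; the exact GM-Boost update rule in the paper may differ.

```python
# Hedged sketch of a geometric-mean-oriented boosting round, assuming
# binary labels in {-1, +1} as NumPy arrays. Illustrative only; not the
# paper's exact GM-Boost formulation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def gm_boost_round(X, y, sample_w):
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y, sample_weight=sample_w)
    pred = stump.predict(X)
    # Per-class accuracies, so the minority class is not drowned out.
    acc_pos = np.average(pred[y == 1] == 1, weights=sample_w[y == 1])
    acc_neg = np.average(pred[y == -1] == -1, weights=sample_w[y == -1])
    gm = np.sqrt(acc_pos * acc_neg)          # geometric mean of both sides
    alpha = 0.5 * np.log((gm + 1e-9) / (1 - gm + 1e-9))
    sample_w = sample_w * np.exp(-alpha * y * pred)  # boost misclassified data
    return stump, alpha, sample_w / sample_w.sum()
```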


A Study on Classification Models for Predicting Bankruptcy Based on XAI (XAI 기반 기업부도예측 분류모델 연구)

  • Jihong Kim; Nammee Moon
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.333-340 / 2023
  • Efficient prediction of corporate bankruptcy is important for financial institutions to make sound lending decisions and reduce loan default rates. Many studies have applied classification models based on artificial intelligence. In the financial industry, however, even a predictive model with excellent performance must be accompanied by an intuitive explanation of how its results were determined. The US, the EU, and South Korea have all recently introduced a right to request explanations of algorithmic decisions, so transparency in the use of AI in the financial sector must be secured. In this paper, an interpretable AI-based classification model is proposed using publicly available corporate bankruptcy data. After data preprocessing and 5-fold cross-validation, classification performance was compared across 10 optimized supervised learning classifiers, including logistic regression, SVM, XGBoost, and LightGBM. LightGBM was confirmed as the best-performing model, and SHAP, an explainable artificial intelligence technique, was applied to provide a post-hoc explanation of the bankruptcy prediction process.
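
A minimal sketch of the pipeline the abstract describes; the CSV path, column name, and hyperparameters are placeholders, not the paper's.

```python
# Hedged sketch: 5-fold CV with LightGBM, then post-hoc SHAP explanation.
# "bankruptcy.csv" and the "bankrupt" label column are placeholders.
import lightgbm as lgb
import pandas as pd
import shap
from sklearn.model_selection import cross_val_score

df = pd.read_csv("bankruptcy.csv")             # placeholder dataset
X, y = df.drop(columns=["bankrupt"]), df["bankrupt"]

model = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05)
print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())

model.fit(X, y)
explainer = shap.TreeExplainer(model)          # post-hoc explanation
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)              # global feature attribution
```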

Circuit Performance Prediction of Scaled FinFET Following ITRS Roadmap based on Accurate Parasitic Compact Model (정확한 기생 성분을 고려한 ITRS roadmap 기반 FinFET 공정 노드별 회로 성능 예측)

  • Choe, KyeungKeun; Kwon, Kee-Won; Kim, SoYoung
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.10 / pp.33-46 / 2015
  • In this paper, we predict the analog and digital circuit performance of FinFETs scaled down following the ITRS (International Technology Roadmap for Semiconductors). For accurate prediction of the circuit performance of scaled devices, analytical models of the parasitic resistance and capacitance are developed; their accuracies are within 2% of 3D TCAD simulation results. The parasitic capacitance models are derived using conformal mapping, and the parasitic resistance models extend the default parasitic resistance model of BSIM-CMG to account for the fin extension length ($L_{ext}$). A new algorithm is developed to fit the DC characteristics of BSIM-CMG to the reference DC data. The proposed capacitance and resistance models are implemented inside BSIM-CMG in place of the default parasitic model, and SPICE simulations are performed to predict circuit performance for metrics and circuits such as $f_T$, $f_{MAX}$, ring oscillators, and a common-source amplifier. Using the proposed models, device and circuit performance is quantitatively predicted down to the 5 nm FinFET node. As FinFET technology scales, improvements in both the DC characteristics and the parasitic elements lead to better circuit performance.
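
To make the role of the parasitics concrete, here is a small sketch of how added capacitance enters one of the quoted figures of merit via the standard approximation $f_T = g_m/(2\pi(C_{gs}+C_{gd}))$; the numerical values are illustrative, not from the paper.

```python
# Hedged sketch: unity-current-gain frequency as a function of the
# parasitic capacitances. Numbers are illustrative, not the paper's.
import math

def f_T(gm_S: float, cgs_F: float, cgd_F: float) -> float:
    """f_T = gm / (2*pi*(Cgs + Cgd)) for a MOSFET/FinFET."""
    return gm_S / (2 * math.pi * (cgs_F + cgd_F))

# Larger parasitic capacitance directly lowers f_T:
print(f_T(gm_S=50e-6, cgs_F=40e-18, cgd_F=20e-18) / 1e9, "GHz")  # ~132.6
```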

Direction-Embedded Branch Prediction based on the Analysis of Neural Network (신경망의 분석을 통한 방향 정보를 내포하는 분기 예측 기법)

  • Kwak Jong Wook; Kim Ju-Hwan; Jhon Chu Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.1 / pp.9-26 / 2005
  • In the pursuit of ever higher performance, recent computer systems have adopted deep pipelines, dynamic scheduling, and multi-issue superscalar processor technologies. In this situation, branch prediction schemes are an essential part of modern microarchitectures, because the penalty for a branch misprediction grows as pipelines deepen and the number of instructions issued per cycle increases. In this paper, we propose a novel branch prediction scheme, direction-gshare (d-gshare), to improve prediction accuracy. First, we model a neural network with the components that may affect branch prediction accuracy and analyze the variation of their weights. Then, we add the component with a high weight value to the original gshare scheme. We simulate our branch prediction scheme using SimpleScalar, a powerful event-driven simulator, and analyze the results. Compared with the bimodal, two-level adaptive, and gshare predictors, our direction-gshare predictor (d-gshare.3) outperforms them, without additional hardware cost, by up to 4.1% (1.5% on average) with the default amount of embedded direction information, and by up to 11.8% (3.7% on average) with the optimal amount.
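
For context, here is a minimal sketch of the baseline gshare predictor that d-gshare extends (2-bit saturating counters indexed by the PC XORed with global history); the paper's direction-embedding component is not reproduced.

```python
# Hedged sketch of a classic gshare branch predictor, the baseline
# d-gshare builds on. Table size is illustrative.
class Gshare:
    def __init__(self, index_bits: int = 12):
        self.mask = (1 << index_bits) - 1
        self.history = 0                       # global branch history register
        self.table = [2] * (1 << index_bits)   # 2-bit counters, weakly taken

    def predict(self, pc: int) -> bool:
        idx = (pc ^ self.history) & self.mask  # XOR pc with global history
        return self.table[idx] >= 2            # counter >= 2 means "taken"

    def update(self, pc: int, taken: bool):
        idx = (pc ^ self.history) & self.mask
        ctr = self.table[idx]
        self.table[idx] = min(ctr + 1, 3) if taken else max(ctr - 1, 0)
        self.history = ((self.history << 1) | int(taken)) & self.mask
```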

Corporate Bankruptcy Prediction Model using Explainable AI-based Feature Selection (설명가능 AI 기반의 변수선정을 이용한 기업부실예측모형)

  • Gundoo Moon; Kyoung-jae Kim
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.241-265 / 2023
  • A corporate insolvency prediction model serves as a vital tool for objectively monitoring the financial condition of companies: it enables timely warnings, facilitates responsive actions, and supports the formulation of effective management strategies to mitigate bankruptcy risk and enhance performance, and investors and financial institutions use default prediction models to minimize financial losses. As interest in applying artificial intelligence (AI) to corporate insolvency prediction grows, extensive research has been conducted in this domain, with an increasing demand for explainable AI models that emphasize interpretability and reliability. The SHAP (SHapley Additive exPlanations) technique has gained significant popularity and has demonstrated strong performance in various applications, but it suffers from computational cost, processing time, and scalability limitations that grow with the number of variables. This study introduces a variable-selection approach that reduces the number of variables by averaging SHAP values computed on bootstrapped data subsets rather than on the entire dataset, improving computational efficiency while maintaining excellent predictive performance. Random forest, XGBoost, and C5.0 models are trained on the selected, highly interpretable variables, and the classification accuracy of an ensemble of these models, generated through soft voting, is compared with that of the individual models. The study uses data from 1,698 Korean light-industry companies: bootstrapping creates distinct data groups, logistic regression is employed to calculate SHAP values for each group, and their averages are computed to derive the final SHAP values. The proposed model enhances interpretability and aims to achieve superior predictive performance.
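
A hedged sketch of the proposed variable-selection step, assuming numeric NumPy arrays; the number of bootstrap samples and selected variables are illustrative choices, not the paper's settings.

```python
# Hedged sketch of bootstrapped SHAP-averaging feature selection:
# logistic regression per bootstrap sample, SHAP values averaged across
# samples, top-k variables kept. n_boot and k are illustrative.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

def select_features(X, y, n_boot: int = 10, k: int = 10):
    rng = np.random.default_rng(0)
    importance = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), size=len(X))   # bootstrap sample
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        expl = shap.LinearExplainer(model, X[idx])
        importance += np.abs(expl.shap_values(X[idx])).mean(axis=0)
    importance /= n_boot                              # averaged SHAP values
    return np.argsort(importance)[::-1][:k]           # indices of top-k vars
```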

Performance Modelling of Adaptive VANET with Enhanced Priority Scheme

  • Lim, Joanne Mun-Yee; Chang, YoongChoon; Alias, MohamadYusoff; Loo, Jonathan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.4 / pp.1337-1358 / 2015
  • In this paper, we present an analytical and simulation study of the performance of an adaptive vehicular ad hoc network (VANET) priority scheme based on Transmission Distance Reliability Range (TDRR) and data type. VANET topology changes rapidly because of its inherently high-mobility nodes and unpredictable environments, so nodes in a VANET must adapt to the ever-changing environment and optimize parameters to enhance performance. However, the current VANET scheme lacks such adaptability: the Enhanced Distributed Channel Access (EDCA) of the existing IEEE 802.11p standard assigns priority solely based on data type. We propose a new priority scheme that uses a Markov model to perform TDRR prediction and assigns priorities according to the proposed Markov TDRR Prediction with Enhanced Priority VANET Scheme (MarPVS). Subsequently, we perform an analytical study of MarPVS performance modeling. In particular, considering the five priority levels defined in MarPVS, we derive the probability of successful transmission, the number of low-priority messages in the back-off process, and the number of concurrent low-priority transmissions. Finally, these results are used to derive the average transmission delay for the data types defined in MarPVS. Numerical results are provided along with simulation results, which confirm the accuracy of the proposed analysis. The simulation results demonstrate that MarPVS achieves lower transmission latency and a higher packet success rate than the default IEEE 802.11p scheme and a greedy scheduler scheme.
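
As a minimal illustration of the Markov prediction step underlying MarPVS, the sketch below picks the most likely next reliability-range state from a transition matrix; the state names and probabilities are invented for the example.

```python
# Hedged sketch of a discrete Markov prediction step, in the spirit of
# MarPVS's TDRR forecast. States and transition probabilities are invented.
import numpy as np

STATES = ["in_range_high", "in_range_low", "out_of_range"]  # hypothetical
P = np.array([[0.7, 0.2, 0.1],     # row i: transition probs from state i
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

def predict_next(state: str) -> str:
    i = STATES.index(state)
    return STATES[int(np.argmax(P[i]))]   # most probable next state

print(predict_next("in_range_low"))       # -> "in_range_low"
```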

The Importance of a Borrower's Track Record on Repayment Performance: Evidence in P2P Lending Market

  • KIM, Dongwoo
    • The Journal of Asian Finance, Economics and Business / v.7 no.7 / pp.85-93 / 2020
  • In peer-to-peer (P2P) loan markets, most lenders are unskilled, inexperienced ordinary individuals, so it is important to know which borrower characteristics significantly affect repayment performance. This study investigates the effect and importance of a borrower's past repayment track record within the platform to identify its predictive power. To this end, I analyze detailed loan repayment data from two leading P2P lending platforms in Korea using a Cox proportional hazards model, multiple linear regression, and logit models. Furthermore, the predictive power of the factors proxied by borrowers' track records is evaluated through receiver operating characteristic (ROC) curves. The results show that a borrower's past track record within the platform has the most important impact on the repayment performance of the current loan and is far more predictive of repayment performance than any other factor. These findings emphasize that individual lenders should take the quality of a borrower's past transaction history into account when making funding decisions, and that platform operators should actively share borrowers' past records with lenders.
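
A minimal sketch of the ROC-based comparison, assuming a placeholder dataset and hypothetical feature names; the paper's actual variables and model specifications differ.

```python
# Hedged sketch: fit a logit model with and without track-record features
# and compare ROC AUCs. "p2p_loans.csv" and all column names are
# placeholders, not the paper's variables.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("p2p_loans.csv")                  # placeholder data
y = df["default"]
base = ["income", "credit_score"]                  # hypothetical controls
track = base + ["past_loans_repaid", "past_delinquencies"]

for cols in (base, track):
    Xtr, Xte, ytr, yte = train_test_split(df[cols], y, random_state=0)
    p = LogisticRegression(max_iter=1000).fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    print(cols[-1], "AUC =", round(roc_auc_score(yte, p), 3))
```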

A Study on Utilization of Vision Transformer for CTR Prediction (CTR 예측을 위한 비전 트랜스포머 활용에 관한 연구)

  • Kim, Tae-Suk; Kim, Seokhun; Im, Kwang Hyuk
    • Knowledge Management Research / v.22 no.4 / pp.27-40 / 2021
  • Click-through rate (CTR) prediction is a key function that determines the ranking of candidate items in a recommendation system; recommending high-ranking items reduces customer information overload and maximizes profit through sales promotion. Natural language processing and image classification have achieved remarkable progress through deep neural networks, and recently the transformer model, based on an attention mechanism and differentiated from the previously mainstream models in those fields, has achieved state-of-the-art results. In this study, we present a method for improving the performance of a transformer model for CTR prediction. To analyze how the discrete, categorical characteristics of CTR data, which differ from natural language and image data, affect performance, we run experiments on embedding regularization and transformer normalization. The experimental results confirm that the transformer's prediction performance improves significantly when L2 regularization is applied in the embedding step of CTR input processing and when batch normalization is applied to the transformer model in place of layer normalization, its default normalization method.
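
A hedged sketch of the two reported modifications in PyTorch: L2 weight decay on the embedding table, and batch normalization swapped in for the encoder layer's default LayerNorm. Dimensions and hyperparameters are illustrative, and this is not the authors' implementation.

```python
# Hedged sketch, not the paper's code. Assumes inputs of shape
# (batch, n_fields) holding categorical feature ids.
import torch
import torch.nn as nn

d, n_fields = 32, 16                       # embedding dim, # of fields
embed = nn.Embedding(10_000, d)            # one table for all feature ids

block = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
block.norm1 = nn.BatchNorm1d(n_fields)     # swap LayerNorm -> BatchNorm,
block.norm2 = nn.BatchNorm1d(n_fields)     # normalizing across the batch

# L2 regularization on the embeddings via a per-group weight decay:
opt = torch.optim.Adam([
    {"params": embed.parameters(), "weight_decay": 1e-4},  # L2 on embeddings
    {"params": block.parameters(), "weight_decay": 0.0},
], lr=1e-3)
```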

Measurement of the Device Properties of an Ionization Smoke Detector to Improve the Predictive Performance of Fire Modeling (화재모델링 예측성능 개선을 위한 이온화식 연기감지기의 장치물성 측정)

  • Kim, Kyung-Hwa; Hwang, Cheol-Hong
    • Fire Science and Engineering / v.27 no.4 / pp.27-34 / 2013
  • High prediction performance of fire detector models is essential to ensure the reliability of fire and evacuation modeling in performance-based fire safety design (PBD). The main objective of the present study is to measure the input information needed for accurate prediction of smoke detector activation times in a Large Eddy Simulation (LES) fire model such as FDS (Fire Dynamics Simulator). To this end, an FDE (Fire Detector Evaluator) that can measure the device properties of a detector was developed, and the input parameters of the Heskestad and Cleary models were measured for an ionization smoke detector. In addition, the smoke detector activation times predicted using the FDS default values and using the values measured in this study were systematically compared. The device properties of the smoke detector examined here differed significantly from the FDS defaults, producing differences of up to 15 minutes or more in the predicted activation time. In future studies, a database (DB) of the device properties of various smoke and heat detectors will be built to improve the reliability of PBD.
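
For reference, the Heskestad model mentioned above treats the detector's sensing chamber as a first-order lag, dY_c/dt = (Y_e - Y_c)·u/L, where L is the characteristic length measured as a device property. Below is a minimal sketch under that assumption; all numeric values are illustrative.

```python
# Hedged sketch of the Heskestad smoke-detector lag model:
#   dY_c/dt = (Y_e - Y_c) * u / L
# Y_e: smoke concentration outside the detector, Y_c: inside the chamber,
# u: local gas velocity, L: characteristic length (the measured property).
# All numbers below are illustrative, not the paper's measured values.

def chamber_step(y_c: float, y_e: float, u: float, L: float, dt: float) -> float:
    """One explicit-Euler step of the chamber response."""
    return y_c + (y_e - y_c) * (u / L) * dt

y_c, dt = 0.0, 0.01
for _ in range(500):                      # 5 s of simulated smoke exposure
    y_c = chamber_step(y_c, y_e=0.1, u=0.3, L=1.8, dt=dt)
# The detector alarms once y_c crosses its activation threshold; a larger
# L (or lower u) delays activation, which is why measured L values matter.
print(round(y_c, 4))
```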