• Title/Abstract/Keywords: financial institution section

Search Results: 4

Status and Future of Security Techniques in the Internet Banking Service (인터넷 뱅킹 서비스 보안기술의 현황과 미래)

  • Lee, Kyungroul;Yim, Kangbin;Seo, Jungtaek
    • Journal of Internet Computing and Services / v.18 no.2 / pp.31-42 / 2017
  • As the Internet banking service has become popular, many users can exchange goods online. Despite this advantage, there have been incidents in the Internet banking service due to security threats. To counteract this problem, various security techniques have been applied across the whole Internet banking service. In this paper, we present the results of analyzing the security techniques applied in the financial institution area and the network communication area. We expect this paper to be useful as a reference for protecting against security threats caused by insiders and by vulnerabilities in implementation.

Analysis and Classification of Security Threats based on the Internet Banking Service (인터넷 뱅킹 서비스에서의 보안위협 분류 및 분석)

  • Lee, Kyung-Roul;Lee, Sun-Young;Yim, Kang-Bin
    • Informatization Policy / v.24 no.2 / pp.20-42 / 2017
  • In this paper, we focus on the classification of security threats and the definition of security requirements for the Internet banking service. Threats are classified based on past and potential incidents, upon which we propose security requirements. In order to identify security threats, the structure of the Internet banking service is divided into three sections - the financial institutions, the network, and the user terminal - and we define the threats arising in each section. We focus the analysis especially on the user-terminal section, which is relatively vulnerable and causes difficulties in securing the stability of the service as a whole. The analyzed security threats are expected to serve as the foundation for the safe configuration of various Internet banking services.

A Study on VaR Stability for Operational Risk Management (운영리스크 VaR 추정값의 안정성검증 방법 연구)

  • Kim, Hyun-Joong;Kim, Woo-Hwan;Lee, Sang-Cheol;Im, Jong-Ho;Cho, Sang-Hee;Kim, Ah-Hyoun
    • Communications for Statistical Applications and Methods / v.15 no.5 / pp.697-708 / 2008
  • Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people, and systems, or from external events. The advanced measurement approach proposed by the Basel Committee uses the loss distribution approach (LDA), which quantifies operational loss based on a bank's own historical data and measurement system. LDA involves fitting two distributions (frequency and severity) and then generating the aggregate loss distribution by mathematical convolution. An objective validation of the operational risk measurement is essential because the measurement allows flexibility and subjective judgement in calculating regulatory capital. However, the methodology to verify the soundness of operational risk measurement has not been fully developed, because internal operational loss data have been extremely sparse and modeling the extreme tail is very difficult. In this paper, we propose a methodology for validating operational risk measurement based on bootstrap confidence intervals of operational VaR (value at risk), and derive two methods to generate confidence intervals of operational VaR.
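The LDA-plus-bootstrap procedure described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the Poisson frequency, lognormal severity, and all parameter values are assumptions chosen for the example, and the paper's own interval constructions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_aggregate_losses(lam, sev_mu, sev_sigma, n_sims, rng):
    """Monte Carlo LDA: draw an annual loss count from a Poisson frequency
    distribution, then sum that many lognormal severity draws (the
    convolution of frequency and severity done by simulation)."""
    counts = rng.poisson(lam, size=n_sims)
    return np.array(
        [rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts]
    )

def operational_var(losses, level=0.999):
    """Operational VaR as a high quantile of the aggregate loss distribution."""
    return np.quantile(losses, level)

def bootstrap_var_ci(losses, level=0.999, n_boot=500, alpha=0.05, rng=rng):
    """Nonparametric bootstrap confidence interval for operational VaR:
    resample the simulated aggregate losses with replacement, recompute
    the VaR on each resample, and take the empirical alpha/2 quantiles."""
    boot_vars = np.array([
        operational_var(
            rng.choice(losses, size=losses.size, replace=True), level
        )
        for _ in range(n_boot)
    ])
    return np.quantile(boot_vars, [alpha / 2, 1 - alpha / 2])

# Illustrative parameters (assumed, not from the paper).
losses = simulate_aggregate_losses(
    lam=20, sev_mu=10.0, sev_sigma=1.0, n_sims=2000, rng=rng
)
lo, hi = bootstrap_var_ci(losses)
```

A wide interval relative to the point estimate signals an unstable VaR, which is exactly the instability the validation methodology is meant to detect.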

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analysis. In this situation, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. We then compare the performance of predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, many researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Research on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which is based on multiple discriminant analysis and is still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement some limitations of the previous models. Furthermore, Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts the financial analysis of savings and loans.
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model. Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and the logit model, and Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample; specifically, the LightGBM model shows the highest accuracy of 71.1% and the Logit model the lowest of 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing Type II error, which causes more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. When we examine the classification accuracy for each interval, the Logit model has the highest accuracy of 100% for the 0~10% interval of predicted default probability, but a relatively lower accuracy of 61.5% for the 90~100% interval.
On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results, since they achieve higher accuracy for both the 0~10% and 90~100% intervals but lower accuracy around the 50% interval. Regarding the distribution of samples across the intervals, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy with a small number of cases, LightGBM or XGBoost could be a more desirable model, since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy. Considering the importance of Type II error and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. However, each predictive model has a comparative advantage with respect to particular evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple machine learning classifiers via majority voting could maximize overall performance.
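The interval-based evaluation this abstract describes, splitting predicted default probabilities into ten equal bins and measuring classification accuracy within each, can be sketched as follows. The data and the 0.5 decision threshold are illustrative assumptions; the paper's dataset and models are not reproduced here.

```python
import numpy as np

def accuracy_by_probability_decile(y_true, y_prob, threshold=0.5):
    """Split predicted default probabilities into ten equal intervals
    (0~10%, 10~20%, ..., 90~100%) and compute the classification
    accuracy and sample count within each interval."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    y_pred = (y_prob >= threshold).astype(int)

    edges = np.linspace(0.0, 1.0, 11)
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Last interval is closed on the right so that prob == 1.0 is binned.
        if hi < 1.0:
            mask = (y_prob >= lo) & (y_prob < hi)
        else:
            mask = (y_prob >= lo) & (y_prob <= hi)
        n = int(mask.sum())
        acc = float((y_pred[mask] == y_true[mask]).mean()) if n else float("nan")
        results.append(((lo, hi), n, acc))
    return results
```

Per-interval accuracy makes visible what a single overall accuracy figure hides: a model can be very reliable in the extreme 0~10% and 90~100% intervals, where decisions matter most to the guaranty company, while being near-random around the 50% interval.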