• Title/Summary/Keyword: Generated AI


Corporate Bankruptcy Prediction Model using Explainable AI-based Feature Selection (설명가능 AI 기반의 변수선정을 이용한 기업부실예측모형)

  • Gundoo Moon;Kyoung-jae Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.241-265
    • /
    • 2023
  • A corporate insolvency prediction model serves as a vital tool for objectively monitoring the financial condition of companies. It enables timely warnings, facilitates responsive actions, and supports the formulation of effective management strategies to mitigate bankruptcy risks and enhance performance. Investors and financial institutions use default prediction models to minimize financial losses. As interest in applying artificial intelligence (AI) to corporate insolvency prediction grows, extensive research has been conducted in this domain. However, there is an increasing demand for explainable AI models that emphasize interpretability and reliability. The SHAP (SHapley Additive exPlanations) technique has gained significant popularity and has demonstrated strong performance in various applications, but it has limitations such as computational cost, processing time, and scalability concerns as the number of variables grows. This study introduces a variable-selection approach that reduces the number of variables by averaging SHAP values computed on bootstrapped data subsets instead of on the entire dataset, aiming to improve computational efficiency while maintaining strong predictive performance. Random forest, XGBoost, and C5.0 models are then trained on the selected, highly interpretable variables, and the classification accuracy of an ensemble model built through soft voting is compared with that of the individual models. The study uses data from 1,698 Korean light-industry companies and employs bootstrapping to create distinct data groups. Logistic regression is used to calculate SHAP values for each data group, and their averages are computed to derive the final SHAP values. The proposed model enhances interpretability while aiming for superior predictive performance.
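The bootstrap-averaged SHAP idea above can be illustrated with a short sketch: a logistic regression model is fit on each bootstrapped subset, SHAP values are computed per subset, and the mean absolute SHAP values are averaged across subsets to rank variables. The synthetic data, number of bootstrap rounds, subset size, and cut-off of ten variables below are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch: bootstrap-averaged SHAP values for variable selection.
# Assumptions (not from the paper): synthetic data, 30 bootstrap rounds,
# subsets of 200 rows, and the top-10 variables kept.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1698, n_features=40, random_state=0)

rng = np.random.default_rng(0)
n_rounds, subset_size = 30, 200
mean_abs_shap = np.zeros(X.shape[1])

for _ in range(n_rounds):
    idx = rng.choice(len(X), size=subset_size, replace=True)  # bootstrap subset
    Xb, yb = X[idx], y[idx]
    model = LogisticRegression(max_iter=1000).fit(Xb, yb)
    explainer = shap.LinearExplainer(model, Xb)                # SHAP for the linear model
    shap_values = explainer.shap_values(Xb)                    # assumed shape: (subset_size, n_features)
    mean_abs_shap += np.abs(shap_values).mean(axis=0) / n_rounds

top_features = np.argsort(mean_abs_shap)[::-1][:10]            # variables to keep
print(top_features)
```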

Detection of video editing points using facial keypoints (얼굴 특징점을 활용한 영상 편집점 탐지)

  • Joshep Na;Jinho Kim;Jonghyuk Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.15-30
    • /
    • 2023
  • Recently, various services using artificial intelligence (AI) are emerging in the media field as well. However, most video editing, which involves finding editing points and joining clips, is still carried out manually, requiring considerable time and human resources. Therefore, this study proposes a methodology that detects the editing points of a video according to whether the person in the video is speaking, using Video Swin Transformer. The proposed structure first detects facial keypoints through face alignment, so that the temporal and spatial changes of the face are reflected from the input video data. Then, the Video Swin Transformer-based model proposed in this study classifies the behavior of the person in the video. Specifically, the feature map generated by Video Swin Transformer from the video data is combined with the facial keypoints detected through face alignment, and utterance is classified through convolution layers. In conclusion, the performance of the video editing point detection model using facial keypoints proposed in this paper improved to 89.17%, compared with 87.46% for the model without facial keypoints.
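A minimal PyTorch-style sketch of the fusion step described above: per-frame features from a video backbone are concatenated with flattened facial keypoints and passed through convolution layers for utterance classification. The tensor shapes, layer sizes, and the dummy inputs standing in for Video Swin Transformer features and Face Alignment keypoints are assumptions for illustration.

```python
# Illustrative sketch only: fuse video features with facial keypoints and
# classify speaking vs. not speaking. Shapes and layer sizes are assumed.
import torch
import torch.nn as nn

class KeypointFusionHead(nn.Module):
    def __init__(self, feat_dim=768, n_keypoints=68, n_classes=2):
        super().__init__()
        # Project the flattened (x, y) keypoints of each frame to the feature width.
        self.kp_proj = nn.Linear(n_keypoints * 2, feat_dim)
        # Temporal convolutions over the concatenated representation.
        self.conv = nn.Sequential(
            nn.Conv1d(2 * feat_dim, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(128, n_classes)

    def forward(self, video_feats, keypoints):
        # video_feats: (B, T, feat_dim) pooled per-frame features from a video backbone
        # keypoints:   (B, T, n_keypoints, 2) facial keypoints per frame
        kp = self.kp_proj(keypoints.flatten(2))       # (B, T, feat_dim)
        x = torch.cat([video_feats, kp], dim=-1)      # (B, T, 2*feat_dim)
        x = self.conv(x.transpose(1, 2)).squeeze(-1)  # (B, 128)
        return self.fc(x)                             # (B, n_classes)

# Dummy tensors standing in for backbone output and detected keypoints.
head = KeypointFusionHead()
logits = head(torch.randn(4, 16, 768), torch.randn(4, 16, 68, 2))
print(logits.shape)  # torch.Size([4, 2])
```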

Generation of He I 1083 nm Images from SDO/AIA 19.3 and 30.4 nm Images by Deep Learning

  • Son, Jihyeon;Cha, Junghun;Moon, Yong-Jae;Lee, Harim;Park, Eunsu;Shin, Gyungin;Jeong, Hyun-Jin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.1
    • /
    • pp.41.2-41.2
    • /
    • 2021
  • In this study, we generate He I 1083 nm images from Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) images using a novel deep learning method (pix2pixHD) based on conditional Generative Adversarial Networks (cGAN). He I 1083 nm images from National Solar Observatory (NSO)/Synoptic Optical Long-term Investigations of the Sun (SOLIS) are used as target data. We build three models: a single-input SDO/AIA 19.3 nm image for Model I, a single-input 30.4 nm image for Model II, and double-input (19.3 and 30.4 nm) images for Model III. We use data from October 2010 to July 2015 for training, excluding June and December, and the remaining (June and December) data for testing. The major results of our study are as follows. First, the models successfully generate He I 1083 nm images with high correlations. Second, the model with two input images shows better results than those with one input image in terms of metrics such as the correlation coefficient (CC) and root mean squared error (RMSE). For Model III with 4 × 4 binning, the CC and RMSE between real and AI-generated images are 0.84 and 11.80, respectively. Third, the AI-generated images reproduce observational features such as active regions, filaments, and coronal holes well. This work is meaningful in that our model can produce He I 1083 nm images with higher cadence and without data gaps, which would be useful for studying the time evolution of the chromosphere and coronal holes.

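The correlation coefficient and RMSE quoted above for binned images can be computed as in the short sketch below; the random arrays are placeholders for the real and AI-generated images, and 4 × 4 binning is implemented as a simple block average.

```python
# Illustrative metric computation: 4x4 binning, then pixel-to-pixel
# correlation coefficient (CC) and RMSE between a real and a generated image.
import numpy as np

def bin_image(img, factor=4):
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

real = np.random.rand(1024, 1024)       # placeholder for a SOLIS He I 1083 nm image
generated = np.random.rand(1024, 1024)  # placeholder for the AI-generated image

rb, gb = bin_image(real), bin_image(generated)
cc = np.corrcoef(rb.ravel(), gb.ravel())[0, 1]
rmse = np.sqrt(np.mean((rb - gb) ** 2))
print(f"CC = {cc:.2f}, RMSE = {rmse:.2f}")
```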

Sentiment Analysis of News Based on Generative AI and Real Estate Price Prediction: Application of LSTM and VAR Models (생성 AI기반 뉴스 감성 분석과 부동산 가격 예측: LSTM과 VAR모델의 적용)

  • Sua Kim;Mi Ju Kwon;Hyon Hee Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.5
    • /
    • pp.209-216
    • /
    • 2024
  • Real estate market prices are determined by various factors, including macroeconomic variables, as well as the influence of a variety of unstructured text data such as news articles and social media. News articles are a crucial factor in predicting real estate transaction prices because they reflect the economic sentiment of the public. This study applies sentiment analysis to news articles to generate a News Sentiment Index score, which is then integrated into a real estate price prediction model. To calculate the sentiment index, the content of the articles is first summarized. Then, using generative AI, the summaries are categorized into positive, negative, and neutral sentiments, and a total score is calculated. This score is then applied to the real estate price prediction model. The models used for real estate price prediction are a multi-head attention LSTM model and a vector autoregression (VAR) model. The LSTM prediction model, without the News Sentiment Index (NSI), showed root mean square error (RMSE) values of 0.60, 0.872, and 1.117 for the 1-month, 2-month, and 3-month forecasts, respectively. With the NSI applied, the RMSE values were reduced to 0.40, 0.724, and 1.03 for the same forecast periods. Similarly, the VAR prediction model without the NSI showed RMSE values of 1.6484, 0.6254, and 0.9220 for the 1-month, 2-month, and 3-month forecasts, respectively, while applying the NSI led to RMSE values of 1.1315, 0.3413, and 1.6227 for these periods. These results demonstrate the effectiveness of the proposed model in predicting the apartment transaction price index and its ability to forecast real estate market price fluctuations that reflect socio-economic trends.
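One plausible way to turn the per-article sentiment labels described above into a monthly News Sentiment Index is sketched below; the +1/0/−1 scoring and monthly averaging are illustrative assumptions, not the paper's exact formula.

```python
# Illustrative sketch: aggregate article-level sentiment labels into a
# monthly News Sentiment Index (NSI). The +1/0/-1 scoring is an assumption.
import pandas as pd

articles = pd.DataFrame({
    "date": pd.to_datetime(["2023-01-05", "2023-01-20", "2023-02-03", "2023-02-17"]),
    "sentiment": ["positive", "negative", "neutral", "positive"],  # labels from the generative-AI step
})

articles["score"] = articles["sentiment"].map({"positive": 1, "neutral": 0, "negative": -1})

# Monthly NSI: mean article score per month ("ME" = month-end; use "M" on older pandas).
nsi = articles.set_index("date")["score"].resample("ME").mean().rename("NSI")
print(nsi)  # this series can then be joined with the price series fed to the LSTM/VAR models
```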

Development of AI-based Prediction and Assessment Program for Tunnelling Impact

  • Yoo, Chungsik;HAIDER, SYED AIZAZ;Yang, Jaewon;ALI, TABISH
    • Journal of the Korean Geosynthetics Society
    • /
    • v.18 no.4
    • /
    • pp.39-52
    • /
    • 2019
  • In this paper, the development and implementation of an artificial intelligence (AI)-based tunnelling impact prediction and assessment program (SKKU-iTunnel) are presented. The program predicts tunnelling-induced surface settlement and groundwater drawdown using well-trained ANNs and uses these predicted values to assess the damage likely to occur in nearby structures and pipelines/utilities for a given tunnel problem. Generalised artificial neural networks (ANNs) were trained to predict the induced parameters using databases generated by combining real field data and numerical analyses of cases representing real field conditions. It is shown that the program, equipped with carefully trained ANNs, can perform tunnelling impact and damage assessments quite efficiently, with accuracy comparable to that of numerical analysis. This paper describes the idea and implementation details of SKKU-iTunnel with a demonstration example.
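A minimal sketch of the surrogate-model idea behind the program described above: a small neural network is trained on input-parameter/settlement pairs (here synthetic, standing in for the combined field and numerical-analysis database) and then queried instead of re-running a numerical analysis. The feature set, network size, and data are illustrative assumptions, not the SKKU-iTunnel implementation.

```python
# Illustrative surrogate-model sketch: train an ANN on generated
# (input parameters -> settlement) pairs. Data and features are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Assumed inputs: tunnel depth (m), diameter (m), soil stiffness (MPa), groundwater level (m).
X = rng.uniform([10, 5, 20, 2], [40, 12, 200, 20], size=(500, 4))
# Placeholder target standing in for settlements from numerical analysis (mm).
y = 30 * X[:, 1] / X[:, 0] + 100 / X[:, 2] + rng.normal(0, 0.5, 500)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                 random_state=0))
ann.fit(X, y)

# Query the trained surrogate for a new case instead of running a full analysis.
print(ann.predict([[25.0, 8.0, 80.0, 10.0]]))
```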

Assessment of Radiation Dose from Radioactive Wedge Filters during High-Energy X-Ray Therapy

  • Back, Geum-mun;Park, Sung Ho;Kim, Tae-Hyung
    • Progress in Medical Physics
    • /
    • v.28 no.2
    • /
    • pp.45-48
    • /
    • 2017
  • This paper evaluated the amount of radiation generated by wedge filters during radiation therapy using a high-energy linear accelerator, and the dose to the worker during wedge replacement. After a 10-MV photon beam was delivered with the wedge filter in place, the wedge was removed from the linear accelerator, and the dose rate and energy spectrum were measured. The initial measurement was approximately 1 µSv/h, and the radiation level fell to 0.3 µSv/h after 6 min. The effective half-life derived from the dose-rate measurements was approximately 3.5 min, and the contribution of Al-28 was about 53%. In the energy spectrum measurements, a peak at 1,799 keV was measured for Al-28, while the peak for Co-58 was not measured in the control room. The peaks for Au-106 and Cd-105 were found only when the measurement was done without removing the wedge from the linear accelerator. The additional dose received by the radiation worker during wedge replacement was estimated to be 0.08-0.4 mSv per year.
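The effective half-life quoted above follows from the two dose-rate readings in the abstract, as the short check below shows.

```python
# Effective half-life from the two dose-rate readings in the abstract:
# ~1 uSv/h initially, ~0.3 uSv/h after 6 minutes.
import math

d0, d1, t = 1.0, 0.3, 6.0                     # uSv/h, uSv/h, minutes
t_half = t * math.log(2) / math.log(d0 / d1)  # exponential-decay half-life
print(f"{t_half:.1f} min")                    # ~3.5 min, matching the abstract
```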

Deep Learning Application of Gamma Camera Quality Control in Nuclear Medicine (핵의학 감마카메라 정도관리의 딥러닝 적용)

  • Jeong, Euihwan;Oh, Joo-Young;Lee, Joo-Young;Park, Hoon-Hee
    • Journal of radiological science and technology
    • /
    • v.43 no.6
    • /
    • pp.461-467
    • /
    • 2020
  • In the field of nuclear medicine, errors are sometimes generated because the assessment of the uniformity of gamma cameras relies on the naked eye of the evaluator. To minimize these errors, we created an artificial intelligence model based on a CNN algorithm and assessed its usefulness. We produced 20,000 normal images and partial cold-region images using Python, and trained the model with a ResNet18 architecture. The training results showed that accuracy, specificity, and sensitivity were 95.01%, 92.30%, and 97.73%, respectively. In the confusion-matrix evaluation of the artificial intelligence and the expert group, the artificial intelligence achieved accuracy, specificity, and sensitivity of 94.00%, 91.50%, and 96.80%, respectively, while the expert group achieved 69.00%, 64.00%, and 74.00%, respectively. These results show that the artificial intelligence outperformed the expert group. In addition, when the radiological technologist and the AI check the images together, errors that may occur during the quality control process can be reduced, providing a better examination environment for patients, greater convenience for radiologists, and improved work efficiency.
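The accuracy, specificity, and sensitivity compared above can be read off a binary confusion matrix as in the sketch below; the labels are placeholders.

```python
# Illustrative computation of accuracy, specificity, and sensitivity from a
# binary confusion matrix (1 = cold-region image, 0 = normal image).
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1, 1, 1, 0]   # placeholder ground-truth labels
y_pred = [0, 0, 1, 1, 1, 1, 0, 0]   # placeholder model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
specificity = tn / (tn + fp)   # correctly identified normal images
sensitivity = tp / (tp + fn)   # correctly identified cold-region images
print(accuracy, specificity, sensitivity)
```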

Preliminary study of artificial intelligence-based fuel-rod pattern analysis of low-quality tomographic image of fuel assembly

  • Seong, Saerom;Choi, Sehwan;Ahn, Jae Joon;Choi, Hyung-joo;Chung, Yong Hyun;You, Sei Hwan;Yeom, Yeon Soo;Choi, Hyun Joon;Min, Chul Hee
    • Nuclear Engineering and Technology
    • /
    • v.54 no.10
    • /
    • pp.3943-3948
    • /
    • 2022
  • Single-photon emission computed tomography is one of the reliable pin-by-pin verification techniques for spent-fuel assemblies. One of the challenges with this technique is to increase the total fuel assembly verification speed while maintaining high verification accuracy. The aim of the present study, therefore, was to develop an artificial intelligence (AI) algorithm-based tomographic image analysis technique for partial-defect verification of fuel assemblies. With the Monte Carlo (MC) simulation technique, a tomographic image dataset consisting of 511 fuel-rod patterns of a 3 × 3 fuel assembly was generated, and with these images, the VGG16, GoogLeNet, and ResNet models were trained. In an evaluation of these models for different training dataset sizes, the ResNet model showed 100% pattern estimation accuracy. Furthermore, across the different tomographic image qualities, all of the models showed almost 100% pattern estimation accuracy, even for low-quality images with visually unrecognizable fuel patterns. This study verified that an AI model can be effectively employed for accurate and fast partial-defect verification of fuel assemblies.
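A minimal sketch of framing the rod-pattern estimation described above as image classification over the 511 candidate patterns of a 3 × 3 assembly; the ResNet-18 backbone and single-channel input handling are assumptions, since the abstract names only the model families.

```python
# Illustrative sketch: a ResNet classifier over 511 possible fuel-rod patterns
# of a 3x3 assembly. ResNet-18 and the single-channel input are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)
# Tomographic slices are treated as single-channel; adapt the first convolution.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# One output per candidate rod pattern of the 3x3 assembly.
model.fc = nn.Linear(model.fc.in_features, 511)

logits = model(torch.randn(2, 1, 224, 224))   # dummy batch of tomographic images
print(logits.shape)                           # torch.Size([2, 511])
```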

Anomaly Detection via Pattern Dictionary Method and Atypicality in Application (패턴사전과 비정형성을 통한 이상치 탐지방법 적용)

  • Sehong Oh;Jongsung Park;Youngsam Yoon
    • Journal of Sensor Science and Technology
    • /
    • v.32 no.6
    • /
    • pp.481-486
    • /
    • 2023
  • Anomaly detection holds paramount significance across diverse fields, encompassing fraud detection, risk mitigation, and sensor evaluation tests. Its pertinence extends notably to the military, particularly within the Warrior Platform, a comprehensive combat equipment system with wearable sensors. Hence, we propose a data-compression-based anomaly detection approach tailored to unlabeled time series and sequence data. This method entailed the construction of two distinctive features, typicality and atypicality, to discern anomalies effectively. The typicality of a test sequence was determined by evaluating the compression efficacy achieved through the pattern dictionary. This dictionary was established based on the frequency of all patterns identified in a training sequence generated for each sensor within Warrior Platform. The resulting typicality served as an anomaly score, facilitating the identification of anomalous data using a predetermined threshold. To improve the performance of the pattern dictionary method, we leveraged atypicality to discern sequences that could undergo compression independently without relying on the pattern dictionary. Consequently, our refined approach integrated both typicality and atypicality, augmenting the effectiveness of the pattern dictionary method. Our proposed method exhibited heightened capability in detecting a spectrum of unpredictable anomalies, fortifying the stability of wearable sensors prevalent in military equipment, including the Army TIGER 4.0 system.
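The pattern-dictionary idea above can be illustrated with a simplified sketch: fixed-length patterns are counted in a training sequence, and a test window is scored by how expensively the dictionary encodes it (approximated here by its average negative log-probability). The pattern length, handling of unseen patterns, and threshold are illustrative assumptions; the paper's exact typicality and atypicality definitions are not reproduced.

```python
# Simplified illustration of a pattern-dictionary anomaly score for a
# discretized sensor sequence. Pattern length and scoring are assumptions.
import math
from collections import Counter

def build_dictionary(train_seq, k=3):
    """Count all length-k patterns observed in the training sequence."""
    return Counter(tuple(train_seq[i:i + k]) for i in range(len(train_seq) - k + 1))

def anomaly_score(test_seq, dictionary, k=3):
    """Average code length (bits per pattern) of the test sequence under the dictionary."""
    total = sum(dictionary.values())
    bits, n = 0.0, 0
    for i in range(len(test_seq) - k + 1):
        p = dictionary.get(tuple(test_seq[i:i + k]), 0.5) / total  # unseen patterns are costly
        bits += -math.log2(p)
        n += 1
    return bits / max(n, 1)

train = [0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1]   # placeholder quantized sensor stream
normal_test = [0, 1, 2, 1, 0, 1]
odd_test = [2, 2, 0, 0, 2, 2]

d = build_dictionary(train)
print(anomaly_score(normal_test, d), anomaly_score(odd_test, d))
# A score above a chosen threshold would flag the window as anomalous.
```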

Design of Block Codes for Distributed Learning in VR/AR Transmission

  • Seo-Hee Hwang;Si-Yeon Pak;Jin-Ho Chung;Daehwan Kim;Yongwan Kim
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.4
    • /
    • pp.300-305
    • /
    • 2023
  • Audience reactions in response to remote virtual performances must be compressed before being transmitted to the server. The server, which aggregates these data for group insights, requires a distribution code for the transfer. Recently, distributed learning algorithms such as federated learning have gained attention as alternatives that satisfy both the information security and efficiency requirements. In distributed learning, no individual user has access to complete information, and the objective is to achieve a learning effect similar to that achieved with the entire information. It is therefore important to distribute interdependent information among users and subsequently aggregate this information following training. In this paper, we present a new extension technique for minimal code that allows a new minimal code with a different length and Hamming weight to be generated through the product of any vector and a given minimal code. Thus, the proposed technique can generate minimal codes with previously unknown parameters. We also present a scenario wherein these combined methods can be applied.