• Title/Summary/Keyword: artificial intelligence-based models


Predicting Oxynitrification layer using AI-based Varying Coefficient Regression model (AI 기반의 Varying Coefficient Regression 모델을 이용한 산질화층 예측)

  • Hye Jung Park;Joo Yong Shim;Kyong Jun An;Chang Ha Hwang;Je Hyun Han
    • Journal of the Korean Society for Heat Treatment / v.36 no.6 / pp.374-381 / 2023
  • This study develops and evaluates a deep learning model for predicting oxide and nitride layers based on plasma process data. We introduce a novel deep learning-based Varying Coefficient Regressor (VCR) by adapting the classical VCR, which previously relied on a prespecified function, and employ this model to forecast the oxide and nitride layers formed in the plasma process. In comparative experiments, the proposed VCR-based model exhibits superior performance compared to Long Short-Term Memory, Random Forest, and other methods, showcasing its strength in predicting time-series data. This study indicates the potential for advancing prediction models through deep learning in the domain of plasma processing and highlights its application prospects in industrial settings.
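The varying coefficient idea behind this model can be illustrated with a toy sketch. This is not the authors' deep-learning architecture; it only shows, under illustrative assumptions (a single predictor, piecewise-constant coefficients over three time windows instead of a learned network), how a regression coefficient that drifts over time can be estimated locally:

```python
import random

random.seed(0)

# Toy varying coefficient regression: y(t) = beta(t) * x(t) + noise,
# where the coefficient beta drifts over time. The paper's model learns
# beta(t) with a deep network; here we approximate it as piecewise
# constant over three time windows (an illustrative simplification).
T = 3000
true_beta = lambda t: 1.0 + 2.0 * t / T   # ground-truth drifting coefficient

data = []
for t in range(T):
    x = random.uniform(-1.0, 1.0)
    y = true_beta(t) * x + random.gauss(0.0, 0.05)
    data.append((t, x, y))

def fit_window(window):
    """Closed-form least-squares slope for y = beta * x (no intercept)."""
    sxy = sum(x * y for _, x, y in window)
    sxx = sum(x * x for _, x, _ in window)
    return sxy / sxx

n_bins = 3
estimates = [fit_window(data[b * T // n_bins:(b + 1) * T // n_bins])
             for b in range(n_bins)]
print(estimates)  # close to the window-averaged betas: ~1.33, ~2.0, ~2.67
```

The recovered slopes increase across the windows, tracking the drifting coefficient; a deep VCR replaces the crude windowing with a network that outputs beta as a smooth function of time.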

Exploring Narrative Intelligence in AI: Implications for the Evolution of Homo narrans (인공지능의 서사 지능 탐구 : 새로운 서사 생태계와 호모 나랜스의 진화)

  • Hochang Kwon
    • Trans- / v.16 / pp.107-133 / 2024
  • Narratives are fundamental to human cognition and social culture, serving as the primary means by which individuals and societies construct meaning, share experiences, and convey cultural and moral values. The field of artificial intelligence, which seeks to mimic human thought and behavior, has long studied story generation and story understanding, and today's Large Language Models are demonstrating remarkable narrative capabilities based on advances in natural language processing. This situation gives rise to a variety of changes and new issues, but a comprehensive discussion of them is hard to find. This paper aims to provide a holistic view of the current state and future changes by exploring the intersections and interactions of human and AI narrative intelligence. It begins with a review of multidisciplinary research on the intrinsic relationship between humans and narrative, represented by the term Homo narrans, and then provides a historical overview of how narrative has been studied in the field of AI. It then explores the possibilities and limitations of narrative intelligence as revealed by today's Large Language Models, and presents three philosophical challenges for understanding the implications of AI with narrative intelligence.

A SE Approach to Predict the Peak Cladding Temperature using Artificial Neural Network

  • ALAtawneh, Osama Sharif;Diab, Aya
    • Journal of the Korean Society of Systems Engineering / v.16 no.2 / pp.67-77 / 2020
  • Traditionally, nuclear thermal-hydraulics and nuclear safety have relied on numerical simulations to predict the system response of a nuclear power plant under either normal operation or accident conditions. However, this approach can be rather time consuming, particularly for design and optimization problems. To expedite the decision-making process, data-driven models can be used to deduce the statistical relationships between inputs and outputs rather than solving physics-based models. Compared to the traditional approach, data-driven models can provide a fast and cost-effective framework for predicting the behavior of highly complex and non-linear systems where great computational effort would otherwise be required. The objective of this work is to develop an AI algorithm to predict the peak fuel cladding temperature as a metric for the successful implementation of FLEX strategies under an extended station blackout. To achieve this, the model is trained on a pre-existing database created using the thermal-hydraulic analysis code MARS-KS. In the development stage, the model hyper-parameters are tuned and optimized using the Talos tool.
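The surrogate-modeling workflow described here (query an expensive simulation to build a database, then fit a fast data-driven model) can be sketched as follows. The stand-in "simulator" formula, its two inputs, and the linear-plus-interaction feature basis are illustrative assumptions; the paper itself uses a MARS-KS database and a tuned neural network rather than this least-squares fit:

```python
import random

random.seed(1)

def expensive_simulation(power, flow):
    """Stand-in for a costly thermal-hydraulic run (hypothetical formula):
    returns a peak cladding temperature in kelvin."""
    return 600.0 + 450.0 * power - 300.0 * flow + 20.0 * power * flow

def features(power, flow):
    # intercept, main effects, and one interaction term
    return [1.0, power, flow, power * flow]

# 1) Build a small training database by querying the "simulator".
inputs = [(random.random(), random.random()) for _ in range(200)]
X = [features(p, f) for p, f in inputs]
y = [expensive_simulation(p, f) for p, f in inputs]

# 2) Fit the surrogate by solving the normal equations X^T X w = X^T y.
def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

n = len(X[0])
XtX = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
w = solve(XtX, Xty)

# 3) The surrogate now answers "what-if" queries without the simulator.
surrogate = lambda p, f: sum(wi * xi for wi, xi in zip(w, features(p, f)))
print(round(surrogate(0.8, 0.3), 1))
```

In practice the simulator is far too expensive to call inside a design loop, which is exactly why the fitted surrogate, however crude, earns its keep.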

Predicting Steel Structure Product Weight Ratios using Large Language Model-Based Neural Networks (대형 언어 모델 기반 신경망을 활용한 강구조물 부재 중량비 예측)

  • Jong-Hyeok Park;Sang-Hyun Yoo;Soo-Hee Han;Kyeong-Jun Kim
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.119-126 / 2024
  • In building information modeling (BIM), it is difficult to train an artificial intelligence (AI) model due to the lack of sufficient data about individual projects in an architecture firm. In this paper, we present a methodology to correctly train an AI neural network model based on a large language model (LLM) to predict steel structure product weight ratios in BIM. The proposed method, with the aid of the LLM, can overcome the inherent problem of limited data availability in BIM and handle a combination of natural language and numerical data. The experimental results showed that the proposed method achieves significantly higher accuracy than methods based on a smaller language model. The results confirm the potential for effectively applying large language models in BIM, raising expectations of preventing building accidents and managing construction costs efficiently.
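One common way to feed a mix of natural language and numeric data to a single model, as this task requires, is to embed the text and concatenate the numeric features. The hashing-based text vector below is a deliberately crude stand-in for an LLM embedding; the tokenizer, dimension, and the example member record are illustrative assumptions, not the paper's pipeline:

```python
import zlib

def encode(description, numeric_feats, dim=8):
    """Concatenate a hashed bag-of-words text vector with numeric features.
    A real system would replace the hashing trick with LLM embeddings."""
    vec = [0.0] * dim
    for token in description.lower().split():
        # deterministic hash so the same token always lands in the same bucket
        vec[zlib.crc32(token.encode("utf-8")) % dim] += 1.0
    return vec + [float(v) for v in numeric_feats]

# Hypothetical steel-member record: free-text description + numeric attributes.
x = encode("welded H-beam column, SM355 grade", [12.0, 0.35])
print(len(x))  # 8 text buckets + 2 numeric features = 10
```

The resulting fixed-length vector can then be passed to any regressor; the point is only that text and numbers end up in one feature space.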

A Study on Artificial Intelligence Models for Predicting the Causes of Chemical Accidents Using Chemical Accident Status and Case Data (화학물질 사고 현황 및 사례 데이터를 이용한 인공지능 사고 원인 예측 모델에 관한 연구)

  • KyungHyun Lee;RackJune Baek;Hyeseong Jung;WooSu Kim;HeeJeong Choi
    • The Journal of the Convergence on Culture Technology / v.10 no.5 / pp.725-733 / 2024
  • This study aims to develop an artificial intelligence-based model for predicting the causes of chemical accidents, utilizing data on 865 chemical accident situations and cases provided by the Chemical Safety Agency under the Ministry of Environment from January 2014 to January 2024. The research involved training the data with six artificial intelligence models and comparing evaluation metrics such as accuracy, precision, recall, and F1 score. Based on 356 chemical accident cases from 2020 to 2024, additional training data sets were built using the chemical accident cause investigations and similar-accident prevention measures published by the Chemical Safety Agency from 2021 to 2022. Through this process, the Multi-Layer Perceptron (MLP) model showed an accuracy of 0.6590 and a precision of 0.6821. The Logistic Regression model improved its accuracy from 0.6647 to 0.7778 and its precision from 0.6790 to 0.7992, confirming that the Logistic Regression model is the most effective for predicting the causes of chemical accidents.
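The evaluation metrics compared in the study are quick to compute from prediction counts. A minimal sketch for one positive class (the labels below are hypothetical; a multi-class comparison would average these per-class scores):

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy, precision, recall, and F1 for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical accident-cause labels: "H" = human error, "F" = facility defect.
y_true = ["H", "H", "F", "F", "H", "F"]
y_pred = ["H", "F", "F", "F", "H", "H"]
print(classification_metrics(y_true, y_pred, positive="H"))
```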

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has been accelerating with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Due to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved more technological progress than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a lot of effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's unifying aspects.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of knowledge accuracy, because it generates knowledge from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. In order to demonstrate the appropriateness of this method, we explain a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for extracting triples, and selecting values and transforming them into RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into the form of triples. In order to train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process.
Through the proposed process, it is possible to utilize structured knowledge by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required from experts to construct instances according to the ontology schema.
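The BIO-tagging step used to build the training data can be sketched as follows: given a sentence and an attribute value taken from an infobox, the value span is marked B-/I- and everything else O. The whitespace tokenizer and the English example sentence are illustrative assumptions; the paper applies this to Korean Wikipedia dumps:

```python
def bio_tag(tokens, value_tokens, label):
    """Mark the first occurrence of value_tokens with B-/I- tags, rest O."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = "B-" + label
            for j in range(i + 1, i + n):
                tags[j] = "I-" + label
            break
    return tags

sentence = "Marie Curie was born in Warsaw in 1867 .".split()
tags = bio_tag(sentence, ["Warsaw"], "birthPlace")
print(list(zip(sentence, tags)))
# "Warsaw" receives B-birthPlace; a CRF or Bi-LSTM-CRF is then trained
# on such (token, tag) sequences to recover values from unseen sentences.
```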

Seismic retrofit of steel structures with re-centering friction devices using genetic algorithm and artificial neural network

  • Mohamed Noureldin;Masoum M. Gharagoz;Jinkoo Kim
    • Steel and Composite Structures / v.47 no.2 / pp.167-184 / 2023
  • In this study, a new recentering friction device (RFD) to retrofit steel moment frame structures is introduced. The device provides both self-centering and energy dissipation capabilities for the retrofitted structure. A hybrid performance-based seismic design procedure considering multiple limit states is proposed for designing the device and the retrofitted structure. The design of the RFD is achieved by modifying the conventional performance-based seismic design (PBSD) procedure using computational intelligence techniques, namely, genetic algorithm (GA) and artificial neural network (ANN). Numerous nonlinear time-history response analyses (NLTHAs) are conducted on multi-degree-of-freedom (MDOF) and single-degree-of-freedom (SDOF) systems to train and validate the ANN to achieve high prediction accuracy. The proposed procedure and the new RFD are assessed using 2D and 3D models, both globally and locally. Globally, the effectiveness of the proposed device is assessed by conducting NLTHAs to check the maximum inter-story drift ratio (MIDR). Seismic fragilities of the retrofitted models are investigated by constructing fragility curves of the models for different limit states. After that, the seismic life cycle cost (LCC) is estimated for the models with and without the retrofit. Locally, the stress concentration at the contact point of the RFD and the existing steel frame is checked to be within acceptable limits using finite element modeling (FEM). The RFD proved effective in minimizing the MIDR and eliminating residual drift for the low- to mid-rise steel frame models tested. GA and ANN proved to be crucial integrated components of the modified PBSD for achieving the required seismic performance at different limit states with reasonable computational cost. The ANN showed very high prediction accuracy for the transformation between MDOF and SDOF systems. The proposed retrofit also proved efficient in improving the seismic fragility and reducing the LCC significantly compared to the un-retrofitted models.
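The GA side of such a design loop can be illustrated with a minimal sketch. The scalar objective below is a hypothetical stand-in penalty (e.g., a drift-based cost evaluated by an ANN surrogate), not the paper's multi-limit-state fitness, and the single design variable, population size, and operators are illustrative assumptions:

```python
import random

random.seed(42)

def penalty(x):
    """Stand-in design objective to minimize (hypothetical)."""
    return (x - 3.2) ** 2

# Initial population of candidate device parameters.
pop = [random.uniform(0.0, 10.0) for _ in range(20)]

for gen in range(100):
    pop.sort(key=penalty)
    parents = pop[:10]                       # selection: keep the fittest half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)                # crossover: blend two parents
        child += random.gauss(0.0, 0.1 * 0.95 ** gen)  # decaying mutation
        children.append(child)
    pop = parents + children                 # elitism: parents survive

best = min(pop, key=penalty)
print(round(best, 2))  # converges near the optimum at x = 3.2
```

Replacing `penalty` with an ANN-predicted seismic response is what keeps the search affordable: each fitness evaluation costs a forward pass instead of a full nonlinear time-history analysis.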

Steel Surface Defect Detection using the RetinaNet Detection Model

  • Sharma, Mansi;Lim, Jong-Tae;Chae, Yi-Geun
    • International Journal of Internet, Broadcasting and Communication / v.14 no.2 / pp.136-146 / 2022
  • Surface defects degrade the quality of steel materials. To limit these defects, we advocate the one-stage detector RetinaNet from among the diverse detection algorithms in deep learning. The RetinaNet model supports several backbones; we adopted two of them, ResNet50 and VGG19. To validate our model, we compared and analyzed several models with simulations based on individual steel defect classes: traditional models, one-stage models such as YOLO and SSD, and two-stage models such as Faster-RCNN, EDDN, and Xception. We also compared the detection time of the one-stage and two-stage models. Comparative analysis shows that the proposed model achieves excellent results on the Northeastern University surface defect detection dataset. In future work, we would like to test different backbones to check the model's efficiency for the real world, increase the datasets through augmentation, and focus on improving its current limitations.
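RetinaNet's defining component is the focal loss, which down-weights the many easy background anchors so training focuses on hard defect examples. A minimal sketch for a single binary prediction, with α and γ set to the commonly used defaults (the defect/background framing here is an illustrative mapping onto this task):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t)
    for one prediction p = P(defect) against label y in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.9, 1)   # confident, correct prediction
hard = focal_loss(0.1, 1)   # confident, wrong prediction
print(easy, hard)  # the easy example contributes far less loss
```

With γ = 0 the expression reduces to α-weighted cross-entropy; raising γ shrinks the contribution of well-classified anchors, which is what lets a one-stage detector train on dense anchor grids.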

Multi-Scale Dilation Convolution Feature Fusion (MsDC-FF) Technique for CNN-Based Black Ice Detection

  • Sun-Kyoung KANG
    • Korean Journal of Artificial Intelligence / v.11 no.3 / pp.17-22 / 2023
  • In this paper, we propose a black ice detection system using Convolutional Neural Networks (CNNs). Black ice poses a serious threat to road safety, particularly during winter conditions. To address this problem, we introduce a CNN-based encoder-decoder architecture specifically designed for real-time black ice detection using thermal images. To train the network, we establish a specialized experimental platform to capture thermal images of various black ice formations on diverse road surfaces, including cement and asphalt. This enables us to curate a comprehensive dataset of thermal road black ice images for training and evaluation purposes. Additionally, in order to enhance the accuracy of black ice detection, we propose a multi-scale dilation convolution feature fusion (MsDC-FF) technique. The proposed technique dynamically adjusts the dilation ratios based on the input image's resolution, improving the network's ability to capture fine-grained details. Experimental results demonstrate the superior performance of our proposed network model compared to conventional image segmentation models: our model achieved an mIoU of 95.93%, while LinkNet achieved an mIoU of 95.39%. We therefore conclude that the proposed model offers a promising solution for real-time black ice detection, enhancing road safety during winter conditions.
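The multi-scale dilation idea can be sketched in one dimension: the same small kernel is applied at several dilation rates, widening the receptive field without adding weights, and the branch outputs are fused element-wise. The kernel, the rates (1, 2, 4), and sum-fusion below are illustrative assumptions, not the paper's exact MsDC-FF configuration:

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1-D convolution where kernel taps are `dilation` apart."""
    half = (len(kernel) - 1) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + (k - half) * dilation
            if 0 <= j < len(signal):   # zero padding outside the signal
                acc += w * signal[j]
        out.append(acc)
    return out

signal = [0, 0, 1, 0, 0, 0, 0, 0]      # a single "hot" pixel
kernel = [1, 1, 1]                     # shared 3-tap kernel
branches = [dilated_conv1d(signal, kernel, d) for d in (1, 2, 4)]
fused = [sum(vals) for vals in zip(*branches)]   # element-wise sum fusion
print(fused)  # larger dilations spread the response further from index 2
```

Each branch sees the same weights but a different neighborhood spacing, so the fused map mixes fine detail (dilation 1) with wider context (dilations 2 and 4).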

Explainable radionuclide identification algorithm based on the convolutional neural network and class activation mapping

  • Yu Wang;Qingxu Yao;Quanhu Zhang;He Zhang;Yunfeng Lu;Qimeng Fan;Nan Jiang;Wangtao Yu
    • Nuclear Engineering and Technology / v.54 no.12 / pp.4684-4692 / 2022
  • Radionuclide identification is an important part of the nuclear material identification system. The development of artificial intelligence and machine learning has made nuclide identification rapid and automatic. However, many methods directly use existing deep learning models to analyze the gamma-ray spectrum, which lacks interpretability for researchers. This study proposes an explainable radionuclide identification algorithm based on a convolutional neural network and class activation mapping. The method shows the area of interest of the neural network on the gamma-ray spectrum by generating a class activation map. We analyzed the class activation maps of gamma-ray spectra of different nuclide types, gross counts, and signal-to-noise ratios. The results show that the convolutional neural network attempts to learn the relationship between the input gamma-ray spectrum and the nuclide type, and can identify the nuclide based on the photoelectric peak and Compton edge. Furthermore, the results explain why the neural network can identify gamma-ray spectra with low counts and low signal-to-noise ratios. These findings improve researchers' confidence in the ability of neural networks to identify nuclides and promote the application of artificial intelligence methods in the field of nuclide identification.
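The class activation map itself is a simple computation: the final convolutional feature maps are summed, weighted by the fully-connected weights of the predicted class, yielding a salience profile over the spectrum. The toy 1-D feature maps and weights below are illustrative assumptions standing in for a trained network's activations:

```python
def class_activation_map(feature_maps, class_weights):
    """Weighted sum of 1-D feature maps -> salience per spectrum channel."""
    length = len(feature_maps[0])
    return [sum(w * fmap[i] for w, fmap in zip(class_weights, feature_maps))
            for i in range(length)]

# Two feature maps over six spectrum bins, plus the predicted class's
# fully-connected weights (all values hypothetical).
maps = [[0, 1, 3, 1, 0, 0],
        [0, 0, 2, 4, 1, 0]]
weights = [0.5, 1.0]

cam = class_activation_map(maps, weights)
peak_bin = max(range(len(cam)), key=cam.__getitem__)
print(cam, peak_bin)  # the peak bin marks where the network "looked",
                      # e.g. a photoelectric-peak region of the spectrum
```

Overlaying this profile on the input spectrum is what lets the authors check that the network attends to physically meaningful features such as photopeaks and the Compton edge.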