• Title/Summary/Keyword: Neural networks modeling


Development of Artificial Intelligence Joint Model for Hybrid Finite Element Analysis (하이브리드 유한요소해석을 위한 인공지능 조인트 모델 개발)

  • Jang, Kyung Suk;Lim, Hyoung Jun;Hwang, Ji Hye;Shin, Jaeyoon;Yun, Gun Jin
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.10 / pp.773-782 / 2020
  • The development of joint FE models for deep learning neural network (DLNN)-based hybrid FEA is presented. Material models of the bolts and bearings in the front axle of a tractor, which show complex behavior induced by various tightening conditions, were replaced with DLNN models. Bolts are modeled as one-dimensional Timoshenko beam elements with six degrees of freedom, and bearings as three-dimensional solid elements. Stress-strain data were extracted from all elements after finite element analyses under various load conditions, and the DLNNs for the bolts and bearings were trained with TensorFlow. The DLNN-based joint models were implemented in ABAQUS user subroutines, where the stresses for the next increment are updated and the algorithmic tangent stiffness matrix is calculated. Generalization of the trained DLNN in the FE model was verified by subjecting it to a new loading condition. Finally, the DLNN-based FEA of the tractor front axle was conducted, and its feasibility was verified by comparison with the results of a static structural experiment on the actual tractor.
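
The stress-update role that a trained network plays inside such a user subroutine can be sketched in miniature. The toy feed-forward surrogate below maps a strain increment to a stress increment; all weights and values are hypothetical (a trained DLNN would supply them), and a real UMAT-style subroutine would also return the algorithmic tangent stiffness:

```python
import math

# Hypothetical weights for a tiny 1-2-1 feed-forward surrogate that maps a
# strain increment to a stress increment (a trained DLNN would supply these).
W1, b1 = [[0.8], [-0.5]], [0.1, 0.2]   # hidden layer: 2 tanh units
W2, b2 = [1.5, -0.7], 0.0              # linear output layer: 1 unit

def surrogate_dstress(dstrain):
    """Forward pass of the surrogate: strain increment -> stress increment."""
    hidden = [math.tanh(W1[i][0] * dstrain + b1[i]) for i in range(2)]
    return sum(W2[i] * hidden[i] for i in range(2)) + b2

def stress_update(stress, dstrain):
    """Explicit stress update of the kind performed inside an FE user subroutine."""
    return stress + surrogate_dstress(dstrain)

# One increment of a (hypothetical) loading history.
s1 = stress_update(0.0, 0.01)
```

In the actual workflow the forward pass runs once per integration point and increment, which is why the surrogate must stay small and fast.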

Neural Networks-Genetic Algorithm Model for Modeling of Nonlinear Evaporation and Evapotranspiration Time Series. 2. Optimal Model Construction by Uncertainty Analysis (비선형 증발량 및 증발산량 시계열의 모형화를 위한 신경망-유전자 알고리즘 모형 2. 불확실성 분석에 의한 최적모형의 구축)

  • Kim, Sung-Won;Kim, Hung-Soo
    • Journal of Korea Water Resources Association / v.40 no.1 s.174 / pp.89-99 / 2007
  • Uncertainty analysis is used to eliminate climatic variables from the input nodes and to construct an optimal model from COMBINE-GRNNM-GA (Type-1), which was developed in the first paper of this series (2007). The input variable with the lowest smoothing factor during training is eliminated from the original COMBINE-GRNNM-GA (Type-1), and the modified model is retrained to find the new lowest smoothing factor among the remaining climatic variables. The input variable with the lowest smoothing factor is interpreted as the climatic variable least useful to the model output. Furthermore, sensitive and insensitive climatic variables are identified from the uncertainty analysis of the input nodes. The optimal COMBINE-GRNNM-GA (Type-1) is developed to estimate the PE that is missing or ungauged and the $ET_r$ that is not measured, with the least cost and effort. Finally, PE and $ET_r$ maps can be constructed using the optimal COMBINE-GRNNM-GA (Type-1) to provide reference data for drought analysis and for irrigation and drainage network system analysis in South Korea.
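
The core mechanism, a generalized regression neural network (GRNN) with a separate smoothing factor per input, plus elimination of the least useful input, can be sketched as follows. The variable names and smoothing-factor values are hypothetical; in the paper the factors are tuned by a genetic algorithm, and the lowest factor marks the variable to drop:

```python
import math

def grnn_predict(x, train_X, train_y, sigmas):
    """GRNN (Nadaraya-Watson) prediction with one smoothing factor per input."""
    num = den = 0.0
    for xi, yi in zip(train_X, train_y):
        d2 = sum(((a - b) / s) ** 2 for a, b, s in zip(x, xi, sigmas))
        w = math.exp(-0.5 * d2)     # kernel weight of this training pattern
        num += w * yi
        den += w
    return num / den

def eliminate_least_useful(names, sigmas):
    """Following the paper's rule: drop the input whose trained smoothing
    factor is lowest (interpreted as the least useful climatic variable)."""
    k = sigmas.index(min(sigmas))
    return [n for i, n in enumerate(names) if i != k]

# Hypothetical climatic inputs and trained smoothing factors.
names  = ["temperature", "humidity", "wind", "sunshine"]
sigmas = [0.9, 0.2, 1.1, 0.7]
kept = eliminate_least_useful(names, sigmas)
```

After an elimination step the reduced model is retrained and the procedure repeats, which is the loop the abstract describes.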

Illegal Cash Accommodation Detection Modeling Using Ensemble Size Reduction (신용카드 불법현금융통 적발을 위한 축소된 앙상블 모형)

  • Lee, Hwa-Kyung;Han, Sang-Bum;Jhee, Won-Chul
    • Journal of Intelligence and Information Systems / v.16 no.1 / pp.93-116 / 2010
  • An ensemble approach is applied to the detection modeling of illegal cash accommodation (ICA), a well-known type of fraudulent credit card usage in Far East nations that has not been addressed in the academic literature. The performance of a fraud detection model (FDM) suffers from the imbalanced data problem, which can be remedied to some extent by using an ensemble of many classifiers. It is generally accepted that ensembles of classifiers produce better accuracy than a single classifier, provided there is diversity in the ensemble. Furthermore, recent research reveals that it may be better to ensemble some selected classifiers instead of all the classifiers at hand. For effective detection of ICA, we adopt an ensemble size reduction technique that prunes the ensemble of all classifiers using accuracy and diversity measures; diversity manifests itself as disagreement or ambiguity among the members. The data imbalance intrinsic to FDM affects our approach to ICA detection in two ways. First, we suggest a training procedure with over-sampling methods to obtain diverse training data sets. Second, we use variants of the accuracy and diversity measures that focus on the fraud class. The diversity measure is recalculated dynamically during pruning by Forward Addition and Backward Elimination. In our experiments, neural networks, decision trees, and logistic regressions are the base models serving as ensemble members, and the performance of homogeneous ensembles is compared with that of heterogeneous ensembles. The experimental results show that the reduced-size ensemble is, on average over the data sets tested, as accurate as the non-pruned version, which provides benefits in terms of application efficiency and reduced ensemble complexity.
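
The Forward Addition style of pruning can be sketched with a toy disagreement-based diversity measure. The member predictions and the exact selection criterion below are hypothetical stand-ins (the paper uses fraud-class-focused variants of accuracy and diversity):

```python
def accuracy(pred, y):
    return sum(p == t for p, t in zip(pred, y)) / len(y)

def disagreement(p1, p2):
    """Pairwise diversity: fraction of cases on which two members disagree."""
    return sum(a != b for a, b in zip(p1, p2)) / len(p1)

def forward_addition(preds, y, size):
    """Greedy Forward Addition: start from the most accurate member, then keep
    adding the member maximizing accuracy + mean disagreement with the chosen
    set, recomputing the diversity term at every step."""
    chosen = [max(range(len(preds)), key=lambda i: accuracy(preds[i], y))]
    while len(chosen) < size:
        rest = [i for i in range(len(preds)) if i not in chosen]
        best = max(rest, key=lambda i: accuracy(preds[i], y) +
                   sum(disagreement(preds[i], preds[j]) for j in chosen) / len(chosen))
        chosen.append(best)
    return chosen

# Toy labels and member predictions (hypothetical).
y = [1, 1, 0, 0, 1, 0]
preds = [[1, 1, 0, 0, 1, 0],   # member A: perfect on this toy sample
         [1, 1, 0, 0, 0, 0],   # member B
         [0, 1, 0, 0, 1, 0]]   # member C
chosen = forward_addition(preds, y, size=2)
```

Backward Elimination is the mirror image: start from the full ensemble and repeatedly remove the member whose removal hurts the combined criterion least.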

Multi-FNN Identification by Means of HCM Clustering and Its Optimization Using Genetic Algorithms (HCM 클러스터링에 의한 다중 퍼지-뉴럴 네트워크 동정과 유전자 알고리즘을 이용한 이의 최적화)

  • Oh, Sung-Kwun;Park, Ho-Sung
    • Journal of the Korean Institute of Intelligent Systems / v.10 no.5 / pp.487-496 / 2000
  • In this paper, a Multi-FNN (Fuzzy-Neural Networks) model is identified and optimized using the HCM (Hard C-Means) clustering method and genetic algorithms. The proposed Multi-FNN is based on Yamakawa's FNN and uses simplified inference as the fuzzy inference method and the error back-propagation algorithm as the learning rule. We use HCM clustering and genetic algorithms (GAs) to identify both the structure and the parameters of the Multi-FNN model. Here, the HCM clustering method, carried out as preprocessing of the process data for system modeling, is utilized to determine the structure of the Multi-FNN according to divisions of the input-output space of the I/O process data. The parameters of the Multi-FNN model, such as the apexes of the membership functions, learning rates, and momentum coefficients, are adjusted using genetic algorithms. An aggregate performance index with a weighting factor is used to achieve a sound balance between the approximation and generalization abilities of the model; it stands for an aggregate objective function whose weighting factor accounts for the mutual balance and dependency between approximation and predictive abilities. By selecting and adjusting the weighting factor of this aggregate objective function according to the number of data and the degree of nonlinearity, we show that it is feasible and effective to design an optimal Multi-FNN model. To evaluate the performance of the proposed model, we use the gas furnace time series data and numerical data from a nonlinear function.
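
The two ingredients named above, HCM clustering to partition the input-output space and a weighted aggregate performance index, are simple enough to sketch. The data values and the weighting factor below are hypothetical:

```python
def hcm(points, c, iters=20):
    """Hard C-Means clustering (crisp k-means): partitions 1-D data into c
    clusters; in the paper the partitions define the Multi-FNN structure."""
    centers = list(points[:c])                 # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(c)]
        for p in points:
            k = min(range(c), key=lambda i: abs(p - centers[i]))
            clusters[k].append(p)              # crisp (hard) assignment
        centers = [sum(cl) / len(cl) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def aggregate_pi(pi_train, pi_test, theta):
    """Aggregate objective: weighting factor theta trades approximation
    (training error) against generalization (testing error)."""
    return theta * pi_train + (1 - theta) * pi_test

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]        # hypothetical process data
centers, clusters = hcm(data, 2)
```

Sweeping `theta` and re-optimizing is how a sound balance between the two abilities is found in practice.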


Modeling the Distribution Demand Estimation for Urban Rail Transit (퍼지제어를 이용한 도시철도 분포수요 예측모형 구축)

  • Kim, Dae-Ung;Park, Cheol-Gu;Choe, Han-Gyu
    • Journal of Korean Society of Transportation / v.23 no.2 / pp.25-36 / 2005
  • In this study, we suggest a new approach to forecasting the distribution demand of urban rail transit using fuzzy control, with the intent of reflecting the irregularity and varied functional relationships between trip length and distribution demand. To establish and test the fuzzy control model, the actual trip volumes (production, attraction, and distribution volumes) and trip lengths (spatial distance between departure and arrival stations) of Daegu subway line 1 were used. Using these data, we first established a fuzzy control model, and its estimation accuracy was examined and compared with that of a generalized gravity model. The results showed that the fuzzy control model was superior to the gravity model in estimation accuracy. We therefore found that fuzzy control can be applied as an effective method for predicting the distribution demand of urban rail transit. Finally, to increase the estimation precision of the model, we expect future studies to define the membership functions and set up the fuzzy rules using neural networks.
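
A minimal sketch of fuzzy inference from trip length to demand, in the spirit of the model above: triangular membership functions fuzzify the trip length, and a weighted average defuzzifies the rule outputs. The rule base, breakpoints, and demand levels are entirely hypothetical:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_demand(trip_km):
    """Zero-order Sugeno-style inference over a hypothetical rule base:
    short trips -> high demand, medium -> medium, long -> low."""
    rules = [
        (tri(trip_km, -5.0, 0.0, 10.0),  900.0),   # SHORT  -> HIGH demand
        (tri(trip_km,  5.0, 12.0, 20.0), 400.0),   # MEDIUM -> MEDIUM demand
        (tri(trip_km, 15.0, 25.0, 40.0), 100.0),   # LONG   -> LOW demand
    ]
    num = sum(w * out for w, out in rules)         # weighted rule outputs
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

The abstract's closing suggestion, learning the membership functions and rules with neural networks, would replace the hand-picked breakpoints above with trained parameters.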

An Investigation on Digital Humanities Research Trend by Analyzing the Papers of Digital Humanities Conferences (디지털 인문학 연구 동향 분석 - Digital Humanities 학술대회 논문을 중심으로 -)

  • Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science / v.55 no.1 / pp.393-413 / 2021
  • Digital humanities, which creates new and innovative knowledge by combining digital information technology with humanities research problems, can be seen as a representative multidisciplinary field of study. To investigate the intellectual structure of the digital humanities field, a network analysis of authors and keyword co-words was performed on a total of 441 papers from the last two years (2019, 2020) of the Digital Humanities Conference. As the results of the author and keyword analyses show, authors from Europe, North America, and East Asia (Japan and China) are notably active. Through the co-author network, 11 disconnected sub-networks are identified, which can be seen as a result of closed co-authoring activities. Through keyword analysis, 16 sub-subject areas are identified: machine learning, pedagogy, metadata, topic modeling, stylometry, cultural heritage, network, digital archive, natural language processing, digital library, Twitter, drama, big data, neural network, virtual reality, and ethics. These results imply that a wide variety of digital information technologies play a major role in the digital humanities. In addition, high-frequency keywords can be classified into humanities-based keywords, digital information technology-based keywords, and convergence keywords; the dynamics of the growth and development of digital humanities can be represented by these combinations of keywords.
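
Both analyses rest on the same construction: build a co-occurrence graph, then find its disconnected sub-networks. A sketch with hypothetical keyword lists:

```python
from collections import defaultdict

def build_cooccurrence(papers):
    """Co-word network: link every pair of keywords that co-occur in a paper."""
    graph = defaultdict(set)
    for keywords in papers:
        for a in keywords:
            for b in keywords:
                if a != b:
                    graph[a].add(b)
    return graph

def components(graph):
    """Disconnected sub-networks, found by iterative depth-first search."""
    seen, comps = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(graph[n] - seen)
        comps.append(comp)
    return comps

# Hypothetical keyword lists from three papers.
papers = [["machine learning", "topic modeling"],
          ["topic modeling", "metadata"],
          ["virtual reality", "ethics"]]
subnets = components(build_cooccurrence(papers))
```

The same machinery applied to author pairs instead of keyword pairs yields the co-author sub-networks the study reports.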

Image-Based Automatic Bridge Component Classification Using Deep Learning (딥러닝을 활용한 이미지 기반 교량 구성요소 자동분류 네트워크 개발)

  • Cho, Munwon;Lee, Jae Hyuk;Ryu, Young-Moo;Park, Jeongjun;Yoon, Hyungchul
    • KSCE Journal of Civil and Environmental Engineering Research / v.41 no.6 / pp.751-760 / 2021
  • Most bridges in Korea are over 20 years old, and many problems linked to their deterioration are being reported. Current practice for bridge inspection depends mainly on expert evaluation, which can be subjective. Recent studies have introduced data-driven methods using building information modeling, which can be more efficient and objective, but these methods require manual procedures that consume time and money. To overcome this, this study developed an image-based automatic bridge component classification network to reduce the time and cost required for converting the visual information of bridges into a digital model. The proposed method comprises two convolutional neural networks: the first network estimates the type of the bridge based on its superstructure, and the second network classifies the bridge components. In a validation test, the proposed system automatically classified the components of 461 bridge images with 96.6 % accuracy. The proposed approach is expected to contribute to current bridge maintenance practice.
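
The two-stage structure, where the first model's output routes the image to the appropriate second-stage classifier, can be sketched with stand-in functions. The stubs below are hypothetical placeholders; the real system uses two trained CNNs:

```python
def two_stage_classify(image, type_net, component_nets):
    """Two-stage pipeline: the first model predicts the bridge type from the
    superstructure, which selects the component classifier to apply."""
    bridge_type = type_net(image)
    return bridge_type, component_nets[bridge_type](image)

# Hypothetical stand-ins for the two CNNs (real ones would be trained models
# taking pixel data, not a feature dict).
type_net = lambda img: "girder" if img["spans"] <= 3 else "cable-stayed"
component_nets = {
    "girder":       lambda img: ["deck", "girder", "pier"],
    "cable-stayed": lambda img: ["deck", "cable", "pylon"],
}
kind, parts = two_stage_classify({"spans": 2}, type_net, component_nets)
```

Splitting the problem this way lets each second-stage network specialize in the component vocabulary of one bridge type.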

Status and Implications of Hydrogeochemical Characterization of Deep Groundwater for Deep Geological Disposal of High-Level Radioactive Wastes in Developed Countries (고준위 방사성 폐기물 지질처분을 위한 해외 선진국의 심부 지하수 환경 연구동향 분석 및 시사점 도출)

  • Jaehoon Choi;Soonyoung Yu;SunJu Park;Junghoon Park;Seong-Taek Yun
    • Economic and Environmental Geology / v.55 no.6 / pp.737-760 / 2022
  • For the geological disposal of high-level radioactive wastes (HLW), an understanding of the deep subsurface environment is essential, based on geological, hydrogeological, geochemical, and geotechnical investigations. Although South Korea plans the geological disposal of HLW, only a few studies have characterized the geochemistry of the deep subsurface environment. To guide the hydrogeochemical research for selecting suitable repository sites, this study reviewed the status and trends in hydrogeochemical characterization of deep groundwater for the deep geological disposal of HLW in developed countries. Examination of the repository site selection processes in eight countries (the USA, Canada, Finland, Sweden, France, Japan, Germany, and Switzerland) showed that the following geochemical parameters are needed for characterizing the deep subsurface environment: major and minor elements and isotopes (e.g., 34S and 18O of SO42-, 13C and 14C of DIC, 2H and 18O of water) of both groundwater and pore water (in aquitards), fracture-filling minerals, organic materials, colloids, and oxidation-reduction indicators (e.g., Eh, Fe2+/Fe3+, H2S/SO42-, NH4+/NO3-). Suitable repositories were selected based on the integrated interpretation of these geochemical data from the deep subsurface. In South Korea, hydrochemical types and evolutionary patterns of deep groundwater have been identified using artificial neural networks (e.g., Self-Organizing Map), and the impact of shallow groundwater mixing has been evaluated with multivariate statistics (e.g., M3 modeling). The relationship between fracture-filling minerals and groundwater chemistry has also been investigated through reaction-path modeling. However, these previous studies in South Korea were conducted without some important geochemical data, including isotopes, oxidation-reduction indicators, and DOC, mainly because such data were unavailable. Therefore, a detailed geochemical investigation is required across the country to collect these hydrochemical data, so that a geological disposal site can be selected based on scientific evidence.
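
The shallow-groundwater mixing assessment mentioned above can be illustrated in its simplest form: a two-end-member mass balance on one conservative tracer (M3 itself works multivariately on principal components, so this is only the underlying idea). The concentration values are hypothetical:

```python
def mixing_fraction(sample, deep, shallow):
    """Fraction of the shallow end-member in a sample, from one conservative
    tracer concentration (two-end-member mass balance)."""
    if deep == shallow:
        raise ValueError("end-member concentrations must differ")
    f = (sample - deep) / (shallow - deep)
    return min(max(f, 0.0), 1.0)   # clamp to the physically meaningful range

# Hypothetical Cl- concentrations (mg/L): deep saline water vs. shallow recharge.
f_shallow = mixing_fraction(sample=400.0, deep=1000.0, shallow=20.0)
```

With more tracers and end-members the same balance becomes a least-squares problem, which is essentially what the multivariate mixing models solve.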

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.157-173 / 2011
  • As Internet use has exploded recently, malicious attacks on and hacking of systems connected to the network occur frequently, meaning that such intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS): security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal conditions but poorly when confronted with new or unknown patterns of network attack. For this reason, several recent studies have adopted various artificial intelligence techniques that can respond proactively to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have intrinsic limitations such as the risk of overfitting, the requirement of a large sample size, and a lack of transparency in the prediction process (the black-box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to consider asymmetric error costs by optimizing the classification threshold. Generally, there are two common forms of error in intrusion detection.
The first is the false-positive error (FPE), in which normal activity is misjudged as an intrusion, possibly resulting in unnecessary countermeasures. The second is the false-negative error (FNE), in which malicious activity is misjudged as normal. Compared to FPE, FNE is more fatal; thus, when considering the total misclassification cost in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. We therefore designed the proposed intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. A conventional SVM cannot be applied directly here because it is designed to generate discrete output (i.e., a class label); to resolve this, we used the revised SVM technique proposed by Platt (2000), which generates probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection, collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, the ANN using NeuroShell 4.0, and the SVM using LIBSVM v2.90, freeware for training SVM classifiers. Empirical results showed that our SVM-based model outperformed all the comparative models in detecting network intrusions in terms of accuracy, and that it reduced the total misclassification cost compared to the ANN-based intrusion detection model. As a result, the proposed intrusion detection model is expected not only to enhance the performance of IDS but also to lead to better management of FNE.
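
Once the classifier emits probabilities (as Platt-scaled SVMs do), the threshold optimization reduces to a one-dimensional search over the cost curve. The probabilities, labels, and cost weights below are hypothetical:

```python
def total_cost(probs, labels, threshold, c_fpe, c_fne):
    """Total misclassification cost at a threshold; label 1 = intrusion."""
    cost = 0.0
    for p, y in zip(probs, labels):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 0:
            cost += c_fpe          # false positive: needless countermeasure
        elif pred == 0 and y == 1:
            cost += c_fne          # false negative: missed intrusion
    return cost

def best_threshold(probs, labels, c_fpe=1.0, c_fne=5.0):
    """Pick the candidate threshold minimizing total cost; the heavier FNE
    weight pushes the optimum downward (flag more traffic as intrusion)."""
    candidates = sorted(set(probs)) + [1.01]   # include 'predict nothing'
    return min(candidates,
               key=lambda t: total_cost(probs, labels, t, c_fpe, c_fne))

# Hypothetical Platt-scaled SVM probability estimates.
probs  = [0.1, 0.3, 0.45, 0.6, 0.9]
labels = [0,   0,   1,    1,   1]
t_star = best_threshold(probs, labels)
```

Only thresholds equal to observed probabilities need checking, since the cost function is piecewise constant between them.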

Multi-task Learning Based Tropical Cyclone Intensity Monitoring and Forecasting through Fusion of Geostationary Satellite Data and Numerical Forecasting Model Output (정지궤도 기상위성 및 수치예보모델 융합을 통한 Multi-task Learning 기반 태풍 강도 실시간 추정 및 예측)

  • Lee, Juhyun;Yoo, Cheolhee;Im, Jungho;Shin, Yeji;Cho, Dongjin
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1037-1051 / 2020
  • Accurate monitoring and forecasting of the intensity of tropical cyclones (TCs) can effectively reduce the overall costs of disaster management. In this study, we propose a multi-task learning (MTL) based deep learning model for real-time TC intensity estimation and for forecasting with lead times of 6-12 hours, based on the fusion of geostationary satellite images and numerical forecast model output. A total of 142 TCs that developed in the Northwest Pacific from 2011 to 2016 were used. Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) data were used to extract typhoon images, and the Climate Forecast System version 2 (CFSv2) provided by the National Centers for Environmental Prediction (NCEP) was employed to extract atmospheric and oceanic forecast data. Two schemes with different input variables to the MTL models were examined: scheme 1 used only satellite-based input data, while scheme 2 used both satellite images and numerical forecast model output. For real-time TC intensity estimation, both schemes exhibited similar performance. For TC intensity forecasting with lead times of 6 and 12 hours, scheme 2 improved performance by 13 % and 16 %, respectively, in terms of root mean squared error (RMSE) compared to scheme 1. Relative root mean squared errors (rRMSE) for most intensity levels were less than 30 %, with lower mean absolute error (MAE) and RMSE for the lower TC intensity levels. In the test on typhoon HALONG in 2014, scheme 1 tended to overestimate the intensity by about 20 kts at the early development stage; scheme 2 reduced this error, overestimating by only about 5 kts. The MTL models reduced the computational cost by about a factor of three compared to single-task models, suggesting the feasibility of rapid production of TC intensity forecasts.
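
The RMSE and rRMSE figures quoted above follow the standard definitions, sketched here with hypothetical forecast and best-track intensity values:

```python
import math

def rmse(pred, obs):
    """Root mean squared error."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def rrmse(pred, obs):
    """Relative RMSE (%): RMSE normalized by the mean observed intensity."""
    return 100.0 * rmse(pred, obs) / (sum(obs) / len(obs))

# Hypothetical 6-hour intensity forecasts vs. best-track values (kts).
pred = [42.0, 55.0, 68.0, 90.0]
obs  = [40.0, 50.0, 70.0, 95.0]
```

Normalizing by the mean observed intensity is what makes errors comparable across intensity levels, which is how the "< 30 %" statement is evaluated.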