• Title/Summary/Keyword: artificial intelligence-based models

Search Results: 575

A Study of Prediction of Daily Water Supply Using ANFIS (ANFIS를 이용한 상수도 1일 급수량 예측에 관한 연구)

  • Rhee, Kyoung-Hoon;Moon, Byoung-Seok;Kang, Il-Hwan
    • Journal of Korea Water Resources Association / v.31 no.6 / pp.821-832 / 1998
  • This study investigates the prediction of daily water supply, which is necessary for the efficient management of a water distribution system. A fuzzy neuron, a form of artificial intelligence, is a neural network into which fuzzy information is input and then processed. In this study, daily water supply was predicted through an adaptive learning method in which a membership function and fuzzy rules were adapted for the prediction task, using data on the amount of water supplied to the city of Kwangju. For variable selection, four analyses of the input data were conducted: correlation analysis, autocorrelation analysis, partial autocorrelation analysis, and cross-correlation analysis. The input variables were (a) the amount of water supplied, (b) the mean temperature, and (c) the population of the supplied area, and these were combined in an integrated model. A model using only the daily water supply data was also built and validated for cases in which the weather forecast from the meteorological office is not reliable. The proposed models cover exceptional cases such as a suspension of water supply. The maximum error rate between the model estimate and the actual measurement was 18.35%, and the average error was lower than 2.36%. The model is expected to support real-time estimation for the operational control of waterworks and water/drain pipes.
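
To make the fuzzy-neural approach concrete, the sketch below is a minimal Sugeno-type fuzzy inference example in Python, assuming Gaussian membership functions, product rule firing, and first-order consequents fitted by least squares (the consequent-parameter half of ANFIS hybrid learning). The inputs, data, and rule base are synthetic stand-ins, not the Kwangju dataset or the authors' model.

```python
# Minimal Sugeno-type fuzzy inference sketch in the spirit of ANFIS
# (hypothetical variable names and synthetic data).
import numpy as np

def gauss(x, c, s):
    """Gaussian membership degree of x for a fuzzy set centered at c."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Toy inputs: yesterday's supply, mean temperature, population index.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + 0.02 * rng.standard_normal(200)

# Two fuzzy sets ("low", "high") per input -> 2**3 = 8 rules.
centers = np.array([0.25, 0.75])
sigma = 0.25
n_sets = len(centers)
rules = [(i, j, k) for i in range(n_sets) for j in range(n_sets) for k in range(n_sets)]

def firing_strengths(X):
    """Layers 1-3: membership degrees, product rule firing, normalization."""
    mu = gauss(X[:, :, None], centers[None, None, :], sigma)   # (N, 3, 2)
    w = np.stack([mu[:, 0, i] * mu[:, 1, j] * mu[:, 2, k] for i, j, k in rules], axis=1)
    return w / w.sum(axis=1, keepdims=True)                    # (N, 8)

# Layers 4-5: first-order consequents fitted by least squares (the LSE half
# of ANFIS hybrid learning; premise parameters are kept fixed here).
W = firing_strengths(X)
Xb = np.hstack([X, np.ones((len(X), 1))])                      # add bias column
A = np.hstack([W[:, [r]] * Xb for r in range(len(rules))])     # (N, 8*4)
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(Xnew):
    Wn = firing_strengths(Xnew)
    Xn = np.hstack([Xnew, np.ones((len(Xnew), 1))])
    An = np.hstack([Wn[:, [r]] * Xn for r in range(len(rules))])
    return An @ theta

print("train RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```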


A Study on the Improvement of Injection Molding Process Using CAE and Decision-tree (CAE와 Decision-tree를 이용한 사출성형 공정개선에 관한 연구)

  • Hwang, Soonhwan;Han, Seong-Ryeol;Lee, Hoojin
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.580-586 / 2021
  • The CAE methodology is a numerical analysis technique. Recently, methodologies that apply artificial intelligence techniques to simulation have been studied. A previous study compared deformation results across injection molding process conditions using a machine learning technique. Although an MLP has excellent prediction performance, it behaves like a black box and offers no explanation of its decision process. In this study, data were generated using Autodesk Moldflow 2018, an injection molding analysis package. Several machine learning models were developed using RapidMiner 9.5, a machine learning platform, and their root mean square errors (RMSE) were compared. The decision tree showed better prediction performance (lower RMSE) than the other machine learning techniques. The number of classification criteria grows with the maximal depth, which determines the size of the decision tree, but so does model complexity. The simulation showed that selecting an intermediate value satisfying the constraint at the changed position yielded a 7.7% improvement over the previous simulation.
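
As an illustration only (the paper's workflow uses RapidMiner 9.5 on Moldflow outputs, which are not reproduced here), the following scikit-learn sketch compares decision-tree RMSE across several maximal depths on synthetic stand-in data.

```python
# Hypothetical stand-in for the RapidMiner workflow: compare decision-tree
# RMSE over several maximal depths with scikit-learn.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Toy stand-ins for Moldflow process parameters (e.g. melt temperature,
# packing pressure, cooling time) and the resulting deformation.
X = rng.uniform(size=(300, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.5 * X[:, 2] + 0.05 * rng.standard_normal(300)

for depth in (2, 4, 6, 8):
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0)
    # Negative MSE is scikit-learn's scoring convention; convert back to RMSE.
    mse = -cross_val_score(tree, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    print(f"max_depth={depth}: RMSE={np.sqrt(mse):.4f}")
```

Increasing the maximal depth lowers training error but, as the abstract notes, also increases model complexity, so an intermediate depth is typically selected.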

Optimization of Multiclass Support Vector Machine using Genetic Algorithm: Application to the Prediction of Corporate Credit Rating (유전자 알고리즘을 이용한 다분류 SVM의 최적화: 기업신용등급 예측에의 응용)

  • Ahn, Hyunchul
    • Information Systems Review / v.16 no.3 / pp.161-177 / 2014
  • Corporate credit rating assessment consists of complicated processes in which various factors describing a company are taken into consideration. Such assessment is known to be very expensive since domain experts must be employed to assess the ratings. As a result, data-driven corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has received considerable attention from researchers and practitioners. In particular, statistical methods such as multiple discriminant analysis (MDA) and multinomial logistic regression analysis (MLOGIT), and AI methods including case-based reasoning (CBR), artificial neural network (ANN), and multiclass support vector machine (MSVM) have been applied to corporate credit rating. Among them, MSVM has recently become popular because of its robustness and high prediction accuracy. In this study, we propose a novel optimized MSVM model and apply it to corporate credit rating prediction in order to enhance accuracy. Our model, named 'GAMSVM (Genetic Algorithm-optimized Multiclass Support Vector Machine),' is designed to simultaneously optimize the kernel parameters and the feature subset selection. Prior studies such as Lorena and de Carvalho (2008) and Chatterjee (2013) show that proper kernel parameters may improve the performance of MSVMs. Also, the results of studies such as Shieh and Yang (2008) and Chatterjee (2013) imply that appropriate feature selection may lead to higher prediction accuracy. Based on these prior studies, we propose to apply GAMSVM to corporate credit rating prediction. As the tool for optimizing the kernel parameters and the feature subset selection, we suggest the genetic algorithm (GA). GA is known as an efficient and effective search method that attempts to simulate the phenomenon of biological evolution. By applying genetic operations such as selection, crossover, and mutation, it is designed to gradually improve the search results. In particular, the mutation operator prevents the GA from falling into local optima, so a globally optimal or near-optimal solution can be found. GA has been widely applied to search for optimal parameters or feature subsets of AI techniques, including MSVM. For these reasons, we also adopt GA as our optimization tool. To empirically validate the usefulness of GAMSVM, we applied it to a real-world case of credit rating in Korea. Our application is bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The experimental dataset was collected from a large credit rating company in South Korea. It contained 39 financial ratios of 1,295 companies in the manufacturing industry, together with their credit ratings. Using various statistical methods, including one-way ANOVA and stepwise MDA, we selected 14 financial ratios as the candidate independent variables. The dependent variable, i.e., the credit rating, was labeled as four classes: 1 (A1); 2 (A2); 3 (A3); 4 (B and C). Eighty percent of the data for each class was used for training, and the remaining 20 percent was used for validation. To compensate for the small sample size, we applied five-fold cross-validation to our dataset. In order to examine the competitiveness of the proposed model, we also experimented with several comparative models, including MDA, MLOGIT, CBR, ANN, and MSVM.
In the case of MSVM, we adopted the One-Against-One (OAO) and DAGSVM (Directed Acyclic Graph SVM) approaches because they are known to be the most accurate among the various MSVM approaches. GAMSVM was implemented using LIBSVM, an open-source library, and Evolver 5.5, a commercial genetic algorithm package. The other comparative models were implemented using various statistical and AI packages such as SPSS for Windows, Neuroshell, and Microsoft Excel VBA (Visual Basic for Applications). Experimental results showed that the proposed model, GAMSVM, outperformed all the comparative models. In addition, the model was found to use fewer independent variables while showing higher accuracy. In our experiments, five variables, X7 (total debt), X9 (sales per employee), X13 (years since founding), X15 (accumulated earnings to total assets), and X39 (an index related to cash flows from operating activities), were found to be the most important factors in predicting corporate credit ratings. However, the values of the finally selected kernel parameters were found to be almost the same across the data subsets. To examine whether the predictive performance of GAMSVM was significantly greater than that of the other models, we used the McNemar test. As a result, we found that GAMSVM was better than MDA, MLOGIT, CBR, and ANN at the 1% significance level, and better than OAO and DAGSVM at the 5% significance level.
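
The sketch below is a hedged, simplified rendering of the GAMSVM idea, not the authors' implementation: a small genetic algorithm jointly searches RBF kernel parameters (C, gamma) and a binary feature mask, with scikit-learn's SVC (which performs one-vs-one multiclass classification internally) standing in for LIBSVM/Evolver and synthetic data standing in for the 14 financial ratios.

```python
# Hedged GAMSVM-style sketch: GA over kernel parameters and a feature mask.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=400, n_features=14, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

N_FEAT, POP, GEN = X.shape[1], 12, 8

def random_individual():
    # Chromosome: log2(C), log2(gamma), and a binary feature mask.
    return {"logC": rng.uniform(-2, 8), "logG": rng.uniform(-8, 2),
            "mask": rng.integers(0, 2, N_FEAT)}

def fitness(ind):
    mask = ind["mask"].astype(bool)
    if not mask.any():
        return 0.0
    clf = SVC(C=2.0 ** ind["logC"], gamma=2.0 ** ind["logG"], kernel="rbf")
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def crossover(a, b):
    return {"logC": (a["logC"] + b["logC"]) / 2,
            "logG": (a["logG"] + b["logG"]) / 2,
            "mask": np.where(rng.random(N_FEAT) < 0.5, a["mask"], b["mask"])}

def mutate(ind, rate=0.1):
    ind["logC"] += rng.normal(0, 1) * (rng.random() < rate)
    ind["logG"] += rng.normal(0, 1) * (rng.random() < rate)
    flip = rng.random(N_FEAT) < rate
    ind["mask"] = np.where(flip, 1 - ind["mask"], ind["mask"])
    return ind

population = [random_individual() for _ in range(POP)]
for _ in range(GEN):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:POP // 2]                       # truncation selection
    children = [mutate(crossover(parents[rng.integers(len(parents))],
                                 parents[rng.integers(len(parents))]))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best CV accuracy:", round(fitness(best), 3),
      "features used:", int(best["mask"].sum()))
```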

Development of Joint-Based Motion Prediction Model for Home Co-Robot Using SVM (SVM을 이용한 가정용 협력 로봇의 조인트 위치 기반 실행동작 예측 모델 개발)

  • Yoo, Sungyeob;Yoo, Dong-Yeon;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering / v.8 no.12 / pp.491-498 / 2019
  • A digital twin is a technology that virtualizes physical objects of the real world on a computer. Sensor data are collected through the IoT and used to connect physical objects and virtual objects bidirectionally. It minimizes risk by tuning the operation of the virtual model through simulation and by responding to a changing environment based on experiments conducted in advance. Recently, as artificial intelligence and machine learning technologies have attracted attention, the tendency to virtualize the behavior of physical objects, observe the virtual models, and apply various scenarios has been increasing. In particular, recognition of each robot's motion is needed to build a digital twin for the co-robot, which is at the heart of Industry 4.0 factory automation. Compared with modeling-based research for recognizing co-robot motion, there have been few attempts to predict motion from sensor data. Therefore, in this paper, an experimental environment for collecting current and inertial data from a co-robot is built, and a motion prediction model based on the collected sensor data is proposed. The proposed method classifies the co-robot's motion commands into 9 types based on joint position and uses current and inertial sensor values to predict them through accumulated learning. The training data are the sensor values collected while the co-robot operates with a margin applied to the input parameters of the motion commands. In this way, the model is constructed to predict not only the nine movements along the same path but also movements along similar paths. As a result of training with an SVM, the accuracy, precision, and recall of the model averaged 97%.
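
A minimal sketch of the classification setup described above, assuming synthetic per-joint current/IMU features in place of the collected co-robot logs and an RBF SVM in scikit-learn:

```python
# Hypothetical sketch: SVM over current/inertial features for 9 motion classes.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)
n_per_class, n_features = 60, 12          # e.g. per-joint current + IMU statistics
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(9)])        # 9 motion-command classes
y = np.repeat(np.arange(9), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```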

Development and application of cellular automata-based urban inundation and water cycle model CAW (셀룰러 오토마타 기반 도시침수 및 물순환 해석 모형 CAW의 개발 및 적용)

  • Lee, Songhee;Choi, Hyeonjin;Woo, Hyuna;Kim, Minyoung;Lee, Eunhyung;Kim, Sanghyun;Noh, Seong Jin
    • Journal of Korea Water Resources Association / v.57 no.3 / pp.165-179 / 2024
  • A comprehensive understanding of inundation and the water cycle in urban areas is crucial for mitigating flood risks and for sustainable water resources management. In this study, we developed a Cellular Automata-based integrated Water cycle model (CAW). A comparative analysis with physics-based and conventional cellular automata-based models was performed in an urban watershed in Portland, USA, to evaluate the adequacy of spatiotemporal inundation simulation in a high-resolution setup. The maximum inundation maps from CAW and the Weighted Cellular Automata 2 Dimension (WCA2D) model showed high similarity, presumably due to the shared diffusive wave assumption, with an average root-mean-square error (RMSE) of 1.3 cm and high binary pattern indices (HR 0.91, FAR 0.02, CSI 0.90). Furthermore, in multiple simulation experiments estimating the effects of land cover and soil conditions on inundation and infiltration, when the impermeability rate increased by 41%, infiltration decreased by 54% (4.16 mm/m2) while the maximum inundation depth increased by 10% (2.19 mm/m2). These results suggest that high-resolution integrated inundation and water cycle analysis considering various land cover and soil conditions in urban areas is feasible with CAW.
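
For reference, the binary pattern indices quoted above can be computed as in the sketch below, assuming the standard definitions HR = hits/(hits+misses), FAR = false alarms/(hits+false alarms), and CSI = hits/(hits+misses+false alarms); the arrays are synthetic stand-ins for the CAW and WCA2D maximum inundation maps.

```python
# Assumed-definition sketch of the binary pattern indices used to compare
# simulated and reference maximum inundation maps.
import numpy as np

def binary_pattern_indices(sim_depth, ref_depth, threshold=0.01):
    """Classify each cell as wet/dry at `threshold` (m) and score the match."""
    sim_wet = sim_depth >= threshold
    ref_wet = ref_depth >= threshold
    hits = np.sum(sim_wet & ref_wet)
    misses = np.sum(~sim_wet & ref_wet)
    false_alarms = np.sum(sim_wet & ~ref_wet)
    hr = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return hr, far, csi

# Toy example on a random field standing in for the two model outputs.
rng = np.random.default_rng(3)
ref = rng.random((100, 100)) * 0.05
sim = ref + rng.normal(0, 0.005, size=ref.shape)
print("HR=%.2f FAR=%.2f CSI=%.2f" % binary_pattern_indices(sim, ref))
```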

Dynamic forecasts of bankruptcy with Recurrent Neural Network model (RNN(Recurrent Neural Network)을 이용한 기업부도예측모형에서 회계정보의 동적 변화 연구)

  • Kwon, Hyukkun;Lee, Dongkyu;Shin, Minsoo
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.139-153 / 2017
  • Corporate bankruptcy can cause great losses not only to stakeholders but also to many related sectors of society. Through economic crises, bankruptcies have increased and bankruptcy prediction models have become more and more important; corporate bankruptcy has therefore been regarded as one of the major research topics in business management, and many studies in industry are also in progress. Previous studies attempted to utilize various methodologies to improve bankruptcy prediction accuracy and to resolve the overfitting problem, such as Multivariate Discriminant Analysis (MDA) and the Generalized Linear Model (GLM), which are based on statistics. More recently, researchers have used machine learning methodologies such as the Support Vector Machine (SVM) and Artificial Neural Network (ANN), as well as fuzzy theory and genetic algorithms. As a result, many bankruptcy models have been developed and their performance has improved. In general, a company's financial and accounting information changes over time, and the market situation changes as well, so it is difficult to predict bankruptcy using information from only a single point in time. Nevertheless, although traditional research has the problem of not taking the time effect into account, dynamic models have not been studied much. Ignoring the time effect yields biased results, so a static model may not be suitable for predicting bankruptcy; a dynamic model may therefore improve bankruptcy prediction. In this paper, we propose the RNN (Recurrent Neural Network), one of the deep learning methodologies; the RNN learns time series data and its performance is known to be good. For the estimation of the bankruptcy prediction model and the comparison of forecasting performance, we selected non-financial firms listed on the KOSPI, KOSDAQ, and KONEX markets from 2010 to 2016. To avoid predicting bankruptcy from financial information that already reflects the deterioration of a company's financial condition, the financial information was collected with a lag of two years, and the default period was defined as January to December of the year. We defined bankruptcy as the delisting of a company due to sluggish earnings, which we confirmed through KIND, a corporate stock information website. We then selected variables from previous papers. The first set of variables are the Z-score variables, which have become traditional in bankruptcy prediction; the second is a dynamic variable set. We selected 240 normal companies and 226 bankrupt companies for the first variable set, and 229 normal companies and 226 bankrupt companies for the second. We created a model that reflects dynamic changes in time-series financial data, and by comparing the suggested model with existing bankruptcy prediction models, we found that it could help improve the accuracy of bankruptcy prediction. We used financial data from KIS Value (a financial database) and selected Multivariate Discriminant Analysis (MDA), the Generalized Linear Model known as logistic regression (GLM), the Support Vector Machine (SVM), and the Artificial Neural Network (ANN) as benchmark models. The experimental results showed that the RNN's performance was better than that of the comparative models.
The accuracy of the RNN was high for both sets of variables, and the Area Under the Curve (AUC) value was also high. In the hit-ratio table, the rate at which the RNN predicted a poor company to be bankrupt was higher than that of the other comparative models. A limitation of this paper is that an overfitting problem occurs during RNN training, but we expect to be able to solve it by selecting more training data and appropriate variables. From these results, it is expected that this research will contribute to the development of bankruptcy prediction by proposing a new dynamic model.
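
A minimal PyTorch sketch of the recurrent classification idea, assuming synthetic multi-year financial-ratio sequences and a plain RNN layer; it illustrates the model family, not the paper's architecture or data.

```python
# Hedged sketch of a recurrent bankruptcy classifier over multi-year
# financial-ratio sequences (synthetic data; not the KIS Value set).
import torch
import torch.nn as nn

class BankruptcyRNN(nn.Module):
    def __init__(self, n_features=14, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # one logit: bankrupt vs. normal

    def forward(self, x):                           # x: (batch, years, features)
        _, h = self.rnn(x)                          # h: (num_layers, batch, hidden)
        return self.head(h[-1]).squeeze(-1)         # (batch,) logits

# Toy data: 466 companies x 5 years x 14 ratios with a synthetic label.
torch.manual_seed(0)
X = torch.randn(466, 5, 14)
y = (X[:, :, 0].mean(dim=1) < 0).float()

model = BankruptcyRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

accuracy = ((torch.sigmoid(model(X)) > 0.5).float() == y).float().mean()
print(f"train accuracy: {accuracy:.3f}")
```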

A Study on UI Prototyping Based on Personality of Things for Interusability in IoT Environment (IoT 환경에서 인터유저빌리티(Interusability) 개선을 위한 사물성격(Personality of Things)중심의 UI 프로토타이핑에 대한 연구)

  • Ahn, Mikyung;Park, Namchoon
    • Journal of the HCI Society of Korea / v.13 no.2 / pp.31-44 / 2018
  • In the IoT environment, various things can be connected. These connected things learn and operate by themselves by acquiring data; like human beings, they have self-learning and self-operating systems. In the field of IoT research, therefore, the key issue is to design a communication system connecting two different types of subjects: the human being (user) and the things. With the advent of the IoT environment, much research has been done in the field of UI design, taking complex factors into account through keywords such as multi-modality and interusability. However, existing UI design methods have limitations in structuring or testing the interaction between things and users in the IoT environment. Therefore, this paper suggests a new UI prototyping method. The major analyses and studies are as follows: (1) defined the behavior process of things, (2) analyzed existing IoT products, (3) built a new framework for deriving personality types, (4) extracted three representative personality models, and (5) applied the three models to a smart home service and tested UI prototyping. This study is meaningful in that it examines the user experience (UX) of IoT services in a more comprehensive way. Moreover, the concept of the personality of things can be utilized as a tool for establishing the identity of artificial intelligence (AI) services in the future.


Analysis of the Status of Natural Language Processing Technology Based on Deep Learning (딥러닝 중심의 자연어 처리 기술 현황 분석)

  • Park, Sang-Un
    • The Journal of Bigdata / v.6 no.1 / pp.63-81 / 2021
  • The performance of natural language processing is rapidly improving due to the recent development and application of machine learning and deep learning technologies, and as a result, the field of application is expanding. In particular, as the demand for analysis of unstructured text data increases, interest in NLP (Natural Language Processing) is also increasing. However, due to the complexity and difficulty of natural language preprocessing and of machine learning and deep learning theories, there are still high barriers to the use of natural language processing. In this paper, to support an overall understanding of NLP, we examine the main fields of NLP that are currently being actively researched and the current state of major technologies centered on machine learning and deep learning, in order to provide a foundation for understanding and utilizing NLP more easily. We investigated the position of NLP within AI (artificial intelligence) through changes in the taxonomy of AI technology. The main areas of NLP, which consist of language modeling, text classification, text generation, document summarization, question answering, and machine translation, were explained with state-of-the-art deep learning models. In addition, major deep learning models utilized in NLP were explained, and data sets and evaluation measures for performance evaluation were summarized. We hope that researchers who want to utilize NLP for various purposes in their field will be able to understand the overall technical status and the main technologies of NLP through this paper.
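
As a small illustration of one surveyed area (text classification with a pretrained deep learning model), the snippet below uses the Hugging Face pipeline API; the default English sentiment model it downloads is an assumption for demonstration, not part of the paper.

```python
# Illustrative sketch of text classification with a pretrained transformer.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default pretrained model
print(classifier(["Deep learning has lowered the barrier to NLP.",
                  "Preprocessing natural language is still difficult."]))
# Each result is a dict with a predicted 'label' and its confidence 'score'.
```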

A Study on Orthogonal Image Detection Precision Improvement Using Data of Dead Pine Trees Extracted by Period Based on U-Net model (U-Net 모델에 기반한 기간별 추출 소나무 고사목 데이터를 이용한 정사영상 탐지 정밀도 향상 연구)

  • Kim, Sung Hun;Kwon, Ki Wook;Kim, Jun Hyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.4 / pp.251-260 / 2022
  • Although the number of trees affected by pine wilt disease is decreasing, the affected area is expanding across the country. Recently, with the development of deep learning technology, it has been rapidly applied to studies on detecting pine wilt nematode damage and dead trees. The purpose of this study is to efficiently acquire deep learning training data and accurate true (ground-truth) values in order to further improve the detection ability of U-Net models through learning. To this end, a filtering method that applies the deep learning algorithm step by step minimizes the ambiguous analysis basis of the deep learning model, enabling efficient analysis and judgment. As a result, in detecting dead pine trees caused by the wilt nematode with the U-Net algorithm, the U-Net model using the true values analyzed by period showed a recall 0.5%p lower, a precision 7.6%p higher, and an F-1 score 4.1%p higher than the U-Net model using the previously provided true values. In the future, the precision of wilt detection could be further increased by applying various filtering techniques, and drone surveillance using orthographic drone images and artificial intelligence is expected to be usable in pine wilt nematode disaster prevention projects.
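
The precision/recall/F-1 comparison above can be reproduced in form with the sketch below, which assumes the standard per-pixel definitions and uses synthetic masks in place of the orthographic-image detection results.

```python
# Assumed-definition sketch: precision, recall, and F-1 from predicted vs.
# reference dead-tree masks (synthetic stand-ins).
import numpy as np

def precision_recall_f1(pred, truth):
    tp = np.sum(pred & truth)                     # detected dead trees that are real
    fp = np.sum(pred & ~truth)                    # false detections
    fn = np.sum(~pred & truth)                    # missed dead trees
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

rng = np.random.default_rng(5)
truth = rng.random((512, 512)) < 0.02             # stand-in ground-truth mask
pred = truth ^ (rng.random((512, 512)) < 0.005)   # prediction with some errors
print("precision=%.3f recall=%.3f f1=%.3f" % precision_recall_f1(pred, truth))
```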

A Multi-agent based Cooperation System for an Intelligent Earthwork (지능형 토공을 위한 멀티에이전트 기반 협업시스템)

  • Kim, Sung-Keun
    • KSCE Journal of Civil and Environmental Engineering Research / v.34 no.5 / pp.1609-1623 / 2014
  • A number of studies have been conducted recently regarding the development of automation systems for the construction sector. Much of this attention has focused on earthwork because it is highly dependent on construction machines and is regarded as fundamental to the construction of buildings and civil works. For example, technologies are being developed to enable earthwork planning based on construction site models that are built by automatic systems, and to enable construction equipment to perform the work based on the plan and the environment. Many problems still need to be solved before automatic earthwork systems can be used at construction sites; for example, technologies are needed to enable collaboration between similar and different kinds of construction equipment. This study aims to develop a construction system that imitates the collaborative systems and decision-making methods used by humans. The proposed system relies on the multi-agent concept from the field of artificial intelligence. In order to develop a multi-agent-based system, configurations and functions are proposed for the agents, and a framework for collaboration and arbitration between agents is presented. Furthermore, methods are introduced for preventing duplicate work and minimizing interference effects during the collaboration process, and for performing advance planning for the excavators and compactors involved in the construction. The current study suggests a theoretical framework and evaluates the results using virtual simulations; in the future, an empirical study will be conducted to apply these concepts to actual construction sites through the development of a physical system.
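
As a purely conceptual sketch (the paper's agent configurations and arbitration framework are not reproduced), the code below illustrates one way a coordinator might assign earthwork cells to excavator and compactor agents while preventing duplicate work; all class and field names are hypothetical.

```python
# Conceptual multi-agent task-allocation sketch for earthwork.
from dataclasses import dataclass, field

@dataclass
class EquipmentAgent:
    name: str
    kind: str                       # "excavator" or "compactor"
    assigned: list = field(default_factory=list)

    def bid(self, cell):
        """Simple fitness: prefer agents of the right kind with fewer tasks."""
        return len(self.assigned) if self.kind == cell["work"] else float("inf")

class Coordinator:
    def __init__(self, agents):
        self.agents = agents
        self.claimed = set()        # cells already assigned (no duplicate work)

    def allocate(self, cells):
        for cell in cells:
            if cell["id"] in self.claimed:
                continue
            best = min(self.agents, key=lambda a: a.bid(cell))
            if best.bid(cell) == float("inf"):
                continue            # no suitable equipment; arbitration could go here
            best.assigned.append(cell["id"])
            self.claimed.add(cell["id"])

agents = [EquipmentAgent("EX-1", "excavator"), EquipmentAgent("EX-2", "excavator"),
          EquipmentAgent("CP-1", "compactor")]
cells = [{"id": i, "work": "excavator" if i % 3 else "compactor"} for i in range(9)]
Coordinator(agents).allocate(cells)
for a in agents:
    print(a.name, "->", a.assigned)
```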