• Title/Summary/Keyword: Machine learning algorithm

Search Results: 1,492

Predicting unconfined compression strength and split tensile strength of soil-cement via artificial neural networks

  • Luis Pereira;Luis Godinho;Fernando G. Branco
    • Geomechanics and Engineering
    • /
    • v.33 no.6
    • /
    • pp.611-624
    • /
    • 2023
  • Soil is attractive as a building material due to its mechanical strength, aesthetic appearance, plasticity, and low cost. However, its mechanical properties frequently need to be improved and stabilized with binders. Soil-cement is applied for purposes ranging from housing to dams, roads, and foundations. Unconfined compression strength (UCS) and split tensile strength (CD) are essential mechanical parameters for ascertaining the aptitude of soil-cement for a given application. However, quantifying these parameters requires specimen preparation, testing, and several weeks of curing. Methodologies that allow accurate estimation of mechanical parameters in a shorter time would represent an important advance, ensuring shorter delivery timelines and reducing the amount of laboratory work. In this work, an extensive campaign of UCS and CD tests was carried out on a sandy soil from the Leiria region (Portugal). Then, using the Neural Pattern Recognition machine learning tool of the MATLAB software, these two parameters were predicted from six input parameters. The results, especially those obtained with a Bayesian regularization-backpropagation algorithm, are clearly positive, with a forecast success rate over 90% and a very low root mean square error (RMSE).
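The workflow above can be sketched in Python as a minimal one-hidden-layer network trained by backpropagation, using plain L2 weight decay as a rough stand-in for MATLAB's Bayesian regularization ('trainbr'); the six inputs and all data below are synthetic assumptions, not the paper's dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                       # six mixture/curing inputs (assumed)
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=200)  # stand-in UCS targets

W1 = rng.normal(scale=0.1, size=(6, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1)); b2 = np.zeros(1)
lr, decay = 0.01, 1e-4                              # decay approximates regularization

pred0 = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
rmse_before = float(np.sqrt(np.mean((pred0 - y) ** 2)))

for _ in range(500):
    H = np.tanh(X @ W1 + b1)                        # hidden layer
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    # backpropagation with an L2 penalty on the weights
    gW2 = H.T @ err[:, None] / len(y) + decay * W2
    gb2 = np.array([err.mean()])
    dH = err[:, None] @ W2.T * (1 - H ** 2)
    gW1 = X.T @ dH / len(y) + decay * W1
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))     # the paper's reported metric
```

The RMSE after training should fall below its pre-training value, mirroring the evaluation criterion the study reports.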

Explainable Analysis of the Relationship between Hypertension and Gas Leakages (설명 가능한 인공지능 기술을 활용한 가스누출과 고혈압의 연관 분석)

  • Dashdondov, Khongorzul;Jo, Kyuri;Kim, Mi-Hye
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.55-56
    • /
    • 2022
  • Hypertension is a severe health problem and increases the risk of other health issues, such as heart disease, heart attack, and stroke. In this research, we propose a machine learning-based method for predicting the risk of chronic hypertension. The proposed method consists of four main modules. In the first module, linear interpolation fills missing values in the integrated gas and meteorological datasets. In the second module, OrdinalEncoder-based normalization is followed by the decision tree algorithm to select important features. The prediction analysis module builds three models based on k-Nearest Neighbors, Decision Tree, and Random Forest to predict hypertension levels. Finally, the features used in the prediction model are explained by the DeepSHAP approach. The proposed method is evaluated by integrating the Korean Meteorological Agency dataset, a natural gas leakage dataset, and the Korean National Health and Nutrition Examination Survey dataset. The experimental results revealed globally important features for hypertension across the entire population and local contributions for particular patients. Based on the local explanation results for a randomly selected 65-year-old male, the predicted hypertension effect increased from 0.694 to 1.249 when age increased by 0.37 and gas loss increased by 0.17. It is therefore concluded that gas loss is a cause of high blood pressure.
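The four-module pipeline can be sketched as follows (the SHAP explanation step is omitted); the gas, sex, and age columns and the synthetic labels are invented for illustration and are not the study's data.

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
gas = rng.normal(size=100)
gas[[5, 40, 77]] = np.nan                          # simulated missing readings
idx = np.arange(100)
mask = np.isnan(gas)
gas[mask] = np.interp(idx[mask], idx[~mask], gas[~mask])   # module 1: interpolation

sex = rng.choice(["M", "F"], size=(100, 1))
sex_enc = OrdinalEncoder().fit_transform(sex)               # module 2: ordinal encoding
age = rng.integers(20, 80, size=100)
X = np.column_stack([gas, sex_enc.ravel(), age])
y = (0.03 * age + 0.5 * gas + rng.normal(scale=0.5, size=100) > 1.5).astype(int)

ranker = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
importances = ranker.feature_importances_                   # module 2: feature ranking
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)  # module 3
acc = model.score(X, y)
```

In the real pipeline, DeepSHAP would then be applied to `model` to produce the global and per-patient explanations the abstract describes.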

Analyzing effect and importance of input predictors for urban streamflow prediction based on a Bayesian tree-based model

  • Nguyen, Duc Hai;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.134-134
    • /
    • 2022
  • Streamflow forecasting plays a crucial role in water resource control, especially in highly urbanized areas that are very vulnerable to flooding during heavy rainfall events. In addition to providing accurate predictions, evaluating the effect and importance of the input predictors can assist water managers. Recently, machine learning techniques have been applied to their advantage for modeling complex and nonlinear hydrological processes. However, these techniques have not properly considered the importance and uncertainty of the predictor variables. To address these concerns, we applied GA-BART, which integrates a genetic algorithm (GA) with the Bayesian additive regression tree (BART) model, for hourly streamflow forecasting and analysis of input predictors. The Jungrang urban basin was selected as a case study, and a database was established from 39 heavy rainfall events between 2003 and 2020 recorded at the rain gauges and monitoring stations. For the goal of this study, we used combinations of inputs that included the areal rainfall of the subbasins at the current and previous time steps, as well as the water level and streamflow of the stations at the current time step, for multistep-ahead streamflow predictions. An analysis of multiple datasets with different input predictors was performed to define the optimal set for streamflow forecasting. In addition, the GA-BART model could reasonably determine the relative importance of the input variables. The assessment may help water resource managers improve the accuracy of forecasts and early flood warnings in the basin.
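The GA half of GA-BART can be sketched as a small genetic algorithm searching binary masks over candidate predictors, with a least-squares fit as a simplified stand-in fitness for the BART model; the lagged rainfall/water-level predictors here are simulated, not the Jungrang basin data.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 120, 8
X = rng.normal(size=(n, p))                       # candidate predictors (simulated lags)
true_mask = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y = X[:, true_mask == 1].sum(axis=1) + 0.1 * rng.normal(size=n)

def fitness(mask):
    """Score a predictor subset: negative fit error minus a size penalty."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask == 1]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -np.mean((y - Xs @ coef) ** 2) - 0.01 * mask.sum()

pop = rng.integers(0, 2, size=(20, p))
init_best = max(fitness(m) for m in pop)
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]       # selection (elitist)
    cut = rng.integers(1, p, size=10)
    kids = np.array([np.r_[parents[i, :c], parents[(i + 1) % 10, c:]]
                     for i, c in enumerate(cut)]) # one-point crossover
    flip = rng.random(kids.shape) < 0.05          # mutation
    kids = np.where(flip, 1 - kids, kids)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases across generations.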

PE file malware detection using opcode and IAT (Opcode와 IAT를 활용한 PE 파일 악성코드 탐지)

  • JeongHun Lee;Ah Reum Kang
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.01a
    • /
    • pp.103-106
    • /
    • 2023
  • Due to the COVID-19 pandemic, work environments have shifted to remote work, and malware variants are also evolving rapidly. Whenever malware is analyzed and a vaccine program (antivirus signature) is created, new variants appear, and until a vaccine for a variant is produced, that variant remains a threat to users. In this study, we present a method for predicting whether a file is malicious using machine learning algorithms. Portable Executable (PE) files, which have the structure typical of malware, were statically analyzed with Python's LIEF library for three feature groups: Certificate, Imports, and Opcode. The training data consisted of 320 benign files and 530 malicious files. For Certificate, the feature set comprised hasSignature (digital signature information), isValidcertificate (validity of the digital signature), and isNotExpired (validity of the certificate). For Imports, the feature set was built by comparing the frequency of functions in the Import Address Table. For Opcode, tri-grams were extracted and their frequencies compared to build the feature set. The test data consisted of 360 benign files and 610 malicious files. Using these feature sets, we compared the performance of four machine learning algorithms: random forest, decision tree, bagging, and AdaBoost; the bagging algorithm achieved an accuracy of about 0.98.
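The opcode tri-gram step can be sketched with the standard library alone: slide a window of three opcodes over a disassembled instruction stream and count frequencies against a fixed vocabulary. The opcode sequences below are invented; the paper extracts real streams with LIEF and feeds the counts to the bagging and tree ensembles.

```python
from collections import Counter

def opcode_trigrams(opcodes):
    """Count overlapping opcode triples (tri-grams) in an instruction stream."""
    return Counter(zip(opcodes, opcodes[1:], opcodes[2:]))

benign = ["push", "mov", "call", "mov", "ret"]
malicious = ["push", "xor", "xor", "xor", "jmp"]

# Fixed-order vocabulary across the corpus gives each file a comparable vector.
vocab = sorted(set(opcode_trigrams(benign)) | set(opcode_trigrams(malicious)))

def to_vector(opcodes):
    counts = opcode_trigrams(opcodes)
    return [counts.get(t, 0) for t in vocab]

x_benign, x_mal = to_vector(benign), to_vector(malicious)
```

These frequency vectors would then be the Opcode portion of the feature set passed to the classifiers.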

A Study on the Research Trends in Int'l Trade Using Topic modeling (토픽모델링을 활용한 무역분야 연구동향 분석)

  • Jee-Hoon Lee;Jung-Suk Kim
    • Korea Trade Review
    • /
    • v.45 no.3
    • /
    • pp.55-69
    • /
    • 2020
  • This study examines the research trends and knowledge structure of international trade studies using topic modeling, one of the main methodologies of text mining. We collected and analyzed the English abstracts of 1,868 papers from three major Korean journals in the area of international trade from 2003 to 2019. We used Latent Dirichlet Allocation (LDA), an unsupervised machine learning algorithm, to extract latent topics from the large quantity of research abstracts. Twenty topics were identified without any prior human judgement. The topics reveal a topographical map of research in international trade and are representative and meaningful in the sense that most of them correspond to previously established sub-topics in trade studies. We then conducted a regression analysis on the document-topic distributions generated by LDA to identify hot and cold topics. We discovered two hot topics (internationalization capacity and performance of export companies; economic effects of trade) and two cold topics (exchange rate and current account; trade finance). Trade studies are characterized as an interdisciplinary field spanning three agendas (i.e., international economics, international business, and trade practice), and the 20 identified topics can be grouped into these three agendas. From the estimated results, we find that the Korean government's active pursuit of FTAs and the consequent need for capacity building in Korean export firms lie behind the popularity of topic selection by Korean researchers in the area of international trade.
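The LDA step can be sketched on a toy corpus: the study fits 20 topics to 1,868 real abstracts, while this illustration fits two topics to six invented one-line abstracts (one cluster echoing the "export performance" hot topic, one the "exchange rate" cold topic).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "export firms increase export performance through internationalization",
    "export capacity and firm performance in global markets",
    "internationalization strategy improves export performance",
    "exchange rate volatility affects the current account balance",
    "current account deficits respond to exchange rate shocks",
    "exchange rate policy and the trade balance",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)  # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)        # per-document topic distributions
```

The rows of `doc_topics` are the document-topic distributions on which the study's hot/cold-topic regression is run.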

Prediction of karst sinkhole collapse using a decision-tree (DT) classifier

  • Boo Hyun Nam;Kyungwon Park;Yong Je Kim
    • Geomechanics and Engineering
    • /
    • v.36 no.5
    • /
    • pp.441-453
    • /
    • 2024
  • Sinkhole subsidence and collapse are common geohazards often occurring in karst areas such as the state of Florida, United States of America. To predict sinkhole occurrence, we need to understand the formation mechanism of sinkholes and their karst hydrogeology. For this purpose, investigating the factors affecting sinkholes is an essential and important step. The main objectives of the present study are (1) the development of a machine learning (ML)-based model, namely a C5.0 decision tree (C5.0 DT), for the prediction of sinkhole susceptibility, which accounts for the sinkhole/subsidence inventory and sinkhole contributing factors (e.g., geological/hydrogeological), and (2) the construction of a regional-scale sinkhole susceptibility map. The study area is east central Florida (ECF), where the cover-collapse type is commonly reported. The C5.0 DT algorithm was used to account for twelve (12) identified hydrogeological factors. In this study, a total of 1,113 sinkholes in ECF were identified, and the dataset was then randomly divided into 70% and 30% subsets for training and testing, respectively. The performance of the sinkhole susceptibility model was evaluated using a receiver operating characteristic (ROC) curve, particularly the area under the curve (AUC). The C5.0 model showed a high prediction accuracy of 83.52%. It is concluded that a decision tree is a promising tool and classifier for spatial prediction of karst sinkholes and subsidence in the ECF area.
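The workflow shape can be sketched as follows, with scikit-learn's CART decision tree standing in for C5.0 (which has no canonical Python port): a 70/30 split over 12 contributing factors, scored by ROC AUC. The hydrogeological features and labels below are simulated.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 12))                    # 12 contributing factors (simulated)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

# 70% training / 30% testing split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])  # ROC-curve metric
```

Mapping the fitted tree's predicted susceptibility over a spatial grid would then yield the susceptibility map the study constructs.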

[Review] Prediction of Cervical Cancer Risk from Taking Hormonal Contraceptives

  • Su jeong RU;Kyung-A KIM;Myung-Ae CHUNG;Min Soo KANG
    • Korean Journal of Artificial Intelligence
    • /
    • v.12 no.1
    • /
    • pp.25-29
    • /
    • 2024
  • In this study, research was conducted to predict the probability of cervical cancer occurrence associated with the use of hormonal contraceptives. Cervical cancer is influenced by various environmental factors; however, the human papillomavirus (HPV) is detected in 99% of cases, making it the primary attributable cause. Additionally, although cervical cancer ranks 10th in overall female cancer incidence, it is nearly 100% preventable among known cancers. Early-stage cervical cancer typically presents no symptoms but can be detected through regular screening. Therefore, routine tests, including cytology, should be conducted annually, as early detection significantly improves the chances of successful treatment. We thus employed artificial intelligence technology to forecast the likelihood of developing cervical cancer. We used the logistic regression algorithm, a predictive model, through Microsoft Azure. The classification model yielded an accuracy of 80.8%, a precision of 80.2%, a recall of 99.0%, and an F1 score of 88.6%. These results indicate that the use of hormonal contraceptives is associated with an increased risk of cervical cancer. Further development of the artificial intelligence program studied here holds promise for reducing mortality rates attributable to cervical cancer.
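A minimal open-source analogue of the Azure workflow is a scikit-learn logistic regression scored with the same four metrics; the single "years of hormonal contraceptive use" feature and the labels below are simulated, not clinical data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(4)
years = rng.uniform(0, 15, size=400).reshape(-1, 1)   # contraceptive use (assumed feature)
risk = 1 / (1 + np.exp(-(0.4 * years.ravel() - 3)))   # true logistic risk curve
y = (rng.random(400) < risk).astype(int)              # simulated outcomes

clf = LogisticRegression().fit(years, y)
pred = clf.predict(years)
acc = accuracy_score(y, pred)                          # the four reported metrics
prec = precision_score(y, pred)
rec = recall_score(y, pred)
f1 = f1_score(y, pred)
```

By construction, F1 equals the harmonic mean of precision and recall, which is how the study's 88.6% follows from its precision and recall figures.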

Software Measurement by Analyzing Multiple Time-Series Patterns (다중 시계열 패턴 분석에 의한 소프트웨어 계측)

  • Kim Gye-Young
    • Journal of Internet Computing and Services
    • /
    • v.6 no.1
    • /
    • pp.105-114
    • /
    • 2005
  • This paper describes a new measurement technique based on analyzing multiple time-series patterns. Its goal is to find the actually measured value whose sample pattern best matches an input time series, and to calculate the difference ratio against that value. The proposed technique is therefore a measurement rather than a recognition method, and it is implemented in software rather than hardware. It consists of three stages: initialization, learning, and measurement. In the initialization stage, the weights of all parameters are decided using importance values given by an operator. In the learning stage, sample patterns are classified using the LBG and DTW algorithms, and code sequences are created for all patterns. In the measurement stage, a code sequence is created for the input time-series pattern, samples having the same code sequence are found by hashing, and the best-matched sample is selected. Finally, the actually measured value of that sample and the difference ratio are output. For performance evaluation, we tested on multiple time-series patterns obtained from an etching machine used in semiconductor manufacturing.
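The DTW distance used to match an input series against stored sample patterns can be sketched as the classic dynamic-programming recurrence (the LBG clustering and code-sequence hashing stages are omitted; the series below are invented).

```python
def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, or match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

sample = [0, 1, 2, 3, 2, 1]
shifted = [0, 0, 1, 2, 3, 2, 1]    # same shape, delayed one step
flat = [3, 3, 3, 3, 3, 3]
best = min((dtw(sample, s), name) for s, name in [(shifted, "shifted"), (flat, "flat")])
```

Because DTW warps the time axis, the delayed copy matches the sample at zero cost while the flat series does not, which is exactly why the paper prefers DTW over a pointwise comparison.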

Design of Incremental K-means Clustering-based Radial Basis Function Neural Networks Model (증분형 K-means 클러스터링 기반 방사형 기저함수 신경회로망 모델 설계)

  • Park, Sang-Beom;Lee, Seung-Cheol;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.5
    • /
    • pp.833-842
    • /
    • 2017
  • In this study, a design methodology for radial basis function neural networks based on incremental K-means clustering is introduced for learning and processing big data. When there is a large dataset to be trained, general clustering may fail to learn the dataset due to the lack of memory capacity. However, on-line processing of big data can be effectively realized through the parameter updates of recursive least squares estimation together with the sequential operation of an incremental clustering algorithm. Radial basis function neural networks consist of a condition part, a conclusion part, and an aggregation part. In the condition part, the incremental K-means clustering algorithm is used to obtain the center points of the data, and the fitness is computed using a Gaussian function as the activation function. Connection weights of the conclusion part are given as a linear function, and their parameters are calculated using recursive least squares estimation. In the aggregation part, a final output is obtained by the center-of-gravity method. Using machine learning data, performance indices are reported and compared with other models. In addition, the performance of the incremental K-means clustering-based RBFNNs is optimized by using PSO. This study demonstrates that the proposed model shows the superiority of algorithmic design from the viewpoint of on-line processing for big data.
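The on-line learning loop can be sketched per sample: the nearest center is nudged toward each arriving point (incremental K-means, the condition part) and the linear output weights are updated by recursive least squares (the conclusion part). The data stream, target function, and dimensions below are invented, and the PSO tuning stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

centers = rng.normal(size=(3, 2))          # condition part: 3 RBF centers
counts = np.ones(3)                        # samples assigned per center
P = np.eye(3) * 100.0                      # RLS inverse-correlation matrix
w = np.zeros(3)                            # conclusion-part linear weights

def rbf(x):
    """Gaussian activations of the current centers for input x."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    return np.exp(-d2)

for _ in range(500):
    x = rng.normal(size=2)
    y = x[0] - x[1]                        # stand-in target
    # incremental K-means: move the nearest center toward x by 1/count
    k = np.argmin(((centers - x) ** 2).sum(axis=1))
    counts[k] += 1
    centers[k] += (x - centers[k]) / counts[k]
    # recursive least squares update of the output weights
    h = rbf(x)
    g = P @ h / (1.0 + h @ P @ h)
    w += g * (y - h @ w)
    P -= np.outer(g, h @ P)

pred = rbf(np.array([1.0, -1.0])) @ w      # aggregated (weighted-sum) output
```

Only the centers, counts, `P`, and `w` are kept in memory, which is the point of the sequential formulation for big data.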

Multi-Modal Wearable Sensor Integration for Daily Activity Pattern Analysis with Gated Multi-Modal Neural Networks (Gated Multi-Modal Neural Networks를 이용한 다중 웨어러블 센서 결합 방법 및 일상 행동 패턴 분석)

  • On, Kyoung-Woon;Kim, Eun-Sol;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.2
    • /
    • pp.104-109
    • /
    • 2017
  • We propose a new machine learning algorithm that analyzes the daily activity patterns of users from multi-modal wearable sensor data. The proposed model learns and extracts activity patterns from wearable-device input in real time. Inspired by the cue-integration property of human perception, we constructed gated multi-modal neural networks that selectively integrate wearable sensor input data by using gate modules. For the experiments, sensory data were collected using multiple wearable devices in restaurant situations. As an experimental result, we first show that the proposed model performs well in terms of prediction accuracy. We then explain the possibility of automatically constructing a knowledge schema by analyzing the activation patterns in the middle layer of the proposed model.
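The gating idea can be sketched in a few lines: a sigmoid gate computed from both modalities produces one scalar per sensor, which scales that sensor's feature vector before fusion. The two modalities, dimensions, and weights below are invented; in the paper the gate and feature weights are learned end to end.

```python
import numpy as np

rng = np.random.default_rng(6)
acc_feat = rng.normal(size=4)              # accelerometer features (assumed modality)
ppg_feat = rng.normal(size=4)              # heart-rate sensor features (assumed modality)

Wg = rng.normal(scale=0.5, size=(8, 2))    # gate weights: one logit per modality

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gate module: both modalities jointly decide how much each one contributes.
gates = sigmoid(np.concatenate([acc_feat, ppg_feat]) @ Wg)   # values in (0, 1)
fused = gates[0] * acc_feat + gates[1] * ppg_feat            # gated fusion
```

A noisy or uninformative sensor would receive a gate value near zero, which is how the model integrates wearable inputs selectively.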