• Title/Abstract/Keyword: machine learning applications

Search results: 538 items

MLOps workflow language and platform for time series data anomaly detection

  • Sohn, Jung-Mo;Kim, Su-Min
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 27, No. 11
    • /
    • pp.19-27
    • /
    • 2022
  • In this study, we propose a language and platform to describe and manage the MLOps (Machine Learning Operations) workflow for time series data anomaly detection. Time series data is collected in many fields, such as IoT sensors, system performance indicators, and user access, and is used in many applications such as system monitoring and anomaly detection. To perform prediction and anomaly detection on time series data, an MLOps platform that can quickly and flexibly deploy the analyzed model to the production environment is required. Thus, we developed the Python-based AI/ML Modeling Language (AMML) to easily configure and execute MLOps workflows; Python is widely used in data analysis. The proposed MLOps platform can extract and preprocess time series data from various data sources (R-DB, NoSQL DB, log files, etc.) using AMML and predict it with a deep learning model. To verify the applicability of AMML, the workflow for generating a transformer oil temperature prediction deep learning model was configured with AMML, and it was confirmed that training was performed normally.
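The AMML syntax itself is not shown in the abstract, so the following Python sketch only illustrates the kind of extract → preprocess → train → detect pipeline such a workflow chains together; every function name and the simple least-squares detector are placeholders, not the platform's actual API.

```python
# Hypothetical sketch of the extract -> preprocess -> train -> detect steps that an
# MLOps workflow for time-series anomaly detection typically chains together.
# AMML itself is not public, so the function names here are illustrative only.
import numpy as np

def extract(n=1000, seed=0):
    """Stand-in for pulling a univariate series from an R-DB/NoSQL/log source."""
    rng = np.random.default_rng(seed)
    series = np.sin(np.linspace(0, 50, n)) + 0.1 * rng.standard_normal(n)
    series[700] += 3.0          # inject one anomaly for demonstration
    return series

def preprocess(series, window=50):
    """Slide a window over the series to build (window -> next value) samples."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def train(X, y):
    """Placeholder model: ordinary least squares instead of the paper's deep net."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def detect(X, y, w, k=4.0):
    """Flag points whose prediction error exceeds k standard deviations."""
    residual = y - X @ w
    return np.where(np.abs(residual) > k * residual.std())[0]

series = extract()
X, y = preprocess(series)
print("anomalous indices:", detect(X, y, train(X, y)))
```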

Application of ML algorithms to predict the effective fracture toughness of several types of concrete

  • Ibrahim Albaijan;Hanan Samadi;Arsalan Mahmoodzadeh;Hawkar Hashim Ibrahim;Nejib Ghazouani
    • Computers and Concrete
    • /
    • Vol. 34, No. 2
    • /
    • pp.247-265
    • /
    • 2024
  • Measuring the fracture toughness of concrete in laboratory settings is challenging due to various factors, such as complex sample preparation procedures, the requirement for precise instruments, potential sample failure, and the brittleness of the samples. Therefore, there is an urgent need to develop innovative and more effective tools to overcome these limitations. Supervised learning methods offer promising solutions. This study introduces seven machine learning algorithms for predicting concrete's effective fracture toughness (K-eff). The models were trained using 560 datasets obtained from the central straight notched Brazilian disc (CSNBD) test. The concrete samples used in the experiments contained micro silica and powdered stone, which are commonly used additives in the construction industry. The study considered six input parameters that affect concrete's K-eff, including concrete type, sample diameter, sample thickness, crack length, force, and angle of initial crack. All the algorithms demonstrated high accuracy on both the training and testing datasets, with R2 values ranging from 0.9456 to 0.9999 and root mean squared error (RMSE) values ranging from 0.000004 to 0.009287. After evaluating their performance, the gated recurrent unit (GRU) algorithm showed the highest predictive accuracy. The ranking of the applied models, from highest to lowest performance in predicting the K-eff of concrete, was as follows: GRU, LSTM, RNN, SFL, ELM, LSSVM, and GEP. In conclusion, it is recommended to use supervised learning models, specifically GRU, for precise estimation of concrete's K-eff. This approach allows engineers to save significant time and costs associated with the CSNBD test. This research contributes to the field by introducing a reliable tool for accurately predicting the K-eff of concrete, enabling efficient decision-making in various engineering applications.
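As a rough illustration of the best-ranked model, the sketch below wires the six input parameters into a small PyTorch GRU regressor; the layer sizes, the treatment of the six scalars as a length-6 sequence, and the random stand-in data are all assumptions, since the paper's exact architecture is not given in the abstract.

```python
# Minimal GRU regressor for the six K-eff input parameters (concrete type, diameter,
# thickness, crack length, force, crack angle). Architecture details are assumed;
# the paper does not specify layer sizes, so this is an illustrative sketch only.
import torch
import torch.nn as nn

class KeffGRU(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # Treat the 6 scalar features as a length-6 sequence of 1-dim steps.
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, 6)
        out, _ = self.gru(x.unsqueeze(-1))   # (batch, 6, hidden)
        return self.head(out[:, -1]).squeeze(-1)

model = KeffGRU()
X = torch.randn(560, 6)                      # stand-in for the 560 CSNBD samples
y = torch.randn(560)                         # stand-in K-eff targets
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(5):                           # a few epochs, just to show the loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```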

Application and Potential of Artificial Intelligence in Heart Failure: Past, Present, and Future

  • Minjae Yoon;Jin Joo Park;Taeho Hur;Cam-Hao Hua;Musarrat Hussain;Sungyoung Lee;Dong-Ju Choi
    • International Journal of Heart Failure
    • /
    • Vol. 6, No. 1
    • /
    • pp.11-19
    • /
    • 2024
  • The prevalence of heart failure (HF) is increasing, necessitating accurate diagnosis and tailored treatment. The accumulation of clinical information from patients with HF generates big data, which poses challenges for traditional analytical methods. To address this, big data approaches and artificial intelligence (AI) have been developed that can effectively predict future observations and outcomes, enabling precise diagnoses and personalized treatments of patients with HF. Machine learning (ML) is a subfield of AI that allows computers to analyze data, find patterns, and make predictions without explicit instructions. ML can be supervised, unsupervised, or semi-supervised. Deep learning is a branch of ML that uses artificial neural networks with multiple layers to find complex patterns. These AI technologies have shown significant potential in various aspects of HF research, including diagnosis, outcome prediction, classification of HF phenotypes, and optimization of treatment strategies. In addition, integrating multiple data sources, such as electrocardiography, electronic health records, and imaging data, can enhance the diagnostic accuracy of AI algorithms. Currently, wearable devices and remote monitoring aided by AI enable the earlier detection of HF and improved patient care. This review focuses on the rationale behind utilizing AI in HF and explores its various applications.

Shanghai Containerised Freight Index Forecasting Based on Deep Learning Methods: Evidence from Chinese Futures Markets

  • Liang Chen;Jiankun Li;Rongyu Pei;Zhenqing Su;Ziyang Liu
    • East Asian Economic Review
    • /
    • Vol. 28, No. 3
    • /
    • pp.359-388
    • /
    • 2024
  • With the escalation of global trade, the Chinese commodity futures market has ascended to a pivotal role within the international shipping landscape. The Shanghai Containerized Freight Index (SCFI), a leading indicator of the shipping industry's health, is particularly sensitive to the vicissitudes of the Chinese commodity futures sector. Nevertheless, a significant research gap exists regarding the application of Chinese commodity futures prices as predictive tools for the SCFI. To address this gap, the present study employs a comprehensive dataset spanning daily observations from March 24, 2017, to May 27, 2022, encompassing a total of 29,308 data points. We have crafted an innovative deep learning model that synergistically combines Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) architectures. The results show that the CNN-LSTM model effectively captures the nonlinear dynamics of the SCFI dataset and its long-term temporal dependencies, and that it is robust to changes in random sample selection, data frequency, and structural shifts within the dataset. It achieved an R2 of 96.6% and outperformed the standalone LSTM and CNN models. This research underscores the predictive power of the Chinese futures market in influencing the Shipping Cost Index, deepening our understanding of the intricate relationship between the shipping industry and the financial sphere. Furthermore, it broadens the scope of machine learning applications in maritime transportation management, paving the way for SCFI forecasting research. The study's findings offer potent decision-support tools and risk management solutions for logistics enterprises, shipping corporations, and governmental entities.
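A minimal sketch of a CNN-LSTM hybrid of the kind described, with a 1-D convolution extracting local patterns from a price window and an LSTM modeling the longer-range dependencies; all layer sizes, the window length, and the feature count are assumptions.

```python
# Illustrative CNN-LSTM hybrid of the kind described for SCFI forecasting:
# a 1-D convolution extracts local patterns from a futures-price window, and an
# LSTM models the longer-range dependencies. All layer sizes are assumptions.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features, conv_channels=16, lstm_hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)              # next-step SCFI value

    def forward(self, x):                                  # x: (batch, window, n_features)
        z = torch.relu(self.conv(x.transpose(1, 2)))       # (batch, channels, window)
        out, _ = self.lstm(z.transpose(1, 2))              # (batch, window, hidden)
        return self.head(out[:, -1]).squeeze(-1)

# Shape check with a dummy batch: 8 windows of 20 days, 5 futures features each.
model = CNNLSTM(n_features=5)
print(model(torch.randn(8, 20, 5)).shape)                  # torch.Size([8])
```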

Improving the Classification Accuracy Using Unlabeled Data: A Naive Bayesian Case (나이브 베이지안 환경에서 미분류 데이터를 이용한 성능향상)

  • Lee Chang-Hwan
    • The KIPS Transactions:PartB
    • /
    • Vol. 13B, No. 4
    • /
    • pp.457-462
    • /
    • 2006
  • In many applications, an enormous amount of unlabeled data is available at little cost. Therefore, it is natural to ask whether we can take advantage of these unlabeled data in classification learning. In this paper, we analyzed the role of unlabeled data in the context of naive Bayesian learning. Experimental results show that including unlabeled data as part of the training data can significantly improve classification accuracy, and the effect of using unlabeled data is especially important when labeled data are sparse.
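One common way to fold unlabeled data into naive Bayesian learning is an iterative pseudo-labeling (self-training) loop; the scikit-learn sketch below shows that general idea under assumed thresholds and synthetic data, and is not necessarily the exact procedure used in the paper.

```python
# Simple self-training loop with Gaussian naive Bayes: an illustration of using
# unlabeled data, not necessarily the exact procedure used in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:50] = True                       # pretend only 50 labels are available

clf = GaussianNB().fit(X[labeled], y[labeled])
for _ in range(5):                        # iterate: pseudo-label, then refit
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95  # keep only high-confidence predictions
    pseudo_X = np.vstack([X[labeled], X[~labeled][confident]])
    pseudo_y = np.concatenate([y[labeled], proba[confident].argmax(axis=1)])
    clf = GaussianNB().fit(pseudo_X, pseudo_y)

print("accuracy with pseudo-labels:", clf.score(X, y))
```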

Development of Low-Cost Vision-based Eye Tracking Algorithm for Information Augmented Interactive System

  • Park, Seo-Jeon;Kim, Byung-Gyu
    • Journal of Multimedia Information System
    • /
    • Vol. 7, No. 1
    • /
    • pp.11-16
    • /
    • 2020
  • Deep learning has become the most important technology in the field of artificial intelligence and machine learning, with performance that surpasses existing methods in various applications. In this paper, an interactive window service based on object recognition technology is proposed. The main goal is to use deep learning-based object recognition to replace existing eye tracking technology, which requires users to wear dedicated eye tracking devices, with an eye tracking method that uses only ordinary cameras to track the user's eyes. We design an interactive system based on an efficient eye detection and pupil tracking method that can follow the user's eye movement. To estimate the view direction of the user's eye, a reference (origin) coordinate is first initialized; the view direction is then estimated from the extracted pupil positions relative to this origin. We also propose a blink detection technique based on the eye aspect ratio (EAR). Using the extracted view direction and eye actions, we provide augmented information of interest for various service topics and situations without the complex and expensive hardware of existing eye-tracking systems. For verification, a user-guidance service is implemented as a prototype that uses a school map to present the location of a desired place or building.
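For reference, the eye aspect ratio used for blink detection is the ratio of the two vertical eye-landmark distances to the horizontal one; a minimal NumPy version is sketched below, assuming the common six-landmark (p1..p6) layout and an illustrative blink threshold.

```python
# Eye aspect ratio (EAR) blink check from six eye landmarks, following the common
# (p1..p6) layout: p1/p4 are the horizontal corners, p2/p3 and p6/p5 the upper and
# lower lid points. The threshold value is an assumption for illustration.
import numpy as np

def eye_aspect_ratio(pts):
    """pts: (6, 2) array of eye landmarks in image coordinates."""
    vertical = np.linalg.norm(pts[1] - pts[5]) + np.linalg.norm(pts[2] - pts[4])
    horizontal = np.linalg.norm(pts[0] - pts[3])
    return vertical / (2.0 * horizontal)

def is_blinking(pts, threshold=0.2):
    return eye_aspect_ratio(pts) < threshold

open_eye = np.array([[0, 5], [3, 8], [7, 8], [10, 5], [7, 2], [3, 2]], float)
print(eye_aspect_ratio(open_eye), is_blinking(open_eye))   # 0.6 False
```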

Sensorless Speed Control of Direct Current Motor by Neural Network (신경회로망을 이용한 직류전동기의 센서리스 속도제어)

  • 강성주;오세진;김종수
    • Journal of Advanced Marine Engineering and Technology
    • /
    • Vol. 28, No. 1
    • /
    • pp.90-97
    • /
    • 2004
  • A DC motor requires a rotor speed sensor for accurate speed control. Speed sensors such as resolvers and encoders are used as speed detectors, but they increase the cost and size of the motor and restrict industrial drive applications. Consequently, many papers have reported on the sensorless operation of DC motors(3)-(5). This paper presents a new sensorless strategy using neural networks(6)-(8). The neural network structure has three layers: an input layer, a hidden layer, and an output layer. The optimal network structure was found by trial and error, and a 4-16-1 network gave suitable results for the instantaneous rotor speed. The learning method is also very important; supervised learning methods(8) are typically used to train the network on the presented input/output patterns, and the back-propagation technique adjusts the network weights during training. The rotor speed is obtained from the trained weights and the four inputs to the network. The experimental results were satisfactory both in their independence from machine parameters and in their insensitivity to the load condition.
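A compact NumPy sketch of back-propagation training for the 4-16-1 topology reported above is shown below; the sigmoid hidden activation, learning rate, and synthetic input/target data are assumptions, since the actual motor signals are not reproduced here.

```python
# Back-propagation training of a 4-16-1 network (the topology reported in the paper).
# Inputs/targets here are synthetic stand-ins; the activation choice is an assumption.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))                     # four motor-signal inputs per sample
y = (X @ np.array([0.5, -0.2, 0.1, 0.3]))[:, None]    # stand-in rotor-speed target

W1, b1 = rng.standard_normal((4, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.1, np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                 # hidden layer (16 units)
    pred = h @ W2 + b2                       # linear output: estimated speed
    err = pred - y
    # Back-propagate the squared-error gradient through both layers.
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final MSE:", float((err ** 2).mean()))
```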

Binary Classification of Hypertensive Retinopathy Using Deep Dense CNN Learning

  • Ibrahim, Mostafa E.A.;Abbas, Qaisar
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 12
    • /
    • pp.98-106
    • /
    • 2022
  • Hypertensive retinopathy (HR) is a condition of the retina associated with high blood pressure; the incidence of HR is directly correlated with the severity and persistence of hypertension. To avoid blindness, it is essential to recognize and assess HR as early as possible. Few computer-aided systems that can diagnose HR are currently available, and those systems focus on gathering characteristics from a variety of retinopathy-related HR lesions and categorizing them with conventional machine-learning algorithms. Consequently, they require significant and complicated image processing and, as seen in recent similar systems, their classification accuracy is lacking. To address these issues, a new CAD HR-diagnosis system employing the advanced Deep Dense CNN Learning (DD-CNN) technology is developed to identify HR early. The HR-diagnosis system utilizes a convolutional neural network that was previously trained as a feature extractor. A statistical investigation of more than 1,400 retinography images is undertaken to assess the accuracy of the implemented system using several performance metrics: specificity (SP), sensitivity (SE), area under the receiver operating curve (AUC), and accuracy (ACC). On average, we achieved an SE of 97%, ACC of 98%, SP of 99%, and AUC of 0.98. These results indicate that the proposed DD-CNN classifier can be used to diagnose hypertensive retinopathy.
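The frozen pretrained-backbone-plus-binary-head pattern described above can be sketched as follows in PyTorch; the tiny stand-in backbone, layer sizes, and random batch are illustrative assumptions and do not reproduce the actual DD-CNN architecture.

```python
# Sketch of the frozen-feature-extractor + binary-head pattern described for the
# HR classifier. The backbone here is a tiny stand-in CNN, not the actual DD-CNN.
import torch
import torch.nn as nn

backbone = nn.Sequential(                    # pretend this was trained elsewhere
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32) feature vector
)
for p in backbone.parameters():              # freeze: use it only as a feature extractor
    p.requires_grad = False

head = nn.Linear(32, 1)                      # binary HR / non-HR logit
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 3, 224, 224)         # stand-in retinography batch
labels = torch.randint(0, 2, (8,)).float()
logits = head(backbone(images)).squeeze(-1)
loss = loss_fn(logits, labels)
loss.backward()
opt.step()
```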

AN OPTIMAL BOOSTING ALGORITHM BASED ON NONLINEAR CONJUGATE GRADIENT METHOD

  • CHOI, JOOYEON;JEONG, BORA;PARK, YESOM;SEO, JIWON;MIN, CHOHONG
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • Vol. 22, No. 1
    • /
    • pp.1-13
    • /
    • 2018
  • Boosting, one of the most successful algorithms for supervised learning, searches for the most accurate weighted sum of weak classifiers. The search corresponds to a convex program with non-negativity and affine constraints. In this article, we propose a novel conjugate gradient algorithm with a modified Polak-Ribière-Polyak conjugate direction. The convergence of the algorithm is proved, and we report its successful application to boosting.
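For intuition, the basic (unmodified) Polak-Ribière-Polyak conjugate gradient update is sketched below on an unconstrained quadratic test problem in NumPy; the paper's modified direction and the boosting constraints (non-negativity, affine) are not reproduced.

```python
# Unconstrained Polak-Ribiere-Polyak conjugate gradient on a quadratic test problem.
# The paper's modified direction and the boosting constraints are not reproduced;
# this only illustrates the basic PRP coefficient beta_k and direction update.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20)); A = A @ A.T + 20 * np.eye(20)   # SPD matrix
b = rng.standard_normal(20)
grad = lambda x: A @ x - b                  # gradient of 0.5 x'Ax - b'x

x = np.zeros(20)
g = grad(x)
d = -g
for _ in range(50):
    alpha = -(g @ d) / (d @ A @ d)          # exact line search for a quadratic
    x = x + alpha * d
    g_new = grad(x)
    beta = g_new @ (g_new - g) / (g @ g)    # Polak-Ribiere-Polyak coefficient
    d = -g_new + max(beta, 0.0) * d         # PR+ restart safeguard
    g = g_new
    if np.linalg.norm(g) < 1e-10:
        break

print("residual norm:", np.linalg.norm(grad(x)))
```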

A Hierarchical deep model for food classification from photographs

  • Yang, Heekyung;Kang, Sungyong;Park, Chanung;Lee, JeongWook;Yu, Kyungmin;Min, Kyungha
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 4
    • /
    • pp.1704-1720
    • /
    • 2020
  • Recognizing food from photographs has many applications in machine learning, computer vision, dietetics, etc. Recent progress in deep learning techniques has greatly accelerated food recognition. We build a hierarchical structure composed of deep CNNs to recognize and classify food from photographs, along with a dataset of 18 classes of Korean food that are further grouped into 4 major classes. In the first step, our hierarchical recognizer classifies foods into the four major classes; in the second step, each food is classified into its exact class. We employ the DenseNet structure as the baseline of our recognizer. The hierarchical structure provides higher accuracy and F1 score than the single-structured recognizer.
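A minimal two-head PyTorch sketch of the coarse-then-fine hierarchy (4 major classes, then 18 exact classes) is given below; the shared placeholder backbone and the way the fine head is conditioned on the coarse prediction are assumptions, not the paper's DenseNet-based design.

```python
# Two-head sketch of the coarse-then-fine hierarchy: 4 major food classes first,
# then 18 exact classes. The shared backbone here is a placeholder, not DenseNet.
import torch
import torch.nn as nn

class HierarchicalFoodNet(nn.Module):
    def __init__(self, feat_dim=64, n_major=4, n_fine=18):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a DenseNet feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        self.major_head = nn.Linear(feat_dim, n_major)          # step 1: 4 major classes
        self.fine_head = nn.Linear(feat_dim + n_major, n_fine)  # step 2: 18 exact classes

    def forward(self, x):
        feat = self.backbone(x)
        major_logits = self.major_head(feat)
        # Condition the fine prediction on the coarse prediction (one simple option).
        fine_logits = self.fine_head(torch.cat([feat, major_logits.softmax(-1)], dim=1))
        return major_logits, fine_logits

model = HierarchicalFoodNet()
major, fine = model(torch.randn(2, 3, 128, 128))
print(major.shape, fine.shape)   # torch.Size([2, 4]) torch.Size([2, 18])
```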