• Title/Summary/Keyword: Deep Learning Models


Fault diagnosis of linear transfer robot using XAI

  • Taekyung Kim;Arum Park
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.3
    • /
    • pp.121-138
    • /
    • 2024
  • Artificial intelligence is crucial to manufacturing productivity. Understanding the causes of production disruptions, especially in linear feed robot systems, is essential for efficient operations. These mechanical tools, essential for linear movements within systems, are prone to damage and degradation due to repetitive motions, especially in the LM guide. We examine how explainable artificial intelligence (XAI) can diagnose linear rail clearance and ball screw clearance anomalies in a wafer transfer linear robot. XAI helps diagnose problems and explain anomalies, enriching management and operational strategies. By interpreting the reasons for anomaly detection through visualizations such as Class Activation Maps (CAMs) generated with techniques like Grad-CAM, FG-CAM, and FFT-CAM, and by comparing a 1D-CNN with a 2D-CNN, we illustrate the potential of XAI to enhance diagnostic accuracy. The use of datasets from accelerometer and torque sensors in our experiments validates the high accuracy of the proposed method in binary and ternary classifications. This study exemplifies how XAI can elucidate deep learning models trained on industrial signals, offering a practical approach to understanding and applying AI in maintaining the integrity of critical components such as LM guides in linear feed robots.
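The following is a minimal sketch of the Grad-CAM idea applied to a 1D-CNN over a vibration-like signal, included only to illustrate how a class activation map over time can be derived; the architecture, window length, and synthetic input are assumptions, not the implementation used in the paper.

```python
# Minimal Grad-CAM sketch for a 1D-CNN on a vibration-like signal (illustrative only).
import torch
import torch.nn as nn

class Simple1DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)                  # (B, 32, L') feature maps
        logits = self.classifier(fmap.mean(-1))  # global average pooling + linear head
        return logits, fmap

model = Simple1DCNN()
signal = torch.randn(1, 1, 1024)                          # one accelerometer window (toy)
logits, fmap = model(signal)
score = logits[0, logits.argmax()]                        # class score to explain
grads = torch.autograd.grad(score, fmap)[0]               # d(score)/d(feature map)
weights = grads.mean(dim=-1, keepdim=True)                # per-channel importance
cam = torch.relu((weights * fmap).sum(dim=1)).squeeze(0)  # 1D class activation map
cam = cam / (cam.max() + 1e-8)                            # normalize; upsample to overlay on the raw signal
print(cam.shape)
```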

Deflection aware smart structures by artificial intelligence algorithm

  • Qingyun Gao;Yun Wang;Zhimin Zhou;Khalid A. Alnowibet
    • Smart Structures and Systems
    • /
    • v.33 no.5
    • /
    • pp.333-347
    • /
    • 2024
  • There has been increasing interest in the construction of smart buildings that can actively monitor and react to their surroundings. The capacity of these intelligent structures to precisely predict and respond to deflection is a crucial feature that guarantees both their structural soundness and efficiency. Conventional techniques for determining deflection often depend on intricate mathematical models and computational simulations, which may be time- and resource-consuming. In response to these difficulties, artificial intelligence (AI) algorithms have become a potent tool for anticipating and controlling deflection in intelligent structures. The term "deflection-aware smart structures" in this sense refers to constructions equipped with AI algorithms that continually monitor and analyze deflection data in order to proactively detect problems and take appropriate action. These structures anticipate deflection across a range of operating circumstances and environmental factors by using cutting-edge AI approaches including deep learning, reinforcement learning, and neural networks. By using data from embedded sensors and actuators, AI systems can identify intricate patterns and relationships and predict real-time deflection with high accuracy. Intelligent buildings thus have the potential to self-correct in order to reduce deflection and maximize performance. In conclusion, the development of deflection-aware smart structures is a major stride forward for structural engineering and has enormous potential to enhance the performance, safety, and dependability of designed systems in a variety of industries.
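As a toy illustration of the data-driven deflection prediction described above, the sketch below fits a small neural network that maps simulated load and span readings to a beam-like deflection value; the features, the synthetic data, and the simplified deflection relation are all assumptions, not the study's setup.

```python
# Toy sketch: a small neural network regressing deflection from simulated sensor features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
load = rng.uniform(1.0, 10.0, size=(2000, 1))    # applied load (arbitrary units)
span = rng.uniform(2.0, 8.0, size=(2000, 1))     # span length (arbitrary units)
X = np.hstack([load, span])
y = (load * span ** 3 / 48.0).ravel() + rng.normal(0.0, 0.05, 2000)  # simplified beam-like deflection + noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```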

A Study on Leakage Detection Technique Using Transfer Learning-Based Feature Fusion (전이학습 기반 특징융합을 이용한 누출판별 기법 연구)

  • YuJin Han;Tae-Jin Park;Jonghyuk Lee;Ji-Hoon Bae
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.2
    • /
    • pp.41-47
    • /
    • 2024
  • When models trained in the time and frequency domains differed in performance, we observed that even an ensemble of them was compromised by the imbalance between the individual models. Therefore, this paper proposes a leakage detection technique that enhances the accuracy of pipeline leakage detection through a step-wise learning approach that extracts features from both the time and frequency domains and integrates them. The method involves a two-step learning process. In Stage 1, independent models are trained in the time and frequency domains to effectively extract crucial features from the data in each domain. In Stage 2, the pre-trained models are reused with their classifiers removed; the features from both domains are fused, and a new classifier is added for retraining. The proposed transfer learning-based feature fusion technique thus performs model training by integrating features extracted from the time and frequency domains. This integration exploits the complementary nature of features from the two domains, allowing the model to leverage diverse information. As a result, it achieved a high accuracy of 99.88%, demonstrating outstanding performance in pipeline leakage detection.
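A minimal sketch of the two-stage idea described above follows: one branch per domain is pre-trained with its own temporary classifier, the classifiers are then removed, the branch features are concatenated, and a new classifier is trained on the fused features. The input lengths, layer sizes, and synthetic stand-in data are assumptions, not the paper's configuration.

```python
# Two-stage transfer-learning feature fusion, sketched with Keras (illustrative only).
import numpy as np
from tensorflow.keras import layers, Model

def build_branch(input_len, name):
    inp = layers.Input(shape=(input_len, 1), name=f"{name}_input")
    x = layers.Conv1D(16, 5, activation="relu")(inp)
    x = layers.GlobalAveragePooling1D(name=f"{name}_features")(x)
    out = layers.Dense(2, activation="softmax")(x)   # temporary classifier for Stage 1
    return Model(inp, out)

# Stage 1: train each domain's model independently (synthetic stand-in data).
x_time = np.random.randn(256, 128, 1); x_freq = np.random.randn(256, 64, 1)
y = np.random.randint(0, 2, 256)
time_model, freq_model = build_branch(128, "time"), build_branch(64, "freq")
for m, x in [(time_model, x_time), (freq_model, x_freq)]:
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    m.fit(x, y, epochs=1, verbose=0)

# Stage 2: drop the classifiers, fuse the domain features, train a new head.
time_feat = Model(time_model.input, time_model.get_layer("time_features").output)
freq_feat = Model(freq_model.input, freq_model.get_layer("freq_features").output)
fused = layers.Concatenate()([time_feat.output, freq_feat.output])
new_head = layers.Dense(2, activation="softmax")(fused)
fusion_model = Model([time_feat.input, freq_feat.input], new_head)
fusion_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
fusion_model.fit([x_time, x_freq], y, epochs=1, verbose=0)
```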

A Study on the traffic flow prediction through Catboost algorithm (Catboost 알고리즘을 통한 교통흐름 예측에 관한 연구)

  • Cheon, Min Jong;Choi, Hye Jin;Park, Ji Woong;Choi, HaYoung;Lee, Dong Hee;Lee, Ook
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.58-64
    • /
    • 2021
  • As the number of registered vehicles increases, traffic congestion will worsen, which may act as an inhibitory factor for urban social and economic development. Various AI techniques have been applied to prevent traffic congestion through accurate traffic flow prediction. This paper uses data from a VDS (Vehicle Detection System) as input variables. This study predicted traffic flow in five levels (free flow, somewhat delayed, delayed, somewhat congested, and congested), rather than in two levels (free flow and congested). The CatBoost model, a machine-learning algorithm, was used in this study. The model predicts traffic flow in five levels, and its accuracy is compared and analyzed against other algorithms. In addition, a preprocessed model that went through RandomizedSearchCV and One-Hot Encoding was compared with the naive one. As a result, the CatBoost model without any hyper-parameter tuning showed the highest accuracy of 93%. Overall, the CatBoost model analyzes and predicts large amounts of categorical traffic data better than other machine learning and deep learning models, and its default parameters are already well optimized.
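For context, the sketch below trains a CatBoostClassifier with default hyper-parameters on a synthetic five-level congestion dataset; the feature names, the toy labeling rule, and the data are placeholders rather than the VDS data used in the study, and a tuned variant would simply wrap the model in scikit-learn's RandomizedSearchCV as the abstract describes.

```python
# CatBoost on a synthetic 5-level traffic-flow dataset (illustrative only).
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "hour": rng.integers(0, 24, n),                     # hour of day
    "day_of_week": rng.integers(0, 7, n).astype(str),   # categorical feature
    "volume": rng.uniform(0, 2000, n),                  # VDS-like vehicle count
})
y = (X["volume"] // 400).astype(int).clip(0, 4)         # toy 5-level congestion label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = CatBoostClassifier(cat_features=["day_of_week"], verbose=0, random_seed=0)
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te).ravel()))
```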

A Study on the Win-Loss Prediction Analysis of Korean Professional Baseball by Artificial Intelligence Model (인공지능 모델에 따른 한국 프로야구의 승패 예측 분석에 관한 연구)

  • Kim, Tae-Hun;Lim, Seong-Won;Koh, Jin-Gwang;Lee, Jae-Hak
    • The Journal of Bigdata
    • /
    • v.5 no.2
    • /
    • pp.77-84
    • /
    • 2020
  • In this study, we conducted a win-loss prediction analysis of Korean professional baseball using artificial intelligence models. Based on the models, we predicted the winner of each game as well as each team's final rank in the league. Additionally, we developed a website to aid viewers' understanding. For each game's first, third, and fifth inning, we analyzed the models to select the one with the highest accuracy and smallest errors, and generated the rankings based on the results. We used data predicted from May 5, the season's opening day, through August 30, 2020 to generate the rankings. For games in which the Kia Tigers did not play, however, we used the actual game results. KNN and AdaBoost were selected as the most optimized machine learning models. As a result, we observe a decreasing trend in the ranking error of the predicted results as the season progresses. The deep learning model recorded 89% accuracy and shows the same decreasing trend in ranking error as the machine learning models. We estimate that this study's results apply to future KBO predictions as well as other fields. We expect broadcasting enhancements from posting the predicted winning percentage per inning generated by the AI algorithm, which should bring new interest to KBO fans. Furthermore, the predictions generated at each inning would provide insights that teams can use to analyze data and develop successful strategies.
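As a rough illustration of the per-inning model comparison, the sketch below cross-validates KNN and AdaBoost classifiers on a synthetic stand-in for in-game features; the features, labels, and data are invented for illustration and do not reflect the study's KBO dataset.

```python
# Comparing KNN and AdaBoost on synthetic per-inning features (illustrative only).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.integers(-5, 6, n),     # run differential after the inning
    rng.integers(0, 10, n),     # hits so far
    rng.integers(0, 4, n),      # errors so far
])
y = (X[:, 0] + rng.normal(0, 1.5, n) > 0).astype(int)   # toy win/loss label

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=7)),
                  ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(name, "accuracy: %.3f" % scores.mean())
```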

Artificial neural network algorithm comparison for exchange rate prediction

  • Shin, Noo Ri;Yun, Dai Yeol;Hwang, Chi-gon
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.3
    • /
    • pp.125-130
    • /
    • 2020
  • At the end of 1997, the volatility of the exchange rate intensified as the nation's exchange rate system was converted into a free-floating system. As a result, managing the exchange rate has become a very important task, and the need for forecasting the exchange rate is growing. Exchange rate prediction models based on existing statistical techniques cannot capture nonlinear patterns in the time series variable, and it is difficult to analyze a series exhibiting volatility clustering. Moreover, as the number of variables to be analyzed increases, the number of parameters to be estimated increases, and it is not easy to interpret the meaning of the estimated coefficients. Accordingly, an exchange rate prediction model using artificial neural networks, rather than statistical techniques, is presented. Using a DNN, which is the basis of deep learning among artificial neural networks, and an LSTM, a recurrent neural network model, we varied the number of hidden layers, the number of neurons, and the activation function of each model to find the optimal exchange rate prediction model. The study found that, although there were differences between models, LSTM models performed better than DNN models and performed best when the activation function was Tanh.
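The comparison can be sketched roughly as below, fitting a small DNN and an LSTM (with the Tanh activation the study found best) on sliding windows of a synthetic rate series; the window size, layer widths, training split, and data are assumptions rather than the configurations searched in the paper.

```python
# DNN vs. LSTM on sliding windows of a synthetic exchange-rate-like series (illustrative only).
import numpy as np
from tensorflow.keras import layers, Sequential

series = np.cumsum(np.random.randn(1200)) + 1100.0      # synthetic rate series
window = 10
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

dnn = Sequential([layers.Input((window,)),
                  layers.Dense(32, activation="relu"),
                  layers.Dense(1)])
lstm = Sequential([layers.Input((window, 1)),
                   layers.LSTM(32, activation="tanh"),   # Tanh activation, as in the best model reported
                   layers.Dense(1)])

split = 1000
for name, model, x in [("DNN", dnn, X), ("LSTM", lstm, X[..., None])]:
    model.compile(optimizer="adam", loss="mse")
    model.fit(x[:split], y[:split], epochs=5, verbose=0)
    print(name, "test MSE:", model.evaluate(x[split:], y[split:], verbose=0))
```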

Current Status and Future Direction of Artificial Intelligence in Healthcare and Medical Education (의료분야에서 인공지능 현황 및 의학교육의 방향)

  • Jung, Jin Sup
    • Korean Medical Education Review
    • /
    • v.22 no.2
    • /
    • pp.99-114
    • /
    • 2020
  • The rapid development of artificial intelligence (AI), including deep learning, has led to technologies that may assist in the diagnosis and treatment of diseases, prediction of disease risk and prognosis, health index monitoring, drug development, and healthcare management and administration. However, in order for AI technology to improve the quality of medical care, technical problems and the efficacy of algorithms should be evaluated in real clinical environments rather than in the environments in which the algorithms were developed. Further consideration should be given to whether these models can improve the quality of medical care and the clinical outcomes of patients. In addition, the development of regulatory systems to secure the safety of AI medical technology, the ethical and legal issues related to the proliferation of AI technology, and the impact on the relationship with patients also need to be addressed. Systematic training of healthcare personnel is needed to enable adaptation to the rapid changes in the healthcare environment. An overall review and revision of the undergraduate medical curriculum is required to foster the extraction of significant information from rapidly expanding medical knowledge, data science literacy, empathy and compassion for patients, and communication among various healthcare providers. Specialized postgraduate AI education programs for each medical specialty are needed to develop the proper utilization of AI models in clinical practice.

Sketch Recognition Using LSTM with Attention Mechanism and Minimum Cost Flow Algorithm

  • Nguyen-Xuan, Bac;Lee, Guee-Sang
    • International Journal of Contents
    • /
    • v.15 no.4
    • /
    • pp.8-15
    • /
    • 2019
  • This paper presents a solution to the 'Quick, Draw! Doodle Recognition Challenge' hosted by Google. Doodles are drawings composed of concrete representational content or abstract lines creatively expressed by individuals. In this challenge, a doodle is presented as a sequence of sketches. At the sketch level, to learn the pattern of strokes representing a doodle, we propose a sequential model stacked with multiple convolution layers and Long Short-Term Memory (LSTM) cells following the attention mechanism [15]. At the image level, we use multiple models pre-trained on ImageNet to recognize the doodle. Finally, an ensemble and a post-processing method using the minimum cost flow algorithm are introduced to combine the multiple models and achieve better results. In this challenge, our solution garnered 11th place among 1,316 teams. Our performance was 0.95037 MAP@3, only 0.4% lower than the winner's, demonstrating that our method is very competitive. The source code for this competition is published at: https://github.com/ngxbac/Kaggle-QuickDraw.
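To make the stroke-level model concrete, here is a rough Keras sketch of 1D convolutions followed by an LSTM and a simple attention-style pooling; the point count, channel sizes, and 340-class output head are assumptions made for illustration, and the authors' actual code is at the GitHub link above.

```python
# Conv1D + LSTM + attention pooling over a stroke-point sequence (illustrative only).
from tensorflow.keras import layers, Model

n_points, n_feats, n_classes = 100, 3, 340   # (x, y, pen-state) per point; class count assumed

inp = layers.Input(shape=(n_points, n_feats))
x = layers.Conv1D(48, 5, padding="same", activation="relu")(inp)
x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)
x = layers.LSTM(128, return_sequences=True)(x)       # per-timestep hidden states

# Attention pooling: learn a weight per timestep, then take the weighted sum of states.
scores = layers.Dense(1, activation="tanh")(x)        # (B, T, 1) unnormalized scores
weights = layers.Softmax(axis=1)(scores)              # attention weights over timesteps
context = layers.Dot(axes=1)([weights, x])            # (B, 1, 128) weighted sum
context = layers.Flatten()(context)

out = layers.Dense(n_classes, activation="softmax")(context)
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["acc"])
model.summary()
```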

Comparative Analysis of Deep Learning Researches for Compressed Video Quality Improvement (압축 영상 화질 개선을 위한 딥 러닝 연구에 대한 분석)

  • Lee, Young-Woon;Kim, Byung-Gyu
    • Journal of Broadcast Engineering
    • /
    • v.24 no.3
    • /
    • pp.420-429
    • /
    • 2019
  • Recently, research using Convolutional Neural Network (CNN)-based approaches has been actively conducted to improve the reduced quality of video compressed with block-based video coding standards such as H.265/HEVC. This paper aims to summarize and analyze the network models used in these quality enhancement studies. First, the detailed components of CNNs for quality enhancement are reviewed, and prior studies in the image domain are summarized. Next, related studies are summarized in three aspects: network structure, dataset, and training methods; representative model implementations and experimental results are then presented for performance comparison.
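As an illustration of the common building block in this line of work, the following is a toy residual CNN that predicts an enhancement residual for a decoded frame patch and adds it back to the input; the depth, channel width, and single-channel input are assumptions, not a specific model from the surveyed papers.

```python
# Toy residual CNN for compressed-frame quality enhancement (illustrative only).
import torch
import torch.nn as nn

class QualityEnhancementCNN(nn.Module):
    def __init__(self, channels=64, n_layers=5):
        super().__init__()
        body = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(n_layers - 2):
            body += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        body += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*body)

    def forward(self, decoded_frame):
        # Predict a residual and add it to the compressed input (global skip connection).
        return decoded_frame + self.body(decoded_frame)

frame = torch.rand(1, 1, 64, 64)          # luma patch from a decoded frame (toy)
enhanced = QualityEnhancementCNN()(frame)
print(enhanced.shape)                      # torch.Size([1, 1, 64, 64])
```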

Enhancing Wind Speed and Wind Power Forecasting Using Shape-Wise Feature Engineering: A Novel Approach for Improved Accuracy and Robustness

  • Mulomba Mukendi Christian;Yun Seon Kim;Hyebong Choi;Jaeyoung Lee;SongHee You
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.4
    • /
    • pp.393-405
    • /
    • 2023
  • Accurate prediction of wind speed and power is vital for enhancing the efficiency of wind energy systems. Numerous solutions have been implemented to date, demonstrating their potential to improve forecasting. Among these, deep learning is perceived as a revolutionary approach in the field. However, despite their effectiveness, the noise present in the collected data remains a significant challenge. This noise can diminish the performance of these algorithms and lead to inaccurate predictions. In response, this study explores a novel feature engineering approach that alters the shape of the data input to both Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) and autoregressive models across various forecasting horizons. The results reveal substantial improvements in model resilience against the noise introduced by step increases in the data. The approach achieved 83% accuracy in predicting unseen data up to 24 steps ahead. Furthermore, the method consistently provides high accuracy for short-, mid-, and long-term forecasts, outperforming the individual models. These findings pave the way for further research on noise reduction strategies at different forecasting horizons through shape-wise feature engineering.
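A rough sketch of a CNN-LSTM multi-step forecaster of the kind discussed is shown below; the exact shape-wise reshaping proposed in the paper is not reproduced, so the look-back window, layer sizes, direct 24-step output head, and synthetic series are all assumptions.

```python
# CNN-LSTM forecasting 24 steps ahead from a synthetic wind-speed-like series (illustrative only).
import numpy as np
from tensorflow.keras import layers, Sequential

window, horizon = 48, 24                              # look-back window and forecast horizon
series = np.abs(np.cumsum(np.random.randn(3000)))     # synthetic series
X = np.array([series[i:i + window] for i in range(len(series) - window - horizon)])
y = np.array([series[i + window:i + window + horizon]
              for i in range(len(series) - window - horizon)])

model = Sequential([
    layers.Input((window, 1)),
    layers.Conv1D(32, 3, activation="relu"),          # local patterns within the window
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                  # temporal dependencies across the window
    layers.Dense(horizon),                            # direct multi-step (24-step) output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[..., None], y, epochs=3, verbose=0)
print(model.predict(X[-1:][..., None]).shape)         # (1, 24) forecast
```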