• Title/Summary/Keyword: deep machine learning


Battery-loaded power management algorithm of electric propulsion ship based on power load and state learning model (전력 부하와 학습모델 기반의 전기추진선박의 배터리 연동 전력관리 알고리즘)

  • Oh, Ji-hyun;Oh, Jin-seok
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.9 / pp.1202-1208 / 2020
  • In line with the era of the 4th Industrial Revolution, the ship sector needs to prepare for the future by integrating AI, and the field of power management must likewise respond to the emergence of autonomous ships. In this study, we propose a battery-linked electric propulsion system (BLEPS) algorithm based on a deep neural network (DNN). For the experiment, we learned the ship's power-consumption pattern for each operation mode from ship data through LabVIEW and derived the battery state through Python to check the flexibility of generator-battery interlocking. As a result, low-load operation of the generator was reduced through charging and discharging of the battery, and economic efficiency and reliability were confirmed by a 1% reduction in LNG fuel consumption.
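
A minimal sketch of the kind of DNN power-load model the abstract describes, assuming placeholder features and data (the actual ship signals, operation-mode encoding, and network size are not given in the abstract):

```python
# Minimal sketch (not the authors' code): a small Keras DNN that learns a
# per-operation-mode power-consumption pattern. Feature layout and sizes
# are hypothetical placeholders.
import numpy as np
from tensorflow import keras

# X: [one-hot operation mode, ship speed, ...]; y: electrical load in kW
X = np.random.rand(1000, 6).astype("float32")   # placeholder training data
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(6,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),                      # predicted power demand (kW)
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

# A power-management loop could compare the predicted demand with generator
# capacity and decide whether to charge or discharge the battery.
predicted_kw = model.predict(X[:1])
```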

Accuracy of Phishing Websites Detection Algorithms by Using Three Ranking Techniques

  • Mohammed, Badiea Abdulkarem;Al-Mekhlafi, Zeyad Ghaleb
    • International Journal of Computer Science & Network Security / v.22 no.2 / pp.272-282 / 2022
  • Between 2014 and 2019, the US lost more than 2.1 billion USD to phishing attacks, according to the FBI's Internet Crime Complaint Center, and COVID-19 scam complaints totaled more than 1,200; phishing attacks are responsible for these damaging effects. Many phishing-website (PW) detection methods appear in the literature. Earlier approaches maintained a manually updated centralized blacklist, but newly created phishing sites cannot be detected that way. Several recent studies have applied supervised machine learning (SML) algorithms and schemes, typically based on features extracted from URLs, to the PW detection problem. These studies show that different classification algorithms are effective on different data sets, but no widely accepted classifier has emerged for the phishing-site detection problem. This study aims to identify the SML features and schemes that work best across all publicly available phishing data sets. Eight widely used classification algorithms from the Scikit-learn library were configured and assessed on three public phishing datasets, and their accuracies were compared for statistically significant differences using the Welch t-test. Ensemble and neural-network classifiers outperformed the classical algorithms in this study; the results, reported as classification accuracy and classifier ranking in Tables 4 and 8, show that several classifiers achieved more than 99.0 percent accuracy even on severely unbalanced datasets. Finally, the results show that the approach can be adapted to new data and outperforms conventional techniques with good precision.
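
A sketch of the evaluation scheme the abstract outlines (cross-validated accuracy per classifier followed by a Welch t-test), assuming synthetic placeholder data and a representative subset of classifiers rather than the paper's exact eight:

```python
# Sketch: compare classifiers by cross-validated accuracy, then test two
# score distributions for a statistically significant difference with the
# Welch t-test (unequal variances). Data and classifier choice are placeholders.
from scipy.stats import ttest_ind
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9, 0.1])

classifiers = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=200),
    "MLP": MLPClassifier(max_iter=500),
}
scores = {name: cross_val_score(clf, X, y, cv=10, scoring="accuracy")
          for name, clf in classifiers.items()}

for name, s in scores.items():
    print(f"{name}: mean accuracy {s.mean():.3f}")

# Welch t-test between two classifiers' accuracy score distributions
t_stat, p_value = ttest_ind(scores["RandomForest"], scores["MLP"], equal_var=False)
print(f"Welch t-test: t={t_stat:.2f}, p={p_value:.3f}")
```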

A Study on the Hyper-parameter Optimization of Bitcoin Price Prediction LSTM Model (비트코인 가격 예측을 위한 LSTM 모델의 Hyper-parameter 최적화 연구)

  • Kim, Jun-Ho;Sung, Hanul
    • Journal of the Korea Convergence Society / v.13 no.4 / pp.17-24 / 2022
  • Bitcoin is a peer-to-peer cryptocurrency designed for electronic transactions that do not depend on governments or financial institutions. Since Bitcoin was first issued, a huge blockchain financial market has been created, and research on predicting Bitcoin price data using machine learning has increased accordingly. However, the inefficient hyper-parameter optimization process of such research hinders its progress. In this paper, we analyze and present a direction for hyper-parameter optimization through experiments that build every combination of three of the most representative hyper-parameters, the number of timesteps, the number of LSTM units, and the dropout ratio, and measure the predictive performance of each combination in an LSTM-based Bitcoin price-prediction model.
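
A sketch of the exhaustive hyper-parameter grid the abstract describes, assuming placeholder price data and illustrative value ranges (the paper's actual search space is not listed in the abstract):

```python
# Sketch of the grid over timesteps, LSTM units, and dropout ratio; the
# value ranges and the price series below are placeholders.
import itertools
import numpy as np
from tensorflow import keras

prices = np.random.rand(500).astype("float32")   # stand-in for Bitcoin prices

def make_windows(series, timesteps):
    X = np.stack([series[i:i + timesteps] for i in range(len(series) - timesteps)])
    y = series[timesteps:]
    return X[..., None], y                       # (N, timesteps, 1), (N,)

grid = itertools.product([10, 30, 60],           # timesteps
                         [32, 64, 128],          # LSTM units
                         [0.0, 0.2, 0.5])        # dropout ratio
for timesteps, units, dropout in grid:
    X, y = make_windows(prices, timesteps)
    model = keras.Sequential([
        keras.layers.LSTM(units, input_shape=(timesteps, 1)),
        keras.layers.Dropout(dropout),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    hist = model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
    print(timesteps, units, dropout, hist.history["val_loss"][-1])
```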

A Study on Synthetic Flight Vehicle Trajectory Data Generation Using Time-series Generative Adversarial Network and Its Application to Trajectory Prediction of Flight Vehicles (시계열 생성적 적대 신경망을 이용한 비행체 궤적 합성 데이터 생성 및 비행체 궤적 예측에서의 활용에 관한 연구)

  • Park, In Hee;Lee, Chang Jin;Jung, Chanho
    • Journal of IKEEE / v.25 no.4 / pp.766-769 / 2021
  • In order to perform tasks such as design, control, optimization, and prediction of flight vehicle trajectories with machine learning techniques, including deep learning, a certain amount of trajectory data is required. However, for various reasons it can be difficult to secure a sufficient amount of flight vehicle trajectory data. In such cases, synthetic data generation can be one way to make machine learning possible. In this paper, to explore this possibility, we generated and evaluated synthetic flight vehicle trajectory data using a time-series generative adversarial network. In addition, various ablation studies (comparative experiments) were performed to explore whether synthetic data can be used in the flight vehicle trajectory prediction task. The experimental results presented in this paper are expected to be of practical help to researchers studying synthetic trajectory data generation and its use in flight-trajectory work.
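
A conceptual sketch of an adversarial generator/discriminator pair over fixed-length trajectory windows, assuming toy data; this is a plain GAN for illustration, not the time-series GAN architecture used in the paper (which would add components such as embedding/recovery networks and a supervised loss):

```python
# Illustrative only: vanilla GAN on fixed-length trajectory sequences.
import numpy as np
import tensorflow as tf
from tensorflow import keras

SEQ_LEN, N_FEAT, LATENT = 50, 3, 16          # trajectory: 50 steps of (x, y, z)
real_trajs = np.random.rand(256, SEQ_LEN, N_FEAT).astype("float32")  # placeholder

generator = keras.Sequential([
    keras.layers.Input(shape=(SEQ_LEN, LATENT)),
    keras.layers.GRU(64, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(N_FEAT)),
])
discriminator = keras.Sequential([
    keras.layers.Input(shape=(SEQ_LEN, N_FEAT)),
    keras.layers.GRU(64),
    keras.layers.Dense(1),                   # real/fake logit
])
g_opt, d_opt = keras.optimizers.Adam(1e-4), keras.optimizers.Adam(1e-4)
bce = keras.losses.BinaryCrossentropy(from_logits=True)

for step in range(200):
    z = tf.random.normal((64, SEQ_LEN, LATENT))
    real = real_trajs[np.random.randint(0, len(real_trajs), 64)]
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = generator(z, training=True)
        d_real = discriminator(real, training=True)
        d_fake = discriminator(fake, training=True)
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```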

Prediction of Material's Formation Energy Using Crystal Graph Convolutional Neural Network (결정그래프 합성곱 인공신경망을 통한 소재의 생성 에너지 예측)

  • Lee, Hyun-Gi;Seo, Dong-Hwa
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.35 no.2 / pp.134-142 / 2022
  • As industry and technology advance, it is hard to find new materials that satisfy various requirements through conventional trial-and-error research methods. The Crystal Graph Convolutional Neural Network (CGCNN) is a neural network that uses a material's structural features as training data and predicts material properties (formation energy, band gap, etc.) much faster than first-principles calculation. This report introduces how to train a CGCNN model that predicts formation energy using an open database. With basic programming skills, readers should be able to construct a model suited to their own data and purpose. Developing machine learning models for materials science will help researchers who must explore a large chemical and structural space to discover materials efficiently.
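
A conceptual PyTorch sketch of a CGCNN-style gated graph convolution over atom and bond features, written from the published formulation rather than this report's code; the toy tensors and dimensions are placeholders:

```python
# Conceptual sketch of the CGCNN-style convolution: each atom's feature
# vector is updated by a gated sum over its neighbours' atom/bond features.
import torch
import torch.nn as nn

class CrystalGraphConv(nn.Module):
    def __init__(self, atom_dim, bond_dim):
        super().__init__()
        z_dim = 2 * atom_dim + bond_dim          # concatenation [v_i, v_j, u_ij]
        self.gate = nn.Linear(z_dim, atom_dim)   # sigmoid "filter" branch
        self.core = nn.Linear(z_dim, atom_dim)   # softplus "core" branch

    def forward(self, atom_feat, bond_feat, nbr_idx):
        # atom_feat: (N, atom_dim), bond_feat: (N, M, bond_dim),
        # nbr_idx: (N, M) indices of the M neighbours of each atom
        N, M = nbr_idx.shape
        v_i = atom_feat.unsqueeze(1).expand(N, M, -1)
        v_j = atom_feat[nbr_idx]                 # gather neighbour features
        z = torch.cat([v_i, v_j, bond_feat], dim=-1)
        msg = torch.sigmoid(self.gate(z)) * nn.functional.softplus(self.core(z))
        return atom_feat + msg.sum(dim=1)        # residual update per atom

# Toy usage: 4 atoms, 3 neighbours each
atoms = torch.rand(4, 64)
bonds = torch.rand(4, 3, 32)
nbrs = torch.randint(0, 4, (4, 3))
out = CrystalGraphConv(64, 32)(atoms, bonds, nbrs)
# Pooling the atom features and adding a small MLP head would give the
# predicted formation energy per crystal.
```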

Method of preventing Pressure Ulcer and EMR data preprocess

  • Kim, Dowon;Kim, Minkyu;Kim, Yoon;Han, Seon-Sook;Heo, Jungwon;Choi, Hyun-Soo
    • Journal of the Korea Society of Computer and Information / v.27 no.12 / pp.69-76 / 2022
  • This paper proposes a method of refining and processing time-series data from the Medical Information Mart for Intensive Care (MIMIC-IV) v2.0 database. The significance of the processing method was validated through a machine learning-based pressure-ulcer early-warning system trained on a dataset processed with the proposed method. The implemented system alerts medical staff 12 and 24 hours before a lesion occurs. In conjunction with the Electronic Medical Record (EMR) system, it informs medical staff of a patient's risk of developing a pressure ulcer in real time to support clinical decisions, and it further enables more efficient allocation of medical resources. Among several machine learning models, the GRU model showed the best performance, with an AUROC of 0.831 for the 12-hour and 0.822 for the 24-hour horizon.
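
A sketch of a GRU binary classifier of the kind the abstract reports, evaluated with AUROC; the window length, feature count, and labels are placeholders, not the paper's MIMIC-IV pipeline:

```python
# Sketch: GRU over hourly time-series windows, predicting whether a pressure
# ulcer will develop within the alert horizon. Data here is synthetic.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class GRUClassifier(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, features)
        _, h = self.gru(x)
        return self.head(h[-1]).squeeze(-1)    # logit: lesion within horizon?

X = torch.rand(512, 24, 20)                    # 24 hourly steps, 20 features
y = (torch.rand(512) > 0.9).float()            # sparse positive labels (placeholder)

model = GRUClassifier(20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    auroc = roc_auc_score(y.numpy(), torch.sigmoid(model(X)).numpy())
print(f"AUROC: {auroc:.3f}")
```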

Injection Process Yield Improvement Methodology Based on eXplainable Artificial Intelligence (XAI) Algorithm (XAI(eXplainable Artificial Intelligence) 알고리즘 기반 사출 공정 수율 개선 방법론)

  • Ji-Soo Hong;Yong-Min Hong;Seung-Yong Oh;Tae-Ho Kang;Hyeon-Jeong Lee;Sung-Woo Kang
    • Journal of Korean Society for Quality Management / v.51 no.1 / pp.55-65 / 2023
  • Purpose: The purpose of this study is to propose an optimization process that improves product yield using process data. Recently, research on low-cost, high-efficiency production in manufacturing processes using machine learning or deep learning has continued. This study therefore derives the major variables affecting product defects in the manufacturing process using an eXplainable Artificial Intelligence (XAI) method, and then presents the optimal range of those variables as a methodology for improving product yield. Methods: This study uses the injection molding machine AI dataset released on the Korea AI Manufacturing Platform (KAMP) organized by KAIST. Using the XAI-based SHAP method, the major variables affecting product defects are extracted from the process data; with XGBoost and LightGBM as the learning algorithms, five to six variables are extracted as the main process variables for the injection process. Subsequently, the optimal control range of each process variable is presented using the ICE method. Finally, the yield-improvement methodology is validated using test data. Results: In the injection process data, XGBoost achieved an improved defect rate of 0.21% and LightGBM of 0.29%, improvements of 0.79%p and 0.71%p, respectively, over the existing defect rate of 1.00%. Conclusion: This study is a case study. A research methodology was proposed for the injection process, and the improvement in product yield was confirmed through verification.
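
A sketch of the SHAP-plus-ICE workflow the abstract describes, on placeholder data (the KAMP injection-molding dataset and its column names are not reproduced here):

```python
# 1) rank process variables with SHAP, 2) inspect ICE curves of a top
# variable to choose its control range. Placeholder data throughout.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.inspection import PartialDependenceDisplay
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

# SHAP: rank variables by mean |SHAP value| and keep the top 5-6
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
top_features = np.argsort(importance)[::-1][:6]
print("Top process variables:", top_features)

# ICE: individual conditional expectation curves for the most important variable
PartialDependenceDisplay.from_estimator(
    model, X, features=[int(top_features[0])], kind="individual"
)
```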

Reinforcement Learning for Minimizing Tardiness and Set-Up Change in Parallel Machine Scheduling Problems for Profile Shops in Shipyard (조선소 병렬 기계 공정에서의 납기 지연 및 셋업 변경 최소화를 위한 강화학습 기반의 생산라인 투입순서 결정)

  • So-Hyun Nam;Young-In Cho;Jong Hun Woo
    • Journal of the Society of Naval Architects of Korea / v.60 no.3 / pp.202-211 / 2023
  • The profile shops in shipyards produce the section steel required for block production of ships. Due to the limits of shipyard production capacity, a considerable amount of this work is already outsourced, and the need to improve the productivity of the profile shops is growing because production volume is expected to increase with the recent boom in the shipbuilding industry. In this study, scheduling optimization was conducted for a parallel welding line of the profile process, with minimizing tardiness and the number of set-up changes as the objective functions for achieving productivity improvements. In particular, a dynamic scheduling method was applied to determine the job sequence while considering the variability of processing times. A Markov decision process model was proposed for the job-sequencing problem that accounts for the trade-off between the two objective functions, and deep reinforcement learning was used to learn the optimal scheduling policy. The developed algorithm was evaluated against priority rules (the SSPT, ATCS, MDD, and COVERT rules) on test scenarios constructed from sampled data. The proposed scheduling algorithm outperformed the priority rules in terms of set-up ratio, tardiness, and makespan.
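
A sketch of how the two objectives can be combined into one reward for reinforcement learning; the abstract does not specify the paper's state features, weights, or reward shape, so everything below is an illustrative assumption:

```python
# Illustrative reward for a job-sequencing MDP: penalize tardiness and
# set-up changes. The weights and the Job fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Job:
    due: float            # due date
    proc_time: float      # (stochastic) processing time
    setup_type: str       # welding set-up family

def step_reward(job: Job, finish_time: float, prev_setup: str,
                w_tardy: float = 1.0, w_setup: float = 0.5) -> float:
    """Negative cost combining tardiness and set-up changes."""
    tardiness = max(0.0, finish_time - job.due)
    setup_change = 1.0 if job.setup_type != prev_setup else 0.0
    return -(w_tardy * tardiness + w_setup * setup_change)

# At each decision epoch the agent picks which waiting job to release next;
# a deep RL agent (e.g. a DQN) maps the state (queue statistics, machine
# status, current set-up) to a value for each candidate job.
print(step_reward(Job(due=10, proc_time=4, setup_type="A"),
                  finish_time=13, prev_setup="B"))
```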

Demand Forecasting Model for Bike Relocation of Sharing Stations (공유자전거 따릉이 재배치를 위한 실시간 수요예측 모델 연구)

  • Yoosin Kim
    • Journal of Internet Computing and Services / v.24 no.5 / pp.107-120 / 2023
  • Ttareungyi, the public bicycle service of Seoul, was launched in October 2015 to reduce traffic and carbon emissions in downtown Seoul; as of October 2023, the cumulative number of users is up to 4 million, with about 43,000 bikes and about 2,700 stations. However, the rapid growth of Ttareungyi has caused several problems, most notably demand/supply mismatch, and Seoul citizens have complained about stations running out of bikes. This study therefore developed a real-time demand-forecasting model to prevent bike stock-outs at stations. To develop the model, the research team gathered the rental and return transaction data of 20,000 bikes across all 1,600 stations for the year 2019 and analyzed bike usage, user behavior, bike stations, and so on. The machine learning forecasting model predicts the amount of rentals and returns at each bike station every hour, retrained daily on the most recent 90 days of data together with weather information. The model is validated with the MAE and RMSE of the bike stations and was tested as a prototype service on the Seoul Bike Management System (mobile app) for the relocation team of Seoul City.
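
A sketch (not the paper's model) of an hourly per-station rental forecaster trained on a rolling 90-day window with weather features; the column names and data below are hypothetical placeholders:

```python
# Hourly demand forecast for one station with lagged usage and weather
# features, evaluated by MAE and RMSE on the next day.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Placeholder data: hourly records for one station over ~100 days
rng = pd.date_range("2019-03-01", periods=24 * 100, freq="H")
df = pd.DataFrame({
    "hour": rng.hour, "dow": rng.dayofweek,
    "temp": np.random.rand(len(rng)) * 30,
    "rain": (np.random.rand(len(rng)) > 0.9).astype(int),
    "rentals": np.random.poisson(5, len(rng)),
}, index=rng)
df["rentals_lag24"] = df["rentals"].shift(24)     # same hour yesterday
df = df.dropna()

# Rolling scheme: train on the most recent 90 days, predict the next day
train, test = df.iloc[:-24], df.iloc[-24:]
train = train[train.index >= train.index.max() - pd.Timedelta(days=90)]
features = ["hour", "dow", "temp", "rain", "rentals_lag24"]

model = RandomForestRegressor(n_estimators=200)
model.fit(train[features], train["rentals"])
pred = model.predict(test[features])

mae = mean_absolute_error(test["rentals"], pred)
rmse = np.sqrt(mean_squared_error(test["rentals"], pred))
print(f"MAE={mae:.2f}, RMSE={rmse:.2f}")
```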

Performance Comparison of Machine Learning Algorithms for Network Traffic Security in Medical Equipment (의료기기 네트워크 트래픽 보안 관련 머신러닝 알고리즘 성능 비교)

  • Seung Hyoung Ko;Joon Ho Park;Da Woon Wang;Eun Seok Kang;Hyun Wook Han
    • Journal of Information Technology Services / v.22 no.5 / pp.99-108 / 2023
  • As the computerization of hospitals becomes more advanced, security issues around the data generated by the various medical devices within hospitals are gradually increasing. Because hospital data contains a variety of personal information, attempts to attack it have been continuous. To protect data safely from external attacks, each hospital has formed an internal team to continuously monitor whether the computer network is safely protected, but there are limits to how well humans can monitor attacks on hospital networks in real time. Recently, artificial intelligence models have shown excellent performance in detecting outliers. In this paper, an experiment was conducted to verify how well an artificial intelligence model classifies normal and abnormal data in the network traffic generated by medical devices. Among the several models used for outlier detection, Random Forest and TabNet were chosen; TabNet is a deep learning algorithm designed to take structured (tabular) data as input and classify it. The two algorithms were trained on open network traffic data, and the classification accuracy of each model was measured on test data. The Random Forest algorithm showed a classification accuracy of 93%, and TabNet showed a classification accuracy of 99%. Therefore, most outliers that may occur in a hospital network are expected to be detectable using an excellent algorithm such as TabNet.
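
A sketch of the comparison in the abstract on placeholder data (the medical-device traffic dataset is not reproduced here), assuming the pytorch-tabnet package for the TabNet classifier:

```python
# Compare Random Forest and TabNet on a tabular binary classification task,
# reporting test accuracy for each. Synthetic data stands in for the traffic.
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

rf = RandomForestClassifier(n_estimators=300)
rf.fit(X_train, y_train)
print("Random Forest accuracy:", accuracy_score(y_test, rf.predict(X_test)))

tabnet = TabNetClassifier()
tabnet.fit(X_train.astype(np.float32), y_train,
           eval_set=[(X_test.astype(np.float32), y_test)], max_epochs=50)
print("TabNet accuracy:",
      accuracy_score(y_test, tabnet.predict(X_test.astype(np.float32))))
```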