• Title/Summary/Keyword: Deep Learning


A Research on Low-power Buffer Management Algorithm based on Deep Q-Learning approach for IoT Networks (IoT 네트워크에서의 심층 강화학습 기반 저전력 버퍼 관리 기법에 관한 연구)

  • Song, Taewon
    • Journal of Internet of Things and Convergence
    • /
    • v.8 no.4
    • /
    • pp.1-7
    • /
    • 2022
  • As the number of IoT devices increases, power management of the cluster head, which acts as a gateway between the cluster and the sink node in an IoT network, becomes crucial. Particularly when the cluster head is a mobile wireless terminal, the power consumption of the IoT network must be minimized over its lifetime. In addition, transmission delay is one of the primary metrics for rapid information collection in an IoT network. In this paper, we propose a low-power buffer management algorithm that takes the transmission delay into account. By forwarding or skipping received packets using deep Q-learning, a deep reinforcement learning method, the proposed scheme reduces power consumption while keeping the transmission delay low. The approach is shown to reduce power consumption and improve delay relative to an existing buffer management technique under the slotted ALOHA protocol. A minimal sketch of the forward-or-skip decision is given below.

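The abstract does not specify the state, action, or reward design, so the following is only a minimal PyTorch sketch of a deep Q-learning agent that chooses between skipping and forwarding a packet; the state features (buffer occupancy, head-of-line delay, residual energy) and the reward shaping are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn

# Hypothetical state: [buffer occupancy, head-of-line delay, residual energy]
STATE_DIM, N_ACTIONS = 3, 2          # actions: 0 = skip packet, 1 = forward packet

class QNet(nn.Module):
    """Small MLP that estimates Q(s, a) for the two buffer actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 32), nn.ReLU(),
            nn.Linear(32, N_ACTIONS))

    def forward(self, s):
        return self.net(s)

q_net = QNet()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.95                          # discount factor

def select_action(state, epsilon=0.1):
    """Epsilon-greedy choice between skipping and forwarding."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(N_ACTIONS, (1,)).item()
    with torch.no_grad():
        return q_net(state).argmax().item()

def td_update(state, action, reward, next_state):
    """One temporal-difference update on a single transition."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Example transition; the reward trades off energy spent against delay reduction.
s  = torch.tensor([0.6, 0.3, 0.8])    # illustrative values
a  = select_action(s)
r  = 1.0 if a == 1 else -0.2          # assumed reward shaping
s2 = torch.tensor([0.5, 0.2, 0.78])
td_update(s, a, r, s2)
```
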
Use of deep learning in nano image processing through the CNN model

  • Xing, Lumin;Liu, Wenjian;Liu, Xiaoliang;Li, Xin;Wang, Han
    • Advances in nano research
    • /
    • v.12 no.2
    • /
    • pp.185-195
    • /
    • 2022
  • Deep learning is a field of artificial intelligence (AI) used for computer-aided diagnosis (CAD) and image processing in scientific research. Reading image slices involves many repetitive mechanical tasks, takes time, and is constrained by geographical limits, and the strong subjectivity of interpreting image information raises the rate of misdiagnosis. Given the high mortality rate of lung cancer, a biopsy is needed to determine its class for further treatment. Deep learning has recently provided strong tools for diagnosing lung cancer and planning therapeutic regimens. However, identifying the pathological class of lung cancer from CT images at an early stage is difficult because of the absence of powerful AI models and public training data sets. A Convolutional Neural Network (CNN) was proposed for its essential role in recognizing pathological CT images. 472 patients who underwent staging FDG-PET/CT within 2 months prior to surgery or biopsy were selected. The developed CNN showed accuracies of 87%, 69%, and 69% on the training, validation, and test sets, respectively, for T1-T2 versus T3-T4 lung cancer classification. The results indicate that such classifiers can achieve better accuracy in distinguishing pathological CT images than several other deep learning models, such as ResNet-34, AlexNet, and DenseNet, with or without softmax weights. A minimal sketch of a CNN classifier of this kind is given below.

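The paper's CNN architecture is not described in the abstract, so the following is only a minimal PyTorch sketch of a binary CNN classifier for CT slices (T1-T2 vs. T3-T4); the layer sizes and the single-channel 224x224 input are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CTSliceCNN(nn.Module):
    """Toy CNN for binary staging of lung-cancer CT slices (T1-T2 vs. T3-T4)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))              # global average pooling
        self.classifier = nn.Linear(64, 2)        # two classes: T1-T2, T3-T4

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = CTSliceCNN()
dummy_batch = torch.randn(4, 1, 224, 224)          # 4 grayscale CT slices
logits = model(dummy_batch)
print(logits.shape)                                # torch.Size([4, 2])
```
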
A Study on Peak Load Prediction Using TCN Deep Learning Model (TCN 딥러닝 모델을 이용한 최대전력 예측에 관한 연구)

  • Lee Jung Il
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.6
    • /
    • pp.251-258
    • /
    • 2023
  • It is necessary to predict peak load accurately in order to supply electric power and operate the power system stably. Accurate prediction is especially important in winter and summer, when the peak load is higher than in other seasons. If the peak load is predicted to be higher than the actual peak load, the start-up costs of power plants increase, causing economic loss to the company. On the other hand, if the peak load is predicted to be lower than the actual peak load, a blackout may occur due to a lack of power plants capable of generating electricity. Economic losses and blackouts can be prevented by minimizing the prediction error of the peak load. In this paper, a recent deep learning model, the TCN (Temporal Convolutional Network), is used to minimize the peak load prediction error. Even when the same deep learning model is used, performance differs depending on the hyper-parameters, so methods for optimizing the hyper-parameters of the TCN are proposed. Data from 2006 to 2021 were used to train the model, and the prediction error was tested on data from 2022. The deep learning model optimized by the proposed methods was confirmed to outperform other deep learning models. A sketch of the dilated causal convolution underlying a TCN is given below.

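The abstract does not give the TCN configuration, so the sketch below only illustrates the dilated causal 1-D convolution that a TCN stacks; the channel counts, kernel size, and dilations are assumed values, and the paper's hyper-parameter optimization is not reproduced here.

```python
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """One dilated causal convolution block, the building block of a TCN."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation     # pad so the output stays causal
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              padding=self.pad, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):                           # x: (batch, channels, time)
        y = self.conv(x)
        y = y[:, :, :-self.pad] if self.pad else y  # drop the "future" overhang
        return self.relu(y)

# A small TCN: the receptive field grows with dilations 1, 2, 4.
tcn = nn.Sequential(
    CausalConvBlock(1, 16, dilation=1),
    CausalConvBlock(16, 16, dilation=2),
    CausalConvBlock(16, 16, dilation=4),
    nn.Conv1d(16, 1, kernel_size=1))                # map back to one load value per step

hourly_load = torch.randn(1, 1, 24 * 7)             # one week of hourly load inputs
forecast = tcn(hourly_load)                          # same length, causal in time
print(forecast.shape)                                # torch.Size([1, 1, 168])
```
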
Visual Explanation of a Deep Learning Solar Flare Forecast Model and Its Relationship to Physical Parameters

  • Yi, Kangwoo;Moon, Yong-Jae;Lim, Daye;Park, Eunsu;Lee, Harim
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.1
    • /
    • pp.42.1-42.1
    • /
    • 2021
  • In this study, we present a visual explanation of a deep learning solar flare forecast model and its relationship to physical parameters of solar active regions (ARs). For this, we use full-disk magnetograms at 00:00 UT from the Solar and Heliospheric Observatory/Michelson Doppler Imager and the Solar Dynamics Observatory/Helioseismic and Magnetic Imager, physical parameters from the Space-weather HMI Active Region Patch (SHARP), and Geostationary Operational Environmental Satellite X-ray flare data. Our deep learning flare forecast model based on the Convolutional Neural Network (CNN) predicts "Yes" or "No" for the daily occurrence of C-, M-, and X-class flares. We interpret the model using two CNN attribution methods (guided backpropagation and Gradient-weighted Class Activation Mapping [Grad-CAM]) that provide quantitative information for explaining the model. We find that our deep learning flare forecasting model is closely related to AR physical properties that have also been identified in previous studies as having significant predictive ability. The major results of this study are as follows. First, we successfully apply our deep learning models to the forecast of daily solar flare occurrence with TSS = 0.65, without any preprocessing to extract features from the data. Second, using the attribution methods, we find that the polarity inversion line is an important feature for the deep learning flare forecasting model. Third, the ARs with high Grad-CAM values produce more flares than those with low Grad-CAM values. Fourth, nine SHARP parameters, such as total unsigned vertical current, total unsigned current helicity, total unsigned flux, and total photospheric magnetic free energy density, are well correlated with Grad-CAM values. A minimal sketch of the Grad-CAM computation is given below.

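Grad-CAM is a general CNN attribution method, so the sketch below shows how a Grad-CAM heat map can be computed with PyTorch hooks on the last convolutional layer of an arbitrary classifier; the placeholder `model`, its `last_conv` layer, and the random magnetogram input are illustrative, not the authors' flare-forecast network.

```python
import torch
import torch.nn as nn

def grad_cam(model, last_conv, x, class_idx):
    """Return a Grad-CAM map for `class_idx`, using hooks on `last_conv`."""
    acts, grads = {}, {}

    def fwd_hook(_m, _inp, out):
        acts["a"] = out.detach()

    def bwd_hook(_m, _gin, gout):
        grads["g"] = gout[0].detach()

    h1 = last_conv.register_forward_hook(fwd_hook)
    h2 = last_conv.register_full_backward_hook(bwd_hook)

    score = model(x)[0, class_idx]          # scalar score for the target class
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # average gradients per channel
    cam = torch.relu((weights * acts["a"]).sum(dim=1))    # weighted sum of activation maps
    return cam / (cam.max() + 1e-8)                        # normalize to [0, 1]

# Placeholder CNN standing in for the "Yes"/"No" flare-forecast classifier.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
magnetogram = torch.randn(1, 1, 128, 128, requires_grad=True)
heatmap = grad_cam(model, model[2], magnetogram, class_idx=1)
print(heatmap.shape)                                       # torch.Size([1, 128, 128])
```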

Case Study of Building a Malicious Domain Detection Model Considering Human Habitual Characteristics: Focusing on LSTM-based Deep Learning Model (인간의 습관적 특성을 고려한 악성 도메인 탐지 모델 구축 사례: LSTM 기반 Deep Learning 모델 중심)

  • Jung Ju Won
    • Convergence Security Journal
    • /
    • v.23 no.5
    • /
    • pp.65-72
    • /
    • 2023
  • This paper proposes a method for detecting malicious domains that considers human habitual characteristics, by building a deep learning model based on LSTM (Long Short-Term Memory). DGA (Domain Generation Algorithm) malicious domains exploit human habitual errors, resulting in severe security threats. The objective is to respond swiftly and accurately to changes in malicious domains and their evasion techniques, such as typosquatting, in order to minimize security threats. The LSTM-based deep learning model automatically analyzes generated domains and classifies them as malicious or benign based on malware-specific features. Evaluated with the ROC curve and AUC, the model demonstrated a superior detection accuracy of 99.21%. The model can detect malicious domains in real time and also holds potential applications across various cyber security domains. This paper proposes and explores a novel approach aimed at safeguarding users and fostering a secure cyber environment against cyber attacks. A minimal sketch of a character-level LSTM domain classifier is given below.

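The abstract does not detail the network, so this is only a minimal PyTorch sketch of a character-level LSTM that scores a domain string as malicious or benign; the character vocabulary, embedding size, maximum length, and the example string are illustrative assumptions.

```python
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789-."   # assumed domain alphabet
CHAR2IDX = {c: i + 1 for i, c in enumerate(CHARS)}  # 0 is reserved for padding/unknown

class DomainLSTM(nn.Module):
    """Character-level LSTM that outputs P(domain is malicious)."""
    def __init__(self, vocab=len(CHARS) + 1, emb=16, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, idx):                 # idx: (batch, seq_len) of char indices
        h, _ = self.lstm(self.emb(idx))
        return torch.sigmoid(self.head(h[:, -1]))   # score from the last time step

def encode(domain, max_len=64):
    """Map a domain string to a fixed-length tensor of character indices."""
    ids = [CHAR2IDX.get(c, 0) for c in domain.lower()][:max_len]
    return torch.tensor(ids + [0] * (max_len - len(ids))).unsqueeze(0)

model = DomainLSTM()
score = model(encode("paypa1-login.example"))   # hypothetical typosquat-like string
print(float(score))                              # > 0.5 would be flagged after training
```
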
Fishing Boat Rolling Movement of Time Series Prediction based on Deep Network Model (심층 네트워크 모델에 기반한 어선 횡동요 시계열 예측)

  • Donggyun Kim;Nam-Kyun Im
    • Journal of Navigation and Port Research
    • /
    • v.47 no.6
    • /
    • pp.376-385
    • /
    • 2023
  • Fishing boat capsizing accidents account for more than half of all capsizing accidents. They can occur for a variety of reasons, including inexperienced operation, bad weather, and poor maintenance. Due to the size and influence of the industry, technological complexity, and regional diversity, fishing vessels are relatively under-researched compared to commercial ships. This study aimed to predict the rolling motion time series of fishing boats using image-based deep learning models, which can achieve high performance by learning various patterns in a time series. Three image-based deep learning models were used: Xception, ResNet50, and CRNN. Xception and ResNet50 consist of 177 and 184 layers, respectively, while CRNN consists of 22 relatively thin layers. The experimental results showed that the Xception model recorded the lowest symmetric mean absolute percentage error (sMAPE) of 0.04291 and a root mean squared error (RMSE) of 0.0198. ResNet50 and CRNN recorded RMSEs of 0.0217 and 0.022, respectively. This confirms that the models with deeper layers achieved higher accuracy. The two error measures used here are sketched below.

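For reference, the two error measures quoted above can be computed as follows. This is a generic NumPy sketch of sMAPE and RMSE, not the authors' evaluation code, and the sMAPE convention (fractional form with the mean of the two magnitudes in the denominator) is an assumption since the abstract does not state which variant was used.

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error (fractional form)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return np.mean(np.abs(y_pred - y_true) / denom)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

# Illustrative roll-angle series (degrees); not data from the paper.
actual    = [1.2, -0.8, 0.5, 2.1, -1.6]
predicted = [1.1, -0.7, 0.6, 2.0, -1.5]
print(smape(actual, predicted), rmse(actual, predicted))
```
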
A Study on Improvement of Buffer Cache Performance for File I/O in Deep Learning (딥러닝의 파일 입출력을 위한 버퍼캐시 성능 개선 연구)

  • Jeongha Lee;Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.2
    • /
    • pp.93-98
    • /
    • 2024
  • With the rapid advance of AI (artificial intelligence) and high-performance computing technologies, deep learning is being used in various fields. Deep learning trains by randomly reading a large amount of data and repeating this process. A large number of files are referenced randomly and repeatedly during deep learning, which shows access characteristics different from traditional workloads with temporal locality. To cope with the caching difficulties caused by deep learning, we propose a new sampling method that aims to reduce the randomness of dataset reading and to operate adaptively on existing buffer cache algorithms. We show that the proposed policy reduces the miss rate of the buffer cache by 16% on average and by up to 33% compared to the existing method, and improves the execution time by up to 24%. A sketch of locality-aware sampling is given below.

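The abstract does not spell out the sampling policy, so the following is only a conceptual sketch of one way to reduce the randomness of dataset reads: shuffle coarse blocks of files and then shuffle only within each block, so that consecutive reads stay in a cache-sized neighborhood. The block size and the per-epoch reshuffling are assumptions for illustration, not the paper's algorithm.

```python
import random

def block_shuffled_order(num_files, block_size=1024, seed=None):
    """Yield file indices shuffled within cache-sized blocks.

    Fully random shuffling touches files in an arbitrary order and defeats
    the buffer cache; shuffling inside blocks keeps reads clustered while
    still randomizing the order seen by the training loop.
    """
    rng = random.Random(seed)
    blocks = [list(range(start, min(start + block_size, num_files)))
              for start in range(0, num_files, block_size)]
    rng.shuffle(blocks)              # randomize the order of the blocks
    for block in blocks:
        rng.shuffle(block)           # and of the files within each block
        yield from block

# Example: one epoch over 10,000 files, with a different permutation per epoch.
for epoch in range(2):
    order = list(block_shuffled_order(10_000, block_size=1024, seed=epoch))
    print(epoch, order[:5])
```
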
Deep Learning-Based Daily Baseball Attendance Prediction (딥러닝 기반 일별 야구 관중 수 예측)

  • Hyunhee Lee;Seoyoung Sohn;Minseo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.131-135
    • /
    • 2024
  • Baseball attracts the largest audience among professional sports in Korea, and attendance is the primary source of income in baseball. Previous studies have limitations in reflecting the characteristics of individual stadiums. For instance, the KIA Tigers record the highest away game revenue among domestic teams but show lower home game earnings. Therefore, we aim to predict the daily attendance of the KIA Tigers at Gwangju-KIA Champions Field using deep learning. We collected and preprocessed daily attendance, date, weather, and team-related variables for Gwangju-KIA Champions Field from 2018 to 2023, and we propose a deep learning-based linear regression model to predict the daily attendance. We expect the proposed model to serve as basic information for increasing the club's revenue. A minimal sketch of such a regression model is given below.

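The abstract only names a "deep learning-based linear regression model", so the sketch below is a minimal PyTorch regression trained with MSE on a feature vector (encoded date, weather, and team-related variables); the feature dimension and the synthetic data are assumptions for illustration.

```python
import torch
import torch.nn as nn

N_FEATURES = 8                       # e.g. encoded date, weather, opponent, etc.

# Single linear layer: attendance ~ w . features + b, fitted by gradient descent.
model = nn.Linear(N_FEATURES, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Synthetic stand-in for the preprocessed 2018-2023 game records.
X = torch.randn(256, N_FEATURES)
y = torch.randn(256, 1) * 3000 + 15000      # fake daily attendance figures

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(float(loss))                           # training MSE after 200 steps
new_game = torch.randn(1, N_FEATURES)
print(float(model(new_game)))                # predicted attendance for one game
```
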
Effect on self-enhancement of deep-learning inference by repeated training of false detection cases in tunnel accident image detection (터널 내 돌발상황 오탐지 영상의 반복 학습을 통한 딥러닝 추론 성능의 자가 성장 효과)

  • Lee, Kyu Beom;Shin, Hyu Soung
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.3
    • /
    • pp.419-432
    • /
    • 2019
  • Most deep learning model training proceeds by supervised learning, which trains on labeled data composed of inputs and corresponding outputs. Since labeled data are generated manually, their labeling accuracy is relatively high, but securing them requires heavy effort in cost and time. In addition, the main goal of supervised learning is to improve detection performance for 'True Positive' data, not to reduce the occurrence of 'False Positive' data. In this paper, unpredictable 'False Positive' detections were observed from models trained on labeled 'True Positive' data while monitoring a deep learning-based CCTV accident detection system in operation at a tunnel monitoring center. Such 'False Positive' detections of 'fire' or 'person' objects frequently occurred for the lights of working vehicles, sunlight reflected at the tunnel entrance, long dark features on parts of the lane or a car, etc. To solve this problem, a deep learning model was developed by training simultaneously on the 'False Positive' data generated in the field and on the labeled data. As a result, compared with the model trained only on the existing labeled data, re-inference performance on the labeled data was improved. In addition, re-inference on the 'False Positive' data showed that the number of 'False Positives' for persons was reduced further when the training set included many 'False Positive' samples. By training on 'False Positive' data, the field applicability of the deep learning model was improved automatically. A sketch of mixing field false positives into the training set is given below.

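The retraining idea in the abstract, feeding field false positives back into the training set as background-only negative examples, can be sketched as follows. The data structures and the commented-out `train` call are placeholders; the paper's detector and data formats are not specified in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    image_path: str
    boxes: list          # ground-truth boxes; empty list = background-only image
    labels: list         # e.g. ["fire"], ["person"], or [] for negatives

def build_retraining_set(labeled_data, field_false_positives):
    """Combine the original labeled set with field false positives.

    The false-positive frames (vehicle lights, sunlight at the tunnel
    entrance, dark lane features, ...) are added as background-only
    samples, so the detector learns to suppress those patterns.
    """
    hard_negatives = [Sample(image_path=p, boxes=[], labels=[])
                      for p in field_false_positives]
    return labeled_data + hard_negatives

# Placeholder usage; the actual detector training loop is not shown here.
labeled = [Sample("tunnel_0001.jpg", [[10, 20, 60, 90]], ["person"])]
false_positive_frames = ["fp_vehicle_light.jpg", "fp_sunlight_entrance.jpg"]
training_set = build_retraining_set(labeled, false_positive_frames)
# train(detector, training_set)   # retrain so the same scenes stop triggering alarms
```
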
Development of Traffic Speed Prediction Model Reflecting Spatio-temporal Impact based on Deep Neural Network (시공간적 영향력을 반영한 딥러닝 기반의 통행속도 예측 모형 개발)

  • Kim, Youngchan;Kim, Junwon;Han, Yohee;Kim, Jongjun;Hwang, Jewoong
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.19 no.1
    • /
    • pp.1-16
    • /
    • 2020
  • With the advent of the fourth industrial revolution era, there has been growing interest in deep learning using big data, and studies using deep learning have been actively conducted in various fields. In the transportation sector, there are many advantages to using deep learning together with large-scale traffic data. In this study, a short-term travel speed prediction model using LSTM, a deep learning technique, was constructed to predict travel speed. The LSTM model, which is suitable for time series prediction, was selected because the travel speed data used for prediction are time series data. In order to predict the travel speed more precisely, we constructed a model that reflects both temporal and spatial effects. The model is a short-term prediction model that predicts one hour ahead. For the analysis data, 5-minute travel speeds collected from the Seoul Transportation Information Center were used, and a part of Gangnam where traffic is congested was selected as the analysis section. A minimal sketch of a spatio-temporal LSTM input is given below.
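
The abstract describes the input only at a high level, so the sketch below assumes a simple spatio-temporal encoding: each time step carries the 5-minute speeds of the target link and of a few assumed neighboring links, and an LSTM maps the past hour (12 steps) to the target-link speed one hour ahead. The dimensions and data are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

N_LINKS = 5            # target link + 4 assumed neighboring links (spatial effect)
SEQ_LEN = 12           # past hour of 5-minute speeds (temporal effect)

class SpeedLSTM(nn.Module):
    """LSTM mapping an hour of multi-link speeds to the target-link speed 1 h ahead."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(N_LINKS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, SEQ_LEN, N_LINKS)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # prediction from the last time step

model = SpeedLSTM()
# One synthetic sample: 12 five-minute steps of speeds (km/h) on 5 links.
recent_speeds = 30 + 10 * torch.rand(1, SEQ_LEN, N_LINKS)
predicted_speed = model(recent_speeds)
print(predicted_speed.shape)           # torch.Size([1, 1])
```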