• Title/Abstract/Keyword: short term neural network

Search Results: 395

A Comparative Study of Machine Learning Algorithms Using LID-DS DataSet (LID-DS 데이터 세트를 사용한 기계학습 알고리즘 비교 연구)

  • Park, DaeKyeong;Ryu, KyungJoon;Shin, DongIl;Shin, DongKyoo;Park, JeongChan;Kim, JinGoog
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.3
    • /
    • pp.91-98
    • /
    • 2021
  • Today's information and communication technology is developing rapidly, the security of IT infrastructure is becoming more important, and at the same time cyber attacks are becoming more advanced and sophisticated, as with Advanced Persistent Threats (APT). Early defense against or prediction of these increasingly sophisticated attacks is extremely important, and in many cases analysis of network-based intrusion detection system (NIDS) data alone cannot keep up with rapidly changing attacks, so Host-based Intrusion Detection System (HIDS) data analysis is used to complement it. In this paper, we conducted a comparative study of machine learning algorithms on LID-DS (Leipzig Intrusion Detection-Data Set), a host-based intrusion detection data set that includes thread information, metadata, and buffer data missing from previously used data sets. The algorithms compared were Decision Tree, Naive Bayes, MLP (Multi-Layer Perceptron), Logistic Regression, LSTM (Long Short-Term Memory), and RNN (Recurrent Neural Network). Accuracy, precision, recall, F1-score, and error rate were measured for evaluation. As a result, the LSTM algorithm showed the highest accuracy.
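
A minimal sketch of this kind of comparison, using scikit-learn stand-ins for the classical models named in the abstract and a synthetic binary dataset in place of the preprocessed LID-DS features (the LSTM/RNN models and the real data are omitted):

```python
# Hypothetical comparison loop over the classical models from the abstract.
# A synthetic binary dataset stands in for the preprocessed LID-DS features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f}")
```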

Development of Long-Term Electricity Demand Forecasting Model using Sliding Period Learning and Characteristics of Major Districts (주요 지역별 특성과 이동 기간 학습 기법을 활용한 장기 전력수요 예측 모형 개발)

  • Gong, InTaek;Jeong, Dabeen;Bak, Sang-A;Song, Sanghwa;Shin, KwangSup
    • The Journal of Bigdata
    • /
    • v.4 no.1
    • /
    • pp.63-72
    • /
    • 2019
  • For electric power, optimal generation and distribution plans based on accurate demand forecasts are necessary because electricity cannot be recovered once it has been delivered to users through the generation and transmission processes. Failure to predict power demand can cause various social and economic problems, such as the massive power outage of September 2011. Previous studies on forecasting power demand developed ARIMA, neural network, and other models, but limitations such as the use of the national average ambient air temperature and the application of uniform criteria to distinguish seasonality cause data distortion or degrade the performance of the predictive model. To improve the performance of the power demand prediction model, we divided Korea into five major regions and developed power demand prediction models based on linear regression and neural networks, reflecting seasonal characteristics through regional characteristics and the sliding period learning technique. With the proposed approach, it appears possible to forecast future demand in the short term as well as the long term, and to consider various events and exceptional cases during a certain period.
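
One plausible reading of the sliding period learning technique, sketched below with an illustrative linear model and synthetic monthly demand; the window and horizon sizes are assumptions, not the paper's settings:

```python
# Sketch of sliding-period learning: the model is repeatedly refit on a
# window of recent periods that slides forward in time.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
demand = 100 + np.cumsum(rng.normal(0, 1, 120))  # 120 months of synthetic demand

window, horizon = 36, 12  # train on 36 months, predict the next 12
forecasts = []
for start in range(0, len(demand) - window - horizon + 1, horizon):
    train = demand[start:start + window]
    t = np.arange(window).reshape(-1, 1)
    model = LinearRegression().fit(t, train)           # refit on the current window
    t_future = np.arange(window, window + horizon).reshape(-1, 1)
    forecasts.append(model.predict(t_future))          # forecast beyond the window
print(np.concatenate(forecasts)[:5])
```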

Analyzing the Impact of Multivariate Inputs on Deep Learning-Based Reservoir Level Prediction and Approaches for Mid to Long-Term Forecasting (다변량 입력이 딥러닝 기반 저수율 예측에 미치는 영향 분석과 중장기 예측 방안)

  • Hyeseung Park;Jongwook Yoon;Hojun Lee;Hyunho Yang
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.199-207
    • /
    • 2024
  • Local reservoirs are crucial sources of agricultural water supply, necessitating stable water level management to prepare for extreme climate conditions such as droughts. Water level prediction is significantly influenced by local climate characteristics, such as localized rainfall, as well as seasonal factors such as cropping times, making it as essential to understand the correlation between input and output data as to select an appropriate prediction model. In this study, extensive multivariate data from over 400 reservoirs in Jeollabuk-do from 1991 to 2022 were used to train and validate a water level prediction model that comprehensively reflects the complex hydrological and climatological environmental factors of each reservoir, and to analyze the impact of each input feature on prediction performance. Instead of focusing on performance improvements through neural network structures, the study adopts a basic feedforward neural network composed of fully connected layers, batch normalization, dropout, and activation functions, focusing on the correlation between multivariate input data and prediction performance. Additionally, most existing studies only present short-term, day-ahead prediction performance, which is not suitable for practical environments that require medium- to long-term predictions, such as 10 days or a month. Therefore, this study measured water level prediction performance up to one month ahead through a recursive method that uses each daily prediction value as the next input. The experiment identified performance changes according to the prediction period and analyzed the impact of each input feature on overall performance with an ablation study.
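
A minimal sketch of such a recursive multi-step scheme, with an assumed 7-day input window and a simple linear stand-in model rather than the paper's network:

```python
# Recursive multi-step forecasting: each daily prediction is appended to the
# input window and fed back to predict the following day.
import numpy as np
from sklearn.linear_model import LinearRegression

def recursive_forecast(model, history, horizon=30):
    """Roll the model forward, feeding each prediction back as the newest input."""
    window = list(history)
    preds = []
    for _ in range(horizon):
        x = np.array(window[-7:]).reshape(1, -1)   # last 7 days as features
        y_hat = float(model.predict(x)[0])
        preds.append(y_hat)
        window.append(y_hat)                       # prediction becomes next input
    return preds

# Train a simple one-step model on a synthetic daily water-level series.
rng = np.random.default_rng(1)
series = 50 + 10 * np.sin(np.arange(400) / 20) + rng.normal(0, 0.5, 400)
X = np.array([series[i:i + 7] for i in range(len(series) - 7)])
y = series[7:]
model = LinearRegression().fit(X, y)
print(recursive_forecast(model, series[-7:], horizon=30)[:5])
```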

Chord-based stepwise Korean Trot music generation technique using RNN-GAN (RNN-GAN을 이용한 코드 기반의 단계적 트로트 음악 생성 기법)

  • Hwang, Seo-Rim;Park, Young-Cheol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.6
    • /
    • pp.622-628
    • /
    • 2020
  • This paper proposes a music generation technique that automatically generates trot music using a Generative Adversarial Network (GAN) model composed of Recurrent Neural Networks (RNNs). The proposed method first creates a chord progression as the skeleton of the music, then generates the melody and bass in stages based on that progression, attaching them to the corresponding chords to complete a structured piece. In addition, exploiting the characteristic of trot songs that repeat a structure divided into individual sections such as intro, verse, and chorus, a new chorus chord progression is created from the verse chord progression, which extends the length of the generated piece. The quality of the generated music was assessed using subjective and objective evaluation methods, and it was confirmed that the generated music has characteristics similar to existing trot.
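
A structural sketch of an RNN-based GAN of this kind, with assumed sequence shapes; the chord conditioning and the stepwise melody/bass stages described in the paper are omitted for brevity:

```python
# Sketch (not the authors' implementation): an LSTM generator maps noise
# sequences to note probabilities, and an LSTM discriminator scores sequences
# as real or generated.
import tensorflow as tf
from tensorflow.keras import layers, Model, Input

SEQ_LEN, NOISE_DIM, N_PITCHES = 32, 16, 64  # assumed shapes

def build_generator():
    z = Input(shape=(SEQ_LEN, NOISE_DIM))
    h = layers.LSTM(128, return_sequences=True)(z)
    notes = layers.TimeDistributed(layers.Dense(N_PITCHES, activation="softmax"))(h)
    return Model(z, notes, name="generator")

def build_discriminator():
    x = Input(shape=(SEQ_LEN, N_PITCHES))
    h = layers.LSTM(128)(x)
    score = layers.Dense(1, activation="sigmoid")(h)  # real vs. generated
    return Model(x, score, name="discriminator")

gen, disc = build_generator(), build_discriminator()
gen.summary()
disc.summary()
```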

Rainfall Forecasting Using Satellite Information and Integrated Flood Runoff and Inundation Analysis (I): Theory and Development of Model (위성정보에 의한 강우예측과 홍수유출 및 범람 연계 해석 (I): 이론 및 모형의 개발)

  • Choi, Hyuk Joon;Han, Kun Yeun;Kim, Gwangseob
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.6B
    • /
    • pp.597-603
    • /
    • 2006
  • The purpose of this study is to improve short-term rainfall forecast skill using a neural network model that can capture the non-linear behavior between satellite data and ground observations, and thereby to minimize flood damage. To overcome the geographical limitations of the Korean peninsula and obtain a long forecast lead time of 3 to 6 hours, the developed rainfall forecast model takes satellite imagery and wide-range AWS data as input. The architecture is a multi-layer neural network consisting of one input layer, one hidden layer, and one output layer, trained using a momentum back-propagation algorithm. Flooding was estimated using the rainfall forecasts. We developed a dynamic flood inundation model coupled with a one-dimensional flood routing model, so the model can forecast flooding of a protected lowland caused by river levee failure. In the case of multiple levee breaks along the main stream and tributaries, the developed flood inundation model can estimate the flood level in the river and the inundation level and area in the protected lowland simultaneously.
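
A small NumPy sketch of momentum back-propagation for a one-hidden-layer network like the one described; the layer sizes, learning rate, momentum coefficient, and stand-in data are assumptions:

```python
# One-hidden-layer network trained with momentum back-propagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # stand-in for satellite/AWS features
y = rng.normal(size=(200, 1))          # stand-in for observed rainfall

n_in, n_hid, n_out = 8, 16, 1
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_out)); b2 = np.zeros(n_out)
vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)
lr, mom = 0.01, 0.9

for epoch in range(100):
    h = np.tanh(X @ W1 + b1)           # hidden layer
    out = h @ W2 + b2                  # linear output
    err = out - y
    # gradients via back-propagation
    gW2 = h.T @ err / len(X)
    gh = (err @ W2.T) * (1 - h**2)     # tanh derivative
    gW1 = X.T @ gh / len(X)
    # momentum updates: velocity accumulates past gradients
    vW2 = mom * vW2 - lr * gW2; W2 += vW2
    vW1 = mom * vW1 - lr * gW1; W1 += vW1
    b2 -= lr * err.mean(axis=0); b1 -= lr * gh.mean(axis=0)

print(float(np.mean(err**2)))          # final training MSE
```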

Symbolizing Numbers to Improve Neural Machine Translation (숫자 기호화를 통한 신경기계번역 성능 향상)

  • Kang, Cheongwoong;Ro, Youngheon;Kim, Jisu;Choi, Heeyoul
    • Journal of Digital Contents Society
    • /
    • v.19 no.6
    • /
    • pp.1161-1167
    • /
    • 2018
  • The development of machine learning has enabled machines to perform delicate tasks that previously only humans could do, and many companies have accordingly introduced machine-learning-based translators. Existing translators perform well overall but have problems with number translation: they often mistranslate numbers when the input sentence contains a large number, and the output sentence structure can change completely even if only one number in the input changes. In this paper, we first optimized a neural machine translation model architecture that uses a bidirectional RNN, LSTM, and the attention mechanism through data cleansing and tuning of the dictionary size. We then implemented a number-processing algorithm specialized for number translation and applied it to the neural machine translation model to solve the problems above. The paper presents the data cleansing method, an optimal dictionary size, and the number-processing algorithm, as well as experimental results for translation performance based on the BLEU score.
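
A hypothetical sketch of the number-symbolization idea: numbers are replaced with placeholder tokens before translation and restored afterwards. The token format and regex are illustrative assumptions, not the paper's implementation:

```python
# Replace numbers with indexed placeholders before translation; restore after.
import re

NUM_RE = re.compile(r"\d+(?:[,.]\d+)*")  # matches integers and grouped/decimal forms

def symbolize(sentence):
    """Replace each number with an indexed placeholder and remember the mapping."""
    mapping = {}
    def repl(match):
        token = f"<NUM{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return NUM_RE.sub(repl, sentence), mapping

def restore(translated, mapping):
    """Put the original numbers back into the translated sentence."""
    for token, number in mapping.items():
        translated = translated.replace(token, number)
    return translated

masked, nums = symbolize("The company earned 1,234,567 dollars in 2018.")
print(masked)  # The company earned <NUM0> dollars in <NUM1>.
# Simulate translation of the masked sentence, then restore the numbers.
print(restore(masked.replace("earned", "gained"), nums))
```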

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.3
    • /
    • pp.71-80
    • /
    • 2021
  • The selection of an appropriate neural network algorithm is an important step toward accurate data prediction in machine learning. Many algorithms based on basic artificial neural networks have been devised to efficiently predict future data, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulties when choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated the performance of each algorithm by comparing their errors and processing times. Each neural network model was trained using a tax dataset, and the trained models were used for data prediction to compare accuracies among the algorithms. Furthermore, the effects of activation functions and various optimizers on model performance were analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75, respectively, while the basic DNN model achieves the lowest processing time but the highest average RMSE of 0.163. Furthermore, the Adam optimizer yields the best performance (with DNN, GRU, and LSTM) in terms of error and the worst in terms of processing time. The findings of this study are thus expected to be useful for scientists and developers.
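
A sketch of the four model families compared, built with the Keras functional API on a synthetic sequence-regression task standing in for the tax dataset; layer widths and epochs are assumptions:

```python
# Build and compare DNN, RNN, LSTM, and GRU models on a synthetic task.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12, 4)).astype("float32")  # (samples, timesteps, features)
y = X.sum(axis=(1, 2)).astype("float32")             # synthetic regression target

def make_model(kind):
    inp = tf.keras.Input(shape=(12, 4))
    if kind == "DNN":
        h = layers.Dense(32, activation="relu")(layers.Flatten()(inp))
    elif kind == "RNN":
        h = layers.SimpleRNN(32)(inp)
    elif kind == "LSTM":
        h = layers.LSTM(32)(inp)
    else:  # GRU
        h = layers.GRU(32)(inp)
    model = tf.keras.Model(inp, layers.Dense(1)(h))
    model.compile(optimizer="adam", loss="mse")  # Adam, as favored in the results
    return model

for kind in ["DNN", "RNN", "LSTM", "GRU"]:
    model = make_model(kind)
    hist = model.fit(X, y, epochs=5, verbose=0)
    print(f"{kind}: train RMSE {np.sqrt(hist.history['loss'][-1]):.3f}")
```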

Feature Selection with Ensemble Learning for Prostate Cancer Prediction from Gene Expression

  • Abass, Yusuf Aleshinloye;Adeshina, Steve A.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.12spc
    • /
    • pp.526-538
    • /
    • 2021
  • Machine and deep learning-based models are emerging techniques used to address prediction problems in biomedical data analysis. DNA sequence prediction is a critical problem that has attracted a great deal of attention in the biomedical domain, and machine and deep learning-based models have been shown to provide more accurate results than conventional regression-based models. Predicting the gene sequences that lead to cancerous diseases, such as prostate cancer, is crucial, yet identifying the most important features in a gene sequence is challenging. Extracting the components of the gene sequence that provide insight into the types of mutation in the gene is of great importance, as it can lead to effective drug design and promote the new concept of personalized medicine. In this work, we extracted the exons in the prostate gene sequences used in the experiment. We built a Deep Neural Network (DNN) and a Bidirectional Long Short-Term Memory (Bi-LSTM) model using k-mer encoding for the DNA sequence and one-hot encoding for the class label, and evaluated the models using different classification metrics. Our experimental results show that the DNN model offers a training accuracy of 99 percent and a validation accuracy of 96 percent, while the Bi-LSTM model achieves a training accuracy of 95 percent and a validation accuracy of 91 percent.
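
A sketch of k-mer tokenization followed by a Bi-LSTM classifier; the vocabulary handling and hyperparameters are illustrative assumptions:

```python
# k-mer tokenization of a DNA string, then a Bidirectional LSTM classifier.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

def kmers(seq, k=3):
    """Split a DNA string into overlapping k-mers."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

print(kmers("ATGCGT"))  # ['ATG', 'TGC', 'GCG', 'CGT']

VOCAB = 4 ** 3 + 1  # all 3-mers over {A,C,G,T} plus a padding id
model = Sequential([
    tf.keras.Input(shape=(None,)),               # integer-encoded k-mer ids
    layers.Embedding(input_dim=VOCAB, output_dim=32),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(2, activation="softmax"),       # two-class one-hot labels
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```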

Emotion Recognition in Arabic Speech from Saudi Dialect Corpus Using Machine Learning and Deep Learning Algorithms

  • Hanaa Alamri;Hanan S. Alshanbari
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.8
    • /
    • pp.9-16
    • /
    • 2023
  • Speech can actively convey feelings and attitudes through words, so it is important for researchers to identify both the emotional content contained in speech signals and the type of emotion expressed. In this study, we examined an emotion recognition system using an Arabic database, specifically in the Saudi dialect, collected from a YouTube channel called Telfaz11. The four emotions examined were anger, happiness, sadness, and neutral. In our experiments, we extracted features from the audio signals, such as the Mel Frequency Cepstral Coefficient (MFCC) and Zero-Crossing Rate (ZCR), and then classified emotions using machine learning algorithms (Support Vector Machine (SVM) and K-Nearest Neighbor (KNN)) and deep learning algorithms (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM)). Our experiments showed that the MFCC feature extraction method with the CNN model obtained the best accuracy, 95%, demonstrating the effectiveness of this classification system for recognizing spoken Arabic emotions.
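
A sketch of the feature-extraction step named in the abstract, using librosa for MFCC and ZCR with scikit-learn classifiers; the corpus paths, labels, and sampling rate are placeholders:

```python
# Extract MFCC and zero-crossing-rate features from a speech file.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def extract_features(path):
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)   # (13, frames)
    zcr = librosa.feature.zero_crossing_rate(signal)          # (1, frames)
    # summarize frame-level features into one fixed-length vector
    return np.concatenate([mfcc.mean(axis=1), zcr.mean(axis=1)])

# Hypothetical corpus layout: lists of wav paths and emotion labels.
# X = np.stack([extract_features(p) for p in wav_paths])
# clf = SVC(kernel="rbf").fit(X, labels)      # or KNeighborsClassifier()
```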

Developing radar-based rainfall prediction model with GAN(Generative Adversarial Network) (생성적 적대 신경망(GAN)을 활용한 강우예측모델 개발)

  • Choi, Suyeon;Sohn, Soyoung;Kim, Yeonjoo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.185-185
    • /
    • 2021
  • As abnormal climate phenomena such as sudden localized rainfall increase due to climate change, the importance of accurate rainfall prediction continues to grow. Traditional rainfall prediction relies on numerical weather models or radar-based extrapolation techniques; recently, with advances in machine learning, radar-data-based rainfall prediction techniques using machine learning have been developed. Existing machine-learning rainfall prediction models mainly use two-dimensional recurrent approaches suited to time-series image prediction, such as Convolutional Long Short-Term Memory (ConvLSTM), or convolutional approaches such as Convolutional Neural Network (CNN) encoder-decoders. In this study, a Generative Adversarial Network (GAN) approach was used to perform future rainfall prediction. In the GAN methodology, a generator that produces images and a discriminator that distinguishes them from real images are trained in competition, an approach currently showing high performance in image generation. The GAN-based model developed in this study was trained to perform very-short-term and short-term rainfall prediction using radar image data provided by the Korea Meteorological Administration from 2016 to 2019, and short-term rainfall prediction was simulated using 2020 radar image data. A comparative analysis of the rainfall predictions of existing machine-learning-based models and the GAN-based model confirmed that the model developed in this study shows excellent performance in short-term rainfall prediction.
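
A structural sketch (an assumption, not the authors' code) of a conditional GAN for radar nowcasting, where the generator maps past radar frames to the next frame and the discriminator judges whether a frame sequence looks real:

```python
# Conditional-GAN skeleton for radar frame prediction.
import tensorflow as tf
from tensorflow.keras import layers, Model, Input

H, W, T_IN = 64, 64, 4  # radar frame size and number of input timesteps (assumed)

def build_generator():
    past = Input(shape=(H, W, T_IN))             # past frames stacked as channels
    h = layers.Conv2D(32, 3, padding="same", activation="relu")(past)
    h = layers.Conv2D(32, 3, padding="same", activation="relu")(h)
    nxt = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(h)
    return Model(past, nxt, name="generator")

def build_discriminator():
    frames = Input(shape=(H, W, T_IN + 1))       # past frames + candidate next frame
    h = layers.Conv2D(32, 3, strides=2, activation="relu")(frames)
    h = layers.Flatten()(h)
    score = layers.Dense(1, activation="sigmoid")(h)
    return Model(frames, score, name="discriminator")

gen, disc = build_generator(), build_discriminator()
gen.summary()
```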
