• Title/Summary/Keyword: Neural Tensor Network

Sources separation of passive sonar array signal using recurrent neural network-based deep neural network with 3-D tensor (3-D 텐서와 recurrent neural network기반 심층신경망을 활용한 수동소나 다중 채널 신호분리 기술 개발)

  • Sangheon Lee; Dongku Jung; Jaesok Yu
    • The Journal of the Acoustical Society of Korea, v.42 no.4, pp.357-363, 2023
  • In underwater signal processing, separating individual signals from mixed signals has long been a challenge due to low signal quality. The common approach of analyzing spectrograms obtained with the Short-Time Fourier Transform has been criticized for its complex parameter optimization and loss of phase information. Building on the Dual-Path Recurrent Neural Network's success with long time-series signals, we propose a Triple-Path Recurrent Neural Network that handles the three-dimensional tensors formed from multi-channel sensor input signals. By dividing the input signals into short chunks and stacking them into a 3-D tensor, the method accounts for relationships within and between chunks as well as between channels, enabling both local and global feature learning. The proposed technique demonstrates improved Root Mean Square Error and Scale-Invariant Signal-to-Noise Ratio compared with the existing method.
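
A minimal sketch of the chunking idea described in this abstract, not the authors' implementation: a multi-channel signal is segmented into short chunks to form a 3-D (channel x chunk x time) tensor, and an RNN is run along each axis in turn. The chunk length, hop size, and layer widths are illustrative assumptions.

```python
# Sketch only: form a (channels, chunks, chunk_len) tensor from a multi-channel
# signal, then run a bidirectional RNN along each axis by folding the other two
# axes into the batch dimension.
import numpy as np
import tensorflow as tf

def chunk_signal(x, chunk_len=64, hop=32):
    """x: (channels, time) -> (channels, n_chunks, chunk_len)."""
    channels, time = x.shape
    n_chunks = 1 + (time - chunk_len) // hop
    return np.stack([x[:, i * hop:i * hop + chunk_len] for i in range(n_chunks)],
                    axis=1)

def rnn_along_axis(t, axis, units=16):
    """Apply a bidirectional GRU along one axis of the 3-D tensor."""
    t = np.ascontiguousarray(np.moveaxis(t, axis, -1))   # sequence axis last
    a, b, seq = t.shape
    layer = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(units))
    out = layer(t.reshape(a * b, seq, 1))                # fold other axes into batch
    return out.numpy().reshape(a, b, 2 * units)

signal = np.random.randn(4, 1024).astype("float32")      # toy 4-channel signal
tensor = chunk_signal(signal)                             # (4, 31, 64)

intra_chunk   = rnn_along_axis(tensor, axis=2)            # local, within-chunk features
inter_chunk   = rnn_along_axis(tensor, axis=1)            # global, across-chunk features
inter_channel = rnn_along_axis(tensor, axis=0)            # spatial, across-channel features
print(intra_chunk.shape, inter_chunk.shape, inter_channel.shape)
```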

A Tensor Space Model based Deep Neural Network for Automated Text Classification (자동문서분류를 위한 텐서공간모델 기반 심층 신경망)

  • Lim, Pu-reum; Kim, Han-joon
    • Database Research, v.34 no.3, pp.3-13, 2018
  • Text classification is a text mining technology that assigns a given document to its appropriate categories and is used in fields such as spam email detection, news classification, question answering, sentiment analysis, and chatbots. In general, text classification systems rely on machine learning algorithms, and among these, naïve Bayes and support vector machines, which suit text data well, are known to perform reasonably. Recently, with the development of deep learning, several studies have applied deep neural networks such as recurrent neural networks (RNN) and convolutional neural networks (CNN) to improve classification performance. However, current techniques have not yet reached a fully satisfactory level of accuracy. This paper focuses on the fact that text is usually represented as a vector over word dimensions alone, which discards semantic information inherent in the text, and proposes a neural network architecture based on a semantic tensor space model.
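
One way to read the core idea is that a document becomes a matrix over word and semantic dimensions rather than a flat word vector. The sketch below is only an illustration of that representation feeding a small classifier; the vocabulary size, concept count, and layer sizes are assumptions, not the paper's actual tensor space model.

```python
# Illustration only: documents as (vocab_size, n_concepts) matrices instead of
# flat word-count vectors, classified by a small feed-forward network.
import numpy as np
import tensorflow as tf

vocab_size, n_concepts, n_classes = 5000, 32, 10

# Toy stand-in data: each document is a term-by-concept weight matrix.
docs = np.random.rand(128, vocab_size, n_concepts).astype("float32")
labels = np.random.randint(0, n_classes, size=128)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(vocab_size, n_concepts)),
    # Summarize over the vocabulary axis so the dense layers see a
    # concept-level profile of the document.
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(docs, labels, epochs=1, batch_size=32, verbose=0)
```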

Hybrid Tensor Flow DNN and Modified Residual Network Approach for Cyber Security Threats Detection in Internet of Things

  • Alshehri, Abdulrahman Mohammed; Fenais, Mohammed Saeed
    • International Journal of Computer Science & Network Security, v.22 no.10, pp.237-245, 2022
  • The prominence of the IoT (Internet of Things) and the exponential advancement of computer networks have given rise to many essential applications. Recognizing cyber-attacks and anomalies in networks and building effective intrusion recognition systems are becoming increasingly vital to security. MLTs (Machine Learning Techniques) can be developed for such data-driven intelligent recognition systems. Researchers have employed TFDNNs (TensorFlow Deep Neural Networks) and DCNNs (Deep Convolutional Neural Networks) to recognize pirated software and malware efficiently. However, tuning the number of neurons in multiple layers and their activation functions leads to learning errors that degrade the classifier's reliability. HTFDNNs (Hybrid TensorFlow DNNs) and MRNs (Modified Residual Networks, i.e., ResNet CNNs) are presented to recognize software piracy and malware. This study proposes HTFDNNs to identify stolen software starting from plagiarized source code. The work uses tokens and weights to filter noise while focusing on tokens to identify source-code theft. DLTs (Deep Learning Techniques) are then used to detect plagiarized sources. Data from Google Code Jam is used to detect software piracy. MRNs visualize colour images to identify harms in networks using IoT devices. Malware samples from the Maling dataset are used for the tests in this work.
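
A hedged sketch of the token-and-weight filtering step described above, using TF-IDF weighting over toy code snippets and a small dense classifier; the tokenizer settings, layer sizes, and example snippets are assumptions rather than the authors' pipeline.

```python
# Sketch under stated assumptions: TF-IDF token weights over toy code snippets
# feeding a small dense network that flags suspected copies.
import tensorflow as tf

snippets = tf.constant([
    "def add(a, b): return a + b",
    "def add(x, y): return x + y",      # near-duplicate of the first snippet
    "class Stack: pass",
    "print('hello world')",
])
labels = tf.constant([[1.0], [1.0], [0.0], [0.0]])   # 1 = suspected copy

vectorizer = tf.keras.layers.TextVectorization(max_tokens=500, output_mode="tf_idf")
vectorizer.adapt(snippets)
x = vectorizer(snippets)                             # dense TF-IDF matrix

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(x.shape[-1],)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=2, verbose=0)
```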

Empirical Investigations to Plant Leaf Disease Detection Based on Convolutional Neural Network

  • K. Anitha; M. Srinivasa Rao
    • International Journal of Computer Science & Network Security, v.23 no.6, pp.115-120, 2023
  • Plant leaf diseases and destructive insects are major challenges that affect a country's agricultural production. Accurate and fast prediction of leaf diseases in crops could help establish a suitable treatment technique while considerably reducing economic and crop losses. In this paper, a Convolutional Neural Network-based model is proposed to detect plant leaf diseases efficiently. The Convolutional Neural Network (CNN) is a key deep learning technique used mainly for object identification. The model includes an image classifier built using machine learning concepts, with TensorFlow running in the backend and the model implemented in Python. Previous methods are based on various image processing techniques implemented in MATLAB and lack the flexibility to provide a good level of accuracy. The proposed system can effectively identify different types of diseases and deal with complex scenarios across a plant's area. A predictor model is used to pinpoint the disease and present the specific problem, which helps farmers act on it. Experimental results indicate that an accuracy of around 93% can be achieved with this model on the prepared dataset.
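
A minimal sketch of the kind of TensorFlow/Keras CNN classifier the abstract describes, assuming 128x128 RGB leaf images and an illustrative number of disease classes; it is not the authors' exact architecture.

```python
# Small CNN image classifier for leaf-disease categories (sizes are assumptions).
import tensorflow as tf

num_classes = 5          # assumed number of leaf-disease categories
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use a labeled leaf-image dataset, e.g. loaded with
# tf.keras.utils.image_dataset_from_directory("leaf_images/").
```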

Analysis of Tensor Processing Unit and Simulation Using Python (텐서 처리부의 분석 및 파이썬을 이용한 모의실행)

  • Lee, Jongbok
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.19 no.3, pp.165-171, 2019
  • Studies of computer architecture have shown that major improvements in price-to-energy performance stem from domain-specific hardware. This paper analyzes the Tensor Processing Unit (TPU), an ASIC that accelerates inference for artificial neural networks (NNs). The core of the TPU is a MAC (multiply-accumulate) matrix multiply unit capable of high-speed operation, together with software-managed on-chip memory. The TPU's execution model can meet the response-time requirements of neural network workloads better than existing CPU and GPU execution models, with a small area and low power consumption even though it contains many MACs and a large memory. Running the TensorFlow benchmark framework on the TPU achieves higher performance and better power efficiency than a CPU or GPU. In this paper, we analyze the TPU, simulate the Python-modeled OpenTPU, and synthesize the matrix multiplication unit, which is the key hardware component.
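
As a toy illustration of what the TPU's MAC matrix unit computes (not OpenTPU itself), the sketch below performs a tiled 8-bit matrix multiply with 32-bit accumulators; the tile size and matrix shapes are far smaller than the real 256x256 array.

```python
# Toy illustration of MAC-array accumulation: multiply int8 matrices by
# streaming tile-sized partial products into int32 accumulators.
import numpy as np

def tiled_matmul(a, b, tile=4):
    """Accumulate tile-sized partial products, mimicking how a MAC array
    reuses a weight tile against streaming inputs."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    acc = np.zeros((m, n), dtype=np.int32)          # accumulator registers
    for k0 in range(0, k, tile):                    # one weight tile at a time
        a_tile = a[:, k0:k0 + tile].astype(np.int32)
        b_tile = b[k0:k0 + tile, :].astype(np.int32)
        acc += a_tile @ b_tile                      # multiply-accumulate step
    return acc

rng = np.random.default_rng(0)
a = rng.integers(-128, 127, size=(8, 16), dtype=np.int8)
b = rng.integers(-128, 127, size=(16, 8), dtype=np.int8)
assert np.array_equal(tiled_matmul(a, b),
                      a.astype(np.int32) @ b.astype(np.int32))
```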

Supervised learning-based DDoS attacks detection: Tuning hyperparameters

  • Kim, Meejoung
    • ETRI Journal, v.41 no.5, pp.560-573, 2019
  • Two supervised learning algorithms, a basic neural network and a long short-term memory recurrent neural network, are applied to traffic that includes DDoS attacks. The joint effects of preprocessing methods and machine learning hyperparameters on performance are investigated. Values representing attack characteristics are extracted from the datasets and preprocessed by two methods. Binary classification and two optimizers are used. Some hyperparameters are obtained exhaustively for fast and accurate detection, while others are fixed to constants to account for performance and data characteristics. An experiment is performed via TensorFlow on three traffic datasets. Three scenarios are considered: the effect of learning earlier traffic on sequential traffic analysis, the effect of training on one dataset and applying the model to another, and whether the algorithms can handle recent attack traffic. Experimental results show that the chosen preprocessing methods, neural network architectures, hyperparameters, and optimizers are appropriate for DDoS attack detection. The obtained results provide a criterion for the detection accuracy of attacks.
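
A hedged sketch of this kind of setup: a basic feed-forward binary classifier over preprocessed traffic features, retrained once per optimizer. The feature count, layer sizes, optimizer choices, and synthetic data are assumptions, since the abstract does not specify them.

```python
# Binary classifier over traffic features, compared across two optimizers.
import numpy as np
import tensorflow as tf

n_features = 10
x = np.random.rand(1000, n_features).astype("float32")   # stand-in features
y = np.random.randint(0, 2, size=(1000, 1))              # 1 = attack, 0 = normal

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

for optimizer in ("adam", "sgd"):                         # assumed optimizer pair
    model = build_model()
    model.compile(optimizer=optimizer, loss="binary_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x, y, epochs=3, batch_size=64,
                        validation_split=0.2, verbose=0)
    print(optimizer, history.history["val_accuracy"][-1])
```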

Performance Evaluation of Recurrent Neural Network Algorithms for Recommendation System in E-commerce (전자상거래 추천시스템을 위한 순환신경망 알고리즘들의 성능평가)

  • Seo, Jihye; Yong, Hwan-Seung
    • KIISE Transactions on Computing Practices, v.23 no.7, pp.440-445, 2017
  • With the advance of e-commerce systems, the number of people shopping online has significantly increased, so the need for accurate recommendation systems is becoming ever more important. A recurrent neural network is a deep learning algorithm that exploits sequential information during training. In this paper, we evaluate the application of recurrent neural networks to recommendation systems. We evaluated three recurrent algorithms (RNN, LSTM, and GRU) and three commonly used optimization algorithms (Adagrad, RMSProp, and Adam). In the experiments, we used the TensorFlow open-source library produced by Google and e-commerce session data from the RecSys Challenge 2015. The results using the optimal hyperparameters found in this study are compared with those of the RecSys Challenge 2015 participants.
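
The abstract names the three recurrent layers and three optimizers explicitly, so a sketch of the evaluation grid can be fairly literal; the item vocabulary, sequence length, layer widths, and synthetic session data below are still assumptions.

```python
# Next-item prediction over click sessions, swept over RNN/LSTM/GRU layers
# and the Adagrad/RMSProp/Adam optimizers.
import numpy as np
import tensorflow as tf

n_items, seq_len = 1000, 10
sessions = np.random.randint(1, n_items, size=(500, seq_len))   # clicked item ids
next_item = np.random.randint(1, n_items, size=(500,))          # item to predict

def build(cell):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(seq_len,), dtype="int32"),
        tf.keras.layers.Embedding(n_items, 32),
        cell(64),                                # RNN / LSTM / GRU layer
        tf.keras.layers.Dense(n_items, activation="softmax"),
    ])

for cell in (tf.keras.layers.SimpleRNN, tf.keras.layers.LSTM, tf.keras.layers.GRU):
    for optimizer in ("adagrad", "rmsprop", "adam"):
        model = build(cell)
        model.compile(optimizer=optimizer,
                      loss="sparse_categorical_crossentropy",
                      metrics=["sparse_categorical_accuracy"])
        model.fit(sessions, next_item, epochs=1, batch_size=64, verbose=0)
```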

River streamflow prediction using a deep neural network: a case study on the Red River, Vietnam

  • Le, Xuan-Hien; Ho, Hung Viet; Lee, Giha
    • Korean Journal of Agricultural Science, v.46 no.4, pp.843-856, 2019
  • Real-time flood prediction plays an important role in reducing the potential damage that floods cause to urban residential areas downstream of river basins. This paper presents an effective approach to flood forecasting based on a deep neural network (DNN) model. The research relies on TensorFlow, the open-source software library developed by Google for machine learning and deep learning applications and research. The proposed model was applied to forecast the flow rate one, two, and three days in advance at the Son Tay hydrological station on the Red River, Vietnam. The model's input was a series of discharge data observed at five gauge stations on the Red River system; rainfall data, water levels, and topographic characteristics were not required. The results indicate that the DNN model achieved high forecasting performance even though only a modest amount of data is required. When forecasting one and two days in advance, the Nash-Sutcliffe Efficiency (NSE) reached 0.993 and 0.938, respectively. The findings suggest that the DNN model can be used to build a real-time flood warning system for the Red River and other river basins in Vietnam.
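
A minimal sketch, under an assumed lookback window, assumed layer sizes, and synthetic data, of a feed-forward DNN that maps recent discharge at five gauge stations to the flow rate one day ahead, plus the Nash-Sutcliffe efficiency used to score it; this is not the authors' exact model.

```python
# DNN regression from recent multi-station discharge to next-day flow rate.
import numpy as np
import tensorflow as tf

n_stations, lookback = 5, 7            # assumed: 7 days of discharge at 5 stations
x = np.random.rand(2000, n_stations * lookback).astype("float32")  # toy inputs
y = np.random.rand(2000, 1).astype("float32")                      # toy flow 1 day ahead

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_stations * lookback,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),          # regression: forecast discharge
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, validation_split=0.2, verbose=0)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, the score reported in the abstract."""
    obs, sim = np.ravel(obs), np.ravel(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(nse(y, model.predict(x, verbose=0)))
```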

River Water Level Prediction Method based on LSTM Neural Network

  • Le, Xuan Hien; Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference, 2018.05a, pp.147-147, 2018
  • In this article, we use TensorFlow, an open-source software library developed for complex machine learning and deep neural network applications, though it is general enough to apply to a wide variety of other domains as well. The proposed model is based on LSTM (Long Short-Term Memory), a deep neural network model, and predicts the river water level at Okcheon Station on the Geum River without using rainfall forecast information. For LSTM modeling, the input data are hourly water levels over 15 years (2002 to 2016) at four stations: three upstream stations (Sutong, Hotan, and Songcheon) and the forecast-target station (Okcheon). The data are subdivided into training, testing, and validation sets. The model was formulated to predict the Okcheon Station water level for lead times from 3 hours to 12 hours. Although the model does not require the many inputs needed for rainfall-runoff simulation, such as climate, geography, and land use, the prediction is stable and reliable up to a 9-hour lead time, with a Nash-Sutcliffe efficiency (NSE) higher than 0.90 and a root mean square error (RMSE) lower than 12 cm. The results indicate that the method can reproduce the river water level time series and is applicable to practical flood forecasting as an alternative to hydrologic modeling approaches.
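
A hedged sketch of the LSTM setup described above: hourly water levels from four stations are windowed into sequences and the network predicts the target station's level several hours ahead. The window length, LSTM width, target column, and toy data are assumptions.

```python
# LSTM regression from windowed multi-station water levels to a future level.
import numpy as np
import tensorflow as tf

n_stations, window, lead = 4, 24, 9     # 24-hour window, 9-hour lead time (assumed)
levels = np.random.rand(5000, n_stations).astype("float32")  # toy hourly levels

# Window the series: inputs are the last `window` hours at all stations,
# the target is the assumed Okcheon column (index 3) `lead` hours later.
xs, ys = [], []
for t in range(len(levels) - window - lead):
    xs.append(levels[t:t + window])
    ys.append(levels[t + window + lead - 1, 3])
x = np.stack(xs)
y = np.array(ys, dtype="float32")[:, None]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, n_stations)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, validation_split=0.2, verbose=0)
```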

Recommendation system using Deep Autoencoder for Tensor data

  • Park, Jina; Yong, Hwan-Seung
    • Journal of the Korea Society of Computer and Information, v.24 no.8, pp.87-93, 2019
  • With growing interest in recommendation systems based on deep learning, a number of studies have sought to improve collaborative filtering performance with autoencoders, a state-of-the-art deep learning architecture. This study proposes an autoencoder for predicting ratings in a recommendation system; we add hidden layers to the original autoencoder architecture, implementing deep autoencoders with three to five hidden layers, and compare their performance. We use 2-dimensional arrays and 3-dimensional tensors as input datasets. As a result, we found correlations among the entries of the 3-dimensional datasets, such as item-time and user-time, and found that the deep autoencoder with extra hidden layers generalizes better than the basic autoencoder.
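
An illustrative sketch of a deep autoencoder with three hidden layers reconstructing a user-item rating matrix; the layer widths and random ratings are assumptions, not the paper's configuration.

```python
# Deep autoencoder that reconstructs ratings; predictions fill unrated entries.
import numpy as np
import tensorflow as tf

n_items = 200
ratings = np.random.randint(0, 6, size=(500, n_items)).astype("float32")  # 0 = unrated

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_items,)),
    tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 1 (encoder)
    tf.keras.layers.Dense(32, activation="relu"),    # hidden layer 2 (bottleneck)
    tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 3 (decoder)
    tf.keras.layers.Dense(n_items),                  # reconstructed ratings
])
model.compile(optimizer="adam", loss="mse")
model.fit(ratings, ratings, epochs=3, batch_size=32, verbose=0)
predicted = model.predict(ratings[:1], verbose=0)    # estimated ratings for one user
```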