• Title/Summary/Keyword: Neural Tensor Networks (신경 텐서망)

Automatic Expansion of ConceptNet by Using Neural Tensor Networks (신경 텐서망을 이용한 컨셉넷 자동 확장)

  • Choi, Yong Seok;Lee, Gyoung Ho;Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering / v.5 no.11 / pp.549-554 / 2016
  • ConceptNet is a common-sense knowledge base organized as a semantic graph whose nodes represent concepts and whose edges represent relationships between them. Because it is difficult to guarantee the integrity of a knowledge base, knowledge bases often suffer from an incompleteness problem, and the reasoning performed over them can therefore be unreliable. This work presents neural tensor networks that alleviate the incompleteness problem by inferring new assertions and adding them to ConceptNet. The networks are trained on a collection of assertions extracted from ConceptNet; the input is a pair of concepts, and the output is a confidence score indicating how plausible a connection between the two concepts is under a specified relationship. By increasing the degree of its nodes, the neural tensor networks can expand the usefulness of ConceptNet. The networks achieve an accuracy of 87.7% on the test set, and can predict a new assertion that does not exist in ConceptNet with an accuracy of 85.01%.
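The model described follows the standard neural tensor network formulation: a bilinear tensor layer scores a pair of concept embeddings under a given relation. Below is a minimal NumPy sketch of that scoring function; the dimensions, random parameters, and example concepts are illustrative assumptions, not values from the paper.

```python
import numpy as np

d, k = 4, 2  # embedding size and number of tensor slices (toy values)
rng = np.random.default_rng(0)

# Parameters for one relation, e.g. IsA (randomly initialized for illustration)
W = rng.normal(size=(k, d, d))   # bilinear tensor slices
V = rng.normal(size=(k, 2 * d))  # standard feed-forward weights
b = rng.normal(size=k)           # bias
u = rng.normal(size=k)           # output weights

def ntn_score(e1, e2):
    """Confidence that (e1, relation, e2) holds: u . tanh(e1 W e2 + V[e1;e2] + b)."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return u @ np.tanh(bilinear + linear + b)

e_dog, e_animal = rng.normal(size=d), rng.normal(size=d)  # hypothetical embeddings
print(ntn_score(e_dog, e_animal))  # higher score = more plausible assertion
```

In such a setup, a candidate assertion would typically be added to the knowledge base once its score clears a threshold chosen on held-out data.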

A Time-Series Data Prediction Using TensorFlow Neural Network Libraries (텐서 플로우 신경망 라이브러리를 이용한 시계열 데이터 예측)

  • Muh, Kumbayoni Lalu;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems / v.8 no.4 / pp.79-86 / 2019
  • This paper describes time-series data prediction based on artificial neural networks (ANNs). In this study, a batch-based ANN model and a stochastic ANN model were implemented using TensorFlow libraries. Each model is evaluated by comparing training and testing errors measured through experiments. To train and test the models, a tax dataset collected from the website of the Indiana State Budget Agency in the USA, covering 2001 to 2018, was used. The dataset includes individual income taxes, product sales taxes, corporate taxes, and total tax incomes. The experimental results show that the batch model performs better than the stochastic model. Using the batch scheme, we conducted a prediction experiment in which total taxes were predicted for the next seven months and compared with the actual collected totals. The results show that the predicted values closely match the actual data.
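In TensorFlow/Keras, the difference between the two schemes comes down to the batch_size argument: the full dataset per update versus one sample per update. A minimal sketch, with a synthetic series standing in for the tax data (which is not reproduced here):

```python
import numpy as np
import tensorflow as tf

# Synthetic monthly series standing in for the tax data (18 years x 12 months)
series = np.sin(np.linspace(0, 20, 216)).astype("float32")
window = 12
x = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

def make_model():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(window,)),
        tf.keras.layers.Dense(1),
    ])

batch_model = make_model()
batch_model.compile(optimizer="adam", loss="mse")
batch_model.fit(x, y, batch_size=len(x), epochs=50, verbose=0)  # one update per epoch

stochastic_model = make_model()
stochastic_model.compile(optimizer="adam", loss="mse")
stochastic_model.fit(x, y, batch_size=1, epochs=5, verbose=0)   # one update per sample

print(batch_model.evaluate(x, y, verbose=0),
      stochastic_model.evaluate(x, y, verbose=0))
```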

A Tensor Space Model based Deep Neural Network for Automated Text Classification (자동문서분류를 위한 텐서공간모델 기반 심층 신경망)

  • Lim, Pu-reum;Kim, Han-joon
    • Database Research / v.34 no.3 / pp.3-13 / 2018
  • Text classification is a text-mining technology that assigns a given document to its appropriate categories, and it is used in fields such as spam email detection, news classification, question answering, sentiment analysis, and chatbots. In general, text classification systems use machine learning algorithms, and among these, naïve Bayes and support vector machines, which suit text data well, are known to perform reasonably. Recently, with the development of deep learning, studies applying deep neural networks such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs) have been introduced to improve classification performance. However, current techniques have not yet reached a fully satisfactory level of text classification. This paper focuses on the fact that representing a text as a vector over word dimensions alone impairs the semantic information inherent in the text, and proposes a neural network architecture based on a semantic tensor space model.
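The contrast the paper draws is between a flat vector over word dimensions and a representation with an added semantic axis, so that each document becomes a word-by-semantic-feature matrix rather than a single vector. The Keras sketch below is only a loose analogy of that idea, not the authors' architecture; the vocabulary size, semantic dimension, and four-category output are assumptions.

```python
import numpy as np
import tensorflow as tf

vocab, sem_dims, max_len = 1000, 50, 40  # hypothetical sizes

model = tf.keras.Sequential([
    # Each word id becomes a row of semantic features: the document is now a
    # (max_len x sem_dims) matrix instead of a flat bag-of-words vector.
    tf.keras.layers.Embedding(vocab, sem_dims),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g. 4 document categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

docs = np.random.randint(0, vocab, size=(8, max_len))  # toy token-id matrix
labels = np.random.randint(0, 4, size=8)
model.fit(docs, labels, epochs=1, verbose=0)
```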

Analysis of Tensor Processing Unit and Simulation Using Python (텐서 처리부의 분석 및 파이썬을 이용한 모의실행)

  • Lee, Jongbok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.3 / pp.165-171 / 2019
  • Studies of computer architecture have shown that major improvements in cost-energy-performance stem from domain-specific hardware. This paper analyzes the tensor processing unit (TPU), an ASIC that accelerates inference for artificial neural networks (NNs). The core of the TPU is a MAC matrix multiply unit capable of high-speed operation, paired with software-managed on-chip memory. The TPU's execution model can meet the response-time requirements of neural network inference better than existing CPU and GPU execution models, with small area and low power consumption even though it has many MACs and a large memory. On TensorFlow benchmark workloads, the TPU achieves higher performance and better power efficiency than the CPU or GPU. In this paper, we analyze the TPU, simulate the Python-modeled OpenTPU, and synthesize the matrix multiplication unit, its key hardware.
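The heart of the design is the systolic MAC array. The NumPy toy below walks one multiply-accumulate wavefront per "cycle" to show how such an array builds up a matrix product; it deliberately ignores pipelining, 8-bit quantization, and the actual OpenTPU dataflow.

```python
import numpy as np

def systolic_matmul(a, b):
    """Cycle-by-cycle MAC-array view of C = A @ B (weights held stationary)."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2
    c = np.zeros((m, n))
    for cycle in range(k):            # one wavefront of partial sums per cycle
        for i in range(m):
            for j in range(n):        # each (i, j) cell does one MAC per cycle
                c[i, j] += a[i, cycle] * b[cycle, j]
    return c

a = np.arange(6, dtype=float).reshape(2, 3)
b = np.arange(12, dtype=float).reshape(3, 4)
print(np.allclose(systolic_matmul(a, b), a @ b))  # True
```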

Performance Evaluation of Recurrent Neural Network Algorithms for Recommendation System in E-commerce (전자상거래 추천시스템을 위한 순환신경망 알고리즘들의 성능평가)

  • Seo, Jihye;Yong, Hwan-Seung
    • KIISE Transactions on Computing Practices / v.23 no.7 / pp.440-445 / 2017
  • With the advance of e-commerce systems, the number of people shopping online has increased significantly, so the need for accurate recommendation systems has become increasingly important. A recurrent neural network is a deep learning algorithm that exploits sequential information during training. In this paper, we evaluate the application of recurrent neural networks to recommendation systems. We evaluated three recurrent algorithms (RNN, LSTM, and GRU) and three commonly used optimization algorithms (Adagrad, RMSProp, and Adam). In the experiments, we used Google's open-source TensorFlow library and e-commerce session data from the RecSys Challenge 2015. The results obtained with the optimal hyperparameters found in this study are compared with those of the RecSys Challenge 2015 participants.
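In Keras, the nine combinations evaluated here reduce to swapping the recurrent layer class and the optimizer string. A minimal sketch with random stand-in session data (the item counts, sequence length, and layer sizes are assumptions):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for session data: sequences of item ids, predict the next item
n_items, seq_len = 200, 10
x = np.random.randint(0, n_items, size=(64, seq_len))
y = np.random.randint(0, n_items, size=64)

def build(cell, optimizer):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(n_items, 32),
        cell(32),  # SimpleRNN, LSTM, or GRU
        tf.keras.layers.Dense(n_items, activation="softmax"),
    ])
    model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy")
    return model

for cell in (tf.keras.layers.SimpleRNN, tf.keras.layers.LSTM, tf.keras.layers.GRU):
    for opt in ("adagrad", "rmsprop", "adam"):
        loss = build(cell, opt).fit(x, y, epochs=1, verbose=0).history["loss"][-1]
        print(cell.__name__, opt, loss)
```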

A Deep Learning Performance Comparison of R and Tensorflow (R과 텐서플로우 딥러닝 성능 비교)

  • Jang, Sung-Bong
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.487-494 / 2023
  • In this study, a performance comparison was carried out between R and TensorFlow, two free deep learning tools. In the experiment, six kinds of deep neural networks were built with each tool and trained on ten years of Korean temperature data. The constructed networks had 10 nodes in the input layer and 5 nodes in the output layer, and experiments were run with hidden layers of 5, 10, and 20 nodes. The dataset comprises 3600 temperature readings collected in Gangnam-gu, Seoul from March 1, 2013 to March 29, 2023. For the comparison, each trained network predicted the temperature for the next 5 days, and the root mean square error (RMSE) between the predicted and actual values was measured. The results show that with one hidden layer the learning error was 0.04731176 for R and 0.06677193 for TensorFlow, and with two hidden layers 0.04782134 for R and 0.05799060 for TensorFlow. Overall, R showed better performance. By providing quantitative performance information on the two tools, this study aims to ease tool selection for users new to machine learning.
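The TensorFlow side of such an experiment can be sketched as below; the synthetic temperature series, the windowing, and the training settings are assumptions standing in for the paper's data and configuration.

```python
import numpy as np
import tensorflow as tf

# Synthetic daily-temperature series standing in for the 3600 Gangnam-gu readings
temps = 10 + 15 * np.sin(np.linspace(0, 60, 3600)) + np.random.normal(0, 1, 3600)
x = np.array([temps[i:i + 10] for i in range(len(temps) - 15)])       # 10 input days
y = np.array([temps[i + 10:i + 15] for i in range(len(temps) - 15)])  # next 5 days

def build(n_hidden, nodes=10):
    layers = [tf.keras.layers.Dense(nodes, activation="relu", input_shape=(10,))]
    layers += [tf.keras.layers.Dense(nodes, activation="relu")
               for _ in range(n_hidden - 1)]
    layers += [tf.keras.layers.Dense(5)]  # 5 output nodes, one per predicted day
    return tf.keras.Sequential(layers)

for n_hidden in (1, 2):
    model = build(n_hidden)
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=5, verbose=0)
    rmse = np.sqrt(np.mean((model.predict(x, verbose=0) - y) ** 2))
    print(f"{n_hidden} hidden layer(s): RMSE {rmse:.4f}")
```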

Microcontroller-based Gesture Recognition using 1D CNN (1D CNN을 이용한 마이크로컨트롤러기반 제스처 인식)

  • Kim, Ji-Hye;Choi, Kwon-Taeg
    • Proceedings of the Korean Society of Computer Information Conference / 2021.01a / pp.219-220 / 2021
  • This paper proposes an optimized training method for recognizing gestures on a microcontroller using a 6-axis IMU sensor. When the six sensor channels are sampled 119 times, the feature dimension is very large, so with a multilayer neural network the number of training parameters exceeds the microcontroller's memory budget. This paper proposes a 1D CNN optimized for microcontrollers that effectively reduces the number of training parameters while maintaining performance, as sketched below.
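The memory problem comes from flattening the 119 x 6 window straight into dense layers (714 inputs); convolution and pooling shrink the representation before any dense weights appear. A hedged Keras sketch (the filter counts and three-class output are assumptions, not the authors' exact network):

```python
import tensorflow as tf

# One gesture window: 119 samples of a 6-axis IMU (accelerometer + gyroscope)
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(8, 4, activation="relu", input_shape=(119, 6)),
    tf.keras.layers.MaxPooling1D(3),
    tf.keras.layers.Conv1D(16, 4, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),        # avoids a huge Flatten->Dense matrix
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 gesture classes (hypothetical)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # the parameter count must fit the microcontroller's memory
```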

Prediction of Wave Breaking Using Machine Learning Open Source Platform (머신러닝 오픈소스 플랫폼을 활용한 쇄파 예측)

  • Lee, Kwang-Ho;Kim, Tag-Gyeom;Kim, Do-Sam
    • Journal of Korean Society of Coastal and Ocean Engineers / v.32 no.4 / pp.262-272 / 2020
  • A large number of studies on wave breaking have been carried out, and many experimental data have been documented. On the basis of these datasets, many empirical or semi-empirical formulas, based primarily on regression analysis, have been proposed to quantitatively estimate wave breaking for engineering applications. However, wave breaking has an inherent variability, which implies that a linear statistical approach such as linear regression may be inadequate. This study presents an alternative nonlinear method that uses a neural network, one of the machine learning methods, to estimate breaking wave height and breaking depth. The network is modeled with TensorFlow, the machine learning open-source platform distributed by Google, trained on a randomly selected portion of the collected experimental data, and evaluated on the data withheld from training. The breaking wave heights and depths predicted by the fully trained network are more accurate than those obtained from existing empirical formulas. These results show that neural networks are a useful tool for predicting wave breaking.
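As a regression task this is a small two-output network. The sketch below assumes deep-water wave height, period, and beach slope as inputs and uses toy targets; the actual features and experimental data are those collected in the paper, not reproduced here.

```python
import numpy as np
import tensorflow as tf

# Hypothetical inputs: deep-water wave height H0, period T, beach slope m
X = np.random.rand(500, 3).astype("float32")
# Toy targets standing in for measured breaking height Hb and breaking depth hb
y = np.stack([0.8 * X[:, 0] + 0.1 * X[:, 2], X[:, 0] / 0.78], axis=1).astype("float32")

idx = np.random.permutation(len(X))  # random train/test split, as in the paper
train, test = idx[:400], idx[400:]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="tanh", input_shape=(3,)),
    tf.keras.layers.Dense(16, activation="tanh"),
    tf.keras.layers.Dense(2),        # outputs: [Hb, hb]
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[train], y[train], epochs=50, verbose=0)
print("held-out MSE:", model.evaluate(X[test], y[test], verbose=0))
```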

A Comparative Study of Machine Learning Algorithms Based on Tensorflow for Data Prediction (데이터 예측을 위한 텐서플로우 기반 기계학습 알고리즘 비교 연구)

  • Abbas, Qalab E.;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems / v.10 no.3 / pp.71-80 / 2021
  • Selecting an appropriate neural network algorithm is an important step toward accurate data prediction in machine learning. Many algorithms based on artificial neural networks have been devised to predict future data efficiently, including deep neural networks (DNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and gated recurrent unit (GRU) networks. Developers face difficulty choosing among these networks because sufficient information on their performance is unavailable. To alleviate this difficulty, we evaluated each algorithm by comparing prediction errors and processing times. Each neural network model was trained on a tax dataset, and the trained models were used for prediction so that their accuracies could be compared. The effects of activation functions and of various optimizers on model performance were also analyzed. The experimental results show that the GRU and LSTM algorithms yield the lowest prediction error, with an average RMSE of 0.12 and average R2 scores of 0.78 and 0.75 respectively, while the basic DNN model achieves the lowest processing time but the highest average RMSE, 0.163. Furthermore, the Adam optimizer yields the best performance with the DNN, GRU, and LSTM in terms of error, and the worst in terms of processing time. The findings of this study are expected to be useful to scientists and developers.
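The two headline metrics are straightforward to compute from a model's predictions. A small NumPy sketch of RMSE and the R2 score (the numbers are toy values, not the paper's results):

```python
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1 - ss_res / ss_tot

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.9])
print(rmse(y_true, y_pred), r2(y_true, y_pred))
```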

Image Classification of Damaged Bolts using Convolution Neural Networks (합성곱 신경망을 이용한 손상된 볼트의 이미지 분류)

  • Lee, Soo-Byoung;Lee, Seok-Soon
    • Journal of Aerospace System Engineering / v.16 no.4 / pp.109-115 / 2022
  • The convolutional neural network (CNN) algorithm, which combines a deep learning technique with computer vision technology, makes image classification feasible on high-performance computing systems. In this paper, the CNN algorithm is applied to a classification problem using TensorFlow, a typical deep learning framework, together with standard machine learning techniques. The dataset required for supervised learning is generated from bolts of the same type, some with undamaged threads and others with damaged threads. Even when trained on a small amount of data, the model showed good performance in detecting damage in bolt images. In addition, model performance is reviewed while varying the number of convolution layers and selectively applying techniques that alleviate overfitting and underfitting.
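Varying the convolution depth is easiest to express as a small builder function. A hedged Keras sketch (the 64 x 64 grayscale input, filter counts, and the dropout choice are assumptions, not the paper's exact configuration):

```python
import tensorflow as tf

def build(n_conv):
    """Tiny binary bolt classifier; n_conv varies the convolution depth."""
    layers = [tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 1)),
              tf.keras.layers.MaxPooling2D()]
    for i in range(1, n_conv):
        layers += [tf.keras.layers.Conv2D(8 * 2 ** i, 3, activation="relu"),
                   tf.keras.layers.MaxPooling2D()]
    layers += [tf.keras.layers.Flatten(),
               tf.keras.layers.Dropout(0.5),  # one option for alleviating overfitting
               tf.keras.layers.Dense(1, activation="sigmoid")]  # damaged vs. undamaged
    return tf.keras.Sequential(layers)

model = build(n_conv=2)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```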