Deep Learning Frameworks


A Comparative Analysis of Deep Learning Frameworks for Image Learning (이미지 학습을 위한 딥러닝 프레임워크 비교분석)

  • Kim, Jong-min; Lee, Dong-Hwi
    • Convergence Security Journal, v.22 no.4, pp.129-133, 2022
  • Deep learning frameworks are still evolving, and many are available. Typical deep learning frameworks include TensorFlow, PyTorch, and Keras. These frameworks apply optimized models to image classification through image learning. In this paper, we use TensorFlow and PyTorch, the frameworks most widely used in the deep learning image recognition field, to train on images, and we compare and analyze the results of this process to determine the better-optimized framework.
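
Illustrative sketch (editor's addition, not the paper's code): the kind of image-learning experiment the abstract describes can be set up in a few lines of PyTorch. The dataset (CIFAR-10), the small CNN, and all hyperparameters below are assumptions for illustration only.

    # Editor's illustrative sketch: minimal PyTorch image-classification loop.
    # Dataset, model, and hyperparameters are assumptions, not the paper's setup.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    train_set = datasets.CIFAR10(root="data", train=True, download=True,
                                 transform=transforms.ToTensor())
    loader = DataLoader(train_set, batch_size=64, shuffle=True)

    model = nn.Sequential(                       # small CNN for 32x32 RGB images
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 8 * 8, 10),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(2):                       # short run for illustration
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()                      # autograd computes all gradients
            optimizer.step()
        print(f"epoch {epoch}: last batch loss {loss.item():.4f}")

An equivalent loop in TensorFlow (for example, via Keras's model.fit) would be the natural counterpart for the comparison the paper runs.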

A Review of Deep Learning Research

  • Mu, Ruihui; Zeng, Xiaoqin
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.1738-1764, 2019
  • With the advent of big data, deep learning technology has become an important research direction in the field of machine learning and has been widely applied in image processing, natural language processing, speech recognition, online advertising, and so on. This paper introduces deep learning techniques from various aspects, including common models of deep learning and their optimization methods, commonly used open-source frameworks, existing problems, and future research directions. First, we introduce the applications of deep learning; second, we introduce several common models of deep learning and their optimization methods; third, we describe several common frameworks and platforms for deep learning; finally, we introduce the latest acceleration technology for deep learning and highlight future work.

CNN model transfer learning comparative analysis based on deep learning for image classification (이미지 분류를 위한 딥러닝 기반 CNN모델 전이 학습 비교 분석)

  • Lee, Dong-jun; Jeon, Seung-Je; Lee, DongHwi
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.370-373, 2022
  • Recently, various deep learning frameworks such as TensorFlow, PyTorch, and Keras have appeared. CNNs (Convolutional Neural Networks) are applied to image recognition using these frameworks, and optimized models are mainly used for image classification. In this paper, based on the results of training a CNN model with PyTorch and TensorFlow, the frameworks most often used in deep learning image recognition, the two frameworks are compared and analyzed for image analysis, and an optimized framework is derived.
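
Illustrative sketch (editor's addition, not the paper's code): a typical transfer-learning setup of the kind the abstract describes, in PyTorch. The backbone (ResNet-18), the 10-class head, and the frozen-feature strategy are assumptions for illustration.

    # Editor's illustrative transfer-learning sketch in PyTorch; the backbone
    # (ResNet-18) and the 10-class head are assumptions, not the paper's setup.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained backbone (downloads weights on first use).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False                 # freeze pretrained features
    model.fc = nn.Linear(model.fc.in_features, 10)  # new 10-class trainable head

    # Only the new head's parameters are optimized during fine-tuning.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

Training then proceeds with the usual loop while the backbone's weights stay fixed.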


Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin; Ahn, SungMahn; Yang, Jiheon; Lee, Jaejoon
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.1-17, 2017
  • The deep learning framework is software designed to help develop deep learning models. Two of its important functions are "automatic differentiation" and "utilization of GPUs". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can differentiate any node with respect to any variable by applying the chain rule of calculus. First, coding convenience ranks in the order of CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. CNTK and Tensorflow are easier to implement with because they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding such as Theano's, we can implement and test any new deep learning model or search method we can think of. As for execution speed, there is no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment was not kept the same: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. And for those learning deep learning models, the availability of sufficient examples and references matters as well.
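
Illustrative sketch (editor's addition, not code from the paper): the automatic differentiation the abstract describes can be captured in plain Python. Each node of the computational graph records the partial derivative on each incoming edge, and backward() accumulates gradients along every path using the chain rule. The Value class and its interface are assumptions for illustration.

    # Editor's minimal reverse-mode automatic differentiation sketch.
    # Each node stores (parent, local partial derivative) pairs for its
    # incoming edges; backward() applies the chain rule along every path.
    class Value:
        def __init__(self, data, parents=()):
            self.data = data
            self.grad = 0.0
            self._parents = parents    # (node, local partial derivative) pairs

        def __add__(self, other):
            return Value(self.data + other.data, ((self, 1.0), (other, 1.0)))

        def __mul__(self, other):
            return Value(self.data * other.data,
                         ((self, other.data), (other, self.data)))

        def backward(self, grad=1.0):
            self.grad += grad
            for parent, local in self._parents:
                parent.backward(grad * local)    # chain rule along each edge

    # For z = x*y + x at x=3, y=3: dz/dx = y + 1 = 4 and dz/dy = x = 3.
    x, y = Value(3.0), Value(3.0)
    z = x * y + x
    z.backward()
    print(x.grad, y.grad)    # 4.0 3.0

This path-by-path recursion is simple but worst-case exponential; production frameworks instead traverse the graph once in reverse topological order.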

Deep Learning Model Parallelism (딥러닝 모델 병렬 처리)

  • Park, Y.M.; Ahn, S.Y.; Lim, E.J.; Choi, Y.S.; Woo, Y.C.; Choi, W.
    • Electronics and Telecommunications Trends, v.33 no.4, pp.1-13, 2018
  • Deep learning (DL) models have been widely applied to AI applications such as image recognition and language translation with big data. Recently, DL models have become larger and more complicated, and have been combined with one another. To accelerate the training of large-scale deep learning models, a few distributed deep learning frameworks provide model parallelism, which partitions the model parameters for non-shared parallel access and updates across multiple machines. As a training acceleration method, however, model parallelism is not as commonly used as data parallelism owing to the difficulty of making it efficient. This paper provides a comprehensive survey of the state of the art in model parallelism by comparing the implementation technologies in several deep learning frameworks that support it, and suggests future research directions for improving model parallelism technology.
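
Illustrative sketch (editor's addition, not the survey's code): a minimal model-parallel layout in PyTorch, partitioning a model's parameters across two GPUs and moving activations between devices in the forward pass. The two-GPU layout and layer sizes are assumptions; the code requires devices cuda:0 and cuda:1.

    # Editor's model-parallelism sketch in PyTorch: parameters are partitioned
    # across two GPUs and activations are shipped between them in forward().
    # Assumes two CUDA devices, cuda:0 and cuda:1.
    import torch
    import torch.nn as nn

    class TwoGPUModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.part1 = nn.Linear(1024, 4096).to("cuda:0")  # shard on GPU 0
            self.part2 = nn.Linear(4096, 10).to("cuda:1")    # shard on GPU 1

        def forward(self, x):
            h = torch.relu(self.part1(x.to("cuda:0")))
            return self.part2(h.to("cuda:1"))   # move activations to GPU 1

    model = TwoGPUModel()
    out = model(torch.randn(32, 1024))          # output tensor lives on cuda:1

Unlike data parallelism, no parameter is replicated: each device holds and updates only its own shard, at the cost of inter-device activation transfers.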

Interworking technology of neural network and data among deep learning frameworks

  • Park, Jaebok; Yoo, Seungmok; Yoon, Seokjin; Lee, Kyunghee; Cho, Changsik
    • ETRI Journal, v.41 no.6, pp.760-770, 2019
  • Based on the growing demand for neural network technologies, various neural network inference engines are being developed. However, each inference engine has its own neural network storage format, and there is a growing demand for standardization to solve this problem. This study presents interworking techniques for ensuring the compatibility of neural networks and data among the various deep learning frameworks. The proposed technique standardizes the graph expression grammar and learning-data storage format using the Neural Network Exchange Format (NNEF) of Khronos. The proposed converter includes a lexical analyzer, a syntax analyzer, and a parser. This NNEF parser converts neural network information into a parse tree and quantizes the data. To validate the proposed system, we verified that MNIST runs immediately after importing AlexNet's neural network and learned data. This study thus contributes an efficient design technique for a converter that can execute a neural network and learned data in various frameworks regardless of each framework's storage format.
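
Hypothetical sketch (editor's addition, not the study's converter): the pipeline the abstract describes, lexical analysis, parsing into a tree, and quantization of learned data, reduced to a toy Python example. The simplified NNEF-like grammar, the function names, and the 8-bit uniform quantization are all assumptions.

    # Editor's toy converter pipeline: tokenize -> parse tree -> quantize.
    # The grammar below is a drastic simplification of NNEF, for illustration.
    import re

    def tokenize(src):
        # Split an NNEF-like fragment into identifiers, numbers, and symbols.
        return re.findall(r"[A-Za-z_]\w*|\d+\.?\d*|[(),=;]", src)

    def parse(tokens):
        # Build a flat parse "tree": one {name, op, args} node per statement.
        tree, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i + 1] == "=":
                name, op = tokens[i], tokens[i + 2]
                end = tokens.index(";", i)
                args = [t for t in tokens[i + 4:end] if t not in "(),"]
                tree.append({"name": name, "op": op, "args": args})
                i = end + 1
            else:
                i += 1
        return tree

    def quantize(weights, bits=8):
        # Uniform quantization of learned data to signed integers plus a scale.
        scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
        return [round(w / scale) for w in weights], scale

    tree = parse(tokenize("conv1 = conv(input, kernel1);"))
    print(tree)  # [{'name': 'conv1', 'op': 'conv', 'args': ['input', 'kernel1']}]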

A Comparative Performance Analysis of Spark-Based Distributed Deep-Learning Frameworks (스파크 기반 딥 러닝 분산 프레임워크 성능 비교 분석)

  • Jang, Jaehee; Park, Jaehong; Kim, Hanjoo; Yoon, Sungroh
    • KIISE Transactions on Computing Practices, v.23 no.5, pp.299-303, 2017
  • By piling up hidden layers in artificial neural networks, deep learning delivers outstanding performance on high-level abstraction problems such as object/speech recognition and natural language processing. However, deep-learning users often struggle with the tremendous amounts of time and resources required to train deep neural networks. To alleviate this computational challenge, many approaches have been proposed in a diversity of areas. In this work, two existing Apache Spark-based acceleration frameworks for deep learning (SparkNet and DeepSpark) are compared and analyzed in terms of training accuracy and time demands. In the authors' experiments with the CIFAR-10 and CIFAR-100 benchmark datasets, SparkNet showed more stable convergence behavior than DeepSpark, but in terms of training accuracy, DeepSpark delivered approximately 15% higher classification accuracy. For some of the cases, DeepSpark also outperformed the sequential implementation running on a single machine in terms of both accuracy and running time.
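
Illustrative sketch (editor's addition, not SparkNet or DeepSpark code): a NumPy simulation of the iterative parameter-averaging scheme such Spark-based frameworks build on. Each simulated worker runs local SGD on its data shard, then a driver averages the weights; the linear-regression task, shard count, and step sizes are assumptions.

    # Editor's simulation of iterative parameter averaging (the scheme that
    # Spark-based training frameworks build on), on a toy regression task.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(512, 10))
    true_w = rng.normal(size=10)
    y = X @ true_w

    def local_sgd(w, Xs, ys, steps=5, lr=0.05):
        # A worker's local training: plain gradient descent on its shard.
        for _ in range(steps):
            grad = 2 * Xs.T @ (Xs @ w - ys) / len(ys)
            w = w - lr * grad
        return w

    w = np.zeros(10)
    shards = np.array_split(np.arange(512), 4)    # 4 simulated workers
    for round_ in range(20):
        # Each worker starts from the driver's weights, trains locally...
        locals_ = [local_sgd(w.copy(), X[s], y[s]) for s in shards]
        w = np.mean(locals_, axis=0)              # ...then the driver averages.
    print("error:", np.linalg.norm(w - true_w))

How often the driver averages (and whether it does so synchronously) is exactly the kind of design choice that separates the convergence behavior of systems like SparkNet and DeepSpark.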

A Survey on Deep Convolutional Neural Networks for Image Steganography and Steganalysis

  • Hussain, Israr; Zeng, Jishen; Qin, Xinhong; Tan, Shunquan
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.3, pp.1228-1248, 2020
  • Steganalysis & steganography have witnessed immense progress over the past few years by the advancement of deep convolutional neural networks (DCNN). In this paper, we analyzed current research states from the latest image steganography and steganalysis frameworks based on deep learning. Our objective is to provide for future researchers the work being done on deep learning-based image steganography & steganalysis and highlights the strengths and weakness of existing up-to-date techniques. The result of this study opens new approaches for upcoming research and may serve as source of hypothesis for further significant research on deep learning-based image steganography and steganalysis. Finally, technical challenges of current methods and several promising directions on deep learning steganography and steganalysis are suggested to illustrate how these challenges can be transferred into prolific future research avenues.

A Comparison and Analysis of Deep Learning Framework (딥 러닝 프레임워크의 비교 및 분석)

  • Lee, Yo-Seob; Moon, Phil-Joo
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.12 no.1, pp.115-122, 2017
  • Deep learning is an artificial intelligence technology, built on machine learning, that enables machines to learn for themselves the way humans do. Deep learning has become one of the most promising technologies in the development of artificial intelligence for understanding the world and for detection technology, and Google, Baidu, and Facebook are leading its development. In this paper, we discuss the kinds of deep learning frameworks and compare and analyze their efficiency in the image and speech recognition fields.

Application of Different Tools of Artificial Intelligence in Translation Language

  • Mohammad Ahmed Manasrah
    • International Journal of Computer Science & Network Security, v.23 no.3, pp.144-150, 2023
  • With progressive advancements in Artificial Intelligence (AI) and Deep Learning (DL), contributing significantly to Natural Language Processing (NLP), the accuracy and quality of Machine Translation (MT) have improved manifold. There is a debate, however, on whether human translation has now become immaterial or redundant. After all, human flaws are consistently dealt with by our own creations. With the use of neural networks in machine translation, it has recently been claimed that intelligent systems can now translate on par with human translators. However, AI is still not free of the issues associated with processing a language, let alone the intricacies and complexities typical of translation. Then come the inherent biases introduced while designing intelligent systems: how we design these systems depends on who we are, thus building in a one-sided perspective and cultural experiences. Given the variety of language structures and cultures they represent, their handling by intelligent machines, even with deep learning capabilities, at human proficiency looks very far-fetched, at any rate, for the time being.