Title/Summary/Keyword: a neural-net

An efficient learning method of HMM-Net classifiers (HMM-Net 분류기의 효율적인 학습법)

  • 김상운;김탁령
    • Proceedings of the IEEK Conference / 1998.06a / pp.933-935 / 1998
  • The HMM-Net is a neural-network architecture that implements a hidden Markov model (HMM). The architecture was developed to combine the discriminant power of neural networks with the time-domain modeling capability of HMMs. Criteria used for learning HMM-Net classifiers are maximum likelihood (ML) and minimization of mean squared error (MMSE). In this paper we propose an efficient learning method for HMM-Net classifiers using a hybrid ML-MMSE criterion and report the results of an experimental study comparing the performance of HMM-Net classifiers trained by gradient descent under the above criteria. Experimental results for the isolated numeric digits /0/ to /9/ show that the proposed method outperforms the others in terms of learning and recognition rates.
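
A minimal sketch of the hybrid criterion described in this abstract, not the authors' code: the loss interpolates an ML term (negative log-likelihood of the correct class) with an MMSE term on the class posteriors. It assumes the HMM-Net outputs per-class log-likelihoods; the weight `alpha` is an illustrative choice.

```python
# Hypothetical sketch of a hybrid ML-MMSE criterion; alpha is assumed.
import torch
import torch.nn.functional as F

def hybrid_ml_mmse_loss(log_likelihoods, targets, alpha=0.5):
    """log_likelihoods: (batch, n_classes) log P(x | class) from an HMM-Net.
    targets: (batch,) integer class labels."""
    # ML term: negative log-likelihood of the correct class.
    ml = -log_likelihoods.gather(1, targets.unsqueeze(1)).mean()
    # MMSE term: squared error between class posteriors and one-hot targets.
    posteriors = F.softmax(log_likelihoods, dim=1)
    one_hot = F.one_hot(targets, log_likelihoods.size(1)).float()
    mmse = F.mse_loss(posteriors, one_hot)
    return alpha * ml + (1.0 - alpha) * mmse
```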

Neural Net Agent for Distributed Information Retrieval (분산 정보 검색을 위한 신경망 에이전트)

  • Choi, Yong-S
    • Journal of KIISE:Software and Applications / v.28 no.10 / pp.773-784 / 2001
  • Since documents on the Web are naturally partitioned into many document databases, efficient information retrieval requires identifying the document databases that are most likely to provide relevant documents for a query and then querying the identified databases. We propose a neural net agent approach to such efficient information retrieval. First, we present a neural net agent that learns about the underlying document databases from the relevance feedback obtained over many retrieval experiences. For a given query, the neural net agent, once sufficiently trained via the BPN (backpropagation network) learning mechanism, identifies the document databases associated with the relevant documents and retrieves those documents effectively. In the experiment, we introduce a neural net agent based information retrieval system and evaluate its performance by comparing experimental results with those of conventional well-known approaches.
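
A minimal sketch of the agent idea under stated assumptions (bag-of-words query vectors and binary relevance feedback per database are illustrative choices, not specifics from the paper): a small backpropagation network scores each document database for a query and is trained on past relevance feedback.

```python
# Hypothetical sketch, not the paper's agent: an MLP trained with
# backpropagation maps a query vector to one relevance score per database.
import torch
import torch.nn as nn

n_vocab, n_databases = 5000, 8        # assumed sizes
agent = nn.Sequential(
    nn.Linear(n_vocab, 64), nn.Sigmoid(),
    nn.Linear(64, n_databases), nn.Sigmoid(),
)
optimizer = torch.optim.SGD(agent.parameters(), lr=0.1)
loss_fn = nn.BCELoss()

def train_step(query_vec, feedback):
    """feedback[i] = 1.0 if database i returned relevant documents."""
    optimizer.zero_grad()
    loss = loss_fn(agent(query_vec), feedback)
    loss.backward()
    optimizer.step()
    return loss.item()

def select_databases(query_vec, k=3):
    """Query only the k databases the agent scores highest."""
    with torch.no_grad():
        return agent(query_vec).topk(k).indices
```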

A Neural Net System Self-organizing the Distributed Concepts for Speech Recognition (음성인식을 위한 분산개념을 자율조직하는 신경회로망시스템)

  • Kim, Sung-Suk;Lee, Tai-Ho
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.5 / pp.85-91 / 1989
  • In this paper, we propose a neural net system for speech recognition that is composed of two neural networks. First, a self-supervised BP (backpropagation) network generates a distributed concept corresponding to the activity pattern of its hidden units. Then a self-organizing neural network forms a concept map that directly displays the similarity relations between concepts. This resolves the difficulty of training a conventional BP network and avoids BP's weakness of degenerating into a mere pattern matcher, while retaining its strength of generating varied internal representations. The resulting concept map is more orderly than Kohonen's SOFM. The proposed neural net system requires no special preprocessing and has a self-learning ability.
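
A minimal sketch of the two-stage system under stated assumptions (sigmoid hidden layer, Gaussian neighborhood; the details are illustrative, not from the paper): hidden-unit activity of a self-supervised BP network serves as the distributed concept, and a self-organizing map arranges the concepts on a 2-D grid.

```python
# Hypothetical sketch, not the authors' system.
import numpy as np

def hidden_concepts(x, w_in):
    """Distributed concept = hidden-unit activity of the BP network."""
    return 1.0 / (1.0 + np.exp(-x @ w_in))           # sigmoid hidden layer

def som_step(concept, grid, lr=0.1, radius=1.0):
    """One self-organizing update: pull the winner and its neighbors
    toward the concept vector (Gaussian neighborhood)."""
    dists = np.linalg.norm(grid - concept, axis=-1)  # grid: (H, W, D)
    wi, wj = np.unravel_index(dists.argmin(), dists.shape)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            h = np.exp(-((i - wi) ** 2 + (j - wj) ** 2) / (2 * radius ** 2))
            grid[i, j] += lr * h * (concept - grid[i, j])
    return grid
```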

On Learning of HMM-Net Classifiers Using Hybrid Methods (하이브리드법에 의한 HMM-Net 분류기의 학습)

  • 김상운;신성효
    • Proceedings of the IEEK Conference / 1998.10a / pp.1273-1276 / 1998
  • The HMM-Net is a neural-network architecture that implements a hidden Markov model (HMM). The architecture was developed to combine the discriminant power of neural networks with the time-domain modeling capability of HMMs. Criteria used for learning HMM-Net classifiers are maximum likelihood (ML), maximum mutual information (MMI), and minimization of mean squared error (MMSE). In this paper we propose an efficient learning method for HMM-Net classifiers using the hybrid criteria ML/MMSE and MMI/MMSE, and report the results of an experimental study comparing the performance of HMM-Net classifiers trained by gradient descent under the above criteria. Experimental results for the isolated numeric digits /0/ to /9/ show that the proposed method outperforms the others in terms of learning and recognition rates.
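
For the MMI half of these hybrid criteria (the ML and MMSE terms are as in the first HMM-Net entry above), a minimal sketch under the same assumption of per-class log-likelihood outputs:

```python
# Hypothetical sketch of an MMI criterion, not the authors' code: maximize
# the likelihood of the correct class relative to the sum over all classes.
import torch

def mmi_loss(log_likelihoods, targets):
    """log_likelihoods: (batch, n_classes) log P(x | class)."""
    correct = log_likelihoods.gather(1, targets.unsqueeze(1)).squeeze(1)
    total = torch.logsumexp(log_likelihoods, dim=1)
    return -(correct - total).mean()   # negative mutual information
```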

Interworking technology of neural network and data among deep learning frameworks

  • Park, Jaebok;Yoo, Seungmok;Yoon, Seokjin;Lee, Kyunghee;Cho, Changsik
    • ETRI Journal / v.41 no.6 / pp.760-770 / 2019
  • Based on the growing demand for neural network technologies, various neural network inference engines are being developed. However, each inference engine has its own neural network storage format, so there is a growing demand for standardization to solve this problem. This study presents interworking techniques for ensuring the compatibility of neural networks and data among the various deep learning frameworks. The proposed technique standardizes the graph expression grammar and the training-data storage format using the Neural Network Exchange Format (NNEF) of Khronos. The proposed converter includes a lexical analyzer, a syntax analyzer, and a parser. The NNEF parser converts neural network information into a parse tree and quantizes the data. To validate the proposed system, we verified that MNIST runs immediately after importing AlexNet's network and trained data. This study therefore contributes an efficient design technique for a converter that can execute a neural network and trained data in various frameworks regardless of each framework's storage format.
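
A minimal sketch of the converter front end under loose assumptions (the toy grammar below is illustrative, not the actual NNEF grammar): a tokenizer splits each statement, and a parser builds a small tree of operations that a back end could then map to a target framework.

```python
# Hypothetical sketch, not the paper's converter and not real NNEF syntax.
import re

TOKEN = re.compile(r"\w+|[=(),]")

def parse_graph(text):
    """Parse lines like 'h1 = conv(input, 64)' into a list of op nodes."""
    ops = []
    for line in text.strip().splitlines():
        tokens = TOKEN.findall(line)
        output, op = tokens[0], tokens[2]        # tokens[1] is '='
        args = [t for t in tokens[3:] if t not in "(),"]
        ops.append({"output": output, "op": op, "args": args})
    return ops

print(parse_graph("h1 = conv(input, 64)\nout = softmax(h1)"))
# [{'output': 'h1', 'op': 'conv', 'args': ['input', '64']}, ...]
```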

Application and Performance Analysis of Double Pruning Method for Deep Neural Networks (심층신경망의 더블 프루닝 기법의 적용 및 성능 분석에 관한 연구)

  • Lee, Seon-Woo;Yang, Ho-Jun;Oh, Seung-Yeon;Lee, Mun-Hyung;Kwon, Jang-Woo
    • Journal of Convergence for Information Technology / v.10 no.8 / pp.23-34 / 2020
  • Recently, the deep learning field of artificial intelligence has been hard to commercialize because of the high computing power required and the price of computing resources. In this paper, we apply a double pruning technique to deep neural networks and evaluate its performance on various datasets. Double pruning combines basic network slimming and parameter pruning. The proposed technique has the advantage of removing parameters that are unimportant to learning and improving speed without compromising accuracy. After training on the various datasets, the pruning ratio was increased to reduce the size of the model. NetScore performance analysis confirmed that MobileNet-V3 showed the highest performance. On the CIFAR-10 dataset, the performance after pruning was highest for MobileNet-V3, which consists of depthwise separable convolutions, and VGGNet and ResNet among traditional convolutional neural networks also improved significantly.
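
A minimal sketch of the two halves of double pruning under stated assumptions (the keep ratio and sparsity level are illustrative): network slimming drops channels whose BatchNorm scale factor is small, and parameter pruning then zeroes the remaining low-magnitude weights.

```python
# Hypothetical sketch, not the paper's implementation.
import torch
import torch.nn as nn

def slim_channel_mask(bn: nn.BatchNorm2d, keep_ratio=0.7):
    """Network slimming: keep the channels with the largest |gamma|."""
    gamma = bn.weight.detach().abs()
    k = max(1, int(keep_ratio * gamma.numel()))
    threshold = gamma.topk(k).values.min()
    return gamma >= threshold            # boolean mask over channels

def prune_weights(conv: nn.Conv2d, sparsity=0.5):
    """Parameter pruning: zero the smallest-magnitude weights in place."""
    w = conv.weight.detach()
    threshold = w.abs().flatten().quantile(sparsity)
    conv.weight.data = torch.where(w.abs() >= threshold, w,
                                   torch.zeros_like(w))
```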

Improved Performance of Image Semantic Segmentation using NASNet (NASNet을 이용한 이미지 시맨틱 분할 성능 개선)

  • Kim, Hyoung Seok;Yoo, Kee-Youn;Kim, Lae Hyun
    • Korean Chemical Engineering Research / v.57 no.2 / pp.274-282 / 2019
  • In recent years, big data analysis has expanded to include automatic control through reinforcement learning as well as prediction through modeling. Research on the utilization of image data is actively carried out in various industrial fields such as the chemical, manufacturing, agriculture, and bio industries. In this paper, we applied NASNet, an AutoML reinforcement learning algorithm, to the DeepU-Net neural network, a modification of U-Net, to improve image semantic segmentation performance. We used BRATS2015 MRI data for performance verification. Simulation results show that DeepU-Net outperforms the U-Net neural network. To further improve segmentation performance, we removed the dropout typically applied to neural networks when the numbers of kernels and filters obtained through reinforcement learning were selected as hyperparameters of DeepU-Net. The results show a training accuracy 0.5% and a validation accuracy 0.3% better than DeepU-Net's. The results of this study can be applied to various fields such as MRI brain imaging diagnosis, thermal imaging camera abnormality diagnosis, nondestructive inspection, chemical leakage monitoring, and forest fire monitoring through CCTV.
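
A minimal sketch of the hyperparameter-search idea under stated assumptions: a random sampler stands in for the NASNet controller, and `train_and_eval` is a hypothetical helper that trains the segmentation model with a given configuration and returns validation accuracy.

```python
# Hypothetical sketch, not the paper's pipeline: search over kernel/filter
# settings and keep the best-validating configuration. A NASNet-style
# controller would replace the random sampler with a learned policy.
import random

SEARCH_SPACE = {"filters": [16, 32, 64, 128], "kernel_size": [3, 5]}

def sample_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def search(train_and_eval, trials=20):
    best_cfg, best_acc = None, -1.0
    for _ in range(trials):
        cfg = sample_config()
        acc = train_and_eval(cfg)     # assumed helper: returns val accuracy
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```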

DeepAct: A Deep Neural Network Model for Activity Detection in Untrimmed Videos

  • Song, Yeongtaek;Kim, Incheol
    • Journal of Information Processing Systems / v.14 no.1 / pp.150-161 / 2018
  • We propose a novel deep neural network model for detecting human activities in untrimmed videos. The process of human activity detection in a video involves two steps: a step to extract features that are effective in recognizing human activities in a long untrimmed video, followed by a step to detect human activities from those extracted features. To extract the rich features from video segments that could express unique patterns for each activity, we employ two different convolutional neural network models, C3D and I-ResNet. For detecting human activities from the sequence of extracted feature vectors, we use BLSTM, a bi-directional recurrent neural network model. By conducting experiments with ActivityNet 200, a large-scale benchmark dataset, we show the high performance of the proposed DeepAct model.
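
A minimal sketch of the detection head described above, under stated assumptions (the feature dimension, hidden size, and per-segment classification head are illustrative; the DeepAct feature extractors themselves are not reproduced):

```python
# Hypothetical sketch: a bidirectional LSTM over per-segment CNN features
# (e.g., C3D vectors) emitting per-segment activity scores.
import torch
import torch.nn as nn

class ActivityDetector(nn.Module):
    def __init__(self, feat_dim=4096, hidden=256, n_classes=200):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                             bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):              # feats: (batch, T, feat_dim)
        out, _ = self.blstm(feats)         # (batch, T, 2 * hidden)
        return self.head(out)              # per-segment class scores

scores = ActivityDetector()(torch.randn(2, 16, 4096))
print(scores.shape)                        # torch.Size([2, 16, 200])
```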

Real-Time Face Recognition Based on Subspace and LVQ Classifier (부분공간과 LVQ 분류기에 기반한 실시간 얼굴 인식)

  • Kwon, Oh-Ryun;Min, Kyong-Pil;Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.8 no.3 / pp.19-32 / 2007
  • This paper presents a new face recognition method based on an LVQ neural net for constructing a real-time face recognition system. Previous approaches that combined PCA or LDA with a neural net usually need much time to train the network. The supervised LVQ neural net needs much less training time and can maximize the separability between classes. The proposed method transforms the input face image by PCA and then LDA into low-dimensional feature vectors and recognizes the face through the LVQ neural net. To make the system robust to external lighting variation, light compensation is performed on the detected face by max-min normalization as preprocessing; PCA and LDA transformations are then applied to the normalized face image to produce low-dimensional feature vectors. To determine the initial centers of the LVQ net and speed up its convergence, the K-Means clustering algorithm is adopted, and the class representative vectors are produced by LVQ2 training from these initial centers. Recognition is achieved by the Euclidean distance between the class center vectors and the feature vector of the input image. The experiments show that the proposed method achieves a higher recognition rate than conventional PCA and a hybrid PCA-LDA method, both for still images from the ORL database and for sequential images.
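
A minimal sketch of the pipeline under stated assumptions (component counts and learning rate are illustrative, and a basic LVQ1-style update stands in for the LVQ2 training the paper uses):

```python
# Hypothetical sketch, not the authors' system: PCA -> LDA features,
# K-Means-initialized prototypes, LVQ refinement, nearest-prototype match.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def fit(X, y, n_pca=50, protos_per_class=2, lr=0.05, epochs=10):
    pca = PCA(n_components=n_pca).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)
    Z = lda.transform(pca.transform(X))
    protos, labels = [], []
    for c in np.unique(y):                     # K-Means init per class
        km = KMeans(n_clusters=protos_per_class, n_init=10).fit(Z[y == c])
        protos.append(km.cluster_centers_)
        labels += [c] * protos_per_class
    P, L = np.vstack(protos), np.array(labels)
    for _ in range(epochs):                    # LVQ1-style refinement
        for z, c in zip(Z, y):
            i = np.argmin(np.linalg.norm(P - z, axis=1))
            P[i] += lr * (z - P[i]) if L[i] == c else -lr * (z - P[i])
    return pca, lda, P, L

def predict(pca, lda, P, L, X):
    """Classify by Euclidean distance to the nearest prototype."""
    Z = lda.transform(pca.transform(X))
    d = np.linalg.norm(P[None, :, :] - Z[:, None, :], axis=2)
    return L[np.argmin(d, axis=1)]
```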

Efficient Fixed-Point Representation for ResNet-50 Convolutional Neural Network (ResNet-50 합성곱 신경망을 위한 고정 소수점 표현 방법)

  • Kang, Hyeong-Ju
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.1 / pp.1-8 / 2018
  • Recently, convolutional neural networks have shown high performance in many computer vision tasks. However, convolutional neural networks require an enormous amount of computation, so it is difficult to adopt them in embedded environments. To solve this problem, many studies have been performed on ASIC or FPGA implementations, where an efficient representation method is required. The fixed-point representation is adequate for ASIC or FPGA implementation but causes performance degradation. This paper proposes separately optimized representations for the convolutional layers and the batch normalization layers. With the proposed method, the required bit width for the convolutional layers of the ResNet-50 network is reduced from 16 bits to 10 bits. Since the convolutional layers account for most of the entire computation, this bit-width reduction enables an efficient implementation of convolutional neural networks.
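
A minimal sketch of the idea under stated assumptions (the fractional bit allocations below are illustrative, not taken from the paper): values are rounded to a signed fixed-point grid, with a narrower bit width for convolution weights than for batch-normalization parameters.

```python
# Hypothetical sketch, not the paper's optimization procedure.
import numpy as np

def to_fixed_point(x, total_bits, frac_bits):
    """Quantize to signed fixed point with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

conv_w = np.random.randn(64, 3, 7, 7).astype(np.float32)
bn_gamma = np.random.randn(64).astype(np.float32)

conv_q = to_fixed_point(conv_w, total_bits=10, frac_bits=8)    # 10-bit conv
bn_q = to_fixed_point(bn_gamma, total_bits=16, frac_bits=12)   # wider for BN
print(float(np.abs(conv_w - conv_q).max()))                    # quant. error
```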