• Title/Summary/Keyword: Neural Network Quantization

Sensitivity Property of Generalized CMAC Neural Network

  • Kim, Dong-Hyawn;Lee, In-Won
    • Computational Structural Engineering : An International Journal / v.3 no.1 / pp.39-47 / 2003
  • Generalized CMAC (GCMAC) is a type of neural network known for fast learning. The network may be useful in structural engineering applications such as the identification and control of structures. However, the derivatives of a trained GCMAC are relatively poor in accuracy, so a new algorithm is proposed to improve them. If GCMAC is differentiated directly, the accuracy of the derivative is unsatisfactory, owing to the quantization of the input space and the shape of the basis function used. Using the periodicity of the output predicted by GCMAC, the derivative can be improved to the point of having almost no error. Numerical examples are presented to show the accuracy of the proposed algorithm.

  • PDF

Low Bit Rate Image Coding using Neural Network (신경망을 이용한 저비트율 영상코딩)

  • 정연길;최승규;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2001.10a / pp.579-582 / 2001
  • Vector transformation is a new method that unifies vector quantization and coding. Until now, codebook generation for coding has relied on the LBG algorithm, but exploiting the advantages of the SOFM (Self-Organizing Feature Map), a neural network method, can improve system performance. In this paper, we generate a VTC (Vector Transformation Coding) codebook with the SOFM algorithm and compare the results at several coding rates against the LBG algorithm. The main difficulties of vector quantization are the complexity of its calculations and of codebook generation; to address these, we adopted a neural network approach.

  • PDF
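The SOFM-based codebook generation described above can be sketched as a minimal 1-D self-organizing map trained on the input vectors, with the trained weights used as the VQ codebook (an illustrative sketch only: the decaying learning rate, neighborhood schedule, and function names are assumptions, not the paper's exact settings):

```python
import numpy as np

def sofm_codebook(vectors, codebook_size=4, epochs=20, lr0=0.5, seed=0):
    """Train a tiny 1-D self-organizing feature map and return its weight
    vectors as a vector-quantization codebook (minimal sketch)."""
    rng = np.random.default_rng(seed)
    # initialize codewords from randomly chosen training vectors
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)  # linearly decaying learning rate
        radius = max(1, int(codebook_size / 2 * (1 - epoch / epochs)))
        for x in vectors:
            winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
            for j in range(codebook_size):
                if abs(j - winner) < radius:  # 1-D neighborhood update
                    codebook[j] += lr * (x - codebook[j])
    return codebook

def quantize(vectors, codebook):
    """Map each vector to the index of its nearest codeword."""
    return np.array([np.argmin(np.linalg.norm(codebook - x, axis=1)) for x in vectors])
```

In use, each input vector is replaced by the index of its nearest codeword, and only the indices (plus the codebook) need to be coded.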

Extracting Muscle Area with ART2 based Quantization from Rehabilitative Ultrasound Images (ART2 기반 양자화를 이용한 재활 초음파 영상에서의 근육 영역 추출)

  • Kim, Kwang-Baek
    • Journal of the Korea Society of Computer and Information / v.19 no.6 / pp.11-17 / 2014
  • While safe and convenient, ultrasound imaging analysis of the musculoskeletal system is often criticized for relying on the subjective judgment of field experts. In this paper, we propose a new automatic method to extract muscle areas using ART2 neural network based quantization. A series of image processing algorithms, such as histogram smoothing and end-in search stretching, is applied in a pre-processing phase to remove noise effectively. Muscle areas are then extracted by analyzing various morphological features. In experiments, our ART2 based quantization is verified to be more effective than other general quantization methods.
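The end-in search stretching step mentioned in the pre-processing phase can be sketched as a percentile-based contrast stretch (a generic illustration; the 5%/95% cut points are assumptions, not the paper's actual end-in search thresholds):

```python
import numpy as np

def end_in_stretch(img, low_pct=5, high_pct=95):
    """End-in contrast stretching: clip intensities outside the chosen
    percentiles and stretch the remaining range to the full 0-255 scale."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = np.clip(img.astype(float), lo, hi)       # discard extreme tails
    out = (out - lo) / max(hi - lo, 1e-12) * 255.0  # linear stretch
    return out.astype(np.uint8)
```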

IAFC (Integrated Adaptive Fuzzy Clustering) Model Using a Supervised Learning Rule for Pattern Recognition (패턴 인식을 위한 감독학습을 사용한 IAFC(Integrated Adaptive Fuzzy Clustering) 모델)

  • 김용수;김남진;이재연;지수영;조영조;이세열
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.10a / pp.153-157 / 2004
  • This paper proposes supervised IAFC neural network 1 and supervised IAFC neural network 2, which use supervised learning and can be applied to pattern recognition. Both networks use a new fuzzy learning rule obtained by fuzzifying LVQ (Learning Vector Quantization). The new fuzzy learning rule replaces the conventional learning rate with a fuzzified learning rate, which is based on a fuzzification of the conditional probability. The iris data set was used to compare the performance of supervised IAFC neural networks 1 and 2 with that of a backpropagation neural network; the experimental results showed that supervised IAFC neural network 2 outperformed the backpropagation neural network.

  • PDF
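The fuzzified learning rate idea above can be illustrated with a single LVQ-style update in which the step size is scaled by a fuzzy membership value (a sketch only: the fuzzy c-means membership formula used here is an assumption, not the paper's exact fuzzification of the conditional probability):

```python
import numpy as np

def fuzzy_lvq_step(prototypes, labels, x, y, base_lr=0.1, m=2.0):
    """One LVQ-style update with a fuzzified learning rate: the winning
    prototype moves toward x if its class matches label y, away otherwise,
    with the step scaled by x's fuzzy membership in the winner."""
    d = np.linalg.norm(prototypes - x, axis=1) + 1e-12
    # FCM-style membership of x in each prototype
    u = (d[:, None] / d[None, :]) ** (2 / (m - 1))
    memberships = 1.0 / u.sum(axis=1)
    w = np.argmin(d)                         # winning prototype
    lr = base_lr * memberships[w]            # fuzzified learning rate
    sign = 1.0 if labels[w] == y else -1.0   # attract on match, repel on mismatch
    prototypes[w] += sign * lr * (x - prototypes[w])
    return prototypes
```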

Speaker Identification using Incremental Neural Network and LPCC (Incremental Neural Network 과 LPCC을 이용한 화자인식)

  • 허광승;박창현;이동욱;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2002.12a / pp.341-344 / 2002
  • Speech carries the characteristics of its speaker. This paper introduces a speaker identification system that uses neural network based incremental learning. Sentences recorded through a computer are transformed into the frequency domain by an FFT, and vowels are extracted using formants, which carry the vowels' characteristics. The extracted vowels are processed by LPC to obtain coefficients that capture the speaker's characteristics. Through LPCC processing and vector quantization, ten feature points are fed in as training input, and speaker identification is performed by a neural network whose hidden and output layers grow with the number of speakers.

Forward Viterbi Decoder applied LVQ Network (LVQ Network를 적용한 순방향 비터비 복호기)

  • Park Ji woong
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.12A / pp.1333-1339 / 2004
In IS-95 and IMT-2000 systems, which use variable code rates and constraint lengths, this paper restricts the code rate to 1/2 and the constraint length to 3, and shows how PM (Path Metric) and BM (Branch Metric) memories and comparison operations can be effectively reduced by applying PVSL (Prototype Vector Selecting Logic) and neural network LVQ (Learning Vector Quantization), simplifying the system and enabling forward decoding. Because the new structure and algorithm can be applied to an existing Viterbi decoder with only simple modifications, regardless of the constraint length, the paper presents the new Viterbi decoder and the applied algorithm, verifies its soundness through VHDL simulation, and compares its performance with that of the existing decoder.

Fuzzy Neural Network Model Using A Learning Rule Considering the Distances Between Classes (클래스간의 거리를 고려한 학습법칙을 사용한 퍼지 신경회로망 모델)

  • Kim Yong-Soo;Baek Yong-Sun;Lee Se-Yul
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.4 / pp.460-465 / 2006
This paper presents a new fuzzy learning rule that considers the Euclidean distances between the input vector and the class prototypes. The new fuzzy learning rule is integrated into supervised IAFC neural network 4, which is both stable and plastic. We used the iris data set to compare the performance of supervised IAFC neural network 4 with those of a backpropagation neural network and the LVQ algorithm.

Compression of DNN Integer Weight using Video Encoder (비디오 인코더를 통한 딥러닝 모델의 정수 가중치 압축)

  • Kim, Seunghwan;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.26 no.6 / pp.778-789 / 2021
  • Recently, various lightweight methods for using Convolutional Neural Network (CNN) models on mobile devices have emerged. Weight quantization, which lowers the bit precision of weights, is a lightweight method that lets a model run on integer arithmetic in mobile environments where GPU acceleration is unavailable. It has already been used in various models to reduce computational complexity and model size with a small loss of accuracy. Considering memory size and computing speed, as well as the device's storage and the limited network environment, this paper proposes compressing the integer weights after quantization using a video codec. To verify the proposed method, experiments were conducted on VGG16, ResNet50, and ResNet18 models trained on the ImageNet and Places365 datasets. As a result, an accuracy loss of less than 2% and high compression efficiency were achieved across the models. In addition, compared with similar compression methods, the compression efficiency was more than double.
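The weight-quantization step that precedes codec compression can be sketched as standard uniform symmetric quantization to 8-bit integers (a generic sketch of that common step, not the paper's specific pipeline):

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Uniform symmetric quantization of float weights to signed 8-bit
    integers; the scale maps the largest magnitude to the integer maximum."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for 8 bits
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return q.astype(np.float32) * scale
```

The resulting integer tensor is what would then be reshaped into frames and fed to the video encoder, which exploits spatial redundancy among neighboring weights.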

A Performance Comparison of Backpropagation Neural Networks and Learning Vector Quantization Techniques for Sundanese Characters Recognition

  • Haviluddin;Herman Santoso Pakpahan;Dinda Izmya Nurpadillah;Hario Jati Setyadi;Arif Harjanto;Rayner Alfred
    • International Journal of Computer Science & Network Security / v.24 no.3 / pp.101-106 / 2024
  • This article compares the accuracy of the Backpropagation Neural Network (BPNN) and Learning Vector Quantization (LVQ) approaches in recognizing Sundanese characters. In experiments, the BPNN technique achieved 95.23% accuracy and the LVQ technique 66.66%, while the learning time required was 2 minutes 45 seconds for BPNN and 17 minutes 22 seconds for LVQ. The results indicate that the BPNN technique is better than the LVQ technique at recognizing Sundanese characters in both accuracy and learning time.

Fuzzy Neural Network Using a Learning Rule utilizing Selective Learning Rate (선택적 학습률을 활용한 학습법칙을 사용한 신경회로망)

  • Baek, Young-Sun;Kim, Yong-Soo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.5 / pp.672-676 / 2010
  • This paper presents a learning rule that weights data near the decision boundary more heavily. The rule produces a better decision boundary by reducing the effect of outliers on it. The proposed learning rule is integrated into the IAFC neural network, which is stable enough to maintain previous learning results and plastic enough to learn new data. The performance of the proposed fuzzy neural network is compared with those of the LVQ neural network and the backpropagation neural network; the results show that the proposed network performs better than both.
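The idea of weighting data near the decision boundary can be illustrated with a selective learning rate that grows as the two nearest prototypes become equidistant from the input (purely illustrative: the ratio-based formula below is an assumption, not the paper's actual rule):

```python
import numpy as np

def boundary_weighted_lr(prototypes, x, base_lr=0.1):
    """Selective learning rate: the ratio of the two smallest prototype
    distances approaches 1 for inputs near the decision boundary, so such
    inputs receive a learning rate close to base_lr, while inputs deep
    inside a class region (or outliers) receive a much smaller one."""
    d = np.sort(np.linalg.norm(prototypes - x, axis=1))
    return base_lr * d[0] / (d[1] + 1e-12)
```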