• Title/Abstract/Keyword: incremental learning

Search results: 142 items (processing time: 0.026 s)

Incremental Learning을 이용한 화자 인식 (The Speaker Identification Using Incremental Learning)

  • 심귀보;허광승;박창현;이동욱
    • 한국지능시스템학회논문지 / Vol. 13, No. 5 / pp.576-581 / 2003
  • Speech contains speaker-specific characteristics. This paper proposes a speaker identification system, based on a neural network with incremental learning, that is not limited by the number of speakers. Speech signals recorded through a computer are classified into voiced and unvoiced sounds by an End Detection process, and 12th-order cepstral coefficients are extracted using LPC. These coefficients are used as the learning input for speaker identification. Incremental learning is a training method that keeps the weights already learned and trains only on new data; because the neural network structure grows with the number of speakers, learning is possible without any restriction on the number of speakers.
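The front end described above (LPC analysis yielding 12th-order cepstral coefficients per frame) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes librosa for loading and LPC analysis, omits the End Detection step, and the file name and frame size are hypothetical.

```python
import numpy as np
import librosa

def lpc_cepstrum(frame, order=12):
    """One speech frame -> 12th-order LPC cepstral coefficients."""
    a = librosa.lpc(frame, order=order)                      # a[0] == 1.0
    c = np.zeros(order)
    for n in range(1, order + 1):                            # standard LPC-to-cepstrum recursion
        c[n - 1] = -a[n] - sum((k / n) * c[k - 1] * a[n - k] for k in range(1, n))
    return c

y, sr = librosa.load("speaker.wav", sr=16000)                # hypothetical recording
frame = y[:1024]                                             # one frame of 1024 samples
features = lpc_cepstrum(frame)                               # 12-dimensional learning input
```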

함수 근사를 위한 점증적 서포트 벡터 학습 방법 (Incremental Support Vector Learning Method for Function Approximation)

  • 임채환;박주영
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 하계종합학술대회 논문집(3) / pp.135-138 / 2002
  • This paper addresses an incremental learning method for regression. The SVM (support vector machine) is a recently proposed learning method. In general, training a support vector machine requires solving a QP (quadratic programming) problem. For very large or incrementally arriving datasets, solving the full QP problem can be impractical, so this paper presents an incremental support vector learning method for function approximation problems.
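The motivation above (avoiding one large QP over all data) is often addressed by retraining only on the retained support vectors plus each newly arrived chunk. The sketch below illustrates that common scheme with scikit-learn's SVR; it is a stand-in under those assumptions, not the authors' specific algorithm.

```python
import numpy as np
from sklearn.svm import SVR

def incremental_svr(chunks, C=10.0, epsilon=0.1):
    """Refit only on retained support vectors plus each newly arrived chunk."""
    model = SVR(kernel="rbf", C=C, epsilon=epsilon)
    X_keep, y_keep = np.empty((0, 1)), np.empty(0)
    for X_new, y_new in chunks:
        X = np.vstack([X_keep, X_new])
        y = np.concatenate([y_keep, y_new])
        model.fit(X, y)                                        # QP solved on the reduced set only
        X_keep, y_keep = X[model.support_], y[model.support_]  # carry support vectors forward
    return model

# Toy usage: approximate sin(x) from three successive data chunks.
rng = np.random.default_rng(0)
xs = np.sort(rng.uniform(0, 2 * np.pi, 300)).reshape(-1, 1)
ys = np.sin(xs).ravel() + 0.05 * rng.standard_normal(300)
model = incremental_svr((xs[i:i + 100], ys[i:i + 100]) for i in range(0, 300, 100))
```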


Speaker Identification Based on Incremental Learning Neural Network

  • Heo, Kwang-Seung;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 5, No. 1 / pp.76-82 / 2005
  • A speech signal carries various speaker-specific features, which are extracted through speech signal processing and used for speaker identification. In this paper, we propose a speaker identification system based on an incremental learning neural network. Speech recorded through a microphone is blocked into frames of 1024 samples, and an energy criterion separates the signal into voiced and unvoiced segments. The extracted 12th-order LPC cepstral coefficients are used as the input data for the neural network, which identifies the speakers. The network is an MLP with 12 input nodes, 8 hidden nodes, and 4 output nodes, where the number of output nodes corresponds to the number of identified speakers; the first output node is excited by the first speaker. Incremental learning begins when a new speaker appears: the weights already learned are retained, and only the new weights created by adding the new speaker are trained. This overcomes the retraining drawback of conventional neural networks. The network repeats this learning whenever a new speaker is entered, and its architecture is extended with the number of speakers, so the system can learn without a restriction on the number of speakers.
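A minimal sketch of the growing-output-layer idea follows: previously learned weights are kept, and only the weights attached to the newly added output node are trained. The layer sizes follow the abstract (12 inputs, 8 hidden nodes); the activation, optimiser, and training step are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class IncrementalSpeakerNet(nn.Module):
    def __init__(self, n_in=12, n_hidden=8):
        super().__init__()
        self.hidden = nn.Linear(n_in, n_hidden)
        self.heads = nn.ModuleList()                         # one output node per speaker

    def forward(self, x):
        h = torch.tanh(self.hidden(x))
        return torch.cat([head(h) for head in self.heads], dim=-1)

    def add_speaker(self):
        """Freeze the weights learned so far and attach one new trainable output node."""
        for p in self.parameters():
            p.requires_grad_(False)
        head = nn.Linear(self.hidden.out_features, 1)
        self.heads.append(head)
        return list(head.parameters())                       # only these go to the optimiser

# Usage: earlier speakers are assumed already trained; a 5th speaker now arrives.
net = IncrementalSpeakerNet()
for _ in range(4):                                           # register the initial 4 speakers
    net.add_speaker()
new_params = net.add_speaker()                               # new speaker -> new output node
opt = torch.optim.SGD(new_params, lr=0.1)
x = torch.randn(32, 12)                                      # batch of 12-dim cepstral vectors
y = torch.full((32,), 4, dtype=torch.long)                   # frames labelled as the 5th speaker
opt.zero_grad()
nn.CrossEntropyLoss()(net(x), y).backward()
opt.step()                                                   # old weights stay untouched
```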

A New Incremental Learning Algorithm with Probabilistic Weights Using Extended Data Expression

  • Yang, Kwangmo;Kolesnikova, Anastasiya;Lee, Won Don
    • Journal of information and communication convergence engineering / Vol. 11, No. 4 / pp.258-267 / 2013
  • A new incremental learning algorithm using the extended data expression, based on probabilistic compounding, is presented in this paper. The incremental learning algorithm generates an ensemble of weak classifiers and combines them into a strong classifier by weighted majority voting to improve classification performance. We introduce a new probabilistic weighted majority voting founded on the extended data expression, in which the class distribution of the output is used to combine the classifiers. UChoo, a decision tree classifier for the extended data expression, is used as the base classifier because it yields an extended output that defines the class distribution. The extended data expression and the UChoo classifier are powerful techniques for classification and rule-refinement problems. In this paper, the extended data expression is applied to obtain probabilistic results with probabilistic majority voting. To show its performance advantages, the new algorithm is compared with Learn++, an incremental ensemble-based algorithm.
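The voting rule can be illustrated as below: each ensemble member contributes a weighted class distribution, and the class with the largest combined mass wins. Since UChoo is not publicly packaged, scikit-learn decision trees (which also expose class distributions via predict_proba) stand in for it, and the log-based member weighting is a common choice rather than necessarily the paper's.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class ProbabilisticEnsemble:
    """Weak classifiers are added per data batch and combined by weighted class distributions."""

    def __init__(self):
        self.members, self.weights = [], []

    def add_batch(self, X, y):
        clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
        err = np.clip(1.0 - clf.score(X, y), 1e-6, 1 - 1e-6)
        self.members.append(clf)
        self.weights.append(np.log((1.0 - err) / err))       # lower-error members vote louder

    def predict(self, X):
        # Assumes every batch contained all classes, so the distributions align.
        votes = sum(w * clf.predict_proba(X) for w, clf in zip(self.weights, self.members))
        return np.argmax(votes, axis=1)
```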

Text-Independent Speaker Identification System Based On Vowel And Incremental Learning Neural Networks

  • Heo, Kwang-Seung;Lee, Dong-Wook;Sim, Kwee-Bo
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2003년도 ICCAS / pp.1042-1045 / 2003
  • In this paper, we propose a speaker identification system that uses vowels carrying speaker-specific characteristics. The system is divided into a speech-feature-extraction part and a speaker-identification part. The feature-extraction part extracts the speaker's features: voiced speech carries the characteristics that distinguish speakers, so formants obtained by frequency analysis of voiced speech are used for vowel extraction, and the vowel /a/, which has distinctive formants, is extracted from the text. Pitch, formants, intensity, log area ratios, LP coefficients, and cepstral coefficients are candidate features; the cepstral coefficients, which show the best speaker-identification performance among these methods, are used. The speaker-identification part distinguishes speakers with a neural network, using 12th-order cepstral coefficients as the learning input. The network is an MLP trained with BP (backpropagation), and hidden and output nodes are added incrementally. The nodes of the incremental learning network are interconnected via weighted links, and each node in a layer is generally connected to every node in the succeeding layer, with the output nodes providing the network's output. Through vowel extraction and incremental learning, the proposed system requires less training data, reduces learning time, and improves the identification rate.
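The formant-based vowel spotting described above can be sketched by reading formants off the roots of an LPC polynomial fitted to each voiced frame. The LPC order, frame length, file name, and the /a/ decision thresholds below are illustrative assumptions, not the paper's values.

```python
import numpy as np
import librosa

def estimate_formants(frame, sr=16000, order=10):
    a = librosa.lpc(frame, order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]       # one root per conjugate pair
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90]                      # drop near-DC artefacts

def looks_like_a(frame, sr=16000):
    f = estimate_formants(frame, sr)
    return len(f) >= 2 and 600 < f[0] < 1000 and 1000 < f[1] < 1600   # rough /a/ region

y, sr = librosa.load("utterance.wav", sr=16000)              # hypothetical recording
vowel_frames = [y[i:i + 1024] for i in range(0, len(y) - 1024, 512)
                if looks_like_a(y[i:i + 1024], sr)]          # frames passed on to LPCC extraction
```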


미리 순서가 매겨진 학습 데이타를 이용한 효과적인 증가학습 (Efficient Incremental Learning using the Preordered Training Data)

  • 이선영;방승양
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 27, No. 2 / pp.97-107 / 2000
  • Incremental learning trains a neural network while gradually enlarging the training set; in general this shortens training time and improves the network's generalization performance. Existing incremental learning methods, however, repeatedly evaluate the importance of each example while selecting training data. In this paper, for classification problems, the importance of the data is evaluated only once, before training begins. The proposed method regards examples closer to the class boundary as more important and presents a way to select such examples. Experiments on two synthetic datasets and real-world data show that the proposed method shortens training time and improves generalization compared with existing methods.
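A minimal sketch of the preorder-once idea: each example is scored a single time before training, here by its distance to the nearest example of a different class as a simple stand-in for the paper's boundary criterion, and the network is then trained on gradually growing subsets.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def preorder_by_boundary_distance(X, y):
    """Score every example once: distance to the nearest example of a different class."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nearest_other = np.where(y[None, :] != y[:, None], d, np.inf).min(axis=1)
    return np.argsort(nearest_other)                         # boundary-near examples first

def incremental_train(X, y, steps=(0.25, 0.5, 1.0)):
    order = preorder_by_boundary_distance(X, y)              # evaluated only once, before training
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=200, warm_start=True)
    for frac in steps:                                       # gradually enlarge the training set
        idx = order[: int(frac * len(X))]
        net.fit(X[idx], y[idx])                              # assumes each subset holds all classes
    return net
```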


Incremental Neural Network 과 LPCC을 이용한 화자인식 (Speaker Identification using Incremental Neural Network and LPCC)

  • 허광승;박창현;이동욱;심귀보
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2002년도 추계학술대회 및 정기총회 / pp.341-344 / 2002
  • Speech carries speaker-specific characteristics. This paper introduces a speaker identification system based on a neural network with incremental learning. Sentences recorded through a computer are transformed into the frequency domain by an FFT, and vowels are extracted using formants, which capture the characteristics of the vowels. LPC analysis of the extracted vowels yields coefficients that carry the speaker's characteristics. Through LPCC processing and vector quantization, ten feature points are obtained and fed as the learning input to a neural network whose hidden and output layers grow with the number of speakers, and speaker identification is performed with this network.
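The vector-quantization step can be sketched as below: LPCC vectors from the extracted vowel frames are condensed into a ten-entry codebook whose codewords serve as the fixed-size network input. KMeans is used here as the codebook learner; the paper's exact VQ procedure may differ, and the stand-in data are random.

```python
import numpy as np
from sklearn.cluster import KMeans

def lpcc_codebook(lpcc_frames, n_codewords=10):
    """lpcc_frames: array of shape (n_frames, lpcc_order) from the extracted vowel segments."""
    km = KMeans(n_clusters=n_codewords, n_init=10, random_state=0).fit(lpcc_frames)
    return km.cluster_centers_                               # the 10 feature points per speaker

# Toy usage with random stand-in LPCC vectors of order 12.
frames = np.random.default_rng(0).standard_normal((200, 12))
codebook = lpcc_codebook(frames)                             # shape (10, 12), flattened as net input
```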

학습지향성이 점진적 혁신에 미치는 효과 및 재직기간의 조절효과 (The Relationship Between Learning Orientation and Incremental Innovation, and the Moderating Effect of Tenure)

  • 안관영
    • 대한안전경영과학회지 / Vol. 12, No. 3 / pp.249-255 / 2010
  • This paper studies the relationship between learning orientation and incremental innovation (process innovation, operational innovation, and service innovation), and the moderating effect of tenure, in the telecommunication service sector. Based on responses from 241 employees, the results of multiple regression analysis show that learning orientation has positive relationships with process innovation, operational innovation, and service innovation. The moderating analysis shows that these positive relationships with all incremental innovation factors are stronger for longer-tenured employees than for shorter-tenured ones.
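The moderated-regression setup implied by the abstract can be sketched as follows: incremental innovation is regressed on learning orientation, tenure, and their interaction, and a significant interaction term indicates the moderating effect of tenure. The variable names and simulated data below are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "learning_orientation": rng.normal(size=241),            # 241 respondents, as in the abstract
    "tenure": rng.normal(size=241),
})
df["process_innovation"] = (0.4 * df["learning_orientation"]
                            + 0.2 * df["tenure"]
                            + 0.3 * df["learning_orientation"] * df["tenure"]
                            + rng.normal(scale=0.5, size=241))

model = smf.ols("process_innovation ~ learning_orientation * tenure", data=df).fit()
print(model.summary())       # the learning_orientation:tenure row tests the moderating effect
```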

Fault-tolerant control system for once-through steam generator based on reinforcement learning algorithm

  • Li, Cheng;Yu, Ren;Yu, Wenmin;Wang, Tianshu
    • Nuclear Engineering and Technology / Vol. 54, No. 9 / pp.3283-3292 / 2022
  • Based on the Deep Q-Network (DQN) reinforcement learning algorithm, an active fault-tolerance method with incremental actions is proposed for the control system of a once-through steam generator (OTSG) with sensor faults. In this paper, we first establish an OTSG model as the interaction environment for the reinforcement learning agent. The agent chooses an action according to the system state obtained from the pressure sensor; the incremental action gradually approaches the optimal strategy for the current fault, and the agent then updates its network using the rewards obtained during the interaction. In this way, the active fault-tolerant control of the OTSG is transformed into the decision-making process of a reinforcement learning agent. Comparison experiments against a traditional reinforcement learning algorithm (RL) with fixed strategies show that the proposed active fault-tolerant controller can control accurately and rapidly under sensor faults, so that the OTSG pressure is stabilized near the set-point value and the OTSG runs normally and stably.
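The incremental-action idea can be sketched as below: the DQN agent outputs a small adjustment to the current control command rather than an absolute value, so the controller drifts gradually toward the optimum under a fault. The network shape, action increments, state layout, and numbers are illustrative assumptions; the OTSG environment is not modelled here.

```python
import torch
import torch.nn as nn

ACTIONS = torch.tensor([-0.01, 0.0, 0.01])                   # incremental control adjustments

class QNet(nn.Module):
    def __init__(self, n_state=3, n_actions=3):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(),
                               nn.Linear(64, n_actions))

    def forward(self, s):
        return self.f(s)

def select_increment(qnet, state, eps=0.1):
    """Epsilon-greedy choice of a control increment for the current sensor reading."""
    if torch.rand(1).item() < eps:
        idx = torch.randint(len(ACTIONS), (1,)).item()
    else:
        idx = qnet(state).argmax().item()
    return ACTIONS[idx].item()

# One illustrative step: state = (measured pressure, set-point, current command).
qnet = QNet()
state = torch.tensor([14.8, 15.0, 0.52])                     # hypothetical OTSG readings
command = 0.52 + select_increment(qnet, state)               # the command is nudged, not replaced
```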

점증적 모델에서 최적의 네트워크 구조를 구하기 위한 학습 알고리즘 (An Learning Algorithm to find the Optimized Network Structure in an Incremental Model)

  • 이종찬;조상엽
    • 인터넷정보학회논문지 / Vol. 4, No. 5 / pp.69-76 / 2003
  • This paper introduces a new learning algorithm for pattern classification. The algorithm was devised to solve a problem of incremental learning algorithms: the network structure becomes overly complex because of errors contained in the training data set. The approach taken is a pruning method that stops the learning process according to a predefined criterion. In this process, an appropriate procedure derives an iterative model with a three-layer feedforward structure from the incremental model; note that this network is not fully connected between adjacent layers. To confirm the effectiveness of the pruning method, the resulting network is retrained with EBP. The results show that the proposed algorithm is effective in terms of both system performance and the number of nodes constituting the network.
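A minimal sketch of the prune-then-retrain idea: connections are removed by a predefined criterion (simple weight-magnitude pruning below stands in for the paper's criterion), leaving a three-layer network that is no longer fully connected, after which the pruned network is retrained with error backpropagation. Layer sizes and data are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

net = nn.Sequential(nn.Linear(8, 16), nn.Sigmoid(), nn.Linear(16, 3))
prune.l1_unstructured(net[0], name="weight", amount=0.5)     # remove half of the links per layer
prune.l1_unstructured(net[2], name="weight", amount=0.5)

opt = torch.optim.SGD(net.parameters(), lr=0.1)              # retrain the pruned net with EBP
x, y = torch.randn(64, 8), torch.randint(0, 3, (64,))
for _ in range(100):
    opt.zero_grad()
    nn.CrossEntropyLoss()(net(x), y).backward()
    opt.step()                                               # masked (pruned) links stay at zero
```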
