• Title/Summary/Keyword: pitch-matching


The Kalman Filter Design for the Transfer Alignment by Euler Angle Matching (오일러각 정합방식의 전달정렬 칼만필터 설계)

  • Song, Ki-Won;Lee, Sang-Jeong
    • Journal of Institute of Control, Robotics and Systems / v.7 no.12 / pp.1044-1050 / 2001
  • This paper presents a method of Euler angle matching for designing transfer alignment based on attitude matching. In this method, the observation directly uses the Euler angle difference between the MINS and the SINS, so the rotation vector error must be related to the Euler angle error. This relation is derived from the direction cosine matrix error equation. The feasibility of the Kalman filter designed for transfer alignment by Euler angle matching is analyzed through the alignment error results with respect to roll, pitch, and yaw angle matching.

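The observation construction described in this abstract can be illustrated with a small sketch: a minimal, hypothetical example (not the paper's implementation) that forms the Euler-angle-difference measurement between the master (MINS) and slave (SINS) attitudes, assuming both are available as direction cosine matrices and that SciPy is installed.

```python
# Minimal sketch: forming the Euler-angle-difference observation used in an
# attitude-matching transfer alignment filter. Not the paper's implementation;
# MINS/SINS attitudes are assumed to be given as direction cosine matrices (DCMs).
import numpy as np
from scipy.spatial.transform import Rotation as R

def euler_angle_observation(C_mins, C_sins, seq="zyx"):
    """Return the yaw/pitch/roll difference between master and slave attitudes."""
    eul_m = R.from_matrix(C_mins).as_euler(seq)   # master (MINS) Euler angles [rad]
    eul_s = R.from_matrix(C_sins).as_euler(seq)   # slave (SINS) Euler angles [rad]
    z = eul_m - eul_s                             # observation fed to the Kalman filter
    return (z + np.pi) % (2.0 * np.pi) - np.pi    # wrap each component to [-pi, pi)

# Example: slave attitude misaligned from the master by a small rotation.
C_master = R.from_euler("zyx", [0.30, 0.05, 0.02]).as_matrix()
C_slave = R.from_euler("zyx", [0.29, 0.06, 0.02]).as_matrix()
print(euler_angle_observation(C_master, C_slave))
```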

Intonation Training System (Visual Analysis Tool) and the application of French Intonation for Korean Learners (컴퓨터를 이용한 억양 교육 프로그램 개발 : 프랑스어 억양 교육을 중심으로)

  • Yu, Chang-Kyu;Son, Mi-Ra;Kim, Hyun-Gi
    • Speech Sciences / v.5 no.1 / pp.49-62 / 1999
  • This study concerns the educational program Visual Analysis Tool (VAT), developed for training foreign-language intonation on a personal computer. The VAT runs on an IBM-PC 386 compatible or higher. It displays the spectrogram, waveform, intensity, and pitch contour, and supports waveform zoom in/out and documentation of measured values. In this paper, intensity and pitch contour information were used. Twelve French sentences were recorded from a French conversation tape, and three Korean learners participated in the study. They spoke the twelve sentences repeatedly and tried to produce the same pitch contour by visually matching their own pitch contour to the native speaker's. The sentences were recorded again once the participants had become familiar with the intonation, intensity, and pauses. The differences in pitch contour (rising or falling), pitch value, energy, total sentence duration, and rhythmic group boundaries between the native speaker's utterances and theirs were compared before and after training. The results were as follows: 1) In declarative sentences, the native speaker's pitch contour falls at the end of the sentence, but the participants' pitch contours were flat before training. 2) In interrogative sentences, the native speaker's pitch contour rose at the end of the sentence, with the exception of wh-questions (qu'est-ce que), and the pitch value varied considerably. In interrogatives of the 'S + V' form, the pitch contour rose higher than in other sentences and varied a great deal. 3) In exclamatory sentences, the pitch contour had a mountain-like shape, but the participants could not make it fall either before or after training.

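As a rough illustration of the pitch-contour comparison behind this training loop, the following sketch (not the original VAT program) extracts two contours with librosa's pYIN estimator and reports their mean difference in semitones after resampling to a common length; the file names are placeholders.

```python
# Illustrative sketch: compare a learner's pitch contour to a native speaker's.
# librosa is assumed to be installed; "native.wav"/"learner.wav" are placeholders.
import numpy as np
import librosa

def pitch_contour(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    return f0[voiced]  # keep voiced frames only

def contour_difference(f0_native, f0_learner, n=100):
    # Resample both contours to n points so sentence length does not matter,
    # then report the mean absolute difference in semitones.
    grid = np.linspace(0, 1, n)
    a = np.interp(grid, np.linspace(0, 1, len(f0_native)), f0_native)
    b = np.interp(grid, np.linspace(0, 1, len(f0_learner)), f0_learner)
    return np.mean(np.abs(12 * np.log2(a / b)))

# print(contour_difference(pitch_contour("native.wav"), pitch_contour("learner.wav")))
```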

Speaker and Context Independent Emotion Recognition System using Gaussian Mixture Model (GMM을 이용한 화자 및 문장 독립적 감정 인식 시스템 구현)

  • 강면구;김원구
    • Proceedings of the IEEK Conference / 2003.07e / pp.2463-2466 / 2003
  • This paper studies pattern recognition algorithms and feature parameters for emotion recognition. The KNN algorithm was used as a pattern matching technique for comparison, and VQ and GMM were used for speaker- and context-independent recognition. The speech parameters used as features are pitch, energy, MFCC, and their first and second derivatives. Experimental results showed that the emotion recognizer using MFCC and its derivatives as features performed better than the one using the pitch and energy parameters. Among the pattern recognition algorithms, the GMM-based emotion recognizer was superior to the KNN- and VQ-based recognizers.

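A minimal sketch of the GMM-based recognizer described above, using scikit-learn: one diagonal-covariance mixture per emotion, trained on frame-level features, with classification by the highest summed log-likelihood. Feature extraction and the paper's exact model sizes are not reproduced here.

```python
# Sketch of a GMM-based, speaker/context-independent emotion classifier:
# one Gaussian mixture per emotion, classification by total log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmms(train_frames_by_emotion, n_components=8):
    # train_frames_by_emotion: {emotion: (n_frames, dim) array of features}
    return {emo: GaussianMixture(n_components=n_components,
                                 covariance_type="diag").fit(frames)
            for emo, frames in train_frames_by_emotion.items()}

def classify(gmms, utterance_frames):
    # Sum of per-frame log-likelihoods under each emotion model.
    scores = {emo: gmm.score_samples(utterance_frames).sum() for emo, gmm in gmms.items()}
    return max(scores, key=scores.get)

# Toy usage with random "features" just to show the flow.
rng = np.random.default_rng(0)
X_train = {"neutral": rng.normal(0, 1, (500, 13)), "angry": rng.normal(2, 1, (500, 13))}
models = train_gmms(X_train)
print(classify(models, rng.normal(2, 1, (80, 13))))  # expected: "angry"
```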

Speaker and Context Independent Emotion Recognition using Speech Signal (음성을 이용한 화자 및 문장독립 감정인식)

  • 강면구;김원구
    • Proceedings of the IEEK Conference / 2002.06d / pp.377-380 / 2002
  • In this paper, speaker- and context-independent emotion recognition using the speech signal is studied. For this purpose, a corpus of emotional speech data, recorded and classified according to emotion by subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy, and to evaluate the performance of conventional pattern matching algorithms. A vector quantization based emotion recognition system is proposed for speaker- and context-independent emotion recognition. Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters performed better than the one using the pitch and energy parameters.

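The vector-quantization approach can be sketched as follows: one codebook per emotion (built here with k-means as a stand-in for a generic VQ trainer) and classification by the smallest average quantization distortion. Frame matrices of MFCC or pitch/energy features are assumed to exist already.

```python
# Sketch of VQ-based emotion recognition: per-emotion codebooks and
# classification by minimum average quantization distortion.
import numpy as np
from sklearn.cluster import KMeans

def train_codebooks(train_frames_by_emotion, codebook_size=16):
    return {emo: KMeans(n_clusters=codebook_size, n_init=10).fit(frames).cluster_centers_
            for emo, frames in train_frames_by_emotion.items()}

def avg_distortion(codebook, frames):
    # Distance from every frame to its nearest codeword, averaged over the utterance.
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

def classify(codebooks, frames):
    return min(codebooks, key=lambda emo: avg_distortion(codebooks[emo], frames))
```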

A Transfer Alignment Considering Measurement Time-Delay and Ship Body Flexure (측정치 시간지연과 선체의 유연성을 고려한 전달정렬 기법)

  • Lim, You-Chol;Lyou, Joon
    • Journal of the Korea Institute of Military Science and Technology / v.4 no.1 / pp.225-233 / 2001
  • This paper deals with the transfer alignment problem of an SDINS (StrapDown Inertial Navigation System) subjected to the roll and pitch motions of a ship. Specifically, to reduce the alignment errors induced by measurement time delay and ship body flexure, an error compensation method is suggested based on delay state augmentation and DCM (Direction Cosine Matrix) partial matching. A linearized error model for the velocity and attitude matching transfer alignment system is first derived by linearizing the nonlinear measurement equation with respect to its time delay and augmenting the delay state into the conventional linear state equations. DCM partial matching is then combined with it to reduce the effects of the ship's Y-axis flexure. Simulation results show that the suggested method is effective, resulting in considerably smaller azimuth alignment errors.

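A schematic sketch of the delay-state-augmentation idea (not the paper's exact error model): the unknown measurement delay tau is appended to the state vector, and the measurement is linearized as z ≈ H·x(t) − H·ẋ(t)·τ, so the augmented measurement Jacobian gains one extra column.

```python
# Schematic delay-state augmentation for a linear Kalman filter model.
# Assumes F, Q, H are the original transition, process-noise, and measurement
# matrices and xdot is the current state derivative estimate.
import numpy as np

def augment_for_delay(F, Q, H, xdot):
    n = F.shape[0]
    # Augmented transition: tau modeled as a (nearly) constant extra state.
    F_a = np.block([[F, np.zeros((n, 1))],
                    [np.zeros((1, n)), np.ones((1, 1))]])
    Q_a = np.block([[Q, np.zeros((n, 1))],
                    [np.zeros((1, n)), np.array([[1e-9]])]])
    # Augmented measurement Jacobian: last column is the sensitivity to tau.
    H_a = np.hstack([H, -(H @ xdot).reshape(-1, 1)])
    return F_a, Q_a, H_a
```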

The Comparison of Speech Feature Parameters for Emotion Recognition (감정 인식을 위한 음성의 특징 파라메터 비교)

  • 김원구
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.04a / pp.470-473 / 2004
  • In this paper, speech feature parameters for emotion recognition from the speech signal are compared. For this purpose, a corpus of emotional speech data, recorded and classified according to emotion by subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy. MFCC parameters and their derivatives, with or without cepstral mean subtraction, are also used to evaluate the performance of conventional pattern matching algorithms. Pitch and energy parameters were used as prosodic information, and MFCC parameters were used as phonetic information. In the experiments, a vector quantization based emotion recognition system is used for speaker- and context-independent emotion recognition. Experimental results showed that the vector quantization based emotion recognizer using MFCC parameters performed better than the one using the pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.

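The two feature sets compared in this paper can be sketched with librosa (the analysis settings here are illustrative, not the paper's): utterance-level statistics of pitch and energy as prosodic features, and MFCCs with per-coefficient cepstral mean subtraction as phonetic features.

```python
# Sketch of prosodic (pitch/energy statistics) vs. phonetic (MFCC + CMS) features.
import numpy as np
import librosa

def prosodic_features(y, sr):
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0 = f0[voiced]                       # voiced frames only
    energy = librosa.feature.rms(y=y)[0]
    stats = lambda v: [np.mean(v), np.std(v), np.max(v)]
    return np.array(stats(f0) + stats(energy))

def mfcc_with_cms(y, sr, n_mfcc=13):
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return m - m.mean(axis=1, keepdims=True)  # cepstral mean subtraction per coefficient
```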

A Study on Korean and English Speaker Recognitions using the Fuzzy Theory (퍼지 이론을 이용한 한국어 및 영어 화자 인식에 관한 연구)

  • 김연숙;김희주;김경재
    • Journal of the Korea Society of Computer and Information / v.7 no.3 / pp.49-55 / 2002
  • This paper proposes a speaker recognition algorithm that combines the pitch parameter with fuzzy theory. A pitch detection method is proposed that uses a peak-and-valley pitch detection function based on spectrum comparison, exploiting the transform relationship between the time and frequency domains: it measures the similarity to the original spectrum while arbitrarily varying the period in the time domain. The method weights errors caused by the changing characteristics of the phonemes while remaining robust against noise. Reference patterns are built using membership functions, and vocal tract recognition of common characters is performed using fuzzy pattern matching in order to accommodate the time variation of nonlinear utterance durations.

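A hedged sketch of fuzzy pattern matching for speaker recognition in the spirit of this abstract, not its exact formulation: each speaker's reference pattern is a set of triangular membership functions built from feature statistics, and the recognition score is the averaged membership of a test vector.

```python
# Sketch of fuzzy pattern matching with triangular membership functions.
import numpy as np

def triangular(x, center, width):
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def build_reference(feature_vectors, width_scale=2.0):
    # Reference pattern: per-feature center and width from training vectors.
    feats = np.asarray(feature_vectors)
    return feats.mean(axis=0), width_scale * feats.std(axis=0) + 1e-9

def match_score(reference, test_vector):
    center, width = reference
    return triangular(np.asarray(test_vector), center, width).mean()

def recognize(references, test_vector):
    # references: {speaker_id: (center, width)}
    return max(references, key=lambda spk: match_score(references[spk], test_vector))
```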

A Study on Korean and Japanese Speaker Recognitions using the Fuzzy Theory (퍼지 이론을 이용한 한국어 및 일어 화자 인식에 관한 연구)

  • 김연숙;김창완
    • Journal of the Korea Society of Computer and Information / v.5 no.3 / pp.51-57 / 2000
  • This paper proposes a speaker recognition algorithm that combines pitch with fuzzy theory. A pitch detection method is proposed that uses a peak-and-valley pitch detection function based on spectrum comparison, exploiting the transform relationship between the time and frequency domains: it measures the similarity to the original spectrum while arbitrarily varying the period in the time domain. The method weights errors caused by the changing characteristics of the phonemes while remaining robust against noise. Reference patterns are built using membership functions, and vocal tract recognition of common characters is performed using fuzzy pattern matching in order to accommodate the time variation of nonlinear utterance durations.

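The peak-and-valley, spectrum-comparison pitch detection described in these two papers can be approximated by a generic harmonic peak/valley scoring scheme; the sketch below is such a generic estimator, not the authors' detection function.

```python
# Generic peak-and-valley pitch estimator: for each candidate fundamental,
# sum spectral energy at the harmonics (peaks) and subtract energy between
# them (valleys), then keep the best-scoring candidate.
import numpy as np

def estimate_pitch(frame, sr, fmin=60.0, fmax=400.0, n_harmonics=5):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)

    def score(f0):
        peaks = sum(spec[np.argmin(np.abs(freqs - k * f0))] for k in range(1, n_harmonics + 1))
        valleys = sum(spec[np.argmin(np.abs(freqs - (k + 0.5) * f0))] for k in range(1, n_harmonics + 1))
        return peaks - valleys

    candidates = np.arange(fmin, fmax, 1.0)
    return candidates[np.argmax([score(f) for f in candidates])]

# Toy check: a 150 Hz harmonic tone should give an estimate near 150 Hz.
sr, t = 8000, np.arange(2048) / 8000
tone = sum(np.sin(2 * np.pi * 150 * k * t) / k for k in range(1, 4))
print(estimate_pitch(tone, sr))
```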

3D Image Process by Template Matching and B-Spline Interpolations (템플릿 정합과 B-Spline 보간에 의한 3차원 광학 영상 처리)

  • Joo, Young-Hoon;Yang, Han-Jin
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.5 / pp.683-688 / 2009
  • The purpose of this paper is to propose new techniques to reconstruct measured optical images by using template matching and B-Spline interpolation based on image processing technology. To do this, we detect the matching and non-matching templates in each optical image. We then match the overlapped images from the base level by correcting the roll, pitch, and yaw errors of the images. Finally, the matched image is interpolated by B-Spline interpolation, and the effectiveness and feasibility of the proposed method are shown through experiments.
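The two building blocks named in this abstract, template matching and B-spline interpolation, are illustrated below with OpenCV and SciPy; this shows the tools only, not the paper's roll/pitch/yaw correction pipeline.

```python
# Sketch: locate an overlapping region with normalized template matching,
# then resample the matched region with cubic (B-spline) interpolation.
import numpy as np
import cv2
from scipy import ndimage

def locate_template(image, template):
    # Returns the top-left corner where the template best matches the image.
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val

def bspline_upscale(image, factor=2.0):
    # Order-3 spline interpolation when resampling onto a finer grid.
    return ndimage.zoom(image.astype(np.float32), factor, order=3)

# Toy usage: find a 20x20 patch inside a random 100x100 image, then upscale it.
img = np.random.randint(0, 255, (100, 100), dtype=np.uint8)
patch = img[30:50, 40:60].copy()
loc, score = locate_template(img, patch)
print(loc, round(score, 3))          # expected location: (40, 30)
smooth = bspline_upscale(patch)
```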

Development of Audio Melody Extraction and Matching Engine for MIREX 2011 tasks

  • Song, Chai-Jong;Jang, Dalwon;Lee, Seok-Pil;Park, Hochong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.07a / pp.164-166 / 2012
  • In this paper, we propose a method for extracting the predominant melody of polyphonic music based on harmonic structure. The harmonic structure is an important feature of a monophonic signal, which has spectral peaks at integer multiples of its fundamental frequency. We extract all fundamental frequency candidates contained in the polyphonic signal by verifying the required condition of harmonic structure. Then we combine the harmonic peaks corresponding to each extracted fundamental frequency and assign each candidate a rank after calculating its average harmonic energy. Pitch tracking is run on the ranked candidates, using the continuity of the fundamental frequency, to determine the predominant melody. For the query-by-singing/humming (QbSH) task, we propose a Dynamic Time Warping (DTW) based matching engine. Our system reduces false alarms by combining the distances of multiple DTW passes. To improve performance, we introduce the asymmetric sense, pitch level compensation, and distance intransitiveness into the DTW algorithm.

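A minimal DTW matching sketch for the QbSH-style comparison described above: query and reference pitch sequences are compared after median removal (a simple stand-in for the paper's pitch level compensation); the multi-pass engine and the asymmetric/intransitive refinements are not reproduced.

```python
# Minimal DTW distance between two pitch sequences (in semitones / MIDI numbers).
import numpy as np

def dtw_distance(query, reference):
    q = np.asarray(query, dtype=float) - np.median(query)        # pitch level compensation
    r = np.asarray(reference, dtype=float) - np.median(reference)
    n, m = len(q), len(r)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(q[i - 1] - r[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)   # length-normalized alignment cost

# Toy example: the same contour transposed up by 3 semitones still matches well.
melody = np.array([60, 62, 64, 65, 67, 67, 65, 64])
hummed = melody + 3.0
print(dtw_distance(hummed, melody))   # close to 0
```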