• Title/Summary/Keyword: Sound fundamental frequency

The Analysis of Acoustic Emission Spectra in a 36 kHz Sonoreactor (36kHz 초음파 반응기에서의 원주파수 및 파생주파수의 음압 분포 분석)

  • Son, Younggyu
    • Journal of Soil and Groundwater Environment / v.21 no.6 / pp.128-134 / 2016
  • Acoustic emission spectra were analyzed to investigate the distribution of sound pressure in a 36 kHz sonoreactor. The sound pressure of the fundamental frequency (f: 36 kHz), harmonics (2f: 72 kHz, 3f: 108 kHz, 4f: 144 kHz, 5f: 180 kHz, 6f: 216 kHz), and subharmonics (1.5f: 54 kHz, 2.5f: 90 kHz, 3.5f: 126 kHz, 4.5f: 162 kHz, 5.5f: 198 kHz, 6.5f: 234 kHz) was measured every 5 cm from the ultrasonic transducer using a hydrophone and a spectrum analyzer. It was revealed that the input power of ultrasound, the application of mechanical mixing, and the concentration of SDS significantly affected the sound pressure distributions of the fundamental frequency and of all detected frequencies. Moreover, a linear relationship was found between the average total sound pressure and the degree of sonochemical oxidation, whereas there was no significant linear relationship between the average sound pressure of the fundamental frequency and the degree of sonochemical oxidation.
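
A minimal sketch of the spectral readout described above, assuming the hydrophone signal has already been digitized at a sampling rate high enough to resolve 6.5f (234 kHz); the function name, Hann window, and nearest-bin lookup are illustrative choices, not the authors' procedure.

```python
import numpy as np

def band_pressures(signal, fs, f0=36_000.0, n_orders=6):
    """Return spectral magnitudes at f0, its harmonics, and its subharmonics.

    signal : 1-D hydrophone samples; fs : sampling rate in Hz.
    Assumes fs is high enough to resolve 6.5 * f0 (here 234 kHz).
    """
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def magnitude_at(f):
        return spectrum[np.argmin(np.abs(freqs - f))]  # nearest FFT bin

    harmonics = {f"{k}f": magnitude_at(k * f0) for k in range(1, n_orders + 1)}
    subharmonics = {f"{k + 0.5}f": magnitude_at((k + 0.5) * f0)
                    for k in range(1, n_orders + 1)}
    return harmonics, subharmonics
```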

The acoustical analysis of knee sound for non-invasive diagnosis of articular pathology (비침습적 관절 질환 모니터링을 위한 슬관절 음향분석)

  • Kim Keo-Sik; Park Gyung-Se; Kim Kyeong-Seop; Song Chul-Gyu
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.12 / pp.737-740 / 2005
  • This paper describes the possibility of evaluating and classifying arthritic pathology using acoustical analysis of knee joint sound. Six normal subjects and 11 patients with knee problems were enrolled. The patients were divided into a first group, which required orthopaedic surgery, and a second group with osteoarthritis. Active knee flexion and extension were monitored in the sitting and standing positions, and fundamental frequency, mean pitch amplitude, jitter, and shimmer were analyzed for each position. The results demonstrate that the fundamental frequency, jitter, and shimmer of the second patient group were larger than those of the other groups and varied unstably, and that the values in the standing position were larger than those in the sitting position.
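
For readers unfamiliar with the perturbation measures used here, the sketch below computes F0, jitter, and shimmer from per-cycle periods and amplitudes using one common local definition; the paper does not give its exact formulas, so these are assumptions.

```python
import numpy as np

def jitter_shimmer(periods, amplitudes):
    """Cycle-to-cycle perturbation measures from estimated pitch cycles.

    periods    : successive pitch-period lengths in seconds.
    amplitudes : peak amplitude of each cycle.
    """
    periods = np.asarray(periods, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    # Local jitter (%): mean absolute difference of consecutive periods
    # relative to the mean period.
    jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100.0
    # Local shimmer (%): the same measure applied to cycle amplitudes.
    shimmer = np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes) * 100.0
    f0 = 1.0 / np.mean(periods)  # mean fundamental frequency in Hz
    return f0, jitter, shimmer
```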

Vocal Exercise System Using Electroglottography (성문전도를 이용한 발성훈련 시스템)

  • Lee, Je-Hyun; Kim, Ji-Hye; Kang, Gu-Tae; Jung, Dong-Keun
    • Journal of Sensor Science and Technology / v.22 no.2 / pp.156-161 / 2013
  • This study aimed to implement an electroglottography (EGG) system for analyzing the fundamental frequency of phonation. The EGG was recorded during vocalization as the conductance between ring electrodes, driven by a high-frequency carrier signal, attached to the skin of the neck near the thyroid cartilage; the voice signal was recorded simultaneously with a microphone. The EGG and voice signals were fed to the audio port of a PC and captured with a stereo sound-recording program. From the digitized data, parameters such as pitch, jitter, shimmer, CQ, and SQ were analyzed for the vowel sounds. For voice training, the sound's fundamental frequency was displayed during vocalization and singing, using pitches derived from the EGG. The implemented system could be used for vocal exercise.
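
A hedged sketch of how CQ (closed quotient) can be estimated from a single EGG cycle with a level-crossing criterion; the 0.35 threshold is one common convention and is not taken from the paper.

```python
import numpy as np

def closed_quotient(egg_cycle, threshold=0.35):
    """Estimate CQ for one EGG cycle using a level-crossing criterion.

    egg_cycle : samples spanning exactly one glottal cycle, larger values
                taken to mean greater vocal-fold contact.
    threshold : fraction of the cycle's peak-to-peak range above which
                the glottis is treated as closed (0.25-0.35 are common
                conventions; the paper's criterion is not stated).
    """
    x = np.asarray(egg_cycle, dtype=float)
    level = x.min() + threshold * (x.max() - x.min())
    closed = x > level
    return np.count_nonzero(closed) / len(x)  # closed-phase fraction
```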

Modeling of Piano Sound Using Method of Line-Segment Approximation and Curve Fitting (선분 근사법과 곡선의 적합성을 이용한 피아노 음의 모델링)

  • Lim, Hun; Chong, Ui-Pil
    • The Journal of the Acoustical Society of Korea / v.19 no.3 / pp.86-91 / 2000
  • In this paper, we discuss the magnitude and phase characteristics of piano sounds in the frequency domain using the FFT (Fast Fourier Transform), and describe a method for determining the parameters that represent these sounds through a mathematical model. We used a curve-fitting method to model the harmonic part of the sound, including the fundamental frequency, in order to minimize the error between the original and modeled sounds. Furthermore, we used a line-segment approximation method to model the noise part around the fundamental frequency. We applied the same approach to the phase model and obtained a modeled sound similar to the original using these parameters. A high compression ratio of the modeled sound relative to the original is therefore achieved.
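
The following sketch illustrates the general idea of splitting an FFT magnitude spectrum into a curve-fitted harmonic part and a line-segment-approximated noise part; the polynomial order, segment count, and function signature are guesses, not the authors' parameterization.

```python
import numpy as np

def model_spectrum(magnitude, freqs, f0, n_harm=10, n_segments=12):
    """Split an FFT magnitude spectrum into a fitted harmonic part and a
    piecewise-linear (line-segment) noise part."""
    # Harmonic part: magnitude at the bin nearest each k*f0, then a
    # low-order polynomial fit over harmonic number as the "curve fit".
    harm_bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
    harm_mags = magnitude[harm_bins]
    coeffs = np.polyfit(np.arange(1, n_harm + 1), harm_mags, deg=3)

    # Noise part: keep only a few breakpoints and reconnect them with
    # straight lines (line-segment approximation).
    breakpoints = np.linspace(0, len(magnitude) - 1, n_segments + 1).astype(int)
    noise_model = np.interp(np.arange(len(magnitude)),
                            breakpoints, magnitude[breakpoints])
    return coeffs, noise_model
```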

Vibration and Sound Characteristic of the Chun-cheon Citizen's Bell (춘천시민의 종의 진동 및 음향 특성)

  • Kim, Seok-Hyun; Kim, Tae-Hyung; Kim, Yun-Ho; Han, Young-Ho
    • Journal of Industrial Technology / v.26 no.A / pp.81-88 / 2006
  • The Chun-cheon Citizen's Bell was cast to commemorate the hosting of the 2010 World Leisure Conference, and a striking ceremony was held at the city hall on December 31, 2005. In this study, the vibration and sound of the bell are measured, and the character of its magnificent sound is scientifically investigated. The frequency components making up the sound are identified, and their decay over time is observed using waterfall plots (3-dimensional frequency spectra). The beat characteristics of the hum (1st frequency) and the fundamental (2nd frequency) are examined experimentally. The directivity of the bell's sound radiation is examined by measuring vibration and sound in several directions, and the duration of the vibration and sound is estimated from the damping ratio.
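
The beat phenomenon examined here arises when two partials lie close in frequency; a small synthetic example is sketched below (the frequencies are hypothetical, not measured values of this bell).

```python
import numpy as np

fs = 8000                            # sampling rate in Hz
t = np.arange(0, 5.0, 1.0 / fs)      # 5 seconds of time axis
f1, f2 = 64.0, 64.8                  # hypothetical pair of close partials
tone = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Two close partials sum to a carrier at (f1 + f2)/2 whose loudness
# swells and fades at the difference frequency:
beat_frequency = abs(f2 - f1)                          # 0.8 Hz, one beat per 1.25 s
envelope = 2 * np.abs(np.cos(np.pi * (f2 - f1) * t))   # slow amplitude envelope
```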

Separation of Voiced Sounds and Unvoiced Sounds for Corpus-based Korean Text-To-Speech (한국어 음성합성기의 성능 향상을 위한 합성 단위의 유무성음 분리)

  • Hong, Mun-Ki; Shin, Ji-Young; Kang, Sun-Mee
    • Speech Sciences / v.10 no.2 / pp.7-25 / 2003
  • Predicting the right prosodic elements is a key factor in improving the quality of synthesized speech. Prosodic elements include break, pitch, duration, and loudness. Pitch, which is realized as fundamental frequency (F0), is the element most closely related to the quality of the synthesized speech. However, previous methods for predicting F0 show some problems: if voiced and unvoiced sounds are not correctly classified, the result is wrong pitch prediction, selection of the wrong triphone units for synthesizing voiced and unvoiced sounds, and audible clicks or buzzing. Such errors typically occur at transitions from voiced to unvoiced sounds or vice versa; they are not resolved by rule-based (grammatical) methods and strongly degrade the synthesized sound. Therefore, to obtain reliable pitch values, we propose a new model for predicting and classifying voiced and unvoiced sounds using the CART tool.
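
A minimal sketch of a CART-style voiced/unvoiced classifier in the spirit of the proposal, trained on synthetic stand-in frames with two classic voicing cues (log energy and zero-crossing rate); the features and the scikit-learn tree are illustrative assumptions, not the paper's model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
fs = 16000

def frame_features(frame):
    """Two classic voicing cues: log energy and zero-crossing rate."""
    energy = np.log(np.sum(frame ** 2) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return [energy, zcr]

# Synthetic stand-ins for labelled speech frames: periodic frames for
# voiced (label 1), low-level noise frames for unvoiced (label 0).
t = np.arange(320) / fs
voiced = [np.sin(2 * np.pi * rng.uniform(80, 250) * t) for _ in range(100)]
unvoiced = [0.1 * rng.standard_normal(320) for _ in range(100)]
X = np.array([frame_features(f) for f in voiced + unvoiced])
y = np.array([1] * 100 + [0] * 100)

clf = DecisionTreeClassifier(max_depth=5).fit(X, y)  # CART-style tree
```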

A Basic Study on the Conversion of Sound into Color Image using both Pitch and Energy

  • Kim, Sung-Ill
    • International Journal of Fuzzy Logic and Intelligent Systems / v.12 no.2 / pp.101-107 / 2012
  • This study describes a proposed method of converting an input sound signal into a color image by emulating the human synesthetic ability to associate a sound source with a specific color image. As a first step of the sound-to-image conversion, features such as the fundamental frequency (F0) and energy are extracted from the input sound source. A musical scale and an octave are then calculated from the F0 signal, so that scale, energy, and octave can be converted into the three elements of the HSI model: hue, saturation, and intensity, respectively. Finally, a color image in BMP file format is created as the output of the HSI-to-RGB conversion. We built a basic system based on the proposed method using standard C programming. The simulation results revealed that the output color images created from input sound sources have diverse hues corresponding to changes in the F0 signal, with hue elements of different intensities depending on the octave (with a minimum frequency of 20 Hz), and various levels of chroma (saturation) converted directly from the energy.
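
A hedged sketch of the F0/energy-to-color mapping outlined above, using HSV as a simple stand-in for the paper's HSI model; the mapping constants and function name are illustrative, not the paper's values.

```python
import colorsys
import numpy as np

def sound_to_color(f0, energy, f_min=20.0, energy_max=1.0):
    """Map (F0, energy) to an RGB triple following the paper's outline.

    Hue        <- position of F0 within its octave (scale position),
    Saturation <- normalized energy,
    Value      <- octave number of F0 above f_min (brightness proxy).
    """
    octave = int(np.floor(np.log2(f0 / f_min)))   # octave index above f_min
    scale_pos = np.log2(f0 / f_min) - octave      # 0..1 within the octave
    hue = scale_pos                               # hue from musical scale
    sat = min(energy / energy_max, 1.0)           # saturation from energy
    val = min((octave + 1) / 10.0, 1.0)           # brightness from octave
    return colorsys.hsv_to_rgb(hue, sat, val)

print(sound_to_color(f0=440.0, energy=0.5))       # e.g. A4 at half energy
```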

An objective study of sasang constitution diagnosis by sound analysis (성문(聲紋)분석법에 의한 사상체질 진단의 객관화 연구(I))

  • Kim, Dal-rae; Park, Sung-sik; Gun, Gi-rock
    • Journal of Sasang Constitutional Medicine / v.10 no.1 / pp.65-80 / 1998
  • As part of an objective study of Sasang constitution diagnosis by sound analysis using the Computed Sound Lab (CSL), we verified the confidence level of the Questionnaire of Sasang Constitution Classification II (QSCC II), and the first results of the sound analysis, verifying the correlation between physical characteristics and sound characteristics, are as follows. 1. The reported confidence level of QSCC II is 70.8% for Soeumin, 60.8% for Soyangin, 74.5% for Taeumin, and 70.08% in total. However, when verified on 100 subjects, the confidence level was 55.10% for Soeumin, 30.77% for Soyangin, 80.00% for Taeumin, and 55.29% in total, which does not match the reported QSCC II confidence level of 70.8%. 2. When another 134 subjects received a thorough explanation before constitutional diagnosis by QSCC II, the confidence level was 71.08% for Soeumin, 54.76% for Soyangin, 81.82% for Taeumin, and 69.22% in total. 3. Verifying the correlation between B.M.I. and Sasang constitution showed significant differences (p < 0.001) between Taeumin and Soeumin and between Taeumin and Soyangin. 4. Height and weight influence the fundamental frequency and formant frequencies. 5. The sound analysis shows differences in amplitude among the constitutions. Given the above, we consider that differences among the constitutional groups can be found through sound analysis of the constitutional sound characteristics.

Improving Low Frequency Signal Reproduction in TV Audio (TV 스피커의 저주파수 신호 재생 개선)

  • Arora Manish; Oh Yoonhark; Kim SeoungHun; Lee Hyuckjae; Jang Seongcheol
    • Proceedings of the Acoustical Society of Korea Conference / spring / pp.275-278 / 2004
  • In TV sound systems, loudspeakers are subject to severe size constraints, and the small size of the transducer limits the low-frequency performance of the system. Bass performance contributes significantly to the perceived sound quality, so good bass reproduction is essential. Simply boosting the sound energy in the bass range is not a viable solution, since the gains required are exceedingly high and signal distortion occurs as the speaker overloads. Recently, methods have been proposed to create a low-frequency illusion using the psychoacoustic phenomenon of the missing fundamental. This paper proposes a simple and effective signal processing method to create a bass illusion in TV speakers using the missing fundamental effect, at a complexity of 12 MIPS on a Motorola 56371 audio DSP.
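
A minimal sketch of a missing-fundamental (virtual bass) processor of the general kind described, using half-wave rectification as the harmonic generator; the cutoff, gain, and filter orders are assumptions, not the paper's design or its DSP implementation.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def virtual_bass(x, fs, cutoff=120.0, gain=0.7):
    """Replace bass below `cutoff` (which a small TV speaker cannot
    reproduce) with harmonics that the ear folds back into a perceived
    missing fundamental. All parameter values are illustrative.
    """
    lp = butter(4, cutoff, btype="low", fs=fs, output="sos")
    hp = butter(4, cutoff, btype="high", fs=fs, output="sos")
    bass = sosfilt(lp, x)
    # A memoryless nonlinearity (here half-wave rectification) creates
    # harmonics at 2f, 3f, ... of each bass component.
    harmonics = np.maximum(bass, 0.0)
    harmonics -= np.mean(harmonics)          # remove the DC offset
    # Keep only the harmonics the speaker can actually play, then mix
    # them into the band-limited program signal.
    playable = sosfilt(hp, harmonics)
    return sosfilt(hp, x) + gain * playable
```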

Conversion of Image into Sound Based on HSI Histogram (HSI 히스토그램에 기초한 이미지-사운드 변환)

  • Kim, Sung-Il
    • The Journal of the Acoustical Society of Korea / v.30 no.3 / pp.142-148 / 2011
  • The final aim of this research is to develop an intelligent robot that emulates the human synesthetic ability to associate a color image with a specific sound, based on mutual conversion between color images and sounds. As a first step toward that goal, this study focuses on a basic system that converts a color image into a sound. We describe a proposed conversion method based on the similarity of the physical frequency information of light and sound. The conversion was implemented using HSI histograms obtained through RGB-to-HSI color model conversion, programmed in Microsoft Visual C++ (ver. 6.0). Two different color images were used in the simulation experiments, and the results revealed that the hue, saturation, and intensity elements of each input color image were converted into the fundamental frequency, harmonic, and octave elements of a sound, respectively. The converted sound elements were then synthesized with Csound to automatically generate a sound source in WAV file format.
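
A hedged sketch in the same spirit (image statistics to a tone), using HSV in place of HSI and a plain WAV writer instead of Csound; pixel values are assumed to be RGB floats in [0, 1], and all mapping constants are illustrative rather than taken from the paper.

```python
import colorsys
import wave

import numpy as np

def image_to_tone(rgb_pixels, fs=44100, dur=1.0, path="tone.wav"):
    """Map an image's mean hue/saturation/value to F0, harmonic level,
    and octave of a synthesized tone, then write it as a WAV file."""
    hsv = np.array([colorsys.rgb_to_hsv(*p) for p in rgb_pixels])
    h, s, v = hsv.mean(axis=0)
    octave = int(v * 4)                    # intensity -> octave index 0..4
    f0 = 110.0 * (2 ** octave) * (1 + h)   # hue shifts F0 within the octave
    t = np.arange(int(fs * dur)) / fs
    # Saturation controls the level of the second harmonic.
    tone = np.sin(2 * np.pi * f0 * t) + s * np.sin(2 * np.pi * 2 * f0 * t)
    pcm = (0.4 * tone / np.max(np.abs(tone)) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                  # 16-bit samples
        w.setframerate(fs)
        w.writeframes(pcm.tobytes())
```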