• Title/Summary/Keyword: 3-D Spectrogram


Studies on the Constituents of Spiraea Koreana Nakai (참조팝나무의 成分 Alkaloid 에 關한 硏究)

  • Jin, Kab-Duk
    • Journal of the Korean Chemical Society
    • /
    • v.11 no.3
    • /
    • pp.111-116
    • /
    • 1967
  • A new alkaloid named Spirajine (m.p. 182~$184^{\circ}C$, $[{\alpha}]d^{19}+3.4^{\circ}$ in $CHCl_3$, $C_{23}H_{33}NO_3$, colorless prisms) was isolated from the leaves of Spiraea Koreana Nakai (Spiraeceae) (Korean name "Chamjopab namu"), which grows in the mountainous areas of Korea, by the process of Scheme I (yield 0.13%). Two other unidentified alkaloids (not yet crystallized) were separated by thin layer chromatography. (The Rf values of the two unidentified alkaloids were 0.66 and 0.77, respectively, and that of Spirajine was 0.72.) Spirajine was subjected to structural investigation using ultraviolet and infrared spectrophotometry and optical rotatory dispersion. The alkaloid contains two ketonic carbonyl groups, a tertiary hydroxyl group, methyl groups, an N-methyl group, and both a cyclohexane ring and a cyclopentane ring.


An Analysis of Preference for Korean Pop Music By Applying Acoustic Signal Analysis Techniques (음향신호분석 기술을 적용한 한국가요의 시대별 선호도 분석)

  • Cho, Dong-Uk;Kim, Bong-Hyun
    • The KIPS Transactions:PartD
    • /
    • v.19D no.3
    • /
    • pp.211-220
    • /
    • 2012
  • Recently, K-Pop has gained worldwide sensational popularity, no longer limited to the domestic pop music scene. One of the main causes may be that most K-Pop songs are "hook songs" exploiting the "hook effect": a certain melody and/or rhythm is repeated up to 70 times in one song so that it hooks the ear of the listener. Visual effects by K-Pop dance groups are also thought to contribute to this popularity. In this paper, we propose a method that traces the changes in preference for Korean pop music over time and investigates their causes using acoustic signal analysis. For this, acoustic signal analysis experiments are performed on Korean pop music ranging from popular female singers of the 1960s to those of the present day. Experimental results obtained by applying acoustic signal processing techniques show that period discrimination is possible based on scientific evidence. In addition, quantitative, objective, numerical data based on acoustic signal processing techniques are extracted, in contrast to pre-existing methods based on subjective and statistical data.

The Utility of Perturbation, Non-linear dynamic, and Cepstrum measures of dysphonia according to Signal Typing (음성 신호 분류에 따른 장애 음성의 변동률 분석, 비선형 동적 분석, 캡스트럼 분석의 유용성)

  • Choi, Seong Hee;Choi, Chul-Hee
    • Phonetics and Speech Sciences
    • /
    • v.6 no.3
    • /
    • pp.63-72
    • /
    • 2014
  • The current study assessed the utility of the acoustic analyses most commonly used in routine clinical voice assessment, including perturbation, nonlinear dynamic analysis, and spectral/cepstral analysis, based on the signal typing of dysphonic voices, and investigated the clinical applicability of these acoustic analysis methods. A total of 70 dysphonic voice samples were classified by signal type using narrowband spectrograms. Traditional parameters of %jitter, %shimmer, and signal-to-noise ratio were calculated for the signals using TF32, and the correlation dimension (D2) of the nonlinear dynamic analysis and spectral/cepstral measures, including mean CPP, CPP_sd, CPPf0, CPPf0_sd, L/H ratio, and L/H ratio_sd, were calculated with ADSV (Analysis of Dysphonia in Speech and VoiceTM). Auditory-perceptual analysis was performed by two blinded speech-language pathologists using the GRBAS scale. The results showed that the nearly periodic Type 1 signals were all functional dysphonia, while Type 4 signals comprised neurogenic and organic voice disorders. Only Type 1 voice signals were reliable for perturbation analysis in this study. Significant signal-type-related differences were found in all acoustic and auditory-perceptual measures. SNR, CPP, and L/H ratio values for Type 4 were significantly lower than those of the other voice signals, and significantly higher %jitter and %shimmer were observed in Type 4 voice signals (p<.001). Additionally, as the signal type increased, D2 values significantly increased and more complex and nonlinear patterns were represented. Nevertheless, D2 could not be obtained for voice signals with a high noise component associated with breathiness. In particular, CPP was more sensitive to the voice qualities 'G', 'R', and 'B' than any other acoustic measure. Thus, spectral and cepstral analyses may be applied to more severely dysphonic voices such as Type 4 signals, and CPP can be a more accurate and predictive acoustic marker for measuring voice quality and severity in dysphonia.

A study on combination of loss functions for effective mask-based speech enhancement in noisy environments (잡음 환경에 효과적인 마스크 기반 음성 향상을 위한 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.3
    • /
    • pp.234-240
    • /
    • 2021
  • In this paper, mask-based speech enhancement is improved for effective speech recognition in noisy environments. In mask-based speech enhancement, the enhanced spectrum is obtained by multiplying the noisy speech spectrum by the mask. The VoiceFilter (VF) model is used for mask estimation, and the Spectrogram Inpainting (SI) technique is used to remove residual noise from the enhanced spectrum. In this paper, we propose a combined loss to further improve speech enhancement. In order to effectively remove the residual noise in the speech, the positive part of the triplet loss is used together with the component loss. For the experiments, the TIMIT database is reconstructed using NOISEX92 noise and background music samples under various Signal-to-Noise Ratio (SNR) conditions. Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) are used as performance evaluation metrics. When the VF model was trained with the mean squared error and the SI model was trained with the combined loss, SDR, PESQ, and STOI improved by 0.5, 0.06, and 0.002, respectively, compared to the system trained only with the mean squared error.
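The core operation described above — obtaining the enhanced spectrum by multiplying the noisy spectrum by a mask — can be sketched in a few lines. This is an illustrative outline only, not the authors' VoiceFilter implementation: `apply_mask` and the toy thresholding mask are hypothetical stand-ins for the learned mask estimator.

```python
import numpy as np
from scipy.signal import stft, istft

def apply_mask(noisy, mask_fn, fs=16000, nperseg=512):
    """Enhance a noisy waveform by masking its STFT and inverting."""
    # STFT of the noisy signal
    f, t, spec = stft(noisy, fs=fs, nperseg=nperseg)
    # mask_fn returns a [0, 1]-valued mask with the same shape as spec
    mask = mask_fn(np.abs(spec))
    # element-wise multiplication; the noisy phase is reused as-is
    _, enhanced = istft(spec * mask, fs=fs, nperseg=nperseg)
    return enhanced

# toy mask: keep bins above the median magnitude
# (a crude stand-in for the learned VF mask)
rng = np.random.default_rng(0)
noisy = rng.standard_normal(16000)
out = apply_mask(noisy, lambda mag: (mag > np.median(mag)).astype(float))
```

In a real system the lambda would be replaced by a trained network's forward pass; the surrounding STFT/mask/inverse-STFT plumbing stays the same.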

The Speech of Cleft Palate Patients using Nasometer, EPG and Computer based Speech Analysis System (비음 측정기, 전기 구개도 및 음성 분석 컴퓨터 시스템을 이용한 구개열 언어 장애의 특성 연구)

  • Shin, Hyo-Geun;Kim, Oh-Whan;Kim, Hyun-Gi
    • Speech Sciences
    • /
    • v.4 no.2
    • /
    • pp.69-89
    • /
    • 1998
  • The aim of this study is to develop an objective method of speech evaluation for children with cleft palates. To assess velopharyngeal function, Visi-Pitch, the Computerized Speech Lab (CSL), a Nasometer, and a Palatometer were used in this study. Acoustic parameters were measured depending on the diagnostic instrument: pitch (Hz), sound pressure level (dB), jitter (%), and diadochokinetic rate by Visi-Pitch; VOT and vowel formants ($F_1\;&\;F_2$) by spectrography; and the degree of hypernasality by the Nasometer. In addition, the Palatometer was used to find the lingual-palatal patterns of cleft palate speech. Ten children with cleft palates and fifty normal children participated in the experiment. The results are as follows: (1) The higher nasalance of children with cleft palates showed the resonance disorder. (2) Children with cleft palates showed palatal misarticulation and lateral misarticulation on the palatogram. (3) Children with cleft palates showed phonatory and respiratory problems. The duration of sustained vowels in children with cleft palates was shorter than in the control group. The pitch of children with cleft palates was higher than in the control group. However, the intensity, jitter, and diadochokinetic rate of children with cleft palates were lower than in the control group. (4) On the spectrogram, the VOT of children with cleft palates was longer than in the control group, and $F_1\;&\;F_2$ were lower than in the control group.


Improvement of Vocal Detection Accuracy Using Convolutional Neural Networks

  • You, Shingchern D.;Liu, Chien-Hung;Lin, Jia-Wei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.2
    • /
    • pp.729-748
    • /
    • 2021
  • Vocal detection is one of the fundamental steps in music information retrieval. Typically, the detection process consists of feature extraction and classification steps. Recently, neural networks have been shown to outperform traditional classifiers. In this paper, we report our study on how to further improve detection accuracy by carefully choosing the parameters of the deep network model. Through experiments, we conclude that a feature-classifier model is still better than an end-to-end model. The recommended model uses a spectrogram as the input plane, and the classifier is an 18-layer convolutional neural network (CNN). With this arrangement, compared with the existing literature, the proposed model improves the accuracy from 91.8% to 94.1% on the Jamendo dataset. As the baseline accuracy is already more than 90%, the improvement of 2.3% is both difficult to achieve and valuable. If even higher accuracy is required, ensemble learning may be used. The recommended setting is a majority vote over seven proposed models. Doing so increases the accuracy by about 1.1% on the Jamendo dataset.
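The majority-vote ensemble mentioned in this abstract can be sketched in a few lines. This is a generic illustration assuming frame-wise binary vocal/non-vocal decisions; the function name and the toy vote matrix are hypothetical, not taken from the paper.

```python
import numpy as np

def majority_vote(preds):
    """preds: (n_models, n_frames) binary vocal/non-vocal decisions."""
    preds = np.asarray(preds)
    # a frame is labeled vocal when strictly more than half the models say so
    return (preds.sum(axis=0) * 2 > preds.shape[0]).astype(int)

# seven hypothetical model outputs over five frames
votes = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [1, 0, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
])
print(majority_vote(votes))  # → [1 0 1 1 0]
```

With an odd number of models (seven here), ties cannot occur, which is one practical reason to prefer an odd ensemble size.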

Towards Low Complexity Model for Audio Event Detection

  • Saleem, Muhammad;Shah, Syed Muhammad Shehram;Saba, Erum;Pirzada, Nasrullah;Ahmed, Masood
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.175-182
    • /
    • 2022
  • In our daily life, we come across different types of information, for example in multimedia and text formats. We all need different types of information for common routines such as watching or reading the news, listening to the radio, and watching different types of videos. However, we can sometimes run into problems when a certain type of information is required. For example, someone listening to the radio wants to hear jazz, but unfortunately all the radio channels play pop music mixed with advertisements; the listener gets stuck with pop music and gives up searching for jazz. This problem can be solved with an automatic audio classification system. Deep Learning (DL) models can make human life easier through audio classification, but it is expensive and difficult to deploy such models on edge devices like the Nano BLE Sense or Raspberry Pi, because these models require huge computational power, such as a graphics processing unit (GPU). To solve this problem, we propose a low-complexity DL model for Audio Event Detection (AED). We extracted mel-spectrograms of dimension 128×431×1 from the audio signals and applied normalization. A total of three data augmentation methods were applied: frequency masking, time masking, and mixup. In addition, we designed a Convolutional Neural Network (CNN) with spatial dropout, batch normalization, and separable 2D convolutions, inspired by VGGNet [1]. We further reduced the model size by applying float16 quantization to the trained model. Experiments were conducted on the updated dataset provided by the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 challenge. We confirm that our model achieved a validation loss of 0.33 and an accuracy of 90.34% within a model size of 132.50 KB.
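Two of the three augmentation methods named above, frequency masking and time masking, operate directly on the mel-spectrogram array and can be sketched as follows (mixup, which blends pairs of examples and their labels, is omitted for brevity). This is a generic SpecAugment-style sketch under assumed maximum mask widths, not the authors' exact implementation.

```python
import numpy as np

def freq_mask(spec, max_width, rng):
    """Zero out a random contiguous band of mel bins (frequency masking)."""
    spec = spec.copy()
    w = rng.integers(0, max_width + 1)            # mask width, may be 0
    f0 = rng.integers(0, spec.shape[0] - w + 1)   # band start
    spec[f0:f0 + w, :] = 0.0
    return spec

def time_mask(spec, max_width, rng):
    """Zero out a random contiguous span of time frames (time masking)."""
    spec = spec.copy()
    w = rng.integers(0, max_width + 1)
    t0 = rng.integers(0, spec.shape[1] - w + 1)
    spec[:, t0:t0 + w] = 0.0
    return spec

rng = np.random.default_rng(42)
mel = rng.random((128, 431))  # stand-in for a 128x431 mel-spectrogram
aug = time_mask(freq_mask(mel, 16, rng), 32, rng)
```

The maximum widths (16 mel bins, 32 frames) are illustrative hyperparameters; in practice they are tuned to the spectrogram resolution.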

CNN based data anomaly detection using multi-channel imagery for structural health monitoring

  • Shajihan, Shaik Althaf V.;Wang, Shuo;Zhai, Guanghao;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.181-193
    • /
    • 2022
  • Data-driven structural health monitoring (SHM) of civil infrastructure can be used to continuously assess the state of a structure, allowing preemptive safety measures to be carried out. Long-term monitoring of large-scale civil infrastructure often involves data-collection using a network of numerous sensors of various types. Malfunctioning sensors in the network are common, which can disrupt the condition assessment and even lead to false-negative indications of damage. The overwhelming size of the data collected renders manual approaches to ensure data quality intractable. The task of detecting and classifying an anomaly in the raw data is non-trivial. We propose an approach to automate this task, improving upon the previously developed technique of image-based pre-processing on one-dimensional (1D) data by enriching the features of the neural network input data with multiple channels. In particular, feature engineering is employed to convert the measured time histories into a 3-channel image comprised of (i) the time history, (ii) the spectrogram, and (iii) the probability density function representation of the signal. To demonstrate this approach, a CNN model is designed and trained on a dataset consisting of acceleration records of sensors installed on a long-span bridge, with the goal of fault detection and classification. The effect of imbalance in anomaly patterns observed is studied to better account for unseen test cases. The proposed framework achieves high overall accuracy and recall even when tested on an unseen dataset that is much larger than the samples used for training, offering a viable solution for implementation on full-scale structures where limited labeled-training data is available.
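The three-channel feature engineering described above can be sketched as follows. This is a minimal outline under assumed parameters (sampling rate, FFT length, histogram bin count); resizing the three views into equal-sized image channels and stacking them, as the paper does, is omitted for brevity.

```python
import numpy as np
from scipy.signal import spectrogram

def three_channel_features(x, fs):
    """Derive three complementary views of one acceleration record."""
    # channel 1: the raw time history itself
    time_hist = x
    # channel 2: spectrogram (log power to compress the dynamic range)
    f, t, sxx = spectrogram(x, fs=fs, nperseg=256)
    log_spec = np.log1p(sxx)
    # channel 3: empirical probability density of the signal amplitudes
    pdf, edges = np.histogram(x, bins=64, density=True)
    return time_hist, log_spec, pdf

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)        # stand-in for a sensor record
th, sp, pdf = three_channel_features(x, fs=100.0)
```

Each view exposes a different anomaly signature: drift and spikes show up in the time history, banding in the spectrogram, and distorted amplitude distributions in the PDF, which is the motivation for feeding all three to the CNN.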

A STUDY ON THE INFLUENCE OF THE PALATAL PLATES UPON THE DURATION OF KOREAN SOUNDS (구개상 장착에 따른 한국어 어음의 조음시간 변화에 관한 연구)

  • Koh, Yeo-Joon;Kim, Chang-Whe;Kim, Yong-Soo
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.32 no.1
    • /
    • pp.77-102
    • /
    • 1994
  • Many studies have been made on the masticatory and esthetic effects of prosthodontic treatment, but few on the restoration of pronunciation, especially in complete denture wearers. The purpose of this study is to provide a basis that could help complete denture wearers' speech adaptation by analyzing the influence of palatal coverage on the duration of consonants and vowels using the methods of experimental phonetics. For this study, metal plates and resin plates were made for 3 male subjects in their twenties who have good occlusion and no speech or hearing disorders. Then 8 Korean consonants and 4 Korean vowels were selected, systematically considering phonetic variables such as the place and manner of articulation, lenis/fortis, and the mutual effect of each phoneme. They were combined into meaningless test words of the form /VCV/ and embedded in carrier sentences. Each informant uttered the sentences 1) without a plate, 2) with the metal plate, and 3) with the resin plate. The recorded data were analyzed through the sound waveforms and spectrograms using the programs SoundEdit, Signalize, and Statview 512+ for the Macintosh computer. The duration of each segment was measured by locating the boundaries between the preceding vowels and the consonants, and between the consonants and the following vowels. The study led to the following conclusions: 1. With the palatal plate, the duration of all the test words increased, and the duration increased more with the resin plate than with the metal plate. 2. With the palatal plate, the duration of all the preceding vowels, consonants, and following vowels increased, but the temporal structure of the test words was maintained. 3. As for the manner of articulation, the fricative /s/(ㅅ) was greatly influenced by both kinds of palatal plates. 4. As for the place of articulation, the alveolar sounds /d/(ㄷ) and /n/(ㄴ) were greatly influenced by the kind of palatal plate, and the velar sounds /ŋ/(ㅇ) and /g/(ㄱ) were influenced by the palatal plates, but the kind of palatal plate did not show any significance. 5. As for lenis/fortis, lenis was influenced more by the kind of palatal plate. 6. As for the influence of vowels on each segment in the test words, the palatal vowel /i/(ㅣ) had a greater influence than the pharyngeal vowel /a/(ㅏ), and the following vowels had a greater influence than the preceding vowels.
