• Title/Summary/Keyword: Speaking Rate


SEMICLASSICAL ASYMPTOTICS OF INFINITELY MANY SOLUTIONS FOR THE INFINITE CASE OF A NONLINEAR SCHRÖDINGER EQUATION WITH CRITICAL FREQUENCY

  • Aguas-Barreno, Ariel;Cevallos-Chavez, Jordy;Mayorga-Zambrano, Juan;Medina-Espinosa, Leonardo
    • Bulletin of the Korean Mathematical Society
    • /
    • v.59 no.1
    • /
    • pp.241-263
    • /
    • 2022
  • We consider a nonlinear Schrödinger equation with critical frequency, (P𝜀) : 𝜀²∆v(x) - V(x)v(x) + |v(x)|p-1v(x) = 0, x ∈ ℝN, and v(x) → 0 as |x| → +∞, for the infinite case as described by Byeon and Wang. Critical means that 0 ≤ V ∈ C(ℝN) verifies Ƶ = {V = 0} ≠ ∅. Infinite means that Ƶ = {x0} and that, roughly speaking, the potential V decays at an exponential rate as x → x0. For the semiclassical limit, 𝜀 → 0, the infinite case has a characteristic limit problem, (Pinf) : ∆u(x) - P(x)u(x) + |u(x)|p-1u(x) = 0, x ∈ Ω, with u(x) = 0 for x ∈ ∂Ω, where Ω ⊆ ℝN is a smooth bounded strictly star-shaped region related to the potential V. We prove the existence of an infinite number of solutions for both the original and the limit problem via a Ljusternik-Schnirelman scheme for even functionals. Fixing a topological level k, we show that vk,𝜀, a solution of (P𝜀), subconverges, up to a scaling, to a corresponding solution of (Pinf), and that vk,𝜀 decays exponentially outside Ω. Finally, uniform estimates on ∂Ω for scaled solutions of (P𝜀) are obtained.
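For readability, the two problems in the abstract can be restated in display form; the Dirichlet boundary condition on ∂Ω is the standard reading of the abstract's limit problem:

```latex
\begin{align*}
(P_\varepsilon):\quad & \varepsilon^2 \Delta v(x) - V(x)\,v(x) + |v(x)|^{p-1} v(x) = 0,
  \quad x \in \mathbb{R}^N, \qquad v(x) \to 0 \ \text{as}\ |x| \to +\infty, \\
(P_{\mathrm{inf}}):\quad & \Delta u(x) - P(x)\,u(x) + |u(x)|^{p-1} u(x) = 0,
  \quad x \in \Omega, \qquad u(x) = 0 \ \text{on}\ \partial\Omega.
\end{align*}
```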

Knowledge-driven speech features for detection of Korean-speaking children with autism spectrum disorder

  • Seonwoo Lee;Eun Jung Yeo;Sunhee Kim;Minhwa Chung
    • Phonetics and Speech Sciences
    • /
    • v.15 no.2
    • /
    • pp.53-59
    • /
    • 2023
  • Detection of children with autism spectrum disorder (ASD) based on speech has relied on predefined feature sets due to their ease of use and the capabilities of speech analysis. However, clinical impressions may not be adequately captured due to the broad range and large number of features included. This paper demonstrates that knowledge-driven speech features (KDSFs) specifically tailored to the speech traits of ASD are more effective and efficient for distinguishing the speech of children with ASD from that of children with typical development (TD) than a predefined feature set, the extended Geneva Minimalistic Acoustic Standard Parameter Set (eGeMAPS). The KDSFs encompass speech characteristics related to frequency, voice quality, speech rate, and spectral features that have been identified as corresponding to the distinctive speech attributes of children with ASD. The speech dataset used for the experiments consists of 63 children with ASD and 9 TD children. To alleviate the imbalance in the number of training utterances, a data augmentation technique was applied to the TD children's utterances. The support vector machine (SVM) classifier trained with the KDSFs achieved an accuracy of 91.25%, surpassing the 88.08% obtained using the predefined set. This result underscores the importance of incorporating domain knowledge in the development of speech technologies for individuals with disorders.
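The pipeline this abstract describes (acoustic features per utterance fed to an SVM classifier) can be sketched roughly as follows. The feature names and the synthetic data are illustrative assumptions, not the authors' actual KDSF set or dataset:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical knowledge-driven features per utterance:
# [mean F0 (Hz), jitter, speaking rate (syll/s), spectral centroid (Hz)]
n_per_class = 40
asd = rng.normal([220.0, 0.015, 3.2, 1800.0], [30.0, 0.004, 0.6, 250.0], (n_per_class, 4))
td = rng.normal([240.0, 0.010, 4.1, 1600.0], [30.0, 0.003, 0.5, 250.0], (n_per_class, 4))

X = np.vstack([asd, td])
y = np.array([1] * n_per_class + [0] * n_per_class)  # 1 = ASD, 0 = TD

# Shuffle, then split into train/test
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
X_train, X_test = X[:60], X[60:]
y_train, y_test = y[:60], y[60:]

# Standardize with training-set statistics (SVMs are scale-sensitive),
# then fit an RBF-kernel SVM
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
clf = SVC(kernel="rbf", C=1.0)
clf.fit((X_train - mu) / sd, y_train)

acc = accuracy_score(y_test, clf.predict((X_test - mu) / sd))
print(f"toy accuracy: {acc:.2f}")
```

On real data the class imbalance (63 vs. 9 speakers) would additionally require the augmentation step the abstract mentions before training.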

Factors that Influence Awareness of Breast Cancer Screening among Arab Women in Qatar: Results from a Cross Sectional Survey

  • Donnelly, Tam Truong;Al Khater, Al-Hareth;Al-Bader, Salha Bujassoum;Al Kuwari, Mohammed Ghaith;Malik, Mariam;Al-Meer, Nabila;Singh, Rajvir;Fung, Tak
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.23
    • /
    • pp.10157-10164
    • /
    • 2015
  • Background: Breast cancer is the most common cancer among women in the State of Qatar. Due to low participation in breast cancer screening (BCS) activities, women in Qatar are often diagnosed with breast cancer at advanced stages of the disease. Findings indicate that low participation rates in BCS activities are significantly related to women's low level of awareness of breast cancer screening. The objectives of this study were to: (1) determine the factors that influence Qatari women's awareness of breast cancer and its screening activities; and (2) find ways to effectively promote breast cancer screening activities among Arabic-speaking women in Qatar. Materials and Methods: A multicenter, cross-sectional quantitative survey of 1,063 (87.5% response rate) female Qatari citizens and non-Qatari Arabic-speaking residents, 35 years of age or older, was conducted in Qatar from March 2011 to July 2011. Outcome measures included participants' awareness of the most recent nationally recommended BCS guidelines, participation rates in BCS activities, and factors related to awareness of BCS activities. Results: While most participants (90.7%) were aware of breast cancer, less than half were aware of BCS practices (28.9% were aware of breast self-examination and 41.8% of clinical breast exams, while 26.4% knew that mammography was recommended by national screening guidelines; only 7.6% had knowledge of all three BCS activities). Regarding BCS practice, less than one-third practiced BCS appropriately (13.9% of participants performed breast self-examination (BSE) monthly, 31.3% had a clinical breast exam (CBE) once a year or once every two years, and 26.9% of women 40 years of age or older had a mammogram once every year or two years). Awareness of BCS was significantly related to BCS practice, education level, and receipt of information about breast cancer and/or BCS from a variety of sources, particularly doctors and the media. Conclusions: The low participation rates in BCS among Arab women in this study indicate a strong need to increase awareness of the importance of breast cancer screening in Qatari women. Without this awareness, compliance with the most recent breast cancer screening recommendations in Qatar will remain low. An increased effort to implement mass media and public health campaigns regarding the impact of breast cancer on women's health and the benefits of early detection must be coupled with enhanced participation of health care providers in delivering this message to the Qatari population.

Acoustic Characteristics of Stop Consonant Production in the Motor Speech Disorders (운동성 조음장애에서 폐쇄자음 발성의 음향학적 특성)

  • Hong, Hee-Kyung;Kim, Moon-Jun;Yoon, Jin;Park, Hee-Taek;Hong, Ki-Hwan
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.23 no.1
    • /
    • pp.33-42
    • /
    • 2012
  • Background and Objectives : Dysarthria refers to a speech disorder that causes difficulties in spoken communication due to paralysis, muscle weakening, or incoordination of the speech musculature resulting from damage to the central or peripheral nervous system. Pitch, loudness, and speed are affected during phonation in dysarthria because of impaired muscle control. Alternate motion rate and diadochokinesis are commonly used evaluation items, and articulation is also an important one. The purpose of this study is to identify acoustic characteristics of sound production in dysarthric patients. Materials and Methods : Twenty dysarthric patients and 20 control subjects were selected. The voice sample for the diadochokinetic rate task consisted of bilabial, alveolar, and velar syllables, while the consonant articulation test consisted of bilabial, alveolar, and velar plosives. Analysis items were 1) speaking rate, energy, and articulation time of diadochokinesis, and 2) voice onset time (VOT), total duration (TD), vowel duration (VD), and hold of the plosives. Results and Conclusions : The diadochokinetic rate of the dysarthria group was lower than that of the control group. In both groups the rate was highest in the order /t/ > /p/ > /k/. The minimum energy range per cycle during the diadochokinetic task was smaller in the dysarthria group than in the control group, with statistical significance for /p/, /k/, and /ptk/. The maximum energy range was larger than in the control group, with statistical significance for /t/ and /ptk/. Articulation time, gap, and total articulation time during the diadochokinetic task were longer in the dysarthria group, with statistical significance. Articulation time followed the order /k/ > /t/ > /p/ in both groups, while gap followed /p/ > /t/ > /k/ in the control group and /p/ > /k/ > /t/ in the dysarthria group. VOT, TD, and VD of the plosives were longer in the dysarthria group than in the control group. Hold showed a large deviation compared to the control group, attributable to reduced laryngeal and articulatory organ motility.
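The measures this abstract analyzes (diadochokinetic rate, VOT, vowel duration, total duration) reduce to simple arithmetic over annotated timestamps. A minimal sketch, with made-up annotation values, not the paper's data:

```python
# Sketch of the acoustic measures named in the abstract, computed from
# hypothetical hand-annotated timestamps in seconds.

def diadochokinetic_rate(syllable_onsets, window):
    """Syllable repetitions per second over an annotation window."""
    return len(syllable_onsets) / window

def stop_measures(burst_t, voicing_onset_t, vowel_offset_t):
    """VOT, vowel duration, and total duration for one plosive+vowel token."""
    vot = voicing_onset_t - burst_t        # voice onset time
    vd = vowel_offset_t - voicing_onset_t  # vowel duration
    td = vowel_offset_t - burst_t          # total duration
    return vot, vd, td

# /p/ repeated over a 3-second DDK task (7 onsets -> ~2.3 syll/s)
onsets = [0.10, 0.55, 1.02, 1.48, 1.95, 2.43, 2.90]
rate = diadochokinetic_rate(onsets, window=3.0)

# One plosive+vowel token: burst at 100 ms, voicing at 165 ms, vowel ends at 320 ms
vot, vd, td = stop_measures(burst_t=0.100, voicing_onset_t=0.165, vowel_offset_t=0.320)
print(rate, vot, vd, td)
```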


A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 김동수;남기환;한준희;배철수;나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1998.11a
    • /
    • pp.181-185
    • /
    • 1998
  • Recent research on communication systems has moved toward using the speaker's face image together with the voice data, since this yields a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3-D facial shape model. The method uses facial feature information such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3-D face model to the image sequence of the speaking face. Then, to obtain feature information, we compute the frame-to-frame variation of the fitted 3-D shape model over the image sequence and use this variation as the recognition parameters. We use the intensity gradient values obtained from the variation of the 3-D feature points to segment the image sequence into recognition units. Recognition is then performed with a discrete HMM algorithm over multiple observation sequences that fully reflect the variation of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels.
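The discrete-HMM recognition step can be sketched as a forward-algorithm scorer: each recognition unit gets its own HMM, and a quantized observation sequence is assigned to the model with the highest likelihood. The model parameters and lip-shape codes below are toy assumptions, not the paper's trained models:

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM.
    pi: initial state probs, A: transition matrix, B: emission matrix."""
    alpha = pi * B[:, obs[0]]       # forward variables at t = 0
    log_p = np.log(alpha.sum())     # accumulate log of scaling factors
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # forward recursion
        c = alpha.sum()
        log_p += np.log(c)
        alpha /= c                     # rescale to avoid underflow
    return log_p

# Two toy 2-state models for two recognition units
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B_a = np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2]])  # favors symbol 0
B_i = np.array([[0.1, 0.1, 0.8], [0.2, 0.2, 0.6]])  # favors symbol 2

obs = [0, 0, 1, 0]  # quantized lip-shape codes for one recognition unit
scores = {"a": forward_log_likelihood(pi, A, B_a, obs),
          "i": forward_log_likelihood(pi, A, B_i, obs)}
best = max(scores, key=scores.get)
print(best)  # the model favoring symbol 0 wins for this sequence
```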


A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 남기환;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.5
    • /
    • pp.783-788
    • /
    • 2002
  • Recent research on communication systems has moved toward using the speaker's face image together with the voice data, since this yields a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3-D facial shape model. The method uses facial feature information such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3-D face model to the image sequence of the speaking face. Then, to obtain feature information, we compute the frame-to-frame variation of the fitted 3-D shape model over the image sequence and use this variation as the recognition parameters. We use the intensity gradient values obtained from the variation of the 3-D feature points to segment the image sequence into recognition units. Recognition is then performed with a discrete HMM algorithm over multiple observation sequences that fully reflect the variation of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels.

The Clinical Evaluation of The Reconstruction of Radial Forearm Free Flap in the Head and Neck Cancer Surgery (두경부 악성 종양 절제술후 요골 전완 유리피판을 이용한 재건술의 평가)

  • Kim Hyun-Jik;Lim Young-Chang;Song Mee-Hyun;Lee Won-Jae;Choi Eun-Chang
    • Korean Journal of Head & Neck Oncology
    • /
    • v.19 no.2
    • /
    • pp.164-169
    • /
    • 2003
  • Background and Objectives: Reconstruction is very important in head and neck cancer surgery to repair the defect created by resection of the tumor, to enable successful wound healing, to restore function, and to provide acceptable cosmesis. The radial forearm free flap has been the most useful reconstructive flap because it provides a moderate amount of thin, pliable, relatively hairless skin and is comparatively simple to perform with minimal morbidity. The aim of this study is to evaluate the outcome of reconstruction with the radial forearm free flap, together with several related factors, in 140 head and neck cancer cases at our hospital over the last 10 years. Materials and Methods: We retrospectively reviewed the records of 140 patients who underwent resection of head and neck tumors and reconstruction with a radial forearm free flap from 1993 to 2003. Age, sex, primary site, complications of the donor and recipient sites, flap survival rate, median time to start of diet, patients' subjective symptoms regarding swallowing and articulation, and any revision reconstructive surgery were analyzed. Results: By primary pathologic site, 56 cases were oral cavity cancers, 44 oropharyngeal cancers, and 22 hypopharyngeal cancers. The flap survival rate was 93.6% (131 cases). At the donor site, wound dehiscence, hematoma, sensory change, and infection were noted; at the recipient site, the most common complications were fistula and wound dehiscence. The complication rate was 19.1% at the recipient site and 3.5% at the donor site. In 118 cases (84.3%), the patients could take all kinds of food. Swallowing difficulty was noted in 22 cases (15.7%). In 5 cases there was articulation difficulty, but most patients, except those who had undergone total laryngectomy (18 cases), had no difficulty in articulation and speaking. Conclusion: We conclude that the radial forearm free flap is the most appropriate reconstructive material for treating defects in head and neck reconstruction.

The Speaker Recognition System using the Pitch Alteration (피치변경을 이용한 화자인식 시스템)

  • Jung JongSoon;Bae MyungJin
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.115-118
    • /
    • 2002
  • Parameters used in a speaker recognition system should fully express the speaker's characteristics as contained in the speech. That is, a characteristic whose inter-speaker variance is larger than its intra-speaker variance is useful for distinguishing between speakers. In addition to such distinguishing characteristics, improved recognition technology is required to minimize errors between speakers. Recent simulation results show that more accurate performance is obtained by using dynamic characteristics together with the constant characteristics arising from speaking habit. We therefore propose the following approach: prosodic information is used as the characteristic vector of speech. The characteristic vectors generally used in speaker recognition systems model spectral information and perform well in noise-free conditions. However, these vectors are distorted in noisy conditions, which reduces the recognition rate. In this paper, we divide the pitch contour into segments from which a dynamic characteristic can be estimated, and use it as a recognition characteristic. Simulations confirmed that this dynamic characteristic is very robust in noisy conditions. Acceptance or rejection is decided by comparing the test pattern against the reference, and the recognition rate of the proposed algorithm shows more improvement than using spectral and prosodic information alone. In particular, the simulations show that a stable recognition rate can be obtained in noisy conditions.
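The idea of segmenting a pitch contour and using its per-segment dynamics as a feature can be sketched as fitting a line to each fixed-length segment and keeping the slopes. The contour values and segment length are illustrative assumptions, not the paper's method details:

```python
import numpy as np

def segment_pitch_dynamics(f0, seg_len):
    """Fit a line to each fixed-length segment of a pitch contour and
    return the per-segment slopes as dynamic (delta-like) features."""
    slopes = []
    t = np.arange(seg_len, dtype=float)
    for start in range(0, len(f0) - seg_len + 1, seg_len):
        seg = f0[start:start + seg_len]
        slope, _intercept = np.polyfit(t, seg, 1)  # least-squares line fit
        slopes.append(slope)
    return np.array(slopes)

# Hypothetical F0 contour (Hz), one value per 10 ms frame: a rise then a fall
f0 = np.concatenate([np.linspace(120, 180, 20), np.linspace(180, 130, 20)])
feat = segment_pitch_dynamics(f0, seg_len=10)
print(feat)  # positive slopes in the rise, negative in the fall
```

Because a slope describes the shape of the contour rather than its absolute level, this kind of feature is less sensitive to additive noise than raw spectral magnitudes, which matches the robustness argument in the abstract.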


A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.3 no.1
    • /
    • pp.59-68
    • /
    • 1999
  • Recent research on communication systems has moved toward using the speaker's face image together with the voice data, since this yields a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3-D facial shape model. The method uses facial feature information such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3-D face model to the image sequence of the speaking face. Then, to obtain feature information, we compute the frame-to-frame variation of the fitted 3-D shape model over the image sequence and use this variation as the recognition parameters. We use the intensity gradient values obtained from the variation of the 3-D feature points to segment the image sequence into recognition units. Recognition is then performed with a discrete HMM algorithm over multiple observation sequences that fully reflect the variation of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels. A further recognition experiment with the 10 Korean vowels as recognition parameters also yielded a high recognition rate, suggesting the usability of the proposed feature vector as a visual distinguishing factor.


Automatic severity classification of dysarthria using voice quality, prosody, and pronunciation features (음질, 운율, 발음 특징을 이용한 마비말장애 중증도 자동 분류)

  • Yeo, Eun Jung;Kim, Sunhee;Chung, Minhwa
    • Phonetics and Speech Sciences
    • /
    • v.13 no.2
    • /
    • pp.57-66
    • /
    • 2021
  • This study focuses on the issue of automatic severity classification of dysarthric speakers based on speech intelligibility. Speech intelligibility is a complex measure that is affected by features of multiple speech dimensions. However, most previous studies are restricted to features from a single speech dimension. To effectively capture the characteristics of the speech disorder, we extracted features of multiple speech dimensions: voice quality, prosody, and pronunciation. Voice quality consists of jitter, shimmer, Harmonic-to-Noise Ratio (HNR), number of voice breaks, and degree of voice breaks. Prosody includes speech rate (total duration, speech duration, speaking rate, articulation rate), pitch (F0 mean/std/min/max/median/25th quartile/75th quartile), and rhythm (%V, deltas, Varcos, rPVIs, nPVIs). Pronunciation contains Percentage of Correct Phonemes (Percentage of Correct Consonants/Vowels/Total phonemes) and degree of vowel distortion (Vowel Space Area, Formant Centralized Ratio, Vowel Articulatory Index, F2-Ratio). Experiments were conducted using various feature combinations. The experimental results indicate that using features from all three speech dimensions gives the best result, with an F1-score of 80.15, compared to using features from just one or two speech dimensions. The result implies that voice quality, prosody, and pronunciation features should all be considered in automatic severity classification of dysarthria.
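The voice-quality measures listed above can be illustrated with the textbook definitions of local jitter and shimmer, computed from cycle-to-cycle period and peak-amplitude sequences. The input values below are made up for illustration:

```python
import numpy as np

def local_jitter(periods):
    """Mean absolute difference of consecutive pitch periods,
    normalized by the mean period (textbook local jitter)."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Same definition applied to cycle peak amplitudes."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Hypothetical glottal cycle measurements for a ~200 Hz voice
periods = [0.0050, 0.0051, 0.0049, 0.0052, 0.0050]  # seconds per cycle
amps = [0.80, 0.78, 0.82, 0.79, 0.81]               # peak amplitude per cycle

jit = local_jitter(periods)
shim = local_shimmer(amps)
print(f"jitter: {jit:.4f}  shimmer: {shim:.4f}")
```

Larger cycle-to-cycle irregularity raises both values, which is why dysarthric voices, with their reduced laryngeal control, tend to score higher on these measures.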