• Title/Summary/Keyword: voice quality features

Analysis of Elementary Students' Interlanguage in Science Class about Heat and Temperature (열과 온도 수업에서 나타난 초등학생들의 중간 언어 분석)

  • Lee, Ilyeon;Jang, Shinho
    • Journal of Korean Elementary Science Education, v.34 no.1, pp.123-130, 2015
  • For effective science learning, teachers need to rearrange scientific language so that students can understand the content with their incomplete language resources. Interlanguage is the interplay between everyday language and scientific language. The purpose of this study was to analyze the patterns of interlanguage during a 4th grade science class on "Heat and Temperature" and to find the features of meaning sharing inside the classroom in which a teacher and students participated. The data analysis shows that elementary students' interlanguage has different features compared to scientific language, which involves passive voice and content-specialized nouns. Students' interlanguage reflected the quality of the class community's knowledge sharing, according to the degree to which students could connect scientific language and everyday language in effective ways. Implications for elementary science education are discussed.

Efficacy of laughing voice treatment (SKMVTT) in benign vocal fold lesions (양성성대질환의 웃음 음성치료(SKMVTT))

  • Jung, Dae-Yong;Wi, Joon-Yeol;Kim, Seong-Tae
    • Phonetics and Speech Sciences, v.10 no.4, pp.155-161, 2018
  • The purpose of this study was to evaluate the efficacy of a multiple voice therapy technique (SKMVTT®) using laughter for the treatment of various benign vocal fold lesions. To achieve this, 23 female patients diagnosed with vocal nodules, vocal polyps, and muscle tension dysphonia through videostroboscopy were enrolled in vocal hygiene and SKMVTT®. All of the patients were treated once a week for 4 to 12 sessions. The GRBAS scale was used to confirm the changes in voice quality before and after treatment. Acoustic analysis was performed to evaluate jitter, shimmer, NHR, fundamental frequency variation (vFo), amplitude variation (vAm), PFR, and dB range. Videostroboscopy was performed to confirm the changes in laryngeal features before and after treatment. After the SKMVTT®, the results of the perceptual evaluation demonstrated that the G, R, and B scales significantly improved. The acoustic evaluation also demonstrated that jitter, shimmer, NHR, vAm, vFo, PFR, and dB range significantly improved after the SKMVTT®. In the videostroboscopic findings, the vocal nodules and vocal polyps decreased in size or disappeared after treatment. In addition, the size of the cuneiform tubercles decreased, the aryepiglottic folds became longer, and the laryngeal findings of supraglottic compression improved after the SKMVTT®. These results suggest that the SKMVTT® is effective in improving the vocal quality of patients with benign vocal fold lesions. In conclusion, it seems that laughter and inspiratory phonation suppressed abnormal laryngeal elevation and lowered laryngeal height, which appears to have the effect of improving hyperfunctional phonation.
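
The jitter and shimmer measures reported above are conventionally computed from cycle-to-cycle variation in glottal period and peak amplitude. A minimal numpy sketch of local jitter and shimmer follows; it is illustrative only, not the MDVP implementation used in the study:

```python
import numpy as np

def local_jitter(periods):
    """Mean absolute difference of consecutive pitch periods,
    normalized by the mean period (local jitter, %)."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes):
    """Mean absolute difference of consecutive peak amplitudes,
    normalized by the mean amplitude (local shimmer, %)."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

# Hypothetical glottal-cycle measurements (seconds, linear amplitude)
periods = [0.0050, 0.0051, 0.0049, 0.0052, 0.0050]
amps = [0.82, 0.80, 0.85, 0.79, 0.83]
print(f"jitter  = {local_jitter(periods):.2f} %")
print(f"shimmer = {local_shimmer(amps):.2f} %")
```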

A Study on Gender Difference in Antecedents of Trust and Continuance Intention to Purchase Voice Speakers

  • Youness EL Mezzi;Nicole Agnieszka Rydz;Kyung Jin Cha
    • Asia Pacific Journal of Information Systems, v.30 no.3, pp.614-635, 2020
  • This study aims at understanding gender differences in trust and the related factors affecting the intention to purchase voice speakers (VS). VS are one of the innovations emerging at a fast pace in the market. Although they seem to be widely embraced by both genders, people in some cases do not intend to use them due to a lack of trust and the rumors circulating about these types of technologies. There are also particular barriers to the acceptance of VS technology between females and males due to unfamiliarity with the effective components of such technologies; increasing knowledge-based familiarity with a technology is therefore assumed to be essential for accepting it. So far, little is known about VS and the concepts that increase familiarity with, and as a consequence acceptance of, the technology. Technology adoption across genders has been studied for many years, and there are many general models in the literature describing it; however, more customized models for emerging technologies based on their features seem necessary. This study is based on the Theory of Reasoned Action and trust-based acceptance, which provide a background for understanding the relationships between beliefs, attitudes, intentions, and subjective norms, and how they affect gender trust in VS. The statistical analysis results indicate that perceived system quality and perceived interaction quality have stronger influences on trust for males, while privacy concern and emotional trust have stronger influences on trust for females, with trust affecting the intention to purchase for both genders. Our study can be beneficial for future research in the areas of perceived risk, perceived utility, behavioral intention to use, human-technology interaction, and psychology.
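
A gender comparison of this kind can be sketched as a regression with gender interaction terms. The following statsmodels illustration uses hypothetical column names and data, not the authors' actual model:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per respondent. All column names
# are assumptions for illustration: trust, system_quality,
# interaction_quality, privacy_concern, emotional_trust, gender (0/1).
df = pd.read_csv("vs_survey.csv")

# Gender-moderated trust model: each predictor gets a different slope
# for males and females via the interaction terms.
model = smf.ols(
    "trust ~ (system_quality + interaction_quality"
    " + privacy_concern + emotional_trust) * gender",
    data=df,
).fit()
print(model.summary())  # significant interactions indicate gender differences
```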

A "GAP-Model" based Framework for Online VVoIP QoE Measurement

  • Calyam, Prasad;Ekici, Eylem;Lee, Chang-Gun;Haffner, Mark;Howes, Nathan
    • Journal of Communications and Networks, v.9 no.4, pp.446-456, 2007
  • Increased access to broadband networks has led to a fast-growing demand for voice and video over IP (VVoIP) applications such as Internet telephony (VoIP), videoconferencing, and IP television (IPTV). For pro-active troubleshooting of VVoIP performance bottlenecks that manifest to end-users as impairments such as video frame freezing and voice dropouts, network operators cannot rely on actual end-users to report their subjective quality of experience (QoE). Hence, automated and objective techniques that provide real-time or online VVoIP QoE estimates are vital. Objective techniques developed to date estimate VVoIP QoE by performing frame-to-frame peak-signal-to-noise ratio (PSNR) comparisons of the original video sequence and the reconstructed video sequence obtained from the sender-side and receiver-side, respectively. Since processing such video sequences is time-consuming and computationally intensive, existing objective techniques cannot provide online VVoIP QoE. In this paper, we present a novel framework that can provide online estimates of VVoIP QoE on network paths without end-user involvement and without requiring any video sequences. The framework features the "GAP-model", which is an offline model of QoE expressed as a function of measurable network factors such as bandwidth, delay, jitter, and loss. Using the GAP-model, our online framework can produce VVoIP QoE estimates in terms of "Good", "Acceptable", or "Poor" (GAP) grades of perceptual quality solely from online measured network conditions.
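
The output stage of such a framework can be pictured as a mapping from measured network factors to GAP grades. The sketch below uses invented thresholds for illustration; it is not the paper's fitted GAP-model:

```python
def gap_grade(bandwidth_kbps, delay_ms, jitter_ms, loss_pct):
    """Map measured network conditions to a perceptual-quality grade.
    Thresholds are illustrative placeholders; the actual GAP-model is
    an offline model fitted from subjective quality testing."""
    if bandwidth_kbps >= 768 and delay_ms <= 150 and jitter_ms <= 20 and loss_pct <= 0.5:
        return "Good"
    if bandwidth_kbps >= 256 and delay_ms <= 300 and jitter_ms <= 50 and loss_pct <= 2.0:
        return "Acceptable"
    return "Poor"

print(gap_grade(bandwidth_kbps=1024, delay_ms=80, jitter_ms=10, loss_pct=0.1))   # Good
print(gap_grade(bandwidth_kbps=300, delay_ms=250, jitter_ms=40, loss_pct=1.5))   # Acceptable
```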

Acoustic Features of Oral Vowels in the Esophagus Speakers (식도음성의 모음종류에 따른 음향학적 특성)

  • Yun, Eunmi;Mok, Eunhee;Minh, Phan huu Ngoc;Hong, Kihwan
    • Phonetics and Speech Sciences, v.7 no.4, pp.85-92, 2015
  • This study aimed to establish characteristics related to voice and speech through analysis of the natural fundamental frequency of esophageal speech. Eight esophageal speakers were selected as subjects, and 10 other subjects were selected as a control group. MDVP (Multi-Dimensional Voice Program, Model 4800, USA, 2001) and Multi-Speech (Model 3700, KayPENTAX, USA, 2008) were used as experimental equipment. The speech samples selected for evaluation were vowels and sentences (both declarative and interrogative). For acoustic analysis, F0, jitter, energy, shimmer, HNR, and the intonation patterns of the speech samples were measured. The results were as follows. First, the natural intrinsic frequency of extended vowels in the esophageal speaker group was lower than in the normal group; in particular, the difference for the high vowel /i/ was much greater than for the low vowel /a/. Second, the jitter values of the esophageal group were higher than those of the control group; there was a large difference between the jitter values for /a/ and /i/, with the values being highest for /i/. Third, there was no significant difference in vocal intensity between the esophageal group and the control group. Fourth, the shimmer values of the esophageal group were higher than those of the control group, with a particularly large difference for the low vowel /a/. Fifth, the HNR values of the esophageal group were significantly lower than those of the control group; the largest difference between the two groups was for the high vowel /i/. Sixth, the pitch contours of interrogative and declarative sentences in the esophageal group showed a different form from, or only small differences compared with, those of the normal group, presenting an inconsistent pattern.
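
HNR, one of the measures compared above, can be approximated from the normalized autocorrelation of a voiced frame. A minimal numpy sketch follows; it is a rough approximation, not the MDVP algorithm used in the study:

```python
import numpy as np

def hnr_db(frame, sr, f0_min=60.0, f0_max=400.0):
    """Approximate harmonics-to-noise ratio (dB) of one voiced frame
    from the peak of its normalized autocorrelation at the pitch lag."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / ac[0]  # normalize so the lag-0 autocorrelation is 1
    lo, hi = int(sr / f0_max), int(sr / f0_min)
    r_max = np.max(ac[lo:hi])  # periodicity strength in the F0 lag range
    return 10.0 * np.log10(r_max / (1.0 - r_max))

# Example: synthetic 150 Hz voiced frame with a little additive noise
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
frame = np.sin(2 * np.pi * 150 * t) + 0.05 * np.random.randn(t.size)
print(f"HNR ≈ {hnr_db(frame, sr):.1f} dB")
```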

Emotion Recognition Based on Frequency Analysis of Speech Signal

  • Sim, Kwee-Bo;Park, Chang-Hyun;Lee, Dong-Wook;Joo, Young-Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems, v.2 no.2, pp.122-126, 2002
  • In this study, we find the features of three emotions (happiness, anger, surprise) as fundamental research for emotion recognition. Emotional speech signals have several elements: voice quality, pitch, formants, speech speed, etc. Until now, most researchers have used changes in pitch, the short-time average power envelope, or Mel-based speech power coefficients. Pitch is, of course, a very efficient and informative feature, so we used it in this study. As pitch is very sensitive to delicate emotions, it changes easily whenever a person is in a different emotional state; we can therefore observe whether the pitch changes steeply, changes with a gentle slope, or does not change. This paper also extracts formant features from emotional speech. The vowels show that each formant has a similar position without big differences. Based on this fact, in the happiness case we extract features of laughter, and with them we separate laughing to simplify the task. We also find such features for anger and surprise.
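
Formant features of the kind used above are commonly estimated from the roots of an LPC polynomial. A minimal sketch with librosa and numpy follows; it is illustrative, not the authors' exact method:

```python
import numpy as np
import librosa

def estimate_formants(frame, sr, order=12, n_formants=3):
    """Estimate the first few formant frequencies (Hz) of a voiced
    frame from the angles of the LPC polynomial roots."""
    a = librosa.lpc(frame.astype(float), order=order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]            # one of each conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)   # root angle -> frequency in Hz
    freqs = np.sort(freqs[freqs > 90])           # drop near-DC roots
    return freqs[:n_formants]

# Example: a synthetic vowel-like frame with two damped resonances
sr = 16000
t = np.arange(int(0.03 * sr)) / sr
frame = (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)) * np.exp(-30 * t)
print(estimate_formants(frame, sr))  # roughly [700, 1200]
```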

A Method of Predicting Service Time Based on Voice of Customer Data (고객의 소리(VOC) 데이터를 활용한 서비스 처리 시간 예측방법)

  • Kim, Jeonghun;Kwon, Ohbyung
    • Journal of Information Technology Services, v.15 no.1, pp.197-210, 2016
  • With the advent of text analytics, VOC (Voice of Customer) data have become an important resource that provides managers and marketing practitioners with consumers' veiled opinions and requirements. In other words, making relevant use of VOC data potentially improves customer responsiveness and satisfaction, each of which eventually improves business performance. However, unstructured data such as customer complaints in VOC data have seldom been used in marketing practice, for example in predicting service time as an index of service quality, because the unstructured portion of VOC data takes too complicated a form and converting it into structured data is a difficult process. Hence, this study aims to propose a prediction model that improves the estimation accuracy of the level of customer satisfaction by combining unstructured features from text mining with structured data features in VOC. We also examine the relationship between the unstructured data, the structured data, and service processing time through regression analysis. Text mining techniques, sentiment analysis, keyword extraction, classification algorithms, decision trees, and multiple regression are considered and compared. For the experiment, we used actual VOC data from a company.
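
Combining text-mined and structured features as described above can be sketched with scikit-learn. The file and column names below are hypothetical, not the study's actual data:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical VOC records: free-text complaint plus structured fields.
# Columns assumed for illustration: complaint_text, channel, product,
# and the target service_minutes (processing time).
df = pd.read_csv("voc.csv")

features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=2000), "complaint_text"),          # unstructured
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel", "product"]), # structured
])
model = Pipeline([("features", features), ("reg", LinearRegression())])

scores = cross_val_score(model, df, df["service_minutes"],
                         scoring="neg_mean_absolute_error", cv=5)
print("MAE:", -scores.mean())
```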

Development of automatic assembly module for yoke parts in auto-focusing actuator (Auto-Focusing 미세부품 Yoke 조립 자동화 모듈 개발)

  • Ha, Seok-Jae;Park, Jeong-Yeon;Park, Kyu-Sub;Yoon, Gil-Sang
    • Design & Manufacturing, v.13 no.1, pp.55-60, 2019
  • High-end smart-phones released recently are equipped with camera modules featuring auto-focusing. Auto-focusing camera modules are divided into voice coil motor, encoder, and piezo types according to the motion mechanism. An auto-focusing camera module is composed of a voice coil motor (VCM) as an actuator and a leaf spring as a guide and suspension. A VCM actuator is made of a magnet, a yoke (metal), and a coil (copper wire). Currently, the yoke and magnet are assembled manually. This process takes a long time, makes it difficult to secure quality, is not economical in cost, and reduces productivity. Therefore, automatic assembly of the yoke and magnet is needed in the present process. In this paper, we developed an automatic assembly device that can automatically assemble the yoke and magnet, and we verified its performance. By using the developed device, it is possible to increase productivity and reduce production cost.

Customer Attitude to Artificial Intelligence Features: Exploratory Study on Customer Reviews of AI Speakers (인공지능 속성에 대한 고객 태도 변화: AI 스피커 고객 리뷰 분석을 통한 탐색적 연구)

  • Lee, Hong Joo
    • Knowledge Management Research, v.20 no.2, pp.25-42, 2019
  • AI speakers, which are wireless speakers with smart features, have been released by many manufacturers and adopted by many customers. Though smart features including voice recognition, controlling connected devices, and providing information are embedded in many mobile phones, AI speakers sit in the home and play the role of central entertainment and information provider. Many surveys have investigated the important factors in adopting AI speakers and the factors influencing satisfaction. Though most surveys on AI speakers are cross-sectional, we can track customer attitudes toward AI speakers longitudinally by analyzing customer reviews. However, there is not much research on the change of customer attitudes toward AI speakers. Therefore, in this study, we try to grasp how attitudes toward AI speakers change over time by applying text mining-based analysis. We collected customer reviews of Amazon Echo, which has the highest share of the global AI speaker market, from Amazon.com. Since Amazon Echo already has two generations, we can analyze the characteristics of reviews and compare attitudes according to adoption time. We identified all sub-topics of the customer reviews, specified the topics for smart features, analyzed how the share of topics varied with time, and analyzed diverse metadata for comparison. The proportions of the topics for general satisfaction and satisfaction with music were increasing, while the proportions of the topics for music quality, speakers, and wireless speakers were decreasing over time. Though the proportions of topics for smart features were similar over time, the share of those topics in positive reviews and their importance metrics were reduced in the 2nd generation of Amazon Echo. Even though smart features were mentioned similarly in the reviews, their influence on satisfaction was reduced over time, especially in the 2nd generation of Amazon Echo.
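
Tracking topic shares over time as described above can be sketched with a topic model over dated reviews. The sketch below uses scikit-learn LDA and a hypothetical review file; it is not the authors' pipeline:

```python
import pandas as pd
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical review dump: one row per review with text and a date.
# Columns assumed for illustration: review_text, review_date.
df = pd.read_csv("echo_reviews.csv", parse_dates=["review_date"])

vec = CountVectorizer(max_features=5000, stop_words="english")
X = vec.fit_transform(df["review_text"])

lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topics = lda.fit_transform(X)  # per-review topic proportions

# Average topic share per quarter to see how attention shifts over time.
shares = pd.DataFrame(doc_topics)
shares["quarter"] = df["review_date"].dt.to_period("Q").values
print(shares.groupby("quarter").mean())
```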

Analyzing the element of emotion recognition from speech (음성으로부터 감성인식 요소분석)

  • Sim, Kwee-Bo;Park, Chang-Hyun
    • Journal of the Korean Institute of Intelligent Systems, v.11 no.6, pp.510-515, 2001
  • Generally, the elements for emotion recognition from a speech signal include (1) words used in conversation, (2) tone, (3) pitch, (4) formant frequencies, and (5) speech speed. For human beings, tone, voice quality, speed, and words are naturally easier elements than frequency for perceiving another's feelings; the former are therefore important elements for classifying feelings, and previous methods have mainly used them. Using formants, however, is well suited to machine implementation. Thus, the final goal of this research is to implement an emotion recognition system based on pitch, formants, speech speed, etc., extracted from the speech signal. In this paper, as a first stage, we found the specific features of anger in a speaker's words when he became angry.
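
The steep/gentle/flat pitch-change distinction mentioned above can be sketched by extracting a short-time pitch contour and fitting its slope. The numpy sketch below uses a crude autocorrelation pitch tracker and invented slope thresholds; it is not the authors' system:

```python
import numpy as np

def pitch_track(signal, sr, frame_len=0.03, hop=0.01, f0_min=60, f0_max=400):
    """Crude short-time pitch contour via frame-wise autocorrelation."""
    n, h = int(frame_len * sr), int(hop * sr)
    lo, hi = int(sr / f0_max), int(sr / f0_min)
    f0 = []
    for start in range(0, len(signal) - n, h):
        frame = signal[start:start + n] - np.mean(signal[start:start + n])
        ac = np.correlate(frame, frame, mode="full")[n - 1:]
        lag = lo + np.argmax(ac[lo:hi])  # strongest periodicity in the F0 range
        f0.append(sr / lag)
    return np.array(f0)

def slope_category(f0, hop=0.01, steep=100.0, gentle=20.0):
    """Classify pitch change as steep / gentle / flat from the fitted
    slope in Hz per second; thresholds are illustrative placeholders."""
    t = np.arange(len(f0)) * hop
    slope = abs(np.polyfit(t, f0, 1)[0])
    return "steep" if slope > steep else "gentle" if slope > gentle else "flat"

# Example: a synthetic utterance whose pitch rises 150 -> 250 Hz in 0.5 s
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
signal = np.sin(2 * np.pi * (150 * t + 100 * t**2))  # linear chirp
print(slope_category(pitch_track(signal, sr)))  # expected: "steep" (~200 Hz/s)
```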
