• Title/Summary/Keyword: Sound Emotion


Sensibility Satisfaction Evaluation of MP3 Sound on Mobile Phone (휴대폰 무선서비스 MP3 사운드의 감성만족도 평가)

  • Kweon, O-Seong;Choi, Jae-Hyun
    • Science of Emotion and Sensibility, v.10 no.3, pp.481-489, 2007
  • The purpose of this study was to investigate whether there are differences in sound quality among telecommunication service providers (SPs). To avoid the influence of the SP and mobile phone manufacturer brands, a series of structured experiments was planned. Possible sources of sound difference were tested, such as specific genres of music, content providers, MP3 players for PC, and mobile phone manufacturers. The results show that there are differences in sound quality among SPs, but that the differences come from the content, the mobile phone, and the MP3 player for PC. The same phone model from a given manufacturer sounded different depending on the SP. The genre of music did not show a consistent difference in sound quality. (See the analysis sketch below.)

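The factorial comparison described above can be illustrated with a short analysis sketch. Everything below is hypothetical: the file `listening_test.csv` and its columns (`rating`, `provider`, `phone`, `content`, `genre`) are placeholders, not the paper's data.

```python
# Sketch: multi-factor ANOVA on listening-test ratings (hypothetical data layout).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per trial: a sound-quality rating plus the factors varied in the experiment.
df = pd.read_csv("listening_test.csv")  # columns: rating, provider, phone, content, genre

model = ols("rating ~ C(provider) + C(phone) + C(content) + C(genre)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # which factors account for the rating differences
```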

Preliminary Study on Human Sensibility Evaluation of Ringtone in Mobile Phone (휴대폰의 벨소리 감성만족도 평가를 위한 예비연구)

  • Kweon, O-Seong;Choi, Jae-Hyun
    • Science of Emotion and Sensibility, v.12 no.4, pp.403-410, 2009
  • The purpose of this study was to find out whether there are sound quality differences among mobile service providers in terms of mobile phone ringtones, and to identify the factors that contribute to those differences. A series of experiments was performed to identify the sources of sound difference while controlling for the brand factor of the leading company. Mobile service provider, phone manufacturer, phone model variation, specific music genre, and content provider factors were examined. The results showed that there were sound quality differences among mobile service providers. The differences come from the mobile phones supplied to each service provider and from the sound content. There were differences in sound quality among the same phone models provided to different service providers; differences in the sound-making process contributed to these differences in sound quality. The genre effect was not clear. To complement the limitations of the samples used, an interview with a mobile-device sound expert was conducted. The interview indicated that the hardware parts used and careful tuning of the device can influence the sound quality of a mobile phone. (See the comparison sketch below.)

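One finding above, that the same handset model sounds different depending on the service provider, can be checked with a simple paired comparison. The ratings below are made-up illustrative values, not data from the study.

```python
# Sketch: paired comparison of ringtone quality ratings for the same phone model
# as provisioned by two different service providers (hypothetical values).
import numpy as np
from scipy import stats

ratings_provider_a = np.array([5.1, 4.8, 5.4, 4.9, 5.2, 5.0, 4.7, 5.3])
ratings_provider_b = np.array([4.2, 4.5, 4.1, 4.6, 4.3, 4.4, 4.0, 4.5])

t_stat, p_value = stats.ttest_rel(ratings_provider_a, ratings_provider_b)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```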

Brain Correlates of Emotion for XR Auditory Content (XR 음향 콘텐츠 활용을 위한 감성-뇌연결성 분석 연구)

  • Park, Sangin;Kim, Jonghwa;Park, Soon Yong;Mun, Sungchul
    • Journal of Broadcast Engineering, v.27 no.5, pp.738-750, 2022
  • In this study, we reviewed and discussed whether short auditory stimuli can evoke emotion-related neurological responses. The findings imply that if personalized sound tracks are provided to XR users based on machine learning or probabilistic network models, user experiences in the XR environment can be enhanced. We also investigated whether the arousal-relaxation factor evoked by short auditory stimuli produces distinct patterns of functional connectivity characterized from background EEG signals. We found that coherence in the right hemisphere increases in the sound-evoked arousal state and decreases in the relaxed state. Our findings can be practically utilized in developing an XR sound biofeedback system that provides preferred sounds to users for highly immersive XR experiences. (A coherence sketch follows below.)
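
A minimal sketch of the kind of coherence-based connectivity measure mentioned above, using synthetic signals in place of real right-hemisphere EEG channels; the channel names, sampling rate, and alpha-band limits are assumptions, not taken from the paper.

```python
# Sketch: magnitude-squared coherence between two (placeholder) right-hemisphere
# EEG channels, a common functional-connectivity measure.
import numpy as np
from scipy.signal import coherence

fs = 256                                              # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
f4 = rng.standard_normal(fs * 30)                     # stand-in for channel F4
t8 = 0.6 * f4 + 0.8 * rng.standard_normal(fs * 30)    # stand-in for channel T8

freqs, coh = coherence(f4, t8, fs=fs, nperseg=fs * 2)
alpha = (freqs >= 8) & (freqs <= 13)
print("mean alpha-band coherence:", coh[alpha].mean())
```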

Frictional Sounds and Its Related Mechanical Properties of Vapor Permeable Water Repellent Fabrics for Active Wear (스포츠웨어용 투습발수직물의 마찰음과 관련 역학적 성질 비교)

  • 조길수;박미란
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2003.05a, pp.8-13, 2003
  • Frictional sounds of 13 vapor-permeable water-repellent fabrics were generated with a sound generator, recorded, and analyzed through FFT analysis. The frictional sounds were quantified by calculating the total sound pressure (LPT), the level range (ΔL), and the frequency difference (Δf). Mechanical properties were measured with the KES-FB system. LPT values of wet-coated specimens were higher than those of dry-coated specimens. Values for bending rigidity, shear stiffness, surface roughness, and compressional recovery of the polyurethane-coated fabrics were higher than those of the cire-finished fabrics. Laminated fabrics had high frictional coefficients and low surface roughness. LPT showed significant correlations with compressional energy, weight, and thickness; ΔL was highly correlated with compressional linearity, frictional coefficient, and compressional recovery, and Δf with tensile linearity, compressional energy, thickness, and weight. (See the FFT sketch below.)

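The FFT-based descriptors above (LPT, ΔL, Δf) can be approximated as follows. The paper's exact definitions are not reproduced here; the formulas and the file name `friction_sound.wav` are illustrative stand-ins.

```python
# Sketch: spectral descriptors of a recorded fabric-friction sound.
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("friction_sound.wav")        # hypothetical recording
if x.ndim > 1:
    x = x[:, 0]                                   # use one channel if stereo
x = x.astype(np.float64) / np.max(np.abs(x))

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
level_db = 20 * np.log10(spectrum + 1e-12)        # per-bin level in dB (arbitrary reference)

lpt = 10 * np.log10(np.sum(10 ** (level_db / 10)))  # stand-in for total sound pressure (LPT)
delta_l = level_db.max() - level_db.min()           # stand-in for the level range (dL)
peak_f = freqs[np.argmax(level_db)]                 # dominant frequency, related to df
print(lpt, delta_l, peak_f)
```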

EEG and Psychological Responses to the Sound Characteristics of Car Horns (자동차 경적소리의 특성에 따른 뇌파 및 감성 반응)

  • 최상섭;조문재;이경화;민윤기;오애령;손진훈
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 1998.11a, pp.154-157, 1998
  • This study investigated the psychological and physiological responses to the sounds of car horns produced by different manufacturers. Ten female college students listened to the horn sounds while their EEG responses were measured at six sites, and rated each horn on psychological scales. Their EEG and psychological responses were examined to determine whether they were related to the loudness, sharpness, tonality, and roughness of the horns. The results indicated that the subjects felt more 'dominated' as the loudness and sharpness increased, more 'pleasant' as the sharpness increased, more 'dominant' as the tonality increased, and more 'aroused' as the roughness increased. The physiological results showed that the relative power of the fast alpha wave in the occipital lobe decreased as the loudness, sharpness, and tonality increased, and that the relative power of the delta wave in the occipital lobe increased while that of the slow alpha wave in the frontal lobe decreased as the roughness increased. (A relative-band-power sketch follows below.)

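A minimal sketch of the relative-band-power measure reported above, using a synthetic signal in place of a real occipital recording; the sampling rate and band boundaries are assumptions.

```python
# Sketch: relative EEG band power (delta, slow alpha, fast alpha) via Welch's PSD.
import numpy as np
from scipy.signal import welch

fs = 250
eeg = np.random.default_rng(1).standard_normal(fs * 60)   # placeholder for an occipital channel

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def relative_power(lo, hi):
    band = (freqs >= lo) & (freqs < hi)
    total = (freqs >= 0.5) & (freqs < 50)                  # broadband reference
    return np.trapz(psd[band], freqs[band]) / np.trapz(psd[total], freqs[total])

print("delta:", relative_power(0.5, 4))
print("slow alpha:", relative_power(8, 10))
print("fast alpha:", relative_power(10, 13))
```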

A study on application of the statistic model about an utterance of the speaker (화자의 발음에 대한 통계적 모델의 적용에 관한 연구)

  • Kim, Dae-Sik;Bae, Myong-Jin;Yoon, Jae-Gang
    • Proceedings of the KIEE Conference, 1988.07a, pp.25-28, 1988
  • Speech, which plays an important mediating role in human conversation, is the sound that expresses human emotion and thought, so a speaker's speech can be verified and identified from the individual properties of the voice. This study examines the pitch distribution obtained by visually counting the number of samples in each pitch period of the speaker's sound waveform. We propose an algorithm that judges the speaker's emotional state, personality, regional group, age, sex, etc. according to the degree of deviation. (A pitch-statistics sketch follows below.)

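The pitch-distribution idea above, done computationally rather than by eye, might look like the sketch below; the frame length, search range, and voicing threshold are assumptions, and `utterance.wav` is a placeholder.

```python
# Sketch: frame-wise pitch estimation by autocorrelation, then simple statistics
# of the resulting pitch distribution (mean and deviation).
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("utterance.wav")             # hypothetical recording
if x.ndim > 1:
    x = x[:, 0]
x = x.astype(np.float64)

frame = int(0.03 * fs)                            # 30 ms frames
pitches = []
for start in range(0, len(x) - frame, frame):
    seg = x[start:start + frame] * np.hanning(frame)
    ac = np.correlate(seg, seg, mode="full")[frame - 1:]
    lo, hi = int(fs / 400), int(fs / 60)          # search 60-400 Hz
    lag = lo + np.argmax(ac[lo:hi])
    if ac[lag] > 0.3 * ac[0]:                     # crude voicing check
        pitches.append(fs / lag)

pitches = np.array(pitches)
print("mean pitch:", pitches.mean(), "deviation:", pitches.std())
```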

The Analysis of Sound Attributes on Sensibility Dimensions (소리의 청각적 속성에 따른 감성차원 분석)

  • Han Kwang-Hee;Lee Ju-Hwan
    • Science of Emotion and Sensibility, v.9 no.1, pp.9-17, 2006
  • As is commonly said, music is the 'language of emotions,' because sound is a rich modality for communicating human sensibility information. However, most research on auditory displays has focused on improving efficiency as measured by user performance data such as completion time and accuracy. Recently, many researchers in auditory displays have acknowledged that individual preference and sensible satisfaction may be more important than performance data alone. On this ground, the present study constructed sound sensibility dimensions ('Pleasure', 'Complexity', and 'Activity'), systematically examined the attributes of sound on these dimensions, and analyzed their meanings. As a result, the sensibility dimensions depended on each sound attribute, and some sound attributes interacted with one another. Consequently, the results of the present study suggest useful ways of applying affective influence in auditory displays that require sensibility information according to sound attributes. (A dimension-extraction sketch follows below.)

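Dimensions like the three named above are typically recovered from semantic-differential ratings by factor analysis or PCA. The sketch below uses PCA on a hypothetical ratings file; labeling the three components with the paper's dimension names is only illustrative.

```python
# Sketch: extracting three components from sound-by-adjective rating data with PCA.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical layout: one row per sound, one column per adjective scale.
ratings = pd.read_csv("adjective_ratings.csv", index_col=0)

pca = PCA(n_components=3).fit(StandardScaler().fit_transform(ratings))
loadings = pd.DataFrame(pca.components_.T,
                        index=ratings.columns,
                        columns=["Pleasure", "Complexity", "Activity"])
print(loadings.round(2))   # which adjective scales load on which component
```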

Music Emotion Control Algorithm based on Sound Emotion Tree (감성 트리 기반의 음악 감성 조절 알고리즘)

  • Kim, Donglim;Lim, Bin;Lim, Younghwan
    • The Journal of the Korea Contents Association, v.15 no.3, pp.21-31, 2015
  • This paper models the emotions felt after listening to music as an emotion model composed of eight emotion types, based on previously studied emotion models. The five musical factors selected as affecting emotion are tempo, dynamics, amplitude change, brightness, and noise. According to the eight-type emotion model, 160 songs categorized into the eight emotion types were selected, and actual data were extracted and analyzed. Through this analysis, an emotion equation composed of weighted values of the five factors was derived, and an algorithm was designed that predicts emotion by mapping onto a two-dimensional emotion coordinate system through the equation. A method of controlling emotion by moving the coordinates on the two-dimensional emotion coordinate system was also suggested. (See the mapping sketch below.)
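
A sketch of the general idea of mapping the five weighted factors onto a two-dimensional emotion plane and picking the nearest of eight emotion labels. The weights, label names, and coordinates below are illustrative assumptions, not the values derived in the paper.

```python
# Sketch: weighted five-factor features -> 2-D emotion coordinates -> nearest label.
import numpy as np

FEATURES = ["tempo", "dynamics", "amplitude_change", "brightness", "noise"]  # assumed order
W_X = np.array([0.4, 0.1, 0.2, 0.3, -0.1])     # hypothetical weights for axis 1
W_Y = np.array([0.2, 0.5, 0.1, -0.2, 0.3])     # hypothetical weights for axis 2

EMOTION_COORDS = {                              # eight illustrative anchor points
    "joy": (0.8, 0.6), "excitement": (0.4, 0.9), "tension": (-0.4, 0.8),
    "anger": (-0.8, 0.5), "sadness": (-0.7, -0.5), "boredom": (-0.3, -0.8),
    "calm": (0.3, -0.7), "contentment": (0.7, -0.3),
}

def predict_emotion(features: np.ndarray) -> str:
    """Map normalized feature values onto the plane and return the nearest label."""
    x, y = float(W_X @ features), float(W_Y @ features)
    return min(EMOTION_COORDS,
               key=lambda e: (x - EMOTION_COORDS[e][0]) ** 2 + (y - EMOTION_COORDS[e][1]) ** 2)

print(predict_emotion(np.array([0.6, 0.2, 0.1, 0.5, 0.05])))
```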

A study on the enhancement of emotion recognition through facial expression detection in user's tendency (사용자의 성향 기반의 얼굴 표정을 통한 감정 인식률 향상을 위한 연구)

  • Lee, Jong-Sik;Shin, Dong-Hee
    • Science of Emotion and Sensibility, v.17 no.1, pp.53-62, 2014
  • Despite the huge potential for practical application of emotion recognition technologies, improving them remains a challenge, mainly because of the difficulty of recognizing emotion. Although not perfectly, human emotions can be recognized from images and sounds. Emotion recognition has been researched extensively in studies based on images, on sounds, and on both. Studies on emotion recognition through facial expression detection are especially effective, as emotions are primarily expressed in the human face. However, differences in user environment and in users' familiarity with the technologies may cause significant disparities and errors. To enhance the accuracy of real-time emotion recognition, it is crucial to understand and analyze users' personality traits, which contribute to improving emotion recognition. This study focuses on analyzing users' personality traits and applying them in the emotion recognition system to reduce errors in emotion recognition through facial expression detection and improve the accuracy of the results. In particular, the study offers a practical solution for users with subtle facial expressions or a low degree of emotional expression by providing an enhanced emotion recognition function. (A calibration sketch follows below.)
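
A minimal sketch of the per-user adjustment idea: rescale a facial-expression classifier's per-emotion scores with a user-specific expressiveness profile before choosing the final label. The emotion set, gain values, and scores are illustrative assumptions, not the paper's method or data.

```python
# Sketch: user-specific calibration of facial-expression emotion scores.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

def calibrated_emotion(raw_scores: np.ndarray, user_gain: np.ndarray) -> str:
    """raw_scores: classifier outputs per emotion; user_gain: amplifies emotions
    that this user tends to express only subtly (gain > 1 boosts weak signals)."""
    adjusted = raw_scores * user_gain
    adjusted /= adjusted.sum()                     # renormalize to a distribution
    return EMOTIONS[int(np.argmax(adjusted))]

# Hypothetical user who shows little overt expression for sadness and anger.
user_gain = np.array([1.0, 1.6, 1.5, 1.0, 0.8])
raw_scores = np.array([0.30, 0.26, 0.10, 0.04, 0.30])   # weak 'sad' signal from the classifier
print(calibrated_emotion(raw_scores, user_gain))          # -> 'sad' after calibration
```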