• Title/Summary/Keyword: 2D MUSIC


Tonal Characteristics Based on Intonation Pattern of the Korean Emotion Words (감정단어 발화 시 억양 패턴을 반영한 멜로디 특성)

  • Yi, Soo Yon;Oh, Jeahyuk;Chong, Hyun Ju
    • Journal of Music and Human Behavior
    • /
    • v.13 no.2
    • /
    • pp.67-83
    • /
    • 2016
  • This study investigated the tonal characteristics of Korean emotion words by analyzing the pitch patterns transformed from word utterances. Participants were 30 women, ages 19-23. Each participant was instructed to talk about her emotional experiences using 4-syllable target words. A total of 180 utterances were analyzed in terms of the frequency of each syllable using Praat. The data were transformed into meantones based on the semitone scale. When emotion words were used in the middle of a sentence, the pitch pattern was transformed to A3-A3-G3-G3 for '즐거워서(joyful)', C4-D4-B3-A3 for '행복해서(happy)', G3-A3-G3-G3 for '억울해서(resentful)', A3-A3-G3-A3 for '불안해서(anxious)', and C4-C4-A3-G3 for '침울해서(frustrated)'. When the emotion words were used at the end of a sentence, the pitch pattern was transformed to G4-G4-F4-F4 for '즐거워요(joyful)', D4-D4-A3-G3 for '행복해요(happy)', G3-G3-G3-A3 and F3-G3-E3-D3 for '억울해요(resentful)', A3-G3-F3-F3 for '불안해요(anxious)', and A3-A3-F3-F3 for '침울해요(frustrated)'. These results indicate differences in pitch patterns depending on the conveyed emotion and the position of the word in a sentence. This study presents baseline data on the tonal characteristics of emotion words, suggesting how pitch patterns could be utilized when creating a melody during songwriting for emotional expression.
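The mapping from a measured syllable frequency to a note name such as A3 or G3 can be sketched as follows. This is a minimal illustration assuming an equal-tempered semitone scale with A4 = 440 Hz as reference; the paper's exact meantone transformation procedure is not specified here.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def hz_to_note(freq_hz, a4_hz=440.0):
    """Map a frequency in Hz to the nearest note on the equal-tempered
    semitone scale (e.g. 220.0 Hz -> 'A3')."""
    # MIDI note number: A4 (440 Hz) is note 69; 12 semitones per octave
    midi = round(69 + 12 * math.log2(freq_hz / a4_hz))
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1
    return f"{name}{octave}"

# A 4-syllable utterance: one mean F0 value (Hz) per syllable (illustrative values)
syllable_f0 = [220.0, 220.0, 196.0, 196.0]
pattern = "-".join(hz_to_note(f) for f in syllable_f0)
print(pattern)  # -> A3-A3-G3-G3
```

Applied to the per-syllable frequencies Praat extracts, this kind of rounding yields the 4-note patterns reported above.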

A Study on "A Midsummer Night's Palace" Using VR Sound Engineering Technology

  • Seok, MooHyun;Kim, HyungGi
    • International Journal of Contents
    • /
    • v.16 no.4
    • /
    • pp.68-77
    • /
    • 2020
  • VR (virtual reality) content makes the audience perceive virtual space as real through the virtual Z axis, which creates a sense of space that cannot be achieved in 2D because of the distance between the viewer's eyes. This visual change has created a need for corresponding changes in the sound and sound sources inserted into VR content. However, studies on increasing immersion in VR content still focus mostly on the scientific and visual fields, because composing and producing VR sound requires expertise in two areas: sound-based engineering and computer-based interactive sound engineering. Sound-based engineering has difficulty reflecting changes in user interaction or in time and space, since the sound effects, script sound, and background music are directed according to the storyboard organized by the director; its advantage is that the sound effects, script sound, and background music are produced in one track and no coding phase is needed. Computer-based interactive sound engineering, on the other hand, produces the sound effects, script sound, and background music as separate files. It can increase immersion by reflecting user interaction or time and space, but it can also suffer from noise cancelling and sound collisions. Therefore, in this study, the following methods were devised and used to produce the sound for the VR content "A Midsummer Night" so as to take advantage of each sound-making technology. First, the storyboard is analyzed according to user interaction, to identify the sound effects, script sound, and background music required for each interaction. Second, the sounds are classified and analyzed as 'simultaneous sound' and 'individual sound'. Third, interaction coding is performed for the sound effects, script sound, and background music produced in the simultaneous and individual sound categories. Finally, the content is completed by applying the sound to the video. Through this process, sound-quality inhibitors such as noise cancelling can be removed while producing sound that fits user interaction and time and space.

Effects of Singing on Physiologic Changes in Elderly Women (노래부르기가 노인의 생리적 변화에 미치는 효과)

  • Min, Soon;Jung, Young-Ju;Lee, Han-Na
    • Journal of Korean Biological Nursing Science
    • /
    • v.2 no.1
    • /
    • pp.76-84
    • /
    • 2000
  • Recently, music therapy has been widely used for various kinds of diseases. Music therapy has beneficial effects on emotional disorders and neuropsychiatric diseases in particular. This study was performed to evaluate the effect of singing on physiologic changes. We checked peripheral oxygen saturation and heart rate as indices of physiologic change. The subjects were a control group of 19 and a test group of 30 elderly women who were registered at the D welfare center for the elderly and agreed to join this study. They had been singing regularly for 6 months. The data were collected just before and after singing and were analyzed with means, t-tests, and paired t-tests using the SPSS $PC^+$ program. The results were as follows: 1. Heart rate of the singing group decreased significantly after singing (p < 0.05). 2. Peripheral oxygen saturation of the singing group increased significantly after singing (p < 0.05). In conclusion, singing, a kind of aerobic exercise, has beneficial effects on the cardiopulmonary system.
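The paired t-test used here to compare before/after measurements on the same subjects can be sketched as follows. The heart-rate values are hypothetical, invented purely for illustration; the study's actual data are not reproduced.

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t-test: return the t statistic and degrees of freedom for
    before/after measurements taken on the same subjects."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    # t = mean difference / standard error of the differences
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical heart-rate readings (bpm) before and after singing;
# a large negative t indicates a post-singing decrease.
pre = [78, 82, 75, 80, 77, 79]
post = [74, 79, 73, 76, 75, 76]
t, df = paired_t(pre, post)
print(round(t, 2), df)
```

The resulting t would then be compared against the critical value for the given degrees of freedom at the 0.05 level.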


The Effects of Music Lesson Applying the Blended Learning-based STEAM Education on the Musical Knowledge and STEAM Literacy of Pre-service Kindergarten Teachers (블랜디드 러닝 기반 STEAM 교육 적용 음악수업이 예비유아교사의 음악지식 및 융합인재소양에 미치는 영향)

  • Kim, Ok-Ju
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.2
    • /
    • pp.217-227
    • /
    • 2018
  • The objective of this study is to analyze the effects of a music lesson applying blended learning-based STEAM education on the musical knowledge and STEAM literacy of pre-service kindergarten teachers. After a pre-test of musical knowledge and STEAM literacy was administered to 20 third-year students in an early childhood music course in the Dept. of Early Childhood Education of C University in O Metropolitan City and 19 third-year students in an early childhood music course in the Dept. of Early Childhood Education of D University in B Metropolitan City, effectiveness was verified with a pre-post test design. The experimental group taking the music lesson applying blended learning-based STEAM education showed significantly greater improvement than the control group taking a general music lesson in all areas of musical knowledge and in the convergence, creativity, and communication areas of STEAM literacy. These results imply that a music lesson applying blended learning-based STEAM education could be useful as a teaching/learning method for improving the musical knowledge and STEAM literacy of pre-service kindergarten teachers in university education.

A digital Audio Watermarking Algorithm using 2D Barcode (2차원 바코드를 이용한 오디오 워터마킹 알고리즘)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.97-107
    • /
    • 2011
  • Nowadays there are many issues about copyright infringement on the Internet because digital content on the network can be copied and delivered easily, and the copied version has the same quality as the original. Copyright owners and content providers therefore want a powerful solution to protect their content. A popular solution was DRM (digital rights management), which is based on encryption technology and rights control. However, DRM-free services were launched after Steve Jobs, the CEO of Apple, proposed a new music service paradigm without DRM, and DRM has since disappeared from the online music market. Even though online music services decided not to deploy DRM solutions, copyright owners and content providers are still searching for a way to protect their content. A technology that can replace DRM is digital audio watermarking, which can embed copyright information into the music itself. In this paper, the author proposes a new audio watermarking algorithm with two approaches. First, the watermark information is generated from a two-dimensional barcode, which carries an error correction code, so the information can recover itself if the errors fall within the error tolerance. Second, the chirp sequence of CDMA (code division multiple access) is used. Together these make the algorithm robust to several malicious attacks. Among the many 2D barcodes, the QR code, a matrix barcode, can express information more freely than other matrix barcodes. A QR code has square patterns doubled at three of its corners, which indicate the boundary of the symbol. This feature makes the QR code well suited to expressing the watermark information: because the QR code is a 2D, nonlinear, matrix code, it can be modulated into the spread spectrum and used in the watermarking algorithm.
The proposed algorithm assigns a different spread spectrum sequence to each individual user. When the assigned code sequences are orthogonal, the watermark information of an individual user can be identified from the audio content. The algorithm uses the Walsh code as the orthogonal code. The watermark information is rearranged from the 2D barcode into a 1D sequence and modulated by the Walsh code. The modulated watermark information is embedded into the DCT (discrete cosine transform) domain of the original audio content. For the performance evaluation, three audio samples were used: "Amazing Grace", "Oh! Carol", and "Take Me Home, Country Roads". The attacks for the robustness test were MP3 compression, an echo attack, and a subwoofer boost. The MP3 compression was performed with Cool Edit Pro 2.0 at CBR (constant bit rate) 128 kbps, 44,100 Hz, stereo. The echo attack applied an echo with initial volume 70%, decay 75%, and delay 100 msec. The subwoofer boost attack modified the low-frequency part of the Fourier coefficients. The test results showed that the proposed algorithm is robust to these attacks. Under MP3 compression, the strength of the watermark information was not affected, and the watermark could be detected in all of the sample audios. Under the subwoofer boost attack, the watermark was detected at a watermark strength of 0.3; under the echo attack, the watermark could be identified at strengths of 0.5 and above.
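The Walsh-code spreading and despreading step described above can be sketched as follows. This is a minimal illustration of the orthogonal-code idea only: the 2D-barcode generation and the DCT-domain embedding are omitted, and the code indices and bit values are arbitrary.

```python
def walsh_matrix(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix
    (n must be a power of two); its rows are mutually orthogonal codes."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def spread(bits, code):
    """Modulate watermark bits (+1/-1) with a user's Walsh code."""
    return [b * c for b in bits for c in code]

def correlate(signal, code):
    """Per-bit correlation of the received signal with a code: a matching
    code yields +/-len(code) per bit, an orthogonal code yields 0."""
    n = len(code)
    return [sum(s * c for s, c in zip(signal[i:i + n], code))
            for i in range(0, len(signal), n)]

codes = walsh_matrix(8)
bits = [1, -1, -1, 1]        # watermark bits after the 2D-barcode -> 1D rearrangement
tx = spread(bits, codes[3])  # spread with "user 3's" code (index chosen arbitrarily)

print(correlate(tx, codes[3]))  # [8, -8, -8, 8]: signs recover the bits
print(correlate(tx, codes[5]))  # [0, 0, 0, 0]: another user's code sees nothing
```

The sign of each correlation recovers the user's bit, while orthogonal codes cancel, which is what lets each user's watermark be identified independently from the same audio.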

MPEG Audio New Standard: USAC Technology (MPEG 오디오 최신 표준: USAC 기술)

  • Lee, Tae-Jin;Kang, Kyeong-Ok;Kim, Whan-Woo
    • Journal of Broadcast Engineering
    • /
    • v.16 no.5
    • /
    • pp.693-704
    • /
    • 2011
  • As mobile devices become multi-functional and converge into a single platform, there is a strong need for a codec that can provide consistent quality for both speech and music content. MPEG-D USAC standardization activities started at the 82nd MPEG meeting with a CfP, and the Study on DIS was approved at the 96th MPEG meeting. MPEG-D USAC is a convergence of AMR-WB+ and HE-AAC V2 technology. Specifically, USAC utilizes three core codecs (AAC, ACELP, and TCX) for low-frequency regions, SBR for high-frequency regions, MPEG Surround for stereo information, and window transition technology for smoothing transitions between the various core codecs. USAC can provide consistent sound quality for both speech and music content and can be applied to various applications such as multimedia downloads to mobile devices, digital radio, mobile TV, and audio books.

Effect of Listening Biographies on Frequency Following Response Responses of Vocalists, Violinists, and Non-Musicians to Indian Carnatic Music Stimuli

  • J, Prajna Bhat;Krishna, Rajalakshmi
    • Korean Journal of Audiology
    • /
    • v.25 no.3
    • /
    • pp.131-137
    • /
    • 2021
  • Background and Objectives: The current study investigates pitch coding using the frequency following response (FFR) among vocalists, violinists, and non-musicians for Indian Carnatic transition music stimuli and assesses whether their listening biographies strengthen their F0 neural encoding of these stimuli. Subjects and Methods: Three participant groups in the age range of 18-45 years were included. The first group consisted of 20 trained Carnatic vocalists, the second of 13 trained violinists, and the third of 22 non-musicians. The stimuli consisted of three Indian Carnatic raga notes (/S-R2-G3/), which were sung by a trained vocalist and played by a trained violinist. For the purposes of this study, the two transitions between the notes, T1=/S-R2/ and T2=/R2-G3/, were analyzed, and FFRs were recorded binaurally at 80 dB SPL using Neuroscan equipment. Results: Overall average responses of the participants were generated. To assess the participants' pitch tracking of the Carnatic music stimuli, stimulus-to-response correlation (CC), pitch strength (PS), and pitch error (PE) were measured. Both the vocalists and the violinists had higher CC and PS values and lower PE values than the non-musicians for both the vocal and violin T1 and T2 transition stimuli. Between the musician groups, the vocalists outperformed the violinists for both the vocal and violin T1 and T2 transition stimuli. Conclusions: Listening biographies strengthened F0 neural coding at the brainstem level for the vocalists with the vocal stimulus; the violinists did not show such a preference.
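Two of the pitch-tracking measures above can be sketched as follows. This assumes CC is a Pearson correlation between the stimulus and response F0 tracks and PE a mean absolute F0 deviation; the study's exact extraction pipeline is not reproduced, and the F0 tracks below are hypothetical, invented for illustration.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length pitch tracks."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def pitch_error(stim_f0, resp_f0):
    """Mean absolute F0 deviation (Hz) between stimulus and response."""
    return sum(abs(a - b) for a, b in zip(stim_f0, resp_f0)) / len(stim_f0)

# Hypothetical F0 tracks (Hz) across a rising /S-R2/-like transition
stim = [130.0, 132.0, 138.0, 146.0, 146.0]
resp = [131.0, 133.0, 137.0, 144.0, 147.0]
print(round(pearson(stim, resp), 3), round(pitch_error(stim, resp), 1))
```

A response track that follows the stimulus closely, as here, yields a correlation near 1 and a small pitch error, which is the pattern the study reports for the musician groups.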

A Production Case of the Official Leader Film (ID Film) for the 1997 Puchon International Fantastic Film Festival (97부천국제판타스틱영화제(PiFan97)용 공식 리더필름(ID Film)의 제작 사례)

  • Lee, Yong-Bae
    • Cartoon and Animation Studies
    • /
    • s.5
    • /
    • pp.535-539
    • /
    • 2001
  • This article summarizes the production of a 20-second official leader film for the 1st Puchon International Fantastic Film Festival (held in Bucheon, Gyeonggi-do, from August 29 to September 5, 1997), commissioned by the festival's organizing committee around June 1997 and produced on 35 mm film. Rather than forcing 3D computer graphics, the work deliberately restrained them and foregrounded 2D characters with a strong hand-drawn feel, aiming to reflect the spirit of the 'fantastic' genre festival, the first of its kind in Korea. The backgrounds were rendered in 3D, while the characters were drawn on paper, scanned, processed in 2D, and then composited with the backgrounds; in the computing environment of the time, this workflow was quite laborious. The film opens with a still from Georges Méliès's historic A Trip to the Moon and uses Dolby Stereo sound. Its specifications are as follows: Format: 35 mm standard-size film (Beta version for screening); Length: 20 seconds; Sound: Dolby Stereo (music only); Production period: June-August 1997; Software: Photoshop, Toonz, Softimage.


A study on the effect of background music on computer word-processing tasks (주변음악이 컴퓨터 문서편집작업에 주는 영향에 관한 연구)

  • 박민용
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1996.04a
    • /
    • pp.162-164
    • /
    • 1996
  • To examine how different types and levels of background music affect performance on computer word-processing tasks, an experiment was conducted with 18 university students using a two-factor mixed-factorial design. The independent variables were music type, with three levels (classical, rock, and Korean folk music), and music volume, with two levels (low, 60-65 dB; high, 80-85 dB); the dependent measures were task completion time and the number of editing errors. An analysis of variance showed that completion times were significantly longer under the high (80-85 dB) background-music level than under the low (60-65 dB) level; in particular, when rock music was played at the high level, significantly more editing errors occurred than at the low level.
