• Title/Abstract/Keyword: Music Scores

Search results: 137

손사보 악보의 광학음악인식을 위한 CNN 기반의 보표 및 마디 인식 (Staff-line and Measure Detection using a Convolutional Neural Network for Handwritten Optical Music Recognition)

  • Park, Jong-Won;Kim, Dong-Sam;Kim, Jun-Ho
    • 한국정보통신학회논문지 / Vol. 26, No. 7 / pp.1098-1101 / 2022
  • With the development of computer music notation programs, sheet music is now usually produced on a computer. However, handwritten notation is still widely used for educational purposes or when music must be written down quickly, as in listening and dictation exercises. Previous OMR studies have focused on recognizing printed scores produced by notation programs, and recognition of handwritten scores photographed with a camera performs poorly because writing styles differ from person to person and because of lens distortion. In this study, as a preprocessing step for handwritten OMR, we propose a staff-line recognition method based on linear regression and a barline recognition method based on a CNN. The F1 scores of staff-line recognition and barline detection are 99.09% and 95.48%, respectively. These methods are expected to contribute to improving the accuracy of handwritten OMR.
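
A minimal sketch of the staff-line step described above, assuming a binarized page image held in a NumPy array: fit y = a·x + b by least squares to the ink pixels in a horizontal band so that a slightly skewed handwritten staff line can still be traced. This is an illustration, not the authors' code; the band coordinates and the toy image are invented for the example.

```python
# Illustrative only: fit y = a*x + b to the ink pixels inside a horizontal band
# assumed to contain one (possibly skewed) handwritten staff line.
import numpy as np

def estimate_staff_line(binary_img, row_band):
    """binary_img: 2-D array with 1 = ink, 0 = background.
    row_band: (top, bottom) rows believed to contain one staff line."""
    top, bottom = row_band
    ys, xs = np.nonzero(binary_img[top:bottom, :])      # ink-pixel coordinates in the band
    if xs.size < 2:
        return None
    slope, intercept = np.polyfit(xs, ys + top, deg=1)  # least-squares line fit
    return slope, intercept                             # staff line: y ~ slope*x + intercept

# Toy usage: a synthetic 100x200 image with a slightly tilted "staff line".
img = np.zeros((100, 200), dtype=np.uint8)
for x in range(200):
    img[40 + int(0.02 * x), x] = 1
print(estimate_staff_line(img, (35, 55)))               # slope ~ 0.02, intercept ~ 40
```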

집단음악치료가 관심병사의 군 생활 스트레스와 적응에 미치는 효과 (The Effect of Group Music Therapy for At-Risk Korean Soldiers on Adjustment and Stress Level)

  • 윤주리
    • 인간행동과 음악연구 / Vol. 9, No. 1 / pp.55-71 / 2012
  • This study examined the effects of group music psychotherapy on military life stress and adjustment in at-risk soldiers who reported difficulty adjusting to military life. Seven soldiers of the ○○ Division who had been classified as at-risk through psychiatric care at an armed forces hospital and individual counseling by military counselors took part in a group music therapy program totaling 12 hours. Pre- and post-tests were administered using the military life stress scale and the military life adjustment scale developed and used in previous studies, and the results were analyzed with the nonparametric Wilcoxon test. After the intervention, the pre/post change in the participants' overall military life stress scores was statistically significant (p < .05); among the subscales, role stress and external stress showed significant differences, whereas relationship stress and job stress did not. The overall score on the military life adjustment scale did not change significantly, although among its subscales, physical and mental condition and satisfaction with position and duty showed statistically significant differences, while willingness to perform duties and attitude toward the military environment did not. These results suggest that group music therapy providing psycho-emotional support can have a positive effect on the stress and adjustment of at-risk soldiers who are struggling to adapt to military life.
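
A minimal sketch of the statistical comparison described above (a nonparametric Wilcoxon test on paired pre/post scores), using SciPy; the scores for the seven participants are hypothetical and only illustrate the procedure.

```python
# Illustrative only: Wilcoxon signed-rank test on paired pre/post stress scores.
from scipy.stats import wilcoxon

pre  = [82, 77, 90, 68, 85, 74, 79]   # hypothetical pre-test stress scores (n = 7)
post = [70, 71, 78, 66, 72, 69, 73]   # hypothetical post-test stress scores

stat, p = wilcoxon(pre, post)          # nonparametric test for paired samples
print(f"W = {stat}, p = {p:.3f}")      # p < .05 would indicate a significant change
```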

악보작성 및 재생 시스템 (Melody Note - Music Score Editor and Play System)

  • 김태기;이대정;박미라;민준기
    • 한국정보통신학회:학술대회논문집 / 한국해양정보통신학회 2009년도 추계학술대회 / pp.1059-1062 / 2009
  • As computer-based music processing has advanced, interest in research on automatic music input has grown, and various ways of entering music into a computer have been studied. Previous approaches, however, share the drawback that only experts can use them: existing score-editing programs require prior knowledge before a beginner can use them. To address this, this paper proposes a system that extracts the notes a non-expert produces with their voice and then draws the score automatically using the frequency band of the sound. The system makes it easy for non-experts to compose, and it can also play the computer-generated score with a variety of instruments. In this way, even non-experts can compose music with their voice and simple system operations, and have the result played with the instrument of their choice.
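
A minimal sketch of the core idea above (estimating the pitch of a sung note from its frequency and mapping it to a score symbol), using a crude autocorrelation pitch estimate and the standard frequency-to-MIDI conversion. It is not the paper's implementation, and the synthetic tone stands in for recorded voice.

```python
# Illustrative only: estimate the fundamental frequency of a voiced frame and
# map it to the nearest MIDI pitch, which a score editor could render as a note.
import numpy as np

def freq_to_midi(f_hz):
    """Map a frequency in Hz to the nearest MIDI note number (A4 = 440 Hz = 69)."""
    return int(round(69 + 12 * np.log2(f_hz / 440.0)))

def estimate_f0(frame, sample_rate):
    """Crude autocorrelation-based pitch estimate for one voiced frame."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    d = np.diff(corr)
    start = np.nonzero(d > 0)[0][0]            # skip past the zero-lag peak
    lag = start + np.argmax(corr[start:])      # first strong periodicity peak
    return sample_rate / lag

# Toy usage: a synthetic 440 Hz "sung" tone.
sr = 16000
t = np.arange(0, 0.05, 1 / sr)
tone = np.sin(2 * np.pi * 440 * t)
print(freq_to_midi(estimate_f0(tone, sr)))     # -> 69 (A4)
```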

Optical Music Score Recognition System for Smart Mobile Devices

  • Han, SeJin;Lee, GueeSang
    • International Journal of Contents / Vol. 10, No. 4 / pp.63-68 / 2014
  • In this paper, we propose a smart system that can optically recognize a music score within a document and play the music after recognition. Many historic handwritten documents have now been digitized, but converting images of music scores within documents into digital files is particularly difficult and resource-intensive because a score is a two-dimensional structure of staff lines and symbols. The proposed system takes an input image from a mobile device equipped with a camera module and optimizes the image via preprocessing. Binarization, sheet correction, staff-line recognition, vertical-line detection, note recognition, and symbol recognition are then applied, and a music file is generated in XML format. The MusicXML file is stored as digital information; based on that file, we can modify the result, logically correct errors, and finally generate a MIDI file. Our system reduces misrecognition, and a wider range of music scores can be recognized because we have implemented distortion correction and vertical-line detection. Experiments with a variety of music scores show that the proposed method is practical and has potential for wide application.
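
A minimal sketch of the first two preprocessing steps listed above (binarization and staff-line candidate detection), assuming OpenCV is available and that "score.jpg" is a hypothetical camera photo of a score; the system's sheet correction, note, and symbol recognition are not shown here.

```python
# Illustrative only: Otsu binarization followed by a horizontal-projection
# profile, whose peaks mark rows likely to belong to staff lines.
import cv2
import numpy as np

img = cv2.imread("score.jpg", cv2.IMREAD_GRAYSCALE)      # hypothetical input photo
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # ink -> 255

projection = binary.sum(axis=1)                          # ink mass per image row
staff_rows = np.nonzero(projection > 0.5 * projection.max())[0]
print("candidate staff-line rows:", staff_rows)
```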

Effect of Music Training on Categorical Perception of Speech and Music

  • L., Yashaswini;Maruthy, Sandeep
    • Journal of Audiology & Otology / Vol. 24, No. 3 / pp.140-148 / 2020
  • Background and Objectives: The aim of this study is to evaluate the effect of music training on the characteristics of auditory perception of speech and music. Perception of speech and music stimuli was assessed across their respective stimulus continua, and the resulting plots were compared between musicians and non-musicians. Subjects and Methods: Thirty musicians with formal music training and twenty-seven non-musicians participated in the study (age: 20 to 30 years). They were assessed on identification of consonant-vowel syllables (/da/ to /ga/), vowels (/u/ to /a/), a vocal music note (/ri/ to /ga/), and an instrumental music note (/ri/ to /ga/) across their respective stimulus continua. Each continuum contained 15 tokens with equal step size between adjacent tokens. The resulting identification scores were plotted against each token and analyzed for the presence of a categorical boundary. If a categorical boundary was found, the plots were analyzed for six parameters of categorical perception: the point of 50% crossover, the lower edge of the categorical boundary, the upper edge of the categorical boundary, the phoneme boundary width, the slope, and the intercepts. Results: Overall, the results showed that both speech and music are perceived differently by musicians and non-musicians. In musicians, both speech and music are categorically perceived, while in non-musicians, only speech is perceived categorically. Conclusions: The findings indicate that music is perceived categorically by musicians, even when the stimulus is devoid of vocal tract features. They support the view that categorical perception is strongly influenced by training, and the results are discussed in light of the motor theory of speech perception.
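
A minimal sketch of how such an identification plot can be reduced to two of the parameters named above (the 50% crossover and the slope): fit a logistic function across the 15-token continuum. The identification scores below are synthetic, not the study's data.

```python
# Illustrative only: logistic fit to identification scores along a 15-token continuum.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Identification probability: x0 = 50% crossover, k = slope at the boundary."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

tokens = np.arange(1, 16)                               # 15-step stimulus continuum
ident = np.array([.02, .03, .02, .05, .08, .15, .30,    # synthetic identification scores
                  .55, .78, .90, .95, .97, .98, .99, .99])

(x0, k), _ = curve_fit(logistic, tokens, ident, p0=[8, 1])
print(f"50% crossover at token {x0:.2f}, slope {k:.2f}")
```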

악보자료 목록의 기술에 관한 연구 (A Study on the Description of Printed Music Cataloging)

  • 한경신
    • 한국도서관정보학회지 / Vol. 38, No. 1 / pp.231-256 / 2007
  • The purpose of this study is to promote a proper understanding of the cataloging rules for printed music by analyzing the rules currently used to catalog scores, and to lay a foundation for the development of Korean cataloging practice and rules for score materials. To this end, the characteristics and types of score materials are first reviewed, and the existing cataloging rules related to scores are surveyed. Then, focusing on ISBD(PM), which forms the basis of current score cataloging rules, as well as Chapter 5 (Music) of AACR2R 2002 Revision, 2004 Update, Chapter 5 (Scores) of KCR4, and the score provisions of KORMARC (integrated bibliographic format) and MARC21 (Bibliographic Data), the description areas that distinguish scores from other materials are compared and analyzed. These areas include the sources of information for description, the title and material type designation, the statement on the type of score, the publication details, the physical description, and the notes, together with their characteristic features and outstanding issues.

텍스타일 프린트 디자인 발상을 위한 대중음악 장르별 감성 언어이미지 연구 I (A Study on the Emotional Language Imagery according to Popular Music Genres for Development of Textile Print Design Ideas I)

  • 김지연;오경화
    • 한국의류산업학회지 / Vol. 16, No. 3 / pp.354-365 / 2014
  • This study investigates the positioning of emotional language images in popular music genres to support the development of textile print design ideas. Auditory and synaesthetic imagery were employed to derive emotional language imagery from popular music genres and to analyze how this imagery differs across genres. Six genres of popular music were selected as stimuli, and a survey was conducted to analyze differences and similarities in emotional language imagery across genres. The factor analysis and reliability test on emotional language imagery yielded a factorial structure comprising Lyrical-Feminine, Intense-Masculine, Euphoric-Active, Gloomy-Melancholy, Abstruse-Sophisticated, and Addictive-Continuous factors. The mean scores of emotional language imagery for each genre showed that respondents tended to perceive ballad and new age music as similar, and hip-hop & rap, dance, and metal-rock as similar. In the multidimensional scaling analysis, new age was positioned as Lyrical-Feminine, metal-rock as Intense-Masculine, dance music as Euphoric-Active, and ballad as Gloomy-Melancholy. This study provides basic resources for designing innovative textile prints inspired by the distinct emotional language imagery of each popular music genre.
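
A minimal sketch of the multidimensional scaling step mentioned above: given a genre-by-genre dissimilarity matrix of emotional-imagery ratings, scikit-learn's MDS embeds the genres in two dimensions for a positioning map. The dissimilarity values here are toy numbers, not the survey results.

```python
# Illustrative only: metric MDS on a precomputed genre dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

genres = ["ballad", "new age", "hip-hop & rap", "dance", "metal-rock"]
D = np.array([                       # toy pairwise dissimilarities (symmetric, zero diagonal)
    [0.0, 0.3, 0.9, 0.8, 1.0],
    [0.3, 0.0, 0.9, 0.8, 1.0],
    [0.9, 0.9, 0.0, 0.4, 0.5],
    [0.8, 0.8, 0.4, 0.0, 0.6],
    [1.0, 1.0, 0.5, 0.6, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
for genre, (x, y) in zip(genres, coords):
    print(f"{genre:14s} ({x:+.2f}, {y:+.2f})")
```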

Opera Clustering: K-means on librettos datasets

  • 정하림;유주헌
    • 인터넷정보학회논문지 / Vol. 23, No. 2 / pp.45-52 / 2022
  • With the development of artificial intelligence analysis methods, especially machine learning, many fields are widely expanding their ranges of application. In the case of classical music, however, some difficulties remain in applying machine learning techniques. Genre classification and music recommendation systems built on deep learning algorithms are actively used for popular music, but not for classical music. In this paper, we attempt to classify operas within classical music. To this end, an experiment was conducted to determine which criterion is most suitable among composer, period of composition, and emotional atmosphere, which are basic features of the music. To generate emotional labels, we adopted zero-shot classification with four basic emotions: 'happiness', 'sadness', 'anger', and 'fear.' After embedding the opera librettos with a doc2vec model, the optimal number of clusters was computed based on the elbow method. The resulting four centroids were then used in k-means clustering to partition the unlabeled libretto dataset. We obtained an optimized clustering based on adjusted Rand index scores and compared the clusters with the annotated attributes of the works. As a result, we confirmed that the four clusters learned by the machine were most similar to the grouping by period. Additionally, we verified that emotional similarity by composer or period did not appear to a significant degree. Having identified period as the appropriate criterion, we hope these results make it easier for music listeners to find music that suits their tastes.
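
A minimal sketch of the pipeline described above (Doc2Vec embedding, k-means clustering, and the adjusted Rand index against known labels), assuming gensim and scikit-learn are available. The libretto snippets, the period labels, and the use of two clusters instead of the paper's four are toy simplifications.

```python
# Illustrative only: embed librettos with Doc2Vec, cluster the vectors with
# k-means, and score agreement with period annotations via the adjusted Rand index.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

librettos = [                                   # toy stand-ins for full libretto texts
    "love and fate in the royal court",
    "a tragic duel beneath the city walls",
    "gods and mortals quarrel over destiny",
    "a comic wedding ends in confusion",
]
period_labels = [0, 0, 1, 1]                    # hypothetical period annotations

docs = [TaggedDocument(text.split(), [i]) for i, text in enumerate(librettos)]
model = Doc2Vec(docs, vector_size=16, min_count=1, epochs=50)
vectors = [model.dv[i] for i in range(len(librettos))]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
print("ARI vs. period:", adjusted_rand_score(period_labels, kmeans.labels_))
```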

산후 우울감을 보이는 산모에서 나타나는 전두엽 뇌파 비대칭에 대한 음악의 영향 (The Effects of Music on the Frontal EEG Asymmetry of the Mothers with Postpartum Blues)

  • 임성진;신철진
    • 생물정신의학 / Vol. 18, No. 3 / pp.134-140 / 2011
  • Objectives: Postpartum blues is known to be a major risk factor for postpartum depression and can be associated with problems in the language skills, behavior, or learning of the child. It is therefore very important for clinicians to evaluate and manage postpartum blues precisely. Recent studies have found that music has an effect on depressive mood and on the frontal EEG asymmetry of patients with depression. The purpose of this study was to examine the effects of music on the frontal EEG asymmetry of mothers with postpartum blues. Methods: Among one hundred and seventy mothers assessed with the Korean version of the Edinburgh Postnatal Depression Scale (EPDS), nine mothers with postpartum blues (EPDS ≥ 10) formed the postpartum blues group and nine non-depressive mothers (EPDS < 10) formed the non-depressive mother group. Ten women who had not given birth and were not depressed were included as a normal control group. The subjects were evaluated with the State-Trait Anxiety Inventory (STAI)-X1, the Visual Analogue Scale (VAS), the Depression Adjective Checklist-Korean version (K-DACL), and EEG twice, before and after a music session lasting twenty minutes and thirty-two seconds. Statistical analyses were performed on the A1 score (log R - log L), computed from the alpha powers at F3 and F4. Results: No significant differences in demographic data were noted among the three groups. At baseline, the postpartum blues group had higher scores on the STAI-X1, the VAS, and the K-DACL than the other groups, and their A1 scores were lower than those of the normal controls only. There was a statistically significant increase in the A1 score only in the postpartum blues group after the music session. Conclusion: This study suggests that mothers with postpartum blues may show a frontal EEG asymmetry that is possibly associated with their depressive mood, and that a music session can affect this frontal asymmetry positively.
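
A minimal sketch of the asymmetry index defined above (A1 = log R - log L from the alpha powers at F4 and F3), computed here from a Welch power spectral density; the two signals are synthetic stand-ins for real EEG channels.

```python
# Illustrative only: alpha-band (8-13 Hz) power from a Welch PSD for each
# frontal channel, then A1 = log(alpha power at F4) - log(alpha power at F3).
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs):
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    band = (freqs >= 8) & (freqs <= 13)
    return float(np.sum(psd[band]) * (freqs[1] - freqs[0]))   # crude band integration

fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
f3 = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # synthetic "F3"
f4 = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)  # synthetic "F4"

a1 = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))    # log R - log L
print(f"A1 = {a1:.3f}")
```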