• Title/Summary/Keyword: computer music


A Comparative Analysis of Music Similarity Measures in Music Information Retrieval Systems

  • Gurjar, Kuldeep;Moon, Yang-Sae
    • Journal of Information Processing Systems
    • /
    • v.14 no.1
    • /
    • pp.32-55
    • /
    • 2018
  • The digitization of music has considerably widened its audience, from a few localized listeners to a broad range of global listeners. At the same time, digitization brings the challenge of retrieving music smoothly from large databases. To address this challenge, many systems that support the smooth retrieval of musical data have been developed. At the computational level, a query music piece is compared with the rest of the music pieces in the database. These music information retrieval (MIR) systems serve various applications such as general music retrieval, plagiarism detection, music recommendation, and musicology. This paper mainly addresses two parts of the MIR research area. First, it presents a general overview of MIR, examining its history, functionality, application areas, and components. Second, it investigates music similarity measurement methods, providing a comparative analysis of state-of-the-art methods. The scope of the paper is a comparative analysis of the accuracy and efficiency of a few key MIR systems. These analyses help in understanding the current and future challenges associated with MIR systems and music similarity measures. (A toy similarity sketch follows this entry.)
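
A toy sketch, not any specific system from the survey, of one common family of similarity measures: represent each piece by a summary timbre vector (here hypothetical mean MFCC values) and rank database pieces by cosine similarity to the query.

```python
# Illustrative only: the feature values below are made up, and real MIR systems
# use far richer representations than a single mean MFCC vector.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in "mean MFCC" vectors for a query piece and two database pieces.
query   = np.array([12.1, -3.4, 5.0, 0.7, -1.2])
piece_a = np.array([11.8, -3.0, 4.6, 0.9, -1.0])
piece_b = np.array([-2.5,  6.1, 0.3, 4.4,  2.2])

for name, feat in [("piece_a", piece_a), ("piece_b", piece_b)]:
    print(name, "similarity to query:", round(cosine_similarity(query, feat), 3))
```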

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh;Omar Faruk Riad
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.47-54
    • /
    • 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous studies suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for music emotion classification, and comparative experiments show that the proposed model classifies emotions more accurately. We propose a Conv1D and BiGRU model with attention for emotion classification on our Bengali music dataset. Preprocessing of the .wav files uses MFCCs. Contextual features are extracted by two Conv1D layers, which also reduce the dimensionality of the feature space, and dropout is applied to counter overfitting. Two bidirectional GRU (BiGRU) layers then model past and future context in the output of the Conv1D layers, and an attention mechanism connected to the BiGRU layers gives more weight to the most relevant MFCC feature vectors, which further increases classification accuracy. The resulting vector is finally classified into four emotion classes (Angry, Happy, Relax, Sad) by a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in the Bengali music dataset more effectively than baseline methods, achieving 95% accuracy. (A minimal architecture sketch follows this entry.)
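
A minimal sketch of a Conv1D + BiGRU + attention classifier of the kind described above; it is not the authors' code, and the 128-frame, 40-coefficient MFCC input shape, unit counts, and dropout rate are illustrative guesses.

```python
# Sketch under assumed shapes and hyper-parameters, not the paper's exact model.
from tensorflow.keras import layers, models

def build_model(time_steps=128, n_mfcc=40, n_classes=4):
    inp = layers.Input(shape=(time_steps, n_mfcc))
    # Two Conv1D layers extract local contextual features from the MFCC sequence.
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inp)
    x = layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(x)
    x = layers.Dropout(0.3)(x)                              # against overfitting
    # Two stacked BiGRU layers model past and future context.
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
    x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
    # Simple additive attention: score each time step, softmax, weighted sum.
    scores = layers.Dense(1, activation="tanh")(x)          # (batch, time, 1)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Dot(axes=1)([weights, x])              # (batch, 1, 128)
    context = layers.Flatten()(context)
    out = layers.Dense(n_classes, activation="softmax")(context)
    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```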

Vision-Based Piano Music Transcription System (비전 기반 피아노 자동 채보 시스템)

  • Park, Sang-Uk;Park, Si-Hyun;Park, Chun-Su
    • Journal of IKEEE
    • /
    • v.23 no.1
    • /
    • pp.249-253
    • /
    • 2019
  • Most commercialized music-transcription systems operate on audio information. However, these conventional systems suffer from environmental dependency, equipment dependency, and latency. This paper studies a vision-based music-transcription system that uses video information instead of the audio information relied on by traditional transcription programs. Computer vision is widely used to analyze and exploit information captured by equipment such as cameras. In this paper, we created a program that records piano playing with a smartphone camera and generates a MIDI file, an electronic representation of the notes. (A rough sketch of the idea follows this entry.)
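
A rough sketch of the vision-to-MIDI idea only, not the paper's implementation: compare each video frame against a reference frame of the idle keyboard and treat large changes inside manually marked key regions as key presses. The video path, key-region table, and threshold are hypothetical placeholders.

```python
# Crude illustration: per-key frame differencing turned into MIDI note events.
import cv2
import pretty_midi

KEY_REGIONS = {60: (100, 400, 20, 80),   # MIDI pitch -> (x, y, w, h) in pixels
               62: (125, 400, 20, 80)}

cap = cv2.VideoCapture("piano_play.mp4")                # placeholder file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
ok, ref = cap.read()                                    # keyboard at rest
if not ok:
    raise SystemExit("could not open piano_play.mp4")
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)

pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)               # acoustic grand piano
active = {}                                             # pitch -> onset time
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    t = frame_idx / fps
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), ref_gray)
    for pitch, (x, y, w, h) in KEY_REGIONS.items():
        pressed = diff[y:y + h, x:x + w].mean() > 40    # crude "key pressed" test
        if pressed and pitch not in active:
            active[pitch] = t                           # note onset
        elif not pressed and pitch in active:
            piano.notes.append(
                pretty_midi.Note(velocity=80, pitch=pitch,
                                 start=active.pop(pitch), end=t))

pm.instruments.append(piano)
pm.write("transcription.mid")
```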

Optical Music Score Recognition System for Smart Mobile Devices

  • Han, SeJin;Lee, GueeSang
    • International Journal of Contents
    • /
    • v.10 no.4
    • /
    • pp.63-68
    • /
    • 2014
  • In this paper, we propose a smart system that can optically recognize a music score within a document and play the music after recognition. Many historic handwritten documents have now been digitized. Converting images of music scores within such documents into digital files is particularly difficult and resource-intensive because a score is a two-dimensional structure combining staff lines and symbols. The proposed system takes an input image from a mobile device equipped with a camera module and optimizes the image via preprocessing. Binarization, sheet correction, staff-line recognition, vertical-line detection, note recognition, and symbol recognition are then applied, and a music file is generated in MusicXML format. Because the MusicXML file records the result as digital information, we can modify it, logically correct errors, and finally generate a MIDI file. Our system reduces misrecognition, and a wider range of music scores can be recognized because we implement distortion correction and vertical-line detection. Experiments with a variety of music scores show that the proposed method is practical and has potential for wide application. (A sketch of two early pipeline steps follows this entry.)
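
A minimal sketch, under stated assumptions, of two early steps in such an OMR pipeline: Otsu binarization and staff-line detection by horizontal projection. The file path is a placeholder and the paper's actual algorithms may differ.

```python
# Sketch: "score.png" is a placeholder path; the 50% row-coverage threshold is
# illustrative, not the authors' value.
import cv2
import numpy as np

img = cv2.imread("score.png", cv2.IMREAD_GRAYSCALE)
if img is None:
    raise SystemExit("score.png not found")

# Binarization: Otsu thresholding, inverted so ink pixels become 1.
_, binary = cv2.threshold(img, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Horizontal projection: staff lines show up as rows that are mostly ink.
row_ink = binary.sum(axis=1)
staff_rows = np.where(row_ink > 0.5 * binary.shape[1])[0]
if staff_rows.size == 0:
    raise SystemExit("no staff-line candidates found")

# Group adjacent candidate rows into individual staff lines.
lines, current = [], [staff_rows[0]]
for r in staff_rows[1:]:
    if r - current[-1] <= 1:
        current.append(r)
    else:
        lines.append(int(np.mean(current)))
        current = [r]
lines.append(int(np.mean(current)))
print("detected staff-line y positions:", lines)
```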

Staff-line and Measure Detection using a Convolutional Neural Network for Handwritten Optical Music Recognition (손사보 악보의 광학음악인식을 위한 CNN 기반의 보표 및 마디 인식)

  • Park, Jong-Won;Kim, Dong-Sam;Kim, Jun-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.7
    • /
    • pp.1098-1101
    • /
    • 2022
  • With the development of computer music notation programs, sheet music is now often produced with a computer. However, handwritten notation is still widely used, for example for educational purposes or for quickly writing music down while listening and dictating. Previous OMR studies focused on recognizing printed sheet music produced by notation programs; the results of handwritten OMR from camera images are poor because writing styles differ between people and because of lens distortion. In this study, as a preprocessing step for recognizing handwritten sheet music, we propose a method for recognizing staves using linear regression and a method for recognizing barlines using a CNN. The F1 scores of staff recognition and barline detection are 99.09% and 95.48%, respectively. These methods are expected to contribute to improving the accuracy of handwritten OMR. (A sketch of the staff-fitting idea follows this entry.)
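
A small sketch of what staff recognition by linear regression could look like; this is an assumption for illustration, not the authors' code. A line is fitted to candidate ink pixels of one handwritten staff line so that a slightly skewed hand-drawn line can still be located and followed.

```python
# Hypothetical (x, y) coordinates of dark pixels belonging to one staff line,
# e.g. collected by scanning each column for the darkest run of pixels.
import numpy as np

xs = np.array([10, 60, 110, 160, 210, 260, 310])
ys = np.array([204, 205, 207, 208, 210, 211, 213])    # slight downward skew

slope, intercept = np.polyfit(xs, ys, deg=1)           # least-squares line fit
print(f"staff line: y = {slope:.3f}*x + {intercept:.1f}")

# The fitted line gives the expected staff-line y position at any x, which can
# be used to normalize the staff before barline detection with a CNN.
x_query = 500
print("expected y at x=500:", slope * x_query + intercept)
```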

A Study of Digital Music Element for Music Plagiarism Analysis (음악 표절 분석을 위한 디지털 음악 요소에 대한 연구)

  • Shin, Mi-Hae;Jo, Jin-Wan;Lee, Hye-Seung;Kim, Young-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.8
    • /
    • pp.43-52
    • /
    • 2013
  • The purpose of this paper is to study the musical elements needed to analyze plagiarism between two sources. We first identify the digital music elements used to analyze a music source and examine how to apply them to plagiarism analysis with compiler techniques. In addition, we use the open-source Java API JFugue to process complex MIDI music data simply. We therefore design a music plagiarism analysis system using the MusicString notation supported by JFugue, and we construct an abstract syntax tree (AST) after investigating MusicString's syntax elements so that plagiarism analysis can be carried out efficiently. Until now, music plagiarism has been judged emotionally and subjectively; this paper suggests a first step toward building plagiarism analysis systematically. If well utilized, this research would be very meaningful for standardizing systematically whether a piece of music is plagiarized or not. (A conceptual sequence-comparison sketch follows this entry.)
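
The paper's system is built in Java on JFugue's MusicString and an AST; purely as a conceptual illustration, the sketch below compares two symbolic note sequences with a longest-common-subsequence measure in Python. The token strings and the 0.7 threshold are hypothetical.

```python
# Conceptual only: not the paper's JFugue/AST-based method.
def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence over token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

# MusicString-like pitch/duration tokens, hypothetical examples.
song_a = "C5q D5q E5q C5q E5h G5h".split()
song_b = "C5q D5q E5q C5q F5h G5h".split()

similarity = lcs_length(song_a, song_b) / max(len(song_a), len(song_b))
print(f"sequence similarity: {similarity:.2f}")
if similarity > 0.7:          # threshold chosen arbitrarily for illustration
    print("candidate passage for plagiarism review")
```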

Music Programming Language Composition Using Csound (Csound를 이용한 음악 프로그래밍 언어 제작)

  • Yeo Young-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.7
    • /
    • pp.365-370
    • /
    • 2005
  • The present study aims to establish a systematic theory for a user-friendly approach to creating music with the programming language Csound. Csound is a widely used computer music programming language and software synthesizer, developed by Barry Vercoe at the MIT Media Laboratory and favored by prominent sound designers. From the viewpoint of traditional Western music, the introduction and main body of this paper take the combination of music with natural sound, or with sound from specific media, as the starting point for creating electronic music and musical sound, and present a systematic method built on the operating principles of Csound and basic sample data. (A conceptual sketch of those operating principles follows this entry.)
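
A conceptual sketch only, in plain NumPy rather than Csound code: Csound's basic operating principle is an orchestra of instruments (signal generators) driven by a score of timed note events. The toy instrument below is a sine oscillator, and the score rows are (start, duration, frequency, amplitude); all values are made up.

```python
# Toy "orchestra + score" rendering loop, illustrating the idea only.
import numpy as np

SR = 44100  # sample rate

def sine_instr(dur, freq, amp):
    t = np.arange(int(dur * SR)) / SR
    return amp * np.sin(2 * np.pi * freq * t)

score = [(0.0, 0.5, 440.0, 0.3),    # A4
         (0.5, 0.5, 554.4, 0.3),    # C#5
         (1.0, 1.0, 659.3, 0.3)]    # E5

out = np.zeros(int(2.5 * SR))
for start, dur, freq, amp in score:
    s = int(start * SR)
    note = sine_instr(dur, freq, amp)
    out[s:s + note.size] += note     # mix the note event into the output buffer
print("rendered", out.size, "samples")
```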

Korean Traditional Music Genre Classification Using Sample and MIDI Phrases

  • Lee, JongSeol;Lee, MyeongChun;Jang, Dalwon;Yoon, Kyoungro
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.4
    • /
    • pp.1869-1886
    • /
    • 2018
  • This paper proposes a MIDI- and audio-based music genre classification method for Korean traditional music. There are many traditional instruments in Korea, and most traditional songs played on these instruments have similar patterns and rhythms. Although music information processing tasks such as music genre classification and audio melody extraction have been studied, most work has focused on pop, jazz, rock, and other universal genres; there are few studies on Korean traditional music because of the lack of datasets. This paper analyzes raw audio and MIDI phrases in Korean traditional music performed on Korean traditional instruments. The samples and MIDI classified with our system will be used to construct a database and to implement our Kontakt-based instrument library, so that a management system for a Korean traditional music library can be built on this classification system. Appropriate feature sets for raw audio and MIDI phrases are proposed, and classification results based on machine learning algorithms such as support vector machine, multi-layer perceptron, decision tree, and random forest are reported. (A minimal classification sketch follows this entry.)
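
A minimal sketch of the classification step with the algorithms named in the abstract (SVM, MLP, decision tree, random forest). The feature vectors and labels below are random stand-ins, not the paper's Korean traditional music features.

```python
# Stand-in data: 200 pieces, 40-dimensional feature vectors, four genres.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))           # e.g. MFCC statistics per piece
y = rng.integers(0, 4, size=200)         # genre labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, clf in [("SVM", SVC()),
                  ("MLP", MLPClassifier(max_iter=500)),
                  ("Decision tree", DecisionTreeClassifier()),
                  ("Random forest", RandomForestClassifier())]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(X_te, y_te):.2f}")
```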

Recognition of Music using Backpropagation Network (Backpropagation Network을 이용한 악보 인식)

  • Park, Hyun-Jun;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2007.06a
    • /
    • pp.258-261
    • /
    • 2007
  • This paper presents techniques for recognizing musical scores using a backpropagation network, one of the neural network algorithms, together with preprocessing techniques for the score image. Musical symbols and notes are segmented by preprocessing steps such as binarization, slope correction, and staff-line removal. The segmented symbols and notes are then recognized by a note-recognition network and a non-note symbol-recognition network. We verified the proposed recognition algorithm through experiments and analysis on various kinds of scores. (A tiny backpropagation sketch follows this entry.)
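
A tiny illustrative sketch, not the authors' network: a one-hidden-layer classifier trained with plain backpropagation on flattened binary symbol patches. Patch size, hidden size, class count, and the random stand-in data are assumptions.

```python
# Minimal backpropagation loop for a sigmoid hidden layer and softmax output.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_classes = 16 * 16, 32, 5     # 16x16 patches, 5 symbol classes

X = rng.integers(0, 2, size=(300, n_in)).astype(float)   # stand-in binarized patches
y = rng.integers(0, n_classes, size=300)
Y = np.eye(n_classes)[y]                                  # one-hot targets

W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_classes)); b2 = np.zeros(n_classes)
lr = 0.1

for epoch in range(200):
    # Forward pass: sigmoid hidden layer, softmax output.
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Backward pass: gradients of the cross-entropy loss.
    d_logits = (p - Y) / len(X)
    dW2 = h.T @ d_logits;  db2 = d_logits.sum(axis=0)
    d_h = d_logits @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h;       db1 = d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", (p.argmax(axis=1) == y).mean())
```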


Music Therapy Counseling Recommendation Model Based on Collaborative Filtering (협업 필터링 기반의 음악 치료 상담 추천 모델)

  • Park, Seong-Hyun;Kim, Jae-Woong;Kim, Dong-Hyun;Cho, Han-Jin
    • Journal of the Korea Convergence Society
    • /
    • v.10 no.9
    • /
    • pp.31-36
    • /
    • 2019
  • Music therapy, a field that converges treatment with music, which plays a fundamental role in personality formation, involves diverse and complex treatment methods. Music therapists may experience phenomena such as countertransference during consultations with clients, and psychological burnout creates many difficulties in reaching the final goal of music therapy. In this paper, we provide a collaborative filtering-based model for recommending music therapy consultation data, to support smooth consultations with clients who visit for music therapy. The proposed model measures the similarity between existing consultation data and a new client's data using the Euclidean distance and recommends similar consultation materials. Because music therapists can then provide suitable consultation materials for clients who need music therapy, smoother consultations are expected. (A minimal distance-based sketch follows this entry.)
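
A minimal sketch of the Euclidean-distance step under a hypothetical feature encoding, not the paper's data: represent each past consultation as a numeric vector, find the ones closest to a new client, and recommend their consultation materials.

```python
# Rows: past consultations encoded as feature vectors (e.g. age group,
# symptom scores, preferred genre id); all values here are made up.
import numpy as np

past_cases = np.array([[3.0, 7.0, 2.0],
                       [1.0, 4.0, 5.0],
                       [3.0, 6.0, 1.0],
                       [5.0, 2.0, 4.0]])
materials = ["case A plan", "case B plan", "case C plan", "case D plan"]

new_client = np.array([3.0, 6.5, 2.0])

# Euclidean distance from the new client to every stored consultation.
distances = np.linalg.norm(past_cases - new_client, axis=1)
order = np.argsort(distances)
for idx in order[:2]:                       # recommend the two nearest cases
    print(f"recommend {materials[idx]} (distance {distances[idx]:.2f})")
```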