• Title/Summary/Keyword: Voice signal

Search Results: 433

Blind Rhythmic Source Separation (블라인드 방식의 리듬 음원 분리)

  • Kim, Min-Je; Yoo, Ji-Ho; Kang, Kyeong-Ok; Choi, Seung-Jin
    • The Journal of the Acoustical Society of Korea / v.28 no.8 / pp.697-705 / 2009
  • An unsupervised (blind) method is proposed for extracting rhythmic sources from single-channel commercial polyphonic music. Commercial music signals rarely provide more than two channels, yet they often contain multiple instruments as well as singing voice. Instead of relying on conventional models of the mixing environment or on statistical characteristics, source-specific characteristics must therefore be introduced to separate or extract sources in such underdetermined conditions. This paper concentrates on extracting rhythmic sources from mixtures that also contain harmonic sources. An extension of nonnegative matrix factorization (NMF), called nonnegative matrix partial co-factorization (NMPCF), is used to analyze the relationships between spectral and temporal properties of the input matrices. In addition, the temporal repeatability of rhythmic sources is exploited as a property shared across segments of the input mixture. The proposed method achieves separation quality that is acceptable, though not superior, compared with prior-knowledge-based drum source separation systems; however, its blind operation makes it more broadly applicable, for example when no prior information is available or when the target rhythmic source is irregular. A minimal NMF sketch illustrating the underlying decomposition follows this entry.
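
The NMPCF approach described above is an extension of plain NMF, so a minimal NMF sketch may help clarify the underlying decomposition. The sketch below is illustrative only: the function name, the placeholder spectrogram, and the rank of 20 are assumptions, and the partial co-factorization across segments that distinguishes NMPCF (and the exploitation of temporal repeatability) is not reproduced here.

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates (Euclidean cost).

    Factorizes a nonnegative magnitude spectrogram V (freq x time) as
    V ~= W @ H, where columns of W are spectral bases and rows of H are
    their temporal activations.
    """
    rng = np.random.default_rng(seed)
    n_freq, n_time = V.shape
    W = rng.random((n_freq, rank)) + eps
    H = rng.random((rank, n_time)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Hypothetical usage: V would normally be the STFT magnitude of the mixture.
V = np.abs(np.random.randn(513, 400))   # placeholder spectrogram
W, H = nmf(V, rank=20)

# Rhythmic (percussive) components could then be selected, e.g. by checking
# which activation rows of H repeat regularly across segments, and the drum
# part reconstructed as W[:, idx] @ H[idx, :] for the selected indices idx.
```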

Anura Call Monitoring Data Collection and Quality Management through Citizen Participation (시민참여형 무미목 양서류 음성신호 수집 및 품질관리 방안)

  • Kyeong-Tae Kim; Hyun-Jung Lee; Won-Kyong Song
    • Korean Journal of Environment and Ecology / v.38 no.3 / pp.230-245 / 2024
  • Amphibians are sensitive to external environmental changes and serve as bioindicator species for assessing alterations or disturbances in local ecosystems. One-third of the amphibian species in the order Anura are estimated to be at risk of extinction due to anthropogenic threats such as habitat destruction and fragmentation caused by urbanization. Developing effective protection and conservation strategies for anuran amphibians requires species surveys that account for population characteristics. This study investigated the potential for citizen participation in ecological monitoring using the mating calls of anuran species, and proposed quality control measures to mitigate errors and biases so that reliable species occurrence data can be extracted. The citizen science project was carried out nationwide from April 1 to August 31, 2022, targeting the 12 anuran species found in Korea. Citizens voluntarily participated in acoustic monitoring by listening for mating calls and recording them with a mobile application. A quality control process was established to extract reliable species occurrence data, categorizing the errors and biases in the citizen-collected data into three types: omission, commission, and incorrect identification. A total of 6,808 observations were collected, of which 1,944 (28.55%) contained errors or biases. Omission was the most common error type, accounting for 922 cases (47.43%), followed by incorrect identification with 540 cases (27.78%) and commission with 482 cases (24.79%). The project successfully recorded the mating calls of 10 of the 12 anuran species in Korea, the exceptions being the Asian toad (Bufo gargarizans Cantor) and the Korean brown frog (Rana coreana); the missing calls were attributed mainly to these species being difficult to observe because of population decline, or to mismatches between their breeding seasons and the project period. This study is the first in Korea to collect distribution and species occurrence data from anuran mating calls through citizen participation, and it can serve as a foundation for designing future bioacoustic monitoring that incorporates citizen science and quality control of citizen-collected data. A schematic sketch of the three-level error tally follows this entry.
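
As a concrete illustration of the three-level quality control categorization, here is a hedged sketch that tallies hypothetical citizen records into omission, commission, and incorrect identification. The record fields and the operational definitions below follow the standard ecological usage of these terms and are assumptions; the paper's own QC criteria may differ in detail, and the species names are only examples.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record structure for one citizen-submitted observation; the
# field names are illustrative and do not come from the paper's dataset.
@dataclass
class Observation:
    reported_species: str | None   # species the citizen reported (None if missing)
    verified_species: str | None   # species confirmed from the audio (None if no call present)

def qc_label(obs: Observation) -> str:
    """Assign one of the three error types used in the study, or 'valid'.

    Assumed definitions (standard ecological sense):
    - omission:   a call is present in the audio but no species was reported
    - commission: a species was reported but no call is actually present
    - incorrect identification: both exist but they disagree
    """
    if obs.verified_species and not obs.reported_species:
        return "omission"
    if obs.reported_species and not obs.verified_species:
        return "commission"
    if obs.reported_species and obs.verified_species and obs.reported_species != obs.verified_species:
        return "incorrect identification"
    return "valid"

records = [
    Observation("Hyla japonica", "Hyla japonica"),
    Observation(None, "Rana uenoi"),
    Observation("Bufo gargarizans", None),
    Observation("Hyla japonica", "Dryophytes suweonensis"),
]
print(Counter(qc_label(r) for r in records))
```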

The Audience Behavior-based Emotion Prediction Model for Personalized Service (고객 맞춤형 서비스를 위한 관객 행동 기반 감정예측모형)

  • Ryoo, Eun Chung; Ahn, Hyunchul; Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.73-85 / 2013
  • In today's information society, knowledge services that create value from information are becoming more important every day. Advances in IT have made it easy to collect and use information, and many companies across a variety of industries actively use customer information for marketing. Since the start of the 21st century, companies have also used culture and the arts to manage their corporate image and to support marketing closely tied to their commercial interests, because it is difficult to attract or hold consumers' attention through technology alone; cultural activities have therefore become a common tool of differentiation, and many firms use the customer experience as a new marketing strategy to respond effectively to competitive markets. Accordingly, there is a rapidly growing need for personalized services that provide new experiences based on personal profile information describing individual characteristics such as language, symbols, behavior, and emotions. Such services make it possible to assess the interaction between people and content and to maximize customer experience and satisfaction. Among the various lines of work on customer-centered services, emotion recognition research has recently been drawing attention. Most existing studies recognize emotion from bio-signals, or from voice and facial expressions, where emotional changes are most pronounced; however, limitations of the required equipment and of real service environments make it difficult to predict people's emotions with these approaches. In this paper, we therefore develop an emotion prediction model based on a vision-based interface to overcome those limitations. Building on earlier work on recognizing emotion from gesture and posture, we developed a model that recognizes emotional states from body gesture and posture using the difference image method, and we identified the best-performing configuration for predicting four emotions (sadness, surprise, joy, and disgust). To build the model, an event booth was installed in the lobby of KOCCA, and visitors were shown stimulus film clips while their body gestures and postures were recorded as their emotions changed. Body movements were extracted with the difference image method, and the resulting data were prepared for modeling with a neural network. Models were built for three time-frame settings (20, 30, and 40 frames), and the best-performing one was selected. Before training, the 97 collected samples were divided into learning, test, and validation sets. The prediction model is an artificial neural network: a three-layer perceptron with one hidden layer and four output nodes, trained with the back-propagation algorithm using the sigmoid transfer function and with both the learning rate and the momentum set to 10% (0.1). Training was monitored on the test set and stopped at 50,000 iterations, after the minimum error had been reached, in order to locate the best stopping point. We then evaluated each model's accuracy and selected the best model for each emotion. The results showed prediction accuracies of 100% for sadness and 96% for joy with the 20-frame model, and 88% for surprise and 98% for disgust with the 30-frame model. These findings are expected to provide an effective algorithm for personalized services in industries such as advertising, exhibitions, and performances. A hedged sketch of the frame-differencing features and the perceptron configuration follows this entry.
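
The abstract names two concrete techniques: the difference image method for extracting body movements, and a three-layer perceptron with one hidden layer, four output nodes, sigmoid activation, and back-propagation with a 10% learning rate and momentum. The sketch below is a minimal, hedged illustration of such a pipeline using scikit-learn; the feature definition, the motion threshold of 25, the hidden-layer size of 16, the clip dimensions, and the placeholder data are assumptions and are not taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

EMOTIONS = ["sadness", "surprise", "joy", "disgust"]

def difference_image_features(frames, n_frames=20):
    """Turn a clip (sequence of grayscale frames) into a fixed-length feature vector.

    Consecutive frames are subtracted and thresholded so that only moving body
    regions remain; the amount of motion per frame pair is used as a simple
    feature. This is one plausible reading of the 'difference image method',
    not the paper's exact procedure.
    """
    frames = np.asarray(frames, dtype=np.float32)[:n_frames + 1]
    diffs = np.abs(np.diff(frames, axis=0))        # frame-to-frame differences
    motion = (diffs > 25).mean(axis=(1, 2))        # fraction of moving pixels
    feats = np.zeros(n_frames)
    feats[:len(motion)] = motion
    return feats

# Placeholder data standing in for the 97 recorded clips (random noise here).
rng = np.random.default_rng(0)
clips = [rng.integers(0, 256, size=(25, 120, 160)) for _ in range(97)]
X = np.stack([difference_image_features(c, n_frames=20) for c in clips])
y = rng.integers(0, 4, size=97)                    # placeholder emotion labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Three-layer perceptron: one hidden layer, sigmoid (logistic) activation, and
# SGD back-propagation with learning rate 0.1 and momentum 0.1, roughly
# mirroring the settings reported in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(16,), activation="logistic",
                    solver="sgd", learning_rate_init=0.1, momentum=0.1,
                    max_iter=50000, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```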