• Title/Summary/Keyword: Frame Score


Preparation and Characterization of Canned Salmon Frame (연어 frame 통조림의 제조 및 특성)

  • Park, Kwon-Hyun;Yoon, Min-Seok;Kim, Jeong-Gyun;Kim, Hyung-Jun;Shin, Joon-Ho;Lee, Ji-Sun;No, Yoon-I;Heu, Min-Soo;Kim, Jin-Soo
    • Korean Journal of Fisheries and Aquatic Sciences / v.43 no.2 / pp.93-99 / 2010
  • This study was conducted to prepare canned salmon frame and to characterize its food components. In the preparation of high-quality canned foods, the boiling water generated in the pre-heating process should be removed, and then the pre-treated canned salmon frame should be sterilized for an $F_0$ value of 12 min. The proximate composition of the canned salmon frame prepared under optimal conditions (CSFP) was 58.4% moisture, 15.7% protein, 21.4% lipid, and 3.5% ash. Based on the results of volatile basic nitrogen and microbiological tests, the CSFP was acceptable. The sensory score for the color of CSFP was 4.1 points, which was higher than that of commercial canned salmon frame (CCSF). However, there were no significant differences in the sensory scores for flavor and taste between CSFP and CCSF. The total amino acid content of CSFP was 14.58 g/100 g, which was 4.9% lower than that of CCSF. The major amino acids in CSFP were aspartic acid (11.0%), glutamic acid (14.8%), and lysine (10.6%), which accounted for 36.4% of the total amino acid content. The CSFP was high in calcium and phosphorus, while it was low in magnesium and zinc. The major fatty acids in CSFP were 16:0 (15.2%), 18:1n-9 (17.0%), 18:2n-6 (16.7%), 20:5n-3 (9.3%), and 22:6n-3 (8.8%). Based on the results, CSFP is a high-quality canned food in terms of hygiene and nutrition.
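The $F_0$ sterilization target mentioned above follows the standard lethality definition (reference temperature 121.1°C, z = 10°C). The sketch below illustrates that textbook formula only; the temperature profile and sampling interval are illustrative assumptions, not the paper's retort schedule.

```python
# Generic sketch of the standard F0 lethality calculation
# (reference temperature 121.1 C, z-value 10 C); illustrative only,
# not the processing conditions reported in the paper.
def f0_value(temps_c, dt_min=1.0, t_ref=121.1, z=10.0):
    """Accumulate lethality (in minutes) over a temperature-time
    profile sampled every dt_min minutes."""
    return sum(10 ** ((t - t_ref) / z) * dt_min for t in temps_c)

# Holding the cold point at 121.1 C for 12 one-minute samples
# accumulates exactly F0 = 12 min.
profile = [121.1] * 12
print(round(f0_value(profile), 2))  # -> 12.0
```

Lower process temperatures contribute exponentially less lethality, which is why time at the reference temperature dominates the integral.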

Factor Analysis of Genetic Evaluations For Type Traits of Canadian Holstein Sires and Cows

  • Ali, A.K.;Koots, K.R.;Burnside, E.B.
    • Asian-Australasian Journal of Animal Sciences / v.11 no.5 / pp.463-469 / 1998
  • Factor analysis was applied as a multivariate statistical technique to official genetic evaluations of type classification traits for 1,265,785 Holstein cows and 10,321 sires, computed from data collected between August 1982 and June 1994 in Canada. Type traits included eighteen linear descriptive traits and eight major score card traits. Principal components of the factor analysis showed that only five factors were needed to explain the genetic value information of the linear descriptive traits for both cows and sires. Factor 1 included traits related to the mammary system, such as texture, median suspensory, fore attachment, fore teat placement, and rear attachment height and width. Factor 2 described stature, size, chest width, and pin width. These two factors had a similar pattern for both cows and sires. In contrast, Factor 3 for cows involved only bone quality, while for sires it additionally included foot angle, rear legs desirability, and leg set. Factor 4 for cows related to foot angle, set of rear leg, and leg desirability, while for sires it related to loin strength and pin setting. Finally, Factor 5 included loin strength and pin setting for cows, and described only pin setting for sires. Only two factors were required to describe the score card traits of cows and sires: Factor 1 related to final score, feet and legs, udder traits, mammary system, and dairy character, while frame/capacity and rump were described by Factor 2. Communality estimates, which give the proportion of a type trait's variance that is shared with other type traits via the common factors, were high, the highest (${\geq}$ 80%) for final score, stature, size, and chest width. Pin width and pin desirability had the lowest communalities, 56% and 37%. The results indicated shifts in emphasis over the twelve-year period away from udder traits and dairy character, and towards size, scale, and width traits. A new system that computes final score from type components has been initiated.
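The kind of principal-component factor extraction and communality estimation described above can be sketched in a few lines of numpy. The data, the number of retained factors, and the loading structure below are illustrative assumptions, not the Holstein records.

```python
import numpy as np

# Hypothetical sketch of principal-component factor extraction and
# communality estimation; synthetic data, not the paper's evaluations.
rng = np.random.default_rng(0)
n, p = 500, 6
common = rng.normal(size=(n, 2))            # two latent common factors
load = rng.normal(size=(2, p))
X = common @ load + 0.5 * rng.normal(size=(n, p))

R = np.corrcoef(X, rowvar=False)            # correlation matrix of traits
eigvals, eigvecs = np.linalg.eigh(R)        # ascending eigenvalue order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2                                       # retain the largest factors
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])
communality = (loadings ** 2).sum(axis=1)   # variance shared via common factors
print(communality.round(2))
```

A trait whose communality is near 1 is almost fully described by the retained common factors; a low value (like the 37% reported for pin desirability) means most of its variance is trait-specific.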

A Preliminary Study on Development and Evaluation of Home Health Care Nurse Clinical Practice Program -Focused on Postoperative Orthopedic Patients- (가정간호사 임상실무 훈련프로그램 개발과 평가를 위한 사전 연구 -정형외과 수술 환자를 중심으로-)

  • 서영숙
    • Journal of Korean Academy of Nursing / v.26 no.1 / pp.15-32 / 1996
  • The clinical practice program for home care nurses was implemented in June 1994 to help set up a hospital-based home care system in the Kwangju City area, as a collaborative work between the Department of Orthopedic Surgery at Chunnam University Hospital and Chunnam University School of Nursing. Under the developed clinical practice strategy, eight weeks of training were given to five licensed home care nurses who had completed Parts I and II of the home health care nursing practicum from June 1994. The purpose of this descriptive evaluation study was to identify the effectiveness of the clinical practice program for home care nurses specializing in patient care for people with musculoskeletal function impairment. For data analysis, data triangulation was used in the five home care nurse case evaluations. The data analyzed included confidence scores from home care nurse self-evaluation, patient and family member satisfaction scores, and competency scores from preceptor evaluation. The study findings revealed that an increase in nursing performance did not necessarily coincide with an increase in competency score, nor with the patient/family member satisfaction scores. The ranking derived from the clinical performance scores of the five home care nurses corresponded to those from three measurements: competency score, patient satisfaction score, and family member satisfaction score. However, it differed from the ranking associated with the confidence score. The consistency among the three objective evaluation methods suggests that the level of competency measured by the educator can be further explained by the levels of patient/family member satisfaction.
The salient finding of this study was that, in the case of nurse A, who had little clinical experience in orthopedic patient care, there was a significant increase in the level of confidence and in the professional-skill subscale of competency with the home care clinical practice. Therefore, the clinical practice program would be effective for nurses who have had little experience in the area of specialization. The study results suggest that there might be a time lag between the development of a cognitive sense of performance (confidence) and actual clinical performance (competency). In future research, the relationships between the confidence and competency scores, and between the confidence score and the patient satisfaction score, should be measured in different time frames to achieve better explanatory power.


Audio Event Detection Based on Attention CRNN (Attention CRNN에 기반한 오디오 이벤트 검출)

  • Kwak, Jin-Yeol;Chung, Yong-Joo
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.3 / pp.465-472 / 2020
  • Recently, various deep neural network-based methods have been proposed for audio event detection. In this study, we improved the performance of audio event detection by adopting an attention approach in a baseline CRNN. We applied context gating at the input of the baseline CRNN and added an attention layer at the output. We further improved the performance of the attention-based CRNN by using strongly labeled audio data at the frame level as well as weakly labeled data at the clip level. In audio event detection experiments using the audio data from Task 4 of the DCASE 2018/2019 Challenge, we obtained a maximum relative improvement of 66% in the F-score with the proposed attention-based CRNN compared with the baseline CRNN.
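The two attention components named above can be sketched in numpy: context gating multiplies the input features by a learned sigmoid gate, and an attention layer pools frame-level scores into one clip-level score per class. All shapes and weights below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Minimal numpy sketch of context gating and attention pooling,
# the two mechanisms named in the abstract; shapes/weights are
# illustrative assumptions, not the paper's network.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_gating(x, W, b):
    # Element-wise gate computed from the features themselves.
    return x * sigmoid(x @ W + b)

def attention_pool(frame_probs, frame_att_logits):
    # Softmax attention over time: frames deemed relevant contribute
    # more to the clip-level probability.
    w = np.exp(frame_att_logits - frame_att_logits.max(axis=0))
    w = w / w.sum(axis=0)
    return (w * frame_probs).sum(axis=0)

T, F = 100, 64                       # frames x feature dims (assumed)
rng = np.random.default_rng(1)
x = rng.normal(size=(T, F))
gated = context_gating(x, 0.1 * rng.normal(size=(F, F)), np.zeros(F))
clip = attention_pool(sigmoid(gated[:, :10]), gated[:, :10])
print(clip.shape)                    # one clip-level score per event class
```

Clip-level pooling like this is what lets weakly labeled (clip-level) data train a model that still produces frame-level detections.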

Robust Method of Video Contrast Enhancement for Sudden Illumination Changes (급격한 조명 변화에 강건한 동영상 대조비 개선 방법)

  • Park, Jin Wook;Moon, Young Shik
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.55-65 / 2015
  • Contrast enhancement methods designed for a single image may cause flickering artifacts when applied to videos, because they do not consider temporal continuity. On the other hand, methods that consider the continuity of videos can reduce flickering artifacts but may cause unnecessary fade-in/out artifacts when the intensity of the video changes abruptly. In this paper, we propose a method of video contrast enhancement that is robust to sudden illumination changes. The proposed method enhances each frame by Fast Gray-Level Grouping (FGLG) and maintains the continuity of the video with an exponential smoothing filter. It calculates the smoothing factor of the exponential smoothing filter using a sigmoid function and applies it to each frame to reduce unnecessary fade-in/out effects. In the experiments, six metrics are used for the performance analysis of the proposed and traditional methods. The experiments show that the proposed method achieves the best quantitative performance in terms of MSSIM and flickering score, and visual quality comparison confirms its adaptive enhancement under sudden illumination changes.
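The temporal-smoothing idea above can be sketched as follows: each frame's value is blended with the previous smoothed value by an exponential smoothing filter whose factor comes from a sigmoid of the frame-to-frame change. The constants and the use of frame-mean intensity below are illustrative assumptions, not the paper's parameters.

```python
import math

# Sketch of a sigmoid-driven exponential smoothing filter: small
# frame-to-frame changes get heavy smoothing (suppressing flicker),
# while a sudden illumination jump drives alpha toward 1 so the filter
# follows the new level immediately (no fade-in/out). The constants
# k and d0 are illustrative assumptions.
def smoothing_factor(delta, k=0.5, d0=20.0):
    return 1.0 / (1.0 + math.exp(-k * (abs(delta) - d0)))

def smooth_sequence(frame_means):
    smoothed = [frame_means[0]]
    for m in frame_means[1:]:
        a = smoothing_factor(m - smoothed[-1])
        smoothed.append(a * m + (1 - a) * smoothed[-1])
    return smoothed

means = [100, 101, 100, 180, 181, 180]  # sudden illumination jump at frame 3
print([round(v, 1) for v in smooth_sequence(means)])
```

A fixed smoothing factor would either flicker (alpha near 1) or fade slowly through the jump (alpha near 0); making alpha a sigmoid of the change size gives both behaviors where each is wanted.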

A study on improving the performance of the machine-learning based automatic music transcription model by utilizing pitch number information (음고 개수 정보 활용을 통한 기계학습 기반 자동악보전사 모델의 성능 개선 연구)

  • Daeho Lee;Seokjin Lee
    • The Journal of the Acoustical Society of Korea / v.43 no.2 / pp.207-213 / 2024
  • In this paper, we study how to improve the performance of a machine learning-based automatic music transcription model by adding musical information to the input data. The added musical information is the number of pitches occurring in each time frame, obtained by counting the notes activated in the ground-truth score. The obtained pitch-count information is concatenated to the log mel-spectrogram, which is the input of the existing model. We use an automatic music transcription model that includes four types of blocks predicting four types of musical information, and demonstrate that the simple method of adding, to the existing input, the pitch-count information corresponding to the musical information predicted by each block is helpful in training the model. To evaluate the performance improvement, we conducted experiments using the MIDI Aligned Piano Sounds (MAPS) data; when all pitch-count information was used, improvements of 9.7% in the frame-based F1 score and 21.8% in the note-based F1 score including offset were confirmed.
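The input-augmentation step described above amounts to counting active notes per frame in the ground-truth piano roll and concatenating that count onto the log mel-spectrogram. The shapes below (frame count, mel bins, 88 piano pitches) are illustrative assumptions.

```python
import numpy as np

# Sketch of concatenating per-frame pitch counts onto a log
# mel-spectrogram input; random placeholder data, assumed shapes.
T, M, P = 200, 229, 88                 # frames, mel bins, piano pitches
rng = np.random.default_rng(2)
log_mel = rng.normal(size=(T, M))
piano_roll = rng.random(size=(T, P)) > 0.95   # ground-truth activations

# Number of simultaneously active notes in each time frame.
pitch_count = piano_roll.sum(axis=1, keepdims=True).astype(float)
augmented = np.concatenate([log_mel, pitch_count], axis=1)

print(augmented.shape)                 # spectrogram plus one count column
```

At inference time the ground truth is unavailable, so a deployed system would need the pitch count estimated rather than read from the answer sheet; the sketch only shows the training-input construction.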

Heart Sound-Based Cardiac Disorder Classifiers Using an SVM to Combine HMM and Murmur Scores (SVM을 이용하여 HMM과 심잡음 점수를 결합한 심음 기반 심장질환 분류기)

  • Kwak, Chul;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea / v.30 no.3 / pp.149-157 / 2011
  • In this paper, we propose a new cardiac disorder classification method using a support vector machine (SVM) to combine hidden Markov model (HMM) scores and murmur existence information. Using cepstral features and the HMM Viterbi algorithm, we segment input heart sound signals into HMM states for each cardiac disorder model and compute a log-likelihood (score) for every state in the model. To exploit the temporal position characteristics of murmur signals, we divide the input signals into two subbands, compute the murmur probability of every subband of each frame, and obtain the murmur score for each state by using the state segmentation information obtained from the Viterbi algorithm. With an input vector containing the HMM state scores and the murmur scores for all cardiac disorder models, the SVM finally decides the cardiac disorder category. In cardiac disorder classification experiments, the proposed method shows a relative improvement of 20.4% over the HMM-based classifier with conventional cepstral features.
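The fusion step above boils down to stacking, for every disorder model, the per-state Viterbi log-likelihoods and per-state murmur scores into one fixed-length vector that the SVM classifies. The numbers of models and states below are assumptions for illustration.

```python
import numpy as np

# Sketch of assembling the SVM input vector from per-model, per-state
# HMM scores and murmur scores; sizes and values are assumptions.
n_models, n_states = 5, 4
rng = np.random.default_rng(3)
hmm_state_scores = rng.normal(size=(n_models, n_states))  # Viterbi log-likelihoods
murmur_scores = rng.random(size=(n_models, n_states))     # murmur score per state

# One flat feature vector per heart-sound recording.
x = np.concatenate([hmm_state_scores.ravel(), murmur_scores.ravel()])
print(x.shape)
```

Because every recording yields the same-length vector regardless of its duration, a standard SVM can be trained on these vectors directly.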

A Study on Acoustic Masking Effect by Frame-Based Formant Enhancement (프레임 기반의 포먼트 강조에 의한 음향 마스킹 현상 발생에 대한 연구)

  • Jeon, Yu-Yong;Kim, Kyu-Sung;Lee, Sang-Min
    • Journal of Biomedical Engineering Research / v.30 no.6 / pp.529-534 / 2009
  • One characteristic of the hearing impaired is that their frequency selectivity is poorer than that of normal-hearing listeners. To compensate for this, formant enhancement algorithms and spectral contrast enhancement algorithms have been developed. However, in some cases these algorithms fail to improve the frequency selectivity of the hearing impaired; one of the reasons is acoustic masking among the enhanced formants. In this study, we tried to enhance the formants based on the individual masking characteristic of each subject. The masking characteristic used was the minimum level difference (MLD) between the first and second formants at which acoustic masking occurred. If the level difference between the two formants in a frame was larger than the MLD, the gain of the first formant was decreased to reduce the acoustic masking among formants. In a speech discrimination test using formant-enhanced speech, the speech discrimination score (SDS) of speech with differently enhanced formants was significantly superior to the SDS of speech with equally enhanced formants. This means that suppressing the acoustic masking among formants improves the frequency selectivity of the hearing impaired.
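The per-frame rule described above can be sketched directly: when the F1-F2 level difference exceeds the listener's MLD, the first formant's gain is pulled down until the difference no longer exceeds it. The MLD and level values below are illustrative assumptions, not measured subject data.

```python
# Sketch of the per-frame masking rule: reduce the first formant's
# level whenever its excess over the second formant exceeds the
# subject's minimum level difference (MLD). Values are illustrative.
def adjust_f1_level(f1_level_db, f2_level_db, mld_db):
    diff = f1_level_db - f2_level_db
    if diff > mld_db:
        # Pull F1 down so the F1-F2 difference equals the MLD.
        return f1_level_db - (diff - mld_db)
    return f1_level_db

print(adjust_f1_level(70.0, 50.0, mld_db=15.0))  # -> 65.0
print(adjust_f1_level(60.0, 50.0, mld_db=15.0))  # -> 60.0 (unchanged)
```

Because the MLD is measured per subject, the same frame can be attenuated for one listener and left untouched for another.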

Micro-Expression Recognition Base on Optical Flow Features and Improved MobileNetV2

  • Xu, Wei;Zheng, Hao;Yang, Zhongxue;Yang, Yingjie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.1981-1995 / 2021
  • When a person tries to conceal emotions, real emotions manifest themselves in the form of micro-expressions. Facial micro-expression recognition remains extremely challenging in the field of pattern recognition, because it is difficult to find a feature extraction method that copes with the small changes and short duration of micro-expressions. Most methods are based on hand-crafted features to extract subtle facial movements. In this study, we introduce a method that incorporates optical flow and deep learning. First, we extract the onset frame and the apex frame from each video sequence. Then, the motion features between these two frames are extracted using the optical flow method. Finally, the features are input into an improved MobileNetV2 model, where an SVM is applied to classify expressions. To evaluate the effectiveness of the method, we conduct experiments on the public spontaneous micro-expression database CASME II. Under leave-one-subject-out cross-validation, the recognition accuracy reaches 53.01% and the F-score reaches 0.5231. The results show that the proposed method can significantly improve micro-expression recognition performance.
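The onset-to-apex motion estimation above can be illustrated with a single-window Lucas-Kanade solve in pure numpy, used here as a simple stand-in for a full optical-flow method (the paper's exact flow algorithm is not reproduced). The synthetic frames are an assumption for demonstration.

```python
import numpy as np

# Stand-in sketch for onset->apex motion estimation: a single-window
# Lucas-Kanade least-squares solve for one global (u, v) motion vector.
# A real pipeline would compute dense flow between the two frames.
def lucas_kanade_global(frame0, frame1):
    Ix = np.gradient(frame0, axis=1)          # spatial gradients
    Iy = np.gradient(frame0, axis=0)
    It = frame1 - frame0                      # temporal gradient
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)              # brightness-constancy solve

# Synthetic example: frame1 is frame0 shifted one pixel to the right.
f0 = np.zeros((32, 32)); f0[12:20, 10:18] = 1.0
f1 = np.roll(f0, 1, axis=1)
u, v = lucas_kanade_global(f0, f1)
print(round(float(u), 1), round(abs(float(v)), 1))  # horizontal motion ~1 px
```

Stacking such motion fields (dense, in practice) between the neutral onset frame and the peak apex frame is what gives the classifier a duration-independent description of the micro-movement.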

YOLOv4-based real-time object detection and trimming for dogs' activity analysis (강아지 행동 분석을 위한 YOLOv4 기반의 실시간 객체 탐지 및 트리밍)

  • Atif, Othmane;Lee, Jonguk;Park, Daihee;Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.967-970 / 2020
  • In previous work, we presented a monitoring system that automatically detects some dog behaviors from videos. However, the input video data used by that system was pre-trimmed to ensure it contained a dog only. In a real-life situation, the monitoring system would continuously receive video data, including frames that are empty and frames that contain people. In this paper, we propose a YOLOv4-based system for automatic object detection and trimming of dog videos. Sequences of frames trimmed from the video data received from the camera are analyzed frame by frame with a YOLOv4 model to detect dogs and people, and records of the occurrences of dogs and people are generated. The records of each sequence are then analyzed through a rule-based decision tree to classify the sequence, which is forwarded if it contains a dog only and ignored otherwise. The results of experiments on long untrimmed videos show that the proposed method achieves excellent detection performance, reaching an average of 0.97 across precision, recall, and F1 score at approximately 30 fps, thereby guaranteeing real-time processing.
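The rule-based decision step above can be sketched over per-frame detection records: a sequence is forwarded only when it contains a dog and no person. The record format and the dog-presence threshold below are illustrative assumptions, not the paper's exact rules.

```python
# Sketch of a rule-based decision over per-frame detection records:
# forward dog-only sequences to the behavior analyzer, ignore the rest.
# Record format and the min_dog_ratio threshold are assumptions.
def classify_sequence(records, min_dog_ratio=0.5):
    # records: list of (dog_detected, person_detected) booleans per frame
    dog_ratio = sum(d for d, _ in records) / len(records)
    has_person = any(p for _, p in records)
    if dog_ratio >= min_dog_ratio and not has_person:
        return "forward"   # dog-only sequence
    return "ignore"        # empty, or contains a person

seq = [(True, False)] * 28 + [(False, False)] * 2
print(classify_sequence(seq))  # -> forward
```

Filtering at the sequence level rather than per frame keeps a few missed detections from fragmenting an otherwise dog-only clip.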