• Title/Summary/Keyword: Class-Evaluation

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae; Park, Eunbi; Han, Kiwoong; Lee, Junghyun; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model for image classification is that it can extract each region's features from the overall information of the image and consider the relationships between regions. However, a CNN model may not be suitable for emotional image data that lacks distinctive regional features. To address the difficulty of classifying emotional images, researchers continue to propose CNN-based architectures tailored to such data. Studies on the relationship between color and human emotion have also been conducted, showing that different colors induce different emotions. In deep learning research, color information has been applied to image sentiment classification, and using the image's color information in addition to the image itself improves classification accuracy compared with training the model on the image alone.
This study proposes two ways to increase accuracy by adjusting the result value produced after the model classifies an image's emotion. Both methods modify the result value based on statistics derived from the colors of the picture. Before training, the most frequently occurring two-color combinations are found across all training images; at test time, the most frequent two-color combination of each test image is found, and the result values are corrected according to how that combination is distributed in the training data. The correction weights the model's output using expressions built from the log and exponential functions.
Emotion6, classified into six emotions, and ArtPhoto, classified into eight categories, were used as the image data. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 were used as the CNN architectures, and performance was compared before and after applying the two-stage learning to each CNN model. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors that carry meaning in color psychology were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using Scikit-learn's K-means clustering, the seven colors most widely distributed in each image are extracted; the RGB coordinates of each extracted color are then compared with the RGB coordinates of the 16 reference colors, and the color is converted to the closest one. If combinations of three or more colors are used, too many combinations occur and their distribution becomes scattered, so each combination has little influence on the result value; to avoid this, two-color combinations were used to weight the model. Before training, the most frequent color combinations were found for all training images, and the distribution of color combinations for each class was stored in a Python dictionary to be used during testing. During testing, the two-color combination most distributed in each test image is found, its distribution in the training data is checked, and the result value is corrected accordingly. Several equations were devised to weight the model's result value based on the colors extracted in this way.
The data set was randomly split 80:20, and the model was verified using the held-out 20% as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, so the model was trained five times with different validation sets. Finally, performance was measured on the previously separated test set. Adam was used as the optimizer with a learning rate of 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, training was stopped; early stopping was set to restore the model with the best validation loss. Classification accuracy was higher when the information extracted from color properties was used together with the CNN than when only the CNN architecture was used. Illustrative sketches of the color-pair extraction, the result-value correction, and the cross-validation protocol follow below.
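The color-extraction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 16 reference RGB coordinates are common approximations chosen for the example, and `dominant_color_pair` is a hypothetical helper name.

```python
# Sketch of the color-extraction step: K-means over an image's RGB pixels,
# mapping of each cluster centre to the nearest of 16 named reference colors,
# and selection of the image's two most frequent named colors.
import numpy as np
from sklearn.cluster import KMeans

# Illustrative RGB approximations for the 16 colors named in the abstract
# (not the paper's exact reference table).
REFERENCE_COLORS = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208), "pink": (255, 192, 203),
    "magenta": (255, 0, 255), "brown": (139, 69, 19), "gray": (128, 128, 128),
    "silver": (192, 192, 192), "gold": (255, 215, 0), "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def nearest_named_color(rgb):
    """Return the reference color whose RGB coordinates are closest (Euclidean distance)."""
    rgb = np.asarray(rgb, dtype=float)
    return min(REFERENCE_COLORS,
               key=lambda name: np.linalg.norm(rgb - np.asarray(REFERENCE_COLORS[name])))

def dominant_color_pair(image_rgb, n_clusters=7):
    """Cluster an H x W x 3 image into seven colors and return its two most frequent named colors."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)   # pixels per cluster
    named_counts = {}
    for centre, count in zip(km.cluster_centers_, counts):
        name = nearest_named_color(centre)
        named_counts[name] = named_counts.get(name, 0) + int(count)
    top_two = sorted(named_counts, key=named_counts.get, reverse=True)[:2]
    return tuple(sorted(top_two))   # order-independent two-color combination
```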
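The abstract states that the result values are corrected with expressions built from the log and exponential functions but does not give the exact equations, so the correction below is only one plausible form. The dictionary layout follows the description (per-class counts of two-color combinations); `alpha` and the `1 + alpha * log1p(count)` weighting are assumptions for illustration.

```python
# Sketch of the result-value correction: the training-set distribution of two-color
# combinations per class is kept in a dictionary, and at test time the CNN's class
# scores are re-weighted by how often the test image's color pair occurred in each class.
from collections import defaultdict
import numpy as np

def build_combination_table(train_pairs, train_labels):
    """Count, per class label, how often each two-color combination appears in the training data."""
    table = defaultdict(lambda: defaultdict(int))
    for pair, label in zip(train_pairs, train_labels):
        table[label][pair] += 1
    return table

def corrected_scores(cnn_scores, test_pair, table, alpha=0.1):
    """Re-weight class scores with a log-scaled frequency term (illustrative formula only)."""
    scores = np.asarray(cnn_scores, dtype=float)
    # Classes are assumed to be integer labels 0..K-1, matching the score vector's order.
    weights = np.array([1.0 + alpha * np.log1p(table.get(c, {}).get(test_pair, 0))
                        for c in range(len(scores))])
    adjusted = scores * weights
    return adjusted / adjusted.sum()   # renormalise to a probability-like vector
```

Here `alpha` controls how strongly the color statistics pull on the CNN output; the paper explores several log- and exponential-based expressions rather than this single form.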
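The evaluation protocol above (80:20 hold-out split, 5-fold cross-validation on the training portion, Adam at a 0.01 learning rate, at most 20 epochs, early stopping with a patience of five epochs, restoration of the best weights) could be organised roughly as below. `build_model`, `train_one_epoch`, and `evaluate` are placeholders for whichever of the listed CNN architectures and training routines is used; NumPy arrays and a PyTorch-style model interface are assumed.

```python
# Rough outline of the evaluation protocol: 80:20 hold-out split, 5-fold cross-validation
# on the remaining 80%, early stopping on validation loss with a patience of 5 epochs,
# and restoration of the best-validation weights before final testing.
import copy
from sklearn.model_selection import KFold, train_test_split

def run_experiment(images, labels, build_model, train_one_epoch, evaluate):
    x_train, x_test, y_train, y_test = train_test_split(
        images, labels, test_size=0.2, random_state=0, stratify=labels)

    fold_scores = []
    for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(x_train):
        model = build_model(learning_rate=0.01)            # Adam optimiser assumed inside
        best_val_loss, best_state, patience = float("inf"), None, 0
        for epoch in range(20):                            # at most 20 epochs
            train_one_epoch(model, x_train[train_idx], y_train[train_idx])
            val_loss = evaluate(model, x_train[val_idx], y_train[val_idx])
            if val_loss < best_val_loss:
                best_val_loss = val_loss
                best_state = copy.deepcopy(model.state_dict())
                patience = 0
            else:
                patience += 1
                if patience >= 5:                          # early stopping
                    break
        model.load_state_dict(best_state)                  # reload the best-validation weights
        fold_scores.append(evaluate(model, x_test, y_test))
    return fold_scores
```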

EVALUATION OF CONDYLAR POSITION USING COMPUTED TOMOGRAPH FOLLOWING BILATERAL SAGITTAL SPLIT RAMUS OSTEOTOMY (전산화단층촬영법을 이용한 하악 전돌증 환자의 하악지 시상 골절단술후 하악과두 위치변화 분석)

  • Chol, Kang-Young; Lee, Sang-Han
    • Maxillofacial Plastic and Reconstructive Surgery / v.18 no.4 / pp.570-593 / 1996
  • This study evaluated condylar positional change after surgical correction of skeletal Class III malocclusion by BSSRO in 20 patients (9 males, 11 females), using computed tomograms taken in centric occlusion before surgery, immediately after surgery, and at long-term follow-up, and lateral cephalograms taken in centric occlusion before surgery, within 7 days during intermaxillary fixation, 24 hours after removal of intermaxillary fixation, and at long-term follow-up.
1. The mean intercondylar distance was 84.45 ± 4.01 mm, and the horizontal long-axis angle of the condyle was 11.89 ± 5.19° on the right and 11.65 ± 2.09° on the left; the condylar lateral poles were located about 12 mm and the medial poles about 7 mm from the reference line (AA') on the axial tomograph. The mean intercondylar distance was 84.43 ± 3.96 mm, and the vertical axis angle of the condyle was 78.72 ± 3.43° on the right and 78.09 ± 6.12° on the left.
2. No statistical significance was found in the condylar change (T2C-T1C), but it showed a definite increasing tendency. There was a significant decrease in the distance between both condylar poles and AA' (p<0.05) over the long term (TLC-T2C).
3. On the lateral cephalogram, no statistically significant difference was found between immediately after surgery and 24 hours after removal of intermaxillary fixation, except that the lower incisor tip moved forward about 0.33 mm (p<0.05). Considering individual relapse rates, the mean relapse rate was 1.2% at L1, 5.0% at B, 2.0% at Pog, 9.1% at Gn, and 10.3% at Me (p<0.05).
4. The influence of the mandibular set-back on the total mandibular relapse was statistically significant (p<0.05).
5. No statistical significance was found for the influence of the mandibular set-back (T2-T1) on the condylar change (T2C-T1C), of the condylar change (T2C-T1C, TLC-T2C) on the total mandibular relapse, of the pre-operative condylar position on the condylar change (T2C-T1C, TLC-T2C), or of the pre-operative mandibular posture on the condylar change (T2C-T1C, TLC-T2C) (p>0.05).
6. Multiple regression analysis of the influence of the pre-operative condylar position on the total mandibular relapse revealed that, for the right condyle, the greater the intercondylar distance and condylar vertical axis angle and the smaller the condylar head long-axis angle, the greater the mandibular horizontal relapse (L1, B, Pog, Gn, Me); the same was found for horizontal relapse (L1, Me) for the left condyle (p<0.05).
7. Multiple regression analysis of the influence of the pre-operative condylar position on the pre-operative mandibular posture revealed that, for the right condyle, the greater the intercondylar distance and condylar vertical axis angle and the smaller the condylar head long-axis angle, the greater the mandibular vertical length; for the left condyle, the greater the vertical length and prognathism (p<0.05).
8. Simple regression analysis of the influence of the pre-operative mandibular posture on the total mandibular relapse revealed that the greater the prognathism, the greater the total mandibular relapse at B, and the greater the overjet, the greater the total mandibular relapse (p<0.05).
Consequently, surgical mandibular repositioning did not significantly change condylar position when the condylar repositioning method was used.
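The findings in points 6 through 8 come from standard multiple and simple linear regression. As a hedged illustration only, such a fit could be run in Python with statsmodels as below; the file name and column names are hypothetical placeholders, not the study's actual data or variable names.

```python
# Illustrative multiple regression of mandibular horizontal relapse on the three
# pre-operative condylar measurements discussed in the abstract.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("condyle_measurements.csv")       # one row per patient (hypothetical file)

X = sm.add_constant(df[["intercondylar_distance",  # mm
                        "vertical_axis_angle",     # degrees
                        "long_axis_angle"]])       # degrees
y = df["horizontal_relapse_Pog"]                   # e.g. relapse measured at Pog, in mm

model = sm.OLS(y, X).fit()
print(model.summary())                             # coefficients, p-values, R-squared
```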
