• Title/Abstract/Keywords: image learning

Search Results: 1,413, Processing Time: 0.025 seconds

Development and Application of a Tool for Measuring on a Scientist Image by the Semantic Differential Method (의미분석법에 의한 과학자 이미지 측정도구 개발 및 적용)

  • Youngwook Song;Hyukjoon Choi
    • Journal of Science Education
    • /
    • v.48 no.1
    • /
    • pp.63-73
    • /
    • 2024
  • Knowing the learner's image of a subject-related occupation provides useful data for setting the direction of a teacher's teaching and learning. Existing drawing-based analysis tools have the limitation that analyzing students' drawings of a scientist's appearance takes a long time. The semantic differential method is widely used to analyze images of specific objects; however, studies using it have tended to reuse the adjective pairs from the original study as they were, failing to reflect terms and factors that change over time. In this study, we used the semantic differential method to develop a tool for measuring middle school students' image of scientists, applied it to middle school students, and discuss the educational implications of its usefulness for measuring scientist image.
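As a rough illustration of how semantic-differential responses are typically scored, here is a minimal Python sketch; the adjective pairs, factor grouping, and 7-point scale are hypothetical placeholders, not the instrument the paper actually developed.

```python
# Hedged sketch: scoring a semantic-differential questionnaire.
# The adjective pairs, factors, and 7-point scale below are illustrative
# assumptions, not the paper's instrument.

# Each item is a bipolar adjective pair rated 1..7; some items are
# reverse-coded so that a higher score always means a more positive image.
ITEMS = {
    "warm-cold":       {"factor": "evaluation", "reverse": False},
    "boring-exciting": {"factor": "evaluation", "reverse": True},
    "weak-strong":     {"factor": "potency",    "reverse": True},
}

def factor_scores(responses: dict[str, int]) -> dict[str, float]:
    """Average the 1..7 ratings per factor, reverse-coding where needed."""
    buckets: dict[str, list[float]] = {}
    for item, rating in responses.items():
        meta = ITEMS[item]
        value = 8 - rating if meta["reverse"] else rating
        buckets.setdefault(meta["factor"], []).append(value)
    return {f: sum(v) / len(v) for f, v in buckets.items()}

print(factor_scores({"warm-cold": 6, "boring-exciting": 2, "weak-strong": 3}))
# -> {'evaluation': 6.0, 'potency': 5.0}
```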

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.139-156
    • /
    • 2021
  • The biggest reason for using a deep learning model in image classification is that it can extract each region's features from the overall information of the image and thereby consider the relationships between regions. However, a CNN model may not be suitable for emotional image data that lacks distinctive regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures tailored to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different colors induce different emotions, and some deep learning studies have applied color information to image sentiment classification. Using an image's color information in addition to the image itself improves emotion classification accuracy compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both modify the result value using statistics based on the picture's colors. In the first method, the two-color combinations most common across all training data are found before training, the two-color combination most common in each test image is found during testing, and the result values are corrected according to the color-combination distribution. The second method weights the result value obtained after the model classifies an image's emotion using expressions based on the log and exponential functions. Emotion6, labeled with six emotions, and ArtPhoto, labeled with eight categories, were used as image data. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen reference colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn clustering, the seven colors most prevalent in each image are identified, and each cluster's RGB coordinates are compared with the RGB coordinates of the 16 reference colors; that is, each is converted to the closest reference color. If combinations of three or more colors were used, too many combinations would occur and the distribution would become scattered, so that each combination has too little influence on the result value; to avoid this, two-color combinations were used to weight the model. Before training, the most common color combinations were found for all training images, and the distribution of color combinations per class was stored in a Python dictionary for use during testing. During testing, the two-color combination most common in each test image is found, its distribution in the training data is checked, and the result value is corrected accordingly. We devised several equations to weight the model's result value based on the colors extracted as described above.
The data set was randomly divided 80:20, and the model was verified using 20% of the data as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times with different validation sets. Finally, performance was checked on the previously held-out test set. Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, the experiment was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the information extracted from color properties was used together with the CNN than when only the CNN architecture was used.
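To make the two-stage idea concrete, the following Python sketch extracts a dominant two-color combination with scikit-learn's KMeans and reweights the classifier's probabilities by the per-class frequency of that combination; the palette subset, the distinct-color selection, and the log-based weighting formula are illustrative assumptions, not the paper's exact equations.

```python
# Hedged sketch of the two-stage method: (1) extract dominant colors with
# K-means and snap them to a fixed palette, (2) reweight the CNN's class
# probabilities by how often that color pair occurred per class in training.
import numpy as np
from sklearn.cluster import KMeans

PALETTE = {  # a few of the paper's 16 reference colors (RGB); subset is illustrative
    "red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255),
    "black": (0, 0, 0), "white": (255, 255, 255), "gray": (128, 128, 128),
}

def dominant_color_pair(image: np.ndarray, n_clusters: int = 7) -> tuple:
    """Cluster the pixels of an (H, W, 3) RGB image, snap cluster centers to
    the palette, and return the two most frequent distinct color names."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    names = []
    for center in km.cluster_centers_:
        dists = {n: np.linalg.norm(center - np.array(c)) for n, c in PALETTE.items()}
        names.append(min(dists, key=dists.get))
    top = []
    for i in np.argsort(counts)[::-1]:   # most frequent clusters first
        if names[i] not in top:
            top.append(names[i])
        if len(top) == 2:
            break
    top = (top * 2)[:2]                  # fallback if fewer than 2 distinct colors
    return tuple(sorted(top))

def reweight(probs: np.ndarray, pair_freq_per_class: np.ndarray) -> np.ndarray:
    """Boost classes in which this color pair was common during training.
    The log-based weighting is an assumed form, not the paper's equation."""
    w = np.log1p(pair_freq_per_class)
    adjusted = probs * (1.0 + w / (w.sum() + 1e-8))
    return adjusted / adjusted.sum()
```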

Estimation of the Lodging Area in Rice Using Deep Learning (딥러닝을 이용한 벼 도복 면적 추정)

  • Ban, Ho-Young;Baek, Jae-Kyeong;Sang, Wan-Gyu;Kim, Jun-Hwan;Seo, Myung-Chul
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.66 no.2
    • /
    • pp.105-111
    • /
    • 2021
  • Rice lodging is an annual occurrence caused by typhoons accompanied by strong winds and heavy rainfall, resulting in damage related to pre-harvest sprouting during the ripening period. Rapid estimation of the lodged area is therefore necessary to enable a timely response to the damage. To this end, we obtained images of rice lodging using a drone in Gimje, Buan, and Gunsan and converted them to 128 × 128 pixel images. A convolutional neural network (CNN) deep learning model based on these images was used to predict rice lodging, classified into two types (lodging and non-lodging), with the images divided into training and validation sets at an 8:2 ratio. The CNN model was trained using three optimizers (Adam, RMSprop, and SGD). The lodged area was then evaluated for three fields using data excluded from the training and validation sets. The images were combined into composite images of the entire fields using Metashape, and these composites were divided into 128 × 128 pixel tiles. Lodging in the divided images was predicted with the trained CNN model, and the lodged area was calculated by multiplying the ratio of lodging images to the total number of field images by the area of the entire field. The results for the training and validation sets showed that accuracy increased as learning progressed, eventually exceeding 0.919. The results for each of the three fields showed high accuracy for all optimizers, among which Adam was the most accurate (normalized root mean square error: 2.73%). Based on these findings, it is anticipated that the area of lodged rice can be predicted rapidly using deep learning.
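A minimal sketch of the area calculation described above, assuming a Keras-style `model.predict` with a sigmoid output and simple non-overlapping tiling; both the model interface and the tiling helper are assumptions, not the authors' code.

```python
# Hedged sketch: estimate lodged area as
# (lodging tiles / total tiles) * field area.
import numpy as np

def lodged_area(model, field_image: np.ndarray, field_area_ha: float,
                tile: int = 128) -> float:
    """Split a composite field image into 128x128 tiles, classify each tile,
    and scale the lodging-tile ratio by the field area (hectares)."""
    h, w = field_image.shape[:2]
    tiles = [field_image[r:r + tile, c:c + tile]
             for r in range(0, h - tile + 1, tile)
             for c in range(0, w - tile + 1, tile)]
    batch = np.stack(tiles) / 255.0            # simple [0, 1] normalization
    preds = model.predict(batch)               # assumed Keras-style API
    n_lodging = int((preds.ravel() > 0.5).sum())  # binary decision per tile
    return field_area_ha * n_lodging / len(tiles)
```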

Siamese Neural Networks to Overcome the Insufficient Data Problems in Product Defect Detection (제품 결함 탐지에서 데이터 부족 문제를 극복하기 위한 샴 신경망의 활용)

  • Shin, Kang-hyeon;Jin, Kyo-hong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.108-111
    • /
    • 2022
  • Applying deep learning to machine vision systems for product defect detection requires vast amounts of training data covering various defect cases. However, because data imbalance occurs across defect types in the actual manufacturing industry, collecting enough product images to generalize over defect cases takes a long time. In this paper, we apply a Siamese neural network, which can be trained even with a small amount of data, to product defect detection, and we modify the image pairing method and the contrastive loss function to suit the characteristics of product defect image data. We indirectly evaluated the embedding performance of the Siamese network using AUC-ROC; it performed well when images were paired only within the same product, defective images were not paired with each other, and the network was trained with an exponential contrastive loss.
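For reference, a minimal NumPy sketch of the standard contrastive loss and one plausible "exponential" variant follows; the abstract does not spell out the paper's exact formula, so the exponential form here is an assumption.

```python
# Hedged sketch: contrastive losses for a Siamese embedding.
# d is the Euclidean distance between a pair's two embeddings;
# y = 1 for similar pairs, y = 0 for dissimilar pairs.
import numpy as np

def contrastive_loss(d: np.ndarray, y: np.ndarray, margin: float = 1.0) -> float:
    """Hadsell-style contrastive loss with a margin on dissimilar pairs."""
    return float(np.mean(y * d**2 + (1 - y) * np.maximum(0.0, margin - d)**2))

def exp_contrastive_loss(d: np.ndarray, y: np.ndarray, margin: float = 1.0) -> float:
    """Assumed exponential variant: penalties grow/decay exponentially in d,
    vanishing for dissimilar pairs once d exceeds the margin."""
    pos = y * (np.exp(d) - 1)
    neg = (1 - y) * (np.exp(np.maximum(0.0, margin - d)) - 1)
    return float(np.mean(pos + neg))
```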

Smart Mirror for Facial Expression Recognition Based on Convolution Neural Network (컨볼루션 신경망 기반 표정인식 스마트 미러)

  • Choi, Sung Hwan;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.200-203
    • /
    • 2021
  • This paper introduces a smart mirror that recognizes a person's facial expression through image classification, one of several artificial intelligence technologies, and presents the result in the mirror. Five types of facial expression images are trained through artificial intelligence. When someone looks at the smart mirror, it recognizes the user's expression and shows the recognized result on the mirror. The fer2013 dataset provided by Kaggle, which contains faces of many people separated by facial expression, was used. For image classification, the network is trained using a convolutional neural network (CNN). The face is recognized and the result presented on the smart mirror's screen using an embedded board such as the Raspberry Pi 4.
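A compact Keras CNN of the kind commonly trained on fer2013 (48 × 48 grayscale faces) might look like the sketch below; the layer sizes are illustrative, since the abstract does not specify the paper's architecture.

```python
# Hedged sketch: a small CNN for facial-expression classification on
# fer2013-style inputs. Layer widths and dropout are illustrative choices.
import tensorflow as tf

def build_expression_cnn(num_classes: int = 5) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(48, 48, 1)),                      # grayscale faces
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```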

Effects of Spatio-temporal Features of Dynamic Hand Gestures on Learning Accuracy in 3D-CNN (3D-CNN에서 동적 손 제스처의 시공간적 특징이 학습 정확성에 미치는 영향)

  • Yeongjee Chung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.3
    • /
    • pp.145-151
    • /
    • 2023
  • 3D-CNN is one of the deep learning techniques for learning time-series data. Such three-dimensional learning can generate many parameters, requiring high-performance hardware and strongly affecting the learning rate. When learning dynamic hand gestures in the spatiotemporal domain, improving the efficiency of 3D-CNN learning requires finding the optimal conditions of the input video data by analyzing learning accuracy according to spatiotemporal changes in the input, without structural changes to the 3D-CNN model. First, the time ratio between dynamic hand-gesture actions is adjusted by setting the sampling interval of image frames in the gesture video data. Second, through 2D cross-correlation analysis between classes, the similarity between image frames of the input video data is measured and normalized to obtain an average inter-frame value, and learning accuracy is analyzed. Based on this analysis, this work proposes two methods for effectively selecting input video data for 3D-CNN deep learning of dynamic hand gestures. Experimental results showed that the sampling interval of image frames and the inter-class similarity of image frames can affect the accuracy of the learning model.
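The two manipulations the abstract describes, frame-interval sampling and inter-frame similarity via normalized 2D cross-correlation, can be sketched in a few lines of NumPy; the zero-lag, zero-mean normalization used here is an illustrative choice.

```python
# Hedged sketch: frame subsampling and a normalized cross-correlation
# similarity score between two grayscale frames.
import numpy as np

def sample_frames(frames: np.ndarray, interval: int) -> np.ndarray:
    """Keep every `interval`-th frame of a (T, H, W) grayscale clip."""
    return frames[::interval]

def frame_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation at zero lag, in [-1, 1]."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())
```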

Comparative Evaluation of 18F-FDG Brain PET/CT AI Images Obtained Using Generative Adversarial Network (생성적 적대 신경망(Generative Adversarial Network)을 이용하여 획득한 18F-FDG Brain PET/CT 인공지능 영상의 비교평가)

  • Kim, Jong-Wan;Kim, Jung-Yul;Lim, Han-sang;Kim, Jae-sam
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.24 no.1
    • /
    • pp.15-19
    • /
    • 2020
  • Purpose: The Generative Adversarial Network (GAN) is a deep learning technology that learns from real images and then generates realistic fake images. In this study, after acquiring artificial intelligence images through a GAN, we compared and evaluated them against full scan-time images to assess whether the technique is potentially useful. Materials and Methods: Data from 30 patients who underwent 18F-FDG Brain PET/CT at Severance Hospital were acquired in 15-minute list mode and reconstructed into 1-, 2-, 3-, 4-, 5-, and 15-minute images, respectively. Images from 25 of the 30 patients were used to train the GAN, and those from the remaining 5 patients were used as verification images to validate the learning model. The program was implemented using Python and the TensorFlow framework. After training with the Pix2Pix model, the learning model generated artificial intelligence images, which were evaluated against the real scan-time images using mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Results: The trained model was evaluated with the verification images. The 15-minute image generated from the 5-minute image showed a smaller MSE and higher PSNR and SSIM than the one generated from the 1-minute image acquired after the start of the scan. Conclusion: This study confirmed that AI imaging technology is applicable. If such technology is applied to nuclear medicine imaging in the future, images could be acquired with a short scan time, which can be expected to reduce artifacts caused by patient movement and increase the efficiency of the scanning room.
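The three metrics named in the abstract can be computed with scikit-image as sketched below; the authors' own implementation may differ in normalization and data range.

```python
# Hedged sketch: MSE, PSNR, and SSIM between a GAN-generated image and the
# full scan-time reference image, using scikit-image.
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def evaluate(generated: np.ndarray, reference: np.ndarray) -> dict:
    """Compare a generated image against the full scan-time reference."""
    rng = reference.max() - reference.min()
    return {
        "MSE":  mean_squared_error(reference, generated),
        "PSNR": peak_signal_noise_ratio(reference, generated, data_range=rng),
        "SSIM": structural_similarity(reference, generated, data_range=rng),
    }
```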

Research on Local and Global Infrared Image Pre-Processing Methods for Deep Learning Based Guided Weapon Target Detection

  • Jae-Yong Baek;Dae-Hyeon Park;Hyuk-Jin Shin;Yong-Sang Yoo;Deok-Woong Kim;Du-Hwan Hur;SeungHwan Bae;Jun-Ho Cheon;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.7
    • /
    • pp.41-51
    • /
    • 2024
  • In this paper, we explore enhancing target detection accuracy in guided weapons using deep learning object detection on infrared (IR) images. Because IR images are influenced by factors such as time and temperature, it is crucial to ensure a consistent representation of object features across environments when training the model. A simple way to address this is to emphasize the features of target objects and reduce noise within the infrared images through appropriate pre-processing techniques. However, previous studies have not sufficiently discussed pre-processing methods for training deep learning models on infrared images. In this paper, we investigate the impact of image pre-processing techniques on infrared image-based training for object detection. To achieve this, we analyze pre-processing results on infrared images that use global or local information from the video and the image. In addition, to confirm how the images converted by each pre-processing technique affect object detector training, we train the YOLOX target detector on images processed by the various pre-processing methods and analyze the results. In particular, the experiments using CLAHE (Contrast Limited Adaptive Histogram Equalization) show the highest detection accuracy, with a mean average precision (mAP) of 81.9%.
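CLAHE itself is a one-liner in OpenCV, as in the sketch below; the `clipLimit` and `tileGridSize` values are common defaults, not the paper's tuned settings, and the 8-bit cast is a simplification for IR frames that may natively be 16-bit.

```python
# Hedged sketch: CLAHE pre-processing for single-channel IR frames.
import cv2
import numpy as np

def clahe_preprocess(ir_frame: np.ndarray) -> np.ndarray:
    """Apply Contrast Limited Adaptive Histogram Equalization to an 8-bit frame."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(ir_frame.astype(np.uint8))
```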

Classification Method of Harmful Image Content Rates in Internet (인터넷에서의 유해 이미지 컨텐츠 등급 분류 기법)

  • Nam, Taek-Yong;Jeong, Chi-Yoon;Han, Chi-Moon
    • Journal of KIISE: Information Networking
    • /
    • v.32 no.3
    • /
    • pp.318-326
    • /
    • 2005
  • This paper presents an image feature extraction method and an image classification technique for grading harmful images flowing in from the Internet into content grades, harmless, sex-appealing, harmful (nude), and seriously harmful (adult), according to image characteristics. We suggest a skin-area detection technique to recognize whether an input image is harmful, and we propose an ROI detection algorithm that establishes a region of interest to reduce noise, extract the degree of harmfulness effectively, and define the features inside the ROI. This paper also suggests a multiple-SVM training method that creates an image classification model for the four classes defined above, together with a multiple-SVM classification algorithm that categorizes the harmfulness grade of input data using the resulting model. In particular, we suggest a skin-likelihood image composed of the shape information of the skin-area image and the color information of the skin-ratio image, and we propose an image feature vector, obtained by resizing the skin-likelihood image, for use as the feature in training. Finally, this paper presents a performance evaluation of the experimental results and demonstrates the suitability of grading images with the image feature classification algorithm.
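As a stand-in for the multiple-SVM scheme, the sketch below trains a multi-class SVM over placeholder feature vectors (in the paper these would be flattened, resized skin-likelihood maps); the features and labels here are randomly generated purely for illustration.

```python
# Hedged sketch: a four-grade SVM classifier over image feature vectors.
# The random features/labels are placeholders for the paper's skin-likelihood
# feature vectors; the kernel choice is an assumption.
import numpy as np
from sklearn.svm import SVC

GRADES = ["harmless", "sex-appealing", "harmful", "seriously-harmful"]

X_train = np.random.rand(200, 64)          # placeholder feature vectors
y_train = np.random.randint(0, 4, 200)     # placeholder grade labels

clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_train, y_train)
print(GRADES[int(clf.predict(np.random.rand(1, 64))[0])])
```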

A Study on the Loss Functions of GAN Models (GAN 모델에서 손실함수 분석)

  • Lee, Cho-Youn;Park, JiSu;Shon, Jin Gon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.942-945
    • /
    • 2019
  • Deep learning is increasingly used for image processing in computing, and research on deep learning models is being actively conducted. Among deep learning models, the representative image generation model is the GAN (Generative Adversarial Network). A GAN generates realistic images using a generator network and a discriminator network. The generated image must minimize its error with respect to the real image, and the function used for this is called the loss function. In GANs, the loss function has the problem that training to generate images is unstable, degrading image quality. Research on improved GANs is ongoing but has not fully solved this problem. This paper classifies the loss functions used in seven GAN models and analyzes their characteristics.
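For context, two of the loss-function families such a survey typically covers, the original non-saturating GAN loss and the LSGAN least-squares loss, are sketched below in NumPy; these are the textbook forms and not necessarily the seven models the paper analyzes.

```python
# Hedged sketch: two textbook GAN loss families.
# d_real / d_fake are the discriminator's outputs on real and generated
# samples (probabilities for the vanilla GAN, raw scores for LSGAN).
import numpy as np

def vanilla_gan_losses(d_real: np.ndarray, d_fake: np.ndarray):
    """Original GAN: binary cross-entropy discriminator loss and the
    non-saturating generator loss -log D(G(z))."""
    eps = 1e-8
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

def lsgan_losses(d_real: np.ndarray, d_fake: np.ndarray):
    """LSGAN: least-squares targets of 1 for real and 0 for fake."""
    d_loss = 0.5 * (np.mean((d_real - 1) ** 2) + np.mean(d_fake ** 2))
    g_loss = 0.5 * np.mean((d_fake - 1) ** 2)
    return d_loss, g_loss
```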