• Title/Summary/Keyword: Learned images


Analysis of Articles on Design.Aesthetics and Beauty Aspects in Domestic Men's Fashion (디자인.미학 및 뷰티 분야를 중심으로 본 국내 남성 패션 연구동향)

  • Shin, Myung-Jin;Nam, Yoon-Sook
    • Journal of the Korean Society of Costume
    • /
    • v.61 no.3
    • /
    • pp.63-70
    • /
    • 2011
  • The purpose of this study was to investigate trends in articles on men's fashion in Korea, as seen through clothing-related academic journals from 1990 to 2009. A total of 478 research papers on men's fashion, published in 18 clothing-specialized learned society journals that are KCI-listed journals or candidates thereof, were analyzed. The results were as follows: First, the number of papers on men's fashion published in the 2000s increased rapidly to 5.2 times that of the 1990s. Second, the research areas were ranked as follows: social psychology/marketing 34.1% (163), clothing construction 32.2% (154), design/aesthetics 19.7% (94), beauty 7.7% (37), history of clothing 4.8% (23), and textile science 1.5% (7). Third, studies on aesthetics in men's fashion showed a steady increase from 1995, with the number of papers in the 2000s reaching 12 times that of the 1990s. In the 2000s, subjects explored in the 1990s, such as the feminization of men's clothes, gender images, and men's suits, were deepened and expanded.

An Automatic Face Hiding System based on the Deep Learning Technology

  • Yoon, Hyeon-Dham;Ohm, Seong-Yong
    • International Journal of Advanced Culture Technology
    • /
    • v.7 no.4
    • /
    • pp.289-294
    • /
    • 2019
  • As social network service platforms grow and the one-person media market expands, people upload their own photos and videos through multiple open platforms. However, it can be illegal to upload digital content containing the faces of others to public sites without their permission. Therefore, many people spend much time and effort editing such content so that the faces of others are not exposed to the public. In this paper, we propose an automatic face hiding system called 'autoblur', which detects all unregistered faces and mosaics them automatically. The system has been implemented using the MIT-licensed open-source 'Face Recognition' library on GitHub, which is based on deep learning technology. In this system, two dozen face images of the user are taken from different angles to register his or her face. Once the face of the user is learned and registered, the system detects all other faces in a given photo or video and blurs them out. Our experiments show that it produces quick and correct results for the sample photos.
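The mosaic step the abstract describes can be sketched independently of the face detector. The following is a minimal numpy example, assuming the bounding box comes from a detector such as the `face_recognition` library (which returns boxes in (top, right, bottom, left) order); it is an illustration, not the autoblur system's actual code.

```python
import numpy as np

def mosaic_region(image, top, right, bottom, left, block=8):
    """Pixelate a bounding box in place by averaging block-sized tiles.

    image: HxWx3 uint8 array; (top, right, bottom, left) follows the
    box order used by face_recognition.face_locations.
    """
    face = image[top:bottom, left:right].astype(np.float32)
    h, w = face.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = face[y:y + block, x:x + block]
            tile[:] = tile.mean(axis=(0, 1))  # replace tile with its mean color
    image[top:bottom, left:right] = face.astype(np.uint8)
    return image

# Example: pixelate a 16x16 "face" region inside a 32x32 image
img = (np.arange(32 * 32 * 3) % 256).astype(np.uint8).reshape(32, 32, 3)
mosaic_region(img, 8, 24, 24, 8, block=8)
```

Each 8x8 tile inside the box becomes a single flat color, which is the visual effect of a mosaic filter.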

Design of Robust Face Recognition Pattern Classifier Using Interval Type-2 RBF Neural Networks Based on Census Transform Method (Interval Type-2 RBF 신경회로망 기반 CT 기법을 이용한 강인한 얼굴인식 패턴 분류기 설계)

  • Jin, Yong-Tak;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.5
    • /
    • pp.755-765
    • /
    • 2015
  • This paper is concerned with an Interval Type-2 Radial Basis Function Neural Network classifier realized with the aid of the Census Transform (CT) and (2D)2LDA methods. CT is adopted to improve face recognition performance under a variety of illumination variations. (2D)2LDA is applied to transform the high-dimensional image into a low-dimensional one, which is used as the input to the proposed pattern classifier. Receptive fields in the hidden layer are formed as interval type-2 membership functions. We use the coefficients of a linear polynomial function as the connection weights of the proposed network, and these coefficients and their ensuing spreads are learned through the Conjugate Gradient Method (CGM). Moreover, parameters such as the fuzzification coefficient and the number of input variables are optimized by the Artificial Bee Colony (ABC) algorithm. To evaluate the performance of the proposed classifier, the Yale B dataset, which consists of images obtained under diverse illumination conditions, is used. We show that the proposed model achieves superior performance and more robust characteristics than those reported in previous studies.
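The Census Transform preprocessing the abstract mentions replaces each pixel with a bit pattern of comparisons against its neighbors, which is why it is robust to illumination changes. A minimal sketch of the standard 3x3 census transform (not the authors' exact implementation):

```python
import numpy as np

def census_transform(img):
    """3x3 Census Transform: encode each interior pixel as an 8-bit code,
    one bit per neighbor, set where neighbor >= center."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbor >= center).astype(np.uint8) << bit
    return out
```

Because the code depends only on the ordering of pixel intensities, adding a constant brightness offset to the whole image leaves the census codes unchanged, which is the illumination-invariance property the paper exploits.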

An Intelligent Fire Learning and Detection System Using Convolutional Neural Networks (컨볼루션 신경망을 이용한 지능형 화재 학습 및 탐지 시스템)

  • Cheoi, Kyungjoo;Jeon, Minseong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.607-614
    • /
    • 2016
  • In this paper, we propose an intelligent fire learning and detection system using convolutional neural networks (CNN). Through the convolutional layers of the CNN, various features of flame and smoke images are automatically extracted, and these extracted features are learned to classify images as flame, smoke, or no fire. To detect fire in an image, candidate fire regions are first extracted from the image, and the extracted candidate regions are passed through the CNN. Experimental results on various images show that our system performs better than previous work.
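The abstract does not specify how candidate fire regions are extracted. A common heuristic in the fire detection literature, shown here purely as an illustrative assumption rather than this paper's rule, is a color test in RGB space where flame pixels tend to satisfy R > G > B with a high red channel:

```python
import numpy as np

def flame_candidate_mask(rgb, r_thresh=180):
    """Boolean mask of candidate flame pixels using the common R > G > B
    rule of thumb (an assumed heuristic, not the paper's exact method)."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    return (r > r_thresh) & (r > g) & (g > b)

# One flame-like pixel and one sky-like pixel
img = np.array([[[220, 120, 40], [40, 80, 220]]], dtype=np.uint8)
mask = flame_candidate_mask(img)
```

Connected regions of the resulting mask would then be cropped and passed to the CNN for the flame/smoke/no-fire decision.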

Analyze weeds classification with visual explanation based on Convolutional Neural Networks

  • Vo, Hoang-Trong;Yu, Gwang-Hyun;Nguyen, Huy-Toan;Lee, Ju-Hwan;Dang, Thanh-Vu;Kim, Jin-Young
    • Smart Media Journal
    • /
    • v.8 no.3
    • /
    • pp.31-40
    • /
    • 2019
  • To understand how a Convolutional Neural Network (CNN) model captures the features of a pattern to determine which class it belongs to, in this paper we use Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize and analyze how well a CNN model behaves on the CNU weeds dataset. We apply this technique to a ResNet model and figure out which features the model captures to determine a specific class, what makes the model produce a correct or wrong classification, and how mislabeled images can negatively affect a CNN model during the training process. In the experiments, Grad-CAM highlights the important regions of weeds, depending on the patterns learned by ResNet, such as the lobe and limb of Bidens frondosa (미국가막사리), or the entire leaf surface of Ambrosia trifida (단풍잎돼지풀). Besides, Grad-CAM shows that a CNN model can localize the object even though it is trained only for the classification problem.
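The core of Grad-CAM is simple: pool the class-score gradients over each feature map to get channel weights, take the weighted sum of the maps, and pass it through a ReLU. A minimal numpy sketch of that computation, assuming the activations and gradients for one convolutional layer have already been obtained from the network:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one convolutional layer.

    feature_maps: (K, H, W) activations A^k of the target layer.
    gradients:    (K, H, W) d(class score)/dA^k from backpropagation.
    """
    # alpha_k: global-average-pool the gradients over the spatial axes
    alphas = gradients.mean(axis=(1, 2))
    # weighted sum of feature maps, then ReLU to keep positive evidence
    cam = np.maximum((alphas[:, None, None] * feature_maps).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()  # normalize to [0, 1] for visualization
    return cam

# Toy example: channel 0 supports the class (positive gradient) and peaks
# at position (1, 2); channel 1 opposes it (negative gradient).
fmaps = np.zeros((2, 4, 4)); fmaps[0, 1, 2] = 5.0; fmaps[1, :, :] = 1.0
grads = np.zeros((2, 4, 4)); grads[0] = 1.0; grads[1] = -1.0
cam = grad_cam(fmaps, grads)
```

The heatmap peaks exactly where the class-supporting channel is active, which is the localization effect the abstract notes even for classification-only training.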

Measurement of Push-up Accuracy Using Image Learning by CNN (CNN 기법의 이미지 학습을 통한 팔굽혀펴기 자세 정확도 측정)

  • Lee, Junseok;Oh, Donghan;Ahn, Kyung-Il
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.6
    • /
    • pp.805-814
    • /
    • 2021
  • Push-ups are a body exercise that can easily be measured anytime, anywhere. As one of the most widely used test tools for evaluating physical strength, they are broadly used in fields that require estimating physical ability, such as the military, police, and firefighting. However, social distancing is currently being enforced, and issues of fairness have steadily been raised due to subtle differences between measurers. Accordingly, in this paper, each individual's correct posture was photographed and learned by a high-performance computer, and the result was derived by comparing it with cases where the individual performed an incorrect posture. If the proposed method is introduced into physical fitness evaluation, the individual takes the correct posture, the photographed images are learned, and the posture is then measured against several images taken during a given time. Through this, measurement can be more objective, since it can be performed even under the present circumstances and against the individual's own correct posture.

Image analysis technology with deep learning for monitoring the tidal flat ecosystem -Focused on monitoring the Ocypode stimpsoni Ortmann, 1897 in the Sindu-ri tidal flat - (갯벌 생태계 모니터링을 위한 딥러닝 기반의 영상 분석 기술 연구 - 신두리 갯벌 달랑게 모니터링을 중심으로 -)

  • Kim, Dong-Woo;Lee, Sang-Hyuk;Yu, Jae-Jin;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.24 no.6
    • /
    • pp.89-96
    • /
    • 2021
  • In this study, a deep-learning image analysis model was established and validated for AI-based monitoring of the tidal flat ecosystem, focusing on the marine protected species Ocypode stimpsoni and its habitat. The data for the study were collected using an unmanned aerial vehicle, and the U-Net model was applied as the deep learning model. The accuracy of the trained model was about 0.76 for Ocypode stimpsoni and about 0.8 for their burrows, the latter being higher. By feeding orthomosaic images of the entire study area into the trained model to analyze the distribution of crabs and burrows, it was confirmed that 1,943 Ocypode stimpsoni and 2,807 burrows were distributed in the study area. Through this study, the feasibility of using deep-learning image analysis for monitoring the tidal flat ecosystem was confirmed, and it is expected that the approach can be applied to tidal flat ecosystem monitoring by expanding the monitoring sites and target species in the future.
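Turning a U-Net segmentation mask into counts like "1,943 crabs" requires grouping mask pixels into connected blobs and counting them. A minimal sketch of 4-connected component counting on a boolean mask, as an assumed post-processing step rather than the authors' pipeline:

```python
import numpy as np

def count_components(mask):
    """Count 4-connected components in a boolean segmentation mask,
    e.g. one blob per detected crab or burrow."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                count += 1  # new blob found; flood-fill it
                stack = [(i, j)]
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count
```

In practice a library routine such as `scipy.ndimage.label` would do the same job on the full orthomosaic far more efficiently.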

An Analysis on the Properties of Features against Various Distortions in Deep Neural Networks

  • Kang, Jung Heum;Jeong, Hye Won;Choi, Chang Kyun;Ali, Muhammad Salman;Bae, Sung-Ho;Kim, Hui Yong
    • Journal of Broadcast Engineering
    • /
    • v.26 no.7
    • /
    • pp.868-876
    • /
    • 2021
  • Deep neural network models achieve remarkable performance in object detection and instance segmentation. To train these models, features are first extracted from the input image using a backbone network, and the extracted features can be reused by various tasks. Research has been actively conducted on serving various tasks using these learned features, and standardization discussions about encoding, decoding, and transmission methods are proceeding actively. In this scenario, it is necessary to analyze the response characteristics of features against various distortions that may occur in the data transmission or compression process. In this paper, experiments were conducted in which various distortions were injected into the features of an object recognition task, and the mAP (mean Average Precision) between the network's predictions and the ground truth was analyzed as the intensity of each distortion increased. The experiments show that features are more robust to distortion than images, which suggests that transmitting features rather than images can reduce the loss of information caused by distortions during data transmission and compression.
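The distortion sweep described here can be mimicked on a feature tensor by injecting noise at increasing intensities and measuring the resulting deviation. A small numpy sketch under the assumption that the distortion is zero-mean Gaussian noise scaled to the tensor's own standard deviation (the paper's actual distortion types and mAP evaluation are not reproduced here):

```python
import numpy as np

def add_noise(x, sigma, rng):
    """Inject zero-mean Gaussian noise with standard deviation
    sigma * std(x), mimicking one step of a distortion-intensity sweep."""
    return x + rng.normal(0.0, sigma * x.std(), size=x.shape)

rng = np.random.default_rng(0)
features = rng.normal(size=(64, 14, 14))  # stand-in backbone feature tensor
# Mean absolute deviation grows monotonically with distortion intensity
errs = [np.abs(add_noise(features, s, rng) - features).mean()
        for s in (0.0, 0.1, 0.5)]
```

In the paper's setting, each distorted tensor would be fed into the task head and the mAP recorded per intensity level instead of a raw deviation.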

Interpolation based Single-path Sub-pixel Convolution for Super-Resolution Multi-Scale Networks

  • Alao, Honnang;Kim, Jin-Sung;Kim, Tae Sung;Oh, Juhyen;Lee, Kyujoong
    • Journal of Multimedia Information System
    • /
    • v.8 no.4
    • /
    • pp.203-210
    • /
    • 2021
  • Deep learning convolutional neural networks (CNN) have successfully been applied to image super-resolution (SR). Despite their strong performance, SR techniques tend to focus on a single upscale factor when training a particular model. Algorithms for single-model multi-scale networks can easily be constructed if images are upscaled prior to input, but sub-pixel convolution upsampling works differently for each scale factor. Recent SR methods employ multi-scale and multi-path learning as a solution. However, this causes unshared parameters and an unbalanced parameter distribution across scale factors. We present a multi-scale single-path upsample module as a solution, exploiting the advantages of sub-pixel convolution and interpolation algorithms. The proposed model employs sub-pixel convolution for the highest of the learned upscale factors, and then utilizes one-dimensional interpolation, compressing the learned features along the channel axis to match the desired output image size. Experiments are performed on the single-path upsample module and compared to the multi-path upsample module. Based on the experimental results, the proposed algorithm reduces the upsample module's parameters by 24% and achieves comparable or slightly better performance than the previous algorithm.
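The sub-pixel convolution the abstract builds on is, at its core, a deterministic rearrangement of channels into space (depth-to-space, or "pixel shuffle"): a (C·r², H, W) tensor becomes a (C, H·r, W·r) tensor. A minimal numpy sketch of that rearrangement, matching the layout used by common frameworks such as PyTorch's `PixelShuffle`:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel convolution rearrangement (depth-to-space).

    x: (C*r*r, H, W) feature tensor -> (C, H*r, W*r) output tensor.
    """
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)    # split channels into an r x r sub-grid
    x = x.transpose(0, 3, 1, 4, 2)  # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# Four 1x1 channels become one 2x2 output pixel block
x = np.arange(4).reshape(4, 1, 1)
y = pixel_shuffle(x, 2)
```

The paper's single-path module applies this only at the largest learned scale factor and then resizes down via interpolation along the channel axis, avoiding one parameterized path per scale.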

A Study on the Optimization of Convolution Operation Speed through FFT Algorithm (FFT 적용을 통한 Convolution 연산속도 향상에 관한 연구)

  • Lim, Su-Chang;Kim, Jong-Chan
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.11
    • /
    • pp.1552-1559
    • /
    • 2021
  • Convolutional neural networks (CNNs) show notable performance in image processing and are used as representative core models. CNNs extract and learn features from large training datasets. In general, a CNN has a structure in which convolution layers and fully connected layers are stacked, and the convolution layer is its core. The size of the kernels used for feature extraction and their number, which determines the depth of the feature map, determine the number of learnable weight parameters of the CNN. These parameters are the main causes of the increased computational complexity and memory usage of the entire neural network, and the most computationally expensive components in CNNs are the fully connected and spatial convolution computations. In this paper, we propose a Fourier Convolution Neural Network that performs the convolution layer's operation in the Fourier domain, reducing the amount of computation by applying the fast Fourier transform (FFT). On the MNIST dataset, accuracy was similar to that of a general CNN, while the operation speed was 7.2% faster. In experiments using 1024x1024 images and kernels of various sizes, an average speedup of 19% was achieved.
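The speedup rests on the convolution theorem: convolution in the spatial domain equals pointwise multiplication in the Fourier domain. A minimal numpy sketch of a same-size circular 2-D convolution via FFT (an illustration of the principle, not the paper's network code):

```python
import numpy as np

def conv2d_fft(image, kernel):
    """Same-size circular 2-D convolution via the convolution theorem:
    conv(x, k) = IFFT(FFT(x) * FFT(k))."""
    h, w = image.shape
    kh, kw = kernel.shape
    # zero-pad the kernel to the image size, then roll so its center
    # sits at the origin and the output is not spatially shifted
    k = np.zeros((h, w))
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k)))

img = np.arange(16.0).reshape(4, 4)
ident = np.zeros((3, 3)); ident[1, 1] = 1.0  # identity kernel
out = conv2d_fft(img, ident)
```

Direct convolution costs O(N²K²) per channel while the FFT route costs O(N² log N), which is why the gain in the paper grows with the 1024x1024 images and larger kernels.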