• Title/Summary/Keyword: CNN (Convolutional Neural Networks)

Proposal of a Step-by-Step Optimized Campus Power Forecast Model using CNN-LSTM Deep Learning (CNN-LSTM 딥러닝 기반 캠퍼스 전력 예측 모델 최적화 단계 제시)

  • Kim, Yein; Lee, Seeun; Kwon, Youngsung
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.8-15 / 2020
  • Deep-learning-based forecasting does not produce consistent results across datasets with different characteristics, even when the same forecasting model and parameters are used. For example, a forecasting model X optimized on dataset A would not produce optimal results on another dataset B. The forecasting model therefore needs to be optimized for the characteristics of the dataset to increase its accuracy. This paper proposes novel optimization steps covering outlier removal, dataset classification, and a CNN-LSTM-based hyperparameter tuning process to forecast the daily power usage of a university campus from hourly-interval data. The proposed model achieves high forecasting accuracy, with a MAPE of about 2%, using a single power input variable, and can be used in an energy management system (EMS) to suggest improved strategies to users and thereby improve power efficiency.
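
The paper does not publish code, so the following is only a minimal Keras sketch of the general CNN-LSTM forecaster shape the abstract describes (hourly power windows, single input variable); the window length, layer sizes, and training settings here are assumptions, not the authors' tuned hyperparameters.

```python
# Illustrative CNN-LSTM sketch for hourly power forecasting (assumed shapes/sizes).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(window=24, n_features=1):
    """1D convolution extracts local patterns; the LSTM models the temporal sequence."""
    model = models.Sequential([
        layers.Input(shape=(window, n_features)),
        layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),
        layers.Dense(1),  # next-step power usage
    ])
    model.compile(optimizer="adam", loss="mae")
    return model

# Toy usage: 24-hour windows of a single power variable predicting the next hour.
X = np.random.rand(100, 24, 1).astype("float32")
y = np.random.rand(100, 1).astype("float32")
model = build_cnn_lstm()
model.fit(X, y, epochs=2, verbose=0)
```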

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.1-19 / 2018
  • Large amounts of data are now available for the research and business sectors to extract knowledge from. These data can take the form of unstructured data such as audio, text, and images, and can be analyzed with deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through to the outputs. Its layer structure is well suited to image classification, as it comprises convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully-connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of professional models wearing the apparel. Such images may not be effective for training when one wants to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained on far more variable data and improves adaptation to diverse query images. To achieve both convergence and generalization, we apply transfer learning to our network. As transfer learning for CNNs consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train the architecture on the large-scale ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network on our own runway image dataset. As no publicly available runway dataset existed, we collected 2,426 images of 32 major fashion brands from Google Image Search, including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and the proposed model achieves an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest training the model with images capturing all possible postures, denoted as mobility, by using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow Slim, we reduce the time spent training the classifier to about 6 minutes per experiment. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. Specifically, runway query images can support a mobile application service during fashion week to facilitate brand search, street-style query images can be classified and labeled by brand or style during fashion editorial tasks, and website query images can be processed by e-commerce services that provide item information or recommend similar items.
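
The paper uses GoogLeNet with TensorFlow-Slim checkpoints; as a rough illustration of the two-stage pre-train/fine-tune idea only, the sketch below substitutes a Keras InceptionV3 backbone pretrained on ImageNet, with a hypothetical 32-brand classification head. It is not the authors' implementation.

```python
# Transfer-learning sketch: ImageNet-pretrained backbone, then fine-tuning (assumed setup).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # stage 1: reuse ImageNet features as-is

head = models.Sequential([
    base,
    layers.Dense(32, activation="softmax"),  # 32 fashion brands (hypothetical head)
])
head.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
             loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Stage 2: unfreeze the backbone and fine-tune on the runway dataset
# with a much smaller learning rate.
base.trainable = True
head.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
             loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```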

Research on the Utilization of Recurrent Neural Networks for Automatic Generation of Korean Definitional Sentences of Technical Terms (기술 용어에 대한 한국어 정의 문장 자동 생성을 위한 순환 신경망 모델 활용 연구)

  • Choi, Garam; Kim, Han-Gook; Kim, Kwang-Hoon; Kim, You-eil; Choi, Sung-Pil
    • Journal of the Korean Society for Library and Information Science / v.51 no.4 / pp.99-120 / 2017
  • In order to develop a semiautomatic support system that allows researchers to efficiently analyze technical trends in the ever-growing industry and market, this paper introduces a couple of Korean sentence generation models that can automatically generate definitional statements and descriptions of technical terms and concepts. The proposed models are based on a deep learning model called LSTM (Long Short-Term Memory), which is capable of effectively labeling textual sequences by taking into account the contextual relations of each item in the sequence. Our models take technical terms as inputs and can generate a broad range of heterogeneous textual descriptions that explain the concept of the terms. In experiments using large-scale training collections, we confirmed that more accurate and reasonable sentences are generated by the CHAR-CNN-LSTM model, a word-based LSTM that exploits character embeddings produced by convolutional neural networks (CNN). The results of this study can serve as a basis for developing an extended model that generates a set of sentences covering the same subject and, furthermore, for implementing an artificial intelligence model that automatically creates technical literature.
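
As a rough illustration of the CHAR-CNN-LSTM idea (a character-level CNN builds each word representation, which an LSTM then consumes for sequence generation), here is a minimal Keras sketch; the vocabulary sizes, word length, and layer widths are assumptions and the authors' actual model may differ in detail.

```python
# Character-aware word encoder feeding an LSTM language model (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

MAX_WORD_LEN, N_CHARS, N_WORDS, SEQ_LEN = 12, 100, 5000, 20

char_ids = layers.Input(shape=(SEQ_LEN, MAX_WORD_LEN), dtype="int32")
x = layers.Embedding(N_CHARS, 16)(char_ids)                   # (seq, word_len, 16)
x = layers.TimeDistributed(layers.Conv1D(64, 3, activation="relu"))(x)
x = layers.TimeDistributed(layers.GlobalMaxPooling1D())(x)    # one 64-d vector per word
x = layers.LSTM(128, return_sequences=True)(x)                # sentence-level context
next_word = layers.Dense(N_WORDS, activation="softmax")(x)    # next-token distribution

model = models.Model(char_ids, next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```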

Comparative analysis of Machine-Learning Based Models for Metal Surface Defect Detection (머신러닝 기반 금속외관 결함 검출 비교 분석)

  • Lee, Se-Hun; Kang, Seong-Hwan; Shin, Yo-Seob; Choi, Oh-Kyu; Kim, Sijong; Kang, Jae-Mo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.834-841 / 2022
  • Recently, applying artificial intelligence technologies to various fields of production has drawn an upsurge of research interest, driven by the growth of smart factories and AI technologies. A great deal of effort is being made to introduce artificial intelligence algorithms into the defect detection task. In particular, detection of defects on metal surfaces attracts more research interest than on other materials (wood, plastics, fibers, etc.). In this paper, we compare and analyze the speed and performance of defect classification using machine learning techniques (Support Vector Machine, Softmax Regression, Decision Tree) combined with dimensionality reduction algorithms (Principal Component Analysis, autoencoders), as well as two convolutional neural networks (the proposed method and ResNet). To validate and compare the performance and speed of the algorithms, we adopt two datasets, (i) a public dataset and (ii) an actual dataset, and determine the most efficient algorithm on the basis of the results.
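
To make the "dimensionality reduction + classical classifier" comparison concrete, here is a minimal scikit-learn sketch; the data, patch size, number of defect classes, and hyperparameters are all illustrative assumptions, not the paper's datasets or settings.

```python
# Sketch: PCA for dimensionality reduction, then SVM / softmax / decision tree classifiers.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression  # softmax (multinomial logistic) regression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pretend each 32x32 grayscale surface patch is flattened into a 1024-d vector.
X = np.random.rand(600, 1024)
y = np.random.randint(0, 4, size=600)   # e.g. 4 hypothetical defect classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pca = PCA(n_components=50).fit(X_tr)            # reduce 1024 -> 50 dimensions
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

for name, clf in [("SVM", SVC()),
                  ("Softmax", LogisticRegression(max_iter=1000)),
                  ("DecisionTree", DecisionTreeClassifier())]:
    clf.fit(X_tr_p, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te_p)))
```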

Evaluation of Transfer Learning in Gastroscopy Image Classification using Convolutional Neural Network (합성곱 신경망을 활용한 위내시경 이미지 분류에서 전이학습의 효용성 평가)

  • Park, Sung Jin; Kim, Young Jae; Park, Dong Kyun; Chung, Jun Won; Kim, Kwang Gi
    • Journal of Biomedical Engineering Research / v.39 no.5 / pp.213-219 / 2018
  • Stomach cancer is the most commonly diagnosed cancer in Korea. When gastric cancer is detected early, the 5-year survival rate is as high as 90%. Gastroscopy is a very useful method for early diagnosis, but its false negative rate for gastric cancer has been reported as 4.6-25.8% due to the subjective judgment of the physician. Recently, image classification performance in the image recognition field has been greatly advanced by convolutional neural networks, which perform well when diverse and sufficient data are available. However, medical data are not easy to access, and it is difficult to gather enough high-quality data with expert annotations. This paper therefore evaluates the efficacy of transfer learning for gastroscopy image classification and diagnosis. We obtained 787 gastroscopy images at Gil Medical Center, Gachon University: 200 normal and 587 abnormal images. The images were resized and normalized. For the ResNet50 structure, the classification accuracy before and after applying transfer learning improved from 0.9 to 0.947, and the AUC improved from 0.94 to 0.98. For the InceptionV3 structure, the classification accuracy improved from 0.862 to 0.924, and the AUC improved from 0.89 to 0.97. For the VGG16 structure, the classification accuracy improved from 0.87 to 0.938, and the AUC improved from 0.89 to 0.98. The difference in CNN performance before and after transfer learning was statistically significant by t-test (p < 0.05). As a result, transfer learning is judged to be an effective method for medical imaging tasks in which it is difficult to collect sufficient good-quality data.
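
The comparison above contrasts each network trained from scratch with the same network initialized from ImageNet weights. The sketch below shows that setup for ResNet50 with a binary normal/abnormal head; the input size, optimizer, and head are assumptions, not the authors' exact configuration.

```python
# Sketch: ResNet50 from scratch vs. ImageNet-initialized (transfer learning) for binary classification.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_resnet50(pretrained: bool):
    weights = "imagenet" if pretrained else None
    base = tf.keras.applications.ResNet50(weights=weights, include_top=False,
                                          input_shape=(224, 224, 3), pooling="avg")
    model = models.Sequential([base, layers.Dense(1, activation="sigmoid")])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc"), "accuracy"])
    return model

scratch_model = build_resnet50(pretrained=False)   # baseline, random initialization
transfer_model = build_resnet50(pretrained=True)   # transfer learning from ImageNet
```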

Prediction Model of User Physical Activity using Data Characteristics-based Long Short-term Memory Recurrent Neural Networks

  • Kim, Joo-Chang; Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.2060-2077 / 2019
  • Recently, mobile healthcare services have attracted significant attention because of the development and growing supply of diverse wearable devices. Smartwatches and health bands are the most common types of mobile-based wearable devices, and their market size is increasing considerably. However, simple value comparisons based on accumulated data have revealed certain problems, such as the standardized nature of health management and the lack of personalized health management service models. The convergence of information technology (IT) and biotechnology (BT) has shifted the medical paradigm from continuous health management and disease prevention toward systems that can provide medical services regardless of the user's location. Moreover, the IT-BT convergence has necessitated the development of lifestyle improvement models and services that utilize big data analysis and machine learning to provide mobile healthcare-based personal health management and disease prevention information. Users' health data, which are individual-specific and change over time, are collected by different means according to the users' lifestyles and surrounding circumstances. In this paper, we propose a prediction model of user physical activity that uses data characteristics-based long short-term memory (DC-LSTM) recurrent neural networks (RNNs). To provide personalized services, the characteristics and surrounding circumstances of data collectable from mobile host devices were considered in selecting the model's variables. The data characteristics considered were ease of collection, which represents whether or not a variable is collectable, and frequency of occurrence, which represents whether changes in input values are significant with respect to activity. The variables selected for providing personalized services were activity, weather, temperature, mean daily temperature, humidity, UV, fine dust, asthma and lung disease probability index, skin disease probability index, cadence, travel distance, mean heart rate, and sleep hours. The selected variables were classified according to these data characteristics. To predict activity, an LSTM RNN was built that uses the classified variables as input data and learns the dynamic characteristics of time-series data. LSTM RNNs resolve the vanishing gradient problem that occurs in conventional RNNs. The LSTMs are divided into three types according to data characteristics and connected to form the network. The constructed network is trained on the data and predicts user activity. To evaluate the proposed model, the root mean square error (RMSE) was used to compare it against an autoregressive integrated moving average (ARIMA) model, a convolutional neural network (CNN), and an RNN. The results show that the proposed DC-LSTM RNN method yields an excellent mean RMSE value of 0.616. Going beyond existing standardized activity prediction services, the proposed method predicts meaningful activity that reflects the surrounding circumstances and the user's status, and it can be used to provide personalized healthcare based on the data collectable from mobile host devices.
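
A minimal sketch of the "group variables by data characteristics, feed each group to its own LSTM, then combine" idea follows; the grouping, window length, and layer sizes are my assumptions for illustration and do not reproduce the paper's DC-LSTM design, while RMSE is shown because it is the evaluation metric named in the abstract.

```python
# Sketch: characteristic-grouped LSTM branches merged for activity prediction (assumed grouping).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

T = 24  # length of the time-series window (assumption)
env = layers.Input(shape=(T, 6))     # e.g. weather, temperature, humidity, UV, fine dust, ...
body = layers.Input(shape=(T, 4))    # e.g. cadence, travel distance, heart rate, sleep hours
health = layers.Input(shape=(T, 2))  # e.g. asthma / skin disease probability indices

merged = layers.Concatenate()([layers.LSTM(32)(env),
                               layers.LSTM(32)(body),
                               layers.LSTM(16)(health)])
activity = layers.Dense(1)(merged)                 # predicted activity value
model = models.Model([env, body, health], activity)
model.compile(optimizer="adam", loss="mse")

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square error, the metric used for evaluation in the paper."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```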

Investigation of image preprocessing and face covering influences on motion recognition by a 2D human pose estimation algorithm (모션 인식을 위한 2D 자세 추정 알고리듬의 이미지 전처리 및 얼굴 가림에 대한 영향도 분석)

  • Noh, Eunsol; Yi, Sarang; Hong, Seokmoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.7 / pp.285-291 / 2020
  • In manufacturing, humans are being replaced with robots, but expert skills remain difficult to convert to data, making them difficult to apply to industrial robots. One approach is visual motion recognition, but physical features may be judged differently depending on the image data. This study aimed to improve the accuracy of vision-based methods for estimating human posture. Three OpenPose vision models were applied: MPII, COCO, and COCO+foot. To identify the effects of face-covering accessories and image preprocessing on the Convolutional Neural Network (CNN) structure, the presence/absence of accessories, image size, and filtering were set as the parameters affecting posture identification. For each parameter, image data were applied to the three models, and the errors between the actual and predicted values, as well as the percentage of correct keypoints (PCK), were calculated. The COCO+foot model showed the lowest sensitivity to all three parameters. A reduction in image size of up to 50% (from 3024×4032 to 1512×2016 pixels) was considered acceptable. Emboss filtering, in combination with MPII, provided the best results (error reduced to below 60 pixels).
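
The preprocessing steps studied (resizing and filtering before pose estimation) are straightforward to express in OpenCV; the sketch below is illustrative only, with a hypothetical input file and one common emboss kernel, not the exact filter settings from the paper.

```python
# Sketch: half-resolution resize and emboss filtering before 2D pose estimation.
import cv2
import numpy as np

img = cv2.imread("worker.jpg")                      # hypothetical input frame
half = cv2.resize(img, None, fx=0.5, fy=0.5)        # e.g. 3024x4032 -> 1512x2016

emboss_kernel = np.array([[-2, -1, 0],
                          [-1,  1, 1],
                          [ 0,  1, 2]], dtype=np.float32)
embossed = cv2.filter2D(half, -1, emboss_kernel)    # emboss filtering

cv2.imwrite("worker_preprocessed.jpg", embossed)    # image then passed to OpenPose
```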

Fake News Detection Using CNN-based Sentiment Change Patterns (CNN 기반 감성 변화 패턴을 이용한 가짜뉴스 탐지)

  • Tae Won Lee; Ji Su Park; Jin Gon Shon
    • KIPS Transactions on Software and Data Engineering / v.12 no.4 / pp.179-188 / 2023
  • Recently, fake news disguised as legitimate news content has appeared whenever important events occur, causing social confusion. Accordingly, artificial intelligence technologies are being studied for fake news detection. Fake news detection approaches, such as automatically recognizing and blocking fake news through natural language processing or detecting social media influencer accounts that spread false information in combination with network causal inference, can be implemented with deep learning. However, fake news detection is regarded as one of the more difficult problems in natural language processing. Because of the variety of forms and expressions of fake news, feature extraction is difficult, and there are further limitations, such as a single feature having different meanings depending on the category to which the news belongs. In this paper, sentiment change patterns are presented as an additional criterion for detecting fake news. We propose a model with improved performance that applies a convolutional neural network to a fake news dataset for content-based analysis and additionally analyzes sentiment change patterns. Sentiment polarity is calculated for the sentences constituting the news, and an order-dependent result is obtained by applying long short-term memory (LSTM). This is defined as the sentiment change pattern and is combined with the content characteristics of the news as an independent variable in the proposed fake news detection model. We train the proposed model and a comparison model with deep learning and conduct experiments on a fake news dataset to confirm that sentiment change patterns can improve fake news detection performance.
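
A minimal sketch of the two-branch idea (a CNN over the article tokens for content features, an LSTM over per-sentence sentiment polarity scores for the sentiment change pattern, merged for the fake/real decision) follows; vocabulary size, sequence lengths, and layer widths are assumptions, not the paper's configuration.

```python
# Sketch: CNN content branch + LSTM sentiment-change branch for fake news detection.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, DOC_LEN, MAX_SENTS = 20000, 400, 30   # assumed sizes

tokens = layers.Input(shape=(DOC_LEN,), dtype="int32")
x = layers.Embedding(VOCAB, 128)(tokens)
x = layers.Conv1D(128, 5, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)                       # content features

sentiments = layers.Input(shape=(MAX_SENTS, 1))          # polarity score per sentence, in order
s = layers.LSTM(32)(sentiments)                          # order-dependent sentiment-change pattern

out = layers.Dense(1, activation="sigmoid")(layers.Concatenate()([x, s]))
model = models.Model([tokens, sentiments], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```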

A Study about Learning Graph Representation on Farmhouse Apple Quality Images with Graph Transformer (그래프 트랜스포머 기반 농가 사과 품질 이미지의 그래프 표현 학습 연구)

  • Ji Hun Bae; Ju Hwan Lee; Gwang Hyun Yu; Gyeong Ju Kwon; Jin Young Kim
    • Smart Media Journal / v.12 no.1 / pp.9-16 / 2023
  • Recently, convolutional neural network (CNN) based systems have been developed to overcome the limitations of human resources in farmhouse apple quality classification. However, since CNNs accept only images of the same size, preprocessing such as sampling may be required, and in the case of oversampling, information in the original image is lost through quality degradation and blurring. In this paper, to minimize this problem, we generate an image-patch-based graph from the original image and propose a random-walk-based positional encoding method for use with a graph transformer model. The method learns position embedding information for the patches, which otherwise lack positional information, based on the random walk algorithm, and finds an optimal graph structure by aggregating useful node information through the self-attention mechanism of the graph transformer. It is therefore robust and performs well even on new graph structures with random node order or arbitrary structures depending on the location of the object in the image. In experiments on five apple quality datasets, the accuracy was 1.3% to 4.7% higher than that of other GNN models, and the number of parameters was 3.59M, about 15% of the 23.52M of the ResNet18 model. The reduced computation therefore yields fast inference and demonstrates the effectiveness of the method.
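
As an illustration of random-walk positional encoding for graph nodes, the sketch below computes the k-step return probabilities of a row-normalized random walk on the patch graph, a common formulation; the paper's exact encoding may differ, so treat this as an assumed reference implementation.

```python
# Sketch: random-walk positional encoding (k-step return probabilities) for patch-graph nodes.
import numpy as np

def random_walk_pe(adj: np.ndarray, k: int = 8) -> np.ndarray:
    """adj: (n, n) adjacency matrix of the image-patch graph; returns (n, k) features."""
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.clip(deg, 1e-9, None)      # row-normalized transition matrix D^-1 A
    pe, Pk = [], np.eye(adj.shape[0])
    for _ in range(k):
        Pk = Pk @ P
        pe.append(np.diag(Pk))              # probability of returning to each node after each step
    return np.stack(pe, axis=1)

# Toy 4-node patch graph (cycle)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(random_walk_pe(A, k=4).shape)         # (4, 4): one k-dimensional encoding per node
```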

Human Face Tracking and Modeling using Active Appearance Model with Motion Estimation

  • Tran, Hong Tai; Na, In Seop; Kim, Young Chul; Kim, Soo Hyung
    • Smart Media Journal / v.6 no.3 / pp.49-56 / 2017
  • Images and videos that include human faces contain a great deal of information. Therefore, accurately extracting the human face is an important problem in computer vision. However, in real life, human faces have various shapes and textures. To adapt to these variations, a model-based approach is one of the best options, as unknown data can be represented by the model that has been built. However, the model-based approach has weaknesses when the motion between two frames is large, whether from a sudden change of pose or fast movement. In this paper, we propose an enhanced human face-tracking model. The approach includes face detection and motion estimation using cascaded Convolutional Neural Networks, and continuous face tracking and model correction steps using the Active Appearance Model (AAM). The proposed system detects the face in the first input frame and initializes the models. On later frames, cascaded CNN face detection is used to estimate the target motion, such as location or pose, before applying the previous model to fit the new target.
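
A high-level sketch of that per-frame loop is given below. It is not the authors' implementation: the open-source MTCNN detector stands in for a cascaded CNN face detector, and the AAM fitting step is a hypothetical placeholder function.

```python
# Sketch: CNN detection re-initializes an AAM fit on each frame (placeholder AAM, assumed input).
import cv2
from mtcnn import MTCNN  # cascaded CNN face detector used here as a stand-in

def fit_aam(frame, init_box):
    """Hypothetical placeholder for Active Appearance Model fitting,
    initialized from the detector's estimate of the face location."""
    return init_box  # a real implementation would refine shape and texture here

detector = MTCNN()
cap = cv2.VideoCapture("input.mp4")          # hypothetical input video
model_state = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    faces = detector.detect_faces(rgb)       # estimate target location per frame
    if faces:
        # Re-initializing the AAM from the CNN detection keeps tracking stable
        # when inter-frame motion is large (fast movement or sudden pose change).
        model_state = fit_aam(rgb, faces[0]["box"])
cap.release()
```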