Title/Summary/Keyword: Deep Features

A GAN-based face rotation technique using 3D face model for game characters (3D 얼굴 모델 기반의 GAN을 이용한 게임 캐릭터 회전 기법)

  • Kim, Handong; Han, Jongdae; Yang, Heekyung; Min, Kyungha
    • Journal of Korea Game Society, v.21 no.3, pp.13-24, 2021
  • This paper presents a face rotation method applicable to game character facial illustrations. Existing studies were limited to human face data, required a large amount of data, and produced unconvincing synthesis results. This paper introduces the following method to address these problems. First, a 3D model carrying the features of the input image is rotated and rendered as 2D images to construct a dataset. Second, a GAN is designed to learn the features of various poses from the data built through the 3D model, so that the input image can be synthesized at a desired pose. The paper presents results of synthesizing game character face illustrations, and the synthesized results confirm that the proposed method works well.
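The abstract does not give architecture details, so the following is only a minimal PyTorch sketch of the core idea it describes: an image-to-image generator conditioned on a target pose, trained on pairs rendered from the rotated 3D model. The `PoseConditionedGenerator` name and all layer sizes are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a pose-conditioned image-to-image generator: the target
# pose (e.g., a yaw angle drawn from the 3D-rendered dataset) is injected as
# an extra conditioning input. All details here are assumptions.
import torch
import torch.nn as nn

class PoseConditionedGenerator(nn.Module):
    def __init__(self, img_channels: int = 3, pose_dim: int = 1):
        super().__init__()
        # Encoder: compress the input illustration into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: the pose scalar is broadcast spatially and concatenated.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128 + pose_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, img: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(img)
        b, _, h, w = feats.shape
        pose_map = pose.view(b, -1, 1, 1).expand(b, pose.shape[1], h, w)
        return self.decoder(torch.cat([feats, pose_map], dim=1))

# Example: synthesize a 64x64 character face at a 30-degree target yaw.
g = PoseConditionedGenerator()
out = g(torch.randn(1, 3, 64, 64), torch.tensor([[30.0 / 90.0]]))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```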

Lightweight Single Image Super-Resolution Convolution Neural Network in Portable Device

  • Wang, Jin; Wu, Yiming; He, Shiming; Sharma, Pradip Kumar; Yu, Xiaofeng; Alfarraj, Osama; Tolba, Amr
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.11, pp.4065-4083, 2021
  • Super-resolution can improve the clarity of low-resolution (LR) images, which in turn can increase the accuracy of high-level computer vision tasks. Portable devices have limited computing power and storage, so large-scale neural network super-resolution methods are not suitable for them; lightweight image processing methods are needed to save computation and parameters and to improve processing speed on such devices. We therefore propose the Enhanced Information Multiple Distillation Network (EIMDN) for lower delay and cost. EIMDN takes a feedback mechanism as its framework and obtains low-level features through high-level features. Further, we replace the feature extraction convolution in the Information Multiple Distillation Block (IMDB) with the Ghost module, yielding the Enhanced Information Multiple Distillation Block (EIMDB), which reduces both computation and the number of parameters. Finally, coordinate attention (CA) is applied at the end of IMDB and EIMDB to enhance the extraction of important spatial and channel information. Experimental results show that, compared with other lightweight super-resolution methods, our approach converges faster with fewer parameters and less computation while achieving higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), and it significantly improves the reconstruction of image textures and object contours.
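The Ghost module that EIMDB substitutes for ordinary feature-extraction convolutions comes from GhostNet; a minimal PyTorch sketch of that module follows. The ratio and kernel sizes are assumptions for illustration.

```python
# Minimal Ghost module sketch: a small "primary" convolution generates a few
# intrinsic feature maps, and a cheap depthwise convolution generates the
# remaining "ghost" maps, which are concatenated with the intrinsic ones.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
        super().__init__()
        primary_ch = out_ch // ratio          # intrinsic maps from a real conv
        cheap_ch = out_ch - primary_ch        # "ghost" maps from a cheap op
        self.primary = nn.Conv2d(in_ch, primary_ch, 3, padding=1)
        # groups=primary_ch makes this a cheap depthwise operation.
        self.cheap = nn.Conv2d(primary_ch, cheap_ch, 3, padding=1, groups=primary_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

m = GhostModule(64, 64)
print(m(torch.randn(1, 64, 32, 32)).shape)      # torch.Size([1, 64, 32, 32])
print(sum(p.numel() for p in m.parameters()))   # roughly half a full 64->64 conv
```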

Design and Implementation of YouTube-based Educational Video Recommendation System

  • Kim, Young Kook; Kim, Myung Ho
    • Journal of the Korea Society of Computer and Information, v.27 no.5, pp.37-45, 2022
  • As of 2020, about 500 hours of video were uploaded to YouTube, a representative online video platform, every minute. As more and more users acquire information through uploaded videos, online video platforms are working to provide better recommendation services. The recommendation service currently in use suggests videos based on the user's viewing history, which is not well suited to videos serving specific purposes and interests, such as educational videos. Recent recommendation systems utilize not only the user's viewing history but also the content features of each item. In this paper, we extract content features of educational videos for YouTube-based educational video recommendation, design a recommendation system around them, and implement it as a web application. In a user satisfaction study, recommendation performance and convenience performance were measured at 85.36% and 87.80%, respectively.
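As a rough illustration of content-feature-based recommendation of the kind the paper describes, here is a minimal sketch that represents each video by text features and recommends the nearest items. TF-IDF over toy metadata is a stand-in assumption, not the paper's actual feature extractor.

```python
# Minimal content-based recommender sketch: represent each educational video
# by text features (title/description keywords) and rank the most similar
# items by cosine similarity. Video IDs and texts here are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

videos = {
    "v1": "introduction to linear algebra vectors matrices",
    "v2": "deep learning basics neural networks backpropagation",
    "v3": "matrix decompositions eigenvalues linear algebra",
}
ids = list(videos)
tfidf = TfidfVectorizer().fit_transform(videos.values())

def recommend(video_id: str, k: int = 2) -> list[str]:
    """Return the k videos whose content features are closest to video_id."""
    i = ids.index(video_id)
    sims = cosine_similarity(tfidf[i], tfidf).ravel()
    ranked = sims.argsort()[::-1]
    return [ids[j] for j in ranked if j != i][:k]

print(recommend("v1"))  # ['v3', 'v2'] -- the other linear algebra video first
```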

Forecasting volatility index by temporal convolutional neural network (Causal temporal convolutional neural network를 이용한 변동성 지수 예측)

  • Ji Won Shin; Dong Wan Shin
    • The Korean Journal of Applied Statistics, v.36 no.2, pp.129-139, 2023
  • Forecasting volatility is essential for avoiding the risk caused by the uncertainties of a financial asset. Complicated features of financial volatility, such as the ambiguity between non-stationarity and stationarity, asymmetry, long memory, and sudden outlier-like spikes, pose great challenges to volatility forecasting. To address such complicated features implicitly, we consider machine learning models such as LSTM (1997) and GRU (2014), which are known to be suitable for time series forecasting. However, these recurrent models suffer from vanishing gradients, a large amount of computation, and heavy memory requirements. To overcome these problems, a causal temporal convolutional network (TCN), an advanced form of 1D CNN, is also applied. The overall forecasting power of the TCN model is confirmed to be higher than that of the RNN models in forecasting VIX, VXD, and VXN, the daily volatility indices of the S&P 500, DJIA, and Nasdaq, respectively.
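The building block of a causal TCN is a dilated 1D convolution whose input is padded only on the left, so the output at time t never sees the future. A minimal PyTorch sketch, where channel sizes and dilations are illustrative assumptions:

```python
# Causal dilated convolution: left-pad so output at time t depends only on
# inputs up to t. Stacking with growing dilation widens the receptive field.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # pad only on the left
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

# A tiny TCN: stacked causal convs with exponentially growing dilation,
# ending in a linear head that reads the last time step to forecast t+1.
tcn = nn.Sequential(
    CausalConv1d(1, 16, dilation=1), nn.ReLU(),
    CausalConv1d(16, 16, dilation=2), nn.ReLU(),
    CausalConv1d(16, 16, dilation=4), nn.ReLU(),
)
head = nn.Linear(16, 1)

x = torch.randn(8, 1, 60)       # batch of 60-day volatility-index windows
h = tcn(x)                      # (8, 16, 60), strictly causal features
forecast = head(h[:, :, -1])    # next-day forecast from the last step
print(forecast.shape)           # torch.Size([8, 1])
```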

Texture-Spatial Separation based Feature Distillation Network for Single Image Super Resolution (단일 영상 초해상도를 위한 질감-공간 분리 기반의 특징 분류 네트워크)

  • Hyun Ho Han
    • Journal of Digital Policy, v.2 no.3, pp.1-7, 2023
  • In this paper, I propose a method for performing single image super resolution by separating the texture and spatial domains and then classifying features based on detailed information. In CNN (Convolutional Neural Network)-based super resolution, the complex procedures and redundant feature information generated while estimating features for detail enhancement can degrade the quality of the result. The proposed method reduces procedural complexity and minimizes redundant feature information by splitting the input image into two channels: texture and spatial. In the texture channel, a feature refinement process with step-wise skip connections is applied for detail restoration, while in the spatial channel, a method is introduced to preserve the structural features of the image. Experimental results show improved PSNR and SSIM compared with existing super resolution methods, confirming the gain in quality.
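The paper's exact modules are not spelled out in the abstract; below is only a minimal sketch of the texture/spatial split idea, where a low-pass copy stands in for the structural component and the residual for texture. The average-pooling split and all layer sizes are assumptions.

```python
# Texture/spatial separation sketch: blur the input to approximate structure,
# take the residual as texture, refine each branch separately, then fuse and
# upsample with sub-pixel convolution. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefineBlock(nn.Module):
    """Two convolutions with a skip connection (step-wise detail refinement)."""
    def __init__(self, ch):
        super().__init__()
        self.c1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.c2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.c2(F.relu(self.c1(x)))

class TextureSpatialSR(nn.Module):
    def __init__(self, ch=32, scale=2):
        super().__init__()
        self.tex_head = nn.Conv2d(3, ch, 3, padding=1)   # texture branch
        self.tex_refine = RefineBlock(ch)
        self.spa_head = nn.Conv2d(3, ch, 3, padding=1)   # structure branch
        self.fuse = nn.Conv2d(2 * ch, 3 * scale * scale, 3, padding=1)
        self.up = nn.PixelShuffle(scale)                 # sub-pixel upsampling

    def forward(self, x):
        low = F.avg_pool2d(x, 3, stride=1, padding=1)    # structural component
        high = x - low                                   # texture residual
        t = self.tex_refine(self.tex_head(high))
        s = self.spa_head(low)
        return self.up(self.fuse(torch.cat([t, s], dim=1)))

net = TextureSpatialSR()
print(net(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 3, 64, 64])
```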

Improving Adversarial Robustness via Attention (Attention 기법에 기반한 적대적 공격의 강건성 향상 연구)

  • Jaeuk Kim; Myung Gyo Oh; Leo Hyun Park; Taekyoung Kwon
    • Journal of the Korea Institute of Information Security & Cryptology, v.33 no.4, pp.621-631, 2023
  • Adversarial training improves the robustness of deep neural networks against adversarial examples. However, previous adversarial training methods focus only on the adversarial loss function, ignoring that even a small perturbation of the input layer causes a significant change in the hidden-layer features. Consequently, the accuracy of a defended model drops in various untrained situations, such as on clean samples or under other attack techniques. An architectural perspective is therefore necessary to improve feature representation power and solve this problem. In this paper, we add an attention module that generates an attention map of the input image to a general model and perform PGD adversarial training on the augmented model. In our experiments on the CIFAR-10 dataset, the attention-augmented model showed higher accuracy than the general model regardless of the network structure. In particular, the robust accuracy of our approach was consistently higher under various attacks such as PGD, FGSM, and BIM, as well as against more powerful adversaries. By visualizing the attention map, we further confirmed that the attention module extracts features of the correct class even for adversarial examples.
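PGD adversarial training itself is standard; a minimal PyTorch sketch of the attack-then-train loop follows, where `model` would be the attention-augmented network. The epsilon and step-size values follow common CIFAR-10 practice and are assumptions here.

```python
# PGD adversarial training sketch: craft a worst-case perturbation within an
# L-infinity ball around each input, then train on the perturbed batch.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent within an L-infinity ball around x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()                # ascend the loss
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project to the ball
        x_adv = x_adv.clamp(0, 1)                          # keep a valid image
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    model.train()
    x_adv = pgd_attack(model, x, y)          # craft adversarial examples
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)  # train on the perturbed batch
    loss.backward()
    optimizer.step()
    return loss.item()
```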

Multivariate Congestion Prediction using Stacked LSTM Autoencoder based Bidirectional LSTM Model

  • Vijayalakshmi, B.; Thanga, Ramya S.; Ramar, K.
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.1, pp.216-238, 2023
  • In intelligent transportation systems, traffic management is an important task. Accurate forecasting of traffic characteristics such as flow, congestion, and density remains an active research area because of the non-linear nature and uncertainty of spatiotemporal data. Inclement weather such as rain and snow, as well as special events such as holidays, accidents, and road closures, significantly affects driving and the average speed of vehicles on the road, lowering traffic capacity and causing widespread congestion. This work designs a model for multivariate short-term traffic congestion prediction using SLSTM_AE-BiLSTM. The proposed design consists of a Bidirectional Long Short-Term Memory (BiLSTM) network that predicts traffic flow values and a Convolutional Neural Network (CNN) model that detects the congestion status, using spatially static, temporally dynamic data. A stacked Long Short-Term Memory Autoencoder (SLSTM AE) encodes the weather features into a reduced, more informative feature space. The BiLSTM model captures features from past and present traffic data simultaneously, identifies long-term dependencies, and uses the traffic data together with the encoded weather data to perform traffic flow prediction. The CNN model then predicts the recurring congestion status from the predicted traffic flow value for a particular urban traffic network. The publicly available Caltrans PeMS dataset with traffic parameters is used. The proposed model achieves a congestion prediction accuracy of 92.74%, slightly better than other deep learning models for congestion prediction.
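A minimal PyTorch sketch of the two main pieces follows: a stacked LSTM autoencoder compressing weather features, and a BiLSTM forecasting flow from traffic history concatenated with the weather code. All dimensions are illustrative assumptions.

```python
# Stacked LSTM autoencoder for weather encoding + BiLSTM flow forecaster.
import torch
import torch.nn as nn

class StackedLSTMAE(nn.Module):
    def __init__(self, in_dim=8, hidden=32, code=4):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.to_code = nn.Linear(hidden, code)
        self.decoder = nn.LSTM(code, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, in_dim)

    def encode(self, w):                  # w: (batch, time, weather_dims)
        h, _ = self.encoder(w)
        return self.to_code(h)            # compact weather code per time step

    def forward(self, w):                 # reconstruction for AE pre-training
        h, _ = self.decoder(self.encode(w))
        return self.out(h)

class BiLSTMForecaster(nn.Module):
    def __init__(self, traffic_dim=1, code=4, hidden=64):
        super().__init__()
        self.bilstm = nn.LSTM(traffic_dim + code, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, traffic, weather_code):
        x = torch.cat([traffic, weather_code], dim=-1)
        h, _ = self.bilstm(x)
        return self.head(h[:, -1])        # next-step flow prediction

ae, f = StackedLSTMAE(), BiLSTMForecaster()
w, t = torch.randn(2, 24, 8), torch.randn(2, 24, 1)
print(f(t, ae.encode(w)).shape)           # torch.Size([2, 1])
```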

An Untrained Person's Posture Estimation Scheme by Exploiting a Single 24GHz FMCW Radar and 2D CNN (단일 24GHz FMCW 레이더 및 2D CNN을 이용하여 학습되지 않은 요구조자의 자세 추정 기법)

  • Kyongseok Jang; Junhao Zhou; Chao Sun; Youngok Kim
    • Journal of the Society of Disaster Information, v.19 no.4, pp.897-907, 2023
  • Purpose: In this study, we aim to estimate an untrained person's three postures using a 2D CNN model trained with minimal FFT data collected by a 24GHz FMCW radar. Method: In an indoor space, we collected FFT data for three distinct postures (standing, sitting, and lying) from three individuals. To apply this data to a 2D CNN model, we first converted the collected data into 2D images, which were then used to train the model to recognize the distinct features of each posture. After training, we evaluated the model's accuracy in differentiating posture features across individuals. Result: The experimental results show that the average accuracy of the proposed scheme over the three postures is 89.99%, outperforming conventional 1D CNN and SVM schemes. Conclusion: This study estimates the three postures of an arbitrary person using a 2D CNN model and a 24GHz FMCW radar for indoor disaster situations, and shows that a person's posture can be accurately estimated even though his or her data was not used to train the AI model.
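A minimal PyTorch sketch of the kind of small 2D CNN classifier described follows, taking radar FFT frames rendered as single-channel images and emitting logits for the three postures. The input size and layer widths are assumptions.

```python
# Small 2D CNN posture classifier over radar FFT frames rendered as images.
import torch
import torch.nn as nn

posture_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 3),            # logits for standing / sitting / lying
)

fft_image = torch.randn(4, 1, 64, 64)   # batch of FFT frames as 2D images
logits = posture_cnn(fft_image)
print(logits.argmax(dim=1))             # predicted posture index per frame
```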

Attention-Based Heart Rate Estimation using MobilenetV3

  • Yeo-Chan Yoon
    • Journal of the Korea Society of Computer and Information, v.28 no.12, pp.1-7, 2023
  • The advent of deep learning technologies has led to the development of various medical applications, making healthcare services more convenient and effective. Among these applications, heart rate estimation is considered a vital method for assessing an individual's health. Traditional methods such as photoplethysmography through smartwatches have been widely used, but they require physical contact and additional hardware. Recent advancements allow contactless heart rate estimation through facial image analysis, providing a more hygienic and convenient approach. In this paper, we propose a lightweight methodology capable of accurately estimating heart rate in mobile environments, using a specialized 2-channel network structure based on 2D convolution. Our method considers both subtle facial movements and the color changes resulting from blood flow and muscle contractions. The approach comprises two major components: an encoder for analyzing image features and a regression layer for estimating the blood volume pulse. By incorporating both kinds of features simultaneously, our methodology delivers more accurate results even in computing environments with limited resources. The proposed approach is expected to offer a more efficient, contact-free way to monitor heart rate, particularly well suited to mobile devices.
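A minimal sketch of the 2-channel idea, assuming (per the title) torchvision's MobileNetV3-Small as the feature encoder: one branch sees frame differences (subtle motion), the other raw frames (color change), and their pooled features feed a blood-volume-pulse regression head. The exact wiring in the paper may differ.

```python
# Two-channel heart rate sketch: motion branch (frame difference) plus
# appearance branch (raw frame), fused into a BVP regression head.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class TwoChannelHeartRate(nn.Module):
    def __init__(self):
        super().__init__()
        self.motion = mobilenet_v3_small(weights=None).features
        self.appearance = mobilenet_v3_small(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.regress = nn.Linear(2 * 576, 1)  # 576 = MobileNetV3-Small feat dim

    def forward(self, frame, prev_frame):
        m = self.pool(self.motion(frame - prev_frame)).flatten(1)
        a = self.pool(self.appearance(frame)).flatten(1)
        return self.regress(torch.cat([m, a], dim=1))  # BVP value per frame

net = TwoChannelHeartRate()
f0, f1 = torch.randn(2, 3, 112, 112), torch.randn(2, 3, 112, 112)
print(net(f1, f0).shape)  # torch.Size([2, 1])
```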

Dual-stream Co-enhanced Network for Unsupervised Video Object Segmentation

  • Hongliang Zhu; Hui Yin; Yanting Liu; Ning Chen
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.4, pp.938-958, 2024
  • Unsupervised Video Object Segmentation (UVOS) is a highly challenging problem in computer vision because no annotation of the target object in the test video is available at all. The main difficulty is to effectively handle the complicated and changeable motion of the target object and the confusion caused by similar background objects in the video sequence. In this paper, we propose a novel deep Dual-stream Co-enhanced Network (DC-Net) for UVOS via bidirectional motion-cue refinement and multi-level feature aggregation, which takes full advantage of motion cues and effectively integrates features at different levels to produce high-quality segmentation masks. DC-Net is a dual-stream architecture in which the two streams co-enhance each other. One is a motion stream with a Motion-cues Refine Module (MRM), which learns from bidirectional optical flow images and produces a fine-grained, complete, and distinctive motion saliency map; the other is an appearance stream with a Multi-level Feature Aggregation Module (MFAM) and a Context Attention Module (CAM), designed to integrate features at different levels effectively. Specifically, the motion saliency map produced by the motion stream is fused with each stage of the decoder in the appearance stream to improve segmentation, and in turn the segmentation loss in the appearance stream feeds back into the motion stream to enhance motion refinement. Experimental results on three datasets (DAVIS2016, VideoSD, SegTrack-v2) demonstrate that DC-Net achieves results comparable to state-of-the-art methods.
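A minimal PyTorch sketch of the co-enhancement fusion pattern the abstract describes follows, with the motion-stream saliency map resized and concatenated into each appearance-decoder stage; the modules below are placeholders for MRM/MFAM/CAM, not the paper's definitions.

```python
# Fusion of a motion saliency map into appearance-decoder stages.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedDecoderStage(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch + 1, ch, 3, padding=1)  # +1 = saliency channel

    def forward(self, feat, motion_saliency):
        sal = F.interpolate(motion_saliency, size=feat.shape[-2:],
                            mode="bilinear", align_corners=False)
        return F.relu(self.conv(torch.cat([feat, sal], dim=1)))

# Toy forward pass: two decoder stages, each conditioned on the same motion
# saliency map produced by the (omitted) motion stream.
stages = nn.ModuleList([FusedDecoderStage(64), FusedDecoderStage(64)])
mask_head = nn.Conv2d(64, 1, 1)
feat = torch.randn(1, 64, 32, 32)
saliency = torch.rand(1, 1, 8, 8)
for stage in stages:
    feat = stage(feat, saliency)
mask = torch.sigmoid(mask_head(feat))  # final segmentation mask
print(mask.shape)  # torch.Size([1, 1, 32, 32])
```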