• Title/Summary/Keyword: Deep Learning Convergence Study


Performance Comparison of State-of-the-Art Vocoder Technology Based on Deep Learning in a Korean TTS System (한국어 TTS 시스템에서 딥러닝 기반 최첨단 보코더 기술 성능 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.2
    • /
    • pp.509-514
    • /
    • 2020
  • A conventional TTS system consists of several modules, including text preprocessing, parsing, grapheme-to-phoneme conversion, boundary analysis, prosody control, acoustic feature generation by an acoustic model, and synthesized speech generation. In contrast, a deep learning-based TTS system is composed of a Text2Mel process, which generates a spectrogram from text, and a vocoder, which synthesizes a speech signal from the spectrogram. In this paper, to construct an optimal Korean TTS system, we apply Tacotron2 to the Text2Mel process and introduce WaveNet, WaveRNN, and WaveGlow as vocoders, implementing them to verify and compare their performance. Experimental results show that WaveNet achieves the highest MOS and its trained model is hundreds of megabytes in size, but its synthesis time is about 50 times real time. WaveRNN shows MOS performance similar to WaveNet's with a model size of several tens of megabytes, but it also cannot run in real time. WaveGlow can run in real time, but its model is several gigabytes in size and its MOS is the worst of the three vocoders. Based on these results, this paper presents reference criteria for selecting the appropriate vocoder according to the hardware environment when deploying a TTS system.

Acoustic Full-waveform Inversion using Adam Optimizer (Adam Optimizer를 이용한 음향매질 탄성파 완전파형역산)

  • Kim, Sooyoon;Chung, Wookeen;Shin, Sungryul
    • Geophysics and Geophysical Exploration
    • /
    • v.22 no.4
    • /
    • pp.202-209
    • /
    • 2019
  • In this study, an acoustic full-waveform inversion using the Adam optimizer is proposed. The steepest descent method, commonly used to optimize seismic waveform inversion, is fast and easy to apply, but the inverse problem often fails to converge to the correct solution. Various optimization methods suggested as alternatives are much more accurate than steepest descent but require long computation times. The Adam optimizer is widely used in deep learning to optimize learning models and is considered one of the most effective optimization methods across diverse models. We therefore propose a seismic full-waveform inversion algorithm that uses the Adam optimizer for fast and accurate convergence. To demonstrate the performance of the proposed algorithm, we compared the updated P-wave velocity model obtained with the Adam optimizer against the inversion results of the steepest descent method. The results confirm that the proposed algorithm provides fast error convergence and precise inversion results.
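
The Adam update rule this abstract relies on can be sketched in a few lines. The least-squares misfit below is a toy stand-in for the waveform-inversion objective, not the paper's actual acoustic forward model; parameter values are the common defaults apart from the learning rate.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Standard Adam update: exponentially decayed first and second
    # moment estimates of the gradient, with bias correction.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy least-squares "inversion": recover x_true from observations y = A @ x_true.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_true = rng.normal(size=5)
y = A @ x_true

x, m, v = np.zeros(5), np.zeros(5), np.zeros(5)
loss0 = 0.5 * np.sum((A @ x - y) ** 2)
for t in range(1, 1001):
    grad = A.T @ (A @ x - y)          # gradient of 0.5 * ||A x - y||^2
    x, m, v = adam_step(x, grad, m, v, t)
loss = 0.5 * np.sum((A @ x - y) ** 2)
print(loss < loss0)
```

Note that on the first step the bias correction makes the update approximately `lr * sign(grad)`, which is what gives Adam its fast initial progress compared with plain steepest descent.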

Spectrogram analysis of active power of appliances and LSTM-based Energy Disaggregation (다수 가전기기 유효전력의 스팩토그램 분석 및 LSTM기반의 전력 분해 알고리즘)

  • Kim, Imgyu;Kim, Hyuncheol;Kim, Seung Yun;Shin, Sangyong
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.2
    • /
    • pp.21-28
    • /
    • 2021
  • In this study, we propose a deep learning-based NILM technique using actual measured power data for five kinds of home appliances and verify its effectiveness. For about three weeks, the active power of the central power measuring device and of five home appliances (refrigerator, induction cooktop, TV, washing machine, air cleaner) was measured individually. We introduce the preprocessing method for the measured data and analyze the characteristics of each appliance through spectrogram analysis. These characteristics are organized into a training data set. The power data measured by the central power measuring device and the five appliances were mapped as time series, and training was performed using an LSTM neural network, which is well suited to time-series prediction. Finally, we propose an algorithm that can disaggregate the five appliances' energy use from the power data of the central measuring device alone.
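
As one concrete step, the time-series mapping between the aggregate measurement and an appliance channel can be framed as sliding windows over the aggregate signal. The sketch below assumes a sequence-to-point framing (window of aggregate power predicts the appliance power at the window end) and uses synthetic arrays in place of the paper's measured data.

```python
import numpy as np

def make_windows(aggregate, target, win=8):
    """Slice the aggregate-power series into overlapping windows and pair
    each window with the target appliance's power at the window end, a
    sequence-to-point framing often used in NILM."""
    X = np.stack([aggregate[i:i + win] for i in range(len(aggregate) - win + 1)])
    y = target[win - 1:]
    return X[..., None], y   # trailing axis = features, as an LSTM expects

agg = np.arange(20, dtype=float)   # stand-in for the measured total power
fridge = 0.3 * agg                 # stand-in for one appliance channel
X, y = make_windows(agg, fridge, win=8)
print(X.shape, y.shape)   # (13, 8, 1) (13,)
```

The resulting `(samples, timesteps, features)` tensor is the shape an LSTM layer consumes; one such dataset would be built per appliance.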

A Study on Hangul Handwriting Generation and Classification Model for Intelligent OCR System (지능형 OCR 시스템을 위한 한글 필기체 생성 및 분류 모델에 관한 연구)

  • Jin-Seong Baek;Ji-Yun Seo;Sang-Joong Jung;Do-Un Jeong
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.23 no.4
    • /
    • pp.222-227
    • /
    • 2022
  • In this paper, we implemented Korean handwriting generation and classification models based on deep learning algorithms that can be applied to various industries. The work consists of two models: a GAN-based Korean handwriting generation model and a CNN-based Korean handwriting classification model. The GAN model consists of a generator that produces fake Korean handwriting data and a discriminator that distinguishes the fake handwritten data from real data. The CNN model was trained on the 'PHD08' dataset and classified Korean handwriting with 92.45% accuracy. When the classification model was retrained on the existing CNN training dataset augmented with the Korean handwriting data generated by the implemented GAN model, its classification performance rose to 96.86%, superior to the original performance.
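
The generator/discriminator objective behind such a GAN can be illustrated with the standard binary cross-entropy losses. The discriminator scores below are made-up numbers, not outputs of the paper's model; they show that when the discriminator is confident, the generator's loss is large, which is what drives it to produce more realistic handwriting.

```python
import numpy as np

def bce(p, label):
    # Binary cross-entropy of probabilities p against a constant label.
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -np.mean(label * np.log(p) + (1 - label) * np.log(1 - p))

# Discriminator scores (probability of "real") for a batch of real
# handwriting samples and a batch generated from noise.
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.1, 0.2, 0.05])

# The discriminator pushes d_real toward 1 and d_fake toward 0 ...
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
# ... while the generator pushes d_fake toward 1 (non-saturating form).
g_loss = bce(d_fake, 1.0)
print(round(float(d_loss), 3), round(float(g_loss), 3))
```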

A study on Korean multi-turn response generation using generative and retrieval model (생성 모델과 검색 모델을 이용한 한국어 멀티턴 응답 생성 연구)

  • Lee, Hodong;Lee, Jongmin;Seo, Jaehyung;Jang, Yoonna;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.1
    • /
    • pp.13-21
    • /
    • 2022
  • Recent deep learning-based research shows excellent performance in most natural language processing (NLP) fields with pre-trained language models. In particular, auto-encoder-based language models have proven their performance and usefulness in various fields of Korean language understanding. However, decoder-based Korean generative models struggle to generate even simple sentences, and there is little detailed research and data for dialogue, the field where generative models are most commonly used. Therefore, this paper constructs multi-turn dialogue data for a Korean generative model. In addition, we compare and analyze performance after improving the model's dialogue ability through transfer learning. We also propose a method of supplementing the model's insufficient dialogue generation ability by extracting recommended response candidates from external knowledge information with a retrieval model.
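
One simple way a retrieval model can extract recommended response candidates is cosine-similarity ranking over candidate embeddings. The vectors below are toy stand-ins, not the paper's actual encoder outputs; in practice the context and candidates would be embedded by a trained encoder.

```python
import numpy as np

def top_k_candidates(query_vec, cand_vecs, k=2):
    """Rank response candidates by cosine similarity to the dialogue-context
    embedding and return the indices of the top-k candidates."""
    q = query_vec / np.linalg.norm(query_vec)
    c = cand_vecs / np.linalg.norm(cand_vecs, axis=1, keepdims=True)
    sims = c @ q                      # cosine similarity per candidate
    return np.argsort(-sims)[:k]      # highest-similarity first

context = np.array([1.0, 0.0, 1.0])   # hypothetical dialogue-context embedding
candidates = np.array([
    [1.0, 0.1, 0.9],   # close to the context
    [0.0, 1.0, 0.0],   # unrelated
    [0.9, 0.0, 1.1],   # also close
])
print(top_k_candidates(context, candidates))
```

The retrieved candidates can then be fed to the generative model as additional conditioning, which is the supplementation idea the abstract describes.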

Analysis of methods for the model extraction without training data (학습 데이터가 없는 모델 탈취 방법에 대한 분석)

  • Hyun Kwon;Yonggi Kim;Jun Lee
    • Convergence Security Journal
    • /
    • v.23 no.5
    • /
    • pp.57-64
    • /
    • 2023
  • In this study, we analyzed how to steal a target model without its training data. Input data is generated using a generative model, and a similar model is created by defining a loss function so that the predicted values of the target model and the similar model are close to each other. The similar model is trained by gradient descent on the logit values of each class for the generated input data, so that it becomes similar to the target model. The TensorFlow machine learning library was used as the experimental environment, with CIFAR10 and SVHN as datasets and a ResNet model as the target. Experiments showed that the model stealing method produced a similar model with 86.18% accuracy on CIFAR10 and 96.02% on SVHN, yielding predictions close to the target model's. In addition, considerations on the model stealing method, its military use, and its limitations are analyzed.
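
The logit-matching idea can be sketched with linear stand-ins for the target and similar models: random queries replace the generative model, and gradient descent minimizes the mean squared difference between the two models' logits. This is a minimal illustration of the principle, not the paper's ResNet/TensorFlow setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "target" model: a fixed linear map from inputs to class logits.
W_target = rng.normal(size=(4, 3))
def target_logits(x):
    return x @ W_target

# "Similar" (stolen) model with its own weights, trained only on
# synthetic queries: no access to the target's training data.
W_student = np.zeros((4, 3))
lr = 0.05
for _ in range(500):
    x = rng.normal(size=(16, 4))              # generative-model stand-in
    diff = x @ W_student - target_logits(x)   # logit mismatch
    grad = x.T @ diff / len(x)                # gradient of 0.5 * mean ||diff||^2
    W_student -= lr * grad

print(np.abs(W_student - W_target).max())     # near zero after training
```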

Analyses of the Patterns of the Synchronous and Asynchronous Social Media Usage in College e-Learning Settings (대학 이러닝 환경에서 실시간과 비실시간 소셜미디어 활용유형 차이분석)

  • Eom, Sang-Hyeon;Lim, Keol
    • Journal of Digital Convergence
    • /
    • v.15 no.4
    • /
    • pp.27-34
    • /
    • 2017
  • As information technology has developed rapidly, many users have become familiar with social media, and the potential of social media for educational use has increased accordingly. From a learning viewpoint, social media help learners form communities of practice that can lead to collective intelligence. In this study, two types of social media, synchronous and asynchronous, were compared in terms of usage patterns in college-level e-learning settings. Content analysis identified four factors: learning content, tasks and assignments, emotional communication, and chatting. Statistically significant differences in postings were found for all factors except tasks and assignments. In qualitative interviews, the participants described various usage patterns of synchronous and asynchronous social media. In sum, the learners generally preferred synchronous social media, while asynchronous social media were mainly used for deep thinking and summarizing. Finally, suggestions were made to improve educational environments for learners in the digital and social media age.

A Named Entity Recognition Model in Criminal Investigation Domain using Pretrained Language Model (사전학습 언어모델을 활용한 범죄수사 도메인 개체명 인식)

  • Kim, Hee-Dou;Lim, Heuiseok
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.2
    • /
    • pp.13-20
    • /
    • 2022
  • This study develops a named entity recognition (NER) model specialized for the criminal investigation domain using deep learning techniques. We propose a system that can contribute to crime analysis for prevention and investigation by automatically extracting and categorizing crime-related information from text-based data such as criminal judgments and investigation documents. For this study, criminal investigation domain texts were collected, and the required named entities were newly defined from the perspective of criminal analysis. The proposed model, which applies KoELECTRA, a pre-trained language model that has recently shown high performance in natural language processing, achieves a micro-average F1-score of 98% and a macro-average F1-score of 95% on the 9 main categories of the crime-domain NER experiment data, and a micro-average F1-score of 98% and a macro-average F1-score of 62% on the 56 subcategories. The model is also analyzed from the perspective of future improvement and utilization.
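
The gap between the reported micro- and macro-average F1 scores follows from how the two averages treat rare classes. The toy labels below (not the paper's data) show a frequent class dominating the micro average while a poorly predicted rare class drags the macro average down, mirroring the 98% vs. 62% subcategory gap.

```python
import numpy as np

def f1_scores(y_true, y_pred, classes):
    """Per-class F1 combined into micro and macro averages, as commonly
    reported for NER tag sets."""
    f1s = []
    tp_all = fp_all = fn_all = 0
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        tp_all, fp_all, fn_all = tp_all + tp, fp_all + fp, fn_all + fn
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0)
    micro = 2 * tp_all / (2 * tp_all + fp_all + fn_all)  # pooled counts
    macro = float(np.mean(f1s))                          # unweighted mean
    return micro, macro

# Frequent class 0 predicted well; rare class 1 predicted poorly.
y_true = np.array([0] * 8 + [1] * 2)
y_pred = np.array([0] * 8 + [0, 1])
micro, macro = f1_scores(y_true, y_pred, classes=[0, 1])
print(micro, macro)   # micro is dominated by the frequent class
```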

Case study of AI art generator using artificial intelligence (인공지능을 활용한 AI 예술 창작도구 사례 연구)

  • Chung, Jiyun
    • Trans-
    • /
    • v.13
    • /
    • pp.117-140
    • /
    • 2022
  • Recently, artificial intelligence technology has been used throughout industry. AI art generators are currently used in the NFT industry, and works created with them have been exhibited and sold. AI art generators in the visual art field include Gated Photos, Google Deep Dream, Sketch-RNN, and Auto Draw. AI art generators in the music field include Beat Blender, Google Doodle Bach, AIVA, Duet, and Neural Synth. The characteristics of AI art generators are as follows. First, AI art generators in the art field are used to create new works based on existing work data. Second, they can quickly derive creative results to provide ideas to creators or to realize various creative materials. In the future, AI art generators are expected to have a great influence on content planning and production in areas such as visual art, music composition, literature, and film.

Small CNN-RNN Engraft Model Study for Sequence Pattern Extraction in Protein Function Prediction Problems

  • Lee, Jeung Min;Lee, Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.8
    • /
    • pp.49-59
    • /
    • 2022
  • In this paper, we designed a new enzyme function prediction model, PSCREM, based on a study that compared and evaluated CNN and LSTM/GRU models, the deep learning models most widely used in 2020 for predicting function and structure from protein sequences, under the same conditions. Sequence evolution information was used to preserve detailed patterns that would otherwise be missed by CNN convolution, and relationship information between amino acids of functional significance was extracted through overlapped RNNs and used in feature-map production. The RNN-family algorithms used in small CNN-RNN models, LSTM and GRU, are usually stacked two to three layers deep with over 100 units each, but in this paper small RNNs consisting of 10 and 20 units are overlapped. The model uses the PSSM profile transformed from protein sequence data. Experiments showed 86.4% accuracy for predicting the main classes of the enzyme number, and 84.4% accuracy down to the sub-subclasses. PSCREM thus better identifies unique patterns related to protein function through overlapped RNNs, and the overlapped RNN is proposed as a novel methodology for protein function and structure prediction.
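
The "overlapped" small-RNN idea, a 10-unit recurrent layer whose hidden states feed a 20-unit one, can be sketched with a minimal GRU forward pass. The random weights and the PSSM-shaped input below are placeholders, not the trained PSCREM model; the point is the shape flow between the stacked small layers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_forward(x_seq, units, rng):
    """Run one small GRU layer over a sequence and return all hidden
    states, so a second small GRU can be stacked ("overlapped") on top."""
    d = x_seq.shape[1]
    Wz, Wr, Wh = (rng.normal(scale=0.1, size=(d + units, units)) for _ in range(3))
    h = np.zeros(units)
    hs = []
    for x in x_seq:
        xh = np.concatenate([x, h])
        z = sigmoid(xh @ Wz)                              # update gate
        r = sigmoid(xh @ Wr)                              # reset gate
        h_tilde = np.tanh(np.concatenate([x, r * h]) @ Wh)
        h = (1 - z) * h + z * h_tilde
        hs.append(h)
    return np.stack(hs)

rng = np.random.default_rng(0)
pssm = rng.normal(size=(30, 20))              # 30 residues x 20-dim PSSM rows
h1 = gru_forward(pssm, units=10, rng=rng)     # first small GRU (10 units)
h2 = gru_forward(h1, units=20, rng=rng)       # overlapped small GRU (20 units)
print(h1.shape, h2.shape)   # (30, 10) (30, 20)
```

The final hidden-state sequence would then be combined with CNN feature maps for the classification head.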