• Title/Summary/Keyword: Deep Learning Convergence Study

Search Results: 326

A Discussion on AI-based Automated Picture Creations (인공지능기반의 자동 창작 영상에 관한 논구)

  • Junghoe Kim; Joonsung Yoon
    • The Journal of the Convergence on Culture Technology, v.10 no.3, pp.723-730, 2024
  • In order to trace changes in the concept and understanding of automatically generated images, this study draws an analogy between the creative methods of photography and cinema, which represent the established image fields, and AI-based image creation in terms of 'automaticity', and discusses how new forms of automatic image creation can be understood and what possibilities they hold. When photography and cinema were invented, they were assigned the status of 'automatic creation' in contrast to traditional art genres such as painting. Recently, as AI has been applied to video production, the concept of 'automatic creation' has expanded, and experimental works that freely cross the boundaries of literature, art, photography, and film have become active. By utilizing technologies such as machine learning and deep learning, AI-based automated creation lets the system carry out much of the creative process on its own. Such automated creation can greatly improve efficiency, but it also risks compromising the personal and subjective nature of art, a problem that stems from the fact that AI cannot completely replace human creativity.

Development of T2DM Prediction Model Using RNN (RNN을 이용한 제2형 당뇨병 예측모델 개발)

  • Jang, Jin-Su; Lee, Min-Jun; Lee, Tae-Ro
    • Journal of Digital Convergence, v.17 no.8, pp.249-255, 2019
  • Type 2 diabetes mellitus (T2DM) is a metabolic disorder characterized by hyperglycemia; it causes many complications and requires long-term treatment, resulting in massive medical expenses each year. Many studies have addressed this problem, but existing approaches, which learn from and predict data at a single time point, have limited accuracy. Thus, this study proposes an RNN-based model to increase the accuracy of T2DM prediction, built on the Korean Genome and Epidemiology Study (Ansan and Anseong, Korea). We trained on all of the data over time to create the diabetes prediction model. To verify the model, we compared its accuracy with existing machine learning methods: LR, k-NN, and SVM. The proposed model achieved an accuracy of 0.92 and an AUC of 0.92, both higher than the other methods. Therefore, predicting the onset of T2DM with the proposed model could encourage a healthier lifestyle and better hyperglycemia control, lowering the risk of diabetes by alerting users to a likely occurrence.
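
As this is a search listing, the paper's code is not included. A minimal sketch of the idea, assuming a PyTorch LSTM classifier over repeated examination records, is shown below; the feature count, layer sizes, and training settings are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch only: an LSTM that classifies a sequence of periodic
# examination records as "will develop T2DM" vs. "will not". Feature count,
# layer sizes, and training details are assumptions.
import torch
import torch.nn as nn

class T2DMPredictor(nn.Module):
    def __init__(self, n_features=12, hidden_size=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)        # single logit: onset vs. no onset

    def forward(self, x):                            # x: (batch, exam waves, n_features)
        _, (h_n, _) = self.rnn(x)                    # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])                    # (batch, 1) logits

model = T2DMPredictor()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data
x = torch.randn(32, 5, 12)                           # 32 subjects, 5 exam waves, 12 features
y = torch.randint(0, 2, (32, 1)).float()             # 1 = developed T2DM during follow-up
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```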

Application of deep learning method for decision making support of dam release operation (댐 방류 의사결정지원을 위한 딥러닝 기법의 적용성 평가)

  • Jung, Sungho; Le, Xuan Hien; Kim, Yeonsu; Choi, Hyungu; Lee, Giha
    • Journal of Korea Water Resources Association, v.54 no.spc1, pp.1095-1105, 2021
  • More advanced dam operation is required to cope with the rainy season, typhoons, and torrential rains. In addition, physical models based on fixed rules can be limited in determining the dam release discharge because of inherent uncertainty and complex factors. This study aims to forecast the water level at the station nearest to the dam multiple time steps ahead using an LSTM (Long Short-Term Memory) deep learning model and to evaluate its usefulness for deciding the dam release discharge. The LSTM model was trained and tested on eight data sets with a 1-hour temporal resolution covering about 13 years (2009~2021), including primary data used in dam operation and downstream water level station records. The trained model forecasted the water level time series for six lead times (1, 3, 6, 9, 12, and 18 hours), and the forecasts were compared with observed data. The 1-hour-ahead predictions showed the best performance for all cases, with an average MAE of 0.01 m, RMSE of 0.015 m, and NSE of 0.99. As the lead time increases, the predictive performance decreases slightly, but the model still reliably reproduces the temporal pattern of the observed water level. Thus, the LSTM model can capture the characteristics of complex, non-linear hydrological data and can be used to determine the release discharge when simulating dam operation.
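
The abstract gives the model type, lead times, and metrics but no implementation. The sketch below, under assumed input features and layer sizes, shows how an LSTM can map a window of past hourly records to water levels at the six lead times, together with the NSE metric the paper reports.

```python
# Illustrative sketch only: an LSTM mapping a window of past hourly inputs
# (inflow, release, rainfall, downstream stage, ...) to the downstream water
# level at the six lead times used in the paper. Sizes are assumptions.
import torch
import torch.nn as nn

LEAD_TIMES = [1, 3, 6, 9, 12, 18]                    # hours ahead

class WaterLevelLSTM(nn.Module):
    def __init__(self, n_features=8, hidden_size=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden_size, len(LEAD_TIMES))

    def forward(self, x):                            # x: (batch, past_hours, n_features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])                # one prediction per lead time

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, one of the metrics reported in the paper."""
    obs = torch.as_tensor(obs, dtype=torch.float)
    sim = torch.as_tensor(sim, dtype=torch.float)
    return 1 - torch.sum((obs - sim) ** 2) / torch.sum((obs - obs.mean()) ** 2)

model = WaterLevelLSTM()
x = torch.randn(16, 72, 8)                           # 16 samples, 72 past hours, 8 inputs
y_hat = model(x)                                     # (16, 6): levels at +1, +3, ..., +18 h
```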

A Study on Webtoon Background Image Generation Using CartoonGAN Algorithm (CartoonGAN 알고리즘을 이용한 웹툰(Webtoon) 배경 이미지 생성에 관한 연구)

  • Saekyu Oh; Juyoung Kang
    • The Journal of Bigdata, v.7 no.1, pp.173-185, 2022
  • Korean webtoons are currently leading the global digital comics market. Webtoons are serviced in many languages around the world, dramas and movies produced from webtoon IP (intellectual property) have become major hits, and more and more webtoons are being adapted into video. With this success, however, the working environment of webtoon creators has emerged as an important issue. According to the 2021 Cartoon User Survey, webtoon creators spend an average of 10.5 hours a day on creative work. Creators must draw a large number of pictures every week, and as competition among webtoons grows fiercer, the number of drawings required per episode keeps increasing. Therefore, this study proposes generating webtoon background images with deep learning algorithms and using them in webtoon production. The main characters of a webtoon require much of the creator's originality, but backgrounds are relatively repetitive and demand less originality, so a model that can generate backgrounds in a style similar to the creator's would be useful for production. Background generation uses CycleGAN, which performs well in image-to-image translation, and CartoonGAN, which is specialized for cartoon-style image generation. This deep learning-based image generation is expected to shorten creators' working hours in an excessive work environment and to contribute to the convergence of webtoons and technology.
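
The study names CycleGAN and CartoonGAN, but the abstract contains no implementation details. The following is a compact, illustrative sketch of the CycleGAN-style objective (adversarial loss plus cycle consistency) with tiny stand-in networks; CartoonGAN adds further loss terms that are not shown here, and all sizes and weights are assumptions.

```python
# Illustrative sketch only: the core CycleGAN objective (LSGAN adversarial loss
# plus cycle consistency) with tiny stand-in networks; not the study's models.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.InstanceNorm2d(c_out), nn.ReLU(inplace=True))

class TinyGenerator(nn.Module):                      # e.g., photo -> webtoon-style background
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):                  # real webtoon background vs. generated
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

G, F = TinyGenerator(), TinyGenerator()              # G: photo->toon, F: toon->photo
D_toon = TinyDiscriminator()
adv, l1 = nn.MSELoss(), nn.L1Loss()

photo = torch.rand(1, 3, 64, 64)                     # stand-in batch
fake_toon = G(photo)
pred = D_toon(fake_toon)
g_adv = adv(pred, torch.ones_like(pred))             # generator tries to fool the discriminator
g_cyc = l1(F(fake_toon), photo)                      # cycle consistency: photo -> toon -> photo
g_loss = g_adv + 10.0 * g_cyc                        # lambda = 10 is the common CycleGAN default
```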

Study on the Modeling of Health Medical Examination Knowledge Base Construction using Data Analysis based on AI (인공지능 기반의 데이터 분석을 적용한 건강검진 지식 베이스 구축 모델링 연구)

  • Kim, Bong-Hyun
    • Journal of Convergence for Information Technology, v.10 no.6, pp.35-40, 2020
  • As we enter the society of the future, efforts to live more healthily are a major concern for modern people. In particular, technology for healthy living that combines ICT with a competitive healthcare industry is becoming the next growth engine. Therefore, in this paper, AI-based data analysis of examination results was applied to the health examination process, and research was conducted on knowledge base modeling that can improve the reliability of the overall judgment. To this end, an algorithm was designed using deep learning analysis to calculate and verify test result indices, and modeling that provides comprehensive examination information through judgment knowledge was studied. By applying the proposed modeling, big data on national health can be analyzed and utilized, which is expected to reduce medical expenses and improve public health.
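
The abstract stays at a high level, so the following is only a loose sketch of the described flow: a small neural network judges examination indices and the judgment is written into a knowledge base table. The features, grades, and schema are hypothetical.

```python
# Loose, hypothetical sketch: a small network grades examination indices and the
# judgment is stored in a knowledge base table. Features, grades, and schema are invented.
import sqlite3
import torch
import torch.nn as nn

judge = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 3))   # 3 grades: normal/caution/risk

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE knowledge (subject_id TEXT, grade INTEGER, comment TEXT)")

indices = torch.tensor([[5.4, 92.0, 118.0, 76.0, 23.1, 180.0]])        # stand-in test result indices
grade = judge(indices).argmax(dim=1).item()
conn.execute("INSERT INTO knowledge VALUES (?, ?, ?)",
             ("S001", grade, "auto-judged from examination indices"))
conn.commit()
```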

A Design of Estimate-information Filtering System using Artificial Intelligent Technology (인공지능 기술을 활용한 부동산 허위매물 필터링 시스템)

  • Moon, Jeong-Kyung
    • Convergence Security Journal, v.21 no.1, pp.115-120, 2021
  • O2O-based real estate brokerage websites and apps are increasing explosively. As a result, the brokerage environment has shifted from offline to online, and consumers benefit greatly in terms of time, cost, and convenience. Behind this convenience, however, users often lose time and money because of false or maliciously falsified listing information. Therefore, to reduce the harm that can occur in O2O-based real estate brokerage services, this study designed a false-listing filtering system that uses artificial intelligence technology to determine the authenticity of registered property information. The proposed method shows that the authenticity of listings registered in online real estate services can be determined and that consumers' loss of time and money can be reduced.
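
The abstract does not describe the model or the features used, so the sketch below stands in with a simple binary classifier over hypothetical listing features; it illustrates the filtering idea, not the paper's design.

```python
# Illustrative sketch only: a simple classifier over hypothetical listing features
# stands in for the AI-based authenticity filter described in the abstract.
import torch
import torch.nn as nn

# Hypothetical features: price deviation from area average, days listed,
# duplicate-photo flag, agent complaint count, description length
filter_net = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))

listing = torch.tensor([[-0.35, 3.0, 1.0, 2.0, 120.0]])
p_fake = torch.sigmoid(filter_net(listing)).item()
if p_fake > 0.5:
    print("flag listing for manual verification")    # keep a human in the loop
```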

Filter-mBART Based Neural Machine Translation Using Parallel Corpus Filtering (병렬 말뭉치 필터링을 적용한 Filter-mBART기반 기계번역 연구)

  • Moon, Hyeonseok; Park, Chanjun; Eo, Sugyeong; Park, JeongBae; Lim, Heuiseok
    • Journal of the Korea Convergence Society, v.12 no.5, pp.1-7, 2021
  • In the latest trend of machine translation research, a model is pretrained on a large monolingual corpus and then fine-tuned on a parallel corpus. Although many studies tend to increase the amount of data used in the pretraining stage, it is hard to say that more data necessarily improves machine translation performance. In this study, through an experiment based on the mBART model with parallel corpus filtering, we show that high-quality data can yield better machine translation performance even with a smaller amount of data. We argue that data quality matters more than data quantity, and these findings can serve as a guideline for building a training corpus.
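
The abstract does not list the filtering criteria, so the sketch below illustrates common parallel-corpus filtering heuristics (length ratio, deduplication, dropping empty or copied pairs) that could precede mBART fine-tuning; it is not the authors' filter.

```python
# Illustrative filtering heuristics only (length ratio, deduplication, dropping
# empty or copied pairs); the paper's actual filtering criteria are not in the abstract.
def filter_parallel_corpus(pairs, max_len_ratio=3.0, max_tokens=200):
    seen, kept = set(), []
    for src, tgt in pairs:
        src, tgt = src.strip(), tgt.strip()
        if not src or not tgt or src == tgt:
            continue                                 # drop empty or copied pairs
        s_len, t_len = len(src.split()), len(tgt.split())
        if s_len > max_tokens or t_len > max_tokens:
            continue                                 # drop overly long sentences
        if max(s_len, t_len) / max(1, min(s_len, t_len)) > max_len_ratio:
            continue                                 # drop badly aligned pairs
        if (src, tgt) in seen:
            continue                                 # deduplicate
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept

corpus = [("나는 학생이다 .", "I am a student ."), ("", "empty source"), ("hello", "hello")]
print(filter_parallel_corpus(corpus))                # only the first pair survives
```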

Design of Distributed Hadoop Full Stack Platform for Big Data Collection and Processing (빅데이터 수집 처리를 위한 분산 하둡 풀스택 플랫폼의 설계)

  • Lee, Myeong-Ho
    • Journal of the Korea Convergence Society, v.12 no.7, pp.45-51, 2021
  • With the rapid shift to non-face-to-face environments and mobile-first strategies, the explosive yearly growth of structured and unstructured data demands new decision-making and services based on big data in all fields. However, there have been few reference cases of using the Hadoop ecosystem as a standard platform, applicable in a practical environment, for collecting and loading this rapidly growing big data and then storing and processing the refined data in a relational database. Therefore, in this study, unstructured data retrieved by keyword from social network services was collected on Hadoop 2.0 through three virtual machine servers in a Spring Framework environment; the collected data was loaded into the Hadoop Distributed File System and HBase; and, based on the loaded data, a system was designed and implemented that stores standardized big data in a relational database using a morpheme analyzer. Future work should continue with clustering, classification, and machine learning analysis using Hive or Mahout for deeper data analysis.
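
The study's pipeline is Spring/Java-based with Hadoop 2.0 and HBase; as a language-consistent stand-in, the Python sketch below walks the same flow of raw keyword-search text into HDFS, morpheme extraction, and storage in a relational database. Host names, paths, and the choice of the Okt analyzer and SQLite are assumptions.

```python
# Python stand-in for the Spring/Java pipeline described in the abstract:
# raw keyword-search text -> HDFS -> morpheme extraction -> relational database.
# Host names, paths, and the Okt/SQLite choices are assumptions for illustration.
import sqlite3
from hdfs import InsecureClient                      # WebHDFS client
from konlpy.tag import Okt                           # Korean morpheme analyzer

raw_posts = ["비대면 빅데이터 플랫폼 설계", "하둡 기반 데이터 수집"]   # stand-in SNS search results

# 1) Load raw unstructured text into HDFS
client = InsecureClient("http://namenode:9870", user="hadoop")
client.write("/collect/sns/posts.txt", data="\n".join(raw_posts),
             encoding="utf-8", overwrite=True)

# 2) Morpheme analysis to standardize the unstructured text
okt = Okt()
rows = [(post, " ".join(okt.nouns(post))) for post in raw_posts]

# 3) Store the standardized result in a relational database
conn = sqlite3.connect("sns_keywords.db")
conn.execute("CREATE TABLE IF NOT EXISTS posts (raw TEXT, nouns TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)", rows)
conn.commit()
```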

A Study on Mechanism of Intelligent Cyber Attack Path Analysis (지능형 사이버 공격 경로 분석 방법에 관한 연구)

  • Kim, Nam-Uk; Lee, Dong-Gyu; Eom, Jung-Ho
    • Convergence Security Journal, v.21 no.1, pp.93-100, 2021
  • Intelligent cyber attacks not only disrupt system operations and leak information but also cause massive economic damage. Recent cyber attacks have distinct goals and use advanced tools and techniques to infiltrate their targets precisely. To minimize the damage from such intelligent attacks, it is necessary to block them at the beginning of, or during, the attack so that they cannot reach the target's core systems. Technologies that predict cyber attack paths and analyze attack risk levels using big data or artificial intelligence are being actively studied. In this paper, a cyber attack path analysis method using an attack tree and RFI is proposed as a basic algorithm for the development of an automated cyber attack path prediction system. The attack path is visualized with the attack tree, and at each attack step the priority of the paths that can move to the next step is determined with the RFI technique. The proposed mechanism can contribute to the development of an automated cyber attack path prediction system using big data and deep learning technology.
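
The abstract does not define how RFI is computed, so the sketch below only illustrates the surrounding structure: an attack tree whose candidate next steps are ranked by a per-step risk score standing in for RFI.

```python
# Illustrative data structure only: an attack tree whose candidate next steps are
# ranked by a per-step risk score; the score is a placeholder for RFI, which the
# abstract does not define.
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    name: str
    risk: float = 0.0                                # stand-in for this step's RFI value
    children: list = field(default_factory=list)

    def next_steps_by_priority(self):
        """Rank reachable next steps so the highest-risk path is examined first."""
        return sorted(self.children, key=lambda n: n.risk, reverse=True)

root = AttackNode("initial access", children=[
    AttackNode("spear-phishing mail", risk=0.8),
    AttackNode("exposed VPN credential", risk=0.6,
               children=[AttackNode("lateral movement to core system", risk=0.9)]),
])

for step in root.next_steps_by_priority():
    print(step.name, step.risk)                      # prioritized candidate attack paths
```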

A Study on Self-medication for Health Promotion of the Silver Generation

  • Oh, Soonhwan; Ryu, Gihwan
    • International Journal of Advanced Culture Technology, v.8 no.4, pp.82-88, 2020
  • With the development of medical care in the 21st century and the rapid progress of the fourth industrial revolution, electronic devices and household goods that account for the physical and mental aging of the silver generation have been developed, and health-related apps are widely developed and operated. The apps currently used by the silver generation mainly provide disease information focused on prevention rather than treatment, such as safety management apps for the elderly living alone and guidance on preventing diseases. Few apps provide information on foods that directly affect health and on the nutrients those foods contain, and research on apps that offer information about individual foods is insufficient. In this paper, we propose an app that analyzes food factors and supports self-medication for the health promotion of the silver generation. The app allows the silver generation to conveniently and easily obtain information such as the nutrients, calories, and efficacy of the foods they need. It also collects and categorizes healthy food information through a Textom solution-based crawling agent and stores highly relevant words in a data resource. In addition, wide & deep learning was applied to enable self-medication food recommendations. With this technique, the most appropriate healthy foods are suggested to people in the same age group with similar eating patterns and tastes, and users can receive recommendations for the customized healthy foods they need before eating. This makes it possible to obtain healthy food information conveniently through a smartphone interface customized for the elderly.
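
The abstract mentions wide & deep learning without architectural details, so the sketch below shows the generic wide & deep pattern, a linear part over crossed categorical features plus an MLP over dense features, with hypothetical feature sizes rather than the app's actual schema.

```python
# Generic wide & deep sketch: a linear "wide" part over crossed categorical
# features plus a "deep" MLP over dense features; sizes are hypothetical.
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    def __init__(self, n_cross=100, n_dense=16):
        super().__init__()
        self.wide = nn.Linear(n_cross, 1)            # memorization of feature crosses
        self.deep = nn.Sequential(                   # generalization over dense features
            nn.Linear(n_dense, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x_cross, x_dense):
        return torch.sigmoid(self.wide(x_cross) + self.deep(x_dense))

model = WideAndDeep()
x_cross = torch.zeros(4, 100)
x_cross[:, 7] = 1.0                                  # one-hot crossed feature (age group x food category)
x_dense = torch.randn(4, 16)                         # age, nutrient profile, taste preferences, ...
print(model(x_cross, x_dense))                       # probability of recommending each food
```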