• Title/Summary/Keyword: Generating Training Dataset


Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service (사용자 건강 상태알림 서비스의 상황인지를 위한 기계학습 모델의 학습 데이터 생성 방법)

  • Mun, Jong Hyeok;Choi, Jong Sun;Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering / v.9 no.1 / pp.25-32 / 2020
  • In context-aware systems, rule-based AI techniques have traditionally been used to abstract raw data into context information. However, as user requirements for services diversify and data usage grows, the rules become complicated, and rule-based models face technical limits in maintainability and in processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. Using such machine-learning-based models in a context-aware system requires a management process that periodically supplies training data. Previous work on machine-learning-based context awareness considered management processes such as generating and providing training data for operating several machine learning models, but the methods were limited to the specific systems they were applied to. In this paper, we propose a training data generation method that extends machine-learning-based context-aware systems. The proposed method defines a training data generation model that reflects the requirements of each machine learning model and generates training data for each one accordingly. In the experiment, the generation model is defined from the training data schema of a cardiac status analysis model for older adults in a health status notification service, and training data are generated by applying the model in a real software environment. We then compare accuracy by training the machine learning model on the generated data, verifying the validity of the generated training data.
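
The abstract describes the generation model only at a high level; as a rough, hypothetical illustration of the idea, the Python sketch below (all names and fields invented) maps raw context records onto per-model training rows through a declarative schema.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Any

# Hypothetical illustration: a declarative schema that turns raw context
# records into training rows tailored to one downstream ML model.
@dataclass
class TrainingDataSchema:
    model_name: str                              # target ML model
    features: Dict[str, Callable[[dict], Any]]   # column name -> extractor
    label: Callable[[dict], Any]                 # label extractor

    def generate(self, records: List[dict]) -> List[dict]:
        rows = []
        for rec in records:
            row = {name: fn(rec) for name, fn in self.features.items()}
            row["label"] = self.label(rec)
            rows.append(row)
        return rows

# Example schema for a (hypothetical) cardiac status analysis model.
cardiac_schema = TrainingDataSchema(
    model_name="cardiac_status",
    features={
        "heart_rate": lambda r: r["hr"],
        "hrv": lambda r: r["rr_std"],
        "age": lambda r: r["user"]["age"],
    },
    label=lambda r: r["status"],   # e.g. "normal" / "abnormal"
)

training_rows = cardiac_schema.generate([
    {"hr": 72, "rr_std": 42.0, "user": {"age": 71}, "status": "normal"},
])
print(training_rows)
```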

Synthetic Image Dataset Generation for Defense using Generative Adversarial Networks (국방용 합성이미지 데이터셋 생성을 위한 대립훈련신경망 기술 적용 연구)

  • Yang, Hunmin
    • Journal of the Korea Institute of Military Science and Technology / v.22 no.1 / pp.49-59 / 2019
  • Generative adversarial networks (GANs) have received great attention in the machine learning field for their capacity to model high-dimensional, complex data distributions implicitly and to generate new samples from the learned distribution. This paper investigates the training methodology, architecture, and various applications of GANs, and conducts an experimental evaluation of synthetic image dataset generation for defense using two types of GANs: the deep convolutional GAN (DCGAN) for military image generation, and the cycle-consistent GAN (CycleGAN) for visible-to-infrared image translation. Each model can yield a great diversity of high-fidelity synthetic images relative to its training images. This result opens up the possibility of training neural networks on inexpensive synthetic images while avoiding the enormous expense of collecting large amounts of hand-annotated real data.
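
For reference, here is a minimal PyTorch sketch of a DCGAN-style generator of the kind the abstract names; the layer sizes and 64x64 output are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Minimal DCGAN-style generator: transposed convolutions upsample a
# latent vector z into a 64x64 RGB image. Sizes are illustrative only.
class Generator(nn.Module):
    def __init__(self, z_dim=100, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 8), nn.ReLU(True),       # 4x4
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 4), nn.ReLU(True),       # 8x8
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2), nn.ReLU(True),       # 16x16
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base), nn.ReLU(True),           # 32x32
            nn.ConvTranspose2d(base, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                     # 64x64 in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

fake = Generator()(torch.randn(16, 100, 1, 1))   # batch of 16 samples
print(fake.shape)   # torch.Size([16, 3, 64, 64])
```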

A Study on Robustness Evaluation and Improvement of AI Model for Malware Variation Analysis (악성코드 변종 분석을 위한 AI 모델의 Robust 수준 측정 및 개선 연구)

  • Lee, Eun-gyu;Jeong, Si-on;Lee, Hyun-woo;Lee, Tea-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.5 / pp.997-1008 / 2022
  • Today, AI (artificial intelligence) technology is being extensively researched in various fields, including malware detection. Before an AI system is entrusted with protecting important decisions and resources, it must be a reliable model; and because such models depend on their training dataset, they should be verified to be robust against new attacks. Rather than creating new malware from scratch, attackers mass-produce variants of previously detected malware until one succeeds in evading detection, and most attacks that lead AI models to misclassify, such as adversarial attacks, are made by slightly modifying past attacks. Robust models that can defend against these variants are needed, yet a model's robustness level cannot be evaluated with accuracy and recall, the metrics most widely used to evaluate AI. In this paper, we experiment with a framework that evaluates the robustness level by generating adversarial samples with the C&W attack, one of the standard adversarial attacks, and that improves it through adversarial training. Experiments on a malware dataset confirm both the limitations and the potential of the proposed method in the malware detection field.
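
The C&W attack mentioned above is optimization-based; the sketch below implements a heavily simplified L2 variant in PyTorch on a placeholder feature-vector classifier, followed by one adversarial-training step. The constant c, step counts, and model are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier over fixed-length malware feature vectors (placeholder).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

def cw_l2_attack(x, y, c=1.0, steps=100, lr=0.01, kappa=0.0):
    """Simplified Carlini-Wagner L2 attack: optimize a perturbation delta
    that trades off ||delta||^2 against a margin-based misclassification
    loss. Omits the tanh box constraint of the full attack."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        mask = F.one_hot(y, logits.size(1)).bool()
        other_logit = logits.masked_fill(mask, float("-inf")).amax(1)
        # f(x+delta): positive while the true class still wins by margin kappa
        f = torch.clamp(true_logit - other_logit + kappa, min=0)
        loss = (delta ** 2).sum(dim=1).mean() + c * f.mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return (x + delta).detach()

# One adversarial-training step: train the classifier on attacked inputs.
x, y = torch.randn(8, 64), torch.randint(0, 2, (8,))
x_adv = cw_l2_attack(x, y)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(x_adv), y)
opt.zero_grad(); loss.backward(); opt.step()
```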

Genetic Algorithm Based Attribute Value Taxonomy Generation for Learning Classifiers with Missing Data (유전자 알고리즘 기반의 불완전 데이터 학습을 위한 속성값계층구조의 생성)

  • Joo Jin-U;Yang Ji-Hoon
    • The KIPS Transactions:PartB / v.13B no.2 s.105 / pp.133-138 / 2006
  • Learning with attribute value taxonomies (AVTs) has shown that it is possible to construct accurate, compact, and robust classifiers from a partially missing dataset (a dataset whose attribute values are specified at different levels of precision). Yet in many cases AVTs are obtained from experts or people with specialized domain knowledge. Such user-provided AVTs can be time-consuming to construct and can be misguided during the building process; moreover, experts are occasionally unavailable to provide an AVT for a particular domain. Against this background, this paper introduces an AVT-generating method called GA-AVT-Learner, which finds a near-optimal AVT for a given training dataset using a genetic algorithm. We conducted experiments generating AVTs through GA-AVT-Learner on a variety of real-world datasets, comparing them with other AVTs such as HAC-AVTs and user-provided AVTs. The experiments show that GA-AVT-Learner provides AVTs that yield more accurate and compact classifiers and improve performance when learning from missing data.
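
The abstract does not spell out the GA-AVT-Learner procedure; the skeleton below shows the generic genetic-algorithm loop it presumably instantiates, with the AVT encoding, crossover, mutation, and fitness function left as hypothetical callables (in the paper, fitness would score the accuracy and compactness of a classifier trained under a candidate AVT).

```python
import random

# Generic GA skeleton of the kind GA-AVT-Learner instantiates.
# encode/crossover/mutate/fitness are hypothetical stand-ins: an
# individual would encode a candidate AVT. Assumes a population >= 4.
def genetic_search(init_population, fitness, crossover, mutate,
                   generations=50, elite=2, p_mut=0.1):
    pop = list(init_population)
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        nxt = scored[:elite]                        # elitism
        while len(nxt) < len(pop):
            # select two parents from the fitter half of the population
            a, b = random.sample(scored[: len(scored) // 2], 2)
            child = crossover(a, b)
            if random.random() < p_mut:
                child = mutate(child)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```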

A method of generating virtual shadow dataset of buildings for the shadow detection and removal

  • Kim, Kangjik;Chun, Junchul
    • Journal of Internet Computing and Services / v.21 no.5 / pp.49-56 / 2020
  • Detecting shadows in images and restoring or removing them is a very challenging task in computer vision. Traditional research used color information, edges, and thresholds to detect shadows, but suffered from errors such as ignoring the penumbra region of a shadow or detecting dark areas that are not shadows at all. Deep learning has been successful in various fields of computer vision, and research on applying it to shadow detection and removal has begun. However, collecting data for network training is difficult and time-consuming, with many constraints on shooting conditions; shadow data for buildings and satellite images are especially hard to obtain, which has hindered progress. In this paper, we propose a method for generating shadow data for buildings and satellite imagery using Unity3D. In a virtual Unity scene, 3D objects that exist in the real world are placed and lit so that shadows can be rendered and captured. This yields all three image types needed for training deep learning networks for shadow detection and removal: the shadow-free image, the shadow image, and the shadow mask. The proposed method contributes big data to building and satellite shadow detection and removal research, where the absence of data makes training deep networks difficult, and it offers a second-best option: virtual data can be used to test deep learning networks before real data are applied.
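
Downstream training consumes the three aligned image types listed above; a minimal PyTorch Dataset for such triplets might look like the following (the directory layout and file naming are assumptions, not the paper's).

```python
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset

# Assumed layout: root/{shadow_free,shadow,mask}/<same_name>.png
class ShadowTripletDataset(Dataset):
    """Loads aligned (shadow-free, shadow, mask) triplets, e.g. as
    rendered from a Unity3D scene under different lighting."""
    def __init__(self, root, transform=None):
        self.root = Path(root)
        self.names = sorted(p.name for p in (self.root / "shadow").glob("*.png"))
        self.transform = transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        name = self.names[i]
        free = Image.open(self.root / "shadow_free" / name).convert("RGB")
        shadow = Image.open(self.root / "shadow" / name).convert("RGB")
        mask = Image.open(self.root / "mask" / name).convert("L")
        if self.transform:
            free, shadow, mask = map(self.transform, (free, shadow, mask))
        return shadow, mask, free   # input, detection target, removal target
```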

The Method for Generating Recommended Candidates through Prediction of Multi-Criteria Ratings Using CNN-BiLSTM

  • Kim, Jinah;Park, Junhee;Shin, Minchan;Lee, Jihoon;Moon, Nammee
    • Journal of Information Processing Systems / v.17 no.4 / pp.707-720 / 2021
  • To improve the accuracy of recommendation systems, multi-criteria recommendation has been widely researched. However, it is highly complicated to extract the preferred features of users and items from the data. To this end, subjective indicators that express a user's priorities should be derived for personalized recommendation. In this study, we propose a method for generating recommendation candidates by predicting multi-criteria ratings from reviews and using them to derive user priorities. Using a deep learning model based on a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM), multi-criteria ratings are predicted from reviews. These ratings are then aggregated in a linear regression model that predicts the overall rating; beyond the overall rating itself, the trained weights of this layer serve as the user's priorities. Based on this, a new score matrix for recommendation is derived by computing user-item similarity per criterion, and items suited to the user are proposed. Experiments were conducted on a real "TripAdvisor" dataset; for performance evaluation, the proposed method was compared with a general recommendation system based on singular value decomposition, and the results demonstrate its high performance.
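
A minimal PyTorch sketch of the CNN-BiLSTM rating predictor described above; the vocabulary size, criteria count, and final linear aggregation are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CNNBiLSTMRatings(nn.Module):
    """Predicts k per-criterion ratings from a tokenized review, then
    aggregates them linearly into an overall rating; the aggregation
    weights can be read off as the user-priority vector."""
    def __init__(self, vocab=20000, emb=128, k_criteria=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, 64, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(64, 32, bidirectional=True, batch_first=True)
        self.criteria_head = nn.Linear(64, k_criteria)  # per-criterion ratings
        self.overall = nn.Linear(k_criteria, 1)         # weights ~ priorities

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)   # (batch, emb, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)               # h: (2, batch, 32)
        h = torch.cat([h[0], h[1]], dim=1)     # (batch, 64)
        criteria = self.criteria_head(h)
        return criteria, self.overall(criteria)

model = CNNBiLSTMRatings()
criteria, overall = model(torch.randint(0, 20000, (4, 50)))
print(criteria.shape, overall.shape)   # (4, 5) (4, 1)
```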

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.1-19 / 2018
  • Large amounts of data are now available for research and business to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed with deep learning, which is now widely used for estimation, classification, and prediction problems. The fashion business in particular adopts deep learning for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core of these applications is image classification using convolutional neural networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs propagate through to outputs, and its layered structure is well suited to image classification: convolutional layers generate feature maps, pooling layers reduce their dimensionality, and fully connected layers classify the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions such as the apparel alone or a professional model wearing it. Such images may not train the model effectively for classifying street-fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. We therefore propose to train the model with a runway apparel image dataset, which captures mobility; this exposes the classifier to far more variable data and improves its adaptation to diverse query images. To achieve both convergence and generalization, we apply transfer learning, dividing training into pre-training and fine-tuning stages. First, we pre-train the architecture on the large-scale ImageNet dataset, which consists of 1.2 million images in 1000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network on our own runway image dataset. Since we could not find any previously published runway dataset, we collected one from Google Image Search, attaining 2426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and the proposed model achieved 67.2% accuracy on the final test. To the best of our knowledge, no previous study has trained an apparel image classifier on a runway image dataset; our contribution is the idea of training on images that capture all possible postures, which we denote as mobility. Moreover, by applying transfer learning and using the checkpoints and parameters provided by TensorFlow Slim, we could cut the time spent training the classifier to 6 minutes per experiment. The model can serve many business applications where the query image may be a runway image, product image, or street-fashion image: runway queries can power a brand-search mobile app during fashion week, street-style queries can be classified and labeled by brand or style during editorial work, and website queries can be processed by e-commerce services that provide item information or recommend similar items.
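
The two-stage transfer-learning recipe (ImageNet pre-training, then fine-tuning on the small runway set) looks roughly like this; the authors used TensorFlow Slim and GoogLeNet checkpoints, so the torchvision sketch below is an analogous illustration, not their code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage 1 is inherited: load GoogLeNet weights pre-trained on ImageNet.
net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)

# Stage 2: fine-tune for the 32 runway brand classes.
for p in net.parameters():
    p.requires_grad = False                       # freeze pre-trained features
net.fc = nn.Linear(net.fc.in_features, 32)        # new trainable head
net.eval()   # keeps BN statistics frozen; GoogLeNet then returns plain logits

optimizer = torch.optim.SGD(net.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 32, (8,))
loss = criterion(net(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```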

Using Neural Network Algorithm for Bead Visualization (뉴럴 네트워크 알고리즘을 이용한 비드 가시화)

  • Koo, Chang-Dae;Yang, Hyeong-Seok;Kim, Jung-Yeong;Shin, Sang-Ho
    • Journal of Welding and Joining / v.31 no.5 / pp.35-40 / 2013
  • In this paper, we propose a tangible virtual-reality representation method that uses a haptic device together with the morphological features of beads created by flux cored arc welding (FCAW). Virtual reality has been rising as a way to reduce consumable materials and the risks of welding training, and virtual welding training is expected to maximize these benefits. The proposed method builds a database by varying input factors such as work angle, travel angle, speed, and CTWD, and visualizes the bead by extracting optimal morphological feature information using a neural network algorithm. The database was built without error by extracting data from an automatic welding robot, and the neural network was configured with the dataset that showed the highest accuracy across repeated verification runs. The bead is created in virtual reality from the extracted morphological features: using a bead generation algorithm and a calibration algorithm, we render the final bead shape, overlapping beads over time so that the generated shape matches the real database. The best advantage of virtual welding training is that abundant data can be obtained for training evaluation. In this paper, we represented beads with shapes similar to those produced by real FCAW, thereby reducing the gap between virtual and real welding training, and confirmed that a more effective evaluation system can maximize the performance of education.
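
The parameter-to-bead-feature mapping the abstract describes is a regression problem; a minimal sketch with scikit-learn follows, where the feature names and all numeric values are made-up placeholders rather than the paper's welding database.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Inputs: (work angle, travel angle, speed, CTWD); outputs: bead shape
# features such as width and height. All values are invented placeholders.
X = np.array([
    [90.0, 10.0, 30.0, 15.0],
    [85.0, 15.0, 35.0, 12.0],
    [80.0,  5.0, 25.0, 18.0],
    [95.0, 20.0, 40.0, 14.0],
])
y = np.array([            # [bead width mm, bead height mm]
    [8.2, 2.9],
    [7.5, 2.6],
    [9.1, 3.3],
    [7.0, 2.4],
])

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict([[88.0, 12.0, 32.0, 15.0]]))   # predicted bead features
```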

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering / v.11 no.11 / pp.465-472 / 2022
  • In this paper, a style synthesis network is trained to generate style-synthesized video: StyleGAN is trained for the style synthesis, and a video synthesis network produces the video. To address the unstable transfer of gaze and expression, 3D face reconstruction technology is applied so that important features such as head pose, gaze, and expression can be controlled using 3D face information. In addition, by training the Head2head network's discriminators for dynamics, mouth shape, image, and gaze, a stable style-synthesized video with greater plausibility and consistency can be created. Using the FaceForensics dataset and the MetFaces dataset, we confirmed improved performance in converting one video into another while maintaining consistent movement of the target face, and in generating natural data through video synthesis using 3D face information from the source video.
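
The discriminator ensemble above is specific to Head2head; purely as a toy illustration of training a generator against several region-specific discriminators, here is a hedged PyTorch sketch in which every module and crop is a placeholder.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a multi-discriminator objective in the spirit of
# Head2head: separate discriminators judge the whole frame, the mouth
# crop, and the eye/gaze region, and their adversarial losses are summed
# for the generator. All modules and crops here are placeholders.
class PatchDiscriminator(nn.Module):
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, 2, 1),        # patch-level real/fake scores
        )

    def forward(self, x):
        return self.net(x)

discs = {name: PatchDiscriminator() for name in ["image", "mouth", "gaze"]}
bce = nn.BCEWithLogitsLoss()

def generator_adv_loss(crops):   # crops: dict name -> generated region
    total = 0.0
    for name, fake in crops.items():
        score = discs[name](fake)
        total = total + bce(score, torch.ones_like(score))  # fool each D
    return total

fake_frame = torch.randn(2, 3, 64, 64)
loss = generator_adv_loss({"image": fake_frame,
                           "mouth": fake_frame[:, :, 32:, 16:48],
                           "gaze": fake_frame[:, :, :32, :]})
```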

A Novel Text to Image Conversion Method Using Word2Vec and Generative Adversarial Networks

  • LIU, XINRUI;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.401-403 / 2019
  • In this paper, we propose a generative adversarial network (GAN)-based text-to-image generation method. In many natural language processing tasks, word representations are determined by their term frequency-inverse document frequency scores. Word2Vec is a type of neural network model that, given an unlabeled corpus, produces vectors expressing the semantics of the words in the corpus; an image is then generated by GAN training conditioned on the obtained vector. Thanks to this understanding of the words, we can generate higher-quality and more realistic images. Our GAN structure is based on deep convolutional neural networks and pixel recurrent neural networks. Comparing generated images with real images, we obtain about 88% similarity on the Oxford-102 flowers dataset.
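
As a rough sketch of the pipeline's first half, the code below trains Word2Vec on a toy corpus with gensim and feeds an averaged caption vector into a small conditional generator; the corpus, vector size, and generator head are stand-ins, not the paper's DCGAN/PixelRNN architecture.

```python
import numpy as np
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Train Word2Vec on a toy corpus and use the averaged word vectors of a
# caption as the conditioning input of a generator. All sizes invented.
corpus = [["purple", "flower", "with", "five", "petals"],
          ["yellow", "flower", "with", "round", "petals"]]
w2v = Word2Vec(corpus, vector_size=32, min_count=1, epochs=50)

def caption_vector(words):
    # average the word embeddings of a caption into one conditioning vector
    return torch.tensor(np.mean([w2v.wv[w] for w in words], axis=0))

class CondGenerator(nn.Module):
    def __init__(self, z_dim=64, cond_dim=32):
        super().__init__()
        self.fc = nn.Linear(z_dim + cond_dim, 3 * 32 * 32)

    def forward(self, z, cond):
        x = torch.tanh(self.fc(torch.cat([z, cond], dim=1)))
        return x.view(-1, 3, 32, 32)

cond = caption_vector(["purple", "flower"]).unsqueeze(0)
img = CondGenerator()(torch.randn(1, 64), cond)
print(img.shape)   # torch.Size([1, 3, 32, 32])
```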