• Title/Summary/Keyword: AI generated images

The Influence of Creator Information on Preference for Artificial Intelligence- and Human-generated Artworks

  • Nam, Seungmin;Song, Jiwon;Kim, Chai-Youn
    • Science of Emotion and Sensibility
    • /
    • v.25 no.3
    • /
    • pp.107-116
    • /
    • 2022
  • Purpose: Researchers have shown that aesthetic judgments of artworks depend on context, such as the authenticity of an artwork (Newman & Bloom, 2011) and the location in which it is displayed (Kirk et al., 2009; Silveira et al., 2015). The present study examines whether contextual information about the creator, namely whether an artwork was created by a human or by artificial intelligence (AI), influences viewers' preference judgments of an artwork. Methods: Images of Impressionist landscape paintings were selected as human-made artworks. AI-made artwork stimuli were created with Google's Deep Dream Generator, which mimics the Impressionist style via deep learning algorithms. Participants performed a preference-rating task on each of the 108 artwork stimuli, each accompanied by one of the two creator labels. After this task, an art experience questionnaire (AEQ) was given to participants to examine whether individual differences in art experience influence their preference judgments. Results: With AEQ scores as a covariate in a two-way ANCOVA, stimuli labeled as human-made were preferred over stimuli labeled as AI-made. Regarding the type of stimuli, viewers preferred AI-made stimuli to human-made stimuli. There was no interaction between the two factors. Conclusion: These results suggest that preferences for visual artworks are influenced by contextual information about the creator when individual differences in art experience are controlled.
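
For readers unfamiliar with the analysis, the two-way ANCOVA described above can be sketched as follows with statsmodels; the column names and the synthetic data are illustrative assumptions, not the study's actual dataset.

```python
# Minimal sketch of a two-way ANCOVA with an art-experience covariate
# (hypothetical column names and synthetic data, not the paper's data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "preference": rng.uniform(1, 7, n),                   # preference rating
    "creator_label": rng.choice(["human", "AI"], n),      # label shown to the viewer
    "stimulus_type": rng.choice(["human-made", "AI-made"], n),
    "aeq": rng.uniform(0, 100, n),                        # art experience score (covariate)
})

# Creator label x stimulus type, with AEQ score as the covariate.
model = ols("preference ~ C(creator_label) * C(stimulus_type) + aeq", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```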

MalDC: Malicious Software Detection and Classification using Machine Learning

  • Moon, Jaewoong;Kim, Subin;Park, Jangyong;Lee, Jieun;Kim, Kyungshin;Song, Jaeseung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.5
    • /
    • pp.1466-1488
    • /
    • 2022
  • Recently, the importance and necessity of artificial intelligence (AI), especially machine learning, have been emphasized. Studies are actively underway to solve complex and challenging problems through AI systems such as intelligent CCTVs, intelligent AI security systems, and AI surgical robots. Information security, which involves analyzing and responding to software security vulnerabilities, is no exception and is recognized as one of the fields in which significant results are expected when AI is applied. This is because the frequency of malware incidents is gradually increasing, while the available security technologies are limited to the use of software security experts or source-code analysis tools. We conducted a study on MalDC, a technique that converts malware into images and classifies them using machine learning. MalDC showed good performance and was able to analyze and classify different types of malware. MalDC applies a preprocessing step to minimize the noise generated in the image-conversion process and employs an image-augmentation technique to reinforce the insufficient dataset, thereby improving the accuracy of malware classification. To verify the feasibility of our method, we tested the malware-classification technique used by MalDC on a dataset provided by Microsoft and on malware data collected by the Korea Internet & Security Agency (KISA). Consequently, an accuracy of 97% was achieved.
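
The byte-to-image conversion that underlies this kind of approach can be illustrated with a short sketch; MalDC's actual preprocessing, image width, and augmentation steps are not reproduced here, so the choices below are assumptions.

```python
# Sketch of the grayscale conversion commonly used for image-based malware
# classification (image width and zero-padding are assumptions, not MalDC's
# published preprocessing).
import numpy as np
from PIL import Image

def binary_to_image(path: str, width: int = 256) -> Image.Image:
    """Map the raw bytes of an executable to a grayscale image."""
    data = np.fromfile(path, dtype=np.uint8)          # raw bytes of the file
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[:len(data)] = data                          # zero-pad the final row
    return Image.fromarray(padded.reshape(height, width), mode="L")

# binary_to_image("sample.exe").save("sample.png")  # image then goes to a CNN classifier
```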

Development of a transfer learning based detection system for burr image of injection molded products (전이학습 기반 사출 성형품 burr 이미지 검출 시스템 개발)

  • Yang, Dong-Cheol;Kim, Jong-Sun
    • Design & Manufacturing
    • /
    • v.15 no.3
    • /
    • pp.1-6
    • /
    • 2021
  • An artificial neural network model based on a deep learning algorithm is known to be more accurate than humans in image classification, but it still requires a large amount of training data, on the scale of big data. Therefore, various techniques are being studied to build high-precision artificial neural network models even with small datasets, and transfer learning is regarded as an excellent alternative. Accordingly, the purpose of this study is to develop an artificial neural network system that can classify burr images of light guide plate products with 99% accuracy using a transfer learning technique. Specifically, 150 images each of normal and burr light guide plate products were taken at various angles, heights, and positions. After preprocessing steps such as thresholding and image augmentation, a total of 3,300 images were generated; 2,970 were set aside for training and the remaining 330 for testing model accuracy. For transfer learning, a base model was built from the NASNet-Large model pretrained on 14 million ImageNet images. The final accuracy test confirmed 99% image-classification accuracy on both the training and test images. Based on these results, this work is expected to help develop an integrated AI production management system by training on not only burr images but also images of various other defects.
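
A minimal sketch of this transfer-learning setup in Keras, assuming an ImageNet-pretrained NASNet-Large backbone with a frozen feature extractor and a binary normal/burr head; the dataset objects train_ds and val_ds are placeholders, not the paper's pipeline.

```python
# Transfer-learning sketch: frozen NASNet-Large backbone + small binary head.
import tensorflow as tf

base = tf.keras.applications.NASNetLarge(
    weights="imagenet", include_top=False, input_shape=(331, 331, 3))
base.trainable = False                        # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # normal vs. burr
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # placeholder datasets
```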

A Discussion on AI-based Automated Picture Creations (인공지능기반의 자동 창작 영상에 관한 논구)

  • Junghoe Kim;Joonsung Yoon
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.723-730
    • /
    • 2024
  • To trace changes in the concept and understanding of automatically generated images, this study analogically examines the creative methods of photography and cinema, which represent established image media, alongside AI-based image-creation methods and the notion of 'automaticity', and discusses how new forms of automatic image creation can be understood and what possibilities they offer. When photography and cinema were invented, they were framed as fields of 'automatic creation' in contrast to traditional art genres such as painting. Recently, as AI has been applied to video production, the concept of 'automatic creation' has expanded, and experimental works that freely cross the boundaries of literature, art, photography, and film have flourished. By utilizing technologies such as machine learning and deep learning, AI-driven automated creation allows the machine itself to carry out the creative process. Automated creation using AI can greatly improve efficiency, but it also risks compromising the personal and subjective nature of art. The problem stems from the fact that AI cannot completely replace human creativity.

Deep Learning Application of Gamma Camera Quality Control in Nuclear Medicine (핵의학 감마카메라 정도관리의 딥러닝 적용)

  • Jeong, Euihwan;Oh, Joo-Young;Lee, Joo-Young;Park, Hoon-Hee
    • Journal of radiological science and technology
    • /
    • v.43 no.6
    • /
    • pp.461-467
    • /
    • 2020
  • In the field of nuclear medicine, errors sometimes occur because the assessment of gamma-camera uniformity relies on the naked eye of the evaluator. To minimize these errors, we created an artificial intelligence model based on a CNN algorithm and assessed its usefulness. We produced 20,000 normal and partial cold-region images using Python and trained a ResNet18 model. The training results showed an accuracy, specificity, and sensitivity of 95.01%, 92.30%, and 97.73%, respectively. In the confusion-matrix evaluation, the artificial intelligence achieved an accuracy, specificity, and sensitivity of 94.00%, 91.50%, and 96.80%, respectively, whereas the expert group achieved 69.00%, 64.00%, and 74.00%, respectively; the artificial intelligence therefore outperformed the expert group. In addition, having the radiological technologist and the AI check the results together can reduce errors that may occur during the quality-control process, providing a better examination environment for patients, convenience for radiologists, and improved work efficiency.
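
A minimal PyTorch sketch of a ResNet18 classifier for uniformity images (normal vs. partial cold region); the optimizer and learning rate are assumptions rather than the paper's actual configuration.

```python
# ResNet18 with a two-class head for uniformity-image classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)   # newer torchvision; older versions use pretrained=False
model.fc = nn.Linear(model.fc.in_features, 2)    # classes: normal / partial cold region

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # assumed hyperparameters

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised update on a batch of uniformity images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```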

Preliminary study of artificial intelligence-based fuel-rod pattern analysis of low-quality tomographic image of fuel assembly

  • Seong, Saerom;Choi, Sehwan;Ahn, Jae Joon;Choi, Hyung-joo;Chung, Yong Hyun;You, Sei Hwan;Yeom, Yeon Soo;Choi, Hyun Joon;Min, Chul Hee
    • Nuclear Engineering and Technology
    • /
    • v.54 no.10
    • /
    • pp.3943-3948
    • /
    • 2022
  • Single-photon emission computed tomography is one of the reliable pin-by-pin verification techniques for spent-fuel assemblies. One of the challenges with this technique is to increase the total fuel-assembly verification speed while maintaining high verification accuracy. The aim of the present study, therefore, was to develop an artificial intelligence (AI) algorithm-based tomographic image analysis technique for partial-defect verification of fuel assemblies. With the Monte Carlo (MC) simulation technique, a tomographic image dataset consisting of 511 fuel-rod patterns of a 3 × 3 fuel assembly was generated, and with these images the VGG16, GoogLeNet, and ResNet models were trained. In an evaluation of these models for different training dataset sizes, the ResNet model showed 100% pattern-estimation accuracy. Moreover, across different tomographic image qualities, all of the models showed almost 100% pattern-estimation accuracy, even for low-quality images with unrecognizable fuel patterns. This study verified that an AI model can be effectively employed for accurate and fast partial-defect verification of fuel assemblies.
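
A hedged sketch of the pattern-classification idea, treating each of the 511 fuel-rod patterns as one class; the single-channel input, the image size, and the ResNet-18 variant are illustrative assumptions, not the paper's published architecture.

```python
# Rod-pattern classification over tomographic images: one class per pattern.
import torch
import torch.nn as nn
from torchvision import models

NUM_PATTERNS = 511    # number of fuel-rod patterns in the simulated dataset

model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # single-channel tomogram
model.fc = nn.Linear(model.fc.in_features, NUM_PATTERNS)
model.eval()

tomogram = torch.randn(1, 1, 224, 224)            # placeholder reconstructed image
with torch.no_grad():
    predicted_pattern = model(tomogram).argmax(dim=1)   # index of the estimated rod layout
```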

Application of Deep Learning to Solar Data: 1. Overview

  • Moon, Yong-Jae;Park, Eunsu;Kim, Taeyoung;Lee, Harim;Shin, Gyungin;Kim, Kimoon;Shin, Seulki;Yi, Kangwoo
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.1
    • /
    • pp.51.2-51.2
    • /
    • 2019
  • Multi-wavelength observations have become very popular in astronomy. Even though there are correlations among images from different sensors, it is not easy to translate from one to another. In this study, we apply a deep learning method for image-to-image translation, based on conditional generative adversarial networks (cGANs), to solar images. To examine the validity of the method for scientific data, we consider several different types of pairs: (1) generation of SDO/EUV images from SDO/HMI magnetograms, (2) generation of backside magnetograms from STEREO/EUVI images, (3) generation of EUV & X-ray images from Carrington sunspot drawings, and (4) generation of solar magnetograms from Ca II images. The AI-generated images are remarkably consistent with the actual ones. In addition, we apply a convolutional neural network to the forecasting of solar flares and find that our method is better than the conventional method. Our study also shows that forecasting solar proton flux profiles with a Long Short-Term Memory (LSTM) method outperforms the autoregressive method. We will discuss several applications of these methodologies for scientific research.
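
A brief sketch of a pix2pix-style generator update for this kind of conditional image-to-image translation (for example, magnetogram to EUV image); the generator G, discriminator D, and the L1 weighting are assumptions, since the paper's exact architecture and loss are not given here.

```python
# Conditional-GAN (pix2pix-style) generator step: adversarial loss + L1 term.
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()
LAMBDA_L1 = 100.0    # common pix2pix weighting; the paper's value is not stated

def generator_step(G: nn.Module, D: nn.Module, src: torch.Tensor,
                   target: torch.Tensor, opt_G: torch.optim.Optimizer) -> float:
    """One generator update: src is the input image (e.g. a magnetogram),
    target is the paired observed image (e.g. the corresponding EUV image)."""
    opt_G.zero_grad()
    fake = G(src)
    pred_fake = D(torch.cat([src, fake], dim=1))   # discriminator conditioned on the input
    loss = adv_loss(pred_fake, torch.ones_like(pred_fake)) + LAMBDA_L1 * l1_loss(fake, target)
    loss.backward()
    opt_G.step()
    return loss.item()
```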

A Study on the Realization of Virtual Simulation Face Based on Artificial Intelligence

  • Zheng-Dong Hou;Ki-Hong Kim;Gao-He Zhang;Peng-Hui Li
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.2
    • /
    • pp.152-158
    • /
    • 2023
  • In recent years, as computer-generated imagery has been applied to more industries, realistic facial animation has become an important research topic. The current solution for realistic facial animation is to create realistically rendered 3D characters, but 3D characters created by traditional methods always differ from the actual person and require high staff and time costs. Deepfake technology can achieve realistic faces and replicate facial animation: once the AI model is trained, the facial details and animation are produced automatically by the computer, and the model can be reused, thus reducing the human and time costs of realistic facial animation. In addition, this study summarizes how human face information is captured, proposes a new workflow for video-to-image conversion, and demonstrates that the new scheme yields higher-quality images and swap results, as evaluated with no-reference image quality assessment.
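
The abstract evaluates the new workflow with no-reference image quality assessment; the exact metric is not specified here, so the sketch below uses a common stand-in, the variance-of-Laplacian sharpness score, to compare frames extracted by two pipelines.

```python
# Simple no-reference quality proxy: variance of the Laplacian as a sharpness
# score (a stand-in; the paper's actual NR-IQA metric is not specified here).
import cv2

def sharpness_score(image_path: str) -> float:
    """Higher values indicate sharper (less blurred) frames."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

# e.g. compare frames from the old and new video-to-image pipelines:
# print(sharpness_score("frame_old.png"), sharpness_score("frame_new.png"))
```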

Increasing Accuracy of Stock Price Pattern Prediction through Data Augmentation for Deep Learning (데이터 증강을 통한 딥러닝 기반 주가 패턴 예측 정확도 향상 방안)

  • Kim, Youngjun;Kim, Yeojeong;Lee, Insun;Lee, Hong Joo
    • The Journal of Bigdata
    • /
    • v.4 no.2
    • /
    • pp.1-12
    • /
    • 2019
  • As artificial intelligence (AI) technology develops, it is being applied to various fields such as image, voice, and text, and has shown fine results in certain areas. Researchers have also tried to predict the stock market using AI. Predicting the stock market is known to be a difficult problem since the market is affected by various factors such as the economy and politics. In the field of AI, there have been attempts to predict the ups and downs of stock prices by studying stock-price patterns with various machine learning techniques. This study suggests a way of predicting stock-price patterns based on a Convolutional Neural Network (CNN). A CNN classifies images by extracting features through convolutional layers, so this study classifies candlestick images made from stock data in order to predict patterns. The study has two objectives. The first, referred to as Case 1, is to predict patterns from images made with the same day's stock-price data. The second, referred to as Case 2, is to predict the next day's stock-price patterns from images produced with daily stock-price data. In Case 1, data-augmentation methods - random modification and Gaussian noise - are applied to generate more training data, and the generated images are used to fit the model. Given that deep learning requires a large amount of data, this study suggests a data-augmentation method for candlestick images and compares the accuracies obtained with Gaussian noise and with different classification problems. All data in this study were collected through the OpenAPI provided by DaiShin Securities. Case 1 has five labels depending on the pattern: up with up closing, up with down closing, down with up closing, down with down closing, and staying. The images in Case 1 are created by removing the last candle (-1 candle), the last two candles (-2 candles), or the last three candles (-3 candles) from 60-minute, 30-minute, 10-minute, and 5-minute candle charts; in a 60-minute candle chart, each candle in the image carries 60 minutes of information, namely the open, high, low, and close prices. Case 2 has two labels, up and down, and its images are generated from 60-minute, 30-minute, 10-minute, and 5-minute candle charts without removing any candles. Considering the nature of stock data, moving the candles in the images is suggested instead of existing data-augmentation techniques; how far the candles are moved is defined as the modified value. The average difference between the closing prices of adjacent candles was 0.0029, so 0.003, 0.002, 0.001, and 0.00025 are used as modified values. The number of images was doubled after data augmentation. For the Gaussian noise, the mean was 0 and the variance was 0.01. For both Case 1 and Case 2, the model is based on VGGNet-16, which has 16 layers. As a result, the 10-minute -1 candle images showed the best accuracy among the 60-minute, 30-minute, 10-minute, and 5-minute candle charts, so 10-minute images were used for the rest of the Case 1 experiments, and the images with three candles removed were selected for data augmentation and the application of Gaussian noise. The 10-minute -3 candle images achieved 79.72% accuracy; with a modified value of 0.00025 and 100% of the candles changed, the accuracy was 79.92%; and applying Gaussian noise raised the accuracy to 80.98%. According to the outcomes of Case 2, 60-minute candle charts could predict the next day's patterns with 82.60% accuracy. In summary, this study is expected to contribute to further research on predicting stock-price patterns from images, and it provides a possible data-augmentation method for stock data.
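
The two augmentation schemes described above, candle shifting by a modified value and Gaussian noise, can be sketched roughly as follows; the OHLC column names, the random choice of which candles move, and the additive form of the noise are assumptions rather than the paper's exact procedure.

```python
# Sketch of candle-shifting and Gaussian-noise augmentation on OHLC data
# (column names, candle selection, and additive noise are assumptions).
import numpy as np
import pandas as pd

OHLC = ["open", "high", "low", "close"]

def shift_candles(df: pd.DataFrame, modified_value: float = 0.00025,
                  fraction: float = 1.0, seed: int = 0) -> pd.DataFrame:
    """Move a fraction of candles up or down by the given modified value."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    rows = rng.choice(len(out), size=int(fraction * len(out)), replace=False)
    factors = (1 + rng.choice([-1.0, 1.0], size=len(rows)) * modified_value)[:, None]
    vals = out[OHLC].to_numpy()
    vals[rows] = vals[rows] * factors
    out[OHLC] = vals
    return out

def add_gaussian_noise(df: pd.DataFrame, mean: float = 0.0,
                       variance: float = 0.01, seed: int = 0) -> pd.DataFrame:
    """Add zero-mean Gaussian noise with the stated variance (additive form assumed)."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    out[OHLC] = out[OHLC].to_numpy() + rng.normal(mean, np.sqrt(variance), size=(len(out), 4))
    return out
```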

The Design and Experiment of AI Device Communication System Equipped with 5G (5G를 탑재한 AI 디바이스 통신 시스템의 설계 및 실험)

  • Han Seongil;Lee Daesik;Han Jihwan;Moon Hhyunjin;Lim Changmin;Lee Sangku
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.19 no.2
    • /
    • pp.69-78
    • /
    • 2023
  • In this paper, dedicated IO+5G hardware is developed, and an AI device communication system equipped with 5G is designed and tested. The system receives real-time images and information collected from IoT sensors, analyzes the information, and generates risk-detection events on the AI processing board. An event generated on the AI processing board opens a 5G channel on the dedicated IO+5G hardware, and the channel delivers the event video to the control video server. The 5G-based dongle network enables faster data collection and more precise data measurement than wireless LAN and 5G routers. In the experiments reported here, the 5G dongle network's average result is about 51% faster than the Wi-Fi average in downlink and about 40% faster in uplink. In addition, when the 5G router is set to 80% upload and 20% download, the 5G dongle network is on average about 11.27% faster in download and about 17.93% faster in upload; when the router is set to 60% upload and 40% download, the 5G dongle network is about 11.19% faster in downlink and about 13.61% faster in uplink. Therefore, this paper shows that the developed 5G dongle network can collect and analyze data faster than wireless LAN and 5G routers.