• Title/Summary/Keyword: Image-to-image generation


Efficient Generation of Image Identifiers for Image Database (정지영상 데이터베이스의 효율적 인식자 생성)

  • Park, Je-Ho
    • Journal of the Semiconductor & Display Technology
    • /
    • v.10 no.3
    • /
    • pp.89-94
    • /
    • 2011
  • The image identification methodology associates an image with a unique, identifiable representation. Moreover, whenever the methodology regenerates an identifier for the same image, the newly created identifier must be consistent in terms of representation value. In this paper, we discuss a methodology for image identifier generation that utilizes luminance correlation, and we propose a method for enhancing the performance of identifier generation. We also present experimental evaluations of uniqueness, similarity analysis, and performance improvement, which have shown favorable results.
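The abstract does not specify the exact identifier construction; the following is a minimal sketch of one way a deterministic, luminance-based identifier could work, with the block grid and bit encoding as illustrative assumptions rather than the paper's actual method:

```python
import numpy as np

def luminance_identifier(image: np.ndarray, grid: int = 4) -> str:
    """Derive a deterministic bit-string identifier from block luminance.

    The grayscale image (H x W) is split into a grid x grid layout; each
    block's mean luminance is compared to the global mean, and the
    comparison results are concatenated into a bit string. The same image
    always yields the same identifier, satisfying the consistency
    requirement described above.
    """
    h, w = image.shape
    bh, bw = h // grid, w // grid
    # Global mean over the region covered by complete blocks.
    global_mean = image[:bh * grid, :bw * grid].mean()
    bits = []
    for i in range(grid):
        for j in range(grid):
            block = image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            bits.append('1' if block.mean() >= global_mean else '0')
    return ''.join(bits)
```

Regenerating the identifier for the same pixel data yields the same bit string, while images with different luminance distributions tend to produce different strings.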

A Study of Reference Image Generation for Moving Object Detection under Moving Camera (이동카메라에서 이동물체 검출을 위한 참조 영상 생성에 관한 연구)

  • Lee, June-Hyung;Chae, Ok-Sam
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.3
    • /
    • pp.67-73
    • /
    • 2007
  • This paper presents an automatic algorithm, based on panoramic reference image generation, for detecting moving objects under a moving camera that is robust to illumination variations. The background image is generated by horizontally rotating a camera fixed on a tripod, then aligning and reorganizing the captured images. Most previous work on cylindrical panoramic image generation assumes a static environment; in this paper, we propose a method for generating the panoramic reference image from dynamic environments. We develop an efficient approach to panoramic reference image generation that uses an accumulated edge map, together with a method for edge matching between the input image and the background image. We applied the proposed algorithm to real image sequences. The experimental results show that panoramic reference image generation robust to illumination variations is possible using the proposed method.


Stylized Image Generation based on Music-image Synesthesia Emotional Style Transfer using CNN Network

  • Xing, Baixi;Dou, Jian;Huang, Qing;Si, Huahao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.4
    • /
    • pp.1464-1485
    • /
    • 2021
  • The emotional style of multimedia artworks is abstract content information. This study explores an emotional style transfer method and seeks a way to match music with appropriate images with respect to emotional style. Deep convolutional neural networks (DCNNs) can capture style and provide an iterative style-transfer solution for affective image generation. Here, we learn image emotion features via DCNNs and map the affective style onto other images. We set the image emotion feature as the style target in this style transfer problem, and conducted experiments on affective image generation for eight emotion categories: dignified, dreaming, sad, vigorous, soothing, exciting, joyous, and graceful. A user study was conducted to test the synesthesia emotional image style transfer results against ground-truth user perception triggered by music-image pair stimuli. According to the user study results, the transferred affective images proved effective for music-image emotional synesthesia perception.
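The Gram-matrix style loss is the standard building block of DCNN style transfer that this abstract builds on; the emotion-feature target itself is specific to the paper, so the sketch below only illustrates the generic style-loss computation on raw feature maps:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a feature map (channels x height x width).

    Correlations between channel activations capture "style" in
    Gatys-style neural style transfer, independent of spatial layout.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(gen_feat: np.ndarray, style_feat: np.ndarray) -> float:
    """Mean squared difference between the two Gram matrices; driving
    this loss to zero makes the generated image adopt the style target."""
    diff = gram_matrix(gen_feat) - gram_matrix(style_feat)
    return float(np.mean(diff ** 2))
```

In an iterative transfer loop, the generated image's pixels are updated by gradient descent on this loss combined with a content loss.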

Performance Comparison According to Image Generation Method in NIDS (Network Intrusion Detection System) using CNN

  • Sang Hyun, Kim
    • International journal of advanced smart convergence
    • /
    • v.12 no.2
    • /
    • pp.67-75
    • /
    • 2023
  • Recently, many studies have explored ways to apply AI technology to NIDS (Network Intrusion Detection System). In particular, CNN-based NIDS generally shows excellent performance. A CNN fundamentally exploits the correlation between pixels in an image, so the method used to generate the image is very important. In this paper, we compare the performance of CNN-based NIDS according to the image generation method. The image generation methods used in the experiment are a direct conversion method and a one-hot-encoding-based method. The experimental results show that NIDS performance differs depending on the image generation method. In particular, the method proposed in this paper, which combines direct conversion with one-hot encoding, showed the best performance.
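The paper's implementation details are not given in the abstract; the following is a plausible sketch of how a packet payload could be turned into an image under each of the two approaches, with the row `width` and zero-padding as illustrative assumptions:

```python
import numpy as np

def direct_conversion(payload: bytes, width: int = 16) -> np.ndarray:
    """Direct conversion: map raw byte values straight to grayscale
    pixel intensities, row-major, zero-padding the tail so the payload
    fills complete rows."""
    data = np.frombuffer(payload, dtype=np.uint8)
    pad = (-len(data)) % width
    data = np.concatenate([data, np.zeros(pad, dtype=np.uint8)])
    return data.reshape(-1, width)

def one_hot_conversion(payload: bytes) -> np.ndarray:
    """One-hot-encoding-based conversion: each byte becomes a 256-dim
    one-hot row, yielding a binary len(payload) x 256 image that treats
    byte values as categories rather than intensities."""
    data = np.frombuffer(payload, dtype=np.uint8)
    img = np.zeros((len(data), 256), dtype=np.uint8)
    img[np.arange(len(data)), data] = 1
    return img
```

Direct conversion preserves numeric proximity between byte values, while one-hot encoding removes it; a combined scheme could, for example, stack or concatenate both representations as CNN input channels.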

FEASIBILITY ON GENERATING STEREO MOSAIC IMAGE

  • Noh, Myoung-Jong;Lee, Sung-Hun;Cho, Woo-Sug
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.201-204
    • /
    • 2005
  • Recently, a variety of investigations have attempted to generate panoramic images and high-quality mosaic images from video sequences. This paper focuses on generating left and right stereo mosaic images from airborne video sequence images. The stereo mosaic image is generated by creating left and right mosaic images from front and rear slits with different viewing angles across consecutive video frames. The stereo mosaic generation proposed in this paper consists of several processes: camera parameter estimation for each video frame, rectification, slicing, motion parallax elimination, and image mosaicking. It is necessary, however, to check the feasibility of generating a stereo mosaic image with these processes. Therefore, we performed a feasibility test on stereo mosaic generation using video frame images: an anaglyphic image was generated from the stereo mosaic images and examined for the feasibility check.


Stereoscopic Conversion of Object-based MPEG-4 Video (객체 기반 MPEG-4 동영상의 입체 변환)

  • 박상훈;김만배;손현식
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2407-2410
    • /
    • 2003
  • In this paper, we propose a new stereoscopic video conversion methodology that converts two-dimensional (2-D) MPEG-4 video into stereoscopic video. In MPEG-4, each image is composed of a background object and a primary object. In the first step of the conversion methodology, the camera motion type is determined for stereo image generation. In the second step, object-based stereo image generation is carried out. The background object uses a current image and a delayed image for its stereo image generation. The primary object, on the other hand, uses a current image and its horizontally shifted version to avoid the vertical parallax that could otherwise occur. Furthermore, URFA (Uncovered Region Filling Algorithm) is applied to the uncovered region that might appear after the stereo image generation of the primary object. In our experiments, we show MPEG-4 test video and the stereoscopic video produced by the proposed methodology, and analyze the results.
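The horizontally shifted stereo view used for the primary object can be sketched as follows; the zero-filled border and signed-disparity convention are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def horizontal_shift_pair(image: np.ndarray, disparity: int):
    """Create a stereo pair from a single frame by pairing it with a
    horizontally shifted copy. Shifting only along x guarantees that no
    vertical parallax is introduced; columns vacated by the shift are
    zero-filled (candidates for uncovered-region filling)."""
    shifted = np.zeros_like(image)
    if disparity >= 0:
        shifted[:, disparity:] = image[:, :image.shape[1] - disparity]
    else:
        shifted[:, :disparity] = image[:, -disparity:]
    return image, shifted
```

The sign of the disparity controls whether the object appears in front of or behind the screen plane when the pair is viewed stereoscopically.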


Deep Adversarial Residual Convolutional Neural Network for Image Generation and Classification

  • Haque, Md Foysal;Kang, Dae-Seong
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.1
    • /
    • pp.111-120
    • /
    • 2020
  • Generative adversarial networks (GANs) have achieved impressive performance in image generation and visual classification applications. However, adversarial networks face difficulties in combining the generative model with an unstable training process. To overcome this problem, we combined a deep residual network with upsampling convolutional layers to construct the generative network. Moreover, the study shows that image generation and classification performance become more prominent when residual layers are included in the generator. The proposed network empirically shows that the ability to generate images with higher visual accuracy comes at a certain amount of additional complexity, managed with proper regularization techniques. Experimental evaluation shows that the proposed method is superior on image generation and classification tasks.

Object Edge-based Image Generation Technique for Constructing Large-scale Image Datasets (대형 이미지 데이터셋 구축을 위한 객체 엣지 기반 이미지 생성 기법)

  • Ju-Hyeok Lee;Mi-Hui Kim
    • Journal of IKEEE
    • /
    • v.27 no.3
    • /
    • pp.280-287
    • /
    • 2023
  • Advances in deep learning can solve computer vision problems, but large-scale datasets are necessary for high accuracy. In this paper, we propose an image generation technique that uses object bounding boxes and image edge components. The object bounding boxes are extracted from the images through object detection, and the image edge components are used as input to the image generation model to create new image data. In experiments, the images generated by the proposed method showed image quality similar to the source images in image quality assessment, and also performed well during the deep learning training process.
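Edge components like those fed to the generator here are commonly obtained with a Sobel operator; the paper's actual edge extractor is not specified, so the following is a dependency-free sketch with the magnitude threshold as an assumption:

```python
import numpy as np

def sobel_edges(gray: np.ndarray, threshold: float = 100.0) -> np.ndarray:
    """Binary edge map from Sobel gradient magnitude.

    Such an edge map can serve as the conditioning input of an
    image-to-image generation model. Output is (H-2) x (W-2) because
    the 3x3 kernels are applied only where they fit fully.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Accumulate the 3x3 correlation one kernel tap at a time.
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(np.uint8)
```

A flat image produces no edges, while intensity discontinuities (object boundaries) survive the threshold.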

Image Caption Generation using Recurrent Neural Network (Recurrent Neural Network를 이용한 이미지 캡션 생성)

  • Lee, Changki
    • Journal of KIISE
    • /
    • v.43 no.8
    • /
    • pp.878-882
    • /
    • 2016
  • Automatically generating captions for an image is a very difficult task, as it requires both computer vision and natural language processing technologies. However, the task has many important applications, such as early childhood education, image retrieval, and navigation for the blind. In this paper, we describe a Recurrent Neural Network (RNN) model for generating image captions, which takes image features extracted by a Convolutional Neural Network (CNN). We demonstrate that our models produce state-of-the-art results in image caption generation experiments on the Flickr 8K, Flickr 30K, and MS COCO datasets.
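The CNN-feature-conditioned RNN decoder can be sketched as a greedy decoding loop; the tanh cell, the weight shapes, and seeding the hidden state directly from the image feature are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def greedy_caption(image_feat, W_ih, W_hh, W_ho, embed,
                   start_id, end_id, max_len=10):
    """Greedy caption decoding sketch.

    The CNN image feature seeds the RNN hidden state; each step feeds
    the previous word's embedding, updates the hidden state, and emits
    the argmax word until <end> is produced or max_len is reached.
    """
    h = np.tanh(image_feat)  # image feature initializes the hidden state
    word = start_id
    caption = []
    for _ in range(max_len):
        x = embed[word]                        # embedding of previous word
        h = np.tanh(W_ih @ x + W_hh @ h)       # simple tanh RNN cell
        logits = W_ho @ h                      # project to vocabulary
        word = int(np.argmax(logits))          # greedy choice
        if word == end_id:
            break
        caption.append(word)
    return caption
```

In practice, trained systems replace the tanh cell with an LSTM and use beam search instead of pure greedy decoding, but the data flow from CNN feature to word sequence is the same.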

Best Practice on Automatic Toon Image Creation from JSON File of Message Sequence Diagram via Natural Language based Requirement Specifications

  • Hyuntae Kim;Ji Hoon Kong;Hyun Seung Son;R. Young Chul Kim
    • International journal of advanced smart convergence
    • /
    • v.13 no.1
    • /
    • pp.99-107
    • /
    • 2024
  • In AI image generation tools, most general users must craft an effective prompt, a query or statement, to elicit the desired response (image, result) from the AI model. We, however, are software engineers focused on software processes. At the early stage of the process, we use informal and formal requirement specifications, adapting the natural language approach to requirement engineering and toon engineering. Most generative AI tools do not produce the same image for the same query, because the same data asset is not used for the same query. To solve this problem, we intend to use informal requirement engineering and linguistics to create a toon. Therefore, we propose a sequence diagram and image generation mechanism that analyzes and applies key objects and attributes as an informal natural language requirement analysis. Morphemes and semantic roles are identified by analyzing the natural language through linguistic methods. Based on the analysis results, a sequence diagram is generated, and an image is generated from the diagram. We expect consistent image generation by using the same image element assets for the same query through the proposed mechanism.