• Title/Summary/Keyword: Image Translation


Analysis of Velocity Potential around Pulsating Bubble near Free or Rigid Surfaces Based on Image Method (이미지 방법을 이용한 자유 및 강체 표면 옆의 맥동하는 버블 주위 속도 포텐셜 해석)

  • Lee, Sangryun;Choi, Gulgi;Kim, Jongchul;Ryu, Seunghwa
    • Journal of Ocean Engineering and Technology / v.32 no.1 / pp.28-35 / 2018
  • An analytical method for predicting the velocity potential around a pulsating bubble close to a free or rigid wall was established using an image method. Because the velocity potential must satisfy boundary conditions at both the bubble surface and the wall, we investigated the normal velocity at the two boundaries as image bubbles were added. The potential was analyzed by decomposing the bubble motion into two independent motions, pulsation and translation, and we found that when the number of image bubbles was greater than ten, the two boundary conditions were satisfied for the translation term. By adding many image bubbles after approximating the pulsation term, we also confirmed that the boundary condition at the wall was satisfied.
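The pulsation term of such an image method can be illustrated with a point-source sketch (a simplifying assumption for illustration, not the paper's full model): mirroring the bubble across a rigid wall with an equal-strength image source cancels the wall-normal velocity exactly.

```python
import math

def source_velocity(src, strength, pt):
    # Velocity induced at pt by a point source of strength q: u = q/(4*pi*r^2) * r_hat
    dx = [p - s for p, s in zip(pt, src)]
    r = math.sqrt(sum(d * d for d in dx))
    coef = strength / (4 * math.pi * r ** 3)
    return [coef * d for d in dx]

h, q = 1.0, 2.5                                  # bubble height above the wall, source strength
bubble, image = (0.0, 0.0, h), (0.0, 0.0, -h)    # image bubble mirrored across the wall z = 0

wall_pt = (0.7, -0.3, 0.0)                       # arbitrary point on the rigid wall
u = [a + b for a, b in zip(source_velocity(bubble, q, wall_pt),
                           source_velocity(image, q, wall_pt))]
print(abs(u[2]) < 1e-12)                         # normal (z) velocity vanishes on the wall
```

For a free surface the image source would instead take the opposite sign, cancelling the tangential potential rather than the normal velocity.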

Night-to-Day Road Image Translation with Generative Adversarial Network for Driver Safety Enhancement (운전자 안정성 향상을 위한 Generative Adversarial Network 기반의 야간 도로 영상 변환 시스템)

  • Ahn, Namhyun;Kang, Suk-Ju
    • Journal of Broadcast Engineering / v.23 no.6 / pp.760-767 / 2018
  • The advanced driver assistance system (ADAS) is a major technique in the intelligent vehicle field. Techniques for ADAS can be separated into two classes: methods that directly control the movement of the vehicle and those that indirectly provide convenience to the driver. In this paper, we propose a novel system that gives visual assistance to the driver by translating a night road image into a day road image. We use black-box images capturing the front road view of the vehicle as inputs. The black-box images are cropped into three parts and simultaneously translated into day images by the proposed image translation module. The translated images are then recombined to the original size. The experimental results show that the proposed method generates realistic images and outperforms conventional algorithms.
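The crop-translate-recollect pipeline described above can be sketched as follows; the function names and the identity placeholder standing in for the GAN translator are illustrative, not taken from the paper.

```python
def split_into_thirds(img):
    """Split an image (a list of pixel rows) into three vertical strips."""
    w = len(img[0])
    b1, b2 = w // 3, 2 * w // 3
    return ([row[:b1] for row in img],
            [row[b1:b2] for row in img],
            [row[b2:] for row in img])

def recollect(left, mid, right):
    """Stitch the translated strips back to the original size."""
    return [l + m + r for l, m, r in zip(left, mid, right)]

night = [list(range(10)) for _ in range(4)]      # toy 4x10 "image"
strips = split_into_thirds(night)
day_strips = [s for s in strips]                 # placeholder: per-strip GAN translation
day = recollect(*day_strips)
assert day == night                              # identity "translator" reproduces the input
```

In the real system each strip would pass through the trained translation network before recollection; splitting keeps the per-inference input size fixed.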

Image Translation using Pseudo-Morphological Operator (의사 형태학적 연산을 사용한 이미지 변환)

  • Jo, Janghun;Lee, HoYeon;Shin, MyeongWoo;Kim, Kyungsup
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.799-802 / 2017
  • We attempt to combine concepts of the morphological operator (MO) and convolutional neural networks (CNN) to improve image-to-image translation. To do this, we propose an operation that approximates morphological operations. We also propose S-convolution, which extends this operation to use multiple filters, as in a CNN. Experimental results show that an MO with a large filter can be learned using multiple S-convolution layers with small filters. To validate the effectiveness of the proposed layer in image-to-image translation, we experiment with a GAN that uses S-convolution. The results show that a GAN with S-convolution achieves results distinct from those of a GAN with standard convolution.
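The paper's exact formulation is not reproduced in this abstract; one common way to approximate grayscale dilation with a smooth, trainable operation is the log-sum-exp relaxation sketched below (an assumption for illustration, not necessarily the authors' operator).

```python
import math

def soft_dilation_1d(x, w, beta=50.0):
    """Smooth approximation of grayscale dilation:
    max_j(x[i+j] + w[j]) ~= (1/beta) * log(sum_j exp(beta * (x[i+j] + w[j])))."""
    k, pad = len(w), len(w) // 2
    xp = [x[0]] * pad + list(x) + [x[-1]] * pad      # replicate-pad the borders
    return [math.log(sum(math.exp(beta * (xp[i + j] + w[j])) for j in range(k))) / beta
            for i in range(len(x))]

def exact_dilation_1d(x, w):
    k, pad = len(w), len(w) // 2
    xp = [x[0]] * pad + list(x) + [x[-1]] * pad
    return [max(xp[i + j] + w[j] for j in range(k)) for i in range(len(x))]

x = [0.0, 0.2, 1.0, 0.1, 0.0]
w = [0.0, 0.0, 0.0]                                  # flat structuring element
approx, exact = soft_dilation_1d(x, w), exact_dilation_1d(x, w)
assert all(abs(a - e) < 0.05 for a, e in zip(approx, exact))
```

The approximation error is bounded by log(k)/beta, so larger beta moves the smooth operator toward the true max while keeping it differentiable for gradient training.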

U-net and Residual-based Cycle-GAN for Improving Object Transfiguration Performance (물체 변형 성능을 향상하기 위한 U-net 및 Residual 기반의 Cycle-GAN)

  • Kim, Sewoon;Park, Kwang-Hyun
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.1-7 / 2018
  • Image-to-image translation is one of the deep learning applications using image data. In this paper, we aim at improving the performance of object transfiguration, which transforms a specific object in an image into another specific object. Object transfiguration requires transforming only the target object while maintaining the background. In existing results, however, it is observed that other parts of the image are also transformed. In this paper, we have focused on the structure of the artificial neural networks frequently used in existing methods and have improved the performance by adding constraints to the existing structure. We also propose an advanced structure that combines the existing structures to maintain their advantages and complement their drawbacks. The effectiveness of the proposed methods is shown in experimental results.
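The Cycle-GAN objective this work builds on includes a cycle-consistency term; a minimal sketch with toy mappings (G and F here are simple stand-ins for the paper's generator networks):

```python
def l1(a, b):
    # L1 distance between two flattened "images"
    return sum(abs(u - v) for u, v in zip(a, b))

def cycle_loss(x, y, G, F):
    """Cycle-consistency loss: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1."""
    return l1(F(G(x)), x) + l1(G(F(y)), y)

G = lambda v: [2.0 * u for u in v]       # toy translator, domain X -> Y
F = lambda v: [u / 2.0 for u in v]       # toy translator, domain Y -> X
x, y = [1.0, 2.0, 3.0], [4.0, 6.0]
assert cycle_loss(x, y, G, F) == 0.0     # perfect inverses give zero cycle loss
```

The cycle term penalizes changes that cannot be undone by the reverse mapping, which is why unconstrained Cycle-GANs can still alter backgrounds: a consistent background change in both directions incurs no cycle penalty.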

SkelGAN: A Font Image Skeletonization Method

  • Ko, Debbie Honghee;Hassan, Ammar Ul;Majeed, Saima;Choi, Jaeyoung
    • Journal of Information Processing Systems / v.17 no.1 / pp.1-13 / 2021
  • In this research, we study the problem of font image skeletonization using an end-to-end deep adversarial network, in contrast with the state-of-the-art methods that use mathematical algorithms. Several studies have been concerned with skeletonization, but few have utilized deep learning. Further, no study has considered generative models based on deep neural networks for skeletonizing font characters, which are more delicate than natural objects. In this work, we take a step closer to producing realistic synthesized skeletons of font characters with an end-to-end deep adversarial network, SkelGAN. The proposed skeleton generator is shown to be superior to well-known mathematical skeletonization methods in terms of character structure, including delicate strokes, serifs, and even special styles. Experimental results also demonstrate the dominance of our method over the state-of-the-art supervised image-to-image translation method in the font character skeletonization task.
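One classical mathematical skeletonization baseline of the kind the paper compares against is Lantuéjoul's morphological skeleton, sketched below on a binary glyph (a generic baseline for illustration, not necessarily the specific algorithms used in the paper).

```python
OFFS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]   # cross-shaped structuring element

def erode(img):
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in OFFS)) for x in range(w)] for y in range(h)]

def dilate(img):
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                     for dy, dx in OFFS)) for x in range(w)] for y in range(h)]

def skeleton(img):
    """Lantuejoul's skeleton: union over k of erode^k(A) minus open(erode^k(A))."""
    skel = [[0] * len(img[0]) for _ in img]
    cur = img
    while any(any(row) for row in cur):
        opened = dilate(erode(cur))                  # morphological opening of cur
        for y, row in enumerate(cur):
            for x, v in enumerate(row):
                if v and not opened[y][x]:
                    skel[y][x] = 1
        cur = erode(cur)
    return skel

glyph = [[1] * 7 for _ in range(5)]                  # a solid 5x7 "stroke"
skel = skeleton(glyph)
assert any(any(row) for row in skel)                 # skeleton is non-empty
```

Such set-theoretic methods guarantee a thin result but tend to produce spurs and broken junctions on delicate serif strokes, which is the gap the learned generator targets.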

Application of Artificial Neural Network For Sign Language Translation

  • Cho, Jeong-Ran;Kim, Hyung-Hoon
    • Journal of the Korea Society of Computer and Information / v.24 no.2 / pp.185-192 / 2019
  • A hearing-impaired person who uses sign language faces many difficulties in communicating with a person who does not understand sign language. A sign language translation system enables communication between them in this situation. Previous studies on sign language translation systems for such communication are classified into two types: those using a video image system and those using a shape input device. However, existing sign language translation systems do not resolve these difficulties, as they fail to recognize the varied sign language expressions of users and require special devices. Therefore, in this paper, a sign language translation system using an artificial neural network is devised to overcome the problems of existing systems.

Facial Image Synthesis by Controlling Skin Microelements (피부 미세요소 조절을 통한 얼굴 영상 합성)

  • Kim, Yujin;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.3 / pp.369-377 / 2022
  • Recent deep learning-based face synthesis research can generate a realistic face including overall style or elements such as hair, glasses, and makeup. However, previous methods cannot create a face at a very detailed level, such as the microstructure of the skin. In this paper, to overcome this limitation, we propose a technique for synthesizing a more realistic facial image from a single face label image by controlling the types and intensity of skin microelements. The proposed technique uses Pix2PixHD, an image-to-image translation method, to convert a label image marking the facial region and skin elements such as wrinkles, pores, and redness into a facial image with the added microelements. Experimental results show that, by generating label images with adjusted skin-element regions, various realistic face images reflecting the corresponding fine skin elements can be created.

Hand Language Translation Using Kinect

  • Pyo, Junghwan;Kang, Namhyuk;Bang, Jiwon;Jeong, Yongjin
    • Journal of IKEEE / v.18 no.2 / pp.291-297 / 2014
  • Since hand gesture recognition became feasible thanks to improved image processing algorithms, sign language translation has been a critical issue for the hearing-impaired. In this paper, we extract human hand figures from a real-time image stream and detect gestures in order to determine which sign they represent. We used depth-color calibrated images from the Kinect to extract human hands and built a decision tree to recognize hand gestures. The decision tree uses information such as the number of fingers, contours, and the hand's position inside a uniformly sized image. We succeeded in recognizing 'Hangul', the Korean alphabet, with a recognition rate of 98.16%. The average execution time per letter was about 76.5 ms, a reasonable speed considering that hand language translation is based on almost-still images. We expect that this research will help communication between the hearing-impaired and people who do not know sign language.
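The decision-tree stage can be illustrated with a toy sketch; the thresholds and labels below are invented for illustration, while the paper's actual tree branches on finger count, contours, and hand position.

```python
def classify_gesture(finger_count, hand_y):
    """Toy decision tree over hand features extracted from a depth image.
    hand_y is the hand centroid's vertical position, normalized to [0, 1]."""
    if finger_count == 0:
        return "fist"
    if finger_count >= 5:
        return "open hand"
    # fewer extended fingers: disambiguate by vertical position in the frame
    return "raised" if hand_y < 0.5 else "lowered"

assert classify_gesture(0, 0.3) == "fist"
assert classify_gesture(5, 0.3) == "open hand"
assert classify_gesture(2, 0.8) == "lowered"
```

A hand-built tree like this evaluates in microseconds, which is consistent with the per-letter latency being dominated by image processing rather than classification.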

Rotation and Translation Invariant Feature Extraction Using Angular Projection in Frequency Domain (주파수 영역에서 각도 투영법을 이용한 회전 및 천이 불변 특징 추출)

  • Lee, Bum-Shik;Kim, Mun-Churl
    • Journal of the HCI Society of Korea / v.1 no.2 / pp.27-33 / 2006
  • This paper presents a new approach to translation- and rotation-invariant feature extraction for image texture retrieval. For rotation-invariant feature extraction, we introduce an angular projection along the angular frequency in the polar coordinate system. The translation- and rotation-invariant feature vector representing texture images is constructed from the averaged magnitude and the standard deviation of the magnitude of the Fourier transform spectrum obtained by the proposed angular projection. To implement the angular projection easily, the Radon transform is employed to obtain the Fourier transform spectrum of images in the polar coordinate system; angular projection is then applied to extract the feature vector. We present experimental results on the MPEG-7 data set showing robustness against image rotation and discriminatory capability for different texture images. The results show that the proposed rotation- and translation-invariant feature vector is effective in retrieval performance for texture images with homogeneity, isotropy, and local directionality.
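The translation invariance the feature vector relies on comes from taking the magnitude of the Fourier spectrum; a 1D sketch of that property is below (the paper itself works with 2D spectra in polar coordinates obtained via the Radon transform).

```python
import cmath

def dft_magnitude(x):
    """Naive DFT magnitude spectrum; invariant to circular translation of x,
    since a shift only multiplies each coefficient by a unit-modulus phase."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)))
            for k in range(n)]

signal = [0.0, 1.0, 3.0, 2.0, 0.0, 0.0, 1.0, 0.0]
shifted = signal[3:] + signal[:3]            # circularly translated copy
m1, m2 = dft_magnitude(signal), dft_magnitude(shifted)
assert all(abs(a - b) < 1e-9 for a, b in zip(m1, m2))
```

Rotation invariance is then obtained separately: in polar coordinates an image rotation becomes a shift along the angular axis, so statistics (mean, standard deviation) taken over that axis are unaffected.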


Development of Stereoscopic PTV Technique and Performance Tests (Stereoscopic PTV 기법의 개발과 성능비교 연구)

  • Lee Sang-Joon;Yoon Jong-Hwan
    • Transactions of the Korean Society of Mechanical Engineers B / v.30 no.3 s.246 / pp.215-221 / 2006
  • A stereoscopic particle tracking velocimetry (SPTV) technique based on the 2-frame hybrid particle tracking velocimetry (PTV) method was developed. The expansion of 2D PTV to SPTV is facilitated by the fact that the PTV method tracks individual particle centroids. To evaluate the performance and measurement accuracy of the present SPTV technique, it was applied to flow images of rigid-body translation and to synthetic standard images of jet shear flow and impinging jet flow. The data processing routine and measurement uncertainty of the SPTV technique are compared with those of conventional stereoscopic particle image velocimetry (SPIV). In addition, the centroid translation effect of 2D particle image velocimetry (PIV) is defined and its effect on SPIV measurements is discussed. Compared to the SPIV method, the SPTV technique inherits the merits of concise and precise velocity evaluation procedures and provides better spatial resolution and measurement accuracy.
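The core of 2-frame PTV, pairing individual particle centroids between consecutive frames, can be sketched with greedy nearest-neighbour matching (a simplified stand-in for the hybrid matching scheme the paper builds on).

```python
import math

def match_particles(frame1, frame2, max_disp=2.0):
    """Greedily pair each centroid in frame1 with its nearest unused centroid
    in frame2 within max_disp; each pair yields one displacement vector."""
    pairs, used = [], set()
    for p in frame1:
        best, best_d = None, max_disp
        for j, q in enumerate(frame2):
            d = math.dist(p, q)
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((p, frame2[best]))
    return pairs

f1 = [(1.0, 1.0), (4.0, 2.0), (7.0, 5.0)]
f2 = [(1.5, 1.2), (4.5, 2.2), (7.5, 5.2)]       # rigid-body translation of (0.5, 0.2)
disp = [(q[0] - p[0], q[1] - p[1]) for p, q in match_particles(f1, f2)]
assert all(abs(dx - 0.5) < 1e-9 and abs(dy - 0.2) < 1e-9 for dx, dy in disp)
```

Because each vector is tied to one particle rather than an interrogation window, tracking-based methods avoid the window-averaging (centroid translation) bias and retain finer spatial resolution, as the comparison in the paper reports.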