• Title/Summary/Keyword: Image-to-image Translation

Night-to-Day Road Image Translation with Generative Adversarial Network for Driver Safety Enhancement (운전자 안정성 향상을 위한 Generative Adversarial Network 기반의 야간 도로 영상 변환 시스템)

  • Ahn, Namhyun;Kang, Suk-Ju
    • Journal of Broadcast Engineering
    • /
    • v.23 no.6
    • /
    • pp.760-767
    • /
    • 2018
  • Advanced driver assistance systems (ADAS) are a major technology in the intelligent vehicle field. ADAS techniques can be divided into two classes: methods that directly control the movement of the vehicle and methods that indirectly provide convenience to the driver. In this paper, we propose a novel system that provides visual assistance to the driver by translating a night road image into a day road image. We use black box (dash camera) images capturing the road view in front of the vehicle as inputs. Each black box image is cropped into three parts, which are simultaneously translated into day images by the proposed image translation module; the translated parts are then reassembled to the original size. The experimental results show that the proposed method generates realistic images and outperforms conventional algorithms.
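
The paper itself publishes no code; below is a minimal sketch of the crop/translate/recollect pipeline the abstract describes, with translate_day() as a hypothetical stand-in for the trained generator:

```python
import numpy as np

def translate_day(patch: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the trained night-to-day generator;
    a real system would run GAN inference on the patch here."""
    return patch

def night_to_day(frame: np.ndarray) -> np.ndarray:
    """Crop a black box frame into three parts, translate each part to a
    day image, and recollect the results to the original size."""
    parts = np.array_split(frame, 3, axis=1)        # three vertical strips
    translated = [translate_day(p) for p in parts]  # the paper runs these simultaneously
    return np.concatenate(translated, axis=1)       # reassemble to full width
```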

An Implementation of a System for Video Translation on Window Platform Using OCR (윈도우 기반의 광학문자인식을 이용한 영상 번역 시스템 구현)

  • Hwang, Sun-Myung;Yeom, Hee-Gyun
    • Journal of Internet of Things and Convergence
    • /
    • v.5 no.2
    • /
    • pp.15-20
    • /
    • 2019
  • As machine learning research has developed, fields such as translation and image analysis, including optical character recognition (OCR), have made great progress. However, video translation, which combines the two, has developed more slowly. In this paper, we develop an image translator that combines existing OCR technology and translation technology and verify its effectiveness. Before development, we present the functions needed to implement this system and how to implement them, and then test their performance. With the application developed in this paper, users can access translation more conveniently and enjoy that convenience in any environment.
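
As a rough illustration of the OCR-plus-translation pipeline described above: the sketch assumes pytesseract for OCR and uses a placeholder translate() for the translation backend; the input file name is hypothetical.

```python
import cv2                 # frame capture
import pytesseract         # Tesseract OCR wrapper

def translate(text: str, target: str = "en") -> str:
    """Placeholder for the translation backend (e.g., a web translation API)."""
    return text

def translate_frame(frame) -> str:
    """Recognize text in a video frame with OCR, then translate it."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # OCR is more robust on grayscale
    return translate(pytesseract.image_to_string(gray))

cap = cv2.VideoCapture("video.mp4")                 # hypothetical input file
ok, frame = cap.read()
if ok:
    print(translate_frame(frame))
cap.release()
```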

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.383-388
    • /
    • 2005
  • Depth recovery in robot vision is an essential problem: inferring the three-dimensional geometry of a scene from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, such as stereopsis, motion parallax, and blurring phenomena. Among these cues, depth from lens translation is based on shape from motion using feature points: it relies on the correspondence of feature points detected across images and estimates depth from the motion of those points. Approaches based on motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in the feature point detection. This paper presents a novel defocus-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. We therefore first discuss the optical properties of the camera system, because image blur varies with the camera parameter settings. The camera system is modeled by integrating a thin-lens camera model, which explains the light and optical properties, with a perspective projection camera model, which explains depth from lens translation. Depth from lens translation is then computed using feature points detected on the edges of the image blur; these feature points carry depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments on sequences of real and synthetic images comparing the presented method with plain depth from lens translation demonstrate the validity and applicability of the proposed method for depth estimation.
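
The abstract does not spell out the sequential formulation; the sketch below shows the batch rank-3 SVD factorization of a feature-track measurement matrix that shape-from-motion methods of this kind build on, with all names illustrative.

```python
import numpy as np

def factorize_tracks(W: np.ndarray):
    """Rank-3 factorization of a 2F-by-P measurement matrix of feature
    tracks (F frames, P points) into motion and shape via SVD. The paper
    applies the factorization sequentially and takes its feature points
    from blur edges; this batch version shows the core step only."""
    W = W - W.mean(axis=1, keepdims=True)     # register tracks to their centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])             # motion, 2F x 3
    S = np.sqrt(s[:3])[:, None] * Vt[:3]      # shape, 3 x P (up to an affine ambiguity)
    return M, S
```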

Image Translation using Pseudo-Morphological Operator (의사 형태학적 연산을 사용한 이미지 변환)

  • Jo, Janghun;Lee, HoYeon;Shin, MyeongWoo;Kim, Kyungsup
    • Annual Conference of KIPS
    • /
    • 2017.11a
    • /
    • pp.799-802
    • /
    • 2017
  • We attempt to combine concepts from morphological operators (MO) and convolutional neural networks (CNNs) to improve image-to-image translation. To do this, we propose an operation that approximates morphological operations, and we propose S-convolution, which extends it to use multiple filters as a CNN does. The experimental results show that a morphological operation with a large filter can be learned by stacking multiple S-convolution layers with small filters. To validate the effectiveness of the proposed layer in image-to-image translation, we experiment with a GAN that uses S-convolution. The results show that a GAN with S-convolution can achieve results distinct from those of a GAN with standard convolutions.
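
The exact S-convolution is not defined in the abstract; a common way to approximate grayscale dilation differentiably is a log-sum-exp soft maximum over a sliding window, and the sketch below follows that spirit rather than the paper's operator.

```python
import numpy as np

def soft_dilate(img: np.ndarray, k: int = 3, beta: float = 50.0) -> np.ndarray:
    """Approximate grayscale dilation with a k x k window using a
    log-sum-exp soft maximum; as beta grows, the result approaches true
    morphological dilation. Being differentiable, it can be trained
    inside a network, which is the idea the abstract exploits."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + k, j:j + k]
            m = window.max()                  # stabilize the exponentials
            out[i, j] = m + np.log(np.exp(beta * (window - m)).sum()) / beta
    return out
```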

Analysis of Velocity Potential around Pulsating Bubble near Free or Rigid Surfaces Based on Image Method (이미지 방법을 이용한 자유 및 강체 표면 옆의 맥동하는 버블 주위 속도 포텐셜 해석)

  • Lee, Sangryun;Choi, Gulgi;Kim, Jongchul;Ryu, Seunghwa
    • Journal of Ocean Engineering and Technology
    • /
    • v.32 no.1
    • /
    • pp.28-35
    • /
    • 2018
  • An analytical method for predicting the velocity potential around a pulsating bubble close to a free surface or a rigid wall was established using an image method. Because the velocity potential must satisfy two boundary conditions, at the bubble surface and at the wall, we investigated the normal velocity at the two boundaries while adding image bubbles. The potential was analyzed by decomposing the bubble motion into two independent motions, pulsation and translation, and we found that when the number of image bubbles exceeded ten, the two boundary conditions were satisfied for the translation term. By adding many image bubbles after approximating the pulsation term, we also confirmed that the boundary condition at the wall was satisfied.
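
As a worked illustration of the image method: the sketch below forms the potential of a pulsating source near a single boundary from one mirror image; the paper adds many further images to satisfy the bubble-surface condition as well.

```python
import numpy as np

def phi_source(r: np.ndarray, q: float) -> np.ndarray:
    """Velocity potential of a pulsating (volume) source of strength q."""
    return -q / r

def phi_near_wall(x, z, z_b, q, wall="rigid"):
    """Potential of a pulsating bubble at height z_b above a boundary at
    z = 0, built from one mirror image: same sign for a rigid wall (zero
    normal velocity), opposite sign for a free surface (zero potential)."""
    sign = 1.0 if wall == "rigid" else -1.0
    r_real = np.hypot(x, z - z_b)    # distance to the bubble
    r_image = np.hypot(x, z + z_b)   # distance to its mirror image
    return phi_source(r_real, q) + sign * phi_source(r_image, q)
```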

An Object Tracking Method using Stereo Images (스테레오 영상을 이용한 물체 추적 방법)

  • Lee, Hak-Chan;Park, Chang-Han;Namkung, Yun;Namkyung, Jae-Chan
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.5
    • /
    • pp.522-534
    • /
    • 2002
  • In this paper, we propose a new object tracking system using stereo images to improve the performance of automatic object tracking. Existing object tracking systems have optimal characteristics but require a large amount of computation. With a single monocular image, it is difficult to estimate and track the various transformations of an object. Because estimating translation and rotation even from binocular stereo images is difficult, this paper presents a tracking method that can track the translation of an object in real time, using a block matching algorithm to reduce the computation. The experimental results demonstrate the usefulness of the proposed system, with recognition rates of 88% under rotation, 89% under translation, and 88% on various images, for a mean rate of 88.3%.
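
The low-computation matching step the abstract relies on can be illustrated with a standard sum-of-absolute-differences block matcher; a minimal sketch, not the paper's exact implementation:

```python
import numpy as np

def block_match(prev, curr, top, left, bsize=16, search=8):
    """Track the block at (top, left) in the previous frame by minimizing
    the sum of absolute differences (SAD) over a small search window in
    the current frame; returns the best-matching block position."""
    block = prev[top:top + bsize, left:left + bsize].astype(np.int32)
    best_sad, best_pos = None, (top, left)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > curr.shape[0] or x + bsize > curr.shape[1]:
                continue
            sad = np.abs(block - curr[y:y + bsize, x:x + bsize].astype(np.int32)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos   # displacement from (top, left) estimates the translation
```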

Hand Language Translation Using Kinect

  • Pyo, Junghwan;Kang, Namhyuk;Bang, Jiwon;Jeong, Yongjin
    • Journal of IKEEE
    • /
    • v.18 no.2
    • /
    • pp.291-297
    • /
    • 2014
  • Since hand gesture recognition became practical thanks to improved image processing algorithms, sign language translation has been a critical issue for the hearing-impaired. In this paper, we extract human hand figures from a real-time image stream and detect gestures in order to determine which sign they represent. We used depth-color calibrated images from the Kinect to extract human hands and built a decision tree to recognize hand gestures. The decision tree uses information such as the number of fingers, the contours, and the hand's position inside a uniformly sized image. We succeeded in recognizing 'Hangul', the Korean alphabet, with a recognition rate of 98.16%. The average execution time per letter was about 76.5 ms, a reasonable speed considering that sign language translation is based on almost still images. We expect this research to help communication between the hearing-impaired and people who do not know sign language.
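
As an illustration of the decision-tree step, the sketch below uses scikit-learn with made-up feature values and hypothetical Hangul labels; the paper's actual features and tree are not reproduced here.

```python
from sklearn.tree import DecisionTreeClassifier

# Feature vectors per the abstract: finger count, a contour measure, and
# the hand's (x, y) position in a uniformly sized image. All values and
# labels below are made up for the sketch.
X = [
    [1, 0.42, 0.50, 0.30],   # [n_fingers, contour_ratio, x, y]
    [2, 0.55, 0.40, 0.60],
    [5, 0.80, 0.50, 0.50],
]
y = ["giyeok", "nieun", "hieut"]   # hypothetical Hangul letter labels

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[2, 0.50, 0.45, 0.55]]))   # -> most likely 'nieun'
```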

Research Trends of Generative Adversarial Networks and Image Generation and Translation (GAN 적대적 생성 신경망과 이미지 생성 및 변환 기술 동향)

  • Jo, Y.J.;Bae, K.M.;Park, J.Y.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.4
    • /
    • pp.91-102
    • /
    • 2020
  • Generative adversarial networks (GANs) are a rapidly emerging field of research in which many studies have shown impressive results. Initially, GANs could only imitate the training dataset, but they are now useful in many areas, such as transforming data categories, restoring erased parts of images, copying human facial expressions, and creating artworks in the style of a dead painter. Although many outstanding research achievements have attracted attention recently, GANs still face several challenges. First, they require large amounts of memory. Second, there are technical limitations in processing high-resolution images beyond 4K. Third, many GAN training methods suffer from instability during the training stage. Nevertheless, recent results produce images that are difficult to distinguish from real ones even with the naked eye, and resolutions of 4K and above are being developed. With the increase in image quality and resolution, many applications in design and in image and video editing have become available, including tools that turn a simple sketch into a photorealistic image or easily modify unnecessary parts of an image or video. In this paper, we discuss how GANs started, including the base architecture, and survey the latest GAN technologies used in high-resolution, high-quality image creation, image and video editing, style translation, and content transfer.
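
For readers unfamiliar with the base architecture the survey refers to, a minimal sketch of the original adversarial objective on toy data follows; all network sizes, data, and constants are illustrative.

```python
import torch
import torch.nn as nn

# Base GAN objective: D maximizes log D(x) + log(1 - D(G(z))); G is trained
# with the non-saturating form, maximizing log D(G(z)). Tiny MLPs on toy
# 2-D data stand in for real generator and discriminator networks.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) * 0.5 + 1.0   # stand-in "real" samples
for _ in range(100):
    fake = G(torch.randn(64, 8))
    # discriminator step: push real toward 1, fake toward 0
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator step: fool D into predicting 1 on fakes
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```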

Image matching by Wavelet Local Extrema (웨이브릿 국부 최대-최소값을 이용한 영상 정합)

  • 박철진;김주영;고광식
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.589-592
    • /
    • 1999
  • Matching is a key problem in computer vision, image analysis, and pattern recognition. In this paper, a multiscale image matching algorithm based on wavelet local extrema is proposed. The algorithm applies a multiscale wavelet transform to the contour curvature and uses both the positions and the magnitudes of the local extrema of the transform. This method has a computational cost advantage over single-scale image matching, and it is rotation-, translation-, and scale-independent. The matching can also be used for the recognition of occluded objects.
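
As a simplified stand-in for the wavelet transform of the contour curvature, the sketch below filters with derivatives of Gaussians at several scales and records the positions and magnitudes of the local extrema, the two pieces of information the abstract says the matcher uses.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature_extrema(curvature: np.ndarray, scales=(1, 2, 4)):
    """Multiscale local extrema of a contour-curvature signal: smooth and
    differentiate at each scale (derivative-of-Gaussian filtering), then
    keep the positions and magnitudes of the local maxima and minima."""
    features = []
    for s in scales:
        w = gaussian_filter1d(curvature.astype(float), sigma=s, order=1)
        i = np.arange(1, len(w) - 1)
        is_ext = ((w[i] > w[i - 1]) & (w[i] > w[i + 1])) | \
                 ((w[i] < w[i - 1]) & (w[i] < w[i + 1]))
        pos = i[is_ext]
        features.append((s, pos, w[pos]))   # per-scale positions and magnitudes
    return features
```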

Image Feature Extraction Using Energy field Analysis (에너지장 해석을 통한 영상 특징량 추출 방법 개발)

  • 김면희;이태영;이상룡
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.10a
    • /
    • pp.404-406
    • /
    • 2002
  • In this paper, a method for image feature extraction is proposed. The method employs energy field analysis, an outlier removal algorithm, and ring projection. Using this algorithm, we achieve rotation-, translation-, and scale-invariant feature extraction. The force field is exploited to automatically locate the extrema of a small number of potential energy wells and their associated potential channels, and the image feature is then derived from the relationships among the local extrema using the ring projection method.
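
Ring projection, the invariance step mentioned above, can be sketched generically as follows; this is not the paper's full energy-field pipeline:

```python
import numpy as np

def ring_projection(img: np.ndarray, n_rings: int = 32) -> np.ndarray:
    """Rotation-invariant feature: mean intensity over concentric rings
    around the intensity centroid; normalizing radii by the largest ring
    also removes scale."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = img.sum()
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    r = np.hypot(ys - cy, xs - cx)
    bins = np.minimum((r / r.max() * n_rings).astype(int), n_rings - 1)
    return np.array([img[bins == k].mean() if (bins == k).any() else 0.0
                     for k in range(n_rings)])
```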
