• Title/Summary/Keyword: Low-resolution image


Efficient Multi-scalable Network for Single Image Super Resolution

  • Alao, Honnang;Kim, Jin-Sung;Kim, Tae Sung;Lee, Kyujoong
    • Journal of Multimedia Information System / v.8 no.2 / pp.101-110 / 2021
  • In computer vision, single-image super resolution has been an active research area for a long time. Traditional approaches rely on interpolation-based methods such as nearest-neighbor, bilinear, and bicubic interpolation for image restoration. Although convolutional neural networks have produced outstanding results in recent years, efficiency and single-model multi-scalability remain challenging, and previous works have not placed enough emphasis on real-number scalability. Interpolation-based techniques, by contrast, have no limit in terms of scalability, as they can upscale images to any desired size. In this paper, we propose a convolutional neural network that retains the advantages of interpolation-based techniques while remaining efficient, making it suitable for practical implementations. It consists of convolutional layers applied in the low-resolution space, post-up-sampling at the final hidden layers, and additional layers in the high-resolution space. Up-sampling is applied to a multi-channel feature map via bicubic interpolation within a single model. Experiments on architectural structure, layer reduction, and real-number scale training show the proposed network to be efficient compared with multi-scale learning (including scale multi-path learning) based models.
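
The architecture outlined above (convolutions in the low-resolution space, bicubic up-sampling of the multi-channel feature map, then a few layers in the high-resolution space) can be sketched roughly as follows. This is a minimal PyTorch sketch under assumed layer and channel counts, which the abstract does not specify; it is not the authors' exact network, but it shows how a single model can serve arbitrary, including non-integer, scales.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScalableSR(nn.Module):
    """Sketch: conv layers in LR space, bicubic up-sampling of the
    multi-channel feature map, then a few conv layers in HR space.
    Layer/channel counts are assumptions, not the paper's values."""

    def __init__(self, channels=32, lr_layers=4, hr_layers=2):
        super().__init__()
        lr = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(lr_layers - 1):
            lr += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        self.lr_body = nn.Sequential(*lr)

        hr = []
        for _ in range(hr_layers - 1):
            hr += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        hr += [nn.Conv2d(channels, 3, 3, padding=1)]
        self.hr_body = nn.Sequential(*hr)

    def forward(self, x, scale):
        feat = self.lr_body(x)                       # heavy computation stays in LR space
        feat = F.interpolate(feat, scale_factor=scale,
                             mode="bicubic", align_corners=False)  # single model, any real-number scale
        return self.hr_body(feat)

# usage: the same weights serve integer and non-integer scales
model = MultiScalableSR()
lr_img = torch.rand(1, 3, 48, 48)
sr_x2 = model(lr_img, scale=2.0)
sr_x1_5 = model(lr_img, scale=1.5)
```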

Influence of CT Reconstruction on Spatial Resolution (CT 영상 재구성의 공간분해능에 대한 영향)

  • Chon, Kwon Su
    • Journal of the Korean Society of Radiology / v.12 no.1 / pp.85-91 / 2018
  • Computed tomography, which obtains sectional images by reconstructing projection images, has been applied to various fields. The spatial resolution of the reconstructed image depends on the devices used in the CT system, the object, and the reconstruction process. In this paper, we investigate the effect of the number of projection images and the detector pixel size on the spatial resolution of the reconstructed image under parallel-beam geometry. The reconstruction program was written in Visual C++, and the matrix size of the reconstructed image was 512 × 512. A numerical bar phantom was constructed and the Min-Max method was introduced to evaluate the spatial resolution of the reconstructed image. When the number of projections used in the reconstruction was small, streak-like artifacts appeared and the Min-Max value was low. The Min-Max value saturated as the number of projections increased. When the detector pixel size was reduced to 50% of the pixel size of the reconstructed image, the reconstructed image recovered the original phantom perfectly, and the Min-Max value decreased as the detector pixel size increased. This study will be useful in determining the detector and the rotation-stage accuracy needed to achieve the spatial resolution required in a CT system.
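
The experiment described above can be imitated on a small scale: reconstruct a bar phantom under parallel-beam geometry with varying numbers of projections and score each reconstruction with a min/max contrast measure. The abstract does not define the paper's Min-Max measure, so the Michelson-style contrast below is an assumption, and scikit-image's radon/iradon stand in for the authors' Visual C++ reconstruction.

```python
import numpy as np
from skimage.transform import radon, iradon

def bar_phantom(size=256, period=16):
    """Numerical bar phantom: vertical bars of the given period, inside a circular FOV."""
    img = np.zeros((size, size), dtype=float)
    x = np.arange(size)
    img[:, (x // (period // 2)) % 2 == 0] = 1.0
    yy, xx = np.mgrid[:size, :size]
    img[(yy - size / 2) ** 2 + (xx - size / 2) ** 2 > (size / 2 - 8) ** 2] = 0.0
    return img

def min_max_contrast(profile):
    """Assumed Min-Max style measure: Michelson contrast of a bar profile."""
    return (profile.max() - profile.min()) / (profile.max() + profile.min() + 1e-12)

phantom = bar_phantom()
row = phantom.shape[0] // 2

for n_proj in (30, 90, 180, 360):
    theta = np.linspace(0.0, 180.0, n_proj, endpoint=False)
    sino = radon(phantom, theta=theta)        # parallel-beam projections
    recon = iradon(sino, theta=theta)         # filtered back-projection (ramp filter)
    print(n_proj, "projections -> contrast",
          round(min_max_contrast(recon[row, 64:192]), 3))
```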

Development of Algorithms for Correcting and Mapping High-Resolution Side Scan Sonar Imagery (고해상도 사이드 스캔 소나 영상의 보정 및 매핑 알고리즘의 개발)

  • 이동진;박요섭;김학일
    • Korean Journal of Remote Sensing / v.17 no.1 / pp.45-56 / 2001
  • To acquire seabed information, mosaic images of the seabed were generated using side scan sonar. A short-time energy function is proposed to estimate the height of the towfish from the reflected acoustic amplitudes of each ping, which is needed for slant-range correction and leads to a mosaic image without the water column. While generating the mosaic image, the maximum, last, and average values are used as the measure of a pixel, and 3-D information is preserved by using acoustic amplitudes heading in a specific direction. As the mosaic generation strategy, a low-resolution mosaic image (coarser than 1 m/pixel) was generated for the whole survey area first, and then a high-resolution mosaic image (finer than 0.1 m/pixel) was generated for the selected area. Rocks, ripple marks, sand waves, tidal flats, and artificial fish reefs were found in the mosaic images.
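
The slant-range correction step can be sketched as follows: a short-time energy function over each ping locates the first bottom return (and hence the towfish altitude), and samples beyond it are re-projected from slant range to horizontal range, removing the water column. The window size, threshold, and synthetic ping below are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

def short_time_energy(ping, win=15):
    """Sliding-window energy of one ping of acoustic amplitudes."""
    kernel = np.ones(win) / win
    return np.convolve(ping ** 2, kernel, mode="same")

def slant_range_correct(ping, sample_spacing=0.05, threshold_ratio=0.2):
    """Detect the first bottom return from the energy curve, then map
    slant range to horizontal range: r_h = sqrt(r_s^2 - h^2)."""
    energy = short_time_energy(ping)
    first_return = np.argmax(energy > threshold_ratio * energy.max())
    altitude = first_return * sample_spacing          # towfish height above the seabed

    n = len(ping)
    slant = np.arange(n) * sample_spacing
    corrected = np.zeros(n)
    valid = slant > altitude                          # samples beyond the water column
    horiz = np.sqrt(slant[valid] ** 2 - altitude ** 2)
    idx = np.clip((horiz / sample_spacing).astype(int), 0, n - 1)
    corrected[idx] = ping[valid]                      # nearest-bin re-projection
    return corrected, altitude

# synthetic ping: quiet water column, strong first return, decaying seabed echo
ping = np.concatenate([0.02 * np.random.rand(100),
                       np.exp(-np.arange(400) / 150.0) + 0.05 * np.random.rand(400)])
corrected, h = slant_range_correct(ping)
print("estimated towfish altitude: %.2f m" % h)
```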

Image Fusion Methods for Multispectral and Panchromatic Images of Pleiades and KOMPSAT 3 Satellites

  • Kim, Yeji;Choi, Jaewan;Kim, Yongil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.5 / pp.413-422 / 2018
  • Many applications using satellite data from high-resolution multispectral sensors require an image fusion step, known as pansharpening, before the multispectral images are processed and analyzed when spatial fidelity is crucial. Image fusion methods aim to produce images with both higher spatial and spectral resolution while reducing the spectral distortion that occurs during fusion. These methods can be classified into MRA (Multi-Resolution Analysis) and CSA (Component Substitution Analysis) approaches. To suggest an efficient image fusion method for the Pleiades and KOMPSAT (Korea Multi-Purpose Satellite) 3 satellites, this study evaluates image fusion methods for multispectral and panchromatic images. HPF (High-Pass Filtering), SFIM (Smoothing Filter-based Intensity Modulation), GS (Gram-Schmidt), and GSA (Adaptive GS) were selected as MRA- and CSA-based image fusion methods and applied to the multispectral and panchromatic images. Their performances were evaluated by visual inspection and quality index analysis. HPF and SFIM fusion results showed weak reproduction of spatial detail. GS and GSA fusion results had enhanced spatial information closer to the panchromatic images, but GS produced more spectral distortion on urban structures. This study showed that GSA was effective in improving the spatial resolution of multispectral images from Pleiades 1A and KOMPSAT 3.
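
Of the compared methods, HPF is the simplest to illustrate: the high-frequency component of the panchromatic band is injected into the up-sampled multispectral bands. The sketch below uses a uniform low-pass filter and toy arrays; the filter size and injection gain are assumptions, not the settings used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_pansharpen(ms, pan, ratio=4, box=5, gain=1.0):
    """HPF fusion: upsample MS to the PAN grid and add the PAN high-pass detail.
    ms: (bands, h, w) multispectral, pan: (h*ratio, w*ratio) panchromatic."""
    detail = pan - uniform_filter(pan, size=box)       # high-pass component of PAN
    fused = np.empty((ms.shape[0],) + pan.shape)
    for b in range(ms.shape[0]):
        up = zoom(ms[b], ratio, order=3)                # cubic-spline upsampling
        fused[b] = up + gain * detail                   # inject spatial detail per band
    return fused

# toy example with random data standing in for Pleiades/KOMPSAT-3 bands
ms = np.random.rand(4, 64, 64)     # 4 multispectral bands
pan = np.random.rand(256, 256)     # panchromatic at 4x resolution
print(hpf_pansharpen(ms, pan).shape)   # (4, 256, 256)
```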

Non-Local Mean based Post Processing Scheme for Performance Enhancement of Image Interpolation Method (이미지 보간기법의 성능 개선을 위한 비국부평균 기반의 후처리 기법)

  • Kim, Donghyung
    • Journal of Korea Society of Digital Industry and Information Management / v.16 no.3 / pp.49-58 / 2020
  • Image interpolation, a technology that converts low-resolution images into high-resolution images, has been widely used in various image processing fields such as CCTV, webcams, and medical imaging. The proposed technique is based on the observation that the statistical distribution of the difference between the interpolated image and the original image is similar to that of white Gaussian noise. The proposed algorithm is composed of three steps. First, the interpolated image is derived by an arbitrary image interpolation method. Second, weighting functions used for non-local mean filtering are derived. Finally, the prediction error is corrected by performing non-local mean filtering with the selected weighting function. The method can therefore be regarded as a post-processing algorithm that further reduces the prediction error after an arbitrary image interpolation algorithm has been applied. Simulation results show that the proposed method yields reasonable performance.
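
The post-processing idea, treating the interpolation error as noise-like and suppressing it with non-local means, can be illustrated roughly as below. The paper's own weighting functions are not reproduced; scikit-image's non-local means with an estimated noise level and assumed patch sizes stands in for them.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.transform import resize
from skimage.restoration import denoise_nl_means, estimate_sigma

# start from a downsampled image, then interpolate back up (cubic, order=3)
hr = img_as_float(data.camera())
lr = resize(hr, (hr.shape[0] // 2, hr.shape[1] // 2), order=3, anti_aliasing=True)
interp = resize(lr, hr.shape, order=3)

# treat the interpolation error as noise-like and filter it non-locally
sigma = float(estimate_sigma(interp))
post = denoise_nl_means(interp, h=0.8 * sigma, sigma=sigma,
                        patch_size=5, patch_distance=6, fast_mode=True)

mse_before = np.mean((interp - hr) ** 2)
mse_after = np.mean((post - hr) ** 2)
print(mse_before, mse_after)
```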

Image Registration and Fusion between Passive Millimeter Wave Images and Visual Images (수동형 멀리미터파 영상과 가시 영상과의 정합 및 융합에 관한 연구)

  • Lee, Hyoung;Lee, Dong-Su;Yeom, Seok-Won;Son, Jung-Young;Guschin, Vladmir P.;Kim, Shin-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.6C / pp.349-354 / 2011
  • Passive millimeter wave imaging has the capability of detecting concealed objects under clothing. It can also obtain interpretable images under low-visibility conditions such as rain, fog, smoke, and dust. However, image quality is often degraded by low spatial resolution, low signal level, and low temperature resolution. This paper addresses image registration and fusion between passive millimeter wave images and visual images. The goal of this study is to combine and visualize two different types of information together: the human subject's identity and the concealed objects. The image registration process is composed of body boundary detection and an affine transform that maximizes the cross-correlation coefficient of the two edge images. The image fusion process comprises three stages: a discrete wavelet transform for image decomposition, a fusion rule for merging the coefficients, and the inverse transform for image synthesis. In the experiments, various metallic and non-metallic objects such as a knife, gel- or liquid-type cosmetics, and a phone are detected by passive millimeter wave imaging. The registration and fusion process visualizes the meaningful information from the two different types of sensors.
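
The fusion stage (wavelet decomposition, a coefficient-merging rule, inverse transform) can be sketched with PyWavelets as follows. The registration step is omitted and the paper's specific fusion rule is not reproduced; averaging the approximation coefficients and taking the larger-magnitude detail coefficients are common choices assumed here for illustration.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered images: average the approximation coefficients,
    take the larger-magnitude detail coefficient at each position."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    fused = [(ca[0] + cb[0]) / 2.0]                     # approximation band: average
    for (a_h, a_v, a_d), (b_h, b_v, b_d) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in ((a_h, b_h), (a_v, b_v), (a_d, b_d))))
    return pywt.waverec2(fused, wavelet)

# toy stand-ins for a registered visual image and a PMMW image
visual = np.random.rand(128, 128)
pmmw = np.random.rand(128, 128)
print(wavelet_fuse(visual, pmmw).shape)
```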

Electrical Impedance Tomography and Biomedical Applications

  • Woo, Eung-Je
    • 한국지구물리탐사학회:학술대회논문집 / 2007.06a / pp.1-6 / 2007
  • Two impedance imaging systems, multi-frequency electrical impedance tomography (MFEIT) and magnetic resonance electrical impedance tomography (MREIT), are described. MFEIT utilizes boundary measurements of current-voltage data at multiple frequencies to reconstruct cross-sectional images of a complex conductivity distribution (σ + iωε) inside the human body. The inverse problem in MFEIT is ill-posed due to the nonlinearity and low sensitivity between the boundary measurements and the complex conductivity. In MFEIT, we therefore focus on time- and frequency-difference imaging with a low spatial resolution and a high temporal resolution. Multi-frequency time- and frequency-difference images in the frequency range of 10 Hz to 500 kHz are presented. In MREIT, we use an MRI scanner to measure the internal distribution of the magnetic flux density induced by an injection current. This internal information enables us to reconstruct cross-sectional images of the internal conductivity distribution with a high spatial resolution. A conductivity image of a postmortem canine brain is presented, showing a clear contrast between gray and white matter. Clinical applications for imaging the brain, breast, thorax, abdomen, and other organs are briefly discussed.


3-D DISPLAY USING COMPUTER-GENERATED BINARY HOLOGRAMS

  • Kajiki, Yoshinori;Okamoto, Masaaki;Yamasaki, Koji;Shimizu, Eiji
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.227-232 / 1999
  • We have been conducting research on 3-D displays using computer-generated holograms (CGHs). Our CGHs are binary Fresnel holograms that reconstruct point light sources and are recorded using high-resolution laser printers (image setters). We use an image setter with a resolution of 5080 dots per inch, which makes it possible to reconstruct CGHs with light-emitting points. As the resolution of the image setter is not very high, it is better to use a spherical wave as the reference beam. We found that the number of recordable point objects is restricted by the low resolution, and proposed a multiplex-type hologram to reduce the number of point objects recorded per unit area of the CGH. We also proposed a method for making computer-generated color holograms that can reconstruct color point light sources by combining RGB color filters with stripe CGHs corresponding to each color, and considered two kinds of gradation methods for our binary CGHs. In this paper, we propose a multiple-reconstruction method for improving the narrow viewing field.
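
A binary Fresnel CGH of point light sources with a spherical reference wave can be simulated roughly as follows: sum the spherical waves from the object points and the reference point on the hologram plane, then binarize the resulting intensity. The wavelength, geometry, and sample pitch below are arbitrary assumptions for illustration, not the 5080 dpi image-setter configuration.

```python
import numpy as np

wavelength = 633e-9                 # assumed red laser wavelength [m]
pitch = 5e-6                        # assumed hologram sample pitch [m]
n = 1024                            # hologram samples per side

# hologram-plane coordinates
x = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(x, x)

def spherical_wave(x0, y0, z0):
    """Complex field on the hologram plane from a point source at (x0, y0, z0)."""
    r = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + z0 ** 2)
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * r) / r

# a few object points in front of the hologram, plus a spherical reference wave
points = [(-0.5e-3, 0.0, 0.05), (0.5e-3, 0.0, 0.05), (0.0, 0.5e-3, 0.06)]
obj = sum(spherical_wave(*p) for p in points)
ref = spherical_wave(0.0, 0.0, 0.20)        # reference source placed farther away

intensity = np.abs(obj + ref) ** 2
binary_cgh = (intensity > np.median(intensity)).astype(np.uint8)  # binarize the fringes
print(binary_cgh.shape, binary_cgh.mean())
```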

Applicability Evaluation of Spatio-Temporal Data Fusion Using Fine-scale Optical Satellite Image: A Study on Fusion of KOMPSAT-3A and Sentinel-2 Satellite Images (고해상도 광학 위성영상을 이용한 시공간 자료 융합의 적용성 평가: KOMPSAT-3A 및 Sentinel-2 위성영상의 융합 연구)

  • Kim, Yeseul;Lee, Kwang-Jae;Lee, Sun-Gu
    • Korean Journal of Remote Sensing / v.37 no.6_3 / pp.1931-1942 / 2021
  • As the utility of optical satellite images with a high spatial resolution (i.e., fine-scale images) has been emphasized, various studies on land surface monitoring using them have recently been carried out. However, the usefulness of fine-scale satellite images is limited because they are acquired at a low temporal resolution. To compensate for this limitation, spatio-temporal data fusion can be applied to generate a synthetic image with a high spatio-temporal resolution by fusing multiple satellite images with different spatial and temporal resolutions. Since previous spatio-temporal fusion models were developed for mid- or low-spatial-resolution satellite images, it is necessary to evaluate their applicability to satellite images with a high spatial resolution. To this end, this study evaluated the applicability of the developed spatio-temporal fusion models to KOMPSAT-3A and Sentinel-2 images. The Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM) and the Spatial Time-series Geostatistical Deconvolution/Fusion Model (STGDFM), which use different information for prediction, were applied. The results show that the prediction performance of STGDFM, which combines temporally continuous reflectance values, was better than that of ESTARFM. In particular, the advantage of STGDFM was pronounced when it was difficult to acquire KOMPSAT and Sentinel-2 images on the same date owing to the low temporal resolution of the KOMPSAT images. These results confirm that STGDFM, whose prediction performance is relatively better because it combines continuous temporal information, can compensate for the low revisit frequency of fine-scale satellite images.

An Effective Viewport Resolution Scaling Technique to Reduce the Power Consumption in Mobile GPUs

  • Hwang, Imjae;Kwon, Hyuck-Joo;Chang, Ji-Hye;Lim, Yeongkyu;Kim, Cheong Ghil;Park, Woo-Chan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.8 / pp.3918-3934 / 2017
  • This paper presents a viewport resolution scaling technique to reduce power consumption in mobile graphics processing units (GPUs). The technique controls the rendering resolution of applications in proportion to a resolution factor. In the mobile environment, finding an effective resolution factor is essential because rendering resolution trades image quality against GPU power consumption. This paper identifies a resolution factor that minimizes image-quality degradation while still yielding a power reduction. For this purpose, software and hardware viewport resolution scaling techniques are applied in the Android environment, and the relationship between image quality and power consumption as a function of the resolution factor is analyzed through benchmarks in a real commercial environment. Experimental results show that the hardware viewport resolution scaling technique reduced power consumption by 36.96% on average.
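
The core of viewport resolution scaling is easy to illustrate: render into a viewport scaled by the resolution factor and upscale to the display size, so the number of shaded fragments falls roughly with the square of the factor. The sketch below only computes the scaled viewport and the fragment ratio; the Android and GPU hooks used in the paper are not reproduced.

```python
def scaled_viewport(display_w, display_h, factor):
    """Viewport dimensions for a given resolution factor (0 < factor <= 1)."""
    w = max(1, int(round(display_w * factor)))
    h = max(1, int(round(display_h * factor)))
    return w, h

display = (1920, 1080)
for factor in (1.0, 0.9, 0.8, 0.7):
    w, h = scaled_viewport(*display, factor)
    fragments = (w * h) / (display[0] * display[1])   # shaded-fragment ratio ~ factor^2
    print(f"factor {factor:.1f}: render at {w}x{h}, fragment ratio {fragments:.2f}")
```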