Title/Summary/Keyword: Resolution of Image


Image Processing-based Validation of Unrecognizable Numbers in Severely Distorted License Plate Images

  • Jang, Sangsik; Yoon, Inhye; Kim, Dongmin; Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing, v.1 no.1, pp.17-26, 2012
  • This paper presents an image processing-based validation method for unrecognizable numbers in severely distorted license plate images that have been degraded by various factors, including low resolution, low light level, geometric distortion, and periodic noise. Existing vehicle license plate recognition (LPR) methods assume that most image degradation factors have been removed before the printed numbers and letters are recognized; when this is not the case, conventional LPR becomes impossible. The proposed method adopts a novel approach in which a set of reference number images is intentionally degraded using the same factors estimated from the input image. After a series of image processing steps, including geometric transformation, super-resolution, and filtering, a cross-correlation comparison between the intentionally degraded reference images and the input image can successfully identify the visually unrecognizable numbers. The proposed method makes it possible to validate numbers in a license plate image taken under low light-level conditions. In an experiment using an extended set of test images that are unrecognizable to human vision, the proposed method achieves a recognition rate of over 95%, whereas most existing LPR methods fail due to the severe distortion. (A rough sketch of the degrade-then-correlate matching step follows below.)
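
The core matching step, comparing each intentionally degraded reference digit against the observed plate digit by normalized cross-correlation, might look roughly like the sketch below. This is an illustration under assumptions, not the authors' implementation: `validate_digit`, `degrade`, and `reference_digits` are hypothetical names, and the degradation model (geometric distortion, blur, noise) is assumed to have been estimated elsewhere.

```python
import numpy as np

def normalized_cross_correlation(patch, template):
    """Zero-mean normalized cross-correlation between two equal-sized images."""
    p = patch.astype(np.float64) - patch.mean()
    t = template.astype(np.float64) - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def validate_digit(plate_digit, reference_digits, degrade):
    """Degrade each clean reference digit with the estimated distortion model,
    then pick the reference whose degraded version best matches the observation."""
    scores = {label: normalized_cross_correlation(plate_digit, degrade(ref))
              for label, ref in reference_digits.items()}
    best = max(scores, key=scores.get)
    return best, scores
```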


A Background Segmentation Using Color and Edge Information In Low Resolution Color Image (저해상도 칼라 영상의 색상 정보와 에지정보를 이용한 배경 분리)

  • 정민영; 박성한
    • Proceedings of the IEEK Conference, 2003.11a, pp.39-42, 2003
  • In this paper, we propose a background segmentation method for low-resolution color images. The segmentation algorithm is based on color and edge information; in the edge image, adaptive, local thresholds are applied to suppress paint boundaries. Experiments show that the proposed algorithm efficiently separates the background from objects. (An illustrative color-plus-edge masking sketch is given below.)
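
A minimal sketch of how color and edge cues might be combined for background masking, assuming OpenCV is available. The dominant background color, the threshold values, and the adaptive edge thresholding are illustrative assumptions and do not reproduce the paper's actual thresholding scheme.

```python
import cv2
import numpy as np

def segment_background(bgr, bg_color, color_thresh=30.0, block=31, c=10):
    """Label as background the pixels that are close to an assumed dominant
    background color AND do not lie on a locally strong edge."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Edge magnitude, then a locally adaptive threshold: a pixel counts as an
    # edge only if its magnitude stands out from its neighborhood.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    edges = cv2.adaptiveThreshold(mag, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, block, -c)
    # Distance to the assumed background color in BGR space.
    dist = np.linalg.norm(bgr.astype(np.float32) - np.float32(bg_color), axis=2)
    background = (dist < color_thresh) & (edges == 0)
    return (background * 255).astype(np.uint8)
```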


SHADOW EXTRACTION FROM ASTER IMAGE USING MIXED PIXEL ANALYSIS

  • Kikuchi, Yuki; Miyata, Takeshi; Takagi, Masataka
    • Proceedings of the KSRS Conference, 2003.11a, pp.727-731, 2003
  • ASTER imagery has some advantages for classification, such as its 15 spectral bands and 15 m to 90 m spatial resolution. However, in classification using general remote sensing imagery, shadow areas are often classified as water, and the two are very difficult to separate because the reflectance characteristics of water are similar to those of shadow. In addition, many land cover types are mixed within a single pixel at 15 m spatial resolution. Nowadays, very high resolution satellite images (IKONOS, QuickBird) and Digital Surface Models (DSMs) from airborne laser scanners are also available. In this study, a mixed pixel analysis of an ASTER image was carried out using an IKONOS image and a DSM. Because mixed pixel analysis requires highly accurate geometric correction, an image matching method was applied to generate GCP datasets and the IKONOS image was rectified by an affine transform, so that each pixel in the ASTER image could be compared with the corresponding 15×15 pixel block in the IKONOS image. Training datasets for the mixed pixel analysis were then generated by visual interpretation of the IKONOS image. Finally, classification was carried out based on a Linear Mixture Model, from which shadow areas were extracted. The extracted shadow areas were validated against a shadow image generated from a 1 m to 2 m spatial resolution DSM. The results showed a 17.2% error in mixed pixels, which may reflect a limitation of ASTER imagery for shadow extraction due to its 8-bit quantization. (A sketch of the Linear Mixture Model unmixing step follows below.)
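
The Linear Mixture Model mentioned above treats each coarse pixel as a linear combination of endmember spectra with fractions that sum to one. A minimal unmixing sketch with assumed endmember reflectances is shown below; the actual endmembers, constraints, and solver used in the paper are not specified in the abstract.

```python
import numpy as np

def unmix_pixel(pixel, endmembers):
    """Linear Mixture Model: solve pixel ≈ endmembers @ fractions, with a
    sum-to-one constraint added as an extra equation, then clip to [0, 1]."""
    n_bands, n_classes = endmembers.shape
    A = np.vstack([endmembers, np.ones((1, n_classes))])   # append sum-to-one row
    b = np.append(np.asarray(pixel, dtype=np.float64), 1.0)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(fractions, 0.0, 1.0)

# Hypothetical endmember reflectances: rows = bands, columns = (water, vegetation, shadow).
E = np.array([[0.06, 0.08, 0.02],
              [0.04, 0.05, 0.02],
              [0.02, 0.45, 0.03]])
print(unmix_pixel([0.05, 0.035, 0.24], E))   # roughly half vegetation, half shadow
```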


Super-resolution in Music Score Images by Instance Normalization

  • Tran, Minh-Trieu; Lee, Guee-Sang
    • Smart Media Journal, v.8 no.4, pp.64-71, 2019
  • The performance of an Optical Music Recognition (OMR) system is usually determined by the characteristics of the input music score images, and low resolution is one of the main factors leading to degraded image quality. In this paper, we handle the low-resolution problem using a super-resolution technique: a deep neural network with instance normalization is proposed to improve the quality of music score images. Instance normalization, which has proven beneficial in single-image enhancement, works better here than batch normalization, showing the effectiveness of shifting the mean and variance of deep features at the instance level. The proposed method provides an end-to-end mapping from low-resolution to high-resolution images, producing outputs whose resolution is four times that of the originals. The model has been evaluated on the DeepScores dataset and outperforms other existing methods. (A toy network using instance normalization is sketched below.)
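
To make the instance normalization idea concrete, here is a toy PyTorch 4x upscaling network in the same spirit. It is not the paper's architecture; the layer counts, channel widths, and PixelShuffle upsampling are assumptions.

```python
import torch
import torch.nn as nn

class INSuperResolution(nn.Module):
    """Toy 4x super-resolution network using instance normalization."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1),
            nn.InstanceNorm2d(channels, affine=True),   # normalize per image, per channel
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels, affine=True),
            nn.ReLU(inplace=True),
        )
        # Two PixelShuffle stages give a total upscaling factor of 4.
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):            # x: (N, 1, H, W) grayscale score image
        return self.upsample(self.body(x))
```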

Single Image Super Resolution using sub-Edge Extraction based on Hierarchical Structure (계층적 보조 경계 추출을 이용한 단일 영상의 초해상도 기법)

  • Han, Hyun Ho
    • Journal of Digital Policy, v.1 no.2, pp.53-59, 2022
  • In this paper, we propose a single-image super-resolution method that uses sub-edge information extracted through a hierarchical structure. To improve super-resolution quality, the boundary regions in the image must be expressed clearly so that the shape of each area is distinguishable. The proposed method feeds edge information into a deep learning based super-resolution network, producing improved results while maintaining the structural shape of the boundary regions, which is an important factor in super-resolution quality. In addition to a group convolution structure for the super-resolution network, a separate hierarchical edge accumulation process based on high-frequency band information is proposed for sub-edge extraction, and its output is used as an auxiliary feature. Experimental results show about a 1% improvement in PSNR and SSIM compared to existing super-resolution methods. (A rough sketch of hierarchical high-frequency accumulation is given below.)
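
One possible reading of "hierarchical edge accumulation" is collecting band-pass (high-frequency) detail across pyramid levels and using the accumulated map as an auxiliary feature. The sketch below follows that reading with OpenCV; the number of levels, the Gaussian pyramid, and the normalization are assumptions, not the paper's procedure.

```python
import cv2
import numpy as np

def hierarchical_sub_edges(gray, levels=3):
    """Accumulate high-frequency (band-pass) detail across Gaussian pyramid levels
    into a single auxiliary edge map."""
    h, w = gray.shape
    acc = np.zeros((h, w), dtype=np.float32)
    current = gray.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        band_pass = current - up                       # detail lost at this level
        acc += cv2.resize(np.abs(band_pass), (w, h))   # bring detail back to full size
        current = down
    return acc / (acc.max() + 1e-8)                    # normalized auxiliary feature
```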

High-resolution image restoration based on image fusion (영상융합 기반 고해상도 영상복원)

  • Shin, Jeongho; Lee, Jungsoo; Paik, Joonki
    • Journal of Broadcast Engineering, v.10 no.2, pp.238-246, 2005
  • This paper proposes an iterative high-resolution image interpolation algorithm using spatially adaptive constraints and a regularization functional. The proposed algorithm adapts its constraints to the direction of edges in the image and restores the high-resolution image by optimizing the regularization functional at each iteration, which makes it suitable for edge-directional regularization. The algorithm outperforms both conventional adaptive and non-adaptive interpolation methods: it not only restores high-frequency components but also effectively reduces undesirable effects such as noise. Finally, various experiments evaluate the performance of the proposed algorithm and show that it provides good results in both subjective and objective terms. (A simplified iterative regularized restoration loop is sketched below.)
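
A stripped-down version of iterative regularized restoration: gradient descent on a data-fidelity term plus a smoothness penalty. The paper's constraints are spatially adaptive and edge-directional; the sketch below uses a plain Laplacian penalty instead, and the step size, weight, and bilinear downsampling model are assumptions.

```python
import numpy as np
from scipy.ndimage import laplace, zoom

def regularized_upscale(low_res, factor=2, lam=0.05, step=0.2, iters=100):
    """Iteratively estimate a high-resolution image x by gradient descent on
    ||down(x) - y||^2 + lam * ||laplacian(x)||^2."""
    x = zoom(low_res.astype(np.float64), factor, order=1)    # bilinear initial guess
    for _ in range(iters):
        residual = zoom(x, 1.0 / factor, order=1) - low_res  # data-fidelity residual
        grad_data = zoom(residual, factor, order=1)          # crude transpose of downsampling
        grad_reg = laplace(laplace(x))                       # gradient of the smoothness term
        x -= step * (grad_data + lam * grad_reg)
    return x
```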

Quantitative Evaluation of Super-resolution Drone Images Generated Using Deep Learning (딥러닝을 이용하여 생성한 초해상화 드론 영상의 정량적 평가)

  • Seo, Hong-Deok; So, Hyeong-Yoon; Kim, Eui-Myoung
    • Journal of Cadastre & Land InformatiX, v.53 no.2, pp.5-18, 2023
  • As the development of drones and sensors accelerates, new services and value are created by fusing data acquired from the various sensors mounted on drones. However, spatial information built through data fusion depends mainly on imagery, and data quality is determined by the specifications and performance of the hardware; moreover, expensive equipment is required to construct high-quality spatial information, which limits use in the field. In this study, super-resolution was performed by applying deep learning to low-resolution images acquired from RGB and THM cameras mounted on a drone, and quantitative evaluation and feature point extraction were performed on the generated high-resolution images. In the experiments, the high-resolution images generated by super-resolution maintained the characteristics of the original images, and as the resolution improved, more feature points could be extracted than from the originals. Therefore, applying low-resolution images to a super-resolution deep learning model is judged to be a new way of constructing high-quality spatial information without being restricted by hardware. (An evaluation sketch computing PSNR, SSIM, and feature point counts is given below.)
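
The quantitative evaluation described above could be reproduced roughly as follows: PSNR and SSIM against a same-sized reference image, plus a count of detectable feature points. ORB is used here purely for illustration; the abstract does not say which detector or metric implementation the authors used.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_super_resolution(reference_gray, generated_gray):
    """Compare a generated image against a reference (both uint8 grayscale,
    same size) and count how many feature points a detector finds."""
    psnr = peak_signal_noise_ratio(reference_gray, generated_gray)
    ssim = structural_similarity(reference_gray, generated_gray)
    orb = cv2.ORB_create(nfeatures=5000)
    keypoints = orb.detect(generated_gray, None)
    return {"PSNR": psnr, "SSIM": ssim, "feature_points": len(keypoints)}
```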

Tiled Stereo Display System for Immersive Telemeeting

  • Kim, Ig-Jae; Ahn, Sang-Chul; Kim, Hyoung-Gon
    • Journal of Information Display, v.8 no.4, pp.27-31, 2007
  • In this paper, we present an efficient tiled stereo display system for tangible meetings. For a tangible meeting, it is important to provide an immersive, high-resolution display that covers the field of view and gives the local user the same environment as the remote site. To achieve this, a high-resolution image must be transmitted to reconstruct the remote world and displayed on a tiled display. However, limited network bandwidth makes it hard to transmit high-resolution images in real time, so we receive multiple images and reconstruct the remote world from them in advance. We then update only the specific area where the remote user appears by receiving a low-resolution image in real time. The transmitted image is composited into the existing environment map of the remote world and displayed as a stereo image. For this, we developed a new system that supports GPU-based real-time warping and blending and automatic feature extraction using machine vision techniques. (A CPU-side sketch of the warp-and-blend compositing step follows below.)
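
The warp-and-blend update might look like the following CPU sketch with OpenCV: a live low-resolution patch is warped into the prebuilt environment map by a homography and alpha-blended in place. The homography, blend weight, and function names are assumptions; the paper implements this stage on the GPU.

```python
import cv2
import numpy as np

def composite_into_environment(env_map, live_patch, homography, alpha=0.8):
    """Warp a live patch into the environment map and alpha-blend it over the
    corresponding region."""
    h, w = env_map.shape[:2]
    warped = cv2.warpPerspective(live_patch, homography, (w, h))
    mask = cv2.warpPerspective(np.full(live_patch.shape[:2], 255, np.uint8),
                               homography, (w, h)) > 0
    out = env_map.astype(np.float32).copy()
    out[mask] = alpha * warped[mask] + (1 - alpha) * out[mask]
    return out.astype(env_map.dtype)
```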

Quantitative Analysis of Spatial Resolution for the Influence of the Focus Size and Digital Image Post-Processing on the Computed Radiography (CR(Computed Radiography)에서 초점 크기와 디지털영상후처리에 따른 공간분해능의 정량적 분석)

  • Seoung, Youl-Hun
    • Journal of Digital Convergence, v.12 no.11, pp.407-414, 2014
  • The aim of this study was to quantitatively analyze the influence of focal spot size and digital image post-processing on spatial resolution in Computed Radiography (CR). The modulation transfer function (MTF), measured with an edge method, was used to evaluate spatial resolution. Both the small focus (0.6 mm) and the large focus (1.2 mm) of the X-ray tube were used. The 50% and 10% MTF values were evaluated for edge and contrast enhancement performed with multi-scale image contrast amplification (MUSICA) as digital image post-processing. As a result, edge enhancement yielded significantly higher spatial resolution at MTF 50% than contrast enhancement for both focal spot sizes, and the spatial resolution of images obtained with the large focus was also improved by digital post-processing. In conclusion, these results should serve as basic data for obtaining high-resolution clinical images on CR, such as skeletal and chest images. (A sketch of edge-based MTF measurement is given below.)
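
Edge-method MTF measurement works by averaging an edge region into an edge spread function (ESF), differentiating it into a line spread function (LSF), and taking the magnitude of its Fourier transform. The sketch below shows that pipeline in a simplified, non-slanted form with an assumed pixel pitch; practical CR analysis typically uses a slanted edge with sub-pixel oversampling.

```python
import numpy as np

def edge_mtf(edge_roi, pixel_pitch_mm=0.1):
    """Estimate the MTF from an ROI containing a near-vertical edge and report
    the frequencies where it first drops below 50% and 10%."""
    esf = edge_roi.astype(np.float64).mean(axis=0)        # ESF across the edge
    lsf = np.gradient(esf)                                # LSF = derivative of ESF
    lsf *= np.hanning(lsf.size)                           # window to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                         # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # cycles per mm
    mtf50 = freqs[np.argmax(mtf <= 0.5)]                  # first frequency at/below 50%
    mtf10 = freqs[np.argmax(mtf <= 0.1)]                  # first frequency at/below 10%
    return freqs, mtf, mtf50, mtf10
```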

Evaluation of Block-based Sharpening Algorithms for Fusion of Hyperion and ALI Imagery (Hyperion과 ALI 영상의 융합을 위한 블록 기반의 융합기법 평가)

  • Kim, Yeji; Choi, Jaewan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.33 no.1, pp.63-70, 2015
  • Image fusion, or pansharpening, is a methodology for increasing the spatial resolution of a low-spatial-resolution image using a high-spatial-resolution image. In this paper, we fuse hyperspectral imagery using a high-spatial-resolution panchromatic image together with low-spatial-resolution multispectral and hyperspectral images acquired by the ALI and Hyperion sensors of the EO-1 satellite. The study focuses on evaluating the performance of a block-based fusion methodology applied to the ALI and Hyperion dataset, which takes into account the spectral characteristics shared by the multispectral and hyperspectral images. Experimental results show that the proposed algorithm efficiently improves spatial resolution and minimizes spectral distortion compared with fusing only the panchromatic and hyperspectral images and with an existing block-based fusion method. The proposed algorithm is therefore expected to broaden the use of airborne hyperspectral sensors and of the various hyperspectral satellite sensors to be launched in the future. (A minimal pansharpening sketch is given below.)
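
For context, the generic idea of pansharpening can be shown with a very small Brovey-style sketch: each band, already resampled to the panchromatic grid, is scaled by the ratio of the panchromatic image to an intensity image. This illustrates pansharpening only in its simplest form; the paper's block-based method additionally assigns hyperspectral bands to spectrally similar multispectral blocks, which is not reproduced here.

```python
import numpy as np

def brovey_pansharpen(pan, bands):
    """Brovey-style sharpening. pan: (H, W) panchromatic image; bands: (n_bands, H, W)
    low-resolution bands already upsampled to the panchromatic grid."""
    intensity = bands.mean(axis=0) + 1e-6          # simple intensity estimate
    return bands * (pan / intensity)[None, :, :]   # scale each band by pan/intensity
```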