• Title/Summary/Keyword: Fusion Image

Effectiveness of Using the TIR Band in Landsat 8 Image Classification

  • Lee, Mi Hee;Lee, Soo Bong;Kim, Yongmin;Sa, Jiwon;Eo, Yang Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.3 / pp.203-209 / 2015
  • This paper discusses the effectiveness of using Landsat 8 TIR (Thermal Infrared) band images to improve the accuracy of land use/land cover classification of urban areas. According to the classification results for the study area using diverse band combinations, classification accuracy using an image fusion process in which the TIR band is added to the visible and near-infrared bands improved by 4.0% compared to a band combination that does not consider the TIR band. For urban land use/land cover classification in particular, the producer's accuracy and user's accuracy improved by 10.2% and 3.8%, respectively. When MLC (Maximum Likelihood Classification), which is commonly applied to remote sensing images, was used, the TIR band images helped achieve better class discrimination in land use/land cover classification.
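
The gain from adding a band can be pictured with a minimal Gaussian maximum likelihood classifier over a stacked band vector. This is a generic sketch with made-up class statistics in a hypothetical (red, NIR, TIR) feature space, not the paper's actual workflow or data.

```python
import numpy as np

def mlc_classify(pixels, class_stats):
    """Assign each pixel vector to the class with the highest Gaussian log-likelihood."""
    scores = []
    for mean, cov in class_stats:
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        # log-likelihood up to a constant: -0.5 * (log|C| + d^T C^-1 d)
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(-0.5 * (np.log(np.linalg.det(cov)) + mahal))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# toy training samples for two classes -- values are hypothetical
rng = np.random.default_rng(0)
urban = rng.normal([0.30, 0.20, 0.90], 0.05, (100, 3))
veg = rng.normal([0.10, 0.60, 0.40], 0.05, (100, 3))
stats = [(c.mean(axis=0), np.cov(c.T)) for c in (urban, veg)]

probe = np.vstack([rng.normal([0.30, 0.20, 0.90], 0.05, (5, 3)),
                   rng.normal([0.10, 0.60, 0.40], 0.05, (5, 3))])
labels = mlc_classify(probe, stats)
```

Dropping the third (thermal) column from the stack and the statistics would reproduce a VNIR-only baseline of the kind the paper compares against.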

Usefulness of Region Cut Subtraction in Fusion & MIP 3D Reconstruction Image (Fusion & Maximum Intensity Projection 3D 재구성 영상에서 Region Cut Subtraction의 유용성)

  • Moon, A-Reum;Chi, Yong-Gi;Choi, Sung-Wook;Lee, Hyuk;Lee, Kyoo-Bok;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.18-23 / 2010
  • Purpose: PET/CT combines functional and morphologic data and increases diagnostic accuracy in a variety of malignancies. In particular, Fusion PET/CT images and MIP (Maximum Intensity Projection) images reconstructed from 2-dimensional into 3-dimensional form are useful for visualizing lesions. In Fusion & MIP 3D reconstructed images, however, hot uptake from urine or a urostomy bag can overlap a lesion, making it difficult to distinguish the lesion with the naked eye. This study attempts to improve lesion distinction by removing such regions of hot uptake. Materials and Methods: The subjects were patients examined at our hospital from September 2008 to March 2009 whose PET/CT examinations showed a large residual urine volume in association with diseases of the uterus, bladder, or rectum. We used the Volume Viewer program of GE's Advantage Workstation AW4.3 05. As the analysis method, an ROI was set over the region to be removed in the axial volume image, Cut Outside was selected, and the same procedure was applied in the coronal volume image. Next, the minimum value was adjusted under Threshold in 3D Tools, and Subtraction was selected in Advanced Processing. Fusion & MIP images produced this way were compared with images generated without Region Cut Definition. Results: Using the Region Cut Subtraction function of Advantage Workstation AW4.3 05 when creating Fusion & MIP 3D reconstructed images, regions of hot uptake caused by the patient's urine could be removed. Lesions were reconstructed much more distinctly in the images using Region Cut Definition. Conclusion: For patients showing hot uptake due to the volume of urine in the bladder, removing the regions of hot uptake during image reconstruction can offer better diagnostic information than conventional image subtraction. Especially in diseases of the uterus, bladder, and rectum, it should help improve image quality.
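
In generic array terms (this is a toy numpy sketch with hypothetical voxel values, not the AW4.3 Volume Viewer workflow), the region-cut idea amounts to zeroing a cut ROI before projecting:

```python
import numpy as np

def mip_with_region_cut(volume, cut_mask):
    """volume: (Z, Y, X) activity volume; cut_mask: True where uptake is removed."""
    cleaned = np.where(cut_mask, 0.0, volume)
    return cleaned.max(axis=0)  # maximum intensity projection along Z

vol = np.zeros((3, 4, 4))
vol[0, 1, 1] = 10.0            # intense urinary uptake (e.g. bladder)
vol[1, 2, 2] = 3.0             # nearby lesion, weaker signal
cut = np.zeros_like(vol, dtype=bool)
cut[:, 1, 1] = True            # ROI drawn around the hot uptake
mip = mip_with_region_cut(vol, cut)
```

Without the cut, the projected bladder voxel would dominate its pixel; with it, only the lesion's signal remains in the projection.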

Unsupervised Image Classification through Multisensor Fusion using Fuzzy Class Vector (퍼지 클래스 벡터를 이용하는 다중센서 융합에 의한 무감독 영상분류)

  • 이상훈
    • Korean Journal of Remote Sensing / v.19 no.4 / pp.329-339 / 2003
  • In this study, a decision-level image fusion approach is proposed for unsupervised classification of images acquired from multiple sensors with different characteristics. The proposed method applies to each sensor separately an unsupervised classification scheme based on spatial region-growing segmentation that makes use of hierarchical clustering, and iteratively computes maximum likelihood estimates of fuzzy class vectors for the segmented regions using the EM (expectation-maximization) algorithm. The fuzzy class vector is an indicator vector whose elements represent the probabilities that the region belongs to each of the existing classes. The classification results of the individual sensors are then combined using these fuzzy class vectors. This approach does not require the high-precision spatial coregistration between images of different sensors that pixel-level fusion schemes demand. The proposed method was applied to multispectral SPOT and AIRSAR data observed over the north-eastern area of Jeollabuk-do, and the experimental results show that it provides more correct information for classification than the augmented-vector technique, the most common approach to pixel-level image fusion.
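
The decision-level combination step can be sketched for a single segmented region; the product-and-renormalize rule and the numbers below are illustrative assumptions, not the paper's exact combination formula.

```python
import numpy as np

def fuse_fuzzy_class_vectors(vectors):
    """Combine per-sensor class-membership vectors for one segmented region."""
    fused = np.prod(np.stack(vectors), axis=0)  # element-wise product of evidence
    return fused / fused.sum()                  # renormalize to probabilities

optical = np.array([0.6, 0.3, 0.1])  # e.g. evidence from the SPOT classifier
radar = np.array([0.5, 0.1, 0.4])    # e.g. evidence from the AIRSAR classifier
fused = fuse_fuzzy_class_vectors([optical, radar])
```

Because the fusion operates on per-region class vectors rather than on pixel values, the sensors only need to agree at the region level, which is why pixel-accurate coregistration is not required.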

DCNN Optimization Using Multi-Resolution Image Fusion

  • Alshehri, Abdullah A.;Lutz, Adam;Ezekiel, Soundararajan;Pearlstein, Larry;Conlen, John
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.11 / pp.4290-4309 / 2020
  • In recent years, advancements in machine learning have allowed its widespread adoption for tasks such as object detection, image classification, and anomaly detection. Despite their promise, however, a limitation of such networks is that their performance depends on the data they receive. A well-trained network will still perform poorly if the data supplied to it contains artifacts, out-of-focus regions, or other visual distortions. Under normal circumstances, images of the same scene captured from differing points of focus, angles, or modalities must be analysed by the network separately, despite possibly containing overlapping information (as with images of the same scene captured from different angles) or irrelevant information (as with infrared sensors, which capture thermal information well but not topographical detail). This can add significantly to the computational time and resources required to use the network without providing any additional benefit. In this study, we explore using image fusion techniques to assemble multiple images of the same scene into a single image that retains the most salient features of the individual source images while discarding overlapping or irrelevant data that provides no benefit to the network. By applying this image fusion step before inputting a dataset into the network, the number of images is significantly reduced, with the potential to improve classification accuracy by enhancing images while discarding irrelevant and overlapping regions.
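
One simple instance of such a pre-network fusion step is multi-focus fusion with a Laplacian focus measure: per pixel, keep the source image that is locally sharper. This sketch is an assumed illustration of the general idea, not the fusion method the study evaluates.

```python
import numpy as np

def laplacian_energy(img):
    """Absolute 4-neighbour Laplacian as a per-pixel sharpness measure."""
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4 * img[1:-1, 1:-1])
    return np.abs(lap)

def focus_fuse(a, b):
    """Keep, per pixel, whichever source image is locally sharper."""
    return np.where(laplacian_energy(a) >= laplacian_energy(b), a, b)

a = np.zeros((5, 5))                 # flat (defocused) source
b = np.zeros((5, 5)); b[2, 2] = 1.0  # source carrying a sharp detail
fused = focus_fuse(a, b)
```

The network then sees one fused image per scene instead of two partially redundant ones.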

Comparison of various image fusion methods for impervious surface classification from VNREDSat-1

  • Luu, Hung V.;Pham, Manh V.;Man, Chuc D.;Bui, Hung Q.;Nguyen, Thanh T.N.
    • International Journal of Advanced Culture Technology / v.4 no.2 / pp.1-6 / 2016
  • Impervious surfaces are important indicators for urban development monitoring. Accurate mapping of urban impervious surfaces with observation satellites such as VNREDSat-1 remains challenging due to the spectral diversity not captured by an individual PAN image. In this article, five multi-resolution image fusion techniques were compared for the task of classifying urban impervious surfaces. The results show that, for the VNREDSat-1 dataset, the UNB and wavelet transformation methods are the best techniques for preserving the spatial and spectral information of the original MS image, respectively. However, the UNB technique gives the best results for impervious surface classification, especially when shadow areas are included in the non-impervious surface group.
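
For intuition about what such fusion methods do, here is a minimal pansharpening step in the same family: the classical Brovey ratio transform (not one of the five techniques the paper compares; the MS bands are assumed already upsampled to the PAN grid).

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """ms: (bands, H, W) multispectral; pan: (H, W) panchromatic."""
    intensity = ms.mean(axis=0)            # crude intensity estimate
    return ms * (pan / (intensity + eps))  # rescale bands toward PAN detail

ms = np.stack([np.full((2, 2), v) for v in (2.0, 4.0, 6.0)])
pan = np.full((2, 2), 8.0)
sharp = brovey_pansharpen(ms, pan)
```

Methods like UNB and wavelet fusion make this spatial/spectral trade-off more carefully, which is exactly what the paper measures.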

Boundary Stitching Algorithm for Fusion of Vein Pattern (정맥패턴 융합을 위한 Boundary Stitching Algorithm)

  • Lim, Young-Kyu;Jang, Kyung-Sik
    • Proceedings of the Korea Information Processing Society Conference / 2005.05a / pp.521-524 / 2005
  • This paper proposes a fusion algorithm that merges multiple vein pattern images into a single image larger than its sources. As a preprocessing step for template matching during the verification of biometric data such as fingerprint images or hand vein pattern images, this fusion technique makes the reference image larger than the candidate images in order to enhance matching performance. A new algorithm called BSA (Boundary Stitching Algorithm) is proposed, in which rectilinear boundary parts extracted from the candidate images are stitched to the reference image to enlarge its matching space. When BSA was applied to a practical vein pattern verification system, the verification rate increased by about 10%.

The evaluation of usefulness of the newly manufactured immobilization device (치료보조기구의 제작 및 유용성 평가)

  • Seo Seok Jin;Kim Chan Yoeng;Lee Je Hee;Park Heung Deuk
    • The Journal of Korean Society for Radiation Therapy / v.17 no.1 / pp.45-55 / 2005
  • Purpose: To evaluate the usefulness of a handmade patient immobilization device and to report its clinical results. Materials and methods: We made and analyzed two fusion images: one combining a diagnostic MR image with a CT image, the other combining a therapy-planning MR image with a CT image. With an open head holder, we measured the skin dose and the attenuation dose. We also made a planning CT couch plate from acrylic plate and styrofoam and compared artifacts. Results: We obtained a more accurate fusion image (within 2 mm error) when the MR head holder was used. With the open head holder, the skin dose was reduced by a factor of two and the attenuation dose by more than 20%. The planning CT couch plate was more convenient than the conventional board and reduced artifacts remarkably. Conclusion: We could verify the localization point in MR images taken with the MR head holder, allowing more accurate image fusion. The same method could be applied to PET and US images if similar immobilization devices are used. The open head holder reduced both the skin dose and the attenuation dose. These devices can substitute for expensive foreign-made devices if manufactured adequately.

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.298-304 / 2009
  • This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the extrinsic calibration matrices of the camera-LRF pair (Φ, Δ) and the camera calibration matrix (K). The LRF disparity map is generated by interpolating the projected LRF points. In stereo reconstruction, invalid points caused by repeated patterns and textureless regions can be compensated using the LRF disparity map; the disparity map resulting from this compensation process is the multi-sensor fusion disparity map. With it, the multi-sensor 3D reconstruction based on stereo vision and the LRF can be refined. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested on synchronized stereo image pairs and LRF 3D scan data.
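
The projection step described above can be sketched directly. The calibration values below are hypothetical stand-ins; in the paper the rotation/translation come from camera-LRF extrinsic calibration (Φ, Δ).

```python
import numpy as np

def project_lrf_points(points, R, t, K):
    """Project (N, 3) LRF points into (N, 2) pixel coordinates."""
    cam = points @ R.T + t           # rigid transform into the camera frame
    uvw = cam @ K.T                  # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

K = np.array([[500.0, 0.0, 320.0],   # hypothetical intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)        # identity extrinsics for the sketch
pts = np.array([[0.0, 0.0, 2.0],
                [0.5, 0.0, 2.0]])
uv = project_lrf_points(pts, R, t, K)
```

Interpolating the resulting scattered pixels yields the LRF disparity map used to fill in invalid stereo matches.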

Super Resolution Fusion Scheme for General- and Face Dataset (범용 데이터 셋과 얼굴 데이터 셋에 대한 초해상도 융합 기법)

  • Mun, Jun Won;Kim, Jae Seok
    • Journal of Korea Multimedia Society / v.22 no.11 / pp.1242-1250 / 2019
  • Super resolution techniques aim to convert a low-resolution image with coarse details into a corresponding high-resolution image with refined details. Over the past decades, performance has greatly improved thanks to progress in deep learning models. However, a universal solution for various objects is still a challenging issue. We observe that super resolution learned on a general dataset performs poorly on faces. In this paper, we propose a super resolution fusion scheme that works well for both general and face datasets, to achieve a more universal solution. In addition, an object-specific feature extractor is employed for better reconstruction performance. In our experiments, we compare our fused image with super-resolved images from one of the state-of-the-art deep learning models trained on the DIV2K and FFHQ datasets. Quantitative and qualitative evaluations show that our fusion scheme works well for both datasets. We expect our fusion scheme to be effective for other objects with poor performance, leading toward universal solutions.

Finger Vein Recognition based on Matching Score-Level Fusion of Gabor Features

  • Lu, Yu;Yoon, Sook;Park, Dong Sun
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.2 / pp.174-182 / 2013
  • Most fusion-based finger vein recognition methods fuse different features or matching scores from more than one trait to improve performance. To overcome the shortcomings of the curse of dimensionality and the additional running time of feature extraction, this paper proposes a finger vein recognition technique based on matching score-level fusion of a single trait. To enhance the quality of the finger vein image, the contrast-limited adaptive histogram equalization (CLAHE) method is utilized to improve the local contrast of the normalized image after ROI detection. Gabor features are then extracted from eight channels using a bank of Gabor filters. Instead of using these features for recognition directly, we analyze the contribution of the Gabor features from each channel and apply a weighted matching score-level fusion rule to obtain the final matching score used for recognition. Experimental results demonstrate that the CLAHE method effectively enhances finger vein image quality and that the proposed matching score-level fusion yields better recognition performance.
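
The weighted score-level fusion rule itself is simple to sketch. The eight per-channel scores and weights below are hypothetical (the paper derives its weights from each Gabor channel's measured contribution), and the CLAHE/Gabor extraction stages are omitted.

```python
import numpy as np

def fuse_matching_scores(scores, weights):
    """Weighted combination of per-channel matching scores into one final score."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                     # normalize channel weights to sum to 1
    return float(np.dot(w, scores))

# eight per-channel matching scores for one probe/gallery comparison
channel_scores = [0.82, 0.75, 0.90, 0.60, 0.70, 0.85, 0.78, 0.88]
channel_weights = [2, 1, 3, 1, 1, 2, 1, 3]   # hypothetical contributions
final = fuse_matching_scores(channel_scores, channel_weights)
```

Because only one trait and one feature type are involved, the fusion adds a single dot product at match time rather than extra feature-extraction passes.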