• Title/Summary/Keyword: 영상 융합 (image fusion)


Video Sequence Segmentation using Distributed Genetic Algorithms (분산 유전자 알고리즘을 이용한 동영상 분할)

  • 황상원;김은이;김항준
    • Proceedings of the Korea Institute of Convergence Signal Processing / 2000.08a / pp.317-320 / 2000
  • Video sequence segmentation is an important step in computer vision and has been studied extensively, but it is limited by its computational complexity. To address this, this paper proposes a new video segmentation method based on distributed genetic algorithms that improves computational efficiency. In general, two consecutive frames of a video are highly correlated, so the segmentation of each frame is obtained successively from the segmentation of the previous frame, and to eliminate redundant computation only the chromosomes corresponding to moving objects are evolved (a rough sketch of this idea follows this entry). Experimental results demonstrate the efficiency of the proposed method.

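The entry above describes seeding each frame's segmentation from the previous frame and evolving only the chromosomes that correspond to moving regions. A minimal Python sketch of that idea is given below; the fitness function, mutation rate, and selection scheme are illustrative assumptions rather than the paper's actual operators, and the distributed (multi-population) aspect is omitted.

```python
import numpy as np

def moving_mask(prev_frame, cur_frame, thresh=15):
    """Pixels whose intensity changed noticeably are treated as moving."""
    return np.abs(cur_frame.astype(int) - prev_frame.astype(int)) > thresh

def fitness(labels, frame, n_labels):
    """Toy fitness: negative intra-segment intensity variance (higher is better)."""
    cost = 0.0
    for k in range(n_labels):
        vals = frame[labels == k]
        if vals.size:
            cost += vals.var() * vals.size
    return -cost

def evolve_frame(prev_labels, prev_frame, cur_frame, n_labels=4,
                 pop_size=8, generations=20, seed=0):
    """Evolve a population seeded from the previous frame's segmentation,
    mutating labels only inside the moving-object mask."""
    rng = np.random.default_rng(seed)
    mask = moving_mask(prev_frame, cur_frame)
    population = [prev_labels.copy() for _ in range(pop_size)]
    for _ in range(generations):
        for individual in population:
            flip = mask & (rng.random(mask.shape) < 0.05)  # mutate moving pixels only
            individual[flip] = rng.integers(0, n_labels, size=int(flip.sum()))
        population.sort(key=lambda ind: fitness(ind, cur_frame, n_labels), reverse=True)
        survivors = population[: pop_size // 2]            # crude elitist selection
        population = survivors + [ind.copy() for ind in survivors]
    return population[0]
```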

The Development and Application of Elementary Convergence Teaching and Learning Strategy Using the Science Visual Media (과학 영상매체를 활용한 초등 융합형 교수학습 전략 개발 및 적용)

  • Kwon, Nanjoo
    • Journal of Science Education / v.38 no.1 / pp.29-40 / 2014
  • The new paradigm of 21st-century science education explores a wide range of possibilities for fostering students' interest in science and creative convergence thinking. The purpose of this study was to utilize science visual media that can improve students' scientific creativity and artistic sensibility. Curriculum reorganization can be one solution for primary convergence science teaching and learning strategies using science visual media. By experimenting with various kinds of science visual media, such as science pictures, TV films, movies, and UCC, we hope this teaching and learning strategy can increase students' scientific interest and improve their attitudes toward science.


An Adaptive Guided Filter for Performance Improvement of Aviation Image Fusion (항공 영상 융합의 성능 향상을 위한 적응 가이디드 필터)

  • Kim, Sun Young;Kang, Chang Ho;Park, Chan Gook
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.44 no.5 / pp.407-415 / 2016
  • In this paper, an aviation image fusion method is proposed for creating an informative fused image from noisy gray-scale images. The proposed method is based on an adaptive guided filter that adjusts the regularization parameter of the filter according to the peak signal-to-noise ratio (PSNR) so that its edge-preserving filtering property is maintained. Simulation results demonstrate that the proposed method preserves the edge information of the input images and reduces the effect of noise while maintaining the designed PSNR.
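A hedged Python sketch of the general idea is given below: a standard single-channel guided filter whose regularization parameter eps is scaled from a PSNR estimate, so that noisier inputs are smoothed more strongly. The mapping from PSNR to eps (adaptive_eps) and the use of a reference image for the PSNR estimate are assumptions for illustration, not the paper's actual rule.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Standard single-channel guided filter; guide and src are floats in [0, 1]."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ii = uniform_filter(guide * guide, size)
    corr_ip = uniform_filter(guide * src, size)
    var_i = corr_ii - mean_i * mean_i
    cov_ip = corr_ip - mean_i * mean_p
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def adaptive_eps(noisy, reference, eps_base=1e-2):
    """Illustrative adaptation rule (assumed, not the paper's): lower PSNR -> larger eps.
    `reference` is whatever clean or pre-denoised image the PSNR is measured against."""
    mse = float(np.mean((noisy - reference) ** 2)) + 1e-12
    psnr = 10.0 * np.log10(1.0 / mse)                 # images assumed in [0, 1]
    return eps_base * max(1.0, 40.0 / max(psnr, 1.0))

# usage: smoothed = guided_filter(noisy, noisy, radius=4, eps=adaptive_eps(noisy, ref))
```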

Data Augmentation Scheme for Semi-Supervised Video Object Segmentation (준지도 비디오 객체 분할 기술을 위한 데이터 증강 기법)

  • Kim, Hojin;Kim, Dongheyon;Kim, Jeonghoon;Im, Sunghoon
    • Journal of Broadcast Engineering / v.27 no.1 / pp.13-19 / 2022
  • The Video Object Segmentation (VOS) task requires a large amount of labeled sequence data, which limits the performance of current VOS methods trained on public datasets. In this paper, we propose two effective data augmentation schemes for VOS. The first is to swap the background segment with the background from another image, and the second is to play the sequence in reverse. The two augmentation schemes enable current VOS methods to predict segmentation labels more robustly and improve VOS performance.
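The two augmentations are simple to state in code. The sketch below assumes frames are H x W x 3 uint8 arrays and masks are H x W binary arrays with 1 marking the annotated object; it illustrates the two schemes and is not the authors' implementation.

```python
import numpy as np

def swap_background(frame, mask, other_frame):
    """Keep the foreground object, replace the background with another image."""
    fg = mask.astype(bool)[..., None]          # broadcast the mask over channels
    return np.where(fg, frame, other_frame)

def reverse_sequence(frames, masks):
    """Play the labeled sequence backwards to create a new training clip."""
    return frames[::-1], masks[::-1]

# usage (lists of per-frame arrays from an annotated clip):
# aug_frames = [swap_background(f, m, bg) for f, m, bg in zip(frames, masks, bg_images)]
# rev_frames, rev_masks = reverse_sequence(frames, masks)
```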

Multimodal Medical Image Fusion Based on Double-Layer Decomposer and Fine Structure Preservation Model (복층 분해기와 상세구조 보존모델에 기반한 다중모드 의료영상 융합)

  • Zhang, Yingmei;Lee, Hyo Jong
    • KIPS Transactions on Computer and Communication Systems / v.11 no.6 / pp.185-192 / 2022
  • Multimodal medical image fusion (MMIF) fuses two images containing different structural details, generated by two different modalities, into a single comprehensive image with rich information, which can help doctors improve the accuracy of observation and treatment of patients' diseases. To this end, a method based on a double-layer decomposer and a fine structure preservation model is proposed. Firstly, the double-layer decomposer decomposes the source images into energy layers and structure layers, which preserves details well. Secondly, the structure layers are fused by combining the structure tensor operator (STO) with a max-abs rule. For the energy layers, a fine structure preservation model is proposed to guide the fusion, further improving image quality. Finally, the fused image is obtained by adding the two sub-fused images formed by these fusion rules. Experiments demonstrate that the method performs well compared with several typical fusion methods.
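A much-simplified Python sketch of this pipeline follows. A Gaussian blur stands in for the paper's double-layer decomposer and plain averaging stands in for the fine structure preservation model; only the max-abs rule on the structure layers follows the abstract, so treat every component here as an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(img, sigma=3.0):
    """Split an image into a smooth 'energy' layer and a detail 'structure' layer."""
    energy = gaussian_filter(img, sigma)
    structure = img - energy
    return energy, structure

def fuse(img_a, img_b):
    """Fuse two registered single-channel float images of the same size."""
    e_a, s_a = decompose(img_a)
    e_b, s_b = decompose(img_b)
    s_fused = np.where(np.abs(s_a) >= np.abs(s_b), s_a, s_b)   # max-abs rule
    e_fused = 0.5 * (e_a + e_b)                                 # placeholder rule
    return e_fused + s_fused
```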

Current Status of Imaging Physics & Instrumentation In Nuclear Medicine (핵의학 영상 물리 및 기기의 최신 동향)

  • Kim, Hee-Joung
    • Nuclear Medicine and Molecular Imaging / v.42 no.2 / pp.83-87 / 2008
  • Diagnostic and functional imaging devices have historically been developed independently. It is now recognized that combining these two kinds of devices can provide better diagnostic outcomes by fusing anatomical and functional images; representative examples of such combined devices are PET/CT and SPECT/CT. The development and application of animal imaging and instrumentation have also been very active, as new drug development using advanced imaging devices has increased. The development of advanced imaging devices has driven research and development in detector technology and imaging systems, and has also contributed to the development of new software, reconstruction algorithms, correction methods for physical factors, image quantitation, computer simulation, kinetic modeling, dosimetry, and correction of motion artifacts. Recently, the development of combined MRI and PET systems has been reported, and true integration of MRI and PET has been making progress, with results reported. The recent status of imaging physics and instrumentation in nuclear medicine is reviewed in this paper.

Robust Image Fusion Using Stationary Wavelet Transform (정상 웨이블렛 변환을 이용한 로버스트 영상 융합)

  • Kim, Hee-Hoon;Kang, Seung-Hyo;Park, Jea-Hyun;Ha, Hyun-Ho;Lim, Jin-Soo;Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics / v.24 no.6 / pp.1181-1196 / 2011
  • Image fusion is the process of combining information from two or more source images of a scene into a single composite image, with applications in many fields such as remote sensing, computer vision, robotics, medical imaging, and defense. The most common wavelet-based fusion is discrete wavelet transform fusion, in which the high-frequency sub-bands and low-frequency sub-bands are combined using activity measures of local windows, such as the standard deviation and the mean, respectively. However, the discrete wavelet transform is not translation-invariant, and it often yields block artifacts in the fused image. In this paper, we propose a robust image fusion method based on the stationary wavelet transform to overcome this drawback. We use the interquartile range as a robust activity measure (a robust estimator of variance) in the high-frequency sub-bands and combine the low-frequency sub-band based on the interquartile range information present in the high-frequency sub-bands. We evaluate the proposed method quantitatively and qualitatively and compare it to several existing fusion methods. Experimental results indicate that the proposed method is more effective and provides satisfactory fusion results.
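A hedged sketch of this scheme using PyWavelets' stationary wavelet transform is shown below. The local-window IQR is computed with percentile filters; the window size, the wavelet, and the rule that weights the low-frequency sub-bands by each source's accumulated high-frequency IQR are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np
import pywt
from scipy.ndimage import percentile_filter

def local_iqr(x, size=7):
    """Interquartile range over a sliding local window."""
    return percentile_filter(x, 75, size=size) - percentile_filter(x, 25, size=size)

def swt_fuse(img_a, img_b, wavelet="db2", level=2, size=7):
    """Fuse two registered float images whose sides are divisible by 2**level."""
    ca = pywt.swt2(img_a, wavelet, level=level)
    cb = pywt.swt2(img_b, wavelet, level=level)
    fused, act_a, act_b = [], 0.0, 0.0
    for (aA, (aH, aV, aD)), (bA, (bH, bV, bD)) in zip(ca, cb):
        details = []
        for da, db in zip((aH, aV, aD), (bH, bV, bD)):
            ia, ib = local_iqr(da, size), local_iqr(db, size)
            details.append(np.where(ia >= ib, da, db))   # keep the more "active" coefficients
            act_a, act_b = act_a + ia.mean(), act_b + ib.mean()
        w = act_a / (act_a + act_b + 1e-12)              # assumed low-frequency weighting
        fused.append((w * aA + (1 - w) * bA, tuple(details)))
    return pywt.iswt2(fused, wavelet)
```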

Image Fusion Framework for Enhancing Spatial Resolution of Satellite Image using Structure-Texture Decomposition (구조-텍스처 분할을 이용한 위성영상 융합 프레임워크)

  • Yoo, Daehoon
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.21-29 / 2019
  • This paper proposes a novel framework for fusing satellite imagery to enhance its spatial resolution via structure-texture decomposition. The resolution of satellite imagery depends on the sensor: for example, panchromatic images have high spatial resolution but only a single gray band, whereas multi-spectral images have low spatial resolution but multiple bands. To enhance the spatial resolution of low-resolution images, such as multi-spectral or infrared images, the proposed framework combines the structures from the low-resolution image with the textures from the high-resolution image. To improve the spatial quality of structural edges, the structure image derived from the low-resolution image is guided-filtered with the structure image from the high-resolution image as the guidance image. The combination step is performed by pixel-wise addition of the filtered structure image and the texture image. Quantitative and qualitative evaluations demonstrate that the proposed method preserves the spectral and spatial fidelity of the input images.
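A rough Python sketch of the framework is given below. Gaussian smoothing stands in for the structure-texture decomposition, and the guided filter from opencv-contrib-python (cv2.ximgproc.guidedFilter) is assumed to be available; both substitutions are assumptions rather than the paper's exact components.

```python
import cv2
import numpy as np

def structure_texture(img, sigma=5):
    """Rough decomposition: smoothed structure plus residual texture."""
    structure = cv2.GaussianBlur(img, (0, 0), sigma)
    texture = img - structure
    return structure, texture

def fuse(low_res, high_res, radius=8, eps=1e-3):
    """low_res: upsampled multi-spectral band; high_res: panchromatic image.
    Both are float32 arrays in [0, 1] of the same size."""
    s_low, _ = structure_texture(low_res)
    s_high, t_high = structure_texture(high_res)
    # sharpen the structural edges of the low-res image using the
    # high-res structure as guidance, then add back the high-res texture
    s_filtered = cv2.ximgproc.guidedFilter(s_high, s_low, radius, eps)
    return s_filtered + t_high
```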

Development of Multi-Organ Segmentation Model for Support Abdominal Disease Diagnosis (복부질환 진단 지원을 위한 다중 장기 분할 모델 개발)

  • Si-Hyeong Noh;Dong-Wook Lim;Chungsub Lee;Tae-Hoon Kim;Chul Park;Chang-Won Jeong
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.546-548 / 2023
  • In the medical field, research on diagnosis and prediction using artificial intelligence is being actively conducted. In particular, diagnostic research based on medical images, to which AI techniques are most widely applied, is having a strong impact on the diagnosis of diseases that require very complex procedures. Segmentation of abdominal organs plays a crucial role in supporting the diagnosis of a patient's disease and in assisting surgery such as laparoscopy. In this paper, we build a model that segments 13 abdominal organs from medical images and present its results. We expect that the segmentation of these 13 abdominal organs by the proposed model will enable diagnostic support through image analysis.

Multimodality Image Registration and Fusion using Feature Extraction (특징 추출을 이용한 다중 영상 정합 및 융합 연구)

  • Woo, Sang-Keun;Kim, Jee-Hyun
    • Journal of the Korea Society of Computer and Information / v.12 no.2 s.46 / pp.123-130 / 2007
  • The aim of this study was to propose a fusion and registration method for heterogeneous small-animal acquisition systems in small-animal in-vivo studies. After an intravenous injection of $^{18}F$-FDG through the tail vein and a 60-minute uptake delay, the mouse was placed on an acrylic plate with fiducial markers made for fusion between small-animal PET (microPET R4, Concorde Microsystems, Knoxville TN) and Discovery LS CT images. The acquired emission list-mode data were sorted into temporally framed sinograms and reconstructed using FORE rebinning and 2D-OSEM algorithms without attenuation or scatter correction. After PET imaging, CT images were acquired with a clinical PET/CT in high-resolution mode. The microPET and CT images were fused and co-registered using the fiducial markers and the segmented lung regions in both data sets to perform a point-based rigid co-registration (a rough sketch of this step follows this entry). This method improves the quantitative accuracy and interpretation of the tracer.

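The point-based rigid co-registration mentioned above can be illustrated with the standard Kabsch / Procrustes solution for matched fiducial-marker coordinates; the sketch below assumes the markers have already been localized in both modalities and ignores scaling.

```python
import numpy as np

def rigid_transform(src_pts, dst_pts):
    """Least-squares rotation R and translation t with R @ src + t ~= dst.
    src_pts, dst_pts: (N, 3) arrays of matching fiducial positions."""
    src_c = src_pts.mean(axis=0)
    dst_c = dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# usage: map microPET marker coordinates into CT space
# R, t = rigid_transform(pet_markers, ct_markers)
# pet_in_ct = (R @ pet_markers.T).T + t
```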