• Title/Summary/Keyword: Fusion Image

A Survey of Objective Measurement of Fatigue Caused by Visual Stimuli (시각자극에 의한 피로도의 객관적 측정을 위한 연구 조사)

  • Kim, Young-Joo;Lee, Eui-Chul;Whang, Min-Cheol;Park, Kang-Ryoung
    • Journal of the Ergonomics Society of Korea
    • /
    • v.30 no.1
    • /
    • pp.195-202
    • /
    • 2011
  • Objective: The aim of this study is to investigate and review previous research on the objective measurement of fatigue caused by visual stimuli. We also analyze the feasibility of alternative visual fatigue measurement methods based on facial expression and gesture recognition. Background: In most previous studies, visual fatigue is measured by subjective methods based on surveys or interviews. However, such subjective evaluations can be affected by variation in individual feelings or by other kinds of stimuli. To address these problems, visual fatigue measurement methods based on signal and image processing have been widely researched. Method: To analyze the signal and image processing based methods, we categorized previous work into three groups: bio-signal, brainwave, and eye image based methods. The possibility of adopting facial expression or gesture recognition to measure visual fatigue is also analyzed. Results: Bio-signal and brainwave based methods are problematic because they can be degraded not only by visual stimuli but also by other kinds of external stimuli received through other sense organs. In eye image based methods, relying on a single feature such as blink frequency or pupil size is also problematic because that feature can easily be affected by other kinds of emotions. Conclusion: A multi-modal measurement method is required that fuses several features extracted from bio-signals and images. An alternative method using facial expression or gesture recognition can also be considered. Application: An objective visual fatigue measurement method can be applied to the quantitative and comparative measurement of the visual fatigue induced by next generation display devices in terms of human factors.
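A minimal sketch of the multi-modal fusion idea the survey concludes with: several single-modal indicators (blink rate, pupil-size variation, an EEG band-power ratio) are normalized and combined into one fatigue score. The feature names and weights are illustrative assumptions, not values from the surveyed papers.

```python
import numpy as np

def fuse_fatigue_features(blink_rate, pupil_diameter_std, eeg_theta_alpha_ratio,
                          weights=(0.4, 0.3, 0.3)):
    """Combine normalized single-modal indicators into one fatigue score.

    All inputs are 1-D arrays sampled over the same time windows; the weights
    are illustrative, not taken from the surveyed papers.
    """
    features = np.vstack([blink_rate, pupil_diameter_std, eeg_theta_alpha_ratio])
    # z-score each feature so no single modality dominates the fused score
    z = (features - features.mean(axis=1, keepdims=True)) / (
        features.std(axis=1, keepdims=True) + 1e-8)
    return np.average(z, axis=0, weights=weights)

# Example: 10 one-minute windows of synthetic measurements
rng = np.random.default_rng(0)
score = fuse_fatigue_features(rng.normal(15, 3, 10),     # blinks per minute
                              rng.normal(0.2, 0.05, 10), # pupil diameter variation (mm)
                              rng.normal(1.1, 0.2, 10))  # theta/alpha band-power ratio
print(score)
```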

A Multi-view Super-Resolution Method with Joint-optimization of Image Fusion and Blind Deblurring

  • Fan, Jun;Wu, Yue;Zeng, Xiangrong;Huangpeng, Qizi;Liu, Yan;Long, Xin;Zhou, Jinglun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.5
    • /
    • pp.2366-2395
    • /
    • 2018
  • Multi-view super-resolution (MVSR) refers to the process of reconstructing a high-resolution (HR) image from a set of low-resolution (LR) images captured from different viewpoints, typically by different cameras. These multi-view images are usually obtained by a camera array. In our previous work [1], we super-resolved multi-view LR images via image fusion (IF) and blind deblurring (BD). In this paper, we present a new MVSR method that jointly realizes IF and BD through the optimization of an integrated energy function. First, we reformulate the MVSR problem as a multi-channel blind deblurring (MCBD) problem, which is easier to solve than the original formulation. Then the depth map of the desired HR image is calculated. Finally, we solve the MCBD problem, in which the optimization sub-problems with respect to the desired HR image and the unknown blur are efficiently addressed by the alternating direction method of multipliers (ADMM). Experiments on the Multi-view Image Database of the University of Tsukuba and on images captured by our own camera array system demonstrate the effectiveness of the proposed method.
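To illustrate the alternating structure of blind deblurring, here is a toy single-channel sketch that alternates closed-form Fourier-domain updates of the image and the blur kernel under quadratic regularizers. It is a simplified stand-in, not the paper's multi-channel ADMM formulation; the regularization weights and kernel size are arbitrary assumptions.

```python
import numpy as np

def alternating_blind_deblur(y, kernel_size=15, n_iter=30, lam=1e-2, gamma=1e-2):
    """Toy alternating minimization of ||k*x - y||^2 + lam*||grad x||^2 + gamma*||k||^2."""
    H, W = y.shape
    Y = np.fft.fft2(y)
    # frequency response of finite-difference operators (image smoothness prior)
    dh = np.zeros((H, W)); dh[0, 0], dh[0, -1] = 1, -1
    dv = np.zeros((H, W)); dv[0, 0], dv[-1, 0] = 1, -1
    D2 = np.abs(np.fft.fft2(dh)) ** 2 + np.abs(np.fft.fft2(dv)) ** 2

    # initialize the kernel as a small centered box blur
    k = np.zeros((H, W))
    c = kernel_size // 2
    k[:kernel_size, :kernel_size] = 1.0 / kernel_size ** 2
    k = np.roll(k, (-c, -c), axis=(0, 1))

    for _ in range(n_iter):
        K = np.fft.fft2(k)
        # x-update: Wiener-like solve with gradient (Tikhonov) regularization
        X = np.conj(K) * Y / (np.abs(K) ** 2 + lam * D2)
        # k-update: least-squares solve given the current image estimate
        K = np.conj(X) * Y / (np.abs(X) ** 2 + gamma)
        k = np.real(np.fft.ifft2(K))
        k = np.clip(k, 0, None)
        k /= k.sum() + 1e-12  # keep the kernel non-negative and normalized
    return np.real(np.fft.ifft2(X)), k
```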

Cloud Detection and Restoration of Landsat-8 using STARFM (재난 모니터링을 위한 Landsat 8호 영상의 구름 탐지 및 복원 연구)

  • Lee, Mi Hee;Cheon, Eun Ji;Eo, Yang Dam
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.5_2
    • /
    • pp.861-871
    • /
    • 2019
  • Landsat satellite images have been increasingly used for disaster damage analysis and disaster monitoring because they allow periodic, wide-area observation of damaged regions. However, periodic disaster monitoring is limited by areas of missing data caused by clouds, an inherent characteristic of optical satellite images, so a method for restoring the missing areas is needed. This study detected and removed clouds and cloud shadows using the quality assessment (QA) band provided with Landsat-8 images, and restored the removed areas through the spatial and temporal adaptive reflectance fusion model (STARFM) algorithm. The image restored by the proposed method was compared with the image restored by a conventional restoration method through the MLC method. As a result, the STARFM-based restoration showed an overall accuracy of 89.40%, confirming that it is more efficient than the conventional image restoration method. The results of this study are therefore expected to increase the utility of disaster analysis using Landsat satellite images.
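A small sketch of the QA-band masking step that precedes the STARFM restoration: cloud and cloud-shadow flags are read from individual bits of the QA raster and the flagged pixels are blanked out. The bit positions below follow the Landsat 8 Collection 2 QA_PIXEL convention and are assumptions; the paper's QA layout may differ.

```python
import numpy as np

# Assumed bit layout (Landsat 8 Collection 2 QA_PIXEL): bit 3 = cloud, bit 4 = cloud shadow
CLOUD_BIT = 3
SHADOW_BIT = 4

def cloud_shadow_mask(qa_band):
    """Return a boolean mask that is True where cloud or cloud shadow is flagged."""
    cloud = (qa_band >> CLOUD_BIT) & 1
    shadow = (qa_band >> SHADOW_BIT) & 1
    return (cloud | shadow).astype(bool)

def remove_masked_pixels(image, qa_band, fill_value=np.nan):
    """Set cloud/shadow pixels to a fill value before STARFM-style restoration."""
    out = image.astype(float).copy()
    out[cloud_shadow_mask(qa_band)] = fill_value
    return out
```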

Application of Satellite Data Spatiotemporal Fusion in Predicting Seasonal NDVI (위성영상 시공간 융합기법의 계절별 NDVI 예측에서의 응용)

  • Jin, Yihua;Zhu, Jingrong;Sung, Sunyong;Lee, Dong Kun
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.2
    • /
    • pp.149-158
    • /
    • 2017
  • Fine temporal and spatial resolution image data are necessary to monitor the phenology of vegetation. However, no single sensor provides both fine temporal and fine spatial resolution. To overcome this limitation, spatiotemporal data fusion methods have been studied; among them, FSDAF (Flexible Spatiotemporal Data Fusion) can fuse each band with high accuracy. In this study, we applied MODIS NDVI and Landsat NDVI to enhance the temporal resolution of NDVI based on the FSDAF algorithm, and examined its potential for monitoring vegetation phenology. As a result of the FSDAF method, the NDVI predicted from January to December reflected well the seasonal characteristics of broadleaf forest, evergreen forest, and farmland. The RMSE values between the predicted NDVI and the actual NDVI (Landsat NDVI) for August and October were 0.049 and 0.085, and the correlation coefficients were 0.765 and 0.642, respectively. Spatiotemporal data fusion is a pixel-based fusion technique that can be applied to images of various spatial resolutions and is expected to be applied to a variety of vegetation-related studies.
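The evaluation reported above (RMSE and correlation between predicted and actual Landsat NDVI) is straightforward to reproduce; a minimal sketch, assuming both rasters are already co-registered 2-D arrays with NaN for invalid pixels:

```python
import numpy as np

def rmse_and_correlation(predicted_ndvi, landsat_ndvi):
    """RMSE and Pearson correlation between predicted and reference NDVI rasters."""
    p, r = predicted_ndvi.ravel(), landsat_ndvi.ravel()
    valid = ~np.isnan(p) & ~np.isnan(r)   # exclude cloud/fill pixels pairwise
    p, r = p[valid], r[valid]
    rmse = np.sqrt(np.mean((p - r) ** 2))
    corr = np.corrcoef(p, r)[0, 1]
    return rmse, corr
```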

3D Fusion Imaging based on Spectral Computed Tomography Using K-edge Images (K-각 영상을 이용한 스펙트럼 전산화단층촬영 기반 3차원 융합진단영상화에 관한 연구)

  • Kim, Burnyoung;Lee, Seungwan;Yim, Dobin
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.4
    • /
    • pp.523-530
    • /
    • 2019
  • The purpose of this study was to obtain K-edge images using a spectral CT system based on a photon-counting detector and to implement 3D fusion imaging from the conventional and spectral CT images. We also evaluated the clinical feasibility of the 3D fusion images through quantitative analysis of image quality. A spectral CT system based on a CdTe photon-counting detector was used to obtain the K-edge images. A pork phantom was manufactured with six tubes containing diluted iodine and gadolinium solutions. The K-edge images were obtained with low-energy thresholds of 35 and 52 keV for iodine and gadolinium imaging, using an X-ray spectrum generated at a tube voltage of 100 kVp and a tube current of 500 μA. We implemented 3D fusion imaging by combining the iodine and gadolinium K-edge images with the conventional CT images. The results showed that the CNRs of the 3D fusion images were 6.76-14.9 times higher than those of the conventional CT images. The 3D fusion images were also able to provide maps of the target materials. Therefore, the technique proposed in this study can improve the quality of CT images and the diagnostic efficiency through the additional information about target materials.
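For reference, a minimal sketch of the CNR figure of merit used to compare the fusion and conventional CT images. The definition below (absolute mean difference over background standard deviation) is a common convention and may differ in detail from the one used in the paper.

```python
import numpy as np

def cnr(image, target_roi, background_roi):
    """Contrast-to-noise ratio between a target ROI and a background ROI.

    `target_roi` and `background_roi` are boolean masks over `image`.
    """
    target = image[target_roi]
    background = image[background_roi]
    return np.abs(target.mean() - background.mean()) / (background.std() + 1e-12)
```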

Similarity Measurement using Gabor Energy Feature and Mutual Information for Image Registration

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.6
    • /
    • pp.693-701
    • /
    • 2011
  • Image registration is an essential process in analyzing time series of satellite images for image fusion and change detection. Mutual Information (MI) is commonly used as the similarity measure for image registration because of its robustness to noise. Due to radiometric differences, however, it is not easy to apply MI directly to the pixel intensities of multi-temporal satellite images. Richer image features for MI can be obtained by employing a Gabor filter whose characteristics, such as filter size, frequency, and orientation, vary adaptively for each pixel. In this paper we employ the Bidirectional Gabor Filter Energy (BGFE), defined from Gabor filter features, and apply the BGFE as the image feature for the MI-based similarity measure. The experimental results show that the proposed method is more robust than the conventional MI method combined with intensity or gradient magnitude.
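A minimal sketch of the MI similarity measure itself, estimated from a normalized joint histogram; in the paper this measure is evaluated on the BGFE feature maps rather than raw intensities, and the bin count here is an arbitrary assumption.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information between two images (or feature maps) of equal shape."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability estimate
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```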

Automatic Image Matching of Portal and Simulator Images Using Fourier Descriptors (후리에 표시자를 이용한 포탈영상과 시뮬레이터 영상의 자동결합)

  • 허수진
    • Journal of Biomedical Engineering Research
    • /
    • v.18 no.1
    • /
    • pp.9-16
    • /
    • 1997
  • We develop an automatic image matching technique for combining portal and simulator images to improve the localization of treatment in radiation therapy. Fusion of images from the two imaging modalities is carried out as follows. We acquire images through a frame-grabber. The simulator and portal images are edge-detected and enhanced with interpolated adaptive histogram equalization, and then combined using geometrical parameters relating the coordinates of the two image data sets, which are calculated using Fourier descriptors. No imaging markers are used, for the patient's convenience. Clinical use of this image matching technique for treatment planning will improve the localization of treatment volumes and critical structures. These improvements will allow greater sparing of normal tissues and more precise delivery of energy to the desired irradiation volume.
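A short sketch of how Fourier descriptors can be computed from an edge-detected boundary, which is the kind of shape feature the matching relies on. The normalization steps shown are standard invariance tricks, not necessarily the exact recipe of the paper, and the coefficient count is an assumption.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    """Translation/scale/rotation-invariant Fourier descriptors of a closed contour.

    `contour` is an (N, 2) array of (x, y) boundary points, e.g. from edge
    detection of the portal or simulator image.
    """
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary representation
    Z = np.fft.fft(z)
    Z[0] = 0                                 # drop DC term -> translation invariance
    mags = np.abs(Z)                         # drop phase -> rotation/start-point invariance
    mags /= (mags[1] + 1e-12)                # normalize by first harmonic -> scale invariance
    return mags[1:n_coeffs + 1]
```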

Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion (2차원 동영상의 3차원 변환을 위한 깊이 단서의 신뢰성 기반 적응적 깊이 융합)

  • Han, Chan-Hee;Choi, Hae-Chul;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.12
    • /
    • pp.1-13
    • /
    • 2012
  • 3D video is regarded as the next generation content format for numerous applications. 2D-to-3D video conversion technologies are strongly required to resolve the lack of 3D videos during the transition to a fully mature 3D video era. In 2D-to-3D conversion, after the depth image of each scene in the 2D video is estimated, stereoscopic video is synthesized using DIBR (Depth Image Based Rendering) technologies. This paper proposes a novel depth fusion algorithm that integrates multiple depth cues contained in 2D video to generate stereoscopic video. For proper depth fusion, the reliability of each cue in the current scene is first checked. Based on the results of the reliability tests, the current scene is classified into one of four scene types, and scene-adaptive depth fusion is applied to combine the reliable depth cues into the final depth information. Simulation results show that each depth cue is reasonably utilized according to the scene type and that the final depth is generated from cues that effectively represent the current scene.
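A minimal sketch of reliability-weighted depth fusion, assuming each cue already yields a normalized depth map and a scalar reliability for the current scene; unreliable cues simply receive weight 0. This is an illustration of the fusion step, not the paper's scene-type classification.

```python
import numpy as np

def fuse_depth_cues(depth_maps, reliabilities):
    """Fuse per-cue depth maps with weights from their scene-level reliabilities.

    `depth_maps` is a list of 2-D arrays (one per cue, normalized to [0, 1]);
    `reliabilities` holds one non-negative scalar per cue.
    """
    w = np.asarray(reliabilities, dtype=float)
    if w.sum() <= 0:
        raise ValueError("at least one depth cue must be reliable")
    w /= w.sum()
    # weighted sum over the cue axis -> a single (H, W) depth map
    return np.tensordot(w, np.stack(depth_maps), axes=1)
```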

A Study on Fusion Image Reflected Upon Modern Fashions

  • Shin, Jae-Yong;Kwak, Tai-Gi
    • Proceedings of the Korea Society of Costume Conference
    • /
    • 2003.10a
    • /
    • pp.53-53
    • /
    • 2003
  • This new phenomenon, together with the new term 'fusion', influences our way of life in all areas of society, including clothing and food. Deconstructionism, which aims at the diversification of the senses, is a characteristic of this era, along with post-structuralism, which is itself based on deconstructionism. In particular, the arts in modern society, often called an information-oriented society, lose the totality of a text and become fragmentary; as a consequence, traditional values are broken down and a new code such as fusion emerges, one that is incomprehensible from a monolinear perspective.

Automatic Textile-Image Classification System using Human Emotion (감성 기반의 자동 텍스타일 영상 분류 시스템)

  • Kim, Young-Rae;Shin, Yun-Hee;Kim, Eun-Yi
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06c
    • /
    • pp.561-564
    • /
    • 2008
  • In this paper, we propose a system that can automatically classify textile images based on emotion. The emotion groups used are Kobayashi's ten emotion keywords - {romantic, clear, natural, casual, elegant, chic, dynamic, classic, dandy, modern}. The proposed system consists of feature extraction and classification. In the feature extraction stage, a quantization technique is used to extract the representative colors composing the textile, and statistical information obtained after a wavelet transform is used to represent pattern information. A neural network-based classifier takes the extracted features as input and classifies the input textile image. To demonstrate the effectiveness of the proposed emotion recognition method, experiments were performed on 220 textile images, and the proposed method showed an accuracy of 99%. These experimental results show that the proposed method can be generalized to various textile images.
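A rough sketch of the two feature groups described above (representative colors via quantization plus wavelet statistics) feeding a neural classifier. The use of scikit-learn's KMeans/MLPClassifier and PyWavelets, along with all parameter values and the `train_images`/`train_labels` placeholders, are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def textile_features(image_rgb, n_colors=5, wavelet="haar", level=2):
    """Color + pattern feature vector for one textile image.

    Representative colors come from k-means quantization; pattern statistics
    are the means and standard deviations of the wavelet detail sub-bands.
    """
    pixels = image_rgb.reshape(-1, 3).astype(float)
    centers = KMeans(n_clusters=n_colors, n_init=5).fit(pixels).cluster_centers_
    gray = image_rgb.astype(float).mean(axis=2)
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    stats = [s for (cH, cV, cD) in coeffs[1:] for band in (cH, cV, cD)
             for s in (band.mean(), band.std())]
    return np.concatenate([centers.ravel(), stats])

# Hypothetical training call on labeled textile images:
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
# clf.fit([textile_features(img) for img in train_images], train_labels)
```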
