• Title/Summary/Keyword: Fusion Contrast

Search Results: 141

Change of Fixation Disparity and Accommodation when the Fusion Contrast Varied (융합대비에 따른 주시시차와 조절의 변화)

  • Seo, Jae-Myoung
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.14 no.4
    • /
    • pp.77-81
    • /
    • 2009
  • Purpose: To study the change in fixation disparity and accommodation as fusion contrast deteriorates. Methods: Sixteen subjects with visual acuity of 20/20 or better and normal stereopsis took part. Monocular and binocular refraction were measured with the Zeiss Polatest Classic, while the threshold angle for stereopsis was measured with the TNO test. A computer program presenting random-dot stereograms and a vernier test precisely controlled the fusion contrast and exposure time. Results: Fixation disparity was affected by the reduction of fusion contrast and tended toward exophoria (p=0.0004); the effect was considerably stronger when uncrossed disparity was shown to exophoric subjects. Accommodation was not affected by changes in fusion contrast (p=0.803), whereas vernier acuity was (p=0.0000). Conclusions: An exophoric trend arose as fusion contrast was reduced, but there was no accommodative change.

A Novel Multifocus Image Fusion Algorithm Based on Nonsubsampled Contourlet Transform

  • Liu, Cuiyin;Cheng, Peng;Chen, Shu-Qing;Wang, Cuiwei;Xiang, Fenghong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.3
    • /
    • pp.539-557
    • /
    • 2013
  • A novel multifocus image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed in this paper. To preserve the focusing properties and visual information of the source images in the fused result while remaining sensitive to human visual perception, a local multidirection variance (LEOV) fusion rule is proposed for the lowpass subband coefficients. To introduce more visual saliency, a modified local contrast is defined. In addition, based on the distribution of the highpass subband coefficients, a direction vector is proposed to constrain the modified local contrast and construct a new fusion rule for highpass subband coefficient selection. The NSCT is a flexible multiscale, multidirection, and shift-invariant tool for image decomposition that can be implemented via the à trous algorithm. The proposed NSCT-based fusion algorithm not only prevents artifacts and errors from being introduced into the fused image, but also eliminates the 'block effect' and 'frequency aliasing' phenomena. Experimental results show that the proposed method achieves better fusion results than wavelet-based and contourlet-based fusion methods in contrast and clarity.
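The lowpass rule above keeps, per region, the source with the highest local activity. A much-simplified sketch of that idea, using plain block variance on the images themselves rather than LEOV on NSCT subband coefficients (the NSCT itself is not implemented here):

```python
import numpy as np

def local_variance(img, block=8):
    """Blockwise variance, upsampled back to image size.
    Assumes the image dimensions are divisible by `block`."""
    h, w = img.shape
    v = img.reshape(h // block, block, w // block, block)
    return np.kron(v.var(axis=(1, 3)), np.ones((block, block)))

def fuse_multifocus(a, b, block=8):
    """Per pixel, keep the source whose neighbourhood is more 'in focus'
    (higher local variance) - a crude stand-in for the paper's subband rule."""
    mask = local_variance(a, block) >= local_variance(b, block)
    return np.where(mask, a, b)
```

With two images that are each sharp in a different half, the fused result keeps the sharp half of each.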

Finger Vein Recognition based on Matching Score-Level Fusion of Gabor Features

  • Lu, Yu;Yoon, Sook;Park, Dong Sun
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38A no.2
    • /
    • pp.174-182
    • /
    • 2013
  • Most fusion-based finger vein recognition methods fuse different features or matching scores from more than one trait to improve performance. To overcome the "curse of dimensionality" and the additional running time of feature extraction, this paper proposes a finger vein recognition technique based on matching score-level fusion of a single trait. To enhance the quality of the finger vein image, the contrast-limited adaptive histogram equalization (CLAHE) method is applied to improve the local contrast of the normalized image after ROI detection. Gabor features are then extracted from eight channels using a bank of Gabor filters. Instead of using these features directly for recognition, we analyze the contribution of the Gabor features from each channel and apply a weighted matching score-level fusion rule to obtain the final matching score used for recognition. Experimental results demonstrate that the CLAHE method effectively enhances finger vein image quality and that the proposed matching score-level fusion yields better recognition performance.
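The weighted score-level fusion step can be sketched as follows (a minimal illustration: the per-channel weights would come from the paper's channel-contribution analysis, and the score values here are invented):

```python
import numpy as np

def weighted_score_fusion(scores, weights):
    """Combine per-channel matching scores (e.g. from 8 Gabor channels)
    into one final score using normalized channel weights."""
    s = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()            # weights sum to 1
    return float(np.dot(w, s))

# Channels that discriminate better get larger weights, so a genuine
# match ends up with a clearly higher fused score than an impostor.
genuine  = weighted_score_fusion([0.90, 0.85, 0.80], [2.0, 1.0, 1.0])
impostor = weighted_score_fusion([0.30, 0.20, 0.25], [2.0, 1.0, 1.0])
```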

Bayesian Fusion of Confidence Measures for Confidence Scoring (베이시안 신뢰도 융합을 이용한 신뢰도 측정)

  • Kim, Tae-Yun;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.5
    • /
    • pp.410-419
    • /
    • 2004
  • In this paper, we propose a method of confidence measure fusion under a Bayesian framework for speech recognition. Centralized and distributed schemes are considered. Centralized fusion is feature-level fusion, which combines the values of the individual confidence scores and makes a final decision. In contrast, distributed fusion is decision-level fusion, which combines the individual decisions made by each confidence measuring method. Optimal Bayesian fusion rules for the centralized and distributed cases are presented. In isolated-word out-of-vocabulary (OOV) rejection experiments, centralized Bayesian fusion shows over 13% relative equal error rate (EER) reduction compared with the individual confidence measure methods, whereas distributed Bayesian fusion shows no significant performance increase.
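The two schemes differ in what they combine: raw scores (centralized) versus local accept/reject decisions (distributed). A toy sketch under an independence assumption, with invented Gaussian score models (the distributed rule shown is the classic Chair-Varshney form, not necessarily the paper's exact derivation):

```python
import math

def centralized_fusion(scores, pdf_in, pdf_out):
    """Feature-level fusion: sum the log-likelihood ratios of the raw
    confidence scores and accept if 'in-vocabulary' is more likely."""
    llr = sum(math.log(pdf_in(s) / pdf_out(s)) for s in scores)
    return llr > 0.0

def distributed_fusion(decisions, p_miss, p_fa):
    """Decision-level fusion: weight each local accept/reject by that
    detector's miss and false-alarm rates (Chair-Varshney rule)."""
    llr = 0.0
    for accept, pm, pf in zip(decisions, p_miss, p_fa):
        llr += math.log((1 - pm) / pf) if accept else math.log(pm / (1 - pf))
    return llr > 0.0
```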

An Image Contrast Enhancement Method based on Pyramid Fusion Using BBWE and MHMD (BBWE와 MHMD를 이용한 피라미드 융합 기반의 영상의 대조 개선 기법)

  • Lee, Dong-Yul;Kim, Jin Heon
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.11
    • /
    • pp.1250-1260
    • /
    • 2013
  • Contrast enhancement techniques based on Laplacian pyramid image fusion can faithfully describe image information because they combine multiple source images by selecting the desired pixels from each. However, the output image may contain noise, because these methods evaluate visual information on a per-pixel basis. In this paper, an improved contrast enhancement method using image fusion that effectively suppresses this noise is proposed. The proposed method combines the source images using Laplacian pyramids generated from weight maps, which are produced by measuring the difference between the block-based local well-exposedness and the local homogeneity of each source image. Tests on various images show that the proposed method produces less noisy results than conventional techniques.
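The core mechanism, blending the Laplacian pyramids of the sources with Gaussian pyramids of per-pixel weight maps, can be sketched as follows (box filtering and nearest-neighbour upsampling stand in for proper Gaussian kernels, and the BBWE/MHMD weight computation itself is omitted):

```python
import numpy as np

def down(img):  # 2x downsample via 2x2 box averaging (dims must be even)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):    # nearest-neighbour 2x upsample
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fuse_pyramid(images, weights, levels=3):
    """Blend Laplacian pyramids of `images` using Gaussian pyramids of
    the per-pixel `weights` (normalized to sum to 1 at every pixel)."""
    total = sum(weights)
    wmaps = [w / (total + 1e-12) for w in weights]
    gauss = list(images)
    blended = []
    for _ in range(levels):
        small = [down(g) for g in gauss]
        laps = [g - up(s) for g, s in zip(gauss, small)]
        blended.append(sum(w * l for w, l in zip(wmaps, laps)))
        gauss, wmaps = small, [down(w) for w in wmaps]
    fused = sum(w * g for w, g in zip(wmaps, gauss))  # blended residual
    for lvl in reversed(blended):                     # collapse the pyramid
        fused = up(fused) + lvl
    return fused
```

Because the weight pyramids are blended at every scale, a hard 0/1 weight map still produces seam-free output.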

Morphometric Analysis of the Ureter with Respect to Lateral Lumbar Interbody Fusion Using Contrast-Enhanced Computed Tomography

  • Chunneng Huang;Zhenyu Bian;Liulong Zhu
    • Journal of Korean Neurosurgical Society
    • /
    • v.66 no.2
    • /
    • pp.155-161
    • /
    • 2023
  • Objective: To analyze the anatomical location of the ureter in relation to lateral lumbar interbody fusion and evaluate the potential risk of ureteral injury. Methods: One hundred eight patients who underwent contrast-enhanced computed tomographic scans were enrolled in this study. The location of the ureter from L2-L3 to L4-L5 was evaluated. The distances between the ureter and the psoas muscle, intervertebral disc, and retroperitoneal vessels were also recorded bilaterally. Results: Over 30% of ureters were close to the working corridor of extreme lumbar interbody fusion at L2-L3. Most ureters were close to the working corridor of oblique lumbar interbody fusion, especially at L4-L5. The distance from the ureter to the great vessels on the left side narrowed significantly from L2-L3 to L4-L5 (28.8±9.5 mm, 22.0±8.0 mm, 15.5±8.4 mm) and was significantly larger than that on the right side (12.3±6.1 mm, 7.4±5.7 mm, 5.4±4.4 mm). Conclusion: Our findings indicate that the location of the ureter varies widely among individuals. To avoid unexpected damage, it is imperative to directly visualize the ureter and verify that it is not in the surgical pathway during lateral lumbar interbody fusion.

Reflectance estimation for infrared and visible image fusion

  • Gu, Yan;Yang, Feng;Zhao, Weijun;Guo, Yiliang;Min, Chaobo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.2749-2763
    • /
    • 2021
  • The desirable result of infrared (IR) and visible (VIS) image fusion should contain the textural details of the VIS image and the salient targets of the IR image. However, detail information in the dark regions of a VIS image has low contrast and blurry edges, degrading fusion performance. To address the problem of fuzzy details in dark regions, we propose a reflectance estimation method for IR and VIS image fusion. To maintain and enhance details in these dark regions, dark region approximation (DRA) is proposed to optimize the Retinex model. With the improved DRA-based Retinex model, a quasi-Newton method is adopted to estimate the reflectance of the VIS image. The final fusion result is obtained by fusing the DRA-based reflectance of the VIS image with the IR image. Our method simultaneously retains the low-visibility details of VIS images and the high-contrast targets of IR images. Experimental statistics show that, compared with several advanced approaches, the proposed method is superior in detail preservation and visual quality.
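As a rough illustration of the Retinex decomposition the method builds on (single-scale and closed-form, with a box blur as the illumination estimate; the paper instead refines a DRA-based model with quasi-Newton optimization, which is not reproduced here, and the IR/VIS combination below is only a toy):

```python
import numpy as np

def box_blur(img, k=5):
    """Crude smooth-illumination estimate: k x k box average."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def retinex_reflectance(vis, eps=1e-6):
    """Single-scale Retinex: reflectance = log(image) - log(illumination)."""
    return np.log(vis + eps) - np.log(box_blur(vis) + eps)

def fuse_ir_vis(ir, vis):
    """Toy fusion: keep bright IR targets on top of the VIS detail layer."""
    r = retinex_reflectance(vis)
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)  # normalize to [0, 1]
    return np.maximum(r, ir)
```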

Contrast Enhancement Based on Weight Mapping Retinex Algorithm (Contrast 향상을 위한 가중치 맵 기반의 Retinex 알고리즘)

  • Lee, Sang-Won;Song, Chang-Young;Cho, Seong-Soo;Kim, Seong-Ihl;Lee, Won-Seok;Kang, June-Gill
    • Journal of the Institute of Electronics Engineers of Korea - IE
    • /
    • v.46 no.4
    • /
    • pp.31-41
    • /
    • 2009
  • The image sensor of a digital still camera has a limited dynamic range. In high-dynamic-range scenes, a picture often turns out underexposed or overexposed. The Retinex algorithm, based on the theory of human visual perception, is known to be an effective contrast enhancement technique. However, in high-dynamic-range scenes it can produce unbalanced enhancement, in which global contrast increases while local contrast decreases. In this paper, to enhance both global and local contrast, we propose a weight-mapping Retinex algorithm. The weight map is composed of edge and exposure data extracted from each Retinex image and is merged with the Retinex images in the fusion step. Output comparisons and numerical analysis show that the proposed algorithm produces better output images with increased global and local contrast.
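The weight-map construction, edge strength plus well-exposedness followed by per-pixel normalized blending, might look like this (a sketch of the fusion step only; the Retinex front end that produces the input images is assumed to exist already, and the two measures below are generic choices rather than the paper's exact definitions):

```python
import numpy as np

def edge_weight(img):
    """Edge measure: gradient magnitude."""
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def exposure_weight(img, sigma=0.2):
    """Exposure measure: Gaussian preference for mid-range intensities."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def merge_retinex_outputs(scales):
    """Blend several Retinex outputs with per-pixel edge+exposure weights."""
    ws = [edge_weight(s) + exposure_weight(s) + 1e-12 for s in scales]
    total = sum(ws)
    return sum((w / total) * s for w, s in zip(ws, scales))
```

A well-exposed input dominates a dark one at pixels where both are flat, because its exposure weight is larger.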

Fusion of Global and Adaptive Methods for Contrast Enhancement of Ultrasound Images (초음파 영상의 콘트라스트 향상을 위한 전역적, 적응적 방법의 융합)

  • Yun, Jae-Ho;Park, Rae-Hong
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.357-358
    • /
    • 2007
  • Contrast enhancement in ultrasound imaging contributes to more accurate medical diagnosis by improving the visibility of ultrasound images. This paper proposes a contrast enhancement method that improves the contrast of ultrasound images both globally and locally by fusing global and adaptive contrast enhancement methods. Experimental results show that our approach outperforms existing global and adaptive contrast enhancement methods in enhancing the visibility of ultrasound images.
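One plausible reading of such a global/adaptive fusion (global histogram equalization blended with per-block equalization; the paper's actual component methods and fusion rule may differ):

```python
import numpy as np

def hist_equalize(img, bins=256):
    """Global histogram equalization via the empirical CDF (input in [0, 1])."""
    flat = img.ravel()
    hist, edges = np.histogram(flat, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / flat.size
    return np.interp(flat, edges[1:], cdf).reshape(img.shape)

def fuse_global_adaptive(img, block=8, alpha=0.5):
    """Blend a globally equalized image with a per-block (adaptive) one;
    alpha trades overall contrast against local detail."""
    glob = hist_equalize(img)
    adap = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            adap[y:y + block, x:x + block] = hist_equalize(img[y:y + block, x:x + block])
    return alpha * glob + (1.0 - alpha) * adap
```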

A flexible, full-color OTFT-OLED display

  • Yagi, I.;Hirai, N.;Miyamoto, Y.;Noda, M.;Imaoka, A.;Yasuda, R.;Yoneya, N.;Nomoto, K.;Yumoto, A.;Kasahara, J.
    • Korea Information Display Society: Conference Proceedings
    • /
    • 2008.10a
    • /
    • pp.1627-1630
    • /
    • 2008
  • We have demonstrated a flexible, full-color OTFT-OLED display. The display has a top-emitting pixel structure with a resolution of 80 ppi, achieved by a newly developed OTFT integration architecture. The 0.3-mm-thick flexible display exhibits a peak luminance of over 100 nits with a contrast ratio greater than 1000:1.