• Title/Summary/Keyword: multiple projection images

Search Result 44, Processing Time 0.017 seconds

2D-MELPP: A two dimensional matrix exponential based extension of locality preserving projections for dimensional reduction

  • Xiong, Zixun;Wan, Minghua;Xue, Rui;Yang, Guowei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.9
    • /
    • pp.2991-3007
    • /
    • 2022
  • Two-dimensional locality preserving projections (2D-LPP) is an improved algorithm that extends locality preserving projections (LPP) to 2D image data in order to solve the small sample size (SSS) problem that LPP meets. It is able to find a low-dimensional manifold mapping that not only preserves local information but also detects the manifold embedded in the original data space, and it is simple and elegant. However, inspired by comparison experiments between two-dimensional linear discriminant analysis (2D-LDA) and linear discriminant analysis (LDA), which indicated that matrix-based methods do not always perform better even when training samples are limited, we surmise that 2D-LPP may meet the same limitation as 2D-LDA, and we propose a novel matrix exponential method, 2D-MELPP, to enhance the performance of 2D-LPP. 2D-MELPP is equivalent to employing a distance diffusion mapping to transform the original images into a new space in which the margins between labels are broadened, which is beneficial for classification problems. Nonetheless, the computational time complexity of 2D-MELPP is extremely high. In this paper, we replace some of the matrix multiplications with multiple multiplications to save memory cost and provide an efficient way of solving 2D-MELPP. We test it on public databases (a random 3D data set, ORL, the AR face database, and the PolyU Palmprint database) and compare it with other 2D methods such as 2D-LDA and 2D-LPP, as well as 1D methods such as LPP and exponential locality preserving projections (ELPP), finding that it outperforms the others in recognition accuracy. We also compare different dimensions of the projection vector and record the running time on the ORL, AR face, and PolyU Palmprint databases. These experimental results show that our algorithm performs better on three independent public databases.
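The core trick the abstract describes — replacing LPP's possibly singular scatter matrices with their matrix exponentials so the generalized eigenproblem is always well posed — can be sketched for the 1D (ELPP) case. This is an illustrative reconstruction, not the paper's 2D-MELPP implementation; the function name, neighbour count `k`, and heat-kernel width `t` are assumptions.

```python
import numpy as np
from scipy.linalg import expm, eigh

def elpp_sketch(X, k=3, t=1.0, d=2):
    """Sketch of exponential LPP: build the LPP graph Laplacian, then
    solve the eigenproblem on matrix exponentials of the scatter
    matrices, which are full-rank even when samples < dimensions."""
    n = X.shape[0]
    # pairwise distances and heat-kernel affinity over k nearest neighbours
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]        # skip self at index 0
        W[i, nbrs] = np.exp(-dist[i, nbrs] ** 2 / t)
    W = np.maximum(W, W.T)                          # symmetrise the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                       # graph Laplacian
    SL, SD = X.T @ L @ X, X.T @ D @ X               # LPP scatter matrices
    # expm(SD) is symmetric positive definite, so eigh always succeeds
    vals, vecs = eigh(expm(SL), expm(SD))
    return vecs[:, :d]                              # d smallest eigenvectors
```

The 2D variant in the paper works on image matrices directly instead of vectorized samples, but the diffusion-mapping interpretation of `expm` is the same.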

Fast Multi-GPU based 3D Backprojection Method (다중 GPU 기반의 고속 삼차원 역전사 기법)

  • Lee, Byeong-Hun;Lee, Ho;Kye, Hee-Won;Shin, Yeong-Gil
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.2
    • /
    • pp.209-218
    • /
    • 2009
  • 3D backprojection is a reconstruction algorithm that generates volume data consisting of tomographic images, providing spatial information about the original 3D data from hundreds of 2D projections. The computational time of backprojection increases in proportion to the size of the volume data and the number of projection images, since the value of every voxel is calculated by considering the corresponding pixels from hundreds of projections. To reduce this computational time, fast GPU-based 3D backprojection methods have been studied recently, and their performance has improved significantly. This paper presents two multiple-GPU based methods that maximize GPU parallelism and compares their efficiency, considering both the number of projections and the size of the volume data. The first method generates partial volume data independently for all projections after allocating half of the volume data on each GPU. The second method acquires the entire volume data by merging, on the CPU, the incomplete volume data from each GPU; the incomplete volume data is generated using half of the projections after allocating the full volume data on each GPU. In the experimental results, the first method performed better than the second when the entire volume data could be allocated on the GPU; otherwise, the second method was more efficient than the first.
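The two partitioning strategies the abstract contrasts can be illustrated with a toy CPU model in which each "GPU" is just a function call: strategy one splits the volume and gives every device all projections; strategy two splits the projections and merges the partial volumes by summation. The `backproject` body here is a deliberate placeholder for the real geometry-aware accumulation; all names are illustrative, not from the paper.

```python
import numpy as np

def backproject(volume_shape, projections):
    """Toy backprojection: every voxel accumulates a contribution from
    each projection (stand-in for the real ray-driven update)."""
    vol = np.zeros(volume_shape)
    for p in projections:                 # p has shape (Y, X)
        vol += p                          # broadcasts across the Z slabs
    return vol

def strategy_split_volume(volume_shape, projections):
    # Method 1: each "GPU" holds half the volume and sees every
    # projection, so its half is already complete; just concatenate.
    half = volume_shape[0] // 2
    top = backproject((half,) + volume_shape[1:], projections)
    bottom = backproject((volume_shape[0] - half,) + volume_shape[1:], projections)
    return np.concatenate([top, bottom], axis=0)

def strategy_split_projections(volume_shape, projections):
    # Method 2: each "GPU" holds the full volume but only half the
    # projections; the incomplete volumes are merged (summed) on the CPU.
    mid = len(projections) // 2
    partial_a = backproject(volume_shape, projections[:mid])
    partial_b = backproject(volume_shape, projections[mid:])
    return partial_a + partial_b
```

Both strategies produce the same volume; they differ only in per-device memory (half vs. full volume), which matches the paper's finding that method one wins only when the whole volume fits on each GPU.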


Face Recognition based on Hybrid Classifiers with Virtual Samples (가상 데이터와 융합 분류기에 기반한 얼굴인식)

  • 류연식;오세영
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.40 no.1
    • /
    • pp.19-29
    • /
    • 2003
  • This paper presents a novel hybrid classifier for face recognition with artificially generated virtual training samples. We utilize both the nearest neighbor approach in feature angle space and a connectionist model to obtain a synergy effect by combining the results of two heterogeneous classifiers. First, a classifier called the nearest feature angle (NFA), based on angular information, finds the feature most similar to the query in a given training set. Second, a classifier has been developed based on the recall of the stored frontal projection of the query feature. It uses a frontal recall network (FRN) that finds the most similar frontal feature among the stored frontal feature set. For the FRN, we used an ensemble neural network consisting of multiple multilayer perceptrons (MLPs), each of which is trained independently to enhance generalization capability. Further, both classifiers used a virtual training set generated adaptively according to the spatial distribution of each person's training samples. Finally, the results of the two classifiers are combined to identify the best matching class, and the corresponding similarity measure is used to make the final decision. The proposed classifier achieved an average classification rate of 96.33% on a large group of different test sets of images, its average error rate is 61.5% of that of the nearest feature line (NFL) method, and it achieves more robust classification performance.
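The angular-similarity half of this hybrid scheme, plus a simple fusion rule, can be sketched as follows. This is a minimal illustration under assumptions: cosine similarity stands in for the NFA's feature-angle measure, and a confidence-based pick stands in for the paper's similarity-measure fusion; the FRN ensemble itself is not reproduced.

```python
import numpy as np

def nfa_classify(query, feats, labels):
    """Nearest feature angle (sketch): pick the training feature whose
    angle to the query is smallest, i.e. cosine similarity is largest."""
    sims = feats @ query / (np.linalg.norm(feats, axis=1) * np.linalg.norm(query))
    best = int(np.argmax(sims))
    return labels[best], float(sims[best])

def combine(nfa_pred, nfa_score, net_pred, net_score):
    """Hybrid decision (sketch): keep the prediction of whichever
    classifier reports the higher similarity score."""
    return nfa_pred if nfa_score >= net_score else net_pred
```

In the paper the second score would come from the FRN's recall of a frontal projection; here any second classifier's confidence can be plugged in.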

Comparison of True and Virtual Non-Contrast Images of Liver Obtained with Single-Source Twin Beam and Dual-Source Dual-Energy CT (간의 단일선원 Twin Beam과 이중선원 이중에너지 전산화단층촬영의 비조영증강 영상과 가상 비조영증강 영상의 비교 연구)

  • Jeong Sub Lee;Guk Myung Choi;Bong Soo Kim;Su Yeon Ko;Kyung Ryeol Lee;Jeong Jae Kim;Doo Ri Kim
    • Journal of the Korean Society of Radiology
    • /
    • v.84 no.1
    • /
    • pp.170-184
    • /
    • 2023
  • Purpose: To assess the magnitude of differences between attenuation values of the true non-contrast image (TNC) and virtual non-contrast image (VNC) derived from twin-beam dual-energy CT (tbDECT) and dual-source DECT (dsDECT). Materials and Methods: This retrospective study included 62 patients who underwent liver dynamic DECT with tbDECT (n = 32) or dsDECT (n = 30). Arterial VNC (AVNC), portal VNC (PVNC), and delayed VNC (DVNC) were reconstructed using multiphasic DECT. Attenuation values of multiple intra-abdominal organs (n = 11) on TNCs were subsequently compared to those on multiphasic VNCs. Further, we investigated the percentage of cases with an absolute difference between TNC and VNC of ≤ 10 Hounsfield units (HU). Results: For the mean attenuation values of TNC and VNC, 33 items for each DECT were compared according to the multiphasic VNCs and organs. More than half of the comparison items for each DECT showed significant differences (tbDECT 17/33; dsDECT 19/33; Bonferroni correction p < 0.0167). The percentage of cases with an absolute difference ≤ 10 HU was 56.7%, 69.2%, and 78.6% for AVNC, PVNC, and DVNC in tbDECT, respectively, and 70.5%, 78%, and 78% in dsDECT, respectively. Conclusion: VNCs derived from the two DECTs were insufficient to replace TNCs because of the considerable difference in attenuation values.
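The study's agreement metric — the percentage of cases whose TNC-vs-VNC attenuation difference stays within 10 HU — is simple to compute; a small helper makes the definition concrete. The function name and example values are illustrative, not from the paper.

```python
import numpy as np

def pct_within(tnc, vnc, tol=10.0):
    """Percentage of cases whose absolute TNC-vs-VNC attenuation
    difference is at most `tol` Hounsfield units."""
    diff = np.abs(np.asarray(tnc, dtype=float) - np.asarray(vnc, dtype=float))
    return 100.0 * np.mean(diff <= tol)
```

Applied per reconstruction phase (AVNC, PVNC, DVNC) and per scanner type, this yields the percentages reported in the Results section.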