• Title/Summary/Keyword: high accuracy reconstruction

Search Result 120

Sparse reconstruction of guided wavefield from limited measurements using compressed sensing

  • Qiao, Baijie;Mao, Zhu;Sun, Hao;Chen, Songmao;Chen, Xuefeng
    • Smart Structures and Systems / v.25 no.3 / pp.369-384 / 2020
  • A wavefield sparse reconstruction technique based on compressed sensing is developed in this work to dramatically reduce the number of measurements. Firstly, a severely underdetermined representation of the guided wavefield at a snapshot is established in the spatial domain. Secondly, an optimal compressed sensing model of guided wavefield sparse reconstruction is established based on an l1-norm penalty, where a suite of discrete cosine functions is selected as the dictionary to promote sparsity. Regular, random, and jittered undersampling schemes are compared as candidates for the undersampling matrix of compressed sensing. Thirdly, a gradient projection method is employed to solve the compressed sensing model of wavefield sparse reconstruction from highly incomplete measurements. Finally, experiments with different excitation frequencies are conducted on an aluminum plate to verify the effectiveness of the proposed sparse reconstruction method, where a scanning laser Doppler vibrometer, used as the ground-truth benchmark, measures the original wavefield in a given inspection region. The experiments demonstrate that the missing wavefield data can be accurately reconstructed from less than 12% of the original measurements; that the reconstruction accuracy of the jittered undersampling scheme is, with high probability, slightly higher than that of the random undersampling scheme, while the regular undersampling scheme fails to reconstruct the wavefield image; and that a quantified mapping relationship between the sparsity ratio and the recovery error over a specified interval can be established through statistical modeling and analysis.
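The core of this abstract's approach, l1-penalized recovery of a signal that is sparse in a discrete cosine dictionary from randomly undersampled measurements, can be sketched in a few lines. The sketch below uses ISTA (iterative soft thresholding) rather than the authors' gradient projection solver, and a synthetic 1D "wavefield snapshot" rather than laser vibrometer data; all names and parameters are illustrative.

```python
import numpy as np
from scipy.fft import dct, idct

def ista_recover(y, idx, n, lam=0.01, iters=500):
    """Recover a DCT-sparse signal of length n from samples y taken at indices idx,
    by minimizing 0.5*||y - S*IDCT(c)||^2 + lam*||c||_1 with ISTA (unit step is
    valid because subsampling an orthonormal basis has operator norm <= 1)."""
    c = np.zeros(n)
    for _ in range(iters):
        x = idct(c, norm='ortho')            # current signal estimate
        r = np.zeros(n)
        r[idx] = y - x[idx]                  # residual on the sampled entries only
        c = c + dct(r, norm='ortho')         # gradient step back in the dictionary
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
    return idct(c, norm='ortho')

rng = np.random.default_rng(0)
n = 128
coef = np.zeros(n)
coef[rng.choice(n, 5, replace=False)] = rng.standard_normal(5) + 2.0  # 5 DCT atoms
x_true = idct(coef, norm='ortho')            # synthetic "wavefield snapshot"
idx = np.sort(rng.choice(n, 40, replace=False))   # ~31% random undersampling
x_rec = ista_recover(x_true[idx], idx, n)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The jittered scheme in the paper differs from the plain random `idx` above only in how the sample positions are drawn (one randomized sample per regular cell), which suppresses the coherent aliasing that makes the regular scheme fail.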

Very deep super-resolution for efficient cone-beam computed tomographic image restoration

  • Hwang, Jae Joon;Jung, Yun-Hoa;Cho, Bong-Hae;Heo, Min-Suk
    • Imaging Science in Dentistry / v.50 no.4 / pp.331-337 / 2020
  • Purpose: As cone-beam computed tomography (CBCT) has become the most widely used 3-dimensional (3D) imaging modality in the dental field, storage space and costs for large-capacity data have become an important issue. Therefore, if 3D data can be stored at a clinically acceptable compression rate, the burden in terms of storage space and cost can be reduced and data can be managed more efficiently. In this study, a deep learning network for super-resolution was tested to restore compressed virtual CBCT images. Materials and Methods: Virtual CBCT image data were created with a publicly available online dataset (CQ500) of multidetector computed tomography images using CBCT reconstruction software (TIGRE). A very deep super-resolution (VDSR) network was trained to restore high-resolution virtual CBCT images from the low-resolution virtual CBCT images. Results: The images reconstructed by VDSR showed better image quality than bicubic interpolation in restored images at various scale ratios. The highest scale ratio with clinically acceptable reconstruction accuracy using VDSR was 2.1. Conclusion: VDSR showed promising restoration accuracy in this study. In the future, it will be necessary to experiment with new deep learning algorithms and large-scale data for clinical application of this technology.

Deconvolution Based on the Reconstruction of Residue Polynomials (나머지 다정식의 재구성에 의한 디컨볼루션)

  • 유수현;김재구
    • Journal of the Korean Institute of Telematics and Electronics / v.22 no.6 / pp.19-27 / 1985
  • In most engineering problems, the output of a linear system can be expressed as the convolution of a finite input and an impulse response. In this paper, deconvolution algorithms based on the reconstruction of residue polynomials are considered for obtaining a convolution factor, i.e., the impulse response or the system input. Two techniques, one using a matrix and one using Euclid's algorithm, are discussed. In the illustrated examples, the results showed high accuracy, with an RMS error of about 10^-10.
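The premise of this abstract, that convolution of finite sequences is polynomial multiplication and deconvolution is therefore polynomial division, can be demonstrated directly. This is a minimal sketch of the division view only, not of the residue-polynomial reconstruction itself; the sequences are made up for illustration.

```python
import numpy as np

h = np.array([1.0, -0.5, 0.25])        # impulse response (divisor polynomial)
x = np.array([2.0, 1.0, 3.0, -1.0])    # system input
y = np.convolve(h, x)                  # observed output = coefficient product

# Deconvolution by polynomial division: dividing the output polynomial by the
# impulse response recovers the input exactly, with zero remainder.
x_rec, remainder = np.polydiv(y, h)
```

In floating point this direct division is numerically fragile for long sequences, which is one motivation for the more robust residue-based techniques the paper studies.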


The Correctness Comparison of MCIH Model and WMLF/GI Model for the Individual Haplotyping Reconstruction (일배체형 재조합을 위한 MCIH 모델과 WMLF/GI 모델의 정확도 비교)

  • Jeong, In-Seon;Kang, Seung-Ho;Lim, Hyeong-Seok
    • The KIPS Transactions: Part B / v.16B no.2 / pp.157-161 / 2009
  • Minimum Letter Flips (MLF) and Weighted Minimum Letter Flips (WMLF) can reconstruct haplotypes more accurately from SNP fragments containing many errors and gaps by incorporating the related genotype information, and WMLF is known to be more accurate in haplotype reconstruction than MLF. In this paper, we analyze the two models under different rates of homozygous sites in the genotype information and different confidence levels according to sequencing quality, comparing their performance using a neural network and a genetic algorithm. The experimental results indicate that when the rate of homozygous sites is high and the sequencing quality is good, WMLF/GI achieves higher haplotype reconstruction accuracy than MCIH, especially when the error rate and gap rate of the SNP fragments are high.
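The MLF objective underlying both models can be sketched as follows: count the minimum number of letter flips needed to make each SNP fragment consistent with a candidate haplotype or its complement. This is only the scoring function, not the MCIH or WMLF/GI optimization models themselves; the fragment data are invented for illustration.

```python
import numpy as np

def mlf_cost(fragments, h):
    """Minimum letter flips to fit each fragment to haplotype h or its complement.
    fragments: rows of 0/1 alleles with -1 marking gaps (unsequenced sites)."""
    h = np.asarray(h)
    total = 0
    for f in np.asarray(fragments):
        obs = f != -1                           # ignore gap positions
        d_h = np.sum(f[obs] != h[obs])          # flips against h
        d_c = np.sum(f[obs] != (1 - h)[obs])    # flips against the complement
        total += min(d_h, d_c)                  # assign fragment to the closer copy
    return total

frags = [[0, 1, -1, 0],
         [1, 0, 0, -1],
         [0, 0, 1, 0]]
# haplotype (0,1,1,0) and its complement explain these fragments with one flip
cost = mlf_cost(frags, [0, 1, 1, 0])
```

WMLF replaces the unit flip cost with confidence weights derived from sequencing quality, which is why it benefits when the quality scores are informative.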

Invasion of Privacy of Federated Learning by Data Reconstruction Attack with Technique for Converting Pixel Value (픽셀값 변환 기법을 더한 데이터 복원공격에 의한 연합학습의 프라이버시 침해)

  • Yoon-ju Oh;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.1 / pp.63-74 / 2023
  • Federated learning (FL), which trains models by exchanging parameters rather than raw data, is emerging as a way to protect privacy. However, a recent paper showed that training data can be leaked from shared gradients. Our paper implements an experiment that leaks training data from gradients in a federated learning environment and proposes a method that improves the reconstruction performance of existing gradient-based attacks. Experiments on the Yale Face Database B and the MNIST dataset show that federated learning is not safe from invasion of privacy: when the model performs well (accuracy of 99-100%), up to 100 out of 100 training samples can be reconstructed. In addition, by comparing pixel-level performance (MSE, PSNR, SSIM) with identification performance in a human test, we emphasize the importance of identification performance over pixel-level performance.
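A well-known observation from the gradient-inversion literature illustrates why shared gradients leak data at all: for a fully-connected layer z = Wx + b, the weight gradient is an outer product of the bias gradient and the input, so the input can be read off exactly. The sketch below is this textbook single-layer case, not the paper's pixel-value-conversion attack; all values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(8)                    # private training input (e.g. pixel values)
t = rng.random(4)                    # target
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)

z = W @ x + b                        # forward pass of one fully-connected layer
g_z = 2.0 * (z - t)                  # dL/dz for a squared-error loss
g_W = np.outer(g_z, x)               # dL/dW = dL/dz . x^T   (shared in FL)
g_b = g_z                            # dL/db = dL/dz          (shared in FL)

# A curious server recovers the input from the shared gradients alone:
# each row of g_W is g_b[i] * x, so dividing by g_b[i] yields x exactly.
i = int(np.argmax(np.abs(g_b)))      # pick a row with nonzero bias gradient
x_leaked = g_W[i] / g_b[i]
```

Deeper networks require iterative optimization of a dummy input to match the observed gradients, which is the setting the paper's attack improves on.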

Study on the spectroscopic reconstruction of explosive-contaminated overlapping fingerprints using the laser-induced plasma emissions

  • Yang, Jun-Ho;Yoh, Jai-Ick
    • Analytical Science and Technology / v.33 no.2 / pp.86-97 / 2020
  • The reconstruction and separation of explosive-contaminated overlapping fingerprints constitutes an analytical challenge of high significance in forensic science. Laser-induced breakdown spectroscopy (LIBS) allows real-time chemical mapping by detecting the light emissions from laser-induced plasma and can offer a powerful means of fingerprint classification based on the chemical components of the sample. In recent years, LIBS has been studied as one of the spectroscopic techniques with the greatest potential for forensic science. However, despite its great sensitivity, LIBS suffers from limited detection owing to the difficulty of reconstructing overlapping fingerprints. Here, the authors propose a simple yet effective method of using chemical mapping to separate and reconstruct explosive-contaminated, overlapping fingerprints. A Q-switched Nd:YAG laser system (1064 nm), which allows the laser beam diameter and the area of the ablated crater to be controlled, was used to analyze the chemical compositions of eight samples of explosive-contaminated fingerprints (two sample explosives and four individuals) via LIBS. The chemical validations were then further performed by applying Raman spectroscopy. The results were subjected to principal component and partial least-squares multivariate analyses and showed classification of contaminated fingerprints at higher than 91% accuracy. Robustness and sensitivity tests indicate that the method is effective for separating and reconstructing overlapping fingerprints bearing explosive traces.
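The classification step described here, principal component analysis of emission spectra followed by class assignment, can be sketched on synthetic data. This is a toy illustration with Gaussian emission lines and a nearest-centroid classifier standing in for the paper's PLS analysis; peak positions, noise level, and class counts are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)
wav = np.arange(200)                      # wavelength channel index

def spectrum(peak):
    """Synthetic emission spectrum: one Gaussian line plus channel noise."""
    return np.exp(-0.5 * ((wav - peak) / 3.0) ** 2) + 0.05 * rng.standard_normal(200)

# Two "fingerprint" classes with emission peaks at different wavelengths
X = np.array([spectrum(60) for _ in range(20)] + [spectrum(140) for _ in range(20)])
labels = np.array([0] * 20 + [1] * 20)

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                    # project onto the first two PCs

# Nearest-centroid classification in the reduced PC space
c0 = scores[labels == 0].mean(axis=0)
c1 = scores[labels == 1].mean(axis=0)
pred = (np.linalg.norm(scores - c1, axis=1)
        < np.linalg.norm(scores - c0, axis=1)).astype(int)
accuracy = (pred == labels).mean()
```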

Multi-Detector Row CT of the Central Airway Disease (Multi-Detector Row CT를 이용한 중심부 기도 질환의 평가)

  • Kang, Eun-Young
    • Tuberculosis and Respiratory Diseases / v.55 no.3 / pp.239-249 / 2003
  • Multi-detector row CT (MDCT) provides faster speed, longer coverage in conjunction with thin slices, improved spatial resolution, and the ability to produce high-quality multiplanar and three-dimensional (3D) images. MDCT has revolutionized the non-invasive evaluation of the central airways. Simultaneous display of axial, multiplanar, and 3D images raises the precision and accuracy of the radiologic diagnosis of central airway disease. This article introduces central airway imaging with MDCT, emphasizing the emerging role of multiplanar and 3D reconstruction.

Accuracy of three-dimensional periodontal ligament models generated using cone-beam computed tomography at different resolutions for the assessment of periodontal bone loss

  • Hangmiao Lyu;Li Xu;Huimin Ma;Jianxia Hou;Xiaoxia Wang;Yong Wang;Yijiao Zhao;Weiran Li;Xiaotong Li
    • The Korean Journal of Orthodontics / v.53 no.2 / pp.77-88 / 2023
  • Objective: To develop a method for generating three-dimensional (3D) digital models of the periodontal ligament (PDL) using 3D cone-beam computed tomography (CBCT) reconstruction and to evaluate the accuracy and agreement of the 3D PDL models in the measurement of periodontal bone loss. Methods: CBCT data collected from four patients with skeletal Class III malocclusion prior to periodontal surgery were reconstructed at three voxel sizes (0.2 mm, 0.25 mm, and 0.3 mm), and 3D tooth and alveolar bone models were generated to obtain digital PDL models for the maxillary and mandibular anterior teeth. Linear measurements of the alveolar bone crest obtained during periodontal surgery were compared with the digital measurements for assessment of the accuracy of the digital models. The agreement and reliability of the digital PDL models were analyzed using intra- and interexaminer correlation coefficients and Bland-Altman plots. Results: Digital models of the maxillary and mandibular anterior teeth, PDL, and alveolar bone of the four patients were successfully established. Relative to the intraoperative measurements, linear measurements obtained from the 3D digital models were accurate, and there were no significant differences among different voxel sizes at different sites. High diagnostic coincidence rates were found for the maxillary anterior teeth. The digital models showed high intra- and interexaminer agreement. Conclusions: Digital PDL models generated by 3D CBCT reconstruction can provide accurate and useful information regarding the alveolar crest morphology and facilitate reproducible measurements. This could assist clinicians in the evaluation of periodontal prognosis and establishment of an appropriate orthodontic treatment plan.
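The agreement analysis this abstract relies on, a Bland-Altman comparison of intraoperative and digital-model measurements, reduces to computing the bias (mean difference) and 95% limits of agreement of paired readings. The sketch below uses invented measurement values purely for illustration.

```python
import numpy as np

# Paired linear measurements in mm (synthetic): intraoperative vs. digital model
intraop = np.array([3.2, 4.1, 2.8, 5.0, 3.6, 4.4])
digital = np.array([3.0, 4.3, 2.9, 4.8, 3.7, 4.5])

diff = digital - intraop
bias = diff.mean()                          # systematic offset between methods
sd = diff.std(ddof=1)                       # sample SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

In a Bland-Altman plot these three horizontal lines are drawn over the per-pair means; narrow limits of agreement around a near-zero bias are what justify calling the digital measurements "accurate" relative to the surgical reference.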

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE / v.26 no.3 / pp.408-415 / 2022
  • In this paper, we propose a technique for reconstructing a 3D point cloud from 2D images using an efficient feature-map extraction network. The originality of the proposed method is as follows. First, we use a new feature-map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not reduce the feature-map size in the middle of the deep learning network, so important information required for 3D point cloud reconstruction is not lost; the memory increase caused by the non-reduced image size is offset by reducing the number of channels and configuring the network to be efficiently shallow. Second, by preserving the high-resolution features of the 2D image, accuracy can be improved over conventional techniques: the feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Because conventional methods require not only the 2D image but also the shooting angle for training, the dataset must contain this additional information, which makes dataset construction difficult. In this paper, the reconstruction accuracy of the 3D point cloud is instead increased by increasing the diversity of information through randomness, without additional shooting information. Evaluated objectively on the ShapeNet dataset under the same protocol as the comparison papers, the proposed method achieves a CD value of 5.87, an EMD value of 5.81, and 2.9G FLOPs. Lower CD and EMD values mean the reconstructed 3D point cloud more closely approaches the original, and fewer FLOPs mean the deep learning network requires less memory. The CD, EMD, and FLOPs results therefore show about a 27% improvement in memory and a 6.3% improvement in accuracy compared with the methods in other papers, demonstrating objective performance.
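The CD metric quoted above is the Chamfer distance between the reconstructed and ground-truth point clouds. A minimal implementation is sketched below; note that conventions vary (squared vs. unsquared distances, sum vs. mean), so the exact numbers are not directly comparable with the paper's without knowing its convention.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N,3) and q (M,3):
    mean squared nearest-neighbour distance, accumulated in both directions."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(3)
cloud = rng.random((100, 3))                       # toy "ground truth" cloud
noisy = cloud + 0.01 * rng.standard_normal((100, 3))  # toy "reconstruction"

cd_self = chamfer_distance(cloud, cloud)           # identical clouds score 0
cd_noisy = chamfer_distance(cloud, noisy)          # small positive value
```

The broadcasted pairwise-distance matrix costs O(NM) memory; for the thousands of points typical of ShapeNet evaluation, a KD-tree nearest-neighbour query is the usual replacement.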

Reconstruction of Remote Sensing Data based on dynamic Characteristics of Time Series Data (위성자료의 시계열 특성에 기반한 실시간 자료 재구축)

  • Jung, Myung-Hee;Lee, Sang-Hoon;Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.8 / pp.329-335 / 2018
  • Satellite images, widely used in various applications, are very useful for monitoring the Earth's surface. Because satellite data are acquired by a remote sensor, they contain considerable noise and error, depending on the weather conditions during acquisition and on sensor malfunctions. Since data quality affects the accuracy and reliability of analysis results, noise removal and data restoration are important for obtaining high-quality data. In this study, we propose a reconstruction system that models the time-dependent dynamic characteristics of satellite data using a multi-period harmonic model and performs adaptive data restoration considering the spatial correlation of the data. The proposed method operates in real time and can therefore be employed as a preprocessing algorithm for real-time reconstruction of satellite data. It was evaluated with both simulated data and six years of MODIS NDVI data, from 2011 to 2016. Experimental results show that the proposed method has the potential to reconstruct high-quality satellite data.
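The harmonic-model fitting at the heart of this approach can be sketched as ordinary least squares on the observed (cloud-free) samples of a periodic series, after which the fitted model fills the gaps. This is a minimal single-pixel sketch with a noiseless synthetic NDVI-like curve; the paper's adaptive, spatially correlated restoration is not reproduced here.

```python
import numpy as np

def harmonic_design(t, period, n_harmonics):
    """Design matrix: constant term plus cos/sin pairs for each harmonic."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols += [np.cos(w), np.sin(w)]
    return np.column_stack(cols)

t = np.arange(0.0, 365.0)                      # one year of daily samples
truth = (0.4 + 0.3 * np.cos(2 * np.pi * t / 365)
             + 0.1 * np.sin(4 * np.pi * t / 365))   # annual + semi-annual cycle

rng = np.random.default_rng(4)
observed = rng.random(t.size) > 0.3            # ~30% of samples lost to cloud/noise

A = harmonic_design(t, 365.0, 2)
coef, *_ = np.linalg.lstsq(A[observed], truth[observed], rcond=None)
reconstructed = A @ coef                       # gap-free harmonic reconstruction
max_err = np.abs(reconstructed - truth).max()
```

Because the least-squares fit only ever uses samples up to the current time, the same machinery can run incrementally, which is what makes the paper's real-time preprocessing use plausible.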