• Title/Summary/Keyword: compressed sensing (CS)


Non-Iterative Threshold based Recovery Algorithm (NITRA) for Compressively Sensed Images and Videos

  • Poovathy, J. Florence Gnana;Radha, S.
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.10, pp.4160-4176, 2015
  • Data compression, such as image and video compression, has come a long way since the introduction of Compressive Sensing (CS), which compresses sparse signals such as images and videos into very few samples, i.e., M < N measurements. At the receiver end, a robust and efficient recovery algorithm estimates the original image or video. Many prominent algorithms solve a least squares problem (LSP) iteratively to reconstruct the signal, and hence consume more processing time. In this paper, a non-iterative threshold based recovery algorithm (NITRA) is proposed for the recovery of images and videos without solving an LSP, offering reduced complexity and better reconstruction quality. The elapsed time for images and videos using NITRA is in the µs range, about 100 times less than that of existing algorithms. The peak signal-to-noise ratio (PSNR) is above 30 dB, and the structural similarity (SSIM) and structural content (SC) are about 99%.
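
The NITRA procedure itself is not spelled out in the abstract, but the M < N sensing model it builds on can be sketched in a few lines of numpy. The dimensions, sparsity level, and Gaussian measurement matrix below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256          # ambient signal dimension
M = 64           # number of compressive measurements, M < N
K = 8            # sparsity level of the signal

# K-sparse signal: only K of the N entries are nonzero
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.normal(size=K)

# Random Gaussian measurement matrix (a common CS choice)
Phi = rng.normal(size=(M, N)) / np.sqrt(M)

# Compressive measurement: M samples stand in for the N-dimensional signal
y = Phi @ x
print(y.shape)   # (64,)
```

A recovery algorithm at the receiver then estimates `x` from `y` and `Phi`; NITRA's claim is that this can be done without iteratively solving a least squares problem.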

A Novel Compressed Sensing Technique for Traffic Matrix Estimation of Software Defined Cloud Networks

  • Qazi, Sameer;Atif, Syed Muhammad;Kadri, Muhammad Bilal
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.10, pp.4678-4702, 2018
  • Traffic matrix estimation has always drawn attention from researchers for better network management and future planning. With the advent of high traffic loads due to cloud computing platforms and Software Defined Networking based tunable routing and traffic management algorithms on the Internet, it is more necessary than ever to be able to predict current and future traffic volumes on the network. For large networks, such origin-destination (OD) traffic prediction takes the form of a large, under-constrained and under-determined system of equations with a dynamic measurement matrix. Previous work relied on the assumption that the measurement (routing) matrix is stationary, which makes those schemes unsuitable for modern software defined networks. In this work, we present our Compressed Sensing with Dynamic Model Estimation (CS-DME) architecture for modern software defined networks. Our main contributions are: (1) we formulate an approach in which the measurement matrix in the compressed sensing scheme can be accurately and dynamically estimated through a reformulation of the problem based on traffic demands; (2) by inspecting its eigen-spectrum on two real-world datasets, we show that a formulation using a dynamic measurement matrix based on instantaneous traffic demands can be used instead of a stationary binary routing matrix, making it better suited to modern Software Defined Networks whose routing is constantly evolving; (3) we also show that dynamically linking this compressed measurement matrix with the measured parameters leads to acceptable estimation of OD traffic flows, with results only marginally poorer than those of state-of-the-art schemes relying on fixed measurement matrices; (4) furthermore, using this compressed reformulated problem, we present a new strategy for selecting vantage points for the most efficient traffic matrix estimation, through a secondary compression technique based on a subset of link measurements. Experimental evaluation using the real-world datasets Abilene and GEANT shows that the technique is practical for modern software defined networks. Further, the performance of the scheme is compared with recent state-of-the-art techniques from the research literature.
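
The under-determined system of equations the abstract describes can be illustrated with a generic sparse-recovery sketch. The ISTA solver and toy 0/1 routing-style matrix below are standard textbook choices for this kind of problem, not the authors' CS-DME architecture; all dimensions are assumed:

```python
import numpy as np

def ista(A, y, lam=0.05, steps=300):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)                  # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
links, flows = 20, 60                          # fewer link measurements than OD flows
A = (rng.random((links, flows)) < 0.3).astype(float)  # toy routing-style matrix
x_true = np.zeros(flows)
x_true[rng.choice(flows, 5, replace=False)] = rng.uniform(1, 3, 5)  # few heavy flows
y = A @ x_true                                 # observed link loads

x_hat = ista(A, y)                             # sparse estimate of OD flows
```

The key structural point, mirrored in the abstract, is that `A` need not be a fixed binary routing matrix: any measurement matrix that is re-estimated per epoch can be plugged into the same recovery step.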

Dynamically Collimated CT Scan and Image Reconstruction of Convex Region-of-Interest

  • Jin, Seung Oh;Kwon, Oh-Kyong
    • Journal of Biomedical Engineering Research, v.35 no.5, pp.151-159, 2014
  • Computed tomography (CT) is one of the most widely used medical imaging modalities. However, the substantial x-ray dose delivered to the human subject during a CT scan is a great concern. Region-of-interest (ROI) CT is considered a possible solution for its potential to reduce this dose. In most ROI-CT scans, the ROI is set to a circular shape whose diameter is often considerably smaller than the full field-of-view (FOV). However, an arbitrarily shaped ROI is very desirable to reduce the x-ray dose further than a circularly shaped ROI can. We propose a new method to form a non-circular, convex-shaped ROI, along with a corresponding image reconstruction method. To make an ROI with an arbitrary convex shape, dynamic collimation is necessary to minimize the x-ray dose at each angle of view. In addition to the dynamic collimation, we acquire the ROI projection data at a slightly lower sampling rate in the view direction to further reduce the dose. We reconstruct images from the ROI projection data in the compressed sensing (CS) framework, assisted by the exterior projection data acquired from the pilot scan used to set the ROI. To validate the proposed method, we used experimental micro-CT projection data truncated to simulate the dynamic collimation. The reconstructed ROI images showed little error compared to images reconstructed from full-FOV scan data, as well as few artifacts inside the ROI. We expect the proposed method to significantly reduce the x-ray dose in CT scans if dynamic collimation is realized in real CT machines.

An Iterative Image Reconstruction Method for the Region-of-Interest CT Assisted from Exterior Projection Data

  • Jin, Seung Oh;Kwon, Oh-Kyong
    • Journal of Biomedical Engineering Research, v.35 no.5, pp.132-141, 2014
  • In an ordinary CT scan, a large number of projections with full field-of-view (FFOV) are necessary to reconstruct high resolution images. However, the excessive x-ray dosage of an FFOV scan is a great concern. Region-of-interest (ROI) CT and sparse-view CT are considered solutions for reducing x-ray dosage in CT scanning, but they suffer from bright-band artifacts or streak artifacts that cause contrast anomalies in the reconstructed image. In this study, we propose an image reconstruction method that eliminates the bright-band artifacts and the streak artifacts simultaneously. In addition to the ROI scan, which yields interior projection data with a relatively high sampling rate in the view direction, we acquire sparse-view exterior projection data at a much lower sampling rate. We then reconstruct images by solving a constrained total variation (TV) minimization problem for the interior projection data, assisted by the exterior projection data in the compressed sensing (CS) framework. For the interior image reconstruction assisted by the exterior projection data, we implemented the proposed method, which enforces dual data fidelity terms and a TV term. The proposed method effectively suppressed the bright-band artifacts around the ROI boundary and the streak artifacts in the ROI image. We expect the proposed method can be used for low-dose CT scans based on limited x-ray exposure to a small ROI in the human body.
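
The shape of the objective the abstract describes, two data fidelity terms (interior and exterior projections) plus a TV term, can be sketched generically. The 1D toy operators, weights, smoothed TV, and plain gradient descent below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def smoothed_tv_grad(x, eps=1e-3):
    """Gradient of the smoothed total variation sum(sqrt(dx^2 + eps))."""
    d = np.diff(x)
    g = d / np.sqrt(d * d + eps)
    out = np.zeros_like(x)
    out[:-1] -= g
    out[1:] += g
    return out

def reconstruct(A_in, y_in, A_ex, y_ex, lam=0.1, mu=0.5, step=1e-2, iters=500):
    """Minimize ||A_in x - y_in||^2 + mu*||A_ex x - y_ex||^2 + lam*TV(x)
    by gradient descent: two data fidelity terms plus one TV term."""
    x = np.zeros(A_in.shape[1])
    for _ in range(iters):
        grad = (2 * A_in.T @ (A_in @ x - y_in)
                + 2 * mu * A_ex.T @ (A_ex @ x - y_ex)
                + lam * smoothed_tv_grad(x))
        x -= step * grad
    return x

rng = np.random.default_rng(2)
n = 40
x_true = np.zeros(n)
x_true[10:25] = 1.0                         # piecewise-constant 1D phantom
A_in = rng.normal(size=(25, n)) / 5         # dense "interior" measurements
A_ex = rng.normal(size=(8, n)) / 5          # sparse-view "exterior" measurements
x_hat = reconstruct(A_in, A_in @ x_true, A_ex, A_ex @ x_true)
```

In the paper's setting the operators are truncated Radon projections rather than random matrices, and the TV problem is solved under constraints; the sketch only shows how the two fidelity terms and the TV penalty combine.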

Restoration of Ghost Imaging in Atmospheric Turbulence Based on Deep Learning

  • Chenzhe Jiang;Banglian Xu;Leihong Zhang;Dawei Zhang
    • Current Optics and Photonics, v.7 no.6, pp.655-664, 2023
  • Ghost imaging (GI) technology is developing rapidly, but it inevitably faces limitations such as the influence of atmospheric turbulence. In this paper, we study a ghost imaging system in atmospheric turbulence and use a gamma-gamma (GG) model to simulate turbulence in the medium-to-strong range. With a compressed sensing (CS) algorithm and a generative adversarial network (GAN), the image can be restored well. We analyze the performance of correlation imaging, the influence of atmospheric turbulence, and the effects of the restoration algorithm. The restored image's peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) increased to 21.9 dB and 0.67, respectively. This shows that deep learning (DL) methods can restore a distorted image well, which is of particular significance for computational imaging in noisy and blurry environments.
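
For reference, the PSNR figure quoted above follows the standard definition for 8-bit images. The helper below is that textbook definition, not code from the paper:

```python
import numpy as np

def psnr(img, ref, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((8, 8), dtype=np.uint8)
noisy = ref + 1                        # every pixel off by one grey level -> MSE = 1
print(round(psnr(noisy, ref), 2))      # 48.13
```

SSIM, by contrast, is a dimensionless similarity score in [0, 1], which is why the abstract's 0.67 carries no dB unit.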

A Compressed Sensing-Based Signal Detection Technique for Generalized Space Shift Keying Systems

  • Park, Jeonghong;Ban, Tae Won;Jung, Bang Chul
    • Journal of the Korea Institute of Information and Communication Engineering, v.18 no.7, pp.1557-1564, 2014
  • In this paper, we propose a signal detection technique based on parallel orthogonal matching pursuit (POMP) for generalized space shift keying (GSSK) systems; POMP is a modified version of orthogonal matching pursuit (OMP), which is widely used as a greedy algorithm for sparse signal recovery. The signal recovery problem in GSSK systems is similar to that in compressed sensing (CS). In the proposed POMP technique, the M indexes with the maximum correlation between the received signal and the channel matrix are selected at the first iteration, whereas a single index is selected in the OMP algorithm. Finally, the index yielding the minimum residual between the received signal and the M recovered signals is selected as the estimate of the original transmitted signal. POMP with quantization (POMP-Q) is also proposed, which combines the POMP technique with signal quantization at each iteration. The proposed POMP technique increases computational complexity M-fold compared with OMP, but its signal recovery performance significantly outperforms the conventional OMP algorithm.
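
The selection rule described above can be sketched as follows: spawn M OMP branches, one from each of the M best-correlated columns, and keep the branch with the minimum residual. The dimensions and the two-active-column signal are toy assumptions standing in for the GSSK setup, not the authors' exact system:

```python
import numpy as np

def omp_branch(H, y, first, k):
    """Run OMP for k atoms, forcing the first selected index."""
    support = [first]
    while len(support) < k:
        coef, *_ = np.linalg.lstsq(H[:, support], y, rcond=None)
        r = y - H[:, support] @ coef           # residual after least-squares fit
        support.append(int(np.argmax(np.abs(H.T @ r))))
    coef, *_ = np.linalg.lstsq(H[:, support], y, rcond=None)
    return support, np.linalg.norm(y - H[:, support] @ coef)

def pomp(H, y, k, M):
    """POMP: M parallel branches seeded by the M best-correlated columns;
    the branch with the minimum residual wins."""
    starts = np.argsort(np.abs(H.T @ y))[::-1][:M]
    branches = [omp_branch(H, y, int(s), k) for s in starts]
    return min(branches, key=lambda b: b[1])

rng = np.random.default_rng(3)
m, n, k = 16, 24, 2
H = rng.normal(size=(m, n)) / np.sqrt(m)       # toy channel matrix
x = np.zeros(n)
x[[3, 11]] = 1.0                               # two active antenna indices (GSSK-like)
y = H @ x                                      # noiseless received signal
support, resid = pomp(H, y, k, M=4)
```

Since branch 0 of POMP is exactly plain OMP (it starts from the single most correlated column), the winning residual can never exceed OMP's, which matches the performance claim in the abstract.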

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.27 no.3, pp.175-197, 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. In particular, multi-label classification requires a training method different from binary and multi-class classification because of the characteristic of having multiple labels. In addition, since the number of labels to be predicted grows as the number of labels and classes increases, performance improvement becomes difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed label, and (iii) restores the predicted label to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they have difficulty capturing non-linear relationships between labels and thus cannot create a latent label space that sufficiently preserves the information of the original label space.
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing gradient problem that occurs during backpropagation. To solve this problem, the skip connection was devised: by adding the input of a layer to its output, it prevents gradients from vanishing during backpropagation, enabling efficient learning even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, forming a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract, and to evaluate multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods.
This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately led to improved performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across numbers of dimensions of the latent label space.
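
The skip connection the methodology relies on can be illustrated with a minimal numpy residual layer. The dimension and ReLU activation are assumptions for illustration, not the study's architecture:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def skip_block(x, W, b):
    """Dense layer with a skip connection: output = relu(Wx + b) + x.
    The identity path carries the signal (and, during training, the gradient)
    even when the learned path contributes little."""
    return relu(W @ x + b) + x

rng = np.random.default_rng(4)
d = 16                                   # latent label dimension (assumed)
x = rng.normal(size=d)

# With zero weights the block reduces to the identity: the skip path alone
# passes the input through, which is what prevents vanishing gradients.
W0, b0 = np.zeros((d, d)), np.zeros(d)
assert np.allclose(skip_block(x, W0, b0), x)

W = rng.normal(size=(d, d)) * 0.1
out = skip_block(x, W, np.zeros(d))      # learned correction added on top of x
```

In the proposed methodology such blocks sit inside both the encoder and the decoder, so the latent label space is learned as a correction on top of an identity mapping rather than from scratch.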