• Title/Summary/Keyword: Data Matrix


Hybrid Watermarking Scheme using a Data Matrix and Secret Key (데이터 매트릭스와 비밀 키를 이용한 하이브리드 워터마킹 방법)

  • Jeon, Seong-Goo;Kim, Il-Hwan
    • Proceedings of the KIEE Conference
    • /
    • 2006.04a
    • /
    • pp.144-146
    • /
    • 2006
  • The Data Matrix, a two-dimensional bar code, is a technology capable of holding relatively large amounts of data compared to the conventional one-dimensional bar code, which serves merely as a key for accessing detailed information in a host computer database. A secret key is used to protect the watermark from malicious attacks. We encode copyright information into a Data Matrix bar code and spread it with a pseudo-random pattern generated from the owner's key. The randomized watermark is then embedded into the image at positions and with a pattern derived from the secret key. The experimental results show that the proposed scheme preserves image quality and is robust to various attacks, such as JPEG compression and noise. The performance of the proposed scheme is also verified by comparing the embedded copyright information with the information extracted by a bar code scanner.

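The key-seeded spread-spectrum embedding the abstract describes can be sketched roughly as follows: each payload bit is spread over the image with a pseudo-random ±1 pattern generated from the secret key, and recovered by correlating with the regenerated patterns. This is a minimal illustrative model, not the authors' implementation; the function names, the `strength` parameter, and the flat test image are assumptions, and the paper additionally encodes the payload as a Data Matrix symbol before embedding.

```python
import numpy as np

def embed_watermark(image, bits, secret_key, strength=2.0):
    """Spread each payload bit over the whole image with a key-seeded
    pseudo-random +/-1 pattern (one pattern per bit)."""
    rng = np.random.default_rng(secret_key)
    marked = image.astype(float).copy()
    for b in bits:
        pattern = rng.choice([-1.0, 1.0], size=image.shape)
        marked += strength * (1.0 if b else -1.0) * pattern
    return marked

def extract_watermark(marked, secret_key, n_bits, shape):
    """Regenerate the same key-seeded patterns and decide each bit
    from the sign of its correlation with the marked image."""
    rng = np.random.default_rng(secret_key)
    bits = []
    for _ in range(n_bits):
        pattern = rng.choice([-1.0, 1.0], size=shape)
        bits.append(1 if np.sum(marked * pattern) > 0 else 0)
    return bits

# demo: embed four bits and recover them with the same secret key
img = np.zeros((64, 64))
payload = [1, 0, 1, 1]
wm = embed_watermark(img, payload, secret_key=12345)
recovered = extract_watermark(wm, 12345, len(payload), img.shape)
```

Without the secret key the patterns cannot be regenerated, so the correlation test yields noise; this is the sense in which the key protects the watermark.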

Robust Non-negative Matrix Factorization with β-Divergence for Speech Separation

  • Li, Yinan;Zhang, Xiongwei;Sun, Meng
    • ETRI Journal
    • /
    • v.39 no.1
    • /
    • pp.21-29
    • /
    • 2017
  • This paper addresses the problem of unsupervised speech separation based on robust non-negative matrix factorization (RNMF) with β-divergence, when neither speech nor noise training data is available beforehand. We propose a robust version of non-negative matrix factorization, inspired by the recently developed sparse and low-rank decomposition, in which the data matrix is decomposed into the sum of a low-rank matrix and a sparse matrix. Efficient multiplicative update rules to minimize the β-divergence-based cost function are derived. A convolutional extension of the algorithm is also developed, which accounts for the time dependency of the non-negative noise bases. Experimental speech separation results show that the proposed convolutional RNMF successfully separates the repeating time-varying spectral structures from the magnitude spectrum of the mixture, and does so without any prior training.
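For orientation, the multiplicative update rules at the core of such methods can be sketched for the β = 2 (Euclidean) member of the β-divergence family, the simplest case. This is only the plain NMF baseline under assumed random initialization; the paper's robust (low-rank plus sparse) and convolutional extensions are not reproduced here.

```python
import numpy as np

def nmf_beta2(V, rank, n_iter=500, seed=0):
    """Multiplicative updates for V ~ W @ H under the beta = 2
    (Euclidean) beta-divergence. Updates keep W, H non-negative
    because they only multiply by non-negative ratios."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    eps = 1e-9  # guard against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# demo: factor an exactly rank-2 non-negative matrix
rng = np.random.default_rng(1)
V = rng.random((30, 2)) @ rng.random((2, 20))
W, H = nmf_beta2(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The robust variant in the paper additionally carries a sparse residual term so that outliers (speech) do not corrupt the low-rank noise model.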

Bayesian Modeling of Random Effects Covariance Matrix for Generalized Linear Mixed Models

  • Lee, Keunbaik
    • Communications for Statistical Applications and Methods
    • /
    • v.20 no.3
    • /
    • pp.235-240
    • /
    • 2013
  • Generalized linear mixed models (GLMMs) are frequently used for the analysis of longitudinal categorical data when subject-specific effects are of interest. In GLMMs, the structure of the random effects covariance matrix is important for estimating the fixed effects and for explaining subject and time variations. Estimating the matrix is not simple because of its high dimension and the positive-definiteness constraint; consequently, a simple covariance structure such as AR(1) is often imposed in practice. However, this strong assumption can result in biased estimates of the fixed effects. In this paper, we introduce Bayesian modeling approaches for the random effects covariance matrix using a modified Cholesky decomposition. The modified Cholesky decomposition accommodates a heterogeneous random effects covariance matrix, and the resulting estimated covariance matrix is guaranteed to be positive definite. We analyze metabolic syndrome data from a Korean Genomic Epidemiology Study using these methods.
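The appeal of the modified Cholesky decomposition is that it turns the constrained covariance matrix into unconstrained parameters. A minimal sketch, assuming the usual parameterization via generalized autoregressive parameters φ (the strictly lower-triangular entries of a unit lower-triangular T) and log innovation variances: T Σ T′ = D implies Σ = T⁻¹ D T⁻ᵀ, which is positive definite for any real-valued inputs. The function name is illustrative, not from the paper.

```python
import numpy as np

def cov_from_modified_cholesky(phi, log_d):
    """Reconstruct a covariance matrix from unconstrained modified
    Cholesky parameters. phi fills the strictly lower triangle of the
    unit lower-triangular T (with a minus sign, the usual GARP
    convention); exp(log_d) gives the positive innovation variances."""
    q = len(log_d)
    T = np.eye(q)
    idx = 0
    for j in range(1, q):
        for k in range(j):
            T[j, k] = -phi[idx]
            idx += 1
    D = np.diag(np.exp(log_d))
    Tinv = np.linalg.inv(T)
    return Tinv @ D @ Tinv.T  # symmetric positive definite by construction

# demo: arbitrary real parameters still yield a valid covariance matrix
rng = np.random.default_rng(0)
q = 4
Sigma = cov_from_modified_cholesky(rng.normal(size=q * (q - 1) // 2),
                                   rng.normal(size=q))
```

Because φ and log d are unconstrained, Bayesian priors can be placed on them directly, and heterogeneity is modeled by letting them depend on covariates.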

Secure Outsourced Computation of Multiple Matrix Multiplication Based on Fully Homomorphic Encryption

  • Wang, Shufang;Huang, Hai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.11
    • /
    • pp.5616-5630
    • /
    • 2019
  • Fully homomorphic encryption allows a third party to perform arbitrary computation over encrypted data and is especially suitable for secure outsourced computation. This paper investigates secure outsourced computation of multiple matrix multiplication based on fully homomorphic encryption, significantly improving on the recent work of Mishra et al. We refine their matrix encoding method by introducing a column-order encoding that requires smaller parameters. This enables a binary multiplication method for multiple matrix multiplication, which multiplies adjacent matrices pairwise in a tree structure instead of Mishra et al.'s sequential multiplication from left to right. The binary method yields a logarithmic-depth circuit and is therefore much more efficient than the sequential method's linear-depth circuit. Experimental results show that for the product of ten 32×32 (or 64×64) square matrices, our method takes only several thousand seconds, whereas Mishra et al.'s method would take on the order of tens of thousands of years, which is utterly impractical. In addition, we generalize our result from square to non-square matrices. Experimental results show that the binary multiplication method and the classical dynamic programming method perform similarly for the multiplication of ten non-square matrices.
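The depth argument is easy to see in plaintext: pairing adjacent matrices halves the chain each round, so a product of n matrices needs only ⌈log₂ n⌉ rounds of multiplications rather than n−1. The sketch below shows the multiplication order only (homomorphic ciphertext arithmetic is of course not modeled); the function name is an assumption.

```python
import numpy as np

def tree_product(mats):
    """Multiply a chain of matrices by pairing adjacent factors,
    halving the list each round. The circuit depth is ceil(log2(n))
    instead of the n-1 of left-to-right multiplication -- critical
    under FHE, where noise grows with multiplicative depth."""
    level = list(mats)
    while len(level) > 1:
        nxt = [level[i] @ level[i + 1] for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:           # odd leftover passes through unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

# demo: the tree order gives the same product as left-to-right
rng = np.random.default_rng(0)
mats = [rng.random((4, 4)) for _ in range(10)]
seq = mats[0]
for M in mats[1:]:
    seq = seq @ M
tree = tree_product(mats)
```

Matrix multiplication is associative, so only the depth changes, not the result; for non-square chains the pairing additionally interacts with dimension-dependent costs, which is where dynamic programming becomes competitive.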

Speaker Adaptation Using ICA-Based Feature Transformation

  • Jung, Ho-Young;Park, Man-Soo;Kim, Hoi-Rin;Hahn, Min-Soo
    • ETRI Journal
    • /
    • v.24 no.6
    • /
    • pp.469-472
    • /
    • 2002
  • Speaker adaptation techniques are generally used to reduce speaker differences in speech recognition. In this work, we focus on features suited to linear regression-based speaker adaptation. These are obtained by a feature transformation based on independent component analysis (ICA), with the transformation matrices estimated from the training and adaptation data. Since the adaptation data is insufficient to reliably estimate the ICA-based feature transformation matrix, the matrix estimated from a new speaker's utterance must be adjusted. To cope with this problem, we propose a smoothing method based on linear interpolation between the speaker-independent (SI) feature transformation matrix and the speaker-dependent (SD) feature transformation matrix. Our experiments show that the proposed method is most effective in the mismatched case, where the smoothed feature transformation matrix makes speaker adaptation using noisy speech more robust and thereby improves adaptation performance.

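The smoothing step itself is a one-liner; a sketch under the assumption that the interpolation weight (here called `alpha`, a name not taken from the paper) trades off the reliable SI estimate against the data-starved SD estimate:

```python
import numpy as np

def smooth_transform(W_si, W_sd, alpha):
    """Linear interpolation between the speaker-independent (SI) and
    speaker-dependent (SD) ICA feature transformation matrices.
    alpha in [0, 1]: 0 trusts only the SI matrix, 1 only the SD one."""
    return (1.0 - alpha) * W_si + alpha * W_sd

# demo with toy 3x3 transformation matrices
rng = np.random.default_rng(0)
W_si = rng.random((3, 3))
W_sd = rng.random((3, 3))
W_smooth = smooth_transform(W_si, W_sd, 0.3)
```

With scarce or noisy adaptation data a small weight on the SD matrix keeps the transformation close to the well-estimated SI one, which matches the abstract's finding that smoothing helps most in the mismatched case.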

Bayesian baseline-category logit random effects models for longitudinal nominal data

  • Kim, Jiyeong;Lee, Keunbaik
    • Communications for Statistical Applications and Methods
    • /
    • v.27 no.2
    • /
    • pp.201-210
    • /
    • 2020
  • Baseline-category logit random effects models have been used to analyze longitudinal nominal data. These models account for subject-specific variation using random effects. The random effects covariance matrix must then explain both subject-specific variation and serial correlation of the nominal outcomes; to do so, it must be heterogeneous and high-dimensional. However, the matrix is difficult to estimate because of its high dimensionality and the positive-definiteness constraint. In this paper, we exploit the modified Cholesky decomposition to estimate the high-dimensional heterogeneous random effects covariance matrix, and propose Bayesian methodology to estimate the parameters of interest. The proposed methods are illustrated with real data from the McKinney Homeless Research Project.

Registration of the 3D Range Data Using the Curvature Value (곡률 정보를 이용한 3차원 거리 데이터 정합)

  • Kim, Sang-Hoon;Kim, Tae-Eun
    • Convergence Security Journal
    • /
    • v.8 no.4
    • /
    • pp.161-166
    • /
    • 2008
  • This paper proposes a new approach to aligning 3D data sets using the curvature of feature surfaces. We use Gaussian curvatures and the covariance matrix, which express the physical characteristics of the model, to register unaligned 3D data sets. First, the physical characteristics of each local area are obtained from the Gaussian curvature. The camera position of the 3D range finder system is then calculated using the projection matrix between the 3D data set and the 2D image. Next, the physical characteristics of the whole area are obtained from the covariance matrix of the model. Corresponding points are found in the overlapping region with the cross-projection method, and the set is refined by removing self-occluded points. By repeating this process, we finally obtain corrected corresponding points in the overlapping region and an optimized registration result.

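Once corresponding points are available (in the paper, via the cross-projection method), the rigid transform that aligns them has a standard closed-form solution via SVD, the Kabsch algorithm. The sketch below is that standard step, offered as an illustrative stand-in for the final optimization rather than the paper's exact procedure.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid alignment of corresponding point sets:
    returns rotation R and translation t with Q ~ P @ R.T + t.
    P, Q are (n, 3) arrays of matched points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

# demo: recover a known rotation about z plus a translation
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.random.default_rng(0).random((20, 3))
Q = P @ R_true.T + t_true
R, t = kabsch(P, Q)
```

In an iterative pipeline such as the one described, this solve is repeated as the correspondence set is refined by removing occluded points.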

In-depth Analysis and Performance Improvement of a Flash Disk-based Matrix Transposition Algorithm (플래시 디스크 기반 행렬전치 알고리즘 심층 분석 및 성능개선)

  • Lee, Hyung-Bong;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.12 no.6
    • /
    • pp.377-384
    • /
    • 2017
  • The scope of matrix applications is so broad that it can hardly be enumerated. A typical matrix application area in computer science is image processing. In particular, radar scanning equipment implemented on a small embedded system requires real-time matrix transposition for image processing, and since its memory is small, a general in-memory matrix transposition algorithm cannot be applied. In this case, matrix transposition must be done in disk space, such as on a flash disk, using a limited memory buffer. In this paper, we analyze and improve a recently published flash disk-based matrix transposition algorithm known as the asymmetric sub-matrix transposition algorithm. The performance analysis shows that the asymmetric sub-matrix transposition algorithm performs worse than the conventional sub-matrix transposition algorithm, but the improved asymmetric algorithm outperforms the sub-matrix transposition algorithm on 13 of the 16 experimental data sets.
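The sub-matrix (tiled) transposition idea can be illustrated in memory: only one small tile is buffered at a time, each tile is read, transposed, and written to its mirrored position. This is a minimal sketch of the tiling pattern only; the on-disk algorithms the paper analyzes additionally schedule reads and writes around flash seek and erase costs, which NumPy slicing does not model.

```python
import numpy as np

def blocked_transpose(src, dst, n, block):
    """Transpose an n x n matrix from src into dst using only a
    block x block working buffer, as required when the matrix lives
    on disk and memory holds just one tile at a time."""
    for i in range(0, n, block):
        for j in range(0, n, block):
            buf = src[i:i + block, j:j + block].copy()  # read one tile
            dst[j:j + block, i:i + block] = buf.T       # write it mirrored
    return dst

# demo: 8x8 matrix, 3x3 buffer (edge tiles are smaller)
A = np.arange(64).reshape(8, 8)
B = np.zeros_like(A)
blocked_transpose(A, B, 8, 3)
```

The asymmetric variant studied in the paper uses tiles of unequal height and width to better fit the buffer, which is where the performance differences arise.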

Extracting Symbol Informations from Data Matrix two dimensional Barcode Image (Data Matrix 이차원 바코드에서 코드워드를 추출하는 알고리즘 구현)

  • 황진희;한희일
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.227-230
    • /
    • 2002
  • In this paper, we propose an algorithm to decode the Data Matrix two-dimensional barcode symbology. We employ the Hough transform and bilinear image warping to extract the barcode region from an image captured with a CMOS digital camera. The location of the barcode is found by applying the Hough transform. However, the barcode image may be distorted due to lens nonlinearity and the viewing angle of the camera. We therefore adopt a bilinear warping transform to rectify and align the barcode region of the scanned image. The codewords can then be detected from the aligned barcode region.

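A bilinear warp maps an upright sampling grid onto the quadrilateral found by the Hough transform: each output pixel's source coordinate is a bilinear blend of the four corner coordinates. The sketch below uses nearest-neighbor sampling for brevity; the corner ordering and function name are illustrative assumptions.

```python
import numpy as np

def bilinear_warp(image, corners, out_h, out_w):
    """Rectify a quadrilateral region of `image` onto an upright
    out_h x out_w grid. `corners` are the region's (x, y) corners in
    the order top-left, top-right, bottom-right, bottom-left."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        v = r / (out_h - 1)
        for c in range(out_w):
            u = c / (out_w - 1)
            # bilinear blend of the four corner coordinates
            x = (1-u)*(1-v)*x0 + u*(1-v)*x1 + u*v*x2 + (1-u)*v*x3
            y = (1-u)*(1-v)*y0 + u*(1-v)*y1 + u*v*y2 + (1-u)*v*y3
            out[r, c] = image[int(round(y)), int(round(x))]  # nearest sample
    return out

# demo: an axis-aligned quadrilateral reproduces the image unchanged
img = np.arange(100, dtype=float).reshape(10, 10)
flat = bilinear_warp(img, [(0, 0), (9, 0), (9, 9), (0, 9)], 10, 10)
```

After rectification, the Data Matrix modules lie on a regular grid, so thresholding each cell recovers the codeword bits.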

Developing Novel Algorithms to Reduce the Data Requirements of the Capture Matrix for a Wind Turbine Certification (풍력 발전기 평가를 위한 수집 행렬 데이터 절감 알고리즘 개발)

  • Lee, Jehyun;Choi, Jungchul
    • New & Renewable Energy
    • /
    • v.16 no.1
    • /
    • pp.15-24
    • /
    • 2020
  • For mechanical load testing of wind turbines, a capture matrix is constructed over a range of wind speeds according to the international standard IEC 61400-13. The conventional method wastes a considerable amount of data through its invalid-data policy: the data are segmented into 10-minute blocks and invalid blocks are discarded. We previously suggested an alternative way to reduce the total amount of data needed to build a capture matrix, but the efficient selection of data remained an open question. This paper introduces optimization algorithms that construct the capture matrix with less data. Heuristic algorithms (simple stacking and lowest frequency first), a population method (particle swarm optimization), and Q-learning with epsilon-greedy exploration are compared. All algorithms perform better than the conventional approach, although the degree of improvement varies widely. The best performance was achieved by the lowest-frequency-first heuristic, followed closely by particle swarm optimization: approximately 28% data reduction on average and more than 40% at maximum. Unexpectedly, the worst performance was achieved by Q-learning, which had seemed a promising candidate at the outset. This study is helpful not only for wind turbine evaluation, particularly from the viewpoint of cost, but also for understanding the nature of wind speed data.
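As a toy model of the baseline the paper improves on, the sketch below counts how many arriving 10-minute segments (each labelled with its wind-speed bin) the simple stacking approach must consume before every bin's quota is met. The data model and names are illustrative assumptions; the paper's algorithms reduce this count by selecting and segmenting the data more cleverly.

```python
from collections import Counter

def segments_needed(stream, required):
    """Simple-stacking baseline: consume labelled segments in arrival
    order and report how many are needed before every wind-speed bin
    of the capture matrix reaches its quota. Segments landing in
    already-full bins are wasted data."""
    counts = Counter()
    for n, bin_label in enumerate(stream, start=1):
        counts[bin_label] += 1
        if all(counts[b] >= q for b, q in required.items()):
            return n
    return None  # the stream never completes the capture matrix

# demo: three segments suffice to fill two 'low' slots and one 'high'
stream = ['low', 'low', 'high', 'low', 'high']
required = {'low': 2, 'high': 1}
n = segments_needed(stream, required)
```

Rare bins (typically high wind speeds) dominate this count, which is why a lowest-frequency-first policy that prioritizes scarce bins reduces the total data required.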