• Title/Abstract/Keyword: Rank algorithm

Search results: 283

ABS ALGORITHM FOR SOLVING A CLASS OF LINEAR DIOPHANTINE INEQUALITIES AND INTEGER LP PROBLEMS

  • Gao, Cheng-Zhi;Dong, Yu-Lin
    • Journal of applied mathematics & informatics
    • /
    • v.26 no.1_2
    • /
    • pp.349-353
    • /
    • 2008
  • Using the recently developed ABS algorithm for solving linear Diophantine equations, we introduce an algorithm for solving a system of m linear integer inequalities in n variables, m $\leq$ n, with a full-rank coefficient matrix. We apply this result to solve linear integer programming problems with m $\leq$ n inequalities.

  • PDF
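
The ABS-based approach above starts from a system of integer inequalities; a standard way to connect it to linear Diophantine equations (stated here only for context, not as the paper's derivation) is to introduce one non-negative integer slack variable per inequality:

$Ax \leq b,\ x \in \mathbb{Z}^n \quad\Longleftrightarrow\quad Ax + s = b,\ x \in \mathbb{Z}^n,\ s \in \mathbb{Z}^m,\ s \geq 0,$

where $A \in \mathbb{Z}^{m \times n}$, m $\leq$ n, is the full-rank coefficient matrix from the abstract.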

Dynamic Rank Subsetting with Data Compression

  • Hong, Seokin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.4
    • /
    • pp.1-9
    • /
    • 2020
  • In this paper, we propose the Dynamic Rank Subsetting (DRAS) technique, which enhances the energy efficiency and performance of the memory system through data compression. The goal of this technique is to enable partial chip access by storing data in a compressed format within a subset of the DRAM chips. To this end, a memory rank is dynamically configured as two independent sub-ranks. When a data block is written, it is compressed with a data compression algorithm and stored in one of the two sub-ranks. To service a memory request for compressed data, only one sub-rank is accessed, whereas for a memory request for uncompressed data, both sub-ranks are accessed as in conventional memory systems. Since the DRAS technique requires minimal hardware modification, it can be used in conventional memory systems with low hardware overhead. Through experimental evaluation with a memory simulator, we show that the proposed technique improves the performance of the memory system by 12% on average and reduces its power consumption by 24% on average.
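
The core DRAS decision described above (serve a block from one sub-rank if it compresses, from both otherwise) can be illustrated with a minimal simulator sketch. The class name, the zlib compressor, and the 2:1 size test below are assumptions for illustration, not the paper's hardware design:

```python
# Hypothetical sketch of the DRAS access decision: a block that compresses to
# at most half the rank width is stored in, and later served from, a single
# sub-rank; otherwise both sub-ranks are accessed as in a conventional system.
import zlib

BLOCK_BYTES = 64          # cache-line-sized block (assumption)
SUB_RANK_BYTES = 32       # half of the rank interface (assumption)

class DrasRank:
    def __init__(self):
        self.store = {}   # address -> (is_compressed, payload)

    def write(self, addr: int, block: bytes) -> None:
        assert len(block) == BLOCK_BYTES
        compressed = zlib.compress(block)
        if len(compressed) <= SUB_RANK_BYTES:
            # fits in one sub-rank: partial chip access on later reads
            self.store[addr] = (True, compressed)
        else:
            self.store[addr] = (False, block)

    def read(self, addr: int) -> tuple[bytes, int]:
        is_compressed, payload = self.store[addr]
        sub_ranks_accessed = 1 if is_compressed else 2
        data = zlib.decompress(payload) if is_compressed else payload
        return data, sub_ranks_accessed

rank = DrasRank()
rank.write(0x40, bytes(64))   # highly compressible block
print(rank.read(0x40)[1])     # -> 1 sub-rank accessed
```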

AN EFFICIENT ALGORITHM FOR SLIDING WINDOW BASED INCREMENTAL PRINCIPAL COMPONENTS ANALYSIS

  • Lee, Geunseop
    • Journal of the Korean Mathematical Society
    • /
    • v.57 no.2
    • /
    • pp.401-414
    • /
    • 2020
  • It is computationally expensive to compute principal components from scratch at every update or downdate when new data arrive and existing data are truncated from the data matrix frequently. To overcome this limitation, incremental principal component analysis is considered. Specifically, we present an efficient sliding-window incremental principal component computation from a covariance matrix, which comprises two procedures: a simultaneous update and downdate of the principal components, followed by a rank-one matrix update. Additionally, we accurately track the decomposition error and the adaptive numerical rank. Experiments show that the proposed algorithm achieves a faster execution speed with no meaningful difference in decomposition error compared to typical incremental principal component analysis algorithms, thereby maintaining a good approximation of the principal components.
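
For orientation, a naive sliding-window scheme with a rank-one covariance update/downdate is sketched below. It recomputes the eigendecomposition from scratch at every step, which is precisely the cost the paper's simultaneous update/downdate procedure avoids; the mean-free-data assumption is also an illustration-only simplification.

```python
# Naive sliding-window principal components via rank-one covariance
# update/downdate. Illustrative only: the paper updates the decomposition
# itself instead of re-running eigh at every step.
import numpy as np

rng = np.random.default_rng(0)
window = [rng.normal(size=5) for _ in range(20)]        # initial window
C = sum(np.outer(x, x) for x in window) / len(window)   # covariance (mean-free data assumed)

def slide(C, window, x_new):
    n = len(window)
    x_old = window.pop(0)
    window.append(x_new)
    # rank-one update for the arriving sample, rank-one downdate for the leaving one
    C = C + (np.outer(x_new, x_new) - np.outer(x_old, x_old)) / n
    eigvals, eigvecs = np.linalg.eigh(C)                 # principal components
    order = np.argsort(eigvals)[::-1]
    return C, eigvecs[:, order], eigvals[order]

C, pcs, variances = slide(C, window, rng.normal(size=5))
print(variances[:2])   # leading explained variances
```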

SMOOTH SINGULAR VALUE THRESHOLDING ALGORITHM FOR LOW-RANK MATRIX COMPLETION PROBLEM

  • Lee, Geunseop
    • Journal of the Korean Mathematical Society
    • /
    • v.61 no.3
    • /
    • pp.427-444
    • /
    • 2024
  • The matrix completion problem is to predict the missing entries of a data matrix using a low-rank approximation of the observed entries. Typical approaches to the matrix completion problem often rely on thresholding the singular values of the data matrix. However, these approaches have some limitations. In particular, a discontinuity is present near the thresholding value, and the thresholding value must be selected manually. To overcome these difficulties, we propose a shrinkage and thresholding function that thresholds the singular values smoothly to obtain a more accurate and robust estimate of the data matrix. Furthermore, the proposed function is differentiable, so the thresholding values can be calculated adaptively during the iterations using Stein's unbiased risk estimate. The experimental results demonstrate that the proposed algorithm yields a more accurate estimate with faster execution than other matrix completion algorithms in image inpainting problems.
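
As background, a generic iterative singular-value shrinkage loop for matrix completion is sketched below. The smooth shrinkage function used here (a sigmoid-weighted threshold) and the fixed threshold tau are placeholders: the paper's contribution is a specific differentiable thresholding function whose threshold is adapted with Stein's unbiased risk estimate, which is not reproduced here.

```python
# Matrix completion by iterative singular-value shrinkage.
# shrink() is an illustrative smooth thresholding function, not the paper's;
# the threshold tau is fixed instead of SURE-adapted.
import numpy as np

def shrink(sigma, tau, gamma=0.1):
    # smooth (differentiable) approximation of thresholding at tau
    return sigma / (1.0 + np.exp(-(sigma - tau) / gamma))

def complete(M, mask, tau=1.0, iters=200):
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(shrink(s, tau)) @ Vt
        X[mask] = M[mask]          # keep the observed entries fixed
    return X

rng = np.random.default_rng(1)
L = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))   # rank-2 ground truth
mask = rng.random(L.shape) < 0.6                           # 60% observed
print(np.linalg.norm(complete(L, mask) - L) / np.linalg.norm(L))
```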

Hybrid Document Summarization using a TextRank Algorithm and an Attentive Recurrent Neural Networks (TextRank 알고리즘과 주의 집중 순환 신경망을 이용한 하이브리드 문서 요약)

  • Jeong, Seok-won;Lee, Hyeon-gu;Kim, Harksoo
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.47-50
    • /
    • 2017
  • Document summarization generates a new, condensed document while preserving the topic of the input document. Summarization methods fall broadly into extractive and abstractive approaches. Extractive summaries may fail to represent the whole document adequately, and the selected sentences often lack coherence with one another. Recently, abstractive summarization based on recurrent neural network models has been actively studied, but such methods suffer from information loss when the input becomes long. To alleviate these drawbacks, this paper first selects the important sentences of the input document with extractive summarization and then observes the change in performance when they are used as the input to abstractive summarization. When the document was reduced to 30% of the original text by extractive summarization and a summary was then generated, the model achieved ROUGE-1 0.2802, ROUGE-2 0.1294, and ROUGE-L 0.3254.

  • PDF
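
The extractive stage described above is TextRank-style sentence ranking; a compact sketch of that stage is given below, with bag-of-words cosine similarity as an assumed sentence-similarity measure. Feeding the selected sentences into an attentive sequence-to-sequence summarizer (the paper's abstractive stage) is not shown.

```python
# TextRank-style extractive stage: rank sentences by PageRank over a
# sentence-similarity graph and keep the top 30% (the ratio used in the paper).
# The bag-of-words cosine similarity is an assumption for illustration.
import numpy as np

def textrank_extract(sentences, keep_ratio=0.3, damping=0.85, iters=50):
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = np.zeros((len(sentences), len(vocab)))
    for i, s in enumerate(sentences):
        for w in s.lower().split():
            vecs[i, index[w]] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    sim = (vecs / norms) @ (vecs / norms).T
    np.fill_diagonal(sim, 0.0)
    # column-normalize the similarity graph and run PageRank power iteration
    trans = sim / np.maximum(sim.sum(axis=0, keepdims=True), 1e-12)
    score = np.full(len(sentences), 1.0 / len(sentences))
    for _ in range(iters):
        score = (1 - damping) / len(sentences) + damping * trans @ score
    keep = max(1, int(round(keep_ratio * len(sentences))))
    chosen = sorted(np.argsort(score)[::-1][:keep])   # keep original order
    return [sentences[i] for i in chosen]

doc = ["The cat sat on the mat.", "A dog barked at the cat.",
       "Stock prices rose sharply today.", "The cat chased the dog."]
print(textrank_extract(doc))
```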

On the Separation of the Rank-1 Chvatal-Gomory Inequalities for the Fixed-Charge 0-1 Knapsack Problem (고정비용 0-1 배낭문제에 대한 크바탈-고모리 부등식의 분리문제에 관한 연구)

  • Park, Kyung-Chul;Lee, Kyung-Sik
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.36 no.2
    • /
    • pp.43-50
    • /
    • 2011
  • We consider the separation problem for the rank-1 Chvatal-Gomory (C-G) inequalities of the 0-1 knapsack problem whose knapsack capacity is defined by an additional binary variable, which we call the fixed-charge 0-1 knapsack problem. We analyze the structural properties of the optimal solutions to the separation problem and show that the separation problem can be solved in pseudo-polynomial time. Using this result, we also show the existence of a pseudo-polynomial time algorithm for the separation problem of the rank-1 C-G inequalities of the ordinary 0-1 knapsack problem.
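
As a reminder of the terminology above (this is the textbook definition, not the paper's contribution): for a knapsack relaxation $\{x \in [0,1]^n : ax \leq b\}$ with integer data and any multiplier $u \geq 0$, the rank-1 Chvatal-Gomory inequality

$\lfloor u a_1 \rfloor x_1 + \cdots + \lfloor u a_n \rfloor x_n \leq \lfloor u b \rfloor$

is valid for every integer feasible point, and the separation problem asks whether, for a given fractional point $x^*$, some choice of $u$ yields an inequality violated by $x^*$.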

Dynamic Compressed Representation of Texts with Rank/Select

  • Lee, Sun-Ho;Park, Kun-Soo
    • Journal of Computing Science and Engineering
    • /
    • v.3 no.1
    • /
    • pp.15-26
    • /
    • 2009
  • Given an n-length text T over a $\sigma$-size alphabet, we present a compressed representation of T that supports the retrieval queries rank/select/access and the update queries insert/delete. As a measure of compression, we use the empirical entropy H(T), which gives a lower bound of nH(T) bits for any algorithm that compresses T of n log $\sigma$ bits. Our representation takes this entropy bound of T, i.e., nH(T) $\leq$ n log $\sigma$ bits, plus additional space smaller than the text size, i.e., o(n log $\sigma$) + O(n) bits. In compressed space of nH(T) + o(n log $\sigma$) + O(n) bits, our representation supports O(log n) time queries for a log n-size alphabet, and its extension provides $O\left(\left(1+\frac{\log\sigma}{\log\log n}\right)\log n\right)$ time queries for a $\sigma$-size alphabet.
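
To make the rank/select terminology above concrete, a minimal uncompressed, static bitvector with rank and select is sketched here. The paper's structure is instead dynamic (supporting insert/delete) over a general alphabet and achieves entropy-compressed space; none of that machinery appears in this sketch.

```python
# Minimal static bitvector with rank/select, for illustration only.
# rank1(i): number of 1s in B[0..i); select1(k): index of the k-th 1 (1-based).
class BitVector:
    def __init__(self, bits):
        self.bits = list(bits)
        self.prefix = [0]
        for b in self.bits:
            self.prefix.append(self.prefix[-1] + b)   # O(n) preprocessing

    def rank1(self, i):
        return self.prefix[i]                         # O(1) per query

    def select1(self, k):
        # binary search over the prefix sums, O(log n) per query
        lo, hi = 0, len(self.bits)
        while lo < hi:
            mid = (lo + hi) // 2
            if self.prefix[mid + 1] < k:
                lo = mid + 1
            else:
                hi = mid
        return lo

bv = BitVector([1, 0, 1, 1, 0, 1])
print(bv.rank1(4), bv.select1(3))   # 3 ones in B[0..4), third 1 at index 3
```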

An Adaptive RLR L-Filter for Noise Reduction in Images (영상의 잡음 감소를 위한 적응 RLR L-필터)

  • Kim, Soo-Yang;Bae, Sung-Ha
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.1
    • /
    • pp.26-30
    • /
    • 2009
  • We propose an adaptive Recursive Least Rank (RLR) L-filter that uses an L-estimator from order statistics and is based on rank estimates from robust statistics. The proposed RLR L-filter is a non-linear adaptive filter using a non-linear adaptive algorithm, and it adapts itself toward the optimal filter, in the sense of the least dispersion measure of the errors, with a non-homogeneous step size. The filter may therefore be suitable for applications where the transmission channel is non-linear or corrupted by Gaussian or impulsive noise, or where the signal is non-stationary, as with image signals.

  • PDF
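
An L-filter, as referenced above, outputs a weighted combination of the order statistics of the samples in a sliding window; a minimal non-adaptive version with hand-picked weights is sketched below. The paper's filter additionally adapts these weights recursively with a rank-based least-dispersion criterion and a non-homogeneous step size, which is not shown.

```python
# Minimal (non-adaptive) L-filter: sort each window and take a weighted sum of
# the order statistics. With weights [0, 0, 1, 0, 0] this reduces to a median
# filter. The adaptive RLR weight update from the paper is not reproduced here.
import numpy as np

def l_filter(signal, weights):
    w = np.asarray(weights, dtype=float)
    k = len(w)                              # window length (odd)
    pad = k // 2
    padded = np.pad(np.asarray(signal, dtype=float), pad, mode='edge')
    out = np.empty(len(signal))
    for i in range(len(signal)):
        window = np.sort(padded[i:i + k])   # order statistics of the window
        out[i] = w @ window
    return out

noisy = [1.0, 1.1, 9.0, 0.9, 1.0, 1.2, -7.0, 1.1]      # impulsive noise spikes
print(l_filter(noisy, [0, 0, 1, 0, 0]))                 # median-like weights
```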

Robust Non-negative Matrix Factorization with β-Divergence for Speech Separation

  • Li, Yinan;Zhang, Xiongwei;Sun, Meng
    • ETRI Journal
    • /
    • v.39 no.1
    • /
    • pp.21-29
    • /
    • 2017
  • This paper addresses the problem of unsupervised speech separation based on robust non-negative matrix factorization (RNMF) with ${\beta}$-divergence, when neither speech nor noise training data is available beforehand. We propose a robust version of non-negative matrix factorization, inspired by the recently developed sparse and low-rank decomposition, in which the data matrix is decomposed into the sum of a low-rank matrix and a sparse matrix. Efficient multiplicative update rules to minimize the ${\beta}$-divergence-based cost function are derived. A convolutional extension of the proposed algorithm is also proposed, which considers the time dependency of the non-negative noise bases. Experimental speech separation results show that the proposed convolutional RNMF successfully separates the repeating time-varying spectral structures from the magnitude spectrum of the mixture, and does so without any prior training.
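
For background on the abstract above, the standard multiplicative updates for $\beta$-divergence NMF (V approximated by W H) are sketched below on a stand-in magnitude-spectrogram matrix. The paper's robust variant additionally decomposes the data into a low-rank part plus a sparse part and derives updates for that model, and its convolutional extension handles time dependency; neither is reproduced here.

```python
# Standard multiplicative updates for beta-divergence NMF (V ~ W @ H).
# Background sketch only: the paper's robust NMF adds a sparse outlier term
# and a convolutional extension, which are not shown.
import numpy as np

def beta_nmf(V, rank, beta=1.0, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(40, 100)))   # stand-in magnitude spectrum
W, H = beta_nmf(V, rank=5, beta=1.0)                           # beta = 1: KL divergence
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))           # relative reconstruction error
```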