• Title/Summary/Keyword: Algorithm save

Search Results: 368

Two variations of cross-distance selection algorithm in hybrid sufficient dimension reduction

  • Jae Keun Yoo
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.2
    • /
    • pp.179-189
    • /
    • 2023
  • Hybrid sufficient dimension reduction (SDR) methods, which combine the kernel matrices of two different SDR methods into a weighted mean (Ye and Weiss, 2003), require heavy computation and time consumption due to bootstrapping. To avoid this, Park et al. (2022) recently developed the so-called cross-distance selection (CDS) algorithm. In this paper, two variations of the original CDS algorithm are proposed, depending on how fully and equally covk-SAVE is treated in the selection procedure. In one variation, called the larger CDS algorithm, covk-SAVE is utilized equally and fairly alongside the other two candidates, SIR-SAVE and covk-DR, but a random selection is then necessary for the final choice. In the other variation, called the smaller CDS algorithm, only SIR-SAVE and covk-DR are utilized, with covk-SAVE ruled out completely. Numerical studies confirm that the original CDS algorithm is better than, or competes quite well with, the two proposed variations. A real data example is presented to compare and interpret the decisions made by the three CDS algorithms in practice.
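
For illustration only, here is a minimal numpy sketch of the weighted-kernel idea behind hybrid SDR that the CDS algorithm selects among: SIR and SAVE candidate kernels are formed on standardized predictors, mixed with a weight, and the leading eigenvectors give the estimated directions. The function names, slicing scheme, and fixed mixing weight are our illustrative assumptions; the CDS selection procedure itself and the covk-based candidates are not implemented here.

```python
import numpy as np

def standardize(X):
    """Center X and whiten it with the inverse Cholesky factor of its covariance."""
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X, rowvar=False))
    return Xc @ np.linalg.inv(L).T, L

def sir_kernel(Z, y, n_slices=5):
    """SIR candidate kernel: weighted outer products of slice means of the standardized predictors."""
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((Z.shape[1], Z.shape[1]))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / len(y)) * np.outer(m, m)
    return M

def save_kernel(Z, y, n_slices=5):
    """SAVE candidate kernel: weighted squared deviations of within-slice covariances from identity."""
    p = Z.shape[1]
    slices = np.array_split(np.argsort(y), n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        D = np.eye(p) - np.cov(Z[idx], rowvar=False)
        M += (len(idx) / len(y)) * (D @ D)
    return M

def hybrid_directions(X, y, alpha=0.5, n_dir=1, n_slices=5):
    """Weighted mean of the SIR and SAVE kernels; leading eigenvectors, mapped back to the X scale."""
    Z, L = standardize(X)
    M = alpha * sir_kernel(Z, y, n_slices) + (1 - alpha) * save_kernel(Z, y, n_slices)
    _, vecs = np.linalg.eigh(M)
    B = vecs[:, ::-1][:, :n_dir]            # eigenvectors of the largest eigenvalues
    return np.linalg.inv(L).T @ B           # undo the whitening transform

# toy usage: one true direction x1, with both a mean and a variance effect
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))
y = X[:, 0] + 0.5 * X[:, 0] ** 2 + 0.2 * rng.standard_normal(500)
print(hybrid_directions(X, y).ravel())      # should load mostly on the first coordinate
```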

An Empirical Study on Dimension Reduction

  • Suh, Changhee;Lee, Hakbae
    • Journal of the Korean Data Analysis Society
    • /
    • v.20 no.6
    • /
    • pp.2733-2746
    • /
    • 2018
  • The two inverse regression estimation methods for the central subspace, SIR and SAVE, are computationally easy and widely used. However, SIR and SAVE may perform poorly in finite samples and need strong assumptions (linearity and/or constant covariance conditions) on the predictors. The two non-parametric estimation methods, MAVE and dMAVE, perform much better in finite samples than SIR and SAVE and impose no strong requirements on the predictors or the response variable. MAVE focuses on estimating the central mean subspace, whereas dMAVE estimates the central subspace. This paper explores and compares these four dimension reduction methods, reviewing the algorithm of each. An empirical study with simulated data shows that MAVE and dMAVE perform relatively better than SIR and SAVE across different models and different distributional assumptions on the predictors. However, a real data example with a binary response demonstrates that SAVE outperforms the other methods.
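
As a hedged illustration of the finite-sample point above, the short numpy sketch below simulates a model that is symmetric in the true direction: the slice means that drive SIR nearly vanish, while the within-slice variances that drive SAVE still separate the slices. The model, slice count, and sample size are arbitrary choices made here for illustration, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.standard_normal((n, p))
y = X[:, 0] ** 2 + 0.1 * rng.standard_normal(n)       # response symmetric in the true direction x1

slices = np.array_split(np.argsort(y), 5)             # slice the response into 5 equal groups

# SIR's signal: slice means of x1 (near zero, since E[x1 | y] = 0 by symmetry)
slice_means = [X[idx, 0].mean() for idx in slices]
# SAVE's signal: within-slice variances of x1 (clearly unequal across slices)
slice_vars = [X[idx, 0].var() for idx in slices]

print("slice means of x1:    ", np.round(slice_means, 3))
print("slice variances of x1:", np.round(slice_vars, 3))
```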

The Montgomery Multiplier Using Scalable Carry Save Adder (분할형 CSA를 이용한 Montgomery 곱셈기)

  • 하재철;문상재
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.10 no.3
    • /
    • pp.77-83
    • /
    • 2000
  • This paper presents a new modular multiplier for Montgomery multiplication using an iterative small carry save adder. The proposed multiplier is flexible and suitable for long-bit multiplication because it can be scaled according to the available design area and the required computing time. We describe the word-based Montgomery algorithm and the design architecture of the multiplier. Our analysis and simulation show that the proposed multiplier provides area/time tradeoffs in area-limited designs such as IC cards.
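
The word-based Montgomery recurrence that such a scalable multiplier implements in hardware can be sketched in a few lines of Python; the word size, the modulus value, and the final conditional subtraction below are standard textbook choices, not details taken from the paper, and the carry save adder hardware itself is not modelled.

```python
def montgomery_multiply(a, b, n, w=32):
    """Word-serial Montgomery multiplication: returns a*b*R^{-1} mod n, with R = 2^(w*s)."""
    s = -(-n.bit_length() // w)               # number of w-bit words needed to hold the modulus
    mask = (1 << w) - 1
    n_prime = (-pow(n, -1, 1 << w)) & mask    # n' = -n^{-1} mod 2^w (n must be odd)
    t = 0
    for i in range(s):
        a_i = (a >> (w * i)) & mask
        t = t + a_i * b                       # accumulate one word of the multiplier
        m = ((t & mask) * n_prime) & mask     # choose m so that (t + m*n) is divisible by 2^w
        t = (t + m * n) >> w                  # exact division by the word radix
    return t - n if t >= n else t

# usage: the result carries a factor R^{-1}, as in any Montgomery-domain computation
n = 0xD5BBB96D30086EC484EBA3D7F9CAEB07       # odd modulus (illustrative value)
a, b = 0x1234567890ABCDEF, 0xFEDCBA0987654321
R = 1 << (32 * (-(-n.bit_length() // 32)))
assert montgomery_multiply(a, b, n) == (a * b * pow(R, -1, n)) % n
```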

Frequency-Domain RLS Algorithm Based on the Block Processing Technique (블록 프로세싱 기법을 이용한 주파수 영역에서의 회귀 최소 자승 알고리듬)

  • 박부견;김동규;박원석
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.240-240
    • /
    • 2000
  • This paper presents two algorithms based on the concept of the frequency-domain adaptive filter (FDAF). First, the frequency-domain recursive least squares (FRLS) algorithm with the overlap-save filtering technique is introduced. It minimizes the sum of exponentially weighted squared errors in the frequency domain. To eliminate discrepancies between the linear convolution and the circular convolution, the overlap-save method is utilized. Second, a sliding method for the data blocks is studied to overcome the processing delay and computational load of the FRLS algorithm. The size of the extended data block is twice the filter tap length, and the block can be slid by an adjustable hopping index. By selecting the hopping index appropriately, we can trade off the convergence rate against the computational complexity. When the input signal is highly correlated and the target FIR filter is long, the FRLS algorithm based on the block processing technique performs well in both convergence rate and computational complexity.
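
To make the overlap-save step concrete, here is a minimal numpy sketch of block FFT filtering with a block twice the tap length, which is the fixed-filter operation that a frequency-domain adaptive filter wraps its weight updates around; the recursive least squares adaptation and the adjustable hopping index are not shown, and the function name is ours.

```python
import numpy as np

def overlap_save_filter(x, h):
    """Filter x with FIR taps h using overlap-save blocks of length 2*len(h)."""
    M = len(h)
    N = 2 * M                                   # extended block: twice the tap length
    H = np.fft.fft(h, N)                        # filter spectrum, zero-padded to the block size
    xp = np.concatenate([np.zeros(M), x])       # prepend M zeros so the first block has history
    y = np.zeros(len(x))
    for start in range(0, len(x), M):           # hop by M new samples per block
        block = xp[start:start + N]
        if len(block) < N:                      # zero-pad the final partial block
            block = np.concatenate([block, np.zeros(N - len(block))])
        Y = np.fft.ifft(np.fft.fft(block) * H).real
        keep = Y[M:]                            # discard the first M circularly-wrapped outputs
        stop = min(start + M, len(x))
        y[start:stop] = keep[:stop - start]
    return y

# usage: matches direct linear convolution (truncated to the input length)
rng = np.random.default_rng(0)
x, h = rng.standard_normal(1000), rng.standard_normal(32)
assert np.allclose(overlap_save_filter(x, h), np.convolve(x, h)[:len(x)])
```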

An accelerated Levenberg-Marquardt algorithm for feedforward network

  • Kwak, Young-Tae
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.5
    • /
    • pp.1027-1035
    • /
    • 2012
  • This paper proposes a new Levenberg-Marquardt algorithm that is accelerated by adjusting the Jacobian matrix and the quasi-Hessian matrix. The proposed method partitions the Jacobian matrix into block matrices and employs the inverse of a partitioned matrix to find the inverse of the quasi-Hessian matrix. This avoids expensive operations and saves memory when calculating the inverse of the quasi-Hessian matrix, shortening the training time needed for convergence. In tests on a large application, we were able to save about 20% of the training time compared with other algorithms.
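
A minimal sketch of the plain Levenberg-Marquardt loop that such methods start from may help; the damping schedule, the toy curve-fitting problem, and the forward-difference Jacobian are illustrative choices of ours, and the paper's acceleration (block-partitioning the Jacobian and inverting the quasi-Hessian through a partitioned-matrix identity) is not reproduced here.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, w0, iters=100, mu=1e-2):
    """Plain LM: w <- w - (J^T J + mu*I)^{-1} J^T r, with a simple damping schedule."""
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        r, J = residual(w), jacobian(w)
        H = J.T @ J                                   # quasi-Hessian (Gauss-Newton approximation)
        step = np.linalg.solve(H + mu * np.eye(len(w)), J.T @ r)
        w_new = w - step
        if np.sum(residual(w_new) ** 2) < np.sum(r ** 2):
            w, mu = w_new, mu * 0.5                   # accept: behave more like Gauss-Newton
        else:
            mu *= 2.0                                 # reject: behave more like gradient descent
    return w

# toy usage: fit y = a*exp(b*x) with a forward-difference Jacobian
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * np.exp(1.5 * x) + 0.01 * rng.standard_normal(50)
residual = lambda w: w[0] * np.exp(w[1] * x) - y
def jacobian(w, eps=1e-6):
    r0 = residual(w)
    return np.column_stack([(residual(w + eps * np.eye(2)[i]) - r0) / eps for i in range(2)])
print(levenberg_marquardt(residual, jacobian, [1.0, 1.0]))   # approaches [2.0, 1.5]
```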

Information Dispersal Algorithm and Proof of Ownership for Data Deduplication in Dispersed Storage Systems (분산 스토리지 시스템에서 데이터 중복제거를 위한 정보분산 알고리즘 및 소유권 증명 기법)

  • Shin, Youngjoo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.25 no.1
    • /
    • pp.155-164
    • /
    • 2015
  • An information dispersal algorithm guarantees high availability and confidentiality for data and is a useful solution for faulty and untrusted dispersed storage systems such as cloud storage. As the amount of data stored in storage systems increases, data deduplication, which saves IT resources, is now considered the most promising technology, so it is necessary to study information dispersal algorithms that support it. In this paper, we propose an information dispersal algorithm and a proof-of-ownership scheme for client-side data deduplication in dispersed storage systems. The proposed solutions save network bandwidth as well as storage space while giving robust security guarantees against untrusted storage servers and malicious clients.
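
A hedged sketch of the basic information dispersal idea (Rabin-style, over a small prime field) is shown below: the data is cut into k-symbol chunks and each of n storage servers receives one linear combination per chunk, so any k shares reconstruct the original. The prime p = 257, the Vandermonde encoding rows, and all function names are our illustrative choices; the paper's deduplication-aware dispersal and the proof-of-ownership protocol are not implemented here.

```python
P = 257  # prime just above the byte range; share symbols are ints in [0, 256], not raw bytes

def disperse(data, n, k):
    """Encode `data` (bytes) into n shares such that any k of them suffice to reconstruct it."""
    pad = (-len(data)) % k
    symbols = list(data) + [0] * pad
    shares = [[] for _ in range(n)]
    for j in range(0, len(symbols), k):
        chunk = symbols[j:j + k]                  # k coefficients of a degree-(k-1) polynomial
        for i in range(1, n + 1):                 # share i stores the polynomial evaluated at x = i
            shares[i - 1].append(sum(c * pow(i, e, P) for e, c in enumerate(chunk)) % P)
    return shares, len(data)

def _solve_mod_p(A, b):
    """Solve A x = b over GF(P) by Gauss-Jordan elimination (A assumed invertible)."""
    k = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(k):
        piv = next(r for r in range(col, k) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, P)
        M[col] = [v * inv % P for v in M[col]]
        for r in range(k):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(vr - f * vc) % P for vr, vc in zip(M[r], M[col])]
    return [M[r][k] for r in range(k)]

def reconstruct(ids, shares, k, length):
    """Recover the original bytes from any k shares; `ids` are their 1-based evaluation points."""
    A = [[pow(i, e, P) for e in range(k)] for i in ids]   # k x k Vandermonde submatrix
    out = []
    for j in range(len(shares[0])):
        out.extend(_solve_mod_p(A, [s[j] for s in shares]))
    return bytes(out[:length])

# usage: 5 shares, any 3 reconstruct the data
shares, length = disperse(b"dispersed storage demo", n=5, k=3)
print(reconstruct([2, 4, 5], [shares[1], shares[3], shares[4]], k=3, length=length))
```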

On Evaluation Algorithm for Hierarchical Structure of Attributes with Interaction Relationship (상호연관성을 지닌 계층구조형문제의 평가 알고리즘)

  • Lee C.Y.;Lee S.T.
    • Journal of Korean Port Research
    • /
    • v.7 no.1
    • /
    • pp.5-12
    • /
    • 1993
  • In complex decision making, such as for ill-defined systems, one of the main problems is how to treat the ambiguous aspects of the decision. As the complexity and ambiguity of the objective system grow, many evaluation attributes are needed for a rational decision, and the relationships among the attributes become complex and fuzzy. The fuzzy integral is very effective for evaluating a complex system with interaction between attributes, but reducing the evaluation effort spent grading the membership of the objects or alternatives remains a problem to be tackled, because the more objects there are to evaluate, the more the number of decisions to be made increases exponentially. Therefore, this paper aims to propose a new evaluation algorithm based on the fuzzy integral that can save the evaluator's effort in the decision-making process. The proposed algorithm proceeds as follows: first, compose the fuzzy measure by introducing AHP (Analytic Hierarchy Process) weights and a mutual interaction coefficient; second, generate the fuzzy measure values of the monotone family of sets needed to calculate the fuzzy integral. The effectiveness of the proposed algorithm is investigated through an example, and the sensitivity of the interaction coefficient is illustrated.
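
To make the evaluation step concrete, below is a small illustrative sketch, not the paper's algorithm: attribute importances (for example, AHP weights) are turned into a Sugeno lambda-fuzzy measure, whose lambda plays the role of an interaction coefficient, and an alternative is scored with the Choquet integral over the resulting monotone family of sets. All names and numbers are ours.

```python
import numpy as np

def solve_lambda(densities, tol=1e-12):
    """Nonzero root of prod(1 + lam*g_i) = 1 + lam, which normalizes the Sugeno lambda-measure."""
    g = list(densities.values())
    f = lambda lam: np.prod([1.0 + lam * gi for gi in g]) - (1.0 + lam)
    s = sum(g)
    if abs(s - 1.0) < 1e-9:
        return 0.0                                    # weights already sum to 1: additive measure
    lo, hi = (1e-9, 1e6) if s < 1.0 else (-1.0 + 1e-9, -1e-9)
    while hi - lo > tol:                              # bisection on the bracket containing the root
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def choquet(scores, densities, lam):
    """Choquet integral of attribute scores w.r.t. the lambda-measure built from the densities."""
    order = sorted(scores, key=scores.get, reverse=True)       # attributes by descending score
    vals = [scores[a] for a in order] + [0.0]
    g_cum, total = 0.0, 0.0
    for i, attr in enumerate(order):
        d = densities[attr]
        g_cum = d if i == 0 else g_cum + d + lam * g_cum * d   # measure of the growing prefix set
        total += (vals[i] - vals[i + 1]) * g_cum
    return total

# usage: three attributes with AHP-style importances that do not sum to one (hence interaction)
densities = {"cost": 0.5, "safety": 0.4, "service": 0.3}
lam = solve_lambda(densities)                                  # negative lambda: overlapping attributes
alternative = {"cost": 0.8, "safety": 0.6, "service": 0.9}
print(round(lam, 4), round(choquet(alternative, densities, lam), 4))
```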

FPGA Implementation of High Speed RSA Cryptosystem Using Radix-4 Modified Booth Algorithm and CSA (Radix-4 Modified Booth 알고리즘과 CSA를 이용한 고속 RSA 암호시스템의 FPGA 구현)

  • 박진영;서영호;김동욱
    • Proceedings of the IEEK Conference
    • /
    • 2001.06a
    • /
    • pp.337-340
    • /
    • 2001
  • This paper presents a new structure for an RSA cryptosystem using a modified Montgomery algorithm and a CSA (carry save adder) tree. The Montgomery algorithm is modified with radix-4 Booth recoding. By applying the radix-4 modified Booth algorithm and the CSA tree to modular multiplication, the number of clock cycles for a modular multiplication is reduced to (n+3)/2 and carry propagation is removed from the cell structure of the modular multiplier; that is, the connection efficiency of the full adders is enhanced.
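
The two building blocks named in the abstract can be sketched in software: radix-4 Booth recoding roughly halves the number of partial products, and a carry save adder accumulates them without propagating carries until one final addition. The sketch below multiplies plain integers; folding the accumulation into Montgomery modular reduction, as the paper's hardware does, is not shown, and the function names are ours.

```python
def booth_radix4_digits(b, nbits):
    """Recode the multiplier b into radix-4 Booth digits in {-2,-1,0,1,2} (least significant first)."""
    digits, prev = [], 0
    for i in range(0, nbits, 2):
        b0, b1 = (b >> i) & 1, (b >> (i + 1)) & 1
        digits.append(prev + b0 - 2 * b1)       # d_i = b_{2i-1} + b_{2i} - 2*b_{2i+1}
        prev = b1
    return digits                                # sums to b when the top bit of the window is zero

def carry_save_add(x, y, z):
    """One CSA level: (sum, carry) with sum + carry == x + y + z, no carry propagation yet."""
    return x ^ y ^ z, ((x & y) | (y & z) | (x & z)) << 1

def multiply_booth_csa(a, b):
    """Multiply via Booth-recoded partial products accumulated in carry-save form."""
    nbits = b.bit_length() + 1
    nbits += nbits % 2                           # even bit count with a zero top bit
    s, c = 0, 0
    for i, d in enumerate(booth_radix4_digits(b, nbits)):
        s, c = carry_save_add(s, c, (d * a) << (2 * i))   # one partial product per digit pair
    return s + c                                 # single carry-propagating addition at the end

# usage
a, b = 0x1234567890ABCDEF, 0xFEDCBA9876543210
assert multiply_booth_csa(a, b) == a * b
```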

Optimization for PSC Box Girder Bridges Using Design Sensitivity Analysis (설계 민감도 해석을 이용한 PSC 박스거더교의 최적설계)

  • 조선규;조효남;민대홍;이광민;김환기
    • Proceedings of the Korea Concrete Institute Conference
    • /
    • 2000.10a
    • /
    • pp.205-210
    • /
    • 2000
  • An optimum design algorithm for PSC box girder bridges using design sensitivity analysis is proposed in this paper. For efficiency, the proposed algorithm introduces approximate reanalysis techniques based on design sensitivity analysis; to further save numerical effort, an efficient reanalysis technique using approximated structural responses is proposed. The design sensitivity analysis of the structural responses is carried out by automatic differentiation (AD). The efficiency and robustness of the proposed algorithm, compared with a conventional algorithm, are successfully demonstrated in a numerical example.
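
Automatic differentiation of a structural response can be illustrated with a tiny forward-mode (dual-number) sketch. The toy beam deflection formula, the variable names, and the chosen design variable below are our illustrative assumptions standing in for the bridge analysis; the resulting derivative is the kind of design sensitivity that a first-order approximate reanalysis reuses.

```python
class Dual:
    """Forward-mode AD number: carries a value and its derivative w.r.t. one design variable."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)
    def __add__(self, other):
        o = self._wrap(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, other):
        o = self._wrap(other)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __truediv__(self, other):
        o = self._wrap(other)
        return Dual(self.val / o.val, (self.der * o.val - self.val * o.der) / o.val ** 2)
    def __rtruediv__(self, other):
        return self._wrap(other) / self
    def __pow__(self, n):                        # integer exponents are enough for this sketch
        return Dual(self.val ** n, n * self.val ** (n - 1) * self.der)

def midspan_deflection(h, L=10.0, b=0.5, E=30e6, w=20.0):
    """Toy response: simply supported beam under uniform load, deflection = 5wL^4 / (384*E*I)."""
    I = b * h ** 3 / 12.0
    return 5.0 * w * L ** 4 / (384.0 * E * I)

h = Dual(1.2, 1.0)                               # seed: differentiate w.r.t. the section depth h
resp = midspan_deflection(h)
print(resp.val, resp.der)                        # response and its design sensitivity d(defl)/dh
# approximate reanalysis near h = 1.2: deflection(h) ~ resp.val + resp.der * (h - 1.2)
```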

Cluster Based Clock Synchronization for Sensor Network

  • Rashid Mamun-Or;HONG Choong Seon
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07a
    • /
    • pp.415-417
    • /
    • 2005
  • Core operations (e.g. TDMA scheduling, synchronized sleep periods, data aggregation) of many protocols proposed for different layers of a sensor network require clock synchronization. Our paper combines dynamic clustering with a diffusion-based asynchronous averaging algorithm for clock synchronization in sensor networks. The proposed algorithm first takes advantage of dynamic clustering and then applies the asynchronous averaging algorithm for synchronization, reducing the number of rounds and operations required to converge and thereby saving significantly more energy than the plain diffusion-based asynchronous averaging algorithm.
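
A small simulation sketch of the two ingredients may help: plain diffusion-based asynchronous averaging repeatedly averages the clocks of randomly chosen neighbour pairs, while the cluster-based variant first collapses each cluster to its mean through the cluster head and then lets only the heads diffuse, so far fewer exchanges are needed. The topology, cluster assignment, and round counts below are illustrative assumptions, not the paper's protocol.

```python
import random

def diffusion_average(clocks, neighbors, rounds):
    """Asynchronous averaging: pick a random edge each round and average the two clock values."""
    clocks = dict(clocks)
    edges = [(u, v) for u in neighbors for v in neighbors[u] if u < v]
    for _ in range(rounds):
        u, v = random.choice(edges)
        clocks[u] = clocks[v] = 0.5 * (clocks[u] + clocks[v])
    return clocks

def cluster_then_diffuse(clocks, clusters, head_rounds):
    """Cluster-based variant: intra-cluster averaging via the head, then diffusion among heads only."""
    clocks = dict(clocks)
    head_clocks = {}
    for head, members in clusters.items():
        head_clocks[head] = sum(clocks[m] for m in members) / len(members)  # one aggregation per cluster
    heads = list(head_clocks)
    head_neighbors = {h: [g for g in heads if g != h] for h in heads}       # heads assumed connected
    head_clocks = diffusion_average(head_clocks, head_neighbors, head_rounds)
    for head, members in clusters.items():
        for m in members:
            clocks[m] = head_clocks[head]                                   # head broadcasts the result
    return clocks

# usage: 6 nodes with drifting clocks, grouped into two clusters
random.seed(1)
clocks = {"n1": 10.2, "n2": 9.7, "n3": 10.5, "n4": 9.9, "n5": 10.1, "n6": 9.6}
clusters = {"n1": ["n1", "n2", "n3"], "n4": ["n4", "n5", "n6"]}
print(cluster_then_diffuse(clocks, clusters, head_rounds=20))
```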
