• Title/Summary/Keyword: Benchmarks

Effects of Additional Constraints on Performance of Portfolio Selection Models with Incomplete Information : Case Study of Group Stocks in the Korean Stock Market (불완전 정보 하에서 추가적인 제약조건들이 포트폴리오 선정 모형의 성과에 미치는 영향 : 한국 주식시장의 그룹주 사례들을 중심으로)

  • Park, Kyungchan;Jung, Jongbin;Kim, Seongmoon
    • Korean Management Science Review / v.32 no.1 / pp.15-33 / 2015
  • Under complete information, introducing additional constraints to a portfolio has a negative impact on performance. However, real-life investments inevitably involve error-prone estimates, such as expected stock returns. In addition to the reality of incomplete data, the investments of most Korean domestic equity funds are regulated both externally by the government and internally, resulting in limits on the maximum allocation to single stocks and risk-free assets. This paper presents an investment framework that takes such real-life situations into account, based on a newly developed portfolio selection model considering realistic constraints under incomplete information. Additionally, we examined the effects of additional constraints on portfolio performance under incomplete information, taking the well-known Samsung and SK group stocks as performance benchmarks from the launch of each commercial fund (2005 and 2007, respectively) up to 2013. The empirical study shows that an investment model built under incomplete information with additional constraints outperformed both a model built without any constraints and the benchmarks in terms of rate of return, standard deviation of returns, and Sharpe ratio.
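The constrained mean-variance idea behind this entry can be sketched with a generic optimizer. This is a minimal illustration, not the paper's actual model: the returns, covariance, risk-aversion coefficient, and the 30% single-asset cap are made-up values standing in for the regulatory limits the abstract mentions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: estimated expected returns and covariance for 5 assets (illustrative)
rng = np.random.default_rng(0)
mu = rng.uniform(0.02, 0.10, 5)          # error-prone return estimates
A = rng.normal(size=(5, 5))
sigma = A @ A.T / 5 + np.eye(5) * 0.01   # positive-definite covariance estimate

risk_aversion = 3.0
cap = 0.30  # hypothetical regulatory-style cap on any single asset

def neg_utility(w):
    # Mean-variance utility: expected return minus a risk penalty
    return -(mu @ w - risk_aversion * w @ sigma @ w)

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(0.0, cap)] * 5                                # long-only, capped

res = minimize(neg_utility, np.full(5, 0.2), bounds=bounds, constraints=cons)
w = res.x
print(w.round(3), round(w.sum(), 3))
```

The cap constraint plays the role of the externally imposed allocation limits; under estimation error such constraints can act as a regularizer rather than a pure handicap.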

Application of Tracking Signal to the Markowitz Portfolio Selection Model to Improve Stock Selection Ability by Overcoming Estimation Error (추적 신호를 적용한 마코위츠 포트폴리오 선정 모형의 종목 선정 능력 향상에 관한 연구)

  • Kim, Younghyun;Kim, Hongseon;Kim, Seongmoon
    • Journal of the Korean Operations Research and Management Science Society / v.41 no.3 / pp.1-21 / 2016
  • The Markowitz portfolio selection model uses estimators to derive its input parameters. However, estimation errors in those parameters degrade portfolio performance, so the model cannot be reliably applied to real-world investments. To overcome this problem, we suggest an algorithm that excludes stocks with large estimation errors from the portfolio by applying a tracking signal to the Markowitz portfolio selection model. By calculating the tracking signal of each stock, we can monitor whether forecasts of the rate of return depart unexpectedly from realized outcomes; unreliable stocks are then removed. With this approach, portfolios comprise relatively reliable stocks with comparatively small estimation errors. To evaluate the proposed approach, a 10-year investment experiment was conducted using historical stock return data from six stock markets around the world. Performance was assessed against the Markowitz portfolio selection model with additional constraints and other benchmarks such as the minimum variance portfolio and the index of each stock market. Results showed that a portfolio using the proposed approach exhibited a better Sharpe ratio and rate of return than the other benchmarks.
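The filtering step can be illustrated with the textbook tracking-signal definition (cumulative forecast error divided by the mean absolute deviation). The data, the control limit of 4, and the keep/exclude rule below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tracking_signal(actual, forecast):
    """Cumulative forecast error divided by the mean absolute deviation (MAD)."""
    errors = actual - forecast
    mad = np.mean(np.abs(errors))
    return errors.sum() / mad if mad > 0 else 0.0

# Hypothetical monthly return forecasts vs. realized returns for two stocks
realized_a = np.array([0.02, 0.01, 0.03, 0.02, 0.01])
forecast_a = np.array([0.02, 0.02, 0.02, 0.02, 0.02])  # small, unbiased errors

realized_b = np.array([0.01, 0.00, -0.01, 0.00, 0.01])
forecast_b = np.array([0.04, 0.05, 0.04, 0.05, 0.04])  # persistently over-forecast

THRESHOLD = 4.0  # a common control limit; the paper may use a different cutoff
for name, (r, f) in {"A": (realized_a, forecast_a),
                     "B": (realized_b, forecast_b)}.items():
    ts = tracking_signal(r, f)
    print(f"stock {name}: TS={ts:+.2f}, {'keep' if abs(ts) <= THRESHOLD else 'exclude'}")
```

A persistently biased forecast (stock B) drives the cumulative error far from zero relative to its MAD, so the stock is flagged and dropped before optimization.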

A DEA-based Benchmarking Framework in terms of Organizational Context (조직 상황을 고려한 DEA 기반의 벤치마킹 프레임워크)

  • Seol, Hyeong-Ju;Lim, Sung-Mook;Park, Gwang-Man
    • Journal of Korean Society for Quality Management / v.37 no.1 / pp.1-9 / 2009
  • Data envelopment analysis (DEA) has proved to be a powerful tool for benchmarking and has been widely used in a variety of settings since its advent. DEA can identify the best-performing units to benchmark against as well as provide actionable measures for improving an organization's performance. However, the selection of performance benchmarks is a matter of technical production possibilities as well as organizational policy considerations, managerial preferences, and external restrictions. In that regard, DEA has limited value in benchmarking because it focuses only on technical production possibilities. This research proposes a new perspective on using DEA and a framework for benchmarking that selects benchmarks that are both feasible and desirable in terms of organizational context. To this end, the concepts of local and global efficiency are newly proposed. A case study illustrates the usefulness of the suggested concept and framework.
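As background for the DEA discussion, the standard input-oriented CCR model in envelopment form reduces to a linear program per decision-making unit (DMU). The sketch below uses toy data and the plain CCR formulation, not the paper's local/global-efficiency extension.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (envelopment form).
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Solve: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                  # decision vars: [theta, lam_1..lam_n]
    A_in = np.hstack([-X[:, [j0]], X])          # X @ lam - theta * x0 <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y @ lam <= -y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j0]])
    bounds = [(0, None)] * (1 + n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

# Toy data: 4 DMUs, 1 input, 1 output
X = np.array([[2.0, 4.0, 4.0, 8.0]])
Y = np.array([[2.0, 4.0, 2.0, 4.0]])
for j in range(4):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.2f}")
```

Inefficient units (here DMUs 2 and 3) are projected onto the frontier spanned by the efficient peers, which is exactly the benchmark set the paper argues should additionally respect organizational context.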

Performance Comparison between LLVM and GCC Compilers for the AE32000 Embedded Processor

  • Park, Chanhyun;Han, Miseon;Lee, Hokyoon;Cho, Myeongjin;Kim, Seon Wook
    • IEIE Transactions on Smart Processing and Computing / v.3 no.2 / pp.96-102 / 2014
  • The embedded processor market has grown rapidly and consistently with the appearance of mobile devices. In an embedded system, power consumption and execution time are important factors affecting performance. System performance is determined by both hardware and software: even on a high-end hardware architecture, software runs slowly if the generated code is of low quality. This study compared the performance of two major compilers, LLVM and GCC, on a 32-bit EISC embedded processor. The dynamic instruction counts and static code sizes produced by these compilers were evaluated with the EEMBC benchmarks. LLVM generally performed better in the ALU-intensive benchmarks, whereas GCC produced better register allocation and jump optimization. The dynamic instruction count and static code size of GCC were on average 8% and 7% lower than those of LLVM, respectively.

A Study on the Computation and Number-Sense Ability of Elementary School Students (초등학교 학생들의 계산 능력과 수감각(Number Sense) 연구)

  • Pang, Jeong-Suk
    • Journal of the Korean School Mathematics Society / v.8 no.4 / pp.423-444 / 2005
  • Despite the importance of number sense, elementary mathematics curricula have emphasized computational skills, and there is a lack of research on number sense. Against this background, this study analyzed how 137 sixth-grade students coped with routine computation problems and with problems requiring number sense. Students performed better on the computation tasks than on the number sense tasks. On the number sense tasks, many students tended to compute directly rather than use number sense appropriate to the given contexts. Students also had difficulty making use of effective benchmarks or applying their knowledge of number and operations to various problem contexts. An implication is that students should explore multiple tasks requiring number sense as an integral part of their mathematics learning in order to develop number sense.

Performance Evaluation of JavaScript Engines Using SunSpider Benchmarks (SunSpider 벤치마크를 통한 자바스크립트 엔진의 성능 평가)

  • Jung, Won-Ki;Lee, Seong-Won;Oh, Hyeong-Seok;Oh, Jin-Seok;Moon, Soo-Mook
    • Journal of KIISE: Computing Practices and Letters / v.16 no.6 / pp.722-726 / 2010
  • The recent deployment of RIAs (Rich Internet Applications) often involves complex JavaScript code, which has led to the announcement of high-performance JavaScript engines for its efficient execution. The SunSpider benchmark is widely used for evaluating the performance of these engines. In this paper, we compare the execution methods of three high-performance JavaScript engines (Mozilla TraceMonkey, Google V8, and Apple SquirrelFish Extreme) and measure their performance using the SunSpider benchmark. We also evaluate the pros and cons of each engine based on its execution method and the code characteristics of the SunSpider benchmarks.

Human Activity Recognition in Smart Homes Based on a Difference of Convex Programming Problem

  • Ghasemi, Vahid;Pouyan, Ali A.;Sharifi, Mohsen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.321-344 / 2017
  • Smart homes are the new generation of homes where pervasive computing is employed to make the lives of the residents more convenient. Human activity recognition (HAR) is a fundamental task in these environments. Since critical decisions will be made based on HAR results, accurate recognition of human activities with low uncertainty is of crucial importance. In this paper, a novel HAR method based on a difference of convex programming (DCP) problem is presented, which manages to handle uncertainty. For this purpose, given an input sensor data stream, a primary belief in each activity is calculated for the sensor events. Since the primary beliefs are calculated based on some abstractions, they naturally bear an amount of uncertainty. To mitigate the effect of the uncertainty, a DCP problem is defined and solved to yield secondary beliefs. In this procedure, the uncertainty stemming from a sensor event is alleviated by its neighboring sensor events in the input stream. The final activity inference is based on the secondary beliefs. The proposed method is evaluated using a well-known and publicly available dataset. It is compared to four HAR schemes based on temporal probabilistic graphical models, and to a convex optimization-based HAR procedure, as benchmarks. The proposed method outperforms the benchmarks, achieving an acceptable accuracy of 82.61% and an average F-measure of 82.3%.
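The paper's specific DCP formulation is not reproduced here, but the generic difference-of-convex algorithm (DCA) that such problems are typically solved with can be sketched on a one-dimensional toy problem: minimize f(x) = x^4 - 2x^2 by writing it as g - h with convex g(x) = x^4 and h(x) = 2x^2, linearizing h at each iterate, and solving the convex subproblem in closed form. All of this is an illustrative assumption, not the paper's model.

```python
import numpy as np

# f(x) = g(x) - h(x), g(x) = x**4, h(x) = 2*x**2, both convex.
# DCA step: replace h by its linearization at x_k and minimize
#   g(x) - h'(x_k) * x = x**4 - 4*x_k*x,
# whose first-order condition 4*x**3 = 4*x_k gives x = cbrt(x_k).
def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        x = np.cbrt(x)  # closed-form solution of the convex subproblem
    return x

print(dca(0.3))   # converges toward the stationary point x = 1
print(dca(-0.5))  # converges toward the stationary point x = -1
```

Each iteration solves a convex surrogate, so the objective is non-increasing; the iterates settle at a stationary point of the nonconvex f, which is the general behavior DCP-based methods rely on.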

Monte Carlo burnup and its uncertainty propagation analyses for VERA depletion benchmarks by McCARD

  • Park, Ho Jin;Lee, Dong Hyuk;Jeon, Byoung Kyu;Shim, Hyung Jin
    • Nuclear Engineering and Technology / v.50 no.7 / pp.1043-1050 / 2018
  • For an efficient Monte Carlo (MC) burnup analysis, an accurate high-order depletion scheme that considers the nonlinear flux variation within a coarse burnup-step interval is crucial, along with an accurate depletion equation solver. In the Seoul National University MC code McCARD, the high-order depletion schemes of the quadratic depletion method (QDM) and the linear extrapolation/quadratic interpolation (LEQI) method, as well as a depletion equation solver based on the Chebyshev rational approximation method (CRAM), have been newly implemented in addition to the existing constant extrapolation/backward extrapolation (CEBE) method using the matrix exponential method (MEM) solver with substeps. In this paper, the quadratic extrapolation/quadratic interpolation (QEQI) method is proposed as a new high-order depletion scheme. To examine the effectiveness of the newly implemented depletion modules in McCARD, four problems in the VERA depletion benchmarks are solved by CEBE/MEM, CEBE/CRAM, LEQI/MEM, QEQI/MEM, and QDM for gadolinium isotopes. The comparisons show that QEQI/MEM predicts $k_{\infty}$ most accurately among the test cases. In addition, statistical uncertainty propagation analyses for a VERA pin cell problem are conducted by the sensitivity and uncertainty method and the stochastic sampling method.
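The matrix exponential method mentioned above amounts to solving the linear depletion system dN/dt = AN as N(t) = exp(At) N(0). A toy two-nuclide chain (the rates are illustrative values, not reactor data) shows the idea:

```python
import numpy as np
from scipy.linalg import expm

# Bateman-type depletion: dN/dt = A @ N, solved as N(t) = expm(A*t) @ N(0).
lam1, lam2 = 0.05, 0.01          # illustrative decay/transmutation rates (1/s)
A = np.array([[-lam1,  0.0],
              [ lam1, -lam2]])   # parent decays into daughter, daughter decays away
N0 = np.array([1.0, 0.0])        # start with pure parent

t = 10.0
N = expm(A * t) @ N0
print(N)  # [parent, daughter] number densities at time t
```

For this 2x2 chain the Bateman solution is available in closed form, which makes it a convenient check; production codes apply the same exp(At) idea (via MEM substeps or CRAM) to burnup matrices with hundreds of nuclides, where the matrix is stiff and a naive exponential is inaccurate.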

Development and verification of PWR core transient coupling calculation software

  • Li, Zhigang;An, Ping;Zhao, Wenbo;Liu, Wei;He, Tao;Lu, Wei;Li, Qing
    • Nuclear Engineering and Technology / v.53 no.11 / pp.3653-3664 / 2021
  • In the PWR three-dimensional transient coupling calculation software CORCA-K, the nodal Green's function method and the diagonally implicit Runge-Kutta method are used to solve the spatiotemporal neutron dynamic diffusion equation, and a single-phase closed channel model and a one-dimensional cylindrical transient heat conduction model are used to calculate the coolant and fuel temperatures. The LMW, NEACRP, and PWR MOX/UO2 benchmarks and FangJiaShan (FJS) nuclear power plant (NPP) transient control rod movement cases are used to verify CORCA-K. The effects of burnup, effective fuel temperature, and ejection rate on the PWR control rod ejection process are analyzed. The conclusions are as follows: (1) the core relative power and fuel Doppler temperature are in good agreement with the results of the benchmarks and ADPRES, with deviations from the reference results within 3.0% for the LMW and NEACRP benchmarks; (2) the variation trend of the FJS NPP core transient parameters is consistent with the results of SMART and ADPRES, and the core relative power agrees better with SMART when the weighting coefficient is 0.7. Compared with SMART, the maximum deviation is -5.08% in the rod ejection condition and -5.09% in the control rod complex movement condition.

Exploiting Hardware Events to Reduce Energy Consumption of HPC Systems

  • Lee, Yongho;Kwon, Osang;Byeon, Kwangeun;Kim, Yongjun;Hong, Seokin
    • Journal of the Korea Society of Computer and Information / v.26 no.8 / pp.1-11 / 2021
  • This paper proposes a novel mechanism called Event-driven Uncore Frequency Scaler (eUFS) to improve the energy efficiency of HPC systems. eUFS exploits hardware events such as LAPI (Last-level cache Accesses Per Instruction) and CPI (Clock cycles Per Instruction) to dynamically adjust the uncore frequency. Hardware events are collected over a reference time period, and the target uncore frequency is determined from the collected events and the previous uncore frequency. Experiments with the NPB benchmarks demonstrate that eUFS reduces energy consumption by 6% on average for class C and D NPB benchmarks, while increasing execution time by only 2% on average.
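An event-driven uncore frequency policy of this kind can be sketched as a simple threshold rule. The thresholds, step size, and frequency range below are hypothetical placeholders, not values from the paper, and the real mechanism programs model-specific registers rather than a Python function.

```python
# Illustrative sketch of an event-driven uncore frequency policy in the
# spirit of eUFS. All constants are assumed, not taken from the paper.
F_MIN, F_MAX, STEP = 1.2, 2.4, 0.2   # assumed uncore frequency range (GHz)

def next_uncore_freq(freq, lapi, cpi, lapi_hi=0.05, cpi_hi=1.5):
    """Pick the next uncore frequency from hardware-event samples.
    High LLC-accesses-per-instruction or high CPI suggests a memory-bound
    phase, so raise the uncore (LLC/interconnect) frequency; otherwise
    lower it to save energy in compute-bound phases."""
    if lapi > lapi_hi or cpi > cpi_hi:   # memory-bound phase
        freq = min(F_MAX, freq + STEP)
    else:                                # compute-bound phase
        freq = max(F_MIN, freq - STEP)
    return round(freq, 1)

f = 2.0
for lapi, cpi in [(0.10, 2.0), (0.01, 0.8), (0.01, 0.9), (0.08, 1.8)]:
    f = next_uncore_freq(f, lapi, cpi)
    print(f)
```

The feedback on the previous frequency mirrors the abstract's description: each decision starts from the current setting and nudges it per sampling period rather than jumping to an absolute target.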