• Title/Summary/Keyword: Testing Environment (테스팅 환경)

A Test Case Prioritization Technique via Value-Based Approach (가치기반 접근법을 통한 테스트 케이스 우선순위 기법)

  • Park, Hyun-Cheol; Ryu, He-Yeon; Baik, Jong-Moon
    • Journal of KIISE: Software and Applications / v.36 no.5 / pp.353-360 / 2009
  • Software, once developed, has a long life and evolves through numerous additions and modifications because of faults, changes in user requirements, changes in environments, and so forth. As software evolves, assuring its quality becomes more difficult because of the numerous versions involved. Regression testing has been used to support software testing activities and to assure appropriate quality across several versions of software, but it is expensive: it requires many test case executions, and the number of test cases increases sharply as the software evolves. For this reason, several techniques have been suggested to help conduct regression testing, among which test case prioritization is understood to be the most effective and efficient. In this paper, we propose the Historical Value-Based Approach, which uses historical information to estimate the current cost and fault severity for cost-cognizant test case prioritization. With the proposed approach, software testers who perform regression testing can prioritize their test cases more effectively, improving their test effectiveness in terms of APFDc, the cost-cognizant average percentage of faults detected.
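
As a concrete illustration of the metric the abstract ends on, here is a minimal sketch of APFDc as it is usually defined in the prioritization literature, assuming plain Python lists for test costs, fault severities, and the index of the first test that reveals each fault (all names here are illustrative, not from the paper):

```python
def apfd_c(costs, severities, first_detector):
    """Cost-cognizant APFD (APFDc) as usually defined in the literature.

    costs          -- costs[j]: execution cost of the j-th test in the ordering
    severities     -- severities[i]: severity of fault i
    first_detector -- first_detector[i]: index of the first test in the
                      ordering that reveals fault i
    """
    total_cost = sum(costs)
    total_severity = sum(severities)
    numerator = 0.0
    for sev, tf in zip(severities, first_detector):
        # cost of the revealing test and everything after it,
        # with only half of the revealing test's own cost counted
        numerator += sev * (sum(costs[tf:]) - 0.5 * costs[tf])
    return numerator / (total_cost * total_severity)

costs = [2.0, 1.0, 3.0, 2.0]          # per-test execution costs, in run order
severities = [4.0, 1.0]               # two faults: one severe, one minor
print(apfd_c(costs, severities, [0, 1]))  # severe fault found early -> ~0.84
print(apfd_c(costs, severities, [3, 0]))  # severe fault found last  -> ~0.28
```

An ordering that reveals the severe fault with cheap tests early scores higher, which is exactly the behaviour a cost-cognizant prioritization aims to reward.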

A Study on Built-In Self Test for Boards with Multiple Scan Paths (다중 주사 경로 회로 기판을 위한 내장된 자체 테스트 기법의 연구)

  • Kim, Hyun-Jin; Shin, Jong-Chul; Yim, Yong-Tae; Kang, Sung-Ho
    • Journal of the Korean Institute of Telematics and Electronics C / v.36C no.2 / pp.14-25 / 1999
  • The IEEE standard 1149.1, which was proposed to increase the observability and controllability of I/O pins, makes board-level testing possible. In boundary-scan environments, many shift operations are required because of the serial nature of the scan path, which increases test application time and cost. To reduce test application time, a method based on parallel-operating multiple scan paths was proposed, but it requires additional I/O pins and internal wires. Moreover, it is difficult to keep such designs in conformity with IEEE standard 1149.1, since the standard does not support parallel data shifts on the scan paths. In this paper, a multiple scan path access algorithm that controls two scan paths simultaneously with one test bus is proposed. Based on the new algorithm, a new board-level BIST (built-in self-test) architecture with relatively small area overhead is developed. The new BIST architecture can reduce test application time, since it shifts the test patterns and test responses of two scan paths at a time, and it can also reduce the costs of test pattern generation and test response analysis.
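
To make the time saving concrete, here is a toy behavioural sketch, not the paper's bus protocol itself, comparing back-to-back serial shifting of two equal-length scan chains with feeding both chains on every cycle:

```python
# Toy model of two scan chains as bit lists; a shift inserts a new bit at the
# scan-in end and drops one at the scan-out end. This only illustrates the
# cycle-count argument, not the paper's access algorithm.

def serial_shift(chain_a, chain_b, pattern_a, pattern_b):
    """Load the two chains one after the other, as with a single serial path."""
    cycles = 0
    for bit in pattern_a:
        chain_a.insert(0, bit); chain_a.pop(); cycles += 1
    for bit in pattern_b:
        chain_b.insert(0, bit); chain_b.pop(); cycles += 1
    return cycles

def simultaneous_shift(chain_a, chain_b, pattern_a, pattern_b):
    """Feed both chains on every cycle (the effect the paper's scheme targets)."""
    cycles = 0
    for bit_a, bit_b in zip(pattern_a, pattern_b):
        chain_a.insert(0, bit_a); chain_a.pop()
        chain_b.insert(0, bit_b); chain_b.pop()
        cycles += 1
    return cycles

n = 8
print(serial_shift([0] * n, [0] * n, [1] * n, [0] * n))        # 16 cycles
print(simultaneous_shift([0] * n, [0] * n, [1] * n, [0] * n))  # 8 cycles
```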

Theory and Implementation of Dynamic Taint Analysis for Tracing Tainted Data of Programs (프로그램의 오염 정보 추적을 위한 동적 오염 분석의 이론 및 구현)

  • Lim, Hyun-Il
    • KIPS Transactions on Computer and Communication Systems / v.2 no.7 / pp.303-310 / 2013
  • As the role of software grows in computing environments, software security issues become more important. Dynamic taint analysis is a technique for tracing and managing tainted data that originates from unreliable sources during the execution of a program. It can be applied to software security verification as well as to understanding software behavior, testing for unexpected errors, and debugging. Previous research focused only on presenting the results of dynamic taint analysis and did not logically describe the propagation of tainted data or the analysis procedure, which made the procedure hard to understand and hard to apply to other analyses. In this paper, by describing the analysis procedure theoretically, we show logically how the propagation of tainted data can be traced, and we present a theoretical model for dynamic taint analysis. In addition, we verify the correctness of the proposed model by implementing an analyser and showing that the propagation of tainted data can be traced with the model. The proposed model can be applied to understanding how data flows are analysed in dynamic taint analysis, and can serve as base knowledge for designing and implementing analysis methods that apply it.
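
The introduction and propagation rules such a model formalizes can be shown in miniature. Below is a sketch with a toy value wrapper, assuming nothing about the paper's actual implementation: data read from an unreliable source starts out tainted, and any result computed from a tainted operand inherits the taint and its source labels.

```python
class Tainted:
    """Toy tainted-value wrapper for illustrating dynamic taint propagation."""

    def __init__(self, value, tainted=False, sources=frozenset()):
        self.value = value
        self.tainted = tainted
        self.sources = sources  # labels of the untrusted origins

    def __add__(self, other):
        o = other if isinstance(other, Tainted) else Tainted(other)
        # propagation rule: a result is tainted if any operand is tainted
        return Tainted(self.value + o.value,
                       self.tainted or o.tainted,
                       self.sources | o.sources)

def read_untrusted(label, raw):
    # introduction rule: data from an unreliable source is tainted at birth
    return Tainted(raw, tainted=True, sources=frozenset({label}))

x = read_untrusted("network", 40)
y = x + 2                      # taint flows through the addition
z = Tainted(10) + Tainted(5)   # untainted operands give an untainted result
print(y.value, y.tainted, set(y.sources))  # 42 True {'network'}
print(z.value, z.tainted)                  # 15 False
```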

Study on the Change of Relative Humidity in Subsea Pipeline According to Drying Method (건조 공법에 따른 해저 파이프라인 내부 상대습도 변화 특성 연구)

  • Yang, Seung Ho
    • Journal of the Korean Society of Marine Environment & Safety / v.28 no.2 / pp.406-413 / 2022
  • The subsea pipeline pre-commissioning stage consists of the following processes: Flooding, Venting, Hydrotesting, Dewatering, Drying, and N2 Purging. Among these, the drying and nitrogen purging processes are stipulated to reduce and maintain the relative humidity below the dew point, to prevent hydrate formation and the risk of gas explosion in the pipeline during operation. The purpose of this study is to develop an analysis method for the air drying and nitrogen purging processes during pre-commissioning of a subsea pipeline, and to evaluate its applicability by comparison with on-site measurements. An analysis method using Computational Fluid Dynamics (CFD) was introduced and applied to evaluate the relative humidity inside a subsea pipeline, and the analysis results were confirmed to be in good agreement with on-site measurements for the air drying and nitrogen purging of an offshore pipeline. If the developed air drying and nitrogen purging analysis method is used as a pre-engineering tool for the pre-commissioning of subsea pipelines, it is expected to significantly improve work productivity.
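
The CFD model itself cannot be reproduced from the abstract, but the humidity bookkeeping behind the acceptance criterion can be illustrated. A minimal sketch using the Magnus approximation to relate air temperature, dew point, and relative humidity; the 17.62/243.12 coefficient pair is a common choice, not taken from the paper:

```python
import math

A, B = 17.62, 243.12  # Magnus coefficients for water vapour over liquid water

def saturation_vapour_pressure(t_celsius):
    """Saturation vapour pressure in hPa via the Magnus approximation."""
    return 6.112 * math.exp(A * t_celsius / (B + t_celsius))

def relative_humidity(t_air, t_dew):
    """Relative humidity (%) of air at t_air (deg C) with dew point t_dew."""
    return 100.0 * saturation_vapour_pressure(t_dew) / saturation_vapour_pressure(t_air)

# Example: pipeline air at 20 deg C dried down to a -20 deg C dew point
print(round(relative_humidity(20.0, -20.0), 1))  # ~5.4 % relative humidity
```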

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon; Choi, HeungSik; Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on asset volatility; it avoids investment risk structurally, offers stability in managing large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible, which scales to billions of examples in limited-memory environments and trains much faster than traditional boosting methods. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict asset risk and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors degrade the optimized portfolio's performance. This study aims to improve the model's stability and portfolio performance by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model, narrowing the gap between theory and practice.

    For the empirical test of the suggested model, we used Korean stock market price data covering 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions with a moving-window method of 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and estimation error: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and estimation errors are reduced in 9 out of 10 industry sectors. The reduction in estimation errors increases the model's stability and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a recent algorithm. Various studies exist on parametric estimation methods for reducing estimation errors in portfolio optimization; here we suggest a new machine learning method for reducing them in an optimized asset allocation model. The study is thus meaningful in proposing an advanced artificial intelligence asset allocation model for fast-developing financial markets.
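
To show the mechanism in miniature, here is a sketch of the naive inverse-volatility form of risk parity, with forecast volatilities standing in for historical ones. The paper's pipeline (XGBoost training, covariance estimation with predicted risk, full rebalancing back-test) is far richer, and the numbers below are invented:

```python
import numpy as np

def inverse_volatility_weights(vols):
    """Naive risk parity: weight each asset by the inverse of its volatility."""
    inv = 1.0 / np.asarray(vols, dtype=float)
    return inv / inv.sum()

# Hypothetical annualized volatilities for three sectors
historical = [0.22, 0.15, 0.30]   # estimated from past returns alone
predicted  = [0.18, 0.16, 0.35]   # stand-in for next-period model forecasts

print(inverse_volatility_weights(historical))  # weights if the past repeats
print(inverse_volatility_weights(predicted))   # weights from the forecast
```

Swapping the forecast volatilities into the weighting step is the simplest version of the idea the abstract describes: the allocation reacts to where risk is expected to be next period, not only to where it has been.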