
A Comparative Study on Similarity Measure Techniques for Cross-Project Defect Prediction

  • Received : 2017.10.17
  • Accepted : 2018.02.05
  • Published : 2018.06.30

Abstract

Software defect prediction helps allocate valuable project resources effectively for software quality assurance activities by focusing effort on modules identified as fault-prone. When a company has collected sufficient historical data, Within-Project Defect Prediction (WPDP) can be used to predict fault-prone modules accurately. When a company does not maintain such historical data, it may instead build a classifier based on Cross-Project Defect Prediction (CPDP). Because CPDP builds a classifier from project data collected by other organizations, the main obstacle to an accurate classifier is that the distributions of the source and target projects differ. Since identifying effective similarity measure techniques is crucial to obtaining high CPDP performance, this paper compares various similarity measure techniques and evaluates the effectiveness of the similarity weights they produce. The results are verified using statistical significance tests and an effect size test. They show that the k-Nearest Neighbor (k-NN), LOcal Correlation Integral (LOCI), and Range methods are the top three performers, and that the predictive performance obtained with these three methods is comparable to that of WPDP.
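
To make the weighting idea concrete, the sketch below is a minimal illustration, not the paper's exact procedure: it assumes that a source instance's similarity weight is derived from its mean distance to its k nearest target instances, and the function name, inverse-distance weighting formula, and toy data are our own assumptions.

    # Minimal sketch of k-NN-based similarity weighting for CPDP;
    # the inverse-distance weighting formula is an assumption, not
    # necessarily the scheme evaluated in the paper.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.linear_model import LogisticRegression

    def knn_similarity_weights(X_source, X_target, k=10):
        # Index the (unlabeled) target-project instances.
        nn = NearestNeighbors(n_neighbors=k).fit(X_target)
        # Mean distance from each source instance to its k nearest
        # target instances; smaller distance means higher similarity.
        dist, _ = nn.kneighbors(X_source)
        return 1.0 / (1.0 + dist.mean(axis=1))

    # Toy demo: random "metrics" stand in for real project data.
    rng = np.random.default_rng(0)
    X_src = rng.normal(size=(200, 20))           # source-project metrics
    y_src = rng.integers(0, 2, 200)              # source defect labels
    X_tgt = rng.normal(loc=0.5, size=(100, 20))  # target-project metrics
    w = knn_similarity_weights(X_src, X_tgt)
    clf = LogisticRegression(max_iter=1000).fit(X_src, y_src, sample_weight=w)

Source instances resembling the target project then dominate training, which is the intuition behind nearest-neighbor-based CPDP approaches such as those in references 11 and 14 below.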

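For the verification step, the Vargha-Delaney A statistic cited in the reference list is straightforward to compute. The sketch below is a generic illustration, with hypothetical score lists standing in for per-run performance measures of two predictors.

    # Vargha-Delaney A12 effect size: the probability that a score
    # drawn from xs exceeds one drawn from ys, counting ties as half.
    # A12 = 0.5 means no difference; values near 0 or 1, a large effect.
    from itertools import product

    def a12(xs, ys):
        gt = sum(x > y for x, y in product(xs, ys))
        eq = sum(x == y for x, y in product(xs, ys))
        return (gt + 0.5 * eq) / (len(xs) * len(ys))

    # Hypothetical per-run scores of two predictors.
    print(a12([0.71, 0.68, 0.70], [0.69, 0.66, 0.67]))  # 0.888...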

References

  1. S. Kim, E. Whitehead, and Y. Zhang, "Classifying software changes: Clean or buggy?," IEEE Trans. Softw. Eng., Vol.34, No.2, pp.181-196, 2008. https://doi.org/10.1109/TSE.2007.70773
  2. K. O. Elish and M. O. Elish, "Predicting defect-prone software modules using support vector machines," J. Syst. Softw., Vol.81, No.5, pp.649-660, May 2008. https://doi.org/10.1016/j.jss.2007.07.040
  3. E. Arisholm, L. C. Briand, and E. B. Johannessen, "A systematic and comprehensive investigation of methods to build and evaluate fault prediction models," J. Syst. Softw., Vol.83, No.1, pp.2-17, Jan. 2010. https://doi.org/10.1016/j.jss.2009.06.055
  4. T. Hall, S. Beecham, D. Bowes, D. Gray, and S. Counsell, "A Systematic Literature Review on Fault Prediction Performance in Software Engineering," IEEE Trans. Softw. Eng., Vol.38, No.6, pp.1276-1304, Nov. 2012. https://doi.org/10.1109/TSE.2011.103
  5. T. Menzies, Z. Milton, B. Turhan, B. Cukic, Y. Jiang, and A. Bener, "Defect prediction from static code features: current results, limitations, new approaches," Autom. Softw. Eng., Vol.17, No.4, pp.375-407, May 2010. https://doi.org/10.1007/s10515-010-0069-5
  6. M. D'Ambros, M. Lanza, and R. Robbes, "Evaluating defect prediction approaches: A benchmark and an extensive comparison," Empir. Softw. Eng., Vol.17, No.4-5, pp.531-577, Aug. 2012. https://doi.org/10.1007/s10664-011-9173-9
  7. T. Zimmermann, N. Nagappan, H. Gall, E. Giger, and B. Murphy, "Cross-project defect prediction," in Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on The Foundations of Software Engineering, pp.91-100, 2009.
  8. Z. He, F. Shu, Y. Yang, M. Li, and Q. Wang, "An investigation on the feasibility of cross-project defect prediction," Autom. Softw. Eng., Vol.19, No.2, pp.167-199, Jul. 2011. https://doi.org/10.1007/s10515-011-0090-3
  9. Y. Ma, G. Luo, X. Zeng, and A. Chen, "Transfer learning for cross-company software defect prediction," Inf. Softw. Technol., Vol.54, No.3, pp.248-256, Mar. 2012. https://doi.org/10.1016/j.infsof.2011.09.007
  10. J. Nam, S. J. Pan, and S. Kim, "Transfer defect learning," in Proceedings of the 35th International Conference on Software Engineering, pp.382-391, 2013.
  11. D. Ryu, J. Jang, and J. Baik, "A Hybrid Instance Selection using Nearest-Neighbor for Cross-Project Defect Prediction," J. Comput. Sci. Technol., Vol.30, No.5, pp.969-980, 2015. https://doi.org/10.1007/s11390-015-1575-5
  12. G. Woodbury, "An Introduction to Statistics." Cengage Learning, 2001.
  13. D. Ryu, O. Choi, and J. Baik, "Value-cognitive boosting with a support vector machine for cross-project defect prediction," Empir. Softw. Eng., Vol.21, No.1, pp.43-71, Feb. 2016. https://doi.org/10.1007/s10664-014-9346-4
  14. B. Turhan, T. Menzies, A. B. Bener, and J. Di Stefano, "On the relative value of cross-company and within-company data for defect prediction," Empir. Softw. Eng., Vol.14, No.5, pp.540-578, Jan. 2009. https://doi.org/10.1007/s10664-008-9103-7
  15. P.-N. Tan, M. Steinbach, and V. Kumar, "Introduction to Data Mining." Addison-Wesley, 2006.
  16. T. Grbac, G. Mausa, and B. Basic, "Stability of Software Defect Prediction in Relation to Levels of Data Imbalance," in Proceedings of the 2nd Workshop of SQAMIA, 2013.
  17. N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, "SMOTE: Synthetic Minority Over-sampling Technique," J. Artif. Intell. Res., Vol.16, pp.321-357, 2002.
  18. C. C. Aggarwal, "Outlier Analysis." New York, NY: Springer New York, 2013.
  19. H.-P. Kriegel, M. Schubert, and A. Zimek, "Angle-based outlier detection in high-dimensional data," in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '08), pp.444-452, 2008.
  20. N. Altman, "An introduction to kernel and nearest-neighbor nonparametric regression," Am. Stat., Vol.46, No.3, pp.175-185, 1992.
  21. R. Hamming, "Error Detecting and Error Correcting Codes," Bell Syst. Tech. J., Vol.29, No.2, pp.147-160, 1950.
  22. B. Raman and T. R. Ioerger, "Enhancing Learning using Feature and Example Selection," Technical Report, Texas A&M University, College Station, TX, USA, 2003.
  23. E. Parzen, "On estimation of a probability density function and mode," Ann. Math. Stat., Vol.33, No.3, pp.1065-1076, 1962. https://doi.org/10.1214/aoms/1177704472
  24. M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander, "LOF: identifying density-based local outliers," in Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, pp.93-104, 2000.
  25. S. Papadimitriou, H. Kitagawa, P. B. Gibbons, and C. Faloutsos, "LOCI: Fast outlier detection using the local correlation integral," in Proceedings of the 19th International Conference on Data Engineering (ICDE), pp.315-326, 2003.
  26. S. Lloyd, "Least squares quantization in PCM," IEEE Trans. Inf. Theory, Vol.28, No.2, pp.129-137, 1982. https://doi.org/10.1109/TIT.1982.1056489
  27. I. T. Jolliffe, "Principal Component Analysis." Springer, 2002.
  28. T. Kohonen, "Self-organized formation of topologically correct feature maps," Biol. Cybern., Vol.43, No.1, pp.59-69, 1982. https://doi.org/10.1007/BF00337288
  29. C. M. Bishop, "Pattern recognition and machine learning." New York, New York, USA: Springer, 2006.
  30. B. Turhan, A. Tosun MIsirli, and A. Bener, "Empirical evaluation of the effects of mixed project data on learning defect predictors," Inf. Softw. Technol., Vol.55, No.6, pp.1101-1118, Jun. 2013. https://doi.org/10.1016/j.infsof.2012.10.003
  31. M. Jureczko and D. Spinellis, "Using Object-Oriented Design Metrics to Predict Software Defects," in Models and Methods of System Dependability. Oficyna Wydawnicza Politechniki Wroclawskiej, 2010, pp.69-81.
  32. T. Menzies et al., "The PROMISE Repository of empirical software engineering data," 2012. [Online]. Available: http://openscience.us/repo/.
  33. K. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft, "When is 'nearest neighbor' meaningful?," in Proceedings of the 7th International Conference on Database Theory (ICDT), Springer-Verlag, pp.217-235, 1999.
  34. S. Lessmann, B. Baesens, C. Mues, and S. Pietsch, "Benchmarking Classification Models for Software Defect Prediction: A Proposed Framework and Novel Findings," IEEE Trans. Softw. Eng., Vol.34, No.4, pp.485-496, 2008. https://doi.org/10.1109/TSE.2008.35
  35. M. Hall, E. Frank, and G. Holmes, "The WEKA data mining software: an update," ACM SIGKDD Explor. Newsl., Vol.11, No.1, pp.10-18, 2009. https://doi.org/10.1145/1656274.1656278
  36. P. C. Mahalanobis, "On the generalised distance in statistics," in Proceedings of the National Institute of Sciences of India, Vol.2, No.1, pp.49-55, 1936.
  37. T. Menzies, J. Greenwald, and A. Frank, "Data mining static code attributes to learn defect predictors," IEEE Trans. Softw. Eng., Vol.33, No.1, pp.2-13, 2007. https://doi.org/10.1109/TSE.2007.256941
  38. B. Turhan, A. Tosun, and A. Bener, "Empirical Evaluation of Mixed-Project Defect Prediction Models," in Proceedings of the 37th EUROMICRO Conference on Software Engineering and Advanced Applications, pp.396-403, 2011.
  39. Y. Kamei, S. Matsumoto, A. Monden, K. I. Matsumoto, B. Adams, and A. E. Hassan, "Revisiting common bug prediction findings using effort-aware models," in Proceedings of the IEEE International Conference on Software Maintenance (ICSM), 2010.
  40. S. Wang and X. Yao, "Using Class Imbalance Learning for Software Defect Prediction," IEEE Trans. Reliab., Vol.62, No.2, pp.434-443, Jun. 2013. https://doi.org/10.1109/TR.2013.2259203
  41. M. Friedman, "The use of ranks to avoid the assumption of normality implicit in the analysis of variance," J. Am. Stat. Assoc., Vol.32, pp.675-701, 1937.
  42. M. Friedman, "A comparison of alternative tests of significance for the problem of m rankings," Ann. Math. Stat., Vol.11, pp.86-92, 1940.
  43. J. Demsar, "Statistical comparisons of classifiers over multiple data sets," J. Mach. Learn. Res., Vol.7, pp.1-30, 2006.
  44. J. Tukey, "Comparing individual means in the analysis of variance," Biometrics, Vol.5, pp.99-114, 1949.
  45. P. Nemenyi, "Distribution-free Multiple Comparisons," Ph.D. dissertation, Princeton University, 1963.
  46. O. J. Dunn, "Multiple comparisons among means," J. Am. Stat. Assoc., Vol.56, pp.52-64, 1961.
  47. F. Wilcoxon, "Individual comparisons by ranking methods," Biometrics Bull., Vol.1, No.6, pp.80-83, 1945.
  48. A. Arcuri and L. Briand, "A practical guide for using statistical tests to assess randomized algorithms in software engineering," in 2011 33rd International Conference on Software Engineering (ICSE), pp.1-10, 2011.
  49. A. Vargha and H. D. Delaney, "A Critique and Improvement of the CL Common Language Effect Size Statistics of McGraw and Wong," J. Educ. Behav. Stat., Vol.25, No.2, pp.101-132, 2000. https://doi.org/10.3102/10769986025002101
  50. D. M. J. Tax, "DDtools, the Data Description Toolbox for Matlab," 2014.