Deep Learning Method for Identification and Selection of Relevant Features

  • Received : 2024.05.05
  • Published : 2024.05.30

Abstract

Feature selection has become a central topic of investigation, particularly in bioinformatics, where it has numerous applications. Deep learning is a powerful tool for selecting features, yet not all algorithms are on an equal footing when it comes to selecting relevant features. Indeed, many methods have been proposed for selecting features with deep learning techniques. Thanks to deep learning, neural networks have enjoyed a major revival over the past few years. However, neural networks are black-box models, and few efforts have been made to examine their underlying decision process. In this work, a new algorithm for performing feature selection with deep neural networks is introduced. To evaluate the results, regression and classification problems are constructed that allow each algorithm to be compared on several fronts: performance, computation time, and constraints. The results obtained are encouraging, since the stated goal is achieved by outperforming random forests in every case, and they show that the proposed method outperforms traditional methods.
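To make the comparison concrete, the sketch below illustrates one common heuristic for network-based feature selection, ranking inputs by the summed absolute weights of a trained network's first layer, benchmarked against random-forest importances on a synthetic classification task. This is a minimal, hedged illustration of the general approach the abstract describes, not the paper's exact algorithm; the dataset, model sizes, and the weight-magnitude scoring rule are all assumptions made for demonstration.

```python
# Minimal sketch (not the paper's exact algorithm): score input features
# by the magnitude of a trained network's first-layer weights, and compare
# the ranking with random-forest importances on a synthetic task.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic classification problem: 20 features, only 5 informative.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

# Neural-network route: train an MLP, then rank each input feature by
# the summed absolute weight it feeds into the first hidden layer.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X, y)
nn_scores = np.abs(mlp.coefs_[0]).sum(axis=1)   # shape: (n_features,)

# Baseline route: impurity-based importances from a random forest.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
rf_scores = rf.feature_importances_

# Report the top-5 features each method selects.
print("MLP top-5 features:", np.argsort(nn_scores)[::-1][:5])
print("RF  top-5 features:", np.argsort(rf_scores)[::-1][:5])
```

On a task like this, both rankings typically recover most of the informative features; the interesting comparisons, as the abstract notes, are performance, computation time, and the constraints each method imposes.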

Keywords
