|
[1] R. Moser, W. Pedrycz, and G. Succi, “A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction,” 2008 ACM/IEEE 30th International Conference on Software Engineering, pp. 181-190, May 2008. [2] T. Hall, S. Beecham, D. Bowes, D. Gray, and S. Counsell, “A systematic review of fault prediction performance in software engineering,” IEEE Transactions on Software Engineering, vol. 38, no. 6, pp. 1276-1304, Nov.-Dec. 2012. [3] T. Hall, S. Beecham, D. Bowes, D. Gray, and S. Counsell, “A Systematic Literature Review on Fault Prediction Performance in Software Engineering,” IEEE Transactions on Software Engineering, vol. 38, pp. 1276-1304, Nov.-Dec. 2012. [4] T. Menzies, Z. Milton, B. Turhan, B. Cukic, Y. Jiang, and A. Bener, “Defect prediction from static code features: current results, limitations, new approaches,” Automated Software Engineering, vol. 17, pp. 375-407, Dec. 2010. [5] H. H. Maurice, Elements of software science (operating and programming systems series): Elsevier Science Inc., 1977. [6] T. J. McCabe, “A complexity measure,” IEEE Transactions on Software Engineering, vol. SE-2, no. 4, pp. 308-320, Dec. 1976. [7] S. R. Chidamber, and C. F. Kemerer, “A metrics suite for object oriented design,” IEEE Transactions on software engineering, vol. 20, no. 6, pp. 476-493, June 1994. [8] R. Harrison, S. J. Counsell, and R. V. Nithi., “An evaluation of the mood set of object-oriented software metrics,” IEEE Transactions on Software Engineering, vol. 24, no. 6, pp. 491-496, June 1998. [9] T. Jiang, L. Tan, and S. Kim., “Personalized Defect Prediction,” 2013 28th IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 279-289, 2013. [10] D. Gray, D. Bowes, N. Davey, Y. Sun, and B. Christianson, “Using the support vector machine as a classification method for software defect prediction with static code metrics,” Communications in Computer and Information Science, vol. 43, pp. 223-234, Jan. 2009. [11] P. A. Habibi, V. Amrizal, and R. B. Bahaweres, “Cross-project defect prediction for web application using naive bayes (case study: Petstore web application),” 2018 International Workshop on Big Data and Information Security (IWBIS), pp. 13-18, 2018. [12] M. J. Siers, and M. Z. Islam, “Software defect prediction using a cost sensitive decision forest and voting, and a potential solution to the class imbalance problem,” Information Systems, vol. 51, pp. 62-71, 2015. [13] T. Shippey, D. Bowes, and T. Hall, “Automatically Identifying Code Features for Software Defect Prediction: Using AST N-grams,” Information and Software Technology, vol. 106, pp. 142-160, Oct. 2018. [14] T. Zimmermann, N. Nagappan, H. Gall, E. Giger, and B. Murphy., “Cross-project Defect Prediction: A Large Scale Experiment on Data vs. Domain vs. Process.,” In Proceedings of the the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on The Foundations of Software Engineering (ESEC/FSE), Amsterdam, pp. 91-100, Aug. 2009. [15] K. C. Louden, and K. A. Lambert, Programming languages: principles and practices. Cengage Learning, 3 ed., 2011. [16] O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, and G. Penn, “Applying Convolutional Neural Networks concepts to hybrid NN-HMM model for speech recognition,” 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4277-4280, 2012. [17] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS), Red Hook, NY, USA, vol. 1, pp. 1097-1105, 2012. [18] Q. Xuan, B. Fang, M. Yi Liu, J. W. IEEE, J. Zhang, Y. Zheng, and G. Bao, “Automatic Pearl Classification Machine Based on a Multistream Convolutional Neural Network,” IEEE Transactions on Industrial Electronics, vol. 65, no. 8, pp. 6538-6547, Aug. 2018. [19] Y. Liu, C. Yang, Z. Gao, and Y. Yao, “Ensemble deep kernel learning with application to quality prediction in industrial polymerization processes,” Chemometrics and Intelligent Laboratory Systems, vol. 174, pp. 15-21, Mar. 2018. [20] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015. [21] A. Mnih, and G. Hinton, “A scalable hierarchical distributed language model,” Proceedings of the 21st International Conference on Neural Information Processing Systems (NIPS), Red Hook, NY, USA, pp. 1081–1088, 2008. [22] Y.-H. Tu, J. Du, and C.-H. Lee, “Speech Enhancement Based on Teacher–Student Deep Learning Using Improved Speech Presence Probability for Noise-Robust Speech Recognition,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, pp. 2080-2091, Dec. 2019. [23] A. J. Keya, S. Afridi, A. S. Maria, S. S. Pinki, J. Ghosh, and M. F. Mridha, “Fake News Detection Based on Deep Learning,” 2021 International Conference on Science & Contemporary Technologies (ICSCT), pp. 1-6, Dec. 2021. [24] T.-Y. Yu, C.-Y. Huang, and N. C. Fang, “Use of Deep Learning Model with Attention Mechanism for Software Fault Prediction,” 2021 8th International Conference on Dependable Systems and Their Applications (DSA), pp. 161-171, Aug. 2021. [25] C.-Y. Huang, Arthur, C. Huang, M.-C. Yang, and W.-C. Su, “A Study of Applying Deep Learning-Based Weighted Combinations to Improve Defect Prediction Accuracy and Effectiveness,” 2019 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), pp. 1471-1475, Dec. 2019. [26] S. Wang, T. Liu, and L. Tan, “Automatically Learning Semantic Features for Defect Prediction,” 2016 IEEE/ACM 38th International Conference on Software Engineering (ICSE), New York, NY, USA, pp. 297-308, May. 2016. [27] J. Li, P. He, J. Zhu, and M. R. Lyu, “Software Defect Prediction via Convolutional Neural Network,” 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS), pp. 318-328, Jul. 2017. [28] J. Chen, K. Hu, Y. Yu, Z. Chen, Q. Xuan, Y. Liu, and V. Filkov, “Software Visualization and Deep Transfer Learning for Effective Software Defect Prediction,” Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering (ICSE), New York, NY, USA, pp. 578-589, Oct. 2020. [29] A. Krizhevsky, V. Nair, and G. Hinton., “Learning Multiple Layers of Features from Tiny Images,” University of Toronto, 2009. [30] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading Digits in Natural Images with Unsupervised Feature Learning,” NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. [31] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, Jun. 2009. [32] D. Rodriguez, I. Herraiz, R. Harrison, J. Dolado, and J. C. Riquelme, “Preliminary comparison of techniques for dealing with imbalance in software defect prediction,” Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering (EASE), New York, NY, USA, pp. 1-10, 2014. [33] K. He, X. Zhang, S. Ren, and J. Sun, “Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition,” Computer Vision (ECCV) 2014, pp. 346-361, Jun. 2014. [34] "PROMISE Repository," http://openscience.us/repo/defect/ (accessed Jul. 2022). [35] K. Pan, S. Kim, and J. E. James Whitehead, “Bug Classification Using Program Slicing Metrics,” 2006 Sixth IEEE International Workshop on Source Code Analysis and Manipulation, pp. 31-42, Sep. 2006. [36] T. Lee, J. Nam, D. Han, S. Kim, and H. Peter, “Micro Interaction Metrics for Defect Prediction,” Proceedings of the 19th ACM SIGSOFT symposium and the 13th European conference on Foundations of software engineering (ESEC/FSE '11), New York, NY, USA, pp. 311-321, 2011. [37] J. Nam, S. J. Pan, and S. Kim, “Transfer defect learning,” 2013 35th International Conference on Software Engineering (ICSE), pp. 382-391, May 2013. [38] Z. Marian, I.-G. Mircea, I.-G. Czibula, and G. Czibula, “A Novel Approach for Software Defect Prediction Using Fuzzy Decision Trees,” 2016 18th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), vol. 240-247, Sep. 2016. [39] S. Kim, E. James Whitehead, and Y. Zhang, “Classifying software changes: clean or buggy,” IEEE Transactions on Software Engineering, vol. 34, pp. 181-196, Mar.-Apr. 2008. [40] H. D. Tessema, and S. L. Abebe, “Enhancing Just-in-Time Defect Prediction Using Change Request-based Metrics,” 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), pp. 511-515, Mar. 2021. [41] X.-Y. Jing, S. Ying, Z.-W. Zhang, S.-S. Wu, and J. Liu, “Dictionary learning based software defect prediction,” Proceedings of the 36th International Conference on Software Engineering (ICSE), pp. 414-423, May 2014. [42] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient Estimation of Word Representations in Vector Space,” Proceedings of Workshop at ICLR, Jan. 2013. [43] Y. Yang, X. Xia, D. Lo, and J. Grundy, “A Survey on Deep Learning for Software Engineering,” ACM Computing Surveys, New York, NY, USA, Dec. 2021. [44] L. Pelayo, and S. Dick, “Applying Novel Resampling Strategies To Software Defect Prediction,” 2007 Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS), pp. 69-72, Jun. 2007. [45] R. A. Vivanco, Y. Kamei, A. Monden, K.-i. Matsumoto, and D. Jin, “Using Search-Based Metric Selection and Oversampling to Predict Fault Prone Modules,” 2010 Canadian Conference on Electrical and Computer Engineering (CCECE), pp. 1-6, May 2010. [46] S. Wang, and X. Yao, “Using Class Imbalance Learning for Software Defect Prediction,” IEEE Transactions on Reliability, vol. 62, no. 2, pp. 434-443, Jun. 2013. [47] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “SMOTE: Synthetic Minority Over-sampling Technique,” Journal of Artificial Intelligence Research, vol. 16, no. 1, pp. 321-357, Jun. 2002. [48] L. Mou, G. Li, Y. Liu, H. Peng, Z. Jin, Y. Xu, and L. Zhang, “Building Program Vector Representations for Deep Learning,” Knowledge Science, Engineering and Management (KSEM), Nov. 2015. [49] C. Thunes, “Javalang,” GitHub repository, https://github.com/c2nes/javalang (accessed Jul. 2022). [50] C. Shorten, and T. Khoshgoftaar, “A survey on Image Data Augmentation for Deep Learning,” Journal of Big Data, vol. 6, pp. 1-48, Jul. 2019. [51] T. Fawcett, “Introduction to ROC analysis,” Pattern Recognition Letters, vol. 27, pp. 861-874, Jun. 2006. [52] D. Chicco, and G. Jurman, “The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation,” BMC Genomics, vol. 21, Jan. 2020. [53] R. P. Espíndola, and N. Ebecken, “On extending F-measure and G-mean metrics to multi-class problems,” Sixth international conference on data mining, text mining and their business applications, vol. 35, pp. 25-34, Jan. 2005. [54] P. Runeson, and M. Höst, “Guidline for conducting and reporting case study research in software engineering,” Empirical Software Engineering, vol. 14, pp. 131-164, Dec. 2008. [55] "PyTorch," https://pytorch.org/ (accessed Jul. 2022). [56] "TensorFlow," https://www.tensorflow.org/ (accessed Jul. 2022). [57] "Lizard," http://www.lizard.ws/# (accessed Jul. 2022).
|