[1] R. Moser, W. Pedrycz, and G. Succi, “A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction,” in ICSE’08: Proc. of the International Conference on Software Engineering, 2008.
[2] X.-Y. Jing, S. Ying, Z.-W. Zhang, S.-S. Wu, and J. Liu, “Dictionary learning based software defect prediction,” in Proceedings of the 36th International Conference on Software Engineering, 2014, pp. 414-423. ACM.
[3] M. Tan, L. Tan, S. Dara, and C. Mayeux, “Online defect prediction for imbalanced data,” in ICSE’15: Proc. of the International Conference on Software Engineering, Volume 2, 2015.
[4] J. Nam, S. J. Pan, and S. Kim, “Transfer defect learning,” in ICSE’13: Proc. of the International Conference on Software Engineering, 2013.
[5] J. Nam, “Survey on software defect prediction,” Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Tech. Rep., 2014.
[6] V. Suma and T. R. G. K. Nair, “Effective Defect Prevention Approach in Software Process for Achieving Better Quality Levels,” World Academy of Science, Engineering and Technology (WASET), vol. 42, pp. 258-262, 2008.
M. R. Lyu et al., Handbook of Software Reliability Engineering, vol. 222. IEEE Computer Society Press, CA, 1996.
[7] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504-507, 2006.
[8] J. Li, P. He, J. Zhu, and M. R. Lyu, “Software Defect Prediction via Convolutional Neural Network,” in QRS’17: Proc. of the International Conference on Software Quality, Reliability and Security, 2017.
[9] H. K. Dam, T. Pham, S. W. Ng, T. Tran, J. Grundy, A. Ghose, T. Kim, and C.-J. Kim, “A Deep Tree-Based Model for Software Defect Prediction.” [Online]. Available: https://arxiv.org/abs/1802.00921. Accessed: Mar. 7, 2018.
[10] S. Wang, T. Liu, and L. Tan, “Automatically Learning Semantic Features for Defect Prediction,” in ICSE’16: Proceedings of the 38th International Conference on Software Engineering, pp. 297-308.
[11] H. K. Dam, T. Tran, T. Pham, S. W. Ng, J. Grundy, and A. Ghose, “Automatic feature learning for vulnerability prediction.” [Online]. Available: https://arxiv.org/abs/1708.02368. Accessed: Mar. 7, 2018.
[12] M. White, C. Vendome, M. Linares-Vásquez, and D. Poshyvanyk, “Toward deep learning software repositories,” in MSR’15, pp. 334-345.
[13] H. K. Dam, T. Tran, and T. Pham, “A Deep Language Model for Software Code.” [Online]. Available: https://arxiv.org/abs/1608.02715. Accessed: Mar. 7, 2018.
[14] H. Peng, L. Mou, G. Li, Y. Liu, L. Zhang, and Z. Jin, “Building program vector representations for deep learning,” in International Conference on Knowledge Science, Engineering and Management, 2015, pp. 547-553. Springer.
[15] A. Hindle, E. T. Barr, Z. Su, M. Gabel, and P. Devanbu, “On the naturalness of software,” in ICSE’12, pp. 837-847.
[16] C. J. Maddison and D. Tarlow, “Structured Generative Models of Natural Source Code.” [Online]. Available: https://arxiv.org/abs/1401.0514. Accessed: Mar. 7, 2018.
[17] J. Nam and S. Kim, “Heterogeneous defect prediction,” in FSE’15, pp. 508-519.
[18] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is All You Need,” in NIPS’17: Proc. of the Conference on Neural Information Processing Systems, 2017.
[19] M.-T. Luong, H. Pham, and C. D. Manning, “Effective Approaches to Attention-based Neural Machine Translation,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2015.
[20] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, “Show, Attend and Tell: Neural Image Caption Generation with Visual Attention.” [Online]. Available: https://arxiv.org/abs/1502.03044. Accessed: Mar. 7, 2018.
[21] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu, “Recurrent Models of Visual Attention,” in NIPS’14: Advances in Neural Information Processing Systems 27.
[22] Y. C. Huang, K. L. Peng, and C. Y. Huang, “A History-based Cost-Cognizant Test Case Prioritization Technique in Regression Testing,” Journal of Systems and Software, vol. 85, issue 3, pp. 626-637, March 2012.
[23] C. Y. Huang, C. S. Kuo, and S. P. Luan, “Evaluation of Bounded Generalized Pareto Model for the Analysis of Fault Distribution of Open Source Software,” IEEE Trans. on Reliability, vol. 63, no. 1, pp. 309-319, March 2014.
[24] S. P. Luan and C. Y. Huang, “An Improved Pareto Distribution for Modeling the Fault Data of Open Source Software,” Software Testing, Verification and Reliability, vol. 24, issue 6, pp. 416-437, Sept. 2014.
[25] M. H. Halstead, “Elements of software science (operating and programming systems series),” 1977.
[26] T. J. McCabe, “A complexity measure,” IEEE Transactions on Software Engineering, no. 4, pp. 308-320, 1976.
[27] S. R. Chidamber and C. F. Kemerer, “A metrics suite for object oriented design,” IEEE Transactions on Software Engineering, vol. 20, no. 6, 1994.
[28] M. H. Halstead, Elements of Software Science. Elsevier, North-Holland, 1975.
[29] T. Compton and C. Withrow, “Prediction and Control of Ada Software Defects,” Journal of Systems and Software, vol. 12, pp. 199-207, 1990.
[30] T. M. Khoshgoftaar, E. B. Allen, N. Goel, A. Nandi, and J. McMullan, “Detection of software modules with high debug code churn in a very large legacy system,” in Proceedings of ISSRE’96: 7th International Symposium on Software Reliability Engineering, 1996, pp. 364-371. IEEE.
[31] A. B. Binkley and S. R. Schach, “Validation of the coupling dependency metric as a predictor of run-time failures and maintenance measures,” in Proceedings of the 20th International Conference on Software Engineering, 1998, pp. 452-455. IEEE.
[32] A. E. Hassan, “Predicting faults using the complexity of code changes,” in ICSE’09, pp. 78-88.
[33] T. Lee, J. Nam, D. Han, S. Kim, and H. P. In, “Micro interaction metrics for defect prediction,” in FSE’11, pp. 311-321.
[34] R. Moser, W. Pedrycz, and G. Succi, “A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction,” in ICSE’08, pp. 181-190.
[35] T. T. Nguyen, T. N. Nguyen, and T. M. Phuong, “Topic-based defect prediction,” in ICSE’11, pp. 932-935.
[36] M. Allamanis, E. T. Barr, P. Devanbu, and C. Sutton, “A Survey of Machine Learning for Big Code and Naturalness,” ACM Computing Surveys (CSUR), vol. 51, no. 4, Article 81, September 2018.
[37] J. Wang, B. Shen, and Y. Chen, “Compressed C4.5 models for software defect prediction,” in QSIC’12, pp. 13-16.
[38] T. Khoshgoftaar and N. Seliya, “Tree-based software quality estimation models for fault prediction,” in Software Metrics’02, pp. 203-214.
[39] W. Tao and L. Wei-hua, “Naive Bayes software defect prediction model,” in CiSE’10, pp. 1-4.
[40] J. Nam, S. J. Pan, and S. Kim, “Transfer defect learning,” in ICSE’13: Proc. of the International Conference on Software Engineering, 2013.
[41] B. Turhan, T. Menzies, A. B. Bener, and J. Di Stefano, “On the relative value of cross-company and within-company data for defect prediction,” Empirical Software Engineering, vol. 14, no. 5, pp. 540-578, 2009.
[42] S. Watanabe, H. Kaiya, and K. Kaijiri, “Adapting a fault prediction model to allow inter language reuse,” in Proceedings of the 4th International Workshop on Predictor Models in Software Engineering, 2008, pp. 19-24.
[43] X. Yang, D. Lo, X. Xia, Y. Zhang, and J. Sun, “Deep learning for just-in-time defect prediction,” in QRS’15, pp. 17-26.
[44] X.-Y. Jing, S. Ying, Z.-W. Zhang, S.-S. Wu, and J. Liu, “Dictionary learning based software defect prediction,” in ICSE’14: Proc. of the International Conference on Software Engineering, 2014.
[45] I. H. Witten, E. Frank, M. A. Hall, and C. J. Pal, Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, 2016.
[46] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in NIPS, 2012.
[47] L. Xu, J. S. Ren, C. Liu, and J. Jia, “Deep convolutional neural network for image deconvolution,” in NIPS, 2014, pp. 1790-1798.
[48] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in NIPS, 2015.
[49] A. Graves, A.-r. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 2013, pp. 6645-6649. IEEE.
[50] K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” 2014.
[51] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur, “Recurrent neural network based language model,” in INTERSPEECH, 2010, pp. 1045-1048.
[52] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[53] S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Józefowicz, and S. Bengio, “Generating sentences from a continuous space,” in Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, 2016, pp. 10-21.
[54] W. Ling, E. Grefenstette, K. M. Hermann, T. Kočiský, A. Senior, F. Wang, and P. Blunsom, “Latent Predictor Networks for Code Generation,” in ACL’16: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Volume 1.
[55] S. Iyer, I. Konstas, A. Cheung, and L. Zettlemoyer, “Summarizing Source Code using a Neural Attention Model,” in ACL’16: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Volume 1.
[56] P. Yin and G. Neubig, “A Syntactic Neural Model for General-Purpose Code Generation,” in ACL’17: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Volume 1.
[57] K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation,” in EMNLP’14: Proc. of the Conference on Empirical Methods in Natural Language Processing, 2014.
[58] “JavaParser.” [Online]. Available: https://github.com/donnchadh/JavaParser/tree/master/JavaParser. Accessed: Mar. 7, 2018.
[59] R. Scandariato, J. Walden, A. Hovsepyan, and W. Joosen, “Predicting vulnerable software components via text mining,” IEEE Trans. Software Eng., vol. 40, no. 10, pp. 993-1006, 2014. [Online]. Available: http://dblp.uni-trier.de/db/journals/tse/tse40.html#ScandariatoWHJ14
[60] C. D. Manning and H. Schütze, Foundations of Statistical Natural Language Processing. MIT Press, 1999.
[61] M. Tan, L. Tan, S. Dara, and C. Mayeux, “Online defect prediction for imbalanced data,” in ICSE’15: Proc. of the International Conference on Software Engineering, Volume 2, 2015.
[62] “PROMISE dataset.” [Online]. Available: http://openscience.us/repo/defect/. Accessed: Mar. 7, 2018.
[63]
[64] T. Menzies, Z. Milton, B. Turhan, B. Cukic, Y. Jiang, and A. Bener, “Defect prediction from static code features: current results, limitations, new approaches,” Automated Software Engineering, vol. 17, no. 4, pp. 375-407, 2010.
[65] J. T. Townsend, “Theoretical analysis of an alphabetic confusion matrix,” Perception & Psychophysics, vol. 9, no. 1, pp. 40-50, 1971.
[66] F. Rahman and P. Devanbu, “Comparing static bug finders and statistical prediction,” in Proceedings of the 2014 International Conference on Software Engineering, ICSE’14, 2014.
[67] I. Jolliffe, Principal Component Analysis. Springer, 2011.
[68] P. Jalote, Software Project Management in Practice. Pearson Education, 2002.
[69] C. Ebert, R. Dumke, M. Bundschuh, and A. Schmietendorf, Best Practices in Software Measurement. Springer Verlag, 2004.