[1] 陳俊瑋, "Computing excited-state energies of the Schrödinger equation with a convolutional-recurrent neural network," Master's thesis, Institute of Computational and Modeling Science, National Tsing Hua University, 2020.
[2] https://en.wikipedia.org/wiki/Hermite_polynomials
[3] https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator
[4] https://zh.wikipedia.org/wiki/%E7%84%A1%E9%99%90%E6%B7%B1%E6%96%B9%E5%BD%A2%E9%98%B1
[5] T. Aboiyar, T. Luga, and B. V. Iyorter. Derivation of continuous linear multistep methods using Hermite polynomials as basis functions. American Journal of Applied Mathematics and Statistics, 3(6):220–225, 2015.
[6] Azam Asl and Michael L. Overton. Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions. Optimization Methods and Software, 35(2):223–242, 2020.
[7] Amir Beck. First-Order Methods in Optimization. SIAM, 2017.
[8] Adam Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71, 1996.
[9] Riccardo Borghi. The variational method in quantum mechanics: an elementary introduction. European Journal of Physics, 39(3):035410, 2018.
[10] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1–27, 2011.
[11] Yang Ding, Enkeleida Lushi, and Qingguo Li. Investigation of quasi-Newton methods for unconstrained optimization. International Journal of Computer Application, 29:48–58, 2010.
[12] Tim Dockhorn. A discussion on solving partial differential equations using neural networks. arXiv preprint arXiv:1904.07200, 2019.
[13] Siegfried Flügge. Practical Quantum Mechanics. Springer Science & Business Media, 2012.
[14] Philip E. Gill and Walter Murray. Quasi-Newton methods for unconstrained optimization. IMA Journal of Applied Mathematics, 9(1):91–108, 1972.
[15] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256. JMLR Workshop and Conference Proceedings, 2010.
[16] David J. Griffiths and Darrell F. Schroeter. Introduction to Quantum Mechanics. Cambridge University Press, 2018.
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[18] Jan Hermann, Zeno Schätzle, and Frank Noé. Deep-neural-network solution of the electronic Schrödinger equation. Nature Chemistry, 12(10):891–897, 2020.
[19] Jionghui Jiang, Xi'an Feng, Zhiwen Hu, Xiaodong Hu, Fen Liu, and Hui Huang. Medical image fusion using transfer learning and L-BFGS optimization algorithm. International Journal of Imaging Systems and Technology, 2021.
[20] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[21] Aristidis Likas and Andreas Stafylopatis. Training the random neural network using quasi-Newton methods. European Journal of Operational Research, 126(2):331–339, 2000.
[22] Weibo Liu, Zidong Wang, Xiaohui Liu, Nianyin Zeng, Yurong Liu, and Fuad E. Alsaadi. A survey of deep neural network architectures and their applications. Neurocomputing, 234:11–26, 2017.
[23] Lu Lu, Xuhui Meng, Zhiping Mao, and George Em Karniadakis. DeepXDE: A deep learning library for solving differential equations. SIAM Review, 63(1):208–228, 2021.
[24] Kyle Mills, Michael Spanner, and Isaac Tamblyn. Deep learning and the Schrödinger equation. Physical Review A, 96(4):042113, 2017.
[25] Nazri Mohd Nawi, Meghana R. Ransing, and Rajesh S. Ransing. An improved learning algorithm based on the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method for back propagation neural networks. In Sixth International Conference on Intelligent Systems Design and Applications, volume 1, pages 152–157. IEEE, 2006.
[26] Jorge Nocedal and Stephen Wright. Numerical Optimization. Springer Science & Business Media, 2006.
[27] A. Pavelka and A. Procházka. Algorithms for initialization of neural network weights. In Proceedings of the 12th Annual Conference MATLAB, pages 453–459, 2004.
[28] Dabal Pedamonti. Comparison of non-linear activation functions for deep neural networks on MNIST classification task. arXiv preprint arXiv:1804.02763, 2018.
[29] Tong Qin, Kailiang Wu, and Dongbin Xiu. Data driven governing equations approximation using deep neural networks. Journal of Computational Physics, 395:620–635, 2019.
[30] Vijay K. Rohatgi and A. K. Md. Ehsanes Saleh. An Introduction to Probability and Statistics. John Wiley & Sons, 2015.
[31] Sebastian Ruder. An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747, 2016.
[32] Youcef Saad. Numerical Methods for Large Eigenvalue Problems. Manchester University Press, 1992.
[33] Sagar Sharma and Simone Sharma. Activation functions in neural networks. Towards Data Science, 6(12):310–316, 2017.
[34] Hanif D. Sherali and Osman Ulular. Conjugate gradient methods using quasi-Newton updates with inexact line searches. Journal of Mathematical Analysis and Applications, 150(2):359–377, 1990.
[35] Zhen-Jun Shi and Jie Shen. A gradient-related algorithm with inexact line searches. Journal of Computational and Applied Mathematics, 170(2):349–370, 2004.
[36] Justin Sirignano and Konstantinos Spiliopoulos. DGM: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375:1339–1364, 2018.
[37] Murray R. Spiegel, John J. Schiller, R. Alu Srinivasan, and Mike LeVan. Probability and Statistics, volume 2. McGraw-Hill, 2001.
[38] Ingo Steinwart and Andreas Christmann. Support Vector Machines. Springer Science & Business Media, 2008.
[39] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, pages 1139–1147. PMLR, 2013.
[40] Georg Thimm and Emile Fiesler. Neural network initialization. In International Workshop on Artificial Neural Networks, pages 535–542. Springer, 1995.
[41] Nailong Wu. The Maximum Entropy Method, volume 32. Springer Science & Business Media, 2012.
[42] Hongchao Zhang and William W. Hager. A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization, 14(4):1043–1056, 2004.