Detailed Record

Author (Chinese): 陳勻廷
Author (English): Chen, Yun-Ting
Title (Chinese): 使用物理信息神經網路求解具有 PT 對稱勢的非線性薛丁格方程的波函數
Title (English): Using physics-informed neural networks to solve the wave function of the nonlinear Schrödinger equation with PT-symmetric potential
Advisor (Chinese): 陳人豪
Advisor (English): Chen, Jen-Hao
Committee members (Chinese): 劉晉良、陳仁純
Committee members (English): Liu, Jinn-Liang; Chen, Ren-Chuen
Degree: Master
Institution: National Tsing Hua University
Department: Institute of Computational and Modeling Science
Student ID: 109026506
Year of publication (ROC calendar): 111 (2022)
Academic year of graduation: 110
Language: English
Number of pages: 32
Keywords (Chinese): 物理信息神經網絡、薛丁格方程式、BFGS 算法、克蘭克-尼科爾森方法、PT 對稱勢
Keywords (English): physics-informed neural networks, Schrödinger equation, BFGS, Crank-Nicolson method, PT-symmetric potential
Abstract (Chinese): We study the nonlinear Schrödinger equation with a PT-symmetric potential, an important equation in materials science, physics, and quantum mechanics, and even in chemistry research. PT symmetry comprises parity symmetry and time-reversal symmetry: parity symmetry refers to reversing the signs of all spatial coordinates, and time-reversal symmetry refers to reversing the sign of the time variable. In this thesis we use physics-informed neural network (PINN) deep learning to find the wave functions of this system under two different PT-symmetric potentials, and we compare these results with those obtained by the Crank-Nicolson method. For the implementation we write and train the networks in Python with the TensorFlow framework, a mature, large-scale deep-learning library. As the optimizer we choose the most widely used quasi-Newton method, BFGS, which offers the convergence-speed advantage of a second-order optimizer at a lower computational complexity than Newton's method. We also study how different hyperparameters of the network, such as the number of layers and neurons, affect PINN deep learning for the nonlinear Schrödinger equation with a PT-symmetric potential.
Abstract (English): We study the nonlinear Schrödinger equation (NLSE) with a PT-symmetric potential, a classical equation in physics and quantum mechanics. PT symmetry is divided into parity symmetry and time-reversal symmetry: parity symmetry refers to the reversal of the sign of all coordinates, and time-reversal symmetry refers to the reversal of the sign of the time variable.
We apply physics-informed neural networks (PINNs) to find the wave functions of this system under two different PT-symmetric potentials, and we compare the results with those of the Crank-Nicolson method. We use Python and the TensorFlow framework to implement the deep neural networks. BFGS, one of the most widely used quasi-Newton methods, is chosen as the optimizer in the PINNs. We also investigate the influence of the number of layers and neurons on the learning ability of the PINN method in solving the NLSE with a PT-symmetric potential. The results indicate that the proposed PINNs are effective in solving the wave functions of the NLSE under the PT-symmetric potentials.
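To make the recipe in the abstract concrete, here is a minimal PINN sketch for one common form of the equation, i ψ_t + ψ_xx + [V(x) + iW(x)] ψ + |ψ|² ψ = 0. The sign convention, the harmonic-type potential V(x) = x², W(x) = 0.5x, the domain, the network width and depth, and the iteration budget are all illustrative assumptions, not the thesis's exact settings; only the overall approach (a tanh network with Xavier initialization, automatic differentiation for the PDE residual, and BFGS over the flattened weights) follows what the abstract describes.

import numpy as np
import tensorflow as tf
from scipy import optimize

tf.keras.backend.set_floatx("float64")

# Small tanh network mapping (t, x) to (u, v), the real and imaginary parts
# of psi; Glorot (Xavier) initialization matches the scheme named in the TOC.
model = tf.keras.Sequential(
    [tf.keras.Input(shape=(2,))]
    + [tf.keras.layers.Dense(20, activation="tanh",
                             kernel_initializer="glorot_normal")
       for _ in range(2)]
    + [tf.keras.layers.Dense(2)]
)

def potential(x):
    # Illustrative PT-symmetric potential: real part even, imaginary part odd.
    return x ** 2, 0.5 * x

def residual(t, x):
    # Residual of i*psi_t + psi_xx + (V + i*W)*psi + |psi|^2*psi = 0 (assumed
    # sign convention), split into real/imaginary parts via autodiff.
    with tf.GradientTape(persistent=True) as g2:
        g2.watch(x)
        with tf.GradientTape(persistent=True) as g1:
            g1.watch([t, x])
            uv = model(tf.concat([t, x], axis=1))
            u, v = uv[:, 0:1], uv[:, 1:2]
        u_t, v_t = g1.gradient(u, t), g1.gradient(v, t)
        u_x, v_x = g1.gradient(u, x), g1.gradient(v, x)
    u_xx, v_xx = g2.gradient(u_x, x), g2.gradient(v_x, x)
    V, W = potential(x)
    mod2 = u ** 2 + v ** 2
    f_re = -v_t + u_xx + V * u - W * v + mod2 * u
    f_im = u_t + v_xx + V * v + W * u + mod2 * v
    return f_re, f_im

# Random collocation points on an assumed domain (t, x) in [0, 1] x [-5, 5].
# Initial- and boundary-condition losses are omitted for brevity; they enter
# the total loss as additional mean-squared terms.
rng = np.random.default_rng(0)
t_c = tf.constant(rng.uniform(0.0, 1.0, (2000, 1)))
x_c = tf.constant(rng.uniform(-5.0, 5.0, (2000, 1)))

def loss_fn():
    f_re, f_im = residual(t_c, x_c)
    return tf.reduce_mean(f_re ** 2) + tf.reduce_mean(f_im ** 2)

# Quasi-Newton (BFGS) training over the flattened weights, via SciPy.
shapes = [tuple(w.shape) for w in model.trainable_variables]
sizes = [int(np.prod(s)) for s in shapes]

def assign_flat(theta):
    for w, part, s in zip(model.trainable_variables,
                          np.split(theta, np.cumsum(sizes)[:-1]), shapes):
        w.assign(part.reshape(s))

def value_and_grad(theta):
    assign_flat(theta)
    with tf.GradientTape() as tape:
        loss = loss_fn()
    grads = tape.gradient(loss, model.trainable_variables)
    return loss.numpy(), np.concatenate([g.numpy().ravel() for g in grads])

theta0 = np.concatenate([w.numpy().ravel() for w in model.trainable_variables])
res = optimize.minimize(value_and_grad, theta0, jac=True, method="BFGS",
                        options={"maxiter": 500})
assign_flat(res.x)

As a baseline for the comparison mentioned in the abstract, a compact Crank-Nicolson stepper for the same equation might look as follows; lagging the nonlinear term |ψ|² at the previous time step is a common simplification and is an assumption here, not necessarily the thesis's exact scheme.

import numpy as np

Nx, Nt, L, T = 256, 1000, 5.0, 1.0
x = np.linspace(-L, L, Nx)
dx, dt = x[1] - x[0], T / Nt
Vc = x ** 2 + 0.5j * x                     # V(x) + i*W(x), same assumption

# Three-point second-difference matrix with Dirichlet boundaries.
D2 = (np.diag(np.ones(Nx - 1), -1) - 2.0 * np.eye(Nx)
      + np.diag(np.ones(Nx - 1), 1)) / dx ** 2

psi = 1.0 / np.cosh(x) + 0.0j              # assumed soliton-shaped start
Id = np.eye(Nx)
for _ in range(Nt):
    # psi_t = i*(psi_xx + (V + i*W)*psi + |psi|^2*psi), with |psi|^2 lagged.
    H = D2 + np.diag(Vc + np.abs(psi) ** 2)
    psi = np.linalg.solve(Id - 0.5j * dt * H, (Id + 0.5j * dt * H) @ psi)

One would then compare |ψ| from the two solvers on a common (t, x) grid, which is the kind of accuracy comparison reported in Section 3.3.1.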
Table of Contents

Abstract
1 Introduction
2 Neural Network
2.1 Model
2.1.1 Physics-informed neural networks
2.1.2 The solutions of the NLSE with PT-symmetric harmonic potential
2.1.3 The solutions of the NLSE with PT-symmetric Scarf-II potential
2.2 Optimizer
2.2.1 Newton's method
2.2.2 Quasi-Newton method (BFGS)
2.3 Initialization
2.3.1 Xavier initialization
2.4 Finite-difference method
2.4.1 Forward difference
2.4.2 Backward difference
2.4.3 Crank-Nicolson method
3 Results
3.1 PT-symmetric harmonic potential
3.1.1 The focusing case with initial conditions from the exact soliton
3.1.2 The defocusing case with initial conditions from the exact soliton
3.1.3 The periodic initial condition case
3.2 PT-symmetric Scarf-II potential
3.2.1 The focusing case with initial conditions from the exact soliton
3.2.2 The defocusing case with initial conditions from the exact soliton
3.3 Comparison
3.3.1 Comparing the accuracy of PINNs and Crank-Nicolson
3.3.2 The influence of layers and neurons on the learning ability of the PINN
4 Conclusion
References