[1] Financial Supervisory Commission (金管會). Financial industry's investment in fintech development will exceed NT$10 billion this year. https://www.fsc.gov.tw/ch/home.jsp?id=96&parentpath=0,2&mcustomize=news_view.jsp&dataserno=201808210003&aplistdn=ou=news,ou=multisite,ou=chinese,ou=ap_root,o=fsc,c=tw&dtable=News.
[2] Vatsal H. Shah. Machine Learning Techniques for Stock Prediction. Foundations of Machine Learning, 2007.
[3] Leonardo dos Santos Pinheiro and Mark Dras. Stock market prediction with deep learning: A character-based neural language model for event-based trading. In Proceedings of the Australasian Language Technology Association Workshop 2017, pages 6–15, Brisbane, Australia, December 2017.
[4] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, November 1997.
[5] L. C. Jain and L. R. Medsker. Recurrent Neural Networks: Design and Applications. CRC Press, Inc., Boca Raton, FL, USA, 1st edition, 1999.
[6] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. A Bradford Book, Cambridge, MA, USA, 2018.
[7] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning, 2013.
[8] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017.
[9] Taiwan Futures Exchange (臺灣期貨交易所). https://www.taifex.com.tw/cht/index.
[10] Cboe Exchange, Inc. Cboe VIX white paper. https://www.cboe.com/micro/vix/vixwhite.pdf, 2019.
[11] W. Xia, H. Li, and B. Li. A control strategy of autonomous vehicles based on deep reinforcement learning. In 2016 9th International Symposium on Computational Intelligence and Design (ISCID), volume 2, pages 198–201, 2016.
[12] Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, and Tim Rocktäschel. A survey of reinforcement learning informed by natural language, 2019.
[13] S. Yun, J. Choi, Y. Yoo, K. Yun, and J. Y. Choi. Action-decision networks for visual tracking with deep reinforcement learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1349–1358, 2017.
[14] Hado van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning, 2015.
[15] Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling network architectures for deep reinforcement learning, 2015.
[16] Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, and Shane Legg. Noisy networks for exploration, 2017.
[17] Marc G. Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning, 2017.
[18] Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, and David Silver. Emergence of locomotion behaviours in rich environments, 2017.
[19] Sangyeon Kim and Myungjoo Kang. Financial series prediction using attention LSTM, 2019.
[20] Jiayu Qiu, Bin Wang, and Changjun Zhou. Forecasting stock prices with long-short term memory neural network based on attention mechanism. https://doi.org/10.1371/journal.pone.0227222, 2020.
[21] Tsung-Ren Lin. The performance of intraday trading in Taiwan stock index futures by employing three critical prices and counter daily potential: An application of MultiCharts programming. https://hdl.handle.net/11296/gc35v7, 2014.
[22] Yi-Ling Lai. Using reinforcement learning to establish Taiwan stock index future intra-day trading strategies. https://www.csie.ntu.edu.tw/~lyuu/theses/thesis_r96922117.pdf, 2009.
[23] Shao-Chun Yang. Using modified Rainbow for enhancing reinforcement learning for stock trading: NASDAQ's stocks as examples. https://hdl.handle.net/11296/dn3fkh, 2019.
[24] Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning, 2017.
[25] Hanchen Xu, Xiao Li, Xiangyu Zhang, and Junbo Zhang. Arbitrage of energy storage in electricity markets with deep reinforcement learning, 2019.
[26] Taiwan Economic Journal (台灣經濟新報). TEJ database. https://www.tej.com.tw/twsite/Default.aspx.
[27] Chien Yi Huang. Financial trading as a game: A deep reinforcement learning approach, 2018.
[28] Kei Ota. TF2RL. https://github.com/keiohta/tf2rl.
[29] John Benediktsson and Brian Cappello. TA-Lib. https://mrjbq7.github.io/ta-lib/doc_index.html.