[1] W. Huang, Y. Nakamori, and S.-Y. Wang, "Forecasting stock market movement direction with support vector machine," Computers & Operations Research, vol. 32, no. 10, pp. 2513–2522, 2005.
[2] S. Shen, H. Jiang, and T. Zhang, "Stock market forecasting using machine learning algorithms," Department of Electrical Engineering, Stanford University, Stanford, CA, pp. 1–5, 2012.
[3] G. Bontempi, S. Ben Taieb, and Y.-A. Le Borgne, "Machine learning strategies for time series forecasting," in Proc. European Business Intelligence Summer School. Springer, 2012, pp. 62–77.
[4] X. Gao, "Deep reinforcement learning for time series: playing idealized trading games," arXiv preprint arXiv:1803.03916, 2018.
[5] F. E. Tay and L. Cao, "Application of support vector machines in financial time series forecasting," Omega, vol. 29, no. 4, pp. 309–317, 2001.
[6] S. Selvin, R. Vinayakumar, E. A. Gopalakrishnan, V. K. Menon, and K. P. Soman, "Stock price prediction using LSTM, RNN and CNN-sliding window model," in 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2017, pp. 1643–1647.
[7] H. Yang, X.-Y. Liu, S. Zhong, and A. Walid, "Deep reinforcement learning for automated stock trading: An ensemble strategy," in Proceedings of the First ACM International Conference on AI in Finance, 2020, pp. 1–8.
[8] J. Brogaard et al., "High frequency trading and its impact on market quality," Northwestern University Kellogg School of Management Working Paper, vol. 66, 2010.
[9] A. Gerig, "High-frequency trading synchronizes prices in financial markets," arXiv preprint arXiv:1211.1919, 2012.
[10] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, "A brief survey of deep reinforcement learning," arXiv preprint arXiv:1708.05866, 2017.
[11] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra, "Planning and acting in partially observable stochastic domains," Artificial Intelligence, vol. 101, no. 1–2, pp. 99–134, 1998.
[12] Y. Li, "Deep reinforcement learning: An overview," arXiv preprint arXiv:1701.07274, 2017.
[13] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 2018.
[14] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.
[15] S. Kakade and J. Langford, "Approximately optimal approximate reinforcement learning," in Proc. 19th International Conference on Machine Learning, 2002.
[16] J. Ayala, M. García-Torres, J. L. V. Noguera, F. Gómez-Vela, and F. Divina, "Technical analysis strategy optimization using a machine learning approach in stock market indices," Knowledge-Based Systems, vol. 225, p. 107119, 2021.
[17] S. Gumparthi, "Relative strength index for developing effective trading strategies in constructing optimal portfolio," International Journal of Applied Engineering Research, vol. 12, no. 19, pp. 8926–8936, 2017.
[18] J. W. Wilder, New Concepts in Technical Trading Systems. Trend Research, 1978.
[19] F. Johnston, J. Boyland, M. Meadows, and E. Shale, "Some properties of a simple moving average when applied to forecasting a time series," Journal of the Operational Research Society, vol. 50, no. 12, pp. 1267–1271, 1999.
[20] M. Barlam and A. M. Prasad, "Evaluating stock performance using technical analysis: A case study of TCS Ltd.," IUP Journal of Accounting Research & Audit Practices, vol. 20, no. 1, pp. 7–14, 2021.
[21] W. F. Sharpe, "The Sharpe ratio," Streetwise – The Best of the Journal of Portfolio Management, pp. 169–185, 1998.
[22] F. A. Sortino and L. N. Price, "Performance measurement in a downside risk framework," The Journal of Investing, vol. 3, no. 3, pp. 59–64, 1994.
[23] P. Yu, J. S. Lee, I. Kulyatin, Z. Shi, and S. Dasgupta, "Model-based deep reinforcement learning for dynamic portfolio optimization," arXiv preprint arXiv:1901.08740, 2019.
[24] M. Kritzman and Y. Li, "Skulls, financial turbulence, and risk management," Financial Analysts Journal, vol. 66, no. 5, pp. 30–41, 2010.
[25] G. Lucarelli and M. Borrotti, "A deep reinforcement learning approach for automated cryptocurrency trading," in IFIP International Conference on Artificial Intelligence Applications and Innovations. Springer, 2019, pp. 247–258.
[26] T. Théate and D. Ernst, "An application of deep reinforcement learning to algorithmic trading," Expert Systems with Applications, vol. 173, p. 114632, 2021.
[27] J. Sadighian, "Extending deep reinforcement learning frameworks in cryptocurrency market making," arXiv preprint arXiv:2004.06985, 2020.
[28] J. E. Moody, M. Saffell, Y. Liao, and L. Wu, "Reinforcement learning for trading systems and portfolios," in KDD, 1998, pp. 279–283.
[29] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," arXiv preprint arXiv:1509.02971, 2015.
[30] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning et al., "IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures," in International Conference on Machine Learning. PMLR, 2018, pp. 1407–1416.
[31] K. Cobbe, O. Klimov, C. Hesse, T. Kim, and J. Schulman, "Quantifying generalization in reinforcement learning," in International Conference on Machine Learning. PMLR, 2019, pp. 1282–1289.
[32] A. Briola, J. Turiel, R. Marcaccioli, and T. Aste, "Deep reinforcement learning for active high frequency trading," arXiv preprint arXiv:2101.07107, 2021.
[33] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, "Trust region policy optimization," in International Conference on Machine Learning. PMLR, 2015, pp. 1889–1897.
[34] P. Oncharoen and P. Vateekul, "Deep learning using risk-reward function for stock market prediction," in Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence, 2018, pp. 556–561.