1. 林則孟 (2012). 生產計劃與管制 [Production Planning and Control]. 華泰出版社.
2. Arulkumaran, K., Deisenroth, M. P., Brundage, M., and Bharath, A. A. (2017). A brief survey of deep reinforcement learning. arXiv preprint arXiv:1708.05866.
3. Bertel, S., and Billaut, J.-C. (2004). A genetic algorithm for an industrial multiprocessor flow shop scheduling problem with recirculation. European Journal of Operational Research, 159(3), 651-662.
4. Brucker, P. (1999). Scheduling algorithms. Journal of the Operational Research Society, 50, 774.
5. Chen, C., Xia, B., Zhou, B.-h., and Xi, L. (2015). A reinforcement learning based approach for a multiple-load carrier scheduling problem. Journal of Intelligent Manufacturing, 26(6), 1233-1245.
6. Choe, R., Kim, J., and Ryu, K. R. (2016). Online preference learning for adaptive dispatching of AGVs in an automated container terminal. Applied Soft Computing, 38, 647-660.
7. Han, W., Guo, F., and Su, X. (2019). A reinforcement learning method for a hybrid flow-shop scheduling problem. Algorithms, 12(11), 222.
8. Garey, M. R., and Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. WH Freeman.
9. Kendall, D. G. (1953). Stochastic processes occurring in the theory of queues and their analysis by the method of the imbedded Markov chain. The Annals of Mathematical Statistics, 338-354.
10. Liao, C.-J., and You, C.-T. (1992). An improved formulation for the job-shop scheduling problem. Journal of the Operational Research Society, 43(11), 1047-1054.
11. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., and Ostrovski, G. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529.
12. Morton, T., and Pentico, D. W. (1993). Heuristic Scheduling Systems: With Applications to Production Systems and Project Management (Vol. 3). John Wiley & Sons.
13. Ouelhadj, D., and Petrovic, S. (2009). A survey of dynamic scheduling in manufacturing systems. Journal of Scheduling, 12(4), 417.
14. Pan, J. C.-H., and Chen, J.-S. (2005). Mixed binary integer programming formulations for the reentrant job shop scheduling problem. Computers & Operations Research, 32(5), 1197-1212.
15. Pfeiffer, A., Kádár, B., and Monostori, L. (2007). Stability-oriented evaluation of rescheduling strategies, by using simulation. Computers in Industry, 58(7), 630-643.
16. Priore, P., Gómez, A., Pino, R., and Rosillo, R. (2014). Dynamic scheduling of manufacturing systems using machine learning: An updated review. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 28(1), 83-97.
17. Qu, S., Chu, T., Wang, J., Leckie, J., and Jian, W. (2015). A centralized reinforcement learning approach for proactive scheduling in manufacturing. In 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA).
18. Shiue, Y.-R. (2009). Data-mining-based dynamic dispatching rule selection mechanism for shop floor control systems using a support vector machine approach. International Journal of Production Research, 47(13), 3669-3690.
19. Sutton, R. S., and Barto, A. G. (2018). Reinforcement Learning: An Introduction. MIT Press.
20. Tang, L.-L., Yih, Y., and Liu, C.-Y. (1993). A study on decision rules of a scheduling model in an FMS. Computers in Industry, 22(1), 1-13.
21. Vallada, E., and Ruiz, R. (2011). A genetic algorithm for the unrelated parallel machine scheduling problem with sequence dependent setup times. European Journal of Operational Research, 211(3), 612-622.
22. Wang, J., Qu, S., Wang, J., Leckie, J. O., and Xu, R. (2017). Real-time decision support with reinforcement learning for dynamic flowshop scheduling. In Smart SysTech 2017; European Conference on Smart Objects, Systems and Technologies.
23. Wang, Y.-C., and Usher, J. M. (2005). Application of reinforcement learning for agent-based production scheduling. Engineering Applications of Artificial Intelligence, 18(1), 73-82.
24. Wang, Y.-F. (2018). Adaptive job shop scheduling strategy based on weighted Q-learning algorithm. Journal of Intelligent Manufacturing, 1-16.
25. Waschneck, B., Reichstaller, A., Belzner, L., Altenmüller, T., Bauernhansl, T., Knapp, A., and Kyek, A. (2018). Deep reinforcement learning for semiconductor production scheduling. In 2018 29th Annual SEMI Advanced Semiconductor Manufacturing Conference (ASMC).
26. Watkins, C. J. C. H., and Dayan, P. (1992). Q-learning. Machine Learning, 8(3-4), 279-292.
27. Yuan, B., Wang, L., and Jiang, Z. (2013). Dynamic parallel machine scheduling using the learning agent. In 2013 IEEE International Conference on Industrial Engineering and Engineering Management. IEEE.
28. Zhang, T., Xie, S., and Rose, O. (2017). Real-time job shop scheduling based on simulation and Markov decision processes. In Proceedings of the 2017 Winter Simulation Conference. IEEE Press.
29. Zhang, Z., Hu, K., Li, S., Huang, H., and Zhao, S. (2013). Chip attach scheduling in semiconductor assembly. Journal of Industrial Engineering, 2013.
30. Zhang, Z., Zheng, L., Li, N., Wang, W., Zhong, S., and Hu, K. (2012). Minimizing mean weighted tardiness in unrelated parallel machine scheduling with reinforcement learning. Computers & Operations Research, 39(7), 1315-1324.
31. Zhang, Z., Zheng, L., and Weng, M. (2007). Dynamic parallel machine scheduling with mean weighted tardiness objective by Q-learning. The International Journal of Advanced Manufacturing Technology, 34(9-10), 968-980.