|
[1] Chu, H.-h., and Nahrstedt, K. Cpu service classes for multimedia applications. In Proceedings IEEE International Conference on Multimedia Computing and Systems (1999), vol. 1, IEEE, pp. 296–301. [2] Dickman, L., Lindahl, G., Olson, D., Rubin, J., and Broughton, J. Pathscale infinipath: A first look. In 13th Symposium on High Performance Interconnects (HOTI’05) (2005), IEEE, pp. 163–165. [3] Dwarakinath, A. A fair-share scheduler for the graphics processing unit. PhD thesis, The Graduate School, Stony Brook University: Stony Brook, NY., 2008. [4] Gottschlag, M., Hillenbrand, M., Kehne, J., Stoess, J., and Bellosa, F. Logv: Low-overhead gpgpu virtualization. In 2013 IEEE 10th International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing (2013), IEEE, pp. 1721–1726. [5] Gupta, K., Stuart, J. A., and Owens, J. D. A study of persistent threads style gpu programming for gpgpu workloads. In 2012 Innovative Parallel Computing (InPar) (2012), IEEE, pp. 1–14. [6] Gupta, V., Schwan, K., Tolia, N., Talwar, V., and Ranganathan, P. Pegasus: Coordinated scheduling for virtualized accelerator-based systems. In 2011 USENIX Annual Technical Conference (USENIX ATC’11) (2011), p. 31. [7] Kang, D., Jun, T. J., Kim, D., Kim, J., and Kim, D. Convgpu: Gpu management middleware in container based virtualized environment. In 2017 IEEE International Conference on Cluster Computing (CLUSTER) (2017), IEEE, pp. 301–309. [8] Kato, S., Lakshmanan, K., Rajkumar, R., and Ishikawa, Y. Timegraph: Gpu scheduling for real-time multi-tasking environments. In Proc. USENIX ATC (2011), pp. 17–30. [9] Kato, S., McThrow, M., Maltzahn, C., and Brandt, S. Gdev: First-class GPU resource management in the operating system. In Presented as part of the 2012 USENIX Annual Technical Conference (USENIX ATC 12) (Boston, MA, 2012), USENIX, pp. 401–412. [10] Kyriazis, G. Heterogeneous system architecture: A technical review. AMD Fusion Developer Summit (2012), 21. [11] Menychtas, K., Shen, K., and Scott, M. L. Disengaged scheduling for fair, protected access to fast computational accelerators. In ACM SIGPLAN Notices (2014), vol. 49, ACM, pp. 301–316. [12] OrgFoundation, X. Nouveau: Accelerated open source driver for nvidia cards. URL https://nouveau. freedesktop. org/wiki (2011). [13] Park, J. J. K., Park, Y., and Mahlke, S. Chimera: Collaborative preemption for multitasking on a shared gpu. ACM SIGPLAN Notices 50, 4 (2015), 593–606. [14] Rossbach, C. J., Currey, J., Silberstein, M., Ray, B., and Witchel, E. Ptask: operating system abstractions to manage gpus as compute devices. In Proceedings of the Twenty-Third ACM Symposium on Operating Systems Principles (2011), ACM, pp. 233–248. [15] Suzuki, Y., Kato, S., Yamada, H., and Kono, K. Gpuvm: Why not virtualizing gpus at the hypervisor? In Proceedings of the 2014 USENIX Conference on USENIX Annual Technical Conference (Berkeley, CA, USA, 2014), USENIX ATC’14, USENIX Association, pp. 109–120. [16] Suzuki, Y., Yamada, H., Kato, S., and Kono, K. Gloop: An event-driven runtime for consolidating gpgpu applications. In SoCC 2017 - Proceedings of the 2017 Symposium on Cloud Computing (9 2017), Association for Computing Machinery, Inc, pp. 80–93. [17] Tanasic, I., Gelado, I., Cabezas, J., Ramirez, A., Navarro, N., and Valero, M. Enabling preemptive multiprogramming on gpus. In 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA) (2014), IEEE, pp. 193–204. [18] Wang, L., Huang, M., and El-Ghazawi, T. Exploiting concurrent kernel execution on graphic processing units. In 2011 International Conference on High Performance Computing & Simulation (2011), IEEE, pp. 24–32. [19] Wu, B., Liu, X., Zhou, X., and Jiang, C. Flep: Enabling flexible and efficient preemption on gpus. ACM SIGOPS Operating Systems Review 51, 2 (2017), 483–496. [20] Zeno, L., Mendelson, A., and Silberstein, M. Gpupio: the case for i/o-driven preemption on gpus. In Proceedings of the 9th Annual Workshop on General Purpose Processing using Graphics Processing Unit (2016), ACM, pp. 63–71. |