[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Int. Conf. Neural Information Processing Systems (NIPS), pp. 1097-1105, Dec. 2012.
[3] F. N. Iandola et al., "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size," arXiv:1602.07360, Mar. 2016.
[4] S. Han et al., "DSD: Regularizing deep neural networks with dense-sparse-dense training flow," arXiv:1607.04381, Jul. 2016.
[5] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proc. Int. Conf. Machine Learning (ICML), pp. 448-456, Feb. 2015.
[6] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," J. Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, Jun. 2014.
[7] M. Lin, Q. Chen, and S. Yan, "Network in network," arXiv:1312.4400, Dec. 2013.
[8] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv:1409.1556, Sep. 2014.
[9] C. Szegedy et al., "Going deeper with convolutions," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 1-9, Jun. 2015.
[10] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), Jun. 2016.
[11] S. Zagoruyko and N. Komodakis, "Wide residual networks," arXiv:1605.07146, May 2016.
[12] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 580-587, Jun. 2014.
[13] E. Shelhamer, J. Long, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 3431-3440, Jun. 2015.
[14] J. Deng et al., "ImageNet: A large-scale hierarchical image database," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pp. 248-255, Jun. 2009.
[15] R. Deng, R. Lu, C. Lai, T. H. Luan, and H. Liang, "Optimal workload allocation in fog-cloud computing toward balanced delay and power consumption," IEEE Internet of Things J., vol. 3, no. 6, pp. 1171-1181, Dec. 2016.
[16] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in Proc. MCC Wksp. Mobile Cloud Computing, pp. 13-16, Aug. 2012.
[17] P. G. Lopez et al., "Edge-centric computing: Vision and challenges," ACM SIGCOMM Computer Communication Review, vol. 45, no. 5, pp. 37-42, Oct. 2015.
[18] V. Mushunuri, A. Kattepur, H. K. Rath, and A. Simha, "Resource optimization in fog enabled IoT deployments," in Proc. IEEE Int. Conf. Fog and Mobile Edge Computing (FMEC), pp. 6-13, May 2017.
[19] P. Panda, A. Sengupta, and K. Roy, "Energy-efficient and improved image recognition with conditional deep learning," ACM J. Emerging Technologies in Computing Systems, vol. 13, no. 3, pp. 33:1-33:21, Feb. 2017.
[20] S. Teerapittayanon, B. McDanel, and H. Kung, "BranchyNet: Fast inference via early exiting from deep neural networks," in Proc. IEEE Int. Conf. Pattern Recognition (ICPR), pp. 2464-2469, Dec. 2016.
[21] S. Venkataramani, A. Raghunathan, J. Liu, and M. Shoaib, "Scalable-effort classifiers for energy-efficient machine learning," in Proc. Design Automation Conf., pp. 67:1-67:6, Jun. 2015.
[22] A. Krizhevsky, "Learning multiple layers of features from tiny images," Master's thesis, Department of Computer Science, University of Toronto, Apr. 2009.
[23] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," in Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), to appear, Jul. 2017.
[24] Y. Jia et al., "Caffe: Convolutional architecture for fast feature embedding," in Proc. ACM Int. Conf. Multimedia, pp. 675-678, Nov. 2014.