[1] C. B. Choy, D. Xu, J. Gwak, K. Chen, and S. Savarese, “3d-r2n2: A unified approach for single and multi-view 3d object reconstruction,” in Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 628–644.
[2] H. Wang, J. Yang, W. Liang, and X. Tong, “Deep single-view 3d object reconstruction with visual hull embedding,” in Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2019.
[3] A. Kar, S. Tulsiani, J. Carreira, and J. Malik, “Category-specific object reconstruction from a single image,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
[4] S. Tulsiani, A. A. Efros, and J. Malik, “Multi-view consistency as supervisory signal for learning shape and pose prediction,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[5] H. Fan, H. Su, and L. J. Guibas, “A point set generation network for 3d object reconstruction from a single image,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[6] A. Arsalan Soltani, H. Huang, J. Wu, T. D. Kulkarni, and J. B. Tenenbaum, “Synthesizing 3d shapes via modeling multi-view depth maps and silhouettes with deep generative networks,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[7] S. R. Richter and S. Roth, “Matryoshka networks: Predicting 3d geometry via nested shape layers,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[8] J. Wu, C. Zhang, X. Zhang, Z. Zhang, W. T. Freeman, and J. B. Tenenbaum, “Learning shape priors for single-view 3d completion and reconstruction,” in The European Conference on Computer Vision (ECCV), September 2018.
[9] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1097–1105. [Online]. Available: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
[10] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[11] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[12] A. Johnston, R. Garg, G. Carneiro, I. Reid, and A. van den Hengel, “Scaling cnns for high resolution volumetric reconstruction from a single image,” in The IEEE International Conference on Computer Vision (ICCV) Workshops, Oct 2017.
[13] A. X. Chang, T. A. Funkhouser, L. J. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu, “Shapenet: An information-rich 3d model repository,” CoRR, vol. abs/1512.03012, 2015. [Online]. Available: http://arxiv.org/abs/1512.03012
[14] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” arXiv preprint arXiv:1511.07122, 2015.
[15] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell, “Understanding convolution for semantic segmentation,” arXiv preprint arXiv:1702.08502, 2017.
[16] T. Groueix, M. Fisher, V. G. Kim, B. C. Russell, and M. Aubry, “A papier-mâché approach to learning 3d surface generation,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[17] S. Vicente, J. Carreira, L. Agapito, and J. Batista, “Reconstructing pascal voc,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
[18] Q. Huang, H. Wang, and V. Koltun, “Single-view reconstruction via joint analysis of image and shape collections,” ACM Trans. Graph., vol. 34, no. 4, pp. 87:1–87:10, Jul. 2015. [Online]. Available: http://doi.acm.org/10.1145/2766890
[19] M. Sung, V. G. Kim, R. Angst, and L. Guibas, “Data-driven structural priors for shape completion,” ACM Trans. Graph., vol. 34, no. 6, pp. 175:1–175:11, Oct. 2015. [Online]. Available: http://doi.acm.org/10.1145/2816795.2818094
[20] Y. Li, A. Dai, L. Guibas, and M. Nießner, “Database-assisted object retrieval for real-time 3d reconstruction,” in Computer Graphics Forum, vol. 34, no. 2. Wiley Online Library, 2015.
[21] N. J. Mitra, L. J. Guibas, and M. Pauly, “Partial and approximate symmetry detection for 3d geometry,” ACM Trans. Graph., vol. 25, no. 3, pp. 560–568, Jul. 2006. [Online]. Available: http://doi.acm.org/10.1145/1141911.1141924
[22] Y. Liao, Y. Yang, and Y. F. Wang, “3d shape reconstruction from a single 2d image via 2d-3d self-consistency,” CoRR, vol. abs/1811.12016, 2018. [Online]. Available: http://arxiv.org/abs/1811.12016
[23] S. Tulsiani, T. Zhou, A. A. Efros, and J. Malik, “Multi-view supervision for single-view reconstruction via differentiable ray consistency,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[24] A. Kar, C. Häne, and J. Malik, “Learning a multi-view stereo machine,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 365–376. [Online]. Available: http://papers.nips.cc/paper/6640-learning-a-multi-view-stereo-machine.pdf
[25] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee, “Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision,” in Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, Eds. Curran Associates, Inc., 2016, pp. 1696–1704.
[26] G. Yang, Y. Cui, S. Belongie, and B. Hariharan, “Learning single-view 3d reconstruction with limited pose supervision,” in The European Conference on Computer Vision (ECCV), September 2018.
[27] X. Sun, J. Wu, X. Zhang, Z. Zhang, C. Zhang, T. Xue, J. B. Tenenbaum, and W. T. Freeman, “Pix3d: Dataset and methods for single-image 3d shape modeling,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[28] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao, “3d shapenets: A deep representation for volumetric shapes,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
[29] J. Wu, Y. Wang, T. Xue, X. Sun, B. Freeman, and J. Tenenbaum, “Marrnet: 3d shape reconstruction via 2.5d sketches,” in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 540–550. [Online]. Available: http://papers.nips.cc/paper/6657-marrnet-3d-shape-reconstruction-via-25d-sketches.pdf
[30] R. Girdhar, D. F. Fouhey, M. Rodriguez, and A. Gupta, “Learning a predictable and generative vector representation for objects,” CoRR, vol. abs/1603.08637, 2016. [Online]. Available: http://arxiv.org/abs/1603.08637
[31] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum, “Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling,” in Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, Eds. Curran Associates, Inc., 2016, pp. 82–90.
[32] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2014, pp. 2672–2680. [Online]. Available: http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
[33] J. Wu, T. Xue, J. J. Lim, Y. Tian, J. B. Tenenbaum, A. Torralba, and W. T. Freeman, “Single image 3d interpreter network,” in European Conference on Computer Vision (ECCV), 2016.
[34] A. Brock, T. Lim, J. M. Ritchie, and N. Weston, “Generative and discriminative voxel modeling with convolutional neural networks,” CoRR, vol. abs/1608.04236, 2016. [Online]. Available: http://arxiv.org/abs/1608.04236
[35] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” CoRR, vol. abs/1312.6114, 2013.
[36] A. Sharma, O. Grau, and M. Fritz, “Vconv-dae: Deep volumetric shape learning without object labels,” in ECCV Workshops, 2016.
[37] J. Gwak, C. B. Choy, M. Chandraker, A. Garg, and S. Savarese, “Weakly supervised 3d reconstruction with adversarial constraint,” in 2017 International Conference on 3D Vision (3DV), 2017.
[38] C.-H. Lin, C. Kong, and S. Lucey, “Learning efficient point cloud generation for dense 3d object reconstruction,” in AAAI Conference on Artificial Intelligence (AAAI), 2018.
[39] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” arXiv preprint arXiv:1612.00593, 2016.
[40] M. Tatarchenko, A. Dosovitskiy, and T. Brox, “Octree generating networks: Efficient convolutional architectures for high-resolution 3d outputs,” in The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
[41] G. Riegler, A. Osman Ulusoy, and A. Geiger, “Octnet: Learning deep 3d representations at high resolutions,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[42] G. Riegler, A. O. Ulusoy, H. Bischof, and A. Geiger, “Octnetfusion: Learning depth fusion from data,” in Proceedings of the International Conference on 3D Vision, 2017.
[43] C. Zou, E. Yumer, J. Yang, D. Ceylan, and D. Hoiem, “3d-prnn: Generating shape primitives with recurrent neural networks,” in The IEEE International Conference on Computer Vision (ICCV), 2017.
[44] Y. Sun, Z. Liu, Y. Wang, and S. E. Sarma, “Im2avatar: Colorful 3d reconstruction from a single image,” CoRR, vol. abs/1804.06375, 2018. [Online]. Available: http://arxiv.org/abs/1804.06375
[45] S. Wang, W. Liu, J. Wu, L. Cao, Q. Meng, and P. J. Kennedy, “Training deep neural networks on imbalanced data sets,” in 2016 International Joint Conference on Neural Networks (IJCNN), July 2016, pp. 4368–4374.
[46] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” CoRR, vol. abs/1505.04597, 2015. [Online]. Available: http://arxiv.org/abs/1505.04597
[47] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” International Conference on Learning Representations, Dec. 2014.
[48] H. G. Barrow, J. M. Tenenbaum, R. C. Bolles, and H. C. Wolf, “Parametric correspondence and chamfer matching: Two new techniques for image matching,” in Proceedings of the 5th International Joint Conference on Artificial Intelligence - Volume 2, ser. IJCAI’77. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1977, pp. 659–663. [Online]. Available: http://dl.acm.org/citation.cfm?id=1622943.1622971
[49] B. Yang, S. Rosa, A. Markham, N. Trigoni, and H. Wen, “Dense 3d object reconstruction from a single depth view,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2019.
[50] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba, “Sun database: Large-scale scene recognition from abbey to zoo,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 2010, pp. 3485–3492.