
Detailed Record

Author (Chinese): 林振宇
Author (English): Lin, Chen Yu
Title (Chinese): 針對以沃羅諾伊架構為基礎之類神經網路的最佳化研究
Title (English): Minimization of Voronoi Diagram-based Artificial Neural Networks
Advisor (Chinese): 王俊堯
Advisor (English): Wang, Chun Yao
Committee Members (Chinese): 許秋婷、劉奕汶
Committee Members (English): Hsu, Chiou Ting; Liu, Yi Wen
Degree: Master's
Institution: National Tsing Hua University
Department: Department of Computer Science
Student ID: 102062577
Publication Year (ROC): 104 (2015)
Graduation Academic Year: 103
Language: English
Pages: 19
Keywords (Chinese): 沃羅諾伊、類神經網路、化簡
Keywords (English): Voronoi Diagram, ANN, Minimization
Statistics:
  • Recommendations: 0
  • Views: 203
  • Rating: *****
  • Downloads: 0
  • Bookmarks: 0
Artificial neural networks (ANNs) have been studied for decades and are now widely used for classification problems; many training algorithms for ANNs have also been proposed. In both the training and recalling phases, the number of neurons in an ANN directly affects the network's computation speed. That is, the fewer neurons an ANN uses, the more efficient it is. If the ANN is implemented in hardware, fewer neurons also mean a lower manufacturing cost. In this thesis, we propose a minimization method for ANNs built upon the Voronoi diagram structure. Our method removes redundant neurons from such networks, reducing the neuron count without changing the network's functionality. Experimental results show that, for ANNs synthesized by the conventional Voronoi-diagram-based synthesis method, our approach removes up to 94% of the neurons in individual benchmarks, and 37% on average.
Artificial Neural Networks (ANNs), which have been widely used to deal with classification problems, have been studied for decades. Different algorithms for synthesizing ANNs have been proposed as well. The number of neurons in an ANN usually affects the efficiency of calculation, in either the training phase or the recalling phase. That is, the fewer neurons used, the faster the calculation can be performed. Furthermore, if the neurons are implemented by physical devices, fewer neurons in an ANN reduce the implementation cost. In this thesis, we propose a method that minimizes the number of neurons used in an ANN built with Voronoi diagrams while preserving its functionality. We conducted experiments on a set of benchmarks. The experimental results show that the resultant ANNs reduce the number of neurons by up to 94%, and by 37% on average.
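The abstract describes the idea only at a high level. As an illustrative sketch (not the thesis's actual algorithm), a Voronoi-diagram-based ANN can be viewed as a nearest-prototype classifier: each prototype neuron owns one Voronoi cell, and a point takes the label of the cell it falls in. "Removing redundant neurons while preserving functionality" then resembles pruning prototypes whose deletion leaves every training sample's label unchanged. All function names and data below are hypothetical.

```python
import math

def nearest(protos, p):
    # Index of the nearest prototype, i.e., the Voronoi cell containing p.
    return min(range(len(protos)),
               key=lambda i: math.dist(protos[i][0], p))

def classify(protos, p):
    # A point inherits the label of its Voronoi cell's prototype.
    return protos[nearest(protos, p)][1]

def prune(protos, samples):
    """Greedily drop prototypes whose removal changes no sample's label."""
    kept = list(protos)
    i = 0
    while i < len(kept):
        trial = kept[:i] + kept[i + 1:]
        if trial and all(classify(trial, p) == lbl for p, lbl in samples):
            kept = trial  # redundant neuron: functionality is preserved
        else:
            i += 1        # necessary neuron: keep it, try the next one
    return kept

# Two clusters; for this nearest-prototype rule, one prototype per
# cluster suffices, so the others are redundant.
protos = [((0.0, 0.0), 'A'), ((0.2, 0.1), 'A'),
          ((5.0, 5.0), 'B'), ((5.1, 4.9), 'B')]
samples = [(p, l) for p, l in protos]
reduced = prune(protos, samples)
```

On this toy data the pruned network keeps two of the four prototypes yet still classifies every training sample correctly, mirroring the abstract's claim that neuron count drops while functionality is preserved.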
Chinese Abstract
Abstract
Acknowledgement
Contents
List of Tables
List of Figures
1 Introduction
2 Preliminaries
2.1 ANN
2.2 VoD-based ANN
3 VoD-based ANN Minimization
4 Experimental Results
5 Conclusion
(The full text of this thesis is not authorized for release.)