
Detailed Record

Author (Chinese): 許永翰
Author (English): Hsu, Yung-Han
Title (Chinese): 降低隨機森林於多層式非揮發性隨機存取記憶體耗能的模型修剪演算法
Title (English): Eco-feller: Minimizing the Energy Consumption of Random Forest Algorithm by an Eco-pruning Strategy over MLC NVRAM
Advisor (Chinese): 石維寬
Advisor (English): Shih, Wei-Kuan
Committee Members (Chinese): 許富皓, 陳增益, 梁郁珮
Committee Members (English): Hsu, Fu-Hau; Chen, Tseng-Yi; Liang, Yu-Pei
Degree: Master's
University: National Tsing Hua University
Department: Institute of Information Systems and Applications
Student ID: 108065519
Publication Year (ROC): 110 (2021)
Graduation Academic Year: 109
Language: English
Number of Pages: 32
Keywords (Chinese): 機器學習, 隨機森林, 多層式非揮發性記憶體, 修剪演算法
Keywords (English): machine learning, random forest, multi-level cell non-volatile memory, pruning algorithm
Statistics:
  • Recommendations: 0
  • Views: 935
  • Rating: *****
  • Downloads: 0
  • Bookmarks: 0
Abstract (Chinese, translated):
Random forest is widely used for object classification because of its ease of use and accuracy. To achieve higher classification accuracy, however, random forest typically constructs a very large number of decision trees and then applies post-pruning algorithms to remove low-contribution trees, improving model accuracy and reducing storage space. Meanwhile, non-volatile memory is regarded as a strong candidate for hybrid storage systems. Because write operations on non-volatile memory are costly, writing decision trees that will later be pruned away wastes both energy and time. This thesis proposes a framework to reduce these drawbacks of training a random forest. The core idea is to evaluate each decision tree before it is constructed, and then write the constructed tree to non-volatile memory using a suitable write mode. Experimental results show that the framework effectively mitigates wasted energy while the model retains good accuracy.
Abstract (English):
Random forest has been widely used to classify objects because of its efficiency and accuracy. On the other hand, non-volatile memory has been regarded as a promising candidate for hybrid memory architectures. To achieve higher accuracy, random forest tends to construct a large number of decision trees and then applies post-pruning methods to fell low-contribution trees, improving model accuracy and space utilization. However, the cost of write operations on non-volatile memory is very high, so writing the to-be-pruned trees into non-volatile memory significantly wastes both energy and time. This thesis proposes a framework to ease this cost of training a random forest model. The main idea is to evaluate the importance of each tree before constructing it, and then adopt different write modes when writing the trees to the non-volatile memory space. The experimental results show that the proposed framework significantly mitigates the waste of energy while maintaining high accuracy.
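The abstract sketches a two-step idea: estimate a tree's likely contribution before it is fully grown, then pick an MLC NVRAM write mode that matches that estimate. Below is a minimal Python sketch of that idea under stated assumptions: the shallow "probe" tree used as the pre-construction evaluator, the two write-mode names, and the per-node energy figures are all hypothetical illustrations, not the thesis's actual Dataset Evaluator or energy model.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical relative write energies for two MLC NVRAM write modes:
# a cheap short-retention "light" write and an expensive "durable" write.
WRITE_ENERGY = {"light": 1.0, "durable": 4.0}

def probe_score(X_tr, y_tr, X_val, y_val, rng):
    # Assumed cheap pre-construction evaluator: fit a shallow tree on a
    # small bootstrap sample and score it on held-out data, as a proxy
    # for how much the full tree would contribute.
    idx = rng.choice(len(X_tr), size=len(X_tr) // 4, replace=True)
    probe = DecisionTreeClassifier(max_depth=3, random_state=0)
    probe.fit(X_tr[idx], y_tr[idx])
    return probe.score(X_val, y_val)

rng = np.random.default_rng(seed=0)
X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

forest, total_energy = [], 0.0
for t in range(50):
    score = probe_score(X_tr, y_tr, X_val, y_val, rng)
    # Trees judged unlikely to contribute get the cheap "light" write,
    # since they are the likely victims of post-pruning anyway;
    # promising trees get the expensive durable write.
    mode = "durable" if score >= 0.90 else "light"
    idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)  # bagging
    tree = DecisionTreeClassifier(random_state=t).fit(X_tr[idx], y_tr[idx])
    forest.append((tree, mode))
    total_energy += WRITE_ENERGY[mode] * tree.tree_.node_count  # toy energy model

durable = sum(1 for _, m in forest if m == "durable")
print(f"{durable}/{len(forest)} trees written durably; "
      f"simulated write energy: {total_energy:.0f} units")

Per the table of contents below, the framework also includes a "Light Tree Recorder and Reinforcer", suggesting that trees written in the cheap mode can later be reinforced (rewritten durably) if they prove valuable; the sketch above omits that step.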
Abstract (Chinese)...............................................i
Abstract (English)..............................................ii
1. Introduction..................................................1
2. Background and Motivations....................................5
2.1. MLC PCM with Flexible Retention Time........................5
2.2. Ensemble Learning - Random Forest...........................7
2.3. Related Works..............................................10
2.4. Motivation.................................................12
3. Eco-pruning Random Forest Framework..........................14
3.1. Key Observation for Design Principle.......................14
3.2. Design Overview............................................16
3.3. Dataset Evaluator..........................................17
3.4. Tree Planter & Dataset Warehouse...........................19
3.5. Light Tree Recorder and Reinforcer.........................20
3.6. Working Example............................................20
4. Evaluation...................................................23
4.1. Experimental Settings......................................23
4.2. Experimental Result........................................24
5. Conclusion...................................................28
References......................................................29