
Detailed Record

Author (Chinese): 張騏麟
Author (English): Chang, Qi-Lin
Thesis Title (Chinese): 基於改良式簡化群體演算法與自然梯度下降訓練圖卷積網路
Thesis Title (English): Optimization of Graph Convolutional Network Using Improved Simplified Swarm Optimization Algorithm and Natural Gradient Descent
Advisor (Chinese): 葉維彰
Advisor (English): Yeh, Wei-Chang
Committee Members (Chinese): 賴智明, 劉達生
Committee Members (English): Lai, Chyh-Ming; Liu, Ta-Sheng
Degree: Master's
University: National Tsing Hua University
Department: Department of Industrial Engineering and Engineering Management
Student ID: 111034536
Publication Year (ROC): 113 (2024)
Graduation Academic Year: 112
Language: Chinese
Number of Pages: 75
Keywords (Chinese): 圖神經網路, 圖卷積網路, 改良式簡化群體演算法, 自然梯度下降, 神經網路模型訓練
Keywords (English): Graph Neural Networks, Graph Convolutional Networks, Improved Simplified Swarm Algorithm, Natural Gradient Descent, Neural Network Model Training
Abstract (Chinese):
The concept of neural networks dates back to 1959. With the powerful computing capability brought by advances in hardware, researchers have since published many different neural network model structures. Most early model designs, however, targeted data in Euclidean space, and no effective deep learning model had been proposed for tasks in non-Euclidean spaces. To achieve a breakthrough on graph-related tasks, the GNN model was proposed in 2005, spurring the development of deep learning on graphs, and many related models have followed in recent years.
Although GNN-based models work well on graphs, the optimizers in current GNN architectures are still predominantly traditional gradient descent, which has been shown to be prone to getting trapped in local optima, highly dependent on the initial solution, and inefficient to converge. Even though some researchers have used natural gradient descent to improve convergence, it still faces problems such as local optima. Beyond natural gradient descent, frameworks that train neural networks with metaheuristic algorithms have also been proposed, demonstrating that metaheuristics can improve model performance; such frameworks, however, have remained confined to neural network models in Euclidean space. To train graph neural networks effectively and improve their predictive ability on non-Euclidean data, this study combines the strong solution-finding ability of the Improved Simplified Swarm Optimization (iSSO) algorithm with the fast convergence of natural gradient descent, proposing iSSOβ-Optimizer-KFACϵ to train graph convolutional networks, and validates the feasibility of this algorithmic framework on multiple datasets. The contributions of this study can be summarized in four points: (1) the first use of a metaheuristic algorithm to train the weight and bias parameters of a GNN model; (2) the proposal of a combination of iSSO and natural gradient descent, which achieves better optimization performance than iSSO with traditional gradient descent; (3) an exploration of the difficulty iSSO faces with large numbers of parameter updates when training neural network models, and the proposal of iSSOβ to address it; (4) surpassing current SOTA models on the Cora dataset, achieving the best prediction accuracy.
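For background on the NGD component named above: natural gradient descent preconditions the loss gradient with the inverse Fisher information matrix, and KFAC makes this tractable with a Kronecker-factored approximation. These are the standard formulations (due to Amari, and to Martens and Grosse), given here for orientation rather than taken from the thesis itself:

```latex
% Natural gradient descent: the ordinary gradient is preconditioned by
% the inverse Fisher information matrix F(theta).
\theta_{t+1} = \theta_t - \eta \, F(\theta_t)^{-1} \nabla_{\theta} \mathcal{L}(\theta_t)

% KFAC approximates each layer's Fisher block as a Kronecker product of
% the second moments of layer inputs (A) and pre-activation gradients (G),
% so the inverse reduces to two much smaller matrix inverses:
F_{\ell} \approx A_{\ell-1} \otimes G_{\ell}, \qquad
F_{\ell}^{-1} \approx A_{\ell-1}^{-1} \otimes G_{\ell}^{-1}
```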
Abstract (English):
Various neural network model structures, such as Convolutional Neural Networks, have demonstrated excellent performance on many real-world problems. However, early model designs mostly focused on processing data in Euclidean spaces, and suitable deep learning models were lacking for tasks involving non-Euclidean data. To address this issue, the Graph Neural Network (GNN) model was introduced, providing a framework applicable to most graph data.
While GNN-based models excel at handling graph data, the optimizers used in current architectures predominantly rely on traditional gradient descent (GD) methods, which have been shown to be susceptible to getting stuck in local optima, high dependence on initial solutions, and poor convergence efficiency. Although natural gradient descent (NGD) can mitigate these issues by accounting for the underlying structure of the parameter space, its effectiveness is still limited. Beyond NGD, metaheuristic (MH) algorithms have been proposed for training neural network architectures, but existing research has focused only on data in Euclidean spaces. To train GNNs effectively, this study proposes a hybrid algorithm, iSSOβ-Optimizer-KFACϵ, which combines the Improved Simplified Swarm Optimization (iSSO) algorithm with NGD, and validates its feasibility across various datasets. The contributions of this research can be summarized as follows: (1) the first application of MH algorithms to train the weight and bias parameters of GNN models; (2) a combination of iSSO and NGD that outperforms iSSO with traditional GD; (3) an exploration of the challenges iSSO faces when training neural network models with large numbers of parameter updates, along with the introduction of iSSOβ to address this issue; and (4) state-of-the-art performance on the Cora dataset.
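To make the hybrid idea concrete, the following is a minimal sketch of an SSO-style component-wise update interleaved with gradient refinement, run on a toy least-squares objective. Everything here is an illustrative assumption: the threshold values CG/CP/CW, the particle count, the toy model, and the plain gradient step standing in for the KFAC-preconditioned natural gradient step. It is not the thesis's actual iSSOβ-Optimizer-KFACϵ implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: fit the weights w of a linear model (a stand-in for
# a GCN's flattened weight and bias parameters).
X = rng.normal(size=(100, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.1 * rng.normal(size=100)

def loss(w):
    return np.mean((X @ w - y) ** 2)

def grad(w):
    # Plain gradient; the thesis preconditions this step with a KFAC
    # Fisher approximation, which is omitted here for brevity.
    return 2 * X.T @ (X @ w - y) / len(y)

n_particles, dim = 20, 8
CG, CP, CW = 0.45, 0.75, 0.95            # cumulative SSO thresholds (assumed)
P = rng.normal(size=(n_particles, dim))  # particle positions
pbest = P.copy()                          # per-particle best solutions
pbest_f = np.array([loss(p) for p in P])
g = pbest[np.argmin(pbest_f)].copy()      # global best solution

for it in range(200):
    for i in range(n_particles):
        # SSO-style update: per dimension, copy from the global best,
        # the personal best, the current position, or a random draw,
        # depending on where a uniform random number falls.
        r = rng.random(dim)
        new = np.where(r < CG, g,
              np.where(r < CP, pbest[i],
              np.where(r < CW, P[i], rng.normal(size=dim))))
        # Local refinement: a few gradient steps per particle
        # (the role played by NGD in the hybrid scheme).
        for _ in range(3):
            new = new - 0.05 * grad(new)
        f = loss(new)
        P[i] = new
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = new.copy(), f
    g = pbest[np.argmin(pbest_f)].copy()

print("best loss:", pbest_f.min())
```

In the thesis's setting, the flat weight vector would hold a GCN's weights and biases, the loss would be the node-classification objective on a dataset such as Cora, and the inner gradient steps would be preconditioned by the KFAC Fisher approximation sketched above.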
Table of Contents:
Abstract (Chinese) i
Abstract (English) ii
Table of Contents iii
Chapter 1 Introduction 1
1.1 Research Background and Motivation 1
1.2 Research Objectives 6
1.3 Thesis Organization 7
Chapter 2 Literature Review 9
2.1 Graph Neural Networks 9
2.2 Graph Convolutional Networks 11
2.3 Optimizers 14
2.4 Natural Gradient Descent 17
2.5 Improved Simplified Swarm Optimization 20
Chapter 3 Methodology 24
3.1 Particle Encoding 24
3.2 Solution Initialization 25
3.3 Model Evaluation Function 26
3.4 Algorithm Description 27
3.4.1 Notation of the iSSOβ-Optimizer-KFACϵ Algorithm 27
3.4.2 Update Mechanism of iSSOβ-Optimizer-KFACϵ 28
Chapter 4 Experiments and Results 33
4.1 Datasets 33
4.2 Hyperparameter Optimization 35
4.2.1 Experiment 1: Optimizers and Natural Gradient Descent 35
4.2.2 Experiment 2: iSSOβ Parameter Settings 39
4.2.3 Experiment 3: iSSOβ-Optimizer 42
4.3 Comparison of Experimental Results 45
4.3.1 Cora: ANOVA Test 50
4.3.2 Citeseer: ANOVA Test 54
4.3.3 Pubmed: ANOVA Test 58
4.3.4 Summary 63
Chapter 5 Conclusions and Future Work 64
5.1 Conclusions 64
5.2 Future Research Directions 65
References 66