Detailed Record

Author (Chinese): 吳建昇
Author (English): Chien-Sheng Wu
Title (Chinese): 基於深度學習神經網路方法下針對長距離跑者進行受傷因子估測
Title (English): Estimation of Injury Factors for Long Distance Runners Based on Deep Learning Neural Networks
Advisor (Chinese): 鐘太郎
Advisor (English): Jong, Tai-Lang
Committee Members (Chinese): 黃裕煒、謝奇文
Committee Members (English): Huang, Yu-Wei; Hsieh, Chi-Wen
Degree: Master's
University: National Tsing Hua University (國立清華大學)
Department: Department of Electrical Engineering (電機工程學系)
Student ID: 107061600
Publication Year (ROC): 110 (2021)
Graduation Academic Year: 109
Language: Chinese
Number of Pages: 88
Keywords (Chinese): 深度學習、卷積神經網路、循環神經網路、3DCNN、CRNN、長距離跑者受傷因子、步態週期
Keywords (English): Deep learning, Convolutional neural network, Recurrent neural network, 3DCNN, CRNN, Injury factors of long distance runners, Gait cycle
To stay in good competitive condition, athletes must keep training and racing at high intensity, yet injuries during training or competition are often unavoidable. Nor is the problem limited to athletes: members of the general public, who exercise without sufficient expertise or physical conditioning, are also prone to injury. Preventing sports injuries is therefore important. If the probability of injury can be reduced, athletes can sustain a high level of performance, and recreational exercisers can avoid turning an activity that should benefit their health into a burden on the body. Research in sports science shows that the larger the angle between the pelvis and the horizontal plane during the stance phase of the running gait cycle, the more likely lower-limb overuse injuries become; the pelvic tilt angle during stance is therefore an important injury factor for long-distance runners. Accordingly, this thesis uses deep learning to estimate the pelvic tilt angle from rear-view videos of runners. A complete experimental procedure is first designed, and four different video-oriented deep learning architectures are used for the estimation and their results compared. All of the architectures are commonly used for video classification, particularly action recognition: single-frame and multi-frame convolutional neural networks, a 3D convolutional neural network, and a convolutional recurrent neural network. Using the runners' videos as input, the four architectures are compared in terms of prediction accuracy and loss.
In order to maintain a high competitive level, an athlete must keep training and racing hard, yet injuries during training and competition are inevitable. Athletes are not the only ones affected: the general public, exercising without sufficient professional knowledge or physical conditioning, also suffers injuries. Avoiding injury is therefore important. If the probability of injury could be reduced, athletes could sustain a high competitive level and the general public could exercise more safely. Research in sports science shows that a long-distance runner with a larger pelvic drop angle during the stance phase of the running gait cycle is more likely to suffer lower-limb overuse injuries; the pelvic drop angle is thus a key injury factor for long-distance runners. This thesis therefore applies deep learning to estimate a runner's pelvic drop angle from videos shot from behind the runner. A complete set of experimental methods is designed, and four deep learning architectures commonly used for video classification, especially human action recognition, are employed to estimate the pelvic drop angle: single-frame and multi-frame CNNs, a 3D CNN, and a CRNN. Their performances are compared in terms of estimation accuracy and loss.
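Of the four architectures named above, the CRNN is the one that couples per-frame convolutional features with a recurrent network over the gait cycle. The following PyTorch sketch is only an illustration of that general structure, not the thesis's actual code: the layer sizes, clip length, frame resolution, and the number of pelvic-drop-angle classes are all assumed for the example.

import torch
import torch.nn as nn

class PelvicDropCRNN(nn.Module):
    """Per-frame CNN feature extractor + LSTM over the clip (illustrative only)."""
    def __init__(self, num_classes=5, hidden_size=128):
        super().__init__()
        # Small stand-in backbone; the thesis outline mentions ResNet-152 as a backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # -> (B*T, 32, 1, 1)
        )
        # Recurrent layer aggregates per-frame features across the gait cycle.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size, batch_first=True)
        # Classification head over discretised pelvic-drop-angle bins (bin count assumed).
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clip):
        # clip: (batch, time, channels, height, width), e.g. rear-view frames of one runner
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)               # (b, t, hidden_size)
        return self.head(out[:, -1])            # class logits from the last time step

# Toy usage: 2 clips of 16 frames at 3x112x112 (resolution assumed).
logits = PelvicDropCRNN()(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 5])

In a training loop, such logits would typically be paired with a cross-entropy loss against labelled angle classes, consistent with the accuracy-and-loss comparison described in the abstract.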
Chinese Abstract I
Abstract II
Acknowledgements III
Table of Contents IV
List of Figures VII
List of Tables IX
Chapter 1 Introduction 2
1.1 Research Background 2
1.2 Research Motivation 3
1.3 Research Objectives and Contributions 4
1.4 Thesis Organization 5
Chapter 2 Literature Review 7
2.1 Sports Science Literature 7
2.1.1 Injury Factors 7
2.1.2 Injury Factors and Gait 7
2.1.3 Injury Factors and Machine Learning 11
2.2 Deep Learning Literature 12
Chapter 3 Machine Learning and Deep Learning 17
3.1 Machine Learning 17
3.1.1 Overview 17
3.1.2 Supervised Learning 19
3.1.3 Classifiers 20
3.1.4 Objective Functions and Gradient Descent 29
3.1.5 Error Backpropagation 36
3.1.6 Pre-training Methods 37
3.2 Convolutional Neural Network (CNN) 39
3.2.1 Overview 39
3.2.2 Convolutional Kernel 40
3.2.3 Padding 42
3.2.4 Feature Map 43
3.2.5 ResNet-152 [60] 44
3.2.6 3D CNN [61] 45
3.3 Recurrent Neural Network (RNN) 48
3.3.1 Recurrent Neural Networks [62] 48
3.3.2 Long Short-Term Memory (LSTM) 50
3.4 Data Augmentation 54
3.4.1 Flipping 55
3.4.2 Color Space 55
3.4.3 Cropping 55
3.4.4 Rotation 55
3.4.5 Translation 56
3.4.6 Noise Injection 56
Chapter 4 Experimental Methods and Results 57
4.1 Overview 57
4.2 Gait Video Data 60
4.3 Data Labeling 62
4.4 Dataset 67
4.5 Data Augmentation 68
4.6 Results and Comparison 71
4.6.1 Comparison of Activation Functions Used in the CRNN 74
4.6.2 Results without Data Augmentation 75
4.6.3 Results with Data Augmentation 77
4.6.4 Results with a Reduced Number of Classes 78
Conclusions and Future Work 81
References 84
1. Fields, K.B., et al., Prevention of running injuries. Current sports medicine reports, 2010. 9(3): p. 176-182.
2. 蜜雪.史丹克鮑加德, 運動百憂解:克服哀傷的最佳處方箋. 2019: 方舟文化.
3. Van Gent, R., et al., Incidence and determinants of lower extremity running injuries in long distance runners: a systematic review. British journal of sports medicine, 2007. 41(8): p. 469-480.
4. Van der Worp, M.P., et al., Injuries in runners; a systematic review on risk factors and sex differences. PloS one, 2015. 10(2): p. e0114937.
5. Jauhiainen, S., et al., A hierarchical cluster analysis to determine whether injured runners exhibit similar kinematic gait patterns. Scandinavian Journal of Medicine & Science in Sports, 2020. 30(4): p. 732-740.
6. Hesar, N.G.Z., et al., A prospective study on gait-related intrinsic risk factors for lower leg overuse injuries. British journal of sports medicine, 2009. 43(13): p. 1057-1061.
7. Thijs, Y., et al., Gait-related intrinsic risk factors for patellofemoral pain in novice recreational runners. British journal of sports medicine, 2008. 42(6): p. 466-471.
8. Van Ginckel, A., et al., Intrinsic gait-related risk factors for Achilles tendinopathy in novice runners: a prospective study. Gait & posture, 2009. 29(3): p. 387-391.
9. Wnuk, A., et al., Is there a relationship between functional flat foot and prevalence of non-insertional achilles tendinopathy in joggers?—a pilot study. Folia Medica Cracoviensia, 2017.
10. 張世玪. 世代研究 (Cohort study). 2010; Available from: https://highscope.ch.ntu.edu.tw/wordpress/?p=7992.
11. Bramah, C., et al., Is there a pathological gait associated with common soft tissue running injuries? The American journal of sports medicine, 2018. 46(12): p. 3023-3031.
12. Oh, H., G. Cha, and S. Oh, Samba: A real-time motion capture system using wireless camera sensor networks. Sensors, 2014. 14(3): p. 5516-5535.
13. Watari, R., et al., Determination of patellofemoral pain sub-groups and development of a method for predicting treatment outcome using running gait kinematics. Clinical biomechanics, 2016. 38: p. 13-21.
14. Luz, B.C., et al., Relationship between rearfoot, tibia and femur kinematics in runners with and without patellofemoral pain. Gait & posture, 2018. 61: p. 416-422.
15. Friedman, J., T. Hastie, and R. Tibshirani, The elements of statistical learning. 2nd ed. 2009, New York: Springer.
16. Chen, I., et al., Identification of elite swimmers' race patterns using cluster analysis. International Journal of Sports Science & Coaching, 2007. 2(3): p. 293-303.
17. Ball, K. and R. Best, Different centre of pressure patterns within the golf stroke I: Cluster analysis. Journal of sports sciences, 2007. 25(7): p. 757-770.
18. Krizhevsky, A., I. Sutskever, and G.E. Hinton, Imagenet classification with deep convolutional neural networks. Communications of the ACM, 2017. 60(6): p. 84-90.
19. Karpathy, A., et al. Large-scale video classification with convolutional neural networks. in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. 2014.
20. Donahue, J., et al. Long-term recurrent convolutional networks for visual recognition and description. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
21. Yue-Hei Ng, J., et al. Beyond short snippets: Deep networks for video classification. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2015.
22. Tran, D., et al. Learning spatiotemporal features with 3d convolutional networks. in Proceedings of the IEEE international conference on computer vision. 2015.
23. Oppermann, A. Artificial Intelligence vs. Machine Learning vs. Deep Learning. 2019; Available from: https://towardsdatascience.com/artificial-intelligence-vs-machine-learning-vs-deep-learning-2210ba8cc4ac.
24. LeCun, Y., Y. Bengio, and G. Hinton, Deep learning. Nature, 2015. 521(7553): p. 436-444.
25. James, G., et al., An introduction to statistical learning. Vol. 112. 2013: Springer.
26. Machine Learning Classification – 8 Algorithms for Data Science Aspirants. Available from: https://data-flair.training/blogs/machine-learning-classification-algorithms/.
27. Wright, R.E., Logistic regression. 1995.
28. Huang, T. 機器/統計學習: 羅吉斯回歸 (Logistic regression). 2018.
29. Jiang, M., et al., Text classification based on deep belief network and softmax regression. Neural Computing and Applications, 2018. 29(1): p. 61-70.
30. Yann LeCun, C.C., Christopher J.C. Burges. THE MNIST DATABASE of handwritten digits. 1998; Available from: http://yann.lecun.com/exdb/mnist/.
31. Veetil, S. and Q. Gao, Real-time network intrusion detection using Hadoop-based Bayesian classifier, in Emerging Trends in ICT Security. 2014, Elsevier. p. 281-299.
32. Subasi, A., Practical Machine Learning for Data Analysis Using Python. 2020: Academic Press.
33. Xiaozhou, Y. Linear Discriminant Analysis, Explained. 2020; Available from: https://towardsdatascience.com/linear-discriminant-analysis-explained-f88be6c1e00b.
34. Gholami, R. and N. Fakhari, Support vector machine: principles, parameters, and applications, in Handbook of Neural Computation. 2017, Elsevier. p. 515-535.
35. CH.Tseng. Support Vector Machines 支持向量機. 2017; Available from: https://chtseng.wordpress.com/2017/02/04/support-vector-machines-%E6%94%AF%E6%8C%81%E5%90%91%E9%87%8F%E6%A9%9F/.
36. Nadkarni, P., Clinical Research Computing: A Practitioner's Handbook. Core Technologies: Data Mining and “Big Data”. 2016: Academic Press.
37. Islam, M.J., et al. Investigating the performance of naive-bayes classifiers and k-nearest neighbor classifiers. in 2007 International Conference on Convergence Information Technology (ICCIT 2007). 2007. IEEE.
38. Allibhai, E. Building a k-Nearest-Neighbors (k-NN) Model with Scikit-learn. 2018; Available from: https://towardsdatascience.com/building-a-k-nearest-neighbors-k-nn-model-with-scikit-learn-51209555453a.
39. Varshney, P. K-Nearest Neighbour Explained-Part 2. 2020.
40. Leonard, L.C., Web-Based Behavioral Modeling for Continuous User Authentication (CUA), in Advances in Computers. 2017, Elsevier. p. 1-44.
41. Ganegedara, T. Intuitive Guide to Understanding Decision Trees. 2018; Available from: https://towardsdatascience.com/light-on-math-machine-learning-intuitive-guide-to-understanding-decision-trees-adb2165ccab7.
42. Daoud, M. and M. Mayo, A survey of neural network-based cancer prediction models from microarray data. Artificial intelligence in medicine, 2019. 97: p. 204-214.
43. Santosa, B. Multiclass classification with cross entropy-support vector machines. in The Third Information Systems International Conference. 2015.
44. Sra, S., S. Nowozin, and S.J. Wright, Optimization for Machine Learning. 2012: MIT Press.
45. RUDER, S. An overview of gradient descent optimization algorithms. 2016; Available from: https://ruder.io/optimizing-gradient-descent/index.html#fn1.
46. Simone. Stochastic Gradient Descent on your microcontroller. 2020; Available from: https://eloquentarduino.github.io/2020/04/stochastic-gradient-descent-on-your-microcontroller/.
47. Duchi, J., E. Hazan, and Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 2011. 12(7).
48. Zeiler, M.D., Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
49. Tieleman, T. and G. Hinton, Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning, 2012. 4(2): p. 26-31.
50. Kingma, D.P. and J. Ba, Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
51. Selfridge, O., Symposium on the mechanisation of thought processes. 1959.
52. Werbos, P., Beyond regression: New tools for prediction and analysis in the behavioral sciences. Ph.D. dissertation, Harvard University, 1974.
53. Rumelhart, D.E., G.E. Hinton, and R.J. Williams, Learning representations by back-propagating errors. nature, 1986. 323(6088): p. 533-536.
54. Dauphin, Y., et al., Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. in Advances in Neural Information Processing Systems. 2014.
55. Hinton, G.E. and R.R. Salakhutdinov, Reducing the dimensionality of data with neural networks. science, 2006. 313(5786): p. 504-507.
56. Mohamed, A.-r., G.E. Dahl, and G. Hinton, Acoustic modeling using deep belief networks. IEEE transactions on audio, speech, and language processing, 2011. 20(1): p. 14-22.
57. Dahl, G.E., et al., Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on audio, speech, and language processing, 2011. 20(1): p. 30-42.
58. Stewart, M. Simple Introduction to Convolutional Neural Networks. 2019; Available from: https://towardsdatascience.com/simple-introduction-to-convolutional-neural-networks-cdf8d3077bac.
59. Chris. Using Constant Padding, Reflection Padding and Replication Padding with TensorFlow and Keras. 2020; Available from: https://www.machinecurve.com/index.php/2020/02/10/using-constant-padding-reflection-padding-and-replication-padding-with-keras/.
60. He, K., et al. Deep residual learning for image recognition. in Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
61. Ji, S., et al., 3D convolutional neural networks for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 2012. 35(1): p. 221-231.
62. Mittal, A. Understanding RNN and LSTM. 2019; Available from: https://towardsdatascience.com/understanding-rnn-and-lstm-f7cdf6dfc14e.
63. Olah, C. Understanding LSTM Networks. 2015; Available from: https://colah.github.io/posts/2015-08-Understanding-LSTMs/.
64. Shorten, C. and T.M. Khoshgoftaar, A survey on image data augmentation for deep learning. Journal of Big Data, 2019. 6(1): p. 60.
65. 許哲豪. 【AI Column】深度學習,從「框架」開始學起. 2018; Available from: https://makerpro.cc/2018/06/deep-learning-frameworks/.
66. renewang. 深度學習裡的冰與火之歌 : Tensorflow vs PyTorch系列 第 2 篇. 2019; Available from: https://ithelp.ithome.com.tw/articles/10216440.
67. Varma. Learning Anatomy. 2011; Available from: https://varmaanatomy.blogspot.com/2011/08/anatomical-positions-planes.html.
68. Stan. Running Biomechanics Primer. 2016; Available from: http://therunningstan.blogspot.com/2016/01/running-biomechanics-primer.html.