
Detailed Record

Author (Chinese): 鍾超壹
Author (English): Chung, Chao-I
Title (Chinese): CIM深度學習模型之矽後校正
Title (English): Post-Silicon Calibration of CIM Deep Learning Model
Advisor (Chinese): 張世杰
Advisor (English): Chang, Shih-Chieh
Committee members (Chinese): 陳添福, 何宗易
Committee members (English): Chen, Tien-Fu; Ho, Tsung-Yi
Degree: Master's
University: National Tsing Hua University
Department: Department of Computer Science
Student ID: 109062534
Publication year (ROC calendar): 111 (2022)
Graduation academic year: 110
Language: English
Number of pages: 21
Keywords (Chinese): 深度學習網路, 類比人工智慧, 模型校正
Keywords (English): Deep neural networks; Analog AI; Model calibration
Abstract

Computing in memory (CIM) effectively reduces the data movement between the computing units and memory of a traditional processor, while exploiting the word-line and bit-line structure of the memory itself to perform massive parallel computation. It has become one of the leading candidates for next-generation high-performance, low-power AI computing. However, CIM's mixed-signal nature is vulnerable to design variations, which introduce considerable errors into the computed results. This thesis proposes using the expected inner-product value of each bit line as the basis for post-silicon chip calibration, and reducing the impact of these computation errors on the inference accuracy of the neural network through weight adjustment. Experiments on a binary convolutional network for keyword spotting (KWS) show that our method restores CIM chips whose accuracy had dropped to between 53.17% and 11.96% under different variation scales back to above 70%. Compared with variation-resilient circuit design and retraining methods in the literature, the proposed method has advantages in cost and time, making it suitable for mass-produced CIM.
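The calibration idea described above — characterize each bit line by comparing its measured inner products against the expected values, then compensate through weight adjustment — can be sketched roughly as follows. This is a simplified, hypothetical illustration only: it assumes a per-bit-line multiplicative gain error and allows continuous compensated weights, which is not necessarily the thesis's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary layer: weights and inputs in {-1, +1}.
W = rng.choice([-1.0, 1.0], size=(8, 64))      # 8 bit lines x 64 word lines
X = rng.choice([-1.0, 1.0], size=(64, 1000))   # calibration input patterns

# Hypothetical per-bit-line multiplicative variation (gain error).
gain = rng.normal(1.0, 0.2, size=(8, 1))
expected = W @ X                   # ideal (expected) inner products
measured = gain * expected         # what the noisy CIM macro would return

# Characterization: least-squares estimate of each bit line's gain
# from measured vs. expected inner-product statistics.
est_gain = (measured * expected).sum(axis=1) / (expected**2).sum(axis=1)

# Compensation: fold the inverse gain into the weights.
W_cal = W / est_gain[:, None]

# After calibration, the macro's output approximates the ideal one.
calibrated = gain * (W_cal @ X)
err_before = np.abs(measured - expected).mean()
err_after = np.abs(calibrated - expected).mean()
```

Under this idealized error model the estimated gain matches the true gain, so the residual error after compensation is essentially floating-point noise; real silicon would add non-multiplicative effects that the thesis's method must also absorb.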
Table of Contents
1 Introduction 1
2 Background 4
2.1 Computing in memory (CIM) . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Variations & impacts on CIM deep learning model . . . . . . . . . . . 7
2.3 Related works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3 Proposed Calibration Method 9
3.1 Variation characterization . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.2 Variation compensation . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4 Experiments 14
4.1 Experimental setting . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2.1 Embeddings reparation . . . . . . . . . . . . . . . . . . . . . . 15
4.2.2 Robustness comparison . . . . . . . . . . . . . . . . . . . . . . 16
4.2.3 Generalizability . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5 Conclusion 19
References 20
[1] Y.-C. Chiu, Z. Zhang, J.-J. Chen, X. Si, R. Liu, Y.-N. Tu, J.-W. Su, W.-H. Huang, J.-H. Wang, W.-C. Wei, J.-M. Hung, S.-S. Sheu, S.-H. Li, C.-I. Wu, R.-S. Liu, C.-C. Hsieh, K.-T. Tang, and M.-F. Chang, "A 4-kb 1-to-8-bit configurable 6T SRAM-based computation-in-memory unit-macro for CNN-based AI edge processors," IEEE Journal of Solid-State Circuits, vol. 55, no. 10, pp. 2790–2801, Oct. 2020.
[2] C.-S. Lin, F.-C. Tsai, J.-W. Su, S.-H. Li, T.-S. Chang, S.-S. Sheu, W.-C. Lo, S.-C. Chang, C.-I. Wu, and T.-H. Hou, "A 48 TOPS and 20943 TOPS/W 512kb computation-in-SRAM macro for highly reconfigurable ternary CNN acceleration," in 2021 IEEE Asian Solid-State Circuits Conference (A-SSCC), Nov. 2021, pp. 1–3.
[3] Q. Wang, Y. Park, and W. Lu, "Device variation effects on neural network inference accuracy in analog in-memory computing systems," Advanced Intelligent Systems, p. 2100199, Jan. 2022.
[4] Y.-W. Kang, C.-F. Wu, Y.-H. Chang, T.-W. Kuo, and S.-Y. Ho, "On minimizing analog variation errors to resolve the scalability issue of ReRAM-based crossbar accelerators," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 39, no. 11, pp. 3856–3867, Nov. 2020.
[5] Z.-H. Lee, F.-C. Tsai, and S.-C. Chang, "Robust binary neural network against noisy analog computation," in 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), Mar. 2022, pp. 484–489.
[6] V. Joshi, M. L. Gallo, S. Haefeli, I. Boybat, S. R. Nandakumar, C. Piveteau, M. Dazzi, B. Rajendran, A. Sebastian, and E. Eleftheriou, "Accurate deep neural network inference using computational phase-change memory," Nature Communications, vol. 11, no. 1, May 2020. [Online]. Available: https://doi.org/10.1038%2Fs41467-020-16108-9
[7] M. Qin and D. Vucinic, "Training recurrent neural networks against noisy computations during inference," 2018. [Online]. Available: https://arxiv.org/abs/1807.06555
[8] S. Yu, H. Jiang, S. Huang, X. Peng, and A. Lu, "Compute-in-memory chips for deep learning: Recent trends and prospects," IEEE Circuits and Systems Magazine, vol. 21, no. 3, pp. 31–56, Third Quarter 2021.
[9] H. Kim, J.-H. Bae, S. Lim, S.-T. Lee, Y.-T. Seo, D. Kwon, B.-G. Park, and J.-H. Lee, "Efficient precise weight tuning protocol considering variation of the synaptic devices and target accuracy," Neurocomputing, vol. 378, pp. 189–196, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0925231219314900
[10] M. Klachko, M. R. Mahmoodi, and D. B. Strukov, "Improving noise tolerance of mixed-signal neural networks," 2019. [Online]. Available: https://arxiv.org/abs/1904.01705
[11] C. Zhou, P. Kadambi, M. Mattina, and P. N. Whatmough, "Noisy machines: Understanding noisy neural networks and enhancing robustness to analog hardware errors using distillation," 2020. [Online]. Available: https://arxiv.org/abs/2001.04974
[12] P. Warden, "Speech commands: A dataset for limited-vocabulary speech recognition," 2018. [Online]. Available: https://arxiv.org/abs/1804.03209
[13] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," in Computer Vision – ECCV 2016, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds. Cham: Springer International Publishing, 2016, pp. 525–542.
 
 
 
 