[1] H. Bai and G. Shi, “Gas Sensors Based on Conducting Polymers,” Sensors, vol. 7, no. 3, pp. 267–307, Mar. 2007.
[2] C. C. Lu, C. C. Li, and H. Chen, “How Robust Is a Probabilistic Neural VLSI System Against Environmental Noise,” in Artificial Neural Networks in Pattern Recognition, pp. 44–53, 2008.
[3] T. B. Tang, H. Chen, and A. F. Murray, “Adaptive, integrated sensor processing to compensate for drift and uncertainty: a stochastic ‘neural’ approach,” IEE Proc. Nanobiotechnology, vol. 151, no. 1, pp. 28–34, Feb. 2004.
[4] J. R. Movellan, “A Learning Theorem for Networks at Detailed Stochastic Equilibrium,” Neural Comput., vol. 10, no. 5, pp. 1157–1178, Jul. 1998.
[5] G. E. Hinton and T. J. Sejnowski, Eds., Unsupervised Learning: Foundations of Neural Computation. Cambridge, MA: MIT Press, 1999.
[6] G. E. Hinton, “Training Products of Experts by Minimizing Contrastive Divergence,” Neural Comput., vol. 14, no. 8, pp. 1771–1800, 2002.
[7] K. G. Murty, Linear Programming. New York: John Wiley & Sons, 1983.
[8] O. L. Mangasarian, “Linear and Nonlinear Separation of Patterns by Linear Programming,” Oper. Res., vol. 13, no. 3, pp. 444–452, Jun. 1965.
[9] W. Zhou, L. Zhang, and L. Jiao, “Linear programming support vector machines,” 2001.
[10] A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” J. R. Stat. Soc. Ser. B, vol. 39, no. 1, pp. 1–38, 1977.
[11] L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. New York: Springer, 1996.
[12] A. A. Ferreira, T. B. Ludermir, and R. R. B. de Aquino, “A comparative study of neural network to artificial noses,” in Proc. IEEE International Joint Conference on Neural Networks, vol. 4, pp. 2081–2086, 2005.
[13] W. Jatmiko, T. Fukuda, F. Arai, and B. Kusumoputro, “Artificial odor discrimination system using multiple quartz-resonator sensor and neural network for recognizing fragrance mixtures,” in Proceedings of the 2004 International Symposium on, pp. 169–174, 2004.
[14] W. Jatmiko, T. Fukuda, and K. Sekiyama, “Optimized probabilistic neural networks in recognizing fragrance mixtures using higher number of sensors,” in Sensors, pp. 1026–1029, 2005.
[15] S. Guney and A. Atasoy, “Classification of n-butanol concentrations with k-NN algorithm and ANN in electronic nose,” in Innovations in Intelligent Systems and Applications, pp. 138–142, 2001.
[16] K.-T. Tang, S.-W. Chiu, M.-F. Chang, C.-C. Hsieh, and J.-M. Shyu, “A Low-Power Electronic Nose Signal-Processing Chip for a Portable Artificial Olfaction System,” IEEE Trans. Biomed. Circuits Syst., vol. 5, no. 4, pp. 380–390, Aug. 2011.
[17] X. He, S. Wei, and R. Wang, “Independent Component Analysis and Neural Network Applied on Electronic Nose System,” in Bioinformatics and Biomedical Engineering, pp. 490–493, 2008.
[18] H. GholamHosseini, D. Luo, H. Liu, and G. Xu, “Intelligent Processing of E-nose Information for Fish Freshness Assessment,” in Intelligent Sensors, Sensor Networks and Information, pp. 173–177, 2007.
[19] K. Z. Mao, K.-C. Tan, and W. Ser, “Probabilistic neural-network structure determination for pattern classification,” IEEE Trans. Neural Netw., vol. 11, no. 4, pp. 1009–1016, Jul. 2000.
[20] H. Chen and A. F. Murray, “Continuous restricted Boltzmann machine with an implementable training algorithm,” IEE Proc. Vis. Image Signal Process., vol. 150, no. 3, pp. 153–158, Jun. 2003.
[21] C. C. Lu, C. Y. Hong, and H. Chen, “A Scalable and Programmable Architecture for the Continuous Restricted Boltzmann Machine in VLSI,” in IEEE International Symposium on Circuits and Systems, pp. 1297–1300, 2007.
[22] K. Pearson, “On lines and planes of closest fit to systems of points in space,” Philos. Mag., vol. 2, no. 6, pp. 559–572, 1901.
[23] J.-H. Wang, “Design of Continuous Restricted Boltzmann Machine IC for Electronic Nose System,” NTHU, Hsinchu, Taiwan, 2013.
[24] T. M. Cover, “Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition,” IEEE Trans. Electron. Comput., vol. EC-14, no. 3, pp. 326–334, Jun. 1965.
[25] I. V. Tetko, D. J. Livingstone, and A. I. Luik, “Neural network studies. 1. Comparison of overfitting and overtraining,” J. Chem. Inf. Comput. Sci., vol. 35, no. 5, pp. 826–833, Sep. 1995.
[26] K. Ito and K. Kunisch, Lagrange Multiplier Approach to Variational Problems and Applications. Society for Industrial and Applied Mathematics, 2008.
[27] J. Bilmes, “A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models,” 1998.
[28] T. Schmah, G. E. Hinton, S. L. Small, S. Strother, and R. S. Zemel, “Generative versus discriminative training of RBMs for classification of fMRI images,” in Advances in Neural Information Processing Systems 21, pp. 1409–1416, 2008.
[29] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. Upper Saddle River, NJ, USA: Prentice Hall PTR, 1998.
[30] A. Krizhevsky, “Learning Multiple Layers of Features from Tiny Images,” 2009.
[31] C. C. Lu and H. Chen, “A Scalable and Programmable Probabilistic Generative Model in VLSI,” 2010.