|
[1] Lin, S. T. (2011). Marching into molecular design. Asia‐Pacific Journal of Chemical Engineering, 6(2), 195-198. [2] Ng, L. Y., Chong, F. K., & Chemmangattuvalappil, N. G. (2015). Challenges and opportunities in computer-aided molecular design. Computers & Chemical Engineering, 81, 115-129. [3] Joback, K. G., & Reid, R. C. (1987). Estimation of pure-component properties from group-contributions. Chemical Engineering Communications, 57(1-6), 233-243. [4] Fredenslund, A., Jones, R. L., & Prausnitz, J. M. (1975). Group‐contribution estimation of activity coefficients in nonideal liquid mixtures. AIChE Journal, 21(6), 1086-1099. [5] Rogers, D., & Hopfinger, A. J. (1994). Application of genetic function approximation to quantitative structure-activity relationships and quantitative structure-property relationships. Journal of Chemical Information and Computer Sciences, 34(4), 854-866. [6] Emmert-Streib, F. (2012). Statistical modelling of molecular descriptors in QSAR/QSPR. John Wiley & Sons. [7] Chen, D. S., Wong, D. S. H., & Chen, C. Y. (1998). Neural network correlations of detonation properties of high energy explosives. Propellants, Explosives, Pyrotechnics, 23(6), 296-300. [8] Járvás, G., Quellet, C., & Dallos, A. (2011). Estimation of Hansen solubility parameters using multivariate nonlinear QSPR modeling with COSMO screening charge density moments. Fluid Phase Equilibria, 309(1), 8-14. [9] Faber, F. A., Hutchison, L., Huang, B., Gilmer, J., Schoenholz, S. S., Dahl, G. E., ... & Von Lilienfeld, O. A. (2017). Prediction errors of molecular machine learning models lower than hybrid DFT error. Journal of chemical theory and computation, 13(11), 5255-5264. [10] Goh, G. B., Siegel, C., Vishnu, A., Hodas, N. O., & Baker, N. (2017). Chemception: a deep neural network with minimal chemistry knowledge matches the performance of expert-developed QSAR/QSPR models. arXiv preprint arXiv:1706.06689. [11] Grambow, C. A., Li, Y. P., & Green, W. H. (2019). Accurate thermochemistry with small data sets: A bond additivity correction and transfer learning approach. The Journal of Physical Chemistry A, 123(27), 5826-5835. [12] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. [13] Weininger, D. (1988). SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. Journal of chemical information and computer sciences, 28(1), 31-36. [14] Durant, J. L., Leland, B. A., Henry, D. R., & Nourse, J. G. (2002). Reoptimization of MDL keys for use in drug discovery. Journal of chemical information and computer sciences, 42(6), 1273-1280. [15] Klamt, A. (1995). Conductor-like screening model for real solvents: a new approach to the quantitative calculation of solvation phenomena. The Journal of Physical Chemistry, 99(7), 2224-2235. [16] Jaeger, S., Fulle, S., & Turk, S. (2018). Mol2vec: unsupervised machine learning approach with chemical intuition. Journal of chemical information and modeling, 58(1), 27-35. [17] Ma, R., Liu, Z., Zhang, Q., Liu, Z., & Luo, T. (2019). Evaluating Polymer Representations via Quantifying Structure–Property Relationships. Journal of chemical information and modeling, 59(7), 3110-3119. [18] Thompson, B. (1984). Canonical correlation analysis: Uses and interpretation (No. 47). Sage. [19] Jolliffe, I. T. (1982). A note on the use of principal components in regression. Journal of the Royal Statistical Society: Series C (Applied Statistics), 31(3), 300-303. [20] O'Boyle, N. M., Banck, M., James, C. A., Morley, C., Vandermeersch, T., & Hutchison, G. R. (2011). Open Babel: An open chemical toolbox. Journal of cheminformatics, 3(1), 33. [21] McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4), 115-133. [22] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324. [23] Han, J., & Moraga, C. (1995, June). The influence of the sigmoid function parameters on the speed of backpropagation learning. In International Workshop on Artificial Neural Networks (pp. 195-201). Springer, Berlin, Heidelberg. [24] Nair, V., & Hinton, G. E. (2010, January). Rectified linear units improve restricted boltzmann machines. In ICML. [25] Raschka, S. (2016). Introduction to Artificial Neural Networks and Deep Learning. [26] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324. [27] Cauchy, A. (1847). Méthode générale pour la résolution des systemes d’équations simultanées. Comp. Rend. Sci. Paris, 25(1847), 536-538. [28] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. nature, 323(6088), 533-536. [29] Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(7). [30] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. [31] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. nature, 323(6088), 533-536. [32] Hermans, M., & Schrauwen, B. (2013). Training and analysing deep recurrent neural networks. In Advances in neural information processing systems (pp. 190-198). [33] Kosiorowski, D., Mielczarek, D., & Rydlewski, J. (2017). Forecasting of a hierarchical functional time series on example of macromodel for day and night air pollution in silesia region: a critical overview. arXiv preprint arXiv:1712.03797. [34] Hochreiter, S., Bengio, Y., Frasconi, P., & Schmidhuber, J. (2001). Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. [35] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. [36] Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. [37] Jozefowicz, R., Zaremba, W., & Sutskever, I. (2015, June). An empirical exploration of recurrent network architectures. In International conference on machine learning (pp. 2342-2350). [38] Harris, Z. S. (1954). Distributional structure. Word, 10(2-3), 146-162. [39] Klamt, A., & Eckert, F. (2000). COSMO-RS: a novel and efficient method for the a priori prediction of thermophysical data of liquids. Fluid Phase Equilibria, 172(1), 43-72. [40] Lin, S. T., & Sandler, S. I. (2002). A priori phase equilibrium prediction from a segment contribution solvation model. Industrial & engineering chemistry research, 41(5), 899-913. [41] Islam, M. R., & Chen, C. C. (2015). COSMO-SAC sigma profile generation with conceptual segment concept. Industrial & Engineering Chemistry Research, 54(16), 4441-4454. [42] Cereto-Massagué, A., Ojeda, M. J., Valls, C., Mulero, M., Garcia-Vallvé, S., & Pujadas, G. (2015). Molecular fingerprint similarity search in virtual screening. Methods, 71, 58-63. [43] Glen, R. C., Bender, A., Arnby, C. H., Carlsson, L., Boyer, S., & Smith, J. (2006). Circular fingerprints: flexible molecular descriptors with applications from physical chemistry to ADME. IDrugs, 9(3), 199. [44] Xue, L., Godden, J. W., Stahura, F. L., & Bajorath, J. (2003). Design and evaluation of a molecular fingerprint involving the transformation of property descriptor values into a binary classification scheme. Journal of chemical information and computer sciences, 43(4), 1151-1157. [45] James, C. A., Weininger, D., & Delany, J. (1995). Daylight Theory Manual. Daylight Chemical Information Systems. Inc., Irvine, CA. [46] Morgan, H. L. (1965). The generation of a unique machine description for chemical structures-a technique developed at chemical abstracts service. Journal of Chemical Documentation, 5(2), 107-113. [47] Novak, R., Bahri, Y., Abolafia, D. A., Pennington, J., & Sohl-Dickstein, J. (2018). Sensitivity and generalization in neural networks: an empirical study. arXiv preprint arXiv:1802.08760. [48] Abdi, H., & Williams, L. J. (2010). Principal component analysis. Wiley interdisciplinary reviews: computational statistics, 2(4), 433-459. [49] Ramakrishnan, R., Dral, P. O., Rupp, M., & Von Lilienfeld, O. A. (2014). Quantum chemistry structures and properties of 134 kilo molecules. Scientific data, 1(1), 1-7. [50] Frisch, M. J. E. A., Trucks, G. W., Schlegel, H. B., Scuseria, G. E., Robb, M. A., Cheeseman, J. R., ... & Nakatsuji, H. (2009). Gaussian 09, revision D. 01. [51] De Boer, P. T., Kroese, D. P., Mannor, S., & Rubinstein, R. Y. (2005). A tutorial on the cross-entropy method. Annals of operations research, 134(1), 19-67.
|