[1] M. G. Lopez P., H. Molina Lozano, L. P. Sanchez F., and L. N. Oliva Moreno, “Blind source separation of audio signals using independent component analysis and wavelets,” in CONIELECOMP 2011, 21st International Conference on Electrical Communications and Computers, 2011, pp. 152–157.
[2] L. Sun and Q. Cheng, “Real-time microphone array processing for sound source separation and localization,” in 2013 47th Annual Conference on Information Sciences and Systems (CISS), 2013, pp. 1–6.
[3] J. Nikunen and T. Virtanen, “Direction of arrival based spatial covariance model for blind sound source separation,” IEEE/ACM Trans. Audio, Speech, and Language Processing, vol. 22, no. 3, pp. 727–739, Mar. 2014.
[4] Y. Yang, Z. Li, X. Wang, and D. Zhang, “Noise source separation based on the blind source separation,” in 2011 Chinese Control and Decision Conference (CCDC), 2011, pp. 2236–2240.
[5] L. Wang, T. Gerkmann, and S. Doclo, “Noise PSD estimation using blind source separation in a diffuse noise field,” in Proc. 13th International Workshop on Acoustic Signal Enhancement (IWAENC), 2012, pp. 1–4.
[6] A. Hajisami, H. Viswanathan, and D. Pompili, “‘Cocktail party in the cloud’: Blind source separation for co-operative cellular communication in cloud RAN,” in 2014 IEEE 11th International Conference on Mobile Ad Hoc and Sensor Systems, 2014, pp. 37–45.
[7] Y. Guo, G. R. Naik, and H. Nguyen, “Single channel blind source separation based local mean decomposition for biomedical applications,” in 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2013, pp. 6812–6815.
[8] C. Lin and E. Hasting, “Blind source separation of heart and lung sounds based on nonnegative matrix factorization,” in 2013 International Symposium on Intelligent Signal Processing and Communication Systems, 2013, pp. 731–736.
[9] M. Y. Abbass, S. A. Shehata, S. S. Haggag, S. M. Diab, B. M. Salam, S. El-Rabaie, and F. E. Abd El-Samie, “Blind separation of noisy images using finite Ridgelet Transform and wavelet de-noising,” in 2013 Second International Japan-Egypt Conference on Electronics, Communications and Computers (JEC-ECC), 2013, pp. 176–181.
[10] J.-F. Cardoso, “Blind signal separation: statistical principles,” Proc. IEEE, vol. 86, no. 10, pp. 2009–2025, 1998.
[11] M. Zibulevsky and B. A. Pearlmutter, “Blind source separation by sparse decomposition in a signal dictionary,” Neural Computation, vol. 13, no. 4, pp. 863–882, Apr. 2001.
[12] A. S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound, Cambridge, MA: MIT Press, 1990.
[13] J.-F. Cardoso, “Source separation using higher order moments,” in International Conference on Acoustics, Speech, and Signal Processing, 1989, pp. 2109–2112.
[14] A. Mansour, M. Kawamoto, and N. Ohnishi, “Blind separation for instantaneous mixture of speech signals: algorithms and performances,” in 2000 TENCON Proceedings. Intelligent Systems and Technologies for the New Millennium (Cat. No.00CH37119), 2000, vol. 1, pp. 26–32.
[15] D.-T. Pham, “Blind separation of instantaneous mixture of sources based on order statistics,” IEEE Trans. Signal Processing, vol. 48, no. 2, pp. 363–375, 2000.
[16] M. Z. Ikram and D. R. Morgan, “A beamforming approach to permutation alignment for multichannel frequency-domain blind speech separation,” in IEEE International Conference on Acoustics, Speech and Signal Processing, 2002, vol. 1, pp. I-881–I-884.
[17] S. Kurita, H. Saruwatari, S. Kajita, K. Takeda, and F. Itakura, “Evaluation of blind signal separation method using directivity pattern under reverberant conditions,” in 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.00CH37100), 2000, vol. 5, pp. 3140–3143.
[18] K. Toyama and M. D. Plumbley, “Using phase linearity in frequency-domain ICA to tackle the permutation problem,” in 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, 2009, pp. 3165–3168.
[19] K. Matsuoka, “Minimal distortion principle for blind source separation,” in Proceedings of the 41st SICE Annual Conference (SICE 2002), 2002, vol. 4, pp. 2138–2143.
[20] F. Nesta, T. S. Wada, and B.-H. Juang, “Coherent spectral estimation for a robust solution of the permutation problem,” in 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2009, pp. 105–108.
[21] D. Nion, K. N. Mokios, N. D. Sidiropoulos, and A. Potamianos, “Batch and adaptive PARAFAC-based blind separation of convolutive speech mixtures,” IEEE Trans. Audio, Speech, and Language Processing, vol. 18, no. 6, pp. 1193–1207, Aug. 2010.
[22] H. Sawada, S. Araki, and S. Makino, “Measuring dependence of bin-wise separated signals for permutation alignment in frequency-domain BSS,” in 2007 IEEE International Symposium on Circuits and Systems, 2007, pp. 3247–3250.
[23] H. Sawada, R. Mukai, S. Araki, and S. Makino, “A robust and precise method for solving the permutation problem of frequency-domain blind source separation,” IEEE Trans. Speech and Audio Processing, vol. 12, no. 5, pp. 530–538, Sep. 2004.
[24] W. Li, J. Liu, J. Du, and S. Bai, “Solving permutation problem in frequency-domain blind source separation using microphone sub-arrays,” in 2008 International Conference on Neural Networks and Signal Processing, 2008, pp. 67–72.
[25] R. Mazur, J. O. Jungmann, and A. Mertins, “A new clustering approach for solving the permutation problem in convolutive blind source separation,” in 2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2013, pp. 1–4.
[26] L. Parra and C. Spence, “Convolutive blind separation of non-stationary sources,” IEEE Trans. Speech and Audio Processing, vol. 8, no. 3, pp. 320–327, May 2000.
[27] D. Pham, C. Serviere, and H. Boumaraf, “Blind separation of convolutive audio mixtures using nonstationarity,” in Proc. ICA, 2003.
[28] C. Jutten and J. Herault, “Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture,” Signal Processing, vol. 24, no. 1, pp. 1–10, 1991.
[29] P. Comon, “Independent component analysis, a new concept?,” Signal Processing, vol. 36, no. 3, pp. 287–314, 1994.
[30] A. J. Bell and T. J. Sejnowski, “An information-maximization approach to blind separation and blind deconvolution,” Neural Computation, vol. 7, no. 6, pp. 1129–1159, Nov. 1995.
[31] A. Hyvärinen, “Fast and robust fixed-point algorithms for independent component analysis,” IEEE Trans. Neural Networks, vol. 10, no. 3, pp. 626–634, 1999.
[32] A. Hyvärinen and E. Oja, “Independent component analysis: algorithms and applications,” Neural Networks, vol. 13, no. 4, pp. 411–430, 2000.
[33] T. Cover and J. Thomas, Elements of Information Theory, John Wiley & Sons, 2012.
[34] M. Jones and R. Sibson, “What is projection pursuit?,” Journal of the Royal Statistical Society, Series A (General), pp. 1–37, 1987.
[35] A. Hyvärinen, “New approximations of differential entropy for independent component analysis and projection pursuit,” in Advances in Neural Information Processing Systems, vol. 10, pp. 273–279, 1998.
[36] L. Tong, V. C. Soon, Y. F. Huang, and R. Liu, “AMUSE: a new blind identification algorithm,” in IEEE International Symposium on Circuits and Systems, 1990, pp. 1784–1787.
[37] H. Shen and K. Huper, “Newton-like methods for parallel independent component analysis,” in 2006 16th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing, 2006, pp. 283–288.
[38] S. Choi, S. Amari, A. Cichocki, and R. Liu, “Natural gradient learning with a nonholonomic constraint for blind deconvolution of multiple channels,” in Proc. First International Workshop on Independent Component Analysis and Signal Separation, 1999.
[39] A. J. Bell and T. J. Sejnowski, “An information-maximization approach to blind separation and blind deconvolution,” Neural Computation, vol. 7, no. 6, pp. 1129–1159, Nov. 1995.
[40] D. Luenberger, Optimization by Vector Space Methods, John Wiley & Sons, 1969.
[41] W. Zhang, J. Liu, J. Sun, and S. Bai, “A new two-stage approach to underdetermined blind source separation using sparse representation,” in 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), 2007, vol. 3, pp. III-953–III-956.
[42] V. G. Reju, “Underdetermined convolutive blind source separation via time–frequency masking,” IEEE Trans. Audio, Speech, and Language Processing, vol. 18, no. 1, pp. 101–116, Jan. 2010.
[43] M. Joho, H. Mathis, and R. Lambert, “Overdetermined blind source separation: Using more sensors than source signals in a noisy mixture,” in Proc. ICA, 2000.
[44] Y. Xue and Y. Wang, “A novel method for overdetermined blind source separation,” in The 2nd International Conference on Information Science and Engineering, 2010, pp. 1751–1754.
[45] R. Aichner, S. Araki, S. Makino, T. Nishikawa, and H. Saruwatari, “Time domain blind source separation of non-stationary convolved signals by utilizing geometric beamforming,” in Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing, 2002, pp. 445–454.
[46] A. Oppenheim, R. Schafer, and J. Buck, Discrete-Time Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1989.
[47] E. Bingham and A. Hyvärinen, “A fast fixed-point algorithm for independent component analysis of complex valued signals,” International Journal of Neural Systems, vol. 10, no. 1, pp. 1–8, 2000.
[48] H. Sawada, R. Mukai, and S. Kethulle, “Spectral smoothing for frequency-domain blind source separation,” in Proc. IWAENC, 2003.
[49] S. Araki, R. Mukai, S. Makino, T. Nishikawa, and H. Saruwatari, “The fundamental limitation of frequency domain blind source separation for convolutive mixtures of speech,” IEEE Trans. Speech and Audio Processing, vol. 11, no. 2, pp. 109–116, Mar. 2003.
[50] E. Vincent, R. Gribonval, and C. Fevotte, “Performance measurement in blind audio source separation,” IEEE Trans. Audio, Speech, and Language Processing, vol. 14, no. 4, pp. 1462–1469, Jul. 2006.
[51] http://www.kecl.ntt.co.jp/icl/signal/sawada/demo/bss2to4/index.html