[1] M. Joesch, B. Schnell, S. V. Raghu, D. F. Reiff, and A. Borst, "ON and OFF pathways in Drosophila motion vision," Nature, vol. 468, no. 7321, p. 300, 2010.
[2] H. B. Barlow and R. M. Hill, "Selective sensitivity to direction of movement in ganglion cells of the rabbit retina," Science, vol. 139, no. 3553, pp. 412–414, 1963.
[3] J. S. Kim, M. J. Greene, A. Zlateski, K. Lee, M. Richardson, S. C. Turaga, M. Purcaro, M. Balkam, A. Robinson, B. F. Behabadi, et al., "Space–time wiring specificity supports direction selectivity in the retina," Nature, vol. 509, no. 7500, p. 331, 2014.
[4] D. H. Hubel and T. N. Wiesel, "Receptive fields of single neurones in the cat's striate cortex," The Journal of Physiology, vol. 148, no. 3, pp. 574–591, 1959.
[5] N. J. Strausfeld et al., "Atlas of an insect brain," 1976.
[6] T. Poggio and W. Reichardt, "Considerations on models of movement detection," Kybernetik, vol. 13, no. 4, pp. 223–227, 1973.
[7] H. Barlow and W. R. Levick, "The mechanism of directionally selective units in rabbit's retina," The Journal of Physiology, vol. 178, no. 3, pp. 477–504, 1965.
[8] E. H. Adelson and J. R. Bergen, "Spatiotemporal energy models for the perception of motion," JOSA A, vol. 2, no. 2, pp. 284–299, 1985.
[9] B. Hassenstein and W. Reichardt, "Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus," Zeitschrift für Naturforschung B, vol. 11, no. 9-10, pp. 513–524, 1956.
[10] J. P. Van Santen and G. Sperling, "Elaborated Reichardt detectors," JOSA A, vol. 2, no. 2, pp. 300–321, 1985.
[11] A. Borst and T. Euler, "Seeing things in motion: models, circuits, and mechanisms," Neuron, vol. 71, no. 6, pp. 974–994, 2011.
[12] K. Yonehara and B. Roska, "Motion detection: neuronal circuit meets theory," Cell, vol. 154, no. 6, pp. 1188–1189, 2013.
[13] A. Borst, "Fly visual course control: behaviour, algorithms and circuits," Nature Reviews Neuroscience, vol. 15, no. 9, pp. 590–599, 2014.
[14] A. Borst and M. Helmstaedter, "Common circuit design in fly and mammalian motion vision," Nature Neuroscience, vol. 18, no. 8, p. 1067, 2015.
[15] S.-y. Takemura, A. Nern, D. B. Chklovskii, L. K. Scheffer, G. M. Rubin, and I. A. Meinertzhagen, "The comprehensive connectome of a neural substrate for 'ON' motion detection in Drosophila," eLife, vol. 6, p. e24394, 2017.
[16] J. A. Strother, S.-T. Wu, A. M. Wong, A. Nern, E. M. Rogers, J. Q. Le, G. M. Rubin, and M. B. Reiser, "The emergence of directional selectivity in the visual motion pathway of Drosophila," Neuron, vol. 94, no. 1, pp. 168–182, 2017.
[17] A. Borst, "A biophysical mechanism for preferred direction enhancement in fly motion vision," PLoS Computational Biology, vol. 14, no. 6, p. e1006240, 2018.
[18] H. Eichner, M. Joesch, B. Schnell, D. F. Reiff, and A. Borst, "Internal structure of the fly elementary motion detector," Neuron, vol. 70, no. 6, pp. 1155–1164, 2011.
[19] B. K. P. Horn, "Robot vision," 1986.
[20] A. B. Watson and A. J. Ahumada Jr., "A look at motion in the frequency domain," 1983.
[21] D. J. Heeger, "Model for the extraction of image flow," JOSA A, vol. 4, no. 8, pp. 1455–1471, 1987.
[22] D. J. Heeger, "Optical flow using spatiotemporal filters," International Journal of Computer Vision, vol. 1, no. 4, pp. 279–302, 1988.
[23] D. J. Fleet and A. D. Jepson, "Computation of component image velocity from local phase information," International Journal of Computer Vision, vol. 5, no. 1, pp. 77–104, 1990.
[24] M. Sutton, W. Wolters, W. Peters, W. Ranson, and S. McNeill, "Determination of displacements using an improved digital correlation method," Image and Vision Computing, vol. 1, no. 3, pp. 133–139, 1983.
[25] B. D. Lucas, T. Kanade, et al., "An iterative image registration technique with an application to stereo vision," 1981.
[26] B. K. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, no. 1-3, pp. 185–203, 1981.
[27] G. Farnebäck, "Two-frame motion estimation based on polynomial expansion," in Scandinavian Conference on Image Analysis, pp. 363–370, Springer, 2003.
[28] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox, "FlowNet: Learning optical flow with convolutional networks," in Proceedings of the IEEE International Conference on Computer Vision, pp. 2758–2766, 2015.
[29] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox, "FlowNet 2.0: Evolution of optical flow estimation with deep networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2462–2470, 2017.
[30] A. R. Bruss and B. K. Horn, "Passive navigation," Computer Vision, Graphics, and Image Processing, vol. 21, no. 1, pp. 3–20, 1983.
[31] A. Wedel, T. Brox, T. Vaudrey, C. Rabe, U. Franke, and D. Cremers, "Stereoscopic scene flow computation for 3D motion understanding," International Journal of Computer Vision, vol. 95, no. 1, pp. 29–51, 2011.
[32] C. Mead, "Neuromorphic electronic systems," Proceedings of the IEEE, vol. 78, no. 10, pp. 1629–1636, 1990.
[33] R. R. Harrison, "A biologically inspired analog IC for visual collision detection," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 52, no. 11, pp. 2308–2318, 2005.
[34] T. Zhang, H. Wu, A. Borst, K. Kuhnlenz, and M. Buss, "An FPGA implementation of insect-inspired motion detector for high-speed vision systems," in 2008 IEEE International Conference on Robotics and Automation, pp. 335–340, IEEE, 2008.
[35] H. Wu, K. Zou, T. Zhang, A. Borst, and K. Kühnlenz, "Insect-inspired high-speed motion vision system for robot control," Biological Cybernetics, vol. 106, no. 8-9, pp. 453–463, 2012.
[36] I. Ridwan and H. Cheng, "An event-based optical flow algorithm for dynamic vision sensors," in International Conference on Image Analysis and Recognition, pp. 182–189, Springer, 2017.
[37] G. Haessig, A. Cassidy, R. Alvarez, R. Benosman, and G. Orchard, "Spiking optical flow for event-based sensors using IBM's TrueNorth neurosynaptic system," IEEE Transactions on Biomedical Circuits and Systems, vol. 12, no. 4, pp. 860–870, 2018.
[38] S. Marčelja, "Mathematical description of the responses of simple cortical cells," JOSA, vol. 70, no. 11, pp. 1297–1300, 1980.
[39] J. G. Daugman, "Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters," JOSA A, vol. 2, no. 7, pp. 1160–1169, 1985.
[40] I. Fogel and D. Sagi, "Gabor filters as texture discriminator," Biological Cybernetics, vol. 61, no. 2, pp. 103–113, 1989.
[41] D. Sun, S. Roth, and M. J. Black, "Secrets of optical flow estimation and their principles," in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2432–2439, IEEE, 2010.
[42] S. Baker, D. Scharstein, J. Lewis, S. Roth, M. J. Black, and R. Szeliski, "A database and evaluation methodology for optical flow," International Journal of Computer Vision, vol. 92, no. 1, pp. 1–31, 2011.
[43] J.-Y. Bouguet et al., "Pyramidal implementation of the affine Lucas Kanade feature tracker: Description of the algorithm," Intel Corporation, vol. 5, no. 1-10, p. 4, 2001.
[44] M. J. Black and P. Anandan, "A framework for the robust estimation of optical flow," in 1993 (4th) International Conference on Computer Vision, pp. 231–236, IEEE, 1993.
[45] A. Ranjan and M. J. Black, "Optical flow estimation using a spatial pyramid network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4161–4170, 2017.
[46] C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast semi-direct monocular visual odometry," in 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 15–22, IEEE, 2014.
[47] S. Baker and I. Matthews, "Lucas-Kanade 20 years on: A unifying framework," International Journal of Computer Vision, vol. 56, no. 3, pp. 221–255, 2004.
[48] O. Haggui, C. Tadonki, F. Sayadi, and O. Bouraoui, "Efficient GPU implementation of Lucas-Kanade through OpenACC," in Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), Prague, Czech Republic, vol. 5, pp. 768–775, 2019.