
Brain-Inspired Artificial Intelligence

  • Published: 2021.06.01

Abstract

The field of brain science (or, more broadly, neuroscience) has long inspired researchers in artificial intelligence (AI). Findings from neuroscience, such as Hebb's rule, profoundly shaped early AI models, which have since evolved into today's state-of-the-art artificial neural networks. However, the recent progress in AI driven by deep learning architectures owes more to elaborate mathematical methods and the rapid growth of computing power than to neuroscientific inspiration. Meanwhile, major limitations such as opacity, lack of common sense, narrowness, and brittleness remain unresolved. To address these problems, many AI researchers are turning their attention back to neuroscience for insight and inspiration. Biologically plausible neural networks, spiking neural networks, and connectome-based networks exemplify such neuroscience-inspired approaches. In addition, the more recent field of brain network analysis is unveiling complex brain mechanisms by treating the brain as a dynamic graph model. We argue that progress toward human-level AI, the ultimate goal of the field, can be accelerated by leveraging novel findings about the human brain network.
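To make the neuroscience-derived principles named in the abstract concrete, here is a minimal Python sketch of Hebb's rule, the idea that a synapse strengthens in proportion to the correlated activity of the neurons it connects. All variable names, dimensions, and the learning rate are illustrative assumptions, not notation from the article.

```python
import numpy as np

# Hebb's rule: a weight grows in proportion to the product of
# presynaptic and postsynaptic activity ("fire together, wire together").
rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 2
W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))  # synaptic weights
eta = 0.01                                             # learning rate (assumed)

for _ in range(100):
    x = rng.random(n_inputs)       # presynaptic activity
    y = W @ x                      # postsynaptic activity of linear units
    W += eta * np.outer(y, x)      # Hebbian update: dW = eta * y * x^T
```

Similarly, the spiking neural networks mentioned in the abstract replace continuous activations with discrete spike events. A minimal leaky integrate-and-fire neuron, again a sketch with illustrative constants rather than any specific model from the article, looks like this:

```python
import numpy as np

# Leaky integrate-and-fire: the membrane potential leaks toward rest,
# integrates input current, and emits a spike on crossing a threshold.
dt, tau = 1.0, 20.0                    # time step and membrane time constant
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v, spikes = v_rest, []
rng = np.random.default_rng(1)

for t in range(200):
    i_in = 0.12 * rng.random()          # random input current (illustrative)
    v += (dt / tau) * (v_rest - v) + i_in   # leak toward rest, integrate input
    if v >= v_thresh:                   # threshold crossing: emit a spike
        spikes.append(t)
        v = v_reset                     # reset membrane potential

print(spikes)
```

Note that a pure Hebbian update grows without bound, which is one reason biologically plausible variants such as Oja's rule add normalization; likewise, practical spiking models layer refractory periods and spike-timing-dependent plasticity on top of this bare integrate-and-fire dynamic.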

Acknowledgement

This work was supported by the Research Operating Expense Program of the Electronics and Telecommunications Research Institute (ETRI) [21ZS1100, Research on Core Technologies for Autonomous-Growth Composite Artificial Intelligence].
