Neural nets, invented and evolved
This recent overview of recurrent neural networks recognizes that this work has long diverged from what is happening in neuroscience:
You might think that the study of artificial neural networks requires a sophisticated understanding of neuroscience. But you’d be wrong. Most press coverage overstates the connection between real brains and artificial neural nets. Biology has inspired much foundational work in computer science. But truthfully, most deep-learning researchers, ourselves included, actually know little about the brain.
The building blocks are still the McCulloch-Pitts model of the neuron, described in 1943. So it was interesting to see, at about the same time, how neuroscientists are looking at actual neurons. As expected, biology turns out to be far more complex. Individual neurons sometimes have thousands of synapses because they are recognizing not just patterns, but temporal sequences of patterns:
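The 1943 model reduces a neuron to a weighted sum of binary inputs passed through a threshold. A minimal sketch (the function name, weights, and thresholds here are illustrative, not from any particular library):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts neuron: fires (returns 1) if and only if the
    weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With equal weights and a threshold of 2, the unit acts as an AND gate:
# it fires only when both inputs are active.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # → 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # → 0
```

Note how far this is from the biological picture in the quote below: there is no notion of time, sequence, or dendritic structure, only a static sum and a cutoff.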
We propose that basal dendrites of a neuron recognize patterns of cell activity that precede the neuron firing; in this way, the basal dendrites learn and store transitions between activity patterns.
That said, it is not obvious that AI researchers should move to more complex kinds of neuron. Studying the natural brain should yield fruitful ideas for engineered computation. But there is no reason to think that the natural architecture is optimal, or even necessary, for engineering various kinds of intelligence. The brain evolved naturally, and the path dependence that implies is everything. Evolution and biology are always messy.