The ideas in this article are strongly reminiscent of those in Jeff Hawkins' "On Intelligence" (2004).
The idea of the brain operating (at least in part) as a "prediction machine" is certainly not a new one. I'm actually surprised it's taken this long for this sort of experimental confirmation and theory to become more mainstream.
"Being You" by Anil Seth, released last month, is a great primer on predictive processing and its relationship to the science of consciousness. It's a great complement to Hawkins' latest. https://www.amazon.com/dp/B08W2J9WWD
>The mind has a basic habit, which is to create things. In fact, when the Buddha describes causality, how experiences come about, he says that the power of creation or sankhara—the mental tendency to put things together—actually comes prior to our sensory experience. It's because the mind is active, actively putting things together, that it knows things.
>The problem is that most of its actions, most of its creations, come out of ignorance, so the kind of knowledge that comes from those creations can be misleading.
The second paragraph is getting off the topic - or is it?
In any serious discussion of neuroscience, I'd advise keeping Jeff Hawkins out of it. Not only are the ideas he publishes often gross simplifications with little data to back them up (though that does wonders as marketing), they were ideas originally pushed and developed by real working neuroscientists. Just my opinion.
isn't that, at face value, pretty much what Numenta's goal is though? They read neuroscience research papers and try to distill them down to applicable engineering problems for their HTM and see what works.
i can't speak much as to their original research efforts, but i personally appreciate the engineering-research side of what they are doing.
I have read the book and I really liked the first half, which explains the "thousand brains theory of intelligence". Very inspiring and thought-provoking (at least to me as an interested amateur in this field). The second half, however, would better have been a book of its own. It's about Hawkins' ideas on AGI implications and whatnot, which is quite entertaining but devalues the first half, in a way.
It’s basically how large language models like GPT-3 work. They simply predict the next most probable word, and suddenly you can prompt them to give you answers to rather intelligent tasks.
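To make the "predict the next most probable word" idea concrete, here's a minimal toy sketch using bigram counts over a tiny made-up corpus. This is obviously nothing like GPT-3's transformer architecture; it just illustrates the same underlying objective of next-token prediction.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the brain is a prediction machine the brain predicts the next word".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word`, or None if unseen."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "brain" follows "the" most often in this corpus
```

Scale the corpus up by many orders of magnitude and replace the counting with a learned neural model, and you get the rough shape of what LLMs do.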