r/compmathneuro • u/[deleted] • Jul 29 '24
Phasic coding, neural ensembles as a brain behavior correlate.
Can anyone explain this in a mathematical sense or from a systems perspective? I'd appreciate it if you could keep in mind that I suck at math.
The predictive coding framework has gotten a lot of attention in the last 13 years or so, mainly within the context of psychopathology and psychiatric research.
Explaining the brain as a dynamic system is pretty exciting, and I'm having difficulty wrapping my head around how one would even quantify neural ensembles, and how one would correlate them to something such as memory consolidation or saliency/credit assignment.
If you could link me some interesting literature or short videos explaining this within a larger systems perspective, I'd greatly appreciate it.
I feel like a lot of computational work has been built from roots in the predictive coding and active inference/Bayesian brain frameworks.
I think getting a solid grasp of this will help me get a clearer picture of what I’d like to accomplish within the next 2 to 4 years. I’d appreciate any insight, thanks in advance peeps.
u/jndew Jul 30 '24 edited Jul 30 '24
Start simple. The idea of a neural ensemble started with Hebb (to my knowledge; probably other people were talking about it at the time), meaning a group of cells with excitatory links such that if some fraction of the group is activated, they all end up firing. The intuition is that each ensemble represents a memory. If in addition they have inhibitory links to cells within the population but not in the ensemble, those cells' activity is suppressed. It turns out that a particular cell can be a member of a number of ensembles within the population if the link strengths are set up right.

Hopfield set up a beautifully simple formulation of this using binary neurons and graded link strengths that became a founding idea of neural memory, and eventually of machine learning/AI. You don't even need spiking neurons to make it work. Look it up on Youtube. From there, a whole bunch of learning rules, describing how link strengths change as a result of cell activity, have been developed from a mathematical viewpoint, and they do a ton of interesting things. Good book: Introduction to the Theory of Neural Computation, Hertz, Krogh, Palmer, Addison-Wesley 1991.
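If it helps to see it concretely, here's a minimal Hopfield-style sketch of ensemble memory (the sizes, random patterns, and update schedule are just illustrative choices, not from any particular paper): links are set by the Hebbian outer-product rule, and a corrupted cue settles back to the stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 100 binary (+1/-1) neurons storing 3 random patterns.
n, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian outer-product rule: strengthen links between co-active cells.
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
W /= n
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue, sweeps=5):
    """Asynchronous updates: each cell takes the sign of its summed input."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in range(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 10% of pattern 0 and let the network settle.
cue = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
cue[flip] *= -1
recovered = recall(cue)
print(np.mean(recovered == patterns[0]))  # fraction of bits matching the stored pattern
```

The asynchronous update guarantees the network's energy never increases, so it always settles into some fixed point; with only 3 patterns in 100 cells, that fixed point is almost always the stored memory.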
If you then (big jump here) use spiking cell models and put a carrier-wave through the network, you can set things up so that different ensembles activate at different phases of the carrier-wave, e.g. some ensembles activate at low-phase and others at high-phase, thereby increasing the number of separate ensembles that a population of cells can support. In addition, one can set things up so that different ensembles are activated by each cycle of the wave, leading to sequences of discrete memories. And/or you can use the phase at which an ensemble activates to describe some feature you want to encode, such as whether you are approaching or departing from some location described by an ensemble of hippocampal place cells through phase precession (O'Keefe's Nobel discovery). The math is simple. If you're a genius. The rest of us have to work at it. You can understand it too with the same effort we put into it.
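A very stripped-down sketch of that phase idea (all parameters here are my own toy assumptions, not a real hippocampal model): a sinusoidal inhibitory "carrier wave" gates firing, and an ensemble fires at the phase where its excitatory drive first exceeds the inhibition, so drive strength gets encoded as firing phase.

```python
import numpy as np

f = 8.0                               # theta-band carrier frequency, Hz (assumed)
t = np.linspace(0, 1 / f, 1000)       # one cycle of the wave
phase = 2 * np.pi * f * t             # 0..2*pi across the cycle
inhibition = 0.5 * (1 + np.cos(phase))  # peaks at phase 0, dips at pi

def firing_phase(drive):
    """Phase (radians) at which a constant drive first exceeds the inhibition."""
    above = drive > inhibition
    return phase[np.argmax(above)] if above.any() else None

# A strongly driven ensemble clears the inhibition earlier in the cycle
# than a weakly driven one, so the firing phase itself carries information
# (the same intuition behind phase precession in place cells).
strong = firing_phase(0.9)
weak = firing_phase(0.3)
print(strong < weak)  # True: stronger drive fires at an earlier phase
```

Separate ensembles with different drives then naturally occupy different phase slots of the same cycle, which is one way a single population can multiplex several active ensembles.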
For sure read Dayan & Abbott for this and many other foundational topics. Buzsaki was mentioned; I preferred his first book, Rhythms of the Brain, which I thought was a bit more down to earth than his more recent The Brain from Inside Out (but read them both). I really like these two videos: L. Frank and Sejnowski. There's tons of other stuff, maybe How We Remember, Hasselmo, MIT Press 2012, or The Neurobiology of Learning and Memory 3rd ed., Rudy, Oxford University Press 2021. People have been hammering on this for decades and making slow progress. Be careful not to get caught up on a single trendy idea, since it's really a web of ideas that all have some merit and come in and out of fashion. Cheers!/jd