TL;DR. A cascade propagates longer in a neural network with stronger recurrent connections. The resulting ongoing activity allows stimuli to be recovered after a delay and can act as working memory.
Cortical neurons spike stochastically, and in networks these spikes propagate as cascades. When the distributions of cascade size and duration follow a power law, which they do (sort-of-maybe), the cascades are called neuronal avalanches and are thought to display self-organized criticality. While previous studies mostly use mean-field branching process theory to study cascades, we use linear dynamical systems to estimate cascading dynamics in a stochastic neuronal model and in living neuronal networks. We find that
- the dominant eigenvalue of the network constrains the shape of the distribution of cascade duration,
- cycles extend cascade duration (and spikes can propagate cyclically in living networks),
- highly controllable neurons (i.e., those that best control network states) best extend cascade duration, and
- long cascades allow delayed recovery of stimulus patterns.
Collectively, these results demonstrate how cascading neural networks could contribute to cognitive faculties that require persistent activation of neuronal patterns, such as working memory or attention.
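The first point above can be sketched with a toy simulation. In the sketch below (a minimal illustration, not the paper's model), spikes propagate probabilistically over a random nonnegative weight matrix rescaled to a chosen dominant eigenvalue; the network with the larger eigenvalue should sustain cascades for longer on average. The functions `random_network` and `cascade_duration` are hypothetical names introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_network(n, spectral_radius):
    # Random nonnegative weights in [0, 1], rescaled so the dominant
    # eigenvalue equals the requested spectral radius (an assumption
    # made for this illustration, not the paper's construction).
    W = rng.random((n, n)) / n
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def cascade_duration(W, max_steps=500):
    # Start one spike at a random neuron; at each step, active neuron i
    # activates neuron j with probability W[i, j]. The cascade ends when
    # no neuron is active; return the number of steps it survived.
    n = W.shape[0]
    active = np.zeros(n, dtype=bool)
    active[rng.integers(n)] = True
    for t in range(1, max_steps + 1):
        # P(j spikes next step) = 1 - P(no active neuron activates j)
        p = 1.0 - np.prod(1.0 - W[active], axis=0)
        active = rng.random(n) < p
        if not active.any():
            return t
    return max_steps

n = 100
for lam in (0.5, 0.9):
    W = random_network(n, lam)
    durations = [cascade_duration(W) for _ in range(200)]
    print(f"dominant eigenvalue {lam}: mean duration {np.mean(durations):.1f}")
```

Near the subcritical regime, expected activity decays roughly geometrically at a rate set by the dominant eigenvalue, so the run with the larger eigenvalue yields markedly longer mean cascade durations.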
You can read the full paper here (in revision).