Stories as Predictions
In the Story Model, a story is a generative hypothesis about "what is happening / what will happen to me": the brain's best current guess about what's happening and what comes next, competing with other stories inside a hierarchical predictive-control system. The brain continuously minimizes prediction error (free energy) by adjusting its internal stories—perception and belief revision—and by acting to make the world match the story—policy selection (Friston, 2010; Clark, 2013).
Survival Relevance as Precision
Perceived survival relevance is the system's estimate that a story matters for keeping the organism within viable bounds (allostasis). In predictive-processing terms, this estimate maps closely onto precision—a weight (inverse variance) that scales the impact of a story's prediction errors on inference and control.
Precision, strictly speaking, reflects the brain's confidence that a prediction error is reliable and worth acting on. Survival relevance enters the system through prior preferences—the organism's built-in model of what bodily and environmental states are compatible with staying alive (Friston, 2010). These two mechanisms work together: when a story's prediction errors concern states far from viable bounds, the brain assigns those errors high precision, which amplifies their influence on what you perceive, feel, and do. When a story concerns states within comfortable bounds, precision stays low and the story fades into the background.
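As a minimal sketch, this gain logic can be written out directly. The viable range, the quadratic gain rule, and all numbers below are illustrative assumptions, not quantities from the cited work:

```python
# Sketch: precision as a gain on prediction errors that rises as a
# predicted state approaches the edge of a viable range. The gain rule
# and parameters are illustrative assumptions.

def precision_weight(predicted, viable_low, viable_high, base=0.1, slope=2.0):
    """Assign higher precision the closer `predicted` is to leaving
    the viable interval [viable_low, viable_high]."""
    mid = (viable_low + viable_high) / 2
    half_range = (viable_high - viable_low) / 2
    # 0 at the midpoint, 1 at the boundary, >1 outside it
    deviation = abs(predicted - mid) / half_range
    return base * (1 + slope * deviation ** 2)

def weighted_error(observed, predicted, precision):
    """Precision-weighted prediction error: the signal that, in this
    framework, drives inference and control."""
    return precision * (observed - predicted)

# A story about core temperature far from viable bounds gets amplified:
calm = precision_weight(37.0, 36.0, 38.0)   # near midpoint -> low gain
alarm = precision_weight(38.4, 36.0, 38.0)  # outside bounds -> high gain
```

The same prediction error thus has a much larger downstream effect when the story it belongs to concerns states near or beyond the viable bounds.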
Precision is also redistributed across sensory channels in a context-sensitive fashion—attention itself can be understood as the optimization of precision during hierarchical inference (Feldman & Friston, 2010)—and epistemic actions that resolve uncertainty are themselves driven by expected changes in precision (Parr & Friston, 2017).
Precision is context-dependent: interoceptive and exteroceptive cues signaling current and anticipated metabolic demands modulate the gain on prediction errors, shaping which stories dominate inference at any moment (Barrett & Simmons, 2015).
Neuromodulators: Broadcasting Survival Relevance
Multiple neuromodulatory systems have been proposed to regulate gain, uncertainty, salience, and policy confidence—functions that map onto precision in the predictive-processing framework.
Norepinephrine (from the locus coeruleus) adjusts gain—the responsivity of cortical neurons—amplifying contrast between activated and inhibited populations. It toggles between exploitation (phasic mode, when a story's predictions are paying off) and exploration (tonic mode, when utility in the current task wanes and a new story may be needed), amplifying stories whose outcomes look survival-relevant under current uncertainty (Aston-Jones & Cohen, 2005).
Dopamine (from midbrain circuits) serves two related functions that predictive-processing and reward-learning frameworks describe in different vocabularies. In reward-learning terms, phasic dopamine signals encode prediction errors—the difference between expected and actual outcomes—that calibrate which cues and actions are improving survival proxies (Schultz, Dayan, & Montague, 1997). In parallel, dopamine tags cues and actions with incentive salience ("wanting"), biasing selection toward stories promising improved conditions (Berridge & Robinson, 1998).
Active inference models interpret this same mechanism as dopamine adjusting the precision over policies—the confidence the brain places in different action plans—so that plans with better predicted payoffs receive stronger weighting (FitzGerald, Dolan, & Friston, 2015). These three accounts were once seen as competing: Berridge and Robinson showed that dopamine depletion spares learning, directly challenging the prediction-error-drives-learning reading of Schultz. FitzGerald and colleagues offer one reinterpretation that eases the tension, proposing that dopamine sets the precision (confidence) on action policies rather than driving learning per se—preserving the prediction-error signal while explaining why learning survives dopamine loss. Whether this fully resolves the debate remains open. Taken together, dopamine steers behavior toward stories whose outcomes the brain estimates as survival-relevant.
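The policy-precision reading can be sketched as a softmax whose inverse temperature plays the role FitzGerald and colleagues assign to dopamine. The two-policy setup and all values are illustrative assumptions:

```python
# Sketch: policy precision as a softmax inverse temperature (gamma).
# Higher gamma = more confidence in expected policy values = sharper
# commitment to the best plan. Values are illustrative assumptions.
import math

def policy_probs(values, gamma):
    """Softmax over expected policy values with precision gamma."""
    exps = [math.exp(gamma * v) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

values = [1.0, 0.5]          # expected payoff of two candidate policies

low = policy_probs(values, gamma=0.5)   # low precision: near-random choice
high = policy_probs(values, gamma=8.0)  # high precision: commit to the best
```

Under this reading, dopamine loss flattens the choice distribution (behavior becomes less decisive) without erasing the learned values themselves, which is consistent with learning surviving depletion.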
Acetylcholine and norepinephrine together separate expected from unexpected uncertainty, setting attention and learning rates—how strongly a story should control inference and behavior in volatile vs. stable environments (Yu & Dayan, 2005).
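A crude sketch of that idea: treat prediction errors much larger than expected noise as a sign of unexpected uncertainty, and raise the learning rate accordingly. The threshold and rates below are assumptions, not Yu and Dayan's actual model:

```python
# Sketch: unexpected uncertainty raises learning rates. When recent
# errors exceed what expected (within-regime) noise accounts for, the
# environment may have changed, so trust the old story less and update
# faster. Threshold and rates are illustrative assumptions.

def learning_rate(recent_error, expected_noise, lr_stable=0.05, lr_volatile=0.5):
    """Two-regime rule: large errors relative to expected noise signal
    possible environmental change (unexpected uncertainty)."""
    if abs(recent_error) > 2 * expected_noise:
        return lr_volatile   # world may have changed: relearn quickly
    return lr_stable         # ordinary noise: keep the current story

def update(belief, observation, expected_noise):
    error = observation - belief
    return belief + learning_rate(error, expected_noise) * error

stable = update(10.0, 10.5, expected_noise=1.0)   # small error, slow update
shifted = update(10.0, 16.0, expected_noise=1.0)  # big error, fast update
```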
Feelings: The Survival Readout
Feelings in this framework are the organism's readout of how much the active story is forecast to change the body's energy budget and threat profile. Limbic and visceromotor areas issue top-down allostatic predictions about internal bodily state—anticipatory motor commands that regulate the body in advance of expected demands (Barrett & Simmons, 2015).
Active inference models describe the same process in terms of precision-weighted interoceptive prediction errors: affective experience may arise when interoceptive prediction errors are precision-weighted as significant (Seth, 2013). The convergence of these two frameworks—Barrett's constructed emotion and Friston's active inference—gives us a coherent picture: strong feelings arise when the brain's story about what's happening carries survival-relevant implications for the body, and the brain is confident those implications are real.
These affective signals interact with frontal control circuits that compute the Expected Value of Control (EVC)—an online cost-benefit estimate of deploying effortful, deliberate control in service of the active story (Shenhav, Botvinick, & Cohen, 2013). When EVC is high—expected payoff justifies the cost of cognitive effort—deliberate control is recruited. When EVC is low or stress is high, behavior defaults to faster, automatized processing (Aston-Jones & Cohen, 2005; Maier & Seligman, 2016).
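The cost-benefit structure of EVC can be sketched in a few lines; the probabilities, payoffs, and effort cost are illustrative assumptions rather than quantities from Shenhav and colleagues:

```python
# Sketch: Expected Value of Control as a cost-benefit comparison.
# Recruit effortful control only when the expected payoff of controlled
# processing beats its effort cost plus the payoff of the automatic
# default. All quantities are illustrative assumptions.

def expected_value_of_control(p_success_controlled, payoff, effort_cost,
                              p_success_automatic):
    controlled = p_success_controlled * payoff - effort_cost
    automatic = p_success_automatic * payoff
    return controlled - automatic   # positive -> deliberate control pays

# High stakes: control pays for itself despite the effort cost.
engage = expected_value_of_control(0.9, payoff=10.0, effort_cost=2.0,
                                   p_success_automatic=0.5)
# Low stakes: the effort cost swamps the benefit; the default wins.
default = expected_value_of_control(0.9, payoff=1.0, effort_cost=2.0,
                                    p_success_automatic=0.5)
```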
Learning: Tuning the System
Learning calibrates the whole system toward survival relevance. Positive and negative reward prediction errors from the dopaminergic midbrain recalibrate which cues and actions are actually improving survival proxies—resource acquisition, social safety, threat avoidance—strengthening stories that paid off and weakening those that didn't (Schultz et al., 1997). Memory systems preferentially encode content processed for survival relevance, even with abstract laboratory stimuli, demonstrating a built-in bias toward better retention of survival-relevant stories (Nairne, Thompson, & Pandeirada, 2007).
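One standard formalization of this recalibration is a Rescorla-Wagner-style update driven by the reward prediction error. The cue names, learning rate, and reward values below are illustrative assumptions:

```python
# Sketch: Rescorla-Wagner-style value updates driven by reward
# prediction errors, one standard reading of the dopaminergic signal
# (Schultz et al., 1997). Stories (here, cue values) that keep paying
# off are strengthened; those that keep failing are weakened.

ALPHA = 0.2  # learning rate (illustrative assumption)

def rw_update(value, reward):
    """value <- value + alpha * (reward - value); the parenthesized
    term is the reward prediction error."""
    return value + ALPHA * (reward - value)

v_safe_route = 0.5
v_risky_route = 0.5
for _ in range(20):
    v_safe_route = rw_update(v_safe_route, reward=1.0)    # keeps paying off
    v_risky_route = rw_update(v_risky_route, reward=0.0)  # keeps failing
```

After repeated outcomes, the two stories' values diverge toward their actual payoffs, which is the "strengthening and weakening" the paragraph describes.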
Under uncontrollable stress, serotonergic dorsal raphe circuits promote passive, defensive responding—the default state that Maier and Seligman's revised model now treats as unlearned. What is actually learned is controllability: ventral medial prefrontal cortex detects that outcomes are controllable and actively inhibits the passive default, re-establishing agency (Maier & Seligman, 2016).
Future Stories: Pre-Allocating Precision
Future-oriented stories can pre-allocate precision to actions before payoffs arrive. Episodic future thinking recruits prefrontal-mediotemporal interactions to make distant outcomes feel concrete, reducing delay discounting and effectively increasing the weight the brain places on long-horizon actions in the valuation process (Peters & Büchel, 2010).
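A common formalization of delay discounting is hyperbolic: V = A / (1 + kD). Modeling episodic future thinking as lowering the discount rate k is a simplifying assumption here, and the specific values are illustrative:

```python
# Sketch: hyperbolic discounting, with episodic future thinking modeled
# as lowering the discount rate k. Peters & Büchel (2010) report reduced
# discounting; the specific k values and amounts are illustrative
# assumptions, not their fitted parameters.

def discounted_value(amount, delay_days, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

now_small = 50.0                                                # take it now
later_large = discounted_value(100.0, delay_days=180, k=0.02)   # baseline
later_vivid = discounted_value(100.0, delay_days=180, k=0.005)  # after EFT
```

At the baseline rate the immediate option wins; once the future outcome is made vivid (lower k), the long-horizon option dominates the valuation.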
Implementation intentions ("if situation X arises, then I will do Y") delegate control of goal-directed responses to anticipated situational cues, so that when the cue appears the response is triggered automatically—swiftly, efficiently, and without requiring conscious intent (Gollwitzer, 1999). In Story Model terms, this amounts to front-loading precision at the moment of choice.
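In computational terms, an implementation intention behaves like a precompiled cue-to-response lookup that bypasses deliberation. The cues and responses below are invented for illustration:

```python
# Sketch: an implementation intention as a precompiled cue -> response
# lookup, so the response fires on cue detection without a deliberation
# step (Gollwitzer, 1999). Cues and responses are illustrative.

intentions = {
    "phone buzzes during work": "leave phone face down",
    "arrive home": "put on running shoes",
}

def act(cue, deliberate):
    """Cued responses short-circuit effortful choice."""
    if cue in intentions:
        return intentions[cue]   # automatic: no choice point
    return deliberate(cue)       # fall back to effortful control

response = act("arrive home", deliberate=lambda c: "weigh options...")
```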
The Full Picture
Stories tagged as survival-relevant receive higher precision, which (i) amplifies their influence on perception and feeling, (ii) biases attention and memory toward story-consistent evidence, and (iii) steers action selection via neuromodulatory control of policy gain.
The outcome is motivated behavior that, when the tags are accurate, improves real survival proxies—and when the tags are off (chronic stress, maladaptive cues, evolutionary mismatch), generates strong feelings pointed in the wrong direction.
References
Aston-Jones, G., & Cohen, J. D. (2005). An integrative theory of locus coeruleus-norepinephrine function: Adaptive gain and optimal performance. Annual Review of Neuroscience, 28, 403–450. doi:10.1146/annurev.neuro.28.061604.135709
Barrett, L. F., & Simmons, W. K. (2015). Interoceptive predictions in the brain. Nature Reviews Neuroscience, 16(7), 419–429. doi:10.1038/nrn3950
Berridge, K. C., & Robinson, T. E. (1998). What is the role of dopamine in reward: Hedonic impact, reward learning, or incentive salience? Brain Research Reviews, 28(3), 309–369. doi:10.1016/S0165-0173(98)00019-8
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. doi:10.1017/S0140525X12000477
Feldman, H., & Friston, K. J. (2010). Attention, uncertainty, and free-energy. Frontiers in Human Neuroscience, 4, 215. doi:10.3389/fnhum.2010.00215
FitzGerald, T. H. B., Dolan, R. J., & Friston, K. J. (2015). Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience, 9, 136. doi:10.3389/fncom.2015.00136
Friston, K. J. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. doi:10.1038/nrn2787
Gollwitzer, P. M. (1999). Implementation intentions: Strong effects of simple plans. American Psychologist, 54(7), 493–503. doi:10.1037/0003-066X.54.7.493
Maier, S. F., & Seligman, M. E. P. (2016). Learned helplessness at fifty: Insights from neuroscience. Psychological Review, 123(4), 349–367. doi:10.1037/rev0000033
Nairne, J. S., Thompson, S. R., & Pandeirada, J. N. S. (2007). Adaptive memory: Survival processing enhances retention. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33(2), 263–273. doi:10.1037/0278-7393.33.2.263
Parr, T., & Friston, K. J. (2017). Uncertainty, epistemics and active inference. Journal of the Royal Society Interface, 14(136), 20170376. doi:10.1098/rsif.2017.0376
Peters, J., & Büchel, C. (2010). Episodic future thinking reduces reward delay discounting through an enhancement of prefrontal-mediotemporal interactions. Neuron, 66(1), 138–148. doi:10.1016/j.neuron.2010.03.026
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275(5306), 1593–1599. doi:10.1126/science.275.5306.1593
Seth, A. K. (2013). Interoceptive inference, emotion and the embodied self. Trends in Cognitive Sciences, 17(11), 565–573. doi:10.1016/j.tics.2013.09.007
Shenhav, A., Botvinick, M. M., & Cohen, J. D. (2013). The expected value of control: An integrative theory of anterior cingulate cortex function. Neuron, 79(2), 217–240. doi:10.1016/j.neuron.2013.07.007
Yu, A. J., & Dayan, P. (2005). Uncertainty, neuromodulation, and attention. Neuron, 46(4), 681–692. doi:10.1016/j.neuron.2005.04.026