It is gratifying to see that our article, The myth of cognitive decline, has received a fair amount of attention in the past few weeks, because, as we point out in the paper, figuring out exactly what happens to our minds and memories in healthy ageing is of real importance, both at an individual and a societal level.
In a recent post, distinguished ageing researcher Patrick Rabbitt attacked the central thesis of our article – namely, that the evidence for cognitive decline in healthy minds is weak, and that the methods used to argue that our cognitive abilities decline critically fail to account for the growing information processing loads that experience brings. His take:
[The] feel-good news that slowing of decisions on all tasks is not a defining symptom of progressive failure but an honourable distinction of an age-stocked mind has eagerly excited the media (Telegraph; Guardian; BBC World Service; New York Times), but not researchers on cognitive aging.
For perfectly human reasons, it is hardly surprising that our work has failed to get researchers on cognitive ageing as “eagerly excited” as other folks. As Professor Rabbitt makes clear, no one likes to be told they are doing things wrong.
The important question is: are researchers on cognitive ageing doing things wrong?
I strongly encourage you to read Professor Rabbitt’s article. Despite the fact that it misrepresents our work and our claims (we point out that Cognitive Decline is a myth, not Cognitive Ageing; no one here is arguing against the inexorable march of time), we feel that there is much to be learned from this discussion. Professor Rabbitt’s post articulates a number of the fallacies and weaknesses that dog current research efforts. These misconceptions have led researchers – and by extension the public – to misunderstand the realities of cognitive ageing, and they serve to perpetuate many superstitious beliefs about cognitive decline. Professor Rabbitt’s post provides a helpful means of highlighting them.
To that end, rather than engaging in a tedious line-by-line rebuttal of the points in Professor Rabbitt’s article, I thought it would be useful to extract the key questions from it, and make a brief FAQ:
“Questions people often ask about cognitive ageing, and what to know about the answers.”
1. What about the evidence of atrophy in the brains of older adults? Isn’t that clear causal evidence that cognitive abilities decline with age? How can you argue with that?
As Ben Carey put it, in his New York Times piece,
In fact, [Ramscar and colleagues’] study is not likely to overturn 100 years of research, cognitive scientists say. Neuroscientists have some reason to believe that neural processing speed, like many reflexes, slows over the years; anatomical studies suggest that the brain also undergoes subtle structural changes that could affect memory.
Needless to say, we don’t share Ben’s confidence in the degree to which evidence from the brain can inform our understanding of cognitive “decline” in the absence of good models of cognitive function. And, as we pointed out in our article, the current models of “cognitive function” in the ageing literature are woeful.
Good functional models are important because, despite many widespread beliefs to the contrary, the brains of healthy adults do not experience significant cell loss as they age, nor do they undergo dramatic changes in neuronal morphology. Indeed, in a recent review in Nature Reviews Neuroscience, Burke and Barnes (2006) describe this misconception as “the myth of brain ageing”.
The patterns of change in neuronal morphology over the lifespan are both more complex and more puzzling than the common notion of “brain atrophy” alluded to by Professor Rabbitt would suggest. Most of the typical changes in brain morphology that have been observed in healthy ageing involve declines in grey matter density, with a more complex pattern of change in white matter. Although these patterns are typically seen across much of dorsal frontal and parietal cortex in adulthood, in some brain areas, such as the cingulate gyrus, grey matter density does not appear to decline across the lifespan in healthy adults (Sowell et al, 2003). Further, in other brain areas, such as the parahippocampal gyrus, there is evidence of significant dendritic growth in normal human aging (but not in senile dementia, Buell et al, 1979; 1981).
It goes without saying that the complex and systematic pattern of changes that are actually seen in neural morphology are not going to be explained without the development of functional models of what brain systems actually do.
Moreover, not only is it far from self-evident that healthy brains decline physically with age, but it is also extremely difficult to disentangle “declines” in brain function and neural plasticity from learning, which is itself reflected in neuronal morphology as changes in the density and composition of dendrites and spines (in grey matter) and axons (in white matter; see e.g., Merrill et al, 2001; Zuo et al, 2005; Rapp et al, 1996; Flood et al, 1991, 1993; Burke & Barnes, 2006; Zatorre, Fields & Johansen-Berg, 2012). This is a problem that anyone interested in understanding the relationship between ageing and changes in neuronal morphology must face.
In order to be sure that all of the changes in white matter organization that one sees in a healthy brain are lesions – as Professor Rabbitt seems rather breezily eager to assume – and not a sign of learning, one first needs a functional model of “normal” learning.
How important is it to understand the relationship between learning and plasticity before rushing off and viewing every change in neural morphology as a sign of brain atrophy? Consider this: studies of 11–17-year-olds have revealed patterns of changes in grey and white matter densities that are remarkably similar to those associated with ageing (Alemán-Gómez et al, 2013). Should these findings be interpreted as a marker for the (extremely) early onset of age-related declines in plasticity? Or as evidence of ordinary, business-as-usual learning from experience?
Consider also the more extensive age-related reductions in grey matter density that are observed in the posterior temporal cortex in the left (as compared to right) hemisphere (Sowell et al, 2003). Are these differences, which are particularly evident in posterior language areas, really just the result of simple (and presumably somewhat random) declines in cortical plasticity resulting from “brain ageing?” Given that language is one of the most extensive functional systems any brain ever learns, it does not seem unreasonable to suggest that some of these systematic changes in neuronal morphology may in fact reflect the effects of learning this system, rather than atrophy.
To know exactly what is going on, of course, one needs a model of what “normal” learning in the brain actually looks like. This is why we chose the particular models that we used in our work. They may be crude, and ultimately misguided, but they are the most extensively investigated, and best supported, models of brain based learning at a functional level that we know of (Schultz et al, 1997; Schultz, 2006; Daw et al, 2008, 2011).
As our article illustrates, when even these rather crude models of learning enter into the picture, one’s view of the impact of age on cognitive processing can change dramatically.
2. But there is a ton of evidence that people’s scores on cognitive tests go down with age. Isn’t that clear causal evidence that cognitive abilities decline with age?
This is, in essence, what Professor Rabbitt would like you to believe. His methodology (and indeed that of most of the researchers studying changes in “cognitive performance” in ageing) is borrowed wholesale from the men-in-bad-suits who obsess over modeling IQ scores: define a bunch of tests, stare at comparisons across them, see where performance overlaps, and try to draw conclusions. (The Carnegie Mellon statistician Cosma Shalizi has written a wonderfully lucid series of articles about why – from a functional and causal point of view – this is the scientific equivalent of staring at tea leaves).
Setting aside for a second the fact that, as Professor Rabbitt will concede, differences in performance on these tests can tell you nothing about the causes of such differences, there are any number of problems with directly comparing the test performance of different groups. Not only can test performance be influenced by testing contexts, but the way people approach and respond to tests has been shown to vary dramatically across age groups.
People studying lifelong cognitive development used to worry about these questions a lot (most notably Warner Schaie, but even Professor Rabbitt himself has written many a wise word on the topic). I assume some still do. However, although researchers who use these methods are quick to acknowledge – when they have their methodology hats on – that they cannot conclude anything about causality from them, the temptation to interpret the changes they observe on their tests causally, as evidence of decline, invariably proves too strong to resist.
This matters. Here’s a plot comparing the performance of a group of 19-year-olds with a group of 57-year-olds on a range of measures of “cognitive performance” (from Hargreaves, Pexman, Zdrazilova, & Sargious, 2012). I’ve plotted the data so that the young adults form a reference group, and the performance of the older adults reflects the change in their performance against baseline. In the leftmost two bars, I’ve also included estimates of the amount of print exposure each group has experienced.
There are a few things to note about this data. First, it was an experimental study in which participants were carefully matched on a range of control variables, and the two groups were of equal size.
Second, it was a study of expertise, rather than a study of “ageing”. There are good reasons to believe that this matters.
Third, although, unsurprisingly, the older adults’ print exposure is greater than that of the younger adults, what may surprise some readers is that of the 7 cognitive measures tested, the older group outperform the younger group on 6 of them: COWAT (F, A, S, and UN), animal naming, and anagrams. (Even if we collapse all of the FAS-style tasks into one, the older adults still outperformed the young on 3 out of 4 measures. And those older adults really outperform the young on anagrams.)
Cognitive decline, it would appear, is as much in the eye of the test as the tester. (Especially if the tester is only minded to find tests that conform to the expected pattern of data that is thought to relate to the hypothesis s/he is examining.)
What one would like to know, of course, is what differences in performance on these tests actually mean. Do the changes in performance observed in cognitive tests actually support the claim that ageing leads to declines in the cognitive processes of healthy adults? Or not?
To answer this question, one needs a model of cognitive processing. In our article, we simulated how learning affects information processing with the Rescorla-Wagner model (a mechanistic learning model that explains and predicts more data than any other formalism in the psychological literature), and estimated how experience affects retrieval from memory with Shannon’s information theorems (which define information processing).
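For readers unfamiliar with the Rescorla-Wagner model, here is a minimal sketch of its update rule. The learning rate, asymptote, and trial count below are illustrative assumptions, not the settings used in our article:

```python
# Minimal sketch of the Rescorla-Wagner update rule. The learning rate
# (alpha) and asymptote (lam) are illustrative values only, not those
# used in Ramscar et al. (2014).

def rescorla_wagner(trials, alpha=0.1, lam=1.0):
    """Track associative strength V over repeated cue-outcome pairings."""
    v = 0.0
    history = []
    for _ in range(trials):
        error = lam - v      # prediction error: outcome minus expectation
        v += alpha * error   # learning is driven by surprise
        history.append(v)
    return history

learning_curve = rescorla_wagner(100)
# Updates are large early on and shrink as predictions improve: learning
# slows with experience because less remains to be learned, not because
# the learner's machinery is failing.
```

Even this toy version captures the property that matters here: the model’s behaviour changes with experience as a lawful consequence of learning, without any parameter of the learner itself “declining”.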
By contrast, Professor Rabbitt offers no model. All he has to offer are correlations among tests that were devised in the early parts of the 20th century, based largely on the observation of school children. If vocabulary scores fall on these tests, Professor Rabbitt can only infer that vocabulary knowledge declines, whatever that actually means in practice. (I’ll deal with this point next.)
Are our methods gross simplifications? Admittedly, yes. Do they offer more insight than reciting correlations? Yes: in our article we show that our mechanistic methods successfully predict several previously unnoticed differences in the performance of older and younger adults. In my next post, I’ll show how they can make sense of the relationship between print exposure and the differences in performance by age observed on the various measures in the plot above. Some readers might even feel that at times, our models come dangerously close to offering explanations.
3. How can you be sure that older adults know more than young adults? The explanations in your article all hinge on there being a steady growth in information loads over the lifetime. Where’s the evidence?
As Professor Rabbitt puts it,
Ramscar et al insist that vocabulary tests cannot be appropriate measures because they are biased towards low [sic] frequency words and so do not accurately assess older people who know more rare words that are not tested. It is questionable whether most older people actually do know more rare words than most young adults, but scores on vocabulary tests are not the only, or the best comparison. … Perhaps Ramscar et al elide this point because of their need to counter a quite different objection that old people generally have only equal or even lower scores on vocabulary tests than the young.
We discuss the unreliability of vocabulary tests at length in our article. Someone who takes the scores produced by psychometric vocabulary tests across the lifespan seriously either doesn’t understand language, or else doesn’t understand statistics. Or maybe both. I will devote another blog post to explaining in simple terms why it is that these tests are as reliable as a stopped clock. Here I want to focus on why my colleagues and I have very good reasons for believing that older people do indeed know far more rare words than younger people.
The problem one has to face here is that, more often than not, the skewed distributions of language are wont to lead our intuitions about vocabulary astray. This is true even of people who actively study the statistics of language. Science recently published a much discussed study of the Google Books corpus that, by means of automated analysis and hand annotation, estimated the total vocabulary of American English in the year 2000 to be 1,022,000 word types, including proper nouns.
Simply contrasting this with the 2000 US census, which identified over 1.15 million different surnames that were shared by five or more people (and a further 5 million shared by < five people), reveals just how hard the real estimation task is. Either one can decide that ‘George’ and ‘Washington’ aren’t really words in American English (which, even if it makes one feel better, leaves one with the problem of figuring out what the information loads are in minds that definitely know the words ‘George’ and ‘Washington’). Or one can begin to understand that anyone’s intuition about what is or isn’t a word in American English – i.e., the kinds of judgments that drive hand annotation – are going to be seriously flawed as we reach the margins of our own personal experience with the language.
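The way skewed distributions shape vocabulary growth is easy to demonstrate. The sketch below samples word tokens from a hypothetical Zipf-distributed lexicon; the lexicon size and exposure figures are arbitrary assumptions, chosen only to illustrate the shape of the problem:

```python
import random

random.seed(1)

# A hypothetical Zipf-like lexicon: the word at rank r has probability
# proportional to 1/r. Lexicon size and exposure counts are invented.
VOCAB = 20_000
weights = [1.0 / rank for rank in range(1, VOCAB + 1)]

def types_known(tokens):
    """Unique word types encountered after `tokens` tokens of exposure."""
    return set(random.choices(range(VOCAB), weights=weights, k=tokens))

less_exposed = types_known(200_000)    # stand-in for a younger speaker
more_exposed = types_known(1_000_000)  # stand-in for an older speaker

# More exposure yields more known types, and the surplus lies almost
# entirely in the long tail of rare words -- exactly the words that a
# fixed psychometric vocabulary test is least likely to sample.
```

The common, high-frequency words are known to both “speakers” almost immediately; what continued exposure buys is precisely the rare words at the margins, which is why intuitions (and short tests) calibrated on common words go astray.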
So, how might we measure vocabulary knowledge at the margins of personal experience? One brilliant way of doing so has been devised by our friend Emmanuel Keuleers, along with Marc Brysbaert, Paweł Mandera, & Michael Stevens at the Universiteit Gent. Rather than testing hundreds of individuals on the same flawed vocabulary test, Emmanuel and his colleagues crowd-sourced lexical decision data from hundreds of thousands of Dutch speakers (a sizeable fraction of all the world’s Dutch speakers), who had to discriminate real Dutch words (selected from a list of around 50,000 items) from pseudo-Dutch words – that is, words that look like Dutch, but aren’t in the language. (As the foregoing might indicate, this is not an easy task, especially given that Emmanuel and his colleagues are the best people in the world at making these fake words.)
As you can see, accuracy in this task actually improves as people get older, especially for Dutch speakers living in the Netherlands. Indeed, Emmanuel informs me that the time people have been exposed to language – also known as age – is a better predictor of performance than education.
Now compare these Dutch findings with the vocabulary scores taken over time from around 7500 participants using a standard psychometric vocabulary measure (figure taken from Singh-Manoux et al., 2012).
If you use a test of just 33 words (that was designed sometime in the 1940s), it looks like people’s vocabularies — and minds — go into decline even before they reach their fiftieth birthdays!
Why does the Dutch test show improvements with age, whereas Professor Rabbitt thinks that vocabulary scores decline, as in the study above? I hope by now the answer is becoming clearer: Testing vocabulary knowledge is hard. It takes knowledge of language, and it takes knowledge of language statistics. And only when you have that knowledge, and put it to good use, can you start devising better ways of testing vocabulary at the margins. This is why Emmanuel et al’s test requires people to separate real words from pseudo-words, and why it tests a large enough set of real words to be meaningful.
(Note: because vocabulary measurement is hard, we should be careful in how we interpret the declining rate of improvement in Keuleers et al’s plot. At least some part of it will be due to ceiling effects, as the discriminatory power of even the 50,000 words tested here is exhausted.)
The Dutch results are powerful evidence against the kind of forgetting that researchers like Professor Rabbitt have to assume in order to interpret the “evidence” from their tests as decline. If the people tested in this study were forgetting words, their performance would not improve. Period.
So: Forget about forgetting. The only way to explain the improvements seen here is to assume (not unreasonably) that the older participants are better able to distinguish real Dutch words from fake Dutch words because they have more real Dutch words stored in their memories. And because they have more rare Dutch words stored in their memories.
4. Why don’t differences in vocabulary knowledge match up with differences in word processing in a simple way?
As Professor Rabbitt points out, there is a potential paradox in our work, in that if you compare across people of the same age, as opposed to between people of different ages, then on many measures, people who have larger vocabularies do better than people who have smaller vocabularies.
As Rabbitt puts it,
people of any age whose brains are so stuffed with words that they can produce more names of animals within a fixed time also produce words in other categories correspondingly faster and more accurately. This does not support the Ramscar hypothesis that words are retrieved more slowly from a large vocabulary. This is not a problem for the elegantly simple Simpson model…
The reason this isn’t a paradox, of course, is that the brain is not a fixed system. As the neuroscientists Alvaro Pascual-Leone, Amir Amedi, Felipe Fregni, and Lotfi B. Merabet put it in a recent review of what we know about how brains learn,
plasticity is not an occasional state of the nervous system; instead, it is the normal ongoing state of the nervous system throughout the life span. A full, coherent account of any sensory or cognitive theory has to build into its framework the fact that the nervous system, and particularly the brain, undergoes continuous changes in response to modifications in its input afferents and output targets.
Learning changes the brain. Just 7 days training in something as inconsequential as juggling is sufficient to produce visible changes in gray matter density and the organization of white matter pathways in the occipito-temporal areas associated with the processing of complex visual motion (Draganski et al, 2004; Driemeyer et al, 2008). Similar patterns of change are even visible in elderly participants (Boyke et al, 2008; albeit that the elderly learn less well on average over the same time frame). And because learning changes the brain, prior learning always impacts subsequent learning. There is no such thing as “learning” in a vacuum.
This in turn means that a full, coherent account of language processing across the lifetime can’t simply consider the effects of having a large vocabulary in a vacuum, as Rabbitt does. If we are serious about trying to understand the interaction between experience, vocabulary size and processing, we have to consider how people end up with different sized vocabularies, and how this might affect learning and processing at different stages of linguistic development.
In a series of studies, Anne Fernald and her colleagues have elegantly shown how, consistent with Rabbitt’s observation, vocabulary scores actually predict speed of language processing in childhood. Children with larger vocabularies process words faster than children with smaller vocabularies. Perhaps unsurprisingly, Fernald and her colleagues have also shown that vocabulary scores and processing speeds are highly correlated with the amount of language a child is exposed to. Moreover, as Hart & Risley (1995) revealed in their landmark studies, depending on the social environment a child grows up in, the amount of language she hears can differ quite dramatically. Rabbitt’s ‘model,’ which does not care about what a larger vocabulary means, would not consider any of this to be relevant.
Yet these points are of particular importance when we are dealing with human brains, because in children, not only is learning having an impact on the local morphology of the areas processing the various factors that contribute to behavior, but the maturation and development of the overall structure of the human brain is also occurring throughout childhood.
Given what we know about the way brains learn and develop, it doesn’t seem like much of a leap to suppose that the very different levels of language input that different children experience might in turn result in the children who are exposed to large amounts of language developing much richer neural networks in the areas involved in lexical processing than children whose linguistic experience is impoverished. In the model we used to predict lexical processing speeds in our article, we at least consider this question, and the relation between network density and processing speed. And, in theory at least, the model presented in our article confidently predicts that dedicating more processing hardware to a task in the brain will lead to faster processing speeds.
Of course ultimately, what one would want to be able to do is integrate the many strands that influence the development of neural networks in the maturing mind, and the way processing in these networks responds to information gains in mature minds. I won’t pretend for a second that our models are even close to doing all this.
Yet consider the complexity involved in the task I just described. I have no idea how one would begin to try to do this other than by building and testing models. In many ways, science is simply an iterative process of making ever less wrong models.
Taken at face value, it would appear that Professor Rabbitt believes that he can understand the way a brain develops in response to the environment, and the way that processing changes as the brain matures and experience grows, without any model. He appears to suggest that all can be revealed if we just compare scores on enough tests.
I can only wish him the best of luck with that.
5. OK. So much for language. Why do responses on simple button-pushing tasks slow over the lifetime?
It is commonly assumed that declines on cognitive test performance are signs that cognitive processes decline across the lifespan. Unfortunately, there is a very serious problem with this conclusion: In order to establish that processes decline, you have to control for what – and how much – stuff is being processed. Thanks to the development of massive linguistic databases, we can now begin to objectively do this for language. As we show in our article, for many processes related to language, once you control for how much stuff we know and have to process, there is simply no need to talk about decline.
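A toy calculation makes concrete what “controlling for how much stuff is processed” means in information-theoretic terms. Every number below is invented for illustration; the point is only that raw retrieval times can differ while the processing rate stays constant:

```python
import math

def bits(prob):
    """Shannon information (surprisal) of an event, in bits."""
    return -math.log2(prob)

# Invented numbers: the average word retrieved from a larger lexicon is
# rarer, and so carries more information.
p_small_lexicon = 1 / 30_000
p_large_lexicon = 1 / 60_000

rate = 0.02  # seconds per bit of processing, assumed identical for both

rt_small = rate * bits(p_small_lexicon)
rt_large = rate * bits(p_large_lexicon)

# rt_large exceeds rt_small even though the processing rate -- seconds
# per bit -- is exactly the same. Measured "slowing" need not reflect a
# slower processor; it can reflect more information being processed.
```

Doubling the size of the (hypothetical) lexicon adds exactly one bit to the average retrieval, and the extra time is simply that bit multiplied by an unchanged rate.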
Over the past several months, I have given numerous talks on our work (kicking the tyres on our ideas, and trying to get feedback about things we might have overlooked), and one question that always comes up in one guise or another is: “Okay, you’ve shown it for language, but what about X,” where X is a domain where I would have no idea about how to go about measuring the information in the environment that is associated with that task. Button-pushing is Professor Rabbitt’s X.
The first part of my answer is this: Language is, arguably, the most complex system of knowledge any brain has to learn. If, when it comes to language, we see no evidence of decline once we control for the information gain that comes with experience, are there any good reasons for me to believe that a different set of principles apply to domains where it is harder to quantify that information gain?
It is possible that the answer to this is yes, but as yet, I see no good evidence to support that possibility.
To return to Professor Rabbitt’s button pushing example, here’s a suggestion: Imagine programming a computer model of button pushing. Now imagine trying to extend the range of responses the model is capable of, so that it can push buttons in response to different instructions in a variety of contexts. Now be sure, because it’s a finger pushing the button, to make your model capable of doing all the other stuff fingers do.
How likely is it that you would be able to do all this while not having the information processing load in the model increase?
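One crude way to make this question concrete is the Hick-Hyman law, which relates choice reaction time to the information content of the decision. The constants in the sketch below are illustrative, not fitted to any dataset:

```python
import math

def predicted_rt(n_alternatives, a=0.2, b=0.15):
    """Hick-Hyman law sketch: RT = a + b * log2(n + 1), in seconds.
    The constants a and b are illustrative, not fitted parameters."""
    return a + b * math.log2(n_alternatives + 1)

# Extending the repertoire of responses the system must discriminate
# raises the predicted reaction time, with no change whatsoever to the
# speed of the underlying "processor" (the constants a and b).
rt_four_buttons = predicted_rt(4)
rt_thirty_two_buttons = predicted_rt(32)
```

On this (very simple) view, a lifetime of acquiring new responses and contexts increases the information load of even a “simple” button press, which is exactly why slower responses cannot, by themselves, be read as processing decline.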
This last question highlights a deeper point in this debate: Researchers in the brain and cognitive sciences are engaged in a tortuous process of trying to reverse engineer a complex physical information processing device. Yet the simple fact is that very few researchers in the field have any training in information processing systems, and of the few that do, most have training at the software rather than the hardware end. Most researchers have only the dimmest idea how increases in data and task complexity impact information processing in the physical systems that actually do the processing.
While it is clear that the brain is not a computer in a straightforward sense, the fact is that our best models of neural information processing are based on machine information processing. So while I can see how advocating the “no model” approach is the easier option here, how likely is it that this particular easy option will lead to meaningful progress?
6. OK. But you must believe in decline, surely? I mean, people do get slower? Doesn’t that mean something?
At the outset of his blog post, Professor Rabbitt asserts that, “slowing of all decisions is a key behavioural marker of mental changes in old age.”
In a recent study, Klessinger, Szczerbinski, & Varley (2012) found that younger adults (M age 24.5) took an average of a little over 3 seconds to discriminate the correct answer to a simple addition problem (e.g., 28+16) from a lure. Older adults (M age 56) made the same decisions a full second faster.
The methods currently employed to describe cognitive development in older adults – those which underpin the “cognitive decline” industry in psychology – cannot explain this. The best they can do is add a “mental addition construct” to the list of variables in a psychometric model, and then sagely “conclude” from their model that the “mental addition construct” doesn’t decline with age.
The problem, of course, is that this is not an explanation, it is simply a redescription of the data.
I remain open-minded about the possibility of cognitive decline in healthy brains. The shock for me, as a cognitive scientist, has been just how flimsy the evidence for processing declines really is.
I also think that the building of functional models of cognitive processes is a necessary part in the development of our understanding of the mind, and the way it ages. In my next post, I will show how a functional analysis of cognitive tasks in relation to the learned information they access — the information that must be processed in performing a task — can help explain different patterns of change in performance: both where performance appears to improve with age, and where it appears to decline.
I may even describe how discrimination learning and the distribution of numbers in a typical adult’s experience actually serve to make information processing in a simple addition task easier across the lifespan, while making other tasks, like remembering a random string of digits, harder.
Alemán-Gómez, Y. et al. (2013). The Human Cerebral Cortex Flattens during Adolescence. J. Neurosci., 33(38), 15004-15010.
Borovsky, A., Elman, J.L., & Fernald, A. (2012). Knowing a lot for one’s age: Vocabulary skill and not age is associated with anticipatory incremental sentence interpretation in children and adults. Journal of Experimental Child Psychology, 112(4), 417-36.
Boyke, J., Driemeyer, J., Gaser, C., Buchel, C. & May, A. (2008). Training-induced brain structure changes in the elderly. J. Neurosci., 28, 7031–7035.
Brysbaert, M., Keuleers, E., Mandera, P. & Stevens, M (2014) The first results of the Groot Nationaal Onderzoek Woordenschat. Presentation at the 24th Computational Linguistics in the Netherlands Conference, Leiden, Netherlands, January 17th, 2014.
Buell, S. J. & Coleman, P. D. (1981). Quantitative evidence for selective dendritic growth in normal human aging but not in senile dementia. Brain Res., 214, 23–41.
Buell, S. J. & Coleman, P. D. (1979). Dendritic growth in the aged human brain and failure of growth in senile dementia. Science, 206, 854–856.
Burke, S.N. & Barnes, C.A. (2006). Neural plasticity in the ageing brain. Nature Reviews Neuroscience, 7(1), 30-40.
Daw, N.D., Courville, A.C., Dayan, P. (2008) “Semi-Rational Models of Conditioning: The Case of Trial Order.” in The Probabilistic Mind; Chater, N., Oaksford, M., Eds.; Oxford University Press: Oxford.
Daw, N. D., Gershman, S. J., Seymour, B., Dayan, P., & Dolan, R. J. (2011). Model-based influences on humans’ choices and striatal prediction errors. Neuron, 69(6), 1204-1215.
Draganski, B., Gaser, C., Busch, V., Schuierer, G., Bogdahn, U., & May, A. (2004). Neuroplasticity: changes in grey matter induced by training. Nature, 427(6972), 311-312.
Driemeyer, J., Boyke, J., Gaser, C., Büchel, C., & May, A. (2008). Changes in gray matter induced by learning—revisited. PLoS One, 3(7), e2669.
Fernald, A., Marchman, V. A. & Weisleder, A. (2012). SES differences in language processing skill and vocabulary are evident at 18 months. Developmental Science,16(2), 234-248.
Flood, D. G. (1991). Region-specific stability of dendritic extent in normal human aging and regression in Alzheimer’s disease. II. Subiculum. Brain Res., 540, 83–95.
Flood, D. G. (1993). Critical issues in the analysis of dendritic extent in aging humans, primates, and rodents. Neurobiol. Aging, 14, 649–654.
Hargreaves, I., Pexman, P., Zdrazilova, L., & Sargious, P. (2012). How a hobby can shape cognition: visual word recognition in competitive Scrabble players. Memory & Cognition, 40(1), 1-7.
Hart, B. & Risley, T.R. (1995). Meaningful Differences in the Everyday Experience of Young American Children. Brookes Publishing.
Klessinger, N., Szczerbinski, M., & Varley, R. (2012) The role of number words: the phonological length effect in multidigit addition. Memory & Cognition, 40(8), 1289-302.
Ramscar M, Hendrix P, Shaoul C, Milin P, & Baayen H (2014). The myth of cognitive decline: non-linear dynamics of lifelong learning. Topics in Cognitive Science, 6 (1), 5-42 PMID: 24421073
Rapp, P. R. & Gallagher, M. (1996). Preserved neuron number in the hippocampus of aged rats with spatial learning deficits. Proc. Natl Acad. Sci. USA, 93, 9926–9930.
Schultz W (2006). Behavioral theories and the neurophysiology of reward. Annual Review of Psychology. 57, 87-115
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275(5306), 1593-1599.
Sowell, E.R. et al. (2003). Mapping cortical change across the human life span. Nat. Neurosci., 6(3), 309–15.
Weisleder, A. & Fernald, A. (2013). Talking to children matters: Early language experience strengthens processing and builds vocabulary. Psychological Science, 24(11), 2143-2152.
Zatorre, R. J., Fields, R. D., & Johansen-Berg, H. (2012). Plasticity in gray and white: neuroimaging changes in brain structure during learning. Nature Neuroscience, 15(4), 528-536.
Zucker RS (1999) Calcium and activity dependent synaptic plasticity. Curr Opin Neurobiol, 9, 305–313.
Zuo Y, Lin A, Chang P, Gan WB (2005) Development of long-term dendritic spine stability in diverse regions of cerebral cortex. Neuron, 46, 181–189.