17.1 Introduction

Increase in life expectancy is one of the core characteristics of modern life (Schneider 1999). However, whereas medicine and technology enable relatively good health (Eggleston and Fuchs 2012), apparent cognitive decline is still considered one of the most central aspects of ageing (for an overview, see Craik and Salthouse 1992; but for a steady age-related advantage in vocabulary, see Ben-David et al. 2015; also see Evans 2018; Chap. 16, in this volume). It is one of the most feared aspects of growing old (Morley 2004). In this context, accurate measures of cognitive decline have become increasingly important. These typically involve the assessment of core cognitive skills, such as memory, language proficiency, intelligence and executive functions (for a review, see Monge and Madden 2016).

Older adults’ performance on cognitive assessment tests has important implications at both the individual and the societal level. For individuals, performance is a marker of cognitive ability, influencing self-image and life choices. For society as a whole, performance sets the perspective (and expectations) on the capabilities of older people in general, contributing to ageism. Cognitive assessment tests are administered with two (interdependent) implicit assumptions: (a) the tests are a valid gauge of performance in older adults and (b) cognitive abilities decline in older age. Because test performance is taken to provide a good estimate of abilities in older adults, reduced test performance is interpreted as reflecting an age-related cognitive decline. Conversely, if one assumes an age-related cognitive decline, reduced performance on cognitive tests can be seen as further support for test validity. In the current chapter, we challenge these assumptions, questioning the validity of cognitive tests as an unbiased gauge of older adults’ abilities and, as a result, questioning the extent of age-related cognitive decline.

As a first step, we review evidence suggesting that performance on cognitive tests is affected by sensory decline. Currently, cognitive tests are not designed to take this factor into account. Specifically, sensory decline in ageing degrades the information processed, impairing cognitive processing (Schneider and Pichora-Fuller 2000). Indeed, age-related changes in cognitive performance (on several tests) can be minimized (or even effaced) when sensory decline is controlled for, or by changing the sensory context of the test (vision: Ben-David and Schneider 2009; auditory: Ben-David et al. 2011a).

In the second part of this chapter, we discuss evidence of the impact of age-based stereotype threat on test performance. Specifically, the predicament arising from negative ageing stereotypes about cognitive decline can be experienced as a self-evaluation threat (Steele and Aronson 1995), leading to decreased performance and thus fulfilling the ageist prophecy (Hess et al. 2003). Here, too, there is evidence to suggest that alleviating stereotype threat may minimize age-related changes in performance (e.g., Mazerolle et al. 2017). Finally, notwithstanding age-related neurological changes (such as frontal and hippocampal decline; West 1996), this chapter suggests that the common assumption about the extent of age-related decline in cognitive abilities may be exaggerated, and that the respective roles of the sensory and social contexts in performance are largely ignored.

To understand the interplay between the implicit assumptions and the sensory and social contexts, consider the following example. A 75-year-old goes to a university lab to be tested for cognitive abilities, or to a clinic to be tested for cognitive impairment, when decline (or even dementia) is suspected. The mere presentation of the test may elicit the expectation to perform poorly, negatively affecting performance. Auditory and visual information, such as test material and instructions, present a greater sensory challenge due to age-related sensory decline, again negatively affecting performance. Reduced performance serves to further validate common stereotypes about the rate and extent of cognitive deterioration with ageing, as the test is taken as valid and unbiased. Simply put, performance on tests, which may be biased due to sensory and social aspects of ageing, confirms assumptions of reduced cognitive abilities in ageing.

In sum, our analysis of the literature focuses on the two main threats to the validity of neuropsychological assessment in ageing: the sensory context and the social context. These contexts not only describe the mechanisms underlying biases in evaluating cognitive performance in older age, but also offer insights that can improve the validity of such tests. Targeting the sensory and social context in neuropsychological assessment may assist in reducing age-bias (leading to ageism) in the scientific, medical and general community.

17.2 Ageing and the Sensory Context of Neuropsychological Assessment

17.2.1 Age-Related Sensory Decline

Ageing is commonly accompanied by a sensory decline (Crews and Campbell 2004), indicated by an increase in the use of sensory aids, such as hearing devices and glasses (Roberts and Allen 2016). To focus our discussion, we target auditory and visual age-related decline in healthy ageing. Auditory decline in healthy ageing is related to neural changes across different auditory areas in the brain, as well as cochlear changes in the ear (Gordon-Salant et al. 2010). Auditory changes lead to an increase in hearing thresholds, where one needs louder target sounds for correct detection (Roberts and Allen 2016), and to supra-threshold changes (Glasberg and Moore 1992), such as distortion of the incoming stimuli (Grose and Mamo 2010). Ageing is also marked by less efficient auditory scene analysis (Bregman 1990), where both top-down and bottom-up processes contribute to difficulties in segregating the target speech from competing sound sources in the background (e.g., other people talking; Tun and Wingfield 1999). Visual changes in healthy ageing typically include retinal degeneration (Monge and Madden 2016) and presbyopia (Scialfa 2002). These changes degrade basic visual processes, such as near vision (Spear 1993), visual acuity (clarity of vision; Owsley 2011), contrast sensitivity (the ability to detect luminance differences to distinguish objects from background; Greene and Madden 1987) and colour perception (e.g., Nguyen-Tri et al. 2003).

Intelligence Tests

The age-related sensory decline discussed above is likely to have an impact on cognitive processing and, therefore, on neuropsychological assessment (Roberts and Allen 2016; Schneider and Pichora-Fuller 2000). Perhaps the most striking evidence comes from a line of studies on age-related sensory decline and scores on different intelligence batteries (e.g., Anstey et al. 2001; Lindenberger and Baltes 1994). Findings show that visual and auditory acuity scores (near and far visual acuity, pure-tone auditory acuity) could account for 93.1% of the age-related variance in intelligence scores (156 participants aged 70–103; Lindenberger and Baltes 1994). These findings were later replicated and extended in a larger sample (687 participants) with a wider age range (25–103 years; Baltes and Lindenberger 1997) and in longitudinal data (Ghisletta and Lindenberger 2005). In sum, evidence suggests that sensory decline is closely related to a decline in performance on neuropsychological assessments.

17.2.2 Theories on the Interaction of Sensory and Cognitive Ageing

How can this link between sensory and cognitive ageing be explained? Four possible hypotheses have been discussed (Schneider and Pichora-Fuller 2000; see also Wayne and Johnsrude 2015): (1) Sensory deprivation hypothesis. Sensory decline, over time, leads to a cognitive decline due to social isolation and reduced use of the relevant cognitive functions (Lin et al. 2013); (2) Cognitive load hypothesis. Cognitive decline leads to a decline in perceptual processes (the interpretation of the sensory input). This is based on the idea that cognitive load can impair even simple sensory tasks (Li et al. 2001; Lindenberger et al. 2000); (3) Common cause hypothesis. Degeneration in the central nervous system causes a deterioration of both perception and cognition (Baltes and Lindenberger 1997). Indeed, cardiovascular risk factors have been associated with both hearing loss and cognitive decline (Roberts and Allen 2016); finally, (4) Information degradation hypothesis. Unclear and distorted perceptual information delivered to the cognitive system directly impairs cognitive performance, due to an increase in the resources required for the perception process and to errors embedded in the input (Schneider and Pichora-Fuller 2000).

These hypotheses are not necessarily mutually exclusive. Clearly, the interaction of sensory and cognitive processing suggests that each factor can affect the other, with similar biological changes influencing both (Baltes and Lindenberger 1997). However, the information degradation hypothesis presents the framework for understanding the possible age biases in neuropsychological testing that may lead to ageism. If perception and cognition are taken to comprise one integrated system (Wingfield and Tun 2007), where both processes share the same pool of resources (Glisky 2007), then when perceptual processing requires more resources due to age-related sensory decline, fewer resources are available for cognitive processing (Heinrich et al. 2008). Furthermore, cognitive processing demands more resources when it is based on degraded sensory information, tapping into the already reduced pool. This model suggests that sensory degradation (i.e., the reduced quality of sensory information) is an alternative explanation for age-related declines in performance. Thus, it directly challenges the two implicit assumptions underlying neuropsychological testing in ageing: test validity and the extent of age-related cognitive decline. The validity of this model can be tested directly with simple experimental manipulations (see Monge and Madden 2016). This hypothesis also affords a possible remediation for neuropsychological assessments in older age. Mainly, ameliorating the sensory input (or removing it altogether; Ben-David and Icht 2017) can minimize age-related differences.

In the next sections, we discuss the assessment of three main cognitive abilities, taken to represent age-related cognitive decline: inhibition, speech comprehension and memory. We offer evidence to suggest that the neuropsychological assessment can be drastically impacted by age-related sensory degradation in vision (inhibition), in hearing (comprehension) or both (memory).

17.2.3 Inhibition: An Example of the Effect of Visual Degradation on Neuropsychological Tests

The need to inhibit irrelevant information is a central cognitive ability in daily activities. For example, when driving a car one must attend to the road, while ignoring irrelevant visual distractors, such as billboards. Similarly, when reading this text, one needs to ignore irrelevant dimensions, such as the size and shape of the page, and focus on the content of the words. One of the prominent theories on cognitive changes in ageing suggests that this specific process deteriorates in ageing (Hasher and Zacks 1988). The age-related decrease in the efficiency of inhibitory processes is part of a general theory on the decrease in executive functions – the monitoring and control of behaviour (Baddeley 1996). This cognitive decline is generally attributed to selective prefrontal deterioration in ageing (Dempster 1992). However, recent studies suggest that visual sensory degradation can explain some of the age-related variance in performance (see a discussion in Ben-David et al. 2014a).

The ‘gold standard’ for evaluating inhibition in ageing is the colour-word Stroop test (Stroop 1935; see Melara and Algom 2003 for a relevant review). In this paradigm, participants are asked to name aloud the font colours of printed words, ignoring their content. For example, saying aloud “blue” when presented with the word RED printed in blue. The latency advantage for naming the font colour of a colour-neutral word (e.g., TABLE printed in blue) over an incongruent colour-word (RED in blue) is termed Stroop interference. An age-related increase in Stroop interference has been shown repeatedly in the literature (for reviews, see Ben-David and Schneider 2009; McDowd and Shaw 2000). It is commonly interpreted as reflecting an age-related decrease in the efficiency in inhibition (e.g., Troyer et al. 2006).
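The interference score itself is simple arithmetic: the mean naming latency for incongruent colour-words minus the mean latency for colour-neutral words. As a minimal sketch, with invented latencies rather than data from any study cited here:

```python
# Illustrative only: the latencies below are invented, not taken from
# any of the studies discussed in this chapter.
from statistics import mean

# Hypothetical naming latencies (in milliseconds) for one participant
neutral_rt = [620, 640, 610, 650]      # e.g., TABLE printed in blue
incongruent_rt = [780, 800, 760, 820]  # e.g., RED printed in blue

# Stroop interference: mean incongruent latency minus mean neutral latency
stroop_interference = mean(incongruent_rt) - mean(neutral_rt)
print(stroop_interference)
```

An age-related increase in this difference score is the finding that the studies reviewed below set out to explain.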

In the past decade, a line of studies by Ben-David and colleagues suggests that variance in colour-vision (in people with clinically normal colour-vision) can mediate performance on the Stroop test in various populations: healthy ageing (Anstey et al. 2002; Ben-David and Schneider 2009), people with dementia (Ben-David et al. 2014b) and people with traumatic brain injury (Ben-David et al. 2011b, 2016). In a meta-analysis (Ben-David and Schneider 2009), the age-related increase in colour-naming latencies (naming the font colour of a colour-neutral word) was found to be significantly larger than the age-related increase in reading latencies (reading a word printed black on white). This increased difficulty in colour-vision processing was found to be a possible source of reduced performance on the Stroop test, beyond any changes in inhibition. In a follow-up study, the Stroop test was presented with a desaturated colour set to a group of younger adults, to simulate an age-related colour deficiency. By reducing the amount of colour information available, Ben-David and Schneider (2010) were able to “age” younger adults, generating the age-related increase in Stroop interference. Somewhat similar results were obtained with other inhibition tests. For example, Bertone et al. (2007) simulated age-related visual acuity degradation by fitting younger adults with occlusion filter lenses to blur their vision (e.g., to 20/40), severely reducing performance on an inhibition test.

In sum, mimicking an age-related decrease in colour perception in younger adults was sufficient to lead to Stroop interference that is similar to that found in older adults. Because the colour-word Stroop test is widely used in clinical, neuropsychological and experimental screening tests as a measure of inhibitory control in older participants, there is reason to question the validity of inhibition diagnosis and the extent to which the inhibitory process in ageing decreases.

17.2.4 Comprehension: An Example of the Effect of Auditory Degradation on Neuropsychological Tests

Age-related auditory degradation impairs speech perception on various levels: single-word identification, sentence comprehension, source localization and the ability to segregate the source speaker from a noisy background (Ben-David et al. 2012; Humes and Dubno 2009; Schneider et al. 2002). This is supported by a line of studies showing that when the listening situation is adjusted to match older and younger adults’ auditory abilities, speech processing can be equated (for a review, see Schneider et al. 2010). For example, Ben-David et al. (2011b) used an eye-tracking paradigm to measure age-related effects on speech processing, as the spoken word unfolds in time. The study found equivalent online speech processing for older and younger adults when speech was presented in quiet. When the noise level was tailored to compensate for age-related auditory changes (by setting different signal-to-noise ratios to equate single-word identification across age-groups), again, online speech processing was mostly unaffected by ageing (for possible cognitive effects, see Hadar et al. 2016).
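Equating listening conditions across age groups rests on simple decibel arithmetic: the signal-to-noise ratio in dB is ten times the base-10 logarithm of the signal-to-noise power ratio. A minimal sketch, with invented power values rather than the levels used in the studies above:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels from (hypothetical) power values."""
    return 10 * math.log10(signal_power / noise_power)

# Halving the background-noise power while holding the signal constant
# improves the SNR by about 3 dB (10 * log10(2) is roughly 3.01).
baseline = snr_db(signal_power=4.0, noise_power=2.0)
favourable = snr_db(signal_power=4.0, noise_power=1.0)
print(round(favourable - baseline, 2))  # 3.01
```

In this spirit, offering older listeners a more favourable ratio (e.g., a quieter background) can compensate for age-related changes in identification accuracy.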

Age-related changes in speech perception, as presented above, can directly affect performance on neuropsychological tests. Generally, basic test instructions are spoken, presenting a possible source of misunderstanding. The additional effort of deciphering spoken instructions may also decrease available resources and induce stress (Schneider et al. 2010). Importantly, in many cognitive tests, test stimuli are spoken rather than printed, presenting a further source of bias. For example, Schneider et al. (2000) asked older and younger participants to listen to lectures (8–13 min) against a noisy background and to later answer questions concerning their content. When both age groups heard the lecture at the same sound level (mimicking conditions in the clinic), significant age-related differences were found. However, when auditory conditions were individually adjusted to compensate for sensory degradation, these differences were eliminated. In other words, older adults were able to perform as well as younger adults, even in a complex listening comprehension task, when sensory degradation was compensated for (for a replication, see Lu et al. 2016). These results are further supported by a recent study indicating that younger adults with hearing impairment show cognitive performance equal to that of older adults with similar sensory decline (Verhaegen et al. 2014). Noting that declines in comprehension are often interpreted as a direct indication of age-related cognitive decline and even pre-dementia (Schneider et al. 2005), these findings have implications for various neuropsychological assessment tools.

17.2.5 Memory: An Example of the Effect of Auditory and Visual Degradation on Neuropsychological Tests

As discussed in the second section of this chapter, memory deterioration is one of the most prominent stereotypes in ageing. Research presents abundant evidence suggesting an age-related decline in a variety of memory tasks (Craik and Jennings 1992; Zacks et al. 2000). Yet, like comprehension, memory has also been shown to depend on the perception of the auditory stimuli (Peelle and Wingfield 2005). Consider studies of spoken word-pairs, where listeners are presented with a list of spoken word-pairs and are later asked to recall a target word when given its pair. In an early study (Murphy et al. 2000), memory performance of older adults listening in quiet was nearly equivalent to that of younger adults listening in noise. Later, Heinrich and Schneider (2011) adjusted signal-to-noise ratios to equate identification of a single spoken word across age-groups (i.e., louder background noise was presented to younger adults). In this condition, memory performance of older adults matched that of younger adults, when ample time was given.

Visual decline has also been shown to impair performance on memory tests. For example, Dickinson and Rabbitt (1991) occluded younger adults’ vision, to mimic older adults’ visual perception, while they read aloud a passage for later recall. Memory performance was severely diminished, even though no errors were made during reading. In other words, the additional toll on processing (generated by visual degradation) can affect non-perceptual, higher-order cognitive processes, even when visual processing is still intact. Finally, visual sensory degradation has also been correlated with performance on other visual memory tasks (Stevens et al. 1998).

Dementia Screening Tests

Age-related sensory degradation has been found to impact not only memory tests in healthy ageing, but also dementia assessment, with severe implications for false-positive diagnoses (e.g., Feldman et al. 2008). We focus on commonly used screening tests for dementia, such as the Mini-Mental State Examination (MMSE; Folstein et al. 1975) and the Montreal Cognitive Assessment (MoCA; Nasreddine et al. 2005). These tests are considered accurate tools for identifying indicators of pre-dementia and mild cognitive impairment (MCI; Luis et al. 2009). Yet, there is a body of evidence in the literature indicating that these tools can be affected by sensory degradation as well. For example, Dupuis et al. (2015) found that older adults with sensory loss (whose vision and/or hearing scores did not meet the criteria for healthy ageing) were twice as likely to score below the cut-off point for potential pre-dementia as older adults with clinically normal visual and auditory thresholds. Indeed, in a different study, one third of older adults with hearing loss, who were already diagnosed with dementia, were reclassified as having a lesser degree of dementia when the MMSE was administered using sound amplification (Weinstein and Amsel 1986). Similarly, significantly better MMSE scores were documented when older adults with hearing loss were using fitted hearing aids (Acar et al. 2011). Dementia measures are also affected by age-related sensory decline that is still within clinically normal boundaries. For example, tailoring the contrast level of stimuli for older participants to resemble the contrast perception of younger adults was sufficient to erase all age-related differences in a digit cancellation task (another indicator of pre-dementia) with healthy adults. When the same manipulation was conducted with older adults diagnosed with dementia, it significantly minimized performance decline (Toner et al. 2012).
In sum, age-related visual and auditory decline can lead to an overestimation of cognitive decline and to false-positive diagnoses of dementia. These results call for a careful examination of test results in light of age-related sensory decline.

17.2.6 Clinical Implications: Compensating for Sensory Degradation

Simple actions may overcome the sensory bias in neuropsychological testing. We recommend adjusting test stimuli and instructions to compensate for age-related sensory changes. These include sound amplification, reducing background noise and increasing target sound level (enhancing the signal-to-noise ratio), enhancing the visual contrast of stimuli, using a larger font size, increasing the amount of light in the room, careful choice of colours and so on. We also recommend assessing the visual and auditory abilities of older participants in all cases of neuropsychological assessment. It is imperative to ensure the use of corrective aids (visual or auditory) when these are needed, as hearing and visual loss can severely reduce performance. These sensory scores can also serve to statistically adjust neuropsychological assessment scores to better reflect abilities (e.g., the Stroop task; Ben-David and Schneider 2010). In sum, to avoid a false diagnosis of clinical cognitive decline, one must not ignore or underestimate sensory decline.

17.2.7 Sensory Context: Summary

In the first section of this chapter, we discussed evidence showing that physical aspects of the test material (and instructions) present a direct threat to the validity of neuropsychological testing in ageing. Namely, age-related decline in performance on assessment tools may reflect, at least in part, a sensory rather than a cognitive decline. When reduced performance is evident, it is likely to be attributed to the lower cognitive ability of the older test taker, rather than to transient, contextual sensory factors. This ageist bias might serve to further validate negative ageing stereotypes, ultimately resulting in the negative portrayal of older adults across both the scientific literature and everyday cultural representations. In the following section, we discuss how such age-based stereotypes may have a negative impact on performance, suggesting that the social aspects of the test may also call into question the validity of neuropsychological testing in ageing.

17.3 The Social Context of Neuropsychological Assessment in Ageing

Stereotype threat, one of the most widely investigated topics in social psychology (Pennington et al. 2016), occurs when underachievement among stigmatized group members is rooted in the situation more than in the individual (Leyens et al. 2000). The existence of a negative stereotype about a person’s group means that in situations where the stereotype is applicable, the person is at risk of confirming it as self-characteristic (Aronson 2002). In the seminal work of Steele and Aronson (1995), African American participants were tested on a verbal reasoning task. When the task was presented as a diagnostic indicator of intellectual ability, the performance of African Americans (a population that faces a pervasive stereotype about intellectual abilities) was significantly worse than that of their Caucasian peers. When the task was presented as non-diagnostic, these differences in performance were eliminated. Therefore, making the racial stereotype about intellectual ability relevant to test performance impaired African Americans’ performance relative to Caucasian participants.

Stereotype threat effects have been studied across different stereotyped social groups including women (e.g., Spencer et al. 1999), individuals from low socioeconomic status (e.g. Spencer and Castano 2007), gay men (Bosson et al. 2004), and older adults, as we will review in the next sections.

17.3.1 Age-Based Stereotype Threat

Older adults face pervasive negative stereotypes portraying them as forgetful, incompetent, and cognitively inferior to younger adults (Hummert et al. 1994; Kite and Wagner 2002). These stereotypes have been found to have a negative impact on older adults’ performance across diverse domains, and on their general well-being. A recent meta-analysis of 32 studies, covering more than a decade of research (Lamont et al. 2015), suggests that age-related reduced performance on memory tests and other cognitive measures may be vulnerable to age-based stereotype threat (a moderate effect size of d = 0.36; an effect size measures the strength of a phenomenon, Sawilowsky 2009). In the current section of the chapter, we focus on how older adults’ negative beliefs about their memory and other cognitive abilities can impact their performance on assessment tools.
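For readers unfamiliar with the metric, Cohen’s d divides the difference between two group means by their pooled standard deviation. A minimal sketch with invented recall scores (not the data behind the meta-analysis):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical recall scores under no-threat vs. threat conditions
no_threat = [12, 14, 13, 15, 11]
threat = [10, 12, 11, 13, 9]
d = cohens_d(no_threat, threat)
print(round(d, 2))  # 1.26
```

The invented groups here yield a much larger effect than the meta-analytic d = 0.36; the numbers are for illustration only.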

Age-based stereotype threat is typically induced in the lab by manipulating the relevance of the stereotype to task performance. Most studies invoke age-based stereotype threat by presenting participants with fictitious information confirming that cognitive abilities decline with age, by emphasizing the task’s evaluative nature, or by implying an age-based comparison. One of the pioneering investigations of age-based stereotype threat is a study by Hess et al. (2003). In this study, stereotype threat was manipulated by informing younger and older adults about research findings depicting either the negative impact of ageing on memory (threat condition) or the maintenance of memory across the lifespan (no-threat condition). Participants were then given a free recall memory test. Consistent with the stereotype threat framework, no age-related differences were noted in the no-threat condition. However, in the threat condition, recall was higher for younger than for older adults. In sum, traditional findings of a significant age-related decrease in performance on memory tasks might be affected by the activation of pervasive negative ageing stereotypes.

Real-world testing situations, such as testing in the clinic or a research lab, may inherently induce stereotype threat, without requiring any special manipulation (Spencer et al. 2016). Simply taking a test in a negatively stereotyped domain, in the context of performance evaluation in ageing, is enough to trigger stereotype threat. Consider again a memory test. The typical procedure for testing the memory abilities of older adults (Hughes et al. 2013) involves multiple stereotype threat cues. These include the testing location: a college campus where older adult participants are surrounded by younger adults, or a hospital clinic highlighting sickness and disability. Test instructions may also trigger stereotype threat, as participants are informed that their memory will be tested, with the commonly implicit assumption that their performance will be compared to that of younger adults. In fact, recruiting participants based on their age, or merely asking them to indicate their age in a demographic questionnaire, can serve as a cue (Kang and Chasteen 2009). All of these cues may potentially lead to performance decrements, ultimately resulting in an overestimation of age-related memory decline.

17.3.2 Clinical Implications of Age-Based Stereotype Threat

Perhaps the most striking indication of the consequences of age-based stereotype threat for assessment arises from two seminal studies on dementia assessment. Mazerolle et al. (2017) used the most common brief neuropsychological assessment batteries in ageing, the MMSE and the MoCA (discussed in the previous part of this chapter). Experimenters simply informed all participants that they would perform a memory task and that both younger and older adults were taking part in the study (threat condition). In the reduced-threat condition, participants were also informed that there are typically no age-related differences in this task. Results show that 40% of the sample of community-dwelling older adults met the screening criteria for pre-dementia in the threat condition. However, only 10% of the participants in the reduced-threat condition met these criteria. This finding is further supported by a study by Haslam et al. (2012) investigating a commonly used measure for early dementia detection, the Addenbrooke’s Cognitive Examination – Revised (ACE-R). When participants were self-categorized as old, and were also led to expect that age-related cognitive decline is all-encompassing (rather than limited to a specific domain), 70% met the diagnostic criterion for dementia. By contrast, only 14% on average met this criterion across all other (lower-threat) conditions. In sum, stereotype threat may lead not only to an overestimation of cognitive decline but also to the false-positive diagnosis of pre-dementia, suggesting potentially severe implications in real life.

It is important to note that age-based stereotype threat does not begin and end in the testing situation alone. There is evidence that older adults perform worse on a memory test following negative, rather than positive, age-related subliminal primes. That is, when words associated with negative aspects of ageing (e.g., DEMENTIA) were presented at a speed allowing perception but not awareness (i.e., subliminal presentation), performance on a subsequent memory test was worse than when a positive word (e.g., SAGE) was subliminally presented (e.g., Hess and Hinson 2006; Levy 1996; Levy and Leifheit-Limson 2009). These implicit stereotypes may generate expectations that act as self-fulfilling prophecies (Levy and Leifheit-Limson 2009), which may be at the root of the older adult’s motivation to seek a neuropsychological evaluation in the first place (Régner et al. 2016). For example, when an older adult is placed in situations predominantly occupied by younger adults (i.e., stereotype threat), typical memory lapses may be interpreted as abnormal age-related decline. There is evidence that both younger and older adults attribute older adults’ memory failures to internal, stable factors (Erber et al. 1990). Older adults may refer to a case of forgetfulness as their “senior moment” (Barber 2017). Interpreting everyday memory failures as a sign of a need for intervention (Erber et al. 1990) may lead the older person to seek clinical assessment. Because the clinical assessment is also negatively affected by stereotype threat, this person may perform below his/her abilities, with profound clinical, social and economic consequences. Moreover, even if the person does in fact suffer from a probable cognitive impairment, the debilitating effects of stereotype threat may be especially prominent for him/her (Scholl and Sabat 2008). These effects may influence the choice of suitable treatment options, and the adherence to treatment, resulting in an entirely different course of disease.

Eliminating age-based stereotype cues calls for a multi-faceted approach, as demonstrated by Sindi et al. (2013). The authors compared younger and older adults’ salivary cortisol levels, as a measure of stress, and their performance on a neuropsychological memory test. The focus of the study was a manipulation of the environment to be stressful (unfavourable condition) or not stressful (favourable condition) for each age group. The unfavourable condition for older adults included: (1) A testing location known to younger adults but not to older adults (a university campus); (2) Testing performed in the afternoon, a non-optimal time for testing older adults, but optimal for younger adults; (3) A younger adult research assistant; (4) A memory task with which older adults, as opposed to younger adults, were not familiar (word-list recall); (5) Instructions emphasizing the memory component of the task, matching the negative stereotype for older, but not for younger, adults. In contrast, the favourable condition for older adults included: (1) A testing location known to older adults, but not to younger adults (an older adult community centre); (2) Testing performed in the morning, an optimal time for testing older adults, but not for younger adults; (3) An older adult research assistant; (4) A memory task developed based on the learning capacities of older adults (a face-association memory task, as older adults perform better when asked to recall relevant information, rather than unrelated words); (5) Instructions excluding any explicit indication that the task tested memory. As expected, higher cortisol levels and lower memory performance were found for older adults in the unfavourable, as compared to the favourable, condition, whereas younger adults were not affected by the testing conditions. Although it is impossible to isolate the relative contribution of each of these situational factors, this study demonstrates that many facets of the testing environment may be experienced differently by older and younger adults.

17.3.3 Understanding Stereotype Threat Effects

Generally, stereotype threat may arise from any situational cue indicating that an individual is at risk of confirming the stereotype, reminding the individual of culturally held stereotypes (Spencer et al. 2016). The literature has identified multiple reasons for the effects of stereotype threat on performance among younger adults.

One of the first mechanisms offered to explain stereotype threat effects is Negative Affect (Steele and Aronson 1995). In particular, increased levels of anxiety have been proposed to mediate the effects of stereotype threat on performance. However, results regarding this hypothesis have been mixed (see a review by Pennington et al. 2016), with several studies failing to establish this relationship (e.g., Spencer et al. 1999). Therefore, while anxiety may play a role in explaining stereotype threat effects (especially when assessed via indirect measures, Bosson et al. 2004), it is likely not the only or the key explanation.

Taking a cognitive resources perspective, the Process Model (Schmader et al. 2008) suggests that stereotype threat disrupts performance via three distinct, yet interrelated, mechanisms: (a) triggering a physiological stress response; (b) triggering a tendency to actively monitor performance, aimed at detecting self-relevant information and signs of failure; (c) triggering efforts to suppress negative thoughts and emotions. Each of these mechanisms consumes cognitive resources that are required for successful performance on a given task. Generally speaking, there is ample direct and indirect evidence consistent with cognitive resource depletion in stereotype threat (Pennington et al. 2016).

Focusing specifically on older adults, the mechanisms underlying age-based stereotype threat are not fully understood, and may not be generalized from studies conducted among younger adults. With regard to affective factors, similar to younger adults, little evidence has been found for anxiety mediating the effects of age-based stereotype threat on performance (Chasteen et al. 2005; Hess et al. 2003; but see Swift et al. 2013). Inconsistent support has also been noted for the cognitive-resources hypothesis in older adults (Brelet et al. 2016; Mazerolle et al. 2012 vs. Hess et al. 2009; Popham and Hess 2015).

Why is it so difficult to fully understand stereotype threat effects among older adults? The answer may lie in the treatment of stereotype threat as a unitary concept, tailored to younger adults’ experiences and perspectives. For example, according to Barber (2017), the reason cognitive-resource depletion does not necessarily explain stereotype-threat effects in older adults may relate to their favourable emotion regulation abilities. Namely, regulating the negative affective states (such as anxiety and stress) induced by stereotype threat is resource demanding for younger adults. However, for older adults, regulating aversive emotions is less resource demanding (Scheibe and Blanchard-Fields 2009), suggesting a smaller role for this mechanism.

If stereotype threat effects are not fully explained by cognitive-resource depletion or by negative affect, what can explain them? Current literature appears to support the regulatory fit hypothesis in explaining stereotype threat effects in older age. According to this view, stereotype threat may elicit a prevention focus (Seibt and Förster 2004), in which participants aim to avoid performing at their worst, as opposed to striving to perform at their best. As suggested by the regulatory focus theory (Higgins 1997, 1999), this prevention focus will result in more cautious, error-avoidant and loss-averse strategies. Consistent with the idea of a prevention focus, older adults under stereotype threat have been found to be more risk-averse in their decision making compared to non-threatened older adults (Coudin and Alexopoulos 2010), and respond more slowly (Popham and Hess 2015). In addition, stereotype threat was found to reduce older adults’ (veridical) recall and recognition, but improve memory accuracy (Barber and Mather 2013b; Wong and Gallo 2016).

The fact that stereotype threat may elicit a prevention focus does not necessarily mean that it will result in decreased performance. There is evidence that stereotype threat could even improve older adults’ performance when the task is framed as relating to losses rather than gains. For example, Barber and Mather (2013a) tested older adults on a working memory task, after inducing an age-based stereotype threat. When the tests were focused on gains (i.e., money earned for correct responses), stereotype threat was found to impair performance. However, when the tests were focused on losses (i.e., money lost for incorrect responses) threat improved performance. These findings were replicated using other memory tasks (MMSE, Word List Memory; Barber et al. 2015). This line of findings supports the regulatory fit (Higgins 2000) framework. According to this view, when task demands match the person’s regulatory focus, performance will increase, while a mismatch will decrease it. In sum, the reported negative effects of stereotype threat on older adults’ performance may stem from a mismatch between the task structure and the threat-induced prevention focus (Grimm et al. 2009).

17.3.4 Social Context: Summary

Findings presented in this section suggest a pervasive negative impact of age-based stereotype threat on performance in neuropsychological assessment. Evidence in the literature also suggests that this effect can have severe consequences, as pre-dementia may be falsely detected in healthy older adults in the presence of stereotype threat cues. These cues are not only the outcome of laboratory manipulations, but may be present in the daily testing of older adults in the clinic or a university lab. It is important to recognize these cues in order to shape testing environments that reveal the true capacities of older adults.

Although the mechanisms underlying threat effects in older adults are yet to be fully understood, the finding that activating negative age-based stereotypes does not necessarily result in a performance decrement is another promising direction. Changing the reward structure of the task to be loss-based (for example, by emphasizing accuracy and the minimization of mistakes) may also have important clinical implications.

Speaking more broadly, while several critiques have questioned whether stereotype threat actually generalizes from the laboratory into real-world testing situations (e.g., Sackett et al. 2004), when focusing on older adults, we believe that stereotype threat is responsible for an over-estimation of age-based cognitive decline among both scholars and practitioners. In some cases, this may lead to crossing a clinical boundary from normal to abnormal impairment (Haslam et al. 2012). Indeed, “it is hard to imagine a social psychological effect that could have greater clinical relevance” (Haslam et al. 2012, p. 782).

17.4 General Discussion

The goal of this chapter was to test the two implicit assumptions underlying neuropsychological testing in ageing: test validity and the generalized view of the extent of age-related decline in cognitive abilities. We presented evidence from the current literature on the negative effects of age-related sensory decline and age-based stereotype threat on test performance. Our findings challenge these two assumptions, suggesting that age-related changes may not be as severe as previously suggested. Namely, age-related sensory decline and stereotype threat were shown to influence the context of the neuropsychological assessment and lead to an inaccurate measure of cognitive performance. In extreme cases, their influence may cause false diagnosis of pre-dementia, i.e., crossing a clinical boundary from normal to abnormal impairment. These contextual factors are not only the outcome of laboratory manipulation, but may be present in daily testing of older adults in the clinic or a university lab. It is important to account for these factors in order to shape testing environments appropriate for older adults.

The pair of implicit assumptions appears to be embedded in test administration and scoring in many of the available tools. Generally, test materials are not formatted to accommodate age-related sensory degradation, and administration protocols specify neither the sensory context (such as illumination and noise levels) nor the social context (how to minimize stereotype threat) of the test. In most tests, normative scores are adjusted separately for each age group to compensate for the expected age-related decline (e.g., Stroop, WAIS). Simply put, the same score on a neuropsychological assessment may reflect normative performance for 70-year-olds, but impaired performance for 20-year-olds, echoing the pair of assumptions. Thus, test results, coupled with these assumptions, may provide support for ageism, affecting both the scientific and medical community and the public at large.

The evidence presented in this chapter has implications outside of the clinic/lab. Consider the office environment. The lighting and the noise level may be challenging for older employees. Thus, employees may have more difficulties in processing written information, and in taking an active part in conversations. The resources allocated to deciphering the sensory input are not available for further cognitive processing. As a result, it is more difficult for older employees to utilize the full potential of their cognitive abilities. In turn, both colleagues and managers may perceive reduced performance as reflecting internal factors such as age-related cognitive decline (an attribution error), judging older employees more harshly. This may lead to an atmosphere engendering stereotype threat. Taken together, daily activities in the office may be perceived by older employees as a test of cognitive abilities, where they are expected to underperform, and where sensory conditions place them at a disadvantage. By considering the sensory and social contexts, employers can improve the quality of work of older employees, extending their participation in the labour market and improving their quality of life.

Finally, upon reading this chapter, one may be reminded of the seminal book by Guthrie, “Even the Rat Was White: A Historical View of Psychology” (Guthrie 1976). Guthrie exposed how racist views shaped by studies in the early twentieth century affected the implementation and analysis of intelligence assessment tools at that time. Those studies suggested that skin colour was directly related to a decline in cognitive performance, and were guided by implicit assumptions on (1) test validity and (2) a racially-based decline in cognitive performance. Guthrie demonstrated the devastating implications of such tests by citing the following conclusions of a study by Philips, 1914: “If the Binet tests are at all a gauge of mentality, it must follow that there is a difference in mentality between the coloured and the white children. And this raises the question, should the two groups be instructed under the same curriculum?” (as quoted in Guthrie 1976:55). We hope that we can avoid repeating history, making the same mistakes in the twenty-first century. If researchers and clinicians acknowledge the current sensory and social biases of cognitive testing in older adults, it will set us on a promising path towards the design of more accurate assessment tools.