Chapter 24 Assessment of Cognition

24.1 Overview of Cognition

Cognitive-behavioral therapy (CBT) is among the most effective treatments available for many mental health conditions, especially depression and anxiety. The cognitive therapy component of CBT assumes that thoughts mediate the influence of the environment on emotions and behavioral responses. In general, cognitions and cognitive processes play a key role in many psychological theories. Therefore, cognitions and distorted cognitions are assumed to be key constructs to assess. However, there are many challenges to cognitive assessment. Some consider cognitions to be fictitious entities or epiphenomenal to people’s experience; that is, they are not a causal process but rather a by-product of neural and biological processes.

24.2 Aspects of Cognition Assessed

Multiple aspects of cognition can be assessed, including cognitive products, processes, and structures or organization. Cognitive products include a person’s conscious thoughts or mental images. Cognitive processes include how a person transforms environmental input and takes meaning from it. Cognitive structures and organization include the hypothesized structures that guide a person’s information processing.

Consider an example of a person with an anxiety disorder. Cognitive products for the person may include having the thought, “I’m going to make a fool of myself, and everyone will think I’m stupid”. The person’s cognitive processes may involve the over-estimation of personal risk. The person’s cognitive structures may include “danger” schemas that, when activated, lead to an attention bias toward threat stimuli. However, this finding has been called into question because of challenges to the reliability of the dot-probe task, which is frequently used to examine attentional bias.

24.3 Approaches to Assessing Cognition

The general approaches to assessing cognition include endorsement methods and production methods. Endorsement methods for assessing cognition contain a predetermined set of thoughts that participants identify or rate, such as a checklist. With production methods, participants generate or recall their own thoughts; that is, the method involves free response. The psychometric evidence is currently stronger for endorsement methods; however, increasing attention is being given to production methods. Approaches to assessing cognition are reviewed by Dunkley et al. (2019).

24.3.1 Self-Reports of Cognition

Self-reports of cognition can be in either written or interview format, and they comprise most of the cognitive assessment techniques that are regularly used in clinical practice. Few of the other approaches to assessing cognition are used routinely in clinical practice. In a self-report of cognition, the participant must reflect on and report about their own cognitive style. Thus, self-report of cognition involves introspection.

Self-reports of cognition are still used in CBT to learn about people’s cognitive styles, their attributions for life events or for a specific negative event, and their expectancies. Expectancies include, for instance, what the person anticipates will happen.

24.3.1.1 Pros

Pros of self-reports of cognition include:

  • They are inexpensive, easy to administer and score, and can be useful in clinical practice.
  • They are sensitive to treatment effects. Cognitive therapy advocates would argue that changes in cognitions mediate treatment changes (i.e., they are thought to be a causal mechanism that explains how therapy works).
  • They are standardized, normed, and have been psychometrically validated.

24.3.1.2 Cons

Despite the popularity of these techniques, they have some major drawbacks. Cons of self-reports of cognition include:

  • They require participants to have insight into their cognitions so that they can report on them; such insight is not always present. Nisbett & Wilson (1977) published a classic study titled, “Telling more than we can know: Verbal reports on mental processes”. The study showed that people confidently report what they believe they were thinking, yet their reports often describe mental processes that they could not possibly have had access to. Findings such as these challenge the validity of self-reported cognition.
  • They require a person to reflect upon their cognitions, retrospectively recalling what they were thinking in various situations.
  • At least in the written format, which is often used as an endorsement method, participants are limited to responding to experimenter-generated choice options, which can lead people toward certain thoughts and can make them endorse thoughts that they did not actually have.

24.3.2 Think-Aloud Approaches (Thought Listing)

In think-aloud approaches, the participant is asked to verbalize their cognitions in the form of a continuous monologue while performing some task, and responses are recorded for later evaluation. In this way, thoughts are assessed concurrently with their occurrence. In some approaches, such as Articulated Thoughts in Simulated Situations (ATSS), thoughts are assessed in simulated situations that are geared toward eliciting certain emotions; the “situations” can be presented in video, audiotape, or virtual reality format. Because the participant lists their thoughts, the think-aloud approach is also called “thought listing”.

The purpose of think-aloud approaches is to avoid retrospection, and to create a situation in the lab that resembles a real-world situation to see what thoughts or what stream of thoughts are generated. Think-aloud approaches are reviewed by Davison et al. (1997).

24.3.2.1 Examples

Examples of think-aloud assessments of cognition include video replay and private speech.

24.3.2.1.1 Video Replay

Video replay is also called videotape thought reconstruction. In videotape thought reconstruction, the examiner video records the interaction, then replays the video for the examinee and asks the examinee what they were thinking at that moment. This approach is more proximal than retrospective report. However, the act of re-watching a video of oneself can be strange, and it commonly produces reactivity.

24.3.2.1.2 Private Speech

Another example of a think-aloud assessment of cognition is children’s private speech. Young children sometimes talk out loud to themselves to guide their behavior, and this speech does not serve a communicative function. This is known as private or self-directed speech. Private speech is most common among 2- to 7-year-olds, and it provides insight into their cognition, often while they perform a challenging task.

24.3.2.2 Pros

Pros of think-aloud approaches include:

  • Participants report on their cognitions at the same time as, or just after, the thoughts occur, so reports are less susceptible to retrospective recall bias.
  • Responses are not limited to experimenter-selected choices—all cognitions can be assessed. The constraints are made in terms of how the thoughts are coded and analyzed by the experimenter, not in terms of the unstructured and open-ended thoughts that the participant can provide.
  • Specific situations of interest to the research team can be explored. The examiner can assess the respondent in situations that are thought to evoke particular cognitive processes, including situations that would be impractical to observe because of their low frequency or that would be unethical to assess in their natural context. For example, an examiner could assess the self-reported thoughts of a person with social anxiety in response to social criticism.

24.3.2.3 Cons

Cons of think-aloud approaches include:

  • Performance is susceptible to observer effects. Observer effects are changes in the person’s response because they are being watched or recorded. That is, response biases may be present because the person’s responses are not anonymous.
  • Social desirability bias: people may censor negative or disturbing thoughts and may describe their thoughts in a way that portrays themselves in a more positive light.
  • It can be difficult to generate the thoughts. There are individual differences in people’s awareness of their cognitions.
  • Think-aloud approaches can interrupt the thought flow—we think faster than we can speak, so verbalizing thoughts can alter the thoughts. That is, talking about thoughts interferes with the task itself.
  • It can be difficult to get a whole narrative of thoughts from think-aloud approaches.
  • It can be difficult to code the qualitative thoughts and reduce them down to meaningful data. And coders may attribute different meaning to a thought than was intended by the respondent.
  • Think-aloud approaches tend to be lower in ecological validity because they typically occur in the lab rather than in a naturalistic context.

24.3.3 Random Thought Sampling

The purpose of thought sampling is to quantify characteristics or aspects of thinking in an ecologically valid context. By ecologically valid, we mean that the situation closely represents the situations the person typically encounters. In a random thought sampling procedure, participants are given a beeper or some other device that beeps at random times. When the person hears the beep, they must record what they are doing and what they are thinking, either quantitatively or qualitatively. Because the person jots down the thought they were having when the beep occurred, recall is retrospective by only a moment.

In the experience sampling method (ESM), the examiner can also ask participants to report the context in which the experience occurred. This can be helpful for people with schizophrenia who experience hallucinations, because it helps link the hallucinations to particular situations or contexts. Thought sampling and ESM are described by Hurlburt (1997).

24.3.3.1 Pros

Pros of random thought sampling include:

  • The delay of recall is minimal, so participants are not forced to retrospectively reconstruct what they were thinking long after the fact.
  • The situations are ecologically valid in that they are occurring in situations that the person actually experiences, rather than a lab.
  • Random thought sampling gets at what people are actually thinking about rather than what they think they are thinking about.
  • Random thought sampling is better able to detect fluctuations in thoughts than retrospective approaches, and it is better able to detect co-variation with other processes, such as mood.
  • Random thought sampling involves less interference with the interaction. Presumably, the “flow” of a situation is not interrupted, although it very well might be.
  • Any reactivity to the procedure (i.e., changes in thinking that result from monitoring one’s own thoughts) could itself have therapeutic benefit.

24.3.3.2 Cons

Cons of random thought sampling include:

  • It assesses narrow slices of time.
  • Because it assesses narrow slices of time, it is hard to capture the whole sequence of events.
  • It is odd to be interrupted by a beeper.
  • Random thought sampling could elicit reactivity such that thinking about one’s thoughts could change thoughts or create thoughts.
  • Random thought sampling only accesses the contents of consciousness, like the approaches discussed earlier. That is, the respondent has to be aware of the thoughts in order to write them down. There are likely many cognitive processes that nobody is aware of.
  • The written-down thoughts have to be coded.
  • Because random thought sampling typically occurs in an ecologically valid context, the experimenter has less experimental control of the situation, context, and stimuli. Consequently, the experimenter is less likely to discover relations between rare but theoretically important processes.

24.3.4 Cognitive Science Approaches

Cognitive science approaches to assessing cognition include a range of non-mutually exclusive approaches, including performance-based measures, cognitive modeling, and cognitive neuroscience approaches. These cognitive science approaches differ from the approaches described earlier in that they do not rely on respondents reporting their thoughts. Cognitive science approaches try to assess a person’s cognitions based on behavior or neural functioning. For instance, performance-based measures and cognitive modeling start from behavioral performance and then use a model to estimate what is happening cognitively; the participants just have to engage with the task and process information.

24.3.4.1 Performance-Based Measures

Performance-based measures are used to assess many domains and constructs, including attention, executive functions, inhibitory control, memory, decision-making, categorization, etc. Examples of performance-based measures include the Stroop test, Iowa Gambling Task, and implicit cognition tasks such as the Implicit Association Test. In many performance-based tasks, respondents’ accuracy and reaction times are evaluated when there are competing demands.
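
As a simple illustration of how such tasks are typically scored, here is a minimal sketch (the data are simulated and the specific values are arbitrary assumptions, not drawn from any published task): accuracy and reaction times are summarized separately for conditions with and without competing demands, and an interference score is computed as their difference.

```python
import numpy as np

# Hypothetical, simulated trial-level data from a Stroop-like task:
# reaction times (ms) and response accuracy in each condition.
rng = np.random.default_rng(1)
n_trials = 60
congruent_rt = rng.normal(600, 80, n_trials)     # no competing demands
incongruent_rt = rng.normal(680, 90, n_trials)   # competing demands (interference)
congruent_correct = rng.random(n_trials) < 0.98
incongruent_correct = rng.random(n_trials) < 0.92

# Typical summary scores: accuracy per condition and a reaction-time
# interference (difference) score across conditions.
print("Accuracy (congruent):", round(congruent_correct.mean(), 2))
print("Accuracy (incongruent):", round(incongruent_correct.mean(), 2))
print("Interference score (ms):", round(incongruent_rt.mean() - congruent_rt.mean(), 1))
```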

24.3.4.2 Cognitive Modeling

Cognitive modeling attempts to decompose behavioral performance into different indices that reflect different sub-processes, such as working memory and processing speed. Behavior is complex and is influenced by many different processes, such as cognitive, motivational, and response processes. If you observe that a person shows behavioral deficits, it can be helpful to know what specific process is responsible for the behavioral deficits.

It can be difficult to use multiple behavioral tasks to decompose basic processes because any task taps multiple cognitive processes, including likely overlapping processes, and each is affected by measurement error. Cognitive modeling seeks to identify the “hidden processes” that underlie behavioral performance in complex tasks. Cognitive modeling generally requires a lot of data—for example, many trials—but it can be used with small sample sizes.

Busemeyer & Stout (2002) provide an example of cognitive modeling of the Iowa Gambling Task to identify the sub-processes that account for behavioral performance deficits in Huntington’s Disease. The Iowa Gambling Task (IGT) attempts to simulate real-life decision-making based on learning from decisions that have different probabilities of rewards and punishments. In the Iowa Gambling Task, participants are presented with four virtual decks on a computer screen. On each turn, they select a card from one of the four decks. Some decks reward the player more often, and some decks penalize the player more often. The player has to figure out which decks to pick from based on the outcomes they experience over time.

Busemeyer & Stout (2002) compared and tested competing models for the task based on different theoretical possibilities. With cognitive models, the aim is to achieve a combination of accuracy and parsimony (simplicity). Busemeyer & Stout (2002) selected a model with three parameters, reflecting a cognitive process, a motivational process, and a response mechanism. The cognitive process was examined with an updating rate parameter, which describes a person’s memory for past outcomes produced by each deck. The motivational process was examined with an attention weight parameter, which describes the amount of attention a person allocates to gains versus losses. The response mechanism, reflecting characteristics such as recklessness and/or impulsivity, was assessed with a threshold (sensitivity) parameter, which describes how sensitive the choice mechanism is to the expectancies.
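
As a rough illustration of this kind of model, here is a minimal, simplified sketch of an expectancy-valence-style learning model for the IGT. The parameter names w, phi, and c loosely correspond to the attention weight, updating rate, and response sensitivity described above; the deck payoffs, valence rescaling, and parameter values are illustrative assumptions rather than the specification or estimates used by Busemeyer & Stout (2002).

```python
import numpy as np

def expectancy_valence_choices(payoffs, w=0.6, phi=0.2, c=1.0, n_trials=100, seed=0):
    """Simulate deck choices under a simplified expectancy-valence-style model.

    w   : attention weight given to losses versus gains (motivational process)
    phi : updating (recency) rate for expectancies (cognitive process)
    c   : response sensitivity/consistency (response mechanism)
    """
    rng = np.random.default_rng(seed)
    expectancy = np.zeros(4)               # one learned expectancy per deck
    choices = []
    for t in range(1, n_trials + 1):
        theta = (t / 10.0) ** c            # sensitivity changes with experience
        probs = np.exp(theta * expectancy)
        probs /= probs.sum()               # softmax choice rule over the four decks
        deck = rng.choice(4, p=probs)
        gain, loss = payoffs[deck](rng)
        valence = ((1 - w) * gain - w * loss) / 100.0            # weighted, rescaled outcome
        expectancy[deck] += phi * (valence - expectancy[deck])   # delta-rule update
        choices.append(deck)
    return choices

# Illustrative (gain, loss) payoffs per draw: decks 0 and 1 are "disadvantageous"
# (large gains but larger occasional losses); decks 2 and 3 are "advantageous".
payoffs = [
    lambda rng: (100, 250 if rng.random() < 0.5 else 0),
    lambda rng: (100, 1250 if rng.random() < 0.1 else 0),
    lambda rng: (50, 50 if rng.random() < 0.5 else 0),
    lambda rng: (50, 250 if rng.random() < 0.1 else 0),
]

choices = expectancy_valence_choices(payoffs)
good_deck_rate = np.mean([deck >= 2 for deck in choices[-50:]])
print("Proportion of later choices from the advantageous decks:", round(good_deck_rate, 2))
```

In practice, a model like this is fit to each participant’s observed choices (e.g., by maximum likelihood), and the estimated parameters are then compared across groups, as Busemeyer & Stout (2002) did for people with and without Huntington’s Disease.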

Busemeyer & Stout (2002) found that people with Huntington’s Disease showed deficits in the updating rate and threshold parameters but not in the attention weight parameter. In particular, the people with Huntington’s Disease were more reactive to recent information and forgot old information more rapidly, and they became less sensitive with training and produced more random behavior. But people with Huntington’s Disease did not show deficits in how much attention they allocated to losses.

Other examples of clinically relevant cognitive modeling include approaches to better understand cognitive processes in sexual aggression and eating disorders (Treat et al., 2007).

24.3.4.3 Challenges

Despite the advantages of not relying on participants’ recall of their thoughts, there are challenges to using cognitive science approaches to assess cognition.

24.3.4.3.1 Reliability of Difference Scores

One challenge related to the use of performance-based assessments of cognition is that many such tasks involve difference scores. A difference score involves the subtraction of one score from another score. In performance-based assessments of cognition, many tasks subtract scores (e.g., accuracy or reaction time) in one condition from scores in another condition. For example, the most often used dependent variable in the Flanker task, Stroop task, stop-signal task, and dot-probe task is a difference score. For instance, in the Flanker task, the reaction time to congruent stimuli is subtracted from the reaction time to incongruent (interference) stimuli.

The problem is that difference scores tend to be lower in reliability than other scores because a difference depends on the reliability of both indices in the subtraction, as described in Section 4.5.8. Difference scores tend to be lower in reliability than each of the indices that compose them, especially when the two indices are correlated. Therefore, scores on these tasks tend to be less reliable than scores on tasks that do not involve difference scores. To be reliable, difference scores require that the reliability of the individual indices be high relative to the correlation between them. Put differently, the more two measures reflect the same thing, the more likely it is that subtracting one from the other leaves mostly measurement error rather than construct variance.
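
As a concrete illustration (a simplified special case that assumes the two indices have equal variances; see Section 4.5.8 for the fuller treatment), the reliability of a difference score $D = X - Y$ can be written as:

$$r_{DD'} = \frac{\tfrac{1}{2}\left(r_{XX'} + r_{YY'}\right) - r_{XY}}{1 - r_{XY}}$$

where $r_{XX'}$ and $r_{YY'}$ are the reliabilities of the two indices and $r_{XY}$ is the correlation between them. For example, if each index has a reliability of .80 but the indices correlate .60, the difference score has a reliability of only $(.80 - .60)/(1 - .60) = .50$.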

Because of the reliance on difference scores in the dot-probe task, Rodebaugh et al. (2016) challenged whether attention bias toward threat exists, and whether it is stable across time, for those with anxiety.

24.3.4.3.2 Reliability Paradox

Another challenge with performance-based assessments of cognition is known as the reliability paradox (Hedge et al., 2018). Many basic cognitive paradigms, such as the Flanker task, were designed to detect normative cognitive effects. However, a downside is that not all of them are well suited for assessing individual differences. That is, their scores do not reliably capture where a particular person stands relative to others. Low reliability precludes making strong inferences and decisions about individuals and calls into question the validity of inferences regarding individual differences, or change over time, in scores on those tasks.

The reliability paradox is that, despite showing well-established experimental effects, many robust cognitive tasks (including the Flanker task, Stroop task, and stop-signal task) do not produce reliable individual differences. Experimental effects become well-established, and those tasks therefore become widely used, when between-subject variability is low. However, low between-subject variability leads to low reliability of individual differences. The stability coefficient (i.e., test–retest reliability), an index of the reliability of individual differences, relies on a Pearson correlation. As described in Section 4.5.1.1.1, correlation requires variability: a restricted range leads to weaker associations. When a measure has low between-subject variability, we cannot detect a consistent rank ordering of people across time. That is, the very feature that makes such tasks produce robust and easily replicable experimental effects (low between-person variability) makes their use as correlational tools problematic. Moreover, the poor test–retest reliability of many performance-based assessments is exacerbated by the use of difference scores.
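
To make the paradox concrete, here is a minimal simulation sketch (the numbers are arbitrary assumptions, not drawn from Hedge et al., 2018). A task yields a large, easily replicated group-level interference effect, yet the test–retest correlation of the individual difference scores is low because between-person variability is small relative to trial-level noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_trials = 100, 50

# Assumed true interference effects: large mean (a robust group-level effect)
# but small between-person variability (poor for individual differences).
true_effect = rng.normal(loc=50, scale=5, size=n_subjects)   # in ms

def observed_interference(true_effect, rng):
    """Observed difference score per person: mean incongruent RT minus mean
    congruent RT, each estimated from noisy trial-level reaction times."""
    congruent = rng.normal(500, 100, size=(len(true_effect), n_trials))
    incongruent = rng.normal(500 + true_effect[:, None], 100,
                             size=(len(true_effect), n_trials))
    return incongruent.mean(axis=1) - congruent.mean(axis=1)

session1 = observed_interference(true_effect, rng)
session2 = observed_interference(true_effect, rng)

# The group-level effect is large and consistent across sessions...
print("Mean interference (session 1, ms):", round(session1.mean(), 1))
print("Mean interference (session 2, ms):", round(session2.mean(), 1))
# ...but the rank ordering of individuals is not: test-retest reliability is low.
print("Test-retest correlation:", round(np.corrcoef(session1, session2)[0, 1], 2))
```

With these assumed values, the mean interference effect is about 50 ms in both sessions, while the test–retest correlation of individuals’ interference scores is close to zero.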

The implications of the reliability paradox are that many well-established approaches in experimental, cognitive, and neuropsychology may not translate well to the study of individual differences, even though the measures can be useful for making group-level inferences. It is important to know the reliability of your measures, especially when trying to understand where a particular person stands on the construct relative to other people.

Unreliability and measurement error are threats to science and knowledge. Unreliability leads to false inferences and failures to replicate findings. As described in Section 5.6, and formalized with the attenuation formula (Equation (5.2)), associations with other variables are weakened to the extent that measurement error exists. If you know the degree of unreliability, you can account for it using the disattenuation formula (Equation (5.3)), to get more accurate estimates of the true associations with other variables.
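
For reference, in their standard forms (the notation here may differ from that used in Equations (5.2) and (5.3)), the attenuation and disattenuation formulas are:

$$r_{XY}^{\text{observed}} = r_{XY}^{\text{true}}\,\sqrt{r_{XX'}\,r_{YY'}} \qquad \text{and} \qquad r_{XY}^{\text{true}} = \frac{r_{XY}^{\text{observed}}}{\sqrt{r_{XX'}\,r_{YY'}}}$$

where $r_{XX'}$ and $r_{YY'}$ are the reliabilities of the two measures. For example, if the true association between two constructs is .50 and each measure has a reliability of .64, the expected observed correlation is only $.50 \times .64 = .32$.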

It is important to test the reliability of your measures and report the reliability in papers. Continually work to improve the reliability of your measures. Ways to improve the reliability of measures are described in Section 4.14. Another way to improve reliability of scores is to aggregate multiple measures using a multimethod approach in structural equation modeling.

24.3.5 Cognitive Neuroscience Approaches

Another branch of cognitive science approaches includes cognitive neuroscience approaches. For example, cognitive neuroscience approaches connect cognitive processes to measures of brain functioning, such as electroencephalography (EEG) or functional magnetic resonance imaging (fMRI). An example of linking cognitive processes to measures of brain functioning might be examining people’s digit span in relation to fMRI measures. Cognitive neuroscience techniques are discussed in Chapter 20. Cognitive neuroscience is a high priority of the National Institute of Mental Health, reflected in its emphasis on the Research Domain Criteria (RDoC). RDoC seeks to assess the underlying substrates of mental illness across multiple levels of analysis and across neurodevelopmental trajectories, rather than just behavioral symptoms, which are heterogeneous.

24.4 Conclusion

Multiple aspects of cognition can be assessed, including cognitive products, processes, and structures or organization. The general approaches to assessing cognition include endorsement methods and production methods. Cognitive assessments include self-report, think-aloud approaches, random thought sampling, performance-based measures, cognitive modeling, and cognitive neuroscience approaches. One challenge related to the use of performance-based assessments of cognition is that many such tasks involve difference scores, which tend to be lower in reliability than each of the indices that compose them. Another challenge with performance-based assessments of cognition is the reliability paradox: despite showing well-established experimental effects, many robust cognitive tasks do not produce reliable individual differences, owing to low between-subject variability and the use of difference scores.

24.5 Suggested Readings

Dunkley et al. (2019); Treat et al. (2007)

References

Busemeyer, J. R., & Stout, J. C. (2002). A contribution of cognitive decision models to clinical assessment: Decomposing performance on the Bechara gambling task. Psychological Assessment, 14(3), 253–262. https://doi.org/10.1037/1040-3590.14.3.253
Davison, G. C., Vogel, R. S., & Coffman, S. G. (1997). Think-aloud approaches to cognitive assessment and the articulated thoughts in simulated situations paradigm. Journal of Consulting and Clinical Psychology, 65(6), 950–958. https://doi.org/10.1037/0022-006X.65.6.950
Dunkley, D. M., Segal, Z. V., & Blankstein, K. R. (2019). Cognitive assessment: Issues and methods. In K. S. Dobson & D. J. A. Dozois (Eds.), Handbook of cognitive-behavioral therapies (4th ed., pp. 85–119). Guilford Press.
Hedge, C., Powell, G., & Sumner, P. (2018). The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods, 50(3), 1166–1186. https://doi.org/10.3758/s13428-017-0935-1
Hurlburt, R. T. (1997). Randomly sampling thinking in the natural environment. Journal of Consulting and Clinical Psychology, 65(6), 941–949. https://doi.org/10.1037/0022-006X.65.6.941
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. https://doi.org/10.1037/0033-295x.84.3.231
Rodebaugh, T. L., Scullin, R. B., Langer, J. K., Dixon, D. J., Huppert, J. D., Bernstein, A., Zvielli, A., & Lenze, E. J. (2016). Unreliability as a threat to understanding psychopathology: The cautionary tale of attentional bias. Journal of Abnormal Psychology, 125(6), 840–851. https://doi.org/10.1037/abn0000184
Treat, T. A., McFall, R. M., Viken, R. J., Kruschke, J. K., Nosofsky, R. M., & Wang, S. S. (2007). Clinical cognitive science: Applying quantitative models of cognitive processing to examine cognitive aspects of psychopathology. In R. W. J. Neufeld (Ed.), Advances in clinical cognitive science: Formal modeling of processes and symptoms (pp. 179–205). American Psychological Association.
