Chapter 13 Ethical Issues in Assessment
Ethics are relevant to all domains of our work as psychologists, including (but not limited to) research, teaching, assessment, intervention, and supervision.
13.1 Belmont Report
The Belmont Report informs professional ethics. The principles from the Belmont Report include respect for persons, beneficence, and justice.
13.1.1 Respect for Persons
A key component of respect for persons is informed consent. It also involves special considerations to protect the welfare of vulnerable people, including children, prisoners, pregnant women, people with intellectual disability, and people who are economically or educationally disadvantaged. It is important to get full informed consent from clients or participants before taking actions that will affect them.
13.1.2 Beneficence
The principle of beneficence is about maximizing benefit and minimizing harm, similar to the aphorism, “First, do no harm”. Clients and participants have the prerogative to make choices about whether to receive a procedure, so they need to know the potential benefits and harms of each procedure. Having the intent to help is not enough; you also need to use a helpful technique and to know how to implement it.
13.1.3 Justice
Justice goes beyond the interaction with the individual client or participant; it deals with broader, societal concerns. For instance, justice involves ensuring that the people taking the risk will also benefit. For example, if a study to test the effectiveness of a medication enrolls primarily homeless people, the participant population may be taking the risk of trying the drug but not ultimately benefiting from the knowledge gained.
13.2 My Ethical Advice
My ethical advice can be summarized in two points:
- Don’t do stupid things
- Don’t do bad things
13.2.1 Don’t Do Stupid Things
First, don’t do stupid things. With respect to assessment, for instance, do not use weak assessment instruments. The assessment techniques you use should be reliable and valid for the tasks and population of interest, and they should be selected based on scientific research, not merely on professional judgment. Reliability and validity have many facets, and all relevant facets should be considered when evaluating assessment techniques. Reliability and validity are characteristics of uses of tests, not of the tests themselves, so identify which facets are most important for your purpose (e.g., test–retest reliability versus internal consistency). Reliability and validity may also differ across populations that vary in characteristics (e.g., age, gender, education, ethnicity).
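To make the distinction between facets of reliability concrete, below is a minimal sketch in R of how one might estimate two different facets using simulated data and the psych package; the data and variable names are illustrative, not from a real assessment. Which facet matters most depends on the intended use of the scores.

```r
set.seed(1)

# Simulate 200 respondents on a 5-item scale whose items share a common factor
trueScore <- rnorm(200)
items <- as.data.frame(
  sapply(1:5, function(i) trueScore + rnorm(200, sd = 1))
)
names(items) <- paste0("item", 1:5)

# Internal consistency (Cronbach's alpha) of the 5 items at one time point
psych::alpha(items)

# Test–retest reliability: correlation of total scores across two occasions
time1 <- rowSums(items)
time2 <- time1 + rnorm(200, sd = 2)  # simulated retest with added measurement error
cor(time1, time2)
```

A measure could show strong internal consistency yet poor test–retest reliability (or vice versa), which is why the facet you evaluate should match the intended use.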
13.2.2 Don’t Do Bad Things
Second, don’t do bad things. For instance, do not administer services without first getting the person’s consent. Informed consent for an assessment includes multiple components. Ensure that the person being assessed is informed about important aspects of the testing situation. Explain what services will be administered (i.e., the nature of testing), why the testing will be administered (i.e., the purpose of testing), what the evidence is for the assessment devices, the costs and potential side effects of the assessment and the tests, the pros and cons of undergoing the assessment, and the other options available. Describe the type of information that the client will receive. Describe any third-party involvement (e.g., insurance companies, schools, employers, courts) in accessing the assessment information. Describe the limits of confidentiality, which typically include situations in which someone is in danger, such as unreported child abuse or neglect, elder abuse, or imminent risk of suicide or homicide. If the assessment is court-ordered or the results of the assessment are subpoenaed by a court of law, the limits of confidentiality change. It is also important to describe the psychologist’s role relative to the client. Be transparent about which services are experimental; do not pretend to know more than you know. Do not make false or misleading claims or promises.
Honor the agreement implied by the explanation provided in the informed consent. For an example of a document that I use with clients that describes the pros and cons of an intervention technique (parent management training) and other options available, see here: https://osf.io/qmnu6.
It is also important to maintain the integrity of assessment techniques. Do not teach people how to raise their scores; the assessment process must have integrity from the outset for the results to be meaningful.
13.3 APA Ethics Code
The American Psychological Association (APA) publishes ethics guidelines in its document entitled Ethical Principles of Psychologists and Code of Conduct (hereafter referred to as the “APA Ethics Code”) (American Psychological Association, 2017). The latest version of the APA Ethics Code is available here: https://www.apa.org/ethics/code (archived at https://perma.cc/6F4Z-WQ57). The current version of the APA Ethics Code was published in 2017 and is available here: https://www.apa.org/ethics/code/ethics-code-2017.pdf (archived at: https://perma.cc/PF6F-QZ5C). The APA also published “Guidelines for Psychological Assessment and Evaluation”, available here: https://www.apa.org/about/policy/guidelines-psychological-assessment-evaluation.pdf (archived at https://perma.cc/FRB2-ND58).
Here, I emphasize a few of the guidelines from the APA Ethics Code with respect to assessment. However, you should read and follow the entire APA Ethics Code. That said, the APA Ethics Code is not fully up to date with clinical science, but it contains many important components. Case illustrations of ethical guidelines from the APA Ethics Code are provided by L. Campbell et al. (2010). Ethical issues in psychological assessment are also discussed by Bersoff et al. (2012) and Nagy (2011).
13.3.1 Competence
Section 2 of the APA Ethics Code notes that you should act only within the boundaries of your competence. It is also important to be aware of and report conflicts of interest.
13.3.2 Bases for Assessments
Section 9 of the APA Ethics Code describes ethics guidelines for assessment. Section 9.01 describes ethics guidelines regarding the bases for assessments. Use techniques that are sufficient to substantiate the findings, based on strong psychometrics. In a professional context, sit down face to face and evaluate the individual, rather than relying solely on automated reports. If using computerized assessments, you should have the expertise to consider the appropriateness of the interpretations. Work to reduce or eliminate bias. If using record review, explain the sources of information used for conclusions and recommendations. Maintain a high standard when recording assessment information. Psychologists are responsible for the maintenance and retention of their records. Records should describe the nature, delivery, progress, and results of services, and related fees. Take reasonable steps to maintain the confidentiality of records. Because clients may want to know how their records will be maintained, disclose record-keeping procedures in the informed consent. How long full records should be kept depends on whether the client is an adult or a child. For adults, keep full records for 7 years after the last date of service. For children, keep full records for 7 years after the last date of service or for 3 years after they become adults, whichever is later. The latest version of the APA Record Keeping Guidelines is available here: https://www.apa.org/practice/guidelines/record-keeping (archived at https://perma.cc/S37L-5VNX). The current version of the Record Keeping Guidelines was published in 2007 and is available here: https://www.apa.org/pubs/journals/features/record-keeping.pdf (archived at: https://perma.cc/7A8U-M4TQ).
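As a concrete illustration of the “whichever is later” retention rule described above, here is a small, hypothetical R sketch using the lubridate package; the function name and arguments are my own, not part of the APA guidelines, and actual retention decisions should follow the guidelines and applicable law.

```r
library(lubridate)

# Returns the date until which full records should be retained.
# For a minor, supply the date on which the client reaches the age of majority.
retention_end <- function(last_service_date, majority_date = NULL) {
  seven_years_after_service <- last_service_date + years(7)
  if (is.null(majority_date)) {
    return(seven_years_after_service)            # adult client: 7 years after last service
  }
  three_years_after_majority <- majority_date + years(3)
  max(seven_years_after_service, three_years_after_majority)  # whichever is later
}

# Example: a child client last seen 2020-06-01 who reaches majority on 2026-03-15
retention_end(ymd("2020-06-01"), majority_date = ymd("2026-03-15"))  # 2029-03-15
```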
13.3.3 Use of Assessments
Section 9.02 describes ethics guidelines regarding the use of assessments. It notes that assessments should be used “in light of research or evidence”. But how would research and evidence not go together?! This phrase from the APA Ethics Code suggests that some forms of non-research-based evidence have equal footing with research in terms of deciding what assessments to use. I would argue that assessments should be selected based on the best available scientific evidence. And if scientific evidence does not yet exist to use an assessment device for a particular purpose with a particular population, the assessment device should be studied and evaluated before it is used to make important decisions.
The assessment should be reliable and valid for the population and purpose. When reliability and validity have not been established, describe the strengths and weaknesses of the instrument. Use language that is appropriate for the participant’s/client’s competence level. Consider the potential social consequences of testing. High-stakes decisions should probably not be made on the basis of the results of a single test.
13.3.4 Informed Consent in Assessments
Section 9.03 describes ethics guidelines regarding informed consent in assessments. Informed consent should be obtained unless:
- the assessment is government mandated (which changes things a bit),
- it is implied because it is considered voluntary (e.g., when applying for a job), or
- evaluating decisional capacity and the person is not capable of providing consent; in these cases, you should still inform them of what testing will include and the purpose of assessment.
13.3.5 Release of Test Data
Section 9.04 describes ethics guidelines regarding release of test data. “Test data” include raw and scaled scores, a client’s responses to questions or stimuli, and the psychologist’s notes about the participant. You are expected to release test data if the client provides a signed release, unless it would cause substantial harm or would be a misrepresentation of the data or test. You may also have to release test data if there is a law or court order, but you may be able to fight these.
13.3.6 Other Guidelines
Section 9.07 notes that you should not promote the use of assessment techniques by unqualified people. Section 9.08 notes that you should not use obsolete tests and outdated results.
13.3.7 Maintain Test Security (Integrity)
Section 9.11 describes ethics guidelines regarding maintaining test security (integrity). Maintaining test integrity involves protecting test instruments, protocols, questions or stimuli, and manuals from unauthorized access. If releasing test data, take care to release only the scores of copyrighted tests; do not release the copyrighted tests themselves. For example, if performing intellectual testing, try to release the subscale scores and summary scores (e.g., T scores) without releasing test stimuli. Also, do not coach people on how to perform better on questions, and do not let them take tests home.
13.5 Clinical Report Writing
Write every report as if it will be used in a court of law. You want it to be able to stand up in court. Always identify the source of each inference (e.g., “Per the client’s report, …”). Do not write a self-reported statement as if it were a factual statement. Instead of writing, “The client cut herself”, write “The client reported that she cut herself”. Do not overstate findings. Information from reports can be used to make decisions that significantly impact people’s lives, so it is important not to misrepresent information. Recognize factors, such as language, culture, and the situation, that could have impacted the results, and note them in the report. Recognize limitations in the recommendations.
13.6 Open Science
There has been a replication crisis in psychology and social science, in which the findings of many studies have failed to replicate when independent researchers have attempted to reproduce them (Open Science Collaboration, 2015). In response to the replication crisis, there has been a movement toward open, transparent science in which researchers pre-register their methods and share their data and analysis scripts. Open practices reduce threats to replicability and increase the public accessibility of science. This movement is known as the open science movement.
The open science movement seeks to increase the replicability and accessibility of science in three primary ways. First, it seeks to improve transparency, that is, to make clear what researchers are actually doing in their studies and analyses. Second, especially when researchers have confirmatory hypotheses, it seeks to restrict researcher degrees of freedom so that researchers do what they said they would do rather than making analysis decisions contingent on the data to obtain a desired result. Similar to a garden of forking paths (see Figure 1.1), there are many decision points in a study, and different decisions can lead to drastically different results. Third, the open science movement seeks to prevent ethically questionable research practices. Ethically questionable research practices may include (John et al., 2012):
- not reporting all dependent variables
- deciding to collect more data after looking at the data to see whether the results were significant
- not reporting all of a study’s conditions
- stopping collecting data earlier than planned because the expected result is detected
- rounding down a p-value
- selectively reporting studies that worked
- deciding whether to exclude data after looking at the impact of doing so on the results
- reporting an unexpected finding as having been predicted from the start (known as HARKing: hypothesizing after the results are known)
- claiming that results are unaffected by demographic variables when one is actually unsure
- falsifying data
- p-hacking: multiple testing, selective reporting, and misreporting of the true effect size
To advance reproducibility and replicability in science, researchers are encouraged to provide greater transparency and sharing of their research. For instance, one way to provide greater transparency and to limit researcher degrees of freedom is to (pre-)register a study. There is a continuum of study registration that includes pre-registration, co-registration, and post-registration, depending on when the registration occurs relative to data collection and analysis (Benning et al., 2019). Study pre-registration involves publicly posting a study’s design, hypotheses, methods, materials, and data analysis plan prior to data collection. Co-registration occurs after data collection starts but before data analysis begins. Post-registration occurs after data analysis has begun.
Although pre-registration is most commonly used with confirmatory research in which you have a directional hypothesis, you can also pre-register exploratory research in which you have no directional hypothesis. Pre-registration helps make explicit which tests are confirmatory (based on a priori directional hypotheses) and which are exploratory. It is important to interpret the results of confirmatory and exploratory tests differently, because you are more likely to obtain false-positive results when conducting many exploratory tests, as the simulation sketch below illustrates. You can also pre-register decision trees that allow flexibility for multiple approaches depending on the data.
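The following minimal R simulation, which is illustrative rather than drawn from any particular study, shows why many exploratory tests inflate the chance of at least one false-positive result; the choice of 20 tests and 10,000 simulated studies is arbitrary.

```r
set.seed(1)

n_sims <- 10000   # number of simulated "studies"
n_tests <- 20     # number of exploratory tests per study
alpha <- .05

# In each simulated study, all null hypotheses are true (no real effects),
# so every significant result is a false positive.
any_false_positive <- replicate(n_sims, {
  p_values <- runif(n_tests)   # p-values are uniformly distributed when the null is true
  any(p_values < alpha)
})

mean(any_false_positive)    # ~ .64, versus .05 for a single pre-registered test
1 - (1 - alpha)^n_tests     # analytic family-wise error rate: ~ .64
```

With 20 exploratory tests and no true effects, roughly two out of three studies would yield at least one “significant” finding, which is why exploratory results warrant more cautious interpretation than pre-registered, confirmatory results.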
Some journals see the value of pre-registered studies and publish registered reports. A registered report is a study that is evaluated based on the pre-registered methods, hypotheses, and data analysis plan, and if the pre-registered study is accepted, the eventual paper will be accepted (as long as the researchers followed the pre-registered plan) regardless of whether the findings supported the hypotheses. Registered reports of exploratory analyses are sometimes called exploratory reports.
There are many tools that aim to advance the open science movement.
A key tool is the Open Science Framework (OSF; https://osf.io).
The OSF is a free, open source platform that supports the transparency of scientific research.
There are numerous ways that researchers can make use of the OSF to advance the aims of open science.
Here is a template for a pre-registration of a study on the OSF: https://osf.io/2hrfy.
Researchers can (pre-)register their study on the OSF by posting their hypotheses, study methods, and data analysis plan.
In addition to the OSF, other websites for pre-registration include AsPredicted (https://aspredicted.org), PROSPERO (for systematic reviews; https://www.crd.york.ac.uk/prospero), and ClinicalTrials.gov (https://clinicaltrials.gov).
In addition, researchers can use the OSF to post manuals, protocols, lab notebooks, data, data processing syntax, statistical analysis code, detailed results, computational notebooks, and paper preprints (Tackett, Brandes, & Reardon, 2019).
A computational notebook (e.g., Jupyter notebook, R Markdown, Quarto) is a document in which the analysis code, output, and text are woven together, so the reader can see how each statistical result was obtained, which supports reproducibility.
An example of an R Markdown computational notebook is on the book’s page on the OSF: https://osf.io/nkyve (the associated .Rmd file that generated the HTML is here: https://osf.io/fxrzm).
An example of a Quarto computational notebook is also on the book’s page on the OSF: https://osf.io/8ybtk (the associated .qmd file that generated the HTML is here: https://osf.io/zbruf).
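For readers unfamiliar with the format, here is a minimal, hypothetical sketch of the source of a Quarto (.qmd) notebook; the content is illustrative and is not taken from the book’s example files.

````markdown
---
title: "Example analysis notebook"
format: html
---

Narrative text describing the analysis appears alongside the code and its output.

```{r}
# Code chunks run when the document is rendered, so every reported
# result is regenerated directly from the data and code
scores <- c(12, 15, 9, 14, 11)
mean(scores)
```
````

Rendering the document (e.g., with `quarto render`) produces an HTML report in which the text, code, and computed output appear together.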
In addition to the OSF, other websites for sharing materials include Databrary (which is particularly useful for sharing videos; https://nyu.databrary.org), Zenodo (https://zenodo.org), the Dataverse Project (https://dataverse.org), and figshare (https://figshare.com).
After pre-registration, you can share a preprint of your paper on a preprint server like arXiv, PsyArXiv, bioRxiv, SocArXiv, EdArXiv, MetaArXiv, or medRxiv. For an example of a preprint, see here: https://psyarxiv.com/gtsvw. Posting a preprint disseminates the paper without waiting for peer review and the publishing process. It also provides free access to papers that might otherwise be hidden behind journals’ paywalls. Also, it helps establish who was first to perform work on a topic.
There are now journals dedicated to publishing replication studies, such as ReScience X (http://rescience.org/x), and journals dedicated to publishing null findings, such as the Journal of Articles in Support of the Null Hypothesis (https://www.jasnh.com).
Nevertheless, open science does not provide a one-size-fits-all solution to the vast diversity of scientific methods. For instance, open science and pre-registration practices may need to be adapted in important ways for longitudinal research (Petersen et al., 2024).
13.7 Conclusion
According to the Belmont Report, professional ethics includes respect for persons, beneficence, and justice. It is also important to follow the APA Ethics Code (American Psychological Association, 2017) and AERA guidelines (American Educational Research Association et al., 2014). It is also important to produce a replicable science, which requires better science, including the use of measures with strong psychometrics (reliability and validity), in addition to not engaging in questionable research practices. The open science movement, including pre-registration and the sharing of data, research materials, and preprints, may help improve the replicability of science.