Case Study vs. Single-Case Experimental Designs

What's the difference?

Case study and single-case experimental designs are both research methods used in psychology and other social sciences to investigate individual cases or subjects. However, they differ in their approach and purpose. Case studies involve in-depth examination of a single case, such as an individual, group, or organization, to gain a comprehensive understanding of the phenomenon being studied. On the other hand, single-case experimental designs focus on studying the effects of an intervention or treatment on a single subject over time. These designs use repeated measures and control conditions to establish cause-and-effect relationships. While case studies provide rich qualitative data, single-case experimental designs offer more rigorous experimental control and allow for the evaluation of treatment effectiveness.

Attribute         | Case Study                          | Single-Case Experimental Designs
Research Design   | Qualitative                         | Quantitative
Focus             | Exploratory                         | Hypothesis testing
Sample Size       | Usually small                       | Usually small
Data Collection   | Observations, interviews, documents | Observations, measurements
Data Analysis     | Qualitative analysis                | Statistical analysis
Generalizability  | Low                                 | Low
Internal Validity | Low                                 | High
External Validity | Low                                 | Low

Further Detail

Introduction

When conducting research in various fields, it is essential to choose the appropriate study design to answer research questions effectively. Two commonly used designs are case study and single-case experimental designs. While both approaches aim to provide valuable insights into specific phenomena, they differ in several key attributes. This article will compare and contrast the attributes of case study and single-case experimental designs, highlighting their strengths and limitations.

Definition and Purpose

A case study is an in-depth investigation of a particular individual, group, or event. It involves collecting and analyzing qualitative or quantitative data to gain a comprehensive understanding of the subject under study. Case studies are often used to explore complex phenomena, generate hypotheses, or provide detailed descriptions of unique cases.

On the other hand, single-case experimental designs are a type of research design that focuses on studying a single individual or a small group over time. These designs involve manipulating an independent variable and measuring its effects on a dependent variable. Single-case experimental designs are particularly useful for examining cause-and-effect relationships and evaluating the effectiveness of interventions or treatments.

Data Collection and Analysis

In terms of data collection, case studies rely on various sources such as interviews, observations, documents, and artifacts. Researchers often employ multiple methods to gather rich and diverse data, allowing for a comprehensive analysis of the case. The data collected in case studies are typically qualitative in nature, although quantitative data may also be included.

In contrast, single-case experimental designs primarily rely on quantitative data collection methods. Researchers use standardized measures and instruments to collect data on the dependent variable before, during, and after the manipulation of the independent variable. This allows for a systematic analysis of the effects of the intervention or treatment on the individual or group being studied.

Generalizability

One of the key differences between case studies and single-case experimental designs is their generalizability. Case studies are often conducted on unique or rare cases, making it challenging to generalize the findings to a larger population. The focus of case studies is on providing detailed insights into specific cases rather than making broad generalizations.

On the other hand, single-case experimental designs aim to establish causal relationships and can provide evidence for generalizability. By systematically manipulating the independent variable and measuring its effects on the dependent variable, researchers can draw conclusions about the effectiveness of interventions or treatments that, with replication across cases, may be applicable to similar cases or populations.

Internal Validity

Internal validity refers to the extent to which a study accurately measures the cause-and-effect relationship between variables. In case studies, establishing internal validity can be challenging due to the lack of control over extraneous variables. The presence of multiple data sources and the potential for subjective interpretation may also introduce bias.

In contrast, single-case experimental designs prioritize internal validity by employing rigorous control over extraneous variables. Researchers carefully design the intervention or treatment, implement it consistently, and measure the dependent variable under controlled conditions. This allows for a more confident determination of the causal relationship between the independent and dependent variables.

Time and Resources

Case studies often require significant time and resources due to their in-depth nature. Researchers need to spend considerable time collecting and analyzing data from various sources, conducting interviews, and immersing themselves in the case. Additionally, case studies may involve multiple researchers or a research team, further increasing the required resources.

On the other hand, single-case experimental designs can be more time and resource-efficient. Since they focus on a single individual or a small group, data collection and analysis can be more streamlined. Researchers can also implement interventions or treatments in a controlled manner, reducing the time and resources needed for data collection.

Ethical Considerations

Both case studies and single-case experimental designs require researchers to consider ethical implications. In case studies, researchers must ensure the privacy and confidentiality of the individuals or groups being studied. Informed consent and ethical guidelines for data collection and analysis should be followed to protect the rights and well-being of the participants.

Similarly, in single-case experimental designs, researchers must consider ethical considerations when implementing interventions or treatments. The well-being and safety of the individual or group being studied should be prioritized, and informed consent should be obtained. Additionally, researchers should carefully monitor and evaluate the potential risks and benefits associated with the intervention or treatment.

Conclusion

Case studies and single-case experimental designs are valuable research approaches that offer unique insights into specific phenomena. While case studies provide in-depth descriptions and exploratory analyses of individual cases, single-case experimental designs focus on establishing causal relationships and evaluating interventions or treatments. Researchers should carefully consider the attributes and goals of their study when choosing between these two designs, ensuring that the selected approach aligns with their research questions and objectives.



Single-Case Experimental Designs
by S. Andrew Garbacz and Thomas R. Kratochwill
Last reviewed: 29 July 2020. Last modified: 29 July 2020. DOI: 10.1093/obo/9780199828340-0265

Single-case experimental designs are a family of experimental designs that are characterized by researcher manipulation of an independent variable and repeated measurement of a dependent variable before (i.e., baseline) and after (i.e., intervention phase) introducing the independent variable. In single-case experimental designs a case is the unit of intervention and analysis (e.g., a child, a school). Because measurement within each case is conducted before and after manipulation of the independent variable, the case typically serves as its own control. Experimental variants of single-case designs provide a basis for determining a causal relation by replication of the intervention through (a) introducing and withdrawing the independent variable, (b) manipulating the independent variable across different phases, and (c) introducing the independent variable in a staggered fashion across different points in time. Due to their economy of resources, single-case designs may be useful during development activities and allow for rapid replication across studies.

Several sources provide overviews of single-case experimental designs. Barlow, et al. 2009 includes an overview for the development of single-case experimental designs, describes key considerations for designing and conducting single-case experimental design research, and reviews procedural elements, assessment strategies, and replication considerations. Kazdin 2011 provides detailed coverage of single-case experimental design variants as well as approaches for evaluating data in single-case experimental designs. Kratochwill and Levin 2014 describes key methodological features that underlie single-case experimental designs, including philosophical and statistical foundations and data evaluation. Ledford and Gast 2018 covers research conceptualization and writing, design variants within single-case experimental design, definitions of variables and associated measurement, and approaches to organize and evaluate data. Riley-Tillman and Burns 2009 provides a practical orientation to single-case experimental designs to facilitate uptake and use in applied settings.

Barlow, D. H., M. K. Nock, and M. Hersen, eds. 2009. Single case experimental designs: Strategies for studying behavior change. 3d ed. New York: Pearson.

A comprehensive reference about the process of designing and conducting single-case experimental design studies. Chapters are integrative but can stand alone.

Kazdin, A. E. 2011. Single-case research designs: Methods for clinical and applied settings. 2d ed. New York: Oxford Univ. Press.

A complete overview and description of single-case experimental design variants as well as information about data evaluation.

Kratochwill, T. R., and J. R. Levin, eds. 2014. Single-case intervention research: Methodological and statistical advances. New York: Routledge.

The authors describe in depth the methodological and analytic considerations necessary for designing and conducting research that uses a single-case experimental design. In addition, the text includes chapters from leaders in psychology and education who provide critical perspectives about the use of single-case experimental designs.

Ledford, J. R., and D. L. Gast, eds. 2018. Single case research methodology: Applications in special education and behavioral sciences. New York: Routledge.

Covers the research process from writing literature reviews, to designing, conducting, and evaluating single-case experimental design studies.

Riley-Tillman, T. C., and M. K. Burns. 2009. Evaluating education interventions: Single-case design for measuring response to intervention. New York: Guilford Press.

Focuses on accelerating uptake and use of single-case experimental designs in applied settings. This book provides a practical, “nuts and bolts” orientation to conducting single-case experimental design research.


Single-Subject Experimental Design: An Overview

CREd Library: Julie Wambaugh and Ralf Schlosser

December 2014

DOI: 10.1044/cred-cred-ssd-r101-002

Single-subject experimental designs – also referred to as within-subject or single-case experimental designs – are among the most prevalent designs used in communication sciences and disorders (CSD) treatment research. These designs provide a framework for a quantitative, scientifically rigorous approach where each participant provides his or her own experimental control.

An Overview of Single-Subject Experimental Design

What is single-subject design?

Transcript of the video Q&A with Julie Wambaugh.

The essence of single-subject design is using repeated measurements to really understand an individual's variability, so that we can use our understanding of that variability to determine what the effects of our treatment are.

For me, one of the first steps in developing a treatment is understanding what an individual does. So, if I were doing a group treatment study, I would not necessarily be able to see or to understand what was happening with each individual patient, so that I could make modifications to my treatment and understand all the details of what's happening in terms of the effects of my treatment. For me it's a natural first step in the progression of developing a treatment. Also, with the disorders that we deal with, it's very hard to get the number of participants that we would need for the gold-standard randomized controlled trial. Using single-subject designs works around the possible limiting factor of not having enough subjects in a particular area of study.

My mentor was Dr. Cynthia Thompson, who was trained by Leija McReynolds from the University of Kansas, which was where a lot of single-subject design in our field originated, and so I was fortunate to be on the cutting edge of this being implemented in our science back in the late '70s and early '80s. We saw, I think, a nice revolution in terms of attention to these types of designs, giving credit to the type of data that could be obtained from these types of designs, and a flourishing of these designs really through the 1980s into the 1990s and into the 2000s.

But I think, and I've talked with other single-subject design investigators, now we're seeing maybe a little bit of a lapse of attention, and a lack of training again among our young folks. Maybe people assume that people understand the foundation, but they really don't. And more problems are occurring with the science. I think we need to re-establish the foundations in our young scientists. And this project, I think, will be a big plus toward moving us in that direction.

What is the Role of Single-Subject Design?

Transcript of the video Q&A with Ralf Schlosser.

So what has happened recently, with the onset of evidence-based practice, is the adoption of a common hierarchy of evidence in terms of designs. As you noted, the randomized controlled trial and meta-analyses of randomized controlled trials are on top of common hierarchies. And that's fine. But it doesn't mean that single-subject designs cannot play a role.

For example, single-subject design can be implemented prior to implementing a randomized controlled trial to get a better handle on the magnitude of the effects, the workings of the active ingredients, and all of that. It is very good to prepare that prior to developing a randomized controlled trial. After you have implemented the randomized controlled trial, and then you want to implement the intervention in a more naturalistic setting, it becomes very difficult to do that in a randomized form or at the group level. So again, single-subject design lends itself to more practice-oriented implementation.

So I see it as a crucial methodology among several. What we can do to promote what single-subject design is good for is to speak up. It is important that it is being recognized for what it can do and what it cannot do.

Basic Features and Components of Single-Subject Experimental Designs

Defining Features

Single-subject designs are defined by the following features:

  • An individual “case” is the unit of intervention and unit of data analysis.
  • The case provides its own control for purposes of comparison. For example, the case’s series of outcome variables are measured prior to the intervention and compared with measurements taken during (and after) the intervention.
  • The outcome variable is measured repeatedly within and across different conditions or levels of the independent variable.

See Kratochwill, et al. (2010)

Structure and Phases of the Design

Single-subject designs are typically described according to the arrangement of baseline and treatment phases.

The conditions in a single-subject experimental study are often assigned letters such as the A phase and the B phase, with A being the baseline, or no-treatment, phase, and B the experimental, or treatment, phase. (Other letters are sometimes used to designate other experimental phases.) Generally, the A phase serves as a time period in which the behavior or behaviors of interest are counted or scored prior to introducing treatment. In the B phase, the same behavior of the individual is counted over time under experimental conditions while treatment is administered. Decisions regarding the effect of treatment are then made by comparing an individual's performance during the treatment (B) phase with performance during the no-treatment (A) phase.

McReynolds and Thompson (1986)
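As a minimal numerical sketch of the A/B comparison described above (the scores and the `phase_mean` helper are invented for illustration, not drawn from any cited study), the simplest informal summary is the change in mean level between phases:

```python
# Hypothetical illustration of an A-B comparison in a single-subject design.
# The scores below are invented example data, not from any study.

def phase_mean(scores):
    """Mean level of the measured behavior within one phase."""
    return sum(scores) / len(scores)

# A phase: repeated baseline measurements taken before treatment.
baseline_a = [12, 11, 13, 12, 11]
# B phase: the same behavior measured repeatedly while treatment is administered.
treatment_b = [15, 18, 20, 22, 23]

level_a = phase_mean(baseline_a)
level_b = phase_mean(treatment_b)

# A first, informal look at the treatment effect: the change in mean level.
print(f"Baseline (A) level:  {level_a:.1f}")
print(f"Treatment (B) level: {level_b:.1f}")
print(f"Change in level:     {level_b - level_a:+.1f}")
```

Note that a mean-level difference alone does not establish experimental control; as discussed later on this page, that requires replication of the effect (e.g., withdrawal or staggered introduction of the treatment).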

Basic Components

Important primary components of a single-subject study include the following:

  • The participant is the unit of analysis, where a participant may be an individual or a unit such as a class or school.
  • Participant and setting descriptions are provided with sufficient detail to allow another researcher to recruit similar participants in similar settings.
  • Dependent variables are (a) operationally defined and (b) measured repeatedly.
  • An independent variable is actively manipulated, with the fidelity of implementation documented.
  • A baseline condition demonstrates a predictable pattern which can be compared with the intervention condition(s).
  • Experimental control is achieved through introduction and withdrawal/reversal, staggered introduction, or iterative manipulation of the independent variable.
  • Visual analysis is used to interpret the level, trend, and variability of the data within and across phases.
  • External validity of results is accomplished through replication of the effects.
  • Social validity is established by documenting that interventions are functionally related to change in socially important outcomes.

See Horner, et al. (2005)
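The level, trend, and variability inspected during visual analysis can each be given a simple numerical counterpart. The sketch below is a rough illustration with invented data (the `describe_phase` helper is not part of any cited methodology): level as the phase mean, trend as the least-squares slope over session number, and variability as the standard deviation within the phase.

```python
# Hypothetical sketch of the quantities inspected in visual analysis:
# level (mean), trend (slope over sessions), and variability (spread),
# computed separately for each phase. Example data are invented.
from statistics import mean, pstdev

def describe_phase(scores):
    n = len(scores)
    level = mean(scores)
    # Least-squares slope of score on session number (0, 1, 2, ...).
    xs = range(n)
    x_bar = mean(xs)
    slope = sum((x - x_bar) * (y - level) for x, y in zip(xs, scores)) \
        / sum((x - x_bar) ** 2 for x in xs)
    return {"level": level, "trend": slope, "variability": pstdev(scores)}

baseline = [10, 11, 10, 12, 11]      # stable, near-flat baseline
intervention = [13, 15, 18, 20, 21]  # rising level after the IV is introduced

print("A phase:", describe_phase(baseline))
print("B phase:", describe_phase(intervention))
```

In practice, visual analysts compare these features within and across phases (and across replications) rather than relying on any single statistic.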

Common Misconceptions

Single-Subject Experimental Designs versus Case Studies

Transcript of the video Q&A with Julie Wambaugh.

One of the biggest mistakes, that is a huge problem, is misunderstanding that a case study is not a single-subject experimental design. There are controls that need to be implemented, and a case study does not equate to a single-subject experimental design.

People misunderstand or misinterpret the term "multiple baseline" to mean that because you are measuring multiple things, that gives you the experimental control. You have to be demonstrating, instead, that you've measured multiple behaviors and that you've replicated your treatment effect across those multiple behaviors. So, one instance of one treatment being implemented with one behavior is not sufficient, even if you've measured other things. That's a very common mistake that I see.

There's a design, an ABA design, that's a very strong experimental design where you measure the behavior, you implement treatment, and then, to get experimental control, you need to see the behavior go back down to baseline for you to have evidence of experimental control. It's a hard design to implement in our field because we want our behaviors to stay up! We don't want to see them return back to baseline.

Oftentimes people will say they did an ABA. But really, in effect, all they did was an AB. They measured, they implemented treatment, and the behavior changed because the treatment was successful. That does not give you experimental control. They think they did an experimentally sound design, but because the behavior didn't do what the design requires to get experimental control, they really don't have experimental control with their design.

Single-subject studies should not be confused with case studies or other non-experimental designs.

In case study reports, procedures used in treatment of a particular client's behavior are documented as carefully as possible, and the client's progress toward habilitation or rehabilitation is reported. These investigations provide useful descriptions. . . . However, a demonstration of treatment effectiveness requires an experimental study. A better role for case studies is description and identification of potential variables to be evaluated in experimental studies. An excellent discussion of this issue can be found in the exchange of letters to the editor by Hoodin (1986) and Rubow and Swift (1986).

McReynolds and Thompson (1986)

Other Single-Subject Myths

Transcript of the video Q&A with Ralf Schlosser.

Myth 1: Single-subject experiments only have one participant. Obviously, it requires only one subject, one participant. But it's a misnomer to think that single-subject is just about one participant. You can have as many as twenty or thirty.

Myth 2: Single-subject experiments only require one pre-test/post-test. I think a lot of students in the clinic are used to the measurement of one pre-test and one post-test because of the way the goals are written, and maybe there's not enough time to collect continuous data. But single-case experimental designs require ongoing data collection. There's this misperception that one baseline data point is enough. But for single-case experimental design you want to see at least three data points, because it allows you to see a trend in the data. So there's a myth about the number of data points needed. The more data points we have, the better.

Myth 3: Single-subject experiments are easy to do. Single-subject design has its own tradition of methodology. It seems very easy to do when you read up on one design. But there are lots of things to consider, and lots of things can go wrong. It requires quite a bit of training. It takes at least one three-credit course that you take over the whole semester.
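The point about needing at least three baseline data points to see a trend can be shown numerically. In this rough sketch (the `slope` helper and the numbers are invented for illustration), a slope is simply undefined from a single pre-test measurement, while repeated measurements make a baseline trend estimable:

```python
# Hypothetical illustration: a trend (slope) cannot be seen from one
# data point and only becomes estimable with repeated measurements.

def slope(scores):
    """Least-squares slope of score on session number; None if undefined."""
    n = len(scores)
    if n < 2:
        return None  # a single pre-test point cannot show a trend
    x_bar = (n - 1) / 2
    y_bar = sum(scores) / n
    num = sum((i - x_bar) * (y - y_bar) for i, y in enumerate(scores))
    den = sum((i - x_bar) ** 2 for i in range(n))
    return num / den

print(slope([14]))          # None: no trend visible from one measurement
print(slope([14, 15, 17]))  # a rising baseline trend becomes apparent
```

A visibly rising baseline matters in practice: if the behavior is already improving before treatment, a change during the B phase cannot be attributed to the intervention.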

Further Reading: Components of Single-Subject Designs

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M. & Shadish, W. R. (2010). Single-case designs technical documentation. From the What Works Clearinghouse. http://ies.ed.gov/ncee/wwc/documentsum.aspx?sid=229

Further Reading: Single-Subject Design Textbooks

Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings. Oxford University Press.

McReynolds, L. V. & Kearns, K. (1983). Single-subject experimental designs in communicative disorders. Baltimore: University Park Press.

Further Reading: Foundational Articles

Julie Wambaugh, University of Utah

Ralf Schlosser, Northeastern University

The content of this page is based on selected clips from video interviews conducted at the ASHA National Office.

Additional digested resources and references for further reading were selected and implemented by CREd Library staff.

Copyright © 2015 American Speech-Language-Hearing Association


Published: 05 April 2024

Single-case experimental designs: the importance of randomization and replication

René Tanious, Rumen Manolov, Patrick Onghena & Johan W. S. Vlaeyen

Nature Reviews Methods Primers, volume 4, Article number: 27 (2024)


Single-case experimental designs are rapidly growing in popularity. This popularity needs to be accompanied by transparent and well-justified methodological and statistical decisions. Appropriate experimental design including randomization, proper data handling and adequate reporting are needed to ensure reproducibility and internal validity. The degree of generalizability can be assessed through replication.
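The role of randomization can be made concrete with a toy example. In a simple AB phase design in which the intervention start point is randomly selected in advance, a randomization test compares the observed phase-mean difference against the differences that would have arisen under every other admissible start point. A minimal sketch (hypothetical data and a deliberately simple effect measure; real analyses use dedicated tools):

```python
def randomization_test(data, actual_start, min_phase=3):
    """Randomization test for a single-case AB design in which the
    intervention start point was randomly chosen before the study.

    data: list of repeated measurements (baseline, then intervention).
    actual_start: index at which the intervention actually began.
    min_phase: minimum number of data points required in each phase.
    """
    def effect(start):
        baseline, treatment = data[:start], data[start:]
        return sum(treatment) / len(treatment) - sum(baseline) / len(baseline)

    observed = effect(actual_start)
    # Every admissible start point forms the randomization distribution.
    starts = range(min_phase, len(data) - min_phase + 1)
    distribution = [effect(s) for s in starts]
    # One-sided p-value: proportion of possible assignments producing an
    # effect at least as large as the one actually observed.
    p = sum(d >= observed for d in distribution) / len(distribution)
    return observed, p

# Hypothetical series: stable baseline, then higher scores after treatment.
scores = [3, 4, 3, 4, 3, 8, 9, 8, 9, 8]
obs, p = randomization_test(scores, actual_start=5)
```

Note that with only five admissible start points the smallest attainable p-value is 0.2, which is one way to see why replication within and across cases matters as much as randomization.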




Tanious, R., Manolov, R., Onghena, P. & Vlaeyen, J. W. S. Single-case experimental designs: the importance of randomization and replication. Nat. Rev. Methods Primers 4, 27 (2024). https://doi.org/10.1038/s43586-024-00312-8




The Family of Single-Case Experimental Designs

Leonard h. epstein.

1 Jacobs School of Medicine and Biomedical Sciences, Division of Behavioral Medicine, Department of Pediatrics, University at Buffalo, Buffalo, New York, United States of America,

Jesse Dallery

2 Department of Psychology, University of Florida, Gainesville, Florida, United States of America

Single-case experimental designs (SCEDs) represent a family of research designs that use experimental methods to study the effects of treatments on outcomes. The fundamental unit of analysis is the single case—which can be an individual, clinic, or community—ideally with replications of effects within and/or between cases. These designs are flexible and cost-effective and can be used for treatment development, translational research, personalized interventions, and the study of rare diseases and disorders. This article provides a broad overview of the family of single-case experimental designs with corresponding examples, including reversal designs, multiple baseline designs, combined multiple baseline/reversal designs, and integration of single-case designs to identify optimal treatments for individuals into larger randomized controlled trials (RCTs). Personalized N-of-1 trials can be considered a subcategory of SCEDs that overlaps with reversal designs. Relevant issues for each type of design—including comparisons of treatments, design issues such as randomization and blinding, standards for designs, and statistical approaches to complement visual inspection of single-case experimental designs—are also discussed.

1. Introduction

Single-case experimental designs (SCEDs) represent a family of experimental designs to examine the relationship between one or more treatments or levels of treatment and changes in biological or behavioral outcomes. These designs originated in early experimental psychology research ( Boring, 1929 ; Ebbinghaus, 1913 ; Pavlov, 1927 ), and were later expanded and formalized in the fields of basic and applied behavior analysis ( Morgan & Morgan, 2001 ; Sidman, 1960 ). SCEDs have been extended to a number of fields, including medicine ( Lillie et al., 2011 ; Schork, 2015 ), public health ( Biglan et al., 2000 ; Duan et al., 2013 ), education ( Horner et al., 2005 ), counseling psychology ( Lundervold & Belwood, 2000 ), clinical psychology ( Vlaeyen et al., 2020 ), health behavior ( McDonald et al., 2017 ), and neuroscience ( Soto, 2020 ).

SCEDs provide a framework to determine whether changes in a target behavior(s) or symptom are in fact a function of the intervention. The fundamentals of an SCED involve repeated measurement, replication of conditions (e.g., baseline and intervention conditions), and the analysis of effects with respect to each individual serving as his or her own control. This process can be useful for identifying the optimal treatment for an individual ( Dallery & Raiff, 2014 ; Davidson et al., 2021 ), treating rare diseases ( Abrahamyan et al., 2016 ), and implementing early phase translational research ( Czajkowski et al., 2015 ). SCEDs can be referred to as ‘personalized (N-of-1) trials’ when used this way, but they also have broad applicability to a range of scientific questions. Results from SCEDs can be aggregated using meta-analytic techniques to establish generalizable methods and treatment guidelines ( Shadish, 2014 ; Vannest et al., 2018 ). Figure 1 presents the main family of SCEDs, and shows how personalized (N-of-1) trials fit into these designs ( Vohra et al., 2016 ). The figure also distinguishes between experimental and nonexperimental single-case designs. In the current article, we provide an overview of SCEDs and thus a context for the articles in this special issue focused on personalized (N-of-1) trials. Our focus is to provide the fundamentals of these designs; more detailed treatments of data analysis ( Moeyaert & Fingerhut, 2022 ; Schork, 2022 ), conduct and reporting standards ( Kravitz & Duan, 2022 ; Porcino & Vohra, 2022 ), and other methodological considerations are provided in this special issue. Our hope is that this article will inspire a diverse array of students, engineers, scientists, and practitioners to further explore the utility, rigor, and flexibility of these designs.

[Figure 1]

A = Baseline, B and C refer to different treatments.

The most common approach to evaluating the effectiveness of interventions on outcomes is using randomized controlled trials (RCTs). RCTs provide an idea of the average effect of an intervention on outcomes. People do not all change at the same rate or in the same way, however; variability in both how people change and the effect of the intervention is inevitable ( Fisher et al., 2018 ; Normand, 2016 ; Roustit et al., 2018 ). These sources of variability are conflated in a typical RCT, leading to heterogeneity of treatment effects (HTE). Research on HTE has shown variability in outcomes in RCTs, and in some studies very few people actually exhibit the benefits of that treatment ( Williams, 2010 ). One approach in RCTs is to assess moderators of treatment response to identify individual differences that may predict response to a treatment. This approach may not limit variability in response, and substantial reduction in variability of treatment for subgroups in comparison to the group as a whole is far from assured. Even if variability is reduced, the average effect for that subgroup may not be representative of individual members of the subgroup.

SCEDs can identify the optimal treatment for an individual person rather than the average person in a group ( Dallery & Raiff, 2014 ; Davidson et al., 2021 ; Hekler et al., 2020 ). SCEDs are multiphase experimental designs in which a great deal of data is collected on a single person, the person serves as his or her own control ( Kazdin, 2011 , 2021 ), and the order of presentation of conditions can be randomized to enhance experimental control. That is, a person’s outcomes in one phase are compared to outcomes in another phase. In a typical study, replications are achieved within and/or across several individuals; this allows for strong inferences about causation between behavior and the treatment (or levels thereof). Achieving replications is synonymous with achieving experimental control.

We provide an overview of three experimental designs that can be adapted for personalized medicine: reversal, multiple baseline, and combined reversal and multiple baseline designs, and we discuss how SCEDs can be integrated into RCTs. These designs focus on demonstrating experimental control of the relationship between treatment and outcome. Several general principles common to all of the designs are noteworthy ( Lobo et al., 2017 ). First, in many studies, treatment effects are compared with control conditions with a no- intervention baseline as the initial condition. To reduce threats to internal validity of the study, the order of assignment of interventions can be randomized ( Kratochwill & Levin, 2010 ) and, when possible, the intervention and data collection can be blinded. The demonstration of experimental control across conditions or people needs to be replicated several times (three replications is the minimum) to ensure confidence of the relationship between treatment and outcome ( Kratochwill et al., 2010 ; Kratochwill & Levin, 2015 ). Demonstrating stability of data within a phase or, otherwise, no trend in the direction of treatment effects prior to starting treatment is particularly important. Stability refers to the degree of variability in the data path over time (e.g., data points must fall within a 15% range of the median for a condition). Thus, phase length needs to be flexible for the sake of determining stability and trend within a phase, but a minimum of 5 data points per phase has been recommended ( Kratochwill et al., 2013 ). The focus of the intervention’s effects is on clinically rather than statistically significant effects with the target effect prespecified and considered in interpretation of the relevance of the effect for clinical practice ( Epstein et al., 2021 ). In addition, multiple dependent outcomes can be simultaneously measured ( Epstein et al., 2021 ). 
SCEDs can be used to test whether a variable mediates the effect of a treatment on symptoms or behavior ( Miočević et al., 2020 ; Riley & Gaynor, 2014 ). Visual inspection of graphical data is typically used to determine treatment effects, and statistical methods are commonly used to assist in interpretation of graphical data ( Epstein et al., 2021 ). Furthermore, a growing number of statistical approaches can summarize treatment effects and provide effect sizes ( Kazdin, 2021 ; Moeyaert & Fingerhut, this issue; Pustejovsky, 2019 ; Shadish et al., 2014 ). Data across many SCED trials can be aggregated to assess the generality of the treatment effects to help address for whom and under what conditions an intervention is effective ( Branch & Pennypacker, 2013 ; Shadish, 2014 ; Van den Noortgate & Onghena, 2003 ).
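The stability criterion mentioned above (data points falling within a 15% range of the phase median) can be expressed directly as a check on a phase's data path. A minimal sketch (the exact envelope and criterion vary across authors, so the 15% figure here is only the example given in the text):

```python
import statistics

def is_stable(phase, envelope=0.15):
    """Check the stability rule described in the text: every data point
    in a phase must fall within +/- `envelope` (e.g., 15%) of the phase
    median before moving to the next condition."""
    med = statistics.median(phase)
    lo, hi = med * (1 - envelope), med * (1 + envelope)
    return all(lo <= y <= hi for y in phase)

stable_phase = [98, 102, 100, 101, 99]    # tight around a median of 100
unstable_phase = [80, 120, 100, 140, 60]  # wide swings around the median
```

In practice a phase would be extended until a check like this (together with the absence of a trend in the direction of the expected treatment effect) is satisfied.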

2. Reversal Designs

A reversal design collects behavioral or biological outcome data in at least two phases: a baseline or no treatment phase (labeled as ‘A’) and the experimental or treatment phase (labeled as ‘B’). The design is called a reversal design because there must be reversals or replications of phases for each individual; for example, in an ABA design, the baseline phase is replicated ( Kazdin, 2011 ). Ideally, three replications of treatment effects are used to demonstrate experimental control ( Kratochwill et al., 2010 ; Kratochwill & Levin, 1992 ). Figure 2 shows hypothetical results from an A1B1A2B2 design. The graph shows three replications of treatment effects (A1 versus B1, B1 versus A2, A2 versus B2) across four participants. Each phase was carried out until stability was evident from visual inspection of the data as well as absence of trends in the direction of the desired effect. The replication across participants increases the confidence in the effectiveness of the intervention. Extension of this design is possible by comparing multiple interventions, as well. The order of the treatments should be randomized, especially when the goal is to combine SCEDs across participants.

[Figure 2]

A1 = First Baseline, B1 = First Treatment, A2 = Return to Baseline, B2 = Return to Treatment. P1–P4 represent different hypothetical participants.
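The logic of counting the three demonstrations of effect in an A1B1A2B2 design can be sketched as a comparison of adjacent phase means. Published standards rely on visual analysis of level, trend, and variability rather than means alone, so this is only an illustration (data hypothetical):

```python
def demonstrations_of_effect(phases):
    """For an A-B-A-B phase sequence in which treatment is expected to
    raise the outcome, count the adjacent-phase mean comparisons that
    move in the predicted direction (up into B, back down into A).
    Three such demonstrations are the conventional minimum."""
    means = [sum(p) / len(p) for p in phases]
    expected_sign = 1  # the first change, A1 -> B1, should be an increase
    count = 0
    for prev, curr in zip(means, means[1:]):
        if (curr - prev) * expected_sign > 0:
            count += 1
        expected_sign *= -1  # the predicted direction flips at each phase change
    return count

# Hypothetical A1, B1, A2, B2 data for one participant:
abab = [[2, 3, 2], [7, 8, 7], [3, 2, 3], [8, 7, 8]]
n_demos = demonstrations_of_effect(abab)
```

Replicating this pattern across several participants, as in the figure, is what builds confidence that the treatment, not some coincident event, produced the change.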

Reversal designs can be more dynamic and compare several treatments. A common approach in personalized medicine would be to compare two or more doses of or different components of the same treatment ( Ward-Horner & Sturmey, 2010 ). For example, two drug doses could be compared using an A1B1C1B2C2 design, where A represents placebo and B and C represent the different drug doses ( Guyatt et al., 1990 ). In the case of drug studies, the drug/placebo administration can be double blinded. A more complex design could be A1B1A2C1A3C2A4B2, which would yield multiple replications of the comparison between drug and placebo. Based on the kinetics of the drug and the need for a washout period, the design could also be A1B1C1B2C2. This would provide three demonstrations of treatment effects: B1 to C1, C1 to B2, and B2 to C2. Other permutations could be planned strategically to identify the optimal dose for each individual.

Advantages of SCED reversal designs include their ability to show experimentally that a particular treatment was functionally related to a particular change in an outcome variable for that person. This is the core principle of personalized medicine: an optimal treatment for an individual can be identified ( Dallery & Raiff, 2014 ; Davidson et al., 2021 ; Guyatt et al., 1990 ; Hekler et al., 2020 ; Lillie et al., 2011 ). These designs can work well for studying the effect of interventions on rare diseases, for which collecting enough participants with similar characteristics for an RCT would be unlikely. An additional strength is the opportunity for the clinical researcher who also delivers clinical care to translate basic science findings or new findings from RCTs to their patients, who can potentially benefit ( Dallery & Raiff, 2014 ; Hayes, 1981 ). Research suggests that the trickle-down of new developments and hypotheses to their support in RCTs can take more than 15 years; many important advancements in the medical and behavioral sciences are likely not to be implemented rapidly enough ( Riley et al., 2013 ). The ability to test new intervention developments using scientific principles could speed up their translation into practice.

Limitations of SCEDs, however, are worth noting. Firstly, in line with the expectation that the outcome returns to baseline levels, reversals may require removal of the treatment. If the effect is not quickly reversible, then the designs are not relevant. A washout period may be placed in between phases if the effect is not immediately reversible; for example, a drug washout period could be planned based on the half-life of the drug. Secondly, the intervention should have a relatively immediate effect on the outcome. If many weeks to months are needed for some interventions to show effects, a reversal design may not be optimal unless the investigator is willing to plan a lengthy study. Thirdly, the design depends on comparing stable data over conditions. If achieving stability due to uncontrolled sources of biological or environmental variation is not possible, a reversal design may not be appropriate to evaluate a treatment, though it may be useful to identify the sources of variability ( Sidman, 1960 ). Finally, for a reversal to a baseline, a no-treatment phase may be inappropriate in investigating treatment effects for a very ill patient.

3. Multiple Baseline Designs

An alternative to a reversal design is the multiple baseline design, which does not require reversal of conditions to establish experimental control. There are three types of multiple baseline designs: multiple baseline across people, behaviors, and settings. The most popular is the multiple baseline across people, in which baselines are established for three or more people for the same outcome ( Cushing et al., 2011 ; Meredith et al., 2011 ). Treatment is implemented after different durations of baseline across individuals. The order of treatment implementation across people can be randomized ( Wen et al., 2019 ). Figure 3 shows an example across three individuals. In this hypothetical example, baseline data for each person are relatively stable and not decreasing, and reductions in the dependent variable are only observed after introduction of the intervention. Inclusion of one control person, who remains in baseline throughout the study and provides a control for extended monitoring, is also possible. Another variation is to collect baseline data intermittently in a ‘probe’ design, which can minimize burden associated with simultaneous and repeated measurement of outcomes ( Byiers et al., 2012 ; Horner & Baer, 1978 ). If the outcomes do not change during baseline conditions and the changes only occur across participants after the treatment has been implemented—and this sequence is replicated across several people—change in the outcome may be safely attributed to the treatment. The length of the baselines still must be long enough to show stability and no trend toward improvement until the treatment is implemented.

[Figure 3]

P1–P3 represent different hypothetical participants.
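The scheduling logic of a multiple-baseline-across-people design can be sketched in a few lines: the order in which participants enter treatment is randomized, and each successive participant remains in baseline longer than the previous one. The parameter values below (first start at session 5, a stagger of 3 sessions) are hypothetical, not prescribed by the design:

```python
import random

def assign_staggered_starts(participants, first_start=5, stagger=3, seed=None):
    """Randomize the order in which participants enter treatment, then
    stagger their treatment start sessions so each successive participant
    stays in baseline `stagger` sessions longer than the previous one."""
    rng = random.Random(seed)
    order = list(participants)
    rng.shuffle(order)  # randomized order of treatment implementation
    return {p: first_start + i * stagger for i, p in enumerate(order)}

# Maps each participant to the session at which treatment begins.
schedule = assign_staggered_starts(["P1", "P2", "P3"], seed=42)
```

Because the start sessions differ across people, an outcome that changes only when each person's own treatment begins is hard to attribute to anything other than the treatment.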

The two other multiple baseline designs focus on individual people: the multiple baseline across settings and the multiple baseline across behaviors ( Boles et al., 2008 ; Lane-Brown & Tate, 2010 ). An example of a multiple baseline across settings would be a dietary intervention implemented across meals. An intervention that targets a reduction in consumption of high–glycemic index foods, or foods with added sugar across meals, could be developed with the order of meals randomized. For example, someone may be randomized to reduce sugar-added or high–glycemic index foods for breakfast without any implementation at lunch or dinner. Implementation of the diet at lunch and then dinner would occur after different durations of baselines in these settings. An example of multiple baseline across behaviors might be to use feedback to develop a comprehensive exercise program that involves stretching, aerobic exercise, and resistance training. Feedback could target improvement in one of these randomly selected behaviors, implemented in a staggered manner.

The main limitation to a multiple baseline design is that some people (or behaviors) may be kept in baseline or control conditions for extended periods before treatment is implemented. Of course, failure to receive an effective treatment is common in RCTs for people who are randomized to control conditions; unlike control groups in RCTs, however, all participants in a multiple baseline design eventually receive treatment.

Finally, while the emphasis in personalized medicine is the identification of an optimal treatment plan for an individual person, situations in which multiple baselines across people prove relevant for precision medicine may arise. For example, identification of a small group of people with common characteristics—perhaps with a rare disease and for which a multiple-baseline-across-people design could be used to test an intervention more effectively than a series of personalized designs—is possible. In a similar vein, differential response to a common treatment in a multiple-baseline-across-people design can help to identify individual differences that can compromise the response to a treatment.

4. Integrating Multiple Baseline and Reversal Designs

While reversal designs can be used to compare effects of interventions, multiple baseline designs provide experimental control for testing one intervention but do not compare different interventions. One way to take advantage of the strengths of both designs is to combine them. For example, the effects of a first treatment could be studied using a multiple-baseline format and, after experimental control has been established, return to baseline prior to the commencement of a different treatment, which may be introduced in a different order. These comparisons can be made for several different interventions with the combination of both designs to demonstrate experimental control and compare effects of the interventions.

Figure 4 shows a hypothetical example of a combined approach to identify the best drug to decrease blood pressure. Baseline blood pressures are established for three people under placebo conditions before new drug X is introduced across participants in a staggered fashion to establish relative changes in blood pressure. All return to placebo after blood pressures reach stability, drug Y is introduced in a staggered sequence, participants are returned to placebo, and the most effective intervention for each individual (drug X or Y) is reintroduced to replicate the most important result: the most effective medication. This across-subjects design establishes experimental control for two different new drug interventions across three people while also establishing experimental control for five comparisons within subjects (placebo to drug X, drug X to placebo, placebo to drug Y, drug Y to placebo, placebo to more effective drug). Though this combined design strengthens confidence beyond either reversal or multiple baseline designs, in many situations, experimental control demonstrated using a reversal design is sufficient.

[Figure 4]

BL = Baseline. Drug X and Drug Y represent hypothetical drugs to lower blood pressure, and Best Drug represents a reversal to the most effective drug as identified for each hypothetical participant, labeled P1–P3.

5. Other Varieties of Single-Case Experimental Designs

Other less commonly used designs within the family of SCEDs may be useful for personalized medicine. One of the most relevant may be the alternating treatment design ( Barlow & Hayes, 1979 ; Manolov et al., 2021 ), in which people are exposed to baseline and one or more treatments for very brief periods without the concern about stability before changing conditions. While each treatment period may be short, many more replications of treatments can be achieved, and ineffective treatments can be identified quickly. This type of design may be relevant for drugs that have rapid effects with a short half-life and behavioral interventions that have rapid effects ( Coyle & Robertson, 1998 )—for example, the effects of biofeedback on heart rate ( Weems, 1998 ). Another design is the changing criterion design, in which experimental control is demonstrated when the outcome meets certain preselected criteria that can be systematically increased or decreased over time ( Hartmann & Hall, 1976 ). The design is especially useful when learning a new skill or when outcomes change slowly over time ( Singh & Leung, 1988 )—for example, gradually increasing the range of foods chosen in a previously highly selective eater ( Russo et al., 2019 ).
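The rapid alternation in an alternating treatment design is often scheduled in randomized blocks so that no condition is confounded with time of measurement. A small sketch (the block scheme and the particular condition names are illustrative, not prescribed by the design):

```python
import random

def alternating_schedule(conditions, sessions, seed=None):
    """Build a session schedule for an alternating treatment design:
    each block presents every condition exactly once in a freshly
    randomized order, so conditions are balanced across time."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(sessions // len(conditions)):  # whole blocks only
        block = list(conditions)
        rng.shuffle(block)
        schedule.extend(block)
    return schedule

plan = alternating_schedule(["baseline", "treatment1", "treatment2"], 12, seed=1)
```

Outcomes are then compared across the interleaved conditions rather than across long phases, which is why stability within a phase is not required.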

6. Integrating Single-Case Experimental Designs Into Randomized Controlled Trials

SCEDs can be integrated into RCTs to compare the efficacy of treatments chosen for someone based on SCEDs versus a standardized or usual care treatment ( Epstein et al., 2021 ; Schork & Goetz, 2017 ). Such innovative designs may capture the best in SCEDs and randomized controlled designs. Kravitz et al. (2018) used an RCT in which one group ( n = 108) experienced a series of reversal AB conditions, or a personalized (N-of-1) trial. The specific conditions were chosen for each patient from among eight categories of treatments to reduce chronic musculoskeletal pain (e.g., acetaminophen, any nonsteroidal anti-inflammatory drug, acetaminophen/oxycodone, tramadol). The other group ( n = 107) received usual care. The study also incorporated mobile technology to record pain-related data daily (see Dallery et al., 2013 , for a discussion of technology and SCEDs). The results suggested that the N-of-1 approach was feasible and acceptable, but it did not yield statistically significant superior results in pain measures compared to the usual care group. However, as noted by Vohra and Punja (2019) , the results do not indicate a flaw in the methodological approach: finding that two treatments do not differ in superiority is a finding worth knowing.

Another example of a situation where an integrated approach may be useful is selecting a diet for weight control. Many diets for weight control that vary in their macronutrient composition (e.g., low-carb/higher-fat versus low-fat/higher-carb) have their proponents and favorable biological mechanisms. However, direct comparisons of these diets show that they achieve similar average weight control with large variability in outcome. Thus, while the average person on a low-fat diet does about the same as the average person on a low-carb diet, some people on the low-carb diet do very well, while others fail. Some of the people who fail on the low-fat diet would undoubtedly do well on the low-carb diet, and vice versa. Further, some would fail on both diets due to general problems in adherence.

Personalized medicine suggests that diets should be individualized to achieve the best results. SCEDs would be one way to show ‘proof of concept’ that a particular diet is better than a standard healthy diet. First, people would be randomized to experimental (including SCEDs) or control (not basing diet on SCEDs). Subject selection criteria would proceed as in any RCT. For the first 3 months, people in the experimental group would engage in individual reversal designs in which 2-week intervals of low-carb and low-fat diets would be interspersed with their usual eating, and weight loss, diet adherence, food preferences, and the reinforcing value of foods in the diet would be measured to assess biological, behavioral, and subjective changes.

Participants in the control group would experience similar exposure to the different types of diets, but the diet to which they are ultimately assigned would be chosen at random rather than by SCED methods. In this way, they would have comparable diet exposure during the first 3 months of the study, but that experience would not affect their group assignment. As with any RCT, the study would proceed with regular measurements (e.g., at 6, 12, and 24 months), with the hypothesis that those assigned to a diet that produces better initial weight loss, and that they like and are motivated to continue, would do better than those receiving a randomly selected diet. The study could also be designed with three groups: a single-case design experimental group similar to the approach in the hypothetical study above, and two control groups, one low-fat and one low-carb.
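The assignment rule that distinguishes the two arms can be sketched in a few lines. This is a hypothetical illustration of the design described above, not a study protocol: the function name, diet labels, and run-in numbers are all invented.

```python
import random

random.seed(42)

DIETS = ["low-carb", "low-fat"]

def assign_diet(group, run_in_weight_loss):
    """Post-run-in diet assignment. `run_in_weight_loss` maps each diet to
    the kilograms this participant lost during that diet's 2-week reversal
    phases."""
    if group == "experimental":
        # SCED-guided: pick the diet that worked best for this participant.
        return max(run_in_weight_loss, key=run_in_weight_loss.get)
    # Control: identical run-in exposure, but assignment ignores the data.
    return random.choice(DIETS)

# Hypothetical run-in results for one participant.
run_in = {"low-carb": 1.8, "low-fat": 0.6}
print(assign_diet("experimental", run_in))  # low-carb
print(assign_diet("control", run_in))       # randomly chosen diet
```

The point of the sketch is that both arms receive identical exposure; only the decision rule differs, which is exactly the contrast the hypothetical trial would evaluate.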

An alternative design would be to have everyone experience SCEDs for the first 3 months and then be randomized to either the optimal treatment identified during those 3 months or an intervention chosen at random from among the interventions under study. This design has the advantage that randomization occurs after 3 months of study, so dropouts and non-adherers during the run-in period are never randomized and therefore do not dilute an intent-to-treat analysis.

The goal of either hypothetical study, or any study that incorporates SCEDs into RCTs, is to test whether matching participants to treatments provides superior results compared with providing the same treatment to everyone in a group. Two hypotheses can be generated in these types of designs: first, that mean changes will differ between groups; and second, that variability will differ between groups, with less variable outcomes for people whose treatment was selected after a single-case trial than for people whose treatment was randomly selected. A reduction in variability combined with a mean difference in outcome should increase the effect size for people treated using individualized designs, increase power, and allow a smaller sample size while maintaining confidence in the differences observed between groups.
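The link between reduced variability, effect size, and sample size can be made concrete with the usual normal-approximation sample size formula for a two-sample comparison. The numbers below are hypothetical: both scenarios assume the same 4-point mean advantage for the matched group, but in the second scenario SCED-based matching also halves that group's outcome SD.

```python
import math

def cohens_d(mean_diff, sd1, sd2):
    """Standardized mean difference using the pooled SD (equal group sizes)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return mean_diff / pooled_sd

def n_per_group(d, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group n for a two-sample comparison (normal
    approximation; z values rounded for two-sided alpha = .05, power = .80)."""
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Same 4-point mean advantage in both scenarios, but matching via SCEDs
# halves one group's outcome SD (10 -> 5), shrinking the pooled SD.
d_unmatched = cohens_d(4, 10, 10)  # d = 0.40
d_matched = cohens_d(4, 5, 10)     # d is roughly 0.51

print(n_per_group(d_unmatched))  # larger n needed for the smaller effect
print(n_per_group(d_matched))    # smaller n suffices for the larger effect
```

Under these invented numbers, the required per-group sample drops from the high 90s to the low 60s, illustrating how reduced variability alone can shrink the trial.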

7. Limitations of Single-Case Experimental Designs

Single-case experimental designs have limitations. If a measure changes with repeated testing even without intervention, it may not be useful for an SCED unless steps can be taken to mitigate such reactivity, such as more unobtrusive monitoring (Kazdin, 2021). Because the effects of interventions are evaluated over time, systematic environmental changes or maturation could influence the relationship between a treatment and outcome and thereby obscure the effect of the treatment; however, the design logic of reversal and multiple baseline designs largely controls for such influences. Because SCEDs rely on repeated measures and a detailed study of the relationship between treatment and outcome, studies whose dependent measures cannot be sampled frequently are not candidates for SCEDs. Likewise, failure to identify a temporal relationship between the introduction of treatment and the initiation of change in the outcome can make attributing changes to the intervention challenging. It is always possible that a confounding variable coincides with the introduction or removal of the intervention, which can lead to incorrect conclusions about the intervention's effects. Dropout or uncontrolled events in participants' lives can also introduce confounds. These problems are not unique to SCEDs; they occur with RCTs as well.

8. Single-Case Experimental Designs in Early Stage Translational Research

The emphasis of a research program may be on translating basic science findings into clinical interventions, with the goal of collecting early phase translational evidence as a step toward a fully powered RCT (Epstein et al., 2021). It is well known that a large amount of basic science never gets translated into clinical interventions (Butler, 2008; Seyhan, 2019); this served in part as the stimulus for the National Institutes of Health (NIH) to develop a network of clinical and translational science institutes in medical schools and universities throughout the United States. A common approach to early phase translational research is to implement a small, underpowered RCT to secure a 'signal' of a treatment effect and an effect size. This is a problematic approach to pilot research, and it is not advocated by the NIH (National Center for Complementary and Integrative Health, 2020). The number of participants needed for a fully powered RCT may differ substantially from the number projected from a small-sample RCT: small, underpowered pilot studies may overestimate the effect size, leading to an underpowered RCT, or they may underestimate it, leading to abandonment of a potentially effective intervention (Kraemer et al., 2006). SCEDs, especially reversal and multiple baseline designs, are therefore well suited to early phase translational research. This use complements the utility of SCEDs for identifying the optimal treatment for an individual or small group of individuals.
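A small simulation illustrates why a single small pilot is a shaky basis for a power calculation. The setup is hypothetical (a true standardized effect of 0.5 and 10 participants per arm), but the instability it demonstrates is generic.

```python
import math
import random

random.seed(1)

def observed_d(n_per_group, true_d=0.5):
    """Simulate one small pilot RCT and return its sample Cohen's d."""
    control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
    treated = [random.gauss(true_d, 1.0) for _ in range(n_per_group)]

    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):  # unbiased sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    pooled_sd = math.sqrt((var(control) + var(treated)) / 2)
    return (mean(treated) - mean(control)) / pooled_sd

# 1,000 simulated pilots with n = 10 per arm: individual estimates scatter
# widely around the true d of 0.5, so any single pilot could suggest anything
# from a null (or negative) effect to a very large one.
estimates = sorted(observed_d(10) for _ in range(1000))
print(f"middle 95% of pilot estimates: "
      f"{estimates[25]:.2f} to {estimates[975]:.2f}")
```

Powering a confirmatory trial on any one of these estimates would frequently produce a study that is far too small, or lead to discarding an effective treatment, which is the problem Kraemer et al. (2006) describe.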

9. Conclusion

Single-case experimental designs provide flexible, rigorous, and cost-effective approaches that can be used in personalized medicine to identify the optimal treatment for an individual patient. SCEDs represent a broad array of designs, of which personalized (N-of-1) designs are a prominent example, particularly in medicine. These designs can be incorporated into RCTs, and their results can be combined using meta-analytic techniques. SCEDs should become a standard part of the toolbox of clinical researchers seeking to improve care for their patients; they can lead to the next generation of interventions that show maximal effects for individual cases, and they support early phase translational research on the path to clinical practice.

Acknowledgments

We thank Lesleigh Stinson and Andrea Villegas for preparing the figures.

Disclosure Statement

Preparation of this special issue was supported by grants R01LM012836 from the National Library of Medicine of the National Institutes of Health and P30AG063786 from the National Institute on Aging of the National Institutes of Health. Funding to authors of this article was supported by grants U01 HL131552 from the National Heart, Lung, and Blood Institute, UH3 DK109543 from the National Institute of Diabetes and Digestive and Kidney Diseases, and R01HD080292 and R01HD088131 from the Eunice Kennedy Shriver National Institute of Child Health and Human Development. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication. The views expressed in this paper are those of the authors and do not represent the views of the National Institutes of Health, the U.S. Department of Health and Human Services, or any other government entity.

  • Abrahamyan L, Feldman BM, Tomlinson G, Faughnan ME, Johnson SR, Diamond IR, & Gupta S (2016). Alternative designs for clinical trials in rare diseases. American Journal of Medical Genetics, Part C: Seminars in Medical Genetics, 172(4), 313–331. https://doi.org/10.1002/ajmg.c.31533
  • Barlow DH, & Hayes SC (1979). Alternating treatments design: One strategy for comparing the effects of two treatments in a single subject. Journal of Applied Behavior Analysis, 12(2), 199–210. https://doi.org/10.1901/jaba.1979.12-199
  • Biglan A, Ary D, & Wagenaar AC (2000). The value of interrupted time-series experiments for community intervention research. Prevention Science, 1(1), 31–49. https://doi.org/10.1023/a:1010024016308
  • Boles RE, Roberts MC, & Vernberg EM (2008). Treating non-retentive encopresis with rewarded scheduled toilet visits. Behavior Analysis in Practice, 1(2), 68–72. https://doi.org/10.1007/bf03391730
  • Boring EG (1929). A history of experimental psychology. Appleton-Century-Crofts.
  • Branch MN, & Pennypacker HS (2013). Generality and generalization of research findings. In Madden GJ, Dube WV, Hackenberg TD, Hanley GP, & Lattal KA (Eds.), APA handbook of behavior analysis, Vol. 1. Methods and principles (pp. 151–175). American Psychological Association. https://doi.org/10.1037/13937-007
  • Butler D (2008). Translational research: Crossing the valley of death. Nature, 453(7197), 840–842. https://doi.org/10.1038/453840a
  • Byiers BJ, Reichle J, & Symons FJ (2012). Single-subject experimental design for evidence-based practice. American Journal of Speech-Language Pathology, 21(4), 397–414. https://doi.org/10.1044/1058-0360(2012/11-0036)
  • Coyle JA, & Robertson VJ (1998). Comparison of two passive mobilizing techniques following Colles’ fracture: A multi-element design. Manual Therapy, 3(1), 34–41. https://doi.org/10.1054/math.1998.0314
  • Cushing CC, Jensen CD, & Steele RG (2011). An evaluation of a personal electronic device to enhance self-monitoring adherence in a pediatric weight management program using a multiple baseline design. Journal of Pediatric Psychology, 36(3), 301–307. https://doi.org/10.1093/jpepsy/jsq074
  • Czajkowski SM, Powell LH, Adler N, Naar-King S, Reynolds KD, Hunter CM, Laraia B, Olster DH, Perna FM, Peterson JC, Epel E, Boyington JE, & Charlson ME (2015). From ideas to efficacy: The ORBIT model for developing behavioral treatments for chronic diseases. Health Psychology, 34(10), 971–982. https://doi.org/10.1037/hea0000161
  • Dallery J, Cassidy RN, & Raiff BR (2013). Single-case experimental designs to evaluate novel technology-based health interventions. Journal of Medical Internet Research, 15(2), Article e22. https://doi.org/10.2196/jmir.2227
  • Dallery J, & Raiff BR (2014). Optimizing behavioral health interventions with single-case designs: From development to dissemination. Translational Behavioral Medicine, 4(3), 290–303. https://doi.org/10.1007/s13142-014-0258-z
  • Davidson KW, Silverstein M, Cheung K, Paluch RA, & Epstein LH (2021). Experimental designs to optimize treatments for individuals. JAMA Pediatrics, 175(4), 404–409. https://doi.org/10.1001/jamapediatrics.2020.5801
  • Duan N, Kravitz RL, & Schmid CH (2013). Single-patient (n-of-1) trials: A pragmatic clinical decision methodology for patient-centered comparative effectiveness research. Journal of Clinical Epidemiology, 66(8 Suppl), S21–S28. https://doi.org/10.1016/j.jclinepi.2013.04.006
  • Ebbinghaus H (1913). Memory: A contribution to experimental psychology. Teachers College, Columbia University.
  • Epstein LH, Bickel WK, Czajkowski SM, Paluch RA, Moeyaert M, & Davidson KW (2021). Single case designs for early phase behavioral translational research in health psychology. Health Psychology, 40(12), 858–874. https://doi.org/10.1037/hea0001055
  • Fisher AJ, Medaglia JD, & Jeronimus BF (2018). Lack of group-to-individual generalizability is a threat to human subjects research. Proceedings of the National Academy of Sciences of the United States of America, 115(27), E6106–E6115. https://doi.org/10.1073/pnas.1711978115
  • Guyatt GH, Heyting A, Jaeschke R, Keller J, Adachi JD, & Roberts RS (1990). N of 1 randomized trials for investigating new drugs. Controlled Clinical Trials, 11(2), 88–100. https://doi.org/10.1016/0197-2456(90)90003-k
  • Hartmann D, & Hall RV (1976). The changing criterion design. Journal of Applied Behavior Analysis, 9(4), 527–532. https://doi.org/10.1901/jaba.1976.9-527
  • Hayes SC (1981). Single case experimental design and empirical clinical practice. Journal of Consulting and Clinical Psychology, 49(2), 193–211. https://doi.org/10.1037/0022-006X.49.2.193
  • Hekler E, Tiro JA, Hunter CM, & Nebeker C (2020). Precision health: The role of the social and behavioral sciences in advancing the vision. Annals of Behavioral Medicine, 54(11), 805–826. https://doi.org/10.1093/abm/kaaa018
  • Horner RD, & Baer DM (1978). Multiple-probe technique: A variation on the multiple baseline. Journal of Applied Behavior Analysis, 11(1), 189–196. https://doi.org/10.1901/jaba.1978.11-189
  • Horner RH, Carr EG, Halle J, McGee G, Odom S, & Wolery M (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71(2), 165–179.
  • Kazdin AE (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). Oxford University Press.
  • Kazdin AE (2021). Single-case experimental designs: Characteristics, changes, and challenges. Journal of the Experimental Analysis of Behavior, 115(1), 56–85. https://doi.org/10.1002/jeab.638
  • Kraemer HC, Mintz J, Noda A, Tinklenberg J, & Yesavage JA (2006). Caution regarding the use of pilot studies to guide power calculations for study proposals. Archives of General Psychiatry, 63(5), 484–489. https://doi.org/10.1001/archpsyc.63.5.484
  • Kratochwill TR, Hitchcock J, Horner RH, Levin JR, Odom SL, Rindskopf DM, & Shadish WR (2010). Single-case designs technical documentation. What Works Clearinghouse.
  • Kratochwill TR, & Levin JR (1992). Single-case research design and analysis: New directions for psychology and education. Lawrence Erlbaum.
  • Kratochwill TR, & Levin JR (2010). Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue. Psychological Methods, 15(2), 124–144. https://doi.org/10.1037/a0017736
  • Kratochwill TR, & Levin JR (2015). Single-case research design and analysis: New directions for psychology and education. Routledge. https://doi.org/10.4324/9781315725994
  • Kratochwill TR, Hitchcock JH, Horner RH, Levin JR, Odom SL, Rindskopf DM, & Shadish WR (2013). Single-case intervention research design standards. Remedial and Special Education, 34(1), 26–38. https://doi.org/10.1177/0741932512452794
  • Kravitz R, & Duan N (2022). Conduct and implementation of personalized trials in research and practice. Harvard Data Science Review, (Special Issue 3). https://doi.org/10.1162/99608f92.901255e7
  • Kravitz RL, Schmid CH, Marois M, Wilsey B, Ward D, Hays RD, Duan N, Wang Y, MacDonald S, Jerant A, Servadio JL, Haddad D, & Sim I (2018). Effect of mobile device-supported single-patient multi-crossover trials on treatment of chronic musculoskeletal pain: A randomized clinical trial. JAMA Internal Medicine, 178(10), 1368–1377. https://doi.org/10.1001/jamainternmed.2018.3981
  • Lane-Brown A, & Tate R (2010). Evaluation of an intervention for apathy after traumatic brain injury: A multiple-baseline, single-case experimental design. Journal of Head Trauma Rehabilitation, 25(6), 459–469. https://doi.org/10.1097/HTR.0b013e3181d98e1d
  • Lillie EO, Patay B, Diamant J, Issell B, Topol EJ, & Schork NJ (2011). The n-of-1 clinical trial: The ultimate strategy for individualizing medicine? Personalized Medicine, 8(2), 161–173. https://doi.org/10.2217/pme.11.7
  • Lobo MA, Moeyaert M, Cunha AB, & Babik I (2017). Single-case design, analysis, and quality assessment for intervention research. Journal of Neurologic Physical Therapy, 41(3), 187–197. https://doi.org/10.1097/NPT.0000000000000187
  • Lundervold DA, & Belwood MF (2000). The best kept secret in counseling: Single-case (N = 1) experimental designs. Journal of Counseling and Development, 78(1), 92–102. https://doi.org/10.1002/j.1556-6676.2000.tb02565.x
  • Manolov R, Tanious R, & Onghena P (2021). Quantitative techniques and graphical representations for interpreting results from alternating treatment design. Perspectives on Behavior Science. Advance online publication. https://doi.org/10.1007/s40614-021-00289-9
  • McDonald S, Quinn F, Vieira R, O’Brien N, White M, Johnston DW, & Sniehotta FF (2017). The state of the art and future opportunities for using longitudinal n-of-1 methods in health behaviour research: A systematic literature overview. Health Psychology Review, 11(4), 307–323. https://doi.org/10.1080/17437199.2017.1316672
  • Meredith SE, Grabinski MJ, & Dallery J (2011). Internet-based group contingency management to promote abstinence from cigarette smoking: A feasibility study. Drug and Alcohol Dependence, 118(1), 23–30. https://doi.org/10.1016/j.drugalcdep.2011.02.012
  • Miočević M, Klaassen F, Geuke G, Moeyaert M, & Maric M (2020). Using Bayesian methods to test mediators of intervention outcomes in single-case experimental designs. Evidence-Based Communication Assessment and Intervention, 14(1–2), 52–68. https://doi.org/10.1080/17489539.2020.1732029
  • Moeyaert M, & Fingerhut J (2022). Quantitative synthesis of personalized trials studies: Meta-analysis of aggregated data versus individual patient data. Harvard Data Science Review, (Special Issue 3). https://doi.org/10.1162/99608f92.3574f1dc
  • Morgan DL, & Morgan RK (2001). Single-participant research design: Bringing science to managed care. American Psychologist, 56(2), 119–127. https://doi.org/10.1037/0003-066X.56.2.119
  • National Center for Complementary and Integrative Health. (2020, May 18). Pilot studies: Common uses and misuses. National Institutes of Health. https://www.nccih.nih.gov/grants/pilot-studies-common-uses-and-misuses
  • Normand MP (2016). Less is more: Psychologists can learn more by studying fewer people. Frontiers in Psychology, 7, Article 934. https://doi.org/10.3389/fpsyg.2016.00934
  • Pavlov IP (1927). Conditioned reflexes. Clarendon Press.
  • Porcino A, & Vohra S (2022). N-of-1 trials, their reporting guidelines, and the advancement of open science principles. Harvard Data Science Review, (Special Issue 3). https://doi.org/10.1162/99608f92.a65a257a
  • Pustejovsky JE (2019). Procedural sensitivities of effect sizes for single-case designs with directly observed behavioral outcome measures. Psychological Methods, 24(2), 217–235. https://doi.org/10.1037/met0000179
  • Riley AR, & Gaynor ST (2014). Identifying mechanisms of change: Utilizing single-participant methodology to better understand behavior therapy for child depression. Behavior Modification, 38(5), 636–664. https://doi.org/10.1177/0145445514530756
  • Riley WT, Glasgow RE, Etheredge L, & Abernethy AP (2013). Rapid, responsive, relevant (R3) research: A call for a rapid learning health research enterprise. Clinical and Translational Medicine, 2(1), Article e10. https://doi.org/10.1186/2001-1326-2-10
  • Roustit M, Giai J, Gaget O, Khouri C, Mouhib M, Lotito A, Blaise S, Seinturier C, Subtil F, Paris A, Cracowski C, Imbert B, Carpentier P, Vohra S, & Cracowski JL (2018). On-demand sildenafil as a treatment for Raynaud phenomenon: A series of n-of-1 trials. Annals of Internal Medicine, 169(10), 694–703. https://doi.org/10.7326/M18-0517
  • Russo SR, Croner J, Smith S, Chirinos M, & Weiss MJ (2019). A further refinement of procedures addressing food selectivity. Behavioral Interventions, 34(4), 495–503. https://doi.org/10.1002/bin.1686
  • Schork NJ (2015). Personalized medicine: Time for one-person trials. Nature, 520(7549), 609–611. https://doi.org/10.1038/520609a
  • Schork N (2022). Accommodating serial correlation and sequential design elements in personalized studies and aggregated personalized studies. Harvard Data Science Review, (Special Issue 3). https://doi.org/10.1162/99608f92.f1eef6f4
  • Schork NJ, & Goetz LH (2017). Single-subject studies in translational nutrition research. Annual Review of Nutrition, 37, 395–422. https://doi.org/10.1146/annurev-nutr-071816-064717
  • Seyhan AA (2019). Lost in translation: The valley of death across preclinical and clinical divide – Identification of problems and overcoming obstacles. Translational Medicine Communications, 4(1), Article 18. https://doi.org/10.1186/s41231-019-0050-7
  • Shadish WR (2014). Analysis and meta-analysis of single-case designs: An introduction. Journal of School Psychology, 52(2), 109–122. https://doi.org/10.1016/j.jsp.2013.11.009
  • Shadish WR, Hedges LV, & Pustejovsky JE (2014). Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: A primer and applications. Journal of School Psychology, 52(2), 123–147. https://doi.org/10.1016/j.jsp.2013.11.005
  • Sidman M (1960). Tactics of scientific research. Basic Books.
  • Singh NN, & Leung JP (1988). Smoking cessation through cigarette-fading, self-recording, and contracting: Treatment, maintenance and long-term follow-up. Addictive Behaviors, 13(1), 101–105. https://doi.org/10.1016/0306-4603(88)90033-0
  • Soto PL (2020). Single-case experimental designs for behavioral neuroscience. Journal of the Experimental Analysis of Behavior, 114(3), 447–467. https://doi.org/10.1002/jeab.633
  • Van den Noortgate W, & Onghena P (2003). Hierarchical linear models for the quantitative integration of effect sizes in single-case research. Behavior Research Methods, Instruments, & Computers, 35(1), 1–10. https://doi.org/10.3758/bf03195492
  • Vannest KJ, Peltier C, & Haas A (2018). Results reporting in single case experiments and single case meta-analysis. Research in Developmental Disabilities, 79, 10–18. https://doi.org/10.1016/j.ridd.2018.04.029
  • Vlaeyen JWS, Wicksell RK, Simons LE, Gentili C, De TK, Tate RL, Vohra S, Punja S, Linton SJ, Sniehotta FF, & Onghena P (2020). From Boulder to Stockholm in 70 years: Single case experimental designs in clinical research. Psychological Record, 70(4), 659–670. https://doi.org/10.1007/s40732-020-00402-5
  • Vohra S, & Punja S (2019). A case for n-of-1 trials. JAMA Internal Medicine, 179(3), 452. https://doi.org/10.1001/jamainternmed.2018.7166
  • Vohra S, Shamseer L, Sampson M, Bukutu C, Schmid CH, Tate R, Nikles J, Zucker DR, Kravitz R, Guyatt G, Altman DG, Moher D, & CENT Group (2016). CONSORT extension for reporting N-of-1 trials (CENT) 2015 statement. Journal of Clinical Epidemiology, 76, 9–17. https://doi.org/10.1016/j.jclinepi.2015.05.004
  • Ward-Horner J, & Sturmey P (2010). Component analyses using single-subject experimental designs: A review. Journal of Applied Behavior Analysis, 43(4), 685–704. https://doi.org/10.1901/jaba.2010.43-685
  • Weems CF (1998). The evaluation of heart rate biofeedback using a multi-element design. Journal of Behavior Therapy and Experimental Psychiatry, 29(2), 157–162. https://doi.org/10.1016/S0005-7916(98)00005-6
  • Wen X, Eiden RD, Justicia-Linde FE, Wang Y, Higgins ST, Thor N, Haghdel A, Peters AR, & Epstein LH (2019). A multicomponent behavioral intervention for smoking cessation during pregnancy: A nonconcurrent multiple-baseline design. Translational Behavioral Medicine, 9(2), 308–318. https://doi.org/10.1093/tbm/iby027
  • Williams BA (2010). Perils of evidence-based medicine. Perspectives in Biology and Medicine, 53(1), 106–120. https://doi.org/10.1353/pbm.0.0132

Single-Case Designs

  • First Online: 30 April 2023

Lodi Lipien, Megan Kirby & John M. Ferron

Part of the book series: Autism and Child Psychopathology Series (ACPS)

Single-case design (SCD), also known as single-case experimental design, single-subject design, or N-of-1 trials, refers to a research methodology that involves examining the effect of an intervention on a single individual over time by repeatedly measuring a target behavior across different intervention conditions. These designs may include replication across cases, but the focus is on individual effects. Differences in the target behaviors and individuals studied, as well as differences in the research questions posed, have spurred the development of a variety of single-case designs, each with distinct advantages in specific situations. These designs include reversal designs, multiple baseline designs (MBD), alternating treatments designs (ATD), and changing criterion designs (CCD). Our purpose is to describe these designs and their application in behavioral research. In doing so, we consider the questions they address and the conditions under which they are well suited to answer those questions.
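The phase logic common to these designs can be illustrated with a toy reversal (ABAB) dataset. The session values below are invented, and the phase means are only a crude numerical complement to the visual analysis these designs typically rely on.

```python
# Hypothetical ABAB (reversal) data: frequency of a target behavior per
# session, five sessions per phase. All values are invented for illustration.
phases = {
    "A1 (baseline)":     [8, 9, 7, 8, 9],
    "B1 (intervention)": [4, 3, 3, 2, 3],
    "A2 (withdrawal)":   [7, 8, 8, 9, 7],
    "B2 (intervention)": [3, 2, 2, 3, 2],
}

# Phase means summarize the change in level across conditions.
means = {name: sum(values) / len(values) for name, values in phases.items()}
for name, m in means.items():
    print(f"{name}: mean = {m:.1f}")

# The behavior drops when the intervention is introduced, recovers when it is
# withdrawn, and drops again on reintroduction: the replicated reversal that
# supports an inference of a functional relation.
```

In practice such series are judged by visual analysis of level, trend, and variability across phases; the means here simply make the level changes explicit.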


Reversal designs, first described by Leitenberg ( 1973 ) and later reviewed by Wine et al. ( 2015 ), originally referred to a type of design in which the effects of one IV on two topographically distinct DVs (DV 1, DV 2) were repeatedly measured across time. The intervention, such as reinforcement, was presented in each phase but was in effect for either DV 1 or DV 2. The purpose is to show changes in rates of responding as the IV is alternately applied to DV 1 and withdrawn from DV 2, since the rate of responding for each would change across phases in the presence or absence of the IV. However, the reversal design as described is rarely used in the contemporary behavior analytic literature, and the term is often used interchangeably with withdrawal design .
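The original two-behavior arrangement can be sketched as data: one intervention always present, but applied to only one of two topographically distinct behaviors per phase. All values below are invented for illustration.

```python
# Sketch of the two-behavior reversal arrangement: the IV (e.g.,
# reinforcement) is present in every phase but targets only one of two
# distinct behaviors at a time. All rates are invented.
sessions = [
    # (phase, behavior the IV targets, rate of DV 1, rate of DV 2)
    ("Phase 1", "DV1", 12, 3),
    ("Phase 1", "DV1", 14, 2),
    ("Phase 2", "DV2", 4, 11),
    ("Phase 2", "DV2", 3, 13),
    ("Phase 3", "DV1", 13, 4),
    ("Phase 3", "DV1", 15, 3),
]

# Control is demonstrated if the targeted behavior shows the higher rate in
# each session, reversing as the IV switches between DV 1 and DV 2.
for phase, target, dv1, dv2 in sessions:
    responded_more = "DV1" if dv1 > dv2 else "DV2"
    print(phase, target, responded_more == target)
```

The reversal across phases, rather than any single phase, is what demonstrates control of responding by the intervention.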

Alberto, P. A., & Troutman, A. C. (2009). Applied behavior analysis for teachers (8th ed.). Pearson Education.

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1 , 91–97.

Barlow, D. H., & Hersen, M. (1984). Single case experimental designs: Strategies for studying behavior change . Pergamon.

Blair, B. J., Weiss, J. S., & Ahern, W. H. (2018). A comparison of task analysis training procedures. Education and Treatment of Children, 41 (3), 357–370.

Bolanos, J. E., Reeve, K. F., Reeve, S. A., Sidener, T. M., Jennings, A. M., & Ostrosky, B. D. (2020). Using stimulus equivalence-based instruction to teach young children to sort recycling, trash, and compost items. Behavior and Social Issues, 29, 78. https://doi.org/10.1007/s42822-020-00028-w

Byiers, B., Reichle, J., & Symons, F. J. (2012). Single-subject experimental design for evidence-based practice. American Journal of Speech-Language Pathology, 21 (4), 397–414.

Craig, A. R., & Fisher, W. W. (2019). Randomization tests as alternative analysis methods for behavior-analytic data. Journal of the Experimental Analysis of Behavior, 11 (2), 309–328. https://doi.org/10.1002/jeab.500

Critchfield, T. S., & Shue, E. Z. H. (2018). The dead man test: A preliminary experimental analysis. Behavior Analysis in Practice, 11 , 381–384. https://doi.org/10.1007/s40617-018-0239-7

Engel, R. J., & Schutt, R. K. (2013). The practice of research in social work (3rd ed.). Sage Publications, Inc.

Ferron, J. M., & Jones, P. (2006). Tests for the visual analysis of response-guided multiple-baseline data. Journal of Experimental Education, 75 , 66–81.

Ferron, J. M., Rohrer, L. L., & Levin, J. R. (2019). Randomization procedures for changing criterion designs. Behavior Modification . Advance online publication. https://doi.org/10.1177/0145445519847627

Ferron, J., Goldstein, H., Olszewski, & Rohrer, L. (2020). Indexing effects in single-case experimental designs by estimating the percent of goal obtained. Evidence-Based Communication Assessment and Intervention, 14, 6–27. https://doi.org/10.1080/17489539.2020.1732024

Fontenot, B., Uwayo, M., Avendano, S. M., & Ross, D. (2019). A descriptive analysis of applied behavior analysis research with economically disadvantaged children. Behavior Analysis in Practice, 12 , 782–794.

Fuqua, R. W., & Schwade, J. (1986). Social validation of applied behavioral research. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis . Springer. https://doi.org/10.1007/978-1-4684-8786-2_2

Gast, D. L., & Ledford, J. R. (2014). Single case research methodology: Applications in special education and behavioral sciences (2nd ed.). Routledge.

Gosens, L. C. F., Otten, R., Didden, R., & Poelen, E. A. P. (2020). Evaluating a personalized treatment for substance use disorder in people with mild intellectual disability or borderline intellectual functioning: A study protocol of a multiple baseline across individuals design. Contemporary Clinical Trials Communications, 19 , 100616.

Hartmann, D. P., & Hall, R. V. (1976). The changing criterion design. Journal of Applied Behavior Analysis, 9 , 527–532. https://doi.org/10.1901/jaba.1976.9-527

Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71 (2), 165–179.

Johnston, J. M., & Pennypacker, H. S., Jr. (1980). Strategies and tactics of behavioral research . L. Erlbaum Associates.

Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1 , 427–452.

Klein, L. A., Houlihan, D., Vincent, J. L., & Panahon, C. J. (2015). Best practices in utilizing the changing criterion design. Behavior Analysis in Practice, 10 (1), 52–61. https://doi.org/10.1007/s40617-014-0036-x

Koehler, M. J., & Levin, J. R. (1998). Regulated randomization: A potentially sharper analytical tool for the multiple baseline design. Psychological Methods, 3 , 206–217.

Kratochwill, T. R., & Levin, J. R. (2010). Enhancing the scientific credibility of single-case intervention research: Randomization to the rescue. Psychological Methods, 15 (2), 124–144. https://doi.org/10.1037/a0017736

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M, & Shadish, W. R. (2010). Single-case designs technical documentation . Retrieved from What Works Clearinghouse website: http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Ledford, J. R., Barton, E. E., Severini, K. E., & Zimmerman, K. N. (2019). A primer on single-case research designs: Contemporary use and analysis. American Journal on Intellectual and Developmental Disabilities, 124 (1), 35–56.

Leitenberg, H. (1973). The use of single-case methodology in psychotherapy research. Journal of Abnormal Psychology, 82 , 87–101.

Li, A., Wallace, L., Ehrhardt, K. E., & Poling, A. (2017). Reporting participant characteristics in intervention articles published in five behavior-analytic journals, 2013–2015. Behavior Analysis: Research and Practice, 17 (1), 84–91.

Lobo, M. A., Moeyaert, M., Baraldi Cunha, A., & Babik, I. (2017). Single-case design, analysis, and quality assessment for intervention research. Journal of Neurologic Physical Therapy, 41 (3), 187–197. https://doi.org/10.1097/NPT.0000000000000187

McDougall, D. (2005). The range-bound changing criterion design. Behavioral Interventions, 20 , 129–137.

McDougall, D., Hawkins, J., Brady, M., & Jenkins, A. (2006). Recent innovations in the changing criterion design: Implications for research and practice in special education. Journal of Special Education, 40 (1), 2–15.

Moeyaert, M., Ferron, J., Beretvas, S. N., & Van den Noortgate, W. (2014). From a single-level analysis to a multilevel analysis of single-case experimental designs. Journal of School Psychology, 52 , 191–211.

Morgan, D. L., & Morgan, R. K. (2009). Single-case research methods for the behavioral and health sciences . Sage Publications.

Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71 (2), 137–148.

Onghena, P. (1992). Randomization tests for extensions and variations of ABAB single-case experimental designs: A rejoinder. Behavioral Assessment, 14 , 153–171.

Onghena, P. (2005). Single-case designs. In B. S. Everitt & D. C. Howell (Eds.), Encyclopedia of statistics in behavioral science . Wiley. https://doi.org/10.1002/0470013192

Onghena, P., Tanious, R., De, T. K., & Michiels, B. (2019). Randomization tests for changing criterion designs. Behaviour Research and Therapy, 117 , 18. https://doi.org/10.1016/j.brat.2019.01.005

Parker, R. I., Vannest, K. J., & Davis, J. L. (2011). Effect size in single-case research: A review of nine nonoverlap techniques. Behavior Modification, 35 , 303–322.

Perone, M., & Hursh, D. E. (2013). Single-case experimental designs. In G. J. Madden (Ed.), APA handbook of behavior analysis: Vol. 1. Methods and principles . American Psychological Association.

Poling, A., & Grossett, D. (1986). Basic research designs in applied behavior analysis. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis . Springer. https://doi.org/10.1007/978-1-4684-8786-2_2

Shadish, W. R., Hedges, L. V., & Pustejovsky, J. E. (2014). Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: A primer and applications. Journal of School Psychology, 52 , 123–147.

Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology . Authors Cooperative, Inc.

Skinner, B. F. (1938). The behavior of organisms: An experimental analysis . Appleton-Century.

Skinner, B. F. (1966). Operant behavior. In W. K. Honig (Ed.), Operant behavior: Areas of research and application . Cambridge University Press.

Spencer, E. J., Goldstein, H., Sherman, A., Noe, S., Tabbah, R., Ziolkowski, R., & Schneider, N. (2012). Effects of an automated vocabulary and comprehension intervention: An early efficacy study. Journal of Early Intervention, 34 (4), 195–221. https://doi.org/10.1177/1053815112471990

Wang, Y., Kang, S., Ramirez, J., & Tarbox, J. (2019). Multilingual diversity in the field of applied behavior analysis and autism: A brief review and discussion of future directions. Behavior Analysis in Practice, 12 , 795–804.

Weaver, E. S., & Lloyd, B. P. (2019). Randomization tests for single case designs with rapidly alternating conditions: An analysis of p-values from published experiments. Perspectives on Behavior Science, 42 (3), 617–645. https://doi.org/10.1007/s40614-018-0165-6

Wine, B., Freeman, T. R., & King, A. (2015). Withdrawal versus reversal: A necessary distinction? Behavioral Interventions, 30 , 87–93.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11 , 203–214.

Author information

Authors and affiliations.

University of South Florida, Tampa, FL, USA

Lodi Lipien, Megan Kirby & John M. Ferron

Corresponding author

Correspondence to John M. Ferron .

Editor information

Editors and affiliations.

Department of Psychology, Louisiana State University, Baton Rouge, LA, USA

Johnny L. Matson

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Lipien, L., Kirby, M., Ferron, J.M. (2023). Single-Case Designs. In: Matson, J.L. (eds) Handbook of Applied Behavior Analysis. Autism and Child Psychopathology Series. Springer, Cham. https://doi.org/10.1007/978-3-031-19964-6_20

DOI: https://doi.org/10.1007/978-3-031-19964-6_20

Published: 30 April 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-19963-9

Online ISBN: 978-3-031-19964-6

eBook Packages: Behavioral Science and Psychology (R0)


Single-case experimental designs: Characteristics, changes, and challenges

Affiliation: Yale University

PMID: 33205436

DOI: 10.1002/jeab.638

Tactics of Scientific Research (Sidman, 1960) provides a visionary treatise on single-case designs, their scientific underpinnings, and their critical role in understanding behavior. Since the foundational base was provided, single-case designs have proliferated especially in areas of application where they have been used to evaluate interventions with an extraordinary range of clients, settings, and target foci. This article highlights core features of single-case experimental designs, how key and ancillary features of the designs have evolved, the special strengths of the designs, and challenges that have impeded their integration in many areas where their contributions are sorely needed. The article ends by placing the methodological approach in the context of other research traditions. In this way, the discussion moves from the specific designs toward foundations and philosophy of science issues in keeping with the strengths of the person and book we are honoring.

Keywords: challenges; changes; characteristics.

© 2020 Society for the Experimental Analysis of Behavior.

