Type 1 and Type 2 Errors in Statistics

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


A statistically significant result cannot prove that a research hypothesis is correct, since proof would imply 100% certainty. Because a p-value is based on probabilities, there is always a chance of drawing an incorrect conclusion about rejecting or retaining the null hypothesis (H0).

Anytime we make a decision using statistics, there are four possible outcomes, with two representing correct decisions and two representing errors.

[Figure: the four possible outcomes of a statistical decision, showing two correct decisions, a Type I error (false positive), and a Type II error (false negative)]

The chances of committing these two types of errors are inversely related: decreasing the Type I error rate increases the Type II error rate, and vice versa.

As the significance level (α) increases, it becomes easier to reject the null hypothesis, which decreases the chance of missing a real effect (a Type II error, β) but increases the risk of a false alarm (a Type I error). As α decreases, it becomes harder to reject the null hypothesis, which reduces the risk of falsely finding an effect (a Type I error) but increases the chance of missing one (a Type II error).

Type I error 

A type 1 error is also known as a false positive and occurs when a researcher incorrectly rejects a true null hypothesis. Simply put, it’s a false alarm.

This means that you report that your findings are significant when they have occurred by chance.

The probability of making a type 1 error is represented by your alpha level (α), the p-value threshold below which you reject the null hypothesis.

An alpha level of 0.05 indicates that you are willing to accept a 5% chance of rejecting the null hypothesis when it is actually true.

You can reduce your risk of committing a type 1 error by setting a lower alpha level. For example, setting α = 0.01 means you accept only a 1% chance of committing a Type I error when the null hypothesis is true.

However, using a lower value for alpha means that you will be less likely to detect a true difference if one really exists (thus risking a type II error).
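
To make this tradeoff concrete, here is a minimal simulation sketch in Python (using NumPy and SciPy, which this article does not itself reference; all numbers are hypothetical). When the null hypothesis is true, the long-run false-positive rate of a test lands near whatever alpha you choose, so lowering alpha directly lowers the Type I error rate.

```python
# Minimal simulation sketch (hypothetical numbers): when H0 is true,
# the false-positive rate of a two-sample t-test lands near alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials, n_per_group = 10_000, 30

p_values = np.empty(n_trials)
for i in range(n_trials):
    a = rng.normal(0.0, 1.0, n_per_group)  # both groups drawn from the
    b = rng.normal(0.0, 1.0, n_per_group)  # same population, so H0 is true
    p_values[i] = stats.ttest_ind(a, b).pvalue

for alpha in (0.05, 0.01):
    rate = (p_values <= alpha).mean()
    print(f"alpha = {alpha}: observed Type I error rate ~ {rate:.3f}")
```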

Scenario: Drug Efficacy Study

Imagine a pharmaceutical company is testing a new drug, named “MediCure”, to determine if it’s more effective than a placebo at reducing fever. They run an experiment with two groups: one group receives MediCure, and the other receives a placebo.

  • Null Hypothesis (H0): MediCure is no more effective at reducing fever than the placebo.
  • Alternative Hypothesis (H1): MediCure is more effective at reducing fever than the placebo.

After conducting the study and analyzing the results, the researchers found a p-value of 0.04.

If they use an alpha (α) level of 0.05, this p-value is considered statistically significant, leading them to reject the null hypothesis and conclude that MediCure is more effective than the placebo.

However, suppose MediCure actually has no effect, and the observed difference was due to random variation or some other confounding factor. In that case, the researchers have incorrectly rejected a true null hypothesis.

Error: The researchers have made a Type 1 error by concluding that MediCure is more effective when it isn’t.
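
As a hedged sketch of how such a conclusion is reached, the snippet below runs a one-sided two-sample t-test on invented fever-reduction values; the numbers are illustrative assumptions, not data from any real trial.

```python
# Hedged illustration of the MediCure scenario: hypothetical fever
# reductions (degrees C) for invented drug and placebo groups.
from scipy import stats

medicure = [1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4, 1.0]  # assumed values
placebo = [0.7, 1.1, 0.6, 0.9, 0.8, 1.0, 0.5, 0.9]   # assumed values

result = stats.ttest_ind(medicure, placebo, alternative="greater")
alpha = 0.05
print(f"p-value = {result.pvalue:.3f}")
if result.pvalue <= alpha:
    print("Reject H0: the data favor MediCure.")
    print("If H0 is actually true, this conclusion is a Type I error.")
else:
    print("Fail to reject H0.")
```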

Implications

Resource Allocation: Making a Type I error can lead to wasted resources. If a business believes a new strategy is effective when it’s not (based on a Type I error), it might allocate significant financial and human resources toward that ineffective strategy.

Unnecessary Interventions: In medical trials, a Type I error might lead to the belief that a new treatment is effective when it isn’t. As a result, patients might undergo unnecessary treatments, risking potential side effects without any benefit.

Reputation and Credibility: For researchers, making repeated Type I errors can harm their professional reputation. If they frequently claim groundbreaking results that are later refuted, their credibility in the scientific community might diminish.

Type II error

A type 2 error (or false negative) happens when you fail to reject the null hypothesis when it should actually be rejected.

Here, a researcher concludes there is no significant effect when one really exists.

The probability of making a type II error is called beta (β), which is related to the power of the statistical test (power = 1 − β). You can decrease your risk of committing a type II error by ensuring your test has enough power.

You can do this by ensuring your sample size is large enough to detect a practical difference when one truly exists.
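
For example, here is a minimal power-analysis sketch using statsmodels; the assumed effect size (Cohen's d = 0.5) is an illustration, not a recommendation.

```python
# Power-analysis sketch with statsmodels; the effect size is an assumption.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group for 80% power (beta = 0.2) at alpha = 0.05,
# assuming a medium effect (Cohen's d = 0.5): roughly 64 per group.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_per_group:.0f}")

# Power that a smaller study would achieve: roughly 0.47, so the
# Type II error risk (beta) would exceed 50%.
power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05)
print(f"Power with 30 per group: {power:.2f}")
```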

Scenario: Efficacy of a New Teaching Method

Educational psychologists are investigating the potential benefits of a new interactive teaching method, named “EduInteract”, which utilizes virtual reality (VR) technology to teach history to middle school students.

They hypothesize that this method will lead to better retention and understanding compared to the traditional textbook-based approach.

  • Null Hypothesis (H0): The EduInteract VR teaching method does not result in significantly better retention and understanding of history content than the traditional textbook method.
  • Alternative Hypothesis (H1): The EduInteract VR teaching method results in significantly better retention and understanding of history content than the traditional textbook method.

The researchers designed an experiment where one group of students learns a history module using the EduInteract VR method, while a control group learns the same module using a traditional textbook.

After a week, the students’ retention and understanding are tested using a standardized assessment.

Upon analyzing the results, the psychologists found a p-value of 0.06. Using an alpha (α) level of 0.05, this p-value isn’t statistically significant.

Therefore, they fail to reject the null hypothesis and conclude that the EduInteract VR method isn’t more effective than the traditional textbook approach.

However, let’s assume that in the real world, the EduInteract VR truly enhances retention and understanding, but the study failed to detect this benefit due to reasons like small sample size, variability in students’ prior knowledge, or perhaps the assessment wasn’t sensitive enough to detect the nuances of VR-based learning.

Error: By concluding that the EduInteract VR method isn’t more effective than the traditional method when it is, the researchers have made a Type 2 error.

This could prevent schools from adopting a potentially superior teaching method that might benefit students’ learning experiences.
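
A short simulation sketch shows how easily this happens; the effect size (d = 0.4) and group size (20 students per group) below are invented for illustration.

```python
# Simulation sketch of the EduInteract scenario: a real but modest benefit
# (assumed effect size d = 0.4) studied with only 20 students per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_trials, n_per_group, true_effect = 5_000, 20, 0.4

misses = 0
for _ in range(n_trials):
    vr = rng.normal(true_effect, 1.0, n_per_group)  # EduInteract group
    textbook = rng.normal(0.0, 1.0, n_per_group)    # control group
    if stats.ttest_ind(vr, textbook).pvalue > 0.05:
        misses += 1  # the real effect went undetected: a Type II error

print(f"Estimated Type II error rate (beta): {misses / n_trials:.2f}")
# With samples this small, most studies miss the genuine benefit.
```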

Implications

Missed Opportunities: A Type II error can lead to missed opportunities for improvement or innovation. For example, in education, if a more effective teaching method is overlooked because of a Type II error, students might miss out on a better learning experience.

Potential Risks: In healthcare, a Type II error might mean overlooking a harmful side effect of a medication because the research didn’t detect it. As a result, patients might continue using a harmful treatment.

Stagnation: In the business world, making a Type II error can result in continued investment in outdated or less efficient methods. This can lead to stagnation and an inability to compete effectively in the marketplace.

How do Type I and Type II errors relate to psychological research and experiments?

Type I errors are like false alarms, while Type II errors are like missed opportunities. Both errors can impact the validity and reliability of psychological findings, so researchers strive to minimize them to draw accurate conclusions from their studies.

How does sample size influence the likelihood of Type I and Type II errors in psychological research?

Sample size in psychological research influences the likelihood of Type II errors far more than Type I errors. A larger sample size increases a study’s statistical power, making it more likely that a true effect will be detected and reducing the likelihood of Type II errors.

The Type I error rate, by contrast, is fixed by the chosen alpha level and does not shrink as the sample grows; a larger sample simply makes the test more sensitive to real effects.

Are there any ethical implications associated with Type I and Type II errors in psychological research?

Yes, there are ethical implications associated with Type I and Type II errors in psychological research.

Type I errors may lead to false positive findings, resulting in misleading conclusions and potentially wasting resources on ineffective interventions. This can harm individuals who are falsely diagnosed or receive unnecessary treatments.

Type II errors, on the other hand, may result in missed opportunities to identify important effects or relationships, leading to a lack of appropriate interventions or support. This can also have negative consequences for individuals who genuinely require assistance.

Therefore, minimizing these errors is crucial for ethical research and ensuring the well-being of participants.



scientific hypothesis


scientific hypothesis, an idea that proposes a tentative explanation about a phenomenon or a narrow set of phenomena observed in the natural world. The two primary features of a scientific hypothesis are falsifiability and testability, which are reflected in an “If…then” statement summarizing the idea and in the ability to be supported or refuted through observation and experimentation. The notion of the scientific hypothesis as both falsifiable and testable was advanced in the mid-20th century by the Austrian-born British philosopher Karl Popper.

The formulation and testing of a hypothesis is part of the scientific method , the approach scientists use when attempting to understand and test ideas about natural phenomena. The generation of a hypothesis frequently is described as a creative process and is based on existing scientific knowledge, intuition , or experience. Therefore, although scientific hypotheses commonly are described as educated guesses, they actually are more informed than a guess. In addition, scientists generally strive to develop simple hypotheses, since these are easier to test relative to hypotheses that involve many different variables and potential outcomes. Such complex hypotheses may be developed as scientific models ( see scientific modeling ).

Depending on the results of scientific evaluation, a hypothesis typically is either rejected as false or accepted as true. However, because a hypothesis inherently is falsifiable, even hypotheses supported by scientific evidence and accepted as true are susceptible to rejection later, when new evidence has become available. In some instances, rather than rejecting a hypothesis because it has been falsified by new evidence, scientists simply adapt the existing idea to accommodate the new information. In this sense a hypothesis is never incorrect but only incomplete.

The investigation of scientific hypotheses is an important component in the development of scientific theory . Hence, hypotheses differ fundamentally from theories; whereas the former is a specific tentative explanation and serves as the main tool by which scientists gather data, the latter is a broad general explanation that incorporates data from many different scientific investigations undertaken to explore hypotheses.

Countless hypotheses have been developed and tested throughout the history of science. Several examples include the idea that living organisms develop from nonliving matter, which formed the basis of spontaneous generation, a hypothesis that ultimately was disproved (first in 1668, with the experiments of Italian physician Francesco Redi, and later in 1859, with the experiments of French chemist and microbiologist Louis Pasteur); the concept proposed in the late 19th century that microorganisms cause certain diseases (now known as germ theory); and the notion that oceanic crust forms along submarine mountain zones and spreads laterally away from them (seafloor spreading hypothesis).

An Introduction to Statistics: Understanding Hypothesis Testing and Statistical Errors

Priya Ranganathan

1 Department of Anesthesiology, Critical Care and Pain, Tata Memorial Hospital, Mumbai, Maharashtra, India

2 Department of Surgical Oncology, Tata Memorial Centre, Mumbai, Maharashtra, India

The second article in this series on biostatistics covers the concepts of sample, population, research hypotheses and statistical errors.

How to cite this article

Ranganathan P, Pramesh CS. An Introduction to Statistics: Understanding Hypothesis Testing and Statistical Errors. Indian J Crit Care Med 2019;23(Suppl 3):S230–S231.

Two papers quoted in this issue of the Indian Journal of Critical Care Medicine report the results of studies that aim to prove that a new intervention is better than (superior to) an existing treatment. In the ABLE study, the investigators wanted to show that transfusion of fresh red blood cells would be superior to standard-issue red cells in reducing 90-day mortality in ICU patients [1]. The PROPPR study was designed to prove that transfusion of a lower ratio of plasma and platelets to red cells would be superior to a higher ratio in decreasing 24-hour and 30-day mortality in critically ill patients [2]. These studies are known as superiority studies (as opposed to noninferiority or equivalence studies, which will be discussed in a subsequent article).

SAMPLE VERSUS POPULATION

A sample represents a group of participants selected from the entire population. Since studies cannot be carried out on entire populations, researchers choose samples, which are representative of the population. This is similar to walking into a grocery store and examining a few grains of rice or wheat before purchasing an entire bag; we assume that the few grains that we select (the sample) are representative of the entire sack of grains (the population).

The results of the study are then extrapolated to generate inferences about the population. We do this using a process known as hypothesis testing. This means that the results of the study may not always be identical to the results we would expect to find in the population; i.e., there is the possibility that the study results may be erroneous.

HYPOTHESIS TESTING

A clinical trial begins with an assumption or belief, and then proceeds to either prove or disprove this assumption. In statistical terms, this belief or assumption is known as a hypothesis. Counterintuitively, what the researcher believes in (or is trying to prove) is called the “alternate” hypothesis, and the opposite is called the “null” hypothesis; every study has a null hypothesis and an alternate hypothesis. For superiority studies, the alternate hypothesis states that one treatment (usually the new or experimental treatment) is superior to the other; the null hypothesis states that there is no difference between the treatments (the treatments are equal).

For example, in the ABLE study, we start by stating the null hypothesis: there is no difference in mortality between groups receiving fresh RBCs and standard-issue RBCs. We then state the alternate hypothesis: there is a difference between groups receiving fresh RBCs and standard-issue RBCs. It is important to note that we have stated that the groups are different, without specifying which group will be better than the other. This is known as a two-tailed hypothesis, and it allows us to test for superiority on either side (using a two-sided test). This is because, when we start a study, we are not 100% certain that the new treatment can only be better than the standard treatment; it could be worse, and if it is, the study should pick that up as well. A one-tailed hypothesis and one-sided statistical testing are used for non-inferiority studies, which will be discussed in a subsequent paper in this series.

STATISTICAL ERRORS

There are two possibilities to consider when interpreting the results of a superiority study. The first possibility is that there is truly no difference between the treatments but the study finds that they are different. This is called a Type-1 error or false-positive error or alpha error. This means falsely rejecting the null hypothesis.

The second possibility is that there is a difference between the treatments and the study does not pick up this difference. This is called a Type 2 error or false-negative error or beta error. This means falsely accepting the null hypothesis.

The power of the study is the ability to detect a difference between groups and is the converse of the beta error; i.e., power = 1-beta error. Alpha and beta errors are finalized when the protocol is written and form the basis for sample size calculation for the study. In an ideal world, we would not like any error in the results of our study; however, we would need to do the study in the entire population (infinite sample size) to be able to get a 0% alpha and beta error. These two errors enable us to do studies with realistic sample sizes, with the compromise that there is a small possibility that the results may not always reflect the truth. The basis for this will be discussed in a subsequent paper in this series dealing with sample size calculation.

Conventionally, Type 1 or alpha error is set at 5%. This means that, at the end of the study, if there is a difference between groups, we want to be 95% certain that this is a true difference and allow only a 5% probability that the difference has occurred by chance (a false positive). Type 2 or beta error is usually set between 10% and 20%; therefore, the power of the study is 90% or 80%. This means that if there is a difference between groups, we want to be 80% (or 90%) certain that the study will detect that difference. For example, in the ABLE study, the sample size was calculated with a Type 1 error of 5% (two-sided) and a power of 90% (Type 2 error of 10%) [1].
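
As a hedged illustration of how such design choices translate into a sample size, the sketch below uses statsmodels' power analysis for comparing two proportions. The assumed mortality rates (25% vs 20%) are invented for the example and are not the ABLE trial's actual design inputs.

```python
# Sketch of an ABLE-style sample size calculation; the assumed mortality
# rates (25% vs 20%) are invented, not the trial's real design inputs.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.25, 0.20)  # Cohen's h for the two rates
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.90, alternative="two-sided"
)
print(f"Required patients per arm: {n_per_arm:.0f}")  # roughly 730
```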

Table 1 gives a summary of the two types of statistical errors, with an example.

Table 1: Statistical errors

(a) Types of statistical errors

| Truth | Study concludes null hypothesis is true | Study concludes null hypothesis is false |
|---|---|---|
| Null hypothesis is actually true | Correct results! | Falsely rejecting the null hypothesis (Type I error) |
| Null hypothesis is actually false | Falsely accepting the null hypothesis (Type II error) | Correct results! |

(b) Possible statistical errors in the ABLE trial

| Truth | Study finds no difference in mortality between groups receiving fresh RBCs and standard-issue RBCs | Study finds a difference in mortality between groups receiving fresh RBCs and standard-issue RBCs |
|---|---|---|
| There is no difference in mortality between the groups | Correct results! | Falsely rejecting the null hypothesis (Type I error) |
| There is a difference in mortality between the groups | Falsely accepting the null hypothesis (Type II error) | Correct results! |

In the next article in this series, we will look at the meaning and interpretation of ‘ p ’ value and confidence intervals for hypothesis testing.

Source of support: Nil

Conflict of interest: None


6a.1 - Introduction to Hypothesis Testing

Basic Terms

The first step in hypothesis testing is to set up two competing hypotheses. The hypotheses are the most important aspect. If the hypotheses are incorrect, your conclusion will also be incorrect.

The two hypotheses are named the null hypothesis and the alternative hypothesis.

The goal of hypothesis testing is to see if there is enough evidence against the null hypothesis. In other words, to see if there is enough evidence to reject the null hypothesis. If there is not enough evidence, then we fail to reject the null hypothesis.

Consider the following example where we set up these hypotheses.

Example 6-1

A man, Mr. Orangejuice, goes to trial and is tried for the murder of his ex-wife. He is either guilty or innocent. Set up the null and alternative hypotheses for this example.

Putting this in a hypothesis testing framework, the hypotheses being tested are:

  • The man is guilty
  • The man is innocent

Let's set up the null and alternative hypotheses.

\(H_0\colon \) Mr. Orangejuice is innocent

\(H_a\colon \) Mr. Orangejuice is guilty

Remember that we assume the null hypothesis is true and try to see if we have evidence against the null. Therefore, it makes sense in this example to assume the man is innocent and test to see if there is evidence that he is guilty.

The Logic of Hypothesis Testing

We want to know the answer to a research question. We determine our null and alternative hypotheses. Now it is time to make a decision.

The decision is either going to be...

  • reject the null hypothesis or...
  • fail to reject the null hypothesis.

Consider the following table. The table shows the decision/conclusion of the hypothesis test and the unknown "reality", or truth. We do not know if the null is true or if it is false. If the null is false and we reject it, then we made the correct decision. If the null hypothesis is true and we fail to reject it, then we made the correct decision.

| Decision | Reality: \(H_0\) is true | Reality: \(H_0\) is false |
|---|---|---|
| Reject \(H_0\) (conclude \(H_a\)) |   | Correct decision |
| Fail to reject \(H_0\) | Correct decision |   |

So what happens when we do not make the correct decision?

When doing hypothesis testing, two types of mistakes may be made and we call them Type I error and Type II error. If we reject the null hypothesis when it is true, then we made a type I error. If the null hypothesis is false and we failed to reject it, we made another error called a Type II error.

| Decision | Reality: \(H_0\) is true | Reality: \(H_0\) is false |
|---|---|---|
| Reject \(H_0\) (conclude \(H_a\)) | Type I error | Correct decision |
| Fail to reject \(H_0\) | Correct decision | Type II error |

Types of errors

The “reality”, or truth, about the null hypothesis is unknown and therefore we do not know if we have made the correct decision or if we committed an error. We can, however, define the likelihood of these events.

\(\alpha\) and \(\beta\) are probabilities of committing an error so we want these values to be low. However, we cannot decrease both. As \(\alpha\) decreases, \(\beta\) increases.

Example 6-1 Cont'd...

A man, Mr. Orangejuice, goes to trial and is tried for the murder of his ex-wife. He is either guilty or not guilty. We found before that...

  • \( H_0\colon \) Mr. Orangejuice is innocent
  • \( H_a\colon \) Mr. Orangejuice is guilty

Interpret the Type I error (\(\alpha\)) and the Type II error (\(\beta\)).

A Type I error here means rejecting \(H_0\) when it is true: concluding that Mr. Orangejuice is guilty when he is actually innocent. A Type II error means failing to reject \(H_0\) when it is false: concluding that he is not guilty when he actually is.

As you can see, the Type I error (putting an innocent man in jail) is the more serious error here. Ethically, it is more serious to put an innocent man in jail than to let a guilty man go free. So to minimize the probability of a Type I error we would choose a smaller significance level.

Try it!

An inspector has to choose between certifying a building as safe or saying that the building is not safe. There are two hypotheses:

  • Building is safe
  • Building is not safe

Set up the null and alternative hypotheses. Interpret Type I and Type II error.

\( H_0\colon\) Building is not safe vs \(H_a\colon \) Building is safe

| Decision | Reality: \(H_0\) is true | Reality: \(H_0\) is false |
|---|---|---|
| Reject \(H_0\) (conclude \(H_a\)) | Rejecting "building is not safe" when it is not safe (Type I error) | Correct decision |
| Fail to reject \(H_0\) | Correct decision | Failing to reject "building is not safe" when it is safe (Type II error) |

Power and \(\beta \) are complements of each other. Therefore, they have an inverse relationship, i.e. as one increases, the other decreases.

How to Write a Great Hypothesis

Hypothesis Definition, Format, Examples, and Tips

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."



A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.

Consider a study designed to examine the relationship between sleep deprivation and test performance. The hypothesis might be: "This study is designed to assess the hypothesis that sleep-deprived people will perform worse on a test than individuals who are not sleep-deprived."

At a Glance

A hypothesis is crucial to scientific research because it offers a clear direction for what the researchers are looking to find. This allows them to design experiments to test their predictions and add to our scientific knowledge about the world. This article explores how a hypothesis is used in psychology research, how to write a good hypothesis, and the different types of hypotheses you might use.

The Hypothesis in the Scientific Method

In the scientific method , whether it involves research in psychology, biology, or some other area, a hypothesis represents what the researchers think will happen in an experiment. The scientific method involves the following steps:

  • Forming a question
  • Performing background research
  • Creating a hypothesis
  • Designing an experiment
  • Collecting data
  • Analyzing the results
  • Drawing conclusions
  • Communicating the results

The hypothesis is a prediction, but it involves more than a guess. Most of the time, the hypothesis begins with a question which is then explored through background research. At this point, researchers then begin to develop a testable hypothesis.

Unless you are creating an exploratory study, your hypothesis should always explain what you  expect  to happen.

In a study exploring the effects of a particular drug, the hypothesis might be that researchers expect the drug to have some type of effect on the symptoms of a specific illness. In psychology, the hypothesis might focus on how a certain aspect of the environment might influence a particular behavior.

Remember, a hypothesis does not have to be correct. While the hypothesis predicts what the researchers expect to see, the goal of the research is to determine whether this guess is right or wrong. When conducting an experiment, researchers might explore numerous factors to determine which ones might contribute to the ultimate outcome.

In many cases, researchers may find that the results of an experiment  do not  support the original hypothesis. When writing up these results, the researchers might suggest other options that should be explored in future studies.

In many cases, researchers might draw a hypothesis from a specific theory or build on previous research. For example, prior research has shown that stress can impact the immune system. So a researcher might hypothesize: "People with high-stress levels will be more likely to contract a common cold after being exposed to the virus than people who have low-stress levels."

In other instances, researchers might look at commonly held beliefs or folk wisdom. "Birds of a feather flock together" is one example of a folk adage that a psychologist might try to investigate. The researcher might pose a specific hypothesis that "People tend to select romantic partners who are similar to them in interests and educational level."

Elements of a Good Hypothesis

So how do you write a good hypothesis? When trying to come up with a hypothesis for your research or experiments, ask yourself the following questions:

  • Is your hypothesis based on your research on a topic?
  • Can your hypothesis be tested?
  • Does your hypothesis include independent and dependent variables?

Before you come up with a specific hypothesis, spend some time doing background research. Once you have completed a literature review, start thinking about potential questions you still have. Pay attention to the discussion section in the journal articles you read. Many authors will suggest questions that still need to be explored.

How to Formulate a Good Hypothesis

To form a hypothesis, you should take these steps:

  • Collect as many observations about a topic or problem as you can.
  • Evaluate these observations and look for possible causes of the problem.
  • Create a list of possible explanations that you might want to explore.
  • After you have developed some possible hypotheses, think of ways that you could confirm or disprove each hypothesis through experimentation. This is known as falsifiability.

In the scientific method ,  falsifiability is an important part of any valid hypothesis. In order to test a claim scientifically, it must be possible that the claim could be proven false.

Students sometimes confuse the idea of falsifiability with the idea that it means that something is false, which is not the case. What falsifiability means is that  if  something was false, then it is possible to demonstrate that it is false.

One of the hallmarks of pseudoscience is that it makes claims that cannot be refuted or proven false.

The Importance of Operational Definitions

A variable is a factor or element that can be changed and manipulated in ways that are observable and measurable. However, the researcher must also define how the variable will be manipulated and measured in the study.

Operational definitions are specific definitions for all relevant factors in a study. This process helps make vague or ambiguous concepts detailed and measurable.

For example, a researcher might operationally define the variable " test anxiety " as the results of a self-report measure of anxiety experienced during an exam. A "study habits" variable might be defined by the amount of studying that actually occurs as measured by time.

These precise descriptions are important because many things can be measured in various ways. Clearly defining these variables and how they are measured helps ensure that other researchers can replicate your results.

Replicability

One of the basic principles of any type of scientific research is that the results must be replicable.

Replication means repeating an experiment in the same way to produce the same results. By clearly detailing the specifics of how the variables were measured and manipulated, other researchers can better understand the results and repeat the study if needed.

Some variables are more difficult than others to define. For example, how would you operationally define a variable such as aggression ? For obvious ethical reasons, researchers cannot create a situation in which a person behaves aggressively toward others.

To measure this variable, the researcher must devise a measurement that assesses aggressive behavior without harming others. The researcher might utilize a simulated task to measure aggressiveness in this situation.

Hypothesis Checklist

  • Does your hypothesis focus on something that you can actually test?
  • Does your hypothesis include both an independent and dependent variable?
  • Can you manipulate the variables?
  • Can your hypothesis be tested without violating ethical standards?

The hypothesis you use will depend on what you are investigating and hoping to find. Some of the main types of hypotheses that you might use include:

  • Simple hypothesis : This type of hypothesis suggests there is a relationship between one independent variable and one dependent variable.
  • Complex hypothesis : This type suggests a relationship between three or more variables, such as two independent and dependent variables.
  • Null hypothesis : This hypothesis suggests no relationship exists between two or more variables.
  • Alternative hypothesis : This hypothesis states the opposite of the null hypothesis.
  • Statistical hypothesis : This hypothesis uses statistical analysis to evaluate a representative population sample and then generalizes the findings to the larger group.
  • Logical hypothesis : This hypothesis assumes a relationship between variables without collecting data or evidence.

A hypothesis often follows a basic format of "If {this happens} then {this will happen}." One way to structure your hypothesis is to describe what will happen to the  dependent variable  if you change the  independent variable .

The basic format might be: "If {these changes are made to a certain independent variable}, then we will observe {a change in a specific dependent variable}."

A few examples of simple hypotheses:

  • "Students who eat breakfast will perform better on a math exam than students who do not eat breakfast."
  • "Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety."​
  • "Motorists who talk on the phone while driving will be more likely to make errors on a driving course than those who do not talk on the phone."
  • "Children who receive a new reading intervention will have higher reading scores than students who do not receive the intervention."

Examples of a complex hypothesis include:

  • "People with high-sugar diets and sedentary activity levels are more likely to develop depression."
  • "Younger people who are regularly exposed to green, outdoor areas have better subjective well-being than older adults who have limited exposure to green spaces."

Examples of a null hypothesis include:

  • "There is no difference in anxiety levels between people who take St. John's wort supplements and those who do not."
  • "There is no difference in scores on a memory recall task between children and adults."
  • "There is no difference in aggression levels between children who play first-person shooter games and those who do not."

Examples of an alternative hypothesis:

  • "People who take St. John's wort supplements will have less anxiety than those who do not."
  • "Adults will perform better on a memory task than children."
  • "Children who play first-person shooter games will show higher levels of aggression than children who do not." 

Collecting Data on Your Hypothesis

Once a researcher has formed a testable hypothesis, the next step is to select a research design and start collecting data. The research method depends largely on exactly what they are studying. There are two basic types of research methods: descriptive research and experimental research.

Descriptive Research Methods

Descriptive research such as  case studies ,  naturalistic observations , and surveys are often used when  conducting an experiment is difficult or impossible. These methods are best used to describe different aspects of a behavior or psychological phenomenon.

Once a researcher has collected data using descriptive methods, a  correlational study  can examine how the variables are related. This research method might be used to investigate a hypothesis that is difficult to test experimentally.

Experimental Research Methods

Experimental methods  are used to demonstrate causal relationships between variables. In an experiment, the researcher systematically manipulates a variable of interest (known as the independent variable) and measures the effect on another variable (known as the dependent variable).

Unlike correlational studies, which can only be used to determine if there is a relationship between two variables, experimental methods can be used to determine the actual nature of the relationship—whether changes in one variable actually  cause  another to change.

The hypothesis is a critical part of any scientific exploration. It represents what researchers expect to find in a study or experiment. In situations where the hypothesis is unsupported by the research, the research still has value. Such research helps us better understand how different aspects of the natural world relate to one another. It also helps us develop new hypotheses that can then be tested in the future.


What 'Fail to Reject' Means in a Hypothesis Test


In statistics, scientists can perform a number of different significance tests to determine if there is a relationship between two phenomena. One of the first they usually perform is a null hypothesis test. In short, the null hypothesis states that there is no meaningful relationship between two measured phenomena. After performing a test, scientists can:

  • Reject the null hypothesis (meaning the data provide evidence of a consequential relationship between the two phenomena), or
  • Fail to reject the null hypothesis (meaning the test has not identified a consequential relationship between the two phenomena)

Key Takeaways: The Null Hypothesis

• In a test of significance, the null hypothesis states that there is no meaningful relationship between two measured phenomena.

• By comparing the null hypothesis to an alternative hypothesis, scientists can either reject or fail to reject the null hypothesis.

• The null hypothesis cannot be positively proven. Rather, all that scientists can determine from a test of significance is that the evidence collected does or does not disprove the null hypothesis.

It is important to note that a failure to reject does not mean that the null hypothesis is true—only that the test did not prove it to be false. In some cases, depending on the experiment, a relationship may exist between two phenomena that is not identified by the experiment. In such cases, new experiments must be designed to rule out alternative hypotheses.

Null vs. Alternative Hypothesis

The null hypothesis is considered the default in a scientific experiment . In contrast, an alternative hypothesis is one that claims that there is a meaningful relationship between two phenomena. These two competing hypotheses can be compared by performing a statistical hypothesis test, which determines whether there is a statistically significant relationship between the data.

For example, scientists studying the water quality of a stream may wish to determine whether a certain chemical affects the acidity of the water. The null hypothesis—that the chemical has no effect on the water quality—can be tested by measuring the pH level of two water samples, one of which contains some of the chemical and one of which has been left untouched. If the sample with the added chemical is measurably more or less acidic—as determined through statistical analysis—it is a reason to reject the null hypothesis. If the sample's acidity is unchanged, it is a reason to not reject the null hypothesis.
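
A minimal sketch of that comparison in Python (the pH readings below are invented for illustration):

```python
# Illustrative sketch of the stream example; the pH readings are invented.
from scipy import stats

treated = [6.8, 6.9, 6.7, 7.0, 6.6, 6.8]    # samples with the chemical
untreated = [7.1, 7.2, 7.0, 7.3, 7.1, 7.2]  # untouched samples

result = stats.ttest_ind(treated, untreated)
if result.pvalue <= 0.05:
    print(f"p = {result.pvalue:.4f}: reject H0; the chemical appears to affect acidity.")
else:
    print(f"p = {result.pvalue:.4f}: fail to reject H0.")
```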

When scientists design experiments, they attempt to find evidence for the alternative hypothesis. They do not try to prove that the null hypothesis is true. The null hypothesis is assumed to be an accurate statement until contrary evidence proves otherwise. As a result, a test of significance does not produce any evidence pertaining to the truth of the null hypothesis.

Failing to Reject vs. Accept

In an experiment, the null hypothesis and the alternative hypothesis should be carefully formulated such that one and only one of these statements is true. If the collected data supports the alternative hypothesis, then the null hypothesis can be rejected as false. However, if the data does not support the alternative hypothesis, this does not mean that the null hypothesis is true. All it means is that the null hypothesis has not been disproven—hence the term "failure to reject." A "failure to reject" a hypothesis should not be confused with acceptance.

In mathematics, negations are typically formed by simply placing the word “not” in the correct place. Using this convention, tests of significance allow scientists to either reject or not reject the null hypothesis. It sometimes takes a moment to realize that “not rejecting” is not the same as "accepting."

Null Hypothesis Example

In many ways, the philosophy behind a test of significance is similar to that of a trial. At the beginning of the proceedings, when the defendant enters a plea of “not guilty,” it is analogous to the statement of the null hypothesis. While the defendant may indeed be innocent, there is no plea of “innocent” to be formally made in court. The alternative hypothesis of “guilty” is what the prosecutor attempts to demonstrate.

The presumption at the outset of the trial is that the defendant is innocent. In theory, there is no need for the defendant to prove that he or she is innocent. The burden of proof is on the prosecuting attorney, who must marshal enough evidence to convince the jury that the defendant is guilty beyond a reasonable doubt. Likewise, in a test of significance, a scientist can only reject the null hypothesis by providing evidence for the alternative hypothesis.

If there is not enough evidence in a trial to demonstrate guilt, then the defendant is declared “not guilty.” This claim has nothing to do with innocence; it merely reflects the fact that the prosecution failed to provide enough evidence of guilt. In a similar way, a failure to reject the null hypothesis in a significance test does not mean that the null hypothesis is true. It only means that the scientist was unable to provide enough evidence for the alternative hypothesis.

For example, scientists testing the effects of a certain pesticide on crop yields might design an experiment in which some crops are left untreated and others are treated with varying amounts of pesticide. Any result in which the crop yields varied based on pesticide exposure—assuming all other variables are equal—would provide strong evidence for the alternative hypothesis (that the pesticide does affect crop yields). As a result, the scientists would have reason to reject the null hypothesis.



Null Hypothesis: Definition, Rejecting & Examples

By Jim Frost

What is a Null Hypothesis?

The null hypothesis in statistics states that there is no difference between groups or no relationship between variables. It is one of two mutually exclusive hypotheses about a population in a hypothesis test.

  • Null Hypothesis (H0): No effect exists in the population.
  • Alternative Hypothesis (HA): The effect exists in the population.

In every study or experiment, researchers assess an effect or relationship. This effect can be the effectiveness of a new drug, building material, or other intervention that has benefits. There is a benefit or connection that the researchers hope to identify. Unfortunately, no effect may exist. In statistics, we call this lack of an effect the null hypothesis. Researchers assume that this notion of no effect is correct until they have enough evidence to suggest otherwise, similar to how a trial presumes innocence.

In this context, the analysts don’t necessarily believe the null hypothesis is correct. In fact, they typically want to reject it because that leads to more exciting finds about an effect or relationship. The new vaccine works!

You can think of it as the default theory that requires sufficiently strong evidence to reject. Like a prosecutor, researchers must collect sufficient evidence to overturn the presumption of no effect. Investigators must work hard to set up a study and a data collection system to obtain evidence that can reject the null hypothesis.

Related post : What is an Effect in Statistics?

Null Hypothesis Examples

Null hypotheses start as research questions that the investigator rephrases as a statement indicating there is no effect or relationship.

| Research question | Null hypothesis |
|---|---|
| Does the vaccine prevent infections? | The vaccine does not affect the infection rate. |
| Does the new additive increase product strength? | The additive does not affect mean product strength. |
| Does the exercise intervention increase bone mineral density? | The intervention does not affect bone mineral density. |
| As screen time increases, does test performance decrease? | There is no relationship between screen time and test performance. |

After reading these examples, you might think they’re a bit boring and pointless. However, the key is to remember that the null hypothesis defines the condition that the researchers need to discredit before suggesting an effect exists.

Let’s see how you reject the null hypothesis and get to those more exciting findings!

When to Reject the Null Hypothesis

So, you want to reject the null hypothesis, but how and when can you do that? To start, you’ll need to perform a statistical test on your data. The following is an overview of performing a study that uses a hypothesis test.

The first step is to devise a research question and the appropriate null hypothesis. After that, the investigators need to formulate an experimental design and data collection procedures that will allow them to gather data that can answer the research question. Then they collect the data. For more information about designing a scientific study that uses statistics, read my post 5 Steps for Conducting Studies with Statistics .

After data collection is complete, statistics and hypothesis testing enter the picture. Hypothesis testing takes your sample data and evaluates how consistent they are with the null hypothesis. The p-value is a crucial part of the statistical results because it quantifies how strongly the sample data contradict the null hypothesis.

When the sample data provide sufficient evidence, you can reject the null hypothesis. In a hypothesis test, this process involves comparing the p-value to your significance level .

Rejecting the Null Hypothesis

Reject the null hypothesis when the p-value is less than or equal to your significance level. Your sample data favor the alternative hypothesis, which suggests that the effect exists in the population. For a mnemonic device, remember—when the p-value is low, the null must go!

When you can reject the null hypothesis, your results are statistically significant. Learn more about Statistical Significance: Definition & Meaning .

Failing to Reject the Null Hypothesis

Conversely, when the p-value is greater than your significance level, you fail to reject the null hypothesis. The sample data provides insufficient data to conclude that the effect exists in the population. When the p-value is high, the null must fly!

Note that failing to reject the null is not the same as proving it. For more information about the difference, read my post about Failing to Reject the Null .
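
The decision rule itself is tiny. This toy helper, which is not from the article, simply restates the two mnemonics in code:

```python
# Toy helper (not from the article) restating the decision rule.
def decide(p_value: float, alpha: float = 0.05) -> str:
    if p_value <= alpha:
        return "Reject H0 (statistically significant)"  # the null must go
    return "Fail to reject H0 (insufficient evidence)"  # the null must fly

print(decide(0.03))  # Reject H0 (statistically significant)
print(decide(0.30))  # Fail to reject H0 (insufficient evidence)
```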

That’s a very general look at the process. But I hope you can see how the path to more exciting findings depends on being able to rule out the less exciting null hypothesis that states there’s nothing to see here!

Let’s move on to learning how to write the null hypothesis for different types of effects, relationships, and tests.

Related posts : How Hypothesis Tests Work and Interpreting P-values

How to Write a Null Hypothesis

The null hypothesis varies by the type of statistic and hypothesis test. Remember that inferential statistics use samples to draw conclusions about populations. Consequently, when you write a null hypothesis, it must make a claim about the relevant population parameter . Further, that claim usually indicates that the effect does not exist in the population. Below are typical examples of writing a null hypothesis for various parameters and hypothesis tests.

Related posts : Descriptive vs. Inferential Statistics and Populations, Parameters, and Samples in Inferential Statistics

Group Means

T-tests and ANOVA assess the differences between group means. For these tests, the null hypothesis states that there is no difference between group means in the population. In other words, the experimental conditions that define the groups do not affect the mean outcome. Mu (µ) is the population parameter for the mean, and you’ll need to include it in the statement for this type of study.

For example, an experiment compares the mean bone density changes for a new osteoporosis medication. The control group does not receive the medicine, while the treatment group does. The null states that the mean bone density changes for the control and treatment groups are equal. A code sketch follows the hypotheses below.

  • Null Hypothesis (H0): Group means are equal in the population: µ1 = µ2, or µ1 − µ2 = 0.
  • Alternative Hypothesis (HA): Group means are not equal in the population: µ1 ≠ µ2, or µ1 − µ2 ≠ 0.
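
Here is a minimal sketch of that test in Python with SciPy; the percent-change values are invented for illustration.

```python
# Sketch of the bone-density example: testing H0: mu1 = mu2 on invented
# percent changes in bone mineral density.
from scipy import stats

control = [-0.5, 0.1, -0.2, 0.0, -0.3, -0.1]
treatment = [0.6, 0.9, 0.4, 1.1, 0.7, 0.8]

result = stats.ttest_ind(treatment, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A p-value at or below the significance level is evidence against mu1 = mu2.
```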

Group Proportions

Proportions tests assess the differences between group proportions. For these tests, the null hypothesis states that there is no difference between group proportions. Again, the experimental conditions did not affect the proportion of events in the groups. P is the population proportion parameter that you’ll need to include.

For example, a vaccine experiment compares the infection rate in the treatment group to the control group. The treatment group receives the vaccine, while the control group does not. The null states that the infection rates for the control and treatment groups are equal. A code sketch follows the hypotheses below.

  • Null Hypothesis (H0): Group proportions are equal in the population: p1 = p2.
  • Alternative Hypothesis (HA): Group proportions are not equal in the population: p1 ≠ p2.
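
A matching sketch for the proportions case, using statsmodels; the infection counts are invented.

```python
# Sketch of the vaccine example with statsmodels; the counts are invented.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

infections = np.array([12, 30])   # events in vaccine and control groups
group_sizes = np.array([500, 500])

stat, p = proportions_ztest(count=infections, nobs=group_sizes)
print(f"z = {stat:.2f}, p = {p:.4f}")  # small p: evidence that p1 != p2
```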

Correlation and Regression Coefficients

Some studies assess the relationship between two continuous variables rather than differences between groups.

In these studies, analysts often use either correlation or regression analysis . For these tests, the null states that there is no relationship between the variables. Specifically, it says that the correlation or regression coefficient is zero. As one variable increases, there is no tendency for the other variable to increase or decrease. Rho (ρ) is the population correlation parameter and beta (β) is the regression coefficient parameter.

For example, a study assesses the relationship between screen time and test performance. The null states that there is no correlation between this pair of variables. As screen time increases, test performance does not tend to increase or decrease. A code sketch follows the hypotheses below.

  • Null Hypothesis (H0): The correlation in the population is zero: ρ = 0.
  • Alternative Hypothesis (HA): The correlation in the population is not zero: ρ ≠ 0.
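
And a matching sketch for the correlation case with SciPy; the data are simulated with a built-in negative relationship so that the test has something to detect.

```python
# Sketch of the screen-time example; the data are simulated with a
# built-in negative relationship, so the test should reject rho = 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
screen_time = rng.uniform(0, 6, 40)                        # hours per day
test_score = 75 - 2.5 * screen_time + rng.normal(0, 8, 40)

r, p = stats.pearsonr(screen_time, test_score)
print(f"r = {r:.2f}, p = {p:.4f}")
```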

For all these cases, the analysts define the hypotheses before the study. After collecting the data, they perform a hypothesis test to determine whether they can reject the null hypothesis.

The preceding examples are all for two-tailed hypothesis tests. To learn about one-tailed tests and how to write a null hypothesis for them, read my post One-Tailed vs. Two-Tailed Tests .

Related post : Understanding Correlation

Neyman, J., & Pearson, E. S. (1933). On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society A, 231(694–706), 289–337.

Reader Interactions

January 11, 2024 at 2:57 pm

Thanks for the reply.

January 10, 2024 at 1:23 pm

Hi Jim, In your comment you state that equivalence test null and alternate hypotheses are reversed. For hypothesis tests of data fits to a probability distribution, the null hypothesis is that the probability distribution fits the data. Is this correct?

' src=

January 10, 2024 at 2:15 pm

Those are two separate things: equivalence testing and normality tests. But, yes, you’re correct on both.

Hypotheses are switched for equivalence testing. You need to “work” (i.e., collect a large sample of good-quality data) to reject the null that the groups are different, and thereby conclude they’re the same.

With typical hypothesis tests, if you have low-quality data and a small sample size, you’ll fail to reject the null that the groups are the same and may conclude they’re equivalent. But that’s more a statement about the low quality and small sample size than about the groups actually being equal.

So, equivalence testing makes you work to obtain a finding that the groups are the same (at least within some amount you define as a trivial difference).

For normality testing, and other distribution tests, the null states that the data follow the distribution (normal or whatever). If you reject the null, you have sufficient evidence to conclude that your sample data don’t follow the probability distribution. That’s a rare case where you hope to fail to reject the null. And it suffers from the problem I describe above where you might fail to reject the null simply because you have a small sample size. In that case, you’d conclude the data follow the probability distribution but it’s more that you don’t have enough data for the test to register the deviation. In this scenario, if you had a larger sample size, you’d reject the null and conclude it doesn’t follow that distribution.

I don’t know of any equivalence testing type approach for distribution fit tests where you’d need to work to show the data follow a distribution, although I haven’t looked for one either!
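To illustrate the sample-size caveat described above, here is a small simulation sketch; the distribution and sample sizes are assumptions chosen purely for demonstration.

```python
# Simulate clearly non-normal (exponential) data, then run Shapiro-Wilk
# normality tests on a small and a large subsample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
skewed = rng.exponential(scale=1.0, size=2000)

for n in (20, 2000):
    stat, p = stats.shapiro(skewed[:n])
    print(f"n = {n}: Shapiro-Wilk p = {p:.4f}")
# With only 20 observations, the test may fail to reject normality (too
# little data to register the deviation); with 2,000 it almost certainly
# rejects, matching the reasoning above.
```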


February 20, 2022 at 9:26 pm

Is a null hypothesis regularly (always) stated in the negative? “there is no” or “does not”

February 23, 2022 at 9:21 pm

Typically, the null hypothesis includes an equal sign. The null hypothesis states that the population parameter equals a particular value. That value is usually one that represents no effect. In the case of a one-sided hypothesis test, the null still contains an equal sign but it’s “greater than or equal to” or “less than or equal to.” If you wanted to translate the null hypothesis from its native mathematical expression, you could use the expression “there is no effect.” But the mathematical form more specifically states what it’s testing.

It’s the alternative hypothesis that typically contains does not equal.

There are some exceptions. For example, in an equivalence test where the researchers want to show that two things are equal, the null hypothesis states that they’re not equal.

In short, the null hypothesis states the condition that the researchers hope to reject. They need to work hard to set up an experiment and data collection that’ll gather enough evidence to be able to reject the null condition.
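For readers who want to see those reversed hypotheses in action, below is a minimal equivalence-testing (TOST) sketch using statsmodels; the simulated data and the ±0.5 equivalence margin are assumptions, not recommendations.

```python
# TOST: H0 is that the group means differ by more than the margin;
# rejecting it supports equivalence within (-0.5, 0.5).
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(1)
group_a = rng.normal(10.0, 2.0, size=50)
group_b = rng.normal(10.1, 2.0, size=50)

p_value, lower_test, upper_test = ttost_ind(group_a, group_b, low=-0.5, upp=0.5)
print(f"TOST p = {p_value:.4f}")  # a small p supports equivalence
```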


February 15, 2022 at 9:32 am

Dear sir, I always read your notes on research methods. Kindly tell me, is there a book available that covers all of these? Wonderful. Urgent.



Statistics LibreTexts

9.1: Null and Alternative Hypotheses



The actual test begins by considering two hypotheses . They are called the null hypothesis and the alternative hypothesis . These hypotheses contain opposing viewpoints.

\(H_0\): The null hypothesis: It is a statement of no difference between the variables; they are not related. This can often be considered the status quo; as a result, if the null hypothesis is rejected, some action is typically required.

\(H_a\): The alternative hypothesis: It is a claim about the population that is contradictory to \(H_0\) and what we conclude when we reject \(H_0\). This is usually what the researcher is trying to prove.

Since the null and alternative hypotheses are contradictory, you must examine the evidence to decide whether it is sufficient to reject the null hypothesis. The evidence comes in the form of sample data.

After you have determined which hypothesis the sample supports, you make a decision. There are two options for a decision. They are "reject \(H_0\)" if the sample information favors the alternative hypothesis or "do not reject \(H_0\)" or "decline to reject \(H_0\)" if the sample information is insufficient to reject the null hypothesis.

Table \(\PageIndex{1}\): Mathematical Symbols Used in \(H_{0}\) and \(H_{a}\):

  • If \(H_{0}\) has equal (=), then \(H_{a}\) has not equal \((\neq)\), greater than (>), or less than (<).
  • If \(H_{0}\) has greater than or equal to \((\geq)\), then \(H_{a}\) has less than (<).
  • If \(H_{0}\) has less than or equal to \((\leq)\), then \(H_{a}\) has greater than (>).

\(H_{0}\) always has a symbol with an equal in it. \(H_{a}\) never has a symbol with an equal in it. The choice of symbol depends on the wording of the hypothesis test. However, be aware that many researchers (including one of the co-authors in research work) use = in the null hypothesis, even with > or < as the symbol in the alternative hypothesis. This practice is acceptable because we only make the decision to reject or not reject the null hypothesis.

Example \(\PageIndex{1}\)

  • \(H_{0}\): No more than 30% of the registered voters in Santa Clara County voted in the primary election. \(p \leq 0.30\)
  • \(H_{a}\): More than 30% of the registered voters in Santa Clara County voted in the primary election. \(p > 0.30\)
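As a rough sketch of how this one-sided proportion test could be run in Python, using hypothetical sample counts:

```python
# H0: p <= 0.30, HA: p > 0.30, tested with a one-sample z-test.
from statsmodels.stats.proportion import proportions_ztest

voted, sampled = 190, 500   # hypothetical survey of registered voters

z_stat, p_value = proportions_ztest(count=voted, nobs=sampled,
                                    value=0.30, alternative='larger')
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```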

Exercise \(\PageIndex{1}\)

A medical trial is conducted to test whether or not a new medicine reduces cholesterol by 25%. State the null and alternative hypotheses.

Answer:

  • \(H_{0}\): The drug reduces cholesterol by 25%. \(p = 0.25\)
  • \(H_{a}\): The drug does not reduce cholesterol by 25%. \(p \neq 0.25\)

Example \(\PageIndex{2}\)

We want to test whether the mean GPA of students in American colleges is different from 2.0 (out of 4.0). The null and alternative hypotheses are:

  • \(H_{0}: \mu = 2.0\)
  • \(H_{a}: \mu \neq 2.0\)
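A minimal sketch of this two-tailed test in Python; the GPA values are invented for illustration.

```python
# H0: mu = 2.0, HA: mu != 2.0, tested with a one-sample t-test.
from scipy import stats

gpas = [2.3, 1.9, 2.8, 2.1, 2.6, 2.4, 1.8, 2.9, 2.2, 2.5]

t_stat, p_value = stats.ttest_1samp(gpas, popmean=2.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```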

Exercise \(\PageIndex{2}\)

We want to test whether the mean height of eighth graders is 66 inches. State the null and alternative hypotheses. Fill in the correct symbol \((=, \neq, \geq, <, \leq, >)\) for the null and alternative hypotheses.

  • \(H_{0}: \mu \_ 66\)
  • \(H_{a}: \mu \_ 66\)

Answer:

  • \(H_{0}: \mu = 66\)
  • \(H_{a}: \mu \neq 66\)

Example \(\PageIndex{3}\)

We want to test if college students take less than five years to graduate from college, on the average. The null and alternative hypotheses are:

  • \(H_{0}: \mu \geq 5\)
  • \(H_{a}: \mu < 5\)
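The same SciPy function handles this left-tailed version through its `alternative` argument (available in SciPy 1.6 and later); the years-to-graduation values are hypothetical.

```python
# H0: mu >= 5, HA: mu < 5 (left-tailed one-sample t-test).
from scipy import stats

years = [4.5, 5.2, 4.1, 4.8, 4.3, 5.0, 4.6, 4.2]

t_stat, p_value = stats.ttest_1samp(years, popmean=5, alternative='less')
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```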

Exercise \(\PageIndex{3}\)

We want to test if it takes fewer than 45 minutes to teach a lesson plan. State the null and alternative hypotheses. Fill in the correct symbol ( =, ≠, ≥, <, ≤, >) for the null and alternative hypotheses.

  • \(H_{0}: \mu \_ 45\)
  • \(H_{a}: \mu \_ 45\)

Answer:

  • \(H_{0}: \mu \geq 45\)
  • \(H_{a}: \mu < 45\)

Example \(\PageIndex{4}\)

In an issue of U. S. News and World Report , an article on school standards stated that about half of all students in France, Germany, and Israel take advanced placement exams and a third pass. The same article stated that 6.6% of U.S. students take advanced placement exams and 4.4% pass. Test if the percentage of U.S. students who take advanced placement exams is more than 6.6%. State the null and alternative hypotheses.

  • \(H_{0}: p \leq 0.066\)
  • \(H_{a}: p > 0.066\)

Exercise \(\PageIndex{4}\)

On a state driver’s test, about 40% pass the test on the first try. We want to test if more than 40% pass on the first try. Fill in the correct symbol (\(=, \neq, \geq, <, \leq, >\)) for the null and alternative hypotheses.

  • \(H_{0}: p \_ 0.40\)
  • \(H_{a}: p \_ 0.40\)

Answer:

  • \(H_{0}: p = 0.40\)
  • \(H_{a}: p > 0.40\)

COLLABORATIVE EXERCISE

Bring to class a newspaper, some news magazines, and some Internet articles . In groups, find articles from which your group can write null and alternative hypotheses. Discuss your hypotheses with the rest of the class.

In a hypothesis test , sample data is evaluated in order to arrive at a decision about some type of claim. If certain conditions about the sample are satisfied, then the claim can be evaluated for a population. In a hypothesis test, we:

  • Evaluate the null hypothesis , typically denoted with \(H_{0}\). The null is not rejected unless the hypothesis test shows otherwise. The null statement must always contain some form of equality \((=, \leq \text{or} \geq)\)
  • Always write the alternative hypothesis , typically denoted with \(H_{a}\) or \(H_{1}\), using less than, greater than, or not equals symbols, i.e., \((\neq, >, \text{or} <)\).
  • If we reject the null hypothesis, then we can assume there is enough evidence to support the alternative hypothesis.
  • Never state that a claim is proven true or false. Keep in mind the underlying fact that hypothesis testing is based on probability laws; therefore, we can talk only in terms of non-absolute certainties.

Formula Review

\(H_{0}\) and \(H_{a}\) are contradictory.

\(H_{0}\) has: equal \((=)\), greater than or equal to \((\geq)\), or less than or equal to \((\leq)\).

\(H_{a}\) has: not equal \((\neq)\), greater than \((>)\), or less than \((<)\).

  • If \(\alpha \leq p\)-value, then do not reject \(H_{0}\).
  • If \(\alpha > p\)-value, then reject \(H_{0}\).

\(\alpha\) is preconceived. Its value is set before the hypothesis test starts. The \(p\)-value is calculated from the data.
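That decision rule is simple enough to express directly; here is a minimal sketch (the example p-values are arbitrary):

```python
def decide(p_value: float, alpha: float = 0.05) -> str:
    """Apply the rule above: alpha is preset, the p-value comes from the data."""
    return "Reject H0" if alpha > p_value else "Do not reject H0"

print(decide(0.03))   # Reject H0
print(decide(0.20))   # Do not reject H0
```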




Did you know?

The Difference Between Hypothesis and Theory

A hypothesis is an assumption, an idea that is proposed for the sake of argument so that it can be tested to see if it might be true.

In the scientific method, the hypothesis is constructed before any applicable research has been done, apart from a basic background review. You ask a question, read up on what has been studied before, and then form a hypothesis.

A hypothesis is usually tentative; it's an assumption or suggestion made strictly for the objective of being tested.

A theory , in contrast, is a principle that has been formed as an attempt to explain things that have already been substantiated by data. It is used in the names of a number of principles accepted in the scientific community, such as the Big Bang Theory . Because of the rigors of experimentation and control, it is understood to be more likely to be true than a hypothesis is.

In non-scientific use, however, hypothesis and theory are often used interchangeably to mean simply an idea, speculation, or hunch, with theory being the more common choice.

Since this casual use does away with the distinctions upheld by the scientific community, hypothesis and theory are prone to being wrongly interpreted even when they are encountered in scientific contexts—or at least, contexts that allude to scientific study without making the critical distinction that scientists employ when weighing hypotheses and theories.

The most common occurrence is when theory is interpreted—and sometimes even gleefully seized upon—to mean something having less truth value than other scientific principles. (The word law applies to principles so firmly established that they are almost never questioned, such as the law of gravity.)

This mistake is one of projection: since we use theory in general to mean something lightly speculated, then it's implied that scientists must be talking about the same level of uncertainty when they use theory to refer to their well-tested and reasoned principles.

The distinction has come to the forefront particularly on occasions when the content of science curricula in schools has been challenged—notably, when a school board in Georgia put stickers on textbooks stating that evolution was "a theory, not a fact, regarding the origin of living things." As Kenneth R. Miller, a cell biologist at Brown University, has said , a theory "doesn’t mean a hunch or a guess. A theory is a system of explanations that ties together a whole bunch of facts. It not only explains those facts, but predicts what you ought to find from other observations and experiments.”

While theories are never completely infallible, they form the basis of scientific reasoning because, as Miller said "to the best of our ability, we’ve tested them, and they’ve held up."

Synonyms: proposition, supposition

Hypothesis, theory, and law mean a formula derived by inference from scientific data that explains a principle operating in nature.

hypothesis implies insufficient evidence to provide more than a tentative explanation.

theory implies a greater range of evidence and greater likelihood of truth.

law implies a statement of order and relation in nature that has been found to be invariable under the same conditions.


Word History

Etymology: Greek, from hypotithenai "to put under, suppose," from hypo- + tithenai "to put."

First known use: 1641, in the meaning defined at sense 1a.

Phrases Containing hypothesis

  • counter-hypothesis
  • nebular hypothesis
  • null hypothesis
  • planetesimal hypothesis
  • Whorfian hypothesis



Null & Alternative Hypotheses | Definitions, Templates & Examples

Published on May 6, 2022 by Shaun Turney . Revised on June 22, 2023.

The null and alternative hypotheses are two competing claims that researchers weigh evidence for and against using a statistical test :

  • Null hypothesis ( H 0 ): There’s no effect in the population .
  • Alternative hypothesis ( H a or H 1 ) : There’s an effect in the population.

Table of contents

  • Answering your research question with hypotheses
  • What is a null hypothesis?
  • What is an alternative hypothesis?
  • Similarities and differences between null and alternative hypotheses
  • How to write null and alternative hypotheses
  • Other interesting articles
  • Frequently asked questions

The null and alternative hypotheses offer competing answers to your research question . When the research question asks “Does the independent variable affect the dependent variable?”:

  • The null hypothesis ( H 0 ) answers “No, there’s no effect in the population.”
  • The alternative hypothesis ( H a ) answers “Yes, there is an effect in the population.”

The null and alternative are always claims about the population. That’s because the goal of hypothesis testing is to make inferences about a population based on a sample . Often, we infer whether there’s an effect in the population by looking at differences between groups or relationships between variables in the sample. It’s critical for your research to write strong hypotheses .

You can use a statistical test to decide whether the evidence favors the null or alternative hypothesis. Each type of statistical test comes with a specific way of phrasing the null and alternative hypothesis. However, the hypotheses can also be phrased in a general way that applies to any test.


The null hypothesis is the claim that there’s no effect in the population.

If the sample provides enough evidence against the claim that there’s no effect in the population ( p ≤ α), then we can reject the null hypothesis . Otherwise, we fail to reject the null hypothesis.

Although “fail to reject” may sound awkward, it’s the only wording that statisticians accept . Be careful not to say you “prove” or “accept” the null hypothesis.

Null hypotheses often include phrases such as “no effect,” “no difference,” or “no relationship.” When written in mathematical terms, they always include an equality (usually =, but sometimes ≥ or ≤).

You can never know with complete certainty whether there is an effect in the population. Some percentage of the time, your inference about the population will be incorrect. When you incorrectly reject the null hypothesis, it’s called a type I error . When you incorrectly fail to reject it, it’s a type II error.

Examples of null hypotheses

The table below gives examples of research questions and null hypotheses. There’s always more than one way to answer a research question, but these null hypotheses can help you get started.

Research question: Does tooth flossing affect the number of cavities?
  • General null hypothesis (H0): Tooth flossing has no effect on the number of cavities.
  • Test-specific null hypothesis (t test): The mean number of cavities per person does not differ between the flossing group (µ1) and the non-flossing group (µ2) in the population; µ1 = µ2.

Research question: Does the amount of text highlighted in the textbook affect exam scores?
  • General null hypothesis (H0): The amount of text highlighted in the textbook has no effect on exam scores.
  • Test-specific null hypothesis (linear regression): There is no relationship between the amount of text highlighted and exam scores in the population; β = 0.

Research question: Does daily meditation decrease the incidence of depression?
  • General null hypothesis (H0): Daily meditation does not decrease the incidence of depression.*
  • Test-specific null hypothesis (two-proportions test): The proportion of people with depression in the daily-meditation group (p1) is greater than or equal to the no-meditation group (p2) in the population; p1 ≥ p2.

*Note that some researchers prefer to always write the null hypothesis in terms of “no effect” and “=”. It would be fine to say that daily meditation has no effect on the incidence of depression and p1 = p2.

The alternative hypothesis ( H a ) is the other answer to your research question . It claims that there’s an effect in the population.

Often, your alternative hypothesis is the same as your research hypothesis. In other words, it’s the claim that you expect or hope will be true.

The alternative hypothesis is the complement to the null hypothesis. Null and alternative hypotheses are exhaustive, meaning that together they cover every possible outcome. They are also mutually exclusive, meaning that only one can be true at a time.

Alternative hypotheses often include phrases such as “an effect,” “a difference,” or “a relationship.” When alternative hypotheses are written in mathematical terms, they always include an inequality (usually ≠, but sometimes < or >). As with null hypotheses, there are many acceptable ways to phrase an alternative hypothesis.

Examples of alternative hypotheses

The table below gives examples of research questions and alternative hypotheses to help you get started with formulating your own.

Research question: Does tooth flossing affect the number of cavities?
  • General alternative hypothesis (Ha): Tooth flossing has an effect on the number of cavities.
  • Test-specific alternative hypothesis (t test): The mean number of cavities per person differs between the flossing group (µ1) and the non-flossing group (µ2) in the population; µ1 ≠ µ2.

Research question: Does the amount of text highlighted in a textbook affect exam scores?
  • General alternative hypothesis (Ha): The amount of text highlighted in the textbook has an effect on exam scores.
  • Test-specific alternative hypothesis (linear regression): There is a relationship between the amount of text highlighted and exam scores in the population; β ≠ 0.

Research question: Does daily meditation decrease the incidence of depression?
  • General alternative hypothesis (Ha): Daily meditation decreases the incidence of depression.
  • Test-specific alternative hypothesis (two-proportions test): The proportion of people with depression in the daily-meditation group (p1) is less than the no-meditation group (p2) in the population; p1 < p2.

Null and alternative hypotheses are similar in some ways:

  • They’re both answers to the research question.
  • They both make claims about the population.
  • They’re both evaluated by statistical tests.

However, there are important differences between the two types of hypotheses, summarized in the following table.

  • Claim: H0 claims there is no effect in the population; Ha claims there is an effect in the population.
  • Symbols: H0 is written with an equality symbol (=, ≥, or ≤); Ha is written with an inequality symbol (≠, <, or >).
  • Outcome when the test favors the alternative: H0 is rejected and Ha is supported; otherwise, you fail to reject H0 and Ha is not supported.


To help you write your hypotheses, you can use the template sentences below. If you know which statistical test you’re going to use, you can use the test-specific template sentences. Otherwise, you can use the general template sentences.

General template sentences

The only thing you need to know to use these general template sentences are your dependent and independent variables. To write your research question, null hypothesis, and alternative hypothesis, fill in the following sentences with your variables:

Does independent variable affect dependent variable ?

  • Null hypothesis ( H 0 ): Independent variable does not affect dependent variable.
  • Alternative hypothesis ( H a ): Independent variable affects dependent variable.

Test-specific template sentences

Once you know the statistical test you’ll be using, you can write your hypotheses in a more precise and mathematical way specific to the test you chose. The table below provides template sentences for common statistical tests.

  • t test (two groups). Null hypothesis (H0): The mean dependent variable does not differ between group 1 (µ1) and group 2 (µ2) in the population; µ1 = µ2. Alternative hypothesis (Ha): The mean dependent variable differs between group 1 (µ1) and group 2 (µ2) in the population; µ1 ≠ µ2.
  • ANOVA (three groups). H0: The mean dependent variable does not differ between group 1 (µ1), group 2 (µ2), and group 3 (µ3) in the population; µ1 = µ2 = µ3. Ha: The mean dependent variables of group 1 (µ1), group 2 (µ2), and group 3 (µ3) are not all equal in the population.
  • Correlation. H0: There is no correlation between independent variable and dependent variable in the population; ρ = 0. Ha: There is a correlation between independent variable and dependent variable in the population; ρ ≠ 0.
  • Simple linear regression. H0: There is no relationship between independent variable and dependent variable in the population; β = 0. Ha: There is a relationship between independent variable and dependent variable in the population; β ≠ 0.
  • Two-proportions test. H0: The dependent variable expressed as a proportion does not differ between group 1 (p1) and group 2 (p2) in the population; p1 = p2. Ha: The dependent variable expressed as a proportion differs between group 1 (p1) and group 2 (p2) in the population; p1 ≠ p2.

Note: The template sentences above assume that you’re performing two-tailed tests. Two-tailed tests are appropriate for most studies.
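As a rough companion to the table, here is how each template could map onto a common Python call; this is a sketch with placeholder data, not a complete analysis.

```python
# Placeholder samples standing in for your real measurements.
from scipy import stats
from statsmodels.stats.proportion import proportions_ztest

group1, group2, group3 = [1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]
x, y = [1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 8.1]

stats.ttest_ind(group1, group2)           # t test: mu1 = mu2 vs. mu1 != mu2
stats.f_oneway(group1, group2, group3)    # ANOVA: mu1 = mu2 = mu3 vs. not all equal
stats.pearsonr(x, y)                      # correlation: rho = 0 vs. rho != 0
stats.linregress(x, y)                    # regression: beta = 0 vs. beta != 0
proportions_ztest([30, 45], [100, 100])   # proportions: p1 = p2 vs. p1 != p2
```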

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Descriptive statistics
  • Measures of central tendency
  • Correlation coefficient

Methodology

  • Cluster sampling
  • Stratified sampling
  • Types of interviews
  • Cohort study
  • Thematic analysis

Research bias

  • Implicit bias
  • Cognitive bias
  • Survivorship bias
  • Availability heuristic
  • Nonresponse bias
  • Regression to the mean

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.

The null hypothesis is often abbreviated as H 0 . When the null hypothesis is written using mathematical symbols, it always includes an equality symbol (usually =, but sometimes ≥ or ≤).

The alternative hypothesis is often abbreviated as H a or H 1 . When the alternative hypothesis is written using mathematical symbols, it always includes an inequality symbol (usually ≠, but sometimes < or >).

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (“ x affects y because …”).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses . In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.


  • Open access
  • Published: 10 June 2024

Prominent misinformation interventions reduce misperceptions but increase scepticism

  • Emma Hoes   ORCID: orcid.org/0000-0002-8063-5430 1 ,
  • Brian Aitken   ORCID: orcid.org/0009-0009-3917-9718 2 ,
  • Jingwen Zhang   ORCID: orcid.org/0000-0003-1733-6857 3 ,
  • Tomasz Gackowski   ORCID: orcid.org/0000-0002-5521-3295 4 &
  • Magdalena Wojcieszak   ORCID: orcid.org/0000-0001-5456-4483 3  

Nature Human Behaviour (2024)

Subjects: Cultural and media studies; Politics and international relations; Science, technology and society

Current interventions to combat misinformation, including fact-checking, media literacy tips and media coverage of misinformation, may have unintended consequences for democracy. We propose that these interventions may increase scepticism towards all information, including accurate information. Across three online survey experiments in three diverse countries (the United States, Poland and Hong Kong; total n  = 6,127), we tested the negative spillover effects of existing strategies and compared them with three alternative interventions against misinformation. We examined how exposure to fact-checking, media literacy tips and media coverage of misinformation affects individuals’ perception of both factual and false information, as well as their trust in key democratic institutions. Our results show that while all interventions successfully reduce belief in false information, they also negatively impact the credibility of factual information. This highlights the need for further improved strategies that minimize the harms and maximize the benefits of interventions against misinformation.


Scholars, observers and policymakers worry that information that is false, fabricated, untrustworthy or unsubstantiated by credible evidence can have dramatic consequences for democracy. These worries are sparked by recent events feared to be triggered by misleading claims. For instance, Trump tweeting that the 2020 election was rigged allegedly mobilized his supporters and led to what we now know as the Capitol Riots on 6 January 2021. Similarly, false information claiming that COVID vaccines are harmful may have led to clusters of communities refusing effective vaccines for pandemic mitigation 1 , 2 . Some research suggests that continued exposure to misinformation may lead to lasting misperceptions as a result of increased familiarity with factually inaccurate information 3 , 4 , 5 . Other work goes as far as saying that misinformation can influence political behaviour or election outcomes 6 . Because of such fears, institutions, agencies, platforms and scholars have directed plentiful resources to determining how to fight misinformation and make citizens more resilient. Combined efforts have resulted in well-known and established intervention strategies—namely, fact-checking, media literacy, and news media covering and correcting misinformation. These interventions are hoped to counter the spread of and belief in misinformation.

From 2016 to 2018 alone, an estimated 50 independent fact-checking organizations were established 7 , and numerous news outlets incorporated fact-checking practices as part of their business, such as the ‘Reality Check’ page on the BBC’s website, the New York Times ’s ‘Fact Checks’ and CNN’s ‘Facts First’. Media literacy interventions also burgeoned 8 , 9 , 10 . These aim to prevent rather than correct the potential impact of misinformation by educating the public on how to critically evaluate the quality of information 9 . Examples of this are Facebook’s ‘10 Tips to Spot Fake News’ and professional (journalistic) training programmes offered by the growing organization First Draft. Finally, news media organizations increased their coverage of misinformation more generally with the aim of raising awareness about its prevalence and effects. News media, whether partisan or not, increasingly focus on fake news and misinformation. Plotting the occurrence of the terms ‘fake news’, ‘misinformation’ and ‘disinformation’ in major US newspapers over time (as archived by LexisNexis) and the frequency of people searching for these terms on Google Search and Google News in the United States 11 show a remarkable increase in the popularity of these terms, starting around the 2016 US presidential election. For example, whereas there were 1,979 articles mentioning one of the terms in 2010, there were 9,012 such articles in 2017.

How effective are these strategies against misinformation? On the one hand, some studies suggest that fact-checking shows promising effects depending on the timing 12 , the source 13 and the kinds of labels used 14 , 15 . On the other hand, fact-checking alone is deemed insufficient to correct misperceptions 16 or, in some cases, is even shown to be counterproductive such that it can reinforce inaccurate beliefs 6 , 17 , 18 . In a similar vein, media literacy courses and general warnings about the presence of misinformation can also have unintended spillover effects. Such spillover effects (that is, the unintended consequences for democracy of interventions against misinformation) may make people critical towards not only misinformation but also factually accurate information 14 , 19 , 20 . Furthermore, recent research looking beyond misperceptions as an outcome finds that news media’s attention to misinformation decreases trust in science and politics 11 .

Accordingly, this project addresses an overarching question of theoretical and practical importance: how can we improve interventions against misinformation to minimize their negative spillover effects? We rely on pre-registered online survey experiments (see https://osf.io/t3nqe for the full pre-registration administered on 22 August 2022) in three countries—the United States, Poland and Hong Kong—to offer a comprehensive test of both positive and negative effects of fact-checking, media literacy interventions and the coverage of misinformation, side by side. We argue that the reason that these interventions may generate misperceptions, scepticism towards verified facts, and political and institutional distrust has to do with the way the message is delivered, as detailed below. For instance, strategies against misinformation often adopt a negative tone (for example, “Fight the Fake” or “Proceed with Caution”), put blame on political actors and media outlets, or amplify the harms of misinformation. We thus additionally propose applicable ways to prevent these negative effects from emerging. We compare existing delivery strategies of fact-checking, media literacy efforts and media coverage of misinformation with alternatives that incorporate theoretically driven adjustments to existing interventions. For coherence and parsimony, we focus on the effects of each strategy on three main outcomes: misperceptions, scepticism and trust. We report the results for the remaining pre-registered outcomes in Supplementary Section A.3 and refer the reader to our pre-registration for the rationale of these remaining outcomes.

For fact-checking, one common approach is to put emphasis on the (political) actor making inaccurate claims, which can be the originator of a false claim (for example, a politician) or the medium that spreads it (for example, a news outlet). The main fact-checking organizations in the United States, such as PolitiFact and Snopes, often explicitly and visibly name the ‘source’ of the fact-checked claim. For instance, PolitiFact’s fact-checked claims are accompanied by a logo (in the case of (social) media) or picture (in the case of public figures) and the name of the source: for example, “Robert F. Kennedy, Jr. stated on July 11, 2023 in a video…”. This approach—which we call the Accountability Strategy—may negatively affect people’s trust in politicians and media (the sources of false claims) because it explicitly blames the spreader of misinformation 21 . In addition, fact-checking efforts emphasizing the accountable actor may accidentally foster misperceptions by increasing the ease of (mis)information retrieval and the familiarity of the claim 22 , 23 , 24 , 25 , 26 .

Many (recent) media literacy efforts—including those coming from big social media companies such as Twitter and Facebook—typically focus on how to spot misinformation 20 . A good and prominent example of this is Facebook’s ‘10 Tips to Spot Fake News’. We call this the Misinformation Focus. Although this strategy may be successful at helping people identify inaccurate claims 27 , 28 by triggering accuracy motivations 29 , it may also generate negative spillover effects and increase scepticism towards otherwise true or factual pieces of information 20 . In addition, this strategy can decrease trust in various democratic actors (for example, (social) media, scientists and journalists) by emphasizing that it is difficult to know whom and what to trust.

Furthermore, news media coverage of misinformation in general and of false claims in particular often repeats misinformation and emphasizes its existence, spread and threats 30 , without putting these threats in the necessary context. That is, by repeating false claims, news media inadvertently increase the reach of these claims, and by giving disproportionate attention to misinformation, news media may generate the perception that misinformation is prevalent. This runs counter to recent empirical evidence suggesting that exposure to and effects of misinformation and untrustworthy sources are very limited 31 , 32 , 33 , 34 , 35 , 36 . We call such coverage the Decontextualized Approach and suspect that it may have unintended consequences such that it decreases trust and increases scepticism towards verified facts 11 , 20 . Moreover, by repeating falsehoods, news coverage of misinformation may come with a risk of fostering misperceptions by making them more familiar 26 , more easily retrievable from memory and therefore easier to process 25 .

In addition to identifying the effects of these existing strategies, we examine how they can be improved to prevent undesired spillover effects (that is, decreased trust, increased scepticism and inaccurate beliefs) and maximize positive outcomes (that is, preventing misperceptions). We propose that fact-checkers should consider what is important to emphasize when addressing (mis)information: the source (Accountability Strategy) or the verification of the relevant claim (which we call the Correctability Strategy). This approach relies on fact-checkers’ and journalists’ expertise in issue and frame selection, in which they engage as part of their daily practice 37 , 38 , and calls for an assessment of the need for Accountability versus the need for Correctability. More specifically, we suggest that—when appropriate—putting more emphasis on the claim itself (as opposed to emphasizing the source) might overcome negative spillover effects on trust and misperceptions. By focusing on the correctability of a claim, fact-checkers can emphasize evidence-based thinking without attributing blame to a politician or a news source. As pre-registered, we hypothesize that the Correctability Strategy will lead to lower levels of both misperceptions (H1 a ) and scepticism (H1 b ), as well as higher levels of trust (H1 c ) than the Accountability Strategy.

To improve existing media literacy interventions, we propose that they should not limit their attention to misinformation (the Misinformation Focus) 16 but also focus on detecting partisan bias, which we call the Bias Focus. After all, news bias and hyper-partisan reporting are more prevalent and also bigger problems for various democratic processes than misinformation 39 . This should help citizens better evaluate the quality of information in general while at the same time reducing the negative effects on scepticism and trust by not overemphasizing the role of misinformation in the news media ecosystem. Both strategies have in common that they should trigger accuracy motivations 29 , making people invest more cognitive resources in problem-solving and analytical thinking 27 , 28 and thus helping people recognize misinformation. Yet, only the Bias Focus should help identify misinformation without increasing scepticism towards all information, for several reasons.

Because this strategy specifically teaches people to identify balanced legacy media on top of biased media, it should help individuals identify accurate information. In addition, the Bias Focus should minimize scepticism towards all information because it may prompt people to think about how information is presented and framed. These tips should encourage people to evaluate the underlying assumptions and motivations behind news stories. This nuanced thinking should enhance media consumers’ ability to identify not only reliable information or overt misinformation but also subtler forms of manipulation, such as selective reporting or framing. Furthermore, focusing on biases highlights the importance of context in news reporting. Media literacy interventions can teach individuals to look for multiple sources and perspectives to gain a more comprehensive understanding of an issue. This can lead to the discovery of different viewpoints, without necessarily making individuals sceptical of all information. Lastly, the Bias Focus still encourages critical thinking, but it shifts the emphasis from outright distrust to informed scepticism. Participants learn to evaluate news stories on the basis of factors such as evidence, source credibility and logical coherence. Taken together, we propose that the Bias Focus empowers individuals to make more informed judgements when consuming media. In line with these arguments, we hypothesize that participants exposed to the Bias or Misinformation Focus will be less likely to endorse misperceptions (H2 a ), but those exposed to the Bias Focus will have lower levels of scepticism (H2 b ) and higher levels of trust (H2 c ) than participants exposed to the Misinformation Focus. Note that with these hypotheses, we deviate from the pre-registration in two ways. First, we had additionally formulated hypotheses on the identification of false, accurate and biased news using different measurements. For the sake of parsimony and coherence, in the main paper we only focus on three main outcomes (trust, scepticism and misperceptions) of interest for each of the independent variables, but we present the results of these hypothesized outcomes in Supplementary Information . Second, in the pre-registration we formulated no directional hypothesis on the effect of the Bias Focus and the Misinformation Focus on misperceptions, but we include it here to be able to show the effect of each treatment on the same outcomes, and in the direction based on the rationale outlined above. Note that we thus formulated this hypothesis after pre-registration but before data analysis.

Finally, to counteract the negative spillover effects of media coverage of misinformation, we propose that when covering misinformation, journalists should give context to the issue. In addition to informing about misinformation or a particular falsehood (that is, raising awareness), media should put it in the context of the most recent and best available scientific research. Currently, such evidence points out that—given the limited exposure to 34 and effects of 35 misinformation—misinformation is not as grave a problem as is often suggested. We thus compare the Contextualized Approach (that is, covering the problem in its broader context) to the Decontextualized Approach (that is, news media coverage without the context). While in both approaches news media may still report a particular falsehood to correct it and raise public awareness, putting the false claim in a wider context may deflate as opposed to inflate the salience of such a claim. We predict that participants exposed to the Contextualized Approach will have higher levels of trust (H3 a ) and lower levels of misperceptions (H3 b ) and scepticism (H3 c ) than participants in the Decontextualized Approach.

An overview of all the discussed strategies and their definitions can be found in Table 1 . To systematically isolate the causal effects of our treatments, we opted for subtle differences between existing strategies to fight misinformation versus our proposed strategies by changing one key component of each strategy. Larger differences between experimental stimuli increase the chances of observing significant differences between treatments, but at the cost of uncertainty of what is driving these differences. Moreover, previous studies have shown that even small differences in experimental stimuli, such as altering a single word 14 , can lead to substantial differential effects 40 .

To offer a comprehensive portrayal of the effects—positive and negative—of the three existing strategies versus our proposed strategies, we conducted online experiments in the United States ( n  = 2,008), Poland ( n  = 2,147) and Hong Kong ( n  = 1,972). While we did not pre-register any country-specific hypotheses, we selected Poland, Hong Kong and the United States for this survey experiment because these countries represent diverse cultural and political contexts, thus allowing us to explore the generalizability of interventions against misinformation across different societies. We randomized participants to one of the six treatment groups (each strategy makes up one treatment group) or a control group. A visualization of what participants in each treatment condition were presented with can be found in Supplementary Section A . Note that the materials in Supplementary Information have been redacted for legal reasons, but all original materials can be found on OSF at https://osf.io/5xc7k/ . As can be seen from the original materials, the treatments simulate realistic social media posts on a professionally designed and interactive social media site.

After exposure to treatment, all participants were asked to rate the accuracy of several true (measuring scepticism) and false (measuring misperceptions) claims as well as indicate their levels of trust in various institutions (for example, journalists, social media, traditional media and scientists). To test our hypotheses, we compared each proposed strategy to its corresponding existing strategy and the control group. Our key take-away is that most interventions against misinformation—including our proposed strategies to improve fact-checking, media literacy and the coverage of misinformation—come at a cost. While some seem to effectively decrease misperceptions, these interventions at the same time increase scepticism in true information. This means that people tend to rate not only false but also accurate information about important political topics as unreliable, untrustworthy and inaccurate. These effects are most pronounced in the United States and Poland, but less so in Hong Kong, where the effects are largely insignificant, although the coefficients follow similar directions. Given the pronounced dominance of true or trustworthy content over false or untrustworthy content in (social) media environments, this is a worrisome trend.

We began by assessing our key question of interest: compared with existing interventions to fight misinformation, do our proposed strategies overcome negative spillover effects on scepticism in verified facts, misperceptions and trust? We used the statistical software program R (version 2022.12.0+353; https://posit.co/download/rstudio-desktop/ ) to estimate pre-registered (see https://osf.io/t3nqe for the full pre-registration), two-tailed ordinary least squares regressions with the assigned treatment as the independent variable, the three outcomes as dependent variables and all the pre-treatment variables as covariates (that is, demographics; see Supplementary Section B.2 for the full list of covariates included in the analysis). While not pre-registered, we additionally clustered standard errors at the respondent level on the basis of reviewer request. The clustering did not change any of the results. Also on the basis of reviewer request, we computed a discernment measure that subtracts the perceived accuracy of false claims from that of true claims. These results can be found in Supplementary Section C.4 . Following our pre-analysis plan, we did these analyses separately for the United States, Poland and Hong Kong. The Methods section provides the details on the data and methodology, including the exact measurement of our dependent variables.
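To make the modelling strategy concrete, the sketch below shows the general shape of such an analysis in Python; it is not the authors’ code, and the file name, column names, and covariates are assumptions for illustration only.

```python
# OLS of an outcome on treatment indicators plus pre-treatment covariates,
# with respondent-clustered standard errors and a subtractive discernment score.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")  # hypothetical data file
df["discernment"] = df["acc_true"] - df["acc_false"]  # true minus false accuracy

model = smf.ols(
    "discernment ~ C(treatment) + age + gender + education + pol_interest",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(result.summary())
```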

General effects

Figure 1 shows the effects of the three interventions (fact-checking, media literacy and media coverage of misinformation), comparing the existing and our proposed strategies to the control group for each of the three countries. Figure 2 provides an overview of all effects. Supplementary Section C.1 contains detailed regression output of all the results. Almost all interventions—including our proposed strategies—are successful at reducing misperceptions (measured as the perceived accuracy of false claims), with some minor (mostly statistically insignificant, but see A5 for the Bayes factor (BF) analyses) differences between the different strategies (see Supplementary Tables 4 and 22 for the fully reported results for the United States and Poland, respectively). However, the same interventions also increase scepticism (measured as the perceived accuracy of true claims; Supplementary Tables 5 and 23 ). This is largely true for the United States and Poland, but not Hong Kong, where none of the interventions yield statistically significant effects (Supplementary Table 40 ; see Supplementary Table 91 for BFs), although the coefficients move in similar directions. In none of the countries did most of the interventions affect people’s general trust (see Supplementary Table 6 for the United States and Supplementary Table 24 for Poland) or trust in any specific institutions (using the individual items) (see Supplementary Tables 7 – 13 for the US results and Supplementary Tables 25 – 31 for Poland). Furthermore, in almost all cases, we found no statistically significant differences between the existing and our proposed strategies (see Supplementary Tables 14 – 21 for the US results, Supplementary Tables 32 – 39 for Poland and Supplementary Table 91 for BFs). These results hold when running the analyses only on those who passed the manipulation check (Supplementary Section C.2 ). It seems, however, that the mentioned effects, whether positive (reduced misperceptions) or negative (increased scepticism), are fairly short-lived. In the United States, we administered a follow-up survey one week after exposure to the treatments. We found that all the effects disappeared (Supplementary Tables 64 and 65 ). We also rejected all trust-related hypotheses and relegate the presentation of the results for Hong Kong to Supplementary Section C.1.3 .

Figure 1: The misperception (light blue), scepticism (light green) and trust (brown) coefficient estimates for each of the six treatment conditions by country are shown. The error bars represent 95% CIs. The fully reported results can be found in Supplementary Section A.1: for the United States, see Supplementary Tables 4–6; for Poland, Supplementary Tables 22–24; for Hong Kong, Supplementary Tables 40–42. There were 2,008 participants over seven independent experiments in the United States, 2,147 in Poland, and 1,972 in Hong Kong.

Figure 2: The checks represent support for hypotheses, whereas the Xs represent unsupported hypotheses.

Effects per country

We next turned to a more detailed discussion of our results per country (see Fig. 3 for a visualization of the mean levels of scepticism and misperception per treatment group and country). In the United States, all strategies except the Bias Focus decreased misperceptions compared with the control group (Accountability Strategy: β  = −0.352; P  = 0.001; 95% confidence interval (CI), −0.48, −0.22; Correctability Strategy: β  = −0.374; P  = 0.001; 95% CI, −0.51, −0.24; Misinformation Focus: β  = −0.14; P  = 0.038; 95% CI, −0.27, −0.01; Contextualized Approach: β  = −0.2709; P  = 0.001; 95% CI, −0.40, −0.14; Decontextualized Approach: β  = −0.142; P  = 0.034; 95% CI, −0.27, −0.01).

Figure 3: The mean misperception scores (in green) and the mean scepticism scores (in blue) across all seven independent experiment treatments (United States: n = 2,008; Poland: n = 2,147) are shown. Misperception scores were constructed by averaging the respondents’ accuracy ratings of false statements, whereas scepticism scores were created by averaging the respondents’ accuracy ratings of true statements. The data are presented as mean values ± s.d.

However, in the United States, the Contextualized Approach ( β  = 0.132; P  = 0.008; 95% CI, 0.04, 0.23), the Decontextualized Approach ( β  = 0.125; P  = 0.0121; 95% CI, 0.03, 0.22), the Accountability Strategy ( β  = 0.106; P  = 0.029; 95% CI, 0.01, 0.2), and the Bias Focus ( β  = 0.1; P  = 0.05; 95% CI, 0, 0.19) all increased scepticism relative to the control group. This means that most tested strategies came with the negative spillover effect of making citizens more sceptical towards true and verified information.

We additionally found that only the two fact-checking strategies improved discernment between the false and true claims (that is, the subtractive measure; Supplementary Table 88 ), whereas none of the media literacy or media coverage strategies did. We further reflect on this finding in Discussion .

We see a similar trend in Poland. Compared with the control group, all strategies except for the Misinformation Focus and the Accountability Strategy increased scepticism (Contextualized Approach: β  = 0.138; P  = 0.002; 95% CI, 0.05, 0.23; Decontextualized Approach: β  = 0.128; P  = 0.005; 95% CI, 0.04, 0.22; Correctability Strategy: β  = 0.159; P  = 0.001; 95% CI, 0.08, 0.24; Bias Focus: β  = 0.16; P  = 0.001; 95% CI, 0.07, 0.25). There are again no statistically significant differences between the corresponding strategies. In Poland, both fact-checking strategies, the Correctability Strategy ( β  = −0.29; P  = 0.001; 95% CI, −0.39, −0.17) and the Accountability Strategy ( β  = −0.21; P  = 0.001; 95% CI, −0.32, −0.1), were successful at reducing misperceptions. In sum, just like in the United States, in Poland most tested interventions also came with the negative spillover effects of making citizens more sceptical towards true and verified information, while—in the case of Poland—having only minimal positive effects of reducing misperceptions.

We also found that the Decontextualized Approach decreased the ability to discern between false and true claims (that is, the subtractive measure; Supplementary Table 89 ). All other strategies did not significantly affect discernment in Poland. We further reflect on these findings in Discussion .

Finally, two of our covariates—political interest and age—predicted our outcomes strongly and fairly consistently across all treatment groups and in both the United States (Supplementary Section C.1.1 ) and Poland (Supplementary Section C.1.2 ). Political interest is negatively correlated with scepticism towards true information in both the United States ( β  = −0.086; P  = 0.001; 95% CI, −0.12, −0.06) and Poland ( β  = −0.044; P  = 0.001; 95% CI, −0.07, −0.02). It also predicts decreases in misperceptions in Poland ( β  = −0.036; P  = 0.03; 95% CI, −0.07, −0.003), while predicting increases in misperceptions in the United States ( β  = 0.054; P  = 0.001; 95% CI, 0.02, 0.09). Older participants in both countries (United States: β  = −0.119; P  = 0.001; 95% CI, −0.14, −0.09; Poland: β  = −0.059; P  = 0.001; 95% CI, −0.08, −0.04) hold lower misperceptions than younger participants, with these effects being greater in the US sample. In the United States, age ( β  = 0.1; P  = 0.001; 95% CI, 0.08, 0.12) also increases scepticism towards true information; a similar effect for Poland misses the traditional threshold for statistical significance ( β  = 0.014; P  = 0.08; 95% CI, −0.001, 0.03).

Discussion

This project examined the potential negative consequences of current strategies to fight misinformation: fact-checking, media literacy tips and media coverage of misinformation. Online survey experiments in three diverse countries (the United States, Poland and Hong Kong) tested how these three strategies affected individuals' perceptions of both inaccurate and factual information, as well as their trust in key democratic institutions.

While dominant interventions, such as fact-checking, media literacy tips and news coverage of misinformation, aim to prevent the spread and endorsement of misinformation, we found that they may inadvertently prime individuals to approach all information, whether false or true, with heightened suspicion and scepticism. This is concerning: it suggests that mere exposure to alarming labels such as 'misinformation' or 'fake news' in the media and public discourse may have negative consequences and reduce people's trust in verified information. These findings, along with similar recent evidence (refs. 20, 41), suggest that existing misinformation mitigation approaches need to be redesigned.

To address these challenges, we proposed three alternative strategies for fact-checking, media literacy tips and media coverage of misinformation, and compared their effects to those of the existing approaches. We thus aimed to offer systematic evidence on a key question of relevance to the current societal climate: how can existing interventions be improved such that they do not reduce trust, increase scepticism or foster misperceptions? Answering this question and identifying which messaging and delivery strategies of fact-checking, media literacy and coverage of misinformation can minimize the harms while maximizing the benefits of these interventions could offer practical guidelines for media organizations, the educational sector and policymakers.

Our results demonstrate that while the tested interventions are successful at reducing belief in false information, they simultaneously increase scepticism about the credibility of factual information. This is the case not only for existing strategies but also for our proposed 'improved' strategies. In other words, individuals exposed to any of these interventions may become more likely to perceive true information as untrustworthy or inaccurate. This is particularly alarming given how prevalent discussions of, and interventions against, misinformation are in today's media ecosystem. Public discourse about misinformation has the potential to prime individuals to be excessively distrustful of all information, a phenomenon observed in previous research (refs. 11, 20).

Given that the average citizen is very unlikely to encounter misinformation (refs. 32, 33), wide and far-reaching fact-checking efforts or frequent news media attention to misinformation may incur more harms than benefits. Put differently, because most people are much more likely to encounter reliable news than misinformation, any increase in general scepticism may have a much stronger negative effect than the positive effect of reducing misperceptions (ref. 42). While one could still argue that an increase in scepticism may be worth the positive effect of decreased misperceptions as long as it improves discernment between true and false information, our additional analyses indicate that the majority of strategies, apart from fact-checking within the US context, have an indeterminate impact on discernment. This suggests that there is insufficient evidence to conclusively determine their effects on enhancing discernment between truth and falsehood. Decontextualized coverage of misinformation even worsened discernment in Poland. This latter finding in particular underscores that the benefits of reduced belief in false information may not outweigh the negative consequences of decreased belief in accurate and reliable news. In essence, the potential gains from reducing misperceptions must be carefully weighed against the broader implications of heightened scepticism in our information landscape.
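A back-of-the-envelope calculation makes this base-rate argument concrete. All quantities below are invented for illustration; the paper does not report these numbers.

```python
# Suppose a citizen's news diet contains far more reliable items than
# false ones, consistent with the low prevalence estimates cited above.
n_reliable_items = 95
n_false_items = 5

# Suppose an intervention lowers belief in each false item by 0.2 scale
# points (a benefit) but also lowers belief in each true item by 0.1
# scale points (a cost). Both effect sizes are hypothetical.
benefit_per_false_item = 0.2
cost_per_true_item = 0.1

net = n_false_items * benefit_per_false_item - n_reliable_items * cost_per_true_item
print(net)  # -8.5: at these base rates, the scepticism cost dominates
```

Even with the per-item benefit set to twice the per-item cost, the 19:1 ratio of reliable to false items makes the aggregate effect negative, which is the core of the argument.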

There are several potential explanations for the limited evidence of differences in effects between the existing and our proposed strategies. First, the differences between our treatments were subtle: more than one third of our sample failed the manipulation check, indicating that these differences were not noticeable to many participants. We note, however, that our interventions did not yield different effects even among participants who did pass the manipulation check. Nevertheless, it is important to acknowledge that our BF analyses indicate moderate evidence against the null hypothesis of no differences between the existing and proposed strategies, suggesting that our results should be interpreted with caution (see Supplementary Table 91 for BF values). We encourage future work to strengthen our adapted interventions and to run further experiments with them. Scholars could design strategies that differ more substantially from the existing ones. Although this would make it harder to determine which aspect of an intervention drives its effects, more substantial changes to existing interventions against misinformation are arguably needed to maximize their benefits and limit their harms.
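For readers unfamiliar with BF analyses: a Bayes factor quantifies how much the observed data favour one hypothesis over another, rather than yielding a binary reject/retain decision. As a hedged illustration (not the authors' analysis code), a JZS Bayes factor for a two-group comparison can be computed from a t statistic with the pingouin library; the inputs below are invented.

```python
import pingouin as pg

# Hypothetical comparison of an existing versus a proposed strategy:
# the t statistic and group sizes are placeholders, not study values.
t_stat = 0.8
n_existing, n_proposed = 300, 300

bf10 = pg.bayesfactor_ttest(t_stat, nx=n_existing, ny=n_proposed)
print(f"BF10 = {bf10:.3f}")
# Conventionally, BF10 between 1/3 and 3 is weak evidence either way;
# BF10 < 1/3 is moderate evidence for the null, BF10 > 3 against it.
```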

Second, our findings raise the question of whether the public effectively distinguishes between concepts such as falsehoods and bias, especially given that the Bias Focus in media literacy did not affect scepticism or misperceptions. As detailed in Supplementary Table 91, additional BF analyses present a nuanced picture: in Poland, the evidence leans towards our interventions having a notable effect, while in the United States, the evidence is less conclusive or even suggests minimal impact. This variation indicates that cultural or contextual factors might play an important role in how media literacy interventions are perceived and how effective they are. Because part of the public may equate partisan or biased news with 'fake news', the differential impact of our interventions could have been obscured. This underscores the need for scholars, journalists and educators to develop strategies that not only elucidate the differences between falsity and bias but also provide practical tools for identifying and evaluating these elements in diverse real-world contexts.

Third, our Contextualized news coverage treatment only informed people that misinformation is not widespread. Yet, the extent of misinformation could have been more clearly contextualized. Although many individuals struggle with numerical concepts, treatments that present a numerical anchor may be more impactful. Without such an anchor, people may still overestimate the prevalence and impact of misinformation, even when informed about its limited extent. Still, it is important to acknowledge that there is a delicate balance between pinpointing the appropriate level of manipulation strength in experimental setups and designing treatments that can effectively shape real interventions.

Finally, we offer two potential explanations for the (lack of) findings for Hong Kong (see Supplementary Section C.5 for BFs). First, the Hong Kong findings might be due to the age composition of the sample. We had difficulty recruiting older adults through the Hong Kong survey vendor: only 9.2% of participants in the Hong Kong sample were aged 55 and older, compared with 33.1% in the Poland sample and 31.9% in the US sample. The younger Hong Kong sample might have more baseline media literacy and more experience with misinformation corrections, which could attenuate the intervention effects. Another possible explanation has to do with Hong Kong's political environment. Some scholars argue that the limited impact of misinformation, and of misinformation interventions, stems from the largely stable political inclinations of the Hong Kong population. The overall trend in public opinion regarding whether Hong Kong should be governed under the 'one country, two systems' principle has changed little since 2014 or even earlier. Thus, despite frequent exposure to misinformation and the increasing availability of misinformation interventions, the influence of these interventions might be limited (ref. 43).

We acknowledge several limitations to our study. First, although survey experiments are a powerful tool for determining causality, they have limited external validity. We aimed to increase the external validity of our design by delivering treatments as realistic social media posts featured on a professionally designed and interactive social media site. Nevertheless, it is possible that our findings do not generalize to real-world situations where individuals are not exposed to interventions against misinformation in such a controlled manner. Second, our measurement of misperceptions was based on the perceived accuracy of self-fabricated claims, which may limit the ecological validity of our findings. At the same time, relying on existing false claims would have introduced the possibility that participants had encountered those claims before the study, which could have affected their perceived accuracy. Finally, as discussed above, the absence of clear effects in Hong Kong remains open to several explanations.

Regardless, these findings highlight the need for caution when attempting to combat misinformation and signal the difficulty of designing interventions that do so without unintended harms. Naturally, it is important to address false information. Yet, when doing so, it is critical to carefully craft and test strategies so that they do not inadvertently erode citizens' trust in accurate information. As scholars, policymakers, educators and journalists navigate the ever-changing (mis)information landscape, it is imperative to continue examining the intricate dynamics between misinformation, public scepticism and the effectiveness of interventions aimed at promoting accuracy and the consumption of verified information. Future research should focus on identifying the specific strategies and techniques that best preserve trust in reliable sources while combating falsehoods. Such comprehensive exploration will be instrumental in shaping more targeted and effective approaches to the challenges posed by misinformation in our information-driven society.

Methods

We conducted online survey experiments in the United States, Poland and Hong Kong. The study in each country ran from 30 August 2022 to 27 October 2022, and the follow-up survey in the United States ran from 8 September 2022 to 5 November 2022. This study received institutional review board approval from the University of California, Davis (approval no. 1792005-2). Participants were recruited through Dynata in the United States, Panel Ariadna in Poland and Qualtrics in Hong Kong. Respondents were compensated by Dynata, Panel Ariadna and Qualtrics directly, at a price per respondent of US$3.25, US$2.50 and US$4.80, respectively. These opinion-polling companies used stratification based on census information on age, gender and education level. Our US dataset included a total of 2,008 participants (mean age, 45 years; 50.22% female; 70.21% white). Our Poland dataset included a total of 2,147 participants (mean age, 45.64 years; 35.61% female). Finally, our Hong Kong dataset included a total of 1,972 participants (mean age, 37.93 years; 43.81% female). We based the sample size on a power analysis using the software G*Power version 3.1 (ref. 44) (see section 3.5 in the pre-registration at https://osf.io/t3nqe for the full calculation and rationale).

After giving informed consent to participate in the study (see Supplementary Section B.3 for the consent form), the respondents first completed a pre-survey answering questions about their sociodemographic characteristics, political attitudes and beliefs (see Supplementary Section B.1 for all covariates and their measurement). Next, the participants were presented with the following instructions: "On the next pages you will see two Facebook messages posted by (media) organizations and Facebook users in the last few days. Please read each message carefully. At the end of the survey we will ask you some questions about them. Please note that the posts are not interactive in this part of the study."
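As a hedged illustration of the kind of a priori power analysis the authors ran in G*Power, an equivalent calculation for a two-sample t-test can be done with statsmodels. The effect size, alpha and power below are placeholders; the study's actual inputs are given in section 3.5 of the pre-registration.

```python
from statsmodels.stats.power import TTestIndPower

# Placeholder design parameters (not the study's actual values):
# detect a small effect (Cohen's d = 0.2) at alpha = 0.05 with 80% power.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.2, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))  # participants required per group (~394)
```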

Each participant was randomly assigned to one of six treatment conditions or a control group. Each treatment featured two Facebook posts at the top of the newsfeed on a mock Facebook site, with a standardized number of comments and likes across conditions and with the background functions blurred (see Supplementary Section A for a redacted version of all stimulus materials per country, and https://osf.io/5xc7k/ for the original materials). All survey and stimulus materials were translated by native Polish and Cantonese speakers, with minor contextual adaptations of the claims and specific sources for the two countries.
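Random assignment to the six treatments and the control can be sketched as follows. This is a generic illustration of the design, not the survey platform's actual randomization routine, and the condition labels are shorthand invented here.

```python
import random

CONDITIONS = [
    "control",
    "correctability_strategy", "accountability_strategy",    # fact-checking
    "misinformation_focus", "bias_focus",                     # media literacy
    "contextualized_approach", "decontextualized_approach",   # media coverage
]

def assign_conditions(participant_ids, seed=2022):
    """Independently assign each participant to one of the seven groups."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

assignments = assign_conditions(range(8))
print(assignments)
```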

Specifically, we kept the media literacy tips consistent across the countries. For the other treatments and study materials, we kept all the statements (in both the treatment texts and the false/true claims used) as close to one another as possible. For instance, the treatment text in the Accountability Strategy started from the same 'skeleton': for example, the statement "Recently, [ACTOR] claimed that [STATEMENT]" was exactly the same for Poland and Hong Kong. We replaced [ACTOR] with comparable left- or right-leaning politicians in each country. For Poland, we chose known/unknown politicians from the governing coalition and from the opposition. For the Hong Kong stimuli, we similarly chose known/unknown politicians from the Pro-Establishment (also called Pro-Beijing) camp or the Pro-Democracy camp. The contentions between these political factions in Poland and in Hong Kong are similar to the left/right division in the United States. The same parallel approach applies to all true/false claims in the project. For instance, we used the same false claims across all countries (merely changing, for example, "Local government officials in Michigan" to "Local government officials from [a specific region in Poland/Hong Kong]"). This careful selection and adaptation of both the treatment texts and the other study materials ensures that the politicians are equally known and partisan across the countries, that the sentences have the same baseline level of plausibility, and so forth. The original true and false claims in each language can be found in Supplementary Section A.2.1, and the 'skeletons' used are provided in Supplementary Section A.1.

To increase external validity, the two Facebook posts that made up each treatment were interactive, such that participants could use the range of Facebook reactions to each post (for example, like, love or laugh) as well as comment below them. Naturally, the website did not have the functionality of resharing.

After exposure to treatment, the participants were redirected to the questionnaire measuring the core outcomes (Supplementary Section B.2). Scepticism was measured by asking the participants how accurate they thought three true statements were, to the best of their knowledge, on a four-point scale (from 1, "Not at all accurate", to 4, "Very accurate"). Misperceptions were measured using the same scale, but for two false statements. The false statements were self-fabricated (that is, made up), and both the true and the false statements were selected from a pre-tested pool of claims rated as similarly easy to read, interesting, easy to understand, likely to be true (false), (un)believable, and equally plausible among Democrats and Republicans. Trust was measured by asking the participants to report how much they trusted seven institutions (journalists, scientists, fact-checkers, traditional media, university professors, social media and the government) on a seven-point scale (from 1, "I don't trust it/them at all", to 7, "I completely trust it/them"). For each outcome, we aggregated the items to create a single measure of scepticism, misperceptions or trust.

In addition, after exposure to treatment, we presented the participants with a statement serving as a manipulation check (see Supplementary Section B.5 for the item wording). Across the samples, 62.8% of US participants, 69.9% of Polish participants and 62.1% of Hong Kong participants passed the manipulation check. There were no statistically significant differences in demographics (for example, age or education) between those who passed and those who failed (Supplementary Section C.2). Finally, the respondents were informed about the nature of the study through a debriefing (Supplementary Section B.3.2).
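Aggregating the item-level ratings into the three outcome measures is a row-wise mean. The sketch below mirrors that step with invented column names and responses; for brevity it uses two trust items rather than the seven institutions measured in the study.

```python
import pandas as pd

# Hypothetical item-level responses for three respondents.
df = pd.DataFrame({
    "true_1": [3, 2, 4], "true_2": [4, 1, 3], "true_3": [3, 2, 4],  # 4-point scale
    "false_1": [1, 3, 2], "false_2": [2, 4, 1],                     # 4-point scale
    "trust_journalists": [5, 2, 6], "trust_scientists": [6, 3, 7],  # 7-point scale
})

# One aggregate score per outcome, as described in the text. Note that
# here 'scepticism' is simply the mean accuracy rating of true items;
# the published analysis may reverse-code it so that higher = more sceptical.
df["scepticism"] = df[["true_1", "true_2", "true_3"]].mean(axis=1)
df["misperceptions"] = df[["false_1", "false_2"]].mean(axis=1)
df["trust"] = df[["trust_journalists", "trust_scientists"]].mean(axis=1)

print(df[["scepticism", "misperceptions", "trust"]])
```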

The mean level of scepticism was 2.13 (s.d. 0.68) in the United States, 3.24 (s.d. 1.4) in Poland and 2.28 (s.d. 0.52) in Hong Kong. For misperceptions, the means were 2.23 (s.d. 0.86) in the United States, 1.26 (s.d. 1.11) in Poland and 2.32 (s.d. 0.67) in Hong Kong. The mean trust level was 3.81 (s.d. 1.37) in the United States, 2.31 (s.d. 1.89) in Poland and 4.3 (s.d. 1.05) in Hong Kong. We completed a pre-registration for all analyses (https://osf.io/t3nqe). We subsequently realized that our pre-registration plan did not include a hypothesis for the effect of the media literacy treatments on misperceptions and therefore added the relevant hypothesis after data collection. All other predictions and analyses followed our pre-registration.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

The replication data, including all (stimulus) materials used in this study, are available at https://osf.io/5xc7k/.

Code availability

The replication code is available at https://osf.io/5xc7k/.

References

1. Pierri, F. et al. Online misinformation is linked to early COVID-19 vaccination hesitancy and refusal. Sci. Rep. 12, 5966 (2022).

2. Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K. & Larson, H. J. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat. Hum. Behav. 5, 337–348 (2021).

3. DiFonzo, N., Beckstead, J., Stupak, N. & Walders, K. Validity judgments of rumors heard multiple times: the shape of the truth effect. Soc. Influ. 11, 22–39 (2016).

4. Eveland, W. P. Jr & Cooper, K. An integrated model of communication influence on beliefs. Proc. Natl Acad. Sci. USA 110, 14088–14095 (2013).

5. Swire, B., Ecker, U. & Lewandowsky, S. The role of familiarity in correcting inaccurate information. J. Exp. Psychol. Learn. Mem. Cogn. 43, 1948–1961 (2017).

6. Lazer, D. M. et al. The science of fake news. Science 359, 1094–1096 (2018).

7. Graves, L. & Cherubini, F. The Rise of Fact-Checking Sites in Europe, Digital News Project Report (Reuters Institute, 2016).

8. Abramowitz, M. J. Stop the manipulation of democracy online. The New York Times https://www.nytimes.com/2017/12/11/opinion/fake-news-russia-kenya.html (11 December 2017).

9. Bor, A., Osmundsen, M., Rasmussen, S. H. R., Bechmann, A. & Petersen, M. B. 'Fact-checking' videos reduce belief in misinformation and improve the quality of news shared on Twitter. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/a7huq (2020).

10. Qian, S., Shen, C. & Zhang, J. Fighting cheapfakes: using a digital media literacy intervention to motivate reverse search of out-of-context visual misinformation. J. Comput. Mediat. Commun. 28, zmac024 (2023).

11. Hoes, E., Clemm, B., Gessler, T., Qian, S. & Wojcieszak, M. Elusive effects of misinformation and the media's attention to it. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/4m92p (2022).

12. Brashier, N. M. & Schacter, D. L. Aging in an era of fake news. Curr. Dir. Psychol. Sci. 29, 316–323 (2020).

13. Zhang, J., Featherstone, J. D., Calabrese, C. & Wojcieszak, M. Effects of fact-checking social media vaccine misinformation on attitudes toward vaccines. Prev. Med. 145, 106408 (2021).

14. Clayton, K. et al. Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit. Behav. 42, 1073–1095 (2020).

15. Freeze, M. et al. Fake claims of fake news: political misinformation, warnings, and the tainted truth effect. Polit. Behav. 43, 1433–1465 (2021).

16. Hameleers, M. Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands. Inf. Commun. Soc. 25, 110–126 (2022).

17. Prike, T., Blackley, P., Swire-Thompson, B. & Ecker, U. K. Examining the replicability of backfire effects after standalone corrections. Cogn. Res. Princ. Implic. 8, 39 (2023).

18. Wittenberg, C., Zong, J. & Rand, D. The (minimal) persuasive advantage of political video over text. Proc. Natl Acad. Sci. USA 118, e2114388118 (2021).

19. Guess, A. M. et al. A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proc. Natl Acad. Sci. USA 117, 15536–15545 (2020).

20. van der Meer, T. G., Hameleers, M. & Ohme, J. Can fighting misinformation have a negative spillover effect? How warnings for the threat of misinformation can decrease general news credibility. Journal. Stud. 24, 803–823 (2023).

21. Humprecht, E. Where 'fake news' flourishes: a comparison across four western democracies. Inf. Commun. Soc. 22, 1973–1988 (2019).

22. Bigsby, E., Bigman, C. A. & Martinez Gonzalez, A. Exemplification theory: a review and meta-analysis of exemplar messages. Ann. Int. Commun. Assoc. 43, 273–296 (2019).

23. Busselle, R. W. & Shrum, L. Media exposure and exemplar accessibility. Media Psychol. 5, 255–282 (2003).

24. Nyhan, B. & Reifler, J. When corrections fail: the persistence of political misperceptions. Polit. Behav. 32, 303–330 (2010).

25. Swire, B., Berinsky, A. J., Lewandowsky, S. & Ecker, U. K. Processing political misinformation: comprehending the Trump phenomenon. R. Soc. Open Sci. 4, 160802 (2017).

26. Zajonc, R. B. Attitudinal effects of mere exposure. J. Pers. Soc. Psychol. 9, 1–27 (1968).

27. Pennycook, G. & Rand, D. G. Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188, 39–50 (2019).

28. Pennycook, G. & Rand, D. G. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J. Pers. 88, 185–200 (2020).

29. Chaiken, S. & Trope, Y. Dual-Process Theories in Social Psychology (Guilford, 1999).

30. Altay, S., Berriche, M. & Acerbi, A. Misinformation on misinformation: conceptual and methodological challenges. Soc. Media Soc. 9, 20563051221150412 (2023).

31. Aslett, K., Guess, A. M., Bonneau, R., Nagler, J. & Tucker, J. A. News credibility labels have limited average effects on news diet quality and fail to reduce misperceptions. Sci. Adv. 8, eabl3844 (2022).

32. Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B. & Lazer, D. Fake news on Twitter during the 2016 US presidential election. Science 363, 374–378 (2019).

33. Guess, A. et al. 'Fake news' may have limited effects beyond increasing beliefs in false claims. Harv. Kennedy Sch. Misinformation Rev. https://doi.org/10.37016/mr-2020-004 (2020).

34. Guess, A., Nagler, J. & Tucker, J. Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci. Adv. 5, eaau4586 (2019).

35. Guess, A. M., Nyhan, B. & Reifler, J. Exposure to untrustworthy websites in the 2016 US election. Nat. Hum. Behav. 4, 472–480 (2020).

36. Weeks, B. E., Menchen-Trevino, E., Calabrese, C., Casas, A. & Wojcieszak, M. Partisan media, untrustworthy news sites, and political misperceptions. New Media Soc. https://doi.org/10.1177/14614448211033300 (2021).

37. Patterson, T. E. & Donsbach, W. News decisions: journalists as partisan actors. Polit. Commun. 13, 455–468 (1996).

38. Shoemaker, P. J., Vos, T. P. & Reese, S. D. in The Handbook of Journalism Studies (eds Wahl-Jorgensen, K. & Hanitzsch, T.) Ch. 6 (Routledge, 2009).

39. Watts, D. J., Rothschild, D. M. & Mobius, M. Measuring the news and its impact on democracy. Proc. Natl Acad. Sci. USA 118, e1912443118 (2021).

40. Pennycook, G. et al. Shifting attention to accuracy can reduce misinformation online. Nature 592, 590–595 (2021).

41. Hameleers, M. The (un)intended consequences of emphasizing the threats of mis- and disinformation. Media Commun. 11, 5–14 (2023).

42. Acerbi, A., Altay, S. & Mercier, H. Research note: fighting misinformation or fighting for information? Misinformation Rev. https://doi.org/10.37016/mr-2020-87 (2022).

43. Kajimoto, M. in The Palgrave Handbook of Media Misinformation (eds Fowler-Watt, K. & McDougal, J.) 121–137 (Springer, 2022).

44. Faul, F., Erdfelder, E., Buchner, A. & Lang, A.-G. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav. Res. Methods 41, 1149–1160 (2009).


Acknowledgements

We thank Y. Cheng and C. Y. Song from Hong Kong Baptist University for their help with translation and data collection in Hong Kong. We acknowledge the support of the following funding sources: Facebook/Meta (Foundational Integrity & Impact Research: Misinformation and Polarization; principal investigator, M.W.; co-principal investigator, E.H.) and the European Research Council, ‘Europeans exposed to dissimilar views in the media: investigating backfire effects’ Proposal EXPO-756301 (ERC Starting Grant; principal investigator, M.W.). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Facebook/Meta or the European Research Council. The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Open access funding provided by University of Zurich.

Author information

Authors and affiliations

Department of Political Science, University of Zurich, Zurich, Switzerland

Emma Hoes

Huron Consulting Group, Chicago, IL, USA

Brian Aitken

Department of Communication, University of California, Davis, Davis, CA, USA

Jingwen Zhang & Magdalena Wojcieszak

Department of Communication and Public Relations, University of Warsaw, Warsaw, Poland

Tomasz Gackowski


Contributions

Conceptualization: E.H. and M.W. Analyses: E.H. and B.A. Investigation: E.H., B.A., J.Z., M.W. and T.G. Visualization: B.A. and E.H. Writing—original draft: E.H. and M.W. Writing—review and editing: E.H., B.A., J.Z. and M.W.

Corresponding author

Correspondence to Emma Hoes.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Human Behaviour thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information


Supplementary Figs. 1–27 and Tables 1–91.

Reporting Summary

Peer review file

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Hoes, E., Aitken, B., Zhang, J. et al. Prominent misinformation interventions reduce misperceptions but increase scepticism. Nat Hum Behav (2024). https://doi.org/10.1038/s41562-024-01884-x


Received: 26 May 2023

Accepted: 10 April 2024

Published: 10 June 2024

DOI: https://doi.org/10.1038/s41562-024-01884-x


