Understanding the Interpretation of Results in Research
Interpreting results correctly is crucial to obtaining valuable findings from research. Learn how to achieve a sound interpretation here!
Research is a powerful tool for gaining insights into the world around us. Whether in academia, industry, or the public sector, research studies can inform decision-making, drive innovation, and improve our understanding of complex phenomena. However, the value of research lies not only in the data collected but also in the interpretation of results. Properly interpreting research findings is critical to extracting meaningful insights, drawing accurate conclusions, and informing future research directions.
In this Mind the Graph article, you’ll understand the basic concept of interpretation of results in research. The article will go over the right procedure for checking, cleaning, and editing your data as well as how to organize it effectively to aid interpretation.
What is the interpretation of results in research?
The process of interpreting and making meaning of the data produced in a research study is known as research result interpretation. It entails examining the data’s patterns, trends, and correlations in order to develop reliable findings and draw meaningful conclusions.
Interpretation is a crucial step in the research process, as it helps researchers determine the relevance of their results, relate them to existing knowledge, and shape subsequent research goals. A thorough interpretation of results helps ensure that the findings are valid and reliable and that they contribute to the development of knowledge in an area of study.
The interpretation of results in research requires multiple steps, including checking, cleaning, and editing data to ensure its accuracy, and properly organizing it in order to simplify interpretation. To examine data and derive reliable findings, researchers must employ suitable statistical methods. They must additionally consider the larger ramifications of their results and how they apply to everyday scenarios.
It’s crucial to keep in mind that reaching accurate conclusions and generating meaningful inferences is an iterative process that requires thorough investigation.
The process of checking, cleaning, and editing data
The process of data checking, cleaning, and editing may be separated into three stages: screening, diagnostic, and treatment. Each stage has a distinct goal and set of tasks to verify the data’s accuracy and reliability.
Screening phase
The screening phase consists of an initial inspection of the data to find any errors or anomalies. Running basic descriptive statistics, reviewing data distributions, and identifying missing values may all be part of this. This phase’s goal is to discover any concerns with the data that need to be investigated further.
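As a minimal sketch of what screening might look like in practice, the following Python snippet runs basic descriptive statistics and counts missing values; the file name and columns are hypothetical placeholders, not part of the original article.

```python
# Screening sketch (hypothetical file and columns): quick checks for errors and anomalies.
import pandas as pd

df = pd.read_csv("study_data.csv")      # hypothetical dataset

print(df.describe(include="all"))       # basic descriptive statistics for every column
print(df.isna().sum())                  # count of missing values per column
print(df.dtypes)                        # confirm each variable has the expected type
```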
Diagnostic phase
The diagnostic phase entails a more extensive review of the data to identify particular concerns that must be addressed. Identifying outliers, investigating relationships between variables, and spotting abnormalities in the data are all examples of this. This phase’s goal is to identify any problems with the data and propose suitable treatment options.
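A hedged sketch of the diagnostic phase is shown below; the 1.5 × IQR rule and the column names (outcome_score, age) are illustrative assumptions rather than the article’s prescription.

```python
# Diagnostic sketch (hypothetical columns): flag outliers and inspect relationships.
import pandas as pd

df = pd.read_csv("study_data.csv")      # hypothetical dataset

# Flag potential outliers with the 1.5 * IQR rule
q1, q3 = df["outcome_score"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["outcome_score"] < q1 - 1.5 * iqr) |
              (df["outcome_score"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} potential outliers flagged for review")

# Investigate relationships between numeric variables
print(df[["age", "outcome_score"]].corr())
```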
Treatment phase
The treatment phase entails taking action to resolve any difficulties found during the diagnostic phase. This may involve eliminating outliers, filling in missing values, transforming data, and editing data. This phase’s goal is to guarantee that the data is reliable, precise, and in the appropriate format for analysis.
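One possible treatment sketch, under the same hypothetical column names, imputes missing values, drops the flagged outliers, and transforms a skewed variable. The specific choices here (median imputation, IQR fences, log transform) are illustrative only; any such step should be documented and justified in the study’s methods.

```python
# Treatment sketch (hypothetical columns): impute, remove outliers, and transform.
import numpy as np
import pandas as pd

df = pd.read_csv("study_data.csv")      # hypothetical dataset

# Fill missing numeric values with the column median
df["outcome_score"] = df["outcome_score"].fillna(df["outcome_score"].median())

# Drop rows outside the IQR fences identified in the diagnostic phase
q1, q3 = df["outcome_score"].quantile([0.25, 0.75])
iqr = q3 - q1
df = df[df["outcome_score"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

# Log-transform a right-skewed variable before analysis
df["log_cost"] = np.log1p(df["cost"])
```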
By using a structured approach to data checking, cleaning, and editing, researchers can help ensure that their data is of high quality and suitable for analysis.
How to organize data display and description?
Organizing data display and description is another critical stage in the process of analyzing study results. The format in which data is presented has a significant influence on how easily it can be comprehended and interpreted. The following are some best practices for organizing data display and description.
Best practices for qualitative data include the following:
- Use quotes and anecdotes: Use quotes and anecdotes from participants to illustrate key themes and patterns in the data.
- Group similar responses: Group similar replies together to identify major themes and patterns in the data (see the sketch after this list).
- Use tables: Use tables to arrange and summarize major themes, categories, or subcategories revealed by the data.
- Use figures: Figures, such as charts or graphs, may help you visualize data and spot patterns or trends.
- Provide context: Explain the research project’s topic or hypothesis being examined, as well as any important background information, before presenting the findings.
- Use simple and direct language: Describe the data being presented in clear and succinct language.
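As a small illustration of grouping similar responses and summarizing them in a table, the sketch below tallies coded themes; the theme labels and counts are invented for the example.

```python
# Sketch: frequency table of coded qualitative themes (all labels are hypothetical).
import pandas as pd

coded_responses = pd.Series([
    "access to care", "cost", "access to care", "trust in provider",
    "cost", "access to care", "waiting times",
])

theme_table = (coded_responses.value_counts()
               .rename_axis("theme")
               .reset_index(name="n"))
print(theme_table)   # themes ordered from most to least frequent
```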
Best practices for quantitative data include the following:
- Use relevant charts and graphs: Select the right chart or graph for the data being presented. A bar chart, for example, could be ideal for categorical data, but a scatter plot might be appropriate for continuous data.
- Label the axes and include a legend: Label the axes of the chart or graph and include a legend to explain any symbols or colors used. This makes it easier for readers to comprehend the information offered.
- Provide context: Give context to the data being presented. This may include a brief summary of the research question or hypothesis under consideration, as well as any pertinent background information.
- Use clear and succinct language: Describe the data being presented in clear and concise language. Avoid technical jargon or complex wording that readers may find difficult to grasp.
- Highlight significant findings: Highlight noteworthy findings in the provided data. Identifying any trends, patterns, or substantial disparities across groups is one example.
- Create a summary table: Provide a summary table for the data being presented; key statistics such as means, medians, and standard deviations may be included (see the sketch after this list).
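The sketch below shows one way to pair a labeled bar chart with a small summary table for two groups; the group names, means, and standard deviations are simulated values, not data from the article.

```python
# Sketch: labeled bar chart plus summary table for two hypothetical groups.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["Group A"] * 50 + ["Group B"] * 50,
    "score": np.concatenate([rng.normal(5.0, 1.5, 50), rng.normal(3.0, 1.0, 50)]),
})

# Summary table: mean, median, and standard deviation per group
summary = df.groupby("group")["score"].agg(["mean", "median", "std"]).round(2)
print(summary)

# Bar chart of group means with labeled axes and a legend
fig, ax = plt.subplots()
ax.bar(summary.index, summary["mean"], label="Mean score")
ax.set_xlabel("Group")
ax.set_ylabel("Mean score")
ax.legend()
plt.show()
```

Printing the summary table alongside the chart keeps the exact figures available to readers who want more than the visual comparison.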
3 Tips for interpretation of results in research
Here are some key tips to keep in mind when interpreting research results:
- Keep your research question in mind: The most important piece of advice for interpreting the results is to keep your research question in mind. Your interpretation should be centered on addressing your research question, and all of your analysis should be directed in that direction.
- Consider alternate explanations: It’s critical to think about alternative explanations for your results. Ask yourself whether any other factors might be influencing your findings, and assess them carefully. This helps ensure that your interpretation is based on the evidence and not on assumptions or biases.
- Contextualize the results: Put the results into perspective by comparing them to past research on the topic at hand. This can help identify trends, patterns, or discrepancies that you might otherwise have missed, and it provides a foundation for subsequent research.
By following these three tips, you can help ensure that your interpretation of the data is accurate, useful, and relevant to your research question and the larger context of your field of research.
Professional and custom designs for your publications
Mind the Graph is a sophisticated tool that provides professional and customizable designs for research publications. Enhance the visual impact of your research with eye-catching illustrations, charts, and graphs. With Mind the Graph, you can easily generate visually appealing and informative publications that captivate your audience and effectively communicate your research findings.
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.
Hypothesis Testing, P Values, Confidence Intervals, and Significance
Jacob Shreffler; Martin R. Huecker
Last Update: March 13, 2023.
- Definition/Introduction
Medical providers often rely on evidence-based medicine to guide decision-making in practice. Often a research hypothesis is tested with results provided, typically with p values, confidence intervals, or both. Additionally, statistical or research significance is estimated or determined by the investigators. Unfortunately, healthcare providers may have different comfort levels in interpreting these findings, which may affect the adequate application of the data.
- Issues of Concern
Without a foundational understanding of hypothesis testing, p values, confidence intervals, and the difference between statistical and clinical significance, healthcare providers may struggle to make clinical decisions without relying purely on the level of significance deemed appropriate by the research investigators. Therefore, an overview of these concepts is provided to allow medical professionals to use their expertise to determine whether results are reported sufficiently and whether the study outcomes are clinically appropriate to be applied in healthcare practice.
Hypothesis Testing
Investigators conducting studies need research questions and hypotheses to guide analyses. Starting with broad research questions (RQs), investigators then identify a gap in current clinical practice or research. Any research problem or statement is grounded in a better understanding of relationships between two or more variables. For this article, we will use the following research question example:
Research Question: Is Drug 23 an effective treatment for Disease A?
Research questions do not directly imply specific guesses or predictions; we must formulate research hypotheses. A hypothesis is a predetermined declaration regarding the research question in which the investigator(s) makes a precise, educated guess about a study outcome. This is sometimes called the alternative hypothesis and ultimately allows the researcher to take a stance based on experience or insight from medical literature. An example of a hypothesis is below.
Research Hypothesis: Drug 23 will significantly reduce symptoms associated with Disease A compared to Drug 22.
The null hypothesis states that there is no statistical difference between groups based on the stated research hypothesis.
An example of a null hypothesis for the research question above is that there is no statistically significant difference in the reduction of symptoms for Disease A between Drug 23 and Drug 22. The null hypothesis is deemed true until a study presents significant data to support rejecting it. Based on the results, the investigators will either reject the null hypothesis (if they find significant differences or associations) or fail to reject the null hypothesis (if they cannot demonstrate significant differences or associations). Note that as the number of individuals enrolled in a study (the sample size) increases, the likelihood of finding a statistically significant effect increases; with very large sample sizes, even small differences can yield very low p values. Researchers should also be aware of journal recommendations when considering how to report p values, and manuscripts should remain internally consistent.
To test a hypothesis, researchers obtain data on a representative sample to determine whether to reject or fail to reject a null hypothesis. In most research studies, it is not feasible to obtain data for an entire population. Using a sampling procedure allows for statistical inference, though this involves a certain possibility of error. [1] When determining whether to reject or fail to reject the null hypothesis, mistakes can be made: Type I and Type II errors. Though it is impossible to ensure that these errors have not occurred, researchers should limit the possibilities of these faults. [2]
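To make the trade-off between Type I and Type II errors concrete, the hedged sketch below uses statsmodels' power calculations to estimate the sample size needed to hold the Type I error rate (alpha) at the conventional 5% and the Type II error rate (beta) at 20%; the effect size of 0.5 and these conventional thresholds are assumptions for illustration, not values from the chapter.

```python
# Sketch: linking Type I error (alpha), Type II error (beta = 1 - power), and sample size.
# The assumed effect size (0.5, a medium standardized difference) is illustrative only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"About {n_per_group:.0f} participants per group are needed "
      f"to keep alpha at 5% and beta at 20% for this effect size.")
```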
Significance
Significance is a term to describe the substantive importance of medical research. Statistical significance is the likelihood of results due to chance. [3] Healthcare providers should always delineate statistical significance from clinical significance, a common error when reviewing biomedical research. [4] When conceptualizing findings reported as either significant or not significant, healthcare providers should not simply accept researchers' results or conclusions without considering the clinical significance. Healthcare professionals should consider the clinical importance of findings and understand both p values and confidence intervals so they do not have to rely on the researchers to determine the level of significance. [5] One criterion often used to determine statistical significance is the utilization of p values.
P values are used in research to determine whether the sample estimate is significantly different from a hypothesized value. The p value is the probability that the observed effect within the study would have occurred by chance if, in reality, there was no true effect. Conventionally, data yielding a p<0.05 or p<0.01 is considered statistically significant. While some have debated that the 0.05 level should be lowered, it is still widely practiced. [6] Hypothesis testing alone, however, does not tell us the size of the effect.
Examples of findings reported with p values are below:
Statement: Drug 23 reduced patients' symptoms compared to Drug 22. Patients who received Drug 23 (n = 100) were 2.1 times less likely than patients who received Drug 22 (n = 100) to experience symptoms of Disease A, p<0.05.
Statement: Individuals who were prescribed Drug 23 experienced fewer symptoms (M = 1.3, SD = 0.7) compared to individuals who were prescribed Drug 22 (M = 5.3, SD = 1.9). This finding was statistically significant, p = 0.02.
For either statement, if the threshold had been set at 0.05, the null hypothesis (that there was no relationship) should be rejected, and we should conclude that there are significant differences. Noticeably, as can be seen in the two statements above, some researchers report findings with < or >, while others provide an exact p value (e.g., 0.000001), but never zero. [6] When examining research, readers should understand how p values are reported. The best practice is to report p values for all variables within a study design, rather than only providing p values for variables with significant findings. [7] The inclusion of all p values provides evidence for study validity and limits suspicion of selective reporting/data mining.
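As a hedged illustration of how such a p value might be computed, the sketch below runs a Welch two-sample t-test on simulated symptom scores with roughly the means and standard deviations quoted in the second statement; the data are invented, not from any actual trial.

```python
# Sketch: p value for a two-group comparison using a Welch t-test on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
drug_23 = rng.normal(1.3, 0.7, 100)    # hypothetical symptom scores, Drug 23
drug_22 = rng.normal(5.3, 1.9, 100)    # hypothetical symptom scores, Drug 22

t_stat, p_value = stats.ttest_ind(drug_23, drug_22, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")   # report the exact p value, not just p<0.05
```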
While researchers have historically used p values, experts who find p values problematic encourage the use of confidence intervals. [8] P-values alone do not allow us to understand the size or the extent of the differences or associations. [3] In March 2016, the American Statistical Association (ASA) released a statement on p values, noting that scientific decision-making and conclusions should not be based on a fixed p-value threshold (e.g., 0.05). They recommend focusing on the significance of results in the context of study design, quality of measurements, and validity of data. Ultimately, the ASA statement noted that in isolation, a p-value does not provide strong evidence. [9]
When conceptualizing clinical work, healthcare professionals should consider p values alongside a concurrent appraisal of study design validity. For example, a p value from a double-blinded randomized clinical trial (designed to minimize bias) should be weighted more heavily than one from a retrospective observational study. [7] The p-value debate has smoldered since the 1950s, [10] and replacement with confidence intervals has been suggested since the 1980s. [11]
Confidence Intervals
A confidence interval provides a range of values, at a given level of confidence (e.g., 95%), intended to contain the true value of the statistical parameter in the target population. [12] Most research uses a 95% CI, but investigators can set any level (e.g., 90% CI, 99% CI). [13] A CI provides the lower and upper bounds of a range of differences or associations that would be plausible for the population. [14] Therefore, a 95% CI indicates that if the study were carried out 100 times, the range would contain the true value in 95 of them. [15] Compared with p values, confidence intervals provide more evidence regarding the precision of an estimate. [6]
In consideration of the similar research example provided above, one could make the following statement with 95% CI:
Statement: Individuals who were prescribed Drug 23 had no symptoms after three days, which was significantly faster than those prescribed Drug 22; the mean difference between the two groups in days to recovery was 4.2 days (95% CI: 1.9 – 7.8).
It is important to note that the width of the CI is affected by the standard error and the sample size; reducing the sample size will reduce the precision of the CI (increase its width). [14] A larger width indicates a smaller sample size or larger variability. [16] A researcher generally wants to increase the precision of the CI. For example, a 95% CI of 1.43 – 1.47 is much more precise than the one provided in the example above. In research and clinical practice, CIs provide valuable information on whether the interval includes or excludes clinically significant values. [14]
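The relationship between sample size and CI width can be made concrete with a small sketch; the assumed standard deviation of 2.0 is arbitrary and only serves to show how the half-width shrinks as n grows.

```python
# Sketch: the 95% CI for a mean narrows as the sample size increases (assumed SD = 2.0).
import numpy as np
from scipy import stats

sd = 2.0
for n in (25, 100, 400):
    se = sd / np.sqrt(n)                            # standard error of the mean
    half_width = stats.t.ppf(0.975, df=n - 1) * se  # t critical value times the standard error
    print(f"n = {n:>3}: 95% CI half-width ≈ {half_width:.2f}")
```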
A CI is often judged against the null value (zero for differences and 1 for ratios). However, CIs provide more information than that. [15] Consider this example: a hospital implements a new protocol that reduces wait time for patients in the emergency department by an average of 25 minutes (95% CI: -2.5 – 41 minutes). Because the range crosses zero, implementing this protocol in different populations could result in longer wait times; however, most of the range lies on the positive side. Thus, while the p value used to detect statistical significance here may yield a "not significant" finding, individuals should examine this range, consider the study design, and weigh whether or not it is still worth piloting in their workplace.
Similarly to p-values, 95% CIs cannot control for researchers' errors (e.g., study bias or improper data analysis). [14] In consideration of whether to report p-values or CIs, researchers should examine journal preferences. When in doubt, reporting both may be beneficial. [13] An example is below:
Reporting both: Individuals who were prescribed Drug 23 had no symptoms after three days, which was significantly faster than those prescribed Drug 22, p = 0.009. The mean difference between the two groups in days to recovery was 4.2 days (95% CI: 1.9 – 7.8).
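A hedged sketch of how both figures might be produced is shown below; the recovery times are simulated to roughly echo the example, and the 95% CI is built by hand from the Welch standard error and degrees of freedom.

```python
# Sketch: reporting both a p value and a 95% CI for a difference in mean recovery time.
# The simulated recovery times are illustrative, not data from a real trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
recovery_22 = rng.normal(7.2, 3.0, 60)   # hypothetical days to recovery, Drug 22
recovery_23 = rng.normal(3.0, 2.0, 60)   # hypothetical days to recovery, Drug 23

# p value from a Welch two-sample t-test
t_stat, p_value = stats.ttest_ind(recovery_22, recovery_23, equal_var=False)

# 95% CI for the mean difference, from the Welch standard error and degrees of freedom
diff = recovery_22.mean() - recovery_23.mean()
v22 = recovery_22.var(ddof=1) / len(recovery_22)
v23 = recovery_23.var(ddof=1) / len(recovery_23)
se = np.sqrt(v22 + v23)
df_welch = (v22 + v23) ** 2 / (v22 ** 2 / (len(recovery_22) - 1)
                               + v23 ** 2 / (len(recovery_23) - 1))
half_width = stats.t.ppf(0.975, df=df_welch) * se

print(f"Mean difference = {diff:.1f} days "
      f"(95% CI: {diff - half_width:.1f} – {diff + half_width:.1f}), p = {p_value:.3g}")
```

Reporting the interval alongside the p value conveys both the precision of the estimate and the strength of the evidence against the null hypothesis.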
- Clinical Significance
Recall that clinical significance and statistical significance are two different concepts. Healthcare providers should remember that a study with statistically significant differences and large sample size may be of no interest to clinicians, whereas a study with smaller sample size and statistically non-significant results could impact clinical practice. [14] Additionally, as previously mentioned, a non-significant finding may reflect the study design itself rather than relationships between variables.
Healthcare providers using evidence-based medicine to inform practice should use clinical judgment to determine the practical importance of studies through careful evaluation of the design, sample size, power, likelihood of type I and type II errors, data analysis, and reporting of statistical findings (p values, 95% CI or both). [4] Interestingly, some experts have called for "statistically significant" or "not significant" to be excluded from work as statistical significance never has and will never be equivalent to clinical significance. [17]
The decision on what is clinically significant can be challenging and depends on the provider's experience and, especially, the severity of the disease. Providers should use their knowledge and experience to determine the meaningfulness of study results and make inferences based not only on whether researchers report results as significant or non-significant, but also on their own understanding of study limitations and practical implications.
- Nursing, Allied Health, and Interprofessional Team Interventions
All physicians, nurses, pharmacists, and other healthcare professionals should strive to understand the concepts in this chapter. These individuals should maintain the ability to review and incorporate new literature for evidence-based and safe care.
Disclosure: Jacob Shreffler declares no relevant financial relationships with ineligible companies.
Disclosure: Martin Huecker declares no relevant financial relationships with ineligible companies.
This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.
Cochrane Interactive Learning
Module 7: Interpreting the findings
About this module
Part of the Cochrane Interactive Learning course on Conducting an Intervention Review, this module explains how to interpret the findings of your review. In this module you will learn how to interpret the results of your statistical analysis, assess reporting bias, and use the GRADE approach to judge and report on the certainty of the evidence.
90-120 minutes
What you can expect to learn (learning outcomes)
This module will teach you to:
- Understand confidence intervals in the interpretation of results of meta-analysis
- Identify ways of re-expressing the standardized mean difference
- Interpret a funnel plot asymmetry
- Determine the certainty of the evidence
- Decide on rating up a body of evidence
Authors, contributors, and how to cite this module
Module 7 has been written and compiled by:
Dario Sambunjak, Miranda Cumpston and Chris Watts, Cochrane Central Executive Team.
Matthew J. Page, School of Social and Community Medicine, University of Bristol, UK, and School of Public Health and Preventive Medicine, Monash University, Australia.
Nancy Santesso, Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Canada.
This module should be cited as: Sambunjak D, Cumpston M, Watts C, Page MJ, Santesso N. Module 7: Interpreting the findings. In: Cochrane Interactive Learning: Conducting an intervention review. Cochrane, 2017. Available from https://training.cochrane.org/interactivelearning/module-7-interpreting-findings .
Update and feedback
The module was last updated in September 2022.