Principles of Experimental Design

  • First Online: 16 April 2021

  • Hans-Michael Kaltenbach

Part of the book series: Statistics for Biology and Health ((SBH))


We introduce the statistical design of experiments and put the topic into the larger context of scientific experimentation. We give a non-technical discussion of some key ideas of experimental design, including the role of randomization, replication, and the basic idea of blocking for increasing precision and power. We also take a more high-level view and consider the construct, internal, and external validity of an experiment, and the corresponding tools that experimental design offers to achieve them.



Author information

Department of Biosystems Science and Engineering, ETH Zürich, Basel, Switzerland

Hans-Michael Kaltenbach

Correspondence to Hans-Michael Kaltenbach.


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Kaltenbach, HM. (2021). Principles of Experimental Design. In: Statistical Design and Analysis of Biological Experiments. Statistics for Biology and Health. Springer, Cham. https://doi.org/10.1007/978-3-030-69641-2_1

DOI: https://doi.org/10.1007/978-3-030-69641-2_1

Published: 16 April 2021

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-69640-5

Online ISBN: 978-3-030-69641-2

eBook Packages: Mathematics and Statistics (R0)


Guide to Experimental Design | Overview, 5 steps & Examples

Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. Doing so minimizes several types of research bias, particularly sampling bias, survivorship bias, and attrition bias. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

Step 1: Define your variables

You should begin with a specific research question. We will work with two research question examples, one from the health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

  • Phone use and sleep. Independent variable: minutes of phone use before sleep. Dependent variable: hours of sleep per night.
  • Temperature and soil respiration. Independent variable: air temperature just above the soil surface. Dependent variable: CO2 respired from soil.

Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.

  • Phone use and sleep. Extraneous variable: differences in sleep patterns among individuals. How to control: measure the average difference between sleep with phone use and sleep without phone use rather than the average amount of sleep per treatment group.
  • Temperature and soil respiration. Extraneous variable: soil moisture also affects respiration, and moisture can decrease with increasing temperature. How to control: monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Step 2: Write your hypothesis

Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

  • Phone use and sleep. Null hypothesis (H0): phone use before sleep does not correlate with the amount of sleep a person gets. Alternate hypothesis (Ha): increasing phone use before sleep leads to a decrease in sleep.
  • Temperature and soil respiration. Null hypothesis (H0): air temperature does not correlate with soil respiration. Alternate hypothesis (Ha): increased air temperature leads to increased soil respiration.
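
To make the null/alternative pair concrete, here is a minimal sketch of how the phone-use hypothesis might eventually be tested against data. The numbers are invented for illustration, and a rough two-sample z statistic stands in for the t-test you would normally use:

```python
import statistics

# Hypothetical nightly hours of sleep for two groups (illustrative numbers only).
no_phone = [7.9, 8.2, 7.5, 8.0, 7.8, 8.4, 7.6, 8.1]
high_phone = [6.8, 7.1, 6.5, 7.4, 6.9, 7.0, 6.6, 7.2]

def two_sample_z(a, b):
    """Approximate two-sample z statistic for the difference in means."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

z = two_sample_z(no_phone, high_phone)
reject_null = abs(z) > 1.96  # two-sided test at roughly the 5% level
```

With samples this small a t-test would be the usual choice; the z approximation is used here only to keep the sketch dependency-free.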

Step 3: Design your experimental treatments

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. In the soil respiration experiment, for example, you could increase air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. In the phone use experiment, for example, you could treat phone use as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

Step 4: Assign your subjects to treatment groups

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.
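
Power can be explored before any data are collected. The sketch below is an illustration under stated assumptions (normally distributed outcomes with unit variance, a two-sided z-test at the 5% level; the function name is invented), estimating power by Monte Carlo simulation:

```python
import random
import statistics

def simulated_power(effect_size, n_per_group, sims=2000):
    """Estimate power by simulation: draw two normal samples per run and
    count how often a two-sided z-test at the 5% level rejects the null."""
    rejections = 0
    for _ in range(sims):
        control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [random.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        # standard error of the difference in group means
        se = ((statistics.variance(control) + statistics.variance(treated))
              / n_per_group) ** 0.5
        z = (statistics.mean(treated) - statistics.mean(control)) / se
        if abs(z) > 1.96:
            rejections += 1
    return rejections / sims
```

For a standardized effect of 0.5, roughly 64 subjects per group give the conventional 80% power; increasing n_per_group raises the estimate accordingly.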

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs. a randomized block design.
  • A between-subjects design vs. a within-subjects design.

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design, every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.
Completely randomized design vs. randomized block design:

  • Phone use and sleep. Completely randomized: subjects are all randomly assigned a level of phone use using a random number generator. Randomized block: subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
  • Temperature and soil respiration. Completely randomized: warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. Randomized block: soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
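
The two designs above can be sketched in a few lines of Python. The subject labels and the age-style blocking split are invented for illustration:

```python
import random

subjects = [f"subject_{i}" for i in range(12)]
treatments = ["none", "low", "high"]

# Completely randomized design: shuffle all subjects, then deal them
# round-robin into equally sized treatment groups.
random.shuffle(subjects)
crd = {t: subjects[i::len(treatments)] for i, t in enumerate(treatments)}

# Randomized block design: first group subjects by a shared characteristic
# (here an invented "younger"/"older" split), then randomize treatments
# within each block.
blocks = {"younger": subjects[:6], "older": subjects[6:]}
rbd = {}
for name, members in blocks.items():
    shuffled = list(members)
    random.shuffle(shuffled)
    rbd[name] = {t: shuffled[i::len(treatments)] for i, t in enumerate(treatments)}
```

The round-robin deal guarantees equal group sizes, which a per-subject coin flip would not.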

Sometimes randomization isn’t practical or ethical, so researchers create partially random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.
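
Counterbalancing can be generated mechanically. The sketch below (the function name is invented) cycles subjects through every possible treatment order:

```python
import random
from itertools import permutations

def counterbalanced_orders(subjects, treatments):
    """Assign each subject an order of treatments, cycling through all
    permutations so every order is used (nearly) equally often."""
    orders = list(permutations(treatments))
    random.shuffle(orders)  # randomize which subject gets which order
    return {s: orders[i % len(orders)] for i, s in enumerate(subjects)}
```

With 3 treatments there are 6 possible orders, so groups of 6 subjects balance exactly; a Latin square is a common lighter-weight alternative when the number of treatments grows.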

Between-subjects (independent measures) vs. within-subjects (repeated measures) design:

  • Phone use and sleep. Between-subjects: subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. Within-subjects: subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomized.
  • Temperature and soil respiration. Between-subjects: warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. Within-subjects: every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomized.


Step 5: Measure your dependent variable

Finally, you need to decide how you’ll collect data on your dependent variable. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations. To measure hours of sleep, for example, you could:

  • ask participants to record what time they go to sleep and get up each day.
  • ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Frequently asked questions about experiments

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.


Bevans, R. (2023, June 21). Guide to Experimental Design | Overview, 5 steps & Examples. Scribbr. Retrieved July 10, 2024, from https://www.scribbr.com/methodology/experimental-design/


Chapter 13 Experimental Research How to Design and Evaluate Research in Education 8th

In Part 4, we begin a more detailed discussion of some of the methodologies that educational researchers use. We concentrate here on quantitative research, with a separate chapter devoted to group-comparison experimental research, single-subject experimental research, correlational research, causal-comparative research, and survey research. In each chapter, we not only discuss the method in some detail, but we also provide examples of published studies in which the researchers used one of these methods. We conclude each chapter with an analysis of a particular study's strengths and weaknesses.

  • Open access
  • Published: 03 July 2024

The impact of evidence-based nursing leadership in healthcare settings: a mixed methods systematic review

  • Maritta Välimäki 1,2,
  • Shuang Hu 3,
  • Tella Lantta 1,
  • Kirsi Hipp 1,4,
  • Jaakko Varpula 1,
  • Jiarui Chen 3,
  • Gaoming Liu 5,
  • Yao Tang 3,
  • Wenjun Chen 3 &
  • Xianhong Li 3

BMC Nursing, volume 23, Article number: 452 (2024)


Background

The central component in impactful healthcare decisions is evidence. Understanding how nurse leaders use evidence in their own managerial decision making is still limited. This mixed methods systematic review aimed to examine how evidence is used to solve leadership problems and to describe the measured and perceived effects of evidence-based leadership on nurse leaders and their performance, organizational, and clinical outcomes.

Methods

We included articles using any type of research design. Nurse leaders were defined as nurses, nurse managers, or other nursing staff working in a healthcare context who attempt to influence the behavior of individuals or a group in an organization using an evidence-based approach. Seven databases were searched until 11 November 2021. The JBI Critical Appraisal Checklist for Quasi-experimental Studies, the JBI Critical Appraisal Checklist for Case Series, and the Mixed Methods Appraisal Tool were used to evaluate the risk of bias in quasi-experimental studies, case series, and mixed methods studies, respectively. The JBI approach to mixed methods systematic reviews was followed, and a parallel-results convergent approach to synthesis and integration was adopted.

Results

Thirty-one publications were eligible for the analysis: case series (n = 27), mixed methods studies (n = 3), and quasi-experimental studies (n = 1). All studies were included regardless of methodological quality. Leadership problems were related to the implementation of knowledge into practice, the quality of nursing care, and resource availability. Organizational data were used in 27 studies to understand leadership problems, scientific evidence from the literature was sought in 26 studies, and stakeholders’ views were explored in 24 studies. Perceived and measured effects of evidence-based leadership focused on nurses’ performance, organizational outcomes, and clinical outcomes. Economic data were not available.

Conclusions

This is the first systematic review to examine how evidence is used to solve leadership problems and to describe its measured and perceived effects from different sites. Although a variety of perceptions and effects were identified on nurses’ performance as well as on organizational and clinical outcomes, available knowledge concerning evidence-based leadership is currently insufficient. Therefore, more high-quality research and clinical trial designs are still needed.

Trial registration

The study was registered (PROSPERO CRD42021259624).


Global health demands have set new roles for nurse leaders [1]. Nurse leaders are referred to as nurses, nurse managers, or other nursing staff working in a healthcare context who attempt to influence the behavior of individuals or a group based on goals that are congruent with organizational goals [2]. They are seen as professionals “armed with data and evidence, and a commitment to mentorship and education”, and as a group in which “leaders innovate, transform, and achieve quality outcomes for patients, health care professionals, organizations, and communities” [3]. Effective leadership occurs when team members critically follow leaders and are motivated by a leader’s decisions based on the organization’s requests and targets [4]. On the other hand, problems caused by poor leadership may also occur, regarding staff relations, stress, sickness, or retention [5]. Therefore, leadership requires an understanding of different problems to be solved by synthesizing evidence from research, clinical expertise, and stakeholders’ preferences [6, 7]. If based on evidence, leadership decisions, also referred to as leadership decision making [8], could ensure adequate staffing [7, 9] and produce sufficient and cost-effective care [10]. However, nurse leaders still base their decision making on their personal [11] and professional experience [10] over research evidence, which can lead to deficiencies in the quality and safety of care delivery [12, 13, 14]. As all nurses should demonstrate leadership in their profession, their leadership competencies should be strengthened [15].

Evidence-informed decision-making, referring to the appraisal and application of evidence and the evaluation of decisions [ 16 ], has been recognized as one of the core competencies for leaders [ 17 , 18 ]. The role of evidence in nurse leaders’ managerial decision making has been promoted by public authorities [ 19 , 20 , 21 ]. Evidence-based management, a related concept, has been seen as having the potential to improve healthcare services [ 22 ]. It can guide nursing leaders in developing working conditions, staff retention, implementation practices, strategic planning, patient care, and the success of leadership [ 13 ]. Collins and Holton [ 23 ], in their systematic review and meta-analysis, examined 83 studies of leadership development interventions. They found that leadership training can significantly improve participants’ skills, especially at the knowledge level, although the training effects varied across studies. Cummings et al. [ 24 ] reviewed 100 papers (93 studies) and concluded that participation in leadership interventions had a positive impact on the development of a variety of leadership styles. Clavijo-Chamorro et al. [ 25 ], in their review of 11 studies, focused on leadership-related factors that facilitate evidence implementation: teamwork, organizational structures, and transformational leadership. The role of nurse managers was to facilitate evidence-based practices by transforming contexts to motivate the staff and move toward a shared vision of change.

As far as we are aware, however, only a few systematic reviews have focused on evidence-based leadership or related concepts in the healthcare context with the aim of analysing how nurse leaders themselves use evidence in the decision-making process. Young [ 26 ] targeted definitions and acceptance of evidence-based management (EBMgt) in healthcare, while Hasanpoor et al. [ 22 ] identified facilitators and barriers, sources of evidence used, and the role of evidence in the process of decision making. Both reviews concluded that EBMgt was of great importance but was used only to a limited extent in healthcare settings due to a lack of time, a lack of research management activities, and policy constraints. A review by Williams [ 27 ] showed that the use of evidence to support managerial decision making is marginal due to a shortage of relevant evidence. Fraser [ 28 ] further indicated that evidence-based knowledge is not used in decision making by leaders as effectively as it could be. Non-use of evidence occurs, and leaders base their decisions mainly on single studies, real-world evidence, and experts’ opinions [ 29 ]. Systematic reviews and meta-analyses rarely provide evidence on management-related interventions [ 30 ]. Tate et al. [ 31 ] concluded, based on their systematic review and meta-analysis, that the ability of nurse leaders to use and critically appraise research evidence may influence how policy is enacted and how resources and staff are used to meet objectives set by policy. This can further influence staff and workforce outcomes. It is therefore important that nurse leaders have the capacity and motivation to use the strongest evidence available to effect change and guide their decision making [ 27 ].

Despite a growing body of evidence, we found only one review focusing on the impact of evidence-based knowledge. Geert et al. [ 32 ] reviewed the literature from 2007 to 2016 to understand which elements of the design, delivery, and evaluation of leadership development interventions are most reliably linked to outcomes at the individual and organizational levels and are of most benefit to patients. The authors concluded that it is possible to improve individual-level outcomes among leaders, such as knowledge, motivation, skills, and behavior change, using evidence-based approaches. The most effective interventions included, for example, interactive workshops, coaching, action learning, and mentoring. However, the authors found limited research evidence describing how nurse leaders themselves use evidence to support their managerial decisions in nursing and what the outcomes are.

To fill this knowledge gap and complement the existing knowledge base, in this mixed methods review we aimed to (1) examine what leadership problems nurse leaders solve using an evidence-based approach and (2) how they use evidence to solve these problems. We also explored the (3) measured and (4) perceived effects of the evidence-based leadership approach in healthcare settings. Both qualitative and quantitative components of the effects of evidence-based leadership were examined to provide greater insight into the available literature [ 33 ]. Together with the evidence-based leadership approach and its impact on nursing [ 34 , 35 ], the knowledge gained in this review can be used to inform clinical policy or organizational decisions [ 33 ]. The study is registered (PROSPERO CRD42021259624). The methods used in this review were specified in advance and documented a priori in a published protocol [ 36 ]. Key terms of the review and the search terms are defined in Table  1 (population, intervention, comparison, outcomes, context, other).

In this review, we used a mixed methods approach [ 37 ]. A mixed methods systematic review was selected because this approach has the potential to produce findings of direct relevance to policy makers and practitioners [ 38 ]. Johnson and Onwuegbuzie [ 39 ] have defined mixed methods research as “the class of research in which the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts or language into a single study.” We therefore combined quantitative and narrative analysis to appraise and synthesize empirical evidence, and we held them as equally important in informing clinical policy or organizational decisions [ 34 ]. A comprehensive synthesis of quantitative and qualitative data was performed first and then discussed in the Discussion section (parallel-results convergent design) [ 40 ]. We expected the different types of analysis to complement each other and to yield a deeper picture of the topic in line with our research questions [ 34 ].

Inclusion and exclusion criteria

Inclusion and exclusion criteria of the study are described in Table  1 .

Search strategy

A three-step search strategy was utilized. First, an initial limited search of MEDLINE was undertaken, followed by an analysis of the words used in the titles, abstracts, and key index terms of the retrieved articles. Second, the search strategy, including the identified keywords and index terms, was adapted for each included database, and a second search was undertaken on 11 November 2021. The full search strategy for each database is described in Additional file 1 . Third, the reference lists of all studies included in the review were screened for additional studies. No year limits or language restrictions were applied.

Information sources

The database search included the following: CINAHL (EBSCO), Cochrane Library (academic database for medicine and health science and nursing), Embase (Elsevier), PsycINFO (EBSCO), PubMed (MEDLINE), Scopus (Elsevier) and Web of Science (academic database across all scientific and technical disciplines, ranging from medicine and social sciences to arts and humanities). These databases were selected as they represent typical databases in the healthcare context. Subject headings from each of the databases were included in the search strategies. Boolean operators ‘AND’ and ‘OR’ were used to combine the search terms. An information specialist from the University of Turku Library was consulted in the formation of the search strategies.
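As a minimal illustration of how such Boolean queries are typically assembled (the term groups below are hypothetical and not the review’s actual search strings, which are given in Additional file 1), synonyms within a concept are joined with OR and the concept blocks are joined with AND:

```python
def build_boolean_query(concept_groups):
    """Join synonyms with OR inside each concept block and AND across blocks."""
    return " AND ".join(
        "(" + " OR ".join(terms) + ")" for terms in concept_groups
    )

# Hypothetical term groups for illustration only.
query = build_boolean_query([
    ['"nurse leader*"', '"nurse manager*"'],
    ['"evidence-based"', '"evidence-informed"'],
])
print(query)
# → ("nurse leader*" OR "nurse manager*") AND ("evidence-based" OR "evidence-informed")
```

In practice each block would also be OR-combined with database-specific subject headings before the blocks are ANDed together.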

Study selection

All identified citations were collated and uploaded into Covidence software (Covidence systematic review software, Veritas Health Innovation, Melbourne, Australia, www.covidence.org ), and duplicates were removed by the software. Titles and abstracts were screened and assessed against the inclusion criteria independently by two of four reviewers (MV, KH, TL, WC), and any discrepancies were resolved by a third reviewer. Studies meeting the inclusion criteria were retrieved in full and archived in Covidence. Access to one full-text article was lacking: the authors of that study were contacted about the missing full text, but none was received. The full texts of the remaining included studies were retrieved and assessed against the inclusion criteria independently by two of four reviewers (MV, KH, TL, WC). Studies that did not meet the inclusion criteria were excluded, and the reasons for exclusion were recorded in Covidence. Any disagreements that arose between the reviewers were resolved through discussion with XL.

Assessment of methodological quality

Eligible studies were critically appraised by two independent reviewers (YT, SH). Standardized critical appraisal instruments based on the study design were used. First, quasi-experimental studies were assessed using the JBI Critical Appraisal Checklist for Quasi-experimental studies [ 44 ]. Second, case series were assessed using the JBI Critical Appraisal Checklist for Case Series [ 45 ]. Third, mixed methods studies were appraised using the Mixed Methods Appraisal Tool [ 46 ].

To increase inter-reviewer reliability, the reviewer agreement was calculated (SH) [ 47 ]. A kappa value greater than 0.8, on a scale of 0 to 1, was considered to represent a high level of agreement. In our data, the agreement was 0.75. Discrepancies between the two reviewers were resolved through discussion and modification and confirmed by XL. As an outcome, studies that met the inclusion criteria proceeded to critical appraisal and were assessed as suitable for inclusion in the review. The scores for each item and the overall critical appraisal scores are presented.
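For context, Cohen’s kappa compares the observed agreement between two reviewers with the agreement expected by chance. A minimal sketch of the calculation (illustrative only; see [ 47 ] for the method as applied in this review, and note that the decisions below are hypothetical):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    labels = set(rater1) | set(rater2)
    # agreement expected if both raters decided independently at their own base rates
    p_e = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions by two reviewers.
r1 = ["include", "include", "include", "exclude"]
r2 = ["include", "include", "exclude", "exclude"]
print(cohens_kappa(r1, r2))  # 0.5: 75% raw agreement, corrected for chance
```

This is why a kappa of 0.75 can coexist with a much higher raw percentage of identical decisions.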

Data extraction

For data extraction, specific tables were created. First, study characteristics (author(s), year, country, design, number of participants, setting) were extracted independently by two authors (JC, MV) and reviewed by TL. Second, descriptions of the interventions were extracted by two reviewers (JV, JC) using the structure of the TIDieR (Template for Intervention Description and Replication) checklist (brief name, goal of the intervention, material and procedure, models of delivery and location, dose, modification, adherence and fidelity) [ 48 ]. The extractions were confirmed (MV).

Third, due to a lack of effectiveness data and wide heterogeneity between study designs and the presentation of outcomes, no attempt was made to pool the quantitative data statistically; the findings of the quantitative data were presented in narrative form only [ 44 ]. Separate data extraction tables for each research question were designed specifically for this study. For both qualitative studies (and the qualitative components of mixed methods studies) and quantitative studies, the data were extracted and tabulated into text format according to the preplanned research questions [ 36 ]. To test the quality of the tables and the data extraction process, three authors independently extracted the data from the first five studies (in alphabetical order). The authors then came together to determine whether their approaches to data extraction were consistent with each other’s output and whether the content of each table was in line with the research questions. No reason was found to modify the data extraction tables or the planned process. After consensus on the data extraction process was reached, the data were extracted in pairs by independent reviewers (WC, TY, SH, GL). Any disagreements that arose between the reviewers were resolved through discussion and with a third reviewer (MV).

Data analysis

We were not able to conduct a meta-analysis due to a lack of effectiveness data based on clinical trials. Instead, we used inductive thematic analysis with constant comparison to answer the research questions [ 46 , 49 ], using tabulated primary data from qualitative and quantitative studies as reported by the original authors in narrative form only [ 47 ]. In addition, a qualitizing process was used to transform quantitative data into qualitative data; this helped us convert all the data into themes and categories. We then applied thematic analysis to the narrative data as follows. First, the text was carefully read, line by line, to reveal topics answering each specific review question (MV). Second, the data were coded, and themes were formed by categorizing the data. The process of deriving the themes was inductive, based on constant comparison [ 49 ]. The results of the thematic analysis and data categorization were first described in narrative format, and then the number of studies in which each specific category was identified was calculated and expressed as a percentage.
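The final quantification step is simple: for each category, count the studies in which it was identified and express the count as a percentage of all included studies. A small sketch (the helper and category labels are illustrative, not the review’s actual tooling):

```python
from collections import Counter

def category_percentages(assignments, n_studies):
    """Count studies per category and express each count as a rounded percentage."""
    counts = Counter(assignments)
    return {cat: (k, round(100 * k / n_studies)) for cat, k in counts.items()}

# Illustrative data: one label per study in which the category was identified.
example = ["implementation"] * 11 + ["quality"] * 13 + ["resources"] * 7
print(category_percentages(example, n_studies=31))
```

With 31 included studies, counts of 11, 13, and 7 correspond to 35%, 42%, and 23%, respectively.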

Stakeholder involvement

The reporting of stakeholder involvement follows the key components outlined in [ 50 ]: (1) people involved, (2) geographical location, (3) how people were recruited, (4) format of involvement, (5) amount of involvement, (6) ethical approval, (7) financial compensation, and (8) methods for reporting involvement.

In our review, stakeholder involvement targeted nurses and nurse leaders in China. The Nurse Directors of two hospitals recommended potential participants, who received a personal invitation letter from the researchers to participate in a discussion meeting. Stakeholders participated of their own free will. Due to COVID-19, one online meeting (1 h) was organized (25 May 2022). Eleven participants joined the meeting. Ethical approval was not applied for, and no financial compensation was offered. At the end of the meeting, the stakeholders’ experiences of their involvement were explored.

The meeting started with an introductory slide presentation. The rationale, methods, and preliminary review results were shared with the participants [ 51 ]. The meeting continued with general questions for the participants: (1) Are you aware of the concepts of evidence-based practice or evidence-based leadership? (2) How important is it to use evidence to support decisions among nurse leaders? (3) How is the evidence-based approach used in hospital settings? and (4) What types of evidence are currently used to support nurse leaders’ decision making (e.g., scientific literature, organizational data, stakeholder views)?

Two people took notes on the course and content of the conversation. The notes were later transcribed verbatim, and the key points of the discussion were summarized. Although the answers offered by the stakeholders were very short, the information was useful for validating the preliminary content of the results, adding to the rigor of the review, and obtaining additional perspectives. A recommendation from the stakeholders was incorporated into the Discussion section of this review, increasing its applicability in the real world [ 50 ]. At the end of the discussion, participants were asked about the value of their involvement. They shared that the experience of participating was unique and that the topic of discussion was challenging. Two authors of the review group further represented stakeholders by working with the research team throughout the review.

Search results

From the seven electronic databases, 6053 citations were identified as potentially relevant to the review. Then, 3133 duplicates were removed by an automation tool (Covidence: www.covidence.org ), and one was removed manually. The titles and abstracts of 3040 citations were reviewed, and a total of 110 full texts were assessed (one extra citation was found in a reference list but later excluded). Based on the eligibility criteria, 31 studies (32 hits) were critically appraised and deemed suitable for inclusion in the review. The search results and selection process are presented in the PRISMA [ 52 ] flow diagram in Fig.  1 . The full list of references for the included studies can be found in Additional file 2 . To avoid confusion between the articles in the reference list and the studies included in the analysis, the included studies are referred to within the article using their study reference numbers (e.g., ref 1, ref 2).

Fig. 1. Search results and study selection and inclusion process [ 52 ]

Characteristics of included studies

The studies had multiple purposes: to develop practice, implement a new approach, improve quality, or develop a model. The 31 studies (across 32 hits) comprised case series studies ( n  = 27), mixed methods studies ( n  = 3), and a quasi-experimental study ( n  = 1). All studies were published between 2004 and 2021, with the highest number of papers published in 2020.

Table  2 describes the characteristics of included studies and Additional file 3 offers a narrative description of the studies.

Methodological quality assessment

Quasi-experimental studies

We had one quasi-experimental study (ref 31). All questions in the critical appraisal tool were applicable. The total score of the study was 8 (out of a possible 9). Only one response of the tool was ‘no’ because no control group was used in the study (see Additional file 4 for the critical appraisal of included studies).

Case series studies. A case series study is typically defined as a collection of subjects with common characteristics. Such studies do not include a comparison group and are often based on prevalent cases and a sample of convenience [ 53 ]. Munn et al. [ 45 ] further describe case series as observational studies that lack experimental and randomized characteristics: descriptive studies without a control or comparator group. Of the 27 case series studies included in our review, the critical appraisal scores varied from 1 to 9. Five references were conference abstracts with empirical study results, which were scored from 1 to 3; full reports of these studies were searched for in electronic databases but not found. Critical appraisal scores for the remaining 22 studies ranged from 1 to 9 out of a possible 10. One question (Q3), “Were valid methods used for identification of the condition for all participants included in the case series?”, was not applicable to 13 studies. Only two studies clearly reported the demographics of the participants (Q6). Twenty studies met Criterion 8 (“Were the outcomes or follow-up results of cases clearly reported?”) and 18 studies met Criterion 7 (“Was there clear reporting of clinical information of the participants?”) (see Additional file 4 for the critical appraisal of included studies).

Mixed-methods studies

Mixed-methods studies combine qualitative and quantitative methods. This is a common design and includes convergent, sequential explanatory, and sequential exploratory designs [ 46 ]. There were three mixed-methods studies. Their critical appraisal scores ranged from 60% to 100%. Two studies met all the criteria, while one study fulfilled 60% of the scored criteria due to a lack of information for judging the relevance of the sampling strategy to the research question (Q4.1) or for determining whether the risk of nonresponse bias was low (Q4.4) (see Additional file 4 for the critical appraisal of included studies).

Intervention or program components

The intervention or program components were categorized and described using the TIDieR checklist: name and goal, theory or background, material, procedure, provider, models of delivery, location, dose, modification, and adherence and fidelity [ 48 ]. The intervention in each study is described in Additional file 5 , with a narrative description in Additional file 6 .

Leadership problems

In line with the inclusion criteria, data for the leadership problems were categorized in all 31 included studies (see Additional file 7 for leadership problems). Three types of leadership problems were identified: implementation of knowledge into practice, the quality of clinical care, and resources in nursing care. A narrative summary of the results is reported below.

Implementing knowledge into practice

Eleven studies (35%) aimed to solve leadership problems related to the implementation of knowledge into practice. Studies addressed how to support nurses in implementing evidence-based practice (EBP) (ref 3, ref 5), how to engage nurses in using evidence in practice (ref 4), how to convey the importance of EBP (ref 22), and how to change practice (ref 4). Other problems concerned how to facilitate nurses’ use of guideline recommendations (ref 7) and how nurses can make evidence-informed decisions (ref 8). General concerns included the linkage between theory and practice (ref 1) as well as how to implement the EBP model in practice (ref 6). In addition, studies were motivated by the need to revise or update protocols to improve clinical practice (ref 10) and the need to standardize nursing activities (ref 11, ref 14).

The quality of the care

Thirteen studies (42%) focused on solving problems related to the quality of clinical care. In these studies, a high number of catheter infections led to a failure to achieve organizational goals (ref 2, ref 9). The need to reduce patient symptoms in stem cell transplant patients undergoing high-dose chemotherapy (ref 24) was another problem to be solved. In addition, projects focused on how to prevent pressure ulcers (ref 26, ref 29), how to enhance the quality of cancer treatment (ref 25), and how to reduce the need for invasive constipation treatment (ref 30). Concerns about patient safety (ref 15), high fall rates (ref 16, ref 19), and dissatisfaction among patients (ref 16, ref 18) and nurses (ref 16, ref 30) also initiated projects. Studies further addressed how to promote good contingency care in residential aged care homes (ref 20) and how to increase recognition of human trafficking problems in healthcare (ref 21).

Resources in nursing care

Nurse leaders identified problems with their resources, especially staffing. These problems were identified in seven studies (23%), which involved concerns about how to prevent nurses from leaving their jobs (ref 31), how to ensure appropriate recruitment, staffing, and retention of nurses (ref 13), and how to decrease nurses’ burden and the time spent on nursing activities (ref 12). Leadership turnover was also reported as a source of dissatisfaction (ref 17); studies addressed the lack of structured transition and training programs, which led to turnover (ref 23), as well as how to improve intershift handoffs among nurses (ref 28). Optimal design for new hospitals was also examined (ref 27).

Main features of evidence-based leadership

Out of 31 studies, 17 (55%) included all four domains of an evidence-based leadership approach, and four studies (13%) included evidence of critical appraisal of the results (ref 11, ref 14, ref 23, ref 27) (see Additional file 8 for the main features of evidence-based leadership).

Organizational evidence

Twenty-seven studies (87%) reported how organizational evidence was collected and used to solve leadership problems (ref 2). Retrospective chart reviews (ref 5), a review of the extent of specific incidents (ref 19), and chart auditing (ref 7, ref 25) were conducted. A gap between guideline recommendations and actual care was identified using organizational data (ref 7) while the percentage of nurses’ working time spent on patient care was analyzed using an electronic charting system (ref 12). Internal data (ref 22), institutional data, and programming metrics were also analyzed to understand the development of the nurse workforce (ref 13).

Surveys (ref 3, ref 25), interviews (ref 3, ref 25), and group reviews (ref 18) were used to better understand the leadership problem to be solved. Employee opinion surveys on leadership (ref 17), a nurse satisfaction survey (ref 30), and a variety of reporting templates (ref 28) were used for data collection. Sometimes, leadership problems were identified by evidence facilitators or a PI’s team who worked with staff members (ref 15, ref 17). Problems in clinical practice were also identified by the Nursing Professional Council (ref 14), managers (ref 26), or nurses themselves (ref 24). Current practices were reviewed (ref 29), and a gap analysis was conducted (ref 4, ref 16, ref 23), together with a SWOT analysis (ref 16). In addition, hospital mission and vision statements, the established research culture, and the proportion of nursing alumni with formal EBP training were analyzed (ref 5). On the other hand, one study stated that no systematic hospital-specific sources of data regarding job satisfaction or organizational commitment were used (ref 31), and another used statements of organizational analysis on a general level only (ref 1).

Scientific evidence identified

Twenty-six studies (84%) reported the use of scientific evidence in their evidence-based leadership processes. A literature search was conducted (ref 21), and questions, PICO elements, and keywords were identified in collaboration with a librarian (ref 4). Electronic databases, including PubMed (ref 14, ref 31), Cochrane, and EMBASE (ref 31), were searched. Galiano (ref 6) used Wiley Online Library, Elsevier, CINAHL, Health Source: Nursing/Academic Edition, PubMed, and the Cochrane Library, while Hoke (ref 11) conducted an electronic search of CINAHL and PubMed to retrieve articles.

Identified journals were reviewed manually (ref 31). The findings were summarized using an ‘elevator speech’ (ref 4). In a study by Gifford et al. (ref 9), evidence facilitators worked with participants to access, appraise, and adapt the research evidence to the organizational context. Ostaszkiewicz (ref 20) conducted a scoping review of the literature and identified and reviewed frameworks and policy documents about the topic and the quality standards. Further, a team of nursing administrators, directors, staff nurses, and a patient representative reviewed the literature and made recommendations for practice changes.

Clinical practice guidelines were also used as a source of scientific evidence (ref 7, ref 19). Evidence was further retrieved from a combination of nursing policies, guidelines, journal articles, and textbooks (ref 12) as well as from published guidelines and literature (ref 13). One study synthesized internal evidence, professional practice knowledge, and relevant theories and models (ref 24), while another (ref 25) reviewed individual studies and synthesized them with systematic reviews or clinical practice guidelines. Teams reviewed the research evidence (ref 3, ref 15) or conducted a literature review (ref 22, ref 28, ref 29), a literature search (ref 27), a systematic review (ref 23), or a review of the literature (ref 30), or reported that ‘the scholarly literature was reviewed’ (ref 18). In addition, ‘an extensive literature review of evidence-based best practices was carried out’ (ref 10). However, detailed descriptions of how these reviews were conducted were lacking.

Views of stakeholders

A total of 24 studies (77%) reported methods for how the views of stakeholders, i.e., professionals or experts, were considered. Support for running the study was received from nursing leadership and multidisciplinary teams (ref 29). Experts and stakeholders joined the study team in some cases (ref 25, ref 30), and in other studies, their opinions were sought to facilitate project success (ref 3). Sometimes a steering committee was formed by a Chief Nursing Officer and Clinical Practice Specialists (ref 2). More specifically, stakeholders’ views were considered through interviews, workshops, and follow-up teleconferences (ref 7). The literature review was discussed with colleagues (ref 11), and feedback and support from physicians as well as the consensus of staff were sought (ref 16).

A summary of the project findings and suggestions for the studies was discussed at 90-minute weekly meetings by 11 charge nurses, and nurse executive directors were consulted over a 10-week period (ref 31). An implementation team (nurse, dietician, physiotherapist, occupational therapist) was formed to support the implementation of evidence-based prevention measures (ref 26). Stakeholders volunteered to join the pilot implementation (ref 28), or a stakeholder team met to determine the best strategy for change management, discuss shortcomings in the evidence-based criteria, and plan strategies to address those areas (ref 5). Nursing leaders, staff members (ref 22), ‘process owners’ (ref 18), and program team members (ref 18, ref 19, ref 24) met regularly to discuss the problems. Critical input was sought from clinical educators, physicians, nutritionists, pharmacists, and nurse managers (ref 24). The unit director and senior nursing staff reviewed the contents of the product, and the final versions of the clinical pathways were reviewed and approved by the Quality Control Commission of the Nursing Department (ref 12). In addition, two co-design workshops with 18 residential aged care stakeholders were organized to explore their perspectives on the factors to include in a model prototype (ref 20). Further, an agreement among stakeholders on implementing continuous quality services within an open relationship was reached (ref 1).

Critical appraisal

In five studies (16%), a critical appraisal targeting the retrieved literature was carried out. Appraisals were conducted by interns and teams who critiqued the evidence (ref 4). In Hoke’s study, four areas that had emerged in the literature were critically reviewed (ref 11). Other methods were to ‘critically appraise the search results’ (ref 14), to organize journal club team meetings to grade the level and quality of evidence (ref 23), and to have the team ‘critically appraise relevant evidence’ (ref 27). However, the studies lacked details of how the appraisals were conducted.

The perceived effects of evidence-based leadership

Perceived effects of evidence-based leadership on nurses’ performance

Eleven studies (35%) described perceived effects of evidence-based leadership on nurses’ performance (see Additional file 9 for perceived effects of evidence-based leadership), which were categorized into four groups: awareness and knowledge, competence, ability to understand patients’ needs, and engagement. First, regarding awareness and knowledge, the projects provided nurses with new learning opportunities (ref 3). Staff knowledge (ref 20, ref 28), skills, and education levels improved (ref 20), as did nurses’ knowledge comprehension (ref 21). Second, interventions and approaches focusing on management and leadership positively influenced participants’ competence to improve the quality of services. Their confidence (ref 1) and motivation to change practice increased, their self-esteem improved, and they were more positive and enthusiastic in their work (ref 22). Third, some nurses were relieved to have learned to better handle patients’ needs (ref 25); for example, a systematic work approach increased nurses’ awareness of patients at risk of developing health problems (ref 26). Last, nurse leaders were more engaged with staff, encouraging them to adopt the new practices and recognizing their efforts to change (ref 8).

Perceived effects on organizational outcomes

Nine studies (29%) described perceived effects of evidence-based leadership on organizational outcomes (see Additional file 9 for perceived effects of evidence-based leadership). These were categorized into three groups: use of resources, staff commitment, and team effort. First, more appropriate use of resources was reported (ref 15, ref 20), and working time was used more efficiently (ref 16). In general, a structured approach made implementing change more manageable (ref 1). On the other hand, at the beginning of the change process, feedback from nurses was unfavorable, and they experienced discomfort with the new work style (ref 29); new approaches were also perceived as time consuming (ref 3). Second, nurse leaders believed that fewer nursing staff than expected left the organization over the course of the study (ref 31). Third, the projects helped staff in their efforts to make changes and validated the importance of working as a team (ref 7), and collaboration and support between nurses increased (ref 26). On the other hand, the new work style caused challenges in teamwork (ref 3).

Perceived effects on clinical outcomes

Five studies (16%) reported the perceived effects of evidence-based leadership on clinical outcomes (see Additional file 9), which were categorized into two groups: general patient outcomes and specific clinical outcomes. First, in general terms, the project assisted in connecting guideline recommendations with patient outcomes (ref 7). The project was considered good for patients in general, and especially for improving patient safety (ref 16). On the other hand, some nurses thought that the new working style did not work at all for patients (ref 28). Second, the new approach assisted in managing patients’ clinical problems and in providing person-centered care (ref 20). Bowel management, for example, received very good feedback (ref 30).

The measured effects of evidence-based leadership

The measured effects on nurses’ performance

Data were obtained from 20 studies (65%) (see Additional file 10), and nurse performance outcomes were categorized into three groups: awareness and knowledge, engagement, and satisfaction. First, six studies (19%) measured participants’ awareness and knowledge levels. An internship for staff nurses helped participants understand the process of using evidence-based practice, grow professionally, think innovatively, gain the knowledge needed to answer clinical questions with evidence-based practice, and complete an evidence-based practice project (ref 3). Regarding evidence-based practice implementation programs, those with formal EBP training showed improvements in knowledge, attitude, confidence, awareness, and application after the intervention (ref 3, ref 11, ref 20, ref 23, ref 25). In contrast, in another study, attitudes towards EBP remained stable (p = 0.543), and the proportion of participants applying EBP decreased, although the differences over the years were not significant (p = 0.879) (ref 6).

Second, 10 studies (35%) described nurses’ engagement with new practices (ref 5, ref 6, ref 7, ref 10, ref 16, ref 17, ref 18, ref 21, ref 25, ref 27). Nine studies (29%) reported an improvement in participants’ compliance levels (ref 6, ref 7, ref 10, ref 16, ref 17, ref 18, ref 21, ref 25, ref 27). In contrast, in DeLeskey’s (ref 5) study, although improvement was found in ‘post-operative nausea and vomiting (PONV) risk factors documented’ (2.5–63%) and ‘risk factors communicated among anaesthesia and surgical staff’ (0–62%), the improvement did not achieve the goal. The reason for the limited improvement was analyzed: only those patients who had been seen by the pre-admission testing nurse had risk assessments completed. Appropriate treatment/prophylaxis increased from 69 to 77% and from 30 to 49%, and routine assessment for PONV and rescue treatment (97% and 100%, respectively) were both at 100% following the project. The results were discussed with staff, but further reasons for the lack of engagement in nursing care were not reported.

Third, six studies (19%) reported nurses’ satisfaction with project outcomes. Using evidence in managerial decisions improved nurses’ satisfaction and attitudes toward their organization (p < 0.05) (ref 31). Nurses’ overall job satisfaction improved as well (ref 17). Nurses’ satisfaction with the usability of an electronic charting system improved significantly after introduction of the intervention (ref 12). In a handoff project across seven hospitals, improvement was reported in all satisfaction indicators used in the study, although the level of improvement varied between units (ref 28). In addition, positive changes were reported in nurses’ ability to perform their job autonomously (“How satisfied are you with the tools and resources available for you to treat and prevent patient constipation?”; 54%, n = 17 vs. 92%, n = 35, p < 0.001) (ref 30).

The measured effects on organizational outcomes

Thirteen studies (42%) described the effects of a project on organizational outcomes (see Additional file 10), which were categorized into two groups: staff compliance and changes in practices. First, studies reported improved organizational outcomes due to better staff compliance in care (ref 4, ref 13, ref 17, ref 23, ref 27, ref 31). Second, changes in organizational practices were also described (ref 11), such as changes in patient documentation (ref 12, ref 21). Van Orne (ref 30) found a statistically significant reduction in the average rate of invasive medication administration between pre-intervention and post-intervention (p = 0.01). Salvador (ref 24) also reported an improvement in a proactive approach to mucositis prevention with an evidence-based oral care guide. However, concerns were also raised, such as insufficient time for the new bedside report (ref 16) and a lack of improvement in the assessment of diabetic ulcers (ref 8).

The measured effects on clinical outcomes

A variety of improvements in clinical outcomes were reported (see Additional file 10), categorized into improvements in patients’ clinical status and in satisfaction levels. First, regarding clinical status, the incidence of CAUTI decreased by 27.8% between 2015 and 2019 (ref 2), while a patient-centered quality improvement project reduced CAUTI rates to zero (ref 10). A significant decrease in the MRSA transmission rate was also reported (ref 27), and in another study the incidence of CLABSIs dropped following the introduction of CHG bathing (ref 14). Further, patient nausea decreased from 18 to 5% and vomiting to 0% (ref 5), while the percentage of patients who left the hospital without being seen was below 2% after the project (ref 17). In addition, a significant reduction in the prevalence of pressure ulcers was found (ref 26, ref 29), and a significant reduction in mucositis severity/distress was achieved (ref 24). The patient fall rate also decreased (ref 15, ref 16, ref 19, ref 27).

Second, patient satisfaction improved after project implementation (ref 28). The scale in which consumers assess healthcare providers showed improvement, but the changes were not statistically significant. Improvements in an emergency department leadership model and in methods of communication with patients increased patient satisfaction scores by 600% (ref 17). In addition, a new evidence-based unit improved patients’ experiences of the unit, although not all items improved significantly (ref 18).

Stakeholder involvement in the mixed-method review

To ensure stakeholders’ involvement in the review and the real-world relevance of our research [ 53 ], to achieve a higher level of meaning in our review results, and to gain new perspectives on our preliminary findings [ 50 ], a meeting with 11 stakeholders was organized. First, we asked whether participants were aware of the concepts of evidence-based practice or evidence-based leadership. Responses revealed that participants were familiar with the concept of evidence-based practice, but the topic of evidence-based leadership was entirely new to them. Examples of nurses’ and nurse leaders’ responses are as follows: “I have heard a concept of evidence-based practice but never a concept of evidence-based leadership.” Another participant described: “I have heard it [evidence-based leadership] but I do not understand what it means.”

Second, as stakeholder involvement is beneficial to the relevance and impact of health research [ 54 ], we asked how important evidence is to them in supporting decisions in health care services. One participant responded as follows: “Using evidence in decisions is crucial to the wards and also to the entire hospital.” Third, we asked how the evidence-based approach is used in hospital settings. Participants expressed that literature is commonly used to solve clinical problems in patient care but not to solve leadership problems. “In [patient] medication and care, clinical guidelines are regularly used. However, I am aware of only a few cases where evidence has been sought to solve leadership problems.”

And last, we asked what type of evidence is currently used to support nurse leaders’ decision making (e.g. scientific literature, organizational data, stakeholder views). The participants were aware that different types of information were collected in their organization on a daily basis (e.g. patient satisfaction surveys). However, this information was seldom used to support decision making because nurse leaders did not know how to access it. Even so, the participants agreed that using evidence from different sources was important in approaching any leadership or managerial problem in the organization. Participants also suggested that all nurse leaders should receive systematic training on the topic; this could support the daily use of the evidence-based approach.

Discussion

To our knowledge, this article represents the first mixed-methods systematic review to examine leadership problems, how evidence is used to solve these problems, and what the perceived and measured effects of evidence-based leadership are on nurse leaders’ performance and on organizational and clinical outcomes. This review has two key findings. First, the available research data suggest that evidence-based leadership has potential in the healthcare context, not only to improve knowledge and skills among nurses but also to improve organizational outcomes and the quality of patient care. Second, remarkably little published research was found that explores the effects of evidence-based leadership with an efficient trial design. We validated the preliminary results with nurse stakeholders and confirmed that nursing staff, especially nurse leaders, were not familiar with the concept of evidence-based leadership, nor were they used to implementing evidence in their leadership decisions. Our search covered many databases, and we screened a large number of studies. We also checked existing registers and databases and found no registered or ongoing similar reviews. Therefore, our results are unlikely to change in the near future.

We found that, after identifying the leadership problems, 26 (84%) of the 31 studies used organizational data, 25 (81%) used scientific evidence from the literature, and 21 (68%) considered the views of stakeholders in attempting to understand specific leadership problems more deeply. However, only four studies critically appraised any of these findings. Considering previous critical statements about nurse leaders’ use of evidence in their decision making [ 14 , 30 , 31 , 34 , 55 ], our results are still quite promising.

Our results support a previous systematic review by Geerts et al. [ 32 ], which concluded that it is possible to improve leaders’ individual-level outcomes, such as knowledge, motivation, skills, and behavior change, using evidence-based approaches. Collins and Holton [ 23 ] in particular found that leadership training resulted in significant knowledge and skill improvements, although the effects varied widely across studies. In our study, evidence-based leadership was seen to enable changes in clinical practice, especially in patient care. On the other hand, we recognize that not all efforts to change were successful [ 56 , 57 , 58 ]. An evidence-based approach can also provoke negative attitudes and feelings. Negative emotions among participants have been reported in response to change, such as discomfort with a new working style [ 59 ]. Another study reported inconvenience in using a new intervention and its potential risks for patient confidentiality. Sometimes making changes is more time consuming than continuing with current practice [ 60 ]. These findings may partially explain why new interventions or programs do not always fully achieve their goals. On the other hand, DuBose et al. [ 61 ] state that, if prepared with knowledge of resistance, nurse leaders can minimize the potential negative consequences and capitalize on the powerful impact of change adaptation.

We found that only six studies used a specific model or theory to understand the mechanism of change that could guide leadership practices. Participants’ reactions to new approaches may be an important factor in predicting how a new intervention will be implemented in clinical practice. Therefore, stronger efforts should be put into better understanding the use of evidence, how participants’ reactions, emotions, or practice changes could be predicted or supported using appropriate models or theories, and how the use of these models is linked with leadership outcomes. In this task, nurse leaders have an important role. At the same time, more responsibility for developing health services has been placed on the shoulders of nurse leaders, who may already be suffering from pressure and an increased burden at work. Working in a leadership position may also lead to role conflict. A study by Lalleman et al. [ 62 ] found that nurses were used to helping other people, often in ad hoc situations. The helping attitude of nurses combined with a structured managerial role may cause dilemmas, which may lead to stress. Many nurse leaders opt to leave their positions within less than 5 years [ 63 ]. To better fulfill the requirements of health services in the future, the role of nurse leaders in evidence-based leadership needs to be developed further to avoid ethical and practical dilemmas in their leadership practices.

It is worth noting that the perceived and measured effects did not strongly support each other but rather opened a new avenue for understanding evidence-based leadership. Specifically, the perceived effects were not corroborated by measured effects (competence, ability to understand patients’ needs, use of resources, team effort, and specific clinical outcomes), while the measured effects were not corroborated by perceived effects (nurses’ performance satisfaction, changes in practices, and satisfaction with clinical outcomes). These findings may indicate that different outcomes appear when the effects of evidence-based leadership are examined using different methodological approaches. Future studies using well-designed methods, including mixed-methods designs, are encouraged to examine the consistency between the perceived and measured effects of evidence-based leadership in health care.

There is potential in nursing to support change by demonstrating conceptual and operational commitment to research-based practices [ 64 ]. Nurse leaders are well positioned to influence and lead professional governance, quality improvement, service transformation, change, and shared governance [ 65 ]. In this task, evidence-based leadership could be key to solving deficiencies in the quality and safety of care [ 14 ] and inefficiencies in healthcare delivery [ 12 , 13 ]. As the WHO has revealed, there are about 28 million nurses worldwide, and the demand for nurses will put nursing resources in the spotlight [ 1 ]. Indeed, evidence could be used to find solutions for how to address economic deficits or other problems using leadership skills. This is important because, when nurses are able to show leadership and control in their own work, they are less likely to leave their jobs [ 66 ]. On the other hand, based on our discussions with stakeholders, nurse leaders are not used to using evidence in their own work. Further, evidence-based leadership is not possible if nurse leaders do not have access to a relevant, robust body of evidence, adequate funding, resources, and organizational support, and evidence-informed decision making may only offer short-term solutions [ 55 ]. We still believe that implementing evidence-based strategies into the work of nurse leaders may create opportunities to protect this critical workforce from burnout or leaving the field [ 67 ]. However, the role of the evidence-based approach in helping nurse leaders solve these problems remains a key question.

Limitations

This study aimed to use a broad search strategy to ensure a comprehensive review, but limitations nevertheless exist: we may have missed studies not included in the major international databases. To keep the search results manageable, we did not use specific databases to systematically search the grey literature, although it is a rich source of evidence for systematic reviews and meta-analyses [ 68 ]. We did, however, include published conference abstracts/proceedings that appeared in our scientific databases. It has been stated that conference abstracts and proceedings with empirical study results make up a substantial part of the studies cited in systematic reviews [ 69 ]. At the same time, the limited space reserved for published conference publications can lead to methodological issues that reduce the validity of review results [ 68 ]. We also found that the great majority of studies were carried out in Western countries, restricting the generalizability of the results outside English-speaking countries. The study interventions and outcomes were too different across studies to be meaningfully pooled using statistical methods. Thus, our narrative synthesis could hypothetically be biased. To increase the transparency of the data and of all decisions made, the data, their categorization, and our conclusions are based on the original studies, are presented in separate tables, and can be found in the Additional files. Regarding the methodological approach [ 34 ], we used a mixed-methods systematic review, with the core intention of combining quantitative and qualitative data from primary studies. The aim was to create a breadth and depth of understanding that could confirm or dispute evidence and ultimately answer the review question posed [ 34 , 70 ]. Although the method is gaining traction due to its usefulness and practicality, guidance on combining quantitative and qualitative data in mixed-methods systematic reviews is still limited at the theoretical stage [ 40 ]. As a consequence, it could be argued that other methodologies, for example an integrative review, could have been used to combine diverse methodologies [ 71 ]. We still believe that the results of this mixed-methods review add value when compared with previous systematic reviews concerning leadership and an evidence-based approach.

Conclusions

Our mixed-methods review fills the gap regarding how nurse leaders themselves use evidence to guide their leadership role and what the measured and perceived impacts of evidence-based leadership are in nursing. Although the scarcity of controlled studies on this topic is concerning, the available research data suggest that evidence-based leadership interventions can improve nurse performance, organizational outcomes, and patient outcomes. Leadership problems are also well recognized in healthcare settings. More knowledge and a deeper understanding of the role of nurse leaders, and of how they can use evidence in their own managerial leadership decisions, are still needed. Despite the limited number of studies, we assume that this narrative synthesis can provide a good foundation for developing evidence-based leadership in the future.

Implications

Based on our review results, several implications can be recommended. First, the future of nursing success depends on knowledgeable, capable, and strong leaders. Therefore, nurse leaders worldwide need to be educated about the best ways to manage challenging situations in healthcare contexts using an evidence-based approach in their decisions. This recommendation was also proposed by nurses and nurse leaders during our discussion meeting with stakeholders.

Second, curricula in educational organizations and on-the-job training for nurse leaders should be updated to support a general understanding of how to use evidence in leadership decisions. Third, patients and family members should be more involved in the evidence-based approach. It is therefore important that nurse leaders learn how patients’ and family members’ views as stakeholders can be better considered as part of the evidence-based leadership approach.

Future studies should be prioritized as follows: establishing clear parameters for what constitutes and measures evidence-based leadership; using theories or models in research to inform mechanisms for effectively changing practice; conducting robust effectiveness studies using trial designs to evaluate the impact of evidence-based leadership; studying the role of patients and family members in improving the quality of clinical care; and investigating the financial impact of an evidence-based leadership approach within the respective healthcare systems.

Data availability

The authors obtained all data for this review from published manuscripts.

References

World Health Organization. State of the world’s nursing 2020: investing in education, jobs and leadership. 2020. https://www.who.int/publications/i/item/9789240003279 . Accessed 29 June 2024.

Hersey P, Campbell R. Leadership: a behavioral science approach. The Center for; 2004.

Cline D, Crenshaw JT, Woods S. Nurse leader: a definition for the 21st century. Nurse Lead. 2022;20(4):381–4. https://doi.org/10.1016/j.mnl.2021.12.017 .


Chen SS. Leadership styles and organization structural configurations. J Hum Resource Adult Learn. 2006;2(2):39–46.


McKibben L. Conflict management: importance and implications. Br J Nurs. 2017;26(2):100–3.


Haghgoshayie E, Hasanpoor E. Evidence-based nursing management: basing Organizational practices on the best available evidence. Creat Nurs. 2021;27(2):94–7. https://doi.org/10.1891/CRNR-D-19-00080 .

Majers JS, Warshawsky N. Evidence-based decision-making for nurse leaders. Nurse Lead. 2020;18(5):471–5.

Tichy NM, Bennis WG. Making judgment calls. Harvard Business Rev. 2007;85(10):94.

Sousa MJ, Pesqueira AM, Lemos C, Sousa M, Rocha Á. Decision-making based on big data analytics for people management in healthcare organizations. J Med Syst. 2019;43(9):1–10.

Guo R, Berkshire SD, Fulton LV, Hermanson PM. Use of evidence-based management in healthcare administration decision-making. Leadersh Health Serv. 2017;30(3):330–42.

Liang Z, Howard P, Rasa J. Evidence-informed managerial decision-making: what evidence counts?(part one). Asia Pac J Health Manage. 2011;6(1):23–9.

Hasanpoor E, Janati A, Arab-Zozani M, Haghgoshayie E. Using the evidence-based medicine and evidence-based management to minimise overuse and maximise quality in healthcare: a hybrid perspective. BMJ evidence-based Med. 2020;25(1):3–5.

Shingler NA, Gonzalez JZ. Ebm: a pathway to evidence-based nursing management. Nurs 2022. 2017;47(2):43–6.

Farokhzadian J, Nayeri ND, Borhani F, Zare MR. Nurse leaders’ attitudes, self-efficacy and training needs for implementing evidence-based practice: is it time for a change toward safe care? Br J Med Med Res. 2015;7(8):662.


American Nurses Association. ANA leadership competency model. Silver Spring, MD; 2018.

Royal College of Nursing. Leadership skills. 2022. https://www.rcn.org.uk/professional-development/your-career/nurse/leadership-skills . Accessed 29 June 2024.

Kakemam E, Liang Z, Janati A, Arab-Zozani M, Mohaghegh B, Gholizadeh M. Leadership and management competencies for hospital managers: a systematic review and best-fit framework synthesis. J Healthc Leadersh. 2020;12:59.

Liang Z, Howard PF, Leggat S, Bartram T. Development and validation of health service management competencies. J Health Organ Manag. 2018;32(2):157–75.

World Health Organization. Global Strategic Directions for Nursing and Midwifery. 2021. https://apps.who.int/iris/bitstream/handle/10665/344562/9789240033863-eng.pdf . Accessed 29 June 2024.

NHS Leadership Academy. The nine leadership dimensions. 2022. https://www.leadershipacademy.nhs.uk/resources/healthcare-leadership-model/nine-leadership-dimensions/ . Accessed 29 June 2024.

Canadian Nurses Association. Evidence-informed decision-making and nursing practice: Position statement. 2018. https://hl-prod-ca-oc-download.s3-ca-central-1.amazonaws.com/CNA/2f975e7e-4a40-45ca-863c-5ebf0a138d5e/UploadedImages/documents/Evidence_informed_Decision_making_and_Nursing_Practice_position_statement_Dec_2018.pdf . Accessed 29 June 2024.

Hasanpoor E, Hajebrahimi S, Janati A, Abedini Z, Haghgoshayie E. Barriers, facilitators, process and sources of evidence for evidence-based management among health care managers: a qualitative systematic review. Ethiop J Health Sci. 2018;28(5):665–80.


Collins DB, Holton EF III. The effectiveness of managerial leadership development programs: a meta-analysis of studies from 1982 to 2001. Hum Res Dev Q. 2004;15(2):217–48.

Cummings GG, Lee S, Tate K, Penconek T, Micaroni SP, Paananen T, et al. The essentials of nursing leadership: a systematic review of factors and educational interventions influencing nursing leadership. Int J Nurs Stud. 2021;115:103842.

Clavijo-Chamorro MZ, Romero-Zarallo G, Gómez-Luque A, López-Espuela F, Sanz-Martos S, López-Medina IM. Leadership as a facilitator of evidence implementation by nurse managers: a metasynthesis. West J Nurs Res. 2022;44(6):567–81.

Young SK. Evidence-based management: a literature review. J Nurs Adm Manag. 2002;10(3):145–51.

Williams LL. What goes around comes around: evidence-based management. Nurs Adm Q. 2006;30(3):243–51.

Fraser I. Organizational research with impact: working backwards. Worldviews Evidence-Based Nurs. 2004;1:S52–9.

Roshanghalb A, Lettieri E, Aloini D, Cannavacciuolo L, Gitto S, Visintin F. What evidence on evidence-based management in healthcare? Manag Decis. 2018;56(10):2069–84.

Jaana M, Vartak S, Ward MM. Evidence-based health care management: what is the research evidence available for health care managers? Eval Health Prof. 2014;37(3):314–34.

Tate K, Hewko S, McLane P, Baxter P, Perry K, Armijo-Olivo S, et al. Learning to lead: a review and synthesis of literature examining health care managers’ use of knowledge. J Health Serv Res Policy. 2019;24(1):57–70.

Geerts JM, Goodall AH, Agius S. Evidence-based leadership development for physicians: a systematic literature review. Soc Sci Med. 2020;246:112709.

Barends E, Rousseau DM, Briner RB. Evidence-based management: The basic principles. Amsterdam; 2014. https://research.vu.nl/ws/portalfiles/portal/42141986/complete+dissertation.pdf#page=203 . Accessed 29 June 2024.

Stern C, Lizarondo L, Carrier J, Godfrey C, Rieger K, Salmond S, et al. Methodological guidance for the conduct of mixed methods systematic reviews. JBI Evid Synthesis. 2020;18(10):2108–18. https://doi.org/10.11124/JBISRIR-D-19-00169 .

The Lancet. 2020: unleashing the full potential of nursing. Lancet. 2019;394(10212):1879.

Välimäki MA, Lantta T, Hipp K, Varpula J, Liu G, Tang Y, et al. Measured and perceived impacts of evidence-based leadership in nursing: a mixed-methods systematic review protocol. BMJ Open. 2021;11(10):e055356. https://doi.org/10.1136/bmjopen-2021-055356 .

The Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual: 2014 edition. Joanna Briggs Inst. 2014; 88–91.

Pearson A, White H, Bath-Hextall F, Salmond S, Apostolo J, Kirkpatrick P. A mixed-methods approach to systematic reviews. JBI Evid Implement. 2015;13(3):121–31.

Johnson RB, Onwuegbuzie AJ. Mixed methods research: a research paradigm whose time has come. Educational Researcher. 2004;33(7):14–26.

Hong QN, Pluye P, Bujold M, Wassef M. Convergent and sequential synthesis designs: implications for conducting and reporting systematic reviews of qualitative and quantitative evidence. Syst Reviews. 2017;6(1):61. https://doi.org/10.1186/s13643-017-0454-2 .

Ramis MA, Chang A, Conway A, Lim D, Munday J, Nissen L. Theory-based strategies for teaching evidence-based practice to undergraduate health students: a systematic review. BMC Med Educ. 2019;19(1):1–13.

Sackett DL, Rosenberg WM, Gray JM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312:71–2.

Goodman JS, Gary MS, Wood RE. Bibliographic search training for evidence-based management education: a review of relevant literatures. Acad Manage Learn Educ. 2014;13(3):322–53.

Aromataris E, Munn Z. Chapter 3: Systematic reviews of effectiveness. JBI Manual for Evidence Synthesis. 2020; https://synthesismanual.jbi.global .

Munn Z, Barker TH, Moola S, Tufanaru C, Stern C, McArthur A, et al. Methodological quality of case series studies: an introduction to the JBI critical appraisal tool. JBI Evid Synth. 2020;18(10):2127–33.

Hong Q, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. Mixed methods Appraisal Tool (MMAT) Version 2018: user guide. Montreal: McGill University; 2018.

McKenna J, Jeske D. Ethical leadership and decision authority effects on nurses’ engagement, exhaustion, and turnover intention. J Adv Nurs. 2021;77(1):198–206.

Maxwell M, Hibberd C, Aitchison P, Calveley E, Pratt R, Dougall N, et al. The TIDieR (template for intervention description and replication) checklist. The patient Centred Assessment Method for improving nurse-led biopsychosocial assessment of patients with long-term conditions: a feasibility RCT. NIHR Journals Library; 2018.

Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Res Psychol. 2006;3(2):77–101.

Pollock A, Campbell P, Struthers C, Synnot A, Nunn J, Hill S, et al. Stakeholder involvement in systematic reviews: a scoping review. Syst Reviews. 2018;7:1–26.

Braye S, Preston-Shoot M. Emerging from out of the shadows? Service user and carer involvement in systematic reviews. Evid Policy. 2005;1(2):173–93.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Reviews. 2021;10(1):1–11.

Porta M. Pilot investigation, study. A dictionary of epidemiology. Oxford: Oxford University Press; 2014. p. 215.

Kreis J, Puhan MA, Schünemann HJ, Dickersin K. Consumer involvement in systematic reviews of comparative effectiveness research. Health Expect. 2013;16(4):323–37.

Joseph ML, Nelson-Brantley HV, Caramanica L, Lyman B, Frank B, Hand MW, et al. Building the science to guide nursing administration and leadership decision making. JONA: J Nurs Adm. 2022;52(1):19–26.

Gifford W, Davies BL, Graham ID, Tourangeau A, Woodend AK, Lefebre N. Developing Leadership Capacity for Guideline Use: a pilot cluster Randomized Control Trial: Leadership Pilot Study. Worldviews Evidence-Based Nurs. 2013;10(1):51–65. https://doi.org/10.1111/j.1741-6787.2012.00254.x .

Hsieh HY, Henker R, Ren D, Chien WY, Chang JP, Chen L, et al. Improving effectiveness and satisfaction of an electronic charting system in Taiwan. Clin Nurse Specialist. 2016;30(6):E1–6. https://doi.org/10.1097/NUR.0000000000000250 .

McAllen E, Stephens K, Swanson-Biearman B, Kerr K, Whiteman K. Moving Shift Report to the Bedside: an evidence-based Quality Improvement Project. OJIN: Online J Issues Nurs. 2018;23(2). https://doi.org/10.3912/OJIN.Vol23No02PPT22 .

Thomas M, Autencio K, Cesario K. Positive outcomes of an evidence-based pressure injury prevention program. J Wound Ostomy Cont Nurs. 2020;47:S24.

Cullen L, Titler MG. Promoting evidence-based practice: an internship for Staff nurses. Worldviews Evidence-Based Nurs. 2004;1(4):215–23. https://doi.org/10.1111/j.1524-475X.2004.04027.x .

DuBose BM, Mayo AM. Resistance to change: a concept analysis. Nurs Forum. 2020;55(4):631–6.

Lalleman PCB, Smid GAC, Lagerwey MD, Shortridge-Baggett LM, Schuurmans MJ. Curbing the urge to care: a bourdieusian analysis of the effect of the caring disposition on nurse middle managers’ clinical leadership in patient safety practices. Int J Nurs Stud. 2016;63:179–88.

Article   CAS   PubMed   Google Scholar  

Martin E, Warshawsky N. Guiding principles for creating value and meaning for the next generation of nurse leaders. JONA: J Nurs Adm. 2017;47(9):418–20.

Griffiths P, Recio-Saucedo A, Dall’Ora C, Briggs J, Maruotti A, Meredith P, et al. The association between nurse staffing and omissions in nursing care: a systematic review. J Adv Nurs. 2018;74(7):1474–87. https://doi.org/10.1111/jan.13564 .

Lúanaigh PÓ, Hughes F. The nurse executive role in quality and high performing health services. J Nurs Adm Manag. 2016;24(1):132–6.

de Kok E, Weggelaar-Jansen AM, Schoonhoven L, Lalleman P. A scoping review of rebel nurse leadership: descriptions, competences and stimulating/hindering factors. J Clin Nurs. 2021;30(17–18):2563–83.

Warshawsky NE. Building nurse manager well-being by reducing healthcare system demands. JONA: J Nurs Adm. 2022;52(4):189–91.

Paez A. Gray literature: an important resource in systematic reviews. J Evidence-Based Med. 2017;10(3):233–40.

McAuley L, Tugwell P, Moher D. Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? Lancet. 2000;356(9237):1228–31.

Sarah S. Introduction to mixed methods systematic reviews. https://jbi-global-wiki.refined.site/space/MANUAL/4689215/8.1+Introduction+to+mixed+methods+systematic+reviews . Accessed 29 June 2024.

Whittemore R, Knafl K. The integrative review: updated methodology. J Adv Nurs. 2005;52(5):546–53.

Download references

Acknowledgements

We want to thank the funding bodies, the Finnish National Agency of Education, Asia Programme, the Department of Nursing Science at the University of Turku, and the Xiangya School of Nursing at Central South University. We would also like to thank the nurses and nurse leaders for their valuable opinions on the topic.

Funding

The work was supported by the Finnish National Agency of Education, Asia Programme (grant number 26/270/2020) and the University of Turku (internal fund 26003424). The funders had no role in the study design and will not have any role during its execution, analysis, interpretation of the data, decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations

Department of Nursing Science, University of Turku, Turku, FI-20014, Finland

Maritta Välimäki, Tella Lantta, Kirsi Hipp & Jaakko Varpula

School of Public Health, University of Helsinki, Helsinki, FI-00014, Finland

Maritta Välimäki

Xiangya School of Nursing, Central South University, Changsha, 410013, China

Shuang Hu, Jiarui Chen, Yao Tang, Wenjun Chen & Xianhong Li

School of Health and Social Services, Häme University of Applied Sciences, Hämeenlinna, Finland

Hunan Cancer Hospital, Changsha, 410008, China

Gaoming Liu


Contributions

Study design: MV, XL. Literature search and study selection: MV, KH, TL, WC, XL. Quality assessment: YT, SH, XL. Data extraction: JC, MV, JV, WC, YT, SH, GL. Analysis and interpretation: MV, SH. Manuscript writing: MV. Critical revisions for important intellectual content: MV, XL. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xianhong Li.

Ethics declarations

Ethics approval and consent to participate

No ethical approval was required for this study.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Differences from the original protocol

We modified the criteria for included studies: we added published conference abstracts/proceedings, which form a relatively broad part of the scientific knowledge base. We originally planned to conduct a survey with open-ended questions followed by a face-to-face meeting to discuss the preliminary results of the review. However, to avoid placing an extra burden on nurses during the COVID-19 pandemic, we limited the validation process to the online discussion only.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Supplementary Material 4

Supplementary Material 5

Supplementary Material 6

Supplementary Material 7

Supplementary Material 8

Supplementary Material 9

Supplementary Material 10

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Välimäki, M., Hu, S., Lantta, T. et al. The impact of evidence-based nursing leadership in healthcare settings: a mixed methods systematic review. BMC Nurs 23, 452 (2024). https://doi.org/10.1186/s12912-024-02096-4


Received : 28 April 2023

Accepted : 13 June 2024

Published : 03 July 2024

DOI : https://doi.org/10.1186/s12912-024-02096-4


Keywords

  • Evidence-based leadership
  • Health services administration
  • Organizational development
  • Quality in healthcare

BMC Nursing

ISSN: 1472-6955
