Logic Models

CDC Approach to Evaluation

A logic model is a graphic depiction (road map) that presents the shared relationships among the resources, activities, outputs, outcomes, and impact for your program. It depicts the relationship between your program’s activities and its intended effects. Learn more about logic models and the key steps to developing a useful logic model on the CDC Program Evaluation Framework Checklist for Step 2 page.

For additional logic model information:

Division for Heart Disease and Stroke Prevention: Developing and Using a Logic Model



Center for Research Partnerships and Program Evaluation (CRPPE)

Logic Models

While there are many forms, logic models specify relationships among program goals, objectives, activities, outputs, and outcomes. Logic models are often developed using graphics or schematics and allow the program manager or evaluator to clearly indicate the theoretical connections among program components: that is, how program activities will lead to the accomplishment of objectives, and how accomplishing objectives will lead to the fulfillment of goals. In addition, logic models used for evaluation include the measures that will be used to determine if activities were carried out as planned (output measures) and if the program's objectives have been met (outcome measures).

Why Use a Logic Model?

Logic models are a useful tool for program development and evaluation planning for several reasons:

  • They serve as a format for clarifying what the program hopes to achieve.
  • They are an effective way to monitor program activities.
  • They can be used for either performance measurement or evaluation.
  • They help programs stay on track as well as plan for the future.
  • They are an excellent way to document what a program intends to do and what it is actually doing.

Learn More about What a Logic Model Is and Why to Use It

Logic Model for Program Planning and Evaluation (University of Idaho-Extension)

How to Develop a Logic Model

Developing a logic model requires a program planner to think systematically about what they want their program to accomplish and how it will be done. The logic model should illustrate the linkages among the elements of the program including the goal, objectives, resources, activities, process measures, outcomes, outcome measures, and external factors.

Logic Model Schematic

The following logic model format and discussion were developed by the Juvenile Justice Evaluation Center (JJEC) and maintained online by the Justice Research and Statistics Association (www.jrsa.org) from 1998 to 2007.

At the top of the logic model example is a goal, which represents a broad, measurable statement describing the desired long-term impact of the program. Knowing the long-term achievements a program is expected to make will help in determining what the overall program goal should be. Goals are not always achieved during the operation of a program; even so, evaluators and program planners should continually revisit the program's goals during program planning.

An objective is a more specific, measurable concept focused on the immediate or direct outcomes of the program that support accomplishment of the goal. Unlike goals, objectives should be achieved during the program. A clear objective provides information about the direction, target, and timeframe of the program. Knowing what difference your program will make, who will be affected, and when will help you develop focused objectives for your program.

Resources, or inputs, can include staff, facilities, materials, funds, or anything else invested in the program to accomplish the work that must be done. The resources needed to conduct a program should be articulated during the early stages of program development to ensure that the program is realistically implemented and capable of meeting its stated goal(s).

Activities represent the efforts conducted to achieve the program objectives. After considering the resources a program will need, the specific activities that will be used to bring about the intended changes or results must be determined.

Process measures are data used to demonstrate the implementation of activities. These include products of activities and indicators of services provided. Process measures provide documentation of whether a program is being implemented as originally intended. For example, process measures for a mental health court program might include the number of treatment contacts or the type of treatment received.

Outcome measures represent the actual change(s), or lack thereof, in the target of the program (e.g., clients or the system) that are directly related to the goal(s) and objectives. Outcomes may include intended or unintended consequences. Three levels of outcomes to consider are short-term (immediate), intermediate, and long-term outcomes.

External factors, located at the bottom of the logic model example, are factors within the system that may affect program operation. External factors vary according to program setting and may include influences such as the development of or revisions to state or federal laws, unexpected changes in data-sharing procedures, or similar programs running simultaneously. It is important to think about external factors that might change how your program operates or affect program outcomes. External factors should be included during the development of the logic model so that they can be taken into account when assessing program operations or when interpreting the absence or presence of program changes.
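To make these elements concrete, here is a minimal sketch (in Python, purely illustrative and not part of the JJEC materials) of how the schematic's components might be captured as a simple data structure before being drawn. The specific goal, measures, and factors below are hypothetical example content.

```python
# Illustrative only: a JJEC-style logic model captured as plain data.
# All goal/objective/measure text below is hypothetical example content.
logic_model = {
    "goal": "Increase high-school graduation rates in the district",
    "objectives": [
        "Increase school attendance among chronically truant youth within one year",
    ],
    "resources": ["program coordinator", "volunteer mentors", "school district agreement"],
    "activities": ["enroll truant youth", "provide weekly mentoring sessions"],
    "process_measures": ["number of youth enrolled", "number of mentoring contacts"],
    "outcome_measures": ["change in attendance rate", "change in graduation rate"],
    "external_factors": ["changes to state truancy laws", "other youth programs running simultaneously"],
}

# A quick completeness check: every element of the schematic should be filled in.
for element, content in logic_model.items():
    assert content, f"Logic model is missing content for: {element}"
```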

If-Then Logic Model

Another way to develop a logic model is to use an "if-then" sequence that indicates how the components relate to one another. Conceptually, the if-then logic model works like this: IF [program activity] THEN [program objective], and IF [program objective] THEN [program goal].

In reality, the if-then logic model looks like this: IF a truancy reduction program is offered to youth who have been truant from school THEN their school attendance will increase and IF their school attendance is increased THEN their graduation rates will increase.

Another way to conceptualize the "if-then" format:

  • If the required resources are invested, then those resources can be used to conduct the program activities.
  • If the activities are completed, then the desired outputs for the target population will be produced.
  • If the outputs are produced, then the outcomes will indicate that the objectives of the program have been accomplished.

Developing program logic using an "if-then" sequence can help a program manager or evaluator maintain focus and direction for the project and help specify what will be measured through the evaluation.
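As a rough illustration of this mechanical structure, each component can be written as the "if" of the next link in a chain. The snippet below is a sketch (Python, not from any cited tool) that simply prints the chain, using the truancy-reduction example above.

```python
# Sketch: expressing program logic as an ordered if-then chain.
# The chain follows the truancy-reduction example in the text.
chain = [
    "a truancy reduction program is offered to youth who have been truant from school",  # activity
    "their school attendance will increase",                                             # objective
    "their graduation rates will increase",                                              # goal
]

# Walk adjacent pairs and print each IF ... THEN ... statement.
for condition, result in zip(chain, chain[1:]):
    print(f"IF {condition} THEN {result}")
```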

Common Problems When Developing Logic Models

Links among elements (e.g., objectives, activities, outcome measures) of the logic model are unclear or missing. It should be obvious which objective is tied to which activity, process measure, etc. Oftentimes logic models contain lists of each of the elements of a logic model without specifying which item on one list is related to which item on another list. This can easily lead to confusion regarding the relationship among elements or result in accidental omission of an item on a list of elements.

Too much (or too little) information is provided on the logic model. The logic model should include only the primary elements related to program/project design and operation. As a general rule, it should provide the "big picture" of the program/project and avoid very specific details, such as exactly how interventions will occur or a list of all the agencies involved in collaboration efforts. If you feel that a model with all those details is necessary, consider developing two models: one with the fundamental elements and one with the details.

Objectives are confused with activities. Make sure that items listed as objectives are in fact objectives rather than activities. Anything related to program implementation or a task that is being carried out in order to accomplish something is an activity rather than an objective. For example, 'hire 10 staff members' is an activity that is being carried out in order to accomplish an objective such as 'improve response time for incoming phone calls.'

Objectives are not measurable. Unlike goals, which are not considered measurable because they are broad, mission-like statements, objectives should be measurable and directly related to the accomplishment of the goal. An objective is measurable when it specifically identifies the target (who or what will be affected), is time-oriented (when it will be accomplished), and indicates direction of desired change. In many cases, measurable objectives also include the amount of change desired.

Other Logic Model Examples

  • Phoenix Gang Logic Model
  • OJJDP Generic Logic Model
  • Project Safe Neighborhoods Example

Logic Model: A Comprehensive Guide to Program Planning, Implementation, and Evaluation

Learn how to use a logic model to guide your program planning, implementation, and evaluation. This comprehensive guide covers everything you need to know to get started.

Table of Contents

  • What are Logic Models and Why are They Important in Evaluation?
  • The Key Components of a Logic Model: Inputs, Activities, Outputs, Outcomes, and Impacts
  • Creating a Logic Model: Step-by-Step Guide and Best Practices
  • Using Logic Models to Guide Evaluation Planning, Implementation, and Reporting
  • Common Challenges and Solutions in Developing and Using Logic Models in Evaluation
  • Enhancing the Usefulness and Credibility of Logic Models: Tips for Effective Communication and Stakeholder Engagement
  • Advanced Topics in Logic Modeling: Theory of Change, Program Theory, and Impact Pathways
  • Resources and Tools for Developing and Using Logic Models in Evaluation

1. What are Logic Models and Why are They Important in Evaluation?

Logic models are visual representations or diagrams that illustrate how a program or intervention is intended to work. They map out the relationships between program inputs, activities, outputs, and outcomes, and can be used to communicate program goals and objectives, as well as guide program design, implementation, and evaluation.

Logic models are important in evaluation because they provide a clear and systematic way to identify and measure program inputs, activities, outputs, and outcomes. By mapping out the underlying assumptions and theories of change that drive a program, logic models help evaluators identify potential gaps, inconsistencies, and areas of improvement in program design and implementation. They also help evaluators develop evaluation plans and strategies, identify appropriate indicators and measures, and track progress toward program goals and objectives.

Logic models provide a structured and systematic approach to program evaluation that helps ensure that programs are designed, implemented, and evaluated in a rigorous and effective manner.

2. The Key Components of a Logic Model: Inputs, Activities, Outputs, Outcomes, and Impacts

The key components of a logic model are typically organized into five main categories: inputs, activities, outputs, outcomes, and impacts. Here is a brief description of each component:

  • Inputs : These are the resources, both human and material, that are invested in the program. Inputs can include things like funding, staff time, equipment, and materials.
  • Activities : These are the specific actions or interventions that the program undertakes in order to achieve its objectives. Activities can include things like training, outreach, or counseling.
  • Outputs : These are the immediate products or services that result from the program’s activities. Outputs can include things like the number of people trained, the number of workshops conducted, or the number of brochures distributed.
  • Outcomes : These are the changes that occur as a result of the program’s outputs. Outcomes can be short-term, intermediate, or long-term and can include changes in knowledge, behavior, or attitudes.
  • Impacts : These are the broader changes that occur as a result of the program’s outcomes. Impacts can include changes in social, economic, or environmental conditions and are often difficult to measure.

By clearly identifying and mapping out each of these components, a logic model provides a clear and systematic way to understand how a program is designed to work, what resources are needed to implement it, and what outcomes and impacts it is expected to achieve.
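For readers who find it easier to see structure in code, here is one way (a Python sketch, not part of this guide) to represent the five components so nothing is left blank when drafting a model. All field contents are invented examples.

```python
from dataclasses import dataclass, field

# Sketch only: the five logic-model components as a typed container.
@dataclass
class LogicModel:
    inputs: list[str] = field(default_factory=list)      # resources invested (funding, staff time, equipment)
    activities: list[str] = field(default_factory=list)  # actions taken (training, outreach, counseling)
    outputs: list[str] = field(default_factory=list)     # immediate products (people trained, workshops held)
    outcomes: list[str] = field(default_factory=list)    # changes in knowledge, behavior, or attitudes
    impacts: list[str] = field(default_factory=list)     # broader social, economic, or environmental change

# Hypothetical example content for a training program.
model = LogicModel(
    inputs=["grant funding", "two trainers"],
    activities=["deliver monthly workshops"],
    outputs=["120 people trained per year"],
    outcomes=["participants apply new skills on the job"],
    impacts=["improved service quality in the community"],
)
print(model)
```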

3. Creating a Logic Model: Step-by-Step Guide and Best Practices

Creating a logic model is an iterative process that involves collaboration among stakeholders to develop a shared understanding of the program’s goals, objectives, and expected outcomes. Here is a step-by-step guide to creating a logic model, along with some best practices:

Step 1: Identify the Program Goal

The first step in creating a logic model is to identify the program’s overall goal. This should be a broad statement that reflects the program’s purpose and the desired change it seeks to achieve.

  • Best Practice: The goal should be specific, measurable, achievable, relevant, and time-bound ( SMART ).

Step 2: Identify the Program Inputs

The next step is to identify the resources required to implement the program. Inputs can include staff, volunteers, funding, equipment, and other resources necessary to implement the program.

  • Best Practice: Inputs should be clearly defined and quantified to help with budgeting and resource allocation.

Step 3: Identify the Program Activities

Once the inputs have been identified, the next step is to identify the specific activities that will be undertaken to achieve the program’s goal. These activities should be based on evidence-based best practices and should be feasible given the available resources.

  • Best Practice: Activities should be designed to address the root causes of the problem the program is addressing.

Step 4: Identify the Program Outputs

Outputs are the immediate products or services that result from the program’s activities. These should be measurable and directly linked to the program’s activities.

  • Best Practice: Outputs should be defined in terms of quantity, quality, and timeliness to ensure that they are meaningful and relevant.

Step 5: Identify the Program Outcomes

Outcomes are the changes that occur as a result of the program’s outputs. These should be specific, measurable, and relevant to the program’s goal and should reflect changes in knowledge, skills, behaviors, or attitudes.

  • Best Practice: Outcomes should be defined in terms of short-term, intermediate, and long-term changes to provide a comprehensive picture of program impact.

Step 6: Identify the Program Impacts

Impacts are the broader changes that occur as a result of the program’s outcomes. These may be difficult to measure and may require longer-term evaluation efforts.

  • Best Practice: Impacts should be defined in terms of their relevance and importance to stakeholders and should be used to guide ongoing program improvement efforts.

Step 7: Create the Logic Model Diagram

Once all of the components have been identified and defined, it is time to create the logic model diagram. This should be a visual representation of the program’s inputs, activities, outputs, outcomes, and impacts that illustrates how they are linked to one another.

  • Best Practice: The logic model diagram should be clear and easy to understand, with each component labeled and defined.

Step 8: Use the Logic Model for Program Planning, Implementation, and Evaluation

Finally, the logic model should be used to guide program planning, implementation, and evaluation efforts. It should be shared with all stakeholders to ensure that everyone has a clear understanding of the program’s goals and objectives and how they will be achieved.

  • Best Practice: The logic model should be reviewed and updated regularly to ensure that it remains relevant and useful over time.
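To recap Steps 1 through 7, the fragment below (an illustrative Python sketch; all program details are hypothetical) builds up a model in the same order and renders it as a plain-text diagram that can stand in until a proper graphic is drawn.

```python
# Sketch: assembling a logic model in the order of Steps 1-7 and rendering
# a simple text diagram. All program content below is hypothetical.
model = {}
model["goal"] = "Reduce youth unemployment in the region by 2030"          # Step 1
model["inputs"] = ["funding", "career counselors", "partner employers"]    # Step 2
model["activities"] = ["job-readiness workshops", "employer matching"]     # Step 3
model["outputs"] = ["500 youth complete workshops per year"]               # Step 4
model["outcomes"] = ["participants gain interview and job-search skills"]  # Step 5
model["impacts"] = ["higher regional youth employment rate"]               # Step 6

# Step 7: a minimal text rendering of the diagram.
order = ["inputs", "activities", "outputs", "outcomes", "impacts"]
print(f"GOAL: {model['goal']}")
print(" -> ".join(f"{name.upper()}: {'; '.join(model[name])}" for name in order))
```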

4. Using Logic Models to Guide Evaluation Planning, Implementation, and Reporting

A logic model is a visual representation of the relationships among the inputs, activities, outputs, and outcomes of a program or intervention. It can be used to guide evaluation planning, implementation, and reporting by providing a framework for understanding the logic behind the program and how it is expected to produce results.

Here are some ways in which logic models can be used to guide evaluation:

  • Planning: During the planning phase, a logic model can be used to identify the program’s goals and objectives, the activities needed to achieve those goals, and the resources required. It can also help identify potential barriers and facilitators to implementation.
  • Implementation: A logic model can help ensure that program activities are being implemented as intended. By tracking inputs and outputs, it can be determined whether the program is being implemented as planned and whether it is on track to achieve its goals.
  • Evaluation: A logic model can guide the evaluation process by helping to identify the program’s intended outcomes and how they will be measured. It can also help identify potential confounding variables that may influence the outcomes.
  • Reporting: A logic model can be used to report on the program’s progress and impact. By comparing the program’s outputs and outcomes to the original logic model, it can be determined whether the program was successful in achieving its goals.

Logic models provide a useful tool for program planning, implementation, and evaluation. By using logic models to guide these processes, it is possible to ensure that programs are being implemented effectively and efficiently and that they are producing the desired outcomes.

5. Common Challenges and Solutions in Developing and Using Logic Models in Evaluation

Developing and using logic models in evaluation can be a challenging process. Here are some common challenges and solutions to consider:

  • Challenge: Lack of stakeholder buy-in. If stakeholders are not involved in the development of the logic model or do not understand its purpose, they may not support its use in the evaluation process. Solution: Involve stakeholders in the development process and explain the purpose and benefits of using a logic model for evaluation.
  • Challenge: Overcomplicated or unrealistic models. Logic models that are too complex or unrealistic can be difficult to implement and evaluate effectively. Solution: Keep the logic model simple and focused on the most important program components. Ensure that it is based on realistic assumptions and achievable outcomes.
  • Challenge: Insufficient data. Lack of data can make it difficult to develop a logic model or to evaluate program outcomes. Solution: Collect baseline data before program implementation and ongoing data during implementation. Use multiple sources of data to validate the model and outcomes.
  • Challenge: Difficulty in identifying outcomes. Outcomes can be challenging to identify and measure, especially in complex programs. Solution: Involve stakeholders in identifying outcomes and ensure that they are realistic, measurable, and aligned with the program goals.
  • Challenge: Lack of flexibility. Logic models may need to be revised or updated as the program progresses or in response to changes in the environment. Solution: Build in flexibility to the logic model and be willing to modify it as needed to reflect changes in the program or environment.
  • Challenge: Misuse of the logic model. If the logic model is not used consistently throughout the evaluation process, it may not be effective in guiding the evaluation or communicating results. Solution: Ensure that all stakeholders understand the purpose of the logic model and how it will be used throughout the evaluation process. Train staff on its use and encourage consistent use across the organization.

By addressing these common challenges, organizations can develop and use logic models effectively in program evaluation, leading to better-informed decision-making and improved program outcomes.

6. Enhancing the Usefulness and Credibility of Logic Models: Tips for Effective Communication and Stakeholder Engagement

To enhance the usefulness and credibility of logic models, effective communication and stakeholder engagement are essential. Here are some tips to help organizations communicate their logic models effectively and engage stakeholders:

  • Use plain language: Avoid using technical jargon or acronyms that stakeholders may not understand. Use plain language to explain the logic model and its purpose.
  • Provide context: Provide stakeholders with context about the program, its goals, and its intended outcomes. This will help stakeholders better understand the logic model and its relevance to the program.
  • Use visuals: Visual aids such as diagrams, flowcharts, and infographics can help stakeholders better understand the logic model and how it relates to the program.
  • Solicit feedback: Solicit feedback from stakeholders on the logic model, including its assumptions, activities, outputs, and outcomes. This will help ensure that the logic model is accurate and reflects stakeholders’ perspectives.
  • Involve stakeholders: Involve stakeholders in the development and implementation of the logic model. This will help ensure that the logic model is relevant and useful to stakeholders and will increase stakeholder buy-in.
  • Communicate results: Communicate the results of the evaluation using the logic model. This will help stakeholders understand how the program has progressed and how it has achieved its intended outcomes.
  • Provide training: Provide training on the use of the logic model to stakeholders. This will help ensure that all stakeholders understand how to use the logic model and can communicate its importance to others.

By following these tips, organizations can effectively communicate their logic models and engage stakeholders in the evaluation process, leading to better-informed decision making and improved program outcomes.

7. Advanced Topics in Logic Modeling: Theory of Change, Program Theory, and Impact Pathways

Logic models are a useful tool for program evaluation, but there are some advanced topics that can enhance their effectiveness. Here are some advanced topics in logic modeling to consider:

  • Theory of Change: A theory of change is a framework that outlines how a program will create change or achieve its intended outcomes. It provides a roadmap for how activities and outputs will lead to outcomes and impact. A theory of change can help identify assumptions and gaps in the logic model, and can be used to guide program planning and evaluation.
  • Program Theory: Program theory is a conceptual framework that explains how a program is intended to work. It provides a detailed explanation of the underlying assumptions, logic, and mechanisms of the program. Program theory can be used to guide the development of a logic model and to help stakeholders better understand the program.
  • Impact Pathways: Impact pathways are a visual representation of how a program’s activities and outputs lead to outcomes and impact. They can be used to help stakeholders understand the sequence of events that lead to impact and to identify the key points in the program where outcomes and impact can be measured.

These advanced topics can help organizations develop more effective logic models and better understand their programs. By incorporating a theory of change, program theory, and impact pathways into the logic model, organizations can identify the underlying assumptions, mechanisms, and causal pathways of their programs. This can help guide program planning, implementation, and evaluation, leading to better-informed decision making and improved program outcomes.

8. Resources and Tools for Developing and Using Logic Models in Evaluation

Developing and using logic models in evaluation can be a complex process. Fortunately, there are many resources and tools available to help organizations create effective logic models. Here are some resources and tools to consider:

  • The Kellogg Foundation Logic Model Development Guide: This guide provides a comprehensive overview of logic models, including their purpose, components, and development process. It also includes case studies and examples.
  • The W.K. Kellogg Foundation Evaluation Handbook: This handbook provides guidance on all aspects of program evaluation, including logic model development. It includes information on how to develop a logic model, how to use it in evaluation, and how to communicate the results.
  • The CDC Framework for Program Evaluation: This framework provides a step-by-step process for conducting program evaluation, including developing a logic model. It also includes guidance on selecting evaluation methods and analyzing data.
  • The University of Wisconsin-Extension Logic Model Resources: This website provides a variety of resources for developing and using logic models, including templates, examples, and guides.
  • The Aspen Institute Program Planning and Evaluation Toolkit: This toolkit provides guidance on program planning and evaluation, including logic model development. It includes templates and worksheets to help organizations develop and use logic models.
  • The Evaluation Toolbox: This online resource provides guidance on all aspects of program evaluation, including logic model development. It includes examples, templates, and guides.

By using these resources and tools, organizations can develop effective logic models for program evaluation. These tools can help organizations identify the key components of their program, define their intended outcomes, and develop a roadmap for program planning, implementation, and evaluation.




The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects

  • PMID: 32988389
  • PMCID: PMC7523057
  • DOI: 10.1186/s13012-020-01041-8

Background: Numerous models, frameworks, and theories exist for specific aspects of implementation research, including for determinants, strategies, and outcomes. However, implementation research projects often fail to provide a coherent rationale or justification for how these aspects are selected and tested in relation to one another. Despite this need to better specify the conceptual linkages between the core elements involved in projects, few tools or methods have been developed to aid in this task. The Implementation Research Logic Model (IRLM) was created for this purpose and to enhance the rigor and transparency of describing the often-complex processes of improving the adoption of evidence-based interventions in healthcare delivery systems.

Methods: The IRLM structure and guiding principles were developed through a series of preliminary activities with multiple investigators representing diverse implementation research projects in terms of contexts, research designs, and implementation strategies being evaluated. The utility of the IRLM was evaluated in the course of a 2-day training to over 130 implementation researchers and healthcare delivery system partners.

Results: Preliminary work with the IRLM produced a core structure and multiple variations for common implementation research designs and situations, as well as guiding principles and suggestions for use. Results of the survey indicated a high utility of the IRLM for multiple purposes, such as improving rigor and reproducibility of projects; serving as a "roadmap" for how the project is to be carried out; clearly reporting and specifying how the project is to be conducted; and understanding the connections between determinants, strategies, mechanisms, and outcomes for their project.

Conclusions: The IRLM is a semi-structured, principle-guided tool designed to improve the specification, rigor, reproducibility, and testable causal pathways involved in implementation research projects. The IRLM can also aid implementation researchers and implementation partners in the planning and execution of practice change initiatives. Adaptation and refinement of the IRLM are ongoing, as is the development of resources for use and applications to diverse projects, to address the challenges of this complex scientific field.

Keywords: Integration; Logic models; Program theory; Study specification.


Figures: Implementation Research Logic Model (IRLM) Standard Form; IRLM Standard Form with Intervention.



Developing a Logic Model or Theory of Change (Community Tool Box)

Learn how to create and use a logic model, a visual representation of your initiative's activities, outputs, and expected outcomes.

This section addresses the following questions:

  • What is a logic model?
  • When can a logic model be used?
  • How do you create a logic model?
  • What makes a logic model effective?
  • What are the benefits and limitations of logic modeling?

What is a logic model?

A logic model presents a picture of how your effort or initiative is supposed to work. It explains why your strategy is a good solution to the problem at hand. Effective logic models make an explicit, often visual, statement of the activities that will bring about change and the results you expect to see for the community and its people. A logic model keeps participants in the effort moving in the same direction by providing a common language and point of reference.

More than an observer's tool, logic models become part of the work itself. They energize and rally support for an initiative by declaring precisely what you're trying to accomplish and how.

In this section, the term logic model is used as a generic label for the many ways of displaying how change unfolds.

Some other names include:

  • road map, conceptual map, or pathways map
  • mental model
  • blueprint for change
  • framework for action or program framework
  • program theory or program hypothesis
  • theoretical underpinning or rationale
  • causal chain or chain of causation
  • theory of change or model of change

Each mapping or modeling technique uses a slightly different approach, but they all rest on a foundation of logic - specifically, the logic of how change happens. By whatever name you call it, a logic model supports the work of health promotion and community development by charting the course of community transformation as it evolves.

A word about logic

The word "logic" has many definitions. As a branch of philosophy, scholars devote entire careers to its practice. As a structured method of reasoning, mathematicians depend on it for proofs. In the world of machines, the only language a computer understands is the logic of its programmer.

There is, however, another meaning that lies closer to the heart of community change: the logic of how things work. Consider, for example, the logic of rush-hour traffic. No one plans it. No one controls it. Yet, through experience and awareness of recurrent patterns, we comprehend it, and, in many cases, can successfully avoid its problems (by carpooling, taking alternative routes, etc.).

Logic in this sense refers to "the relationship between elements and between an element and the whole." All of us have a great capacity to see patterns in complex phenomena. We see systems at work and find within them an inner logic, a set of rules or relationships that govern behavior. Working alone, we can usually discern the logic of a simple system. And by working in teams, persistently over time if necessary, there is hardly any system past or present whose logic we can't decipher.

On the flip side, we can also project logic into the future. With an understanding of context and knowledge about cause and effect, we can construct logical theories of change, hypotheses about how things will unfold either on their own or under the influence of planned interventions. Like all predictions, these hypotheses are only as good as their underlying logic. Magical assumptions, poor reasoning, and fuzzy thinking increase the chances that despite our efforts, the future will turn out differently than we expect or hope. On the other hand, some events that seem unexpected to the uninitiated will not be a surprise to long-time residents and careful observers.

The challenge for a logic modeler is to find and accurately represent the wisdom of those who know best how community change happens.

The logic in logic modeling

Like a road map, a logic model shows the route traveled (or steps taken) to reach a certain destination. A detailed model indicates precisely how each activity will lead to desired changes. Alternatively, a broader plan sketches out the chosen routes and how far you will go. This road map aspect of a logic model reveals what causes what, and in what order. At various points on the map, you may need to stop and review your progress and make any necessary adjustments.

A logic model also expresses the thinking behind an initiative's plan. It explains why the program ought to work, why it can succeed where other attempts have failed. This is the "program theory" or "rationale" aspect of a logic model. By defining the problem or opportunity and showing how intervention activities will respond to it, a logic model makes the program planners' assumptions explicit.

The form that a logic model takes is flexible and does not have to be linear (unless your program's logic is itself linear). Flow charts, maps, or tables are the most common formats. It is also possible to use a network, concept map, or web to describe the relationships among more complex program components. Models can even be built around cultural symbols that describe transformation, such as the Native American medicine wheel, if the stakeholders feel it is appropriate. See the "Generic Model for Disease/Injury Control and Prevention" in the Examples section for an illustration of how the same information can be presented in a linear or nonlinear format.

Whatever form you choose, a logic model ought to provide direction and clarity by presenting the big picture of change along with certain important details. Let's illustrate the typical components of a logic model, using as an example a mentoring program in a community where the high-school dropout rate is very high. We'll call this program "On Track."

  • Purpose, or mission. What motivates the need for change? This can also be expressed as the problems or opportunities that the program is addressing. (For On Track, advocates in the community focused on the mission of enhancing healthy youth development in order to reduce the high-school dropout rate.)
  • Context, or conditions. What is the climate in which change will take place? (How will new policies and programs for On Track be aligned with existing ones? What trends compete with the effort to engage youth in positive activities? What is the political and economic climate for investing in youth development?)
  • Inputs, or resources and infrastructure. What raw materials will be used to conduct the effort or initiative? (In On Track, these materials are a coordinator and volunteers for the mentoring program, agreements with participating school districts, and the endorsement of parent groups and community agencies.) Inputs can also include constraints on the program, such as regulations or funding gaps, which are barriers to your objectives.
  • Activities, or interventions. What will the initiative do with its resources to direct the course of change? (In our example, the program will train volunteer mentors and refer young people who might benefit from a mentor.) Your intervention, and thus your logic model, should be guided by a clear analysis of risk and protective factors.
  • Outputs. What evidence is there that the activities were performed as planned? (Indicators might include the number of mentors trained and youth referred, and the frequency, type, duration, and intensity of mentoring contacts.)
  • Effects, or results, consequences, outcomes, or impacts. What kinds of changes came about as a direct or indirect effect of the activities? (Two examples are bonding between adult mentors and youth and increased self-esteem among youth.)

Putting these elements together graphically gives the following basic structure for a logic model. The arrows between the boxes indicate that review and adjustment are an ongoing process - both in enacting the initiative and developing the model.

Using this generic model as a template, let's fill in the details with another example of a logic model, one that describes a community health effort to prevent tuberculosis.

Remember, although this example uses boxes and arrows, you and your partners in change can use any format or imagery that communicates more effectively with your stakeholders. As mentioned earlier, the generic model for Disease/Injury Control and Prevention in Examples depicts the same relationship of activities and effects in a linear and a nonlinear format. The two formats helped communicate with different groups of stakeholders and made different points. The linear model better guided discussions of cause and effect and how far down the chain of effects a particular program was successful. The circular model more effectively depicted the interdependence of the components to produce the intended effects.

When exploring the results of an intervention, remember that there can be long delays between actions and their effects. Also, certain system changes can trigger feedback loops, which further complicate and delay our ability to see all the effects. (A definition from the System Dynamics Society might help here: "Feedback refers to the situation of X affecting Y and Y in turn affecting X perhaps through a chain of causes and effects. One cannot study the link between X and Y and, independently, the link between Y and X and predict how the system will behave. Only the study of the whole system as a feedback system will lead to correct results.")

For these reasons, logic models indicate when to expect certain changes. Many planners like to use the following three categories of effects (illustrated in the models above), although you may choose to have more or fewer depending on your situation. A brief sketch after the list shows one way to record these tiers for On Track.

  • Short-term or immediate effects. (In the On Track example, this would be that young people who participate in mentoring improve their self-confidence and understand the importance of staying in school.)
  • Mid-term or intermediate effects. (Mentored students improve their grades and remain in school.)
  • Longer-term or ultimate effects. (High school graduation rates rise, thus giving graduates more employment opportunities, greater financial stability, and improved health status.)
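
As a final sketch (Python, illustrative only), the three tiers of effects for On Track could be recorded with explicit time horizons so an evaluation plan knows when to look for each change. The time frames below are hypothetical, not from the original program.

```python
# Sketch: On Track's expected effects grouped by when they should appear.
# The time horizons are hypothetical examples.
effects = {
    "short-term (0-1 years)": [
        "mentored youth improve self-confidence",
        "mentored youth understand the importance of staying in school",
    ],
    "mid-term (1-3 years)": [
        "mentored students improve their grades",
        "mentored students remain in school",
    ],
    "long-term (3+ years)": [
        "high-school graduation rates rise",
        "graduates see more employment opportunities and improved health status",
    ],
}

for horizon, changes in effects.items():
    print(horizon)
    for change in changes:
        print(f"  - {change}")
```
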
Here are two important notes about constructing and refining logic models.

Outcome or impact? Clarify your language. In a collaborative project, it is wise to anticipate confusion over language. If you understand the basic elements of a logic model, any labels can be meaningful provided stakeholders agree to them. In the generic and TB models above, we called the effects short-, mid-, and long-term. It is also common to hear people talk about effects that are "upstream" or "proximal" (near to the activities) versus "downstream" or "distal" (distant from the activities). Because disciplines have their own jargon, stakeholders from two different fields might define the same word in different ways. Some people are trained to call the earliest effects "outcomes" and the later ones "impacts." Other people are taught the reverse: "impacts" come first, followed by "outcomes." The idea of sequence is the same regardless of which terms you and your partners use. The main point is to clearly show connections between activities and effects over time, thus making explicit your initiative's assumptions about what kinds of change to expect and when. Try to define essential concepts at the design stage and then be consistent in your use of terms. The process of developing a logic model supports this important dialogue and will bring potential misunderstandings into the open.

For good or for ill? Understand effects. While the starting point for logic modeling is to identify the effects that correspond to stated goals, your intended effects are not the only effects to watch for. Any intervention capable of changing problem behaviors or altering conditions in communities can also generate unintended effects: changes that no one plans and that might somehow make the problem worse. Many times our efforts to solve a problem lead to surprising, counterintuitive results. There is always a risk that our "cure" could be worse than the "disease" if we're not careful. Part of the added value of logic modeling is that the process creates a forum for scrutinizing big leaps of faith and searching for unintended effects. (See the discussion of simulation in "What makes a logic model effective?" for some thoughts on how to do this in a disciplined manner.) One of the greatest rewards for the extra effort is the ability to spot potential problems and redesign an initiative (and its logic model) before the unintended effects get out of hand, so that the model truly depicts activities that will plausibly produce the intended effects.

Choosing the right level of detail: the importance of utility and simplicity

It may help at this point to consider what a logic model is not. Although it captures the big picture, it is not an exact representation of everything that's going on. All models simplify reality; if they didn't, they wouldn't be of much use.

Even though it leaves out information, a good model represents those aspects of an initiative that, in the view of your stakeholders, are most important for understanding how the effort works. In most cases, the developers will go through several drafts before arriving at a version that the stakeholders agree accurately reflects their story.

Should the information become overly complex, it is possible to create a family of related models, or nested models, each capturing a different level of detail. One model could sketch out the broad pathways of change, whereas others could elaborate on separate components, revealing detailed information about how the program operates on a deeper level. Individually, each model conveys only essential information, and together they provide a more complete overview of how the program or initiative functions. (See "How do you create a logic model?" for further details.)

Imagine "zooming-in" on the inner workings of a specific component and creating another, more detailed model just for that part. For a complex initiative, you may choose to develop an entire family of such related models that display how each part of the effort works, as well as how all the parts fit together. In the end, you may have some or all of the following family of models, each one differing in scope:

  • View from Outer Space. This overall road map shows the major pathways of change and the full spectrum of effects. This view answers questions such as: Do the activities follow a single pathway, or are there separate pathways that converge down the line? How far does the chain of effects go? How do our program activities align with those of other organizations? What other forces might influence the effects that we hope to see? Where can we anticipate feedback loops, and in what direction will they travel? Are there significant time delays between any of the connections?
  • View from the Mountaintop. This closer view focuses on a specific component or set of components, yet it is still broad enough to describe the infrastructure, activities, and full sequence of effects. This view answers the same questions as the view from outer space, but with respect to just the selected component(s).
  • You Are Here. This view expands on a particular part of the sequence, such as the roles of different stakeholders, staff, or agencies in a coalition, and functions like a flow chart for someone's work plan. It is a specific model that outlines routine processes and anticipated effects. This is the view that you might need to understand quality control within the initiative.
Families, Nesting, and Zooming-In

In the Examples section, the idea of nested models is illustrated in the Tobacco Control family of models. It includes a global model that encompasses three intermediate outcomes in tobacco control - environments without tobacco smoke, reduced smoking initiation among youth, and increased cessation among youth and adults. A zoom-in model is then elaborated for each of these intermediate outcomes. The Comprehensive Cancer model illustrates a generic logic model accompanied by a zoom-in on the activities to give program staff the specific details they need. Notably, the intended effects on the zoom-in are identical to those on the global model, and all major categories of activities are also apparent. But the zoom-in unpacks these activities into their detailed components and, more importantly, indicates that the activities achieve their effects by influencing intermediaries who then move gatekeepers to take action. This level of detail is necessary for program staff, but it may be too much for discussions with funders and stakeholders.

The Diabetes Control model is another good example of a family of models. In this case, the zoom-in models are very similar to the global model in level of detail. They add value by translating the global model into a plan for specific actors (in this case, a state diabetes control program) or for specific objectives (e.g., increasing timely foot exams).
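To make the nesting idea concrete, here is a minimal sketch, in Python, of one way a family of models might be kept so that each zoom-in elaborates a single outcome of the global model. The structure, field names, and the check at the end are illustrative assumptions, not a standard format; the three outcomes come from the Tobacco Control example above, while the zoom-in details are invented.

```python
# A minimal, hypothetical sketch of a "family" of nested logic models.
# The global model names the broad outcomes; each zoom-in elaborates one of them.
global_model = {
    "name": "Tobacco Control (global view)",
    "intermediate_outcomes": [
        "environments without tobacco smoke",
        "reduced smoking initiation among youth",
        "increased cessation among youth and adults",
    ],
}

# One zoom-in per intermediate outcome, carrying the detail the global view omits
# (specific activities, actors, output indicators). Entries here are invented.
zoom_ins = {
    "increased cessation among youth and adults": {
        "activities": ["quitline promotion", "provider training"],
        "output_indicators": ["quitline call volume", "providers trained"],
    },
    # ... additional zoom-ins would elaborate the other two intermediate outcomes
}

def unmatched_zoom_ins(global_model, zoom_ins):
    """Return zoom-in models that do not correspond to an outcome in the global model."""
    return [name for name in zoom_ins if name not in global_model["intermediate_outcomes"]]

print(unmatched_zoom_ins(global_model, zoom_ins))  # [] means every zoom-in maps onto the global view
```

Keeping the two levels linked this way mirrors the principle noted above: the zoom-in adds detail without changing the intended effects shown in the global model.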

Logic models are useful for both new and existing programs and initiatives. If your effort is being planned, a logic model can help get it off to a good start. Alternatively, if your program is already under way, a model can help you describe, modify or enhance it.

Planners, program managers, trainers, evaluators, advocates and other stakeholders can use a logic model in several ways throughout an initiative. One model may serve more than one purpose, or it may be necessary to create different versions tailored for different aims. Here are examples of the various times that a logic model could be used.

During planning to:

  • clarify program strategy
  • identify appropriate outcome targets (and avoid over-promising)
  • align your efforts with those of other organizations
  • write a grant proposal or a request for proposals
  • assess the potential effectiveness of an approach
  • set priorities for allocating resources
  • estimate timelines
  • identify necessary partnerships
  • negotiate roles and responsibilities
  • focus discussions and make planning time more efficient

During implementation to:

  • provide an inventory of what you have and what you need to operate the program or initiative
  • develop a management plan
  • incorporate findings from research and demonstration projects
  • make mid-course adjustments
  • reduce or avoid unintended effects

During staff and stakeholder orientation to:

  • explain how the overall program works
  • show how different people can work together
  • define what each person is expected to do
  • indicate how one would know if the program is working

During evaluation to:

  • document accomplishments
  • organize evidence about the program
  • identify differences between the ideal program and its real operation
  • determine which concepts will (and will not) be measured
  • frame questions about attribution (of cause and effect) and contribution (of initiative components to the outcomes)
  • specify the nature of questions being asked
  • prepare reports and other media
  • tell the story of the program or initiative

During advocacy to:

  • justify why the program will work
  • explain how resource investments will be used

There is no single way to create a logic model. Think of it as something to be used, its form and content governed by the users' needs.

Who creates the model? This depends on your situation. The same people who will use the model - planners, program managers, trainers, evaluators, advocates and other stakeholders - can help create it. For practical reasons, though, you will probably start with a core group, and then take the working draft to others for continued refinement.

Remember that your logic model is a living document, one that tells the story of your efforts in the community. As your strategy changes, so should the model. On the other hand, while developing the model you might see new pathways that are worth exploring in real life.

Two main development strategies are usually combined when constructing a logic model.

  • Moving forward from the activities (also known as forward logic). This approach explores the rationale for activities that are proposed or currently under way. It is driven by But why? questions or If-then thinking: But why should we focus on briefing legislators? But why do we need them to better understand the issues affecting kids? But why would they create policies and programs to support mentoring? But why would new policies make a difference?... and so on. That same line of reasoning could also be uncovered using if-then statements: If we focus on briefing legislators, then they will better understand the issues affecting kids. If legislators understand, then they will enact new policies...
  • Moving backward from the effects (also known as reverse logic). This approach begins with the end in mind. It starts with a clearly identified value, a change that you and your colleagues would definitely like to see occur, and asks a series of "But how?" questions: But how do we overcome fear and stigma? But how can we ensure our services are culturally competent? But how can we admit that we don't already know what we're doing?
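Both approaches trace the same underlying chain of logic, just read in opposite directions. The short sketch below, written in Python with an invented mentoring-example chain, is only meant to show that equivalence; the statements themselves are placeholders, not part of any particular program.

```python
# One hypothetical chain of program logic, ordered from activity to long-term effect.
chain = [
    "we brief legislators on the issues affecting kids",
    "legislators better understand those issues",
    "new policies and programs support mentoring",
    "more kids have caring adult mentors",
]

# Forward logic: "if X, then Y", reading from activities toward effects.
for earlier, later in zip(chain, chain[1:]):
    print(f"If {earlier}, then {later}.")

# Reverse logic: "but how?", reading from the desired effect back toward activities.
rev = list(reversed(chain))
for later, earlier in zip(rev, rev[1:]):
    print(f"{later}? But how? -> {earlier}")
```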

At first, you may not agree with the answers that certain stakeholders give for these questions. Their logic may not seem convincing or even logical. But therein lies the power of logic modeling. By making each stakeholder's thinking visible on paper, you can decide as a group whether the logic driving your initiative seems reasonable. You can talk about it, clarify misinterpretations, ask for other opinions, check the assumptions, compare them with research findings, and in the end develop a solid system of program logic. This product then becomes a powerful tool for planning, implementation, orientation, evaluation, and advocacy, as described above.

By now you have probably guessed that there is not a rigid step-by-step process for developing a logic model. Like the rest of community work, logic modeling is an ongoing process. Nevertheless, there are a few tasks you should be sure to accomplish.

To illustrate these in action, we'll use another example for an initiative called "HOME: Home Ownership Mobilization Effort." HOME aims to increase home ownership in order to give neighborhood control to the people who live there, rather than to outside landlords with no stake in the community. It does this through a combination of educating community residents, organizing the neighborhood, and building relationships with partners such as businesses.

Steps for drafting a logic model

  • Available written materials often contain more than enough information to get started. Collect narrative descriptions, justifications, grant applications, or overview documents that explain the basic idea behind the intervention effort. If your venture involves a coalition of several organizations, be sure to get descriptions from each agency's point of view. For the HOME campaign, we collected documents from planners who proposed the idea, as well as mortgage companies, homeowner associations, and other neighborhood organizations.
  • Your job as a logic modeler is to decode these documents. Keep a piece of paper by your side and sketch out the logical links as you find them. (This work can be done in a group to save time and engage more people if you prefer.)
  • Read each document with an eye for the logical structure of the program. Sometimes that logic will be clearly spelled out (e.g., The information, counseling, and support services we provide to community residents will help them improve their credit rating, qualify for home loans, purchase homes in the community; over time, this program will change the proportion of owner-occupied housing in the neighborhood).
  • Other times the logic will be buried in vague language, with big leaps from actions to downstream effects (e.g., Ours is a comprehensive community-based program that will transform neighborhoods, making them controlled by people who live there and not outsiders with no stake in the community).
  • As you read each document, ask yourself the But why? and But how? questions. See if the writing provides an answer. Pay close attention to parts of speech. Verbs such as teach, inform, support, or refer are often connected to descriptions of program activities. Adjectives like reduced, improved, higher, or better are often used when describing expected effects.
  • The HOME initiative , for instance, created different models to address the unique needs of their financial partners, program managers, and community educators. Mortgage companies, grant makers, and other decision makers who decided whether to allocate resources for the effort found the global view from space most helpful for setting context. Program managers wanted the closer, yet still broad view from the mountaintop. And community educators benefited most from the you are here version. The important thing to remember is that these are not three different programs, but different ways of understanding how the same program works.
  • Logic models convey the story of community change. Working with the stakeholders, it's your responsibility to ensure that the story you've told in your draft makes sense (i.e., is logical) and is complete (has no loose ends). As you iteratively refine the model, ask yourself and others if it captures the full story. For the HOME initiative, for example, the intended effects included the following:
  • Short-term - Potential home owners attain greater understanding of how credit ratings are calculated and more accurate information about the steps to improve a credit rating; mortgage companies create new policies and procedures allowing renters to buy their own homes; local businesses start incentive programs; and anti-discrimination lawsuits are filed against illegal lending practices.
  • Mid-term - The community's average credit rating improves; applications rise for home loans along with the approval rate; support services are established for first-time home buyers; neighborhood organizing gets stronger, and alliances expand to include businesses, health agencies, and elected officials.
  • Longer-term - The proportion of owner-occupied housing rises; economic revitalization takes off as businesses invest in the community; residents work together to create walking trails, crime patrols, and fire safety screenings; rates of obesity, crime, and injury fall dramatically.
  • An advantage of the graphic model is that it can display both the sequence and the interactions of effects. For example, in the HOME model, credit counseling leads to better understanding of credit ratings, while loan assistance leads to more loan submissions, but the two together (plus other activities such as more new buyer programs) are needed for increased home ownership.
  • Drama (activities, interventions). How will obstacles be overcome? Who is doing what? What kinds of conflict and cooperation are evident? What's being done to re-arrange the forces of change? What new services or conditions are being introduced? Your activities, based on a clear analysis of risk and protective factors, are the answers to these kinds of questions. Your interventions reveal the drama in your story of directed social change.

Dramatic actions in the HOME initiative include offering educational sessions and forming business alliances, homeowner support groups, and a neighborhood organizing council. At evaluation time, each of these actions is closely connected to output indicators that document whether the program is on track and how fast it is moving. These outputs could be the number of educational sessions held, their average attendance, the size of the business alliance, etc. (These outputs are not depicted in the global model, but that could be done if valuable for users.)
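As a rough illustration, and not part of the HOME materials themselves, output indicators like these can be tracked next to each activity so that progress is visible at a glance. The targets, counts, and field names in this Python sketch are invented.

```python
# Hypothetical tracking of HOME activities against their output indicators.
outputs = [
    {"activity": "educational sessions", "indicator": "sessions held",      "target": 24, "actual": 18},
    {"activity": "educational sessions", "indicator": "average attendance", "target": 15, "actual": 17},
    {"activity": "business alliance",    "indicator": "member businesses",  "target": 10, "actual": 6},
]

for row in outputs:
    status = "on track" if row["actual"] >= row["target"] else "behind"
    print(f"{row['activity']} - {row['indicator']}: {row['actual']}/{row['target']} ({status})")
```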

  • Raw Materials (inputs, resources, or infrastructure). The energy to create change can't come from nothing. Real resources must come into the system. Those resources may be financial, but they may also include people, space, information, technology, equipment, and other assets. The HOME campaign runs because of the input from volunteer educators, support from schools and faith institutions in the neighborhood, discounts provided by lenders and local businesses, revenue from neighborhood revitalization, and increasing social capital among community residents.
  • Stakeholders working on the HOME campaign understood that they were challenging a history of racial discrimination and economic injustice. They saw gentrification occurring in nearby neighborhoods. They were aware of backlash from outside property owners who benefit from the status quo. None of these facts are included in the model per se, but a shaded box labeled History and Context was added to serve as a visual reminder that these things are in the background.
  • Draft the logic model using both sides of your brain and all the talents of your stakeholders. Use your artistic and your analytic abilities.
  • Arrange activities and intended effects in the expected time sequence. And don't forget to include important feedback loops - after all, most actions provoke a reaction.
  • Link components by drawing arrows or using other visual methods that communicate the order of activities and effects. (Remember - the model does not have to be linear or read from left to right, top to bottom. A circle may better express a repeating cycle.)
  • Allow yourself plenty of space to develop the model. Freely revise the picture to show the relationships better or to add components.
  • Neatness counts, so avoid overlapping lines and unnecessary clutter.
  • Color code regions of the model to help convey the main storyline.
  • Try to keep everything on one page. When the model gets too crowded, either adjust its scope or build nested models.
  • Make sure it passes the "laugh test." That is, be sure that the image you create isn't so complex that it provokes an immediate laugh from stakeholders. Of course, different stakeholders will have different laugh thresholds.
  • Use PowerPoint or other computer software to animate the model, building it step-by-step so that when you present it to people in an audience, they can follow the logic behind every connection.
  • Don't let your model become a tedious exercise that you did just to satisfy someone else. Don't let it sit in a drawer. Once you've gone through the effort of creating a model, the rewards are in its use. Revisit it often and be prepared to make changes. All programs evolve and change through time, if only to keep up with changing conditions in the community. Like a roadmap, a good model will help you to recognize new or reinterpret old territory.
  • Also, when things are changing rapidly, it's easy for collaborators to lose sight of their common goals. Having a well-developed logic model can keep stakeholders focused on achieving outcomes while remaining open to finding the best means for accomplishing the work. If you need to take a detour or make a longer stop, the model serves as a framework for incorporating the change. Depending on what has changed, you may need to:
  • Clarify the path of activities to effects and outcomes
  • Elaborate links
  • Expand activities to reach your goals
  • Establish or revise mile markers
  • Redefine the boundary of your initiative or program
  • Reframe goals or desired outcomes

You will know a model's effectiveness mainly by its usefulness to intended users. A good logic model usually:

  • Logically links activities and effects
  • Is visually engaging (simple, parsimonious) yet contains the appropriate degree of detail for the purpose (not too simple or too confusing)
  • Provokes thought, triggers questions
  • Includes forces known to influence the desired outcomes

The more complete your model, the better your chances of reaching "the promised land" of the story. In order to tell a complete story or present a complete picture in your model, make sure to consider all forces of change (root causes, trends, and system dynamics). Does your model reveal assumptions and hypotheses about the root causes and feedback loops that contribute to problems and their solutions?

In the HOME model, for instance, low home ownership persists when there is a vicious cycle of discrimination, bad credit, and hopelessness preventing neighborhood-wide organizing and social change. Three pathways of change were proposed to break that cycle: education; business reform; and neighborhood organizing. Building a model on one pathway to address only one force would limit the program's effectiveness.

You can discover forces of change in your situation using multiple assessment strategies, including forward logic and reverse logic as described above. When exploring forces of change, be sure to search for personal factors (knowledge, belief, skills) as well as environmental factors (barriers, opportunities, support, incentives) that keep the situation the same as well as ones that push for it to change.

Take time to simulate

After you've mapped out the structure of a program strategy, there is still another crucial step to take before taking action: some kind of simulation. As logical as the story you are telling seems to you, as a plan for intervention it runs the risk of failure if you haven't explored how things might turn out in the real world of feedback and resistance. Simulation is one of the most practical ways to find out whether a seemingly sensible plan will actually play out as you hope.

Simulation is not the same as testing a model with stakeholders to see if it makes logical sense. The point of a simulation is to see how things will change - how the system will behave - through time and under different conditions. Though simulation is a powerful tool, it can be conducted in ways ranging from the simple to the sophisticated. Simulation can be as straightforward as an unstructured role-playing game, in which you talk the model through to its logical conclusions. In a more structured simulation, you could develop a tabletop exercise in which you proceed step by step through a given scenario with pre-defined roles and responsibilities for the participants. Ultimately, you could create a computer-based mathematical simulation using any number of available software tools. The key point to remember is that creating logic models and simulating how those models will behave involve two different sets of skills, both of which are essential for discovering which change strategies will be effective in your community.
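At the simplest computational end, a simulation can amount to stepping the model's chain of effects forward in time under assumed rates, including a feedback loop. The Python sketch below uses made-up numbers for one HOME pathway; it is meant only to show the kind of question a simulation answers (how the system behaves over time), not to suggest actual rates or a particular tool.

```python
# Toy simulation of one HOME pathway: credit counseling -> loan applications -> new home owners.
# All starting values and rates are invented for illustration.
renters, owners = 900, 100
application_rate = 0.05   # share of counseled renters who apply for a loan each year
approval_rate = 0.40      # share of applications that are approved

for year in range(1, 6):
    applications = renters * application_rate
    new_owners = applications * approval_rate
    renters -= new_owners
    owners += new_owners
    # Crude feedback loop: visible success encourages more applications the next year.
    application_rate *= 1.10
    print(f"Year {year}: owners ~ {owners:.0f}, renters ~ {renters:.0f}")
```

Even a toy run like this can surface questions that a static diagram hides, such as how long the chain takes to produce visible change and what happens if the feedback loop is weaker than hoped.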

You can probably envision a variety of ways in which you might use the logic model you've developed, or ways in which logic modeling could benefit your work.

Here are a few advantages that experienced modelers have discovered.

  • Logic models integrate planning, implementation, and evaluation. As a detailed description of your initiative, from resources to results, the logic model is equally important for planning, implementing, and evaluating the project. If you are a planner, the modeling process challenges you to think more like an evaluator. If your purpose is evaluation, the modeling prompts discussion of planning. And for those who implement, the modeling answers practical questions about how the work will be organized and managed.
  • Logic models prevent mismatches between activities and effects. Planners often summarize an effort by listing its vision, mission, objectives, strategies and action plans . Even with this information, it can be hard to tell how all the pieces fit together. By connecting activities and effects, a logic model helps avoid proposing activities with no intended effect, or anticipating effects with no supporting activities. The ability to spot such mismatches easily is perhaps the main reason why so many logic models use a flow chart format.
  • Logic models leverage the power of partnerships. As the W.K. Kellogg Foundation notes (see Internet Resources below), refining a logic model is an iterative or repeating process that allows participants to "make changes based on consensus-building and a logical process rather than on personalities, politics, or ideology. The clarity of thinking that occurs from the process of building the model becomes an important part of the overall success of the program." With a well-specified logic model, it is possible to note where the baton should be passed from one person or agency to another. This enhances collaboration and guards against things falling through the cracks.
  • Logic models enhance accountability by keeping stakeholders focused on outcomes. As Connie Schmitz and Beverly Parsons point out (see Internet Resources), a list of action steps usually functions as a manager's guide for running a project, showing what staff or others need to do--for example, "Hire an outreach worker for a TB clinic." With a logic model, however, it is also possible to illustrate the effects of those tasks--for example, "Hiring an outreach worker will result in a greater proportion of clients coming into the clinic for treatment." This short-term effect then connects to mid- and longer-term effects, such as "Satisfied clients refer others to the clinic" and "Improved screening and treatment coverage results in fewer deaths due to TB."

In a coalition or collaborative partnership, the logic model makes it clear which effects each partner creates and how all those effects converge to a common goal. The family or nesting approach works well in a collaborative partnership because a model can be developed for each objective along a sequence of effects, thereby showing layers of contributions and points of intersection.

  • Logic models help planners to set priorities for allocating resources . A comprehensive model will reveal where physical, financial, human, and other resources are needed. When planners are discussing options and setting priorities, a logic model can help them make resource-related decisions in light of how the program's activities and outcomes will be affected.
  • Logic models reveal data needs and provide a framework for interpreting results. It is possible to design a documentation system that includes only beginning and end measurements. This is a risky strategy with a good chance of yielding disappointing results. An alternative approach calls for tracking changes at each step along the planned sequence of effects. With a logic model, program planners can identify intermediate effects and define measurable indicators for them.
  • Logic models enhance learning by integrating research findings and practice wisdom . Most initiatives are founded on assumptions about the behaviors and conditions that need to change, and how they are subject to intervention. Frequently, there are different degrees of certainty about those assumptions. For example, some of the links in a logic model may have been tested and proved to be sound through previous research. Other linkages, by contrast, may never have been researched, indeed may never have been tried or thought of before. The explicit form of a logic model means that you can combine evidence-based practices from prior research with innovative ideas that veteran practitioners believe will make a difference. If you are armed with a logic model, it won't be easy for critics to claim that your work is not evidence-based.
  • Logic models define a shared language and shared vision for community change . The terms used in a model help to standardize the way people think and how they speak about community change. It gets everyone rowing in the same direction, and enhances communication with external audiences, such as the media or potential funders. Even stakeholders who are skeptical or antagonistic toward your work can be drawn into the discussion and development of a logic model. Once you've got them talking about the logical connections between activities and effects, they're no longer criticizing from the sidelines. They'll be engaged in problem-solving and they'll be doing so in an open forum, where everyone can see their resistance to change or lack of logic if that's the case.

Limitations

Any tool this powerful must not be approached lightly. When you undertake the task of developing a logic model, be aware of the following challenges and limitations.

First, no matter how logical your model seems, there is always a danger that it will not be correct. The world sometimes works in surprising, counter-intuitive ways, which means we may not comprehend the logic of change until after the fact. With this in mind, modelers will appreciate the fact that the real effects of intervention actions could differ from the intended effects. Certain actions might even make problems worse, so it's important to keep one eye on the plan and another focused on the real-life experiences of community members.

If nothing else, a logic model ought to be logical. Therein lies its strength and its weakness. Those who are trying to follow your logic will magnify any inconsistency or inaccuracy. This places a high burden on modelers to pay attention to detail and refine their own thinking to a great degree. Of course, no model can be perfect. You'll have to decide, on the basis of stakeholders' uses, what level of precision is required.

Establishing the appropriate boundaries of a logic model can be a difficult challenge. In most cases, there is a tension between focusing on a specific program and situating that effort within its broader context. Many models seem to suggest that the only forces of change come from within the program in question, as if there is only one child in the sandbox.

At the other extreme, it would be ridiculous and unproductive to map all the simultaneous forces of change that affect health and community development. A modeler's challenge is to include enough depth so the organizational context is clear, without losing sight of the reasons for developing a logic model in the first place.

On a purely practical level, logic modeling can also be time consuming, requiring much energy in the beginning and continued attention throughout the life of an initiative. The process can demand a high degree of specificity; it risks oversimplifying complex relationships and relies on the skills of graphic artists to convey complex thought processes.

Indeed, logic models can be very difficult to create, but the process of creating them, as well as the product, will yield many benefits over the course of an initiative.

A logic model is a story or picture of how an effort or initiative is supposed to work. The process of developing the model brings together stakeholders to articulate the goals of the program and the values that support it, and to identify strategies and desired outcomes of the initiative.

As a means to communicate a program visually, within your coalition or work group and to external audiences, a logic model provides a common language and reference point for everyone involved in the initiative.

A logic model is useful for planning, implementing and evaluating an initiative. It helps stakeholders agree on short-term as well as long-term objectives during the planning process, outline activities and actors, and establish clear criteria for evaluation during the effort. When the initiative ends, it provides a framework for assessing overall effectiveness of the initiative, as well as the activities, resources, and external factors that played a role in the outcome.

To develop a model, you will probably use both forward and reverse logic. Working backwards, you begin with the desired outcomes and then identify the strategies and resources that will accomplish them. Combining this with forward logic, you will choose certain steps to produce the desired effects.

You will probably revise the model periodically, and that is precisely one advantage to using a logic model. Because it relates program activities to their effect, it helps keep stakeholders focused on achieving outcomes, while it remains flexible and open to finding the best means to enact a unique story of change.

Online Resources

The Community Builder’s Approach to Theory of Change: A Practical Guide to Theory Development , from The Aspen Institute’s Roundtable on Community Change.

A concise definition by Connie C. Schmitz and Beverly A. Parsons .

The CDC Evaluation Working Group provides a linked section on  logic models  in its resources for project evaluation.

The Evaluation Guidebook for Projects Funded by S.T.O.P. Formula Grants under the Violence Against Women Act  includes a chapter on developing and using a logic model (Chapter 2), and additional examples of models in the "Introduction to the Resource Chapters."

A logic model from Harvard  that uses a family/school partnership program.

Excerpts from United Way's publication on Measuring Program Outcomes. See especially "Program Outcome Model."

Logic Model Magic Tutorial from the CDC - this tutorial will provide you with information and resources to assist you as you plan and develop a logic model to describe your program and help guide program evaluation. You will have opportunities to interact with the material, and you can proceed at your own pace, reviewing where you need to or skipping to sections of your choice.

Tara Gregory on Using Storytelling to Help Organizations Develop Logic Models discusses techniques to facilitate creative discussion while still attending to the elements in a traditional logic model. These processes encourage participation by multiple staff, administrators and stakeholders and can use the organization’s vision or impact statement as the “happily ever after.”

Theory of Change: A Practical Tool for Action, Results and Learning , prepared by Organizational Research Services.

Theories of Change and Logic Models: Telling Them Apart  is a helpful PowerPoint presentation saved as a PDF. It’s from the Aspen Institute Roundtable on Community Change.

University of Wisconsin’s Program Development and Evaluation  provides a comprehensive template for a logic model and elaborates on creating and developing logic models.

The U.S. Centers for Disease Control Evaluation Group  provides links to a variety of logic model resources.

The W.K. Kellogg Foundation Logic Model Development Guide  is a comprehensive source for background information, examples and templates (Adobe Acrobat format).

Print Resources

American Cancer Society (1998).  Stating outcomes for American Cancer Society programs: a handbook for volunteers and staff . Atlanta, GA, American Cancer Society.

Julian, D. (1997). The utilization of the logic model as a system level planning and evaluation device.  Evaluation and Program Planning  20(3): 251-257.

McEwan, K., &  Bigelow, A. (1997).  Using a logic model to focus health services on population health goals . Canadian Journal of Program Evaluation 12(1): 167-174.

McLaughlin, J., & Jordan, B. (1999). Logic models: a tool for telling your program's performance story.  Evaluation and Program Planning  22(1): 65-72.

Moyer, A., Verhovsek, et al. (1997). Facilitating the shift to population-based public health programs: innovation through the use of framework and logic model tools.  Canadian Journal of Public Health  88(2): 95-98.

Rush, B. & Ogbourne, A. (1991). Program logic models: expanding their role and structure for program planning and evaluation.  Canadian Journal of Program Evaluation  6: 95-106.

Taylor-Powell, E., Rossing, B., et al. (1998).  Evaluating collaboratives: reaching the potential . Madison, WI, University of Wisconsin Cooperative Extension.

United Way of America (1996).  Measuring program outcomes: a practical approach . Alexandria, VA, United Way of America.

Western Center for the Application of Prevention Technologies. (1999)  Building a Successful Prevention Program . Reno, NV, Western Center for the Application of Prevention Technologies.

Wong-Reiger, D., & David, L. (1995). Using program logic models to plan and evaluate education and prevention programs. In  Love, A. Ed.  Evaluation Methods Sourcebook II.  Ottawa, Ontario, Canadian Evaluation Society.

Open access | Published: 25 September 2020

The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects

Justin D. Smith (ORCID: orcid.org/0000-0003-3264-8082), Dennis H. Li & Miriam R. Rafferty

Implementation Science, volume 15, Article number: 84 (2020)



Numerous models, frameworks, and theories exist for specific aspects of implementation research, including for determinants, strategies, and outcomes. However, implementation research projects often fail to provide a coherent rationale or justification for how these aspects are selected and tested in relation to one another. Despite this need to better specify the conceptual linkages between the core elements involved in projects, few tools or methods have been developed to aid in this task. The Implementation Research Logic Model (IRLM) was created for this purpose and to enhance the rigor and transparency of describing the often-complex processes of improving the adoption of evidence-based interventions in healthcare delivery systems.

The IRLM structure and guiding principles were developed through a series of preliminary activities with multiple investigators representing diverse implementation research projects in terms of contexts, research designs, and implementation strategies being evaluated. The utility of the IRLM was evaluated in the course of a 2-day training to over 130 implementation researchers and healthcare delivery system partners.

Preliminary work with the IRLM produced a core structure and multiple variations for common implementation research designs and situations, as well as guiding principles and suggestions for use. Results of the survey indicated a high utility of the IRLM for multiple purposes, such as improving rigor and reproducibility of projects; serving as a “roadmap” for how the project is to be carried out; clearly reporting and specifying how the project is to be conducted; and understanding the connections between determinants, strategies, mechanisms, and outcomes for their project.

Conclusions

The IRLM is a semi-structured, principle-guided tool designed to improve the specification, rigor, reproducibility, and testable causal pathways involved in implementation research projects. The IRLM can also aid implementation researchers and implementation partners in the planning and execution of practice change initiatives. Adaptation and refinement of the IRLM are ongoing, as is the development of resources for use and applications to diverse projects, to address the challenges of this complex scientific field.


Contributions to the literature

Drawing from and integrating existing frameworks, models, and theories, the IRLM advances the traditional logic model for the requirements of implementation research and practice.

The IRLM provides a means of describing the complex relationships between critical elements of implementation research and practice in a way that can be used to improve the rigor and reproducibility of research and implementation practice, and the testing of theory.

The IRLM offers researchers and partners a useful tool for the purposes of planning, executing, reporting, and synthesizing processes and findings across the stages of implementation projects.

In response to a call for addressing noted problems with transparency, rigor, openness, and reproducibility in biomedical research [ 1 ], the National Institutes of Health issued guidance in 2014 pertaining to the research it funds ( https://www.nih.gov/research-training/rigor-reproducibility ). The field of implementation science has similarly recognized a need for better specification with similar intent [ 2 ]. However, integrating the necessary conceptual elements of implementation research, which often involves multiple models, frameworks, and theories, is an ongoing challenge. A conceptually grounded organizational tool could improve rigor and reproducibility of implementation research while offering additional utility for the field.

This article describes the development and application of the Implementation Research Logic Model (IRLM). The IRLM can be used with various types of implementation studies and at various stages of research, from planning and executing to reporting and synthesizing implementation studies. Example IRLMs are provided for various common study designs and scenarios, including hybrid designs and studies involving multiple service delivery systems [ 3 , 4 ]. Last, we describe the preliminary use of the IRLM and provide results from a post-training evaluation. An earlier version of this work was presented at the 2018 AcademyHealth/NIH Conference on the Science of Dissemination and Implementation in Health, and the abstract appeared in Implementation Science [ 5 ].

Specification challenges in implementation research

Having an imprecise understanding of what was done and why during the implementation of a new innovation obfuscates identifying the factors responsible for successful implementation and prevents learning from what contributed to failed implementation. Thus, improving the specification of phenomena in implementation research is necessary to inform our understanding of how implementation strategies work, for whom, under what determinant conditions, and on what implementation and clinical outcomes. One challenge is that implementation science uses numerous models and frameworks (hereafter, “frameworks”) to describe, organize, and aid in understanding the complexity of changing practice patterns and integrating evidence-based health interventions across systems [ 6 ]. These frameworks typically address implementation determinants, implementation process, or implementation evaluation [ 7 ]. Although many frameworks incorporate two or more of these broad purposes, researchers often find it necessary to use more than one to describe the various aspects of an implementation research study. The conceptual connections and relationships between multiple frameworks are often difficult to describe and to link to theory [ 8 ].

Similarly, reporting guidelines exist for some of these implementation research components, such as strategies [ 9 ] and outcomes [ 10 ], as well as for entire studies (i.e., Standards for Reporting Implementation Studies [ 11 ]); however, they generally help describe the individual components and not their interactions. To facilitate causal modeling [ 12 ], which can be used to elucidate mechanisms of change and the processes involved in both successful and unsuccessful implementation research projects, investigators must clearly define the relations among variables in ways that are testable with research studies [ 13 ]. Only then can we open the “black box” of how specific implementation strategies operate to predict outcomes.

Logic models

Logic models, graphic depictions that present the shared relationships among various elements of a program or study, have been used for decades in program development and evaluation [ 14 ] and are often required by funding agencies when proposing studies involving implementation [ 15 ]. Used to develop agreement among diverse stakeholders of the “what” and the “how” of proposed and ongoing projects, logic models have been shown to improve planning by highlighting theoretical and practical gaps, support the development of meaningful process indicators for tracking, and aid in both reproducing successful studies and identifying failures of unsuccessful studies [ 16 ]. They are also useful at other stages of research and for program implementation, such as organizing a project/grant application/study protocol, presenting findings from a completed project, and synthesizing the findings of multiple projects [ 17 ].

Logic models can also be used in the context of program theory, an explicit statement of how a project/strategy/intervention/program/policy is understood to contribute to a chain of intermediate results that eventually produce the intended/observed impacts [ 18 ]. Program theory specifies both a Theory of Change (i.e., the central processes or drivers by which change comes about following a formal theory or tacit understanding) and a Theory of Action (i.e., how program components are constructed to activate the Theory of Change) [ 16 ]. Inherent within program theory is causal chain modeling. In implementation research, Fernandez et al. [ 19 ] applied mapping methods to implementation strategies to postulate the ways in which changes to the system affect downstream implementation and clinical outcomes. Their work presents an implementation mapping logic model based on Proctor et al. [ 20 , 21 ], which is focused primarily on the selection of implementation strategy(s) rather than a complete depiction of the conceptual model linking all implementation research elements (i.e., determinants, strategies, mechanisms of action, implementation outcomes, clinical outcomes) in the detailed manner we describe in this article.

Development of the IRLM

The IRLM began out of a recognition that implementation research presents some unique challenges due to the field’s distinct and still codifying terminology [ 22 ] and its use of implementation-specific and non-specific (borrowed from other fields) theories, models, and frameworks [ 7 ]. The development of the IRLM occurred through a series of case applications. This began with a collaboration between investigators at Northwestern University and the Shirley Ryan AbilityLab in which the IRLM was used to study the implementation of a new model of patient care in a new hospital and in other related projects [ 23 ]. Next, the IRLM was used with three already-funded implementation research projects to plan for and describe the prospective aspects of the trials, as well as with an ongoing randomized roll-out implementation trial of the Collaborative Care Model for depression management [Smith JD, Fu E, Carroll AJ, Rado J, Rosenthal LJ, Atlas JA, Burnett-Zeigler I, Carlo, A, Jordan N, Brown CH, Csernansky J: Collaborative care for depression management in primary care: a randomized rollout trial using a type 2 hybrid effectiveness-implementation design submitted for publication]. It was also applied in the later stages of a nearly completed implementation research project of a family-based obesity management intervention in pediatric primary care to describe what had occurred over the course of the 3-year trial [ 24 ]. Last, the IRLM was used as a training tool in a 2-day training with 63 grantees of NIH-funded planning project grants funded as part of the Ending the HIV Epidemic initiative [ 25 ]. Results from a survey of the participants in the training are reported in the “Results” section. From these preliminary activities, we identified a number of ways that the IRLM could be used, described in the section on “Using the IRLM for different purposes and stages of research.”

The Implementation Research Logic Model

In developing the IRLM, we began with the common “pipeline” logic model format used by AHRQ, CDC, NIH, PCORI, and others [ 16 ]. This structure was chosen due to its familiarity with funders, investigators, readers, and reviewers. Although a number of characteristics of the pipeline logic model can be applied to implementation research studies, there is an overall misfit due to implementation research’s focusing on the systems that support adoption and delivery of health practices; involving multiple levels within one or more systems; and having its own unique terminology and frameworks [ 3 , 22 , 26 ]. We adapted the typical evaluation logic model to integrate existing implementation science frameworks as its core elements while keeping to the same aim of facilitating causal modeling.

The most common IRLM format is depicted in Fig. 1. Additional File A1 is a Fillable PDF version of Fig. 1. In certain situations, it might be preferable to include the evidence-based intervention (EBI; defined as a clinical, preventive, or educational protocol or a policy, principle, or practice whose effects are supported by research [ 27 ]) (Fig. 2) to demonstrate alignment of contextual factors (determinants) and strategies with the components and characteristics of the clinical intervention/policy/program and to disentangle it from the implementation strategies. Foremost in these indications are "home-grown" interventions, whose components and theory of change may not have been previously described, and novel interventions that are early in the translational pipeline, which may require greater detail for the reader/reviewer. Variant formats are provided as Additional Files A2 to A4 for use with situations and study designs commonly encountered in implementation research, including comparative implementation studies (A2), studies involving multiple service contexts (A3), and implementation optimization designs (A4). Further, three illustrative IRLMs are provided, with brief descriptions of the projects and the utility of the IRLM (A5, A6, and A7).

Figure 1. Implementation Research Logic Model (IRLM) Standard Form. Notes: Domain names in the determinants section were drawn from the Consolidated Framework for Implementation Research. The format of the outcomes column is from Proctor et al. 2011.

Figure 2. Implementation Research Logic Model (IRLM) Standard Form with Intervention. Notes: Domain names in the determinants section were drawn from the Consolidated Framework for Implementation Research. The format of the outcomes column is from Proctor et al. 2011.

Core elements and theory

The IRLM specifies the relationships between determinants of implementation, implementation strategies, the mechanisms of action resulting from the strategies, and the implementation and clinical outcomes affected. These core elements are germane to every implementation research project in some way. Accordingly, the generalized theory of the IRLM posits that (1) implementation strategies selected for a given EBI are related to implementation determinants (context-specific barriers and facilitators), (2) strategies work through specific mechanisms of action to change the context or the behaviors of those within the context, and (3) implementation outcomes are the proximal impacts of the strategy and its mechanisms, which then relate to the clinical outcomes of the EBI. Articulated in part by others [ 9 , 12 , 21 , 28 , 29 ], this causal pathway theory is largely explanatory and details the Theory of Change and the Theory of Action of the implementation strategies in a single model. The EBI Theory of Action can also be displayed within a modified IRLM (see Additional File A 4 ). We now briefly describe the core elements and discuss conceptual challenges in how they relate to one another and to the overall goals of implementation research.

Determinants

Determinants of implementation are factors that might prevent or enable implementation (i.e., barriers and facilitators). Determinants may act as moderators, “effect modifiers,” or mediators, thus indicating that they are links in a chain of causal mechanisms [ 12 ]. Common determinant frameworks are the Consolidated Framework for Implementation Research (CFIR) [ 30 ] and the Theoretical Domains Framework [ 31 ].

Implementation strategies

Implementation strategies are supports, changes to, and interventions on the system to increase adoption of EBIs into usual care [ 32 ]. Consideration of determinants is commonly used when selecting and tailoring implementation strategies [ 28 , 29 , 33 ]. Providing the theoretical or conceptual reasoning for strategy selection is recommended [ 9 ]. The IRLM can be used to specify the proposed relationships between strategies and the other elements (determinants, mechanisms, and outcomes) and assists with considering, planning, and reporting all strategies in place during an implementation research project that could contribute to the outcomes and resulting changes.

Because implementation research occurs within dynamic delivery systems with multiple factors that determine success or failure, the field has experienced challenges identifying consistent links between individual barriers and specific strategies to overcome them. For example, the Expert Recommendations for Implementing Change (ERIC) compilation of strategies [ 32 ] was used to determine which strategies would best address contextual barriers identified by CFIR [ 29 ]. An online CFIR–ERIC matching process completed by implementation researchers and practitioners resulted in a large degree of heterogeneity and few consistent relationships between barrier and strategy, meaning the relationship is rarely one-to-one (e.g., a single strategy is often linked to multiple barriers, and more than one strategy may be needed to address a single barrier). Moreover, when implementation outcomes are considered, researchers often find that to improve one outcome, more than one contextual barrier needs to be addressed, which might in turn require one or more strategies.

Frequently, the reporting of implementation research studies focuses on the strategy or strategies that were introduced for the research study, without due attention to other strategies already used in the system or additional supporting strategies that might be needed to implement the target strategy. The IRLM allows for the comprehensive specification of all introduced and present strategies, as well as their changes (adaptations, additions, discontinuations) during the project.

Mechanisms of action

Mechanisms of action are processes or events through which an implementation strategy operates to affect desired implementation outcomes [ 12 ]. The mechanism can be a change in a determinant, a proximal implementation outcome, an aspect of the implementation strategy itself, or a combination of these in a multiple-intervening-effect model. An example of a causal process might be using training and fidelity monitoring strategies to improve delivery agents' knowledge and self-efficacy about the EBI in response to knowledge-related barriers in the service delivery system. This could result in raising their acceptability of the EBI, increase the likelihood of adoption, improve the fidelity of delivery, and lead to sustainment. Relatively few implementation studies formally test mechanisms of action, but this area of investigation has received significant attention recently as the need to understand how strategies operate grows in the field [ 33 , 34 , 35 ].

Implementation outcomes

Implementation outcomes are the effects of deliberate and purposive actions to implement new treatments, practices, and services [ 21 ]. They can be indicators of implementation processes, or key intermediate outcomes in relation to service, or target clinical outcomes. Glasgow et al. [ 36 , 37 , 38 ] describe the interrelated nature of implementation outcomes as occurring in a logical, but not necessarily linear, sequence of adoption by a delivery agent, delivery of the innovation with fidelity, reach of the innovation to the intended population, and sustainment of the innovation over time. The combined impact of these nested outcomes, coupled with the size of the effect of the EBI, determines the population or public health impact of implementation [ 36 ]. Outcomes earlier in the sequence can be conceptualized as mediators and mechanisms of strategies on later implementation outcomes. Specifying which strategies are theoretically intended to affect which outcomes, through which mechanisms of action, is crucial for improving the rigor and reproducibility of implementation research and for testing theory.

Using the Implementation Research Logic Model

Guiding principles.

One of the critical insights from our preliminary work was that the use of the IRLM should be guided by a set of principles rather than governed by rules. These principles are intended to be flexible both to allow for adaptation to the various types of implementation studies and evolution of the IRLM over time and to address concerns in the field of implementation science regarding specification, rigor, reproducibility, and transparency of design and process [ 5 ]. Given this flexibility of use, the IRLM will invariably require accompanying text and other supporting documents. These are described in the section “Use of Supporting Text and Documents.”

Principle 1: Strive for comprehensiveness

Comprehensiveness increases transparency, can improve rigor, and allows for a better understanding of alternative explanations to the conclusions drawn, particularly in the presence of null findings for an experimental design. Thus, all relevant determinants, implementation strategies, and outcomes should be included in the IRLM.

Concerning determinants, the valence should be noted as being either a barrier, a facilitator, neutral, or variable by study unit. This can be achieved by simply adding plus (+) or minus (–) signs for facilitators and barriers, respectively, or by using coding systems such as that developed by Damschroder et al. [ 39 ], which indicates the relative strength of the determinant on a scale: –2 (strong negative impact), –1 (weak negative impact), 0 (neutral or mixed influence), 1 (weak positive impact), and 2 (strong positive impact). The use of such a coding system could yield better specification compared to using study-specific adjectives or changing the name of the determinant (e.g., greater relative priority, addresses patient needs, good climate for implementation). It is critical to include all relevant determinants and not simply limit reporting to those that are hypothesized to be related to the strategies and outcomes, as there are complex interrelationships between determinants.
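For instance, a determinant list coded on the –2 to +2 scale described above could be kept in a simple structure like the following Python sketch. The determinants and scores shown are hypothetical; the point is only that a numeric valence travels with each determinant rather than being buried in adjectives.

```python
# Hypothetical CFIR-style determinants coded for valence/strength (-2 to +2).
determinants = {
    "relative priority":       2,   # strong positive impact (facilitator)
    "available resources":    -1,   # weak negative impact (barrier)
    "implementation climate":  0,   # neutral or mixed influence
    "leadership engagement":   1,   # weak positive impact
}

barriers     = [name for name, value in determinants.items() if value < 0]
facilitators = [name for name, value in determinants.items() if value > 0]
print("Barriers:", barriers)
print("Facilitators:", facilitators)
```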

Implementation strategies should be reported in their entirety. When using the IRLM for planning a study, it is important to list all strategies in the system, including those already in use and those to be initiated for the purposes of the study, often in the experimental condition of the design. Second, strategies should be labeled to indicate whether they were (a) in place in the system prior to the study, (b) initiated prospectively for the purposes of the study (particularly for experimental study designs), (c) removed as a result of being ineffective or onerous, or (d) introduced during the study to address an emergent barrier or supplement other strategies because of low initial impact. This is relevant when using the IRLM for planning, as an ongoing tracking system, for retrospective application to a completed study, and in the final reporting of a study. There have been a number of processes proposed for tracking the use of and adaptations to implementation strategies over time [ 40 , 41 ]. Each of these is more detailed than would be necessary for the IRLM, but the processes described provide a method for accurately tracking the temporal aspects of strategy use that fulfill the comprehensiveness principle.
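One lightweight way to honor this principle during a project is to keep a running log in which every strategy carries one of the four status labels above, together with dates and reasons. The entries in this Python sketch are hypothetical.

```python
# Hypothetical log of implementation strategies using the four status labels described above.
strategy_log = [
    {"strategy": "audit and feedback",        "status": "in place prior to the study", "since": "2018-01"},
    {"strategy": "clinician training",        "status": "initiated for the study",     "since": "2020-03"},
    {"strategy": "external facilitation",     "status": "added during the study",      "since": "2020-09",
     "reason": "emergent barrier: low initial adoption"},
    {"strategy": "monthly learning sessions", "status": "removed",                     "since": "2020-11",
     "reason": "onerous for participating sites"},
]

for entry in strategy_log:
    note = f" ({entry['reason']})" if "reason" in entry else ""
    print(f"{entry['strategy']}: {entry['status']} since {entry['since']}{note}")
```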

Although most studies will indicate a primary implementation outcome, other outcomes will almost assuredly be measured. Thus, they ought to be included in the IRLM. This guidance is given in large part due to the interdependence of implementation outcomes, such that adoption relates to delivery with fidelity, reach of the intervention, and potential for sustainment [ 36 ]. Similarly, the overall public health impact (defined as reach multiplied by the effect size of the intervention [ 38 ]) is inextricably tied to adoption, fidelity, acceptability, cost, etc. Although the study might justifiably focus on only one or two implementation outcomes, the others are nonetheless relevant and should be specified and reported. For example, it is important to capture potential unintended consequences and indicators of adverse effects that could result from the implementation of an EBI.
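As a back-of-the-envelope illustration of why the wider set of outcomes matters, the reach-times-effect-size definition of public health impact cited above can be computed directly; the numbers below are invented.

```python
# Illustrative only: public health impact as reach multiplied by effect size.
reach = 0.30        # proportion of the intended population actually reached
effect_size = 0.50  # standardized effect of the EBI among those reached
impact = reach * effect_size
print(f"Population impact index: {impact:.2f}")  # a strong EBI with poor reach still yields little impact
```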

Principle 2: Indicate key conceptual relationships

Although the IRLM has a generalized theory (described earlier), there is a need to indicate the relationships between elements in a manner aligning with the specific theory of change for the study. Researchers ought to provide some form of notation to indicate these conceptual relationships using color-coding, superscripts, arrows, or a combination of the three. Such notations in the IRLM facilitate reference in the text to the study hypotheses, tests of effects, causal chain modeling, and other forms of elaboration (see “Supporting Text and Resources”). We prefer the use of superscripts to colors or arrows in grant proposals and articles for practical purposes, as colors can be difficult to distinguish, and arrows can obscure text and contribute to visual clutter. When presenting the IRLM using presentation programs (e.g., PowerPoint, Keynote), colors and arrows can be helpful, and animations can make these connections dynamic and sequential without adding to visual complexity. This principle could also prove useful in synthesizing across similar studies to build the science of tailored implementation, where strategies are selected based on the presence of specific combinations of determinants. As previously indicated [ 29 ], there is much work to be done in this area.

Principle 3: Specify critical study design elements

These critical elements will vary by study design (e.g., hybrid effectiveness-implementation trial, observational study, which subsystems are assigned to which strategies). This principle concerns not only researchers but also the service systems and communities whose consent is necessary to carry out any implementation design [ 3 , 42 , 43 ].

Primary outcome(s)

Indicate the primary outcome(s) at each level of the study design (i.e., clinician, clinic, organization, county, state, nation). The levels should align with the specific aims of a grant application or the stated objective of a research report. In the case of a process evaluation or an observational study including the RE-AIM evaluation components [ 38 ] or the Proctor et al. [ 21 ] taxonomy of implementation outcomes, the primary outcome may be the product of the conceptual or theoretical model used when a priori outcomes are not clearly indicated. We also suggest including downstream health services and clinical outcomes even if they are not measured, as these are important for understanding the logic of the study and the ultimate health-related targets.

For quasi/experimental designs

When quasi/experimental designs [ 3 , 4 ] are used, the independent variable(s) (i.e., the strategies that are introduced or manipulated or that otherwise differentiate study conditions) should be clearly labeled. This is important for internal validity and for differentiating conditions in multi-arm studies.

For comparative implementation trials

In the context of comparative implementation trials [ 3 , 4 ], two or more competing implementation strategies are introduced for the purposes of the study (i.e., the comparison is not implementation-as-usual), and there is a need to indicate the determinants, strategies, mechanisms, and potentially outcomes that differentiate the arms (see Additional File A 2 ). As comparative implementation can involve multiple service delivery systems, the determinants, mechanisms, and outcomes might also differ, though there must be at least one comparable implementation outcome. In our preliminary work applying the IRLM to a large-scale comparative implementation trial, we found that we needed an IRLM for each arm of the trial; a single IRLM was not possible because the strategies being tested occurred across two delivery systems and were, by design, very different. This is an example of the flexible use of the IRLM.

For implementation optimization designs

A number of designs are now available that aim to test processes of optimizing implementation. These include factorial, Sequential Multiple Assignment Randomized Trial (SMART) [ 44 ], adaptive [ 45 ], and roll-out implementation optimization designs [ 46 ]. These designs allow for (a) building time-varying adaptive implementation strategies based on the order in which components are presented [ 44 ], (b) evaluating the additive and combined effects of multiple strategies [ 44 , 47 ], and (c) incorporating data-driven iterative changes to improve implementation in successive units [ 45 , 46 ]. The IRLM in Additional File A 4 can be used for such designs.
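As a small illustration of how a factorial design evaluates additive and combined effects, a 2 × 2 design crossing two implementation strategies yields four study conditions. The sketch below simply enumerates them; the two strategy names are hypothetical examples chosen for illustration, not strategies from any study cited here.

```python
# Hedged sketch: conditions of a hypothetical 2x2 factorial implementation design
# crossing two implementation strategies (names chosen for illustration only).
from itertools import product

strategies = {
    "facilitation": ["off", "on"],
    "audit_and_feedback": ["off", "on"],
}

for number, combo in enumerate(product(*strategies.values()), start=1):
    condition = dict(zip(strategies.keys(), combo))
    print(f"Condition {number}: {condition}")
```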

Additional specification options

Users of the IRLM are allowed to specify any number of additional elements that may be important to their study. For example, one could notate those elements of the IRLM that have been or will be measured versus those that were based on the researcher’s prior studies or inferred from findings reported in the literature. Users can also indicate when implementation strategies differ by level or unit within the study. In large multisite studies, strategies might not be uniform across all units, particularly those strategies that already exist within the system. Similarly, there might be a need to increase the dose of certain strategies to address the relative strengths of different determinants within units.

Using the IRLM for different purposes and stages of research

Commensurate with logic models more generally, the IRLM can be used for planning and organizing a project, carrying out a project (as a roadmap), reporting and presenting the findings of a completed project, and synthesizing the findings of multiple projects or of a specific area of implementation research, such as what is known about how learning collaboratives are effective within clinical care settings.

When the IRLM is used for planning, the process of populating each of the elements often begins with the known parameter(s) of the study. For example, if the problem is improving the adoption and reach of a specific EBI within a particular clinical setting, the implementation outcomes and context, as well as the EBI, are clearly known. The downstream clinical outcomes of the EBI are likely also known. Working from the two “bookends” of the IRLM, the researchers and community partners and/or organization stakeholders can begin to fill in the implementation strategies that are likely to be feasible and effective and then posit conceptually derived mechanisms of action. In another example, only the EBI and primary clinical outcomes were known. The IRLM was useful in considering different scenarios for what strategies might be needed and appropriate to test the implementation of the EBI in different service delivery contexts. The IRLM was a tool for the researchers and stakeholders to work through these multiple options.

When we used the IRLM to plan for the execution of funded implementation studies, the majority of the parameters were already proposed in the grant application. However, through completing the IRLM prior to the start of the study, we found that a number of important contextual factors had not been considered, additional implementation strategies were needed to complement the primary ones proposed in the grant, and mechanisms needed to be added and measured. At the time of award, mechanisms were not an expected component of implementation research projects as they will likely become in the future.

For another project, the IRLM was applied retrospectively to report on the findings and overall logic of the study. Because nearly all elements of the IRLM were known, we approached completion of the model as a means of showing what happened during the study and to accurately report the hypothesized relationships that we observed. These relationships could be formally tested using causal pathway modeling [ 12 ] or other path analysis approaches with one or more intervening variables [ 48 ].

Synthesizing

In our preliminary work with the IRLM, we used it in each of the first three ways; the fourth (synthesizing) is ongoing within the National Cancer Institute’s Improving the Management of symPtoms during And Following Cancer Treatment (IMPACT) research consortium. The purpose is to draw conclusions for the implementation of an EBI in a particular context (or across contexts) that are shared and generalizable to provide a guide for future research and implementation.

Use of supporting text and documents

While the IRLM provides a good deal of information about a project in a single visual, researchers will need to convey additional details about an implementation research study through the use of supporting text, tables, and figures in grant applications, reports, and articles. Some elements that require elaboration are (a) preliminary data on the assessment and valence of implementation determinants; (b) operationalization/detailing of the implementation strategies being used or observed, using established reporting guidelines [ 9 ] and labeling conventions [ 32 ] from the literature; (c) hypothesized or tested causal pathways [ 12 ]; (d) process, service, and clinical outcome measures, including the psychometric properties, method, and timing of administration, respondents, etc.; (e) study procedures, including subject selection, assignment to (or observation of natural) study conditions, and assessment throughout the conduct of the study [ 4 ]; and (f) the implementation plan or process for following established implementation frameworks [ 49 , 50 , 51 ]. By utilizing superscripts, subscripts, and other notations within the IRLM, as previously suggested, it is easy to refer to (a) hypothesized causal paths in theoretical overviews and analytic plan sections, (b) planned measures for determinants and outcomes, and (c) specific implementation strategies in text, tables, and figures.

Evidence of IRLM utility and acceptability

The IRLM was used as the foundation for a training in implementation research methods delivered to a group of 65 planning projects awarded under the national Ending the HIV Epidemic initiative. One investigator (project director or co-investigator) and one implementation partner (i.e., a collaborator from a community service delivery system) from each project were invited to attend a 2-day in-person summit in Chicago, IL, in October 2019. One hundred thirty-two participants attended, representing 63 of the 65 projects. A survey, which included demographics and questions pertaining to the Ending the HIV Epidemic initiative, was sent to potential attendees prior to the summit; 129 individuals responded, including all 65 project directors, 13 co-investigators, and 51 implementation partners (62% female). Those who indicated an investigator role ( n = 78) received additional questions about prior implementation research training (e.g., formal coursework, workshop, self-taught), related experiences (e.g., involvement in a funded implementation project, program implementation, program evaluation, quality improvement), and the stage of their project (i.e., exploration, preparation, implementation, sustainment [ 50 ]).

Approximately 6 weeks after the summit, 89 attendees (69%) completed a post-training survey comprising more than 40 questions about their overall experience. Though the invitation to complete the survey made no mention of the IRLM, the survey included 10 items related to the IRLM and one more general item about the logic of implementation research, each rated on a 4-point scale (1 = not at all , 2 = a little , 3 = moderately , 4 = very much ; see Table 1 ). Forty-two investigators (65% of projects) and 24 implementation partners indicated that they had attended the training and completed the survey (68.2% female). Of the 66 respondents who attended the training, 100% completed all 11 IRLM items, suggesting little potential response bias.

Table 1 provides the means, standard deviations, and percent of respondents endorsing either “moderately” or “very” response options. Results were promising for the utility of the IRLM on the majority of the dimensions assessed. More than 50% of respondents indicated that the IRLM was “moderately” or “very” helpful on all questions. Overall, 77.6% ( M = 3.18, SD = .827) of respondents indicated that their knowledge on the logic of implementation research had increased either moderately or very much after the 2-day training. At the time of the survey, when respondents were about 2.5 months into their 1-year planning projects, 44.6% indicated that they had already been able to complete a full draft of the IRLM.

Additional analyses using a one-way analysis of variance indicated no statistically significant differences in responses to the IRLM questions between investigators and implementation partners. However, three items approached significance: planning the project ( F = 2.460, p = .055), clearly reporting and specifying how the project is to be conducted ( F = 2.327, p = .066), and knowledge on the logic of implementation research ( F = 2.107, p = .091). In each case, scores were higher for the investigators compared to the implementation partners, suggesting that perhaps the knowledge gap in implementation research lay more in the academic realm than among community partners, who may not have a focus on research but whose day-to-day roles include the implementation of EBPs in the real world. Lastly, analyses using ordinal logistic regression did not yield any significant relationship between responses to the IRLM survey items and prior training ( n = 42 investigators who attended the training and completed the post-training survey), prior related research experience ( n = 42), and project stage of implementation ( n = 66). This suggests that the IRLM is a useful tool for both investigators and implementers with varying levels of prior exposure to implementation research concepts and across all stages of implementation research. As a result of this training, the IRLM is now a required element in the FY2020 Ending the HIV Epidemic Centers for AIDS Research/AIDS Research Centers Supplement Announcement released March 2020 [ 15 ].
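For readers unfamiliar with these procedures, the sketch below shows the general form of the analyses reported above (percent endorsing “moderately” or “very,” and a one-way ANOVA comparing investigators with implementation partners). The data frame, item name, and ratings are hypothetical stand-ins, not the study data.

```python
# Hedged sketch: the general form of the survey analyses described above,
# using hypothetical ratings on the 1-4 scale (not the study data).
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "role": ["investigator"] * 5 + ["partner"] * 5,
    "irlm_item_rating": [4, 3, 4, 3, 4, 3, 3, 2, 3, 3],
})

# Percent endorsing "moderately" (3) or "very" (4), as summarized in Table 1
pct_endorsing = (df["irlm_item_rating"] >= 3).mean() * 100
print(f"{pct_endorsing:.1f}% endorsed moderately/very")

# One-way ANOVA comparing investigators with implementation partners
investigators = df.loc[df["role"] == "investigator", "irlm_item_rating"]
partners = df.loc[df["role"] == "partner", "irlm_item_rating"]
f_stat, p_value = stats.f_oneway(investigators, partners)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```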

Resources for using the IRLM

As the use of the IRLM for different study designs and purposes continues to expand and evolve, we envision supporting researchers and other program implementers in applying the IRLM to their own contexts. Our team at Northwestern University hosts web resources on the IRLM that include completed examples and tools to assist users in completing their model, including templates in various formats (Figs. 1 and 2 , Additional Files A 1 , A 2 , A 3, and A 4, among others), a Quick Reference Guide (Additional File A 8 ), and a series of worksheets that provide guidance on populating the IRLM (Additional File A 9 ). These will be available at https://cepim.northwestern.edu/implementationresearchlogicmodel/ .

The IRLM provides a compact visual depiction of an implementation project and is a useful tool for academic–practice collaboration and partnership development. Used in conjunction with supporting text, tables, and figures to detail each of the primary elements, the IRLM has the potential to improve a number of aspects of implementation research as identified in the results of the post-training survey. The usability of the IRLM is high for seasoned and novice implementation researchers alike, as evidenced by our survey results and preliminary work. Its use in the planning, executing, reporting, and synthesizing of implementation research could increase the rigor and transparency of complex studies that ultimately could improve reproducibility—a challenge in the field—by offering a common structure to increase consistency and a method for more clearly specifying links and pathways to test theories.

Implementation occurs across the gamut of contexts and settings. The IRLM can be used when large organizational change is being considered, such as a new strategic plan with multifaceted strategies and outcomes. Within a narrower scope of a single EBI in a specific setting, the larger organizational context still ought to be included as inner setting determinants (i.e., the impact of the organizational initiative on the specific EBI implementation project) and as implementation strategies (i.e., the specific actions being done to make the organizational change a reality that could be leveraged to implement the EBI or could affect the success of implementation). The IRLM has been used by our team to plan for large systemic changes and to initiate capacity building strategies to address readiness to change (structures, processes, individuals) through strategic planning and leadership engagement at multiple levels in the organization. This aspect of the IRLM continues to evolve.

Among the drawbacks of the IRLM is that it might be viewed as a somewhat simplified format. This represents the challenges of balancing depth and detail with parsimony, ease of comprehension, and ease of use. The structure of the IRLM may inhibit creative thinking if applied too rigidly, which is among the reasons we provide numerous examples of different ways to tailor the model to the specific needs of different project designs and parameters. Relatedly, we encourage users to iterate on the design of the IRLM to increase its utility.

The promise of implementation science lies in the ability to conduct rigorous and reproducible research, to clearly understand the findings, and to synthesize findings from which generalizable conclusions can be drawn and actionable recommendations for practice change emerge. As scientists and implementers have worked to better define the core methods of the field, the need for theory-driven, testable integration of the foundational elements involved in impactful implementation research has become more apparent. The IRLM is a tool that can aid the field in addressing this need and moving toward the ultimate promise of implementation research to improve the provision and quality of healthcare services for all people.

Availability of data and materials

Not applicable.

Abbreviations

CFIR: Consolidated Framework for Implementation Research

EBI: Evidence-based intervention

ERIC: Expert Recommendations for Implementing Change

IRLM: Implementation Research Logic Model

Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, Buck S, Chambers CD, Chin G, Christensen G, et al. Promoting an open research culture. Science. 2015;348:1422–5.


Slaughter SE, Hill JN, Snelgrove-Clarke E. What is the extent and quality of documentation and reporting of fidelity to implementation strategies: a scoping review. Implement Sci. 2015;10:1–12.


Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.

Hwang S, Birken SA, Melvin CL, Rohweder CL, Smith JD. Designs and methods for implementation research: advancing the mission of the CTSA program. J Clin Transl Sci. 2020 (available online).

Smith JD. An Implementation Research Logic Model: a step toward improving scientific rigor, transparency, reproducibility, and specification. Implement Sci. 2018;14:S39.


Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43:337–50.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.

Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2019.

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8.

Kessler RS, Purcell EP, Glasgow RE, Klesges LM, Benkeser RM, Peek CJ. What does it mean to “employ” the RE-AIM model? Evaluation & the Health Professions. 2013;36:44–66.

Pinnock H, Barwick M, Carpenter CR, Eldridge S, Grandes G, Griffiths CJ, Rycroft-Malone J, Meissner P, Murray E, Patel A, et al. Standards for Reporting Implementation Studies (StaRI): explanation and elaboration document. BMJ Open. 2017;7:e013318.

Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, Walsh-Bailey C, Weiner B. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6.

Glanz K, Bishop DB. The role of behavioral science theory in development and implementation of public health interventions. Annu Rev Public Health. 2010;31:399–418.

WK Kellogg Foundation. Logic model development guide. Battle Creek, Michigan: WK Kellogg Foundation; 2004.

CFAR/ARC Ending the HIV Epidemic Supplement Awards [ https://www.niaid.nih.gov/research/cfar-arc-ending-hiv-epidemic-supplement-awards ].

Funnell SC, Rogers PJ. Purposeful program theory: effective use of theories of change and logic models. San Francisco, CA: John Wiley & Sons; 2011.

Petersen D, Taylor EF, Peikes D. The logic model: the foundation to implement, study, and refine patient-centered medical home models (issue brief). Mathematica Policy Research: Mathematica Policy Research Reports; 2013.

Davidoff F, Dixon-Woods M, Leviton L, Michie S. Demystifying theory and its use in improvement. BMJ Quality & Safety. 2015;24:228–38.

Fernandez ME, ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, Ruiter RAC, Markham CM, Kok G. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7.

Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Admin Pol Ment Health. 2009;36.

Proctor EK, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38.

Rabin BA, Brownson RC. Terminology for dissemination and implementation research. In: Brownson RC, Colditz G, Proctor EK, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York, NY: Oxford University Press; 2017. p. 19–45.

Smith JD, Rafferty MR, Heinemann AW, Meachum MK, Villamar JA, Lieber RL, Brown CH. Evaluation of the factor structure of implementation research measures adapted for a novel context and multiple professional roles. BMC Health Serv Res. 2020.

Smith JD, Berkel C, Jordan N, Atkins DC, Narayanan SS, Gallo C, Grimm KJ, Dishion TJ, Mauricio AM, Rudo-Stern J, et al. An individually tailored family-centered intervention for pediatric obesity in primary care: study protocol of a randomized type II hybrid implementation-effectiveness trial (Raising Healthy Children study). Implement Sci. 2018;13:1–15.

Fauci AS, Redfield RR, Sigounas G, Weahkee MD, Giroir BP. Ending the HIV epidemic: a plan for the United States: Editorial. JAMA. 2019;321:844–5.

Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7:50.

Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38:1–22.

Krause J, Van Lieshout J, Klomp R, Huntink E, Aakhus E, Flottorp S, Jaeger C, Steinhaeuser J, Godycki-Cwirko M, Kowalczyk A, et al. Identifying determinants of care for tailoring implementation in chronic diseases: an evaluation of different methods. Implement Sci. 2014;9:102.

Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14:42.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, Foy R, Duncan EM, Colquhoun H, Grimshaw JM, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12:77.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, Kirchner JE. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10.

Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, McHugh SM, Weiner BJ. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7.

PAR-19-274: Dissemination and implementation research in health (R01 Clinical Trial Optional) [ https://grants.nih.gov/grants/guide/pa-files/PAR-19-274.html ].

Edmondson D, Falzon L, Sundquist KJ, Julian J, Meli L, Sumner JA, Kronish IM. A systematic review of the inclusion of mechanisms of action in NIH-funded intervention trials to improve medication adherence. Behav Res Ther. 2018;101:12–9.

Gaglio B, Shoup JA, Glasgow RE. The RE-AIM framework: a systematic review of use over time. Am J Public Health. 2013;103:e38–46.

Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, Ory MG, Estabrooks PA. RE-AIM planning and evaluation framework: adapting to new science and practice with a 20-year review. Front Public Health. 2019;7.

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89:1322–7.

Damschroder LJ, Reardon CM, Sperber N, Robinson CH, Fickel JJ, Oddone EZ. Implementation evaluation of the Telephone Lifestyle Coaching (TLC) program: organizational factors associated with successful implementation. Transl Behav Med. 2016;7:233–41.

Bunger AC, Powell BJ, Robertson HA, MacDowell H, Birken SA, Shea C. Tracking implementation strategies: a description of a practical approach and early findings. Health Research Policy and Systems. 2017;15:15.

Boyd MR, Powell BJ, Endicott D, Lewis CC. A method for tracking implementation strategies: an exemplar implementing measurement-based care in community behavioral health clinics. Behav Ther. 2018;49:525–37.

Brown CH, Kellam S, Kaupert S, Muthén B, Wang W, Muthén L, Chamberlain P, PoVey C, Cady R, Valente T, et al. Partnerships for the design, conduct, and analysis of effectiveness, and implementation research: experiences of the Prevention Science and Methodology Group. Adm Policy Ment Health Ment Health Serv Res. 2012;39:301–16.

McNulty M, Smith JD, Villamar J, Burnett-Zeigler I, Vermeer W, Benbow N, Gallo C, Wilensky U, Hjorth A, Mustanski B, et al. Implementation research methodologies for achieving scientific equity and health equity. Ethn Dis. 2019;29:83–92.

Collins LM, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med. 2007;32:S112–8.

Brown CH, Ten Have TR, Jo B, Dagne G, Wyman PA, Muthén B, Gibbons RD. Adaptive designs for randomized trials in public health. Annu Rev Public Health. 2009;30:1–25.

Smith JD. The roll-out implementation optimization design: integrating aims of quality improvement and implementation sciences. Submitted for publication; 2020.

Dziak JJ, Nahum-Shani I, Collins LM. Multilevel factorial experiments for developing behavioral interventions: power, sample size, and resource considerations. Psychol Methods. 2012;17:153–75.

MacKinnon DP, Lockwood CM, Hoffman JM, West SG, Sheets V. A comparison of methods to test mediation and other intervening variable effects. Psychol Methods. 2002;7:83–104.

Graham ID, Tetroe J. Planned action theories. In: Straus S, Tetroe J, Graham ID, editors. Knowledge translation in health care: Moving from evidence to practice. Wiley-Blackwell: Hoboken, NJ; 2009.

Moullin JC, Dickson KS, Stadnick NA, Rabin B, Aarons GA. Systematic review of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implement Sci. 2019;14:1.

Rycroft-Malone J. The PARIHS framework—a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19:297–304.

Download references

Acknowledgements

The authors wish to thank our colleagues who provided input at different stages of developing this article and the Implementation Research Logic Model, and for providing the examples included in this article: Hendricks Brown, Brian Mustanski, Kathryn Macapagal, Nanette Benbow, Lisa Hirschhorn, Richard Lieber, Piper Hansen, Leslie O’Donnell, Allen Heinemann, Enola Proctor, Courtney Wolk-Benjamin, Sandra Naoom, Emily Fu, Jeffrey Rado, Lisa Rosenthal, Patrick Sullivan, Aaron Siegler, Cady Berkel, Carrie Dooyema, Lauren Fiechtner, Jeanne Lindros, Vinny Biggs, Gerri Cannon-Smith, Jeremiah Salmon, Sujata Ghosh, Alison Baker, Jillian MacDonald, Hector Torres and the Center on Halsted in Chicago, Michelle Smith, Thomas Dobbs, and the pastors who work tirelessly to serve their communities in Mississippi and Arkansas.

This study was supported by grant P30 DA027828 from the National Institute on Drug Abuse, awarded to C. Hendricks Brown; grant U18 DP006255 to Justin Smith and Cady Berkel; grant R56 HL148192 to Justin Smith; grant UL1 TR001422 from the National Center for Advancing Translational Sciences to Donald Lloyd-Jones; grant R01 MH118213 to Brian Mustanski; grant P30 AI117943 from the National Institute of Allergy and Infectious Diseases to Richard D’Aquila; grant UM1 CA233035 from the National Cancer Institute to David Cella; a grant from the Woman’s Board of Northwestern Memorial Hospital to John Csernansky; grant F32 HS025077 from the Agency for Healthcare Research and Quality; grant NIFTI 2016-20178 from the Foundation for Physical Therapy; the Shirley Ryan AbilityLab; and by the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis, through grant R25 MH080916 from the National Institute of Mental Health and the Department of Veterans Affairs, Health Services Research & Development Service, and Quality Enhancement Research Initiative (QUERI) to Enola Proctor. The opinions expressed herein are the views of the authors and do not necessarily reflect the official policy or position of the National Institutes of Health, the Centers for Disease Control and Prevention, the Agency for Healthcare Research and Quality, the Department of Veterans Affairs, or any other part of the US Department of Health and Human Services.

Author information

Authors and affiliations.

Department of Population Health Sciences, University of Utah School of Medicine, Salt Lake City, Utah, USA

Justin D. Smith

Center for Prevention Implementation Methodology for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences, Department of Preventive Medicine, Department of Medical Social Sciences, and Department of Pediatrics, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA

Center for Prevention Implementation Methodology for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine; Institute for Sexual and Gender Minority Health and Wellbeing, Northwestern University, Chicago, Illinois, USA

Dennis H. Li

Shirley Ryan AbilityLab and Center for Prevention Implementation Methodology for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences and Department of Physical Medicine and Rehabilitation, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA

Miriam R. Rafferty

You can also search for this author in PubMed   Google Scholar

Contributions

JDS conceived of the Implementation Research Logic Model. JDS, MR, and DL collaborated in developing the Implementation Research Logic Model as presented and in the writing of the manuscript. All authors approved of the final version.

Corresponding author

Correspondence to Justin D. Smith .

Ethics declarations

Ethics approval and consent to participate.

Not applicable. This study did not involve human subjects.

Consent for publication

Competing interests.

None declared.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

IRLM Fillable PDF form

Additional file 2.

IRLM for Comparative Implementation

Additional file 3.

IRLM for Implementation of an Intervention Across or Linking Two Contexts

Additional file 4.

IRLM for an Implementation Optimization Study

Additional file 5.

IRLM example 1: Faith in Action: Clergy and Community Health Center Communication Strategies for Ending the Epidemic in Mississippi and Arkansas

Additional file 6.

IRLM example 2: Hybrid Type II Effectiveness–Implementation Evaluation of a City-Wide HIV System Navigation Intervention in Chicago, IL

Additional file 7.

IRLM example 3: Implementation, spread, and sustainment of Physical Therapy for Mild Parkinson’s Disease through a Regional System of Care

Additional file 8.

IRLM Quick Reference Guide

Additional file 9.

IRLM Worksheets

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Smith, J.D., Li, D.H. & Rafferty, M.R. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implementation Sci 15 , 84 (2020). https://doi.org/10.1186/s13012-020-01041-8

Download citation

Received : 03 April 2020

Accepted : 03 September 2020

Published : 25 September 2020

DOI : https://doi.org/10.1186/s13012-020-01041-8


Keywords

  • Program theory
  • Integration
  • Study specification



  • Open access
  • Published: 16 August 2022

Developing an implementation research logic model: using a multiple case study design to establish a worked exemplar

  • Louise Czosnek   ORCID: orcid.org/0000-0002-2362-6888 1 ,
  • Eva M. Zopf 1 , 2 ,
  • Prue Cormie 3 , 4 ,
  • Simon Rosenbaum 5 , 6 ,
  • Justin Richards 7 &
  • Nicole M. Rankin 8 , 9  

Implementation Science Communications volume 3, Article number: 90 (2022)


Background

Implementation science frameworks explore, interpret, and evaluate different components of the implementation process. By using a program logic approach, implementation frameworks with different purposes can be combined to detail complex interactions. The Implementation Research Logic Model (IRLM) facilitates the development of causal pathways and mechanisms that enable implementation. Critical elements of the IRLM vary across different study designs, and its applicability to synthesizing findings across settings is also under-explored. The dual purpose of this study is to develop an IRLM from an implementation research study that used case study methodology and to demonstrate the utility of the IRLM to synthesize findings across case sites.

Methods

The method used in the exemplar project and the alignment of the IRLM to case study methodology are described. Cases were purposely selected using replication logic and represent organizations that have embedded exercise in routine care for people with cancer or mental illness. Four data sources were selected: semi-structured interviews with purposely selected staff, organizational document review, observations, and a survey using the Program Sustainability Assessment Tool (PSAT). Framework analysis was used, and an IRLM was produced at each case site. Similar elements within the individual IRLM were identified, extracted, and re-produced to synthesize findings across sites and represent the generalized, cross-case findings.

Results

The IRLM was embedded within multiple stages of the study, including data collection, analysis, and reporting transparency. Between 33 and 44 determinants and between 36 and 44 implementation strategies were identified across the sites and informed the individual IRLMs. An example of generalized findings describing “intervention adaptability” demonstrated similarities in determinant detail and mechanisms of implementation strategies across sites. However, different strategies were applied to address similar determinants. Dependent and bi-directional relationships operated along the causal pathway that influenced implementation outcomes.

Conclusions

Case study methods help address implementation research priorities, including developing causal pathways and mechanisms. Embedding the IRLM within the case study approach provided structure and added to the transparency and replicability of the study. Identifying the similar elements across sites helped synthesize findings and give a general explanation of the implementation process. Detailing the methods provides an example for replication that can build generalizable knowledge in implementation research.


Contributions to the literature

Logic models can help understand how and why evidence-based interventions (EBIs) work to produce intended outcomes.

The implementation research logic model (IRLM) provides a method to understand causal pathways, including determinants, implementation strategies, mechanisms, and implementation outcomes.

We describe an exemplar project using a multiple case study design that embeds the IRLM at multiple stages. The exemplar explains how the IRLM helped synthesize findings across sites by identifying the common elements within the causal pathway.

By detailing the exemplar methods, we offer insights into how this approach of using the IRLM is generalizable and can be replicated in other studies.

The practice of implementation aims to get “someone…, somewhere… to do something differently” [ 1 ]. Typically, this involves changing individual behaviors and organizational processes to improve the use of evidence-based interventions (EBIs). To understand this change, implementation science applies different theories, models, and frameworks (hereafter “frameworks”) to describe and evaluate the factors and steps in the implementation process [ 2 , 3 , 4 , 5 ]. Implementation science provides much-needed theoretical frameworks and a structured approach to process evaluations. One or more frameworks are often used within a program of work to investigate the different stages and elements of implementation [ 6 ]. Researchers have acknowledged that the dynamic implementation process could benefit from using logic models [ 7 ]. Logic models offer a systematic approach to combining multiple frameworks and to building causal pathways that explain the mechanisms behind individual and organizational change.

Logic models visually represent how an EBI is intended to work [ 8 ]. They link the available resources with the activities undertaken, the immediate outputs of this work, and the intermediate outcomes and longer-term impacts [ 8 , 9 ]. Through this process, causal pathways are identified. For implementation research, the causal pathway provides the interconnection between a chosen EBI, determinants, implementation strategies, and implementation outcomes [ 10 ]. Testing causal mechanisms in the research translation pathway will likely dominate the next wave of implementation research [ 11 , 12 ]. Causal mechanisms (or mechanisms of change) are the “process or event through which an implementation strategy operates to affect desired implementation outcomes” [ 13 ]. Identifying mechanisms can improve implementation strategies’ selection, prioritization, and targeting [ 12 , 13 ]. This provides an efficient and evidence-informed approach to implementation.

Implementation researchers have proposed several methods to develop and examine causal pathways [ 14 , 15 ] and mechanisms [ 16 , 17 ]. This includes formalizing the inherent relationship between frameworks via developing the Implementation Research Logic Model (IRLM) [ 7 ]. The IRLM is a logic model designed to improve the rigor and reproducibility of implementation research. It specifies the relationships between the elements of implementation (determinants, strategies, and outcomes) and the mechanisms of change. To do this, it recommends linking implementation frameworks or relevant taxonomies (e.g., determinant and evaluation frameworks and implementation strategy taxonomies). The IRLM authors suggest the tool has multiple uses, including planning, executing, and reporting on the implementation process and synthesizing implementation findings across different contexts [ 7 ]. During its development, the IRLM was tested to confirm its utility in planning, executing, and reporting; however, evaluation of its utility in synthesizing findings across different contexts is ongoing. Users of the tool are encouraged to consider three principles: (1) comprehensiveness in reporting determinants, implementation strategies, and implementation outcomes; (2) specifying the conceptual relationships via diagrammatic tools such as colors and arrows; and (3) detailing important elements of the study design. Further, the authors also recognize that critical elements of the IRLM will vary across different study designs.

This study describes the development of an IRLM from a multiple case study design. Case studies can answer “how and why” questions about implementation. They enable researchers to develop a rich, in-depth understanding of a contemporary phenomenon within its natural context [ 18 , 19 , 20 , 21 ]. These methods can create coherence in the dynamic context in which EBIs exist [ 22 , 23 ]. Case studies are common in implementation research [ 24 , 25 , 26 , 27 , 28 , 29 , 30 ], with multiple case study designs suitable for undertaking comparisons across contexts [ 31 , 32 ]. However, they are infrequently applied to establish mechanisms [ 11 ] or to combine implementation elements to synthesize findings across contexts (as is possible through the IRLM). Hollick and colleagues [ 33 ] undertook a comparative case study, guided by a determinant framework, to explore how context influences successful implementation. The authors contrasted determinants across sites where implementation was successful versus sites where implementation failed. The study did not extend to identifying implementation strategies or mechanisms. By contrast, van Zelm et al. [ 31 ] undertook a theory-driven evaluation of successful implementation across ten hospitals. They used joint displays to present mechanisms of change aligned with evaluation outcomes; however, they did not identify the implementation strategies within the causal pathway. Our study seeks to build on these works and explore the utility of the IRLM in synthesizing findings across sites. The dual objectives of this paper are to:

Describe how case study methods can be applied to develop an IRLM

Demonstrate the utility of the IRLM in synthesizing implementation findings across case sites.

In this section, we describe the methods used in the exemplar case study and the alignment of the IRLM to this approach. The exemplar study investigated the integration of exercise EBIs within routine mental illness or cancer care in the context of the Australian healthcare system. The therapeutic benefits of exercise for non-communicable diseases such as cancer and mental illness are extensively documented [ 34 , 35 , 36 ], but exercise EBIs are inconsistently implemented as part of routine care [ 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 ].

Additional file 1 provides the Standards for Reporting Qualitative Research (SRQR) checklist.

Case study approach

We adopted an approach to case studies based on the methods described by Yin [ 18 ]. This approach is said to have post-positivist philosophical leanings, which are typically associated with the quantitative paradigm [ 19 , 45 , 46 ]. This is evidenced by the structured, deductive approach to the methods, which are described with a constant lens on objectivity, validity, and generalization [ 46 ]. Yin’s approach to case studies aligns with the IRLM for several reasons. First, the IRLM is designed to use established implementation frameworks. The two frameworks and one taxonomy applied in our exemplar were the Consolidated Framework for Implementation Research (CFIR) [ 47 ], Expert Recommendations for Implementing Change (ERIC) [ 48 ], and Proctor et al.’s implementation outcomes framework [ 49 ]. These frameworks guided multiple aspects of our study (see Table 1 ). Commencing an implementation study with a preconceived plan based upon established frameworks is deductive [ 22 ]. Second, the IRLM has its foundation in logic modeling to develop cause and effect relationships [ 8 ]. Yin advocates using logic models to analyze case study findings [ 18 ], arguing that developing logic models encourages researchers to iterate and consider plausible counterfactual explanations before upholding the causal pathway. Further, Yin notes that case studies are particularly valuable for explaining the transitions and context within the cause-and-effect relationship [ 18 ]. In our exemplar, the transition was the mechanism between the implementation strategy and the implementation outcome. Finally, the proposed function of the IRLM to synthesize findings across sites aligns with the exemplar study, which used a multiple case approach. Multiple case studies aim to develop generalizable knowledge [ 18 , 50 ].

Case study selection and boundaries

A unique feature of Yin’s approach to multiple case studies is using replication logic to select cases [ 18 ]. Cases are chosen to demonstrate similarities (literal replication) or differences for anticipated reasons (theoretical replication) [ 18 ]. In the exemplar study, the cases were purposely selected using literal replication and displayed several common characteristics. First, all cases had delivered exercise EBIs within normal operations for at least 12 months. Second, each case site delivered exercise EBIs as part of routine care for a non-communicable disease (cancer or mental illness diagnosis). Finally, each site delivered the exercise EBI within the existing governance structures of the Australian healthcare system. That is, the organizations used established funding and service delivery models of the Australian healthcare system.

Using replication logic, we posited that sites would exhibit some similarities in the implementation process across contexts (literal replication). However, based on existing implementation literature [ 32 , 51 , 52 , 53 ], we expected sites to adapt the EBIs through the implementation process. The determinant analysis, informed by the CFIR, should explain these adaptations (theoretical replication). Finally, in case study methods, clearly defining the boundaries of each case and the units of analysis, such as the individual, the organization, or the intervention, helps focus the research. We considered each healthcare organization as a separate case. Within each case, analysis was conducted at the organizational level [ 18 , 54 ], and operationalizing the implementation outcomes focused the inquiry (Table 1 ).

Data collection

During the study conceptualization for the exemplar, we mapped the data sources to the different elements of the IRLM (Fig. 1 ). Four primary data sources informed data collection: (1) semi-structured interviews with staff; (2) document review (such as meeting minutes, strategic plans, and consultant reports); (3) naturalistic observations; and (4) a validated survey (Program Sustainability Assessment Tool (PSAT)). A case study database was developed using Microsoft Excel to manage and organize data collection [ 18 , 54 ].
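A minimal sketch of such a case study database is shown below. The field names and the example record are assumptions for illustration only; the study database itself was maintained in Microsoft Excel rather than in code.

```python
# Hedged sketch: a minimal tabular structure for a case study database.
# Field names and the example record are assumptions for illustration.
import pandas as pd

case_db = pd.DataFrame(columns=[
    "case_site",       # the healthcare organization (the case)
    "source_type",     # interview | document | observation | PSAT
    "source_id",       # e.g., document number or participant code
    "date",            # when collected, or the date of the document
    "summary",         # short description relevant to the research aims
    "irlm_construct",  # determinant, strategy, mechanism, or outcome informed
])

case_db.loc[len(case_db)] = [
    "Site A", "document", "DOC-07", "2021-03-15",
    "Strategic plan naming exercise services as a priority",
    "determinant (inner setting)",
]
print(case_db)
```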

Fig. 1 Conceptual frame for the study

Semi-structured interviews

An interview guide was developed, informed by the CFIR interview guide tool [ 55 ]. Questions were selected across the five domains of the CFIR, which aligned with the delineation of determinant domains in the IRLM. Purposeful selection was used to identify staff for the interviews [ 56 ]. Adequate sample size in qualitative studies, particularly regarding the number of interviews, is often determined when data saturation is reached [ 57 , 58 ]. Unfortunately, there is little consensus on the definition of saturation [ 59 ], how to interpret when it has occurred [ 57 ], or whether it is possible to pre-determine in qualitative studies [ 60 ]. The number of participants in this study was determined based on the staff’s differential experience with the exercise EBI and their role in the organization. This approach sought to obtain a rounded view of how the EBI operated at each site [ 23 , 61 ]. Focusing on staff experiences also aligned with the organizational lens that bounded the study. Typical roles identified for the semi-structured interviews included the health professional delivering the EBI, the program manager responsible for the EBI, an organizational executive, referral sources, and other health professionals (e.g., nurses, allied health). Between five and ten interviews were conducted at each site. Interview times ranged from 16 to 72 min, most lasting around 40 min per participant.

Document review

A checklist informed by case study literature was developed outlining the typical documents the research team was seeking [ 18 ]. The types of documents sought for review included job descriptions, strategic plans/planning documents, operating procedures and organizational policies, communications (e.g., website, media releases, email, meeting minutes), annual reports, administrative databases/files, evaluation reports, third party consultant reports, and routinely collected numerical data that measured implementation outcomes [ 27 ]. As each document was identified, it was numbered, dated, and recorded in the case study database with a short description of the content related to the research aims and the corresponding IRLM construct. Between 24 and 33 documents were accessed at each site. A total of 116 documents were reviewed across the case sites.

Naturalistic observations

The onsite observations occurred over 1 week, wherein typical organizational operations were viewed. The research team interacted with staff, asked questions, and sought clarification of what was being observed; however, they did not disrupt the usual work routines. Observations allowed us to understand how the exercise EBI operated and contrast that with documented processes and procedures. They also provided the opportunity to observe non-verbal cues and interactions between staff. While onsite, case notes were recorded directly into the case study database [ 62 , 63 ]. Between 15 and 40 h were spent on observations per site. A total of 95 h was spent across sites on direct observations.

Program sustainability assessment tool (survey)

The PSAT is a planning and evaluation tool that assesses the sustainability of an intervention across eight domains [ 64 , 65 , 66 ]: (1) environmental support, (2) funding stability, (3) partnerships, (4) organizational capacity, (5) program evaluation, (6) program adaption, (7) communication, and (8) strategic planning [ 64 , 65 ]. The PSAT was administered to a subset of at least three participants per site who completed the semi-structured interview. The results were then pooled to provide an organization-wide view of EBI sustainability. Three participants per case site are consistent with previous studies that have used the tool [ 67 , 68 ] and recommendations for appropriate use [ 65 , 69 ].
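The pooling step can be illustrated with a small sketch; the domain scores below are hypothetical, and the rating scale is assumed only for the purpose of the example.

```python
# Hedged sketch: pooling hypothetical PSAT domain scores from three respondents
# to give an organization-wide view; values and scale are illustrative only.
import statistics

psat_responses = {
    "Environmental support": [5.2, 6.0, 4.8],
    "Funding stability": [3.1, 2.8, 3.5],
    "Organizational capacity": [6.1, 5.7, 6.4],
}

for domain, scores in psat_responses.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    print(f"{domain}: mean = {mean:.2f}, sd = {sd:.2f}")
```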

We included a validated measure of sustainability, recognizing calls to improve understanding of this aspect of implementation [ 70 , 71 , 72 ]. Given the limited number of measurement tools for evaluating sustainability [ 73 ], we selected the PSAT because its characteristics displayed the best alignment with the study aims. To determine “best alignment,” we deferred to a study by Lennox and colleagues that helps researchers select suitable measurement tools based on the conceptualization of sustainability in the study [ 71 ]. The PSAT provides a multi-level view of sustainability. It is a measurement tool that can be triangulated with other implementation frameworks, such as the CFIR [ 74 ], to better interrogate and understand the later stages of implementation. Further, the tool provides a contemporary account of an EBI’s capacity for sustainability [ 75 ]. This is consistent with case study methods, which explore complex, contemporary, real-life phenomena.

The voluminous data collection that is possible through case studies, and is often viewed as a challenge of the method [ 19 ], was advantageous to developing the IRLM in the exemplar and identifying the causal pathways. First, it aided three types of triangulation through the study (method, theory, and data source triangulation) [ 76 ]. Method triangulation involved collecting evidence via four methods: interview, observations, document review, and survey. Theoretical triangulation involved applying two frameworks and one taxonomy to understand and interpret the findings. Data source triangulation involved selecting participants with different roles within the organization to gain multiple perspectives about the phenomena being studied. Second, data collection facilitated depth and nuance in detailing determinants and implementation strategies. For the determinant analysis, this illuminated the subtleties within context and improved confidence and accuracy for prioritizing determinants. As case studies are essentially “naturalistic” studies, they provide insight into strategies that are implementable in pragmatic settings. Finally, the design’s flexibility enabled the integration of a survey and routinely collected numerical data as evaluation measures for implementation outcomes. This allowed us to contrast “numbers” against participants’ subjective experience of implementation [ 77 ].

Data analysis

Descriptive statistics were calculated for the PSAT and combined with the three other data sources; framework analysis [ 78 , 79 ] was then used to analyze the data. Framework analysis includes five main phases: familiarization, identifying a thematic framework, indexing, charting, and mapping and interpretation [ 78 ]. Familiarization occurred concurrently with data collection, and the thematic frame was aligned to the two frameworks and one taxonomy we applied to the IRLM. To index and chart the data, the raw data were uploaded into NVivo 12 [ 80 ]. Codes were established to guide indexing that aligned with the thematic frame. That is, determinants within the CFIR [ 47 ], implementation strategies listed in ERIC [ 48 ], and the implementation outcomes [ 49 ] of acceptability, fidelity, penetration, and sustainability were used as codes in NVivo 12. This process produced a framework matrix that summarized the information housed under each code at each case site.
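A compact sketch of this coding frame is given below; the determinant and strategy codes listed are a small illustrative subset, as the full CFIR and ERIC lists are much longer.

```python
# Hedged sketch: a coding frame mirroring the thematic framework described above.
# Only a small illustrative subset of CFIR and ERIC codes is shown.
coding_frame = {
    "CFIR determinants": [
        "Intervention adaptability",
        "Available resources",
        "Relative priority",
    ],
    "ERIC implementation strategies": [
        "Promote adaptability",
        "Use data experts",
        "Conduct educational meetings",
    ],
    "Implementation outcomes (Proctor et al.)": [
        "Acceptability", "Fidelity", "Penetration", "Sustainability",
    ],
}

# Indexing assigns excerpts from each data source to these codes; charting then
# summarizes the indexed material per code and case site (the framework matrix).
for framework, codes in coding_frame.items():
    print(framework, "->", ", ".join(codes))
```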

The final step of framework analysis involves mapping and interpreting the data. We used the IRLM to map and interpret the data in the exemplar. First, we identified the core elements of the implemented exercise EBI. Next, we applied the CFIR valance and strength coding to prioritize the contextual determinants. Then, we identified the implementation strategies used to address the contextual determinants. Finally, we provided a rationale (a causal mechanism) for how these strategies worked to address barriers and contribute to specific implementation outcomes. The systematic approach advocated by the IRLM provided a transparent representation of the causal pathway underpinning the implementation of the exercise EBIs. This process was followed at each case site to produce an IRLM for each organization. To compare, contrast, and synthesize findings across sites, we identified the similarities and differences in the individual IRLMs and then developed an IRLM that explained a generalized process for implementation. Through the development of the causal pathway and mechanisms, we deferred to existing literature seeking to establish these relationships [ 81 , 82 , 83 ]. Aligned with case study methods, this facilitated an iterative process of constant comparison and challenging the proposed causal relationships. Smith and colleagues advise that the IRLM “might be viewed as a somewhat simplified format,” and users are encouraged to “iterate on the design of the IRLM to increase its utility” [ 7 ]. Thus, we re-designed the IRLM within a traditional logic model structure to help make sense of the data collected through the case studies. Figure 1 depicts the conceptual frame for the study and provides a graphical representation of how the IRLM pathway was produced.

The results are presented with reference to the three principles of the IRLM: comprehensiveness, indicating the key conceptual relationships, and specifying the critical study design. The case study method allowed for comprehensiveness through the data collection and analysis described above. The mean number of data sources informing the analysis and development of the causal pathway at each case site was 63.75 (interviews (M = 7), observational hours (M = 23.75), PSAT surveys (M = 4), and documents reviewed (M = 29)). This resulted in more than 30 determinants and a similar number of implementation strategies identified at each site (determinant range per site = 33–44; implementation strategy range per site = 36–44). Developing a framework matrix meant that each determinant (prioritized and other), implementation strategy, and implementation outcome was captured. The matrix provided a direct link to the data sources that informed the content within each construct. An example from each construct was collated alongside the summary to evidence the findings.
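
As a quick check, the per-site means for the four data sources do sum to the figure reported above:

```python
# Sum of the reported per-site means for each data source.
per_site_means = {"interviews": 7, "observation_hours": 23.75, "psat_surveys": 4, "documents": 29}
print(sum(per_site_means.values()))  # 63.75
```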

The key conceptual relationship was articulated in a traditional linear process by aligning determinant → implementation strategy → mechanism → implementation outcome, as per the IRLM. To synthesize findings across sites, we compared and contrasted the results within each of the individual IRLMs and extracted similar elements to develop a generalized IRLM that represents cross-case findings. By redeveloping the IRLM within a traditional logic model structure, we added visual representations of the bi-directional and dependent relationships, illuminating the dynamism within the implementation process. To illustrate, intervention adaptability was a prioritized determinant and enabler across sites. Healthcare providers recognized that adapting and tailoring exercise EBIs increased “fit” with consumer needs. This also extended to adapting how healthcare providers referred consumers to exercise so that it was easy in the context of their other work priorities. Successful adaptation was contingent upon a qualified workforce with the required skills and competencies to enact change. Different implementation strategies were used to make adaptations across sites, such as promoting adaptability and using data experts. However, despite the different strategies, successful adaptation created positive bi-directional relationships. That is, healthcare providers’ confidence and trust in the EBI grew as consumer engagement increased and clinical improvements were observed. This triggered greater engagement with the EBI (e.g., acceptability → penetration → sustainability), albeit the degree of engagement differed across sites. Figure 2 illustrates this relationship within the IRLM and provides a contrasting relationship by highlighting how a prioritized barrier across sites (available resources) was addressed.

Figure 2: Example of intervention adaptability (E) contrasted with available resources (B) within a synthesized IRLM across case sites

The final principle is to specify the critical study design, wherein we have described how case study methodology was used to develop the IRLM exemplar. Our intention was to produce an explanatory causal pathway for the implementation process. The implementation outcomes of acceptability and fidelity were measured at the level of the provider, and penetration and sustainability were measured at the organizational level [ 49 ]. Service-level and clinical-level outcomes were not identified for a priori measurement throughout the study. We did, however, identify evidence of clinical outcomes that supported our overall findings via the document review. Historical evaluations of the service indicated that patients increased their exercise levels or demonstrated a change in symptomatology/function. The implementation strategies specified in the study were those chosen by the organizations. We did not attempt to augment routine practice or change implementation outcomes by introducing new strategies. The barriers across sites were represented with a (B) symbol and enablers with an (E) symbol in the IRLM. In the individual IRLMs, consistent determinants and strategies were highlighted (via bolding) to support extraction. Finally, within the generalized IRLM, the implementation strategies are grouped according to the ERIC taxonomy category. This accounts for the different strategies applied to achieve similar outcomes across case studies.

This study provides a comprehensive overview of how case study methodology can be used to develop an IRLM in an implementation research project. Using an exemplar that examines implementation in different healthcare settings, we illustrate how the IRLM (which documents the causal pathways and mechanisms) was developed and enabled the synthesis of findings across sites.

Case study methodologies are fraught with inconsistencies in terminology and approach. We adopted the method described by Yin. Its guiding paradigm, which is rooted in objectivity, means it can be viewed as less flexible than other approaches [ 46 , 84 ]. We found the approach offered sufficient flexibility within the frame of a defined process. We argue that the defined process adds to the rigor and reproducibility of the study, which is consistent with the principles of implementation science. That is, accessing multiple sources of evidence, applying replication logic to select cases, maintaining a case study database, and developing logic models to establish causal pathways demonstrate the reliability and validity of the study. The method was flexible enough to embed the IRLM within multiple phases of the study design, including conceptualization, philosophical alignment, and analysis. Paparini and colleagues [ 85 ] are developing guidance that recognizes the challenges and unmet value of case study methods for implementation research. This work, supported by the UK Medical Research Council, aims to enhance the conceptualization, application, analysis, and reporting of case studies. This should encourage and support researchers to use case study methods in implementation research with increased confidence.

The IRLM produced a relatively linear depiction of the relationship between context, strategies, and outcomes in our exemplar. However, as noted by the authors of the IRLM, the implementation process is rarely linear. If the tool is applied too rigidly, it may inadvertently depict an overly simplistic view of a complex process. To address this, we redeveloped the IRLM within a traditional logic model structure, adding visual representations of the dependent and bidirectional relationships evident within the general IRLM pathway [ 86 ]. Further, developing a general IRLM of cross-case findings that synthesized results involved a more inductive approach to identifying and extracting similar elements. It required the research team to consider broader patterns in the data before offering a prospective account of the implementation process. This was in contrast to the earlier analysis phases that directly mapped determinants and strategies to the CFIR and ERIC taxonomy. We argue that extracting similar elements is analogous to approaches that have variously been described as portable elements [ 87 ], common elements [ 88 ], or generalization by mechanism [ 89 ]. While defined and approached slightly differently, these approaches aim to identify elements frequently shared across effective EBIs and thus can form the basis of future EBIs to increase their utility, efficiency, and effectiveness [ 88 ]. We identified similarities related to determinant detail and mechanism of different implementation strategies across sites. This finding supports the view that many implementation strategies could be suitable, and selecting the “right mix” is challenging [ 16 ]. Identifying common mechanisms, such as increased motivation, skill acquisition, or optimizing workflow, enabled elucidation of the important functions of strategies. This can help inform the selection of appropriate strategies in future implementation efforts.

Finally, by developing individual IRLMs and then producing a general IRLM, we synthesized findings across sites and offered generalized findings. The ability to generalize from case studies is debated [ 89 , 90 ], with some considering the concept a fallacy [ 91 ]. That is, the purpose of qualitative research is to develop richness through data that are situated within a unique context; trying to extrapolate from findings is at odds with exploring that unique context. We suggest the method described herein and the application of the IRLM could be best applied to a form of generalization called ‘transferability’ [ 91 , 92 ]. This suggests that findings from one study can be transferred to another setting or population group. In this approach, the new site takes the information supplied and determines those aspects that would fit with their unique environment. We argue that elucidating the implementation process across multiple sites improves the confidence with which certain “elements” could be applied to future implementation efforts. For example, our approach may also be helpful for multi-site implementation studies that use methods other than case studies. Developing a general IRLM during study conceptualization could identify consistencies in baseline implementation status across sites. Multi-site implementation projects may seek to introduce and empirically test implementation strategies, such as via a cluster randomized controlled trial [ 93 ]. Within this study design, baseline comparison between control and intervention sites might extend to a comparison of organizational type, location and size, and individual characteristics, but not the chosen implementation strategies [ 94 ]. Applying the approach described within our study could enhance our understanding of how to support effective implementation.

Limitations

After the research team conceived this study, the authors of the PSAT validated another tool for use in clinical settings (the Clinical Sustainability Assessment Tool (CSAT)) [ 95 ]. This tool appears to align better with our study design due to its explicit focus on maintaining structured clinical care practices. The use of multiple data sources and consistency in some elements across the PSAT and CSAT should minimize the limitations of using the PSAT survey tool. At most case sites, limited staff were involved in developing and implementing the exercise EBI. Participants who self-selected for interviews may be more invested in ensuring positive outcomes for the exercise EBI. Inviting participants from various roles was intended to reduce selection bias. Finally, we recognize recent correspondence suggesting the IRLM misses a critical step in the causal pathway: that is, the mechanism between a determinant and the selection of an appropriate implementation strategy [ 96 ]. Similarly, Lewis and colleagues note that additional elements, including pre-conditions, moderators, and mediators (distal and proximal), exist within the causal pathway [ 13 ]. Through the iterative process of developing the IRLM, decisions were made about the determinant → implementation strategy relationship; however, this is not captured in the IRLM. Secondary analysis of the case study data would allow elucidation of these relationships, as this information can be extracted through the case study database. This was outside the scope of the exemplar study.

Developing an IRLM via case study methods proved useful in identifying causal pathways and mechanisms. The IRLM can complement and enhance the study design by providing a consistent and structured approach. In detailing our approach, we offer an example of how multiple case study designs that embed the IRLM can aid the synthesis of findings across sites. It also provides a method that can be replicated in future studies. Such transparency adds to the quality, reliability, and validity of implementation research.

Availability of data and materials

The data that support the findings of this study are available on request from the corresponding author [LC]. The data are not publicly available due to them containing information that could compromise research participant privacy.

Presseau J, McCleary N, Lorencatto F, Patey AM, Grimshaw JM, Francis JJ. Action, actor, context, target, time (AACTT): a framework for specifying behaviour. Implement Sci. 2019;14(1):102.

Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2020;283(112461).

Bauer M, Damschroder L, Hagedorn H, Smith J, Kilbourne A. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):32.

Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):53.

Lynch EA, Mudge A, Knowles S, Kitson AL, Hunter SC, Harvey G. “There is nothing so practical as a good theory”: a pragmatic guide for selecting theoretical approaches for implementation projects. BMC Health Serv Res. 2018;18(1):857.

Birken SA, Powell BJ, Presseau J, Kirk MA, Lorencatto F, Gould NJ, et al. Combined use of the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF): a systematic review. Implement Sci. 2017;12(1):2.

Smith JD, Li DH, Rafferty MR. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci. 2020;15(1):84.

W.K. Kellogg Foundation. Logic model development guide. Battle Creek, Michigan, USA; 2004.

McLaughlin JA, Jordan GB. Logic models: a tool for telling your program’s performance story. Eval Prog Plann. 1999;22(1):65–72.

Anselmi L, Binyaruka P, Borghi J. Understanding causal pathways within health systems policy evaluation through mediation analysis: an application to payment for performance (P4P) in Tanzania. Implement Sci. 2017;12(1):10.

Lewis C, Boyd M, Walsh-Bailey C, Lyon A, Beidas R, Mittman B, et al. A systematic review of empirical studies examining mechanisms of implementation in health. Implement Sci. 2020;15(1):21.

Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7(3).

Lewis CC, Klasnja P, Powell BJ, Lyon AR, Tuzzio L, Jones S, Walsh-Bailey C and Weiner B. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6(136).

Bartholomew L, Parcel G, Kok G. Intervention mapping: a process for developing theory and evidence-based health education programs. Health Educ Behav. 1998;25(5):545–63.

Weiner BJ, Lewis MA, Clauser SB, Stitzenberg KB. In search of synergy: strategies for combining interventions at multiple levels. JNCI Monographs. 2012;2012(44):34–41.

Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen J, Proctor EK, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177–94.

Fernandez ME, ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, Ruiter R, Markham C, Kok G. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7(158).

Yin R. Case study research and applications: design and methods. 6th ed. United States of America: Sage Publications; 2018.

Crowe S, Cresswell K, Robertson A, Huby G, Avery A, Sheikh A. The case study approach. BMC Med Res Methodol. 2011;11:100.

Stake R. The art of case study research. United States of America: Sage Publications; 2005.

Thomas G. How to do your case study. 2nd ed. London: Sage Publications; 2016.

Ramanadhan S, Revette AC, Lee RM and Aveling E. Pragmatic approaches to analyzing qualitative data for implementation science: an introduction. Implement Sci Commun. 2021;2(70).

National Cancer Institute. Qualitative methods in implementation science. United States of America: National Institutes of Health; 2018.

Mathers J, Taylor R, Parry J. The challenge of implementing peer-led interventions in a professionalized health service: a case study of the national health trainers service in England. Milbank Q. 2014;92(4):725–53.

Powell BJ, Proctor EK, Glisson CA, Kohl PL, Raghavan R, Brownson RC, et al. A mixed methods multiple case study of implementation as usual in children’s social service organizations: study protocol. Implement Sci. 2013;8(1):92.

van de Glind IM, Heinen MM, Evers AW, Wensing M, van Achterberg T. Factors influencing the implementation of a lifestyle counseling program in patients with venous leg ulcers: a multiple case study. Implement Sci. 2012;7(1):104.

Greenhalgh T, Macfarlane F, Barton-Sweeney C, Woodard F. “If we build it, will it stay?” A case study of the sustainability of whole-system change in London. Milbank Q. 2012;90(3):516–47.

Urquhart R, Kendell C, Geldenhuys L, Ross A, Rajaraman M, Folkes A, et al. The role of scientific evidence in decisions to adopt complex innovations in cancer care settings: a multiple case study in Nova Scotia, Canada. Implement Sci. 2019;14(1):14.

Herinckx H, Kerlinger A, Cellarius K. Statewide implementation of high-fidelity recovery-oriented ACT: A case study. Implement Res Pract. 2021;2:2633489521994938.

Young AM, Hickman I, Campbell K, Wilkinson SA. Implementation science for dietitians: The ‘what, why and how’ using multiple case studies. Nutr Diet. 2021;78(3):276–85.

van Zelm R, Coeckelberghs E, Sermeus W, Wolthuis A, Bruyneel L, Panella M, et al. A mixed methods multiple case study to evaluate the implementation of a care pathway for colorectal cancer surgery using extended normalization process theory. BMC Health Serv Res. 2021;21(1):11.

Albers B, Shlonsky A, Mildon R. Implementation Science 3.0. Switzerland: Springer; 2020.

Hollick RJ, Black AJ, Reid DM, McKee L. Shaping innovation and coordination of healthcare delivery across boundaries and borders. J Health Organ Manag. 2019;33(7/8):849–68.

Pedersen B, Saltin B. Exercise as medicine – evidence for prescribing exercise as therapy in 26 different chronic diseases. Scand J Med Sci Sports. 2015;25:1–72.

Firth J, Siddiqi N, Koyanagi A, Siskind D, Rosenbaum S, Galletly C, et al. The Lancet Psychiatry Commission: a blueprint for protecting physical health in people with mental illness. Lancet Psychiatry. 2019;6(8):675–712.

Campbell K, Winters-Stone K, Wisekemann J, May A, Schwartz A, Courneya K, et al. Exercise guidelines for cancer survivors: consensus statement from international multidisciplinary roundtable. Med Sci Sports Exerc. 2019;51(11):2375–90.

Deenik J, Czosnek L, Teasdale SB, Stubbs B, Firth J, Schuch FB, et al. From impact factors to real impact: translating evidence on lifestyle interventions into routine mental health care. Transl Behav Med. 2020;10(4):1070–3.

Suetani S, Rosenbaum S, Scott JG, Curtis J, Ward PB. Bridging the gap: What have we done and what more can we do to reduce the burden of avoidable death in people with psychotic illness? Epidemiol Psychiatric Sci. 2016;25(3):205–10.

Stanton R, Rosenbaum S, Kalucy M, Reaburn P, Happell B. A call to action: exercise as treatment for patients with mental illness. Aust J Primary Health. 2015;21(2):120–5.

Rosenbaum S, Hobson-Powell A, Davison K, Stanton R, Craft LL, Duncan M, et al. The role of sport, exercise, and physical activity in closing the life expectancy gap for people with mental illness: an international consensus statement by Exercise and Sports Science Australia, American College of Sports Medicine, British Association of Sport and Exercise Science, and Sport and Exercise Science New Zealand. Transll J Am Coll Sports Med. 2018;3(10):72–3.

Chambers D, Vinson C, Norton W. Advancing the science of implementation across the cancer continuum. United States of America: Oxford University Press Inc; 2018.

Schmitz K, Campbell A, Stuiver M, Pinto B, Schwartz A, Morris G, et al. Exercise is medicine in oncology: engaging clinicians to help patients move through cancer. Cancer J Clin. 2019;69(6):468–84.

Santa Mina D, Alibhai S, Matthew A, Guglietti C, Steele J, Trachtenberg J, et al. Exercise in clinical cancer care: a call to action and program development description. Curr Oncol. 2012;19(3):9.

Czosnek L, Rankin N, Zopf E, Richards J, Rosenbaum S, Cormie P. Implementing exercise in healthcare settings: the potential of implementation science. Sports Med. 2020;50(1):1–14.

Harrison H, Birks M, Franklin R, Mills J. Case study research: foundations and methodological orientations. Forum Qual Soc Res. 2017;18(1).

Yazan B. Three approaches to case study methods in education: Yin, Merriam, and Stake. Qual Rep. 2015;20(2):134–52.

Damschroder L, Aaron D, Keith R, Kirsh S, Alexander J, Lowery J. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):21.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health. 2011;38(2):65–76.

Heale R, Twycross A. What is a case study? Evid Based Nurs. 2018;21(1):7–8.

Brownson R, Colditz G, Proctor E. Dissemination and implementation research in health: translating science to practice. Second ed. New York: Oxford University Press; 2017.

Quiñones MM, Lombard-Newell J, Sharp D, Way V, Cross W. Case study of an adaptation and implementation of a Diabetes Prevention Program for individuals with serious mental illness. Transl Behav Med. 2018;8(2):195–203.

Wiltsey Stirman S, Baumann AA, Miller CJ. The FRAME: an expanded framework for reporting adaptations and modifications to evidence-based interventions. Implement Sci. 2019;14(1):58.

Baxter P, Jack S. Qualitative case study methodology: study design and implementation for novice researchers. Qual Rep. 2008;13(4):544–59.

Consolidated Framework for Implementation Research. 2018. Available from: http://www.cfirguide.org/index.html . Cited 14 February 2018.

Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Admin Pol Ment Health. 2015;42(5):533–44.

Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.

Teddlie C, Yu F. Mixed methods sampling: a typology with examples. J Mixed Methods Res. 2007;1(1):77–100.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52(4):1893–907.

Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual Res Sport Exerc Health. 2021;13(2):201–16.

Burau V, Carstensen K, Fredens M, Kousgaard MB. Exploring drivers and challenges in implementation of health promotion in community mental health services: a qualitative multi-site case study using Normalization Process Theory. BMC Health Serv Res. 2018;18(1):36.

Phillippi J, Lauderdale J. A guide to field notes for qualitative research: context and conversation. Qual Health Res. 2018;28(3):381–8.

Mulhall A. In the field: notes on observation in qualitative research. J Adv Nurs. 2003;41(3):306–13.

Schell SF, Luke DA, Schooley MW, Elliott MB, Herbers SH, Mueller NB, et al. Public health program capacity for sustainability: a new framework. Implement Sci. 2013;8(1):15.

Washington University. The Program Sustainability Assessment Tool. St Louis: Washington University; 2018. Available from: https://sustaintool.org/ . Cited 14 February 2018.

Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program Sustainability Assessment Tool: a new instrument for public health programs. Prev Chronic Dis. 2014;11:E12.

Stoll S, Janevic M, Lara M, Ramos-Valencia G, Stephens TB, Persky V, et al. A mixed-method application of the Program Sustainability Assessment Tool to evaluate the sustainability of 4 pediatric asthma care coordination programs. Prev Chronic Dis. 2015;12:E214.

Kelly C, Scharff D, LaRose J, Dougherty NL, Hessel AS, Brownson RC. A tool for rating chronic disease prevention and public health interventions. Prev Chronic Dis. 2013;10:E206.

Calhoun A, Mainor A, Moreland-Russell S, Maier RC, Brossart L, Luke DA. Using the Program Sustainability Assessment Tool to assess and plan for sustainability. Prev Chronic Dis. 2014;11:E11.

Proctor E, Luke D, Calhoun A, McMillen C, Brownson R, McCrary S, et al. Sustainability of evidence-based healthcare: research agenda, methodological advances, and infrastructure support. Implement Sci. 2015;10(1):88.

Lennox L, Maher L, Reed J. Navigating the sustainability landscape: a systematic review of sustainability approaches in healthcare. Implement Sci. 2018;13(1):27.

Moore JE, Mascarenhas A, Bain J, Straus SE. Developing a comprehensive definition of sustainability. Implement Sci. 2017;12(1):110.

Lewis CC, Fischer S, Weiner BJ, Stanick C, Kim M, Martinez RG. Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria. Implement Sci. 2015;10(1):155.

Shelton RC, Chambers DA, Glasgow RE. An extension of RE-AIM to enhance sustainability: addressing dynamic context and promoting health equity over time. Front Public Health. 2020;8(134).

Moullin JC, Sklar M, Green A, Dickson KS, Stadnick NA, Reeder K, et al. Advancing the pragmatic measurement of sustainment: a narrative review of measures. Implement Sci Commun. 2020;1(1):76.

Denzin N. The research act: A theoretical introduction to sociological methods. New Jersey: Transaction Publishers; 1970.

Grant BM, Giddings LS. Making sense of methodologies: a paradigm framework for the novice researcher. Contemp Nurse. 2002;13(1):10–28.

Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(1):117.

Pope C, Ziebland S, Mays N. Qualitative research in health care. Analysing qualitative data. BMJ. 2000;320(7227):114–6.

QSR International. NVivo 11 Pro for Windows 2018. Available from: https://www.qsrinternational.com/nvivo-qualitative-data-analysissoftware/home .

Waltz TJ, Powell BJ, Fernández ME, Abadie B, Damschroder LJ. Choosing implementation strategies to address contextual barriers: diversity in recommendations and future directions. Implement Sci. 2019;14(1):42.

Michie S, Johnston M, Rothman AJ, de Bruin M, Kelly MP, Carey RN, et al. Developing an evidence-based online method of linking behaviour change techniques and theoretical mechanisms of action: a multiple methods study. Southampton (UK): NIHR Journals Library. 2021;9:1.

Michie S, Johnston M, Abraham C, Lawton R, Parker D, Walker A. Making psychological theory useful for implementing evidence based practice: a consensus approach. Qual Saf Health Care. 2005;14(1):26–33.

Ebneyamini S, Sadeghi Moghadam MR. Toward developing a framework for conducting case study research. Int J Qual Methods. 2018;17(1):1609406918817954.

Paparini S, Green J, Papoutsi C, Murdoch J, Petticrew M, Greenhalgh T, et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med. 2020;18(1):301.

Sarkies M, Long JC, Pomare C, Wu W, Clay-Williams R, Nguyen HM, et al. Avoiding unnecessary hospitalisation for patients with chronic conditions: a systematic review of implementation determinants for hospital avoidance programmes. Implement Sci. 2020;15(1):91.

Koorts H, Cassar S, Salmon J, Lawrence M, Salmon P, Dorling H. Mechanisms of scaling up: combining a realist perspective and systems analysis to understand successfully scaled interventions. Int J Behav Nutr Phys Act. 2021;18(1):42.

Engell T, Kirkøen B, Hammerstrøm KT, Kornør H, Ludvigsen KH, Hagen KA. Common elements of practice, process and implementation in out-of-school-time academic interventions for at-risk children: a systematic review. Prev Sci. 2020;21(4):545–56.

Bengtsson B, Hertting N. Generalization by mechanism: thin rationality and ideal-type analysis in case study research. Philos Soc Sci. 2014;44(6):707–32.

Tsang EWK. Generalizing from research findings: the merits of case studies. Int J Manag Rev. 2014;16(4):369–83.

Polit DF, Beck CT. Generalization in quantitative and qualitative research: myths and strategies. Int J Nurs Stud. 2010;47(11):1451–8.

Adler C, Hirsch Hadorn G, Breu T, Wiesmann U, Pohl C. Conceptualizing the transfer of knowledge across cases in transdisciplinary research. Sustain Sci. 2018;13(1):179–90.

Wolfenden L, Foy R, Presseau J, Grimshaw JM, Ivers NM, Powell BJ, et al. Designing and undertaking randomised implementation trials: guide for researchers. BMJ. 2021;372:m3721.

Nathan N, Hall A, McCarthy N, Sutherland R, Wiggers J, Bauman AE, et al. Multi-strategy intervention increases school implementation and maintenance of a mandatory physical activity policy: outcomes of a cluster randomised controlled trial. Br J Sports Med. 2022;56(7):385–93.

Malone S, Prewitt K, Hackett R, Lin JC, McKay V, Walsh-Bailey C, et al. The Clinical Sustainability Assessment Tool: measuring organizational capacity to promote sustainability in healthcare. Implement Sci Commun. 2021;2(1):77.

Sales AE, Barnaby DP, Rentes VC. Letter to the editor on “the implementation research logic model: a method for planning, executing, reporting, and synthesizing implementation projects” (Smith JD, Li DH, Rafferty MR. the implementation research logic model: a method for planning, executing, reporting, and synthesizing implementation projects. Implement Sci. 2020;15 (1):84. Doi:10.1186/s13012-020-01041-8). Implement Sci. 2021;16(1):97.

Acknowledgements

The authors would like to acknowledge the healthcare organizations and staff who supported the study.

SR is funded by an NHMRC Early Career Fellowship (APP1123336). The funding body had no role in the study design, data collection, data analysis, interpretation, or manuscript development.

Author information

Authors and affiliations

Mary MacKillop Institute for Health Research, Australian Catholic University, Melbourne, Australia

Louise Czosnek & Eva M. Zopf

Cabrini Cancer Institute, The Szalmuk Family Department of Medical Oncology, Cabrini Health, Melbourne, Australia

Eva M. Zopf

Peter MacCallum Cancer Centre, Melbourne, Australia

Prue Cormie

Sir Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne, Australia

Discipline of Psychiatry and Mental Health, University of New South Wales, Sydney, Australia

Simon Rosenbaum

School of Health Sciences, University of New South Wales, Sydney, Australia

Faculty of Health, Victoria University of Wellington, Wellington, New Zealand

Justin Richards

Faculty of Medicine and Health, University of Sydney, Sydney, Australia

Nicole M. Rankin

Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, Australia

Contributions

LC, EZ, SR, JR, PC, and NR contributed to the conceptualization of the study. LC undertook the data collection, and LC, EZ, SR, JR, PC, and NR supported the analysis. The first draft of the manuscript was written by LC with NR and EZ providing first review. LC, EZ, SR, JR, PC, and NR commented on previous versions of the manuscript and provided critical review. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Louise Czosnek.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Sydney Local Health District Human Research Ethics Committee - Concord Repatriation General Hospital (2019/ETH11806). Ethical approval was also provided by Australian Catholic University (2018-279E), Peter MacCallum Cancer Centre (19/175), North Sydney Local Health District - Macquarie Hospital (2019/STE14595), and Alfred Health (516-19).

Consent for publication

Not applicable.

Competing interests

PC is the recipient of a Victorian Government Mid-Career Research Fellowship through the Victorian Cancer Agency. PC is the Founder and Director of EX-MED Cancer Ltd, a not-for-profit organization that provides exercise medicine services to people with cancer. PC is the Director of Exercise Oncology EDU Pty Ltd, a company that provides fee for service training courses to upskill exercise professionals in delivering exercise to people with cancer.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Standards for Reporting Qualitative Research (SRQR).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Czosnek, L., Zopf, E.M., Cormie, P. et al. Developing an implementation research logic model: using a multiple case study design to establish a worked exemplar. Implement Sci Commun 3 , 90 (2022). https://doi.org/10.1186/s43058-022-00337-8

Received: 19 March 2022

Accepted: 01 August 2022

Published: 16 August 2022

DOI: https://doi.org/10.1186/s43058-022-00337-8

Keywords

  • Logic model
  • Case study methods
  • Causal pathways
  • Causal mechanisms

Implementation Science Communications

ISSN: 2662-2211

College of Education and Human Ecology

Logic Models

There can be many reasons that a proposal is not funded, but a common issue stems from a narrative that does not offer a solid, well-ordered, logical, and convincing argument for the significance of the proposed research. Logic models can not only strengthen proposal resubmissions but can also be an invaluable part of new proposal development.

Logic models provide a graphical representation that describes how the work conducted will lead to the results you want to achieve—immediately, intermediately, and in the long term. These models allow you to test whether what you propose “makes sense” from a logical perspective, as well as provide a framework for designing the research and measuring success.

This presentation included an overview of logic models, a review of logic model templates and guidance on how to create a logic model as well as examples.

Karen Bruns Assistant Director, OSU Extension College of Food, Agriculture, and Environmental Science

  • Using Logic Models to Build Better Lives, Stronger Communities Presentation Slides (PDF)
  • Video of presentation (YouTube)

Belinda Gimbert Associate Professor, Educational Studies College of Education and Human Ecology

Becky Parker Senior Project Manager, CETE College of Education and Human Ecology

  • Using Logic Modeling for Program Planning and Evaluation (PDF, Gimbert)
  • Logic Model for Project m-NET (PDF)

Jerry D’Agostino Professor, Educational Studies College of Education and Human Ecology

Emily Rodgers Associate Professor, Teaching and Learning College of Education and Human Ecology

  • US Department of Education Investing in Innovation (i3) Logic Model for the Implementation of Reading Recovery (PDF)
  • US Department of Education Investing in Innovation (i3) Logic Model for the Scale-Up of Reading Recovery (PDF)

Mihaiela Gugiu Senior Research Associate, Crane Center for Early Childhood College of Education and Human Ecology

  • Constructing Logic Models for Program Evaluation (PDF)

Ian Wilkinson Professor, Teaching and Learning College of Education and Human Ecology

  • Promoting High-Level Reading Comprehension with Quality Talk Presentation Slides (PDF)

Additional Resources

  • Generic Logic Model for USDA National Institute of Food and Agriculture (NIFA) Project Reporting (PDF)
  • W. K. Kellogg Foundation. (2004). Logic model development guide. Battle Creek, MI: W. K. Kellogg Foundation.
  • W. K. Kellogg Foundation. (2004). Evaluation handbook. Battle Creek, MI: W. K. Kellogg Foundation.
  • Models for Promoting Community Health and Development.

Center for Research Evaluation

Logic Models vs Theories of Change

  • March 15, 2021

By Shannon Sharp

Logic models and theories of change help programs organize and illustrate their activities and goals. Funders often require these illustrations to help them understand exactly what a proposed initiative intends to do and what change is expected to come of it. For program evaluators, understanding a program’s intended progression from intervention (activities) to outcomes (goals) drives evaluation-question and evaluation-plan development.

Aren’t a logic model and a theory of change the same thing?

Nope! Though the terms “logic model” and “theory of change” are often used interchangeably, there are some key differences that affect how they are developed and used.

What vs Why

The main distinction between a logic model and theory of change is that a logic model describes a logical sequence showing what the intervention’s intended outcomes are—If we provide X, the result will be Y—while a theory of change includes causal mechanisms to show why each intervention component is expected to result in the intended outcomes—If we provide X, A will support (or hinder) a result of Y. Though logic models ideally include a section for contextual factors and assumptions, these are not detailed within each part of the model and are often left out altogether. A theory of change includes these factors, where appropriate, throughout the model. As such, a logic model is descriptive while a theory of change is explanatory.

A step in the right direction

Logic models and theories of change also differ in how they progress from one step to the next. Logic models are linear, progressing step by step—typically from inputs to activities to outcomes—and an effect never precedes a cause. In a theory of change, however, components are not always linear and effects can, and often do, later influence causes (for example, as shown below, exercise is expected to result in looking better, but positive feedback from others about looking better then encourages the individual to continue exercising).
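To make the structural difference concrete, here is a small, hypothetical sketch (not drawn from any particular program): the logic model is a strictly ordered chain, while the theory of change is a directed graph whose edges carry causal mechanisms and can loop back, as in the exercise example above.

```python
# Hypothetical sketch contrasting the two structures (not from any cited program).

# Logic model: a linear sequence; each step only feeds forward.
logic_model = ["inputs", "activities", "outputs", "short-term outcomes", "long-term outcomes"]

# Theory of change: a directed graph whose edges name the causal mechanism,
# and which may loop back (an effect later influencing a cause).
theory_of_change = {
    ("exercise", "look better"): "physiological change",
    ("diet", "feel better"): "physiological change",
    ("look better", "positive feedback from others"): "social response",
    ("positive feedback from others", "exercise"): "reinforcement loop",
}

def has_feedback(edges):
    """Depth-first search for a cycle among the (cause, effect) edges."""
    adj = {}
    for cause, effect in edges:
        adj.setdefault(cause, []).append(effect)
        adj.setdefault(effect, [])
    state = {node: 0 for node in adj}  # 0 = unvisited, 1 = on stack, 2 = done
    def visit(node):
        if state[node] == 1:
            return True                # back-edge: the model loops onto itself
        if state[node] == 2:
            return False
        state[node] = 1
        cyclic = any(visit(nxt) for nxt in adj[node])
        state[node] = 2
        return cyclic
    return any(visit(node) for node in adj)

print(has_feedback(theory_of_change))  # True: exercise -> look better -> feedback -> exercise
```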

In the driver’s seat

As noted earlier, a logic model is based on a logical sequence of steps. On the other hand, as its name implies, a theory of change is theory-driven. The mechanisms responsible for how its various components interact with each other can develop from real-world examples, academic literature, or even a program leader’s hypotheses. Because of this difference, a logic model is usually constructed after the program is developed, as a way of describing the program and its intended outcomes, while a theory of change is most useful when considered before program development as a way to determine the best intervention(s) for the desired outcome(s). This is not always the case in practice, but it nonetheless reflects their individual strengths.

Which way to turn?

Practically, as program evaluators we want to use the type of program illustration that is most useful for its given purpose. Easy for me to say… but what does that mean? In a nutshell, a logic model is great for summarizing the key components of a program in a way that others can understand at a glance. Clients, stakeholders, and funders are likely to appreciate the way even complex programs can be made easy to understand. Bonus points if the logic model includes contextual factors and assumptions to help frame decision-making and conclusions. On the other hand, for a more rigorous examination of when to evaluate certain outcomes and to determine why a program might or might not work (or in the end, why it did or did not work), a theory of change is your best bet.

Take a look

Here are two examples that highlight the components of a logic model versus a theory of change. The Center for Research Evaluation (CERE) developed the logic model for our evaluation of the Mississippi Children’s Museum’s WonderBox (makerspace) exhibit. It includes all inputs, activities, and intended short- and long-term outcomes for the program. The theory of change is an adaptation of one illustrated in Michael Quinn Patton’s book, Utilization-Focused Evaluation, 4th Edition. Though a bit simplistic (most theories of change will have more components), notice how the theory of change includes the causal mechanism positive feedback from others, which mediates how the effects (look better and feel better) impact their causes (diet and exercise).

Adapted from: Patton, M. Q. (2008). Utilization-Focused Evaluation, 4th Edition. Sage.

We found these resources particularly helpful:

  • Patton, M. Q. (2008). Utilization-Focused Evaluation, 4th Edition. Sage.
  • https://analyticsinaction.co/theory-of-change-vs-logic-model
  • https://www.theoryofchange.org/wp-content/uploads/toco_library/pdf/TOCs_and_Logic_Models_forAEA.pdf

A logic model framework for evaluation and planning in a primary care practice-based research network (PBRN)

Holly Hayes

1 Department of Family and Community Medicine, University of Texas Health Science Center San Antonio

Michael L. Parchman

2 VERDICT Health Services Research Program, South Texas Veterans Health Care System

3 Academic Center for Excellence in Teaching

Evaluating effective growth and development of a Practice-Based Research Network (PBRN) can be challenging. The purpose of this article is to describe the development of a logic model and how the framework has been used for planning and evaluation in a primary care PBRN.

An evaluation team was formed consisting of the PBRN directors, staff, and board members. After the mission and the target audience were determined, facilitated meetings and discussions were held with stakeholders to identify the assumptions, inputs, activities, outputs, outcomes and outcome indicators.

The long-term outcomes outlined in the final logic model are two-fold: 1.) Improved health outcomes of patients served by PBRN community clinicians; and 2.) Community clinicians are recognized leaders of quality research projects. The Logic Model proved useful in identifying stakeholder interests and dissemination activities as an area that required more attention in the PBRN. The logic model approach is a useful planning tool and project management resource that increases the probability that the PBRN mission will be successfully implemented.

Introduction

With the heightened emphasis on translational and comparative effectiveness research to improve patient outcomes, Practice-Based Research Networks (PBRNs) have an unprecedented opportunity to become effective laboratories to address high priority research questions. As PBRNs engage in more funded research, these research dollars come with increased accountability to demonstrate the effectiveness of the work conducted in PBRNs. Despite a significant growth in the number of PBRNs over the past 15 years, little is known about effective and useful methods of evaluating PBRNs ( 1 ). One method with significant potential for PBRN evaluation and planning is a logic model.

What is a logic model?

The logic model has proven to be a successful tool for program planning as well as implementation and performance management in numerous fields, including primary care ( 2 – 14 ). A logic model (see Figure One ) is defined as a graphical/textual representation of how a program is intended to work and links outcomes with processes and the theoretical assumptions of the program ( 6 ). It is a depiction of a program or project showing what the program or project will do and what it is to accomplish. It is a series of “if then” relationships that, if implemented as intended, lead to the desired outcomes. Stated another way, it is a framework for describing the relationships between resources, activities, and results as they relate to a specific program or project goal. The logic model also helps to make underlying assumptions about the program or project explicit. It provides a common approach to integrating planning, implementation, and evaluation. Figure One below defines the key components of a logic model and the variables included in each section.

Figure 1. Program/Project Logic Model Framework

Why use a logic model?

A logic model is an efficient tool that requires few resources other than personnel time. Since evaluation dollars are not usually set aside in PBRN budgets, the cost-efficiency of this framework is attractive. In addition, the process of developing the logic model requires PBRN team members to work together in a manner that has the side benefit of improving team relationships and focus. A logic model can also provide much-needed detail about how resources and activities can be connected with the desired results, which helps with project management, resource allocation, and strategic planning ( 2 – 14 ). The process of developing the logic model also facilitates critical thinking through the process of planning and communicating network objectives and outcomes. According to the Kellogg Foundation, the development of a logic model is a “conscious process that creates an explicit understanding of the challenges ahead, the resources available, and the timetable in which to hit the target” ( 6 ). For more detailed information regarding logic models, refer to the W.K. Kellogg Foundation Logic Model Development Guide ( 6 ).

To date, there are no publications demonstrating how a logic model framework can be used for evaluation and program planning in a primary care PBRN. The purpose of this article is to describe the development of a logic model and how the framework has been used in a primary care PBRN, the South Texas Ambulatory Research Network (STARNet).

Setting and Context

STARNet was founded in 1992 “to conduct & disseminate practice-based research that results in new knowledge and improves the health of patients in South Texas.” STARNet has 165 practitioners in 108 primary care practices. These are primarily small group practices or solo practitioners located throughout South Texas – spanning a territory from the southernmost Mexico/Texas border to north central Austin, Texas. Over the years, STARNet has published over 20 peer-reviewed manuscripts of research findings from studies conducted in member primary care practice settings ( 15 – 34 ).

Development of a Logic Model

Step One: Agree on the mission and target audience

The STARNet Board of Directors had previously agreed that the primary goal of all STARNet projects is to improve the health of primary care patients in South Texas. The Board believed that to achieve this goal, STARNet clinicians and academic investigators (Target Audiences) were both equally critical for the success of the network. Investigators facilitate the research process and pursue grant opportunities for the overall sustainability of the network and STARNet clinicians are needed to frame and define the research questions that are relevant to their daily practice and assist in the interpretation of results.

Step Two: Identify and describe assumptions, inputs and activities

After defining the mission and the target audience, the STARNet coordinator and evaluation specialist facilitated ten meetings and discussions with key stakeholders over a six-month period. Stakeholders at the meetings included: STARNet Board of Directors who are full-time primary care clinicians in family and internal medicine, practice facilitators who visit clinics regularly and assist with change processes, two STARNet directors with over 10 years of experience with the Network, and STARNet partners including the School of Public Health and the South Texas Area Health Education Center. This group was tasked with identifying the assumptions, inputs, and activities for the STARNet logic model. Assumptions are elements that you assume are in place and necessary for carrying out your strategies. For example, one assumption for PBRN research is that clinicians have time to participate in PBRN research and that investigators have funded grants that will contribute to network support. Once assumptions are identified, inputs are defined. Inputs include a list of identified resources (e.g., Network directors with clinical expertise and connections with the community) as well as constraints (e.g., lack of discretionary funds for relationship building – food, small gifts).

After assumptions and inputs are defined, the activities through which the program will meet the needs of the target audience are described. Since the network has existed for over 18 years, it took a concerted effort from all members to think beyond current and past activities and initiatives. The coordinator encouraged the team to give equal attention to STARNet’s past and current activities and to the activities that need to take place in order to fulfill its mission. Well-designed activities are an essential element of logic model development. For STARNet, if activities could not be linked directly or deemed relevant to the two long-term outcomes (improved health outcomes of patients and clinician-led research projects), they would not be included in the logic model.

Step Three: Identify Outputs, Outcomes, and Outcome Indicators

To demonstrate STARNet’s growth and development, it was necessary to identify the specific outputs and outcomes necessary to fulfill its mission. Outputs are the actual deliverables or the units of service specific to STARNet – what occurs as a result of the planned activities. For example, the specific output for recruiting STARNet clinicians to the network is the number of new network members. The outcome is the actual impact and change associated with each output and is typically broken down into short-term (1–3 years), intermediate (3–5 years), and long-term (5–10 years). For example, an outcome chain that would apply to most PBRNs is that developing the research and resource capacity of STARNet clinicians (short-term) would lead to an increase in the number/quality of research projects in which STARNet clinicians participate (intermediate), which would in turn result in STARNet clinicians becoming recognized leaders of quality research projects (long-term). Once the outcomes were identified, we created the outcome indicators.

The outcome indicators are the milestones that can be observed and measured toward meeting the program’s mission. These measures indicate how successfully your program is making progress toward the identified goals.
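
To show how these pieces fit together, the sketch below writes one strand of the logic model in a lightweight, machine-readable form, using the clinician-recruitment example from this section; the field names and specific indicator wording are hypothetical.

```python
# Illustrative sketch (hypothetical field names and indicator wording) of one
# logic-model strand, based on the clinician-recruitment example described above.
logic_model_entry = {
    "activity": "Recruit STARNet clinicians to the network",
    "output": "Number of new network members",
    "outcomes": {
        "short_term_1_3_yrs": "Research and resource capacity of STARNet clinicians is developed",
        "intermediate_3_5_yrs": "More and higher-quality research projects involve STARNet clinicians",
        "long_term_5_10_yrs": "STARNet clinicians are recognized leaders of quality research projects",
    },
    "outcome_indicators": [
        "New members recruited per year",
        "Number and quality of clinician-led research projects",
    ],
}

def is_fully_linked(entry):
    """Check that the activity links to an output, at least one outcome, and at
    least one measurable indicator, mirroring the linkage emphasized in the text."""
    return bool(entry["output"]) and bool(entry["outcomes"]) and bool(entry["outcome_indicators"])

print(is_fully_linked(logic_model_entry))  # True
```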

The most time-consuming component of the logic model process was identifying the activities, outputs and outcomes, especially ensuring that linkages existed between these three components. Developing meaningful outcomes that would be useful for grants, reports, publications and that informed members was the most difficult exercise during the logic model development process. The evaluation specialist was extremely helpful in assisting the logic model team in determining what outcomes were important enough to measure. The initial model was circulated to the group several times through e-mail and monthly meetings and further refined in an iterative process.

Final Logic Model

As a result of the above activities, the logic model in Figure 2 was agreed upon by all members. The logic model begins with the target population and underlying assumptions and leads into the inputs, activities, outputs, and outcomes (short-term, intermediate, and long-term). The long-term outcomes of STARNet are two-fold: 1.) Improved health outcomes of patients served by STARNet clinicians; and 2.) STARNet clinicians are recognized leaders of quality research projects. Every input, activity, and outcome in STARNet’s logic model can be linked back to these two long-term outcomes – our mission’s “bull’s eye”.

Figure 2. Program Goal: To establish a collaborative planning and implementation model for evaluating STARNet

Application of the Logic Model to PBRN Activities

Development of the logic model was considered only an initial phase in the process of evaluating, planning, and developing the network. It remained clear throughout the process that an ongoing review and refinement of the logic model would be necessary to ensure that the PBRN implementation activities remained consistent with established outcomes.

Collecting Outcome Data

The group agreed that the first step in using our logic model would be to track the key indicators outlined in the outcomes. The group created detailed “to-do” lists based on the logic model, quarterly reports, and updated Board member job descriptions. STARNet staff made a concerted effort to collect data on all of the outputs in an Excel spreadsheet. Thus, the logic model informed and focused staff on what specific data needed to be recorded. The STARNet coordinator is charged with collecting all of the quantitative process and qualitative data each year. Detailed minutes and recordings are now being kept for the following meetings: Network staff, all membership, Board of Directors, and one-on-one site visits with STARNet clinicians. Qualitative data have proven to be very important in documenting the extent of involvement of members in network activities (output 9), not just the number involved, and network contextual changes.
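
As an illustration of the tracking described here (the study used an Excel spreadsheet), the sketch below assumes a hypothetical CSV export with one row per output per quarter and summarizes progress for each output.

```python
# Minimal sketch of output tracking; assumes a hypothetical CSV export of the
# tracking spreadsheet with columns: quarter, output, value.
import pandas as pd

tracking = pd.read_csv("starnet_outputs.csv")  # e.g., quarter, output, value

# Totals per output across all quarters, plus how many quarters reported,
# give a quick view of progress against each logic-model output.
summary = tracking.groupby("output")["value"].agg(total="sum", quarters_reported="count")
print(summary)
print("Most recent quarter on record:", tracking["quarter"].max())
```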

Assessing PBRN Progress

The team meets monthly to assess progress and perform an internal evaluation based on the logic model’s activities, outputs, and outcomes. One example of this use occurred when, during one of our monthly meetings, we discussed our progress toward conducting the activities outlined in the logic model framework. It was obvious that no effort had been made to “disseminate research findings” to the network members and the broader community (Activity 9 and Output 6). The Board of Directors and STARNet leadership considered this a major process gap, given that the ultimate outcome is to improve patient health. As a result, STARNet is currently working with the University of Texas School of Public Health and the South Central Area Health Education Center (AHEC) to create two comprehensive social marketing plans for disseminating the research findings of studies conducted in STARNet – one aimed at clinicians and their staff and one at patients. STARNet Directors and members will participate in focus groups in the summer of 2011 to develop a strategic communication and dissemination plan. This exemplifies how the logic model can also be used for problem identification and reallocation of resources in order to meet a predetermined outcome.
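
The same kind of monthly review can be expressed as a simple comparison of planned activities against recorded outputs, flagging anything with no progress, much like the dissemination gap described above. The activity names and counts below are hypothetical.

    # Hypothetical planned activities and recorded output counts, for illustration.
    planned_activities = [
        "Recruit clinicians",
        "Conduct member site visits",
        "Disseminate research findings",
    ]

    recorded_outputs = {                 # activity -> outputs recorded to date
        "Recruit clinicians": 12,
        "Conduct member site visits": 8,
        # "Disseminate research findings" has no entries yet
    }

    for activity in planned_activities:
        if recorded_outputs.get(activity, 0) == 0:
            print("Process gap - no recorded outputs for:", activity)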

Another example of the value of our logic model came when the STARNet Board of Directors decided to take a more proactive role in the financial status of the network. Board agendas now include a financial report at every meeting. STARNet recently became an incorporated non-profit, updated its by-laws, and elected officers to its Executive Committee. The Board of Directors considers these crucial steps in meeting the mission of the network and is now developing a business plan to assist with future planning.

Subsequent to initiating our work on a logic model, Bleeker (35) and colleagues from the Netherlands identified only two existing PBRN evaluation tools. These tools were developed by Clement (36) and Fenton (37) to evaluate the overall effectiveness of PBRNs. Clement (36) proposed a conceptual framework for evaluating primary care PBRNs based on seven primary objectives with specific process and outcome indicators. The objectives could be categorized into network infrastructure, activity, and dissemination efforts. Based on our review, the evaluation framework proposed by Clement (36) appeared to be a very usable and feasible tool for implementation. However, Bleeker (35) questioned the validity of these indicators and the feasibility of Clement’s framework for conducting an overall evaluation.

Fenton (37) and colleagues developed the second identified evaluation tool, a Primary Care Research Network Toolkit, which includes a contextualized case study of five networks in the United Kingdom. This toolkit described eight primary dimensions of networks, each with associated sub-dimensions. Networks could score themselves over time and even conduct comparisons across networks. Although the Primary Care Research Network Toolkit may be useful in conducting formal evaluations, it lacked sufficient information about the resources and time needed to successfully replicate the process in the United States.

Considering the relatively limited resources of PBRNs, it is not surprising that a majority of PBRNs have not conducted a thorough evaluation of their efforts. Although evaluating a network takes time and requires the involvement of various individuals throughout the process, outcome evaluation efforts are a worthwhile investment. Unfortunately, we realized early on that our budget would not allow us to complete all of the activities outlined in the logic model, so it became important to prioritize activities within the logic model. The logic model should be modified regularly based on the changing capacity and resources of the network. It remains to be seen whether our logic model framework will meet the planning and evaluation needs of STARNet.

In addition, logic models can be a tremendous tool for determining what is working well and what is not. The Board of Directors continually reminded the staff that all activities need to be centered on the mission – improving patient care. As a result, all activities – planned and unplanned – are viewed critically from that perspective. It is important to note, however, that not every activity can be linked directly to long-term outcomes. Based on the logic model framework, the Co-Directors turned away investigators who wanted to initiate projects in the network that did not meet the members’ current priorities. This was one of the first times in the history of the network that it appropriately said “no” to an incompatible research interest. The logic model, in essence, united and empowered the efforts of members in advancing the STARNet mission.

Finally, the logic model reminded the PBRN team that a balance has to be maintained between hard, traditional measures, such as the number of studies and publications, and more subjective measures, such as easy access to PBRN member offices by PBRN coordinators and researchers. In addition, the core tenet of successful PBRNs is developing and maintaining respectful, trusting, long-term relationships that continue beyond research studies (38). The complexity of the relationships and communication within a network is difficult to capture in evaluation efforts. The logic model helped us realize that it is not just about the quantitative outcomes. In order to tell a comprehensive story of STARNet, we also began to collect qualitative data (e.g., rich stories from members). The logic model also made clear that, in the future, we need to collect these data more systematically from members and patients following the completion of research studies.

In conclusion, we found the logic model to be an effective planning and evaluation tool and a useful project management resource that greatly increases the probability that PBRN goals will be reached in a manner consistent with the network’s mission. The logic model framework not only helped facilitate the network evaluation process but, equally important, engaged the leadership and members in a meaningful way. As a result, the Board of Directors, community clinician members, academic investigators, and staff have all taken a more proactive role in working together to advance the STARNet mission.

Acknowledgments

Funding for this study was provided by Clinical and Translational Science Award # UL1RR025767 from NCRR/NIH to the University of Texas Health Science Center at San Antonio. The authors would like to thank the members of the South Texas Ambulatory Research Network for their support of and contribution to this study.

None of the authors have a conflict of interest.

medRxiv

The Implementation Research Logic Model: A Method for Planning, Executing, Reporting, and Synthesizing Implementation Projects

Background Numerous models, frameworks, and theories exist for specific aspects of implementation research, including for determinants, strategies, and outcomes. However, implementation research projects often fail to provide a coherent rationale or justification for how these aspects are selected and tested in relation to one another. Despite this need to better specify the conceptual linkages between the core elements involved in projects, few tools or methods have been developed to aid in this task. The Implementation Research Logic Model (IRLM) was created for this purpose and to enhance the rigor and transparency of describing the often-complex processes of improving the adoption of evidence-based practices in healthcare delivery systems.

Methods The IRLM structure and guiding principles were developed through a series of preliminary activities with multiple investigators representing diverse implementation research projects in terms of contexts, research designs, and implementation strategies being evaluated. The utility of the IRLM was evaluated during a two-day training of over 130 implementation researchers and healthcare delivery system partners.

Results Preliminary work with the IRLM produced a core structure and multiple variations for common implementation research designs and situations, as well as guiding principles and suggestions for use. Results of the survey indicated high utility of the IRLM for multiple purposes, such as improving rigor and reproducibility of projects; serving as a “roadmap” for how the project is to be carried out; clearly reporting and specifying how the project is to be conducted; and understanding the connections between determinants, strategies, mechanisms, and outcomes for their project.

Conclusions The IRLM is a semi-structured, principles-guided tool designed to improve the specification, rigor, reproducibility, and testable causal pathways involved in implementation research projects. The IRLM can also aid implementation researchers and implementation partners in the planning and execution of practice change initiatives. Adaptation and refinement of the IRLM is ongoing, as is the development of resources for use and applications to diverse projects, to address the challenges of this complex scientific field.
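
As a rough illustration of the row-wise specification the abstract describes, where determinants, strategies, mechanisms, and outcomes are laid out so their connections are explicit, the sketch below encodes one generic pathway. The field values are invented examples, not content from the IRLM paper.

    # One hypothetical IRLM-style pathway; values are generic illustrations.
    from typing import NamedTuple


    class IRLMRow(NamedTuple):
        determinant: str   # barrier or facilitator in the implementation context
        strategy: str      # implementation strategy selected to address it
        mechanism: str     # how the strategy is expected to produce change
        outcome: str       # implementation outcome used to test the pathway


    rows = [
        IRLMRow(
            determinant="Clinicians unaware of the evidence-based practice",
            strategy="Educational outreach visits",
            mechanism="Increased knowledge and buy-in",
            outcome="Adoption (share of clinics delivering the practice)",
        ),
    ]

    for row in rows:
        print(" -> ".join(row))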

Competing Interest Statement

The authors have declared no competing interest.

Funding Statement

This study was supported by grant P30 DA027828 from the National Institute on Drug Abuse, awarded to C. Hendricks Brown; grant U18 DP006255 to Justin Smith and Cady Berkel; grant R56 HL148192 to Justin Smith; grant UL1 TR001422 from the National Center for Advancing Translational Sciences to Donald Lloyd-Jones; grant R01 MH118213 to Brian Mustanski; grant P30 AI117943 from the National Institute of Allergy and Infectious Diseases to Richard D’Aquila; grant UM1 CA233035 from the National Cancer Institute to David Cella; a grant from the Woman’s Board of Northwestern Memorial Hospital to John Csernansky; grant F32 HS025077 from the Agency for Healthcare Research and Quality; grant NIFTI 2016-20178 from the Foundation for Physical Therapy; the Shirley Ryan AbilityLab; and by the Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University in St. Louis through grant R25 MH080916 from the National Institute of Mental Health and the Department of Veterans Affairs, Health Services Research & Development Service, Quality Enhancement Research Initiative (QUERI) to Enola Proctor. The opinions expressed herein are the views of the authors and do not necessarily reflect the official policy or position of the National Institutes of Health, the Centers for Disease Control and Prevention, the Agency for Healthcare Research and Quality, or the Department of Veterans Affairs.

Author Declarations

All relevant ethical guidelines have been followed; any necessary IRB and/or ethics committee approvals have been obtained and details of the IRB/oversight body are included in the manuscript.

All necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived.

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).

I have followed all appropriate research reporting guidelines and uploaded the relevant EQUATOR Network research reporting checklist(s) and other pertinent material as supplementary files, if applicable.

Data Availability

Not applicable.



Enhancing Program Performance with Logic Models

Division of Extension


1.1: A Logic Model is a map

A logic model…

  • Is a simplified picture of a program, initiative, or intervention that is a response to a given situation. Many people compare a logic model with a roadmap showing how you plan to reach your destination.
  • Is what some call program theory (Weiss, 1998) or the program’s theory of action (Patton, 1997): a “plausible, sensible model of how a program is supposed to work” (Bickman, 1987, p. 5).
  • Portrays the underlying rationale of the program or initiative (Chen, Cato & Rainford, 1998–9; Renger & Titcomb, 2002).
  • Is not only an evaluation tool: we find it equally helpful for planning and program design, managing programs, and communicating.

