It’s Not Just Semantics: Managing Outcomes Vs. Outputs

  • Deborah Mills-Scofield

Outputs are more easily measured, but less important.

What’s the difference between outputs and outcomes? Some think the question is merely semantic, or that the difference is simple: outputs are extrinsic and outcomes intrinsic. I think otherwise; the difference between outputs and outcomes is more fundamental and profound.

  • Deb Mills-Scofield is a strategy and innovation consultant to mid-size and large corporations and a partner in Glengary LLC, an early-stage venture capital firm. She's also a Visiting Scholar at Brown University and teaches at Oberlin College. Her patent from AT&T Bell Labs was one of the highest revenue-generating patents for AT&T and Lucent. Twitter: @dscofield.

Annual Review of Psychology

Volume 73, 2022. Review article: "Optimizing Research Output: How Can Psychological Research Methods Be Improved?"

  • Jeff Miller (1) and Rolf Ulrich (2)
  • Affiliations: (1) Department of Psychology, University of Otago, Dunedin 9016, New Zealand; email: [email protected]; (2) Department of Psychology, University of Tübingen, Tübingen 72074, Germany
  • Vol. 73:691–718 (volume publication date January 2022). https://doi.org/10.1146/annurev-psych-020821-094927
  • First published as a Review in Advance on October 06, 2021
  • Copyright © 2022 by Annual Reviews. All rights reserved

Recent evidence suggests that research practices in psychology and many other disciplines are far less effective than previously assumed, which has led to what has been called a “crisis of confidence” in psychological research (e.g., Pashler & Wagenmakers 2012). In response to the perceived crisis, standard research practices have come under intense scrutiny, and various changes have been suggested to improve them. The burgeoning field of metascience seeks to use standard quantitative data-gathering and modeling techniques to understand the reasons for inefficiency, to assess the likely effects of suggested changes, and ultimately to tell psychologists how to do better science. We review the pros and cons of suggested changes, highlighting the many complex research trade-offs that must be addressed to identify better methods.

Literature Cited

  • Aczel B, Palfi B, Szaszi B. 2017. Estimating the evidential value of significant results in psychological science. PLOS ONE 12(8):e0182651
  • Albers C. 2019. The problem with unadjusted multiple and sequential statistical testing. Nat. Commun. 10:1921
  • Amrhein V, Greenland S, McShane B. 2019. Retire statistical significance. Nature 567:305–7
  • Armitage P, McPherson CK, Rowe BC. 1969. Repeated significance tests on accumulating data. J. R. Stat. Soc. A 132(2):235–44
  • Asendorpf JB, Conner M, De Fruyt F, De Houwer J, Denissen JJA, et al. 2013. Recommendations for increasing replicability in psychology. Eur. J. Pers. 27(2):108–19
  • Baker DH, Vilidaite G, Lygo FA, Smith AK, Flack TR, et al. 2021. Power contours: optimising sample size and precision in experimental psychology and human neuroscience. Psychol. Methods 26(3):295–314
  • Baker M. 2016. Is there a reproducibility crisis? Nature 533:452–54
  • Baker SG, Heidenberger K. 1989. Choosing sample sizes to maximize expected health benefits subject to a constraint on total trial costs. Med. Decis. Mak. 9(1):14–25
  • Bakker M, Van Dijk A, Wicherts JM. 2012. The rules of the game called psychological science. Perspect. Psychol. Sci. 7(6):543–54
  • Barrett LF. 2020. Forward into the past. Observer 33(3):5–7
  • Baumeister RF. 2016. Charting the future of social psychology on stormy seas: winners, losers, and recommendations. J. Exp. Soc. Psychol. 66:153–58
  • Begley CG, Ellis LM. 2012. Drug development: raise standards for preclinical cancer research. Nature 483(7391):531–33
  • Begley CG, Ioannidis JPA. 2015. Reproducibility in science: improving the standard for basic and preclinical research. Circ. Res. 116(1):116–26
  • Benjamin DJ, Berger JO, Johannesson M, Nosek BA, Wagenmakers EJ, et al. 2018. Redefine statistical significance. Nat. Hum. Behav. 2:6–10
  • Bero L. 2018. Meta-research matters: meta-spin cycles, the blindness of bias, and rebuilding trust. PLOS Biol. 16(4):e2005972
  • Berry DA, Ho CH. 1988. One-sided sequential stopping boundaries for clinical trials: a decision-theoretic approach. Biometrics 44(1):219–27
  • Białek M. 2018. Replications can cause distorted belief in scientific progress. Behav. Brain Sci. 41:e122
  • Brown AN, Wood BDK. 2018. Replication studies of development impact evaluations. J. Dev. Stud. 55(5):917–25
  • Bueno de Mesquita B, Gleditsch NP, James P, King G, Metelits C, et al. 2003. Symposium on replication in international studies research. Int. Stud. Perspect. 4(1):72–107
  • Button KS, Ioannidis JPA, Mokrysz C, Nosek BA, Flint J, et al. 2013. Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 14(5):365–76
  • Button KS, Munafò MR. 2017. Powering reproducible research. In Psychological Science Under Scrutiny: Recent Challenges and Proposed Remedies, ed. SO Lilienfeld, ID Waldman, pp. 22–33. New York: Wiley
  • Carey B. 2015. Many psychology findings not as strong as claimed, study says. New York Times, Aug. 27
  • Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, et al. 2014. How to increase value and reduce waste when research priorities are set. Lancet 383(9912):156–65
  • Chalmers I, Glasziou P. 2009. Avoidable waste in the production and reporting of research evidence. Lancet 374(9683):86–89
  • Chambers CD. 2020. Frontloading selectivity: a third way in scientific publishing? PLOS Biol. 18(3):e3000693
  • Clark-Carter D. 1997. The account taken of statistical power in research published in the British Journal of Psychology. Br. J. Psychol. 88:71–83
  • Cohen J. 1962. The statistical power of abnormal-social psychological research: a review. J. Abnorm. Soc. Psychol. 65:145–53
  • Cohen J. 1988. Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum. 2nd ed.
  • Coles NA, Tiokhin L, Scheel AM, Isager PM, Lakens D. 2018. The costs and benefits of replication studies. Behav. Brain Sci. 41:e124
  • Colhoun HM, McKeigue PM, Smith GD. 2003. Problems of reporting genetic associations with complex outcomes. Lancet 361(9360):865–72
  • Colquhoun D. 2014. An investigation of the false discovery rate and the misinterpretation of p-values. R. Soc. Open Sci. 1(3):140216
  • Cumming G. 2014. The new statistics: why and how. Psychol. Sci. 25(1):7–29
  • Detsky AS. 1985. Using economic analysis to determine the resource consequences of choices made in planning clinical trials. J. Chronic Dis. 38(9):753–65
  • Dreber A, Pfeiffer T, Almenberg J, Isaksson S, Wilson B, et al. 2015. Using prediction markets to estimate the reproducibility of scientific research. PNAS 112(50):15343–47
  • Dunbar KN, Fugelsang JA. 2005. Causal thinking in science: how scientists and students interpret the unexpected. In Scientific and Technological Thinking, ed. ME Gorman, RD Tweney, DC Gooding, AP Kincannon, pp. 57–79. Mahwah, NJ: Lawrence Erlbaum
  • Edwards MA, Roy S. 2017. Academic research in the 21st century: maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environ. Eng. Sci. 34(1):51–61
  • Etz A, Vandekerckhove J. 2016. A Bayesian perspective on the reproducibility project: psychology. PLOS ONE 11(2):e0149794
  • Fanelli D. 2012. Negative results are disappearing from most disciplines and countries. Scientometrics 90(3):891–904
  • Fanelli D, Costas R, Larivière V. 2015. Misconduct policies, academic culture and career stage, not gender or pressures to publish, affect scientific integrity. PLOS ONE 10(6):e0127556
  • Fiedler K, Kutzner F, Krueger JI. 2012. The long way from α-error control to validity proper: problems with a short-sighted false-positive debate. Perspect. Psychol. Sci. 7(6):661–69
  • Fiedler K, Schott M. 2017. False negatives. In Psychological Science Under Scrutiny: Recent Challenges and Proposed Remedies, ed. SO Lilienfeld, ID Waldman, pp. 53–72. New York: Wiley
  • Finkel EJ, Eastwick PW, Reis HT. 2015. Best research practices in psychology: illustrating epistemological and pragmatic considerations with the case of relationship science. J. Pers. Soc. Psychol. 108(2):275–97
  • Finkel EJ, Eastwick PW, Reis HT. 2017. Replicability and other features of a high-quality science: toward a balanced and empirical approach. J. Pers. Soc. Psychol. 113(2):244–53
  • Fisher RA. 1925. Statistical Methods for Research Workers. Edinburgh, UK: Oliver & Boyd
  • Francis G. 2013. We don't need replication, but we do need more data. Eur. J. Pers. 27(2):125–26
  • Freese J, Peterson D. 2017. Replication in social science. Annu. Rev. Sociol. 43:147–65
  • Gilbert DT, King G, Pettigrew S, Wilson TD. 2016. Comment on "Estimating the reproducibility of psychological science." Science 351(6277):1037
  • Gillett R. 1994. The average power criterion for sample size estimation. Statistician 43:389–94
  • Gross C. 2016. Scientific misconduct. Annu. Rev. Psychol. 67:693–711
  • Hamann S, Canli T. 2004. Individual differences in emotion processing. Curr. Opin. Neurobiol. 14(2):233–38
  • Hartman TK, Stocks TVA, McKay R, Gibson-Miller J, Levita L, et al. 2021. The authoritarian dynamic during the COVID-19 pandemic: effects on nationalism and anti-immigrant sentiment. Soc. Psychol. Pers. Sci. 12(7):1274–85
  • Hartshorne JK, Schachner A. 2012. Tracking replicability as a method of post-publication open evaluation. Front. Comput. Neurosci. 6:8
  • Head ML, Holman L, Lanfear R, Kahn AT, Jennions MD. 2015. The extent and consequences of p-hacking in science. PLOS Biol. 13(3):e1002106
  • Ioannidis JPA. 2005. Why most published research findings are false. PLOS Med. 2(8):e124
  • Ioannidis JPA. 2018. Meta-research: why research on research matters. PLOS Biol. 16(3):e2005468
  • John LK, Loewenstein G, Prelec D. 2012. Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychol. Sci. 23:524–32
  • Johnson VE. 2013. Revised standards for statistical evidence. PNAS 110(48):19313–17
  • Karandinos MG. 1976. Optimum sample size and comments on some published formulae. Bull. Entomol. Soc. Am. 22(4):417–21
  • Kuehberger A, Schulte-Mecklenbeck M. 2018. Selecting target papers for replication. Behav. Brain Sci. 41:e139
  • Kuhn TS. 1962. The Structure of Scientific Revolutions. Chicago: Univ. Chicago Press
  • Lakens D. 2014. Performing high-powered studies efficiently with sequential analyses. Eur. J. Soc. Psychol. 44(7):701–10
  • Lakens D, Adolfi FG, Albers CJ, Anvari F, Apps MAJ, et al. 2018. Justify your alpha: a response to "Redefine statistical significance." Nat. Hum. Behav. 2:168–71
  • Lakens D, Evers ERK. 2014. Sailing from the seas of chaos into the corridor of stability: practical recommendations to increase the informational value of studies. Perspect. Psychol. Sci. 9(3):278–92
  • LeBel EP, Berger D, Campbell L, Loving TJ. 2017a. Falsifiability is not optional. J. Pers. Soc. Psychol. 113(2):254–61
  • LeBel EP, Campbell L, Loving TJ. 2017b. Benefits of open and high-powered research outweigh costs. J. Pers. Soc. Psychol. 113(2):230–43
  • Leek JT, Peng RD. 2015. Statistics: P values are just the tip of the iceberg. Nature 520:612
  • Lenth RV. 2001. Some practical guidelines for effective sample size determination. Am. Stat. 55(3):187–93
  • Lewandowsky S, Oberauer K. 2020. Low replicability can support robust and efficient science. Nat. Commun. 11:358
  • Lilienfeld SO. 2017. Psychology's replication crisis and the grant culture: righting the ship. Perspect. Psychol. Sci. 12(4):660–64
  • Loftus GR. 1996. Psychology will be a much better science when we change the way we analyze data. Curr. Direct. Psychol. Sci. 5:161–71
  • Maxwell SE. 2004. The persistence of underpowered studies in psychological research: causes, consequences, and remedies. Psychol. Methods 9:147–63
  • McElreath R, Smaldino PE. 2015. Replication, communication, and the population dynamics of scientific discovery. PLOS ONE 10(8):e0136088
  • McGrath JE. 1981. Dilemmatics: the study of research choices and dilemmas. Am. Behav. Sci. 25(2):179–210
  • McShane BB, Böckenholt U. 2014. You cannot step into the same river twice: when power analyses are optimistic. Perspect. Psychol. Sci. 9(6):612–25
  • McShane BB, Gal D, Gelman A, Robert C, Tackett JL. 2019. Abandon statistical significance. Am. Stat. 73:235–45
  • Michaels R. 2017. Confidence in courts: a delicate balance. Science 357(6353):764
  • Miller JO, Ulrich R. 2016. Optimizing research payoff. Perspect. Psychol. Sci. 11(5):664–91
  • Miller JO, Ulrich R. 2019. The quest for an optimal alpha. PLOS ONE 14(1):e0208631
  • Miller JO, Ulrich R. 2021. A simple, general, and efficient method for sequential hypothesis testing: the independent segments procedure. Psychol. Methods 26(4):486–97
  • Miller MG. 1996. Optimal allocation of resources to clinical trials. PhD Thesis, Sloan Sch. Manag., Mass. Inst. Technol., Cambridge
  • Mosteller F, Weinstein M. 1985. Toward evaluating the cost-effectiveness of medical and social experiments. In Social Experimentation, ed. JA Hausman, DA Wise, pp. 221–50. Chicago: Univ. Chicago Press
  • Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, et al. 2015. Promoting an open research culture. Science 348(6242):1422–25
  • Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. 2018. The preregistration revolution. PNAS 115(11):2600–6
  • Nosek BA, Spies JR, Motyl M. 2012. Scientific utopia II: restructuring incentives and practices to promote truth over publishability. Perspect. Psychol. Sci. 7(6):615–31
  • Nuzzo R. 2014. Scientific method: statistical errors. Nature 506(7487):150–52
  • Olsson-Collentine A, Wicherts JM, van Assen MALM. 2020. Heterogeneity in direct replications in psychology and its association with effect size. Psychol. Bull. 146(10):922–40
  • Open Sci. Collab. 2015. Estimating the reproducibility of psychological science. Science 349(6251):aac4716
  • Pashler HE, Harris C. 2012. Is the replicability crisis overblown? Three arguments examined. Perspect. Psychol. Sci. 7(6):531–36
  • Pashler HE, Wagenmakers E. 2012. Editors' introduction to the special section on replicability in psychological science: a crisis of confidence? Perspect. Psychol. Sci. 7(6):528–30
  • Poldrack RA. 2019. The costs of reproducibility. Neuron 101(1):11–14
  • Popper KR. 2002 (1963). Conjectures and Refutations: The Growth of Scientific Knowledge. London: Taylor & Francis
  • Roberts RM. 1989. Serendipity: Accidental Discoveries in Science. New York: Wiley
  • Rosenthal R. 1979. The "file drawer problem" and tolerance for null results. Psychol. Bull. 86:638–41
  • Rossi JS. 1990. Statistical power of psychological research: What have we gained in 20 years? J. Consult. Clin. Psychol. 58(5):646–56
  • Saltelli A, Funtowicz S. 2017. What is science's crisis really about? Futures 91:5–11
  • Schimmack U. 2012. The ironic effect of significant results on the credibility of multiple-study articles. Psychol. Methods 17(4):551–66
  • Schimmack U. 2020. A meta-psychological perspective on the decade of replication failures in social psychology. Can. Psychol./Psychol. Can. 61(4):364–76
  • Schnuerch M, Erdfelder E. 2020. Controlling decision errors with minimal costs: the sequential probability ratio t test. Psychol. Methods 25(2):206–26
  • Schooler J. 2019. Metascience: the science of doing science. Observer 32(9):26–29
  • Schunn CD, Anderson JR. 1999. The generality/specificity of expertise in scientific reasoning. Cogn. Sci. 23(3):337–70
  • Sedlmeier P, Gigerenzer G. 1989. Do studies of statistical power have an effect on the power of studies? Psychol. Bull. 105(2):309–16
  • Sherman RA, Pashler H. 2019. Powerful moderator variables in behavioral science? Don't bet on them (version 3). PsyArXiv, May 24. https://doi.org/10.31234/osf.io/c65wm
  • Sibley CG, Greaves LM, Satherley N, Wilson MS, Overall NC, et al. 2020. Effects of the COVID-19 pandemic and nationwide lockdown on trust, attitudes towards government, and well-being. Am. Psychol. 75(5):618–30
  • Simmons JP, Nelson LD, Simonsohn U. 2011. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22(11):1359–66
  • Simon H. 1947. Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization. New York: Free Press. 2nd ed.
  • Simonsohn U, Nelson LD, Simmons JP. 2014. P-curve: a key to the file-drawer. J. Exp. Psychol. Gen. 143(2):534–47
  • Smaldino PE, McElreath R. 2016. The natural selection of bad science. R. Soc. Open Sci. 3(9):160384
  • Stanley TD, Carter EC, Doucouliagos H. 2018. What meta-analyses reveal about the replicability of psychological research. Psychol. Bull. 144(12):1325–46
  • Sterling TD. 1959. Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. J. Am. Stat. Assoc. 54(285):30–34
  • Sternberg RJ, Sternberg K. 2010. The Psychologist's Companion: A Guide to Writing Scientific Papers for Students and Researchers. New York: Cambridge Univ. Press
  • Stroebe W, Postmes T, Spears R. 2012. Scientific misconduct and the myth of self-correction in science. Perspect. Psychol. Sci. 7(6):670–88
  • Stroebe W, Strack F. 2014. The alleged crisis and the illusion of exact replication. Perspect. Psychol. Sci. 9(1):59–71
  • Strube MJ. 2006. SNOOP: a program for demonstrating the consequences of premature and repeated null hypothesis testing. Behav. Res. Methods 38(1):24–27
  • Ulrich R, Miller JO. 2018. Some properties of p-curves, with an application to gradual publication bias. Psychol. Methods 23(3):546–60
  • Ulrich R, Miller JO. 2020. Meta-research: questionable research practices may have little effect on replicability. eLife 9:e58237
  • Ulrich R, Miller JO, Erdfelder E. 2018. Effect size estimation from t-statistics in the presence of publication bias: a brief review of existing approaches with some extensions. Z. Psychol. 226(1):56–80
  • Van Bavel JJ, Mende-Siedlecki P, Brady WJ, Reinero DA. 2016. Contextual sensitivity in scientific reproducibility. PNAS 113(23):6454–59
  • Wagenmakers EJ, Wetzels R, Borsboom D, Van Der Maas HLJ. 2011. Why psychologists must change the way they analyze their data: the case of psi: comment on Bem (2011). J. Pers. Soc. Psychol. 100(3):426–32
  • Wald A. 1947. Sequential Analysis. New York: Wiley
  • Williams B, Myerson J, Hale S. 2008. Individual differences, intelligence, and behavior analysis. J. Exp. Anal. Behav. 90(2):219–31
  • Wilson BM, Wixted JT. 2018. The prior odds of testing a true effect in cognitive and social psychology. Adv. Methods Pract. Psychol. Sci. 1(2):186–97
  • Witt JK. 2019. Insights into criteria for statistical significance from signal detection analysis. Meta-Psychology 3. https://doi.org/10.15626/MP.2018.871
  • Yong E. 2012. Replication studies: bad copy. Nature 485:298–300

Outputs from Research

A research output is the product of research. It can take many different forms or types. See here for a full glossary of output types.

The table below sets out the generic criteria for assessing outputs and the definitions of the starred levels, as used during the REF2021 exercise.

Definitions

Four star: Quality that is world-leading in terms of originality, rigour and significance.
Three star: Quality that is internationally excellent in terms of originality, rigour and significance but which falls short of the highest standards of excellence.
Two star: Quality that is recognised internationally in terms of originality, rigour and significance.
One star: Quality that is recognised nationally in terms of originality, rigour and significance.
Unclassified: Quality that falls below the standard of nationally recognised work, or work which does not meet the published definition of research for the purposes of this assessment.

'World-leading', 'internationally' and 'nationally' in this context refer to quality standards. They do not refer to the nature or geographical scope of particular subjects, nor to the locus of research, nor its place of dissemination.

Definitions of Originality, Rigour and Significance

Originality will be understood as the extent to which the output makes an important and innovative contribution to understanding and knowledge in the field. Research outputs that demonstrate originality may do one or more of the following: produce and interpret new empirical findings or new material; engage with new and/or complex problems; develop innovative research methods, methodologies and analytical techniques; show imaginative and creative scope; provide new arguments and/or new forms of expression, formal innovations, interpretations and/or insights; collect and engage with novel types of data; and/or advance theory or the analysis of doctrine, policy or practice, and new forms of expression.
Rigour will be understood as the extent to which the work demonstrates intellectual coherence and integrity, and adopts robust and appropriate concepts, analyses, sources, theories and/or methodologies.
Significance will be understood as the extent to which the work has influenced, or has the capacity to influence, knowledge and scholarly thought, or the development and understanding of policy and/or practice.

Supplementary Output criteria – Understanding the thresholds:

The 'Panel criteria' explains in more detail how the sub-panels apply the assessment criteria and interpret the thresholds:

  • Main Panel A: Medicine, health and life sciences
  • Main Panel B: Physical sciences, engineering and mathematics
  • Main Panel C: Social sciences
  • Main Panel D: Arts and humanities

Definition of Research for the REF

1. For the purposes of the REF, research is defined as a process of investigation leading to new insights, effectively shared.

2. It includes work of direct relevance to the needs of commerce, industry, culture, society, and to the public and voluntary sectors; scholarship; the invention and generation of ideas, images, performances, and artefacts including design, where these lead to new or substantially improved insights; and the use of existing knowledge in experimental development to produce new or substantially improved materials, devices, products and processes, including design and construction. It excludes routine testing and routine analysis of materials, components and processes, such as for the maintenance of national standards, as distinct from the development of new analytical techniques.

It also excludes the development of teaching materials that do not embody original research.

3. It includes research that is published, disseminated or made publicly available in the form of assessable research outputs, and confidential reports.

Output FAQs

Q. What is a research output?

A research output is the product of research. An underpinning principle of the REF is that all forms of research output will be assessed on a fair and equal basis. Sub-panels will not regard any particular form of output as of greater or lesser quality than another per se. You can access the full list of eligible output types here.

Q.  When is the next Research Excellence Framework?

The next exercise will be REF 2029, with results published in 2029.  It is therefore likely that we will make our submission towards the end of 2028, but the actual timetable hasn't been confirmed yet.

A sector-wide consultation is currently under way to help refine the detail of the next exercise. You can learn more about the emerging REF 2029 here.

Q.  Why am I being contacted now, if we don't know the final details for a future assessment?

Although we don't know all of the detail, we know that some of the core components of the previous exercise will be retained.  This will include the assessment of research outputs. 

To make the internal process more manageable and avoid a rush at the end of the REF cycle, we will conduct an output review annually, in some shape or form, to spread the workload.

Furthermore, regardless of any external assessment frameworks, it is also important for us to understand the quality of research being produced at Edinburgh Napier University and to introduce support mechanisms that will enhance the quality of the research conducted.  This is of benefit to the University and to you and your career development.

Q. I haven't produced any REF-eligible outputs as yet, what should I do?

We recognise that not everyone contacted this year will have produced a REF-eligible output so early on in a new REF cycle.  If this is the case, you can respond with a nil return and you may be contacted again in a future annual review.

If you need additional support to help you deliver on your research objectives, please contact your line manager and/or Head of Research to discuss.

Q.  I was contacted last year to identify an output, but I have not received a notification for the 2024 annual cycle, why not?

Due to administrative capacity in RIE and the lack of detail on the REF 2029 rules relating to staff and outputs, we are restricting this year's scoring activity to a manageable volume based on a set of pre-defined, targeted criteria.

An output review process will be repeated annually.  If an output is not reviewed in the current year, we anticipate that it will be included in a future review process if it remains in your top selection.

Once we know more about the shape of future REF, we will adapt the annual process to meet the new eligibility criteria and aim to increase the volume of outputs being reviewed.

Q. I am unfamiliar with the REF criteria, and I do not feel well-enough equipped to provide a score or qualitative statement for my output/s, what should I do?

The output self-scoring field is optional. We appreciate that some staff may not be familiar with the criteria and may therefore be unable to provide a reliable score.

The REF team has been working with Schools to develop a programme of REF awareness and output quality enhancement which aims to promote understanding of REF criteria and enable staff to score their work in future.  We aim to deliver quality enhancement training in all Schools by the end of the 2023-24 academic cycle.

Please look out for further communications on this.

For those staff who do wish to provide a score and commentary, please refer specifically to the REF main panel output criteria:

  • Main Panel A: Medicine, health and life sciences
  • Main Panel B: Physical sciences, engineering and mathematics
  • Main Panel C: Social sciences
  • Main Panel D: Arts and humanities

Q. Can I refer to Journal impact factors or other metrics as a basis of Output quality?

An underpinning principle of the REF is that journal impact factors, hierarchies of journals, and other journal-based metrics (including ABS ratings, journal rankings and total citations) must not be used in the assessment of outputs. No output is privileged or disadvantaged on the basis of the publisher, where it is published, or the medium of its publication.

An output should be assessed on its content and contribution to advancing knowledge in its own right and in the context of the REF quality threshold criteria, irrespective of the ranking of the journal or publication outlet in which it appears.

If you are completing the optional self-score and commentary field, you should refer only to the REF output quality criteria (please see the definitions above) and should not refer to any journal ranking sources.

Q. What is Open Access Policy and how does it affect my outputs?

Under current rules, to be eligible for future research assessment exercises, higher education institutions (HEIs) are required to implement processes and procedures to comply with the REF Open Access policy. 

It is a requirement for all journal articles and conference proceedings with an International Standard Serial Number (ISSN), accepted for publication after 1 April 2016, to be made open access.  This can be achieved by either publishing the output in an open access journal outlet or by depositing an author accepted manuscript version in the University's repository within three months of the acceptance date.

Although the current Open Access policy applies only to journal and conference proceedings with an ISSN, Edinburgh Napier University expects staff to deposit all forms of research output in the University research management system, subject to any publishers' restrictions.

You can read the University's Open Access Policy here.
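For teams that track acceptance dates in a spreadsheet or script, the three-month deposit window is easy to check programmatically. The following is a minimal Python sketch, assuming a simple in-house record layout; the field names are illustrative, not Worktribe's actual schema.

```python
"""Illustrative only: flag accepted manuscripts that have passed the
three-month deposit window in the REF Open Access policy. The record
fields ('title', 'accepted', 'deposited') are hypothetical."""
import calendar
from datetime import date

def deposit_deadline(accepted: date) -> date:
    """Three calendar months after the acceptance date."""
    month = accepted.month + 3
    year = accepted.year + (month - 1) // 12
    month = (month - 1) % 12 + 1
    # Clamp the day for short target months (e.g. 31 Jan -> 30 Apr).
    day = min(accepted.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

records = [  # toy data for the example
    {"title": "Example article A", "accepted": date(2024, 1, 31), "deposited": None},
    {"title": "Example article B", "accepted": date(2024, 5, 2), "deposited": date(2024, 5, 20)},
]

for rec in records:
    if rec["deposited"] is None and date.today() > deposit_deadline(rec["accepted"]):
        print(f"Overdue for repository deposit: {rec['title']}")
```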

Q. My Output is likely to form part of a portfolio of work (multi-component output), how do I collate and present this type of output for assessment?

The REF team will be working with relevant School research leadership teams to develop platforms to present multicomponent / portfolio submissions.  In the meantime, please use the commentary section to describe how your output could form part of a multicomponent submission and provide any useful contextual information about the research question your work is addressing.

Q. How will the information I provide about my outputs be used and for what purpose?

In the 2024 output cycle, at least one output identified by each contacted author will be reviewed by a panel of internal and external subject experts.

The information provided will be used to enable us to report on research quality measures as identified in the University R&I strategy.

Output quality data will be recorded centrally in the University's REF module in Worktribe. Access to this data is restricted to a core team of REF staff based in the Research, Innovation and Enterprise Office and key senior leaders in the Schools.

The data will be used only for monitoring REF-related preparations.

Q. Who else will be involved in reviewing my output/s?

Outputs will be reviewed by an expert panel of internal and external independent reviewers.

Q. Will I receive feedback on my Output/s?

The REF team encourages open and transparent communication relating to output review and feedback.  We will be working with senior research leaders within the School to promote this.

Q. I have identified more than one output, will all of my identified outputs be reviewed this year?

In the 2024 cycle, we are committed to reviewing at least one output from each contacted author via an internal, external and moderation review process.

Once we know more about the shape of a future REF, we will adapt the annual process to meet the new eligibility criteria.

Research Impact: Outputs and Activities

What are Scholarly Outputs and Activities?

Scholarly/research outputs and activities are the products and professional contributions that scholars and investigators create or carry out in the course of their academic and/or research efforts.

One common output is in the form of scholarly publications which are defined by Washington University as:

". . . articles, abstracts, presentations at professional meetings and grant applications, [that] provide the main vehicle to disseminate findings, thoughts, and analysis to the scientific, academic, and lay communities. For academic activities to contribute to the advancement of knowledge, they must be published in sufficient detail and accuracy to enable others to understand and elaborate the results. For the authors of such work, successful publication improves opportunities for academic funding and promotion while enhancing scientific and scholarly achievement and repute."

Examples of activities include editorial board memberships, leadership in professional societies, meeting organization, consultative efforts, contributions to successful grant applications, invited talks and presentations, administrative roles, and contribution of service to a clinical laboratory program, to name a few. For more examples of activities, see the Washington University School of Medicine Appointments & Promotions Guidelines and Requirements or the "Examples of Outputs and Activities" box below. Also of interest is Table 1 in "Research impact: We need negative metrics too".

Tracking your research outputs and activities is key to documenting the impact of your research. One starting point for telling a story about your research impact is your publications. Advances in digital technology afford numerous avenues for scholars not only to disseminate research findings but also to document the diffusion of their research. The capacity to measure and report tangible outcomes can be used for a variety of purposes and tailored for audiences ranging from laypersons and physicians to investigators, organizations, and funding agencies. Publication data can be used to craft a compelling narrative about your impact. See Quantifying the Impact of My Publications for examples of how to tell a story using publication data.
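As one concrete way to gather such publication data, the sketch below pulls citation counts from the public Crossref REST API. The API endpoint is real; the DOI list is a placeholder, and this is an illustration of the general approach rather than a tool recommended by this guide.

```python
"""Sketch: fetch Crossref's citation count ('is-referenced-by-count')
for each DOI in a publication list. Requires network access; the DOI
below is a placeholder, not a real publication."""
import json
import urllib.request

def citation_count(doi: str) -> int:
    """Look up one DOI on api.crossref.org and return its citation count."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    return record["message"]["is-referenced-by-count"]

if __name__ == "__main__":
    my_dois = ["10.1000/example-doi"]  # replace with your own DOIs
    for doi in my_dois:
        print(doi, citation_count(doi))
```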

Another tip is to utilize various means of disseminating your research. See Strategies for Enhancing Research Impact for more information.

Research Outputs

Scholars circulate and share research in a variety of ways and in numerous genres. Below you'll find a few common examples. Keep in mind there are many other ways to circulate knowledge: factsheets, software, code, government publications, clinical guidelines, and exhibitions, just to name a few.

Outputs Defined

Original Research Article

An article published in an academic journal can go by several names: original research, an article, a scholarly article, or a peer reviewed article. This format is an important output for many fields and disciplines. Original research articles are written by one or a number of authors who typically advance a new argument or idea to their field.

Conference Presentations or Proceedings

Conferences are organized events, usually centered on one field or topic, where researchers gather to present and discuss their work. Typically, presenters submit abstracts, or short summaries of their work, before a conference, and a group of organizers select a number of researchers who will present. Conference presentations are frequently transcribed and published in written form after they are given.

Edited Books and Book Chapters

Books are often composed of a collection of chapters, each written by a unique author. Usually, these kinds of books are organized by theme, with each author's chapter presenting a unique argument or perspective. Books with uniquely authored chapters are often curated and organized by one or more editors, who may contribute a chapter or foreword themselves.

Datasets

Often, when researchers perform their work, they will produce or work with large amounts of data, which they compile into datasets. Datasets can contain information about a wide variety of topics, from genetic code to demographic information. These datasets can then be published either independently or as an accompaniment to another scholarly output, such as an article. Many scientific grants and journals now require researchers to publish datasets.

Artwork

For some scholars, artwork is a primary research output. Scholars' artwork can come in diverse forms and media, such as paintings, sculptures, musical performances, choreography, or literary works like poems.

Reports

Reports can come in many forms and may serve many functions. They can be authored by one or a number of people, and are frequently commissioned by government or private agencies. Some examples of reports are market reports, which analyze and predict a sector of an economy; technical reports, which can explain to researchers or clients how to complete a complex task; or white papers, which can inform or persuade an audience about a wide range of complex issues.

Digital Scholarship

Digital scholarship is a research output that significantly incorporates or relies on digital methodologies, authoring, and presentation. Digital scholarship often complements and adds to more traditional research outputs, and may be presented in a multimedia format. Some examples include mapping projects; multimodal projects that may be composed of text, visual, and audio elements; or digital, interactive archives.

Books

Researchers from every field and discipline produce books as a research output. Because of this, books can vary widely in content, length, form, and style, but often provide a broad overview of a topic compared to research outputs that are more limited in length, such as articles or conference proceedings. Books may be written by one or many authors, and researchers may contribute to a book in a number of ways: they could author an entire book, write a foreword, or collect and organize existing works in an anthology, among others.

Interviews

Scholars may be called upon by media outlets to share their knowledge about the topic they study. Interviews can provide an opportunity for researchers to teach a more general audience about the work that they perform.

Article in a Newspaper or Magazine

While a significant amount of researchers' work is intended for a scholarly audience, occasionally researchers will publish in popular newspapers or magazines. Articles in these popular genres can be intended to inform a general audience of an issue in which the researcher is an expert, or they may be intended to persuade an audience about an issue.

Blogs

In addition to other scholarly outputs, many researchers also compose blogs about the work they do. Unlike books or articles, blogs are often shorter, more general, and more conversational, which makes them accessible to a wider audience. Blogs, again unlike other formats, can be published almost in real time, which can allow scholars to share current developments of their work.

Types of research output profiles: A multilevel latent class analysis of the Austrian Science Fund’s final project report data

Rüdiger Mutz, Lutz Bornmann, Hans-Dieter Daniel, Types of research output profiles: A multilevel latent class analysis of the Austrian Science Fund's final project report data, Research Evaluation, Volume 22, Issue 2, June 2013, Pages 118–133, https://doi.org/10.1093/reseval/rvs038

Starting out from a broad concept of research output, this article looks at the question as to what research outputs can typically be expected from certain disciplines. Based on a secondary analysis of data from final project reports (ex post research evaluation) at the Austrian Science Fund (FWF), Austria's central funding organization for basic research, the goals are (1) to find, across all scientific disciplines, types of funded research projects with similar research output profiles; and (2) to classify the scientific disciplines in homogeneous segments bottom-up according to the frequency distribution of these research output profiles. The data comprised 1,742 completed, FWF-funded research projects across 22 scientific disciplines. The multilevel latent class (LC) analysis produced four LCs or types of research output profiles: 'Not Book', 'Book and Non-Reviewed Journal Article', 'Multiple Outputs', and 'Journal Article, Conference Contribution, and Career Development'. The class membership can be predicted by three covariates: project duration, requested grant sum, and project head's age. In addition, five segments of disciplines can be distinguished: 'Life Sciences and Medicine', 'Social Sciences/Arts and Humanities', 'Formal Sciences', 'Technical Sciences', and 'Physical Sciences'. In 'Social Sciences/Arts and Humanities' almost all projects are of the type 'Book and Non-Reviewed Journal Article', but, vice versa, not all projects of the 'Book and Non-Reviewed Journal Article' type are in the 'Social Sciences/Arts and Humanities' segment. The research projects differ not only qualitatively in their output profile; they also differ quantitatively, so that projects can be ranked according to amount of output.

Research funding organizations have shown increasing interest in ex post research evaluation of the funded projects (European Science Foundation 2011a). For instance, the Austrian Science Fund (FWF), Austria's central funding organization for the promotion of basic research and the subject of this article, has conducted ex post research evaluations for some years now (Dinges 2005). By collecting and analysing information on the 'progress, productivity, and quality' (European Science Foundation 2011b: 3) of funded projects, research funding organizations hope 'to be able to identify gaps and opportunities, avoid duplication, encourage collaboration, and strengthen the case for research' (European Science Foundation 2011b: 3). As stated succinctly in the title of a 2011 working document by the European Science Foundation (ESF), a central topic in this connection is 'The Capture and Analysis of Research Outputs' (European Science Foundation 2011a). This involves the issues of what research outputs are actually important for ex post research evaluation, how they can be classified (typology), and how the data can be analysed. The ESF document provides the following definition of outputs: 'Research outputs, as the products generated from research, include the means of evidencing, interpreting, and disseminating the findings of a research study' (European Science Foundation 2011a: 5).

But opinions differ on what research output categories should be included in ex post research evaluation. Without doubt, publication in a scientific journal is viewed in all scientific disciplines as the primary communication form (European Commission 2010). For assessing the merits of a publication, bibliometric analyses are favoured. In the humanities and social sciences, however, the use of classical bibliometric analysis (Glänzel 1996; Nederhof et al. 1989; Nederhof 2006; Van Leeuwen 2006) is viewed critically in the face of different forms of research outputs (e.g. monographs) and limitations of the databases (Cronin and La Barre 2004; Hicks 2004; Archambault et al. 2006). For these disciplines, other forms of quantitative evaluation are under discussion (Kousha and Thelwall 2009; White et al. 2009).

A number of authors have made a plea for extending classical bibliometric analysis and for broadening the concept of 'research output' generally (Bourke and Butler 1996; Lewison 2003; Butler 2008; Huang and Chang 2008; Linmans 2010; Sarli et al. 2010): 'A fair and just research evaluation should take into account the diversity of research output across disciplines and include all major forms of research publications' (Huang and Chang 2008: 2018). Huang and Chang (2008) empirically analysed the publication types of all publications in the year 1998–9 across all disciplines at the University of Hong Kong and found that only in medicine and physics did journal articles account for 90% and 99% of the total publications. The other disciplines produced output in the form of very different types of written communication, such as books, book chapters, and conference and working papers. Huang and Chang's (2008) comprehensive review of the literature on the characteristics of research output showed that especially in the humanities and social sciences, books, monographs, and book chapters are important forms of written communication.

The German Research Foundation (DFG), Germany's central funding organization for basic research, carried out a survey in the year 2004 on the publishing strategies of researchers with regard to open access (Deutsche Forschungsgemeinschaft 2005), and 1,083 DFG-funded researchers responded (response rate of 67.7%). When the researchers were asked to name their preferred form of traditional publication of their own work, they mentioned articles in scientific journals (on average about 20 articles in 5 years). Life scientists published the largest number of journal articles (23.6 articles in 5 years) and humanities scholars and social scientists the fewest (12.7 articles in 5 years). Papers in proceedings were published far more often by engineering scholars than by researchers in other disciplines. Social scientists and humanities scholars had a greater preference for publishing their work in edited volumes and monographs than researchers in other disciplines. However, big differences in the numbers reported (e.g. number of books, number of journal articles) were found within disciplines. This study and the Huang and Chang study made it clear that disciplines differ greatly in their preferred forms of written communication, and that there are also great differences within the natural sciences and humanities. The Expert Group on Assessment of University-Based Research set up by the European Commission came to similar conclusions (European Commission 2010: 26). In the opinion of the expert group, the peer-reviewed journal article is used as the primary form of written communication in all scientific disciplines. In addition, engineering scientists primarily publish in conference proceedings, whereas social scientists and humanists show a wide range of research outputs, with monographs and books as the most important forms of written communication.

The broadest concept of research output is used by the Research Council UK (RCUK) (see www.rcuk.ac.uk), the United Kingdom's (UK) central funding organization, and the Research Assessment Exercise (RAE) (www.rae.ac.uk), which in 2014 will be replaced by the new system, the Research Excellence Framework (REF) (www.ref.ac.uk). RAE and REF have the task of assessing the quality of research in higher education institutions in the UK. Whereas the RAE focuses on scientific impact, the performance measurement by the REF in addition includes societal impact, that is, any social, economic or cultural impact, or benefit beyond academia. As research output, the RAE and REF include different forms of research products (journal article, book, conference contribution, patent, software, Internet publication, and so on). The Research Outcome System (ROS) of RCUK distinguishes a total of nine categories of research outputs: publication, other research output, collaboration, communication, exploitation, recognition, staff development, further funding, and impact. The new REF is planned to extend the currently peer-supported RAE with a quantitative, indicator-based evaluation system that includes bibliometric and other quantitative methods. Butler and McAllister (2009, 2011) spoke generally of a metric as opposed to peer review that would capture more than the classical bibliometric analysis based on journal articles does. RAE and REF are based on a research production model (Bence and Oppenheim 2005) that differentiates between inputs (personnel, equipment, overheads), research generation processes, outputs (papers, articles, and so on), and utilization of research (scientific and societal impact). This kind of structuring into input, process, output, and outcome/impact is also found in other frameworks for research evaluation, such as the payback approach (Buxton and Haney 1998; European Commission 2010; Banzi et al. 2011) and other national and international evaluation systems (European Commission 2010).

Previous research on research outputs has had the following limitations:

As the databases for the empirical analysis, studies up to now used mainly literature databases (Glänzel 1996; Nederhof et al. 1989) and (survey) data from researchers (Deutsche Forschungsgemeinschaft 2005; Huang and Chang 2008). Therefore, the unit of analysis was people and not projects (European Science Foundation 2011). But the different research outputs and also inputs (e.g. human resources, funding) are tied to the research projects.

For the individual disciplines, the frequencies of certain research outputs were mostly presented as separate totals, without any closer examination of how different research outputs combine into a core profile. For example, some disciplines focus more on monographs and conference contributions and not so much on journal articles, whereas for other disciplines it is just the opposite. Beyond that, the variability of research output within a discipline, such as that found in a study conducted by the DFG (Deutsche Forschungsgemeinschaft 2005), was hardly considered.

The studies often did not describe research output comprehensively, as the RAE, REF, and RCUK do, but instead restricted themselves to a specific research output category, such as journal articles. This can lead to inadequate treatment of some disciplines; the technical sciences are at a disadvantage, for instance, if patents are not included. Moreover, mostly only selected disciplines were included in the analyses, such as the social sciences and humanities, so that comparative analysis across disciplines was not possible. Yet research projects in different disciplines can be very similar in their profiles of research output categories (abbreviated in the following as 'research output profiles').

The studies did not distinguish between quality and quantity of research outputs. For example, life sciences are similar to natural sciences in research output profiles, but life sciences have a higher volume of journal articles than the natural sciences do ( Deutsche Forschungsgemeinschaft 2005 ).

The goals of our study are:

Based on a secondary analysis of data from final project reports ( Glass 1976 ) at the FWF, Austria's central funding organization for basic research, we aimed (1) to find, across all scientific disciplines, types of funded research projects with similar research output profiles; and (2) to classify the scientific disciplines bottom-up into homogeneous segments (e.g. humanities, natural sciences, engineering sciences) according to the frequency distribution of these research output profiles. We established the types of funded research projects using multilevel latent class analysis (MLLCA) ( Vermunt 2003 ; Henry and Muthén 2010 ; Mutz and Seeling 2010 ; Mutz and Daniel 2012 ).

The research questions are:

Are there any types of FWF-funded projects that have different core profiles of research outputs?

Do types of research output profiles vary across scientific disciplines? Can disciplines be clustered into segments according to the different proportions of certain types of research output profiles?

How does the probability of being in a particular type of research output profile depend on a set of project-related covariates (e.g. requested grant sum)?

Is there any additional variability within types of research output profiles that allows for a quantitative ranking of projects according to higher or lower research productivity?

The FWF is Austria's central funding organization for the promotion of basic research and is equally committed to all scientific disciplines. The body responsible for funding decisions at the FWF is the board of trustees, made up of 26 elected reporters and 26 alternates ( Bornmann 2012 ; Fischer and Reckling 2010 ; Mutz, Bornmann and Daniel 2012a , 2012b ; Sturn and Novak 2012 ). For each grant application, the FWF obtains at least two international expert reviews (ex ante evaluation); the number of reviewers depends on the amount of funding requested. Each expert review consists, among other things, of an extensive written comment and a rating providing an overall numerical assessment of the application. At the FWF board's decision meetings, the reporters present the written reviews and ratings of each grant application. In the period from 1999 to 2009 the approval rate of proposals was 44.2%. Since 2003, all funded projects have been evaluated after completion ( Dinges 2005 ) (see www.fwf.ac.at/de/projects/evaluation-fwf.html ). The FWF surveys the funded researchers, asking them to report the outputs of their research projects using a category system akin to the research output system of the RCUK. Additionally, referees are requested to provide a brief review giving their opinions on aspects of the final project report and to assign a numerical rating to each aspect. The final reports are used for accountability purposes and to improve the quality of the FWF's decision procedures ( Dinges 2005 ).

The data for this study comprised 1,742 FWF-funded research projects, so-called 'Stand-Alone Projects', across all fields of science (22 scientific disciplines classified into six research areas) that finished within a period of 9 years (2002–10); Stand-Alone Projects accounted for 60% of all FWF grants ('Stand-Alone Projects', 'Special Research Programs', 'Awards and Prizes', 'Transnational Funding Activities'). The labelling of the scientific disciplines and research areas was adopted from the FWF ( Fischer and Reckling 2010 ). Each project head was requested to report the results of his or her research project by completing a form (the final project report) containing several sections (summary for public relations; brief project report; information on project participants; attachments; collaboration with the FWF).

Of the 1,742 completed FWF-funded research projects ( Table 1 ), most were in the natural sciences (31.6%), and the fewest were in the social sciences (6.0%) and technical sciences (4.5%). The finished projects (end of funding) were approved for funding in the period 1999–2008, one-third of them in 2003–4 alone. Because of still-ongoing research projects, projects approved for funding in 2007–8 make up only 3.9% of the total database of 1,742 FWF-funded research projects. The average duration of the research projects was 39 months. In 84.5% of the projects the project head was a man, and the average age of the project heads was 47.

Sample description ( N = 1,742 completed FWF-funded research projects)

Variable | N | Per cent | M | SD | Range
Research area | | | | |
    Biosciences | 399 | 22.9 | | |
    Humanities | 339 | 19.5 | | |
    Human medicine | 269 | 15.4 | | |
    Natural sciences | 551 | 31.6 | | |
    Social sciences | 105 | 6.0 | | |
    Technical sciences | 79 | 4.5 | | |
Time period of the approval decision | | | | |
    1999–2000 | 210 | 12.1 | | |
    2001–2 | 433 | 24.9 | | |
    2003–4 | 582 | 33.4 | | |
    2005–6 | 448 | 25.7 | | |
    2007–8 | 69 | 3.9 | | |
Time period of the project end | | | | |
    2002–4 | 281 | 16.1 | | |
    2005–6 | 531 | 30.5 | | |
    2007–8 | 558 | 32.0 | | |
    2009–10 | 372 | 21.4 | | |
Project duration [months] | 1,742 | 100.0 | 39.0 | 8.8 | 9→62
Overall rating of the proposal (ex ante evaluation) | 1,735 | 99.6 | 89.7 | 4.7 | 61.7→100
Requested grant sum [1,000 €] | 1,742 | 100.0 | 179.7 | 82.8 | 7.6→592.7
Project head's sex | | | | |
    Man (=0) | 1,472 | 84.5 | | |
    Woman (=1) | 270 | 15.5 | | |
Project head's age | 1,739 | 99.8 | 47.1 | 9.8 | 27→87

Note: N = frequency, Per cent = column per cent, M = mean, SD = standard deviation, Range = minimum → maximum.

The following six research output categories were captured as count data and served as the basis for the analysis: publication (peer-reviewed journal article; non-peer-reviewed journal article; monograph; anthology; mass communication, i.e. any kind of publication in mass media, e.g. a newspaper article), conference contribution (invited paper, paper, poster), award, patent, career development (diploma/degree, PhD dissertation, habilitation thesis), and follow-up project (FWF-funded or not). No distinction was made between sub-categories of these research output categories; for example, hybrid, open access, and standard peer-reviewed journal articles, or ongoing and completed PhD dissertations, were summarized under the respective category. To avoid problems with different publication lags, the FWF treated manuscripts already published and manuscripts accepted for publication equally. The ex post evaluation approach of the FWF also does not distinguish between project publications written in English and those written in any other language.

Because of strongly skewed distributions, the count variables were transformed into 2-point to 5-point ordinal-scale variables with classes of roughly equal size, to avoid sparse classes or cells in the multivariate statistical analysis. To draw up a typology, binary variables might actually be sufficient, coding whether a particular research output category (e.g. monograph) existed for a research project (= 1) or not (= 0). However, because we wanted to differentiate a qualitative dimension (types) from a quantitative dimension (amount of output), we chose ordinal scales with a small number of classes, which in addition allow a quantitative assessment.
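A minimal sketch of this kind of quantile-based binning in Python, assuming the raw counts are available as a pandas Series (the data values here are made up, not taken from the FWF data):

```python
import pandas as pd

# Hypothetical count data for one output category across projects.
counts = pd.Series([0, 0, 1, 2, 3, 5, 8, 12, 20, 75], name="reviewed_articles")

# Quantile-based binning into (at most) four roughly equal-sized ordinal
# classes; duplicate bin edges are dropped, so for heavily zero-inflated
# counts the effective number of classes may be smaller than requested.
ordinal = pd.qcut(counts, q=4, labels=False, duplicates="drop")
print(ordinal.tolist())
```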

The research output variables ( Table 2 ) show a large share of zeros. The most frequently produced types of publication were reviewed journal articles (an average of five per project) and conference papers (an average of nine per project), with large variance across the research projects. Monographs are used the least for publishing research results (0.2 monographs per project).

Data description ( N = 1,742 FWF-funded research projects)

Research output | Scale | 0 | 1 | 2 | 3 | 4 | M | SD | Max | R²
Journal article, reviewed | Number | 0 | 1–2 | 3–6 | >6 | | 5.1 | 6.9 | 75 | 0.61
 | Per cent | 23.7 | 22.8 | 26.4 | 27.1 | | | | |
Journal article, non-reviewed | Number | 0 | 1 | 2–4 | >4 | | 2.8 | 5.6 | 50 | —
 | Per cent | 50.0 | 14.0 | 18.0 | 18.0 | | | | |
Contribution to anthologies | Number | 0 | 1 | >1 | | | 0.8 | 2.3 | 32 | 0.15
 | Per cent | 75.4 | 10.2 | 14.4 | | | | | |
Monograph | Number | 0 | >0 | | | | 0.2 | 0.7 | 8 | 0.15
 | Per cent | 89.4 | 10.6 | | | | | | |
Mass communication | Number | 0 | 1 | >1 | | | 1.0 | 2.9 | 38 | 0.16
 | Per cent | 68.5 | 13.5 | 17.9 | | | | | |
Award | Number | 0 | 1 | >1 | | | 0.5 | 1.2 | 13 | 0.28
 | Per cent | 74.0 | 13.5 | 12.5 | | | | | |
Other output (patent, impact) | Number | 0 | 1 | >1 | | | 0.6 | 1.4 | 26 | 0.19
 | Per cent | 71.0 | 14.9 | 14.1 | | | | | |
Conference paper | Number | 0 | 1–2 | 3–5 | 6–11 | >11 | 9.1 | 11.1 | 101 | 0.59
 | Per cent | 12.7 | 14.9 | 21.8 | 24.8 | 25.8 | | | |
Other conference contribution | Number | 0 | 1–2 | 3–6 | >6 | | 4.7 | 7.5 | 98 | 0.51
 | Per cent | 31.6 | 20.3 | 23.9 | 24.2 | | | | |
Habilitation thesis | Number | 0 | 1 | >1 | | | 0.6 | 0.9 | 7 | 0.12
 | Per cent | 60.7 | 25.8 | 13.5 | | | | | |
PhD dissertation | Number | 0 | 1 | 2 | >2 | | 1.1 | 1.4 | 23 | 0.30
 | Per cent | 41.0 | 30.8 | 17.3 | 10.9 | | | | |
Diploma/degree | Number | 0 | 1 | 2 | >2 | | 1.3 | 2.1 | 22 | —
 | Per cent | 53.4 | 17.2 | 10.8 | 18.6 | | | | |
Follow-up project | Number | 0 | 1 | >1 | | | 0.7 | 1.1 | 15 | 0.19
 | Per cent | 61.6 | 23.1 | 15.3 | | | | | |

Note: Per cent = row per cent, M = mean of the raw data, SD = standard deviation of the raw data, Max = maximum; R² indicates how well an indicator is explained by the final LC model. No R² is reported for non-reviewed journal articles and diplomas/degrees, which were eliminated from the final model (see Section 4.2).

In a review of the literature, Gonzalez-Brambila and Veloso (2007) discuss age, sex, education, and cohort effects as empirically investigated determinants of research outputs. In our study, we included the following covariates to predict research profile type membership ( Table 1 ): time period of the approval decision, time period of the project end, project duration, overall rating of the proposal, requested grant sum, and the sex and age of the project head. This information was taken from the ex ante evaluation of the project proposals. In the ex ante evaluation, two to three reviewers rated each proposal on a scale from 1 to 100 (ascending from poor to excellent); the overall rating of a proposal, averaged across reviewers, had a mean of 89.7 (minimum: 61.7, maximum: 100).

4.2 Statistical procedure

Latent class analysis (LCA) in its basic form is a statistical procedure that extracts clusters of units (latent classes, LCs) that are homogeneous with respect to observed nominal- or ordinal-scale variables ( McCutcheon 1987 ). Similar to factor analysis, LCs are extracted in such a way that the correlations between the observed variables vanish completely within each LC (local stochastic independence). LCA is preferred over cluster analysis because it requires fewer pre-decisions than common cluster-analysis procedures (e.g. similarity measure, aggregation algorithm), efficient maximum-likelihood algorithms for parameter estimation are available, and a broad range of models (LCA, IRT models, multilevel models, and more) is offered ( Magidson and Vermunt 2004 ; Vermunt and Magidson 2005a ). In a more advanced version of LCA, MLLCA, the nested data structure is additionally taken into account. In our study, research projects are nested within scientific disciplines; LCs or project types might vary between disciplines. In MLLCA, not only are projects grouped according to their output profiles, but scientific disciplines are also segmented according to their different proportions of types of output profiles. In the technical framework of MLLCA, LCs represent the types of research output profile, and latent clusters (GClasses) represent the segments of disciplines. It is presumed that a project in a certain LC behaves the same way (same research output profile) irrespective of the latent cluster to which it belongs.
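In standard notation (not specific to this study), the basic LC model expresses the joint distribution of the J observed output indicators as a finite mixture over C classes, with local stochastic independence within each class:

$$P(y_1, \dots, y_J) = \sum_{c=1}^{C} P(c) \prod_{j=1}^{J} P(y_j \mid c)$$

The within-class independence assumption is exactly what the residual correlations discussed below can violate.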

In secondary analyses the problem frequently arises that the assumption of local stochastic independence does not fully hold. For instance, the career development categories diploma/degree and PhD dissertation are more strongly correlated with one another than with the other research output categories, so that an LCA cannot completely account for the association between the two. There are three ways to handle this problem ( Magidson and Vermunt 2004 ): first, one or more direct effects can be added that account for the residual correlations between the observed research output variables responsible for the violation of local stochastic independence; second, one or more variables responsible for high residual correlations can be eliminated; third, the number of latent variables (LCs, continuous latent variables) can be increased. In this study we used all three strategies. After a first model run, the residuals were inspected, and a few direct effects were included in the MLLCA model. Additionally, two variables that were responsible for high residual correlations were eliminated: non-peer-reviewed journal articles and diplomas/degrees. Finally, an MLLCA model was tested that incorporates a continuous latent variable, comparable to a factor in factor analysis. With this C-factor, not only can residual correlations among the output variables be explained, but additional quantitative differences between research projects (amount of research output) can also be assessed and used to rank projects. If, moreover, a model with the same structure in all LCs (i.e. the same loadings of the research output variables on the factor) fits the data as well as or better than a model with different loadings in each LC, all research projects can be compared or ranked on the same scale of the latent variable.

For the statistical analysis of the data we used MLLCA as implemented in the software program Latent GOLD 4.5 ( Vermunt and Magidson 2005b ). Following Bijmolt, Paas, and Vermunt (2004) , Lukočienė, Varriale, and Vermunt (2010) , and Rindskopf (2006) , in a first step we calculated a simple LCA of the research outputs to obtain types of research projects with similar research output profiles. To determine the optimal number of classes (project types, segments of disciplines), information criteria were used, such as the Bayesian information criterion (BIC) and the Akaike information criterion (AIC). The lower the BIC or AIC, the better the model fits; these criteria penalize models for complexity (number of parameters), making direct comparisons among models with different numbers of parameters possible. A simulation study for MLLCA models showed that in all simulation conditions the more advanced criteria AIC3 ( Bozdogan 1993 ) and BIC(k) outperformed the usual BIC in identifying the true number of higher-level LCs (Lukočienė, Varriale and Vermunt 2010 ). Unlike the BIC, BIC(k) uses the number of groups k, here the number of disciplines, as the sample size in the penalty term: BIC(k) = −2·LL + df·ln(k) and AIC3 = −2·LL + 3·df, where df denotes the number of estimated parameters and LL the loglikelihood. In the second step, we took the hierarchical structure of the data into account, calculating an MLLCA to obtain latent clusters, or segments, of scientific disciplines. In a third step one would fix the number of latent clusters from the second step and again determine the number of LCs. However, the same simulation study showed that this third step yields only a very small improvement of about 1%, so we abstained from applying it.
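A minimal sketch of the two criteria with the sign convention written out (so that lower values indicate better fit); the example values are those reported for model M5 in Table 4:

```python
from math import log

def bic_k(ll: float, df: int, k: int) -> float:
    """BIC(k): Bayesian information criterion using the number of
    higher-level groups k (here, 22 disciplines) as the sample size."""
    return -2.0 * ll + df * log(k)

def aic3(ll: float, df: int) -> float:
    """AIC3: Akaike information criterion with penalty factor 3."""
    return -2.0 * ll + 3.0 * df

# Values reported for model M5 (5 GClasses) in Table 4:
print(round(bic_k(-17139.7, 122, 22), 1))  # ~34,656 (Table 4: 34,656.4; small
print(round(aic3(-17139.7, 122), 1))       # ~34,645 differences reflect LL rounding)
```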

In the last step we included covariates in the model to explain LC membership ( Vermunt 2010 ). However, this one-step procedure has the disadvantage that including the covariates could change the model and its parameters. Therefore, a three-step procedure has been suggested: first, estimate an LC model; second, assign the subjects to the LCs according to their highest posterior class-membership probability; third, regress the LCs on a set of covariates using a multinomial regression model. However, this procedure does not take the uncertainty of class membership into account, and Bolck, Croon, and Hagenaars (2004) showed that such a modelling strategy underestimates the true relationships between LCs and covariates. Recently, Vermunt (2010) developed a procedure that accounts for the uncertainty of class membership by including the classification table that cross-tabulates modal and probabilistic class assignment ( Vermunt and Magidson 2005b ) as a weighting matrix in the multinomial regression model. We followed this improved three-step approach. The covariates mentioned above were included to predict class membership ( Table 1 ).
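The logic of the three steps can be sketched as follows; the posterior matrix is hypothetical, and the bias correction itself (reweighting step three by the classification table) is handled internally by Latent GOLD:

```python
import numpy as np

# Hypothetical posterior class-membership probabilities from step 1:
# rows = projects, columns = latent classes (each row sums to 1).
posteriors = np.array([
    [0.80, 0.10, 0.05, 0.05],
    [0.10, 0.70, 0.15, 0.05],
    [0.25, 0.25, 0.40, 0.10],
])

# Step 2: modal assignment, i.e. the class with the highest posterior.
modal = posteriors.argmax(axis=1)

# Classification table: entry [s, t] estimates P(modal class = s | class = t);
# its off-diagonal mass is the classification error that the corrected
# step 3 uses as a weighting matrix.
n_classes = posteriors.shape[1]
class_table = np.zeros((n_classes, n_classes))
for i, s in enumerate(modal):
    class_table[s] += posteriors[i]
class_table /= posteriors.sum(axis=0)

# Step 3 (uncorrected) would now regress `modal` on the covariates with a
# multinomial model; Vermunt's (2010) correction uses `class_table` to undo
# the attenuation identified by Bolck, Croon, and Hagenaars (2004).
print(class_table)
```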

5.1 Latent structure of research output profiles

In the first step, the nested data structure (projects nested within scientific disciplines) was ignored, and simple LC models were explored. Table 3 shows the results of fitting models containing one to 11 LCs, each with and without a continuous latent C-factor. For model comparison we used the AIC3. Of all 22 models, Model 15, with four LCs, 107 parameters, and one C-factor, showed the smallest AIC3; we therefore decided on this model. With regard to our research questions, there were thus four types of projects with different research output profiles (a qualitative dimension). Additionally, the projects differed in their productivity, i.e. the amount of output, represented by the continuous latent C-factor (a quantitative dimension).

Fit statistics for exploratory LC models (project types)

Note: MNR = model number, NCL = number of latent classes, LL = loglikelihood, NPAR = number of parameters, AIC3 = Akaike information criterion 3. The final model is shaded grey.

Figure 1 shows the four LCs or project types with different research output profiles. The 2-point to 5-point ordinal scales were rescaled so that the numerical values vary within the range 0–1.0 ( Vermunt and Magidson 2005b : 117). This scaling was obtained by subtracting the lowest observed value from the class-specific mean and dividing the result by the range, i.e. the difference between the highest and lowest values. The advantage of this scaling is that all variables can be depicted on the same scale as the class-specific probabilities for nominal variables. Note that the LC results depicted in Fig. 1 are the results of the final MLLCA model (introduced in Section 5.2 ) and not of the non-nested LC model in Table 3 ; this does not matter, however, because the LC models with and without nesting do not differ.
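In symbols, for each indicator with class-specific mean ȳ_c the rescaled value is

$$\tilde{y}_c = \frac{\bar{y}_c - y_{\min}}{y_{\max} - y_{\min}},$$

which lies in [0, 1] regardless of whether the underlying ordinal scale has two or five categories.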

Figure 1. LCs of research output profiles (* = not used in the MLLCA).

The four LCs or project types with different research output profiles can be described as follows (class sizes in per cent of the total number of projects in parentheses):

Latent Class 1 ‘ Not Book ’ (37.0%): The research output profile of this research project type is quite similar to the average profile across all projects but with fewer non-reviewed journal articles, anthologies, and monographs than the average.

Latent Class 2 ' Book and Non-Reviewed Journal Article ' (35.8%): This project type uses anthologies and monographs, but also non-reviewed journal articles and mass communication, as its primary forms of written communication. Career development outputs (diploma/degree, PhD dissertation, habilitation thesis), reviewed journal articles, and follow-up projects score well below average.

Latent Class 3 ' Multiple Outputs ' (17.9%): This project type generates research outputs in multiple ways, with above-average numbers of peer-reviewed journal articles, non-reviewed journal articles, anthologies, monographs, conference papers, habilitation theses, PhD dissertations, diplomas/degrees, and follow-up projects, but fewer other conference contributions.

Latent Class 4 ' Journal Article, Conference Contribution, and Career Development ' (9.3%): This most productive project type focuses strongly on peer-reviewed journal articles, with many published papers in combination with conference contributions (papers or other contributions), career development (diploma/degree, PhD dissertation, habilitation thesis), and follow-up projects, but it makes less use of monographs as a form of written communication.

Of all the output variables, peer-reviewed journal articles and conference contributions discriminate best between the LCs, with discrimination indices of about 0.60 ( Table 2 , last column, R²).

5.2 Multilevel latent structure of research output profiles

In a multilevel latent structure model it is presumed that the unconditional probabilities (the probabilities of belonging to each LC) vary among the 22 scientific disciplines. In an MLLCA the 22 disciplines are grouped into latent clusters or segments according to their different proportions of the types of research output profiles obtained in Section 5.1 .

Table 4 shows the results of fitting models containing one to eight latent clusters (M1–M8), each with four LCs and one continuous latent C-factor. With respect to BIC(k) and AIC3, a 5-GClass model is favoured; i.e. there are five segments of scientific disciplines with different proportions of the project types or LCs. Additionally, using the option of a 'cluster-independent C-factor', we tested (M9) whether the same loading structure holds in all four LCs. The BIC(k) and AIC3 improved slightly from model M5 to the more restricted model M9, which has 122 − 89 = 33 fewer parameters than M5. The assumption of a cluster-independent C-factor therefore held, which makes it possible to compare and rank all projects on the same scale. Including direct effects, such as the association between habilitation thesis and PhD dissertation, further improved the model. Only one residual (res = 3.88) was somewhat larger than the criterion of 3.84 ( Magidson and Vermunt 2004 ). To satisfy the basic model assumption of local stochastic independence, we chose model M10 as the final model.

Fit statistics of models for variation among scientific disciplines (GClass) with four LCs and one C-factor

MNR | Models of disciplines | LL | NPAR | BIC(k) | AIC3
1 | 1 GClass | −17,789.4 | 106 | 35,906.4 | 35,896.8
2 | 2 GClass | −17,328.9 | 110 | 34,997.8 | 34,987.8
3 | 3 GClass | −17,211.1 | 114 | 34,774.6 | 34,764.2
4 | 4 GClass | −17,155.6 | 118 | 34,676.0 | 34,665.3
5 | 5 GClass | −17,139.7 | 122 | 34,656.4 | 34,645.3
6 | 6 GClass | −17,134.9 | 126 | 34,659.4 | 34,647.9
7 | 7 GClass | −17,133.4 | 130 | 34,668.5 | 34,656.7
8 | 8 GClass | −17,130.5 | 134 | 34,675.1 | 34,662.9
9 | 5 GClass, cluster-independent C-factor | −17,188.1 | 89 | 34,651.2 | 34,643.1
10 | Model 9 plus four additional direct effects (follow-up project–PhD dissertation, habilitation thesis–PhD dissertation, habilitation thesis–anthology, monograph–anthology) | −17,166.7 | 93 | 34,620.8 | 34,612.4
11 | Model 10 plus order restriction of the latent clusters | −17,351.5 | 80 | 34,950.2 | 34,943.0

Note: MNR = model number, LL = loglikelihood, NPAR = number of parameters, BIC(k) = Bayesian information criterion for k clusters, AIC3 = Akaike information criterion 3.

To assess the separation between LCs, we calculated entropy-based measures, which vary between 0 and 1.0 and show how well the observed variables predict class membership (Lukočienė, Varriale and Vermunt 2010 ). For the LCs, the entropy R² amounted to 0.78; for the latent clusters it amounted to 0.98. The separation of both the LCs and the latent clusters is therefore very good. Another model validity index is the proportion of classification errors. For each project, a posterior probability of belonging to each LC or latent cluster can be estimated; the highest of these probabilities indicates the LC to which the project or discipline should be assigned (modal assignment). Overall, the modal assignments can deviate from the expected assignments according to the sum of the posterior probabilities, and the classification error indicates the amount of misclassification. For model M10 the classification error was comparatively low: 11.0% at the level of projects and 0.7% at the level of disciplines.
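Both diagnostics can be computed directly from the matrix of posterior membership probabilities; a small sketch under the usual definitions (the posterior values here are made up):

```python
import numpy as np

def entropy_r2(posteriors: np.ndarray) -> float:
    """Entropy-based R²: 1 minus the observed classification entropy
    relative to the maximum entropy N*ln(K) of a fully uninformative one."""
    n, k = posteriors.shape
    p = np.clip(posteriors, 1e-12, 1.0)  # avoid log(0)
    entropy = -np.sum(p * np.log(p))
    return 1.0 - entropy / (n * np.log(k))

def classification_error(posteriors: np.ndarray) -> float:
    """Expected proportion of misclassified units under modal assignment."""
    return 1.0 - posteriors.max(axis=1).mean()

posteriors = np.array([[0.90, 0.05, 0.05],
                       [0.20, 0.70, 0.10],
                       [0.10, 0.10, 0.80]])
print(entropy_r2(posteriors), classification_error(posteriors))
```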

Based on Fig. 1 , it could be supposed that the LCs do not represent a qualitative configuration but rather a quantitative dimension, in that the individual profiles run largely parallel and differ only in level, that is, in the quantity of research output. To test this assumption, the LCs were order-restricted (model M11). However, the BIC(k) as well as the AIC3 of M11 increased strongly in comparison to all other models, so the assumption of a quantitative dimension behind the LCs is not very plausible.

To illustrate the meaning of these segments of scientific disciplines, Table 5 shows the distribution of the projects among the four LCs ( Fig. 1 ) within each of the five segments of disciplines (latent clusters). The last column of numbers in Table 5 indicates the sizes of the LCs or types of research output profiles, and the last row indicates the proportion of disciplines in each discipline segment. The latent clusters or segments of scientific disciplines can be described according to the disciplines that belong to them (cluster sizes in per cent of the total number of disciplines in parentheses):

Latent Cluster 1 ‘ Life Sciences and Medicine ’ (31.6%): biology; botany; zoology; geosciences; preclinical medicine; clinical medicine; agricultural, forestry and veterinary sciences.

Latent Cluster 2 ‘ Social Sciences / Arts and Humanities ’ (31.4%): social sciences; jurisprudence; philosophy/theology; history; linguistics and literary studies; art history; other humanities fields.

Latent Cluster 3 ‘ Formal Sciences ’ (13.9%): mathematics; computer sciences; economic sciences.

Latent Cluster 4 ‘ Technical Sciences ’ (13.5%): Other natural sciences; technical sciences; psychology.

Latent Cluster 5 ‘ Physical Sciences ’ (9.6%): physics, astronomy and mechanics; chemistry.

Relative class sizes and distribution of projects among LCs (project output types) within each latent cluster (discipline segment) for M10 (column per cent)

Latent classes (research output profile types) | GClass 1 | GClass 2 | GClass 3 | GClass 4 | GClass 5 | LC size
LC 1 'Not Book' | 0.84 | 0.00 | 0.14 | 0.35 | 0.38 | 0.37
LC 2 'Book and Non-Reviewed Journal Article' | 0.00 | 0.97 | 0.02 | 0.37 | 0.00 | 0.36
LC 3 'Multiple Outputs' | 0.06 | 0.03 | 0.81 | 0.24 | 0.06 | 0.18
LC 4 'Journal Article, Conference Contribution, Career Development' | 0.10 | 0.00 | 0.03 | 0.04 | 0.56 | 0.09
GClass size | 0.32 | 0.31 | 0.14 | 0.14 | 0.10 |

Note: LC size = size of the latent class, GClass size = size of the latent cluster.

The remaining columns in Table 5 show the distribution of projects within each discipline segment, that is, the probability that a project shows a specific profile type given its latent cluster membership. For instance, of all projects falling into the first GClass, 84% are in LC 1 ('Not Book'), 0% in LC 2 ('Book and Non-Reviewed Journal Article'), 6% in LC 3 ('Multiple Outputs'), and 10% in LC 4 ('Journal Article, Conference Contribution, and Career Development'). A high proportion in a cell indicates a strong association between the segment of disciplines in the column and the type of research output profile in the row. In this respect the segment 'Life Sciences and Medicine' (GClass 1) is strongly associated with the 'Not Book' project type (LC 1; 84% of the projects of this segment), but 10% of this cluster also falls into the most productive type, 'Journal Article, Conference Contribution, and Career Development' (LC 4). In the segment 'Social Sciences/Arts and Humanities' (GClass 2), almost all projects (97%) are of the second, 'Book and Non-Reviewed Journal Article' type (LC 2). About 80% of the projects of the third segment, 'Formal Sciences', are classified in the 'Multiple Outputs' type and 14% in the 'Not Book' type. The fourth segment, 'Technical Sciences', is rather heterogeneous, with over 95% of its projects spread across the first three project types and 37% even in the 'Book and Non-Reviewed Journal Article' type (LC 2). The projects of the last segment, 'Physical Sciences', fall mainly into two groups: 38% in the first project type, 'Not Book', and 56% in the most productive project type, 'Journal Article, Conference Contribution, and Career Development'. Overall, except for 'Social Sciences/Arts and Humanities', there is no one-to-one assignment of a segment of disciplines to a particular type of research output profile; disciplines show great heterogeneity in their research output profiles.

Figure 2 shows the LC proportions for each single discipline, structured according to latent cluster (segment of disciplines). It replicates the basic findings of Table 5 at the level of single disciplines. It is noteworthy that the 'Book and Non-Reviewed Journal Article' type (LC 2) plays an important role not only in 'Social Sciences/Arts and Humanities' but also in 'Technical Sciences'.

Figure 2. Estimated proportions of the four LCs of projects for each scientific discipline (stacked bar plot), classified into one of five latent clusters (1–5, separated by dashed lines).

5.3 Explaining LC membership

To explain LC membership, we estimated a modified multilevel multinomial regression model with latent-class membership as the categorical outcome and the set of covariates as predictors ( Vermunt 2010 ). Beforehand, the continuous covariates (time, age, duration, overall rating of the proposal in the ex ante evaluation, and requested grant sum) were z-transformed (M = 0, S = 1) so that the regression results can be interpreted independently of the covariates' units ( Table 6 ).
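In symbols, each continuous covariate x was transformed as

$$z_i = \frac{x_i - \bar{x}}{s_x},$$

so that every transformed covariate has mean 0 and standard deviation 1 and the regression parameters in Table 6 can be read as effects per standard deviation of the covariate.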

Wald statistics are used to assess the statistical significance of a set of parameter estimates; they test the restriction that each estimate in the set of parameters associated with a given covariate equals zero ( Vermunt and Magidson 2005b ). A non-significant Wald statistic indicates that the respective covariate does not differ between the LCs. Additionally, we calculated a z-test for each single parameter. Three covariates explained class membership with statistically significant Wald tests: project duration, requested grant sum, and the project head's age. The overall rating of the proposal (ex ante evaluation), for instance, had no impact on class membership. Research projects with a duration longer than the average of 39 months were more often in LC 4 ('Journal Article, Conference Contribution, and Career Development') than projects with a shorter-than-average duration. The higher the requested grant sum of a project, the less probable it was for the project to be in LC 2 ('Book and Non-Reviewed Journal Article') and the more probable it was for it to be in LC 4. Projects whose head was older than the average age of 47 were more frequently in LC 2, whereas projects with younger heads tended to be in LC 3 ('Multiple Outputs'). Additionally, the percentage of projects in LC 4 decreased from project end year 2002 to project end year 2010.

In sum, projects that belong to the ‘Book and Non-Reviewed Journal Article’ type (LC 2) tended to have rather low requested grant sums and project heads who were older than the average, whereas the most productive ‘Journal Article, Conference Contribution, and Career Development’ type was characterized by above-average requested grant sums and above-average project durations. Further, the percentage of this most productive type decreased over time (time of project end). The third type, ‘Multiple Outputs’, tended to have younger project heads.

5.4 Ranking of projects

Until now it has been assumed that the output profiles of research projects can be fully explained by the LCs or types of output profiles into which the projects were classified. However, as Table 3 shows, projects differ not only with respect to LCs and latent clusters but also along an additional quantitative dimension, a latent C-factor, which refers to classical concepts of factor analysis. Unlike the categorical LCs, all output variables load positively on this continuous dimension, with the same loading structure within each LC. Thus, the higher a project's values on the output variables, the higher its value on the C-factor. Positive values of the C-factor represent above-average productivity relative to the projects in the same LC, and negative values indicate below-average productivity. In sum, the C-factor represents productivity differences between projects within each LC, similar to a mixed Rasch model in psychometrics ( Mutz, Borchers and Becker 2002 ; Mutz and Daniel 2007 ). This kind of ranking can be used by the FWF (and other funding organizations) for comparative evaluation of the output of different projects within a certain time period.

According to the C-factor, the projects within each LC or project type can be ranked ( Fig. 3 ) from left (highest productivity) to right (lowest productivity). Additionally, Goldstein-adjusted confidence intervals are shown, which make it possible to interpret non-overlapping intervals of two projects as statistically significant differences at the 5% probability level ( Mutz and Daniel 2007 ). Roughly speaking, only the first and the last 100 projects in each LC actually showed statistically significant differences in their C-factor values.
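Goldstein-adjusted intervals shrink the usual 95% intervals so that non-overlap of two intervals corresponds approximately to a significant pairwise difference at the 5% level; for estimates with comparable standard errors this amounts to using roughly ±1.39·SE (1.96/√2) instead of ±1.96·SE. A minimal sketch with hypothetical factor scores:

```python
import numpy as np

z_pairwise = 1.96 / np.sqrt(2)  # ≈ 1.39, assuming comparable standard errors

# Hypothetical C-factor scores and standard errors for five projects.
scores = np.array([1.8, 1.1, 0.3, -0.4, -1.5])
se = np.array([0.30, 0.28, 0.25, 0.27, 0.31])

lower = scores - z_pairwise * se
upper = scores + z_pairwise * se
# Non-overlapping [lower, upper] intervals of two projects indicate a
# statistically significant difference at the 5% level.
for lo, hi in zip(lower, upper):
    print(f"[{lo:.2f}, {hi:.2f}]")
```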

Figure 3. Rankings of projects within LCs from left (largest amount of research output) to right (smallest amount of research output), with Goldstein-adjusted confidence intervals.

The aim of this study was to conduct a secondary analysis of final report data from the FWF (ex post evaluation) for the years 2002–10 (project end) and, using multilevel LCA, to build a bottom-up typology of research projects and, further, to classify the scientific disciplines according to the proportions of the types of research output profiles found. Referring to our four research questions, the results can be summarized as follows:

The 1,742 completed FWF-funded research projects with an available final report can be classified according to their research output profiles into four types with relatively high discrimination: 37.0% of all projects are of the 'Not Book' type, 35.8% of the 'Book and Non-Reviewed Journal Article' type, 17.9% of the 'Multiple Outputs' type, and 9.3% of the 'Journal Article, Conference Contribution, and Career Development' type, the most productive type in terms of the number of journal articles and career-related activities. These project types represent primarily a qualitative configuration, not a quantitative dimension along which projects could be ranked.

The 22 scientific disciplines can be divided into five segments based on their different proportions of the types of research output profiles: 31.6% of all projects fall into the segment 'Life Sciences and Medicine', 31.4% into 'Social Sciences/Arts and Humanities', 13.9% into 'Formal Sciences', 13.5% into 'Technical Sciences', and 9.6% into 'Physical Sciences', such as chemistry and physics. Only the 'Social Sciences/Arts and Humanities' segment is almost fully associated with a single research output profile (the 'Book and Non-Reviewed Journal Article' type); all other segments show different proportions of the four research output profiles. Psychology and the economic sciences are usually subsumed under the humanities and social sciences, but the MLLCA showed that these two disciplines do not belong to the segment 'Social Sciences/Arts and Humanities'. Additionally, the fourth and most productive type of research output profile is highly represented (56%) in the fifth segment, 'Physical Sciences', but only weakly (10%) in 'Life Sciences and Medicine', contrary to the findings of the DFG ( Deutsche Forschungsgemeinschaft 2005 ) mentioned in the introduction. 'Life Sciences and Medicine' is strongly associated (84%) with the 'Not Book' type. About 80% of the projects of the third segment, 'Formal Sciences', are classified in the 'Multiple Outputs' type and 14% in the 'Not Book' type. The fourth segment, 'Technical Sciences', is rather heterogeneous, with over 95% of its projects in the first three project types and 37% even in the 'Book and Non-Reviewed Journal Article' type. In the end, the conclusions of the Expert Group on Assessment of University-Based Research set up by the European Commission ( European Commission 2010 ) on the disciplines' preferred forms of communication are too simple. To sum up, there are not only differences between scientific disciplines in their research output profiles; there is also great heterogeneity of research output profiles within disciplines and within segments of disciplines.

Membership in a particular project type can essentially be explained by three covariates: project duration, requested grant sum, and the project head's age. Projects of the 'Book and Non-Reviewed Journal Article' type tend to have small requested grant sums and project heads who are older than average, whereas the most productive type, 'Journal Article, Conference Contribution, and Career Development', tends to have high requested grant sums and longer-than-average project durations; the proportion of this type decreases the closer the project termination date comes to 2010. Reviewers' overall rating of the proposal (ex ante evaluation) had no influence on latent-class membership.

Projects differ not only in the qualitative configuration of research outputs, their research output profiles, but also with respect to a quantitative dimension that makes productivity rankings of projects possible. The higher the output of a project in each of the research output variables, the higher its value on the quantitative (latent) dimension is. Only the first and the last 100 projects within each project type differed statistically significantly on this dimension.

However, some limitations of our study have to be discussed. First, the findings represent a specific picture of the research situation in one country, namely Austria, over a roughly 10-year period, and they may not necessarily apply to other countries. Moreover, the quality of the research was not considered, for example through international reference values for bibliometric indicators ( Opthof and Leydesdorff 2010 ; Bornmann and Mutz 2011 ) or discipline-specific quality criteria. Second, the study included only projects (specifically, 'Stand-Alone Projects') that were funded by the FWF; research projects in Austria that were funded by other research funding organizations, that were not Stand-Alone Projects (40%), or that were funded by higher education institutions themselves could not be included. Further, research projects are mostly financed through mixed funding, that is, partly by grants from various research funding organizations and partly by matching funds from the relevant higher education institution (e.g. human resources), so research output profiles cannot necessarily be explained by the covariates of a single funding organization. Third, the persons responsible for preparing a report (here, the project heads) always have a certain leeway to mention or not mention certain results of their research as results of the FWF-funded project in the final report (e.g. journal articles, career development). In social-psychological terms, this phenomenon can be subsumed under the concept of 'social desirability' ( Nederhof 1985 ): the tendency to respond in a manner that conforms to consensual standards and general expectancies in a culture. The findings of this study could thus also partly reflect different reporting policies in the different scientific disciplines.

Despite these limitations, we draw the following conclusions from the results:

Concept of ' research output ': If the aim is to include all disciplines in ex post research evaluation, it is necessary to define the term 'research output' more broadly, as the RCUK and the FWF do, and to include, in addition to journal articles, other output categories such as monographs, anthologies, conference contributions, and patents, in order to treat all disciplines fairly with regard to research output.

Arts and Humanities : As has been repeatedly demanded, the arts and humanities should really be treated as an independent and relatively uniform area ( Nederhof et al. 1989 ; Nederhof 2006 ). Instead of counting only journal articles and their citations, however, it is important to also include monographs and anthologies ( Kousha and Thelwall 2009 ). Psychology and the economic sciences do not belong to the segment 'Social Sciences/Arts and Humanities'; it is therefore rather problematic to subsume psychology, economic sciences, social sciences, sociology, and the humanities under one concept, 'Social Sciences and Humanities', as is often the case ( Archambault et al. 2006 ; Nederhof 2006 ).

Hierarchy of the sciences : A familiar and widespread belief is that scientific disciplines can be classified as 'hard' and 'soft' sciences, with physics at the top of the hierarchy, the social sciences at the bottom, and biology somewhere in between ( Smith et al. 2000 ). The strategy followed here made it possible to derive, bottom-up from the research outputs of funded research projects, an empirically based typology of scientific disciplines that at its heart is not hierarchically structured. This typology reflects the real structure of science much more closely than top-down classification systems allow. However, the identified research output profiles do not unambiguously indicate the segment of disciplines. For instance, almost all projects in the segment 'Social Sciences/Arts and Humanities' are of the 'Book and Non-Reviewed Journal Article' type, but not all projects of that type are in that segment; there is also a high proportion of 'Book and Non-Reviewed Journal Article' projects in the segment 'Technical Sciences'.

Research output profiles : Using MLLCA, research projects are not examined with regard to a few arbitrarily selected project outputs; instead, the profile or combination of multiple research outputs is analysed. This deserves more attention in ex post research evaluations of projects as well.

Ranking of projects : In addition, with MLLCA a qualitative dimension of different project types and segments of disciplines can be distinguished from a quantitative dimension that captures research productivity. In this way projects, and possibly also scientific disciplines, can be ranked according to their productivity.

Selected model parameters of the regression of LCs on covariates

Covariate | LC 1 'Not Book' Par (SE) | LC 2 'Book and Non-Reviewed Journal Article' Par (SE) | LC 3 'Multiple Outputs' Par (SE) | LC 4 'Journal Article, Conference Contribution, Career Development' Par (SE) | Overall Wald test
Time period of the approval decision | −0.11 (0.63) | −0.85 (1.09) | −1.05 (0.91) | 2.01 (1.05) | 3.73
Time period of the project end | 0.40 (0.63) | 1.01 (1.05) | 0.81 (0.90) | −2.22* (1.09) | 4.26
Project duration | −0.11 (0.29) | −0.93 (0.47) | −0.52 (0.40) | 1.56* (0.51) | 9.62**
Overall rating of the proposal | −0.19 (0.15) | −0.16 (0.25) | −0.04 (0.21) | 0.40 (0.26) | 3.53
Requested grant sum | −0.28 (0.19) | −1.17* (0.37) | 0.45 (0.26) | 1.00* (0.28) | 23.90**
Project head's sex | 0.51 (0.61) | −0.10 (0.97) | −0.72 (1.12) | 0.30 (0.76) | 0.77
Project head's age | −0.25 (0.13) | 0.72* (0.23) | −0.49* (0.22) | 0.02 (0.21) | 13.59**

Note: LC = latent class, Par = parameter estimate, SE = standard error, Wald = Wald test, df = degrees of freedom.

*p < 0.05 (z-test); **p < 0.05 (Wald test, df = 3).


What's the best measure of research output?

Nature Index

21 March 2016

An artist's interpretation of a paper published in Science that compared human and machine learning. Credit: Danqing Wang

When ranking countries based on their output of high-quality research, the Nature Index often uses weighted fractional count (WFC) as its primary metric, and for good reason: WFC reflects the size of the contribution a country's researchers have made to every study published in the 68 top-tier journals included in the index. The weighting also corrects for the over-representation of astronomy papers in the index; it seems astronomers love to write papers. The index's unweighted measure of contribution is fractional count (FC).

For every paper included in the index, the FC and WFC are split among the authors based on their affiliations. Take a recent paper published in Science , which presented a computer model that captures humans' unique ability to learn and had three authors from three different universities, two in the USA and one in Canada. For this paper, each affiliation received an FC of 0.33, and because the paper wasn't published in an astronomy journal, each received the same WFC. Summing the WFC of a country's institutions gives a picture of that nation's performance over a designated period of time.
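The split is simple equal-share arithmetic; a tiny sketch with the paper above, aggregating shares by country rather than by institution:

```python
from collections import Counter

# The three authors of the hypothetical paper, mapped to their countries.
author_countries = ["USA", "USA", "Canada"]

fc_per_author = 1.0 / len(author_countries)  # equal share of one paper
fc_by_country = Counter()
for country in author_countries:
    fc_by_country[country] += fc_per_author

print(dict(fc_by_country))  # {'USA': 0.67, 'Canada': 0.33} (rounded)
```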

In a recent post , we published a graph of the top and bottom 10 countries ranked by the change in their WFC. It revealed a compelling narrative: since 2012, China's contribution to high-quality research has soared, while the traditional stronghold, the United States, appears to have lost its mojo. (Although it's worth noting that the USA's total WFC is still miles ahead of anyone else's, China included.)

But what does it mean when a country's WFC drops? Does that suggest its research performance is slipping?

Not necessarily. When assessing a country's output of top-quality research, it is prudent to also consider article count (AC): the total number of studies that a country's researchers have contributed to, regardless of the size of that contribution.

Consider this next graph, which shows the change in article count for the countries in the graph above. While the article counts of the United States and Japan followed a downward trajectory similar to their WFC, the article counts of all the other countries grew between 2012 and 2015, including those of the eight countries whose WFC dropped.

As article count isn't a weighted metric, it shouldn't be directly compared to WFC. When considering the change in a country's total number of papers versus the contribution it made to those papers, it is best to compare AC with FC.

An interesting trend emerges when a country's article count goes up, but its fractional count dwindles. It suggests that while the country's researchers have contributed to a larger total number of studies, the proportion of their contribution has become smaller.

The reverse can be observed in countries with an increase in their FC but a fall in their AC. In those countries, researchers contributed to fewer studies but received more of the credit for the ones that were published.
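A toy illustration of these two patterns with made-up author shares (each number is a country's share of one paper):

```python
# Made-up shares: each entry is the country's author share of one paper.
country_a = [0.25, 0.25, 0.25, 0.25]  # broad collaboration, small shares
country_b = [1.0, 1.0]                # fewer papers, full credit

print("A: AC =", len(country_a), "FC =", sum(country_a))  # AC = 4, FC = 1.0
print("B: AC =", len(country_b), "FC =", sum(country_b))  # AC = 2, FC = 2.0
```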

These two scenarios at least partly reflect patterns of collaboration, and their significance will depend on the broader research context. What is certain, however, is that none of WFC, FC, or AC alone can reveal the state of a country's high-quality natural science output; the three metrics should be considered together.

How To Make Conceptual Framework (With Examples and Templates)

We all know that a research paper involves plenty of concepts, and a great number of concepts can make your study confusing.

A conceptual framework ensures that the concepts of your study are organized and presented comprehensively. Let this article guide you in making the conceptual framework of your study.

Related: How to Write a Concept Paper for Academic Research

At a Glance: Free Conceptual Framework Templates

Too busy to create a conceptual framework from scratch? No problem. We've created templates for each conceptual framework so you can start on the right foot. All you need to do is enter the details of the variables. Feel free to modify the design according to your needs. Please read the main article below to learn more about the conceptual framework.

Conceptual Framework Template #1: Independent-Dependent Variable Model

Conceptual Framework Template #2: Input-Process-Output (IPO) Model

Conceptual Framework Template #3: Concept Map

What Is a Conceptual Framework?

A conceptual framework shows the relationship between the variables of your study.  It includes a visual diagram or a model that summarizes the concepts of your study and a narrative explanation of the model presented.

Why Should Research Be Given a Conceptual Framework?

Imagine your study as a long journey with the research result as the destination. You don't want to get lost along the way because of complicated concepts, which is why you need a guide. The conceptual framework keeps you on track by presenting and simplifying the relationships between the variables, usually through an illustration supported by a written interpretation.

Also, people who read your research need a clear guide to the variables in your study and to where the research is heading. By looking at the conceptual framework, readers can get the gist of the research concepts without reading the entire study.

Related: How to Write Significance of the Study (with Examples)

What Is the Difference Between Conceptual Framework and Theoretical Framework?

Conceptual Framework | Theoretical Framework
You can develop it through the researcher's specific concepts in the study. | Purely based on existing theories.
The research problem is backed up by existing knowledge regarding the things the researcher wants to discover about the topic. | The research problem is supported using past relevant theories from the existing literature.
Based on acceptable and logical findings. | It is established with the help of the research paradigm.
It emphasizes the historical background and the structure to fill in the knowledge gap. | A general set of ideas and theories is essential in writing this area.
It highlights the fundamental concepts characterizing the study variables. | It emphasizes the historical background and the structure to fill the knowledge gap.

Both of them show concepts and ideas of your study. The theoretical framework presents the theories, rules, and principles that serve as the basis of the research. Thus, the theoretical framework presents broad concepts related to your study. On the other hand, the conceptual framework shows a specific approach derived from the theoretical framework. It provides particular variables and shows how these variables are related.

Let’s say your research is about the Effects of Social Media on the Political Literacy of College Students. You may include some theories related to political literacy, such as this paper, in your theoretical framework. Based on this paper, political participation and awareness determine political literacy.

For the conceptual framework, you may state that the specific form of political participation and awareness you will use for the study is the engagement of college students on political issues on social media. Then, through a diagram and narrative explanation, you can show that using social media affects the political literacy of college students.

What Are the Different Types of Conceptual Frameworks?

The conceptual framework has different types based on how the research concepts are organized [1].

1. Taxonomy

In this type of conceptual framework, the phenomena of your study are grouped into categories without presenting the relationship among them. The point of this conceptual framework is to distinguish the categories from one another.

2. Visual Presentation

In this conceptual framework, the relationship between the phenomena and variables of your study is presented. Using this conceptual framework implies that your research provides empirical evidence to prove the relationship between variables. This is the type of conceptual framework that is usually used in research studies.

3. Mathematical Description

In this conceptual framework, the relationship between phenomena and variables of your study is described using mathematical formulas. Also, the extent of the relationship between these variables is presented with specific quantities.
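As an illustration only (this equation is not taken from any of the cited sources), a mathematical-description framework for the fertilizer example used later in this article might posit a simple linear relationship:

```latex
\[
  G = \beta_0 + \beta_1 F + \varepsilon
\]
% where G is the plant's growth rate, F is the quantity of organic
% fertilizer, \beta_1 captures the strength of the relationship, and
% \varepsilon is an error term for other influences (sunlight, water, soil).
```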

How To Make a Conceptual Framework: 5 Steps

1. Identify the Important Variables of Your Study

There are two essential variables that you must identify in your study: the independent and the dependent variables.

An independent variable is a variable that you can manipulate. It can affect the dependent variable. Meanwhile, the dependent variable is the resulting variable that you are measuring.

You may refer to your research question to determine your research’s independent and dependent variables.

Suppose your research question is: “Is There a Significant Relationship Between the Quantity of Organic Fertilizer Used and the Plant’s Growth Rate?” The independent variable of this study is the quantity of organic fertilizer used, while the dependent variable is the plant’s growth rate.

2. Think About How the Variables Are Related

Usually, the variables of a study have a direct relationship. If a change in one of your variables leads to a corresponding change in another, they might have this kind of relationship.

However, note that having a direct relationship between variables does not mean they already have a cause-and-effect relationship [2]. It takes statistical analysis to prove causation between variables.

Using our earlier example, the quantity of organic fertilizer may be directly related to the plant's growth rate. However, we cannot be sure that the quantity of organic fertilizer is the sole reason for changes in the plant's growth rate.
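To illustrate what such a statistical check might look like, here is a minimal Python sketch using invented placeholder measurements (not real data):

```python
# A minimal sketch of testing for a direct (linear) relationship between
# fertilizer quantity and plant growth rate; the numbers are placeholders.
from scipy.stats import pearsonr

fertilizer_g = [10, 20, 30, 40, 50, 60]        # grams of organic fertilizer
growth_cm_wk = [1.1, 1.8, 2.4, 2.9, 3.6, 4.0]  # observed growth rate (cm/week)

r, p_value = pearsonr(fertilizer_g, growth_cm_wk)
print(f"r = {r:.2f}, p = {p_value:.4f}")

# A large r with a small p suggests a direct relationship, but confounders
# (sunlight, water, soil quality) could still drive the change, so
# correlation alone does not establish causation.
```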

3. Analyze and Determine Other Influencing Variables

Consider analyzing whether other variables can affect the relationship between your independent and dependent variables [3].

4. Create a Visual Diagram or a Model

Now that you’ve identified the variables and their relationship, you may create a visual diagram summarizing them.

Usually, shapes such as rectangles, circles, and arrows are used for the model. You may create a visual diagram or model for your conceptual framework in different ways. The three most common models are the independent-dependent variable model, the input-process-output (IPO) model, and concept maps.

a. Using the Independent-Dependent Variable Model

You may create this model by writing the independent and dependent variables inside rectangles. Then, insert a line segment between them, connecting the rectangles. This line segment indicates the direct relationship between these variables. 

Below is a visual diagram based on our example about the relationship between organic fertilizer and a plant’s growth rate. 

[Figure: independent-dependent variable model for the fertilizer and growth rate example]
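If you prefer to generate the diagram programmatically rather than draw it by hand, here is a sketch that assumes the `graphviz` Python package (and the underlying Graphviz binaries) are installed:

```python
# Sketch: drawing the independent-dependent variable model with graphviz.
# An undirected Graph is used because a plain line segment indicates a
# direct relationship rather than causation.
from graphviz import Graph

g = Graph("conceptual_framework", format="png")
g.attr(rankdir="LR")                       # lay the boxes out left to right
g.attr("node", shape="rectangle")
g.node("iv", "Quantity of organic fertilizer used")  # independent variable
g.node("dv", "Plant's growth rate")                  # dependent variable
g.edge("iv", "dv")                         # the connecting line segment
g.render("conceptual_framework_1")         # writes conceptual_framework_1.png
```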

b. Using the Input-Process-Output (IPO) Model

If you want to emphasize your research process, the input-process-output model is the appropriate visual diagram for your conceptual framework.

To create your visual diagram using the IPO model, follow these steps:

  • Determine the inputs of your study . Inputs are the variables you will use to arrive at your research result. Usually, your independent variables are also the inputs of your research. Let’s say your research is about the Level of Satisfaction of College Students Using Google Classroom as an Online Learning Platform. You may include in your inputs the profile of your respondents and the curriculum used in the online learning platform.
  • Outline your research process. Using our example above, the research process should be like this: Data collection of student profiles → Administering questionnaires → Tabulation of students’ responses → Statistical data analysis.
  • State the research output . Indicate what you are expecting after you conduct the research. In our example above, the research output is the assessed level of satisfaction of college students with the use of Google Classroom as an online learning platform.
  • Create the model using the research’s determined input, process, and output.

Presented below is the IPO model for our example above.

[Figure: IPO model for the Google Classroom satisfaction example]
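The same hypothetical `graphviz` idea extends to the IPO model; a sketch for the Google Classroom example (box contents abbreviated):

```python
# Sketch: an input-process-output chain, with directed arrows showing flow.
from graphviz import Digraph

d = Digraph("ipo_model", format="png")
d.attr(rankdir="LR")
d.attr("node", shape="rectangle")
d.node("input", "Input: respondent profiles, platform curriculum")
d.node("process", "Process: questionnaires, tabulation, statistical analysis")
d.node("output", "Output: assessed level of satisfaction")
d.edge("input", "process")
d.edge("process", "output")
d.render("conceptual_framework_2")
```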

c. Using Concept Maps

If you think the two models presented previously are insufficient to summarize your study’s concepts, you may use a concept map for your visual diagram.

A concept map is a helpful visual diagram if multiple variables affect one another. Let’s say your research is about Coping with the Remote Learning System: Anxiety Levels of College Students. Presented below is the concept map for the research’s conceptual framework:

[Figure: concept map for the remote learning anxiety example]

5. Explain Your Conceptual Framework in Narrative Form

Provide a brief explanation of your conceptual framework. State the essential variables, their relationship, and the research outcome.

Using the same example about the relationship between organic fertilizer and the growth rate of the plant, we can come up with the following explanation to accompany the conceptual framework:

Figure 1 shows the Conceptual Framework of the study. The quantity of the organic fertilizer used is the independent variable, while the plant’s growth is the research’s dependent variable. These two variables are directly related based on the research’s empirical evidence.

Conceptual Framework in Quantitative Research

You can create your conceptual framework by following the steps discussed in the previous section. Note, however, that quantitative research has statistical analysis. Thus, you may use arrows to indicate a cause-and-effect relationship in your model. An arrow implies that your independent variable caused the changes in your dependent variable.

Usually, for quantitative research, the Input-Process-Output model is used as a visual diagram. Here is an example of a conceptual framework in quantitative research:

Research Topic : Level of Effectiveness of Corn (Zea mays) Silk Ethanol Extract as an Antioxidant

[Figure: IPO-model conceptual framework for the corn silk ethanol extract study]

Conceptual Framework in Qualitative Research

Again, you can follow the same step-by-step guide discussed previously to create a conceptual framework for qualitative research. However, note that you should avoid using one-way arrows as they may indicate causation . Qualitative research cannot prove causation since it uses only descriptive and narrative analysis to relate variables.

Here is an example of a conceptual framework in qualitative research:

Research Topic : Lived Experiences of Medical Health Workers During Community Quarantine

[Figure: conceptual framework for the lived experiences of medical health workers study]

Conceptual Framework Examples

Presented below are some examples of conceptual frameworks.

Research Topic : Hypoglycemic Ability of Gabi (Colocasia esculenta) Leaf Extract in the Blood Glucose Level of Swiss Mice (Mus musculus)

[Figure: conceptual framework for the gabi leaf extract study]

Figure 1 presents the Conceptual Framework of the study. The quantity of gabi leaf extract is the independent variable, while the Swiss mice’s blood glucose level is the study’s dependent variable. This study establishes a direct relationship between these variables through empirical evidence and statistical analysis . 

Research Topic : Level of Effectiveness of Using Social Media in the Political Literacy of College Students

[Figure: IPO-model conceptual framework for the social media and political literacy study]

Figure 1 shows the Conceptual Framework of the study. The input is the profile of the college students according to sex, year level, and the social media platform being used. The research process includes administering the questionnaires, tabulating students’ responses, and statistical data analysis and interpretation. The output is the effectiveness of using social media in the political literacy of college students.

Research Topic: Factors Affecting the Satisfaction Level of Community Inhabitants

[Figure: conceptual framework for the community satisfaction study]

Figure 1 presents a visual illustration of the factors that affect the satisfaction level of community inhabitants. As presented, environmental, societal, and economic factors influence the satisfaction level of community inhabitants. Each factor has its own indicators, which are considered in this study.

Tips and Warnings

  • Please keep it simple. Avoid using fancy illustrations or designs when creating your conceptual framework. 
  • Allot a lot of space for feedback. This is to show that your research variables or methodology might be revised based on the input from the research panel. Below is an example of a conceptual framework with a spot allotted for feedback.

[Figure: conceptual framework with a spot allotted for feedback]

Frequently Asked Questions

1. How can I create a conceptual framework in Microsoft Word?

First, click the Insert tab and select Shapes . You’ll see a wide range of shapes to choose from. Usually, rectangles, circles, and arrows are the shapes used for the conceptual framework. 

[Screenshot: selecting Shapes from the Insert tab in Microsoft Word]

Next, draw your selected shape in the document.

[Screenshot: drawing the selected shape in the document]

Insert the name of the variable inside the shape. You can do this by pointing your cursor to the shape, right-clicking your mouse, selecting Add Text , and typing in the text.

[Screenshot: adding text inside the shape]

Repeat the same process for the remaining variables of your study. If you need arrows to connect the different variables, you can insert one by going to the Insert tab, then Shape, and finally, Lines or Block Arrows, depending on your preferred arrow style.

2. How do I explain my conceptual framework during my defense?

If you have used the Independent-Dependent Variable Model in creating your conceptual framework, start by stating your research's variables. Afterward, explain the relationship between these variables. Example: “Using statistical/descriptive analysis of the data we have collected, we are going to show how <state your independent variable> exhibits a significant relationship to <state your dependent variable>.”

On the other hand, if you have used an Input-Process-Output Model, start by explaining the inputs of your research. Then, tell them about your research process. You may refer to the Research Methodology in Chapter 3 to accurately present your research process. Lastly, explain what your research outcome is.

Meanwhile, if you have used a concept map, ensure you understand the idea behind the illustration. Discuss how the concepts are related and highlight the research outcome.

3. In what stage of research is the conceptual framework written?

The research study’s conceptual framework is in Chapter 2, following the Review of Related Literature.

4. What is the difference between a Conceptual Framework and Literature Review?

The Conceptual Framework is a summary of the concepts of your study where the relationship of the variables is presented. On the other hand, Literature Review is a collection of published studies and literature related to your study. 

Suppose your research concerns the Hypoglycemic Ability of Gabi (Colocasia esculenta) Leaf Extract on Swiss Mice (Mus musculus). In your conceptual framework, you will create a visual diagram and a narrative explanation presenting the quantity of gabi leaf extract and the mice’s blood glucose level as your research variables. On the other hand, for the literature review, you may include this study and explain how this is related to your research topic.

5. When do I use a two-way arrow for my conceptual framework?

You will use a two-way arrow in your conceptual framework if the variables of your study are interdependent. If variable A affects variable B and variable B also affects variable A, you may use a two-way arrow to show that A and B affect each other.

Suppose your research concerns the Relationship Between Students’ Satisfaction Levels and Online Learning Platforms. Since students’ satisfaction level determines the online learning platform the school uses and vice versa, these variables have a direct relationship. Thus, you may use two-way arrows to indicate that the variables directly affect each other.
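Continuing the hypothetical `graphviz` sketches above, a two-way arrow can be drawn with the `dir="both"` edge attribute, which puts arrowheads on both ends:

```python
# Sketch: interdependent variables connected by a two-way arrow.
from graphviz import Digraph

d = Digraph("interdependent", format="png")
d.attr("node", shape="rectangle")
d.node("a", "Students' satisfaction level")
d.node("b", "Online learning platform used")
d.edge("a", "b", dir="both")  # arrowheads on both ends: each affects the other
d.render("conceptual_framework_two_way")
```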

  1. Conceptual Framework – Meaning, Importance and How to Write it. (2020). Afribary. Retrieved April 27, 2021, from https://afribary.com/knowledge/conceptual-framework/
  2. Correlation vs Causation. JMP Statistics Knowledge Portal. Retrieved April 27, 2021, from https://www.jmp.com/en_ph/statistics-knowledge-portal/what-is-correlation/correlation-vs-causation.html
  3. Swaen, B., & George, T. (2022, August 22). What is a conceptual framework? Tips & examples. Scribbr. Retrieved December 5, 2022, from https://www.scribbr.com/methodology/conceptual-framework/

Written by Jewel Kyle Fabula


Jewel Kyle Fabula is a Bachelor of Science in Economics student at the University of the Philippines Diliman. His passion for learning mathematics developed as he competed in some mathematics competitions during his Junior High School years. He loves cats, playing video games, and listening to music.



How to Write a Results Section | Tips & Examples

Published on August 30, 2022 by Tegan George . Revised on July 18, 2023.

A results section is where you report the main findings of the data collection and analysis you conducted for your thesis or dissertation. You should report all relevant results concisely and objectively, in a logical order. Don’t include subjective interpretations of why you found these results or what they mean—any evaluation should be saved for the discussion section.


Table of contents

  • How to write a results section
  • Reporting quantitative research results
  • Reporting qualitative research results
  • Results vs. discussion vs. conclusion
  • Checklist: research results
  • Frequently asked questions about results sections

When conducting research, it’s important to report the results of your study prior to discussing your interpretations of it. This gives your reader a clear idea of exactly what you found and keeps the data itself separate from your subjective analysis.

Here are a few best practices:

  • Your results should always be written in the past tense.
  • While the length of this section depends on how much data you collected and analyzed, it should be written as concisely as possible.
  • Only include results that are directly relevant to answering your research questions . Avoid speculative or interpretative words like “appears” or “implies.”
  • If you have other results you’d like to include, consider adding them to an appendix or footnotes.
  • Always start out with your broadest results first, and then flow into your more granular (but still relevant) ones. Think of it like a shoe store: first discuss the shoes as a whole, then the sneakers, boots, sandals, etc.


If you conducted quantitative research , you’ll likely be working with the results of some sort of statistical analysis .

Your results section should report the results of any statistical tests you used to compare groups or assess relationships between variables . It should also state whether or not each hypothesis was supported.

The most logical way to structure quantitative results is to frame them around your research questions or hypotheses. For each question or hypothesis, share:

  • A reminder of the type of analysis you used (e.g., a two-sample t test or simple linear regression ). A more detailed description of your analysis should go in your methodology section.
  • A concise summary of each relevant result, both positive and negative. This can include any relevant descriptive statistics (e.g., means and standard deviations ) as well as inferential statistics (e.g., t scores, degrees of freedom , and p values ). Remember, these numbers are often placed in parentheses.
  • A brief statement of how each result relates to the question, or whether the hypothesis was supported. You can briefly mention any results that didn’t fit with your expectations and assumptions, but save any speculation on their meaning or consequences for your discussion  and conclusion.

A note on tables and figures

In quantitative research, it’s often helpful to include visual elements such as graphs, charts, and tables , but only if they are directly relevant to your results. Give these elements clear, descriptive titles and labels so that your reader can easily understand what is being shown. If you want to include any other visual elements that are more tangential in nature, consider adding a figure and table list .

As a rule of thumb:

  • Tables are used to communicate exact values, giving a concise overview of various results
  • Graphs and charts are used to visualize trends and relationships, giving an at-a-glance illustration of key findings

Don’t forget to also mention any tables and figures you used within the text of your results section. Summarize or elaborate on specific aspects you think your reader should know about rather than merely restating the same numbers already shown.

A two-sample t test was used to test the hypothesis that higher social distance from environmental problems would reduce the intent to donate to environmental organizations, with donation intention (recorded as a score from 1 to 10) as the outcome variable and social distance (categorized as either a low or high level of social distance) as the predictor variable. Social distance was found to be positively correlated with donation intention, t(98) = 12.19, p < .001, with the donation intention of the high social distance group 0.28 points higher, on average, than the low social distance group (see figure 1). This contradicts the initial hypothesis that social distance would decrease donation intention, and in fact suggests a small effect in the opposite direction.
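For readers who want to see how such a test is computed, here is a minimal sketch with synthetic placeholder scores; the study's actual data is not reproduced here:

```python
# Sketch: a two-sample t test comparing donation intention between a low
# and a high social-distance group, using invented data for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=42)
low_distance = rng.normal(loc=4.2, scale=1.0, size=50)   # scores on a 1-10 scale
high_distance = rng.normal(loc=4.5, scale=1.0, size=50)

t_stat, p_value = ttest_ind(high_distance, low_distance)
df = len(low_distance) + len(high_distance) - 2          # 98 degrees of freedom
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean difference = {high_distance.mean() - low_distance.mean():.2f}")
```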

Example of using figures in the results section

Figure 1: Intention to donate to environmental organizations based on social distance from impact of environmental damage.

In qualitative research , your results might not all be directly related to specific hypotheses. In this case, you can structure your results section around key themes or topics that emerged from your analysis of the data.

For each theme, start with general observations about what the data showed. You can mention:

  • Recurring points of agreement or disagreement
  • Patterns and trends
  • Particularly significant snippets from individual responses

Next, clarify and support these points with direct quotations. Be sure to report any relevant demographic information about participants. Further information (such as full transcripts , if appropriate) can be included in an appendix .

When asked about video games as a form of art, the respondents tended to believe that video games themselves are not an art form, but agreed that creativity is involved in their production. The criteria used to identify artistic video games included design, story, music, and creative teams. One respondent (male, 24) noted a difference in creativity between popular video game genres:

“I think that in role-playing games, there’s more attention to character design, to world design, because the whole story is important and more attention is paid to certain game elements […] so that perhaps you do need bigger teams of creative experts than in an average shooter or something.”

Responses suggest that video game consumers consider some types of games to have more artistic potential than others.

Your results section should objectively report your findings, presenting only brief observations in relation to each question, hypothesis, or theme.

It should not  speculate about the meaning of the results or attempt to answer your main research question . Detailed interpretation of your results is more suitable for your discussion section , while synthesis of your results into an overall answer to your main research question is best left for your conclusion .


I have completed my data collection and analyzed the results.

I have included all results that are relevant to my research questions.

I have concisely and objectively reported each result, including relevant descriptive statistics and inferential statistics .

I have stated whether each hypothesis was supported or refuted.

I have used tables and figures to illustrate my results where appropriate.

All tables and figures are correctly labelled and referred to in the text.

There is no subjective interpretation or speculation on the meaning of the results.

You've finished writing up your results! Use the other checklists to further improve your thesis.

Frequently asked questions about results sections
The results chapter of a thesis or dissertation presents your research results concisely and objectively.

In quantitative research , for each question or hypothesis , state:

  • The type of analysis used
  • Relevant results in the form of descriptive and inferential statistics
  • Whether or not the alternative hypothesis was supported

In qualitative research , for each question or theme, describe:

  • Recurring patterns
  • Significant or representative individual responses
  • Relevant quotations from the data

Don’t interpret or speculate in the results chapter.

Results are usually written in the past tense, because they are describing the outcome of completed actions.

The results chapter or section simply and objectively reports what you found, without speculating on why you found these results. The discussion interprets the meaning of the results, puts them in context, and explains why they matter.

In qualitative research , results and discussion are sometimes combined. But in quantitative research , it’s considered important to separate the objective results from your interpretation of them.


Research Output


An output is an outcome of research and can take many forms. Research Outputs must meet the definition of Research.

Source: Australian Research Council Excellence in Research for Australia 2018 Submission Guidelines.

Approved Date: 3/6/2024
Effective Date: 3/6/2024
Record No: 15/2329PL


Research incentives and research output

  • Published: 09 March 2018
  • Volume 76 , pages 1029–1049, ( 2018 )


  • Finn Jørgensen 1 &
  • Thor-Erik Sandberg Hanssen 1  


This paper first briefly reviews the worldwide development of the size of the university sector, its research merits, and authorities’ use of incentive systems for academic staff. Then, the paper develops a static model of a researcher’s behaviour, aiming to discuss how different salary reward schemes and teaching obligations influence his or her research merits. Moreover, special focus is placed on discussing the importance of the researcher’s skills and of working in solid academic environments for quality research. The main findings are as follows: First, research achievements will improve irrespective of the relative impact that quantity and quality of research have on researchers’ salaries. Second, small changes in fixed salary and teaching duties will not influence the amount of time academics spend on research and, as such, their research merits. Third, because research productivity (i.e., the number of pages written) and research quality increase with the researcher’s skills and effort, both figures signal a researcher’s potential when adjusting for his or her age and the kind of research carried out. Finally, because researchers’ utility depends on factors beyond salary and leisure time, employers have a number of instruments to use in order to attract skilled researchers in a globalised market.


Notes

In Hanssen and Jørgensen (2014), Hanssen and Jørgensen (2015), and Hanssen et al. (2018), a researcher’s skills are measured by the number of times his or her works are cited.

Here and throughout the article, the notation \(Y_X\) means the partial derivative of \(Y\) with respect to \(X\), etc.

A thorough discussion of quasi-concave functions can be found in Sydsæther and Hammond (2006).

It is reasonable to assume a positive monotonic relationship between the number of pages produced (P) and the number of article equivalents (AE). Note that AE is not the same as the value of the publication indicator (PI) used in Norway. PI is a weighted average of the number of article equivalents using the journals’ impact factors as weights. Hence, the value of PI depends on both P and Q.

The assumption made that the marginal influence on salary of increased research quality is non-increasing may be open to debate, but most universities (at least in Europe) have wage systems that limit large wage differentials among the staff.

\( \left(-\frac{\partial I}{\partial t}\right)_{Q=Q^{\ast}}=\frac{Q'_t}{Q'_I} \) and/or \( \left(-\frac{\partial I}{\partial T}\right)_{Q=Q^{\ast}}=\frac{Q'_T}{Q'_I} \) diminish rapidly.

In Norway, for example, general guidelines indicate that professors and associate professors should not use more than approximately 50% of their working time on teaching and administrative duties. Figures from Egeland and Bergene (2012) show, however, that these groups use only 30% of their total working time for research.

Based on our a priori assumptions about the functional forms, it is reasonable to assume that the second-order conditions are met.

\( \left(-\frac{\partial T}{\partial t}\right)_{Q=Q^{\ast}}=\frac{Q_t}{Q_T} \).

\( \left(-\frac{\partial E}{\partial P}\right)_{S=S^{\ast}}=\frac{S_P}{S_E} \), \( \left(-\frac{\partial L}{\partial P}\right)_{U=U^{\ast}}=\frac{U_P}{U_L} \).

1 € ≈ 9.5 NOK.

\( \left(-\frac{\partial S}{\partial Q}\right)_{U=U^{\ast}}=\frac{U_Q}{U_S} \).

\( \frac{U_S}{U_P}=\frac{\beta_1}{\beta_3} \), \( \frac{U_S}{U_Q}=\frac{\beta_1}{\beta_4} \), \( \frac{U_P}{U_Q}=\frac{\beta_3}{\beta_4} \), \( \frac{U_L}{U_S}=\frac{\beta_2}{L\cdot\beta_1} \), \( \frac{U_L}{U_P}=\frac{\beta_2}{L\cdot\beta_3} \), \( \frac{U_L}{U_Q}=\frac{\beta_2}{L\cdot\beta_4} \). A thorough discussion of different types of utility functions can be found in Nechyba (2011).

Our specification of the quality function implies that the marginal rates of substitution between \(t\), \(T\), and \(I\), holding quality constant, are \( \frac{\partial t}{\partial T}=-\frac{\alpha_2}{\alpha_1}\cdot\frac{t}{T} \), \( \frac{\partial t}{\partial I}=-\frac{\alpha_3}{\alpha_1}\cdot\frac{t}{I} \), and \( \frac{\partial T}{\partial I}=-\frac{\alpha_3}{\alpha_2}\cdot\frac{T}{I} \).
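For readers reconstructing these expressions: they follow from setting the total differential of the quality function to zero along an iso-quality curve. A sketch, assuming a Cobb-Douglas form \( Q = A\, t^{\alpha_1} T^{\alpha_2} I^{\alpha_3} \) consistent with this footnote (the functional form is our assumption here):

```latex
% Along an iso-quality curve, dQ = Q_t\,dt + Q_T\,dT = 0, so
\[
  \frac{\partial t}{\partial T}
  = -\frac{Q_T}{Q_t}
  = -\frac{\alpha_2 Q/T}{\alpha_1 Q/t}
  = -\frac{\alpha_2}{\alpha_1}\cdot\frac{t}{T},
\]
% and the expressions for \partial t/\partial I and \partial T/\partial I
% follow the same pattern.
```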

The values of \(\tau_0\), \(\tau_1\), \(\tau_2\), and \(D\) can, to some extent, vary among universities within the same country, but in many European countries, trade unions and central authorities can affect these figures, in particular \(\tau_0\).

The interrelationships between the effects on work effort of intrinsic motivation and external payment schemes are discussed in Grepperud and Pedersen (2006).

References

Aagaard, K., Bloch, C., Schneider, J. W., Henriksen, D., Ryan, T. K., Lauridsen, P. S. (2014). Evaluering af den norske publiceringsindikator. Aarhus: Dansk Center for Forskningsanalyse, Institut for Statskundskab, Aarhus Universitet. (In Danish). [Evaluation of the Norwegian publication indicator].

Aghion, P., Dewatripont, M., Hoxby, C., Mas-Colell, A. & Sapir, A. (2007) Why reform Europe’s universities. Bruegel Policy Brief. http://bruegel.org/wp-content/uploads/imported/publications/pbf_040907_universities.pdf .

Aghion, P., Dewatripont, M., Hoxby, C., Mas-Colell, A., & Sapir, A. (2010). The governance and performance of universities: Evidence from Europe and the US. Economic Policy, 25 , 7–59.


Azoulay, P., Zivin, J. S. G., & Wang, J. (2010). Superstar extinction. Quarterly Journal of Economics, 125 , 549–589.

Becker, G. S. (1964). Human capital: A theoretical and empirical analysis with special reference to education . Chicago: The University of Chicago Press.


Bleiklie, I. (1998). Justifying the evaluative state: New public management ideals in higher education. European Journal of Education, 33 , 299–316.

Bornmann, L. (2017). Measuring impact in research evaluations: A thorough discussion of methods for, effects of and problems with impact measurements. Higher Education, 73 , 775–787.

Brown, H. (1972). History and the learned journal. Journal of the History of Ideas, 33 , 365–378.

Browne, J., Barber, M., Coyle, D., Eastwood, D., King, J., Naik, R., Sands, P. (2010) Securing a sustainable future for higher education - An independent review of higher education funding & student finance.

Butler, L. (2004). What happens when funding is linked to publication counts? In H. F. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology . Dordrecht: Kluwer Academic Publishers.

Colussi, T. (2017) Social ties in academia: a friend is a treasure. The Review of Economics and Statistics , https://doi.org/10.1162/REST_a_00666 .

de Lourdes Machado-Taylor, M., Meira Soares, V., Brites, R., Brites Ferreira, J., Farhangmehr, M., Gouveia, O. M. R., & Peterson, M. (2016). Academic job satisfaction and motivation: Findings from a nationwide study in Portuguese higher education. Studies in Higher Education, 41 , 541–559.

Delgado-Márquez, B. L., Escudero-Torres, M. A., & Hurtado-Torres, N. E. (2013). Being highly internationalised strengthens your reputation: An empirical investigation of top higher education institutions. Higher Education, 66 , 619–633.

Dimov, D. P., & Shepherd, D. A. (2005). Human capital theory and venture capital firms: Exploring “home runs” and “strike outs”. Journal of Business Venturing, 20 , 1–21.

Dries, N., Pepermans, R., & Carlier, O. (2008). Career success: Constructing a multidimensional model. Journal of Vocational Behavior, 73 , 254–267.

Egeland, C., & Bergene, A. C. (2012). Tidsbruk, arbeidstid og tidskonflikter i den norske universitets. og høgskolesektoren. Oslo, Arbeidsforskningsinstituttet. (In Norwegian). [Time use, working hours and time conflicts in Norwegian higher education].

Elken, M., Hovdhaugen, E., & Stensaker, B. (2016). Global rankings in the Nordic region: Challenging the identity of research-intensive universities? Higher Education, 72 , 781–795.

Enders, J., & Westerheijden, D. F. (2014). The Dutch way of new public management: A critical perspective on quality assurance in higher education. Policy and Society, 33 , 189–198.

Ericsson, A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100 , 363–406.

Furnham, A., & Monsen, J. (2009). Personality traits and intelligence predict academic school grades. Learning and Individual Differences, 19 , 28–33.

Grepperud, S., & Pedersen, P. A. (2006). Crowding effects and work ethics. Labour, 20 , 125–138.

Hægeland, T., Ervik, A. O., Hansen, H. F., Hervik, A., Lommerud, K. E., Ringdal, O., Sahlin, K., Steinveg, B. E. Stensaker, B. (2014). Finansiering for kvalitet, mangfold og samspill. Nytt finansieringssystem for universiteter og høyskoler. Oslo, Kunnskapsdepartementet. (In Norwegian). [Funding for quality, variety and interaction. New funding system for universities and colleges].

Hambrick, D. Z., & Meinz, E. J. (2011). Limits on the predictive power of domain-specific experience and knowledge in skilled performance. Current Directions in Psychological Science, 20 , 275–279.

Hanssen, T.-E. S., & Jørgensen, F. (2014). Citation counts in transportation research. European Transport Research Review, 6 , 205–212.

Hanssen, T.-E. S., & Jørgensen, F. (2015). The value of experience in research. Journal of Informetrics, 9 , 16–24.

Hanssen, T.-E. S., Jørgensen, F., & Larsen, B. (2018) The relation between the quality of research, researchers´ experience, and their academic environment. Scientometrics , https://doi.org/10.1007/s11192-017-2580-y .

Hesli, V. L., & Lee, J. M. (2013). Job satisfaction in academia: Why are some faculty members happier than others? PS: Political Science & Politics, 46 , 339–354.

Hicks, D. (2012). Performance-based university research funding systems. Research Policy, 41 , 251–261.

Horta, H., Dautel, V., & Veloso, F. M. (2012). An output perspective on the teaching–research nexus: An analysis focusing on the United States higher education system. Studies in Higher Education, 37 , 171–187.

Hüther, O., & Krücken, G. (2013). Hierarchy and power: A conceptual analysis with particular reference to new public management reforms in German universities. European Journal of Higher Education, 3 , 307–323.

Igami, M., Nagaoka, S., & Walsh, J. P. (2015). Contribution of postdoctoral fellows to fast-moving and competitive scientific research. Journal of Technology Transfer, 40 , 723–741.

Ioannidis, J. P. A., Boyack, K. W. & Klavans, R. (2014). Estimates of the continuously publishing core in the scientific workforce. PLoS ONE , 9 (7), 1–10

Janger, J., & Nowotny, K. (2016). Job choice in academia. Research Policy, 45 , 1672–1683.

Jinha, A. (2010). Article 50 million: An estimate of the number of scholarly articles in existence. Learned Publishing, 23 , 258–263.

Jørgensen, F., & Wentzel-Larsen, T. (1995). En modell for en forskers tilpasning. Norsk Økonomisk Tidsskrift, 109 , 205–228 (In Norwegian). [A model for a researchers behavior].

Jung, J. (2014). Research productivity by career stage among Korean academics. Tertiary Education and Management, 20 , 85–105.

Kallio, K. M., & Kallio, T. J. (2014). Management-by-results and performance measurement in universities - implications for work motivation. Studies in Higher Education, 39 , 574–589.

Kenna, R., & Berche, B. (2011). Critical mass and the dependency of research quality on group size. Scientometrics, 86 , 527–540.

Kim, T. (2017). Academic mobility, transnational identity capital, and stratification under conditions of academic capitalism. Higher Education, 73 , 981–997.

Kunnskapsdepartementet (2015). Forskningsbarometeret 2015. Oslo. (In Norwegian). [Research barometer 2015].

Kwiek, M. (2015). The European research elite: a cross-national study of highly productive academics in 11 countries. Higher Education , 71, 379–397.

Kyvik, S., & Aksnes, D. W. (2015). Explaining the increase in publication productivity among academic staff: A generational perspective. Studies in Higher Education, 40 , 1438–1453.

Larivière, V. (2012). On the shoulders of students? The contribution of PhD students to the advancement of knowledge. Scientometrics, 90 , 463–481.

Larivière, V., & Costas, R. (2016). How many is too many? On the relationship between research productivity and impact. PLoS One, 11 , e0162709.

Levin, S. G., & Stephan, P. E. (1991). Research productivity over the life cycle: Evidence for academic scientists. The American Economic Review, 81 , 114–132.

Lorenz, C. (2012). If you’re so smart, why are you under surveillance? Universities, neoliberalism, and new public management. Critical Inquiry, 38 , 599–629.

Ma, A., Mondragón, R. J., & Latora, V. (2015). Anatomy of funded research in science. Proceedings of the National Academy of Sciences of the United States of America, 112 , 14760–14765.

Mankiw, N. G., Romer, D., & Weil, D. N. (1992). A contribution to the empirics of economic growth. The Quarterly Journal of Economics, 107 , 407–437.

Manuelli, R. E., & Seshadri, A. (2014). Human capital and the wealth of nations. American Economic Review, 104 , 2736–2762.

Nechyba, T. (2011). Microeconomics: An intuitive approach with calculus . Stamford: Cengage Learning.

OECD (2014a). Education at a glance 2014 . Paris.

OECD. (2014b). Main science and technology indicators . Paris: OECD.

Petersen, A. M., Fortunato, S., Pan, R. K., Kaski, K., Penner, O., Rungi, A., Riccaboni, M., Stanley, H. E., & Pammolli, F. (2014). Reputation and impact in academic careers. Proceedings of the National Academy of Sciences, 111 , 15316–15321.

Piro, F. N., Aksnes, D. W., & Rørstad, K. (2013). A macro analysis of productivity differences across fields: Challenges in the measurement of scientific publishing. Journal of the American Society for Information Science and Technology, 64 , 307–320.

Rørstad, K., & Aksnes, D. W. (2015). Publication rate expressed by age, gender and academic position—a large-scale analysis of Norwegian academic staff. Journal of Informetrics, 9 , 317–333.

Sandström, U., & van den Besselaar, P. (2016). Quantity and/or quality? The importance of publishing many papers. PLoS One, 11 , e0166149.

Schofer, E., & Meyer, J. W. (2005). The worldwide expansion of higher education in the twentieth century. American Sociological Review, 70 , 898–920.

Schubert, T. (2009). Empirical observations on new public management to increase efficiency in public research—Boon or bane? Research Policy, 38 , 1225–1234.

Stankiewicz, R. (1979). The size and age of Swedish academic research groups and their scientific performance. In F. M. Andrews (Ed.), Scientific productivity. The effectiveness of research groups in six countries . Cambridge: Cambridge University Press.

Stokey, N. L. (1991). Human capital, product quality, and growth. The Quarterly Journal of Economics, 106 , 587–616.

Stremersch, S., Verniers, I., & Verhoef, P. C. (2007). The quest for citations: Drivers of article impact. Journal of Marketing, 71 , 171–193.

Swidler, S., & Goldreyer, E. (1998). The value of a finance journal publication. Journal of Finance, 53 , 351–363.

Sydsæther, K., & Hammond, P. (2006). Essential mathematics for economic analysis . Upper Saddle River: Prentice-Hall.

Tolofari, S. (2005). New public management and education. Policy Futures in Education, 3 , 75–89.

Tremblay, K., Lalancette, D. & Roseveare, D. (2012). Assessment of higher education learning outcomes. Feasibility study report. Volume 1 - design and implementation. OECD. 270 pp.

von Hippel, T., & von Hippel, C. (2015). To apply or not to apply: A survey analysis of grant writing costs and benefits. PLoS One, 10 .

Ware, M. (2006). Scientific publishing in transition: An overview of current developments . Mark Ware Consulting Ltd: Bristol.


Author information

Authors and Affiliations

Nord University Business School, 8049, Bodø, Norway

Finn Jørgensen & Thor-Erik Sandberg Hanssen


Corresponding author

Correspondence to Finn Jørgensen .


About this article

Jørgensen, F., & Hanssen, T.-E.S. Research incentives and research output. High Educ 76, 1029–1049 (2018). https://doi.org/10.1007/s10734-018-0238-1

Download citation

Published : 09 March 2018

Issue Date : December 2018

DOI : https://doi.org/10.1007/s10734-018-0238-1


  • Research effort
  • Time allocation
  • Research merits
  • Reward schemes

How to formulate strong outputs

  • Post author By Thomas Winderl
  • Post date September 8, 2020


Outputs are arguably not the most important level of the results chain. It is outcomes that should be the focus of a good plan; ultimately, that's what counts.

However, outputs still matter.

Simply put, outputs refer to changes in skills or abilities, or the availability of new products and services. In plain lingo: outputs are what we plan to do to achieve a result.

To be a bit more precise: outputs usually mean that a group of people or an organization has improved capacities, abilities, skills, knowledge, systems, or policies, or that something is built, created, or repaired as a direct result of the support provided. That's a definition we can work with.

Language is important

When describing what you do, focus on the  change , not the  process . Language matters.

Don’t say: ‘ Local organisations will support young women and men in becoming community leaders .’ This emphasises the process rather than the change.

Instead, emphasise what will be different as a result of your support. Say: ‘Young women and men have the skills and motivation to be community leaders’.

Make it time-bound

An organization’s support is typically not open-ended. You usually expect to wrap up what you do at a certain time. Emphasise that your activities are carried out within a certain time frame, so it’s always helpful to include a date in the formulation, for example ‘By January 2019, …’.

A formula for describing what you do

To ensure that you accurately describe what you do, use the following formula:

[Figure: formula for formulating an output statement]

Want to learn more about how to plan for results? Check out our detailed video course on Practical Results Based Management on Udemy.

  • Open access
  • Published: 06 July 2024

Strengthening a culture of research dissemination: A narrative report of research day at King Faisal Hospital Rwanda, a tertiary-level teaching hospital in Rwanda

  • Kara L. Neil 1 ,
  • Richard Nduwayezu 1 ,
  • Belise S. Uwurukundo 1 ,
  • Damas Dukundane 1 ,
  • Ruth Mbabazi 1 &
  • Gaston Nyirigira 1  

BMC Medical Education, volume 24, Article number: 732 (2024)

There are significant gaps in research output and authorship in low- and middle-income countries. Research dissemination events have the potential to help bridge this gap through knowledge transfer, institutional collaboration, and stakeholder engagement. These events may also have an impact on both clinical service delivery and policy development. King Faisal Hospital Rwanda (KFH) is a tertiary-level teaching hospital located in Kigali, Rwanda. To strengthen its research dissemination, KFH conducted an inaugural Research Day (RD) to disseminate its research activities, recognize staff and student researchers at KFH, define a research agenda for the hospital, and promote a culture of research both at KFH and in Rwanda.

RD was coordinated by an interdisciplinary committee of clinical and non-clinical staff at KFH. Researchers were encouraged to disseminate their research across all disciplines. Abstracts were blind reviewed using a weighted rubric and ranked by overall score. Top researchers were also awarded and recognized for their work, and equity and inclusion was at the forefront of RD programming.

RD had over 100 attendees from KFH and other public, private, and academic institutions. Forty-seven abstracts were submitted from the call for abstracts, with the highest proportion studying cancer (17.02%) and sexual and reproductive health (10.64%). Thirty-seven researchers submitted abstracts, and most of the principal investigators were medical doctors (35.14%), allied health professionals (27.03%), and nurses and midwives (16.22%). Furthermore, 30% of principal investigators were female, with the highest proportion of them being nurses and midwives (36.36%).

RD is an effective way to disseminate research in a hospital setting. RD has the potential to strengthen the institution’s research agenda, engage the community in ongoing projects, and provide content-area support to researchers. Equity and inclusion should be at the forefront of research dissemination, including gender equity, authorship representation, and the inclusion of interdisciplinary health professionals. Stakeholder engagement can also be utilized to strengthen institutional research collaboration for greater impact.


Significant gaps in research output and author representation exist based on geographic region, particularly in low- and middle-income countries (LMICs). For example, one study conducted by The Lancet Global Health found that while 92% of articles target interventions in LMICs, only 35% of authors are actually from or work in those LMICs [ 1 ]. The Initiative to Strengthen Health Research Capacity in Africa identified nine key requirements for strengthening health research on the continent, including institutional support, providing research funding, promoting networks and research dissemination, and providing tools for conducting research [ 2 ]. In line with this, research dissemination events can be utilized to strengthen the research culture, institutional collaboration and knowledge transfer, and to engage stakeholders. Alongside knowledge transfer, these events can also impact both clinical service delivery and policy development [ 3 ]. This is further corroborated by an article on establishing a clinical research network in Rwanda, highlighting the importance of strengthening research partnerships and dissemination opportunities to mitigate the disease burden in Rwanda and the region [ 4 ].

King Faisal Hospital Rwanda (KFH) is a tertiary-level teaching hospital in Kigali, Rwanda. As a teaching hospital, KFH hosts hundreds of health professional students, including medical students, residents, fellows, allied health professionals, and nurses. Furthermore, KFH hosts some of Rwanda’s most highly specialized medical services and their respective subspecialty fellow trainees, including a catheterization laboratory, cardiothoracic surgery, and renal transplant surgery. While KFH previously had a focal person for education and research activities, there was no full-time team in place to manage this. Therefore, to mitigate this, KFH established a Division of Education, Training, and Research in 2021 to oversee the ongoing teaching and learning activities, including research capacity building and output. KFH also has its own Institutional Review Board (IRB) to review and approve research projects conducted at the hospital, and to monitor the overall uptake in research activity. Alongside the highly specialized services and training hosted at KFH, the hospital is putting significant effort into strengthening its research capacity and culture to ensure that evidence-based practice is at the forefront of strengthening these clinical services.

Research activity at KFH is also increasing; Fig. 1 outlines the trend of KFH IRB submissions from 2009 to 2023. From 2009 to 2020, research activity was inconsistent, without a significant increase. Since 2020, however, there has been a significant upward trend, most likely attributable to the emphasis placed on evidence-based research and practice by the hospital's leadership over the past several years. Even so, the numbers are still low, and further interventions are needed to improve this activity.

[Figure 1: Trend of KFH IRB submissions]

Research institutions and teaching hospitals are mandated to provide clinical services, train health professionals, and conduct research. However, researchers in these institutions may not have institutionalized means of sharing their research findings with the relevant departments and leadership upon completing their research. This can result in findings that are unknown or unimplemented in the institutions where the research was conducted. It can also lead to the duplication of efforts, especially when research findings have not been locally disseminated or published. In response, dedicated dissemination events not only support clinical researchers in sharing their findings, but also help institutions conduct research that is more meaningful in relation to institutional or national priorities and that builds on previously conducted studies.

The aim of this narrative report is to document the development and implementation of KFH’s inaugural Research Day (RD), which aimed to disseminate its research activities, recognize staff and student researchers at KFH, define a research agenda for the hospital, and promote a culture of research at KFH and more broadly in Rwanda. Furthermore, based on the output of RD, this report proposes recommendations to further strengthen research capacity and culture at KFH or through similar RD events going forward.

RD was coordinated by an interdisciplinary clinical and non-clinical committee at KFH. Researchers were encouraged to submit and disseminate their research across all disciplines at KFH. The committee also considered ways to award and recognize researchers for their work, and ensure that the program and other logistics promoted equity and inclusion. Additionally, the committee oversaw the call for abstracts, program and participant inclusion, and the selection and awards process.

Call for abstracts

The Directorate of Research disseminated a call for abstracts for researchers to submit their projects for poster and oral presentations. Eligible researchers included those who either work or study at KFH, or who conducted research at KFH. To encourage researchers at all stages of their study to participate, eligible abstracts included already published studies and those still in progress.

Program and participant inclusion

To promote the inclusion of KFH staff and students in the event, the organizing committee considered the best venue for RD. As a result, RD was hosted in the KFH inpatient reception area instead of being hosted offsite, with one area for the poster display and another for the main event program. This allowed KFH staff and students to come view the poster display during their working hours without it conflicting with their regular clinical schedules. This also aimed to increase staff awareness towards the ongoing research activities at the hospital and encourage them to also get involved in research going forward.

The program for the day had several components. It commenced with a poster display, where representatives from each research team were stationed with their respective posters to answer questions and provide more information on their studies. The main program included opening remarks from the KFH Chair of the Board of Directors, a keynote speech on the importance of research dissemination from the Head of Health Workforce Development at the Ministry of Health, and an overview of the state of research at KFH. The main program concluded with oral presentations and the award ceremony.

Selection and awards

Before the event, an interdisciplinary selection committee composed of external reviewers blind-reviewed each abstract. Each abstract was evaluated using a weighted rubric, which was developed based on existing literature and the main components of an abstract. Specifically, the rubric considered 7 criteria, including clarity and organization; relevance and significance of the study; originality and innovation; methods and approach; results and findings; conclusions and implications; and grammar and writing. Within these criteria, the rubric also evaluated the overall quality of the study, adherence with ethical and legal requirements, and the validity of the findings against the methods and study design. The blind review was conducted individually by external reviewers to avoid potential biases, and reviewers were assigned to abstracts based on their expertise and the topics of the abstracts. The individual scores were then compiled, with an average taken for each abstract. The abstracts were then ranked from the highest to the lowest scores. The selection committee used these results to recommend oral and poster presenters, which included 40 posters and 7 oral presentations. In general, all abstracts meeting the minimum quality criteria were selected for poster displays. This was done to encourage researchers to disseminate their progress and increase the visibility of their work more inclusively. However, only completed studies were eligible for oral presentations.
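As an illustration of the compile-average-rank step, here is a minimal Python sketch. The seven criteria mirror the rubric components named above, but the weights and scores are invented placeholders, not KFH's actual rubric:

```python
# Illustrative sketch: average each abstract's weighted reviewer scores,
# then rank abstracts from highest to lowest average. Weights are invented.
WEIGHTS = {
    "clarity": 0.15, "relevance": 0.20, "originality": 0.15,
    "methods": 0.20, "results": 0.15, "conclusions": 0.10, "writing": 0.05,
}

def weighted_score(ratings):
    """ratings: one reviewer's dict of criterion -> score (e.g., 1 to 5)."""
    return sum(WEIGHTS[criterion] * score for criterion, score in ratings.items())

def rank_abstracts(reviews):
    """reviews: dict mapping abstract id -> list of per-reviewer rating dicts."""
    averages = {
        abstract_id: sum(weighted_score(r) for r in rs) / len(rs)
        for abstract_id, rs in reviews.items()
    }
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)
```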

During the event, three additional awards committees with external reviewers were established to evaluate the posters and oral presentations for one of three awards: best oral presentation, best poster presentation, and most impactful study. These committees used rubrics based on the main components of the abstract, along with overall impact and presentation. Committee members reviewed the projects throughout RD, and the results were compiled and presented during the awards ceremony at the end of the day.

Over 100 attendees participated in the main program of RD, and additional participants visited the poster display throughout the day. For the main program, attendees included key stakeholders and senior researchers from Rwanda and the region, including those with the ability to positively influence the research environment and mentor junior researchers. Specifically, participants included KFH leadership, professional councils (Rwanda Medical and Dental Council), government institutions (Ministry of Health and Rwanda Biomedical Centre), health sciences schools (University of Rwanda and University of Global Health Equity), and teaching hospitals (University Teaching Hospital of Kigali, University Teaching Hospital of Butare, and Rwanda Military Hospital), among others.

Abstract submissions

Forty-seven abstracts were submitted in response to the call, as outlined in Table 1. The highest proportion of abstracts focused on cancer (17.02%), primarily colorectal and breast cancer. Sexual and reproductive health was the second most represented content area, making up 10.64% of submissions, followed by anesthesia and pain management (8.51%) and data science/IT (8.51%).

Table 1 outlines the submitted abstracts by content area.
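
For transparency about the arithmetic, the reported shares follow directly from the raw counts (for example, 8 of 47 abstracts is 17.02%). Below is a minimal sketch of that tabulation; the counts are back-calculated from the reported percentages, and the remaining areas are grouped as "other" for brevity.

```python
from collections import Counter

# Content areas of the 47 submitted abstracts; counts back-calculated from
# the reported percentages (8/47 = 17.02%, 5/47 = 10.64%, 4/47 = 8.51%).
counts = Counter({
    "cancer": 8,
    "sexual and reproductive health": 5,
    "anesthesia and pain management": 4,
    "data science/IT": 4,
    "other": 26,  # all remaining content areas, grouped for brevity
})

total = sum(counts.values())  # 47
for area, n in counts.most_common():
    print(f"{area}: {n}/{total} = {100 * n / total:.2f}%")
```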

Researcher profile

Eligible researchers included KFH staff and students, as well as external researchers with projects conducted at KFH. This ensured that all disseminated research either featured KFH staff and students or was conducted at the hospital. Overall, 37 researchers submitted 47 abstracts. Principal Investigators (PIs) were primarily medical doctors (35.14%), allied health professionals (27.03%), and nurses and midwives (16.22%). Among medical doctors, anesthesia and critical care professionals represented the highest proportion of PIs (38.4%); among allied health professionals, imaging services represented the highest proportion (40%). Additionally, 30% of PIs were female, with most of them being nurses or midwives (36.36%). Females comprised at least half of PIs in administration, nursing and midwifery, and data science/IT. Table 2 outlines the PIs who submitted abstracts by department and sex.

Selection process and awards

The selection committee selected seven oral presentations. Table 3 outlines the oral presentations that were selected, along with those awarded best oral presentation and most impactful project. Additionally, the best poster presentation award went to a midwife staff member who presented on strengthening family-centered maternity care at KFH.

Because this was the first event of its kind at KFH, there were a few challenges in organizing and hosting it. When the organizing committee started planning, there was a general lack of awareness of the event's importance; some staff questioned its benefit and why staff should be released from their clinical activities to attend. Additionally, there were few abstract submissions leading up to the submission deadline. To mitigate these issues, the committee intentionally engaged with hospital leadership, departments, and individuals to strengthen buy-in and participation, including through individual meetings with department leadership to explain RD's importance. Additionally, the RD committee membership was expanded to ensure better representation across departments and disciplines. Finally, the committee extended its submission deadline and approached researchers individually to encourage them to submit abstracts, regardless of their completion status. Because this was the first RD at KFH, engaging staff individually and at the team level helped build buy-in across all levels of the institution and ultimately increased participation in the event.

RD demonstrated the critical need to further strengthen research dissemination activities at KFH. The long-term aim at KFH is to promote knowledge transfer and translation through research, and research dissemination was highlighted as an initial step towards this: generating engagement and participation in ongoing activities and, hopefully, encouraging junior or inactive researchers to start engaging. Specifically, RD highlighted the need to define a research agenda; to promote equity and inclusion both in research activity and in dissemination events; and to ensure multi-institutional stakeholder collaboration in dissemination activities.

Defining a research agenda

The abstract submissions revealed common research areas, including internal medicine (45%), obstetrics and gynecology (14%), and pediatrics (12%). However, they also revealed the need to streamline dissemination efforts through a defined hospital research agenda, which would support knowledge translation in these specialties and motivate further initiatives to strengthen research within them. The research agenda itself may be driven by the research interests of the departments and researchers seen at RD, and these departmental interests can then be narrowed down to specific specialties. For example, among the projects conducted in internal medicine, the research mainly focused on cancer, infectious diseases, and cardiovascular diseases. Integrating department- or specialty-driven research priorities requires a deeper investigation into why these research areas were more frequently represented.

Additionally, many of the research projects had simple study designs, which may reflect limited capacity to conduct more complex projects, likely due to constraints on funding, skills, or time. Currently, there is no policy that defines protected time for clinicians to conduct research. To implement a research agenda and strengthen the research culture, there is a need to mobilize financial and non-financial resources that will enable the institution and its researchers to conduct impactful and complex research. Ensuring the equitable distribution of research support and resources across services and departments, alongside a defined research agenda, is critical.

Promoting equity and inclusion

Healthcare professionals exhibit a wide range of characteristics, including diverse social backgrounds, gender, experiences, and disability statuses [5]. As a result, healthcare institutions should adopt an inclusive research agenda that fosters cognitive diversity and encourages the sharing of innovative ideas. Such an approach supports the development of a culturally competent workforce and helps reduce research biases [6]. Additionally, a culturally competent environment enhances individual motivation, leading to improved team performance [7]. All healthcare providers, irrespective of their roles, contribute unique ideas and problem-solving techniques, often referred to as collective intelligence, which is essential for achieving comprehensive and unbiased research outcomes [8]. A diverse healthcare workforce engaged in research therefore helps minimize knowledge gaps. The multidisciplinary approach in healthcare has consistently been associated with high-quality care, and it can reasonably be expected to translate into high-quality research as well.

Additionally, gender equity in authorship aims to ensure equal opportunities for individuals of all genders to contribute to academic publications, a critical factor in professional success [9, 10]. At KFH's RD, individuals of all genders were welcomed and given equal submission opportunities. This is evident in our RD researcher profile, where female PIs comprised 50% of administrators and 67% of nurses and midwives. That 70% of PIs were male overall was likely influenced by the existing gender gap among medical doctors, further emphasizing the need to empower and engage women in medicine and in academic publications. Globally, progress in women's empowerment is reflected in the increasing number of women pursuing careers in health and academia [11]. Statistics show a significant rise in female authors in major journals, from 6–10% in the 1970s to 54% for first authorship and 46% for last authorship in 2019 [12]. This progress serves as motivation for KFH, where gaps in female participation remain, highlighting the need for more intentional efforts to promote equity and inclusion in research activity and dissemination platforms.

Stakeholder collaboration and engagement

RD revealed the importance of stakeholder collaboration in strengthening research dissemination and the overall research culture of health science institutions. A lesson learned through RD is the need to streamline how research is conducted and to engage different stakeholders on this journey. To improve clinical outcomes, research collaboration between academic institutions and hospitals must be strengthened: evidence-based clinical decisions ultimately result in higher quality healthcare by informing the development of policies and strategies. As these collective research endeavors advance, it is crucial to have a comprehensive health research policy alongside this engagement. Such a policy should not only serve as a guiding framework for health research within participating institutions, but also ensure that the research addresses the specific needs of their communities. With a well-defined research agenda in place, students and researchers affiliated with academic institutions can contribute to fulfilling the mission of hospitals, and vice versa, with the policy serving as the guiding principle for implementation.

While other institutions were invited to the KFH RD, there is still a need for more intentional efforts towards institutional research collaboration and joint dissemination. This can be achieved through shared research dissemination opportunities and the integration of professional societies in Rwanda, ensuring that institutions and health professions are equitably represented in these activities. Furthermore, technology can enable more collaboration and make dissemination activities accessible to a wider audience outside the hospital.

Implications for policy and practice

RD also highlighted implications for policy and practice at KFH and at teaching hospitals in general. In addition to the need to define an institutional research agenda, the gaps in authorship and topic-area representation across hospital specialties suggest the need to integrate research into staff performance appraisal and promotion systems, to institutionally motivate staff to participate. In doing so, participation would better reflect the full range of staff and disciplines at the hospital. Furthermore, although over 100 internal and external attendees participated, and the event was hosted in the hospital free of charge to promote engagement, this still represents only a small proportion of a hospital with over 800 staff. This suggests that KFH could implement other policies or practices to motivate or require staff to participate in research-related activities. Finally, informal feedback from participants suggested that RD is an important step towards knowledge translation, but that additional efforts are needed alongside the event, especially towards building staff research capacity, providing resources to conduct research, and supporting researchers with in-progress projects through to completion. Going forward, KFH will implement these recommendations and evaluate their impact.

RD provides an important platform for teaching hospitals to strengthen their research dissemination and overall research culture. RD is also an opportunity to implement the hospital's research agenda and drive forward evidence-based practice in identified research areas. In LMICs, where there is already a significant gap in research output and authorship representation, such events give researchers an opportunity to present and get feedback on their progress and motivate them to engage further in research activities. To sustain momentum and address the challenges encountered, teaching hospitals should consider RD as just one component of a broader research dissemination plan, with the wider aim of knowledge translation. Ensuring that RD is not hosted in isolation from other initiatives also strengthens the institutional, team-level, and individual buy-in needed for RD engagement. Furthermore, when designing RD, emphasis should be given to promoting equity and inclusion in authorship, including gender, discipline, and professional experience levels. Stakeholder engagement should also be considered, as collaboration with other institutions can strengthen institutional research collaboration, maximizing the impact of research findings and fostering a culture of collaboration and knowledge dissemination. Going forward, KFH will continue to strengthen its research culture by leveraging RD as an initial step towards knowledge translation and by implementing a defined research agenda geared towards strengthening clinical service delivery and patient outcomes.

Data availability

The data analyzed during this study are available from the corresponding author upon reasonable request.

Abbreviations

IRB: Institutional Review Board

KFH: King Faisal Hospital Rwanda

LMIC: Low- and middle-income country

PI: Principal Investigator

RD: Research Day

References

1. Bowsher G, Papamichail A, El Achi N, Ekzayez A, Roberts B, Sullivan R, et al. A narrative review of health research capacity strengthening in low and middle-income countries: lessons for conflict-affected areas. Globalization Health. 2019;15(1):23.

2. Whitworth JAGD, Kokwaro GP, Kinyanjui SP, Snewin VAP, Tanner MP, Walport MF, et al. Strengthening capacity for health research in Africa. Lancet. 2008;372(9649):1590–3.

3. Mc Sween-Cadieux E, Dagenais C, Somé P-A, Ridde V. Research dissemination workshops: observations and implications based on an experience in Burkina Faso. Health Res Policy Syst. 2017;15(1):43.

4. Musabyiman JP, Musanabaganwa C, Dushimiyimana V, Namabajimana JP, Karame P, Nshimiyimana L, et al. The Rwanda Clinical Research Network: a model for mixed south-south and north-south collaborations for clinical research capacity development. BMJ Global Health. 2019;4(Suppl 3):A11–2.

5. Stanford FC. The importance of diversity and inclusion in the healthcare workforce. J Natl Med Assoc. 2020;112(3):247–9.

6. Kayingo G, Bradley-Guidry C, Burwell N, Suzuki S, Dorough R, Bester V. Assessing and benchmarking equity, diversity, and inclusion in healthcare professions. JAAPA. 2022;35(11).

7. Harrison D, Klein K. What's the difference? Diversity constructs as separation, variety, or disparity in organizations. Acad Manage Rev. 2007;32.

8. Aggarwal I, Woolley AW, Chabris CF, Malone TW. The impact of cognitive style diversity on implicit learning in teams. Front Psychol. 2019;10.

9. Chatterjee P, Werner RM. Gender disparity in citations in high-impact journal articles. JAMA Netw Open. 2021;4(7):e2114509.

10. West JD, Jacquet J, King MM, Correll SJ, Bergstrom CT. The role of gender in scholarly authorship. PLoS ONE. 2013;8(7):e66212.

11. Rexrode KM. The gender gap in first authorship of research papers. BMJ. 2016;352:i1130.

12. Madden C, O'Malley R, O'Connor P, O'Dowd E, Byrne D, Lydon S. Gender in authorship and editorship in medical education journals: a bibliometric review. Med Educ. 2021;55(6):678–88.

Acknowledgements

We would like to thank the leadership of King Faisal Hospital Rwanda for their significant support towards strengthening the Directorate of Research and the overall research culture at the hospital.

Funding

This paper received no funding.

Author information

Authors and Affiliations

King Faisal Hospital Rwanda, P.O Box 2534, Kigali, Rwanda

Kara L. Neil, Richard Nduwayezu, Belise S. Uwurukundo, Damas Dukundane, Ruth Mbabazi & Gaston Nyirigira

Contributions

KN and GN wrote the background, methods, and findings. DD, BU, RN, and RM wrote the discussion and conclusion sections. All authors reviewed and edited the final manuscript.

Corresponding author

Correspondence to Kara L. Neil.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Authors’ information

Additional information, publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Neil, K.L., Nduwayezu, R., Uwurukundo, B.S. et al. Strengthening a culture of research dissemination: A narrative report of research day at King Faisal Hospital Rwanda, a tertiary-level teaching hospital in Rwanda. BMC Med Educ 24, 732 (2024). https://doi.org/10.1186/s12909-024-05736-0

Received: 11 November 2023

Accepted: 02 July 2024

Published: 06 July 2024

DOI: https://doi.org/10.1186/s12909-024-05736-0


Keywords

  • Research dissemination
  • Teaching hospital


Figure 1 | Essential inputs and outputs, outcomes and impact of the research process used to explore the 3e's (effectiveness, efficiency and equity).

Future of learning & research outputs

CLOSING THE IMPACT GAP: REPORT

Here we look to the future of research outputs, contemplating how they can be improved to reach all users. We reveal the future trends that students and academics believe will make research more effective and consider the role of technology in making research more usable for non-academics. 

Turning attention to specific groups, we share the top content forms that students and academics believe will make research more accessible to learners and effective for decision makers.  

On this page

  • The role of technology
  • How might technology improve research accessibility
  • Regional views about the role of technology
  • Technology versus tradition
  • Four groups of learners who could benefit from technology
  • Future trends to make research more effective
  • Making research more usable for the next generation
  • How best to present research to decision makers

In the report

  • Introduction & key takeaways
  • Analysis: Is academia stuck in a rut?
  • Analysis: Are research papers fit for purpose?
  • Analysis: Barriers to innovation
  • Analysis: Future of learning & research outputs
  • Conclusions & future trends

Universities around the world are using technology to enhance their learning environments. Technology Enhanced Learning (TEL), usually delivered via an online platform, is increasingly used in higher education, and has become central to the delivery of remote learning during the COVID-19 pandemic.

TEL can improve engagement and accessibility and it helps students to pace their learning and understand complex information. Research has shown that universities can increase the benefits of TEL by using it more frequently to build connections between lecturers, fellow students and external groups.

Considering the benefits of TEL, we asked students and academics for their thoughts on using technology to improve the usability of research in general, as well as to enhance learning within higher education.

Response Academics % Students %
Technology and traditional methods should play an equal role in research / learning 46 27
Technology should play a large role in research / learning, along with some focus on traditional methods 26 36
Technology should play a very large role in research / learning, with very little focus on traditional methods 19 29

Both academics and students are keen for technology to play a role in improving both research accessibility and learning. However, there are mixed views over how much technology should be used versus traditional methods. Students generally feel more strongly than academics about technology playing a large or a very large role in research / learning.

There are mixed opinions on the role of technology at the regional level. In all but one region (the Middle East and Africa), around 50% of academics agree that technology and traditional methods should play an equal role in research. Students' views are more widely spread, with the largest share agreeing that technology should play a large role in learning alongside some focus on traditional methods, rising to 47% of students in China.

Academics most in favour of change are those in the Middle East and Africa (32%) and India (26%), who want to move away from traditional methods and for technology to play a very large role in research. This was echoed most strongly by students in Egypt (46%) and the USA (53%) who want the same for learning. Meanwhile, academics in North America (7%) and Latin America (8%) are least in favour of technology playing a very large role in research.

In their verbatim responses, students repeatedly call for a break with traditional learning approaches and for technology to play a greater role. One student in Egypt urges universities to 'stay away from the old ways and methods' and make 'more use of modern technology to keep pace with the times'.

Another student in the UK sees traditional teaching approaches as merely a tick box exercise he must endure to earn a degree: '[…] traditional teaching facilities have just stuck to the norm and ignored the fact that a majority don’t get on well with the traditional and are just putting up with it to get a piece of paper […] University is not a space to learn, it is a place for lecturers to "teach you how to learn" which is the biggest cop out. '

Most students want academics to use technology and new content forms such as videos to support learning at university.

' Improve media communication abstract themes. Each YouTuber can do something like this nowadays better than professors whose task it should be .' Postgraduate student in Germany

' They can provide us animated notes and give us some creativity assignments that are fun, and we gain knowledge from it at the same time too .' Undergraduate student in India

' Make study tip videos and helpful articles and content .' Undergraduate student in UK

Academics generally agree that videos and other tools can help make learning more engaging and digestible.

' Five-minute video summary. It can be made for students, juniors and seniors or one for all .' Academic in Egypt

' Learning should be fun, and interactive. There should be a degree of freedom. If students choose not to do exams, that should be fine .' Academic in Malaysia

' Learning Bytes – 1/2-minute videos, 5-minute videos depending on the content .' Academic in India

' Flexibility is key, accessibility of course. Variety of information formats – text, but also visual audio and AV. ' Academic in Australia

A minority of students and academics are less confident about the role of technology in opening learning opportunities. One UK academic raises concerns over the idea that technology automatically offers quality learning opportunities:

' Learning is fundamentally a social process ', explains the academic. ' Relationships mediated by technology are not ideal though they might be the best available in certain, limited circumstances (e.g. COVID). The evidence is that social media often does not support connection and mental health .' 

Likewise, a UK student kicks back at the notion of technology leading to quality learning experience, describing first-hand how remote learning has muted opportunities. ' In the current term learning is extremely poor, online lectures just do not have the same captivating power as in person lectures ', they note.

Most academics agree that technology could help the following groups of people gain access to learning opportunities.

Response Academics %
Remote learners: Academics believe technology could benefit those wanting to study remotely from any location 87
Professionals / carers: Academics think technology could help those with work or caring responsibilities 86
Returners to study / school leavers: Academics see technology being useful for people returning to study later in their careers as well as students starting straight from school 82
Career changers: Technology helps professionals retrain when switching careers 79

Future trends to make research more effective for learning outcomes

How important do you think the following trends are in improving learning outcomes from academic research?

Response Academics % Students %
Real world 64 44
Project-based learning 63 36
Lecturers' role 62 44
Collaborative learning 52 32
Visualisation 51 43
Personal learning environment 46 40
Personalisation 43 37
Mobile learning 38 37
Flipped classroom 38 29
Social media 27 28
Game-based learning 22 27
  • Real world : virtual experiences, simulations, other teachers/experts, real-world problems, and workplaces will bring the outside world into learning.
  • Project-based learning : students work on challenges and problems. Learning usually goes beyond traditional subjects.
  • Lecturers' role : lecturers bring their knowledge and experience to the learning environment.
  • Collaborative learning : less working alone and more time spent on group work.
  • Visualisation : visual devices bring content to life.
  • Personal learning environment : the online learning environment you engage with is tailored to your personal needs, learning style and personal interests.
  • Personalisation : learning that’s more personalised, driven by rich data and guided by learning analytics, with advice on which approaches are most efficient for which students.
  • Mobile learning : we get access to knowledge through smartphones and tablets, sometimes using virtual learning environments. It is learning anytime, anywhere.
  • Flipped classroom : students master basic concepts of topics at home. Time spent in classroom is used to reflect, discuss, and develop topics.
  • Social media : learners share ideas and feelings.
  • Game-based learning : learning is mixed with games or with game mechanisms.
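
The rankings discussed in the following paragraphs can be read straight off the table above. As a quick illustration (our sketch, with the percentages hand-copied from the table; not part of the report), sorting the shares per group reproduces the orderings described below:

```python
# Share of each group rating a trend "very important" for improving
# learning outcomes from academic research (copied from the table above).
trends = {
    "Real world": {"academics": 64, "students": 44},
    "Project-based learning": {"academics": 63, "students": 36},
    "Lecturers' role": {"academics": 62, "students": 44},
    "Collaborative learning": {"academics": 52, "students": 32},
    "Visualisation": {"academics": 51, "students": 43},
    "Personal learning environment": {"academics": 46, "students": 40},
    "Personalisation": {"academics": 43, "students": 37},
    "Mobile learning": {"academics": 38, "students": 37},
    "Flipped classroom": {"academics": 38, "students": 29},
    "Social media": {"academics": 27, "students": 28},
    "Game-based learning": {"academics": 22, "students": 27},
}

for group in ("academics", "students"):
    # Sort trends by this group's percentage, highest first.
    ranked = sorted(trends, key=lambda t: trends[t][group], reverse=True)
    top = ", ".join(f"{t} ({trends[t][group]}%)" for t in ranked[:3])
    print(f"Top trends for {group}: {top}")
```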

Academics and students agree that a greater focus on the real world will make the biggest improvement in learning outcomes from academic research, through virtual experiences, simulations, solving real-world problems, and bringing the outside world into learning. For students, though, the lecturers' role is equally important in making research more effective for learners. Project-based learning is the next most popular trend for academics, while for students it is visualisation.

Both academics and students want learning to focus more on the real world, bringing the outside world into the classroom through virtual experiences, simulations, other teachers/experts, real-world problems, and workplaces. Real world is the trend most favoured by academics in Latin America (71%) and the Middle East and Africa (69%), as well as students in India (58%). It is least popular among students in Japan (16%), although students in Japan rated all the trends well below average.

Project-based learning, where students work on challenges and problems (often on non-traditional subjects), is the second most favourable trend for academics (63%), rising to 82% for those in India. This trend is closely followed by the lecturers’ role, a choice for 62% of academics overall, rising to 72% for those in India and 71% in the Middle East and Africa. The lecturers’ role is the second most popular choice among students, with 44% choosing this option.

Visualisation, where visual devices bring content to life in far more interesting and dynamic ways, is the third most significant trend for students, with 43% rating it very important, rising to 56% for students in Egypt. While the share was higher among academics, at 51%, other trends rank above it for that group.

Both groups place less value on game-based learning, with 27% of students and 22% of academics selecting this trend. Game-based learning is least popular among students in Japan (12%) and academics in Australasia (11%).

See the regional breakdowns of participants in this report for students and for academics.

We know that students want academics to use more videos and animations to help with their learning, and academics are supportive of this move, with 64% choosing videos, podcasts, and infographics as the number one way they could more effectively present research to students. Academics are keen for measures to improve research accessibility for students to go further: providing article summaries (59%), making research open access (59%), and presenting research in more accessible language (45%). Academics are least keen on sharing policy makers' perspectives, at 21%.

Sixty-four percent of academics believe that content forms such as videos, podcasts and infographics could help when presenting research to students.

Thirty-two percent of students would like to see video and animation used for learning.

' Paywalls are a massive issue. There are concerns about readability and digestibility – but these all pale compared to access. People can’t read papers, because they’re priced at horrific costs of which little to none goes back to the actual author/s .' Undergraduate student in Australia

' I often use more easily digestible videos – with interesting graphics and narrators for teaching – especially at the undergrad level .' Academic in the USA

' I would select game-based learning, videos and other edtechs, as they would allow students to learn in a more attractive way and, at the same time, make learning accessible to students in remote places. However, this will only be possible if internet is available to everyone .' Academic in Brazil

How do you think research could be more effectively presented to decision makers outside academia? How do you think research could be more effectively presented to the next generation of students?

Response Decision makers % Next generation of students %
Using different forms of content (e.g. videos, podcasts, infographics) to highlight the research 52 64
Article summaries, such as lay summaries or structured abstracts which detail the key findings of research 57 59
Making research open access 52 59
Making research language more accessible; writing in plain English 44 45
Co-creating research with academics and non-academics 54 42
Making research accessible in different languages 31 33
Using impact statements for research 27 27
Sharing policy makers perspectives 31 21
Other 5 4
None of these 1 2

Academics believe that article summaries, such as lay summaries or structured abstracts, are the best way to improve how research is presented to decision makers outside academia, with 57% choosing this option. Article summaries are also within the top three answers for all regions, with 77% of academics in North America and 75% in Australasia choosing this option.

In addition to the lay summary, most academics agree they could more effectively present research to non-academics through co-creation between academics and practitioners (54%), open research (52%) and using different forms of content such as videos, podcasts and infographics (52%). Impact statements are the least popular option for academics looking to improve the way research is presented to non-academic decision makers, with 27% selecting this option.

' If a more layman’s report was made, the general public could have a better understanding of important issues like climate change, mental health, suicide rates, impact from COVID .' Undergraduate student in Australia

' Journals could provide an ‘alternative’ abstract that sums up the paper in very simple / nonscientific terms (if possible) so that the message that the article is trying to put across is clear even for someone not working in that field .' Postgraduate student in UK

' Animation and animated CAD tools are the most effective tool to connect research work to decision makers outside academia .' Academic in Egypt

' Publishing summaries in newsletter style/in relevant magazines […] Or making sure there is a regular summary of research in industry magazines .' Academic in United Arab Emirates


July 8, 2024

Researchers realize time reversal through input-output indefiniteness

by Huang Rui, University of Science and Technology of China

A research team has constructed a coherent superposition of quantum evolution with two opposite directions in a photonic system and confirmed its advantage in characterizing input-output indefiniteness. The study was published in Physical Review Letters.

The notion that time flows inexorably from the past to the future is deeply rooted in people's minds. However, the laws of physics that govern the motion of objects in the microscopic world do not deliberately distinguish the direction of time.

More specifically, the basic equations of motion of both classical and quantum mechanics are reversible: reversing the direction of the time coordinate of a dynamical process (possibly along with the sign of some other parameters) still yields a valid evolution process.

This is known as time reversal symmetry. In quantum information science, time reversal has attracted great interest due to its applications in multi-time quantum states, simulations of closed timelike curves, and inversion of unknown quantum evolutions. However, time reversal is difficult to realize experimentally.
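
To make the symmetry concrete, here is a minimal numerical sketch (our illustration, not the experiment): for a qubit evolving under U(t) = exp(-iHt), flipping the sign of t gives exp(+iHt) = U(t)†, which exactly undoes the forward evolution. The Hamiltonian below is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.linalg import expm

# Arbitrary Hermitian Hamiltonian for a single qubit (illustrative only).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
t = 0.7

U_forward = expm(-1j * H * t)   # forward evolution U(t) = exp(-iHt)
U_reversed = expm(+1j * H * t)  # same dynamics with the time direction flipped

# Running the dynamics forward and then with t reversed returns the identity:
# the time-reversal symmetry of the Schroedinger equation in action.
print(np.allclose(U_reversed @ U_forward, np.eye(2)))  # True
```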

To tackle this problem, the team, led by academician Guo Guangcan, Prof. Li Chuanfeng and Prof. Liu Biheng from the University of Science and Technology of China (USTC) of the Chinese Academy of Sciences (CAS), collaborating with Prof. Giulio Chiribella from the University of Hong Kong, constructed a class of quantum evolution processes in a photonic setup by extending the time reversal to the input-output inversion of a quantum device.

When the input and output ports of a quantum device were exchanged, the resulting evolution satisfied the time-reversal properties of the initial evolution, yielding a time-reversal simulator for quantum evolution.

On this basis, the team further quantized the direction of evolution time, achieving a coherent superposition of a quantum evolution and its inverse. They also characterized the resulting structures using quantum witness techniques.

Compared to the scenario of a definite evolution time direction, the quantization of the time direction showed significant advantages in quantum channel identification.

In this study, researchers used the device to distinguish between two sets of quantum channels with a 99.6% success probability, while the maximum success probability of a definite time direction strategy was only 89% with the same resource consumption.

The study reveals the potential of input-output indefiniteness as a valuable resource for advancements in quantum information and photonic quantum technologies.

Journal information: Physical Review Letters

Provided by University of Science and Technology of China


Surface-Filling Curve Flows via Implicit Medial Axes

We introduce a fast, robust, and user-controllable algorithm to generate surface-filling curves. We compute these curves through the gradient flow of a simple sparse energy, making our method several orders of magnitude faster than previous works. Our algorithm makes minimal assumptions on the topology and resolution of the input surface, achieving improved robustness. Our framework provides tuneable parameters that guide the shape of the output curve, making it ideal for interactive design applications.
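
As a rough intuition for the "gradient flow of a curve energy" idea — and only that; the sketch below is a toy 2D analogue written for illustration, not the paper's sparse medial-axis energy or its surface setting — one can evolve a closed polyline by gradient descent on a simple energy in which neighboring samples attract and all pairs of samples repel, so the curve spreads out while staying smooth:

```python
import numpy as np

n = 64
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
curve = np.column_stack([np.cos(theta), np.sin(theta)])  # initial closed circle

def energy_gradient(x: np.ndarray) -> np.ndarray:
    grad = np.zeros_like(x)
    # Spring term: each sample is pulled toward its two neighbors (smoothness).
    grad += 2.0 * x - np.roll(x, 1, axis=0) - np.roll(x, -1, axis=0)
    # Repulsion term: every pair of samples pushes apart (space-filling tendency).
    diff = x[:, None, :] - x[None, :, :]                 # pairwise differences
    dist2 = (diff ** 2).sum(axis=-1) + np.eye(len(x))    # 1s on the diagonal avoid 0/0
    grad -= 0.01 * (diff / dist2[..., None] ** 2).sum(axis=1)
    return grad

step = 0.05
for _ in range(200):   # explicit gradient-flow (gradient descent) steps
    curve -= step * energy_gradient(curve)

print("final curve extent:", curve.min(axis=0), curve.max(axis=0))
```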


COMMENTS

  1. Conceptualizing the elements of research impact: towards ...

    This classification aims to help focus research evaluation appropriately and enhance appreciation of the multiple pathways and mechanisms by which scholarship contributes to change.

  2. It's Not Just Semantics: Managing Outcomes Vs. Outputs

    What's the difference between outputs and outcomes? Some think the question is merely semantic, or that the difference is simple: outputs are extrinsic and outcomes intrinsic. I think otherwise ...

  3. Outputs Versus Outcomes

    This chapter explores what we mean by research project deliverables—particularly the difference between outputs and outcomes. This is an increasingly important distinction to funding bodies. Research outputs, which are key performance indicators for academics,...

  4. Optimizing Research Output: How Can Psychological Research Methods Be

    Recent evidence suggests that research practices in psychology and many other disciplines are far less effective than previously assumed, which has led to what has been called a "crisis of confidence" in psychological research (e.g., Pashler & Wagenmakers 2012). In response to the perceived crisis, standard research practices have come under intense scrutiny, and various changes have ...

  5. Outputs from Research

    A research output is the product of research. An underpinning principle of the REF is that all forms of research output will be assessed on a fair and equal basis.

  6. BeckerGuides: Research Impact : Outputs and Activities

    Tracking your research outputs and activities is key to being able to document the impact of your research. One starting point for telling a story about your research impact is your publications. Advances in digital technology afford numerous avenues for scholars to not only disseminate research findings but also to document the diffusion of ...

  7. Output Types

    An article published in an academic journal can go by several names: original research, an article, a scholarly article, or a peer reviewed article. This format is an important output for many fields and disciplines. Original research articles are written by one or a number of authors who typically advance a new argument or idea to their field.

  8. Characteristics of research output in social sciences and humanities

    The goal of research evaluation is to reveal the achievement and progress of research. Research output offers a basis for empirical evaluation. A fair and just research evaluation should take into ac...

  9. Types of research output profiles: A multilevel latent class analysis

    Abstract. Starting out from a broad concept of research output, this article looks at the question as to what research outputs can typically be expected fr

  10. How to talk about your research outputs

    The University of Glasgow has signed up to the San Francisco Declaration on Research Assessment and this animation explains how you can talk about your range...

  11. What's the best measure of research output?

When assessing the output of a country's top-quality research, it is prudent to also consider article count - the total number of studies that a country's researchers have contributed to ...

  12. Turning Research into Outputs: Thesis, Papers and Beyond

    These mile-stones are often aligned with the output of key research outputs, such as papers, talks or reports, along the way and are likely to result in significant contributions, or individual, thesis chapters.

  13. How To Make Conceptual Framework (With Examples and Templates)

    State the research output. Indicate what you are expecting after you conduct the research. In our example above, the research output is the assessed level of satisfaction of college students with the use of Google Classroom as an online learning platform. Create the model using the research's determined input, process, and output.

  14. How to Write a Results Section

    In the results section, concisely present the main findings and observe how they relate to your research questions or hypotheses.

  15. Research Output

    1 Definition An output is an outcome of research and can take many forms. Research Outputs must meet the definition of Research.

  16. Research incentives and research output

    This paper first briefly reviews the worldwide development of the university sector, the use of reward systems for university employees and the measurement of research output. It concludes that new public management thinking has resulted in more focus on quantitative research measures and external incentives, leading to a significant increase ...

  17. (PDF) Research management and research output

    The theoretical research output prediction model highlights predictors such as 'professional activities' and 'individual skills and competence' for specific groupings.

  18. How to formulate strong outputs

    This blog post shows how an output can be properly formulated. Outputs are are key element in of results based management and a results chain.

  19. Research Impact, Research Output, and the Role of International

    This data brief explores how international collaboration relates to the impact and output of research publications. Focusing on the top 10 countries with the highest publication output from 2010 to 2019, the authors provide a comprehensive analysis across the major fields of science and technology.

  20. Strengthening a culture of research dissemination: A narrative report

    There are significant gaps in research output and authorship in low- and middle-income countries. Research dissemination events have the potential to help bridge this gap through knowledge transfer, institutional collaboration, and stakeholder engagement. These events may also have an impact on both clinical service delivery and policy development.

  21. | Essential inputs and outputs, outcomes and impact of the research

    Research activity leads to outputs, outcomes and wider impact, which can serve to tell us whether research has been effective. Finally, the information on inputs, research processes and outputs ...

  22. Analysis 4: Future of learning & research outputs

    Here we look to the future of research outputs, contemplating how they can be improved to reach all users. We reveal the future trends that students and academics believe will make research more effective and consider the role of technology in making research more usable for non-academics. Turning attention to specific groups, we share the top ...

  23. Money and output asymmetry: the unintended consequences of central

    The study re-examines the relationship between money and output for the US and the UK using quarterly data up to 2019. Modern central banks are focused on controlling inflation and adjust their mon...

  24. Researchers realize time reversal through input-output indefiniteness

    A research team has constructed a coherent superposition of quantum evolution with two opposite directions in a photonic system and confirmed its advantage in characterizing input-output ...

  25. Ultra‐High Peak Power Generation for Rotational Triboelectric

    Currently, enhancing the output power of rotational-mode triboelectric nanogenerators (TENGs) using various complicated systems is a contentious issue; however, this is a challenging process owing to the inherent characteristics of TENGs, namely, low output currents as opposed to high voltages.

  26. Top Stock Reports for Novo Nordisk, AbbVie & AstraZeneca

    The Zacks Research Daily presents the best research output of our analyst team. Today's Research Daily features new research reports on 16 major stocks, including Novo Nordisk A/S (NVO), AbbVie ...

  27. Novel clamping modulation for three‐phase buck‐boost ac choppers

    Three-phase ac choppers feature output voltage amplitude controllability and enable more compact system realizations compared to autotransformers. For the practical realization advantageously standard power transistor with unipolar voltage blocking capability such as MOSFETs can be employed as a naturally resulting offset voltage between the ...

  28. Surface-Filling Curve Flows via Implicit Medial Axes

    We introduce a fast, robust, and user-controllable algorithm to generate surface-filling curves. We compute these curves through the gradient flow of a simple sparse energy, making our method several orders of magnitude faster than previous works. Our algorithm makes minimal assumptions on the topology and resolution of the input surface, achieving improved robustness.