How do you determine the quality of a journal article?

Published on October 17, 2014 by Bas Swaen. Revised on March 4, 2019.

In the theoretical framework of your thesis, you support the research you want to perform with a literature review. Here, you look for earlier research on your subject. These studies are often published as scientific articles in journals (scholarly publications).

Why is good quality important?

The better the quality of the articles you use in the literature review, the stronger your own research will be. If you use articles that are not well respected, you run the risk that the conclusions you draw will be unfounded. Your supervisor will always check the sources behind the conclusions you draw.

We will use an example to explain how you can judge the quality of a scientific article. We will use the following article as our example:

Example article

Perrett, D. I., Burt, D. M., Penton-Voak, I. S., Lee, K. J., Rowland, D. A., & Edwards, R. (1999). Symmetry and human facial attractiveness. Evolution and Human Behavior, 20, 295-307. Retrieved from http://www.grajfoner.com/Clanki/Perrett%201999%20Symetry%20Attractiveness.pdf

This article is about the possible link between facial symmetry and the attractiveness of a human face.


1. Where is the article published?

The journal (academic publication) where the article is published says something about the quality of the article. Journals are ranked in the Journal Quality List (JQL). If the journal you used is ranked at the top of your professional field in the JQL, then you can assume that the quality of the article is high.

The article from the example is published in the journal “Evolution and Human Behavior”. This journal is not on the Journal Quality List, but after googling it, multiple sources indicate that it is nevertheless among the top journals in the field of psychology (see the journal ranking at http://www.ehbonline.org/). The quality of the source is thus high enough to use it.

So, if a journal is not listed in the Journal Quality List, it is worthwhile to google it. You will then learn more about the quality of the journal.

2. Who is the author?

The next step is to look at who the author of the article is:

  • What do you know about the person who wrote the paper?
  • Has the author done much research in this field?
  • What do others say about the author?
  • What is the author’s background?
  • At which university does the author work? Does this university have a good reputation?

The lead author of the article (Perrett) has already done much work in this research field, including prior studies of predictors of attractiveness. Penton-Voak, one of the other authors, also collaborated on these studies. In 1999, Perrett and Penton-Voak were both professors at the University of St Andrews in the United Kingdom, which is among the top 100 universities in the world. Less information is available about the other authors; they may have been students who assisted the professors.

3. What is the date of publication?

In which year was the article published? The more recent the research, the better. If the research is somewhat older, it is smart to check whether any follow-up research has taken place. Perhaps the author continued the research and more useful results have been published.

Tip! If you’re searching for an article in Google Scholar, click on ‘Since 2014’ in the left-hand column. If you can’t find anything (more) there, select ‘Since 2013’. Working backward in this manner, you will find the most recent studies.

The example article was published in 1999. This is not extremely old, but there has probably been quite a bit of follow-up research in the intervening 15 years. Indeed, via Google Scholar I quickly found an article from 2013 that investigated the influence of symmetry on facial attractiveness in children. The 1999 article can serve as a good foundation for reading up on the subject, but it is advisable to find out how research into the influence of symmetry on facial attractiveness has developed since.

4. What do other researchers say about the paper?

Find out who the experts are in this field of research. Do they support the research, or are they critical of it?

Searching in Google Scholar, I see that the article has been cited at least 325 times, meaning it is referenced in at least 325 other articles. Looking at the authors of those articles, I see that they are experts in the research field, and they cite the article as support rather than to criticize it.

5. Determine the quality

Now look back: how did the article score on the points mentioned above? Based on that, you can determine its quality.

The example article scored ‘reasonable’ to ‘good’ on all points, so we can consider it to be of good quality and therefore usable in, for example, a literature review. Because the article is somewhat dated, however, it is wise to also search for more recent research.

Cite this Scribbr article

Swaen, B. (2019, March 04). How do you determine the quality of a journal article? Scribbr. Retrieved July 10, 2024, from https://www.scribbr.com/tips/how-do-you-determine-the-quality-of-a-journal-article/


Brian M. Belcher, Katherine E. Rasmussen, Matthew R. Kemshaw, Deborah A. Zornes, Defining and assessing research quality in a transdisciplinary context, Research Evaluation , Volume 25, Issue 1, January 2016, Pages 1–17, https://doi.org/10.1093/reseval/rvv025


Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient—there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, proposed principles of research quality, proposed criteria for research quality assessment, proposed indicators and measures of research quality, and proposed processes for evaluating TDR. We used the information from the review and our own experience in two research organizations that employ TDR approaches to develop a prototype TDR quality assessment framework, organized as an evaluation rubric. We provide an overview of the relevant literature and summarize the main aspects of TDR quality identified there. Four main principles emerge: relevance, including social significance and applicability; credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigor; legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and effectiveness, with criteria that assess actual or potential contributions to problem solving and social change.

Contemporary research in the social and environmental realms places strong emphasis on achieving ‘impact’. Research programs and projects aim to generate new knowledge but also to promote and facilitate the use of that knowledge to enable change, solve problems, and support innovation ( Clark and Dickson 2003 ). Reductionist and purely disciplinary approaches are being augmented or replaced with holistic approaches that recognize the complex nature of problems and that actively engage within complex systems to contribute to change ‘on the ground’ ( Gibbons et al. 1994 ; Nowotny, Scott and Gibbons 2001 , Nowotny, Scott and Gibbons 2003 ; Klein 2006 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Emerging fields such as sustainability science have developed out of a need to address complex and urgent real-world problems ( Komiyama and Takeuchi 2006 ). These approaches are inherently applied and transdisciplinary, with explicit goals to contribute to real-world solutions and strong emphasis on context and social engagement ( Kates 2000 ).

While there is an ongoing conceptual and theoretical debate about the nature of the relationship between science and society (e.g. Hessels 2008 ), we take a more practical starting point based on the authors’ experience in two research organizations. The first author has been involved with the Center for International Forestry Research (CIFOR) for almost 20 years. CIFOR, as part of the Consultative Group on International Agricultural Research (CGIAR), began a major transformation in 2010 that shifted the emphasis from a primary focus on delivering high-quality science to a focus on ‘…producing, assembling and delivering, in collaboration with research and development partners, research outputs that are international public goods which will contribute to the solution of significant development problems that have been identified and prioritized with the collaboration of developing countries.’ ( CGIAR 2011 ). It was always intended that CGIAR research would be relevant to priority development and conservation issues, with emphasis on high-quality scientific outputs. The new approach puts much stronger emphasis on welfare and environmental results; research centers, programs, and individual scientists now assume shared responsibility for achieving development outcomes. This requires new ways of working, with more and different kinds of partnerships and more deliberate and strategic engagement in social systems.

Royal Roads University (RRU), the home institute of all four authors, is a relatively new (created in 1995) public university in Canada. It is deliberately interdisciplinary by design, with just two faculties (Faculty of Social and Applied Science; Faculty of Management) and strong emphasis on problem-oriented research. Faculty and student research is typically ‘applied’ in the Organization for Economic Co-operation and Development (2012) sense of ‘original investigation undertaken in order to acquire new knowledge … directed primarily towards a specific practical aim or objective’.

An increasing amount of the research done within both of these organizations can be classified as transdisciplinary research (TDR). TDR crosses disciplinary and institutional boundaries, is context specific, and problem oriented ( Klein 2006 ; Carew and Wickson 2010 ). It combines and blends methodologies from different theoretical paradigms, includes a diversity of both academic and lay actors, and is conducted with a range of research goals, organizational forms, and outputs ( Klein 2006 ; Boix-Mansilla 2006a ; Erno-Kjolhede and Hansson 2011 ). The problem-oriented nature of TDR and the importance placed on societal relevance and engagement are broadly accepted as defining characteristics of TDR ( Carew and Wickson 2010 ).

The experience developing and using TDR approaches at CIFOR and RRU highlights the need for a parallel evolution of principles and criteria for evaluating research quality in a TDR context. Scientists appreciate and often welcome the need and the opportunity to expand the reach of their research, to contribute more effectively to change processes. At the same time, they feel the pressure of added expectations and are looking for guidance.

In any activity, we need principles, guidelines, criteria, or benchmarks that can be used to design the activity, assess its potential, and evaluate its progress and accomplishments. Effective research quality criteria are necessary to guide the funding, management, ongoing development, and advancement of research methods, projects, and programs. The lack of quality criteria to guide and assess research design and performance is seen as hindering the development of transdisciplinary approaches ( Bergmann et al. 2005 ; Feller 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2008 ; Carew and Wickson 2010 ; Jahn and Keil 2015 ). Appropriate quality evaluation is essential to ensure that research receives support and funding, and to guide and train researchers and managers to realize high-quality research ( Boix-Mansilla 2006a ; Klein 2008 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ).

Traditional disciplinary research is built on well-established methodological and epistemological principles and practices. Within disciplinary research, quality has been defined narrowly, with the primary criteria being scientific excellence and scientific relevance ( Feller 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Disciplines have well-established (often implicit) criteria and processes for the evaluation of quality in research design ( Erno-Kjolhede and Hansson 2011 ). TDR that is highly context specific, problem oriented, and includes nonacademic societal actors in the research process is challenging to evaluate ( Wickson, Carew and Russell 2006 ; Aagaard-Hansen and Svedin 2009 ; Andrén 2010 ; Carew and Wickson 2010 ; Huutoniemi 2010 ). There is no one definition or understanding of what constitutes quality, nor a set guide for how to do TDR ( Lincoln 1995 ; Morrow 2005 ; Oberg 2008 ; Andrén 2010 ; Huutoniemi 2010 ). When epistemologies and methods from more than one discipline are used, disciplinary criteria may be insufficient and criteria from more than one discipline may be contradictory; cultural conflicts can arise as a range of actors use different terminology for the same concepts or the same terminology for different concepts ( Chataway, Smith and Wield 2007 ; Oberg 2008 ).

Current research evaluation approaches as applied to individual researchers, programs, and research units are still based primarily on measures of academic outputs (publications and the prestige of the publishing journal), citations, and peer assessment ( Boix-Mansilla 2006a ; Feller 2006 ; Erno-Kjolhede and Hansson 2011 ). While these indicators of research quality remain relevant, additional criteria are needed to address the innovative approaches and the diversity of actors, outputs, outcomes, and long-term social impacts of TDR. It can be difficult to find appropriate outlets for TDR publications simply because the research does not meet the expectations of traditional discipline-oriented journals. Moreover, a wider range of inputs and of outputs means that TDR may result in fewer academic outputs. This has negative implications for transdisciplinary researchers, whose performance appraisals and long-term career progression are largely governed by traditional publication and citation-based metrics of evaluation. Research managers, peer reviewers, academic committees, and granting agencies all struggle with how to evaluate and how to compare TDR projects (ex ante or ex post) in the absence of appropriate criteria to address epistemological and methodological variability. The extent of engagement of stakeholders in the research process will vary by project, from information sharing through to active collaboration ( Brandt et al. 2013 ), but at any level, the involvement of stakeholders adds complexity to the conceptualization of quality. We need to know what ‘good research’ is in a transdisciplinary context.

As Tijssen ( 2003 : 93) put it: ‘Clearly, in view of its strategic and policy relevance, developing and producing generally acceptable measures of “research excellence” is one of the chief evaluation challenges of the years to come’. Clear criteria are needed for research quality evaluation to foster excellence while supporting innovation: ‘A principal barrier to a broader uptake of TD research is a lack of clarity on what good quality TD research looks like’ ( Carew and Wickson 2010 : 1154). In the absence of alternatives, many evaluators, including funding bodies, rely on conventional, discipline-specific measures of quality which do not address important aspects of TDR.

There is an emerging literature that reviews, synthesizes, or empirically evaluates knowledge and best practice in research evaluation in a TDR context and that proposes criteria and evaluation approaches ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Klein 2008 ; Carew and Wickson 2010 ; ERIC 2010; de Jong et al. 2011 ; Spaapen and Van Drooge 2011 ). Much of it comes from a few fields, including health care, education, and evaluation; little comes from the natural resource management and sustainability science realms, despite these areas needing guidance. National-scale reviews have begun to recognize the need for broader research evaluation criteria but have had difficulty dealing with it and have made little progress in addressing it ( Donovan 2008 ; KNAW 2009 ; REF 2011 ; ARC 2012 ; TEC 2012 ). A summary of the national reviews that we reviewed in the development of this research is provided in Supplementary Appendix 1 . While there are some published evaluation schemes for TDR and interdisciplinary research (IDR), there is ‘substantial variation in the balance different authors achieve between comprehensiveness and over-prescription’ ( Wickson and Carew 2014 : 256) and still a need to develop standardized quality criteria that are ‘uniquely flexible to provide valid, reliable means to evaluate and compare projects, while not stifling the evolution and responsiveness of the approach’ ( Wickson and Carew 2014 : 256).

There is a need and an opportunity to synthesize current ideas about how to define and assess quality in TDR. To address this, we conducted a systematic review of the literature that discusses the definitions of research quality as well as the suggested principles and criteria for assessing TDR quality. The aim is to identify appropriate principles and criteria for defining and measuring research quality in a transdisciplinary context and to organize those principles and criteria as an evaluation framework.

The review question was: What are appropriate principles, criteria, and indicators for defining and assessing research quality in TDR?

This article presents the method used for the systematic review and our synthesis, followed by key findings. Theoretical concepts about why new principles and criteria are needed for TDR, along with associated discussions about evaluation process are presented. A framework, derived from our synthesis of the literature, of principles and criteria for TDR quality evaluation is presented along with guidance on its application. Finally, recommendations for next steps in this research and needs for future research are discussed.

2.1 Systematic review

Systematic review is a rigorous, transparent, and replicable methodology that has become widely used to inform evidence-based policy, management, and decision making ( Pullin and Stewart 2006 ; CEE 2010). Systematic reviews follow a detailed protocol with explicit inclusion and exclusion criteria to ensure a repeatable and comprehensive review of the target literature. Review protocols are shared and often published as peer reviewed articles before undertaking the review to invite critique and suggestions. Systematic reviews are most commonly used to synthesize knowledge on an empirical question by collating data and analyses from a series of comparable studies, though methods used in systematic reviews are continually evolving and are increasingly being developed to explore a wider diversity of questions ( Chandler 2014 ). The current study question is theoretical and methodological, not empirical. Nevertheless, with a diverse and diffuse literature on the quality of TDR, a systematic review approach provides a method for a thorough and rigorous review. The protocol is published and available at http://www.cifor.org/online-library/browse/view-publication/publication/4382.html . A schematic diagram of the systematic review process is presented in Fig. 1 .

Figure 1. Search process.

2.2 Search terms

Search terms were designed to identify publications that discuss the evaluation or assessment of quality or excellence of research done in a TDR context. Search terms are listed online in Supplementary Appendices 2 and 3 . The search strategy favored sensitivity over specificity to ensure that we captured the relevant information.

2.3 Databases searched

ISI Web of Knowledge (WoK) and Scopus were searched between 26 June 2013 and 6 August 2013. The combined searches yielded 15,613 unique citations. Additional searches to update the first searches were carried out in June 2014 and March 2015, for a total of 19,402 titles scanned. Google Scholar (GS) was searched separately by two reviewers during each search period. The first reviewer’s search was done on 2 September 2013 (Search 1) and 3 September 2013 (Search 2), yielding 739 and 745 titles, respectively. The second reviewer’s search was done on 19 November 2013 (Search 1) and 25 November 2013 (Search 2), yielding 769 and 774 titles, respectively. A third search done on 17 March 2015 by one reviewer yielded 98 new titles. Reviewers found high redundancy between the WoK/Scopus searches and the GS searches.

2.4 Targeted journal searches

Highly relevant journals, including Research Evaluation, Evaluation and Program Planning, Scientometrics, Research Policy, Futures, American Journal of Evaluation, Evaluation Review, and Evaluation, were comprehensively searched using broader, more inclusive search strings that would have been unmanageable for the main database search.

2.5 Supplementary searches

References in included articles were reviewed to identify additional relevant literature. td-net’s ‘Tour d’Horizon of Literature’ lists important inter- and transdisciplinary publications collected through an invitation to experts in the field to submit publications ( td-net 2014 ). Six additional articles were identified via these supplementary searches.

2.6 Limitations of coverage

The review was limited to English-language published articles and material available through internet searches. There was no systematic way to search the gray (unpublished) literature, but relevant material identified through supplementary searches was included.

2.7 Inclusion of articles

This study sought articles that review, critique, discuss, and/or propose principles, criteria, indicators, and/or measures for the evaluation of quality relevant to TDR. As noted, this yielded a large number of titles. We then selected only those articles with an explicit focus on the meaning of IDR and/or TDR quality and how to achieve, measure or evaluate it. Inclusion and exclusion criteria were developed through an iterative process of trial article screening and discussion within the research team. Through this process, inter-reviewer agreement was tested and strengthened. Inclusion criteria are listed in Tables 1 and 2 .

Table 1. Inclusion criteria for title and abstract screening

  • Topic coverage
  • Document type
  • Geographic: no geographic barriers
  • Date: no temporal barriers
  • Discipline/field: discussion must be relevant to environment, natural resources management, sustainability, livelihoods, or related areas of human–environmental interactions; the discussion need not explicitly reference any of the above subject areas

Table 2. Inclusion criteria for abstract and full article screening

  • Relevance to review objectives (all articles must meet this criterion): the intention of the article, or part of the article, is to discuss the meaning of research quality and how to measure/evaluate it
  • Theoretical discussion
  • Quality definitions and criteria: offers an explicit definition or criteria of inter- and/or transdisciplinary research quality
  • Evaluation process: suggests approaches to evaluate inter- and/or transdisciplinary research quality (included only if there is relevant discussion of research quality criteria and/or measurement)
  • Research ‘impact’: discusses research outcomes (diffusion, uptake, utilization, impact) as an indicator or consequence of research quality

Article screening was done in parallel by two reviewers in three rounds: (1) title, (2) abstract, and (3) full article. In cases of uncertainty, papers were advanced to the next round. Final decisions on inclusion of contested papers were made by consensus among the four team members.

2.8 Critical appraisal

In typical systematic reviews, individual articles are appraised to ensure that they are adequate for answering the research question and to assess the methods of each study for susceptibility to bias that could influence the outcome of the review (Petticrew and Roberts 2006). Most papers included in this review are theoretical and methodological papers, not empirical studies. Most do not have explicit methods that can be appraised with existing quality assessment frameworks. Our critical appraisal considered four criteria adapted from Spencer et al. (2003): (1) relevance to the review question, (2) clarity and logic of how information in the paper was generated, (3) significance of the contribution (are new ideas offered?), and (4) generalizability (is the context specified; do the ideas apply in other contexts?). Disagreements were discussed to reach consensus.

2.9 Data extraction and management

The review sought information on: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, principles of research quality, criteria for research quality assessment, indicators and measures of research quality, and processes for evaluating TDR. Four reviewers independently extracted data from selected articles using the parameters listed in Supplementary Appendix 4 .

2.10 Data synthesis and TDR framework design

Our aim was to synthesize ideas, definitions, and recommendations for TDR quality criteria into a comprehensive and generalizable framework for the evaluation of quality in TDR. Key ideas were extracted from each article and summarized in an Excel database. We classified these ideas into themes and ultimately into overarching principles and associated criteria of TDR quality organized as a rubric ( Wickson and Carew 2014 ). Definitions of each principle and criterion were developed and rubric statements formulated based on the literature and our experience. These criteria (adjusted appropriately to be applied ex ante or ex post ) are intended to be used to assess a TDR project. The reviewer should consider whether the project fully satisfies, partially satisfies, or fails to satisfy each criterion. More information on application is provided in Section 4.3 below.
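The three-level rating described above (each criterion fully satisfied, partially satisfied, or not satisfied) lends itself to a simple data-structure sketch. The following Python snippet is purely illustrative, not the authors' framework: the principle names follow the four principles identified in the review, the example criteria are paraphrased from the abstract, and the numeric scale and equal weighting are assumptions.

```python
# Illustrative sketch (an assumption, not the published framework):
# a TDR quality rubric scored on a three-level scale per criterion.

FULLY, PARTIALLY, NOT = 2, 1, 0  # "fully / partially / fails to satisfy"

# Principles from the review; criteria paraphrased for illustration.
RUBRIC = {
    "relevance":     ["social significance", "applicability"],
    "credibility":   ["scientific rigor", "integration", "reflexivity"],
    "legitimacy":    ["inclusion", "fair representation of stakeholders"],
    "effectiveness": ["contribution to problem solving", "social change"],
}

def score_project(ratings):
    """Summarize per-principle scores for one project.

    ratings maps each criterion to FULLY, PARTIALLY, or NOT.
    Returns {principle: fraction of available points earned},
    assuming equal weighting of criteria within a principle.
    """
    summary = {}
    for principle, criteria in RUBRIC.items():
        earned = sum(ratings.get(c, NOT) for c in criteria)
        summary[principle] = earned / (FULLY * len(criteria))
    return summary

# A hypothetical reviewer's ratings for one project.
ratings = {
    "social significance": FULLY,
    "applicability": PARTIALLY,
    "scientific rigor": FULLY,
    "integration": PARTIALLY,
    "reflexivity": NOT,
    "inclusion": FULLY,
    "fair representation of stakeholders": FULLY,
    "contribution to problem solving": PARTIALLY,
    "social change": NOT,
}
print(score_project(ratings))
# {'relevance': 0.75, 'credibility': 0.5, 'legitimacy': 1.0, 'effectiveness': 0.25}
```

A structure like this makes the trade-offs visible at the principle level (here, strong legitimacy but weak effectiveness) rather than collapsing everything into a single number, which matches the rubric-based, criterion-by-criterion judgment the framework calls for.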

We tested the framework on a set of completed RRU graduate theses that used transdisciplinary approaches, with an explicit problem orientation and intent to contribute to social or environmental change. Three rounds of testing were done, with revisions after each round to refine and improve the framework.

3.1 Overview of the selected articles

Thirty-eight papers satisfied the inclusion criteria. A wide range of terms are used in the selected papers, including: cross-disciplinary; interdisciplinary; transdisciplinary; methodological pluralism; mode 2; triple helix; and supradisciplinary. Eight included papers specifically focused on sustainability science or TDR in natural resource management, or identified sustainability research as a growing TDR field that needs new forms of evaluation ( Cash et al. 2002 ; Bergmann et al. 2005 ; Chataway, Smith and Wield 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Andrén 2010 ; Carew and Wickson 2010 ; Lang et al. 2012 ; Gaziulusoy and Boyle 2013 ). Carew and Wickson (2010) build on the experience in the TDR realm to propose criteria and indicators of quality for ‘responsible research and innovation’.

The selected articles are written from three main perspectives. One set is primarily interested in advancing TDR approaches. These papers recognize the need for new quality measures to encourage and promote high-quality research and to overcome perceived biases against TDR approaches in research funding and publishing. A second set of papers is written from an evaluation perspective, with a focus on improving evaluation of TDR. The third set is written from the perspective of qualitative research characterized by methodological pluralism, with many characteristics and issues relevant to TDR approaches.

The majority of the articles focus at the project scale, some at the organization level, and some do not specify. Some articles explicitly focus on ex ante evaluation (e.g. proposal evaluation), others on ex post evaluation, and many are not explicit about the project stage they are concerned with. The methods used in the reviewed articles include authors’ reflection and opinion, literature review, expert consultation, document analysis, and case study. Summaries of report characteristics are available online ( Supplementary Appendices 5–8 ). Eight articles provide comprehensive evaluation frameworks and quality criteria specifically for TDR and research-in-context. The rest of the articles discuss aspects of quality related to TDR and recommend quality definitions, criteria, and/or evaluation processes.

3.2 The need for quality criteria and evaluation methods for TDR

Many of the selected articles highlight the lack of widely agreed principles and criteria of TDR quality. They note that, in the absence of TDR quality frameworks, disciplinary criteria are used ( Morrow 2005 ; Boix-Mansilla 2006a , b ; Feller 2006 ; Klein 2006 , 2008 ; Wickson, Carew and Russell 2006 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Oberg 2008 ; Erno-Kjolhede and Hansson 2011 ), and evaluations are often carried out by reviewers who lack cross-disciplinary experience and do not have a shared understanding of quality ( Aagaard-Hansen and Svedin 2009 ). Quality is discussed by many as a relative concept, developed within disciplines, and therefore defined and understood differently in each field ( Morrow 2005 ; Klein 2006 ; Oberg 2008 ; Mitchell and Willets 2009 ; Huutoniemi 2010 ; Hellstrom 2011 ). Jahn and Keil (2015) point out the difficulty of creating a common set of quality criteria for TDR in the absence of a standard agreed-upon definition of TDR. Many of the selected papers argue the need to move beyond narrowly defined ideas of ‘scientific excellence’ to incorporate a broader assessment of quality which includes societal relevance ( Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ). This shift includes greater focus on research organization, research process, and continuous learning, rather than primarily on research outputs ( Hemlin and Rasmussen 2006 ; de Jong et al. 2011 ; Wickson and Carew 2014 ; Jahn and Keil 2015 ). This responds to and reflects societal expectations that research should be accountable and have demonstrated utility ( Cloete 1997 ; Defila and Di Giulio 1999 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Stige 2009 ).

A central aim of TDR is to achieve socially relevant outcomes, and TDR quality criteria should demonstrate accountability to society ( Cloete 1997 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; de Jong et al. 2011 ). Integration and mutual learning are core elements of TDR; it is not enough to transcend boundaries and incorporate societal knowledge but, as Carew and Wickson ( 2010 : 1147) summarize: ‘…the TD researcher needs to put effort into integrating these potentially disparate knowledges with a view to creating useable knowledge. That is, knowledge that can be applied in a given problem context and has some prospect of producing desired change in that context’. The inclusion of societal actors in the research process, the unique and often dispersed organization of research teams, and the deliberate integration of different traditions of knowledge production all fall outside of conventional assessment criteria ( Feller 2006 ).

Not only does the range of criteria need to be updated, expanded, and agreed upon, with assumptions made explicit ( Boix-Mansilla 2006a ; Klein 2006 ; Scott 2007 ) but, given the specific problem orientation of TDR, reviewers beyond disciplinary academic peers need to be included in the assessment of quality ( Cloete 1997 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ). Several authors discuss the lack of reviewers with strong cross-disciplinary experience ( Aagaard-Hansen and Svedin 2009 ) and the lack of common criteria, philosophical foundations, and language for use by peer reviewers ( Klein 2008 ; Aagaard-Hansen and Svedin 2009 ). Peer review of TDR could be improved with explicit TDR quality criteria and appropriate processes to ensure clear dialog between reviewers.

Finally, there is the need for increased emphasis on evaluation as part of the research process ( Bergmann et al. 2005 ; Hemlin and Rasmussen 2006 ; Meyrick 2006 ; Chataway, Smith and Wield 2007 ; Stige, Malterud and Midtgarden 2009 ; Hellstrom 2011 ; Lang et al. 2012 ; Wickson and Carew 2014 ). This is particularly true in large, complex, problem-oriented research projects. Ongoing monitoring of the research organization and process contributes to learning and adaptive management while research is underway and so helps improve quality. As stated by Wickson and Carew ( 2014 : 262): ‘We believe that in any process of interpreting, rearranging and/or applying these criteria, open negotiation on their meaning and application would only positively foster transformative learning, which is a valued outcome of good TD processes’.

3.3 TDR quality criteria and assessment approaches

Many of the papers provide quality criteria and/or describe constituent parts of quality. Aagaard-Hansen and Svedin (2009) define three key aspects of quality: societal relevance, impact, and integration. Meyrick (2006) states that quality research is transparent and systematic. Boaz and Ashby (2003) describe quality in four dimensions: methodological quality, quality of reporting, appropriateness of methods, and relevance to policy and practice. Although each article deconstructs quality in different ways and with different foci and perspectives, there is significant overlap, and recurring themes emerge across the papers reviewed. There is a broadly shared perspective that TDR quality is a multidimensional concept shaped by the specific context within which research is done ( Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ), making a universal definition of TDR quality difficult or impossible ( Huutoniemi 2010 ).

Huutoniemi (2010) identifies three main approaches to conceptualizing quality in IDR and TDR: (1) using existing disciplinary standards adapted as necessary for IDR; (2) building on the quality standards of disciplines while fundamentally incorporating ways to deal with epistemological integration, problem focus, context, stakeholders, and process; and (3) radical departure from any disciplinary orientation in favor of external, emergent, context-dependent quality criteria that are defined and enacted collaboratively by a community of users.

The first approach is prominent in current research funding and evaluation protocols. Conservative approaches of this kind are criticized for privileging disciplinary research and for failing to provide guidance and quality control for transdisciplinary projects. The third approach would ‘undermine the prevailing status of disciplinary standards in the pursuit of a non-disciplinary, integrated knowledge system’ ( Huutoniemi 2010 : 313). No predetermined quality criteria are offered, only contextually embedded criteria that need to be developed within a specific research project. To some extent, this is the approach taken by Spaapen, Dijstelbloem and Wamelink (2007) and de Jong et al. (2011) . Such a sui generis approach cannot be used to compare across projects. Most of the reviewed papers take the second approach, and recommend TDR quality criteria that build on a disciplinary base.

Eight articles present comprehensive frameworks for quality evaluation, each with a unique approach, perspective, and goal. Two of these build comprehensive lists of criteria with associated questions to be chosen based on the needs of the particular research project ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ). Wickson and Carew (2014) develop a reflective heuristic tool with questions to guide researchers through ongoing self-evaluation; they also list criteria for external evaluation and for comparison between projects. Spaapen, Dijstelbloem and Wamelink (2007) design an approach that evaluates a research project against its own goals and is not meant to compare between projects. Wickson and Carew’s (2014) comprehensive rubric for the evaluation of Research and Innovation builds on their extensive previous work in TDR. Finally, Lang et al. (2012) , Mitchell and Willets (2009) , and Jahn and Keil (2015) develop criteria checklists that can be applied across transdisciplinary projects.

Bergmann et al. (2005) and Carew and Wickson (2010) organize their frameworks into managerial elements of the research project, concerning problem context, participation, management, and outcomes. Lang et al. (2012) and Defila and Di Giulio (1999) focus on the chronological stages in the research process and identify criteria at each stage. Mitchell and Willets (2009) , with a focus on doctoral studies, adapt standard dissertation evaluation criteria to accommodate broader, pluralistic, and more complex studies. Spaapen, Dijstelbloem and Wamelink (2007) focus on evaluating ‘research-in-context’. Wickson and Carew (2014) create a rubric based on criteria that span the research process and stages and include all actors involved. Jahn and Keil (2015) organize their quality criteria into three categories: quality of the research problems, quality of the research process, and quality of the research results.

The remaining papers highlight key themes that must be considered in TDR evaluation. Dominant themes include: engagement with problem context, collaboration and inclusion of stakeholders, heightened need for explicit communication and reflection, integration of epistemologies, recognition of diverse outputs, the focus on having an impact, and reflexivity and adaptation throughout the process. The focus on societal problems in context and the increased engagement of stakeholders in the research process introduce higher levels of complexity that cannot be accommodated by disciplinary standards ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ).

Finally, authors discuss process ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Boix-Mansilla 2006b ; Spaapen, Dijstelbloem and Wamelink 2007 ) and utilitarian values ( Hemlin 2006 ; Erno-Kjolhede and Hansson 2011 ; Bornmann 2013 ) as essential aspects of quality in TDR. Common themes include: (1) the importance of formative and process-oriented evaluation ( Bergmann et al. 2005 ; Hemlin 2006 ; Stige 2009 ); (2) emphasis on the evaluation process itself (not just criteria or outcomes) and reflexive dialog for learning ( Bergmann et al. 2005 ; Boix-Mansilla 2006b ; Klein 2008 ; Oberg 2008 ; Stige, Malterud and Midtgarden 2009 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ; Huutoniemi 2010 ); (3) the need for peers who are experienced and knowledgeable about TDR for fair peer review ( Boix-Mansilla 2006a , b ; Klein 2006 ; Hemlin 2006 ; Scott 2007 ; Aagaard-Hansen and Svedin 2009 ); (4) the inclusion of stakeholders in the evaluation process ( Bergmann et al. 2005 ; Scott 2007 ; Andrén 2010 ); and (5) the importance of evaluations that are built in-context ( Defila and Di Giulio 1999 ; Feller 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; de Jong et al. 2011 ).

While each reviewed approach offers helpful insights, none adequately fulfills the need for a broad and adaptable framework for assessing TDR quality. Wickson and Carew ( 2014 : 257) highlight the need for quality criteria that achieve balance between ‘comprehensiveness and over-prescription’: ‘any emerging quality criteria need to be concrete enough to provide real guidance but flexible enough to adapt to the specificities of varying contexts’. Based on our experience, such a framework should be:

Comprehensive: It should accommodate the main aspects of TDR, as identified in the review.

Time/phase adaptable: It should be applicable across the project cycle.

Scalable: It should be useful for projects of different scales.

Versatile: It should be useful to researchers and collaborators as a guide to research design and management, and to internal and external reviews and assessors.

Comparable: It should allow comparison of quality between and across projects/programs.

Reflexive: It should encourage and facilitate self-reflection and adaptation based on ongoing learning.

In this section, we synthesize the key principles and criteria of quality in TDR that were identified in the reviewed literature. Principles are the essential elements of high-quality TDR. Criteria are the conditions that need to be met in order to achieve a principle. We conclude by providing a framework for the evaluation of quality in TDR ( Table 3 ) and guidance for its application.

Transdisciplinary research quality assessment framework

Criteria | Definition | Rubric scale
Clearly defined socio-ecological context | The context is well defined and described and analyzed sufficiently to identify research entry points. | The context is well defined, described, and analyzed sufficiently to identify research entry points.
Socially relevant research problem | Research problem is relevant to the problem context. | The research problem is defined and framed in a way that clearly shows its relevance to the context and that demonstrates that consideration has been given to the practical application of research activities and outputs.
Engagement with problem context | Researchers demonstrate appropriate breadth and depth of understanding of and sufficient interaction with the problem context. | The documentation demonstrates that the researcher/team has interacted appropriately and sufficiently with the problem context to understand it and to have potential to influence it (e.g. through site visits, meeting participation, discussion with stakeholders, document review) in planning and implementing the research.
Explicit theory of change | The research explicitly identifies its main intended outcomes and how they are intended/expected to be realized and to contribute to longer-term outcomes and/or impacts. | The research explicitly identifies its main intended outcomes and how they are intended/expected to be realized and to contribute to longer-term outcomes and/or impacts.
Relevant research objectives and design | The research objectives and design are relevant, timely, and appropriate to the problem context, including attention to stakeholder needs and values. | The documentation clearly demonstrates, through sufficient analysis of key factors, needs, and complexity within the context, that the research objectives and design are relevant and appropriate.
Appropriate project implementation | Research execution is suitable to the problem context and the socially relevant research objectives. | The documentation reflects effective project implementation that is appropriate to the context, with reflection and adaptation as needed.
Effective communication | Communication during and after the research process is appropriate to the context and accessible to stakeholders, users, and other intended audiences. | The documentation indicates that the research project planned and achieved appropriate communications with all necessary actors during the research process.
Broad preparation | The research is based on a strong integrated theoretical and empirical foundation that is relevant to the context. | The documentation demonstrates critical understanding of an appropriate breadth and depth of literature and theory from across disciplines relevant to the context, and of the context itself.
Clear research problem definition | The research problem is clearly defined, researchable, grounded in the academic literature, and relevant to the context. | The research problem is clearly stated and defined, researchable, and grounded in the academic literature and the problem context.
Objectives stated and met | Research objectives are clearly stated. | The research objectives are clearly stated, logically and appropriately related to the context and the research problem, and achieved, with any necessary adaptation explained.
Feasible research project | The research design and resources are appropriate and sufficient to meet the objectives as stated, and sufficiently resilient to adapt to unexpected opportunities and challenges throughout the research process. | The research design and resources are appropriate and sufficient to meet the objectives as stated, and sufficiently resilient to adapt to unexpected opportunities and challenges throughout the research process.
Adequate competencies | The skills and competencies of the researcher/team/collaboration (including academic and societal actors) are sufficient and in appropriate balance (without unnecessary complexity) to succeed. | The documentation recognizes the limitations and biases of individuals’ knowledge, identifies the knowledge, skills, and expertise needed to carry out the research, and provides evidence that they are represented in the research team in the appropriate measure to address the problem.
Research approach fits purpose | Disciplines, perspectives, epistemologies, approaches, and theories are combined appropriately to create an approach that is appropriate to the research problem and the objectives. | The documentation explicitly states the rationale for the inclusion and integration of different epistemologies, disciplines, and methodologies, justifies the approach taken in reference to the context, and discusses the process of integration, including how paradoxes and conflicts were managed.
Appropriate methods | Methods are fit to purpose and well-suited to answering the research questions and achieving the objectives. | Methods are clearly described, and documentation demonstrates that the methods are fit to purpose, systematic yet adaptable, and transparent. Novel (unproven) methods or adaptations are justified and explained, including why they were used and how they maintain scientific rigor.
Clearly presented argument | The movement from analysis through interpretation to conclusions is transparently and logically described. Sufficient evidence is provided to clearly demonstrate the relationship between evidence and conclusions. | Results are clearly presented. Analyses and interpretations are adequately explained, with clearly described terminology and full exposition of the logic leading to conclusions, including exploration of possible alternate explanations.
Transferability/generalizability of research findings | Appropriate and rigorous methods ensure the study’s findings are externally valid (generalizable). In some cases, findings may be too context specific to be generalizable, in which case research would be judged on its ability to act as a model for future research. | The documentation clearly explains how the research findings are transferable to other contexts OR, in cases that are too context-specific to be generalizable, discusses aspects of the research process or findings that may be transferable to other contexts and/or used as learning cases.
Limitations stated | Researchers engage in ongoing individual and collective reflection in order to explicitly acknowledge and address limitations. | Limitations are clearly stated and adequately accounted for on an ongoing basis throughout the research project.
Ongoing monitoring and reflexivity | Researchers engage in ongoing reflection and adaptation of the research process, making changes as new obstacles, opportunities, circumstances, and/or knowledge surface. | Processes of reflection, individually and as a research team, are clearly documented throughout the research process, along with clear descriptions and justifications for any changes to the research process made as a result of reflection.
Disclosure of perspective | Actual, perceived, and potential bias is clearly stated and accounted for. This includes aspects of: researchers’ position, sources of support, financing, collaborations, partnerships, research mandate, assumptions, goals, and bounds placed on commissioned research. | The documentation identifies potential or actual bias, including aspects of researchers’ positions, sources of support, financing, collaborations, partnerships, research mandate, assumptions, goals, and bounds placed on commissioned research.
Effective collaboration | Appropriate processes are in place to ensure effective collaboration (e.g. clear and explicit roles and responsibilities agreed upon; transparent and appropriate decision-making structures). | The documentation explicitly discusses the collaboration process, with adequate demonstration that the opportunities and process for collaboration are appropriate to the context and the actors involved (e.g. clear and explicit roles and responsibilities agreed upon; transparent and appropriate decision-making structures).
Genuine and explicit inclusion | Inclusion of diverse actors in the research process is clearly defined. Representation of actors' perspectives, values, and unique contexts is ensured through adequate planning, explicit agreements, communal reflection, and reflexivity. | The documentation explains the range of participants and perspectives/cultural backgrounds involved, clearly describes what steps were taken to ensure the respectful inclusion of diverse actors/views, and explains the roles and contributions of all participants in the research process.
Research is ethical | Research adheres to standards of ethical conduct. | The documentation describes the ethical review process followed and, considering the full range of stakeholders, explicitly identifies any ethical challenges and how they were resolved.
Research builds social capacity | Change takes place in individuals, groups, and at the institutional level through shared learning. This can manifest as a change in knowledge, understanding, and/or perspective of participants in the research project. | There is evidence of observed changes in knowledge, behavior, understanding, and/or perspectives of research participants and/or stakeholders as a result of the research process and/or findings.
Contribution to knowledge | Research contributes to knowledge and understanding in academic and social realms in a timely, relevant, and significant way. | There is evidence that knowledge created through the project is being/has been used by intended audiences and end-users.
Practical application | Research has a practical application. The findings, process, and/or products of research are used. | There is evidence that innovations developed through the research and/or the research process have been (or will be) applied in the real world.
Significant outcome | Research contributes to the solution of the targeted problem or provides unexpected solutions to other problems. This can include a variety of outcomes: building societal capacity, learning, use of research products, and/or changes in behaviors. | There is evidence that the research has contributed to positive change in the problem context and/or innovations that have positive social or environmental impacts.

a Research problems are the particular topic, area of concern, question to be addressed, challenge, opportunity, or focus of the research activity. Research problems are related to the societal problem but take on a specific focus, or framing, within a societal problem.

b Problem context refers to the social and environmental setting(s) that gives rise to the research problem, including aspects of: location; culture; scale in time and space; social, political, economic, and ecological/environmental conditions; resources and societal capacity available; uncertainty, complexity, and novelty associated with the societal problem; and the extent of agency that is held by stakeholders ( Carew and Wickson 2010 ).

c Words such as ‘appropriate’, ‘suitable’, and ‘adequate’ are used deliberately to allow for quality criteria to be flexible and specific enough to the needs of individual research projects ( Oberg 2008 ).

d Research process refers to the series of decisions made and actions taken throughout the entire duration of the research project and encompassing all aspects of the research project.

e Reflexivity refers to an iterative process of formative, critical reflection on the important interactions and relationships between a research project’s process, context, and product(s).

f In an ex ante evaluation, ‘evidence of’ would be replaced with ‘potential for’.

There is a strong trend in the reviewed articles to recognize the need for appropriate measures of scientific quality (usually adapted from disciplinary antecedents), but also to consider broader sets of criteria regarding the societal significance and applicability of research, and the need for engagement and representation of stakeholder values and knowledge. Cash et al. (2002) usefully conceptualize three key aspects of effective sustainability research: salience (or relevance), credibility, and legitimacy. These are presented as necessary attributes for research to successfully produce transferable, useful information that can cross boundaries between disciplines, across scales, and between science and society. Many of the papers also refer to the principle that high-quality TDR should be effective in contributing to the solution of problems. These four principles are discussed in the following sections.

4.1.1 Relevance

Relevance is the importance, significance, and usefulness of the research project's objectives, process, and findings to the problem context and to society. This includes the appropriateness of the timing of the research, the questions being asked, the outputs, and the scale of the research in relation to the societal problem being addressed. Good-quality TDR addresses important social/environmental problems and produces knowledge that is useful for decision making and problem solving (Cash et al. 2002; Klein 2006). As Erno-Kjolhede and Hansson (2011: 140) explain, quality ‘is first and foremost about creating results that are applicable and relevant for the users of the research’. Researchers must demonstrate an in-depth knowledge of, and ongoing engagement with, the problem context in which their research takes place (Wickson, Carew and Russell 2006; Stige, Malterud and Midtgarden 2009; Mitchell and Willetts 2009). From the early steps of problem formulation and research design through to the appropriate and effective communication of research findings, the applicability and relevance of the research to the societal problem must be explicitly stated and incorporated.

4.1.2 Credibility

Credibility refers to whether the research findings are robust and the knowledge produced is scientifically trustworthy. This includes clear demonstration that the data are adequate, with well-presented methods and logical interpretations of findings. High-quality research is authoritative, transparent, defensible, believable, and rigorous. This is the traditional purview of science, and traditional disciplinary criteria can be applied in TDR evaluation to an extent. Additional and modified criteria are needed, however, to address the integration of epistemologies and methodologies, the development of novel methods through collaboration, the broad preparation and competencies required to carry out the research, and the need for reflection and adaptation when operating in complex systems. Having researchers actively engaged in the problem context and including extra-scientific actors in the research process helps to achieve relevance and legitimacy; it also adds complexity and heightens the requirements for transparency, reflection, and reflexivity to ensure that objective, credible research is carried out.

Active reflexivity is a criterion of credibility of TDR that may seem to contradict more rigid disciplinary methodological traditions (Carew and Wickson 2010). Practitioners of TDR recognize that credible work in these problem-oriented fields requires active reflexivity, epitomized by ongoing learning, flexibility, and adaptation to ensure the research approach and objectives remain relevant and fit for purpose (Lincoln 1995; Bergmann et al. 2005; Wickson, Carew and Russell 2006; Mitchell and Willetts 2009; Andrén 2010; Carew and Wickson 2010; Wickson and Carew 2014). Changes made during the research process must be justified and reported transparently and explicitly to maintain credibility.

The need for critical reflection on potential bias and limitations becomes more important for maintaining the credibility of research-in-context (Lincoln 1995; Bergmann et al. 2005; Mitchell and Willetts 2009; Stige, Malterud and Midtgarden 2009). Transdisciplinary researchers must maintain a high level of objectivity and transparency while actively engaging in the problem context. This point demonstrates the fine balance between different aspects of quality, in this case relevance and credibility, and the need to be aware of tensions and to seek complementarities (Cash et al. 2002).

4.1.3 Legitimacy

Legitimacy refers to whether the research process is perceived as fair and ethical by end-users: is it acceptable and trustworthy in the eyes of those who will use it? This requires the appropriate inclusion and consideration of diverse values and interests, and the ethical and fair representation of all involved. Legitimacy may be achieved in part through the genuine inclusion of stakeholders in the research process. Whereas credibility refers to the technical aspects of sound research, legitimacy deals with the sociopolitical aspects of the knowledge-production process and the products of research. Do stakeholders trust the researchers and the research process, including funding sources and other sources of potential bias? Do they feel represented? Legitimate TDR ‘considers appropriate values, concerns, and perspectives of different actors’ (Cash et al. 2002: 2) and incorporates these perspectives into the research process through collaboration and mutual learning (Bergmann et al. 2005; Chataway, Smith and Wield 2007; Andrén 2010; Huutoniemi 2010). A fair and ethical process is important to uphold standards of quality in all research; however, there are additional considerations that are unique to TDR.

Because TDR happens in-context and often in collaboration with societal actors, the disclosure of researcher perspective and a transparent statement of all partnerships, financing, and collaboration are vital to ensure an unbiased research process (Lincoln 1995; Defila and Di Giulio 1999; Boaz and Ashby 2003; Barker and Pistrang 2005; Bergmann et al. 2005). The disclosure of perspective has both internal and external aspects: on one hand, ensuring the researchers themselves explicitly reflect on and account for their own position, potential sources of bias, and limitations throughout the process; on the other, making the process transparent to those outside the research group, who can then judge its legitimacy based on their own perspective of fairness (Cash et al. 2002).

TDR includes the engagement of societal actors along a continuum of participation, from consultation to co-creation of knowledge (Brandt et al. 2013). Regardless of the depth of participation, all processes that engage societal actors must ensure that inclusion/engagement is genuine, roles are explicit, and processes for effective and fair collaboration are present (Bergmann et al. 2005; Wickson, Carew and Russell 2006; Spaapen, Dijstelbloem and Wamelink 2007; Hellstrom 2012). Important considerations include: the accurate representation of those involved; explicit and agreed-upon roles and contributions of actors; and adequate planning and procedures to ensure all values, perspectives, and contexts are adequately and appropriately incorporated. Mitchell and Willetts (2009) consider cultural competence a key criterion that can support researchers in navigating diverse epistemological perspectives. This is similar to what Morrow (2005) terms ‘social validity’, a criterion that asks researchers to be responsive to and critically aware of the diversity of perspectives and cultures influenced by their research. Several authors highlight that in order to develop this critical awareness of the diversity of cultural paradigms operating within a problem situation, researchers should practice responsive, critical, and/or communal reflection (Bergmann et al. 2005; Wickson, Carew and Russell 2006; Mitchell and Willetts 2009; Carew and Wickson 2010). Reflection and adaptation are important quality criteria that cut across multiple principles and facilitate learning throughout the process, a key foundation of TD inquiry.

4.1.4 Effectiveness

We define effective research as research that contributes to positive change in the social, economic, and/or environmental problem context. Transdisciplinary inquiry is rooted in the objective of solving real-world problems (Klein 2008; Carew and Wickson 2010) and must have the potential to (ex ante) or actually (ex post) make a difference if it is to be considered of high quality (Erno-Kjolhede and Hansson 2011). Potential research effectiveness can be indicated and assessed at the proposal stage and during the research process through: a clear and stated intention to address and contribute to a societal problem; the establishment of the research process and objectives in relation to the problem context; and continuous reflection on the usefulness of the research findings and products to the problem (Bergmann et al. 2005; Lahtinen et al. 2005; de Jong et al. 2011).

Assessing research effectiveness ex post remains a major challenge, especially in complex transdisciplinary approaches. Conventional and widely used measures of ‘scientific impact’ count outputs such as journal articles and other publications, and citations of those outputs (e.g. the h-index or i10-index). While these are useful indicators of scholarly influence, they are insufficient and inappropriate measures of research effectiveness where research aims to contribute to social learning and change. We need to also (or alternatively) focus on other kinds of research and scholarship outputs and outcomes, and the social, economic, and environmental impacts that may result.

For many authors, contributing to learning and the building of societal capacity are central goals of TDR (Defila and Di Giulio 1999; Spaapen, Dijstelbloem and Wamelink 2007; Carew and Wickson 2010; Erno-Kjolhede and Hansson 2011; Hellstrom 2011), and so are considered part of TDR effectiveness. Learning can be characterized as changes in knowledge, attitudes, or skills, and can be assessed directly or through observed behavioral changes and network and relationship development. Some evaluation methodologies (e.g. Outcome Mapping (Earl, Carden and Smutylo 2001)) specifically measure these kinds of changes. Other evaluation methodologies consider the role of research within complex systems and assess effectiveness in terms of contributions to changes in policy and practice and the resulting social, economic, and environmental benefits (ODI 2004, 2012; White and Phillips 2012; Mayne et al. 2013).

4.2 TDR quality criteria

TDR quality criteria and their definitions (explicit or implicit) were extracted from each article and summarized in an Excel database. These criteria were classified into themes corresponding to the four principles identified above, then sorted and refined to develop sets of criteria that are comprehensive, mutually exclusive, and representative of the ideas presented in the reviewed articles. Within each principle, the criteria are organized roughly in the sequence of a typical project cycle (e.g. with research design following problem identification and preceding implementation). Definitions of each criterion were developed to reflect the concepts found in the literature, then tested and refined iteratively to improve clarity. Rubric statements were formulated based on the literature and our own experience.

The complete set of principles, criteria, and definitions is presented as the TDR Quality Assessment Framework ( Table 3 ).

4.3 Guidance on the application of the framework

4.3.1 Timing

Most criteria can be applied at each stage of the research process (ex ante, mid-term, and ex post), using appropriate interpretations at each stage. Ex ante (i.e. proposal) assessment should focus on a project's explicitly stated intentions and approaches to address the criteria. Mid-term indicators will focus on the research process and whether or not it is being implemented in a way that will satisfy the criteria. Ex post assessment should consider whether the research has been done appropriately for the purpose and whether the desired results have been achieved.

4.3.2 New meanings for familiar terms

Many of the terms used in the framework are extensions of disciplinary criteria and share the same or similar names, with similar but nuanced meanings. The principles and criteria used here extend beyond their disciplinary antecedents to include new concepts and understandings that encapsulate the unique characteristics and needs of TDR and allow quality in TDR to be defined and evaluated. This is especially true of the criteria related to credibility, which are analogous to traditional disciplinary criteria but place much stronger emphasis on grounding in both the scientific and the social/environmental contexts. We urge readers to pay close attention to the definitions provided in Table 3 as well as the detailed descriptions of the principles in Section 4.1.

4.3.3 Using the framework

The TDR quality framework (Table 3) is designed to assess TDR according to a project's purpose; i.e. the criteria must be interpreted with respect to the context and goals of an individual research activity. The framework lists the main criteria synthesized from the literature and our experience, organized within the principles of relevance, credibility, legitimacy, and effectiveness. The table presents the criteria within each principle, ordered to approximate a typical process of identifying a research problem and designing and implementing research. We recognize that the actual process in any given project will be iterative and will not necessarily follow this sequence, but it provides a logical flow. A concise definition in the second column explains each criterion. The third column provides a rubric statement, phrased to be applied when the research has been completed. In most cases, the same statement can be used at the proposal stage with a simple tense change or other minor grammatical revision, except for the criteria relating to effectiveness. As discussed above, assessing effectiveness in terms of outcomes and/or impact requires evaluation research; at the proposal stage, it is only possible to assess potential effectiveness.

Many rubrics offer a set of statements for each criterion representing progressively higher levels of achievement, and the evaluator is asked to select the best match. In practice, this often results in vague and relative statements of merit that are difficult to apply. We have instead opted to present a single rubric statement in absolute terms for each criterion; the assessor then rates how well a project satisfies each criterion on a simple three-point Likert scale. If a project fully satisfies a criterion, that is, if there is evidence that the criterion has been addressed in a way that is coherent, explicit, sufficient, and convincing, it is scored 2 for that criterion: the evaluator is persuaded that the project addressed the criterion in an intentional, appropriate, explicit, and thorough way. A score of 1 is given when there is some evidence that the criterion was considered, but it was not addressed completely, intentionally, or satisfactorily; for example, when a criterion is explicitly discussed but poorly addressed, or when there is some indication that it has been considered and partially addressed but not treated explicitly, thoroughly, or adequately. A score of 0 indicates that there is no evidence that the criterion was addressed, or that it was addressed in a way that was misguided or inappropriate.
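The scoring scheme described above can be sketched as a small data structure. This is only an illustrative sketch: the criterion names are a small subset of Table 3, and the per-principle averaging is one possible way to summarize scores, not something prescribed by the framework.

```python
# Illustrative sketch of the 0-2 rubric scoring described above.
# The principle/criterion names are a placeholder subset of Table 3;
# averaging scores within each principle is an assumed summary step.

FRAMEWORK = {
    "relevance": ["addresses a socially relevant problem", "engagement with problem context"],
    "credibility": ["adequate data and methods", "active reflexivity"],
    "legitimacy": ["genuine and explicit inclusion", "research is ethical"],
    "effectiveness": ["contribution to knowledge", "practical application"],
}

VALID_SCORES = {0, 1, 2}  # 0 = no evidence, 1 = partial, 2 = fully satisfied


def summarize(scores: dict[str, int]) -> dict[str, float]:
    """Average the 0-2 criterion scores within each principle."""
    summary = {}
    for principle, criteria in FRAMEWORK.items():
        values = [scores[c] for c in criteria]
        if any(v not in VALID_SCORES for v in values):
            raise ValueError("scores must be 0, 1, or 2")
        summary[principle] = sum(values) / len(values)
    return summary


# Hypothetical evaluation of a single project against the subset of criteria.
project_scores = {
    "addresses a socially relevant problem": 2,
    "engagement with problem context": 2,
    "adequate data and methods": 1,
    "active reflexivity": 1,
    "genuine and explicit inclusion": 2,
    "research is ethical": 2,
    "contribution to knowledge": 1,
    "practical application": 0,
}

print(summarize(project_scores))
```

The per-principle averages make it easy to compare projects within a set, as was done with the master's theses, while the raw 0/1/2 scores preserve the criterion-level detail.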

It is critical that the evaluation be done in context, keeping in mind the purpose, objectives, and resources of the project, as well as other contextual information, such as the intended purpose of grant funding or relevant partnerships. Each project is unique in its complexities; what is sufficient or adequate for one criterion in one research project may be insufficient or inappropriate for another. Words such as ‘appropriate’, ‘suitable’, and ‘adequate’ are used deliberately to encourage application of the criteria to suit the needs of individual research projects (Oberg 2008). Evaluators must take the objectives of the research project and the problem context within which it is carried out as the benchmark for evaluation. For example, we tested the framework with RRU master's theses. These are typically small projects of limited scope, carried out by a single researcher. Expectations for ‘effective communication’, ‘competencies’, or ‘effective collaboration’ are much different in these kinds of projects than in a multi-year, multi-partner CIFOR project. All criteria should be evaluated through the lens of the stated research objectives, research goals, and context.

The systematic review identified relevant articles from a diverse literature that nonetheless share a strong central focus. Collectively, they highlight the complexity of contemporary social and environmental problems and emphasize that addressing such issues requires combinations of new knowledge and innovation, action, and engagement. Traditional disciplinary research has often failed to provide solutions because it cannot adequately cope with complexity. New forms of research are proliferating: crossing disciplinary and academic boundaries, integrating methodologies, and engaging a broader range of research participants as a way to make research more relevant and effective. In theory, such approaches offer great potential to contribute to transformative change. However, because these approaches are new, multidimensional, complex, and often unique, it has been difficult to know what works, how, and why. In the absence of the kinds of methodological and quality standards that guide disciplinary research, there are no generally agreed criteria for evaluating such research.

Criteria are needed to guide and help ensure that TDR is of high quality, to inform the teaching and learning of new researchers, and to encourage and support the further development of transdisciplinary approaches. The lack of a standard, broadly applicable framework for evaluating quality in TDR is perceived to cause an implicit or explicit devaluation of high-quality TDR, or may prevent quality TDR from being done at all. There is a demonstrated need for an operationalized understanding of quality that addresses the characteristics, contributions, and challenges of TDR. The reviewed articles approach the topic from different perspectives and fields of study, using different terminology for similar concepts (or the same terminology for different concepts), each with its own way of organizing and categorizing the dimensions and criteria of quality. We have synthesized and organized these concepts as key TDR principles and criteria in a TDR Quality Framework, presented as an evaluation rubric. We tested the framework on a set of master's theses and found it broadly applicable, usable, and useful both for analyzing individual projects and for comparing projects within the set. We anticipate that further testing with a wider range of projects will help refine and improve the definitions and rubric statements. We found that the three-point Likert scale (0–2) offered sufficient variability for our purposes, and that rating is less subjective than with relative rubric statements. It may be possible to increase rating precision with more points on the scale, to increase sensitivity for comparison purposes, for example in a review of proposals for a particular grant application.

Many of the articles we reviewed emphasize the importance of the evaluation process itself. The formative, developmental role of evaluation in TDR is seen as essential to the goals of mutual learning as well as to ensure that research remains responsive and adaptive to the problem context. In order to adequately evaluate quality in TDR, the process, including who carries out the evaluations, when, and in what manner, must be revised to be suitable to the unique characteristics and objectives of TDR. We offer this review and synthesis, along with a proposed TDR quality evaluation framework, as a contribution to an important conversation. We hope that it will be useful to researchers and research managers to help guide research design, implementation and reporting, and to the community of research organizations, funders, and society at large. As underscored in the literature review, there is a need for an adapted research evaluation process that will help advance problem-oriented research in complex systems, ultimately to improve research effectiveness.

This work was supported by funding from the Canada Research Chairs program. Funding support from the Canadian Social Sciences and Humanities Research Council (SSHRC) and technical support from the Evidence Based Forestry Initiative of the Centre for International Forestry Research (CIFOR), funded by UK DfID are also gratefully acknowledged.

Supplementary data are available online.

The authors thank Barbara Livoreil and Stephen Dovers for valuable comments and suggestions on the protocol and Gillian Petrokofsky for her review of the protocol and a draft version of the manuscript. Two anonymous reviewers and the editor provided insightful critique and suggestions in two rounds that have helped to substantially improve the article.

Conflict of interest statement . None declared.

1. ‘Stakeholders’ refers to individuals and groups of societal actors who have an interest in the issue or problem that the research seeks to address.

2. The terms ‘quality’ and ‘excellence’ are often used in the literature with similar meaning. Technically, ‘excellence’ is a relative concept, referring to the superiority of a thing compared to other things of its kind. Quality is an attribute or a set of attributes of a thing. We are interested in what these attributes are or should be in high-quality research. Therefore, the term ‘quality’ is used in this discussion.

3. The terms ‘science’ and ‘research’ are not always clearly distinguished in the literature. We take the position that ‘science’ is a more restrictive term that is properly applied to systematic investigations using the scientific method. ‘Research’ is a broader term for systematic investigations using a range of methods, including but not restricted to the scientific method. We use the term ‘research’ in this broad sense.

Aagaard-Hansen J., Svedin U. (2009) ‘Quality Issues in Cross-disciplinary Research: Towards a Two-pronged Approach to Evaluation’, Social Epistemology, 23/2: 165–76. DOI: 10.1080/02691720902992323

Andrén S. (2010) ‘A Transdisciplinary, Participatory and Action-Oriented Research Approach: Sounds Nice but What do you Mean?’ [unpublished working paper]. Human Ecology Division: Lund University, 1–21. <https://lup.lub.lu.se/search/publication/1744256>

Australian Research Council (ARC) (2012) ERA 2012 Evaluation Handbook: Excellence in Research for Australia. Australia: ARC. <http://www.arc.gov.au/pdf/era12/ERA%202012%20Evaluation%20Handbook_final%20for%20web_protected.pdf>

Balsiger P. W. (2004) ‘Supradisciplinary Research Practices: History, Objectives and Rationale’, Futures, 36/4: 407–21.

Bantilan M. C. et al. (2004) ‘Dealing with Diversity in Scientific Outputs: Implications for International Research Evaluation’, Research Evaluation, 13/2: 87–93.

Barker C., Pistrang N. (2005) ‘Quality Criteria under Methodological Pluralism: Implications for Conducting and Evaluating Research’, American Journal of Community Psychology, 35/3-4: 201–12.

Bergmann M. et al. (2005) Quality Criteria of Transdisciplinary Research: A Guide for the Formative Evaluation of Research Projects. Central report of Evalunet – Evaluation Network for Transdisciplinary Research. Frankfurt am Main, Germany: Institute for Social-Ecological Research. <http://www.isoe.de/ftp/evalunet_guide.pdf>

Boaz A., Ashby D. (2003) Fit for Purpose? Assessing Research Quality for Evidence Based Policy and Practice.

Boix-Mansilla V. (2006a) ‘Symptoms of Quality: Assessing Expert Interdisciplinary Work at the Frontier: An Empirical Exploration’, Research Evaluation, 15/1: 17–29.

Boix-Mansilla V. (2006b) ‘Conference Report: Quality Assessment in Interdisciplinary Research and Education’, Research Evaluation, 15/1: 69–74.

Bornmann L. (2013) ‘What is Societal Impact of Research and How can it be Assessed? A Literature Survey’, Journal of the American Society for Information Science and Technology, 64/2: 217–33.

Brandt P. et al. (2013) ‘A Review of Transdisciplinary Research in Sustainability Science’, Ecological Economics, 92: 1–15.

Cash D., Clark W. C., Alcock F., Dickson N. M., Eckley N., Jäger J. (2002) Salience, Credibility, Legitimacy and Boundaries: Linking Research, Assessment and Decision Making. KSG Working Papers Series RWP02-046. Available at SSRN: <http://ssrn.com/abstract=372280>

Carew A. L., Wickson F. (2010) ‘The TD Wheel: A Heuristic to Shape, Support and Evaluate Transdisciplinary Research’, Futures, 42/10: 1146–55.

Collaboration for Environmental Evidence (CEE) (2013) Guidelines for Systematic Review and Evidence Synthesis in Environmental Management, Version 4.2. Environmental Evidence. <www.environmentalevidence.org/Documents/Guidelines/Guidelines4.2.pdf>

Chandler J. (2014) Methods Research and Review Development Framework: Policy, Structure, and Process. <http://methods.cochrane.org/projects-developments/research>

Chataway J., Smith J., Wield D. (2007) ‘Shaping Scientific Excellence in Agricultural Research’, International Journal of Biotechnology, 9/2: 172–87.

Clark W. C., Dickson N. (2003) ‘Sustainability Science: The Emerging Research Program’, PNAS, 100/14: 8059–61.

Consultative Group on International Agricultural Research (CGIAR) (2011) A Strategy and Results Framework for the CGIAR. <http://library.cgiar.org/bitstream/handle/10947/2608/Strategy_and_Results_Framework.pdf?sequence=4>

Cloete N. (1997) ‘Quality: Conceptions, Contestations and Comments’, African Regional Consultation Preparatory to the World Conference on Higher Education, Dakar, Senegal, 1–4 April 1997.

Defila R., Di Giulio A. (1999) ‘Evaluating Transdisciplinary Research’, Panorama: Swiss National Science Foundation Newsletter, 1: 4–27. <www.ikaoe.unibe.ch/forschung/ip/Specialissue.Pano.1.99.pdf>

Donovan C. (2008) ‘The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research. Reforming the Evaluation of Research’, New Directions for Evaluation, 118: 47–60.

Earl S., Carden F., Smutylo T. (2001) Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa, ON: International Development Research Center.

Ernø-Kjølhede E., Hansson F. (2011) ‘Measuring Research Performance during a Changing Relationship between Science and Society’, Research Evaluation, 20/2: 130–42.

Feller I. (2006) ‘Assessing Quality: Multiple Actors, Multiple Settings, Multiple Criteria: Issues in Assessing Interdisciplinary Research’, Research Evaluation, 15/1: 5–15.

Gaziulusoy A. İ., Boyle C. (2013) ‘Proposing a Heuristic Reflective Tool for Reviewing Literature in Transdisciplinary Research for Sustainability’, Journal of Cleaner Production, 48: 139–47.

Gibbons M. et al. (1994) The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage Publications.

Hellstrom T. (2011) ‘Homing in on Excellence: Dimensions of Appraisal in Center of Excellence Program Evaluations’, Evaluation, 17/2: 117–31.

Hellstrom T. (2012) ‘Epistemic Capacity in Research Environments: A Framework for Process Evaluation’, Prometheus, 30/4: 395–409.

Hemlin S., Rasmussen S. B. (2006) ‘The Shift in Academic Quality Control’, Science, Technology & Human Values, 31/2: 173–98.

Hessels L. K., Van Lente H. (2008) ‘Re-thinking New Knowledge Production: A Literature Review and a Research Agenda’, Research Policy, 37/4: 740–60.

Huutoniemi K. (2010) ‘Evaluating Interdisciplinary Research’, in Frodeman R., Klein J. T., Mitcham C. (eds) The Oxford Handbook of Interdisciplinarity, pp. 309–20. Oxford: Oxford University Press.

de Jong S. P. L. et al. (2011) ‘Evaluation of Research in Context: An Approach and Two Cases’, Research Evaluation, 20/1: 61–72.

Jahn T., Keil F. (2015) ‘An Actor-Specific Guideline for Quality Assurance in Transdisciplinary Research’, Futures, 65: 195–208.

Kates R. (2000) ‘Sustainability Science’, World Academies Conference Transition to Sustainability in the 21st Century, 18 May 2000, Tokyo, Japan.

Klein J. T. (2006) ‘Afterword: The Emergent Literature on Interdisciplinary and Transdisciplinary Research Evaluation’, Research Evaluation, 15/1: 75–80.

Klein J. T. (2008) ‘Evaluation of Interdisciplinary and Transdisciplinary Research: A Literature Review’, American Journal of Preventive Medicine, 35/2 Supplement: S116–23. DOI: 10.1016/j.amepre.2008.05.010

Royal Netherlands Academy of Arts and Sciences, Association of Universities in the Netherlands, Netherlands Organization for Scientific Research (KNAW) (2009) Standard Evaluation Protocol 2009–2015: Protocol for Research Assessment in the Netherlands. Netherlands: KNAW. <www.knaw.nl/sep>

Komiyama H., Takeuchi K. (2006) ‘Sustainability Science: Building a New Discipline’, Sustainability Science, 1: 1–6.

Lahtinen E. et al. (2005) ‘The Development of Quality Criteria for Research: A Finnish Approach’, Health Promotion International, 20/3: 306–15.

Lang D. J. et al. (2012) ‘Transdisciplinary Research in Sustainability Science: Practice, Principles, and Challenges’, Sustainability Science, 7/S1: 25–43.

Lincoln Y. S. (1995) ‘Emerging Criteria for Quality in Qualitative and Interpretive Research’, Qualitative Inquiry, 1/3: 275–89.

Mayne J., Stern E. (2013) Impact Evaluation of Natural Resource Management Research Programs: A Broader View. Canberra: Australian Centre for International Agricultural Research.

Meyrick J. (2006) ‘What is Good Qualitative Research? A First Step Towards a Comprehensive Approach to Judging Rigour/Quality’, Journal of Health Psychology, 11/5: 799–808.

Mitchell C. A., Willetts J. R. (2009) ‘Quality Criteria for Inter- and Trans-disciplinary Doctoral Research Outcomes’, prepared for ALTC Fellowship: Zen and the Art of Transdisciplinary Postgraduate Studies. Sydney: Institute for Sustainable Futures, University of Technology.

Morrow S. L. (2005) ‘Quality and Trustworthiness in Qualitative Research in Counseling Psychology’, Journal of Counseling Psychology, 52/2: 250–60.

Nowotny H., Scott P., Gibbons M. (2001) Re-Thinking Science. Cambridge: Polity.

Nowotny H., Scott P., Gibbons M. (2003) ‘‘Mode 2’ Revisited: The New Production of Knowledge’, Minerva, 41: 179–94.

Öberg G. (2008) ‘Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground’, Higher Education, 57/4: 405–15.

Ozga J. (2007) ‘Co-production of Quality in the Applied Education Research Scheme’, Research Papers in Education, 22/2: 169–81.

Ozga J. (2008) ‘Governing Knowledge: Research Steering and Research Quality’, European Educational Research Journal, 7/3: 261–72.

OECD ( 2012 ) Frascati Manual 6th ed. < http://www.oecd.org/innovation/inno/frascatimanualproposedstandardpracticeforsurveysonresearchandexperimentaldevelopment6thedition >

Overseas Development Institute (ODI) ( 2004 ) ‘Bridging Research and Policy in International Development: An Analytical and Practical Framework’, ODI Briefing Paper. < http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/198.pdf >

Overseas Development Institute (ODI) . ( 2012 ) RAPID Outcome Assessment Guide . < http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/7815.pdf >

Pullin A. S. Stewart G. B. ( 2006 ) ‘Guidelines for Systematic Review in Conservation and Environmental Management’ , Conservation Biology , 20 / 6 : 1647 – 56 .

Research Excellence Framework (REF) . ( 2011 ) Research Excellence Framework 2014: Assessment Framework and Guidance on Submissions. Reference REF 02.2011. UK: REF. < http://www.ref.ac.uk/pubs/2011-02/ >

Scott A . ( 2007 ) ‘Peer Review and the Relevance of Science’ , Futures , 39 / 7 : 827 – 45 .

Spaapen J. Dijstelbloem H. Wamelink F. ( 2007 ) Evaluating Research in Context: A Method for Comprehensive Assessment . Netherlands: Consultative Committee of Sector Councils for Research and Development. < http://www.qs.univie.ac.at/fileadmin/user_upload/qualitaetssicherung/PDF/Weitere_Aktivit%C3%A4ten/Eric.pdf >

Spaapen J. Van Drooge L. ( 2011 ) ‘Introducing “Productive Interactions” in Social Impact Assessment’ , Research Evaluation , 20 : 211 – 18 .

Stige B. Malterud K. Midtgarden T. ( 2009 ) ‘Toward an Agenda for Evaluation of Qualitative Research’ , Qualitative Health Research , 19 / 10 : 1504 – 16 .

td-net ( 2014 ) td-net. < www.transdisciplinarity.ch/e/Bibliography/new.php >

Tertiary Education Commission (TEC) . ( 2012 ) Performance-based Research Fund: Quality Evaluation Guidelines 2012. New Zealand: TEC. < http://www.tec.govt.nz/Documents/Publications/PBRF-Quality-Evaluation-Guidelines-2012.pdf >

Tijssen R. J. W. ( 2003 ) ‘Quality Assurance: Scoreboards of Research Excellence’ , Research Evaluation , 12 : 91 – 103 .

White H. Phillips D. ( 2012 ) ‘Addressing Attribution of Cause and Effect in Small n Impact Evaluations: Towards an Integrated Framework’. Working Paper 15. New Delhi: International Initiative for Impact Evaluation .

Wickson F. Carew A. ( 2014 ) ‘Quality Criteria and Indicators for Responsible Research and Innovation: Learning from Transdisciplinarity’ , Journal of Responsible Innovation , 1 / 3 : 254 – 73 .

Wickson F. Carew A. Russell A. W. ( 2006 ) ‘Transdisciplinary Research: Characteristics, Quandaries and Quality,’ Futures , 38 / 9 : 1046 – 59



Assessing the quality of research

This article has a correction; please see Errata, September 9, 2004.

Paul Glasziou, reader, Department of Primary Health Care, University of Oxford, Oxford OX3 7LF
Jan Vandenbroucke, professor of clinical epidemiology, Leiden University Medical School, Leiden 9600 RC, Netherlands
Iain Chalmers, editor, James Lind Library, James Lind Initiative, Oxford OX2 7LG

Correspondence to: P Glasziou
Accepted 20 October 2003

Inflexible use of evidence hierarchies confuses practitioners and irritates researchers. So how can we improve the way we assess research?

The widespread use of hierarchies of evidence that grade research studies according to their quality has helped to raise awareness that some forms of evidence are more trustworthy than others. This is clearly desirable. However, the simplifications involved in creating and applying hierarchies have also led to misconceptions and abuses. In particular, criteria designed to guide inferences about the main effects of treatment have been uncritically applied to questions about aetiology, diagnosis, prognosis, or adverse effects. So should we assess evidence the way Michelin guides assess hotels and restaurants? We believe five issues should be considered in any revision or alternative approach to helping practitioners to find reliable answers to important clinical questions.

Different types of question require different types of evidence

Ever since two American social scientists introduced the concept in the early 1960s,1 hierarchies have been used almost exclusively to determine the effects of interventions. This initial focus was appropriate but has also engendered confusion. Although interventions are central to clinical decision making, practice relies on answers to a wide variety of types of clinical questions, not just the effects of interventions.2 Other hierarchies might be necessary to answer questions about aetiology, diagnosis, disease frequency, prognosis, and adverse effects.3 Thus, although a systematic review of randomised trials would be appropriate for answering questions about the main effects of a treatment, it would be ludicrous to attempt to use it to ascertain the relative accuracy of computerised versus human reading of cervical smears, the natural course of prion diseases in humans, the effect of carriership of a mutation on the risk of venous thrombosis, or the rate of vaginal adenocarcinoma in the daughters of pregnant women given diethylstilboestrol.4

To answer their everyday questions, practitioners …


What makes a high quality clinical research paper?


The quality of a research paper depends primarily on the quality of the research study it reports. However, there is also much that authors can do to maximise the clarity and usefulness of their papers. Journals' instructions for authors often focus on the format, style, and length of articles but do not always emphasise the need to clearly explain the work's science and ethics: so this review reminds researchers that transparency is important too. The research question should be stated clearly, along with an explanation of where it came from and why it is important. The study methods must be reported fully and, where appropriate, in line with an evidence based reporting guideline such as the CONSORT statement for randomised controlled trials. If the study was a trial the paper should state where and when the study was registered and state its registration identifier. Finally, any relevant conflicts of interest should be declared.



Systematic Reviews: Step 6: Assess Quality of Included Studies

Created by health science librarians.


About Step 6: Assess Quality of Included Studies

In step 6 you will evaluate the articles you included in your review for quality and bias. To do so, you will:

  • Use quality assessment tools to grade each article.
  • Create a summary of the quality of literature included in your review.

This page has links to quality assessment tools you can use to evaluate different study types. Librarians can help you find widely used tools to evaluate the articles in your review.
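The two tasks listed above, grading each article with a quality assessment tool and then summarizing the overall quality of the literature, can be sketched in code. This is a minimal illustration only; the study names, bias domains, and judgements (low / some concerns / high, echoing the Cochrane RoB 2.0 categories) are invented examples, not part of any real review.

```python
# Illustrative sketch: tally per-domain risk-of-bias judgements across
# included studies to build a quality summary for the review.
from collections import Counter

# Invented example data: one dict of domain -> judgement per study.
assessments = {
    "Study A": {"randomization": "low", "missing outcome data": "some concerns"},
    "Study B": {"randomization": "high", "missing outcome data": "low"},
    "Study C": {"randomization": "low", "missing outcome data": "low"},
}

def summarize(assessments):
    """Count how many studies received each judgement, per bias domain."""
    summary = {}
    for domains in assessments.values():
        for domain, judgement in domains.items():
            summary.setdefault(domain, Counter())[judgement] += 1
    return summary

for domain, counts in summarize(assessments).items():
    print(f"{domain}: {dict(counts)}")
```

A table like this (judgement counts per domain) is one common way to report quality assessment results in the finished review.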

Reporting your review with PRISMA

If you reach the quality assessment step and choose to exclude articles for any reason, update the number of included and excluded studies in your PRISMA flow diagram.

Managing your review with Covidence

Covidence includes the Cochrane Risk of Bias 2.0 quality assessment template, but you can also create your own custom quality assessment template.

How a librarian can help with Step 6

A librarian can advise you on:

  • What the quality assessment or risk of bias stage of the review entails
  • How to choose an appropriate quality assessment tool
  • Best practices for reporting quality assessment results in your review

After the screening process is complete, the systematic review team must assess each article for quality and bias. There are various types of bias; the Cochrane Handbook outlines the main ones.

The most important thing to remember when choosing a quality assessment tool is to pick one that was created and validated to assess the study design(s) of your included articles.

For example, if one item in the inclusion criteria of your systematic review is to include only randomized controlled trials (RCTs), then you need to pick a quality assessment tool specifically designed for RCTs (for example, the Cochrane Risk of Bias tool).
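The matching rule above can be expressed as a simple lookup from study design to a tool validated for that design. This is a hypothetical sketch: the mapping below is one illustrative pairing of designs with tools named on this page, not a definitive recommendation.

```python
# Hypothetical mapping from study design to a quality assessment tool
# created and validated for that design (tool names from this guide).
TOOLS_BY_DESIGN = {
    "randomized controlled trial": "Cochrane Risk of Bias 2.0",
    "cohort": "Newcastle-Ottawa Scale",
    "case-control": "Newcastle-Ottawa Scale",
    "diagnostic accuracy": "QUADAS-2",
    "systematic review": "AMSTAR",
}

def pick_tool(study_design):
    """Return a tool validated for this design, or fail loudly."""
    design = study_design.strip().lower()
    if design not in TOOLS_BY_DESIGN:
        raise ValueError(f"No validated tool listed for: {study_design}")
    return TOOLS_BY_DESIGN[design]

print(pick_tool("Randomized Controlled Trial"))  # Cochrane Risk of Bias 2.0
```

Failing loudly on an unlisted design mirrors the rule in the text: never reuse a tool on a study type it was not validated for.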

Once you have gathered your included studies, you will need to appraise the evidence for its relevance, reliability, validity, and applicability.

Ask questions like:

Relevance:

  • Is the research method/study design appropriate for answering the research question?
  • Are specific inclusion/exclusion criteria used?

Reliability:

  • Is the effect size practically relevant? How precise is the estimate of the effect? Were confidence intervals given?

Validity:

  • Were there enough subjects in the study to establish that the findings did not occur by chance?
  • Were subjects randomly allocated? Were the groups comparable? If not, could this have introduced bias?
  • Are the measurements/tools validated by other studies?
  • Could there be confounding factors?

Applicability:

  • Can the results be applied to my organization and my patient?

What are Quality Assessment tools?

Quality Assessment tools are questionnaires created to help you assess the quality of a variety of study designs.  Depending on the types of studies you are analyzing, the questionnaire will be tailored to ask specific questions about the methodology of the study.  There are appraisal tools for most kinds of study designs.  You should choose a Quality Assessment tool that matches the types of studies you expect to see in your results.  If you have multiple types of study designs, you may wish to use several tools from one organization, such as the CASP or LEGEND tools, as they have a range of assessment tools for many study designs.

Click on a study design below to see some examples of quality assessment tools for that type of study.

Randomized Controlled Trials (RCTs)

  • Cochrane Risk of Bias (RoB) 2.0 Tool Templates are tailored to randomized parallel-group trials, cluster-randomized parallel-group trials (including stepped-wedge designs), and randomized cross-over trials and other matched designs.
  • CASP- Randomized Controlled Trial Appraisal Tool A checklist for RCTs created by the Critical Appraisal Skills Programme (CASP)
  • The Jadad Scale A scale that assesses the quality of published clinical trials based on methods relevant to random assignment, double blinding, and the flow of patients
  • CEBM-RCT A critical appraisal tool for RCTs from the Centre for Evidence Based Medicine (CEBM)
  • Checklist for Randomized Controlled Trials (JBI) A critical appraisal checklist from the Joanna Briggs Institute (JBI)
  • Scottish Intercollegiate Guidelines Network (SIGN) Checklists for quality assessment
  • LEGEND Evidence Evaluation Tools A series of critical appraisal tools from the Cincinnati Children's Hospital. Contains tools for a wide variety of study designs, including prospective, retrospective, qualitative, and quantitative designs.

Cohort Studies

  • CASP- Cohort Studies A checklist created by the Critical Appraisal Skills Programme (CASP) to assess key criteria relevant to cohort studies
  • Checklist for Cohort Studies (JBI) A checklist for cohort studies from the Joanna Briggs Institute
  • The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses A validated tool for assessing case-control and cohort studies
  • STROBE Checklist A checklist for quality assessment of case-control, cohort, and cross-sectional studies

Case-Control Studies

  • CASP- Case Control Study A checklist created by the Critical Appraisal Skills Programme (CASP) to assess key criteria relevant to case-control studies
  • Tool to Assess Risk of Bias in Case Control Studies by the CLARITY Group at McMaster University A quality assessment tool for case-control studies from the CLARITY Group at McMaster University
  • JBI Checklist for Case-Control Studies A checklist created by the Joanna Briggs Institute

Cross-Sectional Studies

Diagnostic Studies

  • CASP- Diagnostic Studies A checklist for diagnostic studies created by the Critical Appraisal Skills Programme (CASP)
  • QUADAS-2 A quality assessment tool developed by a team at the Bristol Medical School: Population Health Sciences at the University of Bristol
  • Critical Appraisal Checklist for Diagnostic Test Accuracy Studies (JBI) A checklist for quality assessment of diagnostic studies developed by the Joanna Briggs Institute

Economic Studies

  • Consensus Health Economic Criteria (CHEC) List 19 yes-or-no questions for assessing economic evaluations
  • CASP- Economic Evaluation A checklist for quality assessment of economic studies by the Critical Appraisal Skills Programme

Mixed Methods

  • McGill Mixed Methods Appraisal Tool (MMAT) 2018 User Guide See full site for additional information, including FAQ's, references and resources, earlier versions, and more

Qualitative Studies

  • CASP- Qualitative Studies 10 questions to help assess qualitative research from the Critical Appraisal Skills Programme

Systematic Reviews and Meta-Analyses

  • JBI Critical Appraisal Checklist for Systematic Reviews and Research Syntheses An 11-item checklist for evaluating systematic reviews
  • AMSTAR Checklist A 16-question measurement tool to assess systematic reviews
  • AHRQ Methods Guide for Effectiveness and Comparative Effectiveness Reviews A guide to selecting eligibility criteria, searching the literature, extracting data, assessing quality, and completing other steps in the creation of a systematic review
  • CASP - Systematic Review A checklist for quality assessment of systematic review from the Critical Appraisal Skills Programme

Clinical Practice Guidelines

  • National Guideline Clearinghouse Extent of Adherence to Trustworthy Standards (NEATS) Instrument A 15-item instrument using a scale of 1-5 to evaluate a guideline's adherence to the Institute of Medicine's standards for trustworthy guidelines
  • AGREE-II Appraisal of Guidelines for Research and Evaluation The Appraisal of Guidelines for Research and Evaluation (AGREE) Instrument evaluates the process of practice guideline development and the quality of reporting

Other Study Designs

  • NTACT Quality Checklists Quality indicator checklists for correlational studies, group experimental studies, single case research studies, and qualitative studies developed by the National Technical Assistance Center on Transition (NTACT). (Users must make an account.)

Below is a sample of four popular quality assessment tools and some basic information about each. For more quality assessment tools, see the lists above, organized by study design.

  • Cochrane Risk of Bias 2.0 (randomized controlled trials): asks questions about five types of potential bias for individually randomized trials.
  • Newcastle-Ottawa Scale (non-randomized studies): assesses the quality of nonrandomised studies from three broad perspectives.
  • CASP checklists (multiple study designs): checklists of 11 or 12 questions each. Available study designs include randomized controlled trials, systematic reviews, qualitative studies, cohort studies, diagnostic studies, case control studies, economic evaluations, and clinical prediction rules.
  • LEGEND evidence evaluation tools (multiple study designs): evaluation tools spanning the clinical question domains of intervention, diagnosis & assessment, prognosis, etiology & risk factors, incidence, prevalence, and meaning. Available study designs include systematic review / meta-analysis, meta-synthesis, randomized controlled trials, controlled clinical trials, psychometric studies, prospective and retrospective cohort, case control, longitudinal, cross-sectional, descriptive / epidemiology / case series, qualitative, quality improvement, mixed methods, decision analysis / economic analysis / computer simulation, case report / n-of-1, published expert opinion, bench studies, and guidelines.

Covidence uses Cochrane Risk of Bias (which is designed for rating RCTs and cannot be used for other study types) as the default tool for quality assessment of included studies. You can opt to manually customize the quality assessment template and use a different tool better suited to your review. More information about quality assessment using Covidence, including how to customize the quality assessment template, can be found below. If you decide to customize the quality assessment template, you cannot switch back to using the Cochrane Risk of Bias template.

More Information

  • Quality Assessment on the Covidence Guide
  • Covidence FAQs on Quality Assessment Commonly asked questions about quality assessment using Covidence
  • Covidence YouTube Channel A collection of Covidence-created videos


Katherine Bosworth

20 Ways to Improve Your Research Paper

So, you want to improve your research paper? You’ve come to the right place. Many authors are looking for guidance when publishing their work, and we understand that writing up research is hard. We want to help where we can. At MDPI, we are committed to delivering ground-breaking scientific insights to the global scientific community. Here, we provide 20 useful tips to improve your research paper before submission.

1. Choose a specific and accurate title (and subtitles)

This is a very important part of your manuscript and can affect readership. People often choose what to read based on first impressions. Make sure your title doesn’t put people off. The title should give an overview of what your paper is about. This should be accurate and specific and reflect the content of the paper. Avoid jargon where possible. Don’t forget about section titles and table and figure captions. They should be accurate and specific. Readers tend to skip to the content they want to read. You can find even more useful advice in our article on choosing the best research paper title .

2. Writing an interesting abstract can improve your research paper

The abstract is the first part of the paper that’ll be read. You need to persuade the reader to continue reading. A clear abstract should outline the workings of your research. This will help you to carve out a very specific space in your field. You should also consider other published work in the field. Mention some notable achievements and explain how your research builds on them. This will help you place your research. Those who know the area well will be able to understand which direction you’re going in. A great way to make your abstract more dynamic is to add a graphical abstract or video. It should describe the methodology within your paper. This additional media quickly summarises your paper. It makes it more visually appealing to readers at first glance.

Example of a graphical abstract for an article about improving your research paper.

See the journal’s Instructions for Authors page for more information about graphical abstracts.

3. Be selective with keywords

On our journals’ webpages, we use keywords for indexing. This makes work more searchable. Many researchers search the MDPI site using keywords related to their field. This gives you a chance to get more eyes on your paper. Make sure your choices are precise and are not in the title already. You want to cover as much ground as possible.

Depending on the journal, keywords that are also in the journal’s name are sometimes not allowed. For example, authors cannot use the keyword “soil” when publishing in Soil Biology & Biochemistry . You can check the journal’s webpage for more details. Get in touch with the Editorial Office if you have any questions.
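The advice above, choosing precise keywords that are not already in the title, can be checked mechanically. Here is a toy sketch; the title and candidate keywords are illustrative examples only, and real keyword selection should of course also weigh relevance and indexing conventions.

```python
# Toy check: drop single-word keywords that already appear in the title,
# so the title and keywords together cover more searchable ground.
def filter_keywords(title, candidates):
    title_words = {word.strip(".,:;()").lower() for word in title.split()}
    return [kw for kw in candidates if kw.lower() not in title_words]

title = "Symmetry and Human Facial Attractiveness"
candidates = ["symmetry", "face perception", "evolutionary psychology"]
print(filter_keywords(title, candidates))  # ['face perception', 'evolutionary psychology']
```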

4. Make sure that your research is novel

Have you conducted a thorough search of the latest findings? Knowing about these will improve the originality of your work. Reviewers are asked to rate your manuscript on novelty. Your research should advance the current knowledge in your field. Avoid repeating what may already be out there.

You can cite other works and add them to the content of your paper. This shows that you’re aware of the current knowledge in your research area. You should add your own work and findings that bring something new to the field. Editors like studies that push the boundaries and have new and unexpected outcomes.

5. Ensure that your results are exciting

Your results should not only be novel, but also significant. Attracting readers and citations will be easier if the results are exciting. Interesting and exciting work will encourage others to build on what you have discovered.

6. To improve your research paper, keep it simple

When it comes to research, it’s easy to get lost in your own paper. But, there is value in keeping it simple. This will make your work more accessible to others. It may even improve its success. Keeping your paper simple (English included) also means making it consistent. We have a handy guide that can help with this! Avoid including information that is unnecessary. Review what you have written so far – if you can delete something, then you should.

7. Don’t self-plagiarise!

Perhaps you want to repeat something that you have already mentioned in a previous research paper. Be careful: reusing your own words is self-plagiarism.

Self-plagiarism is a problem because you are just producing a copy of your work from before. This creates the illusion of new ideas when there aren’t any. This can happen without you realising it, so be careful. To avoid this, use short quotes from your past paper. You should place these in quotation marks and cite the original. Be succinct but comprehensive.

MDPI takes plagiarism very seriously, and we (and other publishers) do our best to ensure that it is not present in our authors' research by using a plagiarism detector that reviews online content for similarities. This helps to ensure that the work is ready to be published. As part of MDPI's anti-plagiarism regulations, image manipulation is also not permitted; the peer review process includes an assessment of images and figures.

8. Use the journal template, even in the early stages

Peer review can be a nerve-wracking process. You are waiting for opinions on whether your paper should be in a journal or not. We understand that this is a stressful time for our authors so we do our best to encourage reviewers to provide their reviews as soon as possible.

You can increase your chances of good reviews by making sure your work is clearly organised and easy to understand. Templates are great for this and can definitely help you to improve your research paper throughout the writing process. This can give your paper a more professional look from the outset. It’s also important to maintain good formatting throughout.

Using the template from the start will save you a lot of time later. You can avoid spending precious time transferring your manuscript into the MDPI format, and you won't risk the errors that a last-minute transfer can introduce.

9. Keep the topic relevant to the research field or journal

Some journals or Special Issues have broad scopes, while others are narrower. Research papers need to fit well within the range of the topic. This can sometimes be as simple as adding a paragraph of context to cement your paper’s relevance.

You can find information about the scope on journal webpages. You can also reach out to the Editorial Office if you have any questions.

If your work doesn’t fit into the specific scope, an editor may encourage you to submit to a different journal or Special Issue.

10. Keep in touch with co-authors

To improve the direction of your paper, check in with the other authors often. Obviously, this only applies if you have co-authors.

Reviewing other sections of the paper can help to ensure that you don’t repeat yourself. It’s a good opportunity to make sure that the English is standardised as well.

11. Swap and share ideas to improve your research paper

Research can be solitary. It is easy to forget that there are other people – co-authors, colleagues, peers, associates – in the same boat as you. Their feedback can help you spot mistakes that you may have missed. Meeting with a colleague can also give you a break from your paper and allow you to come back to it with a new mindset.

12. Write methods and results first, then abstract, introduction and conclusion later

This advice is commonly given, but it is worth repeating. The content and tone of your paper may change as you write it. By writing the methods and results first, you'll have a better overview of your findings and be able to include key points from the paper. The introduction and conclusion will be more refined when left until the end.

13. Check your plots and graphs

Nothing in your paper is as important as your data. Your discoveries are the foundation of your work, so they need to be clear and easy to understand. To improve your research paper, make sure graphs and images are in high resolution and show the information clearly.

14. Customise your graphs using external packages in Python

You can use external tools like Matplotlib (a Python package) or MATLAB to make the creation and editing of high-quality graphs and plots easy and efficient.
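
For instance, a minimal Matplotlib sketch along these lines (the data, labels, and output filename below are placeholders, not taken from any real paper) produces a print-ready figure:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, so the script just saves a file
import matplotlib.pyplot as plt

# Made-up data, purely for illustration
x = [1, 2, 3, 4, 5]
y = [2.1, 4.0, 6.2, 7.9, 10.1]

fig, ax = plt.subplots(figsize=(4, 3), dpi=300)  # 300 dpi: print-ready resolution
ax.plot(x, y, marker="o", label="sample series")
ax.set_xlabel("Concentration (mM)")  # placeholder axis labels
ax.set_ylabel("Response (a.u.)")
ax.legend()
fig.tight_layout()
fig.savefig("figure1.png")  # placeholder filename
```

Many journals ask for figures at 300 dpi or higher, which the `dpi` argument controls here.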

15. Improving the language can improve your research paper

It is important to make sure your English is as good as possible. This may mean proof-reading the paper several times (or having someone else look at it).

We can help you edit your project

Improving your research paper can be challenging and time consuming. Academic editing can also be tricky sometimes, and it always pays off to have a professional look at your work. If you’re still not sure, don’t have time, or want a pro to look at your references, let our skilled English Editors help. Visit MDPI Author Services now for a free estimate for fast, accurate, and professional editing.

16. Follow the instructions to format your paper

Review the house rules for the journal and follow these with care. Each journal has an ‘Instructions for Authors’ webpage. It provides extensive information on how to present your work and improve your research paper. Take these into consideration when coming up with the final product.

17. Be thorough with author contributions and acknowledgments

Make sure to add the names of colleagues and supervisors who helped with your paper. This may seem obvious, but there are often people you forget. This may include thanking your funder or grant provider.

18. Declare any conflicts of interest

All authors need to state whether they have any relationships or interests that could influence the paper or its outcomes. This may include (financial or non-financial) connections to organisations or governments.

19. Don’t forget about the importance of references

It may surprise you that many papers are submitted without evidence for their claims. Editors return these papers, and time is then lost in the publication process. The author then needs to locate the sources and resubmit the work. Make sure to provide citations where necessary. If you want to know more about how to cite your work, we have a handy guide to review on this very subject.

Tools like EndNote and Mendeley can help you with the formatting of references in your paper. These manage your references based on what you enter and then organise them in the References section. You can also use free reference generators. For example, the online tool ‘Cite This for Me’ allows you to format individual references.

20. Read through it again

This is where you need to take a step back from what you have written. Looking over your work with a fresh set of eyes is a great way to improve it. Sleep on it and come back in a few days to check your work. A final scan may help you find minor errors and put your mind at ease. Once that’s all done, you can submit your manuscript. You’ll generally receive a response in 1-5 working days. For more details on our speedy submission process, take a look at our article on MDPI submission statuses .

Going through these tips can help you improve your research paper during the writing process. This can increase your chances of having your work published, read, cited, and shared.

During this time, you may be feeling worried or nervous. And that’s perfectly normal! You’re about to release your findings into the world. If you feel tense about this process, you’re not alone. It takes a lot of courage to put ideas out there, even ideas that you’re happy with.

Once you’ve published your manuscript, make sure to share it wherever you can. Talk about it on social media and put a link on your website.

Is there anything else that you do to improve your manuscripts? Make sure to share it in the comments below!

Faculty Research Guide: Where do I publish my research?

Finding a Journal to Publish In

  • Cabells Scholarly Analytics: Effective July 1, 2024, Cabells "Journalytics" will no longer be available to DePaul University. Similar alternative resources include Web of Science and Scopus. Visit our Cancellations page to learn more. If you have any questions, please contact John Leeker.
  • Ulrich's International Periodicals: Directory of journals and magazines published worldwide. Extent: Multidisciplinary
  • DOAJ: Directory of Open Access Journals: The most comprehensive searchable index of free scientific and scholarly content in full-text format; more than 8,300 journal titles and more than 920,000 articles. Access note: Freely available to the public. Extent: Multidisciplinary

Tools to Measure Journal Impact (Impact Factor)

Journal-Level Metrics on Scopus

  • CiteScore metrics

SCImago Journal Rank (SJR)

Source Normalized Impact per Paper (SNIP)

A family of eight indicators that offer complementary views to analyze the publication influence of serial titles of interest. Derived from the Scopus database, CiteScore metrics offer a more transparent, current, comprehensive and accurate indication of a serial’s impact. CiteScore metrics are available for 28,000+ active titles, including 15,000+ more than Journal Impact Factor.

CiteScore only includes peer-reviewed research: articles, reviews, conference papers, data papers and book chapters, covering 4 years of citations and publications. Historical data back to CiteScore 2011 have been recalculated and are displayed on Scopus. 

Based on the concept of a transfer of prestige between journals via their citation links. Drawing on a similar approach to the Google PageRank algorithm - which assumes that important websites are linked to from other important websites - SJR weights each incoming citation to a journal by the SJR of the citing journal, with a citation from a high-SJR source counting for more than a citation from a low-SJR source. The calculation of the final SJR of a journal is a complex and iterative process.
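
The prestige-transfer idea can be illustrated with a toy PageRank-style power iteration (made-up citation counts and damping factor; this is only a simplification, not the actual SJR algorithm, which, as noted, is far more elaborate):

```python
import numpy as np

# Toy citation matrix: C[i, j] = citations from journal i to journal j.
# The numbers are invented purely for illustration.
C = np.array([[0, 5, 1],
              [2, 0, 4],
              [1, 3, 0]], dtype=float)

n = C.shape[0]
damping = 0.85  # PageRank-style damping factor (an assumption, not SJR's)

# Each journal distributes its prestige across the journals it cites
T = C / C.sum(axis=1, keepdims=True)

prestige = np.full(n, 1.0 / n)
for _ in range(100):  # power iteration until (approximate) convergence
    prestige = (1 - damping) / n + damping * (T.T @ prestige)

# A citation from a high-prestige journal now counts for more than
# one from a low-prestige journal.
```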

Measures contextual citation impact by weighting citations based on the total number of citations in a subject field. The impact of a single citation is given higher value in subject areas where citations are less likely, and vice versa. 

For more information, see  Overview of journal metrics tutorial.

Comparing Journals in Scopus

  • Compare up to 10 sources and review results on a chart or in table format
  • Search for sources to compare by title, ISSN, publisher, subject area
  • Compare CiteScore for each publication by year
  • Compare SNIP for each publication by year
  • Compare SJR for each publication by year
  • Compare number of documents for each publication by year
  • Compare percent of articles cited for each publication by year
  • Compare percent of review articles published in each publication by year

For more information, see the How to compare sources tutorial.

Evaluating Journal Quality and Reputation

Principles of Transparency

1.  Peer review process:  All of a journal’s content, apart from any editorial material that is clearly marked as such, shall be subjected to peer review. Peer review is defined as obtaining advice on individual manuscripts from reviewers expert in the field who are not part of the journal’s editorial staff. This process, as well as any policies related to the journal’s peer review procedures, shall be clearly described on the journal’s Web site.

2.  Governing Body:  Journals shall have editorial boards or other governing bodies whose members are recognized experts in the subject areas included within the journal’s scope. The full names and affiliations of the journal’s editors shall be provided on the journal’s Web site.

3.  Editorial team/contact information:  Journals shall provide the full names and affiliations of the journal’s editors on the journal’s Web site as well as contact information for the editorial office.

4.  Author fees:  Any fees or charges that are required for manuscript processing and/or publishing materials in the journal shall be clearly stated in a place that is easy for potential authors to find prior to submitting their manuscripts for review or explained to authors before they begin preparing their manuscript for submission.

5.  Copyright:   Copyright and licensing information shall be clearly described on the journal’s Web site, and licensing terms shall be indicated on all published articles, both HTML and PDFs.

6.  Identification of and dealing with allegations of research misconduct:  Publishers and editors shall take reasonable steps to identify and prevent the publication of papers where research misconduct has occurred, including plagiarism, citation manipulation, and data falsification/fabrication, among others. In no case shall a journal or its editors encourage such misconduct, or knowingly allow such misconduct to take place. In the event that a journal’s publisher or editors are made aware of any allegation of research misconduct relating to a published article in their journal – the publisher or editor shall follow COPE’s guidelines (or equivalent) in dealing with allegations.

7.  Ownership and management:  Information about the ownership and/or management of a journal shall be clearly indicated on the journal’s Web site. Publishers shall not use organizational names that would mislead potential authors and editors about the nature of the journal’s owner.

8.  Web site:   A journal’s Web site, including the text that it contains, shall demonstrate that care has been taken to ensure high ethical and professional standards.

9.  Name of journal : The Journal name shall be unique and not be one that is easily confused with another journal or that might mislead potential authors and readers about the Journal’s origin or association with other journals.

10.  Conflicts of interest : A journal shall have clear policies on handling potential conflicts of interest  of editors, authors, and reviewers and the policies should be clearly stated.

11.  Access : The way(s) in which the journal and individual articles are available to readers and whether there are associated subscription or pay per view fees shall be stated.

12.  Revenue sources : Business models or revenue sources (eg, author fees, subscriptions, advertising, reprints, institutional support, and organizational support) shall be clearly stated or otherwise evident on the journal’s Web site.

13.  Advertising : Journals shall state their advertising policy if relevant, including what types of ads will be considered, who makes decisions regarding accepting ads and whether they are linked to content or reader behavior (online only) or are displayed at random.

14.  Publishing schedule:  The periodicity at which a journal publishes shall be clearly indicated.

15.  Archiving:  A journal’s plan for electronic backup and preservation of access to the journal content (for example, access to main articles via CLOCKSS or PubMedCentral) in the event a journal is no longer published shall be clearly indicated.

16.  Direct marketing:  Any direct marketing activities, including solicitation of manuscripts that are conducted on behalf of the journal, shall be appropriate, well targeted, and unobtrusive.

From the  Principles of Transparency and Best Practice in Scholarly Publishing , jointly developed by the  Committee on Publication Ethics , the  Directory of Open Access Journals , the  Open Access Scholarly Publishers Association , and the  World Association of Medical Editors .

Journal Impact

The  impact factor (IF)  is a measure of the frequency with which the average article in a journal has been cited in a particular year. It is used to measure the importance or rank of a journal by calculating the times its articles are cited.

Google Scholar

Google Scholar list of Top Publications . Use the Categories and Subcategories drop down menus to select the relevant field and view journals by h-index.


Quality versus quantity: assessing individual research performance

José-Alain Sahel

1 Institut de la vision INSERM : U968, Université Pierre et Marie Curie - Paris VI, CNRS : UMR7210, 17 rue Moreau 75012 Paris, FR

2 CIC - Quinze-Vingts INSERM : CIC503, CHNO des Quinze-Vingts, Paris VI, 28 rue de Charenton, 75012 Paris, FR

3 Fondation Ophtalmologique Adolphe de Rothschild, 75019 Paris, FR

4 Institute of Ophthalmology University College of London (UCL), GB

Evaluating individual research performance is a complex task that ideally examines productivity, scientific impact, and research quality––a task that metrics alone have been unable to achieve. In January 2011, the French Academy of Sciences published a report on current bibliometric (citation metric) methods for evaluating individual researchers, as well as recommendations for the integration of quality assessment. Here, we draw on key issues raised by this report and comment on the suggestions for improving existing research evaluation practices.

BALANCING QUANTITY AND QUALITY

Evaluating individual scientific performance is an essential component of research assessment, and outcomes of such evaluations can play a key role in institutional research strategies, including funding schemes, hiring, firing, and promotions. However, there is little consensus and no internationally accepted standards by which to measure scientific performance objectively. Thus, the evaluation of individual researchers remains a notoriously difficult process with no standard solution. Marcus Tullius Cicero once wrote, “Non enim numero haec iudicantur, sed pondere” ( 1 ). Translation: The number does not matter, the quality does. In line with Cicero’s outlook on quality versus quantity, the French Academy of Sciences analyzed current bibliometric (citation metric) methods for evaluating individual researchers and made recommendations in January 2011 for the integration of quality assessment ( 2 ). The essence of the report is discussed in this Commentary.

Evaluation by experts in the field has been the primary means of assessing a researcher’s performance, although it can be biased by subjective factors, such as conflicts of interest, disciplinary or local favoritism, insufficient competence in the research area, or superficial examination. To ensure objective evaluation by experts, a quantitative analytical tool known as bibliometry (science metrics or citation metrics) has been integrated gradually into evaluation processes ( Fig. 1 ). Bibliometry started with the idea of an impact factor, which was first mentioned in Science in 1955 ( 3 ), and has evolved to weigh several aspects of published work, including journal impact factor, total number of citations, average number of citations per paper, average number of citations per author, average number of citations per year, the number of authors per paper, Hirsch’s h -index, Egghe’s g -index, and the contemporary h -index. The development of science metrics has accelerated recently, with the availability of online databases used to calculate bibliometric indicators, such as the Thomson Reuters Web of Science ( http://thomsonreuters.com/ ), Scopus ( http://www.scopus.com/home.url ), and Google Scholar ( http://scholar.google.com/ ). Within the past decade, metrics have secured a foothold in the evaluation of individual, team, and institutional research because the use of such metrics appears to be easier and faster than the qualitative assessment by experts. Because of the ease of use of various metrics, however, bibliometry tends to be applied in excessive and even incorrect ways, especially when used as standalone analyses.

An external file that holds a picture, illustration, etc.
Object name is halms608624f1.jpg

Can individual research performance be summarized by numbers?

CREDIT: IMAGE COURTESY OF D. FRANGOV (FRANGOV DIMITAR PLAMENOV COMPANY)

The French Academy of Sciences (FAS) is concerned that some of the current evaluation practices––in particular, the uncritical use of publication metrics––might be inadequate for evaluating individual scientific performance. In its recent review ( 2 ), the FAS addressed the advantages and limitations of the main existing quantitative indicators, stressed that judging the quality of a scientific work in terms of conceptual and technological innovation of the research is essential, and reaffirmed its position about the decisive role that experts must play in research assessment ( 2 , 4 ). It also strongly recommended that additional criteria be taken into consideration when assessing individual research performance. These criteria include teaching, mentoring, participation in collective tasks, and collaboration-building, in addition to quantitative parameters that are not measured by bibliometrics, such as number of patents, speaker invitations, international contracts, distinctions, and technology transfers. It appears that the best course of action will be a balanced combination of the qualitative (experts) and the quantitative (bibliometrics).

BIBLIOMETRICS: INDICATORS OR NOT?

Bibliometrics use mathematical and statistical methods to measure scientific output; thus, they provide a quantitative—not a qualitative—assessment of individual research performance. The most commonly used bibliometric indicators, as well as their strengths and weaknesses, are described below.

Impact factor

The impact factor, a major quantitative indicator of the quality and popularity of a journal, is defined as the mean number of citations, over a given period, to the articles published in that journal. The impact factor of a journal is calculated by dividing the number of current-year citations by the number of source items published during the previous two years ( 5 ). According to the FAS, the impact factor of journals in which a researcher has published is a useful but highly controversial indicator of individual performance ( 2 ). The most common issue is variation among subject areas; in general, a basic science journal will have a higher average impact factor than journals in specialized or applied areas. Individual article quality within a journal is also not reflected by a journal’s impact factor, because citations for an individual paper can be much higher or lower than what might be expected on the basis of that journal’s impact factor ( 2 , 6 , 7 ). In addition, self-citations are not corrected for when calculating the impact factor ( 6 ). On account of these limitations, the FAS considers the tendency of certain researchers to organize their work and publication policy according to the journal in which they intend to publish their article to be a dangerous practice. In extreme situations, such journal-centric behavior can trigger scientific misconduct. The FAS notes that there has been an increase in the practice of using journal impact factors for the evaluation of an individual researcher for the purpose of career advancement in some European countries, such as France, and in certain disciplines, such as biology and medicine ( 2 ).
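
As a sketch of that calculation (with hypothetical numbers; this mirrors the two-year formula just described, not any particular database's implementation):

```python
def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 120 citable items published over the previous
# two years, cited 300 times this year.
print(impact_factor(300, 120))  # 2.5
```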

Number of citations

The number of times an author has been cited is an important bibliometric indicator; however, it is a value that has several important limitations. First, citation number depends on the quality of the database used. Second, it does not consider where the author is located in the author list. Third, sometimes articles can have a considerable number of citations for reasons that might not relate to the quality or importance of the scientific content. Fourth, articles published in prestigious journals are privileged as compared with those with equal quality but published in journals of average notoriety. Fifth, depending on cultural issues, advantage can be given to citations of scientists from the same country, to scientists from other countries (in particular Americans, as often is the case in France), or to articles written in English rather than in French, for example ( 2 ). For these cultural reasons, novel and important papers might attract little attention for several years after their publication. Lastly, citation numbers also tend to be greater for review articles than for original research articles. Self-citations do not reflect the impact of a publication and should therefore not be included in a citation analysis when this is intended to give an assessment of the scientific achievement of a scientist ( 8 ).

New indicators ( h -index, g -index)

Recently, new bibliometric indicators borne out of databases indexing articles and their citations were introduced to address the need to evaluate individual researchers objectively. In 2005, Jorge Hirsch proposed the h -index as a tool for quantifying the scientific impact of an individual researcher ( 9 ). The h -index of a scientist is the number of papers co-authored by the researcher with at least h citations each; for example, an h -index of 20 means that an individual researcher has co-authored 20 papers that have been cited at least 20 times each. This index has the major advantage of simultaneously measuring the scientist’s productivity (the number of papers published over the years) and the cumulative impact of the scientist’s output (the number of citations for each paper). Although the h -index is preferable to other standard single-number criteria (such as the total number of papers, total number of citations, or number of citations per paper), it has several disadvantages. First, it varies with scientific fields. As an example, h -indices in the life sciences are much higher than in physics ( 9 ). Second, it favors senior researchers by never decreasing with age, even if an individual discontinues scientific research ( 10 ). Third, citation databases provide different h -indices as a result of differences in coverage, even when generated for the same author at the same time ( 11 , 12 ). Fourth, the h -index does not consider the context of the citations (such as negative findings or retracted works). Fifth, it is strongly affected by the total number of papers, which may underestimate scientists with short careers and scientists who have published only a few, although notable, papers. The h -index also integrates every publication of an individual researcher, regardless of his or her role in authorship, and does not distinguish articles of pathbreaking or influential scientific impact.
The contemporary h-index (hc-index), suggested by Sidiropoulos et al. ( 10 ), takes into account the age of each article and weights recently published work more heavily. As such, the hc-index may offer a fairer comparison between junior and senior academics than the regular h-index ( 13 ).

The g-index was introduced ( 14 ) to distinguish quality by giving more weight to highly cited articles. The g-index of a scientist is the largest number g of articles (ordered by decreasing citation counts) that together received g² or more citations; for example, a g-index of 20 means that the 20 most-cited publications of a researcher have received at least 400 citations in total. Egghe pointed out that the g-index value will always be at least as high as the h-index value, making it easier to differentiate the performance of authors. If Researcher A has published 10 articles and each has received 4 citations, the researcher’s h-index is 4. If Researcher B has also written 10 articles, and 9 of them have received 4 citations each, that researcher’s h-index is also 4, regardless of how many citations the 10th article has received. However, if the 10th article has received 20 citations, the g-index of Researcher B would be 6; for 50 citations, the g-index would be 9 ( 15 ). Thus, one or several highly cited articles can raise the g-index of an individual researcher, highlighting authors with high-impact work.
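The Researcher A/B comparison in the text can be reproduced with a few lines of code. This is a minimal illustrative sketch (the function name is our own, and we cap g at the number of papers, one common convention):

```python
def g_index(citations):
    """Return the g-index: the largest g such that the g most-cited
    papers together received at least g**2 citations (Egghe).
    This sketch caps g at the number of papers."""
    ranked = sorted(citations, reverse=True)
    g, total = 0, 0
    for i, c in enumerate(ranked, start=1):
        total += c  # running sum of the i most-cited papers
        if total >= i * i:
            g = i
    return g

researcher_a = [4] * 10        # 10 papers, 4 citations each
researcher_b = [4] * 9 + [20]  # same h-index, one highly cited paper
print(g_index(researcher_a))   # → 4
print(g_index(researcher_b))   # → 6, as in the worked example
print(g_index([4] * 9 + [50])) # → 9
```

The single 20-citation paper lifts Researcher B’s g-index from 4 to 6 while leaving the h-index unchanged, which is exactly the sensitivity to highly cited work that the g-index was designed to capture.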

CHOOSING AN INDICATOR

Bibliometry is easy to use because its calculations are simple. However, it is important to realize that purely bibliometric approaches are inadequate: no indicator alone can summarize the quality of a researcher’s scientific performance. Using a set of metrics (such as number of citations, h-index, and g-index) gives a more accurate estimate of a researcher’s scientific impact. At the same time, metrics should not be made too complex, because they can become a source of conceptual errors that are then difficult to identify. The FAS discourages the use of metrics as a standalone evaluation tool, the use of only one bibliometric indicator, the use of the journal’s impact factor to evaluate the quality of an article, neglecting the impact of the scientific field or subfield, and ignoring author placement in the case of multiple authorship ( 2 ).

In 2004, INSERM (the French National Institute of Health and Medical Research) introduced bibliometrics as part of its research assessment procedures. Bibliometric analysis is based on publication indicators that are validated by the evaluated researchers and are at the disposal of the evaluation committees. In addition to the basic indicators (citation numbers and journal impact factor), the measures used by INSERM include the number of publications in the first 10% of journals ranked by decreasing impact factor in a given field (top 10% impact factor, according to Thomson Reuters Journal Citation Reports) and the number of publications from an individual researcher that fall within the top 10% of articles (ranked by total citations) in annual cohorts from each of the 22 disciplines defined by Thomson Reuters Essential Science Indicators. All indicators take into account the research field, the year of publication, and the author’s position in the author list, assigning an index of 1 to the first or last author, 0.5 to the second or next-to-last author, and 0.25 to all other author positions. Notably, this author index can only be used in biomedical research, because in other fields the ordering of authors may follow different rules; in physics, for example, authors are listed in alphabetical order.
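The INSERM author-position rule can be stated compactly in code. This is a minimal sketch; the function name and the handling of very short author lists (one or two authors) are our assumptions, not spelled out in the source:

```python
def author_weight(position, n_authors):
    """INSERM-style author-position index: 1 for the first or last
    author, 0.5 for the second or next-to-last, 0.25 otherwise.
    Positions are 1-based; behavior for 1- or 2-author papers
    (everyone counts as first/last) is an assumption."""
    if position in (1, n_authors):
        return 1.0
    if position in (2, n_authors - 1):
        return 0.5
    return 0.25

# Weights for each position on a 5-author paper:
print([author_weight(p, 5) for p in range(1, 6)])
# → [1.0, 0.5, 0.25, 0.5, 1.0]
```

As the output shows, the weighting is symmetric: the ends of the author list carry full weight, their neighbors half weight, and middle positions a quarter.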

Bibliometric indicator interpretation requires competent expert knowledge of metrics, and to ensure good practice, INSERM trains members of evaluation committees in state-of-the-art science metric methods. INSERM has found that the correlation between publication scoring by members of evaluation committees and any single bibliometric indicator is rather low. For example, the articles of all teams received citations irrespective of the journal in which they were published, with only a low correlation between the journal impact factor and the number of times each publication was cited. No correlation was found between the journal impact factor and individual publication citations, or between the “Top 1%” publications and the impact factor ( 16 ). The INSERM analysis emphasizes that each indicator has its advantages and limitations, and care must be taken not to treat any of them alone as a “surrogate” marker of team performance. Several indicators must be taken into account when evaluating the overall output of a research team. The use of bibliometric indicators requires great vigilance; but, in the INSERM experience, metrics enrich the evaluation committees’ debates about the scientific quality of team performance ( 16 ).

As reported by the FAS, bibliometric practices vary considerably from country to country. A worldwide Nature survey ( 17 ) emphasized that 70% of the interviewed academic scientists, department heads, and other administrators believe that bibliometrics are used for recruitment and promotions, and 63% of them consider the use of these measures to be inadequate. Many Anglo-Saxon countries use bibliometrics to evaluate performances of universities and research organizations, whereas for hiring and promotions, the curriculum vitae, interview process, and letters of recommendation “count” more than the bibliometric indicators ( 2 ). In contrast, metrics are used for recruiting in Chinese and Asian universities in general, although movement toward the use of letters of recommendation is currently underway ( 2 ). In France, an extensive use of publication metrics for individual and institutional evaluations has been noted in the biomedical sciences ( 2 ).

Research evaluation practices also vary by field and subfield, owing in part to the large disparities in community sizes and in the literature coverage provided by citation databases. As reviewed by the FAS, evaluation of individual researchers in the mechanical sciences, computing, and applied mathematics considers both the quality and the number of publications, as well as scientific awards, invitations to speak at conferences, software, patents, and technology transfer agreements. Organization of scientific meetings and editorial responsibilities are also taken into consideration. Younger researchers are evaluated by experts during interviews and while giving seminars. In these fields, publication does not always play a leading role in transferring knowledge; thus, over a long professional career, metrics give a rather weak and inaccurate estimate of research performance. Bibliometrics are therefore used only as a decision-making aid, not as a main tool for evaluation.

In physics and its subfields, evaluation methods vary. In general, a combination of quantitative measures (number of publications, h-index) and qualitative measures (keynote and invited speeches, mentoring programs) plays a decisive role in the evaluation of senior scientists only. In astrophysics, metrics are widely used for evaluation, recruiting, promotions, and funding allocations. In chemistry, the main bibliometric indicators (h-index, total number of citations, and number of citations per article) are taken into consideration when discussing the careers of senior researchers (those with more than 10 to 12 years of research activity). In recruiting young researchers, experts interview the candidate to examine his or her ability to present and discuss the subject matter proficiently; the individual’s publication record is also considered. However, the national committees for chemistry of French scientific and university institutions [Centre National de la Recherche Scientifique (CNRS) and Conseil National des Universités (CNU), respectively] usually avoid bibliometrics altogether for an individual’s evaluation.

In economics, evaluation by experts in the field plays the most important role for recruitments and promotions, but bibliometric indicators are used to help this decision-making. For the humanities and social sciences (philosophy, history, law, sociology, psychology, languages, political sciences, and art) and for mathematics, the existing databases do not cover these fields sufficiently. As a consequence, these fields are not able to properly use bibliometrics. In contrast, in biology and medicine the quantitative indicators—in particular the journal’s impact factor—are widely used for evaluating individual researchers ( 2 ).

STRATEGIES AND RECOMMENDATIONS

The FAS acknowledged that bibliometrics can be a very useful evaluation tool when handled by experts in the field. According to its recommendations, the use of bibliometrics by monodisciplinary juries should be of nondecisive value; the experts on these evaluation committees know the candidates well enough to compare the individual performance of each of them more precisely and objectively. In the case of pluridisciplinary (interdisciplinary) juries, bibliometrics can be used successfully, but only if the experts consider the differences between scientific fields and subfields (as mentioned above). For this purpose, the choice of indicators and the methodology for evaluating the full spectrum of a scientist’s research activity should be validated in advance. As emphasized by the FAS, bibliometrics should not be used for deciding which young scientists to recruit. In addition, the set of bibliometric indicators should be chosen according to the purpose of the evaluation: recruitment, promotion, funding allocation, or distinction. Calculations should not be left to nonspecialists (such as administrators, who could use the rapidly accessible data in a biased way) because the number of potential errors in judgement and assessment is too large. Frequent errors to be avoided include homonyms, variations in the use of name initials, and the use of incomplete databases. It is important that the complete list of publications be checked by the researcher concerned. Researchers could even be asked to produce their own indicators (if provided with appropriate guidelines for calculation); these calculations should subsequently be approved. The evaluation process must be transparent and replicable, with clearly defined targets, context, and purpose of the assessment.

To improve the use of bibliometrics, a consensus was reached by the FAS ( 2 ) to perform a series of studies and to evaluate various methodological approaches, including (i) retrospective studies comparing decisions made by experts and evaluation committees with results that would have been obtained by bibliometrics; (ii) studies to refine the existing indicators and bibliometric standards; (iii) authorship clarification; (iv) development of standards for originality and innovation; (v) discussion of citation discrepancies arising from geographical or field localism; (vi) monitoring the bibliometric indicators of outstanding researchers (a category reserved for those who have made important and lasting research contributions to their specific field and who have obtained international recognition); (vii) examining the prospective value of the indicators for researchers who changed their field of research over time; (viii) examining the indicators of researchers receiving major awards such as the Nobel Prize, the Fields Medal, and the medals of renowned academies and institutions; (ix) studies on how bibliometrics affect the scientific behavior of researchers; and (x) establishment of standards of good practice in the use of bibliometrics for analyzing individual research performance.

FIXING THE FLAWS

Assessing research performance is important for recognizing productivity, innovation, and novelty and plays a major role in academic appointment and promotion. However, the means of assessment, namely bibliometrics, are often flawed. Bibliometrics have enormous potential to assist the qualitative evaluation of individual researchers; however, no bibliometric indicator alone, nor even a set of them, allows for an acceptable and well-balanced evaluation of a researcher’s activity. The use of bibliometrics should continue to evolve through in-depth discussion of what the metrics mean and how they can best be interpreted by experts in the given scientific field.

Acknowledgments

The author thanks K. Marazova (Institut de la Vision) for her major help in preparing this commentary and N. Haeffner-Cavaillon (INSERM) for critical reading and insights.

References and Notes

Total Quality Management Research Paper Topics

Total quality management research paper topics have grown into an essential area of study, reflecting the critical role that quality assurance and continuous improvement play in modern organizations. The subject encompasses a wide array of topics, methodologies, and applications, all aimed at enhancing operational efficiency, customer satisfaction, and competitive advantage. The purpose of this text is to provide students, researchers, and practitioners with a comprehensive guide to total quality management (TQM). It includes an extensive list of potential research paper topics organized into ten sections, a detailed article explaining the principles and practices of TQM, and guidelines on how to choose and write on TQM topics. This resource aims to help readers navigate the complex landscape of TQM, inspire insightful research, and offer practical tools for academic success.

100 Total Quality Management Research Paper Topics

Total Quality Management (TQM) has evolved to become a strategic approach to continuous improvement and operational excellence. It has applications across various industries, each with its unique challenges and opportunities. Below is an exhaustive list of TQM research paper topics, divided into ten categories, offering a rich source of ideas for students and researchers looking to explore this multifaceted domain.

Total Quality Management transcends traditional boundaries and integrates concepts from various disciplines. Its goal is to create a culture where quality is at the forefront of every decision and process. The following list presents 100 TQM research topics divided into ten different categories. Each category represents a specific aspect of TQM, providing an extensive foundation for exploring this complex field.

  • Historical Development of TQM
  • Core Principles of TQM
  • TQM and Organizational Culture
  • Deming’s 14 Points: A Critical Analysis
  • Six Sigma and TQM: A Comparative Study
  • TQM in Manufacturing: Case Studies
  • TQM and Leadership: Role and Responsibilities
  • Customer Focus in TQM
  • Employee Involvement in TQM Practices
  • Challenges in Implementing TQM
  • TQM in Healthcare
  • TQM in Education
  • TQM in the Automotive Industry
  • TQM in the Food and Beverage Industry
  • TQM in Information Technology
  • TQM in Hospitality
  • TQM in the Banking Sector
  • TQM in Construction
  • TQM in Supply Chain Management
  • TQM in Government Services
  • Statistical Process Control in TQM
  • The 5S Method in Quality Management
  • Kaizen and Continuous Improvement
  • Root Cause Analysis in TQM
  • Quality Function Deployment (QFD)
  • The Fishbone Diagram in TQM
  • Process Mapping and Quality Improvement
  • Benchmarking for Quality Enhancement
  • The Role of FMEA in Quality Management
  • Design of Experiments (DOE) in TQM
  • ISO 9001 and Quality Management
  • The Benefits of ISO 14001
  • Understanding Six Sigma Certifications
  • The Impact of OHSAS 18001 on Safety Management
  • Lean Manufacturing and Quality Standards
  • Implementation of ISO 22000 in Food Safety
  • The Role of ISO/IEC 17025 in Testing Laboratories
  • Quality Management in ISO 27001 (Information Security)
  • Achieving CE Marking for Product Safety
  • The Influence of SA 8000 on Social Accountability
  • Measuring Customer Satisfaction in TQM
  • The Role of Service Quality in Customer Retention
  • Customer Complaints and Quality Improvement
  • Building Customer Loyalty Through TQM
  • Customer Feedback and Continuous Improvement
  • Customer Relationship Management (CRM) and TQM
  • Emotional Intelligence and Customer Satisfaction
  • The Impact of Branding on Customer Loyalty
  • Customer Experience Management in TQM
  • Customer Segmentation and Targeting in TQM
  • The Role of Training in TQM
  • Employee Empowerment in Quality Management
  • Motivational Theories and TQM
  • Building a Quality Culture Through Employee Engagement
  • Employee Recognition and Reward Systems in TQM
  • Leadership Styles and Employee Performance in TQM
  • Communication and Teamwork in TQM
  • Managing Change in TQM Implementation
  • Conflict Resolution Strategies in TQM
  • Work-Life Balance in a Quality-Oriented Organization
  • Key Performance Indicators (KPIs) in TQM
  • Balanced Scorecard and Quality Management
  • Performance Appraisals in a TQM Environment
  • Continuous Monitoring and Evaluation in TQM
  • Risk Management in Quality Performance
  • Process Auditing and Quality Control
  • The Role of Quality Circles in Performance Evaluation
  • Value Stream Mapping and Process Optimization
  • The Impact of E-business on Quality Performance
  • Outsourcing and Quality Assurance
  • Environmental Sustainability and TQM
  • Social Responsibility and Ethical Practices in TQM
  • Green Manufacturing and Environmental Performance
  • Corporate Social Responsibility (CSR) Strategies in TQM
  • Waste Reduction and Recycling in TQM
  • Community Engagement and Social Impact
  • Sustainable Development Goals (SDGs) and TQM
  • Energy Efficiency and Sustainable Quality Management
  • Ethical Sourcing and Supply Chain Responsibility
  • Human Rights and Labor Practices in TQM
  • TQM Practices in Different Cultures
  • The Influence of Globalization on TQM
  • Cross-Cultural Communication and Quality Management
  • International Regulations and Quality Standards
  • TQM in Emerging Economies
  • Quality Management in Multinational Corporations
  • The Role of WTO in Global Quality Standards
  • Outsourcing and Global Supply Chain Quality
  • Global Competition and Quality Strategies
  • International Collaboration and Quality Innovation
  • Technological Innovations and Quality Management
  • Big Data and Analytics in TQM
  • Quality 4.0 and the Role of IoT
  • Artificial Intelligence and Quality Prediction
  • The Impact of Social Media on Quality Perception
  • Sustainability and Future Quality Management
  • Agile Methodologies and Quality Flexibility
  • Blockchain Technology and Quality Traceability
  • Cybersecurity and Quality Assurance
  • The Future Role of Human Resource in Quality Management

The vast array of topics listed above provides a comprehensive insight into the dynamic and multifaceted world of Total Quality Management. From foundational principles to future trends, these topics offer students a diverse range of perspectives to explore, understand, and contribute to the ongoing dialogue in TQM. With proper guidance, dedication, and an open mind, scholars can delve into these subjects to create impactful research papers, case studies, or projects that enrich the existing body of knowledge and drive further innovation in the field. Whether one chooses to focus on a specific industry, a particular tool, or an emerging trend, the possibilities are endless, and the journey towards quality excellence is both challenging and rewarding.

Total Quality Management and the Range of Research Paper Topics

Total Quality Management (TQM) represents a comprehensive and structured approach to organizational management that seeks to improve the quality of products and services through ongoing refinements in response to continuous feedback. This article aims to provide an in-depth exploration of TQM, shedding light on its evolution, its underlying principles, and the vast range of research topics it offers.

Historical Background

Total Quality Management has its roots in the early 20th century, with the development of quality control and inspection processes. However, it wasn’t until the mid-1980s that TQM became a formalized, systematic approach, greatly influenced by management gurus like W. Edwards Deming, Joseph Juran, and Philip Crosby.

  • Early Quality Control Era : During the industrial revolution, emphasis on quality control began, primarily focusing on product inspection.
  • Post-World War II Era : The concept of quality management grew as the U.S. sought to rebuild Japan’s industry. Deming’s teachings on quality greatly influenced Japanese manufacturing.
  • TQM’s Formalization : The integration of quality principles into management practices led to the formalization of TQM, encompassing a holistic approach towards quality improvement.

Principles of Total Quality Management

TQM is underpinned by a set of core principles that guide its implementation and contribute to its success. Understanding these principles is fundamental to any research into TQM.

  • Customer Focus : At the heart of TQM is a strong focus on customer satisfaction, aiming to exceed customer expectations.
  • Continuous Improvement : TQM promotes a culture of never-ending improvement, addressing small changes that cumulatively lead to substantial improvement over time.
  • Employee Engagement : Engaging employees at all levels ensures that everyone feels responsible for achieving quality.
  • Process Approach : Focusing on processes allows organizations to optimize performance by understanding how different processes interrelate.
  • Data-Driven Decision Making : Utilizing data allows for objective assessment and decision-making.
  • Systematic Approach to Management : TQM requires a strategic approach that integrates organizational functions and processes to achieve quality objectives.
  • Social Responsibility : Considering societal well-being and environmental sustainability is key in TQM.

Scope and Application

Total Quality Management is applicable across various domains and industries. The following areas showcase the versatility of TQM:

  • Manufacturing : Implementing TQM principles in manufacturing ensures efficiency and consistency in production processes.
  • Healthcare : TQM in healthcare focuses on patient satisfaction, error reduction, and continuous improvement.
  • Education : In educational institutions, TQM can be used to improve the quality of education through better administrative processes and teaching methods.
  • Service Industry : Whether in hospitality, banking, or IT, TQM’s principles can enhance service quality and customer satisfaction.
  • Public Sector : Governmental bodies and agencies can also employ TQM to enhance public service delivery and satisfaction.

TQM’s multifaceted nature offers a wide range of research paper topics. Some areas of interest include:

  • TQM Tools and Techniques : Research on tools like Six Sigma, Kaizen, and statistical process control.
  • Quality Standards : Investigating the impact and implementation of ISO standards.
  • Industry-Specific Applications : Exploring how TQM is applied and adapted in different industries.
  • Challenges and Opportunities : Assessing the difficulties and advantages of implementing TQM in contemporary business environments.
  • Emerging Trends : Examining future trends in TQM, such as the integration of technology and sustainability considerations.

Total Quality Management has evolved from a simple focus on product inspection to a strategic approach to continuous improvement that permeates the entire organization. Its application is not confined to manufacturing but has spread across various sectors and industries.

Research in TQM is equally diverse, offering students and scholars a rich and complex field to explore. Whether delving into the historical evolution of TQM, examining its principles, evaluating its application in different sectors, or exploring its myriad tools and techniques, the study of TQM is vibrant and multifaceted.

By undertaking research in Total Quality Management, one not only contributes to the academic body of knowledge but also plays a role in shaping organizational practices that emphasize quality, efficiency, customer satisfaction, and social responsibility. In a global business environment characterized by competitiveness, complexity, and constant change, the principles and practices of TQM remain more relevant than ever.

How to Choose Total Quality Management Research Paper Topics

Choosing the right topic for a research paper in Total Quality Management (TQM) is a crucial step in ensuring that your paper is both engaging and academically relevant. The selection process should align with your interests, the academic requirements, the target audience, and the resources available for research. The following guide offers ten essential tips to help you make an informed choice.

Total Quality Management encompasses a broad spectrum of theories, tools, techniques, and applications across various industries. This richness and diversity offer a plethora of potential research topics. However, selecting the perfect one can be daunting. The following tips are designed to guide students in choosing a research topic that resonates with their interests and the current trends in TQM.

  • Identify Your Area of Interest : TQM has many facets, such as principles, tools, applications, challenges, and trends. Pinpointing the area that piques your interest will help in narrowing down your topic.
  • Consider Academic Relevance : Your chosen topic should align with your course objectives and academic guidelines. Consult your professor or academic advisor to ensure that the topic fits the scope of your course.
  • Research Current Trends : Stay up-to-date with the latest developments in TQM by reading scholarly articles, attending conferences, or following industry leaders. Current trends may inspire a relevant and timely topic.
  • Evaluate Available Resources : Make sure that your chosen topic has enough existing literature, data, and resources to support your research.
  • Assess the Scope : A topic that is too broad might be overwhelming, while one that is too narrow might lack content. Balance the scope to ensure depth without overextending.
  • Consider Practical Implications : If possible, choose a topic that has real-world applications. Connecting theory to practice makes your research more impactful.
  • Check Originality : Aim for a topic that offers a new perspective or builds on existing research in a unique way. Your contribution to the field should be clear and valuable.
  • Evaluate Your Expertise : Choose a topic that matches your level of expertise. Overly complex subjects might lead to difficulties, while overly simple ones might not challenge you enough.
  • Consider the Target Audience : Think about who will be reading your research paper. Tailoring your topic to the interests and expectations of your readers can make your paper more engaging.
  • Conduct a Preliminary Research : Before finalizing your topic, conduct some preliminary research to ensure there’s enough material to work with and that the topic is feasible within the given timeframe.

Selecting the right topic for a Total Quality Management research paper is a thoughtful and multifaceted process. It requires considering personal interests, academic requirements, current industry trends, available resources, and practical implications.

By following the guidelines provided, students can align their research with both personal and academic objectives, paving the way for a successful research experience. The ideal topic is one that not only aligns with the ever-evolving field of TQM but also resonates with the researcher’s passion and curiosity, laying the foundation for a meaningful and insightful investigation into the dynamic world of Total Quality Management.

How to Write a Total Quality Management Research Paper

Writing a Total Quality Management (TQM) research paper is a valuable endeavor that requires a clear understanding of the subject, strong analytical skills, and a methodical approach to research and writing. The following guide offers ten actionable tips for writing an impressive research paper on TQM.

Total Quality Management is a comprehensive approach that emphasizes continuous improvement, customer satisfaction, employee involvement, and integrated management systems. Writing a research paper on TQM is not just an academic exercise; it is an exploration into the principles and practices that drive quality in organizations. The following detailed guidance aims to equip you with the necessary knowledge and skills to compose a compelling TQM research paper.

  • Understand the Basics of TQM : Start by immersing yourself in the foundational principles of TQM, including its history, methodologies, and various applications across industries. A deep understanding will form the basis of your research.
  • Choose a Specific Topic : As outlined in the previous section, select a specific and relevant topic that aligns with your interest and the current trends in the field of TQM.
  • Conduct Comprehensive Research : Use reputable sources such as academic journals, books, industry reports, and expert opinions to gather information. Always critically evaluate the reliability and relevance of your sources.
  • Create a Thesis Statement : Your thesis statement is the guiding force of your paper. It should be clear, concise, and articulate your main argument or focus.
  • Develop an Outline : Organize your research into a logical structure. An outline will guide you in presenting your ideas coherently and ensuring that you cover all essential points.
  • Write the Introduction : Introduce the topic, provide background information, and present the thesis statement. Make sure to engage the reader and provide a roadmap for the paper.
  • Compose the Body : Divide the body into sections and subsections that explore different aspects of your topic. Use evidence, examples, and logical reasoning to support your arguments.
  • Incorporate Case Studies and Examples : If applicable, include real-world examples or case studies that demonstrate the application of TQM principles in a practical context.
  • Write the Conclusion : Summarize the key findings, restate the thesis, and provide insights into the implications of your research. A strong conclusion leaves a lasting impression.
  • Revise and Edit : Pay attention to both content and form. Check for logical flow, coherence, grammar, and formatting. Consider seeking feedback from peers or professionals.

Writing a research paper on Total Quality Management is a complex but rewarding task. By understanding the fundamentals of TQM, selecting a precise topic, conducting thorough research, and following a structured writing process, students can produce a paper that not only meets academic standards but also contributes to the understanding of quality management in the modern world.

Emphasizing critical thinking, analytical prowess, and attention to detail, the journey of writing a TQM research paper enriches the student’s academic experience and provides valuable insights into the field that continues to shape organizations globally.

The strategies and tips provided in this guide serve as a roadmap for aspiring researchers, helping them navigate the challenges and triumphs of academic writing in the realm of Total Quality Management. With dedication, creativity, and adherence to scholarly standards, the result can be a meaningful and enlightening piece that resonates with both academics and practitioners alike.

iResearchNet Writing Services

Order a custom Total Quality Management research paper.

Total Quality Management (TQM) research papers require a specialized approach, encompassing a wide array of methodologies, tools, and applications. iResearchNet, as a leading academic writing service provider, is committed to assisting students in crafting top-notch custom Total Quality Management research papers. Here’s a detailed look at the 13 standout features that make iResearchNet the ideal choice for your TQM research paper needs:

  • Expert Degree-Holding Writers : Our team of highly qualified writers possesses advanced degrees in management, business, and related disciplines, ensuring authoritative and insightful content tailored to Total Quality Management.
  • Custom Written Works : Every research paper we undertake is customized to your specific requirements, providing unique, plagiarism-free content that aligns with your academic objectives.
  • In-Depth Research : Equipped with access to vast academic and industry resources, our writers conduct comprehensive research, delivering TQM papers replete with the latest findings, theories, and applications.
  • Custom Formatting (APA, MLA, Chicago/Turabian, Harvard) : We adhere to your institution’s specific formatting guidelines, including the prevalent APA, MLA, Chicago/Turabian, and Harvard styles.
  • Top Quality : iResearchNet’s commitment to excellence ensures that each TQM research paper passes through stringent quality control, offering you not only well-crafted content but insightful and compelling perspectives.
  • Customized Solutions : We understand that every student’s needs are unique, and our services are designed to be flexible enough to cater to individual requirements, whether partial or end-to-end support.
  • Flexible Pricing : Our pricing structure is both competitive and transparent, reflecting the complexity, length, and urgency of your project without compromising on quality.
  • Short Deadlines up to 3 Hours : Even the most urgent projects with deadlines as short as 3 hours are manageable by our adept team.
  • Timely Delivery : Understanding the importance of punctuality, we ensure that every project is delivered within the agreed timeframe.
  • 24/7 Support : Our around-the-clock support team is always available to assist you, answer queries, and provide project updates.
  • Absolute Privacy : We prioritize your privacy, handling all personal and payment details with utmost confidentiality, ensuring that your information is never shared or resold.
  • Easy Order Tracking : Our user-friendly platform enables you to effortlessly track your order’s progress, maintaining control and direct communication with the writer.
  • Money Back Guarantee : Standing firmly behind the quality of our work, we offer a money-back guarantee, promising to make things right or refund your money if the delivered TQM research paper doesn’t meet the agreed standards.

iResearchNet takes pride in delivering excellence in custom Total Quality Management research paper writing. By combining the expertise of seasoned writers, comprehensive research capabilities, and a student-focused approach, we aim to facilitate academic success. Our carefully curated features provide a reliable, quality-driven solution to TQM research paper writing. Let iResearchNet guide you in creating exceptional, engaging, and authoritative papers in the realm of Total Quality Management.

Unleash Your Academic Potential with iResearchNet

At iResearchNet, we understand the complexity and nuance of crafting an impeccable Total Quality Management (TQM) research paper. As you explore the fascinating world of quality management principles, methodologies, and applications, our seasoned professionals are here to ensure that your academic pursuits reach new heights. Here’s why iResearchNet is your go-to partner for top-tier TQM research papers:

  • Tailored to Your Needs : From topic selection to final submission, our custom writing services are fine-tuned to meet your unique requirements. With a dedicated focus on Total Quality Management, our experts provide insightful, relevant, and comprehensive research that not only fulfills academic criteria but also fuels intellectual curiosity.
  • Quality You Can Trust : Quality isn’t just a subject we write about; it’s what defines us. Our commitment to academic excellence is evident in every paper we craft. Supported by thorough research, critical thinking, and precise alignment with your specifications, iResearchNet ensures a product that stands out in your academic journey.
  • Support at Every Step : We know that writing a TQM research paper is a process filled with questions and uncertainties. That’s why our team is available around the clock to support you. From understanding your assignment to addressing revisions, our 24/7 customer service provides peace of mind.
  • Invest in Your Success : With flexible pricing options, a robust money-back guarantee, and a seamless ordering process, iResearchNet makes it simple and risk-free to secure professional assistance for your Total Quality Management research paper. Embrace the opportunity to showcase your understanding of TQM principles through a well-articulated, compelling research paper.

Don’t let the challenges of writing a Total Quality Management research paper hold you back. Tap into the expertise and resources of iResearchNet, where we transform your academic goals into reality. Your perfect Total Quality Management research paper is just a click away!




First Study to Measure Toxic Metals in Tampons Shows Arsenic and Lead, Among Other Contaminants

Tampons from several brands that potentially millions of people use each month can contain toxic metals like lead, arsenic, and cadmium, a new study by Columbia University Mailman School of Public Health and UC Berkeley has found. This is the first paper to measure metals in tampons. The findings are published in Environment International.

Tampons are of particular concern as a potential source of exposure to chemicals, including metals, because the skin of the vagina has a higher potential for chemical absorption than skin elsewhere on the body. In addition, the products are used by a large percentage of the population on a monthly basis—50–80 percent of those who menstruate use tampons—for several hours at a time.

“Despite this large potential for public health concern, very little research has been done to measure chemicals in tampons,” said lead author Jenni A. Shearston, a postdoctoral scholar at the UC Berkeley School of Public Health and UC Berkeley’s Department of Environmental Science, Policy, & Management who had been a doctoral student in Columbia Mailman School’s Department of Environmental Health Sciences . “Concerningly, we found concentrations of all metals we tested for, including toxic metals like arsenic and lead.”

Metals have been found to increase the risk of dementia, infertility, diabetes, and cancer. They can damage the liver, kidneys, and brain, as well as the cardiovascular, nervous, and endocrine systems. In addition, metals can harm maternal health and fetal development.

“Although toxic metals are ubiquitous and we are exposed to low levels at any given time, our study clearly shows that metals are also present in menstrual products, and that women might be at higher risk for exposure using these products,” said study co-author Kathrin Schilling , assistant professor at Columbia University Mailman School of Public Health.

Researchers evaluated levels of 16 metals (arsenic, barium, calcium, cadmium, cobalt, chromium, copper, iron, manganese, mercury, nickel, lead, selenium, strontium, vanadium, and zinc) in 30 tampons from 14 different brands. The metal concentrations varied by where the tampons were purchased (US vs. EU/UK), organic vs. non-organic, and store- vs. name-brand. However, they found that metals were present in all types of tampons; no category had consistently lower concentrations of all or most metals. Lead concentrations were higher in non-organic tampons but arsenic was higher in organic tampons.

Metals could make their way into tampons a number of ways: The cotton material could have absorbed the metals from water, air, soil, through a nearby contaminant (for example, if a cotton field was near a lead smelter), or some might be added intentionally during manufacturing as part of a pigment, whitener, antibacterial agent, or some other process in the factory producing the products.

The researchers hope manufacturers will be required to test their products for metals, especially toxic metals. “It would be exciting to see the public call for this, or to ask for better labeling on tampons and other menstrual products,” said Shearston.

For the moment, it’s unclear if the metals detected by this study are contributing to any negative health effects. Future research will test how much of these metals can leach out of the tampons and be absorbed by the body; as well as measuring the presence of other chemicals in tampons.

Other co-authors include: Kristen Upson, Michigan State University;  Khue Nguyen and Beizhan Yan of Lamont-Doherty Earth Observatory of Columbia University; and Milo Gordon, Vivian Do, Olgica Balac, and Marianthi-Anna Kioumourtzoglou of Columbia University Mailman School of Public Health.

Funding was provided by the National Institute of Environmental Health Sciences; the National Heart, Lung, and Blood Institute; and the National Institute of Nursing Research.


Evaluating the quality of scientific research papers in entrepreneurship

  • Published: 15 October 2021
  • Volume 56 , pages 3013–3027, ( 2022 )


  • Yoganandan G.   ORCID: orcid.org/0000-0002-3000-9183 1 &
  • Vasan M.   ORCID: orcid.org/0000-0003-4600-4683 2  


The study aims to assess the quality of research papers published in the domain of entrepreneurship in India, covering 100 research papers. A standardized measurement tool developed by earlier researchers was used to evaluate research quality, and the data compiled with the tool were analyzed using SPSS. Statistical tools such as descriptive statistics, Friedman's test, factor analysis, the two-sample t-test, and ANOVA were applied. The findings report that the quality of research papers published in the field of entrepreneurship is not up to quality standards. The quality of multiple-author papers is better than that of single-author papers. Similarly, papers published by foreign authors are of comparatively better quality than those by Indian authors, and papers co-authored by foreign and Indian authors are substantially good. Papers published in foreign journals are of higher quality than those published in Indian journals, and papers following a qualitative approach rated comparatively better than those following a quantitative approach. The authors developed a Conceptual Model of Process and Product of Research (YOVA model), which shows that the whole research process yields six levels of research products. The study recommends that researchers pursue international collaboration to improve publication quality, and that funding agencies, higher-learning institutions, and research institutions focus on enhancing research infrastructure. The study examined research articles that novice researchers in India retrieve through Google keyword searches on entrepreneurship; this unfocused search approach is itself a significant impediment to quality research.
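One of the comparisons the abstract describes (single-author vs. multiple-author paper quality, tested with a two-sample t-test) can be illustrated with a minimal sketch. The scores below are invented for illustration only; the study's actual data and its SPSS procedures are not reproduced here, and Welch's unequal-variance form of the statistic is used as one common choice.

```python
# Illustrative sketch (hypothetical data, not the study's): a Welch
# two-sample t statistic comparing quality scores of single-author
# vs. multiple-author papers, one comparison the abstract describes.
import math
import statistics

single_author   = [55, 60, 58, 62, 57, 59]   # hypothetical quality scores (0-100)
multiple_author = [68, 72, 70, 75, 69, 71]   # hypothetical quality scores (0-100)

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variance (n-1)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(single_author, multiple_author)
print(f"mean single={statistics.mean(single_author):.1f}, "
      f"mean multiple={statistics.mean(multiple_author):.1f}, t={t:.2f}")
```

A large negative t here would point the same way as the study's finding (multiple-author papers scoring higher); in practice the statistic would be compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value.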



No funding received.

Author information

Authors and Affiliations

Department of Management Studies, Periyar University, Salem, Tamilnadu, 636 011, India

Yoganandan G.

Department of Commerce, National College (Autonomous), Tiruchirappalli, Tamilnadu, 620 001, India

Vasan M.

Corresponding author

Correspondence to Vasan M.

Ethics declarations

Conflict of interest.

The authors declare that there is no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and permissions

About this article

Yoganandan, G., Vasan, M. Evaluating the quality of scientific research papers in entrepreneurship. Qual Quant 56 , 3013–3027 (2022). https://doi.org/10.1007/s11135-021-01254-z


Accepted: 30 September 2021

Published: 15 October 2021

Issue Date: October 2022

DOI: https://doi.org/10.1007/s11135-021-01254-z


  • Entrepreneurship
  • Quality criteria
  • Research collaboration
  • Research quality
House passes GOP bill requiring proof of citizenship to vote, boosting election-year talking point


Speaker of the House Mike Johnson, R-La., speaks at a news conference with Rep. Chip Roy, R-Texas, left, House Majority Leader Steve Scalise, R-La., right, at the Capitol on Tuesday, July 9, 2024 in Washington. (AP Photo/Kevin Wolf)


WASHINGTON (AP) — The House on Wednesday passed a proof-of-citizenship requirement for voter registration, a proposal Republicans have prioritized as an election-year talking point even as research shows noncitizens illegally registering and casting ballots in federal elections is exceptionally rare.

The legislation, approved largely along partisan lines but with five Democrats voting in favor, is unlikely to advance through the Democratic-led Senate. The Biden administration also says it’s strongly opposed because there already are safeguards to enforce the law against noncitizen voting.

Still, the House vote will give Republicans an opportunity to bring attention to two of their central issues this year — border and election security.

It also provides an opportunity to fuel former President Donald Trump’s claims that Democrats have encouraged the surge of migrants so they can register them to vote, which would be illegal. Noncitizens are not allowed to vote in federal elections, nor in any statewide elections.

Research and audits in several states show there have been instances of noncitizens who successfully registered to vote and cast ballots, but it happens rarely and is typically by mistake. States have mechanisms to check for it, although there isn’t one standard protocol they all follow.

Republican House Speaker Mike Johnson, a key backer of the bill, said in a news conference earlier this week that the Democratic opposition means many Democrats “want illegals to participate in our federal elections; they want them to vote.”

During a speech Wednesday, he called the vote a “generation-defining moment.”

“If just a small percentage, a fraction of a fraction of all those illegals that Joe Biden has brought in here to vote, if they do vote, it wouldn’t just change one race,” he said. “It might potentially change all of our races.”

On his Truth Social platform this week, Trump suggested that Democrats are pushing to give noncitizen migrants the right to vote and urged Republicans to pass the legislation — the Safeguard American Voter Eligibility Act — or “go home and cry yourself to sleep.”

The fixation on noncitizen voting is part of a broader and long-term Trump campaign strategy of casting doubt on the validity of an election should he lose, and he has consistently pushed that narrative during his campaign rallies this year. Last month in Las Vegas, he told supporters, “The only way they can beat us is to cheat.” It also is part of a wider Republican campaign strategy, with GOP lawmakers across the country passing state legislation and putting noncitizen voting measures on state ballots for November.

One Democrat who voted for the GOP bill, Rep. Vicente Gonzalez, said he only did so because the bill was doomed in the Senate.

“It’s not going anywhere,” said Gonzalez, who represents a competitive border district in Texas. “It’s just another Republican messaging bill.”

The majority of Democrats and voting rights advocates have said the legislation is unnecessary because it’s already a felony for noncitizens to register to vote in federal elections, punishable by fines, prison or deportation. Anyone registering must attest under penalty of perjury that they are a U.S. citizen. Noncitizens also are not allowed to cast ballots at the state level. A handful of municipalities allow them to vote in some local elections.

They also have pointed to surveys showing that millions of Americans don’t have easy access to up-to-date documentary proof of citizenship, such as a birth certificate, naturalization certificate or passport, and therefore the bill could inhibit U.S. citizen voters who aren’t able to further prove their status.

During the Wednesday floor debate, Rep. Joe Morelle of New York, the top Democrat on the House Administration Committee, expressed concern that the bill would disenfranchise various American citizens.

He mentioned military members stationed abroad who couldn’t show documentary proof of citizenship in person at an election office, as well as married women whose names have changed, Native Americans whose tribal IDs don’t show their place of birth and natural disaster survivors who have lost their personal documents.

Morelle said he doesn’t see the bill as an attempt to maintain voter rolls, but as part of larger GOP-led plans to question the validity of the upcoming election.

“The false claim that there is a conspiracy to register noncitizens is a pretext for trying to overturn the 2024 election, potentially leading to another tragedy on January 6th, 2025,” he said.

Yet Republicans who support the bill contend the unprecedented surge of migrants illegally crossing the U.S.-Mexico border creates too large a risk of noncitizens slipping through the cracks and casting ballots that sway races in November.

“Every illegal vote cancels out the vote of a legal American citizen,” said Rep. Bryan Steil of Wisconsin, the Republican chair of the House Administration Committee.

If passed, the bill would require noncitizens to be removed from state voter rolls and require new applicants to provide documentary proof of U.S. citizenship. It also would require states to establish a process for applicants who can’t show proof to provide other evidence beyond their attestation of citizenship, though it’s unclear what that evidence could include.

Ohio Secretary of State Frank LaRose recently found 137 suspected noncitizens on the state’s rolls — out of roughly 8 million voters — and said he was taking action to confirm and remove them.

In 2022, Georgia’s Republican secretary of state, Brad Raffensperger, conducted an audit of his state’s voter rolls specifically looking for noncitizens. His office found that 1,634 had attempted to register to vote over a period of 25 years, but election officials had caught all the applications and none had been able to register.

In North Carolina in 2016, an audit of elections found that 41 legal immigrants who had not yet become citizens cast ballots, out of 4.8 million total ballots cast. The votes didn’t make a difference in any of the state’s elections.

In a document supporting the bill, Johnson listed other examples of noncitizens who had been removed from the rolls in Boston and Virginia.

Several secretaries of state, interviewed during their summer conference in Puerto Rico this week, said noncitizens attempting to register and vote is not a big problem in their state.

Utah Lt. Gov. Deidre Henderson, a Republican who oversees elections, said she supports the legislation in concept but provided a cautionary tale about how aggressively culling voter rolls can sometimes result in the removal of qualified voters.

A few years ago, everyone in her household received mail ballots for a municipal election, except her. She had been removed from the rolls because she had been born in the Netherlands, where her father was stationed with the U.S. Air Force.

“I was the lieutenant governor, I was overseeing elections, and I got taken off because I was born in the Netherlands,” she said, “So I think we definitely have those checks and balances in the state of Utah, maybe to an extreme.”

The House vote comes days after the Republican National Committee released its party platform, which emphasizes border security and takes a stand against Democrats giving “voting rights” to migrants living in the country illegally. Republicans are expected to shine a light on immigration and election integrity concerns at the Republican National Convention next week in Milwaukee.

Swenson reported from New York. Associated Press writer Christina A. Cassidy in San Juan, Puerto Rico, contributed to this report.

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here . The AP is solely responsible for all content.


COMMENTS

  1. Research quality: What it is, and how to achieve it

    2.1. What research quality historically is. Research assessment plays an essential role in academic appointments, in annual performance reviews, in promotions, and in national research assessment exercises such as the Excellence in Research for Australia (ERA), the Research Excellence Framework (REF) in the United Kingdom, the Standard ...

  2. How do you determine the quality of a journal article?

    1. Where is the article published? The journal (academic publication) where the article is published says something about the quality of the article. Journals are ranked in the Journal Quality List (JQL). If the journal you used is ranked at the top of your professional field in the JQL, then you can assume that the quality of the article is high.

  3. Criteria for Good Qualitative Research: A Comprehensive Review

    Fundamental Criteria: General Research Quality. Various researchers have put forward criteria for evaluating qualitative research, which have been summarized in Table 3. Also, the criteria outlined in Table 4 effectively deliver the various approaches to evaluate and assess the quality of qualitative work. The entries in Table 4 are based on Tracy's "Eight big-tent criteria for excellent ...

  4. Defining and assessing research quality in a transdisciplinary context

    Effective research quality criteria are necessary to guide the funding, management, ongoing development, and advancement of research methods, projects, and programs. ... These papers recognize the need for new quality measures to encourage and promote high-quality research and to overcome perceived biases against TDR approaches in research ...

  5. Assessing the quality of research

    Systematic reviews of research are always preferred. Level alone should not be used to grade evidence: other considerations include design elements, such as the validity of measurements and blinding of outcome assessments, and the quality of the conduct of the study, such as loss to follow-up and success of blinding.

  6. Quality in Research: Asking the Right Question

    This column is about research questions, the beginning of the researcher's process. For the reader, the question driving the researcher's inquiry is the first place to start when examining the quality of their work because if the question is flawed, the quality of the methods and soundness of the researchers' thinking does not matter.

  7. Citations, Citation Indicators, and Research Quality: An Overview of

    Dag W. Aksnes is research professor at the Nordic Institute for Studies in Innovation, Research and Education (NIFU) and affiliated with the Centre for Research Quality and Policy Impact Studies (R-QUEST). Aksnes' research covers various topics within the field of bibliometrics, such as studies of citations, citation analyses and assessments ...

  8. Evaluating research: A multidisciplinary approach to assessing research

    In Canada, standard quality assessment criteria for research papers have been developed, and these deal separately with quantitative and qualitative research studies (Kmet et al., 2004). However, it is not our goal to distinguish some types of scientific methods that are inherently 'good' from others that may be 'bad'.

  9. Assessing the quality of research

    The widespread use of hierarchies of evidence that grade research studies according to their quality has helped to raise awareness that some forms of evidence are more trustworthy than others. This is clearly desirable. However, the simplifications involved in creating and applying hierarchies have also led to misconceptions and abuses.

  10. Content-based quality evaluation of scientific papers using coarse

    High-quality research is the engine of scientific and technological progress. Many countries have elevated the identification and management of high-quality research to the national level. ... In scientific paper quality evaluation, conventional machine learning models have traditionally been utilized to predict paper quality based on manually ...

  11. What makes a high quality clinical research paper?

    The quality of a research paper depends primarily on the quality of the research study it reports. However, there is also much that authors can do to maximise the clarity and usefulness of their papers. Journals' instructions for authors often focus on the format, style, and length of articles but d …

  12. Quality Indicators of Scientific Research

    A high value of h indicates a high quality of research. A scientist has an index h if h of his/her Np papers (total publications) have at least h citations each, and the other (Np − h) papers have no more than h citations each. For example, if a scientist is rated to have h = 20, it means that 20 of his papers (out of, say, a total of 50, Np is ...
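The h-index definition above is a small, well-defined algorithm, so it can be sketched directly in code. This is a minimal illustration, assuming `citations` is a hypothetical list of per-paper citation counts for one researcher:

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have at least h citations each."""
    # Sort citation counts from highest to lowest.
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            # At least `rank` papers have >= `rank` citations.
            h = rank
        else:
            break
    return h

# Three papers have at least 3 citations each, so h = 3.
print(h_index([10, 8, 5, 2, 1]))  # prints 3
```

The sort makes the definition easy to check: walking down the ranked list, h is the last rank whose citation count still meets or exceeds the rank itself.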

  13. A Review of the Quality Indicators of Rigor in Qualitative Research

    Abstract. Attributes of rigor and quality and suggested best practices for qualitative research design as they relate to the steps of designing, conducting, and reporting qualitative research in health professions educational scholarship are presented. A research question must be clear and focused and supported by a strong conceptual framework ...

  14. A Step-To-Step Guide to Write a Quality Research Article

    These research papers will be of more use to you in the process of preparing a high-quality research paper. On the other hand, the majority of reputable journals advise against citing more than two publications from the pre-print or ArXiv database in a single paper. We are only permitted to refer to articles that have been published by ...

  15. Systematic Reviews: Step 6: Assess Quality of Included Studies

    Use quality assessment tools to grade each article. Create a summary of the quality of literature included in your review. This page has links to quality assessment tools you can use to evaluate different study types. Librarians can help you find widely used tools to evaluate the articles in your review.

  16. How to Write and Publish a Research Paper for a Peer ...

    Communicating research findings is an essential step in the research process. Often, peer-reviewed journals are the forum for such communication, yet many researchers are never taught how to write a publishable scientific paper. In this article, we explain the basic structure of a scientific paper and describe the information that should be included in each section. We also identify common ...

  17. Measuring Research Quality and Impact

    Bibliometrics - Bibliometrics is the quantitative analysis of traditional academic literature, such as books, book chapters, conference papers or journal articles, to determine quality and impact. Cited reference search - A cited reference search allows you to use appropriate library resources and citation indexes to search for works that cite ...

  18. (PDF) Assessment of Research Quality

    This paper considers assessment of research quality by focusing on definition and solution of research problems. We develop and discuss, across different classes of problems, a set of general ...

  19. How to … assess the quality of qualitative research

    A further important marker for assessing the quality of a qualitative study is that the theoretical or conceptual framework is aligned with the research design, the research question(s) and the methodology used in the study, as well as with the reporting of the research findings. High-quality qualitative research necessitates critical ...

  20. 20 Ways to Improve Your Research Paper

    13. Check your plots and graphs. Nothing in your paper is as important as your data. Your discoveries are the foundation of your work. They need to be clear and easy to understand. To improve your research paper, make sure graphs and images are in high resolution and show the information clearly.

  21. Faculty Research Guide: Where do I publish my research?

    CiteScore only includes peer-reviewed research: articles, reviews, conference papers, data papers and book chapters, covering 4 years of citations and publications. Historical data back to CiteScore 2011 have been recalculated and are displayed on Scopus.

  22. Quality versus quantity: assessing individual research performance

    Abstract. Evaluating individual research performance is a complex task that ideally examines productivity, scientific impact, and research quality--a task that metrics alone have been unable to achieve. In January 2011, the French Academy of Sciences published a report on current bibliometric (citation metric) methods for evaluating ...

  23. Full article: Quality 2030: quality management for the future

    The paper is also an attempt to initiate research for the emerging 2030 agenda for QM, here referred to as 'Quality 2030'. This article is based on extensive data gathered during a workshop process conducted in two main steps: (1) a collaborative brainstorming workshop with 22 researchers and practitioners (spring 2019) and (2) an ...

  24. Total Quality Management Research Paper Topics

    100 Total Quality Management Research Paper Topics. Total Quality Management (TQM) has evolved to become a strategic approach to continuous improvement and operational excellence. It has applications across various industries, each with its unique challenges and opportunities. Below is an exhaustive list of TQM research paper topics, divided ...

  25. Research of Air Quality in Closed Car Parks in Konya Province

    Nowadays, due to the development of the world, harmful gases and vapours have begun to enter the atmosphere significantly, so we are faced with the problem of air pollution and its low quality. This paper, therefore, focused on enclosed car parks, a parking system that needs to be well examined from an environmental perspective. It is not possible to predict in advance what levels of pollution ...

  26. First Study to Measure Toxic Metals in Tampons Shows Arsenic and Lead

    This is the first paper to measure metals in tampons. The findings are published in Environmental International. Tampons are of particular concern as a potential source of exposure to chemicals, including metals, because the skin of the vagina has a higher potential for chemical absorption than skin elsewhere on the body.

  27. Evaluating the quality of scientific research papers in

    The study aims to find the quality of research papers published in the domain of entrepreneurship in India. This study covers 100 research papers. A standardized measurement tool developed by the earlier researchers was used to evaluate the research quality. The data compiled using the measurement tool were analyzed with the support of the SPSS. The statistical tools such as descriptive ...

  28. House passes GOP bill requiring proof of citizenship to vote

    The House has passed a bill requiring proof of U.S. citizenship for voter registration. It's legislation Republicans have prioritized as an election-year talking point even as research shows noncitizens illegally registering and casting ballots in federal elections is exceptionally rare.

  29. NHS cancer services and systems—ten pressure points a UK cancer control

    In this Policy Review we discuss ten key pressure points in the NHS in the delivery of cancer care services that need to be urgently addressed by a comprehensive national cancer control plan. These pressure points cover areas such as increasing workforce capacity and its productivity, delivering effective cancer survivorship services, addressing variation in quality, fixing the reimbursement ...

  30. Equipment Quality Information Mining Method ...

    Equipment quality-related data contains valuable information. Data mining technology seems to be an efficient method for extracting knowledge from large amounts of data. ... The research in this paper has significantly improved the performance of a mature association rule mining algorithm and further enriched association rule mining methods ...