Research Excellence

Penn State is committed to growing its interdisciplinary research enterprise.

Penn State is among the top research institutions in the country, with more than $1.2 billion in research expenditures in 2022-23. Every day, talented researchers, including postdoctoral fellows, graduate students, and undergraduate students across hundreds of disciplines, are making a difference in Pennsylvania and around the globe, helping to solve critical societal challenges and preparing students to be the leaders and thinkers of tomorrow. Ranked among the top 30 research institutions (and among the top 15 public institutions) nationally, Penn State employs some of the world’s leading researchers, who create knowledge for the benefit of all and help drive discovery, introspection, curiosity, rigor and growth among our students as they prepare for successful careers.

Investing in research and commercialization

A thriving interdisciplinary research enterprise is imperative to Penn State’s future. University leaders are making strategic investments to help maximize the impacts of our researchers’ work and foster excellence. This means investing in areas where the University is already great, as well as in areas that are poised to go from good to great. For example, in fiscal year 2023, Penn State invested nearly $300 million in research, including in advanced infrastructure and equipment. The University also prioritizes the recruitment and retention of top faculty members through targeted seed grants, competitive retention packages, investment in state-of-the-art equipment, building interdisciplinary teams of faculty in areas of strategic impact, and faculty recognition through chairs and other means.

This investment also supports commercialization of research via licensing of Penn State’s intellectual property to existing companies or start-ups with a goal of generating real-world impacts. In turn, such efforts also have the potential to generate significant novel revenue streams for the University and its inventors, which can include undergraduate and graduate students and postdoctoral fellows, in addition to faculty members. This work also serves communities by supporting economic development in Pennsylvania and beyond.

Investing in research also means investing in our graduate students, who play a vital role in the success of our institution’s research enterprise. Not only are our graduate students beneficiaries of our research efforts, which provide the foundation for them to flourish, but they are also active contributors and catalysts for innovation.

University leaders also have launched the Research Support Transformation Project (RSTP), which will directly support faculty success by reducing unnecessary administrative burdens and increasing the efficiency of operations that support researchers in their work. Specifically, investments will be made in research support staff and in the tools and resources our faculty and staff need to be efficient and effective. Researchers can provide input to help guide the direction of RSTP by visiting the project’s information portal.

Impacting Pennsylvania and the world

As Pennsylvania’s only land-grant university, Penn State is uniquely positioned to conduct research that serves the commonwealth. Penn State research promotes economic development, helps to grow the workforce, and supports local businesses, while providing a world-class education to students at twenty-four campuses across the state. To support the research taking place at campuses across Pennsylvania every day, University leaders recently announced grant opportunities for Commonwealth Campus researchers. In addition, to reach our goal of excellence in interdisciplinary research, Penn State provides support to foster collaborations across colleges and campuses, such as those between University Park and the College of Medicine.

Research is a cornerstone of Penn State. University leaders are committed to making strategic investments where they will have the most impact on the institution’s mission and to identifying new revenue streams to build an even stronger future for Penn State.

Frequently Asked Questions

How will Penn State ensure that research remains a cornerstone of the University?

As an R1 university, Penn State has research in its DNA. University leaders are making strategic investments to reduce administrative burdens, simplify processes, and increase support for researchers so they can focus on what they do best. For example, in fiscal year 2023, Penn State invested nearly $300 million in research, including in advanced infrastructure and equipment. The University also prioritizes the recruitment and retention of top faculty members through targeted seed grants, competitive retention packages, investment in state-of-the-art equipment, building interdisciplinary teams of faculty in areas of strategic impact, and faculty recognition through chairs and other means.

How is the University investing in research at the Commonwealth Campuses?

We are engaging with campus chancellors to identify investment opportunities where we can really make a difference. Investment areas may include research that offers undergraduate research experiences, potential for external grant success, potential for local and national impact, community engagement, and support from local stakeholders, including industry and community organizations.

Are there any further updates on the Research Support Transformation Project?

There are three workstreams underway to make the University’s research support infrastructure more efficient by taking a more standardized, institution-wide approach to research support. The workstreams are focused on analyzing and making recommendations to:

  • Streamline the invoice and payment process with funding agencies;
  • Analyze the functional gaps within our current research information systems, including the portal researchers use to track financial progress; and
  • Modernize research support moving forward, starting with research accounting.

The project at its core is focused on better understanding the root causes of the administrative burdens that researchers face and that prevent additional growth. Researchers can provide input to help guide the direction of RSTP by visiting the project’s information portal. Community feedback will be critical to inform what actions we prioritize within the research support ecosystem.

Latest News

Researchers can now access their financial accounts in updated portal

Update is part of the Research Support Transformation Project aimed at increasing the efficiency of research operations.

Research Support Transformation Project moves forward, feedback solicited

Since RSTP was launched last fall, several updates are available, and more input is needed from researchers and support staff to inform continued progress.

Penn State announces new public-impact research funding opportunities for campuses

Links to resources:

  • Researchers: Please provide input to help guide the direction of the Research Support Transformation Project.

Research excellence indicators: time to reimagine the ‘making of’?

Federico Ferretti, Ângela Guimarães Pereira, Dániel Vértesy, Sjoerd Hardeman, ‘Research excellence indicators: time to reimagine the “making of”?’, Science and Public Policy, Volume 45, Issue 5, October 2018, Pages 731–741, https://doi.org/10.1093/scipol/scy007

In the current parlance of evidence-based policy, indicators are increasingly called upon to inform policymakers, including in the research and innovation domain. However, few studies have scrutinized how such indicators come about in practice. We take as an example the development of an indicator by the European Commission, the Research Excellence in Science & Technology indicator. First, we outline tensions related to defining and measuring research excellence for policy using the notion of the ‘essentially contested concept’. Second, we explore the construction and use of the aforementioned indicator through in-depth interviews with relevant actors, and the co-production of indicators, that is, the interplay of their making vis-à-vis academic practices and policy expectations. We find that although many respondents in our study feel uncomfortable with the current usage of notions of excellence as an indicator of the quality of research practices, few alternatives are suggested. We identify a number of challenges which may contribute to the debate on indicator development, suggesting that the making of current indicators for research policy in the EU may be in need of serious review.

1. Introduction

When it comes to research policy, excellence is at the top of the agenda. Yet the meaning attributed to the notion of excellence differs markedly among academics and policymakers alike.

There is an extensive scholarly debate around the breadth and depth of the meaning of excellence, its capacity to provide quantitative assessments of research activities, and its potential to support policy choices. Yet there is considerable agreement that it strongly influences the conduct of science. The contestedness of the excellence concept can be inferred from the discomfort it has evoked among scholars, leading some even to plead for outright rejection of the concept (Stilgoe 2015). The discomfort with the concept grows whenever proposals are made to measure it. The critique of measuring excellence follows two lines. One is technical and emphasises the need for methodological rigour. While not denying in principle the need for and the possibility of designing science and technology indicators, this line of criticism stresses the shortcomings of the methodological approaches used up until now (Grupp and Mogee 2004; Grupp and Schubert 2010). The other critique is more philosophical and, while not denying the theoretical and political relevance of excellence, takes issue with the use of current metrics in assessing it (Weingart 2005; Martin 2011; Sørensen et al. 2015). Notwithstanding these criticisms, and especially given the period of science professionalization in which policymaking finds itself (Elzinga 2012), these same metrics are frequently called upon to legitimate policy interventions (Wilsdon et al. 2015).

In addition, widely discussed shortcomings in the existing mechanisms of science’s quality control system undermine trust in assessment practices around scientific excellence. In other words, if the peer review system is in crisis, on what basis are research outcomes evaluated as excellent? (See Martin 2013; Sarewitz 2015; Saltelli and Funtowicz 2017.)

The aspiration for an ‘evidence-based society’ (Smith 1996) requires that policymakers, especially those operating at the level of transnational governmental organisations, rely on information about the current state of research to identify policy priorities or to allocate funds. Indicators are typically proposed as tools catering to this need (Saltelli et al. 2011). A central issue remains, however: how to come up with indicators of research excellence in the face of the concept’s often controversial underpinnings and situated nature?

At the Joint Research Centre of the European Commission, we have been actively involved in the design and construction of a country-level indicator of excellence, the Research Excellence Science & Technology indicator (RES&T), offered and used by the European Commission (cf. European Commission 2014; Hardeman et al. 2013). Hence, we are in a unique position to critically reflect upon the challenges of quantifying research excellence for policy purposes.

Here we adopt the notion of the essentially contested concept as our theoretical workhorse (Gallie 1955; Collier et al. 2006) to discuss why the usefulness of research excellence for policy purposes is a subject of contention and what this means for its quantification. Essentially contested concepts are concepts ‘the proper use of which inevitably involves endless disputes about their proper uses on the part of their users’ (Gallie 1955: 169).

The work presented in this article revolves around two questions, which evolved as we learned from the empirical material. First, we examine whether research excellence can be ‘institutionalised’ in the form of stable research excellence indicators, from the vantage point of Gallie’s notion of the ‘essentially contested concept’. Second, we ask whether the re-negotiation of the meanings of research excellence that underpin current indicators revolves around the articulation of the different imaginaries of excellence displayed by different actors. These initial questions were reframed as we progressively understood that, while the focus on practices was certainly relevant, larger questions emerged, such as whether ‘excellence’ alone is indeed the relevant descriptor for evaluating the quality of research in the EU. Hence, this discussion is also offered vis-à-vis our findings throughout the research process.

The article starts by looking into the notion of excellence and its function as a proxy for scientific quality, using the notion of the essentially contested concept as well as elements of tension around its conceptualization, as reported in the literature (Section 2). It then briefly describes the development of the indicator that we take as an example in responding to the research questions described earlier. The second part of the article explains the methodology applied (Section 3) and the outcomes (Section 4) of the empirical research carried out to inform this article, which consisted of a number of in-depth interviews with relevant actors, that is, developers of the RES&T indicator, EU policymakers, and academics. The interviews aimed at exploring meanings, challenges, and ways to reimagine the processes behind indicator development. In those interviews, we explored ‘re-imagination’ as a space for our interviewees to reflect further and discuss alternatives to current research indicator frameworks. These are offered in a discussion (Section 5) of the current challenges in reimagining an indicator to qualify quality in science.

2. Quantifying research excellence for policy purposes

2.1 Measuring and quantifying indicators-for-policy

The appeal of numbers is especially compelling to bureaucratic officials who lack a mandate of popular election or divine right; scientific objectivity thus provides an answer to a moral demand for impartiality and fairness; it is a way of making decisions without seeming to decide. (T. M. Porter 1995)

Indicators seek to put into numbers phenomena that are hard to measure (Boulanger 2014; Porter 2015). Therewith, measuring is something other than quantifying (Desrosieres 2015): while measuring is about putting into numbers something that already exists, quantifying is about putting into numbers something that requires an interpretative act. Indicators are often exemplary of quantifications. They are desirable because they offer narratives that simplify complex phenomena and therewith attempt to render them comprehensible (Espeland 2015). Such simplifications are especially appealing whenever information is called for by policymakers operating at a distance from the real contexts that are the actual object of their policy action. Simplification means that someone decides which aspects of complex phenomena are stripped away while others are taken on board. The (knowledge and values) grounds for that operation are not always visible. The risk is that, in stripping away some aspects (and focusing on others), a distorted view of the phenomenon of interest may arise, with potentially severe consequences for the policy decisions derived from it. Lacking the opportunity to gather detailed information on each and every aspect of a phenomenon of concern, policymakers are nevertheless drawn to indicators offering them the information they need in the form of summary accounts (Porter 2015).

Constructing an indicator of research excellence typically involves activities of quantification, as research excellence has no physical substance in itself. For an indicator of research excellence to come into existence, one first needs an understanding of what ‘research excellence’ is about before one can even start assigning numbers to the concept (Barré 2001). We find the notion of ‘co-production’ (Jasanoff 2004) relevant here, as it makes visible that indicators are not developed in a vacuum but respond to, and simultaneously normalise, scientific practice and policy expectations.

2.2 Research excellence as an essentially contested concept

Research excellence could be straightforwardly defined as going beyond a superior standard in research (Tijssen 2003). However, straightforward and intuitively appealing as this definition may seem, it merely shifts the issue of defining what is meant by research excellence towards what counts as ‘a superior standard in research’. For one thing, it remains unclear what should be counted as research to begin with, as well as how standards of superiority should be set, on what account and by whom. Overall, the notion of research excellence is potentially much more controversial than it might seem at first. In fact, whenever it comes to articulating what should count as excellent research and why, scientific communities systematically struggle to come to an agreement (Lamont 2009).

One way to conceive of research excellence, then, is to think of it as an essentially contested concept. The notion of the essentially contested concept was first introduced by Gallie (1955) to describe ideas or phenomena that are widely appraised but controversial at the same time. In substantiating his view, Gallie (1955) listed five properties of essentially contested concepts (see also Collier et al. 2006). Essentially contested concepts are (1) appraisive, (2) internally complex, (3) describable in multiple ways, (4) inherently open, and (5) recognized reciprocally among different parties (Gallie 1955). Due to their complex, open, and value-laden nature, essentially contested concepts cannot be defined in a single best, fixed, and objective way from the outset. Hence, they are likely to produce endless debates on their interpretation and implications.

Research excellence might well serve as an instance of an essentially contested concept. First, research excellence, by its very appeal to superior standards, evokes a general sense of worth and, therewith, shareability. Although one can argue about its exact definition and the implications that such definitions could have, it is hard to be against excellence altogether (Stilgoe 2015). Second, research excellence is likely to be internally complex, as it pertains to elements of the research enterprise that need not be additive in straightforward ways.

For example, research excellence can be about process as well as outcomes, whereby the former need not automatically transform into the latter (Merton 1973). Third, it follows that research excellence can be described in multiple ways: while some might simply speak of research excellence with reference to science’s peer review system (Tijssen 2003), others prefer to broaden the notion of research excellence beyond its internal value system to include science’s wider societal impact as well (Stilgoe 2015). Fourth, what counts as excellent research now might not necessarily count as excellent research in the future, and any definition of research excellence might well be subject to revision. Finally, the fact that one can have a different view on what research excellence is or should be is agreed upon by proponents of different definitions; ultimately, proponents of a particular notion of research excellence may or may not be aware of alternative interpretations.

Recently, Sir Keith Bernett (2016) argued that a mechanical vision of academia is driving ‘mechanical and conventional ways we think about “excellence”. We measure a community of scholars in forms of published papers and league tables’ (Bernett 2016). Hence, what counts as excellence is entertained by the imagination of some about what ‘excellent research’ is; but what political, social, and ethical commitments are built into the adopted notion and the choice of what needs to be quantified?

2.3 Quantifying research excellence for policy purposes: critical issues

Following the previous discussion, if one acknowledges research excellence as an essentially contested concept, the construction of indicators faces difficulties, which start with the mere act of attempting quantification, that is, agreeing on a shared meaning of research excellence. In the 1970s, Merton (1973: 433–435) introduced three questions that need to be addressed to come to terms with the notion of research excellence (see also Sørensen et al. 2015).

First, what is the basic unit of analysis to which research excellence pertains? Merton (1973) suggested that this could be anything ranging from a discovery, a paper, a painting, a building, a book, a sculpture, or a symphony to a person’s life work or oeuvre. There is both a temporal and a socio-spatial dimension to the identification of a unit of research excellence: temporal in the sense that research excellence need not be attributable to a specific point in time only but might span larger time periods. Also, though not much discussed by Merton (1973), a unit of research excellence has a socio-spatial dimension. Research excellence might pertain to objects (books, papers, sculptures, etc.) or to people. When it comes to the latter, a major issue is to whom excellence can be attributed (individuals, groups, organisations, territories) and how to draw appropriate boundaries among them (cf. Hardeman 2013). Expanding or restricting a unit’s range in time and/or space affects the quantification of research excellence accordingly.

Second, what qualities of research excellence are to be judged? Beyond the identification of an appropriate unit of analysis, this second issue raised by Merton (1973) points to several concerns. One is about the domain of research itself. As with disputes about science and non-science (Gieryn 1983), demarcating research from non-research is more easily said than done. Yet, to attribute excellence to research, such boundary work needs to be done nevertheless. Should research excellence, as in Polanyi’s (1962) ‘The Republic of Science’, be judged according to its own criteria? Or should research, in line with Weinberg’s (1962) emphasis on external criteria, be judged according to its contribution to society at large? As with setting the unit of excellence, setting the qualities in one way (and not another) certainly produces different outcomes for the policies derived therefrom. Moreover, focusing on a particular notion of excellence (i.e. using a particular set of qualities) might crowd out other, in principle equally valid, qualities (Rafols et al. 2012; Sørensen et al. 2015).

Third, who shall judge? For example, a researcher working in a public lab might have a whole different idea of what counts as excellent research than one working in a private lab. This brings Stilgoe (2015) to argue that ‘“Excellence” tells us nothing about how important the science is and everything about who decides’. It is undoubtedly of eminent importance to determine the goals and interests that excellence serves. Likewise, and in line with Funtowicz and Ravetz’s (1990) focus on fitness for purpose in describing the quality of a process or product, the quality of an indicator of research excellence crucially depends on its use. One concern here is that research excellence indicators might set standards of research practice that do not conform to the underlying concept of excellence they seek to achieve (Hicks 2012; Sørensen et al. 2015). For example, in Australia, in seeking to achieve excellence, an explicit focus on publication output indeed increased the number of papers produced but left the issue of the actual worth of those papers unaddressed (Butler 2003). Interestingly, in 2009 a new excellence framework came into existence in Australia to replace the former quality framework. While the earlier framework made use of a one-size-fits-all model, the new excellence-based one presents a matrix approach in which entire sets of indicators, as well as experts’ reviews, coexist as measures of quality. Again, any definition of research excellence and its implications for quantification need to be positioned against the background of the goals and interests it serves.

2.4 The construction of the Research Excellence Indicator (RES&T) at the Joint Research Centre

The development of the Research Excellence Indicator (RES&T) at the Joint Research Centre of the European Commission (JRC) inspired this research. Its history and development, together with our privileged position of proximity to that development, form the basis from which we departed to conduct our inquiries.

In 2011, an expert group on the measurement of innovation set up by the European Commission’s Directorate-General for Research and Innovation (DG-RTD) was requested ‘to reflect on the indicators which are the most relevant to describe the progress to excellence of European research’ (Barré et al. 2011: 3). At that point the whole notion of excellence was said to be ‘in a rather fuzzy state’ (Barré et al. 2011: 3). To overcome the conceptual confusion surrounding research excellence and to come up with a short list of indicators capable of grasping research excellence, the expert group proceeded in four steps. First, they defined and described the types of activities eligible to be called excellent. Second, a set of potential indicators was identified. Third, from this set of potential indicators, a short list of (actually available) indicators was recommended. And fourth, a process for interpreting research excellence as a whole at the level of countries was proposed.

This was followed by Vertesy and Tarantola (2012), who proposed ways to aggregate the set of indicators identified by the expert group into a single composite index measuring research excellence. The index closely resembled the theoretical framework offered by the expert group while aiming for statistical soundness at the same time.
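
To make concrete what such an aggregation involves, the sketch below shows the generic construction of a composite index: min–max normalisation of each underlying indicator across countries, followed by a weighted average. This is an illustration of the general technique only, not the official RES&T methodology (for which see Hardeman et al. 2013); the country labels, indicator values, and equal weights are all invented for the example.

```python
# Minimal sketch of a composite-indicator construction (illustrative only).
# Hypothetical scores for three countries on four underlying indicators.
raw = {
    "Country A": [12.0, 0.8, 3.1, 1.9],
    "Country B": [7.5, 0.3, 4.0, 0.6],
    "Country C": [9.2, 0.5, 1.2, 2.4],
}
weights = [0.25, 0.25, 0.25, 0.25]  # equal weights: itself a normative choice

def minmax_normalise(data):
    """Rescale each indicator to [0, 1] across countries."""
    n = len(next(iter(data.values())))
    lo = [min(v[i] for v in data.values()) for i in range(n)]
    hi = [max(v[i] for v in data.values()) for i in range(n)]
    return {c: [(v[i] - lo[i]) / (hi[i] - lo[i]) for i in range(n)]
            for c, v in data.items()}

def composite(data, weights):
    """Weighted arithmetic mean of the normalised indicators."""
    norm = minmax_normalise(data)
    return {c: sum(w * x for w, x in zip(weights, vals))
            for c, vals in norm.items()}

scores = composite(raw, weights)
print(sorted(scores, key=scores.get, reverse=True))  # ranking, best first
```

Every step in such a construction embodies a normative choice: the normalisation rescales away absolute differences between countries, and the weights encode a judgement about the relative importance of each dimension.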

Presented at a workshop organised in Ispra (Italy) during fall 2012 by the European Commission and attended by both policymakers and academic scholars, the newly proposed composite indicator met with fierce criticism. A first critique was that the proposed composite indicator mixed up inputs and outputs, while research excellence, according to the critics, should be about research outputs only: whereas the outcomes of research and innovation activities are fundamentally uncertain, the nature and magnitude of research and innovation inputs say little to nothing about those outputs. A second critique raised during the workshop was that some of the indicators used, while certainly pertaining to research, need not say much about its excellent content. Focusing on outputs only would largely exclude other dimensions that refer to any kind of input (e.g. gross investment in R&D) or any kind of process organizing the translation of inputs into outputs (e.g. university–industry collaborations).

Taking these critiques on board, the research excellence indicator was further refined towards the finalization of the 2013 report (Hardeman et al. 2013). First, the scope of the indicator was made explicit by limiting it to research in science and technology only. Second, following the critique that inputs should be strictly distinguished from outputs, it was made clear which of the underlying indicators were primarily focused on outputs. Given that the underlying indicators were not available for all countries, the rankings presented in the 2013 Innovation Union Competitiveness Report were based on a single composite indicator aggregating either three (non-ERA countries) or four (ERA countries) underlying indicators (European Commission 2013).

In a subsequent report aimed at refining the indicator, Hardeman and Vertesy (2015) addressed a number of methodological choices, some of which were also pointed out by Sørensen et al. (2015). These concerned the scope of coverage in terms of the number and kind of countries and the range of (consecutive) years, the variables included (both numerators and denominators), and the choice of weighting and aggregation of components. The sensitivity and uncertainty analyses highlighted that some of the methodological choices were more influential than others. While these findings highlighted the importance of normative choices, such normative debates materialized only within a limited arena.
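
As a hedged illustration of what such a sensitivity analysis can look like (reusing the hypothetical raw data and composite() function from the sketch above, and again unrelated to the published analyses), one can redraw the weights at random many times and record how far each country’s rank can move:

```python
import random

def rank_spread(data, trials=10_000, seed=42):
    """Record each country's best and worst rank under randomly drawn weights."""
    rng = random.Random(seed)
    n = len(next(iter(data.values())))
    spread = {c: [len(data), 1] for c in data}  # [best rank, worst rank]
    for _ in range(trials):
        w = [rng.random() for _ in range(n)]
        total = sum(w)
        w = [x / total for x in w]  # normalise weights to sum to 1
        scores = composite(data, w)  # composite() from the previous sketch
        ordered = sorted(scores, key=scores.get, reverse=True)
        for pos, c in enumerate(ordered, start=1):
            spread[c][0] = min(spread[c][0], pos)
            spread[c][1] = max(spread[c][1], pos)
    return spread

print(rank_spread(raw))  # best and worst rank per country across weightings
```

A wide best-to-worst interval for a country signals that its position in the ranking is an artefact of the weighting choice rather than a robust finding.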

Based on our research and experience with the RES&T, we will discuss whether careful reconsideration of the processes by which these types of indicators are developed and applied is needed.

3. Methodology

A qualitative social research methodology was adopted to gain insights, from different actors’ vantage points, into the concepts, challenges, and practices that sustain the quantification of research excellence.

A series of in-depth interviews was carried out by two of the authors of this paper between March and May 2016. A first set of interviews was conducted with five people directly involved in the construction of the RES&T indicator from the policy and research spheres, or people identified through our review of the relevant literature.

This was followed by a second set of interviews (six participants), partially suggested by the interviewees in the first set. This second set was composed of senior managers and scholars of research centres with departments of scientometrics and bibliometrics, as well as policymakers.

Hence, the eleven interviewees were all either involved in different phases of the RES&T indicator’s development or professionally close (in research or policy) to the topic of research indicators and metrics. Awareness of the RES&T indicator was a preferred requirement. Eleven telephone semi-structured in-depth interviews may seem numerically few; however, this pool offered relevant insights into the practices of research evaluation in the EU, and the interviewees were the relevant actors for our work.

We began coding, as suggested by Clarke (2003), as soon as data were available; this approach allowed us to focus more closely on those aspects of the research that emerged as particularly important. The accuracy of our interpretations was checked through multiple blind comparisons of the coding generated by the authors of this paper. Our codes were also often verified explicitly with the interviewees to check for potential misalignments in the representativeness of our interpretations.

RES&T indicator developers (hereafter referred to as ‘developers’). The three interviewees in this group were all in some way involved in the design and implementation of the RES&T indicator: two senior researchers and one retired researcher, all active in the areas of innovation and statistics. Given that we knew two of the interviewees before the interview, we paid particular attention to the influence of interviewer–interviewee identities at the moment of data analysis, following the recommendations of Gunasekara (2007).

Policy analysts (hereafter referred to as ‘users’). This group was composed of four senior experts who are users of research indicators. They are active as policymakers at the European Commission; all have been involved in various expert groups, and at least two of them have also published their own research.

Practitioners and scholars in fields relevant to our endeavour, concerned with science and technology indicators and active at different levels in their conceptualization, use, and construction (hereafter referred to as ‘practitioners’). This group was composed of four scholars (one senior researcher, one full professor, one department director, and one scientific journal editor-in-chief) who critically study statistical indicators.

The interviews were structured around three main axes:

  • Insights into meanings of excellence.
  • A critical overview of current metrics in general, and of the processes and backstage dynamics in the development of the RES&T indicator (where the interviewee was personally involved).
  • Reimagination of ways to assess and assure the quality of the processes of indicator development, taking stock of transformations in knowledge production, knowledge governance, and policy needs (new narratives).

All interviews, which were on average one hour long, were transcribed, and data analysis was conducted according to the principles of grounded theory, particularly as presented in Charmaz (2006). The analysis of the interviews consisted of highlighting potential common viewpoints and identifying similar themes and patterns around the topics discussed with the interviewees; these are discussed in the next sections of this article.

4. Meanings, metrics, processes, and reimagination

In this section, we attempt to make sense of the issues raised by our interviewees, summarising the main recurrent elements of the three axes at the core of the questionnaire structure: (1) meanings of ‘excellence’; (2) challenges in the backstage processes of developing and using research excellence indicators; and (3) ways to reimagine the process of indicator development so as to promote better frameworks for assessing scientific quality.

4.1 On meanings of research excellence

Many of us are persuaded that we know what we mean by excellence and would prefer not to be asked to explain. We act as though we believe that close inspection of the idea of excellence will cause it to dissolve into nothing. (Merton 1973: 422)

Our starting question to all interviewees was: ‘please, define research excellence’. Our interviewees found themselves rather unprepared for this question, which could suggest either that the expression is taken for granted and not in need of reflection or, as the literature review shows, that no shared definition exists, a point with which, in the end, our interviewees largely agreed. Such unpreparedness seems somewhat paradoxical: it implies an assumption that the definition of excellence is stable and in no need of reflection, whereas our interviewees’ responses suggest rather the contrary. Excellence is referred to as ‘hard to define’, ‘complex’, ‘ambiguous’, ‘dynamic’, ‘dangerous’, and ‘tricky’, as well as a ‘contextual’ and ‘actor-dependent’ notion. The quotes below reflect different vantage points, indicating some agreement on its multidimensional, contested, distributed, situated, and contextual nature:

[…] this is a dangerous concept, because you have different starting positions. Developer 3

Clearly, excellence is multi-dimensional. Secondly, excellence ought to be considered in dynamic terms, and therefore excellence is also dynamics, movement and progress which can be labelled excellent. Third, excellence is not a natural notion in the absolute, but it is relative to objectives. Therefore, we immediately enter into a more complex notion of excellence I would say, which of course the policy makers do not like because it is more complicated. Developer 2

[…] you need to see the concept of research excellence from different perspectives. For universities it might mean one thing, for a private company it might mean something completely different. Developer 1

You could say that excellence is an emergent property of the system and not so much an individual attribute of particular people or groups. Stakeholder 1

The quotes suggest agreement among the interviewees that research excellence is a multidimensional, complex, and value-laden concept, which links well with the notion of the essentially contested concept introduced earlier. While some experts simply think of highly cited publications as the main ingredient for a quantification of excellence, others tend to problematize the notion of excellence once they are invited to reflect carefully upon it, moving away from the initial official viewpoint. Indeed, the lack of consensus about the meanings of excellence was highlighted by different interviewees and, not surprisingly, seems to be a rather important issue at the level of institutional users and developers, who described it as an unavoidable limitation. For example:

It is extremely difficult to have a consensus and [therefore] it is impossible to have a perfect indicator. User 2

I do see that there was no clear understanding of the concept [of excellence] [since] the Lisbon agenda. This partly explains why [a] high level panel was brought together [by DG RTD], [whose] task was to define it and they gave a very broad definition, but I would not identify it as the Commission’s view. Developer 3

The way users and developers responded to this lack of consensus seems to differ, though. Developers, on the one hand, do not seem to take any definition of research excellence for granted. It seems that, as a way out of the idea that research excellence constitutes an essentially contested concept, developers stick to a rather abstract notion of research excellence: specific dimensions, aggregation methods, and weights are not spelled out in detail. For example, when asked to define excellence, one developer responded:

I would say there is a natural or obvious standard language meaning, which is being in the first ranks of competition. Excellence is coming first. Now, we know that such a simple definition is not very relevant [for the task of indicators making]. Developer 2

The more concrete, and perhaps more relevant, decisions are therewith avoided, as it is immediately acknowledged that research excellence constitutes an essentially contested concept. Users, on the other hand, seem to take established definitions for granted much more easily. Here, one interviewee simply referred to the legal basis of Horizon 2020 in defining excellence:

I think I would stick to the definition of the legal basis: what is important is that the research that is funded is the top research. How is this defined? In general, it is what is talented, looking for future solutions, preparing for the next generation of science and technology, being able to make breakthroughs in society. 1 User 3

What both developers and users share is their insistence on the need for quantification of research excellence, albeit for different reasons. From the user perspective, the call for a research excellence indicator seems to be grounded in a desire for evidence-based policy (EBP) making.

To our question of whether excellence is the right concept for assessing quality in science, interviewees responded that the call for EBP at all costs surely plays a fundamental role in the mechanisms promoting excellence measures and, therefrom, indicator development:

There is a huge debate on what the real impact of that is in investment and we need to have a more scientific and evidence-based approach to measure this impact, both to justify the expense and the impact of reform, but also to better allocate spending. User 2

Notwithstanding the difficulty involved in operationalizing a notion of excellence towards indicators, what comes to the fore is that no single agreed-upon solution is to be expected from academia when it comes to defining excellence for quantification purposes. This seems to be acknowledged by one of the developers, who commented on the composition of the high-level expert panel:

You have a bunch of researchers who have a very different understanding of what research excellence would be, and some were selected for this high level panel. I am not aware of any reasoning why specific researchers or professors were selected while others were not. I am sure that if there was a different group, there would have been a different outcome, but this is a tricky thing. Developer 3

Such considerations seem to confirm that the processes behind indicator development, such as the involvement of certain academic communities, potentially influence further conceptualisations of research excellence. These aspects are discussed in the last section of this article.

4.2 Metrics and processes of research excellence

… the whole indicator activity is a social process, a socio-political process; it is not just a technical process. Developer 2

Indicators and metrics respond and correspond to social and political needs and are not mere technical processes, and this is made visible by different types of tensions identified by our interviewees.

First, the process of quantifying research excellence requires an agreement on its definition, yet any definition is neither definitive nor stable, not least because of its contextual dependencies. The above section showed that what needs to be quantified is substantially contested. However, our interviews show that other, at least equally contested, dimensions exist: methodological (quantification practices), social (the actors involved), and normative (scientific and policy practices).

In the remainder of this section, we explore through our interviews the production of indicators vis-à-vis their processes and outcomes.

4.2.1 Normativity: who is involved in the design of an indicator?

Indicators clearly require political choices to be made. What needs to be quantified, and who decides, remains an important question. Normativity aspects always point back to issues of definition, the social actors of concern, and institutional dependencies.

The observation of one of the practitioners resonates with Jack Stilgoe’s provocation that ‘excellence tells us nothing about how important the science is and everything about who decides’. 2

Who decides what excellence is? Is it policy makers, is it the citizen, is it the researchers themselves? Is it absolute or does it depend on the conditions? Does it depend on the discipline? Does it depend on the kind of institution concerned? You see what I mean. Developer 2

A practitioner suggests that the level of satisfaction, and therefore acceptance, of an indicator is primarily defined by its usage:

Who will decide when an indicator is good enough and to what extent? […] The short answer is the users, the people out there who use indicators and also whose careers are affected by indicators, they decide whether it’s good enough. Practitioner 3

These quotes raise different questions related to what we here call ‘normativity’ and to ideas of co-production, both in terms of indicator development and usage: first, what are the power relations between the actors involved, and how can they influence the processes behind indicators? Second, to what extent can these kinds of quantification be deemed unsatisfactory and, ultimately, rejected, and by whom? Third, in the idiom of co-production, how do research excellence metrics influence research practices, both in mainstream knowledge production systems and in other emerging systems of knowledge production (namely what is designated ‘DIY science’, ‘citizen science’, ‘the maker movement’, etc.)?

4.2.2 Inescapable simplifications?

Simplification seems to be an inescapable avenue in any attempt to represent complex concepts with just one number; as it implies the inclusion and exclusion of dimensions, it begs the question of responsibility and accountability. At the end of the day, complex systems of knowledge production are evaluated through very limited information. Although we do not want to expand this discussion here, it is important to point out that when these scientific tools are used in realms that will have major implications for systems of knowledge production and governance, the ‘who’ and ‘to whom’ necessarily need careful consideration.

At some point, you need to reduce the complexity of reality. Otherwise you cannot move on. We tend to be in favour of something. The problem is that we have far too many possibilities for indicators […]. In general, we need to take decisions on the basis of a limited amount of information. Practitioner 4

What is the limitation of the index? These were the main issues and dimensions that it was not able to address. I do not know what the most problematic things were. I have seen many questions, which address for instance the choice of a certain indicator, data or denominator and the exclusion or inclusion of an index. I do not know which ones were more important than the others. We ran a number of sensitivity tests, which showed that some of the choices had a more substantial impact on country rankings. You could put the ranks upside down. Developer 3

Different interviewees deem that quantification practices ought to be robust to deviations arising from different theoretical assumptions, that is, when specific variables, time periods, weights, and aggregation schemes are varied.
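
A toy numerical example of the aggregation-scheme point (invented numbers, unrelated to the RES&T data): the same normalised scores can rank two countries in opposite orders under arithmetic versus geometric averaging, because the geometric mean penalises unbalanced profiles.

```python
from math import prod

# Two hypothetical countries, normalised scores on three indicators.
x = {"D": [0.9, 0.9, 0.1],   # strong overall but very weak on one dimension
     "E": [0.6, 0.6, 0.6]}   # uniformly moderate

arithmetic = {c: sum(v) / len(v) for c, v in x.items()}
geometric = {c: prod(v) ** (1 / len(v)) for c, v in x.items()}

print(arithmetic)  # D ~ 0.63 beats E = 0.60
print(geometric)   # E = 0.60 beats D ~ 0.43: the ranking flips
```

Which scheme is ‘right’ is not a statistical question but a normative one: it amounts to deciding how much compensability between dimensions the notion of excellence should allow.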

With regard to the RES&T, one user pointed to purposeful correction as an issue of major concern in the quantification of research excellence:

Part of [the] alignment [of the RES&T indicator] led to counter-intuitive results, like a very low performance from Denmark, and then we reinstated the previous definition because it led to slightly better results. The definition of results is also important for political acceptance. User 4

As reported in the literature, and as also emerged throughout our interviews, excellence has been contested as the relevant policy concept for tackling the major challenges of measuring the impacts of science in society. The two quotes below stress the importance of aiming for indicators that go beyond mere scientific outputs, suggesting that frameworks of assessment should also encompass other critical issues related to process (e.g. research ethics):

It is OK to measure publications, but not just the number. For instance, also how a certain topic has been developed in a specific domain or has been taken into a wider issue, or any more specific issues, these needs to be tracked as well. User 3

Let’s imagine two research groups: one does not do animal testing, and obtain mediocre results, the other does animal testing and have better results and more publications. How those two very different ethical approaches can be accounted? We should correct excellence by ethics! Developer 1

Our material illustrates several facets of different types of reductionism: first, the loss of multidimensionality as an inevitable consequence of reducing complexity; second, the rankings following from indicators sometimes work as drivers and specifications for the production of the indicators themselves; finally, volatility in the results is expected to become an issue of major concern, specifically in ever-changing systems of knowledge production (see e.g. Hessels and van Lente 2008).

4.2.3 Backstage negotiations

Indicators largely depend on negotiations among actors seeking to implement their own visions and interests. On such a view, research indicators redefine reputation and prioritise funding. This process was depicted as an embedded and inevitable dynamic within indicator production:

[When developing an indicator] you will always have a negotiation process. You will always discuss ‘what you will do in this case’; ‘you would not include that’ or ‘you would do that’; ‘this does not cover it all’. You will always have imperfect and to a certain extent wrong data in whatever indicator you will have. User 1

[Developers] mainly do methodological work. The political decisions on the indicator are taken a bit higher up. Practitioner 3

Many politicians have a very poor view of what actually goes into knowledge production. This is what we have experienced in Europe, The Netherlands and the UK. Give me one number and one A4 with a half page summary and I can take decisions. We need to have some condensation and summarisation, and you cannot expect politicians to deal with all the complexities. At the same time, they must be aware that too poor a view of what knowledge production is, kills the chicken that lays the eggs. Practitioner 1

These quotes seem to suggest that there are ‘clear’ separate roles for those who participate in the production of the indicator and those who are empowered to decide what the final product may look like. In the case of the development of the RES&T indicator, the process of revision and validation of the indicator included a workshop organised by EC policymakers, in which developers and academics were invited to review the indicator’s proposed theoretical framework. The feasibility study published by Barré et al. (2011) was the main input to this workshop; one of the developers we interviewed remarked:

I find it interesting that [at the workshop] also policymakers had their own idea of what it [the indicator] should be. Developer 3

In other words, even if roles seem rather well defined, at the end of the day indicators respond to predefined political requests. On the other hand, it is interesting to note how this workshop worked as a space for clarifying positions and establishing what the relevant expertise is.

Workshops are interesting in showing the controversies, and even if that is not the case for all indicators, the excellence one has gone through a certain level of peer review, revision and criticism. Even when you want to have an open underpinning, as a commissioning policy body, you’re in a difficult position: how do you select experts? User 2

Although the aim was reviewing and validating, people came up with another set of variables [different from the one proposed by the EG] that should have been taken into consideration. People make a difference and that is clear. Developer 3

Hence, these quotes seem to suggest that indicators are based on selected ‘facts’ from the selected ‘experts’ who are called upon to perform the exercise. The call for evidence-based policy needs to acknowledge this context and carefully examine ‘factual’ promises that cannot be fulfilled and that put unnecessary pressure on developers as well:

You have to understand, we had to consider the request…. They [DG RTD] just wanted a ranking of member states in terms of this kind of excellence concept. This is what they want; this is what we had to deliver within the project. Developer 1

We found two elements intrinsic to the negotiation processes behind indicator development: first, different actors (developers vs. policymakers) move in different arenas (academic vs. political) and are moved by different interests; second, power relationships set what needs to be measured, which makes indicators little more than political devices, coherent with a performative function.

4.3 Reimagining what?

Our interviewees explored practical strategies to deal with the policy need for research quality assessments. As researchers, we had assumed that, given the many controversies and the expressed discontent, there would be many ideas about novel ways to look into the quality of research. Yet our empirical material shows that there are no clear alternative proposals either for measuring ‘excellent research’ or for enhancing the robustness of indicators, beyond small variations. As emerged frequently throughout almost all the interviews, many actors highlighted the necessity of carefully interrogating the very use of excellence as the right proxy for research quality, as in this quote:

The norm of quality in research that you consider valid and others might not consider valid needs to be discussed as well. A debate is possible and is fundamental within indicators. Developer 2

Despite the different positions on the controversial underpinnings of research excellence, widely discussed by the majority of interviewees in each of the three categories, none offered more than slight or indirect suggestions on how to go beyond the issue of quantification of research quality for policy purposes:

When you have evidence based policy, unfortunately, at the moment, almost the only thing that counts is quantitative data. Case studies and evaluation studies are only strong if they have quantitative data. Then you will get into indicators and it is very difficult to get away from it. User 1

This observation summarises an inevitable commitment to quantification: when asked about research excellence, different actors tend to digress around specific implementations and their implications but do not strongly question the overall scope of the indicator as a means to map or ascertain scientific quality. Rather, quantifications fit the policy (and political) purpose they are meant to support, as suggested in this honest account by one user:

I think the reasoning is very simple. If we want an indicator that fits its purpose, which are political purposes , for policy makers and objective measures, we need to be very clear on what we measure and, as you say, to have the best matching and mismatching between purpose and reality. I think that is the first question. Then we have to deal with the nitty gritty and see how, sorry, nitty gritty is important, whether we can improve statistically what we have. User 2

Hence, in our interviews the narrative of the inevitable ‘need for quantification’ persisted despite the recognition of its inherent limitations and misrepresentations. Interviewees focused on the possibility of improving indicators’ resonance with quality research, avoiding oversimplifications and limiting possible unwanted implications. The quote below suggests that the known imperfections of indicators can actually help with raising questions; we therefore suggest that indicators could be viewed as prompts to enquire further, not as answering devices:

The point is that to take into account the fact that an indicator will never satisfy the totality of the issues concerned, my view is that an indicator is fine when it is built carefully, as long as it is then used not only to provide answers but to raise questions . […] for example, the indicator we are talking about is fine only as long as it goes along with the discussion of what it does not cover, of what it may hide or not consider with sufficient attention; or in what sense other types of institution or objectives can be taken into account. Developer 2

Along these lines, allowing for frequent (re)adjustments of the evaluation exercises and practices that sustain research indicators is seen as a major improvement:

I am more interested in making sure that as someone involved in composite indicator development, I get the chance to revisit regularly an index which was developed. I can look around and have some kind of conceptual, methodological or statistical review, or see if it is reflecting the ongoing discussions. I can do this if I have the freedom to choose my research. This is not necessarily the case in settings where research is very politically or policy driven. Developer 1

The issue of data availability is quite relevant, not only because it bears on the quality of the indicators built, but, more interestingly, because existing datasets determine what can be measured and ultimately give shape to the indicator itself, which is a normative issue tout court:

Many researchers or many users criticize existing indicators and say they are too conservative. [While they are] always criticized, it is difficult to come with new metrics and the traditional ones are very well grounded in statistics. We have a very good database on data metrics and patents, therefore these databases have some gravitational attraction, and people always go back to them. An indicator needs to be based on existing data. These data has to be acknowledged and there needs to be some experience of them and a bit of time lag between the coverage of new developments by data and then the use for developing indicators. User 4

Finally, excellence does not necessarily need to be a comparative concept; indeed, comparisons ultimately rely on a fair amount of de-contextualisation, which implies overlooking scientific (foremost disciplinary) differences of an epistemic, ontological, and practical nature. This is recognised by many of our interviewees:

[Excellence] it is not so useful for comparing EU performance to non-European countries, to US and Japan, because they do not have the same components. They do not have ERC grants, for example! User 4

My suspicion is that [excellence] also depends on the discipline! Practitioner 2

Our quest for reimagination stayed mostly confined to discussing the processes of indicator development, with interviewees largely sharing stances on the apparent inevitability of quantifying research excellence for policy purposes. In fact, we were somewhat disappointed that the discussion of other ways to describe and map quality in science did not produce substantial alternatives. However, a few points were raised as central to strengthening the robustness of existing indicators: first, evaluation exercises that deploy research indicators should be checked frequently and fine-tuned if necessary; second, what can be evaluated should not be constrained by existing datasets, and other sources of information should be sought, created, and imagined. In any case, the available sources of information are not sufficient when one considers the changing nature of current knowledge production and governance modes, which today involve a wider range of societal actors and practices (e.g. knowledge production systems appearing outside mainstream institutions).

In this article, we explored the making of a particular ‘research excellence’ indicator, starting from its underlying concept and institutional framing. Below, we summarise our findings in five main points. Together, these may constitute points of departure for future debates around alternative evaluation framings, descriptors, and frameworks to describe and map the quality of scientific research in the EU.

5.1 Research excellence: contested concept or misnomer?

Early in this article, we advanced the idea of excellence as an essentially contested concept, as articulated by Gallie (1955). Our interviews concur with the general idea that the definition of such a concept is not stable and that there are genuine difficulties (and actual unpreparedness) among interviewees even in coming up with a working definition of ‘research excellence’. In most cases, interviewees agreed that research excellence is a multidimensional, complex, and value-laden concept whose quantification is likely to end in controversy. ‘Institutionalised’ definitions, which may not necessarily have been the subject of thorough reflection, were often given by our interviewees; they repeatedly remarked that each definition depends very much on the actors involved in developing indicators. Would a more extended debate about the meanings and usefulness of the concept for assessing and comparing scientific research quality therefore help to address some of the current discussions?

5.2 Inescapability of quantification?

The majority of our interviewees had a hard time imagining assessment of research that does not rely on quantification. Yet, whether or not to quantify research excellence for policy purposes does not seem to be the question; the issue rather revolves around what really needs to be quantified. Is the number of published papers really an indication of excellence? Does citing a paper really imply that it has actually been read? As with classifications (Bowker and Star 1999), indicators of research excellence are hard to live both with and without. The question is how to make life with indicators acceptable while recognising their fallibility. Once we recognise that quantifying research excellence requires choices to be made, an important reflection becomes which values and interests those choices serve, and which they neglect. We would argue that quantifying research excellence is first and foremost a political and normative issue, and as such, Merton’s (1973) pertinent question, ‘who is to judge on research excellence?’, remains.

The need for quantification is encouraged by, and responds to, the trend of evidence-based policy. After all, this is a legacy of the ‘modern’ paradigm of policy making, which needs to be based on scientific evidence that, in turn, needs to be delivered in numbers. However, as Boden and Epstein (2006) remarked, we might instead be in a situation of ‘policy-based evidence’, where scientific research is assessed and governed to meet policy imaginaries of scientific activity (e.g. a focus on outcomes such as the number of publications, ‘one size fits all’ approaches to quantification across different scientific fields, etc.). The question then remains: can ideas of qualifying quantifications be developed in this case as well?

5.3 The lamppost

In Mulla Nasruddin’s story, a drunken man searches for his lost keys under the lamppost, because that is where the light is. Some of the interviewees suggested that existing data are the bottleneck for quantification. In other words, data availability influences what can be quantified: only those parameters for which considerable data already exist, that is, those that are easy to count, seem to be taken into account. We argue that this kind of a priori limitation needs to be reflected upon, not least because knowledge production, and the ways in which researchers make their work visible to the public, are not confined to academic formats alone. Moreover, if one considers the processes by which scientific endeavour actually develops, then we might really need to look outside the lamppost’s circle of light. Can we afford to describe and assess ‘excellent research’ relying exclusively on parameters for which data are already available?

5.4 Drawing hands

In an introductory piece on the co-production idiom, Jasanoff (2004: 2) says that ‘the ways in which we know and represent the world are inseparable from the ways we choose to live in it’. We concur with the idea that the construction of indicators is a sociopolitical practice. From such a perspective, it becomes clear that knowledge production practices are in turn conditioned by the practices used to assess knowledge production, exactly as depicted in M. C. Escher’s piece Drawing Hands. In other words, however (research excellence) indicators are constructed, their normative nature contributes to redefining scientific practices. We suggest that the construction of an indicator is a process in which both the concept (research excellence) and its measurements are mutually defined and co-produced. If an indicator redefines reputation and eligibility for funding, researchers will necessarily adapt their conduct to meet such pre-established standards. However, this understanding is not shared by all interviewees, which suggests that future practice needs to raise awareness of the normativity inherent in the use of indicators.

5.5 One size does not fit all

Indicators necessarily de-contextualise information. Many of our interviewees suggested that other types of information would need to be captured by research indicators; to us, this casts doubt on the appropriateness of using indicators alone as the relevant devices for assessing research for policy design. What do such indicators tell us about scientific practices across fields, countries, and institutions? The assumption that citation and publication practices are homogeneous across different specialties and fields of science has previously been shown to be problematic (Leydesdorff 2008), and it is specifically within the policy context that indicators need to be discussed (see e.g. Moed et al. 2004).
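One established remedy for this heterogeneity is field normalisation of citation counts, and it is easy to illustrate why it matters. In the sketch below, each paper’s citations are divided by the mean citation count of its own field; the data are hypothetical, and production-grade schemes of the kind discussed by Leydesdorff (2008) also normalise by publication year and document type.

```python
# A minimal sketch of field-normalised citation scoring. The papers and
# counts are hypothetical; real schemes also control for year and
# document type.
from collections import defaultdict

papers = [
    {"field": "mathematics", "citations": 6},
    {"field": "mathematics", "citations": 2},
    {"field": "cell biology", "citations": 60},
    {"field": "cell biology", "citations": 20},
]

# Mean citations per field: the reference value for normalisation.
totals = defaultdict(lambda: [0, 0])  # field -> [citation sum, paper count]
for p in papers:
    totals[p["field"]][0] += p["citations"]
    totals[p["field"]][1] += 1
field_mean = {f: s / n for f, (s, n) in totals.items()}

for p in papers:
    p["normalised"] = p["citations"] / field_mean[p["field"]]

# The 6-citation mathematics paper now scores 1.5, the same as the
# 60-citation cell-biology paper; raw counts would have ranked them an
# order of magnitude apart.
print([(p["field"], round(p["normalised"], 2)) for p in papers])
```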

The STS literature offers examples of cultures of scientific practice that warn us that indicators alone cannot be used to sustain policies, though they certainly are very useful for asking questions.

Nowotny (2007) and Paradeise and Thoenig (2013) argued that, like many other economic indicators, ‘research excellence’ is promoted at the EU level as a ‘soft’ policy tool (i.e. it relies on benchmarks to compel Member States to meet agreed obligations). But the implied measurements and comparisons ‘at all costs’ cannot be considered ‘soft’ at all: they inevitably trigger unforeseen and indirect incentives to pursue a specific kind of excellence (see e.g. Martin 2011), often based on varied, synthetic, and implicit evaluations. In the interviews, we were told stories about the purposeful retuning of indicators because some countries did not perform as expected when variations to the original indicators were introduced.

If going beyond quantification eventually turns out not to be an option at all, we should at least aim for more transparency in the ‘participatory’ processes behind the construction of indicators. In Innes’ words: ‘the most influential, valid, and reliable social indicators are constructed not just through the efforts of technicians, but also through the vision and understanding of the other participants in the policy process. Influential indicators reflect socially shared meanings and policy purposes, as well as respected technical methodology’ (Innes 1990).

This work departed from the idea that the concept of research excellence is hard to institutionalise in the form of stable research excellence indicators, because it inevitably involves endless disputes about its usage. We therefore expected to find alternatives offered by other imaginaries and transformative ideas that could sustain potential changes. To test these ideas, we examined the development and quantification of the RES&T indicator, highlighting that this indicator is developed in a context in which it simultaneously responds to and normalises both scientific practice and policy expectations. We also explored the difficulties of measuring a concept (research excellence) that lacks agreed meanings. The in-depth interviews conducted with relevant actors involved in the development of the RES&T indicator suggest that, while respondents widely acknowledge intrinsic controversies in the concept and its measurement, and are willing to discuss alternatives (what we called ‘re-imagination’), they did not find it easy to imagine alternative ways of addressing research quality for policy purposes. Quantification is hard-wired into the practices and tools used to assess and assure the quality of scientific research, further reinforced by the current blind, at-all-costs call for quantified evidence-based policy to be applied in twenty-eight different EU Member States. However, suggestions were made to make reimagination a continuous stage of the process of developing excellence assessments, which reminds us of Barré’s agora model (Barré 2004).

To conclude, more than a contested concept, our research led us to wonder whether ‘research excellence’ might be a misnomer for assessing the quality of scientific research in a world where processes, and not only outcomes, are increasingly subject to ethical and societal scrutiny. And what is the significance of excellence indicators when scientific research is a distributed endeavour involving different actors and institutions, often even outside mainstream circles?

Conflict of interest statement. The views expressed in the article are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission or Rabobank.

The authors would like to thank the interviewees for their contributions, as well as the two anonymous reviewers for their comments and suggestions, and the participants of the workshop on “Excellence Policies in Science” held in Leiden in 2016. The views expressed in the article are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission.

Barré R. (2001) ‘Sense and Nonsense of S&T Productivity Indicators’, Science and Public Policy, 28/4: 259–66.

Barré R. (2004) ‘S&T Indicators for Policy Making in a Changing Science–Society Relationship’, in Moed H. F., Glänzel W., Schmoch U. (eds) Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems, pp. 115–31. Dordrecht: Springer.

Barré R. (2010) ‘Towards Socially Robust S&T Indicators: Indicators as Debatable Devices, Enabling Collective Learning’, Research Evaluation, 19/3: 227–31.

Barré R., Hollanders H., Salter A. (2011) Indicators of Research Excellence. Expert Group on the Measurement of Innovation.

Benett K. (2016) ‘Universities are Becoming Like Mechanical Nightingales’, Times Higher Education. <https://www.timeshighereducation.com/blog/universities-are-becoming-mechanical-nightingales>

Boden R., Epstein D. (2006) ‘Managing the Research Imagination? Globalisation and Research in Higher Education’, Globalisation, Societies and Education, 4/2: 223–36.

Boulanger P.-M. (2014) Elements for a Comprehensive Assessment of Public Indicators. JRC Scientific and Policy Reports. Luxembourg: Publications Office of the European Union.

Bowker G. C., Star S. L. (1999) Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.

Butler L. (2003a) ‘Explaining Australia’s Increased Share of ISI Publications—the Effects of a Funding Formula Based on Publication Counts’, Research Policy, 32: 143–55.

Charmaz K. (2006) Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. London: Sage.

Clarke A. E. (2003) ‘Situational Analyses’, Symbolic Interaction, 26/4: 553–76.

Collier D., Hidalgo F. D., Maciuceanu A. O. (2006) ‘Essentially Contested Concepts: Debates and Applications’, Journal of Political Ideologies, 11: 211–46.

Desrosières A. (2015) ‘Retroaction: How Indicators Feed Back onto Quantified Actors’, in Rottenburg R. et al. (eds) The World of Indicators: The Making of Governmental Knowledge through Quantification. Cambridge: Cambridge University Press.

Elzinga A. (2012) ‘Features of the Current Science Policy Regime: Viewed in Historical Perspective’, Science and Public Policy, 39/4: 416–28.

European Commission (2014) Innovation Union Competitiveness Report 2013—Commission Staff Working Document, Directorate-General for Research and Innovation. Luxembourg: Publications Office of the European Union.

Espeland W. (2015) ‘Narrating Numbers’, in Rottenburg R. et al. (eds) The World of Indicators: The Making of Governmental Knowledge through Quantification. Cambridge: Cambridge University Press.

Funtowicz S. O., Ravetz J. R. (1990) Uncertainty and Quality in Science for Policy. Dordrecht: Kluwer Academic Publishers.

Gallie W. B. (1955) ‘Essentially Contested Concepts’, Proceedings of the Aristotelian Society, 56: 167–98.

Gieryn T. F. (1983) ‘Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists’, American Sociological Review, 48/6: 781–95.

Grupp H., Mogee M. (2004) ‘Indicators for National Science and Technology Policy: How Robust are Composite Indicators?’, Research Policy, 33: 1373–84.

Grupp H., Schubert T. (2010) ‘Review and New Evidence on Composite Innovation Indicators for Evaluating National Performance’, Research Policy, 39/1: 67–78.

Gunasekara C. (2007) ‘Pivoting the Centre: Reflections on Undertaking Qualitative Interviewing in Academia’, Qualitative Research, 7: 461–75.

Hardeman S. (2013) ‘Organization Level Research in Scientometrics: A Plea for an Explicit Pragmatic Approach’, Scientometrics, 94/3: 1175–94.

Hardeman S., Van Roy V., Vertesy D. (2013) An Analysis of National Research Systems (I): A Composite Indicator for Scientific and Technological Research Excellence. JRC Scientific and Policy Reports. Luxembourg: Publications Office of the European Union.

Hessels L. K., van Lente H. (2008) ‘Re-thinking New Knowledge Production: A Literature Review and a Research Agenda’, Research Policy, 37/4: 740–60.

Hicks D. (2012) ‘Performance-based University Research Funding Systems’, Research Policy, 41/2: 251–61.

Innes J. E. (1990) Knowledge and Public Policy: The Search for Meaningful Indicators. New Brunswick, NJ and London: Transaction Publishers.

Jasanoff S. (ed.) (2004) States of Knowledge: The Co-Production of Science and the Social Order. London: Routledge.

Lamont M. (2009) How Professors Think: Inside the Curious World of Academic Judgment. Cambridge, MA: Harvard University Press.

Leydesdorff L. (2008) ‘Caveats for the Use of Citation Indicators in Research and Journal Evaluations’, Journal of the American Society for Information Science and Technology, 59/2: 278–87.

Martin B. R. (2011) ‘The Research Excellence Framework and the “Impact Agenda”: Are We Creating a Frankenstein Monster?’, Research Evaluation, 20/3: 247–54.

Martin B. R. (2013) ‘Whither Research Integrity? Plagiarism, Self-Plagiarism and Coercive Citation in an Age of Research Assessment’, Research Policy, 42/5: 1005–14.

Merton R. K. (1973) ‘Recognition and Excellence: Instructive Ambiguities’, in Merton R. K. (ed.) The Sociology of Science. Chicago: University of Chicago Press.

Moed H. F., Glänzel W., Schmoch U. (eds) (2004) Handbook of Quantitative Science and Technology Research: The Use of Publication and Patent Statistics in Studies of S&T Systems. Dordrecht: Kluwer.

Nowotny H. (2007) ‘How Many Policy Rooms Are There? Evidence-Based and Other Kinds of Science Policies’, Science, Technology & Human Values, 32/4: 479–90.

Paradeise C., Thoenig J.-C. (2013) ‘Academic Institutions in Search of Quality: Local Orders and Global Standards’, Organization Studies, 34/2: 189–218.

Polanyi M. (2000) ‘The Republic of Science: Its Political and Economic Theory’, Minerva, 38/1: 1–21.

Porter T. M. (2015) ‘The Flight of the Indicator’, in Rottenburg R. et al. (eds) The World of Indicators: The Making of Governmental Knowledge through Quantification. Cambridge: Cambridge University Press.

Rafols I., Leydesdorff L., O’Hare A. et al. (2012) ‘How Journal Rankings Can Suppress Interdisciplinary Research: A Comparison Between Innovation Studies and Business & Management’, Research Policy, 41/7: 1262–82.

Saltelli A., D’Hombres B., Jesinghaus J. et al. (2011) ‘Indicators for European Union Policies: Business as Usual?’, Social Indicators Research, 102/2: 197–207.

Saltelli A., Funtowicz S. (2017) ‘What is Science’s Crisis Really About?’, Futures, in press. <http://www.sciencedirect.com/science/article/pii/S0016328717301969> accessed July 2017.

Sarewitz D. (2015) ‘Reproducibility Will Not Cure What Ails Science’, Nature, 525: 159.

Smith A. F. (1996) ‘Mad Cows and Ecstasy: Chance and Choice in an Evidence-Based Society’, Journal of the Royal Statistical Society, Series A, 159: 367–84.

Sørensen M. P., Bloch C., Young M. (2015) ‘Excellence in the Knowledge-Based Economy: From Scientific to Research Excellence’, European Journal of Higher Education, 1–21.

Stilgoe J. (2014) ‘Against Excellence’, The Guardian, 19 December 2014.

Tijssen R. J. (2003) ‘Scoreboards of Research Excellence’, Research Evaluation, 12/2: 91–103.

Vertesy D., Tarantola S. (2012) Composite Indicators of Research Excellence. JRC Scientific and Policy Reports. Luxembourg: Publications Office of the European Union.

Weinberg A. M. (2000) ‘Criteria for Scientific Choice’, Minerva, 38/3: 253–66.

Weingart P. (2005) ‘Impact of Bibliometrics upon the Science System: Inadvertent Consequences?’, Scientometrics, 62/1: 117–31.

Wilsdon J. (2015) ‘We Need a Measured Approach to Metrics’, Nature, 523/7559: 129.

The official Horizon 2020 document states that research excellence is meant to “[…] ensure a steady stream of world-class research to secure Europe’s long-term competitiveness. It will support the best ideas, develop talent within Europe, provide researchers with access to priority research infrastructure, and make Europe an attractive location for the world’s best researchers” (European Commission 2011: 4).

From “Against Excellence”, The Guardian, 19 December 2014. Retrieved from <https://www.theguardian.com/science/political-science/2014/dec/19/against-excellence>.

New horizons for future research – Critical issues to consider for maximizing research excellence and impact

Wolfgang Langhans

1 Physiology and Behavior Laboratory, ETH Zurich, Schorenstr. 16, 8603, Schwerzenbach, Switzerland

2 Brain Center Rudolf Magnus, Dept. of Translational Neuroscience, University Medical Center Utrecht, Utrecht University, Utrecht, 3584, CG, The Netherlands

3 Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Sweden

Myrtha Arnold

William A. Banks

4 Geriatric Research Education and Clinical Center, Veterans Affairs Puget Sound Health Care System, Seattle, WA, USA

5 Division of Gerontology and Geriatric Medicine, Department of Medicine, University of Washington School of Medicine, Seattle, WA, USA

J. Patrick Card

6 Department of Neuroscience, University of Pittsburgh, Pittsburgh, PA, 15260, USA

Megan J. Dailey

7 Department of Animal Sciences, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA

Derek Daniels

8 Behavioral Neuroscience Program, Department of Psychology, State University of New York at Buffalo, Buffalo, NY 14260, USA

Annette D. de Kloet

9 Department of Physiology and Functional Genomics, College of Medicine, University of Florida, Gainesville, FL, 32611, USA

Guillaume de Lartigue

10 The John B. Pierce Laboratory, New Haven, CT, 06519, USA

11 Department of Cellular and Molecular Physiology, Yale Medical School, New Haven, CT, 06519, USA

Suzanne Dickson

12 Dept Physiology/Endocrine, Institute of Neuroscience and Physiology, The Sahlgrenska Academy at the University of Gothenburg, Medicinaregatan 11, SE-405 30, Gothenburg, Sweden

Shahana Fedele

Harvey J. Grill

13 Lynch Laboratories University of Pennsylvania, Philadelphia, PA, 19104, USA

John-Olov Jansson

Sharon Kaufman

Grant Kolar

14 Pathology, Saint Louis University School of Medicine, St. Louis, MO, 63104, USA

Eric Krause

15 Department of Pharmacodynamics, College of Pharmacy, University of Florida, 32611, USA

Shin J. Lee

Christelle Le Foll

16 Institute of Veterinary Physiology, University of Zurich, Winterthurerstrasse 260, CH 8057, Zurich, Switzerland

Barry E. Levin

17 Department of Neurology, Rutgers, New Jersey Medical School, Newark, NJ, 07103, USA

Thomas A. Lutz

Abdelhak Mansouri

Timothy H. Moran

18 Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA

Gustavo Pacheco-López

19 Metropolitan Autonomous University (UAM), Campus Lerma, Health Sciences Department, Lerma, Edo Mex, 52005, Mexico

Deepti Ramachandran

Helen raybould.

20 Dept. of Anatomy, Physiology and Cell Biology, UC Davis School of Veterinary Medicine, Davis, CA, 95616, USA

Linda Rinaman

21 Florida State University, Dept. of Psychology, Tallahassee, FL, 32303, USA

Willis K. Samson

22 Pharmacology and Physiology, Saint Louis University School of Medicine, St. Louis, MO, 63104, USA

Graciela Sanchez-Watts

23 The Department of Biological Sciences, USC Dornsife College of Letters, Arts & Sciences, University of Southern California, Los Angeles, CA 90089, USA

Randy J. Seeley

24 Departments of Surgery, Internal Medicine and Nutritional Science, University of Michigan, Ann Arbor, MI 48109, USA

Karolina P. Skibicka

25 Department of Physiology/Metabolic Physiology, Institute of Neuroscience and Physiology, The Sahlgrenska Academy at the University of Gothenburg, SE-405 30 Gothenburg, Sweden

26 Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Sweden

27 Yale University School of Medicine, The Modern Diet and Physiology Research Center, New Haven, CT 06511, USA

Alan C. Spector

28 Department of Psychology and Program in Neuroscience, Florida State University, Tallahassee, FL, 32306, USA

Kellie L. Tamashiro

Brian Templeton

29 Midwest Community Fundraising, Inc., Cincinnati, OH, 45223, USA

Stefan Trapp

30 Centre for Cardiovascular and Metabolic Neuroscience; Department of Neuroscience, Physiology & Pharmacology, UCL, London WC1E 6BT, UK

Patrick Tso

31 Department of Pathology and Laboratory Medicine, University of Cincinnati College of Medicine, Cincinnati, OH, 45237, USA

Alan G. Watts

Nadja Weissfeld

Diana Williams

Christian Wolfrum

32 Translational Nutrition Biology Laboratory, ETH Zurich, 8603, Schwerzenbach, Switzerland

Gina Yosten

Stephen C. Woods

33 Department of Psychiatry and Behavioral Neuroscience, University of Cincinnati School of Medicine, Cincinnati, OH, 45237, USA

1. Prologue

We live in an era in which the pace of research and the obligation to integrate new discoveries into a field's conceptual framework are rapidly increasing. At the same time, uncertainties about resources, funding, positions and promotions, the politics of science, publishing (the drive to publish in so-called high-impact journals) and many other concerns are mounting. To consider many of these phenomena in depth, a meeting was recently convened to discuss issues critical to conducting research with an emphasis on the neurobiology of metabolism and related areas. Attendees included a mix of senior and junior investigators from the United States, Latin America, and Western Europe, representing several relevant disciplines.

Participants were initially assigned to small groups to consider specific questions in depth, and the results of those deliberations were then presented and discussed over several plenary sessions. Although there was spirited discussion with sometimes differing opinions on some issues, in general there was good consensus among individuals and the various groups. While the discussions were wide-ranging, we have condensed the topics into three (albeit often overlapping) major areas:

  • 1) General research issues applicable to multiple areas of translational research; for instance, animal models, sex and gender differences, examples of emerging technologies, as well as the issue of data reproducibility and related topics.
  • 2) Funding issues, such as how to secure industry funding without compromising research direction or academic integrity, and the training of students and fellows, with a focus on how to optimally prepare trainees for the diverse potential career paths available.
  • 3) Finally, specific research topics of interest were discussed, including whether peptides or other signaling compounds, or specific brain areas, have “thematic functions” or the challenges associated with investigating the function of G-protein-coupled receptors (GPCR) in the brain.
We consider each in turn.

2. General research issues

2.1. The selection of animal models

One of the first questions considered was how good or bad are our current experimental models? As might be expected, discussion initially focused on rats vs. mice. Mice have many obvious advantages including size, cost per animal, a large genomic database, readily available genetically modified strains, and the ability to use smaller amounts of expensive, hard-to-get experimental compounds. On the other hand, rats perhaps have more translational value because they are often better models for human systems and behavior. For instance, most commonly used laboratory rats (Sprague Dawley, Wistar, Long–Evans) are outbred strains and hence have considerable genetic variation, a feature which for many research questions better represents the genetic heterogeneity and diversity of humans. In addition, in certain situations such as after gastric bypass surgery, rats may better model humans because, similar to humans, the substantial reduction in body weight after gastric bypass surgery is mainly due to a reduction in food intake. In mice, on the other hand, food intake is often scarcely changed after gastric bypass surgery, and the reduction in body weight is largely due to an increase in energy expenditure (for review see [5] ). Rats have also contributed to a large and rich experimental database and historic development of scientific theories, especially in behavior, physiology, and brain structure.

Given technological advances in molecular genetics, it may be that the ‘genetic manipulation’ advantage offered by mice will soon be available, at least to some extent, for rats and other, larger mammalian species that better model certain features of human physiology and behavior. This is a key factor, as many systems remain difficult to assess at the desired level in rodents. Nevertheless, public concerns about the use of invasive experimental methods, and in particular about performing genetic manipulations in animals that are larger than laboratory rodents and phylogenetically closer to humans, may hamper the use of such animal models in science. This also relates to the question of whether we should always use the best animal model for a given pathology or compromise with a species that is more ethically accepted and perhaps even less expensive.

An important concern for much current research is “translationability”, i.e., whether what is found in one species (e.g., rat) is also true of another (e.g., mouse, human). How does this create unnecessary redundancy on the one hand and reduce the likelihood of obtaining funding on the other? For example, if one group reports a phenotype in the mouse, and a researcher using a rat model has the means to extend the findings in a novel way, must s/he first demonstrate the basic phenotype in the rat? Many felt that reviewers demand this intermediate step; i.e., it is widely recognized that cross-species validation is a concern that must be considered. And while the goal of such research could be justified as comparative physiology, the actual goal is often more closely aligned with issues of modeling and which species more closely resembles human physiology.

In any case, interfacing well with reviewers (of grant proposals or manuscripts) requires strong justification for any model system. It was the group's consensus that the primary scientific concern should be the significance of the research question being asked. There are no good or bad models per se, but there are better or worse models for a particular question, meaning that the value of the model depends on the nature of the question. There should be well-defined criteria to justify the choice of any model. In this climate of shrinking extramural funding, the choice of one model or another must be clearly laid out for reviewers of research proposals as well as for manuscripts, and journal editors should pay particular attention to these issues.

For translational research, a possible strategy would be that journals and funding agencies could include a section detailing the use and choice of the model and how it relates to human physiology if appropriate. Due to space constraints, such sections could be included in the online supplementary material to allow the authors to offer a detailed explanation of the proposed or used model system, including its strengths and weaknesses. Such an approach would, over time, hopefully generate a consensus or at least partial agreement on the applicability of certain model systems to specific research questions.

There was considerable discussion about the utility of other experimental models, including dogs, pigs, non-human primates, non-vertebrates, and computer models. Many of the trade-offs when using these models are obvious. For example, while non-human primates can model humans more closely than rodents, costs and ethical, cultural, and political issues can make such research prohibitive. Differences among rodent strains are just as likely to be important as differences between species (e.g. [3]). For some less common models that can be justified for particular questions (for example, pigs or other large animals), a strong case can be made for collaborating with researchers in animal science, who generally have access to better facilities in which to conduct such research. On the other hand, for more primitive animal models, such as zebrafish, C. elegans, and other smaller animals, teaming up with specialists in biology may be a viable option. An excellent recent review summarizes the strengths and weaknesses of currently used animal models [4]. In general, computer models were deemed still somewhat limited for addressing research questions in whole-animal physiology and behavior. On the other hand, they may be useful for specific purposes, depending upon what is being modeled. Examples include computational modeling of molecular docking and molecular dynamics in drug design to explore the structure and function of diverse therapeutic targets, or, at the other end of the spectrum, simulation models of obesity trends with a focus on the effects of possible policy interventions on public health and economic outcomes.
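As an illustration of that last use, the toy compartment model below projects obesity prevalence under a hypothetical intervention that lowers annual incidence. It is a sketch only: every parameter value is invented, and a real policy model would be calibrated to survey data, stratified by age, and accompanied by uncertainty analysis.

```python
# A toy compartment model of obesity prevalence: each year, a fraction
# of the non-obese population becomes obese (incidence) and a fraction
# of the obese population ceases to be (remission). All rates invented.

def simulate(prevalence, incidence, remission, years):
    """Project prevalence forward with constant annual rates."""
    trajectory = [prevalence]
    for _ in range(years):
        prevalence += incidence * (1 - prevalence) - remission * prevalence
        trajectory.append(prevalence)
    return trajectory

baseline = simulate(0.30, incidence=0.020, remission=0.01, years=20)
# Hypothetical policy intervention: incidence reduced by a quarter.
policy = simulate(0.30, incidence=0.015, remission=0.01, years=20)

print(f"prevalence after 20 years: baseline {baseline[-1]:.3f}, "
      f"with intervention {policy[-1]:.3f}")
```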

The point was made that the use of experimentally modified genes in rodent models is now so common that scientific review groups (e.g., at NIH) routinely assign much lower priorities to proposals that simply describe new phenotypes of genetically modified species. Rather, proposals need to address specific questions of gene function that will benefit from the genetically modified model.

It was noted that industry often takes a different approach to animal models, where their goal is not necessarily to understand a system but rather to perform discovery work that leads to marketable drugs or other products. This aspect of the translation issue is often of ultimate importance: How do such data predict human responses?

2.2. Sex and gender differences

The impact of certain research directives mandated by the NIH and other funding agencies, some of which require researchers to design and conduct experiments in a prescribed manner, was another continuing theme in many discussions. For example, NIH's policy requiring justification for using one or both sexes in research raised several concerns. Some felt that this requirement saps limited resources by “forcing” experiments that are not hypothesis-driven, and may not generate important and/or relevant findings.

While investigating sex as a biological variable might be fruitful, it requires careful experimental design to ensure that the studies are adequately powered and that data analysis is based on solid knowledge of the genetically and hormonally mediated physiological and behavioral differences between the sexes. Studies in females need to take into account the four stages of the estrous cycle and, as such, can require many more animals, including even an ovariectomized group. Many studies now include both sexes, but the experiments are not always designed to reveal potential sex differences.
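To illustrate the resource implications, the sketch below runs a conventional a priori power calculation using the statsmodels package; the assumed effect size, significance level, and design are illustrative assumptions, not recommendations for any particular study.

```python
# A minimal sketch of the power calculation behind 'adequately powered'
# sex-difference designs, using statsmodels. Effect size and design are
# illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Detecting a moderate sex difference (Cohen's d = 0.5) at alpha = 0.05
# with 80% power in a simple male-vs-female two-group comparison:
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"two groups: {n_per_group:.0f} animals per group")  # about 64

# Stratifying females by the four estrous-cycle stages multiplies the
# number of female groups, so the total grows accordingly (before even
# adding, e.g., an ovariectomized group).
print(f"males + 4 female stage groups: about {5 * n_per_group:.0f} animals")
```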

Group discussants recognized the value of focused, hypothesis-driven research on sex differences, and suggestions were offered to improve the science being conducted while remaining compliant with the funding mandates. For example, the NIH could provide funding through which graduate students and postdoctoral fellows could be trained in labs that specialize in studying sex differences and thus know how such studies should be conducted [e.g., [1], [6]]. As sex as a biological variable is a key part of a recent NIH initiative to enhance reproducibility through rigor and transparency [see [2]], perhaps NIH could call for additional proposals that specifically focus on revealing potential sex differences. Other suggestions were 1) to have funding agencies provide supplemental funds for expanding already-funded research to include both sexes, 2) to focus on critical developmental stages that might enhance sexual dimorphisms (e.g., puberty, menopause) when there is likely an important difference, and 3) to fund key exploratory experiments in a “look-see” approach to determine the effect of sex in established fields whose findings are largely based on males. The overall point is that many researchers now conduct such experiments in order to be compliant, but they actually have little or no interest in sex differences per se and no pertinent knowledge.

2.3. Examples of emerging technologies

Many topics were considered, although in-depth discussions occurred for only a few. The following paragraphs reflect an extended summary of one topic that generated particular interest: the use of designer viruses to define neural networks and investigate their functional architecture. A show of hands revealed widespread use of viruses among the discussants, in part because they are relatively inexpensive, readily available, and provide important anatomical specificity within the nervous system. However, some viruses are subject to strict biohazard regulations that must be adhered to.

As with all aspects of research, it is important to know the specific question being asked and whether use of a particular virus is appropriate. In this regard, it was emphasized that viruses can be divided into two general categories: replication-competent strains (such as pseudorabies virus, which is used for tracing multisynaptic pathways) and replication-incompetent strains (such as recombinant adeno-associated virus and lentiviruses expressing cDNAs encoding light-sensitive channels, calcium-sensitive fluorophores, or any other protein or shRNA). Replication-incompetent strains that are broadly used as expression vectors are generally considered harmless. In both of these categories, it is essential to consider the biological properties of the reagent to be employed in the experiments and how they will impact the interpretation of the data produced. For example, the virulence of infecting, replication-competent virus strains has a clear impact upon the specificity of transport through synaptically linked populations of neurons as well as the function of infected neurons within the circuit. The strains of virus most widely used for circuit analysis have been genetically modified to reduce virulence without compromising invasiveness. Nevertheless, these viruses still evoke an immune response in the nervous system that will ultimately compromise the function of infected neurons. Thus, temporal analysis of viral invasiveness of a circuit is an essential component in evaluating both the organization of the circuit and the function of its constituent neurons. There was also concern about the toxicity of genes cloned into viruses. Fluorescent proteins themselves may generate an immune response and be toxic when overexpressed. Short hairpin RNAs (shRNA), which are used to silence gene expression, may saturate the cellular RNAi machinery such that endogenous miRNAs are not processed properly, necessitating the use of both scrambled compounds and non-injected animals as proper controls to interpret the results in physiological and behavioral experiments, particularly when using adeno-associated and lentiviruses.

The direction of transport of viruses through a neural circuit is also an important consideration in experimental design. Well-characterized strains of viruses have been generated that not only have reduced virulence but also travel selectively either retrogradely or anterogradely through a circuit. Many of these reagents are available from individual investigators as well as through an NIH-funded center headquartered at the University of Pittsburgh (Center for Neuroanatomy with Neurotropic Viruses or CNNV; http://www.cnnv.pitt.edu ). The CNNV also provides resources to aid in experimental design as well as access to reviews characterizing the strengths and limitations of the technology.

There was a clear consensus among discussants that it is incumbent upon the investigator to become informed on the many issues that impact upon successful application of this demanding technology. Taking advantage of resources available from investigators expert in the technology, as well as those available through the CNNV, can help enormously in achieving that informed perspective.

Increasingly, replication-incompetent viruses and expression vectors are being combined in individual experiments in order to identify the connections of functionally defined populations of neurons. These reagents are mostly employed to highlight connections to a defined population of neurons or to restrict transport of virus through a single synapse. Once again, the biological properties of the viruses used and the ability to alter their genomes are foundational to these powerful approaches. Alpha herpesviruses (DNA viruses) have been most widely used to define the connections of individual populations of neurons within a larger network, whereas rabies viruses (RNA viruses) are employed to define single orders of synaptic input to identified neurons. In both instances, the ability to alter the viral genome to express unique reporters of infection, as well as proteins that influence the invasion and transport of the reporter viruses, has created the foundation for the successful development and application of these experimental approaches. Discussion of the strengths and limitations of these approaches highlighted the importance of defining the full extent of the neurons whose connections are being investigated. For example, if a neurochemically defined group of neurons is the target of analysis, do all of those neurons become infected with the virus or the expression vector? Also essential is to carefully consider the cytoarchitecture of the injected region and the topographical distribution of the targeted neurons within it. Failure to consider these and other important issues can lead to unwarranted conclusions regarding the connectivity of the circuitry under study.

Another approach that is increasingly being applied in the field uses unique fluorescent probes to identify and quantify multiple RNA species (multiplex analysis) in cell cultures or in tissue sections. The capability to assay the simultaneous expression of multiple genes in single cells is very powerful, and several of these methods are currently in use. However, the fact that some are only available commercially can raise problems because reagents are expensive and proprietary, meaning their identity and compositions are not openly available. To offset these issues collaborative networks among labs have formed to help troubleshoot and circulate alternative approaches among those with similar interests. As with all mRNA hybridization methods in intact cells, there are always questions of quantifiability and whether or not one is measuring functional mRNA from which bioactive proteins can be translated. These uncertainties carry the risk of misinterpreting data; for example, the temptation to use changing mRNA levels as proxies of altered protein function.

2.4. The value of replicating published results and the value of failing to replicate published results

The issue of labs failing to replicate what other labs have reported generated lively discussion. Discounting instances of fraud or simply poor training or practice, the discussion settled on ‘good’ science: why is the incidence of failure to replicate so high (e.g. see [8]), and what are the possible underlying causes? There is, of course, always something to learn from differing results, because when both sets of experiments are reliable within one lab or setting, differences between labs likely indicate that a significant, biologically relevant, and as yet unidentified variable (e.g., different strains of subject, different food, different temperature or other lab conditions, etc.) has been overlooked. Importantly, failures to replicate findings are relevant for both in vivo and in vitro research. Hence, problems with replication are not a reason to replace in vivo experiments with in vitro ones, an argument often used by animal protectionists.

At another, perhaps subconscious, level there may be conflicts of interest; i.e., there is often pressure to obtain certain data in order to publish or to secure research funding or a job or promotion, which may prompt a researcher to be less critical than she/he should be or to publish data prematurely, i.e., without sufficient replications. Also, there may be a commercial advantage to promoting one finding over another. In some countries, authors receive monetary bonuses for publishing in high-impact journals ( https://www.nature.com/news/don-t-pay-prizes-for-published-science-1.22275 ), or one's salary may even be directly proportional to one's publishing record ( http://www.sciencemag.org/news/2017/08/cash-bonuses-peer-reviewed-papers-go-global ). The point is that failure to replicate can be a complex issue, and we often do not invest sufficient resources in determining the underlying cause.

An extension of the lack of replication, especially in some fields, has been the failure of clinical studies to find therapeutic value for drugs that work quite well in animal models. Although such failures are currently blamed mostly on the animal models, it should be noted that clinical studies also suffer from many shortcomings in experimental design. More germane to the lack of replication is the large number of fundamental differences in the design and execution of basic versus clinical studies, from the statistical handling of missing or uncertain data to the constraints imposed by ethical guidelines.

In practice, the first published report of a new finding or phenomenon—particularly if it is in a high profile journal– acquires a certain de facto power from its originality or novelty. This sets a standard against which apparently contradictory reports must be judged for publication. While novelty value is obviously important in science, reports of apparent failures to replicate, when these occur, may consequently have to attain a higher bar for publication, even when their methods are appropriate and rigorous.

Novelty and reproducibility can be reconciled more easily by including as much methodological detail as possible in original research reports. Over time, when several papers have addressed the same issue, meta-analyses of the published data may be useful, but such results often are not conclusive. Even reporting full methodological detail can be difficult, given that some journals impose space constraints or relegate methods to supplementary materials, which are easily overlooked or disregarded. Some journals (e.g., the journals of the American Physiological Society, BioMed Central, the British Pharmacological Society, the Nature Publishing Group, Physiology and Behavior, PLOS, and others) request that all animal experiments comply with the ARRIVE guidelines (https://www.nc3rs.org.uk/arrive-guidelines) or the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Publications No. 8023, revised 1978). Many journals also endorse the completion of a checklist of critical factors that might affect data validity and robustness (https://www.nature.com/news/surge-in-support-for-animal-research-guidelines-1.19274). However, these endorsements alone apparently do not improve reporting [7], suggesting that journals should not only support, but also actively enforce, adherence to such good publishing practice. One reason for the lack of adherence may be that complying with these requirements sometimes conflicts with a manuscript’s word or character limit. In any case, adhering to these guidelines might improve researchers’ ability to parse out the methodological possibilities that underlie differences in results, and it is desirable that publishers, academic societies, and funding agencies reach consensus soon. Nevertheless, and perhaps most importantly, we believe it is our responsibility as scientists to treat each report as a historical record of what took place in a specific set of circumstances. In other words, no single report should be treated as a correct or incorrect finding, but rather as a record of history. A failure to replicate does not necessarily imply that the initial paper was incorrect. Rather, the implication is that unknown factors are likely at play, and that further attempts at replication by other, independent groups will be informative.
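As a concrete illustration of the meta-analytic option mentioned above, the sketch below pools effect estimates from several studies by inverse-variance weighting and computes Cochran's Q as a heterogeneity check; the three (effect, standard error) pairs are invented for illustration.

```python
# A minimal fixed-effect meta-analysis sketch: pool study effects by
# inverse-variance weighting, then gauge heterogeneity with Cochran's Q.
# The (effect, standard error) pairs below are hypothetical.
import math

studies = [(0.40, 0.15), (0.10, 0.20), (0.55, 0.25)]

weights = [1 / se**2 for _, se in studies]  # precision of each estimate
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q: large values relative to k - 1 degrees of freedom signal
# that studies disagree more than sampling error alone would predict --
# the 'unknown factors at play' case discussed in the text.
q = sum(w * (e - pooled)**2 for (e, _), w in zip(studies, weights))

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"Cochran's Q: {q:.2f} on {len(studies) - 1} df")
```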

2.5. Unconscious bias

One perhaps underestimated factor that may contribute to the generation of irreproducible results is unconscious bias. Everything we do is subject to unconscious bias, and it is necessary to be aware of this in order to limit or possibly prevent it. This bias is based on our experiences, culture, prejudices, and many other factors, and it can manifest when designing experiments or interpreting results as well as when reviewing manuscripts or grant proposals. It is occasionally reflected in semantics, when scientists unconsciously state that they perform an experiment to “show something” instead of examining a question or testing a hypothesis. Unconscious bias is difficult to control, but some guidelines are available ( https://royalsociety.org/∼/media/policy/Publications/2015/unconscious-bias-briefing-2015.pdf ). The Royal Society suggests utilizing some key action points to deal with unconscious bias: 1) when preparing for a committee meeting or interview, try to slow down the speed of your decision making; 2) reconsider the reasons for your decision, recognizing that they may be post-hoc justifications; 3) question cultural stereotypes that seem truthful; 4) remember you are unlikely to be fairer and less prejudiced than the average person; 5) you can detect unconscious bias more easily in others than in yourself, so be prepared to call out bias when you see it.

3. Funding and training issues

3.1. Funding

An important discussion question concerned ways to secure industry funding without compromising research direction or academic integrity. Several models that are currently working well were discussed. For example, several companies have formed collaborative funding foundations within local communities that include a number of academic research institutions, providing funds to be used for general areas of interest to them and for which faculty from the various institutions can apply. Likewise, similar foundations are funded by groups of philanthropists.

Important issues to consider relate to who owns the data and the publishing rights, what the indirect costs are, and whether or not patents might arise. The percentage of any profits that accrues to the PI or the PI’s lab differs dramatically among institutions, with cited examples ranging from 10 to 90%. Can or should graduate students be recruited to work on industry-funded projects for which proprietary issues may preclude timely publication? It was clear that different institutions and investigators take quite different approaches when addressing these issues.

In addition to contacting a company's research and development department, it was suggested that academic researchers seeking support for early-stage investigations might market their specific abilities, techniques, newly minted molecules, or genetically-modified mice that could be of special value to the company. Further, researchers might propose to study or utilize a product that the company is already developing or marketing, in which case prospective funding may be more forthcoming from the company's marketing division as opposed to its R&D branches.

A quick survey of the meeting’s participants indicated that ∼75% currently use or have in the past used funds from industry. There was no apparent opposition to the use of such funds, but it is increasingly difficult to obtain industry funding for basic research without constraints, particularly on the ability to publish the findings obtained.

3.2. Training

Pertinent to interactions with private entities, there was discussion of how doctoral students are being trained. In point of fact, given the current poor prospects for jobs in academia, many of our PhDs will end up in non-academic (or non-research) jobs, and a key question is whether or not we are training them properly for those markets. Examples of non-traditional career paths taken by newly minted PhDs or post-docs include positions in administration, law, business, scientific writing, teaching, the government, non-governmental organizations, and many others. It was mentioned that a recent survey by NSF found that ∼70% of the current forty thousand or so PhD students in sciences in the US anticipate doing post-docs when they complete their degree. Students need to be assured, however, that it is acceptable for them to aspire to alternative occupations. It is clear that there are not that many post-doctoral positions available (especially to newly minted PhDs), and that, in many cases, post-doctoral training is unnecessary for the pursuit of alternative non-research-based occupations. As a result, many PhD students will have to, and should, go into these alternate career pathways.

Another and perhaps more problematic bottleneck in academic career paths is finding a position at the assistant professor level. A general consensus was that much of the current graduate training is overly technical and not sufficiently conceptual. So, a key question is, are we training our students appropriately to ensure they are aware of, and competitive for, the wide variety of potential non-academic occupations?

Examples of current strategies and policies that might address these issues include: 1) professional societies or organizations could have more diverse job fairs or clinics, and more informed position listings on their websites; 2) universities could offer specific graduate courses or seminar series that address alternate careers for scientists; 3) graduate programs could include requirements for grant writing and other duties of faculty, put students on department committees, and so on, as these are general skills that are applicable to academic as well as non-academic jobs; 4) industries could establish more apprenticeship programs for PhD students with universities if the funding can be worked out.

One issue that interacts with student training is that, from a mentor’s point of view, research has to be completed in order to secure funding, publish papers, advance student careers, and so on. If students spend large amounts of time on alternative career-building activities, it can dilute the mentor’s efforts to move projects forward. Because of this, there is considerable variance among mentors in their approach to having students acquire broad skills. In any case, one major goal of a PhD program should be to train students in “critical thinking” and to emphasize conceptual training (which will be broadly applicable to multiple career paths) in addition to technical training. An interesting possibility is to encourage industrial partners to participate in teaching activities. This could be leveraged (as currently occurs at several institutions) by inviting speakers from industry to PhD program events. Alternatively, it could be beneficial for students to participate in internal training programs (e.g., the Novartis program in drug discovery), which would help educate students about the structures and approaches used in industrial research and development.

4. Specific research topics of interest

As might be expected, myriad specific topics were suggested for discussion, and we therefore highlight a few areas that were broadly considered.

4.1. “Thematic functions” of peptides, signaling molecules, or specific brain areas

The group discussed the historical notion that a peptide (or other compound) acting at one or more receptors in different systems and tissues can be considered to have an overall “thematic” or inter-related function; i.e., that all of its diverse actions relate to one over-arching goal. Several examples of such thematic functions do exist. For instance, vasopressin promotes water retention in the kidney, causes vasoconstriction, and stimulates water intake by acting in the brain, all functions that relate to available fluid volume in the body and the circulatory system. Oxytocin (OT) stimulates uterine contractions during birth and myoepithelial contractions in the mammary gland as a peripheral hormone, and it promotes emotional bonding as a neuropeptide, functions that can easily be summarized as relating to reproduction and social bonding. On the other hand, OT is also involved in descending projections from the hypothalamus to the hindbrain that modulate satiation signals, a function that cannot directly be related to reproduction or social bonding. Moreover, a “thematic” function can hardly be detected for several other neuropeptides: Neuropeptide Y, for instance, is anabolic and anxiolytic and has been implicated in cell proliferation and differentiation. The cocaine- and amphetamine-regulated transcript (CART) is involved in the mediation of functions as diverse as pain and eating.

In general, evolutionary pressures likely take advantage of available compounds for novel functions; e.g., a peptide with one original function may over time acquire novel functions related to that original “theme” as well as divergent ones. Over time, this may lead to new compounds (e.g., ancestral insulin evolved into “modern” insulin and the insulin-like growth factors) or simply to apparently unrelated functions of the same compound. Another important point relates to the anatomical sphere of influence of the compound. For example, a circulating hormone may be more likely to have a thematic function because its receptors in diverse tissues are accessible from the circulation, whereas a thematic function may be less likely for a neuropeptide, whose release and action are confined to individual, isolated locations.

In short, while the question of “thematic functions” of signaling compounds is probably too complex to have been discussed comprehensively in the available time, there was consensus that a unified physiological or “thematic” function is certainly not a universal principle. Biological systems are considered to have evolved using whatever ligands and receptors were available to provide important signaling capacity, and the needs may well differ among systems. Biological systems are modular, with many interacting parts and levels, even within a single cell.

Nuclei in the nervous system were originally defined morphologically rather than functionally, a categorization that remains the basis for the majority of standard animal brain atlases. It is clear, however, that the vast majority of brain nuclei contain diverse cell types that influence diverse physiological systems via diverse axonal projections. Thus, a neuron may synthesize numerous transmitters (peptides, biogenic amines), with different subsets released at different terminals, or at the same terminal under different conditions and on different time scales. As sophisticated techniques became available and single cells could be phenotyped, the functional diversity of subsets of cells in numerous brain nuclei became apparent. That said, there are also examples of nuclei, or portions of nuclei, with a single, dedicated function. Generally, these are nuclei that are closely allied to sensory or motor functions. For example, this may be the case for autonomic motor nuclei in the hindbrain or for some sensory nuclei (e.g., the sensory representation of inputs from the whiskers in the barrel cortex).

4.2. Peptide receptor function

The group discussed challenges associated with investigating the function of G-protein-coupled receptors (GPCRs) in the brain. GPCRs constitute the largest receptor family encoded in the mammalian genome and are the targets of many drugs. Given the relevance of this topic for the participants, we include an extended discussion of it here.

GPCR function can easily be misinterpreted. This appears to be an underappreciated problem that derives primarily from two technical limitations: first, the difficulty of accurately locating GPCRs within brain cells; and second, the shortcomings of the techniques available to manipulate their function. Accurately locating GPCRs in the brain (particularly at the sub-cellular level) is not a trivial task. They are found post-synaptically on dendrites and neuronal somata and pre-synaptically on axon terminals, where they often reside somewhat distally from the synaptic cleft. For the most part, GPCR ligands act as modulators rather than mediators of ionotropic neurotransmission. In addition to occurring on neurons, GPCRs are also expressed by glial, endothelial, epithelial, and ependymal cells, complicating how experimental manipulations must be interpreted.

Accurate GPCR localization is hampered by the lack of suitable probes, particularly high-specificity antibodies. Commercially available GPCR antibodies, for example, are often poorly characterized and may therefore provide little useful information. As an alternative, GPCR location can be addressed by what are essentially proxy approaches. Two are commonly used: 1) appropriate gene promoters drive the expression of fluorescent markers in target cells; or 2) in situ hybridization (ISH) is used to locate GPCR-encoding mRNAs. While both techniques have greatly advanced our knowledge of which cell types express GPCRs, neither provides information about the precise subcellular location of the target GPCRs, nor about how altering their function impacts a neural circuit. For example, it is not clear how the distribution of a GPCR gene promoter-driven GFP signal in a neuron relates to the specific location of the functioning transmembrane receptor protein, while ISH identifies mRNA, not protein. This situation could be dramatically improved by developing antibodies better targeted to the functionally active epitopes of GPCRs.

With regard to investigating GPCR function, the tools are again less than ideal. Traditional pharmacology offers receptor sub-type specificity, but targeted delivery is not always well controlled. An alternative and ostensibly more targeted approach uses shRNA or other methods to knock down (KD) receptor gene expression. However, the way that results from some gene KD experiments are interpreted suggests that the location of the GPCRs affected by the KD is not always carefully considered. It should be remembered that a manipulation that reduces the amount of a GPCR mRNA in a target brain area likely affects receptor expression only in neurons that have their cell bodies within the area of the injection. This is important because any presynaptic GPCRs found on afferent neurons projecting into the target area are unaffected by the KD; these are synthesized by distally located neuronal populations. The fact that GPCRs can be found on the axon terminals of the targeted neurons also means that loss of function is unlikely to be confined to the region containing their somata and dendrites: the efferent projections of these neurons, which may terminate some distance away, will also lose their pre-synaptic GPCRs. Interpreting the effects of mRNA KDs is therefore far from straightforward, and it is unhelpful that some studies appear to conflate the KD of GPCR mRNAs in neurons within a region with a reduction or loss of all cognate receptor proteins throughout that region, which probably does not occur because of the presynaptic receptor distribution.
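To make this spatial bookkeeping concrete, consider the following toy calculation (a minimal Python sketch; the pool sizes and knockdown efficiency are invented for illustration and are not taken from the meeting or from any study):

# Toy accounting model of a GPCR mRNA knockdown (KD) injected into brain region "A".
# All numbers are illustrative assumptions, not measurements.

somatodendritic = 1000   # receptors on somata/dendrites of neurons whose cell bodies lie in A
presyn_afferent = 400    # presynaptic receptors on terminals projecting INTO A,
                         # synthesized by cell bodies outside A (untouched by the KD)
presyn_efferent = 300    # receptors synthesized by A-neurons but sitting on their
                         # efferent terminals OUTSIDE A

kd_efficiency = 0.9      # assumed fraction of local mRNA silenced

# The KD removes only receptors synthesized by cell bodies within A:
lost_in_A = kd_efficiency * somatodendritic
lost_outside_A = kd_efficiency * presyn_efferent

remaining_in_A = (somatodendritic - lost_in_A) + presyn_afferent
total_in_A = somatodendritic + presyn_afferent

print(f"Receptors still present in A: {remaining_in_A:.0f} of {total_in_A} "
      f"({remaining_in_A / total_in_A:.0%})")
print(f"Receptors silently lost outside A: {lost_outside_A:.0f}")

In this toy example, even a 90%-efficient local KD leaves roughly a third of the receptors physically present in the region (the afferent presynaptic pool) intact, while also deleting receptors in regions the injection never touched; these are exactly the two interpretive traps described above.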

Discussions concluded that, given the lack of methods both for accurate localization and for site-specific compromise of function, current approaches still lack the specificity needed to address GPCR function in a sufficiently sophisticated manner.

4.3. Redundancy

Why does so much redundancy exist in some biological systems? As an example, why are there so many peptides and other eating-generated molecules that act to reduce meal size? The overwhelming response from the participants was that this is what it takes for the system to function optimally. While numerous peptides reduce food intake as a collective, perhaps redundant, activity, each also has other unique features. The redundancy of some activities (e.g., ingestive behavior) makes the overall metabolic process more efficient and emphasizes how critical adequate energy is for the two principal biological goals, survival and reproduction. For example, whether or not to eat, and how much to eat, depend on complex economics, including prey/predator probabilities, the energy it takes to forage and obtain food, the amount of stored energy on hand, idiosyncratic factors such as stress or illness, and so on. The “appropriate” decision is therefore the result of compromises among competing goals (e.g., to acquire calories without becoming prey or expending more calories than are gained in the search for food). This redundancy may be more common in the neural processing of sensory signals or in life-sustaining activities. We see it as redundancy, but it may be an artifact of our measuring a single variable at a time, using assays and measurements that have been standardized across laboratories in order to increase interpretive power. Such standardized measures may well miss, or even obscure, finer behavioral details that are unique to a particular signaling pathway.

5. Perspectives/epilogue

As alluded to at the beginning, science at large, as well as research in our field, currently faces several serious problems: decreasing funding, deteriorating public perception of research, questions concerning the honesty of researchers, doubts about the reproducibility and/or relevance of data, etc. We as scientists need to be open to justified criticism from the outside, particularly since some of these criticisms raise questions about the entire “operating system” of science. It is clear that doing nothing would be the worst strategy, because it would further discredit science and eventually result in “punishments” by funding organizations and the public. Thus, we need to find answers to the questions and solutions to the problems. But what are these answers and solutions? What is the best way forward, and how should we proceed on the various issues where action is needed?

Although the meeting naturally did not cover all of the critical issues, the discussions touched upon a broad range of topics that are important for the future of this field of research. Reflecting the combined thought and thorough analysis of a large group of excellent scientists, these discussions may suggest at least some possible ways to proceed. In this spirit, we hope that this summary of the major ideas of the meeting helps to promote this important discussion for our field.

Conflicts of interest

None of the authors declares any conflict of interest.

The Royal Society

A future vision for Research Excellence Framework and what it means to me

An early career researcher’s take on the Royal Society’s response to the REF consultation.


Over the next few months, we shaped a vision for how REF 2021 could be, putting the institution at the centre of the system, rather than pressure falling onto individuals. I also took part in a morning of discussion with other early career researchers (ECRs) from across the sector to see if we had missed any potential challenges. The Royal Society’s proposals focus on ensuring that REF 2021 works better for all. Here are some of the big issues raised during our discussions, and why I think the Society’s proposals should form the basis of REF 2021.

Assessing institutions, not individuals

The Royal Society has proposed introducing a portfolio submission of outputs to support each institution’s environment statement.

The vast majority of researchers across disciplines no longer toil away in private ivory towers: we work in teams. At my university, the Schools within the Faculty of Science are highly permeable, with a free flow of knowledge and skills between them that allows great research spanning different specialities. I’d be delighted if we could be rewarded for this more. But REF 2014 was not designed to capture this: only some individuals were selected to submit a restricted number of outputs, while other essential aspects of a research environment, such as research technicians, librarians, and facilities, did not fit neatly into REF submissions and so risked being undervalued and neglected. Exciting, collaborative research was often considered too risky, and so wasn’t being supported or recognised. In essence, this system failed on many occasions to capture the wealth of research activity that takes place within institutions, and in collaboration with others beyond.

This is where the Royal Society’s proposal is beautiful in its simplicity: the institution submits a range of outputs that evidence the richness and strategic vision of that institution, as outlined in its environment statement. So if the institution claims to have great industry collaborations, these should be evidenced in its outputs.

Hire me as a person, not a bibliography

The undesirable side-effects of the current system don’t stop there. One of the major, and rather unfortunate, issues of the previous REF arose because a researcher’s outputs were portable: if they moved to a new university before the REF deadline – even by just a day – their past work would go with them. This caused a lot of game-playing that impacted all researchers, with a particularly negative effect on ECRs, as it created an artificial five-year cycle in the hiring of postdoctoral researchers into faculty. If you were lucky enough to be searching for a job at the right time, then great (you could even look forward to a higher-than-average salary). However, if you happened to need a career break at that point (such as parental leave), then you missed out on promotion for the next five years. And on top of that, universities could get credit for research that they potentially had nothing to do with.

So, what have the Royal Society suggested? Instead of papers going with you, they stay at the institution where the work was done. If REF 2021 is to be a truly institutional assessment, the logical next step is that research outputs produced at an institution stay with that institution. Now, this raises a potential objection: wouldn’t it decrease the “employability” of an ECR? Why would University of X employ me if I don’t actually count towards their REF submission?

However, I’d argue that this actually improves the situation for ECRs: rather than being just the sum of your papers (some of which you may not have been the driving force behind), you can be hired for your future potential. Institutions will have to outline a vision for their future research environment, as set out in the environment statement, and hire on that basis. I, for one, would rather be valued for my future potential than for my past. And all ECRs will be applying for roles under these terms, so everyone will be in the same situation.

If REF 2021 worked under this premise, several questions and queries that arose from the Stern review fall away. It instantly removes pressures on ECRs, or researchers who have taken a career break to have children or for any other reason. It rewards aspects other than publications (outputs from dance performances to datasets), supports research technicians, and encourages collaborative and risky research.

Challenges of the proposals

If the Royal Society’s proposals are taken forward, there would still be challenges to face. Practicalities need to be addressed, such as determining how many outputs each institution should submit and accounting for different sizes and types of institutions and departments. Also, institutions will understandably always look for ways to maximise their income through a high REF score, and new forms of game-playing could emerge that could impact ECR careers. However, I think this novel thinking around the REF shows that there are new ways to judge the excellence of research in our universities that minimise the negative implications for the next generation of researchers and value the full breadth of research activities of our world-class universities.

Read the Society’s full response to HEFCE’s consultation on the second Research Excellence Framework (REF).

Dr Kate Hendry

Royal Society University Research Fellow


Future of Research

Empowering Early Career Researchers

Future of Research champions, engages, and empowers early career researchers (ECRs) with evidence-based resources to make informed career choices and improve scientific research.


Our Mission

We share resources, community, and expertise with those dedicated to advocacy work and the improvement of academic training environments.

Our projects

We promote grassroots advocacy, currently focusing on three specific projects. Click on each below to see how you can get involved.

Mentoring future scientists

Departmental or training program policies have a huge impact on the success of graduate students, postdocs, and scientific endeavors.

We convened experts and early career researchers to define major policy guidelines and excellence tiers for STEM departments.

Labor and policy

Graduate students, postdoctoral researchers, and research associates are among the groups of academic researchers across the U.S. organizing, or participating in, unionization efforts and votes. Read our statement and review resources for unionization.

International scholars

We raise awareness about the challenges faced by foreign-born researchers employed in US academic institutions and advocate for their needs by collecting data and disseminating findings to broad audiences.

Our leadership in action

Learn more about the importance of mentorship and resources in early career research from Future of Research president Fátima Sancheznieto at TEDx Chicago.

Latest News

Our blog posts keep you informed of overall organization news as well as statements that Future of Research develops as part of our advocacy to better empower ECRs.

Science Policy in our New Administration: Challenges Facing Early Career Researchers

by futureofresearch | Feb 7, 2021

In the week before he was sworn in as the 46th President of the United States, then President-elect Biden took the unprecedented step of elevating the director of the Office of Science and Technology Policy (OSTP) to a cabinet level position. In doing so, Biden...

Police Brutality, Racism, and the killing of Black civilians

by Fátima Sancheznieto | Jun 5, 2020

Dear Future of Research Community, At a time when there is a continued need for as much collective grieving as there is for concrete actions, writing a statement can feel hollow. When so many in the Black community, on a regular basis, decry the systemic,...

U.S. Senate Finance Committee meeting on foreign influences highlights federal agency urgency without clarity

by Gary McDowell | Jun 7, 2019

On Wednesday, the United States Senate Finance Committee met to discuss Foreign Threats to Taxpayer-Funded Research: Oversight Opportunities and Policy Solutions. The webpage includes a video of the session (which begins approximately 30 mins in) and written...

EARE

Welcome to the European Alliance for Research Excellence 

  • Tue, Dec 05, 2023 European researchers and innovators call for EU to reconsider copyright obligations in the AI Act Ahead of the last trilogue on the AI Act, EARE urges EU institutions to reconsider the copyright-related amendments introduced at a late stage in the text.
  • Fri, Nov 17, 2023 EARE signs letter for a balanced approach on AI and data mining in the UK EARE is pleased to be a signatory of the Open Letter on Text and Data Mining (TDM) addressing issues regarding the upcoming AI legislation in the UK.
  • Wed, Jul 12, 2023 Data and AI: Researchers and innovators need legal clarity, not another copyright reform We call on Members of the European Parliament and the Council to take into consideration the needs of European research organisations, academic institutions and start-ups during their negotiations on the AI Act.
  • Tue, Feb 07, 2023 The Data Act risks falling short of its ambitions for research and innovation in Europe  We call on Members of the European Parliament to take into consideration the needs of European research organisations, academic institutions and start-ups during their negotiations on the Data Act.
  • Thu, Jan 27, 2022 Artificial Intelligence and IP: How the UK can adopt a copyright framework that fosters innovation Read EARE's contribution to the UK’s Intellectual Property Office’s Consultation on Artificial Intelligence (AI) and Intellectual Property (IP).
  • Tue, Sep 07, 2021 Data Act: Creating a Coherent Framework to Increase Stakeholders' Trust in Data Sharing Read our contribution to the European Commission's consultation on the Data Act. 
  • Tue, Jul 20, 2021 Singapore’s new Text and Data Mining exception will support innovation in the digital economy
  • Tue, Jun 08, 2021 Research Libraries UK, SCONUL and UCL Library Services join EARE We have joined EARE to strengthen the voices calling for TDM to be available to any person with legal access so that it can live up to its full potential.
  • Tue, Apr 20, 2021 LIBER joins the European Alliance for Research Excellence The European Alliance for Research Excellence (EARE) is thrilled to welcome LIBER as the newest member of its coalition.
  • Fri, Jan 29, 2021 Done right, Europe’s Data Strategy can unlock tremendous new opportunities for European innovators Read EARE's position on the European Data Governance Act (DGA).
  • Tue, Oct 13, 2020 The European Alliance for Research Excellence: from text and data mining to open data Read EARE's new focus on open data now that the text and data mining and Copyright Directive discussions are over.
  • Thu, Jul 09, 2020 EARE’s Statement at WIPO’s Conversation on Intellectual Property and Artificial Intelligence Read EARE's Statement given during WIPO's Conversation on Intellectual Property (IP) and Artificial Intelligence (AI)
  • Wed, Apr 03, 2019 Europe’s ability to lead in AI will be helped by the new TDM exception Read EARE's final statement on the adoption of the EU Copyright Directive.
  • Fri, Feb 22, 2019 EARE’s position on the results of the trilogue discussions on the Copyright Directive EARE welcomes a positive development in last week’s trilogue meeting on the copyright directive.
  • Thu, Sep 13, 2018 The European Parliament’s position on the Copyright Directive will hold back European research and innovation EARE members are disappointed in the limited TDM exception adopted by the European Parliament as part of the Copyright reform.
  • Mon, Sep 10, 2018 European innovators call on the European Parliament to support MEP Marietje Schaake’s amendments on Article 3 for the benefit of European research and innovation A broad TDM exception is vital for the EU’s competitiveness.
  • Wed, Sep 05, 2018 The copyright reform bug that risks derailing Europe’s AI ambitions European startups can become world leaders in new technology, so why would September 12’s crucial vote on the copyright reform hold them back?
  • Mon, Sep 03, 2018 Japan amends its copyright legislation to meet future demands in AI and Big Data Japan just reformed its copyright laws to encourage the development of AI in the country. EU policy-makers should do the same.
  • Thu, Aug 30, 2018 The Copyright reform must also be fair to European researchers and entrepreneurs Research and innovation organisations call on the European Parliament to revise Article 3 on Text and Data Mining (TDM) in the copyright directive.
  • Thu, Jul 05, 2018 EARE’s position on the Plenary vote on the Copyright Directive EARE warmly welcomes the decision of MEPs to reject the JURI mandate.
  • Thu, Jun 28, 2018 EARE’s position on the JURI Committee Report on the Copyright Directive Read EARE's statement on the JURI Committee Report on the Copyright Directive.
  • Fri, Jun 08, 2018 24 organisations urge Rapporteur Axel Voss MEP to strike a more ambitious deal on TDM 24 organisations express their deepest concerns about the second version of the draft compromise amendments on Text and Data Mining - TDM (Article 3).
  • Thu, May 31, 2018 EARE’s position on the COREPER agreement on the Copyright Directive Read EARE's statement on the COREPER agreement on the Copyright Directive.
  • Mon, Apr 09, 2018 Open letter: Maximising the benefits of artificial intelligence through future-proof rules on Text and Data Mining 23 organisations call on the European Commission to adopt a future-proof TDM exception to maximise the benefits of AI in Europe.
  • Mon, Apr 09, 2018 The European Parliament must improve the TDM exception to benefit European research 28 European organisations signed an open letter asking MEP Axel Voss to improve the TDM exception in the copyright reform.
  • Wed, Apr 04, 2018 Unleashing Big Data’s Potential for journalism, economy and research The extent to which TDM is revolutionising the way both public and private sector researchers work has yet to be fully realised by EU policymakers, argue TDM experts.
  • Wed, Mar 28, 2018 EARE’s position on the Bulgarian Presidency’s compromise text on the copyright Directive Read EARE's statement on the latest Bulgarian Presidency Compromise on the Copyright directive.
  • Mon, Feb 05, 2018 How Zalando links languages with TDM Dr Alan Akbik is a Research Scientist at Zalando Research. He’s using text and data mining to create tools which can be developed in one language and then applied automatically to other languages.
  • Mon, Jan 08, 2018 THE RIGHT TO READ IS THE RIGHT TO MINE To celebrate the 10th International Open Access Week, Cambridge University has placed a digitised version of Stephen Hawking’s 1966 PhD thesis, “Properties of expanding universes” online for anyone to read and download.
  • Fri, Dec 15, 2017 EARE’s position on the latest Estonian Presidency Compromise on TDM Read EARE's statement on the latest Estonian Presidency Compromise on Article 3 of the Copyright directive.
  • Fri, Dec 01, 2017 Europe can lead the information revolution, but does it have the political will? European Leaders are progressively making Artificial Intelligence one of their key priorities, but to achieve the information revolution we need the right copyright framework.
  • Thu, Oct 19, 2017 European data miners: “We were told to relocate our servers to the US” Tech firms are speaking out against changes to EU copyright rules that they say could force them to leave Europe for the sake of protecting their businesses.
  • Tue, Sep 26, 2017 Open Letter: Securing Europe’s Leadership in the Data Economy by Revising the Text and Data Mining (TDM) Exception EARE and 21 European organisations penned an open letter to European policy-makers asking for a broader TDM exception.
  • Wed, Jun 28, 2017 The hope and despair of science and TDM Read Chris Hartgerink's story about the opportunities TDM can bring and the challenges he faces due to inadequate Copyright rules.
  • Thu, Jun 08, 2017 EARE welcomes new coalition members Research Libraries UK, SCONUL and UCL Library Services EARE is delighted to welcome Research Libraries UK, SCONUL (Society of College, National and University Libraries) and UCL Library Services in its coalition today.
  • Mon, Apr 03, 2017 Europe needs more successful data mining startups The European Commission’s proposed reform on copyright is preventing young companies from using TDM technology with full legal certainty, warn Michal Sadowski and Michał Brzezicki.
  • Tue, Mar 21, 2017 Whoops! EU’s copyright reforms might suck for AI startups Governments are having a hard time keeping up with the world of technology, and the EU is no different.
  • Mon, Mar 20, 2017 EARE welcomes improvements made by MEP Comodini Cachia on TDM Today, the European Parliament took an important step to save the future of European research and innovation.
  • Tue, Feb 14, 2017 Culture Committee Doubles Down on Restricting Research Opportunities in the EU Last week the Culture and Education Committee of the European Parliament (CULT) released its draft opinion on the European Commission’s proposal for a Directive on Copyright in the Digital Single Market.
  • Tue, Feb 14, 2017 We are launching the European Alliance for Research Excellence Innovative companies unite to promote a better policy environment for TDM in Europe.
  • Wed, Feb 08, 2017 Avoiding an EU own goal on digital access to knowledge The EU should listen to the innovators, knowledge creators and developers when it comes to data mining: the potential benefits are too great to be ignored, writes Helen Frew.
  • Mon, Oct 17, 2016 Startups attack digital media rights Innovative Danish companies are complaining about a new EU proposal that gives media new rights and limits data collection.
  • Tue, Sep 06, 2016 Tech industry joins forces to tell the Commission not to limit its proposed TDM exception. In a letter to the EU Commission, Allied for Startups, BSA | The Software Alliance and DIGITALEUROPE expressed their concern regarding the EU’s approach to TDM practices.
  • Tue, Sep 06, 2016 The EU just told data mining startups to take their business elsewhere Startups will be hurt by a restriction on text and data mining in the European Commission’s proposal to change EU copyright law, writes Lenard Koschwitz.

Our Policy Work

We are a coalition of companies and research organisations committed to fostering excellence in research and innovation in Europe.

Artificial Intelligence

We are committed to ensuring that Europe is at the forefront of global AI innovation, fostering an environment that prioritizes transparent access to data for AI innovation while ensuring a balanced approach to copyright protections.

Text & Data Mining

We support the use of Text and Data Mining by European governments, researchers, and small businesses to unlock the power of data.

We work to safeguard researchers’ access to data by advocating for robust data-sharing frameworks.
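For readers unfamiliar with the technique, the sketch below (Python, standard library only; the two-sentence corpus is invented for illustration, and this is not an EARE tool) shows the basic principle behind text and data mining: programmatically deriving term statistics from bodies of text. Real TDM pipelines apply the same idea at scale to licensed full-text collections.

from collections import Counter
import re

# A made-up, two-document "corpus"; real TDM runs over thousands of full texts.
corpus = [
    "Text and data mining unlocks the power of data for research.",
    "Researchers use data mining to find patterns across documents.",
]

def tokenize(text):
    # Lowercase the document and split it into simple word tokens.
    return re.findall(r"[a-z]+", text.lower())

term_counts = Counter()
for doc in corpus:
    term_counts.update(tokenize(doc))

# The most frequent terms across the corpus, e.g. [('data', 3), ('mining', 2), ...]
print(term_counts.most_common(3))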


Want to become a member?

Join us at EARE, where we value collaboration and welcome new organizations that share our vision and commitment to research excellence in Europe.

We are passionate about advancing research excellence and harnessing the power of data and innovation. Whether you are interested in becoming a member or engaging with us on our important initiatives, please reach out to us using the contact form provided below.

Career Column, 31 July 2023

UK research assessment is being reformed — but the changes miss the mark

Richard Watermeyer, Gemma Derrick & Kate Sang

Richard Watermeyer is professor of higher education and co-director of the Centre for Higher Education Transformations at the University of Bristol, UK.


Gemma Derrick is associate professor of higher education at the University of Bristol, UK.

Kate Sang is a gender- and employment-studies researcher and director of the Centre for Research on Employment, Work and the Professions (CREWS) at Heriot-Watt University, Edinburgh, UK.

On 15 June, updated rules were proposed for the next round of the Research Excellence Framework (REF), the assessment system used to distribute around £2 billion (US$2.5 billion) of annual funding across UK universities. These were unveiled by the United Kingdom’s four higher-education funding agencies.


doi: https://doi.org/10.1038/d41586-023-02469-w

This is an article from the Nature Careers Community, a place for Nature readers to share their professional experiences and advice. Guest posts are encouraged.

Competing Interests

The authors declare no competing interests.




Future Research Assessment Programme

This information is hosted by Jisc on behalf of the four UK higher education funding bodies.

About the programme

The Future Research Assessment Programme aims to explore possible approaches to the assessment of UK higher education research performance. It has been initiated at the request of the UK and devolved government ministers and funding bodies. This significant piece of work will be led by the four UK higher education funding bodies:

  • Research England
  • Scottish Funding Council
  • Higher Education Funding Council for Wales 
  • Department for the Economy, Northern Ireland


Development timetable for Research Excellence Framework 2029

Summer 2023: Initial decisions consultation; launch commissioned work on people, culture and environment indicators

Autumn 2023: Initial decisions consultation closes (6 October 2023); open access consultation; publish further decisions on REF 2029; recruit committee chairs

Winter 2023-24: Invite nominations for panel members; appoint panels

Spring 2024: Publish open access requirements; panels meet to develop criteria

Summer/Autumn 2024: Publish draft guidance; consultation on panel criteria

2025: Complete preparation of submission systems

2027: Submission phase

2028: Assessment phase

Research Excellence Framework (REF) 2029 update, November 2023

The funding bodies have provided an update on current activity and next steps in development of REF 2029.

Read the update

Initial decisions

The funding bodies have published key decisions on the high-level design of the next research assessment exercise, outlining issues for further consultation. These decisions represent a shift towards a broader and more holistic approach to research assessment.

Find out about the initial decisions and how to engage with the consultation.

People, culture and environment

The funding bodies are inviting written comments on the assessment of the people, culture and environment element in REF 2029. This coincides with the launch of a tender for work to develop the indicators to be used in this assessment.

Find out more about people, culture and environment in REF 2029.

Evaluating research assessment

From understanding how the current assessment system is perceived to reviewing the role of metrics, the funding bodies are undertaking a programme of evaluation activities.

Find out about the evaluation activities.

International advisory group

This group of international experts in different aspects of research assessment provided advice to the future research assessment programme board, focussing on the general principles of research assessment and developing specific recommendations relevant to the future nature of the Research Excellence Framework.

Find out more about the international advisory group.

Equality, diversity and inclusion in research assessment

The funding bodies have a clear aim to embed and support equality, diversity and inclusion in the research ecosystem through the future research assessment framework.

Find out more about the activity undertaken to support this.

Open access

At the launch of the UKRI Open Access Review, the funding bodies agreed that any open access policy within a future research assessment exercise would seek commonality with the UKRI open access policy position.

It is the funding bodies’ intention that a UKRI open access compliant publication will be considered to meet the REF 2029 open access requirements without additional action from the author and/or institution. However, the funding bodies note that the scope of an open access policy for REF 2029 is much broader than the UKRI Open Access Policy and will consult with the sector before developing the full REF 2029 open access policy.

The funding bodies commit to providing appropriate notice of any new requirements; prior to the announcement of the new policy, the REF 2021 requirements will continue to apply.


Future Research For Excellence


  • To provide students, researchers, professionals, and scholars with a platform where they can upload their achievement profile (CV) in order to gain automatic recognition worldwide.
  • To arrange seminars, workshops, and conferences on emerging academic and professional issues, developing and fostering a knowledge-sharing community.
  • To create greater awareness of the need for intellectual knowledge management in the fields of management and the social sciences.
  • To provide students, researchers, professionals, and scholars with a platform where they can publish their research (articles, research assignments, reports, research papers, presentations) and reach a broader readership for their scholarly work.
  • To provide international research institutions, organizations, investment institutions, universities, and funding agencies with a database of researchers, scholars, and professionals where they can easily find their desired research team.
  • To provide teachers and professors with an academic soft board where they can manage their modules, lectures, and teaching material virtually, in order to facilitate the teaching-learning process.

College of Engineering

College recognizes 8 faculty with 2024 excellence awards.

Honorees have demonstrated outstanding service, teaching, inventorship and commercialization.  

The College of Engineering has announced the third annual Faculty Awards, honoring eight faculty members for their excellence in research, service, teaching, inventorship, and commercialization.

Candidates were nominated by their peers or submitted self-nominations. Materials were reviewed by a committee of academic and research faculty members within the College. Each honoree receives $2,000.

Saad Bhamla

Outstanding Faculty Achievement in Research Award (Early Career)

Saad Bhamla Assistant Professor School of Chemical and Biomolecular Engineering

Bhamla focuses on the physics of living systems, uncovering the principles underlying ultrafast movements in biology to inform the design of bioinspired robotics. He’s regularly called on to share his expertise with top-tier media outlets. In recent years, he’s been featured for his work on cicadas, worm blobs, and leaping springtails.

Bhamla also is involved in the emerging field of frugal science, developing affordable and accessible tools for global health. His inventive solutions include a 20-cent paper centrifuge, a 23-cent electroporator, and a 96-cent hearing aid.

Outstanding Faculty Achievement in Research Award (Midcareer)

Wilbur Lam W. Paul Bowers Research Chair, Professor Wallace H. Coulter Department of Biomedical Engineering Associate Dean for Innovation, Emory School of Medicine

Lam’s research focuses on developing and applying micro- and nanotechnologies to study, diagnose, and treat blood disorders, cancer, and childhood diseases. His lab also works to create inexpensive technologies that allow children and their families to diagnose and monitor their own conditions at home. He also led a national project to evaluate diagnostic tests for Covid-19, a test-the-tests effort that was responsible for getting Covid-19 at-home rapid tests widely available on store shelves during the pandemic.

Lam was elected to the National Academy of Medicine in 2023. Membership is considered one of the highest recognitions in health and medicine.


Outstanding Faculty Achievement in Research Award (Research Faculty)

Zhongyun Liu Research Engineer I School of Chemical and Biomolecular Engineering

Liu’s work focuses on creating scalable polyimide and carbon molecular sieve (CMS) membranes for industrial gas separations. Liu’s research has made substantial contributions toward enabling the transition to less energy-intensive gas separation processes with reduced carbon footprints.

For example, Liu has created CMS membranes that perform well across a wide range of industrially important gas pairs, such as natural gas purification and propylene/propane separation. His research offers strategies to control physical aging and use it as a valuable tool to tune the separation performance of CMS membranes for demanding gas separations. 

Outstanding Teacher Award (Early Career)

Daniel Molzahn Assistant Professor School of Electrical and Computer Engineering

In addition to his research on energy systems, Molzahn aims to educate the next generation of electric power engineers. For instance, he leads a 30-student Vertically Integrated Projects (VIP) team that develops video game simulations of power grids operating during extreme events. A first iteration of the game is currently installed at the Georgia Tech Dataseum in the Price Gilbert Library, and plans are underway to incorporate a version into next year’s Seth Bonder high school summer camps.

While vice-chairing and chairing the Power Systems Computation Conference, he worked to record and publish 200+ videos of conference presentations on YouTube, forming the basis for class assignments that placed students in the role of National Science Foundation (NSF) program managers critiquing the latest research. He also used NSF funding to help develop a virtual reality simulation of a substation, providing students the opportunity to work with high-voltage hardware that would otherwise be inaccessible. 


Outstanding Teacher Award (Midcareer or Senior)

Yonathan Thio Senior Lecturer School of Chemical and Biomolecular Engineering

Students in Thio’s courses don’t take typical notes. Rather, 20 years ago he started giving students “fill-in-the-blanks” cards to promote active listening and learning. It’s just one of the unique approaches Thio takes to teaching. He also uses “Muddy Cards,” which students use to tell him which topics were confusing during lectures; those topics are covered again in the next class.

Thio serves on ChBE’s Undergraduate Curriculum committee and has advocated for, and then helped implement, updates that include numerical methods in core courses. He also has developed tools to gather and analyze data on student progress and uses them to help academic advisors support students who are not progressing in their major core courses.

Outstanding Achievement as an Inventor Award

F. Levent Degertekin George W. Woodruff Chair in Mechanical Systems and Professor George W. Woodruff School of Mechanical Engineering

Degertekin’s research group uses acoustics and optics concepts to creatively address a wide variety of important engineering problems, such as acoustic and seismic measurements, medical ultrasound imaging, and sensors for magnetic resonance imaging.

Degertekin is an inventor or co-inventor on 65 U.S. patents and six international patents. More than 50 of these were granted for his work at Georgia Tech with his students and collaborators. His compact, micromachined optical interferometers form the basis of seismometers used by major oil companies, and the technology is part of a NASA project that will someday explore Europa, an icy moon of Jupiter.


Outstanding Achievement in Commercialization and Entrepreneurship Award

Lakshmi “Prasad” Dasi Rozelle Vanda Wesley Professor Wallace H. Coulter Department of Biomedical Engineering

Dasi develops and translates heart valve technology that helps lower healthcare costs and allows doctors to personalize treatment, reducing disparities in patient outcomes.

He owns 10 patents, has another 10 pending, and has two Food and Drug Administration-cleared products through his startup company, DASI Simulations. One of them, a software tool, uses CT scan angiograms to build 3D models and an interactive platform for doctors to simulate surgery before they insert heart valves into patients. To date, more than 1,200 patients have benefited from the technology, which is in use in more than 105 U.S. hospitals.

Outstanding Service Award

Jonathan Colton Eugene C. Gwaltney Jr. Professorship in Manufacturing, Professor George W. Woodruff School of Mechanical Engineering

Colton is chair of Georgia Tech’s Institute Statutes Committee, and he serves on the Faculty Executive Board and the Institute Steering Committee. He has taken an active role in rewriting significant portions of the faculty handbook related to the reappointment, promotion, and tenure (RPT), annual evaluations, and post-tenure review (PTR) processes. Within the Woodruff School, Colton was responsible for coordinating and leading a rewrite of the faculty handbook to include policy changes from the Board of Regents. This included the RPT and PTR processes. He also chaired the School’s tenured faculty annual evaluation committee.




Internet & Technology

6 facts about Americans and TikTok

62% of U.S. adults under 30 say they use TikTok, compared with 39% of those ages 30 to 49, 24% of those 50 to 64, and 10% of those 65 and older.

Many Americans think generative AI programs should credit the sources they rely on

Americans’ use of ChatGPT is ticking up, but few trust its election information

WhatsApp and Facebook dominate the social media landscape in middle-income nations

Electric Vehicle Charging Infrastructure in the U.S.

64% of Americans live within 2 miles of a public electric vehicle charging station, and those who live closest to chargers view EVs more positively.

When Online Content Disappears

A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible.
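As a rough illustration of how such “link rot” can be measured, the sketch below (Python standard library, assuming Python 3.10+; the URLs are placeholders, and Pew’s actual methodology is considerably more involved) checks whether sampled pages still respond with a non-error status:

import urllib.request
import urllib.error

def is_accessible(url, timeout=10.0):
    # Returns True if the URL answers with a non-error HTTP status.
    # (Real measurements also handle redirects and servers that reject HEAD.)
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, TimeoutError):
        return False

sample = ["https://example.com/", "https://example.com/vanished-page"]
dead = [u for u in sample if not is_accessible(u)]
print(f"{len(dead)} of {len(sample)} sampled pages are no longer accessible")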

A quarter of U.S. teachers say AI tools do more harm than good in K-12 education

High school teachers are more likely than elementary and middle school teachers to hold negative views about AI tools in education.

Teens and Video Games Today

85% of U.S. teens say they play video games. They see both positive and negative sides, from making friends to harassment and sleep loss.

Americans’ Views of Technology Companies

Most Americans are wary of social media’s role in politics and its overall impact on the country, and these concerns are ticking up among Democrats. Still, Republicans stand out on several measures, with a majority believing major technology companies are biased toward liberals.

22% of Americans say they interact with artificial intelligence almost constantly or several times a day. 27% say they do this about once a day or several times a week.

About one-in-five U.S. adults have used ChatGPT to learn something new (17%) or for entertainment (17%).

Across eight countries surveyed in Latin America, Africa and South Asia, a median of 73% of adults say they use WhatsApp and 62% say they use Facebook.

5 facts about Americans and sports

About half of Americans (48%) say they took part in organized, competitive sports in high school or college.


The State of Online Harassment

Roughly four-in-ten Americans have experienced online harassment, with half of this group citing politics as the reason they think they were targeted. Growing shares face more severe online abuse such as sexual harassment or stalking.

Parenting Children in the Age of Screens

Two-thirds of parents in the U.S. say parenting is harder today than it was 20 years ago, with many citing technologies – like social media or smartphones – as a reason.

Dating and Relationships in the Digital Age

From distractions to jealousy, how Americans navigate cellphones and social media in their romantic relationships.

Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information

Majorities of U.S. adults believe their personal data is less secure now, that data collection poses more risks than benefits, and that it is not possible to go through daily life without being tracked.

Americans and ‘Cancel Culture’: Where Some See Calls for Accountability, Others See Censorship, Punishment

Social Media Fact Sheet

Digital Knowledge Quiz

Video: How do Americans define online harassment?


USC researchers pioneer new brain imaging technique through clear “window” in patient’s skull

In a proof-of-concept study, a research team based at the Keck School of Medicine of USC showed that functional ultrasound imaging can record brain activity through a transparent skull implant.

Clear experimental skull implant may enable functional ultrasound imaging of the brain for patients with serious head injuries. Photo/Todd Patterson

In the first study of its kind, researchers from the Keck School of Medicine of USC and the California Institute of Technology (Caltech) designed and implanted a transparent window in the skull of a patient, then used functional ultrasound imaging (fUSI) to collect high-resolution brain imaging data through the window. Their preliminary findings suggest that this sensitive, non-invasive approach could open new avenues for patient monitoring and clinical research, as well as broader studies of how the brain functions.

“This is the first time anyone had applied functional ultrasound imaging through a skull replacement in an awake, behaving human performing a task,” said Charles Liu, MD, PhD , a professor of clinical neurological surgery, urology and surgery at the Keck School of Medicine and director of the USC Neurorestoration Center . “The ability to extract this type of information noninvasively through a window is pretty significant, particularly since many of the patients who require skull repair have or will develop neurological disabilities. In addition, ‘windows’ can be surgically implanted in patients with intact skulls if functional information can help with diagnosis and treatment.”

The research participant, 39-year-old Jared Hager, sustained a traumatic brain injury (TBI) from a skateboarding accident in 2019. During emergency surgery, half of Hager’s skull was removed to relieve pressure on his brain, leaving part of his brain covered only with skin and connective tissue. Because of the pandemic, he had to wait more than two years to have his skull restored with a prosthesis.

During that time, Hager volunteered for earlier research conducted by Liu, Jonathan Russin, MD, associate surgical director of the USC Neurorestoration Center, and another Caltech team on a new type of brain imaging called fPACT. The experimental technique had previously been applied to soft tissue, but it could be tested on the brain only in patients like Hager who were missing part of their skull. When the time came to implant the prosthesis, Hager again volunteered to team up with Liu and his colleagues, who designed a custom skull implant to study the utility of fUSI (which cannot be done through the skull or a traditional implant) while repairing his injury.

Before the reconstructive surgery, the research team tested and optimized fUSI parameters for brain imaging, using both a phantom (a scientific device designed to test medical imaging equipment) and animal models. They then collected fUSI data from Hager while he completed several tasks, both before his surgery and after the clear implant was installed, finding that the window offered an effective way to measure brain activity. The research, funded in part by the National Institutes of Health, was just published in the journal Science Translational Medicine .

Functional brain imaging, which collects data on brain activity by measuring changes in blood flow or electrical impulses, can offer key insights about how the brain works, both in healthy people and in those with neurological conditions. But current methods, such as functional magnetic resonance imaging (fMRI) and intracranial electroencephalography (EEG), leave many questions unanswered. Challenges include low resolution, a lack of portability, and the need for invasive brain surgery. fUSI may eventually offer a sensitive and precise alternative.
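To give a concrete sense of how blood-flow-based imaging data is typically processed, here is a minimal, illustrative Python sketch of a step common in published fUSI pipelines: singular-value-decomposition clutter filtering followed by a power Doppler estimate, which is roughly proportional to cerebral blood volume. This is a generic sketch under stated assumptions, not code from this study; the array shapes, frame counts, and the clutter_rank parameter are all assumptions for illustration.

```python
import numpy as np

def power_doppler(frames: np.ndarray, clutter_rank: int = 30) -> np.ndarray:
    """Estimate a power Doppler image (roughly proportional to cerebral
    blood volume) from a stack of ultrafast ultrasound frames.

    frames: complex array of shape (n_frames, height, width)
    clutter_rank: number of leading singular components to discard;
        these capture slow tissue motion, while later components
        carry the blood signal of interest.
    """
    n_frames, h, w = frames.shape
    casorati = frames.reshape(n_frames, h * w)   # time x space matrix
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    s = s.copy()
    s[:clutter_rank] = 0.0                       # suppress tissue clutter
    blood = (u * s) @ vt                         # filtered time x space
    return np.mean(np.abs(blood) ** 2, axis=0).reshape(h, w)

# Synthetic demo: 200 complex frames over a 64 x 64 field of view.
rng = np.random.default_rng(0)
frames = rng.standard_normal((200, 64, 64)) + 1j * rng.standard_normal((200, 64, 64))
image = power_doppler(frames)
print(image.shape)  # (64, 64)
```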

“If we can extract functional information through a patient’s skull implant, that could allow us to provide treatment more safely and proactively,” Liu said, including for TBI patients who suffer from epilepsy, dementia, or psychiatric problems.

A new frontier for brain imaging

As a foundation for the present study, Liu has collaborated for years with Mikhail Shapiro, PhD, and Richard Andersen, PhD, of Caltech to develop specialized ultrasound sequences that can measure brain function, as well as to optimize brain-computer interface technology, which transcribes signals from the brain to operate an external device.
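As an illustration of what a brain-computer interface decoder does, the sketch below fits a simple ridge-regularized linear decoder that maps per-trial imaging features to an intended command. This is a generic baseline on synthetic data, not the Caltech team’s method; the feature layout, the binary labels, and the regularization strength lam are all hypothetical.

```python
import numpy as np

def fit_linear_decoder(X: np.ndarray, y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Ridge-regularized least squares: weights mapping trial features
    X (n_trials x n_features) to command labels y (n_trials,)."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Hypothetical data: 80 trials, each summarized by 50 voxel features,
# with a binary intended command (e.g., 0 = left, 1 = right).
rng = np.random.default_rng(2)
X = rng.standard_normal((80, 50))
y = (X @ rng.standard_normal(50) > 0).astype(float)

w = fit_linear_decoder(X, y)
predicted = (X @ w > 0.5).astype(float)   # decode the intended command
print(f"training accuracy: {(predicted == y).mean():.2f}")
```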

With these pieces in place, Liu and his colleagues tested several transparent skull implants on rats, finding that a thin window made from polymethyl methacrylate (PMMA)—which resembles plexiglass—yielded the clearest imaging results. They then collaborated with a neurotechnology company, Longeviti Neuro Solutions, to build a custom implant for Hager.

Before surgery, the researchers collected fUSI data while Hager did two activities: solving a “connect-the-dots” puzzle on a computer monitor and playing melodies on his guitar. After the implant was installed, they collected data on the same tasks, then compared the results to determine whether fUSI could provide accurate and useful imaging data.
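One simple way to quantify how well post-implant data preserves the pre-implant signal is to compute a task-versus-rest activation map for each session and correlate the two maps. The sketch below shows that comparison on synthetic data; it illustrates the general idea only and is not the authors’ analysis pipeline. The session sizes, the contrast statistic, and the attenuation factor are assumptions.

```python
import numpy as np

def activation_map(signal: np.ndarray, task_on: np.ndarray) -> np.ndarray:
    """Crude per-voxel task contrast: mean signal during task minus mean
    signal at rest, scaled by each voxel's standard deviation.

    signal: (n_timepoints, n_voxels) functional time series
    task_on: boolean array of length n_timepoints marking task periods
    """
    contrast = signal[task_on].mean(axis=0) - signal[~task_on].mean(axis=0)
    return contrast / (signal.std(axis=0) + 1e-12)

# Synthetic stand-ins for the two sessions: the "post-implant" series is
# an attenuated, noisier copy of the "pre-surgery" series, mimicking the
# fidelity loss a window implant might introduce.
rng = np.random.default_rng(1)
task_on = rng.random(300) < 0.5
pre = rng.standard_normal((300, 4096))
post = 0.7 * pre + 0.3 * rng.standard_normal((300, 4096))

r = np.corrcoef(activation_map(pre, task_on), activation_map(post, task_on))[0, 1]
print(f"pre/post activation map similarity: r = {r:.2f}")
```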

“The fidelity of course decreased, but importantly, our research showed that it’s still high enough to be useful,” Liu said. “And unlike other brain-computer interface platforms, which require electrodes to be implanted in the brain, this has far fewer barriers to adoption.”

fUSI may offer finer resolution than fMRI, and unlike intracranial EEG, it does not require electrodes to be implanted inside the brain. It is also less expensive than those methods and could provide some clinical advantages for patients over non-transparent skull implants, said Russin, who is also an associate professor of neurological surgery at the Keck School of Medicine and director of cerebrovascular surgery at Keck Hospital of USC.

“One of the big problems when we do these surgeries is that a blood clot can form underneath the implant, but having a clear window gives us an easy way to monitor that,” he said.

Refining functional ultrasound technology

In addition to better monitoring of patients, the new technique could offer population-level insights about TBI and other neurological conditions. It could also allow scientists to collect data on the healthy brain and learn more about how it controls cognitive, sensory, motor and autonomic functions.

“What our findings show is that we can extract useful functional information with this method,” Liu said. “The next step is: What specific functional information do we want, and what can we use it for?”

Until the new technologies undergo clinical trials, fUSI and the clear implant are experimental. In the meantime, the research team is working to improve their fUSI protocols to further enhance image resolution. Future research should also build on this early proof-of-concept study by testing more participants to better establish the link between fUSI data and specific brain functions, the researchers said.

“Jared is an amazing guy,” said Liu, who is continuing to collaborate with the study participant on refining new technologies, including laser spectroscopy, which measures blood flow in the brain. “His contributions have really helped us explore new frontiers that we hope can ultimately help many other patients.”

About this research

In addition to Liu, Russin, Shapiro and Andersen, the study’s other authors are Kay Jann, PhD, from the Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC; Claire Rabut, Sumner Norman and Whitney Griggs from the California Institute of Technology; and Vasileios Christopoulos from the University of California, Riverside.

This work was supported by the National Institutes of Health [R01NS123663]; the T&C Chen Brain-Machine Interface Center; the Boswell Foundation; the National Eye Institute [F30 EY032799]; the Josephine de Karman Fellowship; the UCLA-Caltech Medical Scientist Training Program [NIGMS T32 GM008042]; the Della Martin Postdoctoral Fellowship; the Human Frontier Science Program Cross-Disciplinary Fellowship [LT000217/2020-C]; the USC Neurorestoration Center; and the Howard Hughes Medical Institute.

Disclosure: Claire Rabut, Whitney S. Griggs, Sumner L. Norman, Richard A. Andersen, Charles Liu and Mikhail G. Shapiro have filed a provisional patent application based on this research.



Launch of the Future Research Assessment Programme


19 May 2021

The four UK higher education funding bodies are launching the Future Research Assessment Programme.

This programme has been initiated at the request of UK and devolved government ministers and funding bodies. It is a significant piece of work aimed at exploring possible approaches to the assessment of UK higher education research performance.

Through dialogue with the higher education sector, the programme seeks to understand what a healthy, thriving research system looks like and how an assessment model can best form its foundation.

Assessment approach

The programme will investigate possible different approaches to the evaluation of UK higher education research performance. It will look to identify those that can encourage and strengthen the emphasis on delivering excellent research and impact, and support a positive research culture, while simplifying and reducing the administrative burden on the HE sector.

In line with responsible research assessment practices, the programme will evaluate the current Research Excellence Framework (REF) 2021 exercise. This will include examining the impact of COVID-19 and the mitigations put in place by the REF team.

Alongside this work, the funding bodies will engage in extensive consultation with the HE sector to understand how future assessment exercises might best support a thriving, inclusive and impactful research system in the UK.

A series of engagement events and a formal written consultation will aim to foster bold and creative discussions about the UK’s future research assessment system. This programme of work is expected to conclude by late 2022.

International advisory group

An international advisory group, chaired by Sir Peter Gluckman (President-elect, International Science Council), has been set up to advise the funding bodies in their evaluation and consultation activities.

The group will also assist in the development and evaluation of options for the future approach to UK-wide research assessment.

Its members will provide a sounding board for emerging ideas, challenging the assumptions and scope of the programme, where appropriate.

Members have been appointed from across the globe and represent a range of expertise and national contexts.

Full membership can be found on the Research England website, along with the Terms of Reference for the Group.

Sir Peter Gluckman said:

I look forward to working with my international colleagues to advise the funding bodies as they explore possible assessment models for the future. This is an exciting opportunity to consider how national research assessment can form the foundation for a healthy, inclusive and dynamic research system. It is important that we think about what we value as carefully as how we evaluate it and listen closely to priorities and concerns from across the UK’s research community.

Programme board

A programme board made up of senior representatives from the four funding bodies will have oversight of this bold and ambitious programme. The four funding bodies are:

  • Research England
  • Scottish Funding Council
  • Higher Education Funding Council for Wales
  • Department for the Economy, Northern Ireland.

The Board’s Terms of Reference and programme of work can be downloaded from the Research England website.

Questions or requests for further information should be directed to [email protected] .


Celebrating excellence in Canadian research: Announcing recipients of the Vanier Canada Graduate Scholarships and the Banting Postdoctoral Fellowships

From: Canadian Institutes of Health Research

News release

Doctoral students and post-doctoral researchers are the future leaders of innovation and research excellence in Canada. Tackling some of the world’s biggest challenges, their discoveries will strengthen the economy of the future, boost productivity, and enhance the health and quality of life of Canadians.

The Government of Canada invests in 236 of the nation’s top-tier doctoral students and post-doctoral researchers

May 29, 2024 – Ottawa, Ontario – Canadian Institutes of Health Research

Today, the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, and the Honourable Mark Holland, Minister of Health, announced the recipients of 166 Vanier Canada Graduate Scholarships and 70 Banting Postdoctoral Fellowships. These talented doctoral students and post-doctoral researchers are part of Canada’s next generation of research leaders, spanning the health sciences, natural sciences and engineering, and social sciences and humanities.

Examples of the diverse research being supported include:

  • Zhenwei Ma, Banting Fellow, University of British Columbia, looks at new ways of treating and managing esophageal cancer.
  • Kristy Ferraro, Banting Fellow, Memorial University of Newfoundland, researches how conserving large mammals such as caribou, deer and elk acts as a nature-based solution to climate change.
  • Daniel Romm, Vanier Scholar, McGill University, assesses how sustainable transportation systems such as bike sharing can connect people in small population centres to major cities.
  • Camille Bédard, Vanier Scholar, Université Laval, studies how fungal pathogens are mutating to become resistant to the drugs we use to treat fungal infections.

Funded through the three federal granting councils – the Natural Sciences and Engineering Research Council (NSERC), the Canadian Institutes of Health Research (CIHR), and the Social Sciences and Humanities Research Council (SSHRC) – the Vanier Canada Graduate Scholarships and Banting Postdoctoral Fellowships help Canadian institutions attract and retain highly qualified trainees, establishing Canada as a global centre for research training and career support.

“Congratulations to the 2024 Vanier Canada Graduate Scholarships and Banting Postdoctoral Fellowships recipients! Their dedication to advancing knowledge for the benefit of all is truly impressive, and their hard work will help find solutions that have the potential to make the world a better place and drive Canadian prosperity.”
– The Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry

“Canada is a world leader in research and innovation, and the individuals we are recognizing today are a testament to that. Their research holds tremendous promise for making our lives better and healthier in a variety of ways.”
– The Honourable Mark Holland, Minister of Health

“On behalf of Canada’s granting agencies, I congratulate these top-tier researchers. The Vanier and Banting awards recognize outstanding scholars whose research has the potential to drive meaningful change. I wish you luck as you pursue your careers and thank you for your commitment to advancing research.”
– Dr. Tammy Clifford, Acting President, Canadian Institutes of Health Research

Quick facts

In Budget 2024, the Government of Canada re-committed to investing in homegrown research talent by proposing to provide $825 million over five years, starting in 2024-25, with $199.8 million per year ongoing. This enhanced suite of scholarships and fellowships, including Vanier and Banting, will be streamlined into one talent program and will include an increase in the number of scholarships and fellowships provided, building up to approximately 1,720 more graduate students or fellows benefiting each year. 

The Vanier Canada Graduate Scholarship program helps Canadian institutions attract highly qualified doctoral students who demonstrate academic excellence, research potential, and leadership (both potential and demonstrated ability).

The Banting Postdoctoral Fellowships program provides funding to the top postdoctoral applicants, both nationally and internationally, who will positively contribute to Canada’s economic, social, and research-based growth.

Since 2016, the Government has invested more than $16 billion in science and research across the country.

Related products

  • 2024 Banting post-doctoral researchers
  • 2024 Vanier scholars

Media Relations, Innovation, Science and Economic Development Canada: [email protected]

Media Relations, Canadian Institutes of Health Research: [email protected]


