Using single impact metrics to assess research in business and economics: why institutions should use multi-criteria systems for assessing research.

Author: Olavarrieta, Sergio

Introduction

There is continuous and increasing interest in how to assess research at academic institutions (Adler and Harzing, 2009). University and school administrators need to manage their resources to increase research output and school reputation, raise rankings, achieve or keep international accreditations and maintain or increase external funding (Peters et al., 2018). Research assessment is thus linked to relevant strategic goals of these institutions. At the same time, research assessment plays an important role at the micro or individual faculty level. Research assessment practices may be linked to research promotion policies, economic incentives, academic careers and school- and university-level promotions. Good assessment practices may improve individual and institutional research output, due to the direct and indirect effects of assessment methods on individual performance. Moreover, shrinking budgets and increased societal pressure on universities to fulfil the needs of multiple stakeholders sustainably (Jack, 2021) suggest that sound research assessment practices may become even more important if universities and schools want to fulfil their strategic goals and remain sustainable over time.

Universities, schools and national agencies establish assessment procedures to evaluate existing/previous research and to assign research funds and benefits (e.g. course reductions, travel funds), honours and awards, academic promotion and direct economic incentives. Different assessment methods have been used, including journal lists (institutional or external lists like the ABDC in Australia, the ABS in the UK or the Financial Times list), individual citation patterns, peer-reviewed assessments and collegiate review committees. Strategic control and assessment systems are crucial for guiding an institution's behaviour and performance (Kaplan and Norton, 1996).

With the increasing availability of bibliographic information on journals and citations (e.g. Salcedo, 2021a), and the rising burden and complexity of faculty and school assessment tasks, quality peer evaluation has been partly replaced by the use of journal impact metrics (Garfield, 1972, 2006; Adler and Harzing, 2009; Rizkallah and Sin, 2010; Haustein and Lariviere, 2015; Brown and Gutman, 2019). Two factors are probably driving this trend: their availability and their status as objective measures. These effects might be even more relevant for institutions where management needs to use discretion and judgement rather than just financial measures to assess performance, or for institutions with a less "formal" or well-understood strategy (Gibbons and Kaplan, 2015). In those cases, Gibbons and Kaplan (2015) argue that formal measures included in assessment systems may give "clarity to the strategy" (p. 449) and to school and faculty actions. The design of a school's research assessment system is therefore a key element in implementing a higher education institution's strategy. The choice of research impact indicators will affect both individual and institutional research behaviour (Fischer et al., 2019; Jack, 2021).

Research assessment systems that are based on single impact indicators may be risky for institutions because they can channel faculty and school efforts towards indicators aligned with particular disciplines, stakeholders or goals, neglecting the full spectrum of outcomes expected of a sustainable business school or university. These effects may be particularly problematic when university or business school revenues are contingent upon serving those other needs (Peters et al., 2018; Morales and Calderon, 1999). We argue that these challenges are even greater when schools and institutions that embrace different disciplines are included in the same assessment process.

Despite the problems derived from overestimating the value of these impact metrics, institutions continue to use them, with potentially complex implications for the assessment process itself and for achieving schools' strategic goals and sustainability (see, for example, Jack, 2021, on the challenges of using overly narrow metrics in business school rankings).

Only a few authors have addressed this issue empirically, warning about the problems of using single journal-level indicators to assess research contributions. In a recent study of business and management journals, Mingers and Yang (2017) provide evidence that multiple impact indicators should be used in the business disciplines in order to overcome the biases that particular indicators may introduce when ranking journals and using those rankings to assess business research. We aim to provide further empirical evidence regarding the risks of using single indicators in assessing research outputs, especially when assessing journals or researchers from different disciplines.

This paper explores the effect of using particular single impact metrics when assessing research contributions in related disciplines, in this case Business and Economics. Even though both disciplines are regularly taught in business schools and programs, their relationship is not as strong as one might think. Azar (2009), for example, reports that only 6.9% of citations in business journal articles come from economics, with a declining trend over time. Other disciplines, such as psychology, sociology, decision sciences and communications, have a strong influence on business research. Since specific research impact indicators have different objectives and assumptions and are sensitive to specific citation patterns (the raw input for those indicators), the use of particular impact indicators may significantly affect the relative assessment of scientific work when different scientific disciplines are evaluated together.

In this paper, we first briefly review the literature on research and journal assessment and impact metrics and their connection with university rankings, strategic performance and sustainability. We then define the main Web of Science (WoS)-based impact metrics and analyse them for Business and Economics journals. We examine the effects of using single impact indicators, namely standard impact factor (IF) measures and the newer eigenfactor and article influence scores (AIS), for ranking Business and Economics journals and assessing the work of business school scholars. As in previous research, we compute the correlations between these different indicators, finding results generally consistent with the existing literature. We then generate relative rankings for all journals in the Business and Economics WoS categories using these different indicators. Significant changes in rankings are identified depending on the type of measure used (e.g. standard WoS impact factors vs eigenfactor or AIS scores). By calculating the implicit academic value of different disciplines using AIS journal scores, we provide further insight into the reasons for these differences, offering additional support for the use of multiple families of indicators when designing a sound and fair research and promotion assessment system that helps institutions achieve their strategic goals. Implications for the theory and practice of research assessment, future research avenues and conclusions are presented in the final section of the paper.
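The ranking-comparison step described above can be sketched in code. The journal names and metric values below are hypothetical illustrations, not the paper's data; only the rank-correlation logic reflects the general method of comparing journal rankings produced by different impact indicators.

```python
def ranks(values):
    """Return 1-based ranks (highest value = rank 1), ties broken by order."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation (assuming no ties) via the d^2 formula."""
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical journals with made-up metric values:
journals = ["J1", "J2", "J3", "J4", "J5"]
impact_factor = [5.2, 4.8, 3.1, 2.9, 1.5]   # standard two-year IF
ais = [2.1, 3.4, 0.9, 1.8, 0.7]             # article influence score

# A rho well below 1 signals that the two indicators reorder the journals.
rho = spearman(impact_factor, ais)
print(f"Spearman rho between IF and AIS rankings: {rho:.2f}")
```

With these illustrative values the two indicators disagree on the top journal (J1 leads on IF, J2 on AIS), which is exactly the kind of rank reshuffling the paper documents for real Business and Economics journals.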

Impact research assessment in higher education and business schools

The evaluation of research output is central to academic life, since it drives hiring, funding, tenure and promotion decisions. The implications are highly relevant for individual researchers, whose academic careers and economic incentives may depend on these decisions. In the following sections, we examine relevant literature on research assessment systems and metrics.

Research assessment systems and indicators

As stated earlier, research assessment is a relevant but very complex process that affects the behaviour of individual faculty and the whole institution. For this reason, the academic tradition established peer review committees of senior faculty members as a reasonable way to handle this strategic process. These committees normally review individual manuscripts and outputs for quality, relevance and overall value. To provide a more standard rule for comparing different research outputs, some schools developed internal lists of desired journals, ranking them in terms of subjective quality. Other schools used journal quality lists developed by external parties and associations (e.g. the ABDC list in Australia, the ABS list in the UK, the University of Texas at Dallas list in the USA, Capes/Qualis in Brazil; see, for example, Harzing.com).

Additionally, research publications may be evaluated through quantitative indicators such as direct citations to the paper or some impact metric of the journal (based on the total citations to the journal; Garfield, 1972, 2006; Franceschet, 2010). The availability of large bibliometric databases (WoS, Scopus or even Google Scholar) has made citation-based metrics easier to find and use, and a more common assessment approach (Haustein and Lariviere, 2015; Harzing, 2019). Journals and editors, in turn, engage in reputation building by expanding their indexing and becoming better known and more cited by relevant research communities (see, for example, Salcedo, 2021b).

Despite some concerns regarding the validity of impact metrics (see, for example, Carey, 2016; Paulus et al., 2018), the burden of assessing research output for a growing faculty body has made the use of journal impact metrics to assess individual faculty research a common practice in many institutions. Here we present the main impact metrics used in academia, separated into two groups: the standard or more traditional IF scores and the newer eigenfactor-related scores.

Standard/traditional impact factor scores

Total cites (TotCite). The total number of citations received by a journal in a given year, across all of its published articles, as recorded in the WoS database.
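As a minimal sketch of how the best-known metric in this standard family, the Garfield-style two-year IF, is computed. The journal and citation counts below are made up for illustration:

```python
def two_year_impact_factor(cites_in_year_to_prev_two, citable_items_prev_two):
    """Garfield-style two-year impact factor: citations received in year Y
    to items the journal published in years Y-1 and Y-2, divided by the
    number of citable items it published in those two years."""
    return cites_in_year_to_prev_two / citable_items_prev_two

# Hypothetical journal: 450 citations in 2023 to its 2021-2022 articles,
# which numbered 150 citable items.
print(two_year_impact_factor(450, 150))  # prints 3.0
```

Because both the numerator and denominator are discipline-specific (citation density and publication volume vary widely between Business and Economics), the same arithmetic can yield systematically different scores across fields, which is the core concern of this paper.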
