Journal of Informetrics

Volume 6, Issue 4, October 2012, Pages 674-688

A further step forward in measuring journals’ scientific prestige: The SJR2 indicator

https://doi.org/10.1016/j.joi.2012.07.001

Abstract

A new size-independent indicator of scientific journal prestige, the SJR2 indicator, is proposed. This indicator takes into account not only the prestige of the citing journal but also its closeness to the cited journal, measured as the cosine of the angle between the two journals' cocitation profiles. To eliminate the size effect, the accumulated prestige is divided by the fraction of the journal's citable documents, which removes the decreasing tendency typical of this type of indicator and gives meaning to the scores. Its method of computation is described, and the results of its implementation on the Scopus 2008 dataset are compared with those of an ad hoc Journal Impact Factor, JIF(3y), and SNIP, both overall and within specific scientific areas. The distributions of all three metrics, SJR2, SNIP, and JIF(3y), were found to fit a logarithmic law well. Although the three metrics were strongly correlated, there were major changes in rank. In addition, the SJR2 was more evenly distributed than the JIF across Subject Areas, almost as evenly as the SNIP, and more evenly than both at the lower level of Specific Subject Areas. The incorporation of the cosine increased the flows of prestige between thematically close journals.

Highlights

  • A new size-independent indicator of scientific publication prestige, SJR2, is proposed.

  • SJR2 takes into account the prestige of the citing scientific publication and its subject closeness.

  • The method of computation of SJR2 is described.

  • Results of SJR2 are compared with those of a Journal Impact Factor, JIF(3y), and SNIP.

Introduction

It is accepted by the scientific community that neither all scientific documents nor all journals have the same value. Instead of each researcher assigning a subjective value to each journal, there has always been strong interest in objective valuation procedures. In this regard, it is likewise accepted that, despite differing motivations (Brooks, 1985), citations constitute recognition of foregoing work (Moed, 2005).

One of the first generation of journal metrics based on citation counts, developed to evaluate the impact of scholarly journals, is the Impact Factor, which has been extensively used for more than 40 years (Garfield, 2006). Nevertheless, different research fields have different yearly average citation rates (Lundberg, 2007), and indicators of this type are almost always lower in the areas of Engineering, the Social Sciences, and the Humanities (Guerrero-Bote et al., 2007, Lancho-Barrantes et al., 2010a, Lancho-Barrantes et al., 2010b).

Since neither all documents nor all journals have the same value, a second generation of indicators emerged with the idea of assigning them different weights. Rather than popularity, the concept these indicators were intended to measure was prestige in the sense of Bonacich (1987): the most prestigious journal is the one most cited by journals that are themselves of high prestige. The first proposal in this sense in the field of Information Science was put forward by Pinski and Narin (1976), with a metric they called “Journal Influence”. With the arrival of the PageRank algorithm (Page, Brin, Motwani, & Winograd, 1998) developed by the creators of Google, other metrics have arisen, such as the Invariant Method for the Measurement of Intellectual Influence (Palacios-Huerta & Volij, 2004), the Journal Status (Bollen, Rodríguez, & van de Sompel, 2006), the Eigenfactor (Bergstrom, 2007), and the Scimago Journal Rank (González-Pereira, Guerrero-Bote, & Moya-Anegón, 2010).

Despite the progress represented by this second generation of indicators, they have some features that make them ill-suited for journal metrics:

  • The scores obtained by scientific journals typically represent their prestige, or their average prestige per document, but this score only makes sense in comparison with the scores of other journals.

  • The scores are normalized by making them sum to a fixed quantity (usually, unity). The result is that as the number of journals increases the scores tend to decrease, which can lead to sets of indicators that all decrease with time. This characteristic complicates the study of the temporal evolution of scientific journals.

  • Different scientific areas have different citation habits, and these are not taken into account in these indices, so the values obtained in different areas are not comparable (Lancho-Barrantes et al., 2010b). Added to this is that there is no consensus on the classification of scientific journals into different areas (Janssens, Zhang, Moor, & Glänzel, 2009).
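The second point can be made concrete with a toy calculation (ours, not the paper's): when scores are normalized to sum to a fixed quantity such as unity, the average per-journal score necessarily shrinks as the number of journals grows, even if the underlying citation profiles are unchanged.

```python
# Toy illustration of sum-to-one normalization: the raw prestige
# profile is the same, but a larger journal set dilutes every score.
def normalized_scores(raw):
    total = sum(raw)
    return [r / total for r in raw]

small_set = normalized_scores([3.0, 2.0, 1.0])        # 3 journals
large_set = normalized_scores([3.0, 2.0, 1.0] * 100)  # 300 journals, same profile

avg_small = sum(small_set) / len(small_set)  # 1/3
avg_large = sum(large_set) / len(large_set)  # 1/300: scores decrease as N grows
print(avg_small, avg_large)
```

This is why a growing database such as Scopus produces indicator values that drift downward over time under such normalization, independently of any real change in the journals.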

In the sciences, it has always been accepted that peer review in a field should be by experts in that same field (Kostoff, 1997). In this same sense, it seems logical to give more weight to citations from journals of the same or similar fields, since, although all researchers may use some given scientific study, they do not all have the same capacity to evaluate it. Even the weighting itself may not be comparable between different fields. Given this context, in a process of continuing improvement to find journal metrics that are more precise and more useful, the SJR2 indicator was designed to weight the citations according to the prestige of the citing journal, also taking into account the thematic closeness of the citing and the cited journals. The procedure does not depend on any arbitrary classification of scientific journals, but uses an objective informetric method based on cocitation. It also avoids the dependency on the size of the set of journals, and endows the score with a meaning that other indicators of prestige do not have.

In the following sections, we shall describe the methodological aspects of the development of the SJR2 indicator, and the results obtained with its implementation on Elsevier's Scopus database, for which the data were obtained from the Scimago Journal and Country Rank website, an open access scientometric directory with almost 19,000 scientific journals and other types of publication (2009).


Data

We used Scopus as the data source for the development of the SJR2 indicator because it best represents the overall structure of world science at a global scale. Scopus is the world's largest scientific database if one considers the period 2000–2011. It covers most of the journals included in the Thomson Reuters Web of Science (WoS) and more (Leydesdorff et al., 2010, Moya-Anegón et al., 2007). Also, despite its only relatively recent launch in 2004, there are already various studies of its

Method

The SJR2 indicator, as also the SJR indicator (González-Pereira et al., 2010), is computed over a journal citation network in which the nodes represent the active source journals, and the directed links between the nodes, the citation relationships among those journals. The main differences with respect to SJR are:

  • The SJR2 prestige of the citing journal is distributed among the cited journals proportionally both to the citations from the former to the latter (in the three-year citation window)
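The thematic-closeness weight described in the abstract is the cosine of the angle between the two journals' cocitation profiles. A minimal sketch of that computation (our illustration with invented toy profiles, not the paper's code) might look like:

```python
import math

def cosine(profile_a, profile_b):
    """Cosine similarity between two cocitation profiles, where
    profile_x[j] is the number of times journal x is cocited with
    journal j. Zero profiles get similarity 0 by convention."""
    dot = sum(a * b for a, b in zip(profile_a, profile_b))
    norm_a = math.sqrt(sum(a * a for a in profile_a))
    norm_b = math.sqrt(sum(b * b for b in profile_b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# Thematically close journals share much of their cocitation profile;
# distant ones do not (toy numbers):
close = cosine([5, 3, 0, 1], [4, 3, 1, 0])    # ~0.96
distant = cosine([5, 3, 0, 1], [0, 0, 4, 6])  # ~0.14
```

Under this weighting, prestige flowing from a citing journal to a cited journal is scaled up when their cocitation profiles are similar, which is what increases the flows between thematically close journals.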

Statistical characterization

As in González-Pereira et al. (2010), in this section we shall present a statistical characterization of the SJR2 indicator in order to contrast its capacity to depict what could be termed “average prestige” with journals’ citedness per document and the SNIP indicator. The study was performed for the year 2008 since its data can be considered stable. The data were downloaded from the Scimago Journal and Country Rank database (http://www.scimagojr.com) on 20 October 2011. It needs to be noted

Conclusions

Beyond the metrics of the prestige of scientific journals which weight each citation in terms of the prestige of the citing journal, the present SJR2 indicator solves the problem of the tendency for prestige scores to decrease over time by the use of stochastic matrices. It endows the resulting scores with meaning, and uses the cosine between the cocitation profiles of the citing and cited journals to weight the thematic relationship between the two journals.

The problem of the tendency for the

Acknowledgments

This work was financed by the Junta de Extremadura – Consejería de Educación, Ciencia & Tecnología and the Fondo Social Europeo as part of Research Group grant GR10019, and by the Plan Nacional de Investigación Científica, Desarrollo e Innovación Tecnológica 2008–2011 and the Fondo Europeo de Desarrollo Regional (FEDER) as part of research projects TIN2008-06514-C02-01 and TIN2008-06514-C02-02.

References (28)

  • E. Garfield. The history and meaning of the journal impact factor. Journal of the American Medical Association (2006)

  • V.P. Guerrero-Bote et al. The iceberg hypothesis: Import–export of knowledge between scientific subject categories. Scientometrics (2007)

  • P. Jacso. Péter's digital reference shelf (2009)

  • R.N. Kostoff. The principles and practices of peer review. Science and Engineering Ethics (1997)