The Leiden Ranking is based on publications in the Web of Science database produced by Clarivate. The most up-to-date statistics made available in the Leiden Ranking are based on publications in the period 2019–2022, but statistics are also provided for earlier periods. Web of Science includes a number of citation indices. The Leiden Ranking uses the Science Citation Index Expanded, the Social Sciences Citation Index, and the Arts & Humanities Citation Index. Only publications of the Web of Science document types article and review are taken into account. The Leiden Ranking does not consider book publications, publications in conference proceedings, and publications in journals not indexed in the above-mentioned citation indices of Web of Science.
The Leiden Ranking takes into account only a subset of the publications in the Science Citation Index Expanded, the Social Sciences Citation Index, and the Arts & Humanities Citation Index. We refer to the publications in this subset as core publications. Core publications are publications in international scientific journals in fields that are suitable for citation analysis. In order to be classified as a core publication, a publication must satisfy the following criteria:
The last criterion is a very important one. In the Leiden Ranking, a journal is considered a core journal if it meets the following conditions:
In the calculation of the Leiden Ranking indicators, only core publications are taken into account. Excluding non-core publications ensures that the Leiden Ranking is based on a relatively homogeneous set of publications, namely publications in international scientific journals in fields that are suitable for citation analysis. The use of such a relatively homogeneous set of publications enhances the international comparability of universities. It should be emphasized that non-core publications are excluded not because they are considered less important than core publications. Non-core publications may have an important scientific value. About one-sixth of the publications in Web of Science are excluded because they have been classified as non-core publications.
Our concept of core publications should not be confused with the Web of Science Core Collection. The Web of Science Core Collection represents a subset of the citation indices available in Web of Science. As explained above, the core publications on which the Leiden Ranking is based represent a subset of the publications in the Science Citation Index Expanded, the Social Sciences Citation Index, and the Arts & Humanities Citation Index.
A list of core and non-core journals is available in this Excel file.
Indicators included in the Leiden Ranking have two variants: a size-dependent and a size-independent variant. In general, size-dependent indicators are obtained by counting the absolute number of publications of a university that have a certain property, while size-independent indicators are obtained by calculating the proportion of the publications of a university with a certain property. For instance, the number of highly cited publications of a university and the number of publications of a university co-authored with other organizations are size-dependent indicators. The proportion of the publications of a university that are highly cited and the proportion of a university’s publications co-authored with other organizations are size-independent indicators. In the case of size-dependent indicators, universities with a larger publication output tend to perform better than universities with a smaller publication output. Size-independent indicators have been corrected for the size of the publication output of a university. Hence, when size-independent indicators are used, both larger and smaller universities may perform well.
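The distinction between the two variants can be illustrated with a small sketch. This is not the official Leiden Ranking code; the publication records below are hypothetical.

```python
# Hypothetical publication records for one university; "highly_cited"
# marks whether a publication belongs to the top of its field.
publications = [
    {"highly_cited": True},
    {"highly_cited": False},
    {"highly_cited": True},
    {"highly_cited": False},
    {"highly_cited": False},
]

# Size-dependent variant: the absolute number of highly cited publications.
p_top = sum(1 for pub in publications if pub["highly_cited"])

# Size-independent variant: the proportion of highly cited publications.
pp_top = p_top / len(publications)

print(p_top)   # 2
print(pp_top)  # 0.4
```

A large university will tend to have a higher `p_top` simply because it publishes more, while `pp_top` allows small and large universities to be compared on an equal footing.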
The Leiden Ranking provides the following indicators of scientific impact:
Citations are counted until the end of 2023 in the calculation of the above indicators. Author self-citations are excluded. All indicators except for TCS and MCS are normalized for differences in citation patterns between scientific fields. For the purpose of this field normalization, about 4000 fields are distinguished. These fields are defined at the level of individual publications. Using a computer algorithm, each publication in Web of Science is assigned to a field based on its citation relations with other publications.
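The basic idea of field normalization can be sketched as follows: a publication's citation count is divided by the average citation count of all publications in its field, so that a value above one means above-average impact. This is a simplified illustration, not the exact Leiden Ranking computation; the field averages and citation counts are hypothetical.

```python
# Hypothetical mean citation counts per field (in practice about 4000
# algorithmically defined fields are used).
field_means = {"field_A": 8.0, "field_B": 2.0}

# Hypothetical publications with their field assignment and citation count.
publications = [
    {"field": "field_A", "citations": 16},  # twice the field average
    {"field": "field_B", "citations": 1},   # half the field average
]

# Normalized citation score of each publication.
normalized = [p["citations"] / field_means[p["field"]] for p in publications]

# Total and mean normalized citation scores (in the spirit of TNCS and MNCS).
tncs = sum(normalized)
mncs = tncs / len(normalized)

print(tncs)  # 2.5
print(mncs)  # 1.25
```

Without this normalization, the publication in `field_A` would dominate simply because its field is cited more heavily, not because it performs better relative to its peers.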
The TCS, MCS, TNCS, and MNCS indicators are not available on the main ranking page. These indicators can be accessed by clicking on the name of a university. An overview of all bibliometric statistics available for the university will then be presented. This overview also includes the TCS, MCS, TNCS, and MNCS indicators.
The Leiden Ranking provides the following indicators of collaboration:
Some limitations of the above indicators need to be mentioned. In the case of the P(industry) and PP(industry) indicators, we have made an effort to identify industrial organizations as accurately as possible. Inevitably, however, there will be inaccuracies and omissions in the identification of industrial organizations. In the case of the P(<100 km), PP(<100 km), P(>5000 km), and PP(>5000 km) indicators, we rely on geocoding of addresses listed in Web of Science. There may be some inaccuracies in the geocoding that we have performed, and for addresses that are used infrequently no geocodes may be available. In general, we expect these inaccuracies and omissions to have only a small effect on the indicators.
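A sketch of how geocoded addresses might be turned into collaboration distances, assuming the great-circle (haversine) distance between address coordinates is used; the coordinates below are illustrative, not taken from the Leiden Ranking data.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Leiden (NL) to Delft (NL): well under 100 km, so a publication with
# co-author addresses in these two cities would count as short-distance.
d_short = haversine_km(52.16, 4.49, 52.01, 4.36)

# Leiden (NL) to Sydney (AU): well over 5000 km, so long-distance.
d_long = haversine_km(52.16, 4.49, -33.87, 151.21)

print(d_short < 100)   # True
print(d_long > 5000)   # True
```

If a geocode is missing for an infrequently used address, a publication's distance cannot be determined, which is one source of the omissions mentioned above.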
The Leiden Ranking provides the following indicators of open access publishing:
In the calculation of the P(OA) and PP(OA) indicators, a publication is considered open access if it is gold, hybrid, bronze, or green open access. The open access status of a publication is determined based on OpenAlex data.
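The classification rule can be written down directly: a publication counts as open access if its status is any of the four types named above. The status labels mirror the text; the records themselves are hypothetical.

```python
# A publication is open access if it is gold, hybrid, bronze, or green OA.
OA_TYPES = {"gold", "hybrid", "bronze", "green"}

# Hypothetical publications with an OA status, e.g. as derived from OpenAlex.
publications = [
    {"oa_status": "gold"},
    {"oa_status": "closed"},
    {"oa_status": "green"},
    {"oa_status": "closed"},
]

# P(OA): number of open access publications.
p_oa = sum(1 for pub in publications if pub["oa_status"] in OA_TYPES)

# PP(OA): proportion of open access publications.
pp_oa = p_oa / len(publications)

print(p_oa)   # 2
print(pp_oa)  # 0.5
```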
The Leiden Ranking provides the following indicators of gender diversity:
For each authorship of a university, the gender is determined using the following four-step procedure:
Using the above procedure, the gender can be determined for about 70% of all authorships of universities included in the Leiden Ranking. For the remaining authorships, the gender is unknown.
The scientific impact indicators in the Leiden Ranking can be calculated using either a full counting or a fractional counting method. The full counting method gives a full weight of one to each publication of a university. The fractional counting method gives less weight to collaborative publications than to non-collaborative ones. For instance, if a publication has been co-authored by five researchers and two of these researchers are affiliated with a particular university, the publication has a weight of 2 / 5 = 0.4 in the calculation of the scientific impact indicators for this university. The fractional counting method leads to a more proper field normalization of scientific impact indicators and therefore to fairer comparisons between universities active in different fields. For this reason, fractional counting is the preferred counting method for the scientific impact indicators in the Leiden Ranking. Collaboration, open access, and gender indicators are always calculated using the full counting method.
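The example in the text translates into a one-line weight calculation, sketched here for a single publication.

```python
# Full vs fractional counting for one publication, following the example
# in the text: five co-authors, two affiliated with the university.
authors_total = 5
authors_at_university = 2

# Full counting: every publication of the university gets a weight of one.
full_weight = 1.0

# Fractional counting: the weight equals the university's share of authors.
fractional_weight = authors_at_university / authors_total

print(full_weight)        # 1.0
print(fractional_weight)  # 0.4
```

Under fractional counting, a publication co-authored by many organizations contributes only partially to each of them, which is what makes the field normalization behave more fairly across collaboration-intensive fields.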
To facilitate trend analyses, the Leiden Ranking provides statistics not only based on publications from the period 2019–2022, but also based on publications from earlier periods: 2006–2009, 2007–2010, ..., 2018–2021. The statistics for the different periods are calculated in a fully consistent way. For each period, citations are counted until the end of the first year after the period has ended. For instance, in the case of the period 2006–2009 citations are counted until the end of 2010, while in the case of the period 2019–2022 citations are counted until the end of 2023.
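The citation window rule is simple enough to state as a function: for each publication period, citations are counted until the end of the first year after the period ends.

```python
def citation_window_end(period_start, period_end):
    """Last year in which citations are counted for a publication period."""
    return period_end + 1

print(citation_window_end(2006, 2009))  # 2010
print(citation_window_end(2019, 2022))  # 2023
```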
Stability intervals provide some insight into the uncertainty in bibliometric statistics. A stability interval indicates a range of values of an indicator that are likely to be observed when the underlying set of publications changes. For instance, the PP(top 10%) indicator may be equal to 15.3% for a particular university, with a stability interval ranging from 14.1% to 16.5%. This means that the PP(top 10%) indicator equals 15.3% for this university, but that changes in the set of publications of the university may relatively easily lead to PP(top 10%) values in the range from 14.1% to 16.5%. The Leiden Ranking employs 95% stability intervals constructed using a statistical technique known as bootstrapping.
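A bootstrapped stability interval for a proportion indicator such as PP(top 10%) can be sketched as follows: resample the university's publication set with replacement many times, recompute the indicator for each resample, and take the 2.5th and 97.5th percentiles. This is an illustrative sketch with hypothetical data, not the exact Leiden Ranking procedure.

```python
import random

random.seed(42)  # for reproducibility of this sketch

# Hypothetical publication set: 1 = highly cited, 0 = not highly cited,
# so the observed PP(top 10%) is 150 / 1000 = 15%.
publications = [1] * 150 + [0] * 850

def bootstrap_interval(data, n_resamples=1000):
    """95% percentile bootstrap interval for a proportion indicator."""
    props = []
    for _ in range(n_resamples):
        # Resample the publication set with replacement.
        sample = random.choices(data, k=len(data))
        props.append(sum(sample) / len(sample))
    props.sort()
    return props[int(0.025 * n_resamples)], props[int(0.975 * n_resamples)]

low, high = bootstrap_interval(publications)
observed = sum(publications) / len(publications)
print(observed)            # 0.15
print(low < observed < high)
```

The width of the interval reflects how sensitive the indicator is to changes in the underlying publication set: a small university's indicator will typically have a wider stability interval than a large university's.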
More information on the indicators available in the Leiden Ranking can be found in a number of papers published by CWTS researchers. A detailed discussion of the Leiden Ranking is presented by Waltman et al. (2012). This paper relates to the 2011/2012 edition of the Leiden Ranking. Although the paper is not up-to-date anymore, it still provides relevant information on the Leiden Ranking. Field normalization of scientific impact indicators based on algorithmically defined fields is studied by Ruiz-Castillo and Waltman (2014). The methodology adopted in the Leiden Ranking for identifying core publications and core journals is outlined by Waltman and Van Eck (2013a, 2013b). Finally, the importance of using fractional rather than full counting in the calculation of field-normalized scientific impact indicators is explained by Waltman and Van Eck (2015).