Proposed changes to RePEc rankings up for vote

The RePEc rankings are a popular feature of RePEc. As we gather more information about the profession, we can refine the criteria used for those rankings and add new ones. With this post, we seek our users’ opinions about a few potential changes. A poll is attached to each proposed change, which we hope will help in deciding what to do.

Regarding citation counts

Citation counts are the basis for a series of criteria used in ranking authors and departments. Our citation project, CitEc, uses references from open-access material, those offered directly by publishers in their metadata, or those contributed by users. But citations may also appear elsewhere and could be counted. One source is Wikipedia, which has a little under 3,000 references pointing to items in RePEc. Another is the RePEc Biblio, a curated bibliography of the most relevant research in a growing number of topics (1,100 references, but it is just starting). And a third is blog posts indexed on EconAcademics.org, the blog aggregator focusing on economic research (8,000 blog mentions so far). The question here is whether references listed on Wikipedia, the RePEc Biblio, or EconAcademics.org should count as citations for ranking purposes. All these citations are already listed on the relevant IDEAS pages, but they have so far not counted towards any statistic. As usual, self-citations would not count, to the extent that they can be detected. For this poll, we want to distinguish whether such citations should count for the rankings of economists and institutions, for journal and series impact factors, or for both.
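To make the counting rule concrete, here is a minimal sketch of how mentions from such extra sources could be folded into citation counts, with self-citations dropped whenever the citing and cited author sets overlap. The data structures and the count_extra_citations helper are hypothetical illustrations, not the actual CitEc code.

    # Hypothetical sketch: counting mentions from extra sources as citations,
    # dropping self-citations where citing and cited author sets overlap.
    def count_extra_citations(mentions, authors_of):
        """mentions: iterable of (source, citing_doc, cited_doc) triples.
        authors_of: dict mapping a document handle to a set of author ids.
        Returns citation counts per cited document, self-citations excluded."""
        counts = {}
        for _source, citing, cited in mentions:
            citing_authors = authors_of.get(citing, set())
            cited_authors = authors_of.get(cited, set())
            if citing_authors & cited_authors:
                continue  # self-citation, as far as it can be detected
            counts[cited] = counts.get(cited, 0) + 1
        return counts

    # Example with made-up handles: a Wikipedia page has no RePEc authors,
    # so it can never self-cite.
    mentions = [
        ("wikipedia", "wiki:Gini_coefficient", "RePEc:abc:wpaper:1"),
        ("econacademics", "blog:post42", "RePEc:abc:wpaper:1"),
        ("biblio", "biblio:topic7", "RePEc:xyz:journl:9"),
    ]
    authors_of = {"RePEc:abc:wpaper:1": {"pau1"}, "RePEc:xyz:journl:9": {"pau2"},
                  "blog:post42": {"pau3"}}
    print(count_extra_citations(mentions, authors_of))
    # {'RePEc:abc:wpaper:1': 2, 'RePEc:xyz:journl:9': 1}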

Regarding breadth of citations

Citation clubs bias how we try to measure the impact of someone’s research. We already have some criteria that try to measure the breadth of one’s citations: the number of citing authors, and the same weighted by the citing authors’ ranks. Another way to measure breadth is to measure how widely an author is cited across fields. For this, we can count in how many NEP fields an author is cited. The analysis is of course limited to citations from working papers that have been disseminated through NEP, which currently has 92 fields. Again, self-citations are excluded, and this new criterion would only apply to author rankings.
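As an illustration, here is a minimal sketch of how such a breadth criterion could be computed. The nep_breadth helper and its inputs are hypothetical; the actual NEP data model may differ.

    # Hypothetical sketch: count the distinct NEP fields in which an author's
    # work is cited, excluding self-citations.
    def nep_breadth(author, citations, nep_fields_of, authors_of):
        """citations: iterable of (citing_paper, cited_paper) pairs.
        nep_fields_of: dict mapping a citing working paper to its NEP fields.
        authors_of: dict mapping a paper handle to its set of author ids."""
        fields = set()
        for citing, cited in citations:
            if author not in authors_of.get(cited, set()):
                continue  # not a citation of this author's work
            if author in authors_of.get(citing, set()):
                continue  # exclude self-citations
            # papers never disseminated through NEP contribute no fields
            fields |= set(nep_fields_of.get(citing, ()))
        return len(fields)

    # Example: two citing papers, one of which is a self-citation.
    citations = [("p_cite1", "p_mine"), ("p_cite2", "p_mine")]
    nep_fields_of = {"p_cite1": ["nep-ene", "nep-env"], "p_cite2": ["nep-ene"]}
    authors_of = {"p_mine": {"me"}, "p_cite1": {"a"}, "p_cite2": {"me", "b"}}
    print(nep_breadth("me", citations, nep_fields_of, authors_of))  # 2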

Doctoral advisors and programs

The RePEc Genealogy is a crowd-sourced project that gathers information about who studied under whom, where, and when. From this information, one could determine who the best dissertation advisors are and which doctoral programs are the strongest. Some preliminary rankings are already available in this regard, based on the data gathered so far: 1869 advisors and 499 programs at the time of this writing. These numbers should increase significantly once a ranking is incorporated. Instead of the h-index currently computed, the ranking would be calculated the same way institutional rankings are determined: by adding up the scores of the relevant students for each criterion, ranking within each criterion, and then aggregating the criterion ranks. As one can expect that only a fraction of authors and institutions can be ranked this way, all others would be ranked right after the last author or institution with a student. This ranking would thus matter mostly for top-ranked authors and institutions. Note that a ranking of economists by graduating cohorts is going to be first released in early May.
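For concreteness, here is a minimal sketch of the described aggregation: sum the students’ scores per criterion, rank advisors within each criterion, then aggregate the criterion ranks. The rank_advisors helper is hypothetical and uses a simple rank sum for the final step; advisors without recorded students would simply be appended after the last ranked advisor.

    # Hypothetical sketch of the proposed advisor ranking. Inputs:
    #   students[advisor] -> list of that advisor's doctoral students
    #   scores[author][criterion] -> the author's score (higher is better)
    def rank_advisors(students, scores, criteria):
        # 1. Sum each advisor's students' scores, criterion by criterion.
        totals = {adv: {c: sum(scores.get(s, {}).get(c, 0) for s in studs)
                        for c in criteria}
                  for adv, studs in students.items()}
        # 2. Rank advisors within each criterion (1 = best).
        ranks = {adv: {} for adv in totals}
        for c in criteria:
            for i, adv in enumerate(sorted(totals, key=lambda a: -totals[a][c]), 1):
                ranks[adv][c] = i
        # 3. Aggregate criterion ranks (a simple rank sum here) into a final order.
        return sorted(totals, key=lambda adv: sum(ranks[adv].values()))

    # Example with two advisors, two students each, and two criteria.
    students = {"advA": ["s1", "s2"], "advB": ["s3", "s4"]}
    scores = {"s1": {"cites": 10, "papers": 3}, "s2": {"cites": 5, "papers": 2},
              "s3": {"cites": 8, "papers": 1}, "s4": {"cites": 2, "papers": 1}}
    print(rank_advisors(students, scores, ["cites", "papers"]))  # ['advA', 'advB']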

Exclusion of extremes

For author and institution rankings, the various criteria are aggregated after excluding each entity’s best and worst criterion. This rule was introduced when there were about 25 criteria. Now there are 33 for authors and 31 for institutions, and depending on the outcomes of the votes above, there may be even more. Thus one may want to exclude even more extreme scores to avoid giving outliers weight. How many extremes should be excluded at each end? The status quo is one.
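As a worked illustration, here is a sketch of aggregation with k extreme criteria trimmed at each end. The aggregate_ranks helper and the simple average are illustrative assumptions, not the exact RePEc formula.

    # Hypothetical sketch: trim the k best and k worst criterion ranks
    # before averaging. The status quo corresponds to k = 1.
    def aggregate_ranks(criterion_ranks, k=1):
        """criterion_ranks: one rank per criterion for a given entity."""
        trimmed = sorted(criterion_ranks)[k:len(criterion_ranks) - k]
        return sum(trimmed) / len(trimmed)

    # With 33 criteria and k = 1, 31 ranks enter the average.
    print(aggregate_ranks(list(range(1, 34)), k=1))  # 17.0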

Questions or concerns?

Feel free to post a comment below!

21 Responses to Proposed changes to RePEc rankings up for vote

  1. Citations on Wikipedia etc should be counted but separately, for example under a heading like “outreach”.

  2. I am afraid that there are too few of them. There are about 3,000 Wikipedia citations, and we have 36,000 registered authors.

  3. So, about 35,000 out of 36,000 registered have no impact outside academia at all. That strikes me as about right.

  4. To address a question received by email: there are no plans to count for ranking purposes how many people follow authors on MyIDEAS. This is too easy to manipulate by asking others to follow you. Also, we do not want something similar to the “like” addiction on Facebook to happen on RePEc.

  5. phraathit says:

    Rankings of institutions might be heavily biased by counting associates who do not work at the institution. A good example is the IZA, which ranks above the ECB this month. Only about 50 people work at the institute, yet it accounts for 634 authors. Often they have only been invited to give a lecture, and they regularly submit papers to the IZA working paper series besides many others like NBER papers, CEPR papers, etc. This artificially inflates the significance of the work done by the IZA. My suggestion would be that only publications with authors who actually work at the institute should be included. Publications whose authors all work elsewhere should be excluded. This would give a more realistic picture than the current counting.

  6. Mentions on Wikipedia can be manipulated easily. A blog might be different, as it does reflect impact outside academia, and I agree with Richard that we should value this highly. Finally, research should serve society, and responding to society’s need to discuss things is also resource-costly. What about adding newspaper articles?

  7. This issue was recognized several years ago, and affiliations started being weighted in 2008. Recently, we have allowed authors to set these weights themselves. They have every incentive to put larger weights on their main employers.

  8. The blogs that are analyzed for this are academic blogs, so the goal is not necessarily to measure impact outside of academia. As for newspapers, this would be much more difficult, because newspaper publishers typically do not allow robotic access to their articles. And I do not think a newspaper citation is valued as much in the profession as an academic citation.

  9. Anyway, since we are all financed by public money, communicating findings to the outside or having an impact outside academia should also be valued. Otherwise we create incentives for researchers not to invest time in communication and diffusion, and we end up with researchers isolated in ivory towers. There should be an incentive to do society-relevant research, particularly given that societies around the world are much in need of advice from economists nowadays.

  10. I am not denying that real-world usefulness is very important. But I do not think that newspaper citations are the right metric. That is too easily manipulated by avid PR (public relations). And journalists are not the best judges of what is good.

  11. … and citations among economists are biased by citation networks, leading to citations that disregard content relevance. I have witnessed the omission of groundbreaking articles because the author was from a competing group ;-) … So what alternative measure of ‘outside relevance’ would you propose? Newspaper article authorships? Blog authorships?

  12. Other biases in citing are by a) hierarchy and b) gender, based on my own experience. That means full professors are cited more often than young researchers, and men are cited more than women. Honestly, I think that women researchers are strongly discriminated against when it comes to citations. One reason might be that men have trouble acknowledging that women researchers are also innovative.

  13. Citation on whatever platform gives the cited individual the zeal and enthusiasm to contribute more to knowledge. Therefore all citations should be encouraged and ranked appropriately.

  14. Newspaper articles & the like can easily be manipulated: say something controversial (e.g. the earth is flat) and you get plenty of notice. Keep it academic.

  15. Any citation index can be easily manipulated: e.g., academic indices can be influenced through self-citations. Maybe we all have to overcome our idea of objectively and quantitatively measuring academic performance.

  16. Self-citations do not count on RePEc!

  17. …. leaving many other ways of manipulating and biasing citation indices, which undoubtedly exist (see my other comments).
    What was your suggestion concerning outside reach, Christian? We are still missing an answer ….

  18. As mentioned by others, this is even easier to manipulate.

  19. Quod sit demonstrandum!!
    So far, you and the ‘others’ lack any proof of such a statement.

  20. My suggestion to overcome citation network bias:
    use the genealogy information to down-weight citations received from academic ‘relatives’ such as supervisors, brothers, sisters, uncles, cousins, etc.
    In other words: attach larger weights to references given by ‘strangers’.
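    A minimal sketch of this idea, assuming a hypothetical weighted_citations helper and an arbitrary 0.5 weight for genealogy relatives:

        # Hypothetical illustration: citations from genealogy 'relatives'
        # count half as much as citations from strangers.
        def weighted_citations(citing_authors, relatives, w_relative=0.5):
            """citing_authors: list of author ids citing the ranked author.
            relatives: set of ids linked to them in the RePEc Genealogy."""
            return sum(w_relative if a in relatives else 1.0
                       for a in citing_authors)

        print(weighted_citations(["adv", "stu", "x", "y"], {"adv", "stu"}))  # 3.0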

  21. Thanks to all for voting. According to the polls, there will be two modifications to the rankings: for authors, there will be a new criterion, citation breadth across NEP fields; and for both authors and institutions, the strength of students will be added.
