How to optimize an institution’s ranking in RePEc

February 25, 2015

As we will very soon add a new institution ranking (voting on it closes soon. Edit: the new ranking is now live), this is a good opportunity to recap how institutions can optimize how they are ranked in RePEc. For the existing rankings, see here.

Get your authors listed, and with proper affiliations
The score of an institution is determined by the authors who declare being affiliated with it. If authors are not registered with the RePEc Author Service or have not declared the institution among their affiliations, we cannot count them towards the institutional ranking. Institutions can check who is registered by looking themselves up in EDIRC, RePEc’s institutional directory. Note that authors with several affiliations have to allocate percentages to each, and their scores are distributed according to those shares.
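As a rough illustration, the share-weighting described above can be sketched as follows. This is a hypothetical example, not RePEc's actual code; the institution names and scores are made up:

```python
# Hypothetical illustration of share-weighted institution scores
# (not RePEc's actual code; names and numbers are made up).

def institution_scores(authors):
    """authors: list of (score, {institution: share}), shares summing to 1."""
    totals = {}
    for score, affiliations in authors:
        for inst, share in affiliations.items():
            totals[inst] = totals.get(inst, 0.0) + score * share
    return totals

# One author splits 60/40 between two departments, another is at Dept A only.
scores = institution_scores([
    (100.0, {"Dept A": 0.6, "Dept B": 0.4}),
    (50.0, {"Dept A": 1.0}),
])
# Dept A: 100*0.6 + 50 = 110; Dept B: 100*0.4 = 40.
```

An author who forgets to set shares, or is not registered at all, simply contributes nothing to the totals, which is why registration and correct affiliations matter.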
Get your authors to maintain their profiles
Authors are ranked as well, and whatever allows them to optimize their ranking scores also works for their home institutions. You can follow this blog post on the topic.
Get your publications listed
This applies in particular to working papers. If the local working papers are indexed in RePEc, then local authors can add them to their profiles, and only then can they count towards the institution. Knowing that working papers are downloaded much more frequently than articles, this makes a difference. Also, if an article or book chapter is otherwise not available, citations to it can still be captured if a working paper version is available. Instructions on how to index publications in RePEc are here.
Link to those publications
As the various RePEc services provide listings for your working papers, you can link to them. Some even skip listing them on their website, linking only to RePEc. Two popular sites for that are EconPapers and IDEAS.
Have graduates listed as well
Through the RePEc Genealogy, departments can now list the graduates from their doctoral program. This matters because the performance of graduate programs is one of the factors in determining the rankings. Again, EDIRC has those alumni listed.

Poll about new RePEc institution ranking

January 26, 2015

The rankings provided by RePEc are becoming increasingly popular. They are far from perfect, though. One frequent criticism is that institution rankings depend on the size of the institution, as they simply add the scores of all affiliated economists. It is unfortunately not possible to offer per-capita rankings, as the registration system does not distinguish between faculty and students. What one could do, however, is to count only the top x people from every institution. The question is what this x should be.

We want to ask the RePEc community to determine this x with the poll below. The vote will be open for a month, and the option closest to the median will be selected.

A few technicalities: As economists with multiple affiliations have to set shares for each, those shares will also be used to determine who counts up to x. This means that more than x people will likely count towards the institution’s score. Also, institutions with fewer than x registered economists will not be compensated for the remainder of the allocation.
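To make these mechanics concrete, here is a sketch of how a top-x count with fractional shares could work. This is only an illustration of the proposal, not the eventual implementation:

```python
# Sketch of the proposed top-x count with fractional affiliation shares
# (an illustration of the proposal above, not the eventual implementation).

def top_x_score(members, x):
    """members: list of (score, share) for one institution; each member
    occupies `share` of the x slots and contributes score * share."""
    remaining = float(x)
    total = 0.0
    for score, share in sorted(members, reverse=True):
        if remaining <= 0.0:
            break
        used = min(share, remaining)  # partial slot at the cutoff
        total += score * used
        remaining -= used
    return total

# With x = 2, three people end up counting because of a half-time share:
# 100 * 0.5 + 90 * 1.0 + 80 * 0.5 = 180.
score = top_x_score([(100.0, 0.5), (90.0, 1.0), (80.0, 1.0), (70.0, 1.0)], x=2)
```

The half-time member at the top uses only half a slot, so the cutoff reaches into a third person, which is exactly why more than x people can count.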

How authors can improve their RePEc ranking

October 31, 2014

The main purpose of RePEc is the dissemination of economics research. Over time, various services were added to the bibliographic mission, including rankings of economists according to their publication output. Despite being still experimental (we will see why), these rankings have become quite important in the profession. This post explains how authors can best leverage the various RePEc services to improve their standing in these rankings. The rankings are computed using a number of criteria, each highlighting different aspects of research productivity. The headers below reflect these categories.

Number of works

Several criteria are just counts of works. The difference is that the works are weighted using various impact factors. To best leverage this, it is important that an author has as many works as possible listed in RePEc. This indexing is typically done by the publisher, which would be a publishing house in the case of books, book chapters or articles, or the local research institution for working papers. Over 1700 such publishers currently participate, and more can join by following these instructions. It is all free.

If that does not help, one can upload a working paper version at the Munich Personal RePEc Archive (MPRA). It will be listed as a working paper, not as an article, but at least it is on RePEc. Many publishers allow a version prior to the final published version to be disseminated there; you can check their policies at SHERPA/RoMEO.

Finally, only the works that the author has claimed on the RePEc Author Service are counted. Authors get emails when something new may be theirs, but action by the author is required to add it to the profile (very few publishers add a code that puts the work directly into the profile). So check your account on a regular basis, and make sure all the possible variations of your name are listed there, or RePEc cannot find all matches.

Number of pages

This criterion applies only to works published in journals and uses several weighting schemes. But the same principles apply as above: if some article is missing on RePEc, get the publisher to participate. Sometimes the publisher is actually participating but is not indexing that particular journal, volume, issue or article. Complain to the publisher, not to RePEc. Any page on a RePEc service associated with that publisher has a contact email address for such purposes. Complain there as well if there is an error in the listing for any of your works.

Note that some journals do not provide page numbers. It is therefore not possible to count pages in such cases.

Number of citations

Again, these counts are weighted in various ways. The basis is the set of citations discovered by the CitEc project. This is likely where the data is most experimental, as some publishers do not allow access to full texts for extracting references, or link to an interstitial page before the full text. Authors can help here, though, by supplying reference lists. There is a form that asks for all references of an article, not just those that cite the author. The hope is that this will help complete the database more rapidly, and it gives everyone the opportunity to contribute to a public good. Over 1000 authors have helped so far.

Note that matching references to documents in RePEc is a difficult exercise and pairs that fall in a grey zone are sent to the RePEc Author Service for authors to verify. So check from time to time whether there is something waiting for you there.

It can also help to consolidate different versions of the same work. This is done automatically if the title is identical and the author has all versions in the profile. If the titles differ, this form allows you to establish the links. Also, encourage those who cite you to register, as two criteria use this information.

Finally, we cannot count citations to works that are not listed in RePEc. If the article is not listed, getting a working paper version listed will help.

Abstract views and downloads

We can only count what goes through the participating RePEc services. For example, a link from an author’s homepage to a full text on the publisher’s website cannot be captured, because the traffic does not transit through RePEc. Thus, either link to your author profile on IDEAS or EconPapers, or link to the abstract pages of these services. Put a link to your profile page in your email signature. Note also that working papers generate many more downloads than articles. So, keep your working papers in your profile even after you publish an article!

Unfortunately, the temptation to manipulate these numbers is strong. Hence, a number of safeguards have been put in place: repeatedly downloading a paper will count only once, for example. Tell your class to download your papers and you will earn a zero. More details (but not too many) can be found on the site that publishes these statistics, LogEc.

Co-authorship network

Two criteria are based on how central an author is in the co-authorship network. Details can be found at CollEc. To improve one’s score here, one of course needs to get one’s co-authors listed on RePEc with a profile (and their co-authors, too).

Student advising

This looks at how well an author’s doctoral students are performing with respect to all the criteria above. Thus, if one has been advising students, one needs to make sure this is recorded in RePEc. If the students have a profile, head to the RePEc Genealogy and complete their entry in the academic family tree of economics. Or do it for your advisor.

Final thoughts

One may be disappointed that it is a little bit of work to ensure that one is properly taken into account in the RePEc rankings. RePEc is a crowd-sourced project; it thus relies on the contributions of the community, and has done so, we think successfully, since 1997. If everyone pitches in a little (or more), we can make it even better. And if this helps improve one’s ranking, even better!

Of course, there is also the fact that writing better papers helps for one’s ranking, too.

Small changes to RePEc ranking statistics

July 1, 2014

With the next RePEc rankings and statistics to be released in a few days, a couple of small changes will be implemented.

First, we have adjusted the way the recursive and discounted recursive impact factors are used for weighting article pages, documents, and citations. A scaling factor was previously applied twice instead of once. The correction does not affect the rankings in any way, but it allows authors to replicate their scores. New scores are 21.4 times higher.

Second, we have started preventing republished articles from counting in the impact factors of the republishing journal. Also, these articles are dropped from the list of top recent works.

New: 10-year rankings on RePEc

October 20, 2013

RePEc has been publishing rankings of various sorts for over a decade. While many of them can still be considered experimental due to limitations in the data, they have had an impact on the evaluations of institutions, economists and journals in quite a few instances. Gradually, these rankings have been expanded to cover more and more aspects of academic life, as well as slicing them by fields, geography, gender and age. These rankings have typically considered all publications listed in RePEc. This can be a disadvantage for younger economists and publication series (although there are criteria that discount citations by age, for example).

We are now introducing a new set of rankings that limit themselves to publications from the last 10 years. For example, to compute an impact factor for a journal, only articles published in the last 10 years are considered. For an economist, anything published over 10 years ago is disregarded (unless the article version falls within the last 10 years). The ranking page has links to all those new 10-year rankings.

A few caveats: As samples are smaller than for the general rankings, the 10-year rankings will be more volatile, and any measurement error will be larger. For this reason, the 10-year rankings are not computed for fields and geographies. Also, any criterion based on recursive factors will still need some time to stabilize, as such factors must go through several iterations to converge; and they will never fully converge, as new data keeps coming in and old data has to be dropped every year. Finally, we cannot count research from publishers who do not supply publication years.
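The iterative nature of recursive factors can be illustrated with a small fixed-point computation in the spirit of PageRank. This is only a hypothetical sketch; RePEc's actual recursive impact factor formula may differ, and the citation matrix below is made up:

```python
# Hypothetical illustration of why recursive factors need several
# iterations: each journal's weight depends on the weights of the
# journals citing it, so the weights form a fixed point that is
# computed iteratively (PageRank-like; RePEc's formula may differ).

def recursive_weights(citations, d=0.85, n_iter=100):
    """citations[i][j]: citations journal i receives from journal j."""
    n = len(citations)
    # Total references each journal j gives out (avoid division by zero).
    out = [sum(citations[i][j] for i in range(n)) or 1 for j in range(n)]
    w = [1.0] * n
    for _ in range(n_iter):
        w = [(1 - d) + d * sum(citations[i][j] * w[j] / out[j]
                               for j in range(n)) for i in range(n)]
    return w

# Journal 0 is cited by journals 1 and 2, journal 1 by journal 0,
# and journal 2 by nobody, so journal 0 ends up with the highest weight.
weights = recursive_weights([[0, 1, 1], [1, 0, 0], [0, 0, 0]])
```

The weights only settle after many passes, which is why a freshly started 10-year series needs time before its recursive scores are meaningful.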

Proposed changes to RePEc rankings up for vote

April 29, 2013

The RePEc rankings are a popular feature of RePEc. As we gather more information about the profession, we can refine the criteria used for those rankings, as well as add more of them. With this post, we seek our users’ opinions about a few potential changes. For each proposed change a poll is attached, which we hope will help in deciding what to do.

Regarding citation counts

Citation counts are the basis for a series of criteria used in ranking authors and departments. Our citation project, CitEc, uses references from open-access material, those offered directly by publishers in their metadata, or user contributions. But citations may also appear elsewhere and could be counted. One source is Wikipedia, which has a little less than 3000 references pointing to items in RePEc. Another is the RePEc Biblio, a curated bibliography of the most relevant research on an increasing number of topics (1100 references, but it is just starting). And a third is blog posts indexed by the blog aggregator focusing on economic research (8000 blog mentions so far). The question here is whether references listed on Wikipedia, in the RePEc Biblio, or on the blog aggregator should count as citations for ranking purposes. All these citations are already listed on the relevant IDEAS pages, but they have so far not counted towards any statistic. As usual, self-citations would not count, as far as possible. For this poll, we want to distinguish whether they should count for the ranking of economists and institutions, for journal and series impact factors, or for both.

Regarding breadth of citations

Citation clubs bias how we try to measure the impact of someone’s research. We already have some criteria that try to measure the breadth of one’s citations: the number of citing authors, and the same weighted by the citing authors’ ranks. Another way to measure breadth is to measure how widely an author has been cited across fields. For this, we can measure in how many NEP fields an author is cited. To this effect, the analysis is of course limited to working papers that have been disseminated through NEP, which currently has 92 fields. Again, self-citations are excluded, and this new criterion would only apply to author rankings.
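The proposed breadth measure boils down to counting distinct NEP fields among the citing working papers. A minimal sketch (the field codes below are real NEP report codes, but the citation data is made up):

```python
# Sketch of the proposed breadth criterion: count the distinct NEP fields
# in which an author's work is cited. Field codes are real NEP codes
# (econometrics, time series, labour); the citing papers are invented.

def citation_breadth(citing_papers):
    """citing_papers: one set of NEP field codes per citing working paper."""
    fields = set()
    for paper_fields in citing_papers:
        fields |= paper_fields
    return len(fields)

# Cited from econometrics, time-series, and labour papers: breadth of 3.
breadth = citation_breadth([{"nep-ecm", "nep-ets"}, {"nep-ecm"}, {"nep-lab"}])
```

Repeated citations from the same field add nothing, so the measure rewards reach across the profession rather than volume within a club.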

Doctoral advisors and programs

The RePEc Genealogy is a crowd-sourced project that gathers information about who studied under whom, where and when. From this information, one could determine who are the best dissertation advisors and which doctoral programs are the strongest. Some preliminary rankings are already available in this regard, based on the data gathered so far: 1869 advisors and 499 programs at the time of this writing. These numbers should increase significantly once a ranking is incorporated. Instead of the h-index currently computed, the ranking would be calculated the same way institutional rankings are determined: by adding up the scores of the relevant students for each criterion, ranking within each criterion, and then aggregating the criterion ranks. As one can expect that only a fraction of authors and institutions can be ranked this way, all the others would be ranked right after the last author or institution with a student. This ranking would thus matter mostly for the top-ranked authors and institutions. Note that a ranking of economists by graduating cohort will first be released in early May.
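The proposed scheme can be sketched as follows. This is a hypothetical illustration with made-up program names and scores; the aggregation step uses a simple average of ranks, whereas RePEc's actual aggregation may differ:

```python
# Hypothetical sketch of the proposed program ranking: sum graduates'
# scores per criterion, rank programs within each criterion, then
# aggregate the per-criterion ranks (a simple average here; RePEc's
# actual aggregation may differ). Names and scores are made up.

def rank_programs(program_scores):
    """program_scores: {program: [summed graduate score per criterion]}"""
    programs = list(program_scores)
    n_criteria = len(next(iter(program_scores.values())))
    rank_sums = {p: 0 for p in programs}
    for c in range(n_criteria):
        # Higher summed score = better = numerically smaller rank.
        ordered = sorted(programs, key=lambda p: -program_scores[p][c])
        for rank, p in enumerate(ordered, start=1):
            rank_sums[p] += rank
    # Order programs by their average rank across criteria.
    return sorted(programs, key=lambda p: rank_sums[p])

ordering = rank_programs({"Program A": [10.0, 9.0],
                          "Program B": [8.0, 5.0],
                          "Program C": [2.0, 1.0]})
```

Ranking within each criterion first, and aggregating ranks rather than raw scores, keeps criteria with large numeric scales from dominating the result.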

Exclusion of extremes

For author and institution rankings, the various criteria are aggregated after excluding the best and worst criterion. This was introduced when there were about 25 criteria. Now there are 33 for authors and 31 for institutions, and depending on the outcomes of the votes above, there may be even more. Thus one may want to exclude even more extreme scores so that outliers carry less weight. How many extremes should be excluded on each end? The status quo is one.
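For illustration, the trimming in question amounts to dropping the k best and k worst per-criterion ranks before averaging. This is a sketch under the assumption that the aggregate is a mean of ranks (the numbers are invented):

```python
# Sketch of aggregation after trimming extremes: drop each author's k best
# and k worst criterion ranks before averaging (k = 1 is the status quo
# described above). Assumes a mean-of-ranks aggregate; numbers invented.

def aggregate_ranks(ranks, k=1):
    """ranks: one rank per criterion (smaller is better)."""
    trimmed = sorted(ranks)[k:len(ranks) - k]
    return sum(trimmed) / len(trimmed)

# A single outlier rank of 300 no longer dominates the aggregate:
agg = aggregate_ranks([5, 8, 10, 12, 300], k=1)  # mean of [8, 10, 12]
```

With more criteria in the mix, a larger k discards more outliers but also more genuine signal, which is exactly the trade-off the poll asks about.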

Questions or concerns?

Feel free to post a comment below!

The Purpose of Journals

February 14, 2013

The editor of the Economics Bulletin, John Conley, has noted that many things go wrong with economic journals. Here is the abstract of his letter:

This letter calls attention to a recent trend in economics publishing that seems to have slipped under the radar: large increases in submission rates across a wide range of economics journals and steeply declining acceptance rates as a consequence. It is argued that this is bad for scholarly communication, bad for economics as a science, and imposes significant and wasteful costs on editors, referees, authors, and especially young people trying to establish themselves in the profession. It is further argued that the new “Big Deal” business model used by commercial publishers is primarily responsible for this situation. Finally it is argued that this presents a compelling reason to take advantage of new technologies to take control of certifying and distributing research away from commercial publishers and return it to the scholarly community.

According to Conley,

The purpose of academic journals is to facilitate scholarly communication, filter for errors, and maintain the record of scientific advance.

This is, in my opinion, an idealized conception that no longer reflects the purpose of economic journals. For economic research, the current economic journals are largely redundant. Conley himself notes this:

I seldom actually read journals any more. I research topics using Google Scholar, RePEc, SSRN, and so on. It is inconvenient to sign up with publishers to get tables of contents emailed to me or to login to my university’s library web portal to search a journal issue by issue. I find it adds very little value over a more general search in any event. In short, certification remains important to help people gain tenure and promotion and to get a sense of the quality and centrality of individual scholars. However, neither certification by a journal, nor the collection of similar papers within the bound or even electronic pages of a specific journal has very much meaning to me when I am trying to understand where the debate in a subfield is at any given moment. As a result, I was beginning to come to the conclusion that while they are irritating, commercial publishers are “mostly harmless” to the research enterprise itself as publishing itself is becoming mostly irrelevant.

This coincides with my own observation: researchers don’t need journals. The main purpose of the journals is currently to ease the work of hiring committees. People publish in order to get a job. The wish to communicate new findings appears secondary in most cases.

Journals could serve worthier aims, however: they are needed by students, college teachers, and others who would like to obtain reliable information but cannot separate the wheat from the chaff as easily as active researchers can.

The important point Conley makes, however, is that the current journal system, although largely irrelevant for research, is nevertheless

bad for scholarly communication, bad for economics as a science, and imposes significant and wasteful costs on editors, referees, authors, and especially young people trying to establish themselves in the profession.

I fear, however, that John Conley’s suggestion to increase the number of journals would not improve the situation very much. As long as hiring committees use the reputation of journals, rather than the reputation of individuals, a useful system of “communication, filter for errors, and maintain the record of scientific advance” is practically blocked.

What can be done besides increasing the number of journals? Here are some further suggestions.

1. Hiring committees can restrict the number of papers considered for judging an applicant to, say, three, and disregard all other writings. This may help reduce the number of publications and thereby the need for further journals; it would also tilt the quality-quantity trade-off in favor of quality. (I think this has been the practice at Berkeley.)

2. Hiring committees that feel incompetent to judge the substantive quality of a contribution and have to resort to statistics of some sort may turn to citation counts of individual authors, as obtainable through Google Scholar, Web of Science, or RePEc. This is a better solution than the current practice of relying on the prestige of journals, and it would take into account the fact that many papers in top journals are not so good, while medium-quality journals publish excellent articles.

