Proposed changes to IDEAS/RePEc ranking: Euclid and outliers

April 19, 2017

The IDEAS/RePEc rankings are a popular way to assess the strength of economists, serials and institutions. They are built on a large number of criteria that are relatively stable in definition. As rankings matter a lot for some, we do not want to disturb the definitions without prior approval by the community. Hence, we are putting to a vote several changes for the aggregate rankings of economists and serials. Voting ends one month from this post, on 19 May 2017. Any approved change is expected to be available in the May rankings released in early June 2017.

Euclidian ranking for economists

The Euclidian ranking of economists has already been computed for several months. It is based on an AER article by Motty Perry and Philip Reny. The score is measured by taking the square root of the sum, across all distinct works, of the square of the number of adjusted citations, the adjustment being for the typical number of references in the field of the paper (the field is defined by NEP; if there are several fields, a geometric average is used; if no field is defined, no adjustment is made). Thus, this ranking favors those who have some work that is much more cited than the others. The other citation criteria that are currently used are sums of citations, several with citations weighted by various impact factors. The only ones that differ from this model are the H-index (which favors a more uniform distribution of citations across works) and the criteria based on the number of economists citing.
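To make the definition concrete, here is a minimal sketch in Python. The exact field normalization is not spelled out above, so the adjustment shown (dividing citations by the typical number of references in the work's NEP field, with a geometric mean across several fields) is only an assumption, and the names and data structures are hypothetical.

    from math import sqrt
    from statistics import geometric_mean

    def adjusted_citations(raw_citations, typical_refs_per_field):
        # typical_refs_per_field: typical reference counts for the work's NEP
        # field(s); an empty list means no field is defined, hence no adjustment.
        if not typical_refs_per_field:
            return raw_citations
        return raw_citations / geometric_mean(typical_refs_per_field)

    def euclidian_score(works):
        # works: one (raw_citations, typical_refs_per_field) pair per distinct work
        return sqrt(sum(adjusted_citations(c, f) ** 2 for c, f in works))

    # With equal total citations, one heavily cited work scores higher than
    # several moderately cited ones:
    # euclidian_score([(100, [])])      -> 100.0
    # euclidian_score([(25, [])] * 4)   -> 50.0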

The addition of the Euclidian criterion would thus introduce a new angle in the list of criteria: having some particularly influential work instead of a larger body of work. As noted in the article, this criterion also has a number of properties that no other criterion has. Vote below if you think such a criterion should be added to those retained for the aggregate ranking.

Exclusion of extreme criteria for economists

The procedure for the aggregation of the ranking criteria currently excludes the worst and the best criterion for every economist. The rationale for this is that one should not be penalized too much for being particularly bad on one criterion, and should not shoot up too much in the rankings if only one criterion is very good. The fact that only one of each is excluded has been repeatedly discussed in the community, and the fact that we may be adding another criterion (potentially the 36th) is an opportunity to revisit this. The vote below asks whether more extremes should be excluded from consideration. Note that it will be assumed that if you vote for “3”, you would also approve of “2”, as the current status quo is “1”.
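As an illustration only, here is a sketch of the trimming step, with k = 1 the status quo and k = 2 or 3 the options on the ballot. The aggregation function itself is not described in this post, so a plain average of the remaining criterion ranks is used purely as a stand-in.

    def aggregate_ranks(criterion_ranks, k=1):
        # criterion_ranks: the economist's rank under each criterion (lower is better)
        ranks = sorted(criterion_ranks)
        trimmed = ranks[k:len(ranks) - k]   # drop the k best and the k worst criteria
        return sum(trimmed) / len(trimmed)  # stand-in aggregation, for illustration only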

Euclidian ranking for serials

As for economists, a Euclidian ranking can be computed for serials (journals, working paper series, book series, chapter series). For serials, we similarly construct an aggregate ranking across criteria. This aggregate ranking is little used, to our knowledge. The question here is whether to include the Euclidian ranking in that computation. Note that this Euclidian ranking is not yet available.

Euclidian ranking for institutions

It is currently not clear how the Euclidian ranking for institutions would be computed. The problem is how to treat economists with multiple affiliations. For other criteria that are based on simple sums, the current practice is to multiply the scores by the relevant affiliation share. This does not make much sense for a sum of squares. There is a similar problem for the H-index that was resolved by redefining it. Should a solution be found, the same aggregation approach would be used as for economists.

Exclusion of extreme criteria for institutions

This poll is the mirror image of the one for economists. The only difference is that we currently have 32 criteria instead of 35. Due to this difference, a separate poll is offered.

Update, 19 May 2017: Polls are closed. The Euclidian ranking will be added, and the two best and two worst criteria will be excluded. The same rules apply to the economist and institutional rankings. The latter will experimentally include the square root of the weighted sum of the squared Euclidian scores of their members.
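For the experimental institutional score mentioned in the update, here is a minimal sketch, under the assumption that the weights are the members' affiliation shares:

    from math import sqrt

    def institution_euclidian_score(members):
        # members: (euclidian_score, affiliation_share) pairs for registered members
        return sqrt(sum(share * score ** 2 for score, share in members))

    # e.g. institution_euclidian_score([(100.0, 0.5), (50.0, 1.0)]) -> about 86.6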


Ranking optimization

November 18, 2016

RePEc is all about the free dissemination of economic research, but for many economists it is best known for its rankings. While we would really emphasize that the rankings are only a by-product, and to some degree a motivator for people and publishers to have their works listed on RePEc, we want to acknowledge that the rankings have become important, as they are used for evaluations by funding agencies and for promotion or tenure decisions. So here are some recommendations on how to optimize rankings, both for authors and institutions.

For authors


  1. Foremost, make sure your profile is current. Go to the RePEc Author Service and log in. Click on research to see whether the system has found any suggestions. Make sure all the relevant variations of your name are listed so that it can make the best suggestions. Check also whether the system needs some help in attributing some citations.
  2. A few publishers still do not participate, particularly among book publishers. Encourage yours to index its works in RePEc.
  3. If you have advised graduate students and they are registered in RePEc, add them to your RePEc Genealogy record. Help complete your own advisor’s record as well. This is likely the lowest-hanging fruit for many economists.
  4. RePEc sometimes fails to find the bibliography for some articles. If this makes you miss some citations, you can help by uploading those references. The full bibliography is required. The input form is here.
  5. Working papers get downloaded many more times than journal articles. Thus, make sure to have them listed! Your institution can have its working paper series indexed following these instructions. If that does not work out, upload them to MPRA. Most publishers allow it, as long as it is not the final version. See details at SHERPA/RoMEO.
  6. Finally, link to your profile on IDEAS or EconPapers from your webpage.

For institutions


  1. Foremost, make sure that all members of your institution are registered with the RePEc Author Service. You can look up who is already there by finding your record at EDIRC. Note that if someone is listed with a question mark, it means their email address is not valid, and they will not count towards your score. Please get it corrected (or tell us the new address, or whether this person may have died; it happens).
  2. If you have a graduate program, you want to have the graduates listed in the RePEc Genealogy. Your EDIRC record also lists who is already linked. There is already a ranking using these records.
  3. If you have a working paper series or some other serial, make sure it is indexed in RePEc. Instructions.
  4. Of course, have your members follow the recommendations for authors above.


RePEc to raise funds through citation boosts

April 1, 2015

RePEc is a volunteer initiative that is run without any source of revenue. The hosting of RePEc services is provided by sponsors who either pay for this hosting or share part of their infrastructure. Over the history of RePEc, however, there have been emergency situations that required finding a quick solution, where buying hardware or hosting would have been much more efficient than first seeking a sponsor.

For this reason, RePEc is now introducing a way to generate some funds. One problem is that RePEc will always provide all its services for free, so there is no way to offer some exclusive service for a fee. There is one area, though, where we can help with the most common complaint of authors: that they are not sufficiently cited. Authors can already help there for free, by adding the full references of articles and papers citing their work; CitEc has a convenient and fairly popular form for that. But if you want a boost beyond that, RePEc now offers a way to buy a higher citation count that will count towards the RePEc rankings.

Specifically, this applies to all ranking criteria that use citation counts, except for the H-index and the two criteria that count the number of authors citing someone. You can buy a series of modifiers to the base number that is currently published, call it R. You can increase that base by f. Then you can multiply the sum R+f by p. These two enhancements are valid, by default, for one month. But you can extend their validity up to L periods, during which they depreciate linearly. This gives you three citation score enhancements you can buy: f, p, and L.

The actual formula that will be used to determine the new score (after some necessary normalization) was given as an image in the original post and is not reproduced here; it expresses the new score in terms of:

A, the new score,
p_i, the multiplicative boost for period i,
f_i, the additive boost for period i,
L, the duration boost,
R, the initial score, and
o = √i.

The price of each boost will be determined each month according to demand. To this effect, interested users need to log into this form during the first 15 days of the month to express interest in buying a number of each of the three boosts. The system then determines a unit price for each, and the boosts can be purchased during the rest of the month (including adjusting the numbers) at the same address. Note that logging in during the first 15 days is required to make a purchase.

The new scores will be used for the rankings, and there will be no visible mark that they have been boosted. Of course, a user can go and count the citations on an author’s profile and compare that to the numbers in the rankings (remember not to count self-citations and multiple versions of the same work). The new scores will also count towards the affiliations.


How to optimize an institution’s ranking in RePEc

February 25, 2015

As we will very soon add a new institution ranking (voting on it will soon close. Edit: The new ranking is now live), it is a good opportunity to recap how institutions can optimize how they are getting ranked in RePEc. For the existing rankings, see here.


Get your authors listed, and with proper affiliations
The score of an institution is determined by the authors who declare being affiliated with it. If authors are not registered with the RePEc Author Service or have not declared the institution among their affiliation(s), we cannot count them towards the institutional ranking. Institutions can check who is registered by looking themselves up in EDIRC, RePEc’s institutional directory. Note that authors with several affiliations have to allocate percentages to each, and their scores are distributed according to those shares.
Get your authors to maintain their profiles
Authors are ranked as well, and whatever allows them to optimize their ranking scores works for their home institutions as well. You can follow this blog post on this topic.
Get your publications listed
This applies in particular to working papers. If the local working papers are indexed in RePEc, then local authors can add them to their profiles, and only then can they count towards the institution. Knowing that working papers are downloaded much more frequently than articles, this makes a difference. Also, if an article or book chapter is otherwise not available, citations to it can still be captured if there is a working paper version available. Instructions on how to index publications in RePEc are here.
Link to those publications
As the various RePEc services provide listings for your working papers, you can link to them. Some even skip listing them on their website, linking only to RePEc. Two popular sites for that are EconPapers and IDEAS.
Have graduates listed as well
Through the RePEc Genealogy, departments can now list the graduates from their doctoral program. This matters because the performance of graduate programs is one of the factors in determining the rankings. Again, EDIRC has those alumni listed.


Poll about new RePEc institution ranking

January 26, 2015

The rankings provided by RePEc are becoming increasingly popular. They are far from perfect, though. One frequent criticism is that institution rankings depend on the size of the institution, as they simply add the scores of all affiliated economists. It is unfortunately not possible to offer per-capita rankings, as the registration system does not distinguish between faculty and students. What one could do, however, is to count only the top x people from every institution. The question is what this x should be.

We want to ask the RePEc community to determine this x with the poll below. The vote will be open for a month; the option closest to the median vote will be selected.

A few technicalities: As economists with multiple affiliations have to set shares for each, those shares will also be used to determine who counts up to x. This means that more than x people will likely count towards the institution’s score. Also, institutions with fewer than x registered economists will not be compensated for the remainder of the allocation.
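As a sketch of one way the top-x cutoff could interact with affiliation shares, consider the following; the exact rule is not given in the post, so this is only an illustration with hypothetical data structures.

    def counted_members(members, x):
        # members: (author_id, score, affiliation_share) triples, best score first.
        # Returns (author_id, weight) pairs counting towards the institution.
        counted, total = [], 0.0
        for author, _score, share in members:
            if total >= x:
                break
            weight = min(share, x - total)  # the last member may count partially
            counted.append((author, weight))
            total += weight
        return counted

With fractional shares, the loop typically includes more than x people before the cumulative share reaches x, which is why more than x registered economists can contribute to the score.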


How authors can improve their RePEc ranking

October 31, 2014

The main purpose of RePEc is the dissemination of economics research. Over time, various services were added to the bibliographic mission, including rankings of economists according to their publication output. Despite being still experimental (we will see why), these rankings have become quite important in the profession. This post explains how authors can best leverage the various RePEc services to improve their standing in these rankings. The rankings are computed using a number of criteria, each highlighting different aspects of research productivity. The headers below reflect these categories.

Number of works

Several criteria are just counts of works. The difference is that the works are weighted using various impact factors. To best leverage this, it is important that an author has as many works as possible listed in RePEc. This indexing is typically done by the publisher, which would be a publishing house in the case of books, book chapters or articles, or the local research institution for working papers. Over 1700 such publishers currently participate, and more can join by following these instructions. It is all free.

If that does not help, one can upload a working paper version to the Munich Personal RePEc Archive (MPRA). It will be listed as a working paper, not as an article, but at least it is on RePEc. Many publishers allow a version prior to the final published one to be disseminated there; you can check the policies at SHERPA/RoMEO.

Finally, only the works that the author has claimed on the RePEc Author Service are counted. Authors get emails when something new may be there, but action by the author is required to add it to the profile (very few publishers add a code that puts the work directly into the profile). So check your account on a regular basis, and make sure all the possible variations of your name are listed there, or RePEc cannot find all matches.

Number of pages

This criterion applies only to works published in journals and uses several weighting schemes. But the same principles apply as above: if some article is missing on RePEc, get the publisher to participate. Sometimes the publisher is actually participating but is not indexing that particular journal, volume, issue or article. Complain to the publisher, not to RePEc. Any page on a RePEc service associated with that publisher has a contact email address for such purposes. Complain there as well if there is an error in the listing for any of your works.

Note that some journals do not provide page numbers. It is therefore not possible to count pages in such cases.

Number of citations

Again, these counts are weighted in various ways. The basis is the set of citations discovered by the CitEc project. This is likely where the data is the most experimental, as some publishers do not allow access to full texts to extract references, or link to an interstitial page before the full text. Authors can help here, though, by supplying reference lists. There is a form that asks for all references of an article, not just those that cite the author. The hope is that this will help complete the database more rapidly, and it gives everyone the opportunity to contribute to a public good. Over 1000 authors have helped so far.

Note that matching references to documents in RePEc is a difficult exercise and pairs that fall in a grey zone are sent to the RePEc Author Service for authors to verify. So check from time to time whether there is something waiting for you there.

It can also help to consolidate different versions of the same work. This is done automatically if the title is identical and the author has all versions in the profile. If the titles differ, this form allows you to establish the links. Also, encourage those who cite you to be registered, as two criteria use this information.

Finally, we cannot count citations to works that are not listed in RePEc. If the article is not listed, getting a working paper version listed will help.

Abstract views and downloads

We can only count what goes through the participating RePEc services. For example, a link from an author’s homepage to a full text on the publisher’s website cannot be captured because it did not transit through RePEc. Thus, either link to your author profile on IDEAS or EconPapers, or link to the abstract pages on these services. Put a link to the profile page in your email signature. Note also that working papers generate many more downloads than articles. So keep your working papers in your profile even after you publish an article!

Unfortunately, the temptation to manipulate these numbers is strong. Hence, a number of safeguards have been put in place: repeatedly downloading a paper will count only once, for example. Tell your class to download your papers and you will earn a zero. More details (but not too many) can be found on the site that publishes these statistics, LogEc.

Co-authorship network

Two criteria are based on how central an author is in the co-authorship network. Details can be found at CollEc. To improve one’s score here, one needs of course to get co-authors to be listed on RePEc with a profile (and their co-authors, too).

Student advising

This looks at how well an author’s doctoral students are performing with respect to all the criteria above. Thus, if one has been advising students, one needs to make sure this is recorded in RePEc. If the students have a profile, head to the RePEc Genealogy and complete their entry in the academic family tree of economics. Or do it for your advisor.

Final thoughts

One may be disappointed that it takes a little bit of work to ensure that one is properly taken into account in the RePEc rankings. RePEc is a crowd-sourced project; it thus relies on the contributions of the community, and has done so, we think successfully, since 1997. If everyone pitches in a little (or more), we can make it even better. And if this helps improve one’s ranking, even better!

Of course, there is also the fact that writing better papers helps for one’s ranking, too.


Small changes to RePEc ranking statistics

July 1, 2014

With the next RePEc rankings and statistics to be released in a few days, a couple of small changes will be implemented.

First, we have adjusted the way the recursive and discounted recursive impact factors were used for weighting article pages, documents, and citations. A scaling factor was previously applied twice instead of once. The correction does not affect the rankings in any way, but it allows authors to replicate their scores. New scores are 21.4 times higher.

Second, we have started preventing republished articles from counting in the impact factors of the republishing journal. Also, these articles are dropped from the list of top recent works.


New: 10-year rankings on RePEc

October 20, 2013

RePEc has been publishing rankings of various sorts for over a decade. While many of them can still be considered experimental due to limitations in the data, they have had an impact on the evaluations of institutions, economists and journals in quite a few instances. Gradually, these rankings have been expanded to cover more and more aspects of academic life, as well as slicing them by fields, geography, gender and age. These rankings have typically considered all publications listed in RePEc. This can be a disadvantage for younger economists and publication series (although there are criteria that discount citations by age, for example).

We are now introducing a new set of rankings that limit themselves to publications in the last 10 years. For example, to compute an impact factor for a journal, only articles published in the last 10 years are considered. For an economist, anything published over 10 years ago is dismissed (unless the article version falls within 10 years). The ranking page has links to all those new 10-year rankings.

A few caveats: As samples are smaller than for the general rankings, the 10-year rankings will be more volatile, and any measurement error will be larger. For this reason, the 10-year rankings are not computed for fields and geographies. Also, any criterion that is based on recursive factors will still need some time to stabilize as they have to go through several iterations for them to converge, and they will never fully converge, as new data keeps coming in and data will have to be dropped every year. Finally, we cannot count research from publishers who do not supply publication years.


Proposed changes to RePEc rankings up for vote

April 29, 2013

The RePEc rankings are a popular feature of RePEc. As we gather more information about the profession, we can also refine the criteria that are used for those rankings, as well as add more of them. With this post, we seek from our users their opinion about a few potential changes. For each proposed change a poll is attached, which we hope will help in deciding what to do.

Regarding citation counts

Citation counts are the basis for a series of criteria used in ranking authors and departments. Our citation project, CitEc, uses references from open access material, those offered directly by publishers in their metadata, or those from user contributions. But citations may also appear elsewhere and could be counted. One source is Wikipedia, which has a little less than 3000 references pointing to items in RePEc. Another is the RePEc Biblio, a curated bibliography of the most relevant research in an increasing number of topics (1100 references, but it is just starting). And a third is blog posts indexed on EconAcademics.org, the blog aggregator focusing on economic research (8000 blog mentions so far). The question here is whether references listed on Wikipedia, the RePEc Biblio or EconAcademics.org should count as citations for ranking purposes. All these citations are already listed on the relevant IDEAS pages, but they have so far not counted towards any statistic. As usual, self-citations would not count, as much as possible. For this poll, we want to distinguish whether they should count for the ranking of economists and institutions on the one hand, for journal and series impact factors on the other hand, or for both.

Regarding breadth of citations

Citation clubs bias how we try to measure the impact of someone’s research. We already have some criteria that try to measure the breadth of one’s citations: the number of citing authors, and the same weighted by the citing authors’ ranks. Another way to measure breadth is to measure how widely an author has been cited across fields. For this, we can measure in how many NEP fields an author is cited. To this effect, the analysis is of course limited to working papers that have been disseminated through NEP, which currently has 92 fields. Again, self-citations are excluded, and this new criterion would only apply to author rankings.
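A minimal sketch of this breadth criterion, under the assumption that what is counted is the number of distinct NEP fields of the citing working papers; the data structures are hypothetical.

    def nep_breadth(author, citations):
        # citations: (cited_author, citing_author, nep_fields_of_citing_paper) triples,
        # limited to citing working papers disseminated through NEP.
        fields = set()
        for cited, citing, nep_fields in citations:
            if cited == author and citing != author:  # exclude self-citations
                fields.update(nep_fields)
        return len(fields)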

Doctoral advisors and programs

The RePEc Genealogy is a crowd-sourced project that gathers information about who studied under whom, where and when. From this information, one could determine who is the best dissertation advisor and which doctoral programs are the strongest. Some preliminary rankings are already available in this regard, based on the data that has been gathered so far: 1869 advisors and 499 programs at the time of this writing. It is expected that these numbers would increase significantly once a ranking is incorporated. Instead of the h-index currently computed, it would be calculated in the same way that institutional rankings are determined: by adding up the scores of the relevant students for each criterion, ranking within each criterion, and then aggregating the criterion ranks. As one can expect that only a fraction of authors and institutions can be ranked this way, all the others would be ranked right after the last author or institution with a student. It is to be expected that this ranking would matter mostly for the top ranked authors and institutions. Note that a ranking of economists by graduating cohorts is going to be first released in early May.
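A sketch of the proposed computation, following the description above: sum the students' scores per criterion, rank within each criterion, then aggregate the criterion ranks (shown here as a simple average, purely for illustration; the actual rank aggregation is not specified in this post).

    def genealogy_ranking(students_of, scores):
        # students_of: {advisor_or_program: [student, ...]}
        # scores: {criterion: {student: score}}, higher is better
        criterion_ranks = {a: [] for a in students_of}
        for criterion, by_student in scores.items():
            totals = {a: sum(by_student.get(s, 0.0) for s in studs)
                      for a, studs in students_of.items()}
            ordered = sorted(totals, key=totals.get, reverse=True)
            for rank, advisor in enumerate(ordered, start=1):
                criterion_ranks[advisor].append(rank)
        return {a: sum(r) / len(r) for a, r in criterion_ranks.items()}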

Exclusion of extremes

For author and institution rankings, the various criteria are aggregated after excluding the best and worst criterion. This was introduced when there were about 25 criteria. Now, there are 33 for authors and 31 for institutions. Depending on the outcomes of the votes above, there may be even more. Thus one may want to exclude even more extreme scores to avoid taking outliers into account. How many extremes should be excluded on each end? The status quo is one.

Questions or concerns?

Feel free to post a comment below!


The Purpose of Journals

February 14, 2013

The editor of the Economics Bulletin, John Conley, has noted that many things go wrong with economic journals. Here is the abstract of his letter:

This letter calls attention to a recent trend in economics publishing that seems to have slipped under the radar: large increases in submission rates across a wide range of economics journals and steeply declining acceptance rates as a consequence. It is argued that this is bad for scholarly communication, bad for economics as a science, and imposes significant and wasteful costs on editors, referees, authors, and especially young people trying to establish themselves in the profession. It is further argued that the new “Big Deal” business model used by commercial publishers is primarily responsible for this situation. Finally, it is argued that this presents a compelling reason to take advantage of new technologies to take control of certifying and distributing research away from commercial publishers and return it to the scholarly community.

According to Conley,

The purpose of academic journals is to facilitate scholarly communication, filter for errors, and maintain the record of scientific advance.

This is, in my opinion, an idealized conception that does not reflect the purpose of economic journals anymore. For economic research, the current economic journals are largely redundant. Conley himself notes this:

I seldom actually read journals any more. I research topics using Google Scholar, RePEc, SSRN, and so on. It is inconvenient to sign up with publishers to get tables of contents emailed to me or to login to my university’s library web portal to search a journal issue by issue. I find it adds very little value over a more general search in any event. In short, certification remains important to help people gain tenure and promotion and to get a sense of the quality and centrality of individual scholars. However, neither certification by a journal, nor the collection of similar papers within the bound or even electronic pages of a specific journal has very much meaning to me when I am trying to understand where the debate in a subfield is at any given moment. As a result, I was beginning to come to the conclusion that while they are irritating, commercial publishers are “mostly harmless” to the research enterprise itself as publishing itself is becoming mostly irrelevant.

This coincides with my own observation: researchers don’t need journals. The main purpose of the journals is currently to ease the work of hiring committees. People publish in order to get a job. The wish to communicate new findings appears secondary in most cases.

Journals could serve worthier aims, however: they are needed by students, college teachers, and others who would like to obtain reliable information but cannot as easily separate the wheat from the chaff as active researchers can.

The important point Conley is making is, however, that the current journal system, although largely irrelevant for research, is nevertheless

bad for scholarly communication, bad for economics as a science, and imposes significant and wasteful costs on editors, referees, authors, and especially young people trying to establish themselves in the profession.

I fear, however, that John Conley’s suggestion to increase the number of journals would not improve the situation very much. As long as hiring committees rely on the reputation of journals, rather than the reputation of individuals, a useful system to “facilitate scholarly communication, filter for errors, and maintain the record of scientific advance” is practically blocked.

What can be done besides increasing the number of journals? Here are some further suggestions.

1. Hiring committees can restrict the number of papers to be considered for judging an applicant to, say, three and disregard all other writings. This may help to reduce the number of publications and thereby reduce the need for further journals; it would also tilt the quality-quantity trade-off in favor of quality. (I think this has been a practice in Berkeley.)

2. Hiring committees that feel incompetent to judge the substantive quality of a contribution and have to resort to statistics of some sort may turn to citation counts of individual authors, as obtainable through  Google Scholar, Web of Science, or RePEc). This is a better solution than the the current practice of relying on the prestige of journals and would take account of the fact that  many papers in top journals are not so good, and medium-quality journals publish excellent articles.