Suggestion box

RePEc is entirely driven by volunteers, who are also users. Most current volunteers came to RePEc either because they wanted to help with an existing project or because they had an idea they wanted implemented in RePEc. We are opening this suggestion box for several reasons: as a way to encourage feedback, to encourage more volunteers to come forward and pick up a suggestion, and finally to have users and RePEc team members discuss the proposed suggestions.

At RePEc, we like to be open. After all, we are creating open bibliographies using open-source software, and we encourage open access. RePEc is there for you, so tell us how you want it to be: make your suggestion in the comment section below.

28 Responses to Suggestion box

  1. sterndavidi says:

    It would be nice to get full stats on any member. Currently these are only available through the links e-mailed to members, or, for the world’s top 5% of economists, through their ranking in each category. At least provide the same ranking according to each criterion for the top 20% in each country.

  2. To sterndavidi: I am afraid that many people would unregister once it is revealed that their stats are not that good. I am more interested in having more people participate than in exposing full statistics.

  3. economiclogic says:

    There should be a Facebook application that allows one to have a link to one’s IDEAS and/or EconPapers page. So many economists are now on Facebook. And RePEc profiles could also include a link back to Facebook.

  4. ricardo97 says:

    For users outside of the top 5%, it’d be nice to know about others with similar ranks, maybe the 10-20 people immediately above and below each user. For people who are not at research institutions and who are unlikely to crack the top 5%, this would be a useful way of seeing people with similar profiles.

  5. koczyl says:

    economiclogic: I think this is a brilliant idea that could be discussed further in the RePEc group on Facebook: http://www.facebook.com/group.php?gid=102538583896

  6. ricardo97: This is an interesting suggestion. However, I am afraid it would not be as informative as expected. Beyond the top 5%, 20 closely ranked economists are usually separated by minute differences, which makes the ranking among them mostly random. I do not see what this would bring.

  7. ricardo97 says:

    The usefulness of looking at similarly ranked people is precisely that they are similarly ranked, whether a bit above or a bit below; the point is not to worry about being a minute bit better or worse than them. My suggestion is about having some sense of one’s peers, not playing a one-upmanship game.

    To avoid people overinterpreting the minute differences, would it be possible to simply report, say, 20 similarly ranked economists without an indication of which ones are ranked higher and which ones are ranked lower?

  8. ricardo97: That is certainly feasible and can be added to the personalized ranking analysis. I have put this on my to-do list.

  9. sterndavidi says:

    Here’s another idea: how about something like a crosstab search in IDEAS? At the moment I can get a list of everyone working in Energy Economics, and I can also get a list of the top 20% of economists in Australia. How about a list of all economists in Australia (without ranks, as we discussed above), and lists of, say, everyone working on energy economics in Australia? For a country like Australia, the latter is more useful than the former.

  10. kfa4 says:

    Hello everyone,
    I recently noticed that the BibTeX file on IDEAS does not contain the abstract and keywords for an item. This inspired me to comment in this suggestion box, because I strongly believe that including these fields would be very beneficial for those who use referencing software (they let you remember why you were interested in an item in the first place). In addition, this update would allow other institutions to quickly compile relevant information on an author and their work. I really do hope this update is available soon.

  11. I can certainly add abstracts and keywords to this feature on IDEAS, and to the other bibliographic formats as well. Before doing so, however, I want to make sure this would not interfere with the operations of those who already use these formats and may suddenly get more than they bargained for.

  12. Abstract and keyword fields are now available. It will take a few days until they are actually populated with data (where available, of course).
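
    For illustration, an entry in the IDEAS BibTeX export with the new fields could look roughly like this (the handle, names, and values below are invented for the example):

      @Article{RePEc:hyp:journl:v:1:y:2009:i:1:p:1-10,
        author   = {Jane Q. Economist},
        title    = {A Hypothetical Paper},
        journal  = {Journal of Examples},
        year     = {2009},
        abstract = {A sentence or two summarizing the paper, as supplied by the publisher.},
        keywords = {citations; rankings; bibliographic data}
      }

    BibTeX-aware reference managers simply ignore fields they do not use, so the additional fields should not break existing workflows.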

  13. verwimpp says:

    Dear Christian,
    RePEc recently published rankings for economists with at most 5 and at most 10 years of publication experience. Could you also do that for researchers with at most 12, 15, or 20 years?

    best,
    philip

  14. This is now implemented for 15 and 20 years. Thanks for your suggestion.

  15. economist76 says:

    This site is very frustrating because there is no way to correct many kinds of errors. This matters in itself, but particularly so since you provide “rankings”. When you do that, it strikes me that you have an ethical obligation to at least provide a mechanism for people to suggest corrections.

    Examples:

    1) One of my better-cited publications is missing, while every other paper in that issue is listed. Clearly just a clerical error — but there is no contact information to fix it, and no incentive for some clerk at some journal to fix one paper from 7 years back anyway. Why not just let _me_ fix it, by providing the necessary information? Sure, I could lie, but the reputational consequences of something like that would be severe. I could also make up data for publications, but I don’t.

    2) You provide rankings of departments by field, and you allocate fields by NEP reports. Well, my NEP reports are warped. I’m included in things that have nothing to do with my field. I am not alone in that, and it has a major influence on several people in my department. Why not just let _me_ tell you what I do? I can, for example, give you JEL codes by paper. For those who don’t want to, just stick with NEP.

    3) Why not let users just report papers that cite them (by repec handle), since some journals won’t let you automate it? I can’t imagine anyone so petty or risk-loving as to lie on that, but you can combat the possibility by alerting the citing authors (if registered) in some non-intrusive fashion.

    4) Sometimes my working papers change titles before publication. Let me tag two papers with different titles as the same paper.

    5) Labeling rankings as “experimental” is a cop-out, because you know they are widely used anyway. If you’re going to do them, at least let the people and departments affected correct the data issues, as above. That is almost surely the quickest and soundest way to improve the quality of this site.

  16. Thanks for your comments. Let me address your points one by one.

    1) Every abstract page on IDEAS and EconPapers has a contact email for such corrections, as do series and journal pages. These contacts are the ones that feed the data into RePEc; they hold the keys.

    2) The principle in RePEc is that the information about papers is supplied by the respective publishers. This considerably reduces the need for monitoring. Now, publishers unfortunately rarely use JEL codes (40% for working papers, 7% for articles), which makes the codes of little use for this. I explored automatic JEL code discovery, but the computational power required is overwhelming, and there would be errors. Also, allowing author amendments to existing material would require serious changes to the current software, and current volunteers have no time (or desire) for this. But we are always looking for new blood…

    Note that one advantage of using NEP data is that it is generated by a human who is a specialist in her field.

    3) This is an idea I have had in mind for a while, but the major problem is monitoring for abuse. It would be easy for anyone to increase citation counts by “expanding” bibliographies. And yes, people are petty: we have already had to deal with various attempts at illicit citations. I am open to suggestions on how this could be done efficiently for all parties.

    4) Email us with the RePEc handles, and we will link them. There is no need to do so if the titles are close to identical and both works are in your profile; they will be linked at the next refresh.

    5) Believe me, we are trying to do our best. Contact the people who can change the data, and I hope they will oblige. Pressure content providers to let us get at the reference sections. And I hope you understand that we want to limit opportunities for abuse in order to keep our data credible.

    It is my intention to remove the “experimental” label once we have more citing than cited works. The score is currently 248223 to 308223.

  17. economist76 says:

    Thank you for your reply. Three major points.
    1) Search for the word “Environmental” at this Post-Keynesian NEP link: http://ideas.repec.org/n/nep-pke/2009-09-26.html. About 25 environmental economics papers from the same school are listed there. No big deal, except that you then use that misinformation to “rank” departments by field.
    2) I understand that changes such as allowing self-description of fields require software changes. I like coding, and I’d probably send in some patches, as I’m sure would other people. However, I can’t find your software publicly posted anywhere. Is it out there?
    3) I know you’re trying to do your best. I’m arguing that you have misguided principles that stand in the way. Harness the power of incentives. Authors and institutions have ample incentive to get things right. Publishers have very little. You fear the power of incentives rather than harness them. You say “we want to limit opportunities for abuse to give credibility to our data”. Well, as it stands your data has no credibility because it is massively incomplete and error-ridden. You say “Contact the people who can change the data”. That’s what I just did. _You_ can change the data, or at least provide mechanisms for people to correct it. It’s possible that I have just had particularly bad luck with your site. However, I’ve talked to 5 or 6 other people I know, and everyone has similar complaints.

  18. 1) A particular paper is not uniquely attributed to a field; it can be, say, both environmental economics and game theory, so field weights can add up to more than 100%. Thus there is little “damage” done if a NEP editor recognizes a paper as belonging to his field in a way you do not agree with.
    2) The relevant software is ACIS and available here. By the very nature of the task at hand, it is quite complex. Contact Thomas Krichel if you want to contribute to it.
    3) The way the system was designed gives publishers a lot of incentive to participate with metadata. All the data in RePEc comes from them and is made available for free. But publishers care little about rankings, and this is where authors and institutions need to pressure them to make references available.

    I can understand that people have complaints. The way RePEc has always worked is that whoever has a complaint should also provide a way to fix the problem; plenty of improvements in RePEc services came about that way. Thus, if you find something incomplete, find a way to make it complete (see the sketch below for how publisher metadata is structured). Do not wait for us to do it: our hands are already full, and we do this as volunteers with other priorities.
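
    For context, the metadata that publishers supply to RePEc is written as ReDIF templates. A minimal sketch of a working paper template (all names, URLs, and values below are invented) might look like:

      Template-Type: ReDIF-Paper 1.0
      Author-Name: Jane Q. Economist
      Title: A Hypothetical Working Paper
      Abstract: A short summary supplied by the publisher.
      Classification-JEL: C21
      Keywords: example; metadata
      Creation-Date: 2009-10
      File-URL: http://example.org/papers/hyp0001.pdf
      File-Format: Application/pdf
      Handle: RePEc:hyp:wpaper:0001

    The Classification-JEL field is where JEL codes come from, which is why their scarcity in publisher data limits what we can do with field classifications.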

  19. economist76 says:

    What is RePEc trying to accomplish? The data from publishers is already available on the web, both through their own sites and through search engines. RePEc is unlikely to ever add any value over, e.g., Google Scholar, except by leveraging unique information. The only unique information you have will always be user-generated.

    The current non-trivial user-generated data are, to my knowledge:

    (1) being able to claim or reject ownership of works in the database and thus disambiguate name conflicts.

    (2) being able to reject citations that don’t match.

    (3) Download counts, which are by nature highly manipulable without risk of reputational consequences (unlike the specific reforms I suggested).

    That’s not much.

    If you want a volunteer to help implement ways to gather and validate user-generated info, I have the skills and I’ll do it. However, that seems to conflict with your stated principles, and I’m not interested in volunteering under the current principles, because the project then adds little value. The exception is rankings, which are now poorly done and unreliable, and thus harmful, because, as you state, publishers care little about author rankings yet they provide all the data.

    You say “if you find something is incomplete, find a way to make it complete.” Well, that’s why I took the time to make specific suggestions in the suggestion box. I’m even willing to implement them myself, but they seem to conflict with your principles. It’s easy enough to point the finger at publishers and say they should fix it, but they didn’t put up a website and issue rankings; RePEc did.

    As a side note, if I understand your formula description right, it probably does quite a bit of harm for 25 papers from one school to be placed in the wrong NEP report while being missed in the right one. No human editor “recognized” those papers as Post-Keynesian; it’s obvious that some automated tool mistakenly spat them out and the human editor didn’t fully read the list. That’s not a frivolous complaint from a fragile ego: many students now look these rankings up when considering schools, and that has real implications right down the line.

    I’ll not reply further, because I understand this is not a chat forum. But, while I appreciate your replies, my frustration with the site has not been reduced at all and seems unlikely to ever be reduced, given your current principles.

  20. Let me be clear about the principle: we privilege information from official sources, which are the publishers. We would welcome author input, but for that we need a good monitoring system. We daily catch authors who attribute to themselves works they have not written; this is rather easy to detect. Monitoring citations is several times more difficult.

    The citation analysis is performed by José Manuel Barrueco Cruz, who can certainly use some help.

    Now, as to what RePEc brings compared to other services: we unify structured data from many publishers (currently 1218) and we focus on economics, which reduces noise. Given that we have structured data, we can offer better search results (and improve Google Scholar results). We also offer email alerts for new working papers (NEP) and links between works, authors, and institutions. And there are more projects in the works.

  21. Bill Shobe says:

    I have a number of discussion papers published in more than one archive. This leads to two distinct records for the same paper, and the citations are divided between the two records. It would be nice to be able to consolidate two such records; Google Scholar provides such an option. If you felt this was a good idea, I might be able to help with implementing it.

  22. We consolidate automatically if 1) the papers are all in an author’s profile and 2) the titles are very close. When this does not apply, a request can be made by email to me, providing the handles of each relevant paper, article, or book chapter.

    Note that this is not a secret. Every IDEAS abstract page has a statement to this effect under “Corrections.”
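
    As an illustration, such an email would simply list the handles of the versions to be linked, for example (invented handles):

      RePEc:hyp:wpaper:0001                       (working paper version)
      RePEc:hyp:journl:v:1:y:2010:i:2:p:100-120   (published article version)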

  23. Asong Sim says:

    The Euclidian citation score favours those who write a few papers that get heavily cited, and it discourages attempts at scholarship that may not be fortunate enough to be published in top-tier journals and highly cited (see the sketch below). I have never understood why at least one self-citation is not counted in the rankings: an author should build on at least one of his/her previous works. I am therefore suggesting another poll to ask whether at least one self-citation should be included in the rankings.
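
    For reference, the Euclidian citation score is, as I understand it, the Euclidean length of the vector of per-paper citation counts:

      E = \sqrt{\sum_{i=1}^{n} c_i^2}

    where c_i is the number of citations to paper i. Squaring concentrates the weight: one paper with 10 citations gives E = 10, while ten papers with one citation each give only E = \sqrt{10}, about 3.2. That is why the score rewards a few heavily cited papers.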

  24. Citation counts are supposed to reveal how *others* appreciate one’s work; self-citations provide no information of that sort. Besides, allowing each author one self-citation would not change the rankings significantly, as almost everyone would get one.

  25. Lukas says:

    I like the new ranking based on publications from the last 10 years. What would be even more informative, in particular for assessing the quality of a PhD program, is the same ranking restricted to authors who graduated within the previous ten years. This would disregard authors who are currently very productive but graduated a very long time ago and thus no longer say much about current program quality.

  26. I agree, but there are currently only 2842 qualifying profiles. I think we need more mass for this to be credible.
