Tuesday, March 15, 2016

The impact factor can undermine scholarly quality

Several editors of traditional journals are clearly aware of the problems with the impact factor. For example, the latest issue of the European Journal of Pain carries an editorial on the weakening relationship between impact factor and citations: "Twenty years after: Interesting times for scientific editors" by Luis Garcia-Larrea, Editor-in-Chief, EJP. Similar pieces are the Editor's Perspective "The End of Journals" by Harlan M. Krumholz in Circulation: Cardiovascular Quality and Outcomes, and Research Policy, where the editor wrote in February 2016: "Over recent years, JIF has become the most prominent indicator of a journal's standing, bringing intense pressure on journal editors to do what they can to increase it."
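For readers who have not met the metric itself, the Journal Impact Factor for a year Y is nothing more than a two-year citation average:

$$\mathrm{JIF}_Y = \frac{\text{citations received in year } Y \text{ by items published in } Y-1 \text{ and } Y-2}{\text{number of citable items published in } Y-1 \text{ and } Y-2}$$

As a worked example, a journal whose 2014 and 2015 items were cited 300 times in 2016, spread over 150 citable items, gets a 2016 JIF of 2.0. The number is an average over a typically very skewed citation distribution, so it says little about any individual article in the journal.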


This Springer book gives a very good review of the issue, and its conclusion is full of plainly stated information:


"The Impact Factor has dominated research evaluation far too long6 due to


its availability and simplicity, and the h-index has been popular because of a


similar reason: the promise to enable the ranking of scientists using only one


number. For policy-makers—and, unfortunately, for researchers as well—it


is much easier to count papers than to read them. Similarly, the fact that these


indicators are readily available on the web interfaces of the Web of Science


and Scopus add legitimacy to them in the eyes of the research community."
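To make the "ranking of scientists using only one number" concrete: the h-index is the largest h such that a researcher has h papers each cited at least h times. A minimal sketch in Python, with invented citation counts for illustration:

def h_index(citations):
    # Rank the papers from most to least cited
    ranked = sorted(citations, reverse=True)
    # h is the deepest rank at which a paper still has at least `rank` citations
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times give h = 4
print(h_index([10, 8, 5, 4, 3]))  # prints 4

Nothing in the calculation ever asks what the papers say, which is exactly the chapter's point: it is easier to count papers than to read them.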


The chapter closes with the worst side effect of the impact factor:


"Moreover, the


scientific community has been, since the beginning of the twentieth century,


independent when it comes to research evaluation, which was performed


through peer-review by colleagues who understood the content of the


research. We are entering a system where numbers compiled by private


firms are increasingly replacing this judgment. And that is the worst side


effect of them all: the dispossession of researchers from their own evaluation

methods which, in turn, lessens the independence of the scientific community.[min utheving]"
Reference: the Open Access version, pp. 150-151, of the book chapter "The Use of Bibliometrics for Assessing Research: Possibilities, Limitations and Adverse Effects" by Stefanie Haustein and Vincent Larivière, in the book "Incentives and Performance" (Springer, 2015).


Attitudes are changing little by little, and on his blog Michael Eisen reports a dramatic shift in attitudes toward the use of preprints in biology. He writes:


I honestly don’t know how this happened. Pre-prints are close to invisible in biology (we didn’t really have a viable pre-print server until a year or so ago) and other recent efforts to promote pre-print usage in biology have been poorly received. There is lots of evidence from social media that most members of the community fall somewhere in the skeptical to hostile range when discussing pre-prints. Some of it is selection bias – people hostile to pre-prints weren’t likely to agree to come to a meeting on pre-prints that they (mostly) had to pay their own way to attend.
But I think it’s bigger than that. I think the publishing zeitgeist may have finally shifted.
Source: http://www.michaeleisen.org/blog/?p=1863
The simplicity of the Impact Factor and the h-index saves labour, but it deserves criticism. I do not know whether it helps to use Article Influence, which has since been developed further to the author level (a sketch of its definition follows the quotation below), since it too is based on citations and on a peer-review process that is itself heavily criticized. The Journal of Medical Science and Health has an article that calls citations into question:


"The whole onus of bibliometric indicators rests on citation and citation gives only an indication of impact. There is no linear correlation between citation and quality. "


All the recent articles I have looked at are strongly critical of the impact factor, so the way it is used in research evaluation will probably change in the years ahead.





