About three years ago Andy Field and I decided there was a gap in the scientific journals market for a journal specifically publishing experimental psychopathology research – a journal willing to publish a range of good quality, empirically-based studies that contributed to our understanding of psychopathology and its treatment, including relevant studies conducted on non-clinical populations (especially since many clinical psychology journals had recently and purposefully restricted their scope to clinical populations – an issue that I’ve posted about before).
We decided that we wanted complete control over the journal – its format, the nature of the material we published, and how often we published. We wanted to offer the journal to researchers and institutions as cheaply as we could, to reach out directly to the relevant research community and ask what kind of journal and content they would like (rather than be driven by a business model that sought only to sell the journal to librarians and institutions – a model that seems to be the norm for most large international scientific publishers), and to provide a range of open access options.
That journal, the Journal of Experimental Psychopathology (http://jep.textrum.com), is now about to go into its fifth volume in 2014, and has already grown from four issues a year to five.
Now here comes the dilemma. We are at a point where we can now apply for ISI registration. If successful that would mean the journal would be listed in the Thomson-Reuters Web of Knowledge index – arguably the most widely used scientific indexing database in the world. That would, of course, make the articles published by our authors more widely available to researchers than they would previously have been.
But the downside (some, sadly, might call it an upside!) is that acceptance into the Web of Knowledge means your journal will be given an impact factor and listed in a league table of journals publishing in the same area as you. We all know that the impact factor of a journal is “highly valued” – the higher that score, the higher the supposed scientific quality of your journal and the greater the kudos to those researchers who publish in it. This places immense pressure on researchers – especially young, up-and-coming researchers – to publish primarily in high impact journals, for the sake of their “academic integrity” and, more importantly, for the sake of their careers (and, of course, ultimately their salary, their ability to pay their mortgages and support their families).
Who holds impact factors in highest esteem is a moot point. It is probably not researchers themselves – the drive to publish in high impact journals is more likely a pressure imposed on researchers by others. Publishing in “high impact” journals is sold to us as the gold standard for good research by university administrators, research funding bodies, research assessment exercises, librarians, and even the journal publishers themselves (there is hardly a journal website these days that doesn’t prominently display its impact factor on its home page).
But here lies the dilemma. Once the Journal of Experimental Psychopathology has an impact factor, it will be judged by its position in the impact league table, and this will immediately impose pressure on us to take steps to move the journal up that table. Because we are an e-journal and are not subject to the same space and print-run limitations as paper journals, we can effectively publish all articles that our reviewers and associate editors believe are well conducted, well analyzed, relevant, and provide a contribution to knowledge – however small. And this is what we currently do. Once we have ISI registration, there is an immediate temptation to set targets that will “weed out” those articles likely to be cited only rarely – even though they are well conducted and have been accepted through peer review. How many times have all of us, as researchers, received a decision letter from a journal editor saying something to the effect of “your submission was well received, but as you know we receive a great number of submissions and we can only accept a minority…”? Most journals pride themselves on the size of their rejection rates! That is quite strange when you consider that they ought to be encouraging researchers to submit articles to them – so are they really just trying to impress the librarians who buy their subscriptions?
What I have described will be just one immediate consequence for us of acquiring an impact factor: do we decide not to publish perfectly acceptable pieces of research that we judge may not be well cited (with the emphasis on the word “judge”)? That in itself would make life more difficult for the many researchers who already find it hard to find outlets for their perfectly acceptable research.
Judgmental processes like this also distort the scientific process. As Nobel prize winner Randy Schekman recently argued, pressure to publish in high impact “elite” journals encourages researchers to cut corners and pursue trendy fields of science instead of doing the more important groundwork that science requires – a problem exacerbated by editors who are often not active scientists. It is arguably the less well-cited research that provides this groundwork for science and that matters for developing consensus views of accepted knowledge through converging evidence. Yet this is exactly the kind of research most likely to be rejected by journals desperate to protect their impact factor.