Papers from Sidcup
Graham Davey's website

‘Stickers’, ‘Jugglers’ and ‘Switchers & Dumpers’ – Which kind of researcher should you be?

3/20/2013

 
First published 04/12/2012 at http://grahamdavey.blogspot.co.uk
I often look back on my own research career with some surprise at where it’s all travelled to. When I was a PhD student I was a dyed-in-the-wool behaviourist loading rats into Skinner boxes and clichés into arguments. Cognitions didn’t exist – and even in the remote possibility that they might, they were of no use to a scientific psychology. I was a radical Skinnerian pursuing a brave new world in which behaviour was all that mattered and contingencies of reinforcement would win out against all the airy-fairy vagaries of other approaches to psychology. Just a few years on from this I was still wondering why my PhD thesis on the “determinants of the post-reinforcement pause on fixed-interval schedules in rats” hadn’t been nominated for a Nobel Prize! 

I’ve begun with this personal example because it emphasizes how relatively narrow interests (and views and approaches) can seem like they are the universe – and that is especially the case when you are personally invested in a specific piece of research like a PhD thesis. But what happens later on in our academic lives? Should we stay put and hone our skills in a focused research niche, or should we nervously wander out of that niche into new areas with new challenges requiring new skills?

It is certainly a question for young academics to think about. Stick with what you know, or get other strings to your bow? If you are a newly graduated PhD, you are more likely than not to be a “clone” of your supervisor, and that may well be a block on you getting a lectureship at the institution in which you did your research degree. But then most recruiting Departments will want to know that you are – as they put it – “capable of independent research” before appointing you. Do you go scrabbling for that last section in your thesis entitled “Future Directions” and try to stretch out your PhD research (often in a painfully synthetic way, like seeing how far some bubble-gum will stretch – even though the ‘amount’ there is still the same)? Or do you bite the bullet and try your newly-learnt skills on some new and different problems?

You have one career lifetime (unless you’re Buddhist!) – so should you diversify or should you focus? Let’s begin with those people who focus an entire research career in one specific area – “the stickers” – often concentrating on a small, limited number of research problems, but perhaps with the benefit of developing more and more refined (and sometimes more complex) theoretical models. Cripes – how boring! Take that approach and you’ll become one or more of the following: (a) The person who sits near the front at international conferences and begins asking questions with the phrase “Thank you for your very interesting talk, but…”, (b) That butcher of a referee who everyone knows, even though your reviews are still anonymous, (c) Someone who sits in Departmental recruitment presentations openly mocking the presentation of any applicant not in your specific area of research (usually by looking down at your clasped hands and shaking your head slowly from side to side while muttering words like “unbelievable” or “where’s the science?”), or, finally, you’ll become (d) Director of an RCUK National Research Centre.

So what about taking that giant leap for researcher-kind and diversifying? Well, first, it’s arguably good to have more than one string to your bow and become a research “juggler”. The chances are that at some point you’ll get bored with the programme of research that you first embarked on in early career. Having at least two relatively independent streams of research means you can switch your focus from one to the other. It also increases (a) the range of journals you can publish in, (b) the funding bodies you can apply to, and (c) the diversity of nice people you can meet and chat sensibly to at conferences. It can also be a useful way of increasing your publication rate in early mid-career when you’re looking for an Associate Editorship to put on your CV or a senior lectureship to apply for.

But there is more to diversifying than generating two streams of research purely for pragmatic career reasons. If you’re a tenured academic, you will probably, in principle, have the luxury of being able to carry out research on anything you want to (within reason) – surely that’s an opportunity that’s too good to miss? B.F. Skinner himself promoted the scientific principle of serendipity (a principle that seems to have gone missing from modern-day Research Methods textbooks) – that is, if something interesting crops up in your research, drop everything and study it! This apparently was how Skinner began his studies on response shaping, which eventually led to his treatise on operant conditioning. But diversity is not always a virtue. There are some entrepreneurial “switchers and dumpers” out there, who posit a new (and largely unsubstantiated) theory about something in the literature, and then move on to a completely new (and often more trending) area of research, leaving researchers of the former topic to fight, bicker and prevaricate, often for years, about what eventually turns out to be a red herring, or a blind alley, or a complete flight of fancy designed to grab the headlines at the time.

Now, you’ve probably got to the point in this post where you’re desperate for me to provide you with some examples of “stickers”, “jugglers” and “switchers and dumpers” – well, I think you know who some of these people are already, and I’m not going to name names! But going back to my first paragraph, if you’d told me as a postgraduate student about the topics I would be researching now, I would have been scornfully dismissive. But somehow I got here, and through an interesting and enjoyable pathway of topics, ideas, and serendipitous routes. Research isn’t just about persevering at a problem until you’ve tackled it from every conceivable angle; it’s also an opportunity to try out as many candies in the shop as you can – as long as you sample responsibly!

Discovering Facts in Psychology: 10 ways to create “False Knowledge” in Psychology

3/20/2013

 
First published 30/09/2012 on http://grahamdavey.blogspot.co.uk

There’s been quite a good deal of discussion recently about (1) how we validate a scientific fact (http://bit.ly/R8ruMg; http://bit.ly/T5JSJZ; http://bit.ly/xe0Rom), and (2) whether psychology – and in particular some branches of psychology – is prone to generating fallacious scientific knowledge (http://bit.ly/OCBdgJ; http://bit.ly/NKvra6). As psychologists, we are all trained (I hope) to be scientists – exploring the boundaries of knowledge and trying as best we can to create new knowledge. But in many of our attempts to pursue our careers and pay the mortgage, are we badly prone to creating false knowledge? Yes – we probably are! Here are just a few examples, and I challenge most of you psychology researchers who read this post to say you haven’t been a culprit in at least one of these processes!

Here are 10 ways to risk creating false knowledge in psychology.

1.  Create your own psychological construct. Constructs can be very useful ways of summarizing and formalizing unobservable psychological processes, but researchers who invent constructs need to know a lot about the scientific process, make sure they don’t create circular arguments, and must be in touch with other psychological research that is relevant to the understanding they are trying to create. In some sub-disciplines of psychology, I’m not sure that happens (http://bit.ly/ILDAa1).

2.  Do an experiment but make up or severely massage the data to fit your hypothesis. This is an obvious one, but is something that has surfaced in psychological research a good deal recently (http://bit.ly/QqF3cZ; http://nyti.ms/P4w43q).

3.  Convince yourself that a significant effect at p=.055 is real. How many times have psychologists tested a prediction only to find that the critical comparison just misses the crucial p=.05 value? How many times have psychologists then had another look at the data to see if it might just be possible that, with a few outliers removed, this predicted effect might be significant? Strangely enough, many published psychology papers are just creeping past the p=.05 value – and many more than would be expected by chance! Just how many false psychology facts has that created? (http://t.co/6qdsJ4Pm). (A quick simulation of how this kind of data-tidying inflates false positives appears at the end of this list.)

4.  Replicate your own findings using the same flawed procedure. Well, we’ve recently seen a flood of blog posts telling us that replication is the answer to fraud and poor science. If a fact can be replicated – then it must be a fact! (http://bit.ly/R8ruMg; http://bit.ly/xe0Rom) Well – no – that’s not the case at all. If you are a fastidious researcher and attempt to replicate a study precisely, then you are also likely to replicate the same flaws that gave rise to false knowledge. We need to understand the reasons why problematic research gives rise to false positives – that is the way to real knowledge (http://bit.ly/UchW4J).

5.  Use only qualitative methods. I know this one will be controversial, but in psychology you can’t just accept what your participants say! The whole reason why psychology has developed as a science is that it has developed a broad range of techniques for accessing psychological processes without having to accept at face value what a participant in psychological research has to tell us. I’ve always argued that qualitative research has a place in the development of psychological knowledge, but its place is in the early stages of that knowledge development; more objective methodologies may be required to understand more proximal mechanisms.

6.  Commit your whole career to a single effect, model or theory that has your name associated with it. Well, if you’ve invested your whole career and credibility in a theory or approach, then you’re not going to let it go lightly. You’ll find multiple ways to defend it, even if it's wrong, and waste a lot of other researchers’ time and energy trying to disprove you. Ways of understanding move on, just like time, and so must the intransigent psychological theorist.

7.  Take a tried and tested procedure and apply it to everything. Every now and then in psychology a new procedure surfaces that looks too good to miss. It is robust, tells you something about the psychological processes involved in a phenomenon, and you can get a publication by applying it to something that no one else has yet applied it to! So join the fashion rush – apply it to everything that moves, and some things that don’t (http://bit.ly/SX37Sn). No, I wasn’t thinking of brain imaging, but.... Hmmmm, let me think about that! (I was actually thinking about the Stroop!)

8.  If your finding is rejected by the first journal you submit it to, continue to submit it to journals until it’s eventually published. This is a nice way to ensure that your contribution to false knowledge will be permanently recorded. As academic researchers we are all under pressure to publish (http://bit.ly/AsIO8B), so if you believe your study has some genuine contribution to make to psychological science, don’t accept a rejection from the first journal you send it to. In fact, even if you don’t think your study has any real contribution to make to psychological knowledge at all, don’t accept a rejection from the first journal you send it to! Because you will probably get it published somewhere. I’d love to know what the statistics are on this, but I bet if you persist enough, your paper will get published.

9.  Publish your finding in a book chapter (non-peer-reviewed), or an invited review, or a journal special issue – all of which are likely to have an editorial “light touch”. Well, if you do it might not get cited much (http://t.co/D55VKWDm), but it’s a good way of getting dodgy findings (and dodgy theories) into the public domain.

10.  Do some research on some highly improbable effects - and hope that some turn up significant by chance. (http://bit.ly/QsOQNo) And it won’t matter that people can’t replicate it – because replications will only rarely get published! (http://bit.ly/xVmmOv). The more improbable your finding, the more newsworthy it will be, the more of a celebrity you will become, the more people will try to replicate your research and fail, the more you will be wasting genuine research time and effort. But it will be your 15 minutes of fame!
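To put some rough numbers on items 3 and 10, here is a minimal simulation sketch – my own illustrative Python, not code from any of the papers linked above, and the sample sizes and thresholds are arbitrary assumptions. It generates studies in which there is no true effect, “rescues” the ones that just miss p=.05 by trimming the most extreme observation from each group and re-testing, and finishes with the simple arithmetic behind testing many improbable hypotheses at once.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_experiments = 5000     # simulated studies in which there is NO true effect
n_per_group = 30         # arbitrary, assumed sample size
alpha = 0.05

honest_hits = 0          # significant on the first, planned test
tidied_hits = 0          # significant after a second look at "outliers"

for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)        # same population: any "effect" is false
    p = stats.ttest_ind(a, b).pvalue

    if p < alpha:
        honest_hits += 1
        tidied_hits += 1
    elif p < 0.10:
        # Item 3: the result "just misses" significance, so have another look –
        # drop the most extreme observation from each group and test again.
        a2 = np.delete(a, np.argmax(np.abs(a - a.mean())))
        b2 = np.delete(b, np.argmax(np.abs(b - b.mean())))
        if stats.ttest_ind(a2, b2).pvalue < alpha:
            tidied_hits += 1

print(f"False-positive rate, planned analysis only:  {honest_hits / n_experiments:.3f}")
print(f"False-positive rate after outlier 'tidying': {tidied_hits / n_experiments:.3f}")

# Item 10: run 20 independent tests of truly null, "highly improbable" effects
# and compute the chance that at least one is significant purely by luck:
print(f"P(at least one hit in 20 null tests) = {1 - (1 - alpha) ** 20:.2f}")
```

The first rate sits at the nominal 5%; the second is necessarily higher, which is exactly the worry in item 3; and the final line works out at roughly .64 – better than even odds of a headline-worthy “effect” from twenty null hypotheses.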

Finally, if you haven’t been able to generate false psychological knowledge through one of these 10 processes, then try to get your finding included in an Introduction to Psychology textbook. Once your study is enshrined in the good old Intro’ to Psych’ text, then it’s pretty much going to be accepted as fact by at least one and maybe two future generations of psychologists. And once an undergrad has learnt a “fact”, it is indelibly inscribed on their brain and is faithfully transported into future reality!

"An effect is not an effect until it is replicated" - Pre-cognition or Experimenter Demand Effects

3/20/2013

 
First published 15/09/2012 at http://grahamdavey.blogspot.co.uk
There has been much talk recently about the scientific process in the light of recent claims of fraud against a number of psychologists (http://bit.ly/R8ruMg), and also the failure of researchers to replicate some controversial findings by Daryl Bem purportedly showing effects reminiscent of pre-cognition (http://bit.ly/xVmmOv). This has led to calls for replication to be the cornerstone of good science – basically “an effect is not an effect until it’s replicated” (http://bit.ly/UtE1hb). But is replication enough? Is it possible to still replicate “non-effects”? Well, replication probably isn’t enough. If we believe that a study has generated ‘effects’ that we think are spurious, then failure to replicate might be instructive, but it doesn’t tell us how or why the original study came by a significant effect. Whether the cause of the false effect is statistical or procedural, it is still important to identify this cause and empirically verify that it was indeed causing the spurious findings. This can be illustrated by a series of replication studies we have recently carried out in our experimental psychopathology labs at the University of Sussex.

Recently we’ve been running some studies looking at the effects of procedures that generate distress on cognitive appraisal processes. These studies are quite simple in design and highly effective at generating negative mood and distress in our participants (participants are usually undergraduate students participating for course credits), and pilot studies suggest that experienced distress and negative mood do indeed facilitate the use of clinically-relevant appraisal processes.

The first study we did was piloted as a final year student project. It produced nice data that supported our predictions – except for one thing. The two groups (distress group and control group) differed significantly on pre-manipulation baseline measures of mood and other clinically-relevant characteristics. Participants due to undertake the most distressing manipulation scored significantly higher on pre-experimental clinical measures of anxiety (M=6.9, SD 3.6, v M=3.8, SD 2.5) [F(56)=4.01, p=.05], and depression (M=2.2, SD 2.6, v M=1.1, SD 1.1) [F(56)=4.24, p=.04]. Was this just bad luck? The project student had administered the questionnaires herself prior to the experimental manipulations, and she had used a quasi-random participant allocation method (rotating participants to experimental conditions in a fixed pattern).

Although our experimental predictions had been supported (even when pre-experimental baseline measures were controlled for), we decided to replicate the study, this time run by another final year project student. Lo and behold, the participants due to undertake the distressing task scored significantly higher on pre-experimental measures of anxiety (M=9.1, SD 4.1, v M=6.9, SD 3.0) [F(56)=6.01, p=.01], and depression (M=4.3, SD 3.7, v M=2.4, SD 2.4) [F(56)=5.09, p=.02]. Another case of bad luck? Questionnaires were administered and participants allocated in the same way as the first study.

Was this a case of enthusiastic final year project students, determined to complete a successful project, in some way conveying information to the participants about what they were to imminently undergo? Basically, was this an implicit experimenter demand effect being conveyed by an inexperienced experimenter? To try and clear this up, we decided to replicate again, this time run by an experienced postdoc researcher – someone who was wise to the possibility of experimenter demand effects, aware that this procedure was possibly prone to these demand effects, and who would presumably be able to minimize them. To cut a long story short – we replicated the study again – but still replicated the pre-experimental group differences in mood measures! Participants who were about to undergo the distress procedure scored higher than participants about to undergo the unstressful control condition.

At this point, we were beginning to believe in pre-cognition effects! Finally, we decided to replicate again. But this time, the experimenter would be entirely blind to the experimental condition that a participant was in. Sixty sealed packs of questionnaires and instructions were made up before any participants were tested – half contained the instructions for the stressful condition and half for the control condition, together with information for the participant about how to complete the questionnaires. The experimenter merely allowed the participant to choose a pack from a box at the outset, and was entirely unaware which condition the participant was running during the experiment. To cut another long story short – to our relief and satisfaction, the pre-experimental group differences in anxiety and depression measures disappeared. It wasn’t pre-cognition after all – it was an experimenter demand effect.
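For what it’s worth, the blinding step described above is easy to sketch in code. This is only an illustrative outline under my own assumptions (pack numbers on the envelopes, a simple lab log kept by the experimenter) – it is not the actual materials or software used in these studies – but it shows the essential design point: the condition sequence is fixed and concealed before the first participant arrives, so nothing the experimenter does at baseline can depend on it.

```python
import random

# Make up all 60 sealed packs before any participant is tested:
# half contain the stressful-task instructions, half the control instructions.
N_PER_CONDITION = 30
conditions = ['stress'] * N_PER_CONDITION + ['control'] * N_PER_CONDITION

random.seed(2012)        # fixed once, before data collection begins
random.shuffle(conditions)

# Each pack carries only an uninformative number on the outside; the key
# linking pack number to condition is sealed away until testing is complete.
allocation_key = {pack_number: condition
                  for pack_number, condition in enumerate(conditions, start=1)}

def record_choice(participant_id, pack_number, log):
    """The experimenter records only the pack number the participant drew."""
    log[participant_id] = pack_number   # conditions are unblinded only at analysis

# Example: participant 1 draws pack 17 from the box.
lab_log = {}
record_choice(participant_id=1, pack_number=17, log=lab_log)
```

Only once all the data are in is the allocation key consulted to label each participant’s condition – the arrangement that, in the fourth run described above, made the spurious baseline differences disappear.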

The point I’m making is that replication alone may not be sufficient to identify genuine effects – you can also replicate “non-effects” quite effectively - even by actively trying not to, and even more so by meticulously replicating the original procedure. If we have no faith in a particular experimental finding, it is incumbent on us as good scientists to identify the factor or factors that gave rise to that spurious finding wherever we can.

Should You Publish Your Undergraduate Students' Projects?

3/17/2013

 
Most academics and researchers now rely on their undergraduate students’ final year projects as an important research resource. These projects provide opportunities to test out new procedures, methodologies and theories at relatively low cost to the researcher. Nevertheless, no matter how closely you supervise this research, there is still a nagging doubt that you have delegated important research to a relatively inexperienced individual. How do you decide whether the research they have delivered is worthy of writing up and publishing? Below is a flow-chart that allows the inexperienced junior lecturer to make some decisions about publishing an undergraduate project[1].
[Flow-chart: deciding whether to publish an undergraduate project]
[1] This flow-chart is designed to ensure optimal career development for junior and mid-career academics and researchers.

The Evils of Journal Scope-Shrinkage: The Example of Clinical Psychology

3/17/2013

 
First published 27/01/2012 at http://grahamdavey.blogspot.co.uk
I’ll come straight to the point. The more that journals have to introduce demand management strategies, the more they end up shrinking their scope. The more they shrink their scope, the more they force research into a cul-de-sac and isolate it from cross-fertilization with other core areas of their over-arching discipline. These days there are more and more researchers, all of whom are under pressure to do research and to publish it – for the sake of both their current jobs and their future careers. Journals find themselves overwhelmed with submissions, to the point where many APA journals now have 80%+ rejection rates. So how do you manage demand? Well, at least some journals manage their demand by shrinking their scope. Demand management matters because it reduces the costs of managing a submission through the journal’s editorial system, and the costs incurred by the Associate Editors who manage the peer review process. For most traditional journals that have a print as well as an on-line version, increased numbers of submissions mean increased costs against a fixed publication income. Effectively, scientific publishers believe they are spending good money having to find reasons to reject large numbers of submissions that are of acceptable scientific quality but will never themselves earn money for that publisher.

So you shrink your scope. Often scope-shrinkage has a relatively small impact. But in some areas of psychology it can have a significant impact depending on how a journal redefines its scope. I am an experimental psychopathology researcher. The majority of my research is conducted as an experimental psychologist on a subject matter that is psychopathology, and I have traditionally published in clinical psychology journals – which is a good fit for the subject matter of my research and also gets my research read by clinical psychologists.

But even before scope-shrinkage, I’ve sometimes encountered difficulties publishing in clinical psychology journals because I’ve used analogue rather than clinical samples or my research has not been viewed by editors as being relevant to clinical interventions. They might just as well have said “You’re not a clinical psychologist, and your research can’t have any relevance to clinical populations because your participants weren’t a clinically diagnosed sample, and so your research is of no interest to the clinical research community!” Harsh – but that is the feeling I got.

Well, now that’s official – at least for some Elsevier journals. In 2010 Behaviour Research & Therapy – that traditional bastion of experimental psychopathology research – posted a very brief editor’s note stating “Behaviour Research & Therapy encompasses all of what is commonly referred to as cognitive behaviour therapy (CBT)” with a revised focus on processes that had a direct implication for treatment and the evaluation of empirically-supported interventions (Vol 48, iii, 2010). In effect, it had become a CBT evaluation journal. An email exchange I had with the editor confirmed that this re-scope was a consequence of the large number of submissions to the journal. To be fair, the editor did say that “the goal is not to eliminate research on experimental psychopathology, but to try to have it more ‘directly’ related to prevention and treatment”.

So where do I now go to publish my psychopathology research if it’s not clearly intervention-related? BRAT’s editor did say that “a final sentence in the Discussion would not suffice” to make any research intervention relevant. Fair enough. Most of my research is on the aetiology of anxiety disorders, so one journal that I’ve published in quite frequently is another Elsevier journal, Journal of Anxiety Disorders. I submitted a manuscript in June 2010 – around the same time that BRAT had shrunk its scope to intervention-relevant papers. I received a regretful email from the action editor immediately after submission saying “we have made a decision that we will no longer review manuscripts based solely on undiagnosed or analogue samples. This decision can be found within the Editorial Guidance paragraph on the back cover of the journal. Consequently, I will be unable to accept it for publication”.

Scope-shrinkage yet again. I’m sure that these decisions about journal scope were all taken in the best of faith and genuinely meant to help deal with and manage demand, but I can’t help but think of the potential restrictions that changes such as these will place on the discipline-wide exchange of ideas and information that seeds genuine progress in any applied area. OK, so I’m now miffed that I can’t easily publish any more in journals that I used to consider automatic outlets for my research, but there must surely be a bigger and wider cost. As we get more journals with increasingly narrower scopes, it is likely to lead to researchers reading only those journals that have a direct relevance to their research and areas of interest. There could well be significantly fewer left-field ideas, fewer opportunities for the cross-fertilization of ideas. It is also likely to lead to the entrenchment of existing paradigms of research within specific areas – especially applied areas such as clinical psychology where theoretical and empirical sharpness can often be compromised by the need for serviceable outcomes.

During our own weekly lab meetings, I always bring the latest copy of Quarterly Journal of Experimental Psychology along as soon as it’s published, and we look through it for ideas that have relevance to the psychopathology processes that we’re researching. This has already been the source of some exciting new ways for us to conceptualise and study the psychopathology processes we’re interested in. With the scope-shrinkage currently occurring in at least some important clinical psychology journals, I wonder where new ways of thinking about clinically-related research problems will come from unless those researchers who publish in these journals are actively scouring the contents of journals beyond their immediate clinical remit.

    Author

    Graham C. L. Davey, Ph.D. is Professor of Psychology at the University of Sussex, UK. His research interests extend across mental health problems generally, and anxiety and worry specifically. Professor Davey has published over 140 articles in scientific and professional journals and written or edited 16 books including Psychopathology; Clinical Psychology; Applied Psychology; Complete Psychology; Worrying & Psychological Disorders; and Phobias: A Handbook of Theory, Research & Treatment. He has served as President of the British Psychological Society, and is currently Editor-in-Chief of Journal of Experimental Psychopathology and Psychopathology Review. When not writing about psychology he watches football and eats curries.
