
Where’s the Psychology in the Medical Curriculum – and Why does it Matter?

3/20/2013

 
First published 27/02/2013 at http://grahamdavey.blogspot.co.uk
That's rather an extreme blog post title, but it was inspired by the American Psychiatric Association's (APA) recent comment that "Many of the revisions in DSM-5 will help psychiatry better resemble the rest of medicine". This alone would be enough to send shivers down the spine of most psychology-minded mental health practitioners, but it also led me to think about where that might leave psychology as a rather different knowledge-based approach to understanding and treating mental health problems.

Specifically, if the APA want to impose a medical model on mental health, then what will our doctors and physicians be learning about how to deal with their patients with mental health problems? The incremental implications are immense. It is not just that mental health is being aligned with medicine on such an explicit basis; the issue is compounded by the fact that medical training still pays lip service to training doctors in psychological knowledge and, in particular, to a psychological approach to mental health. So has medicine taken the decision to align mental health diagnosis and treatment to fit the constraints of current medical training (rather than vice versa)?

I returned to a President's column I wrote in 2002 about the state of psychology teaching in the UK medical curriculum. The same points I made then seem to apply now. The medical curriculum is not constructed in a way that provides an explicit slot for psychology or psychological knowledge. Even though a recent manifesto for the UK medical curriculum (Tomorrow’s Doctors, 2009) makes it clear that medical students should be able to “apply psychological principles, method and knowledge to medical practice” (p15), there is probably no practical pressure for this to happen. Given that the ‘Tomorrow’s Doctors’ document does advocate more behavioural and social science teaching in the medical curriculum, I suspect that what happens in practice is that a constrained slot for ‘non-core medical teaching’ gets split up between psychology, social science and disciplines such as health economics. If a medical programme decides to take more sociology (because there are sociologists available on campus to teach it) – then there will be less psychology.

The second point I made then related to the expectations of medical students, and was illustrated by a QAA report on a well-respected medical school, which made the point that:

“...there was a student perception that, in Phase I, the theoretical content relating to the social and behavioural sciences was too large. Particular concern was expressed about aspects of the Health Psychology Module....a number of students suggested that the emphasis placed upon theoretical aspects of these sciences in Phase I was onerous”

Well – death to psychology! My own experience of teaching medical students is that they often have a very skewed perception of science, and in particular, biological science. Interestingly, the ‘Tomorrow’s Doctors’ document advises that medical students should be able to ‘apply scientific method and approaches to medical research’ (p18). But in my experience medical students find it very difficult to conceptualize scientific method unless it is subject-matter relevant – i.e. biology-relevant. I have spent many hours trying to explain to medical students that scientific method can be applied to psychological phenomena that are not biology based – as long as certain principles of measurement and replicability can be maintained.

But there has been a more recent attempt to define a core curriculum for psychology in undergraduate medical education. This was the report from the Behavioural & Social Sciences Teaching in Medicine (BeSST) Psychology Steering Group (2010) (which I believe to be an HEA Psychology Network group). I am sure this report was conducted with the best of intentions, but I must admit I think its core curriculum recommendations are bizarre, and entirely miss the point of what psychology has to offer medicine! It is like someone has gone through a first-year Introduction to Psychology textbook and picked out interesting things that might catch the eye of a medical student – piecemeal! For example, the report claims that learning theory is important because it might be relevant to “the acquisition and maintenance of a needle phobia in patients who need to administer insulin” (p30). That is both pandering to the medical curriculum and massively underselling psychology as a paradigmatic way of understanding and changing behaviour!

Medical students need to understand that psychology is an entirely different, and legitimate, method of knowledge acquisition and understanding from that of biological science. Not all mental health problems are reducible to biological diagnoses, biological explanations or medical interventions, and attempts by the APA to shift our thinking in that direction are either delusional or self-promoting. What is most disappointing from the point of view of the development of mental health services is the impact that entrenched medically-based views such as those of the APA will have on the already introverted medical curriculum. Doctors do need to learn about medicine, but they also need to learn that mental health needs to be understood in many ways – very many of which are not traditionally biological in their aetiology or their cure.

Criticisms of the DSM Development Process

3/20/2013

 
First published 21/02/2013 at http://grahamdavey.blogspot.co.uk
Another short piece written as a Focus Point for the second edition of my Psychopathology textbook (due to be published late 2013).


DSM regularly undergoes an intensive revision process to take account of new research on mental health problems and to refine the diagnostic categories from earlier versions of the system. One would assume that this would be a deliberate and objective process that could only further our understanding of psychopathology, and that is certainly the intention of the majority of those involved. However, at least some people argue that the process of developing a classification system such as DSM can never be entirely objective, free from bias, or free from corporate or political interests. Allen Frances and Thomas Widiger were two individuals who were prominent in the development of the fourth edition of the DSM, and they have written a fascinating account of the lessons they believe should be learned from previous attempts to revise and develop mental health classification systems (Frances & Widiger, 2012). They make the following points:

1.         As the number of mental health clinicians grows, so too does the number of life conditions that work their way into becoming disorders. This is because the proliferation of diagnostic categories tends to follow practice rather than guide it.

2.         Because we know very little about the true causes of mental health problems, it is easier and simpler to proliferate multiple categories of disorder based on relatively small differences in descriptions of symptoms.

3.         Most experts involved in developing DSM are primarily worried about false negatives (i.e. the missed diagnosis or patient who doesn’t fit neatly into the existing categorizations), and this leads to either more inclusive diagnostic criteria or even more diagnostic categories. Unfortunately, experts are relatively indifferent to false positives – patients who receive unnecessary diagnosis, treatment, and stigma – and so are less likely to be concerned about over-diagnosis.

4.         Political and economic factors have also shaped the ‘medical model’ view of psychopathology on which DSM is based, and also contributed to the establishment and proliferation of diagnostic categories. For example, the pharmaceutical industry benefits significantly from the sale of medications for mental health problems, and its profits will be dependent on both (1) conceptions of mental health based on a medical model that implies a medical solution, and (2) a diagnostic system that will err towards over-diagnosis rather than under-diagnosis (see Pilecki, Clegg & McKay, 2011).

Changes in DSM-5

3/20/2013

 
First published 13/02/2013 at http://grahamdavey.blogspot.co.uk
As promised, it's my intention to post some new pieces written for the second edition of my Psychopathology textbook (due to be published late 2013). This post begins that process with a new section written to introduce and evaluate DSM-5 from the Chapter on Classification & Assessment in Clinical Psychology.

"Published in 2013, DSM-5 arguably represents the most comprehensive revision of the DSM so far, and it has involved many years of deliberation and field trials to determine what changes to mental health classification and diagnosis are essential and empirically justifiable (Main chapter headings for DSM-5 are provided in Table 1).
[Table 1: Main chapter headings of DSM-5]
The main changes between DSM-5 and its predecessor (DSM-IV-TR) are listed in Table 2.
[Table 2: Main changes between DSM-5 and DSM-IV-TR]
First, previous versions of DSM placed mental health problems on a number of different axes representing clinical disorders (Axis I), developmental and personality disorders (Axis II), or general medical conditions (Axis III). This multiaxial system has been scrapped – largely because there was not enough evidence to justify the distinctions between the axes. Instead, in DSM-5 clinicians will be encouraged to rate severity of symptoms along continuums developed for each disorder.

Secondly, the importance of some disorder categories has been recognised either by allocating them to their own chapter or by recognising them as new individual diagnostic categories. For example, Obsessive-Compulsive Disorder (OCD) is recognized as a significant mental health problem by being allocated its own chapter in DSM-5, and new diagnostic categories within this chapter include Hoarding Disorder (see Chapter 6) and Excoriation Disorder (skin-picking disorder). Similarly, DSM-5 has a new chapter on Trauma & Stress-Related Disorders that now includes Post-Traumatic Stress Disorder (PTSD). DSM-5 focuses more on the behavioural symptoms that accompany PTSD and proposes four distinct diagnostic clusters instead of the previous three.

Thirdly, major changes have been made to the criteria for diagnosing Autism Spectrum Disorder (ASD), Personality Disorders, Specific Learning Disorders, and Substance Use Disorders. Autism Spectrum Disorder has become a diagnostic label that will incorporate many previously separate labels (e.g. Asperger’s disorder, childhood disintegrative disorder, pervasive developmental disorder) in an attempt to provide more consistent and accurate diagnosis for children with autism (see Chapter 16). DSM-5 will retain the categorical model for Personality Disorders outlined in DSM-IV-TR, but rating scales are provided to assess how well an individual’s symptoms fit within these different types (Chapter 12). The new Specific Learning Disorder category is broadened to represent distinct disorders which interfere with the acquisition and use of one or more of a number of academic skills, including oral language, reading, written language or mathematics (Chapter 15), and the new Substance Use Disorder category will combine the previous DSM-IV-TR categories of substance abuse and substance dependence into one overarching disorder.

Some other important changes include (1) the elevation of Binge Eating Disorder from an appendix to a recognized diagnostic category, (2) Disruptive Mood Dysregulation Disorder as a new category for diagnosing children who exhibit persistent irritability and behavioural outbursts, and (3) the removal of the “bereavement exclusion” from the diagnosis of Major Depression; this means that depressive symptoms lasting less than two months following the death of a loved one can be included amongst the criteria for diagnosing Major Depression, and reflects the recognition that bereavement is a severe psychological stressor that can precipitate major depression.

Criticisms of Changes in DSM-5:  While these most recent changes to the DSM have been extensively discussed and researched, many of the revisions have been received critically, and it is worth discussing some of these criticisms because they provide an insight into the difficulties of developing a mental disorders classification system that is fair and objective.

First, many of the diagnostic changes will reduce the number of criteria necessary to establish a diagnosis. This is the case with Attenuated Psychosis Syndrome, Major Depression, and Generalized Anxiety Disorder, and this runs the risk of increasing the number of people who are likely to be diagnosed with common mental health problems such as anxiety and depression. It is a debatable point whether increases in the number of diagnosed cases are a good or a bad thing, but they are likely to have the effect of “medicalizing” many everyday emotional experiences (such as ‘grief’ following a bereavement, or worry following a stressful life event), and creating “false-positive” epidemics (Frances, 2010).

Secondly, DSM-5 has introduced disorder categories that are designed to identify populations that are at risk for future mental health problems, and these include Mild Neurocognitive Disorder (which would diagnose cognitive decline in the elderly) and Attenuated Psychosis Syndrome (seen as a potential precursor to psychotic episodes). Once again, these initiatives run the risk of medicalizing states that are not yet full-blown disorders, and could facilitate the diagnosis of normal developmental processes as psychological disorders.

Thirdly, there are concerns that changes in diagnostic criteria will result in lowered rates of diagnosis for some particularly vulnerable populations. For example, when the DSM-5 criteria for Autism Spectrum Disorder were applied to samples of children with DSM-IV-TR diagnoses that are no longer available in DSM-5, around 9% of these children would have lost their autism diagnosis under the new DSM-5 criteria (Huerta, Bishop, Duncan, Hus & Lord, 2012). Similar concerns have been voiced about changes to Specific Learning Disorder diagnostic criteria in DSM-5, and the possibility that deletion of the term dyslexia as a diagnostic label will disadvantage individuals with specific phonologically-based, developmental reading disabilities (http://www.disabilityrightsohio.org/news/dsm5-dyslexia-june-2012).

Finally, two enduring criticisms of DSM generally that have continued to be fired specifically at DSM-5 have been that (1) DSM-5 has continued the process of attempting to align its diagnostic criteria with developments and knowledge from neuroscience (Regier, Narrow, Kuhl & Kupfer, 2011), when there is in fact very little new evidence from neuroscience that helps define specific mental health problems, and (2) most mental health problems (and psychological distress generally) are now viewed as dimensional, so any criteria defining a diagnostic cut-off point will be entirely arbitrary. DSM-5 has attempted to recognise the importance of the dimensionality of symptoms by introducing dimensional severity rating scales for individual disorders. But as we have seen from the discussion above, each iterative change in DSM diagnostic criteria changes the number and range of people who will receive a diagnosis, and this makes it increasingly hard to accept diagnostic categories as valid constructs (e.g. Kendler, Kupfer, Narrow, Phillips & Fawcett, 2009).

Despite its conceptual difficulties and its many critics, DSM is still the most widely adopted classification and diagnostic system for mental health problems. Such a system is needed for a number of reasons: to determine the allocation of resources and support for mental health problems, to serve circumstances that require a legal definition of mental health problems, and to provide a common language that allows the world to share and compare data on mental health problems. Having said this, there are still many significant problems associated with DSM, and diagnosing and labelling people with specific psychological disorders raises other issues to do with stigma and discrimination. Indeed, we should be clear that diagnostic systems are not a necessary requirement for helping people with mental health problems to recover, and many clinical psychologists prefer not to use diagnostic systems such as DSM-5, but instead prefer to treat each client as someone with a unique mental health problem that can best be described and treated using other means such as case formulation (see Section 2.3 for a fuller description and examples of case formulation)."

‘Stickers’, ‘Jugglers’ and ‘Switchers & Dumpers’ – Which kind of researcher should you be?

3/20/2013

 
First published 04/12/2012 at http://grahamdavey.blogspot.co.uk
I often look back on my own research career with some surprise at where it’s all travelled to. When I was a PhD student I was a dyed-in-the-wool behaviourist loading rats into Skinner boxes and clichés into arguments. Cognitions didn’t exist – and even in the remote possibility that they might, they were of no use to a scientific psychology. I was a radical Skinnerian pursuing a brave new world in which behaviour was all that mattered and contingencies of reinforcement would win out against all the airy-fairy vagaries of other approaches to psychology. Just a few years on from this I was still wondering why my PhD thesis on the “determinants of the post-reinforcement pause on fixed-interval schedules in rats” hadn’t been nominated for a Nobel Prize! 

I’ve begun with this personal example, because it emphasizes how relatively narrow interests (and views and approaches) can seem like they are the universe – and that is especially the case when you are personally invested in a specific piece of research like a PhD thesis. But what happens later on in our academic lives? Should we stay focused and hone our skills in a focused research niche, or should we nervously wander out of that niche into new areas with new challenges requiring new skills? 

It is certainly a question for young academics to think about. Stick with what you know, or get other strings to your bow? If you are a newly graduated PhD, you are more likely than not to be a “clone” of your supervisor, and that may well be a block on you getting a lectureship at the institution in which you did your research degree. But then most recruiting Departments will want to know that you are – as they put it – “capable of independent research” before appointing you. Do you go scrabbling for that last section in your thesis entitled “Future Directions” and try to stretch out your PhD research (often in a painfully synthetic way, like seeing how far some bubble-gum will stretch – even though the ‘amount’ there is still the same)? Or do you bite the bullet and try your newly-learnt skills on some new and different problems?

You have one career lifetime (unless you’re Buddhist!) – so should you diversify or should you focus? Let’s begin with those people who focus an entire research career in one specific area – “the stickers” – often concentrating on a small, limited number of research problems, but perhaps with the benefit of developing more and more refined (and sometimes more complex) theoretical models. Cripes – how boring! Take that approach and you’ll become one or more of the following: (a) The person who sits near the front at international conferences and begins asking questions with the phrase “Thank you for your very interesting talk, but…”, (b) That butcher of a referee who everyone knows, even though your reviews are still anonymous, (c) Someone who sits in Departmental recruitment presentations openly mocking the presentation of any applicant not in your specific area of research (usually by looking down at your clasped hands and shaking your head slowly from side to side while muttering words like “unbelievable” or “where’s the science?”), or, finally, you’ll become (d) Director of an RCUK National Research Centre.

So what about taking that giant leap for researcher-kind and diversifying? Well first, it’s arguably good to have more than one string to your bow, and become a research “juggler”. The chances are that at some point you’ll get bored with the programme of research that you first embarked on in your early career. Having at least two relatively independent streams of research means you can switch your focus from one to the other. It also increases (a) the range of journals you can publish in, (b) the funding bodies you can apply to, and (c) the diversity of nice people you can meet and chat sensibly to at conferences. It can also be a useful way of increasing your publication rate in early mid-career when you’re looking for an Associate Editorship to put on your CV or a senior lectureship to apply for.

But there is more to diversifying than generating two streams of research purely for pragmatic career reasons. If you’re a tenured academic, you will probably in principle have the luxury of being able to carry out research on anything you want to (within reason) – surely that’s an opportunity that’s too good to miss? B.F. Skinner himself was one who promoted the scientific principle of serendipity (a principle that seems to have gone missing from modern day Research Methods text books) – that is, if something interesting crops up in your research, drop everything and study it! This apparently was how Skinner began his studies on response shaping, which eventually led to his treatise on operant conditioning. But diversity is not always a virtue. There are some entrepreneurial “switchers and dumpers” out there, who post a new (and largely unsubstantiated) theory about something in the literature, and then move on to a completely new (and often more trending) area of research, leaving researchers of the former topic to fight, bicker and prevaricate, often for years, about what eventually turns out to be a red herring, or a blind alley, or a complete flight of fancy designed to grab the headlines at the time. 

Now, you’ve probably got to the point in this post where you’re desperate for me to provide you with some examples of “stickers”, “jugglers” and “switchers and dumpers” – well, I think you know who some of these people are already, and I’m not going to name names! But going back to my first paragraph, if you’d told me as a postgraduate student about the topics I would be researching now, I would have been scornfully dismissive. But somehow I got here, and through an interesting and enjoyable pathway of topics, ideas, and serendipitous routes. Research isn’t just about persevering at a problem until you’ve tackled it from every conceivable angle; it’s also an opportunity to try out as many candies in the shop as you can – as long as you sample responsibly!

The Lost 40%

3/20/2013

 
First published 02/11/2012 at http://www.psychologytoday.com/blog/why-we-worry
I’ve agonized for some time about how best to write this post. I want to try and be objective and sober about our achievements in developing successful interventions for mental health problems, yet at the same time I don’t want to diminish hope for recovery in those people who rely on mental health services to help them overcome their distress.

The place to start is a meta-analysis of cognitive therapy for worry in generalized anxiety disorder (GAD) just published by my colleagues and myself. For those of you who are unfamiliar with GAD, it is one of the most common mental health problems, is characterized by anxiety symptoms and by pathological uncontrollable worrying, and it has a lifetime prevalence rate of between 5% and 8% in the general adult population. That means that in a UK population of around 62 million, between 3 and 5 million people will experience diagnosable symptoms of GAD in their lifetime. In a US population of 311 million these figures increase to between 15 and 25 million sufferers within their lifetime. Our meta-analysis found that cognitive therapy was indeed significantly more effective at treating pathological worrying in GAD than non-therapy controls, and we also found evidence that cognitive therapy was superior to other treatments that were not cognitive therapy based.
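For anyone who wants to check the arithmetic, here is a minimal sketch of where those headline figures come from (the population sizes and the 5-8% lifetime prevalence range are simply the numbers quoted above, not new data):

# Rough lifetime-prevalence arithmetic for GAD, using the figures quoted above.
populations = {"UK": 62_000_000, "USA": 311_000_000}
prevalence_range = (0.05, 0.08)  # lower and upper lifetime prevalence estimates

for country, population in populations.items():
    low = population * prevalence_range[0]
    high = population * prevalence_range[1]
    print(f"{country}: {low / 1e6:.1f} to {high / 1e6:.1f} million lifetime cases")

# UK: roughly 3.1 to 5.0 million (the "between 3 and 5 million" above)
# USA: roughly 15.5 to 24.9 million (the "15 to 25 million" above)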

So, all well and good! This evidence suggests that we’ve developed therapeutic interventions that are significantly better than doing nothing and that are marginally better than some other treatments. Our results also suggest that the magnitude of these effects is slightly larger than had been previously found, possibly indicating that newer forms of cognitive therapy are becoming increasingly effective.

But what can the service user with mental health problems make of these conclusions? On the face of it they seem warmly reassuring – we do have treatments that are more effective than doing nothing, and the efficacy of these treatments is increasing over time. But arguably, what the service user wants to know is not “Is treatment X better than treatment Y?”, but “Will I be cured?” The answer to that is not so reassuring. Our study was one of the first to look at recovery data as well as relative efficacy of treatments. Across all of the studies for which we had data on levels of pathological worrying, the primary recovery data revealed that only 57% of sufferers were classed as recovered at 12 months following cognitive therapy – and, remember, cognitive therapy was found to be more effective than other forms of treatment. To put it another way, 43% of people who underwent cognitive therapy for pathological worrying in GAD were still not classed as recovered one year later. Presumably, they were still experiencing distressing symptoms of GAD that were adversely affecting their quality of life. I think these findings raise two important but relatively unrelated issues.

First, is a recovery rate of 57% enough to justify 50 years of developing psychotherapeutic treatments for mental health disorders such as GAD? To be sure, GAD is a very stubborn disorder. Long-term studies of GAD indicate that around 60% of people diagnosed with GAD were still exhibiting significant symptoms of the disorder 12 years later (regardless of whether they’d had treatments for these symptoms during this period). Let’s apply this to the prevalence figures I quoted earlier in this piece. This means that the number of people in the UK and the USA suffering long-term symptoms of GAD during their lifetime might be as high as 3 million and 15 million respectively. In 50 years of developing evidence-based talking therapies, have we been too obsessed with relative efficacy and not enough with recovery? Has too much time been spent just ‘tweaking’ existing interventions to make them competitive with other existing interventions? Perhaps as our starting point we should be taking a more universal view of what is required for recovery from disabling mental health problems? That overview will not just include psychological factors; it will inevitably include social, environmental and economic factors as well.

Second, what do we tell the service user? Mental health problems such as GAD are distressing and disabling. Hope of recovery is the belief that most service users will take into treatment, but on the basis of the figures presented in this piece, it can only be a 57% hope! This level of hope is not just reserved for cognitive therapy for GAD or psychotherapies in general; it is a figure that pretty much covers pharmaceutical treatments for GAD as well, with the best remission/recovery rates for drug treatments being around 60% (fluoxetine) and some as low as 26%.

I have spent this post discussing recovery from GAD in detail, but I suspect similar recovery levels and similar arguments are relevant to other forms of intervention (such as exposure therapies) and other common mental health problems (such as depression and anxiety disorders generally). It may be time to start looking at the bigger picture required for recovery from mental health problems so that hope can also be extended to the 40-45% of service users for whom we have yet to openly admit that we cannot provide a ‘cure’.

Mental health research: Are you contributing to paradigm stagnation or paradigm shift?

3/20/2013

 
First published 27/08/2012 at http://grahamdavey.blogspot.co.uk
“Normal science does not aim at novelty but at clearing up the status quo. It discovers what it expects to discover.” – Thomas Kuhn.

I was struck by this quote from Thomas Kuhn last week when reading a Guardian blog about the influential philosopher of science. It’s a simple statement suggesting that so-called ‘normal science’ isn’t going to break any new ground, isn’t going to change the way we think about something, but probably will reinforce established ideas, and – perhaps even more importantly – will entrench what scientists think are the important questions that need answering. Filling in the gaps to clear up the status quo is probably a job that 95% of scientists are happy to do. It grows the CV, satisfies your Dean of School, gets you tenure and pays the mortgage.

But when I first read that quote, I actually misread it. I thought it said “Normal science does not aim at novelty but aims to maintain the status quo”! I suspect that when it boils down to it, there is not much difference between my misreading of the quote and what Kuhn had actually meant. Once scientists establish a paradigm in a particular area, this has the effect of (1) framing the questions to be asked, (2) defining the procedures to answer them, and (3) mainstreaming the models, theories and constructs within which new facts should be assimilated. I suspect that once a paradigm is established, even those agencies and instruments that provide the infrastructure for research contribute to entrenching the status quo. Funding bodies and journals are good examples. Both tend to map on to very clearly defined areas of research, and at times when more papers are being submitted to scientific journals than ever before, demand management tends to lead to journal scope shrinkage in such a way that traditional research topics are highlighted more and more, and new knowledge from other disciplinary approaches is less likely to fertilize research in a particular area.

This led me to thinking about my own research area, which is clinical psychology and psychopathology. Can we clinical psychology researchers convince ourselves that we are doing anything other than trying to clear up the status quo in a paradigmatic approach that hasn’t been seriously questioned for over half a century – and whose genuine achievements we might want to question? Let’s just take a quick look at some relevant points:

1.         DSM still rules the way that much clinical psychology research is conducted. The launch of DSM-5 in 2013 will merely re-establish the dominance of diagnostic categories within clinical psychology research. There are some who struggle to champion transdiagnostic approaches, but they are doing this against a trend in which clinical psychology and psychiatry journals are becoming more and more reliant on diagnostic criteria for inclusion of papers. Journal of Anxiety Disorders is just one example of a journal whose scope has recently shrunk from publishing papers on anxiety to publishing papers on anxiety only in diagnosed populations. DSM-I was published in 1952 – sixty years on it has become even more entrenched as a basis for doing clinical psychology research. No paradigm shift there then!

This doesn’t represent a conspiracy between DSM and journals to consolidate DSM as the basis for clinical psychology research – it merely reflects the fact that scientific journals follow established trends rather than create new spaces within which new concatenations of knowledge can emerge. Journals will by nature be a significant conservative element in the progress of science.

2.         There is a growing isolation in much of clinical psychology research – driven in part by the shrinking scope of clinical research journals and the adherence of many of them to DSM criteria for publication. This fosters an increasing detachment from core psychological knowledge, and because of this, clinical psychology research runs the risk of re-inventing the wheel – and probably re-inventing it badly. Some years ago I expressed my doubts about the value of many clinical constructs that had become the focus of research across a range of mental health problems (Davey, 2003). Many of these constructs have been developed from clinical experience and relate to individual disorders or even individual symptoms, but I’m convinced that a majority of them simply fudge a range of different psychological processes, most of which have already been researched in the core psychological literature. I'm an experimental psychologist by training who just happens to have become interested in clinical psychology research, so I was lucky enough to be able to bring some rather different approaches to this research than those who were born and brought up in the clinical psychology way of doing things. What must not happen is for clinical psychology research to become even more insular and even more entrenched in reinventing even more wheels - or the wheels on the bus really will just keep going round and round and round!

3.         OK, I'm going to be deliberately provocative here – clinical neuroscience and imaging technology costs a lot of money – so its role needs to be enshrined and ring-fenced in the fabric of psychological knowledge endeavour, doesn’t it? Does it? If that’s the case – then we’re in for a long period of paradigm stagnation. Imaging technology is the Mars Rover of cognitive science while the rest of us are using telescopes – or that's the way it seems. There are some clinical funding bodies I simply wouldn't apply to for experimental psychopathology research – ‘cos if it ain’t imaging it ain't gonna get funded – yet where does the contribution of imaging lie in the bigger knowledge picture within clinical psychology? There may well be a well-thought-out view somewhere out there that has placed the theoretical relevance of imaging into the fabric of clinical psychology knowledge (advice welcome on this)! There is often a view taken that whatever imaging studies throw up must be taken into account by studies undertaken at other levels of explanation – but that is an argument that is not just true of imaging; it's true of any objective and robust scientific methodology.

Certainly – identifying brain locations and networks for clinical phenomena may not be the way to go – there is growing support for psychological constructionist views of emotion, for instance, suggesting that emotions do not have either a signature brain location or a dedicated neural signature at all (e.g. Lindquist, Wager, Kober, Bliss-Moreau & Barrett, 2012). There are some very good reviews of the role of brain functions in psychological disorders – but I'm not sure what they tell us other than the fact that brain function underlies psychological disorders – as it does everything! For me, more understanding of psychological disorders can be gleaned from studying individual experience, developmental and cognitive processes, and social and cultural processes than basic brain function. Brain images are a bit like the snapshot of the family on the beach – the photo doesn't tell you very much about how the family got there or how they chose the beach or how they're going to get home.

But the point I’m trying to make is that if certain ways of doing research require significant financial investment over long periods of time (like imaging technology), then this too will contribute to paradigm stagnation.

4.         When tails begin to wag dogs you know that as a researcher you have begun to lose control over what research you can do and how you might be allowed to do it. Many researchers are aware that to get funding for their research – however ‘blue skies’ it might be – they now have to provide an applied impact story. How will our research have an impact on society? Within clinical psychology research this always seems to have been a reality. Much of clinical psychology research is driven by the need to develop interventions and to help vulnerable people in distress – which is a laudable pursuit. But does this represent the best way to do science? There is a real problem when it comes to fudging the distinction between understanding and practice. There appears to be a diminishing distinction in clinical psychology between practice journals and psychopathology journals, which is odd because helping people and understanding their problems are quite different things – certainly from a scientific endeavour point of view. Inventing an intervention out of theoretical thin air and then giving it the facade of scientific integrity by testing to see if it is effective in a controlled empirical trial is not good science – but I could name what I think are quite a few popular interventions that have evolved this way – EMDR and mindfulness are just two of them (I expect there will be others who will argue that these interventions didn't come out of a theoretical void, but we still don't really know how they work when they do work). At the end of the day, to put the research focus on ‘what works in practice’ takes the emphasis away from understanding what it is that needs to be changed, and in clinical psychology it almost certainly sets research priorities within establishment views of mental health.

5.         My final point is a rather general one about achievement in clinical psychology research. We would like to believe that the last 40 years has seen significant advances in our development of interventions for mental health problems. To be sure, we’ve seen the establishment of CBT as the psychological intervention of choice for a whole range of mental health problems, and we are now experiencing the fourth wave of these therapies. This has been followed up with the IAPT initiative, in which psychological therapies are being made more accessible to individuals with common mental health problems. The past 40 years has also seen the development and introduction of second-generation antidepressants such as SSRIs. Both CBT and SSRIs are usually highlighted as state-of-the-art interventions in clinical psychology textbooks, and are hailed by clinical psychology and psychiatry respectively as significant advances in mental health science. But are they? RCTs and meta-analyses regularly show that CBT and SSRIs are superior to treatment as usual, wait-list controls, or placebos – but when you look at recovery rates, their impact is still far from stunning. I am aware that this last point is not one that I can claim reflects a genuinely balanced evidential view, but a meta-analysis we have just completed of cognitive therapy for generalized anxiety disorder (GAD) suggests that recovery rates are around 57% at follow-up. This means that 43% of those in cognitive therapy interventions for GAD do not reach basic recovery levels at the end of the treatment programme. Reviews of IAPT programmes for depression suggest no real advantage for IAPT interventions based on quality of life and functioning measures (McPherson, Evans & Richardson, 2009). In a review article about to be published in the Journal of Experimental Psychopathology, Craske, Liao, Brown & Vervliet (2012) note that even exposure therapy for anxiety disorders achieves clinically significant improvement in only 51% of patients at follow-up. I found it difficult to find studies that provided either recovery rates or measures of clinically significant improvement for SSRIs, but Arroll et al. (2005) report that only 56-60% of patients in primary care responded well to SSRIs compared to 42-47% for placebos.

I may be over-cynical, but it seems that the best that our state-of-the-art clinical psychology and psychopharmacological research has been able to achieve is a recovery rate of around 50-60% for common mental health problems – compared with placebo and spontaneous remission rates of 30-45%. Intervention journals are full of research papers describing new ‘tweaks’ to these ways of helping people with mental health problems, but are tweaks within the existing paradigms ever going to be significant? Is it time for a paradigm shift in the way we research mental health?

Discovering Facts in Psychology: 10 ways to create “False Knowledge” in Psychology

3/20/2013

 
First published 30/09/2012 on http://grahamdavey.blogspot.co.uk

There’s been quite a good deal of discussion recently about (1) how we validate a scientific fact (http://bit.ly/R8ruMg; http://bit.ly/T5JSJZ; http://bit.ly/xe0Rom), and (2) whether psychology – and in particular some branches of psychology – is prone to generate fallacious scientific knowledge (http://bit.ly/OCBdgJ; http://bit.ly/NKvra6). As psychologists, we are all trained (I hope) to be scientists – exploring the boundaries of knowledge and trying as best we can to create new knowledge, but in many of our attempts to pursue our careers and pay the mortgage, are we badly prone to creating false knowledge? Yes – we probably are! Here are just a few examples, and I challenge most of you psychology researchers who read this post to say you haven’t been a culprit in at least one of these processes!

Here are 10 ways to risk creating false knowledge in psychology.

1.  Create your own psychological construct. Constructs can be very useful ways of summarizing and formalizing unobservable psychological processes, but researchers who invent constructs need to know a lot about the scientific process, make sure they don’t create circular arguments, and must be in touch with other psychological research that is relevant to the understanding they are trying to create. In some sub-disciplines of psychology, I’m not sure that happens (http://bit.ly/ILDAa1).

2.  Do an experiment but make up or severely massage the data to fit your hypothesis. This is an obvious one, but is something that has surfaced in psychological research a good deal recently (http://bit.ly/QqF3cZ; http://nyti.ms/P4w43q).

3.  Convince yourself that a significant effect at p=.055 is real. How many times have psychologists tested a prediction only to find that the critical comparison just misses the crucial p=.05 value? How many times have psychologists then had another look at the data to see if it might just be possible that with a few outliers removed this predicted effect might be significant? Strangely enough, many published psychology papers are just creeping past the p=.05 value – and many more than would be expected by chance! Just how many false psychology facts has that created? (http://t.co/6qdsJ4Pm). (A toy simulation of how this kind of flexible analysis inflates false positives is sketched at the end of this list.)

4.  Replicate your own findings using the same flawed procedure. Well, we’ve recently seen a flood of blog posts telling us that replication is the answer to fraud and poor science. If a fact can be replicated – then it must be a fact! (http://bit.ly/R8ruMg; http://bit.ly/xe0Rom) Well – no – that’s not the case at all. If you are a fastidious researcher and attempt to replicate a study precisely, then you are also likely to replicate the same flaws that gave rise to false knowledge. We need to understand the reasons why problematic research gives rise to false positives – that is the way to real knowledge (http://bit.ly/UchW4J).

5.  Use only qualitative methods. I know this one will be controversial, but in psychology you can’t just accept what your participants say! The whole reason why psychology has developed as a science is because it has developed a broad range of techniques to access psychological processes without having to accept at face value what a participant in psychological research has to tell us. I’ve always argued that qualitative research has a place in the development of psychological knowledge, but it is in the early stage of that knowledge development and more objective methodologies may be required to understand more proximal mechanisms.

6.  Commit your whole career to a single effect, model or theory that has your name associated with it. Well, if you’ve invested your whole career and credibility in a theory or approach, then you’re not going to let it go lightly. You’ll find multiple ways to defend it, even if it's wrong, and waste a lot of other researchers’ time and energy trying to disprove you. Ways of understanding move on, just like time, and so must the intransigent psychological theorist.

7.  Take a tried and tested procedure and apply it to everything. Every now and then in psychology a new procedure surfaces that looks too good to miss. It is robust, tells you something about the psychological processes involved in a phenomenon, and you can get a publication by applying it to something that no one else has yet applied it to! So join the fashion rush – apply it to everything that moves, and some things that don’t (http://bit.ly/SX37Sn). No I wasn't thinking of brain imaging, but.... Hmmmm, let me think about that! (I was actually thinking about the Stroop!)

8.  If your finding is rejected by the first journal you submit it to, continue to submit it to journals until it’s eventually published. This is a nice way to ensure that your contribution to false knowledge will be permanently recorded. As academic researchers we are all under pressure to publish (http://bit.ly/AsIO8B), so if you believe your study has some genuine contribution to make to psychological science, then don’t accept a rejection from the first journal you send it to. In fact, even if you don’t think your study has any real contribution to make to psychological knowledge at all, don’t accept a rejection from the first journal you send it to – because you will probably get it published somewhere. I’d love to know what the statistics are on this, but I bet if you persist enough, your paper will get published.

9.  Publish your finding in a book chapter (non-peer-reviewed), or an invited review, or a journal special issue – all of which are likely to have an editorial "light touch". Well, if you do it might not get cited much (http://t.co/D55VKWDm), but it’s a good way of getting dodgy findings (and dodgy theories) into the public domain.

10.  Do some research on some highly improbable effects - and hope that some turn up significant by chance. (http://bit.ly/QsOQNo) And it won’t matter that people can’t replicate it – because replications will only rarely get published! (http://bit.ly/xVmmOv). The more improbable your finding, the more newsworthy it will be, the more of a celebrity you will become, the more people will try to replicate your research and fail, the more you will be wasting genuine research time and effort. But it will be your 15 minutes of fame!
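As flagged under point 3, here is a minimal simulation sketch of how that kind of flexible analysis creates false knowledge. It assumes nothing more than two groups drawn from the same population (so there is no true effect by construction) and an analyst who, whenever a result misses p=.05, strips an 'outlier' from each group and tests again; the sample sizes and the trimming rule are purely illustrative:

# Illustrative sketch: how post-hoc "outlier" removal inflates false positives.
# Both groups come from the SAME population, so every significant result below
# is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n, alpha = 10_000, 30, 0.05
naive = flexible = 0

def drop_most_extreme(x):
    """Remove the observation furthest from its group mean."""
    return np.delete(x, np.argmax(np.abs(x - x.mean())))

for _ in range(n_sims):
    a, b = rng.normal(size=n), rng.normal(size=n)  # no true group difference
    p = stats.ttest_ind(a, b).pvalue
    naive += p < alpha
    # "Flexible" analyst: if the comparison isn't significant, trim an
    # 'outlier' from each group and re-test (up to two attempts).
    for _ in range(2):
        if p < alpha:
            break
        a, b = drop_most_extreme(a), drop_most_extreme(b)
        p = stats.ttest_ind(a, b).pvalue
    flexible += p < alpha

print(f"False-positive rate, single pre-planned test: {naive / n_sims:.3f}")     # ~ .05
print(f"False-positive rate, flexible re-analysis:    {flexible / n_sims:.3f}")  # above .05

The exact inflation depends on the trimming rule and on how many re-analyses the analyst allows themselves, but the direction is always the same: the more analytic flexibility, the more often a null effect sneaks past the p=.05 threshold.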

Finally, if you haven’t been able to generate false psychological knowledge through one of these 10 processes, then try to get your finding included in an Introduction to Psychology textbook. Once your study is enshrined in the good old Intro’ to Psych’ text, then it’s pretty much going to be accepted as fact by at least one and maybe two future generations of psychologists. And once an undergrad has learnt a “fact”, it is indelibly inscribed on their brain and is faithfully transported into future reality!

"Psychology" - The Struggling Science of Mental Life

3/20/2013

 
First published 29/12/2012 at http://grahamdavey.blogspot.co.uk
Many of you may be old enough to remember George A. Miller’s book “Psychology: The Science of Mental Life”. As an undergraduate psychology student I was brought up with books with titles that variously contained the words science, psychology, behaviour and mind in them. These books had one main purpose – to persuade students of psychology that psychology was a legitimate scientific pursuit, using rigorous scientific methods to understand human behaviour and the human mind. All on a par with the more established sciences such as biology, physics and chemistry.

Even if you’re happy with the notion of psychology as a science, we then have the various debates about whether psychology is a biological science or a social science, and in the UK this isn’t just an issue about terminology; it is also a major issue about funding levels. Do psychologists need labs? Do undergraduate psychology students need to do lab classes to learn to be psychologists? This almost became the tail wagging the dog, as funding bodies such as HEFCE (and its predecessor the Universities Funding Council) looked to save money by re-banding psychology as a half-breed science sitting somewhere between social science and biological science. I even seem to recall that some psychology departments were designated social psychology departments and given little or no lab funding. So were students in those Departments being taught science or not? What breed of psychology was it?

Just one more example before I get to the main point. A few years ago I had the good fortune to teach a small-group elective to second-year medical students. This was a 6-week course on cognitive models of psychopathology. I was fortunate to teach this group because it contained highly motivated and intelligent students. Now, I have never viewed myself as anything other than a scientist using scientific methods to understand human behaviour in general and psychopathology in particular. But these groups of highly able and highly trained medical students inevitably had difficulty with two particular aspects of the material I was teaching them: (1) how can we use science to study “cognitions” when we can’t see them, when we make up ‘arbitrary’ concepts to describe them, and we can’t physically dissect them? and (2) at the end of the day, cognitions will always boil down to biology, so it is biology – and not cognitions – that should be the object of scientific study.

What struck me most was that these students had already developed a conception of science that was not procedure based, but was content based. It was the subject matter that defined science for them, not particularly the methodology.

My argument here is that while psychology has been touted as a science now for a number of generations, psychologists over these generations have failed to convince significant others (scientists in other disciplines, funding organizations, etc.) that psychology is a science on a par with other established sciences. Challenges to psychology as a science come in many forms and from many different sources. Here are a few examples:

(1)      Funding bodies frequently attempt their own ‘redefining’ of psychology, especially when budgets are tight, and psychology is a soft target here, with its large numbers of students offering significant savings if science-related funding is downgraded.

(2)      Students, teachers and researchers in other science disciplines often have very esoteric views of what science is, and these views revolve around their own subject matter and the techniques they use to understand that subject matter. Psychologists have probably not been proactive or aggressive enough in broadcasting the ways in which psychology is a science and how it uses scientific methodologies in a highly objective and rigorous way.

(3)      Members of other science disciplines frequently have a ‘mental block’ when it comes to categorizing psychology as a science (that’s probably the nicest way I can put it!). This reminds me of the time a few years ago when I was representing psychology on the UK Science Council. There was a long discussion about how to increase the number of women taking science degrees. During this discussion it was pointed out that psychology was extremely successful at recruiting female students, so perhaps we shouldn’t be too pessimistic about recruiting women into at least some branches of science. The discussion paused briefly, and then continued as if nothing of any relevance whatsoever had been said!

(4)      All branches of knowledge are open to allegations of fraud, and there has been some considerable discussion recently about fraud in science, fraud in psychology and the social sciences, and – most specifically – fraud in social psychology. Arguably, psychology is the science discipline most likely to be hurt by such allegations – not because its methodology is necessarily less rigorous than in other science disciplines or its publication standards any less high, but because many scientists in other disciplines fail to understand how psychology practices as a science. Sadly, this is even true within the discipline of psychology, and it is easy to take the trials and tribulations that have recently been experienced in social psychology research as an opportunity for the more ‘hard-nosed’ end of psychology to sneer at what might be considered the softer under-belly of psychological science. One branch of psychology ‘sneering’ at another branch is not a clever thing to do, because this will all be grist to the mill for branding psychology generally as “non-scientific” by members of other science disciplines.

I’ll finish by mentioning a recent report published in 2011 attempting to benchmark UK psychology research within an international context. Interestingly, this report (published jointly by the ESRC, BPS, EPS and AHPD) listed nine challenges to the competitiveness of current psychology research in the UK. A significant majority of these challenges relate to the skills and facilities necessary for pursuing psychology as a science!

Psychology still requires an orchestrated campaign to establish its scientific credentials – especially in the eyes of other science disciplines, many of which have their own distorted view of what science is, but already occupy the intellectual high ground. Challenges to psychology as a science come from many diverse sources, including funding bodies, other sciences, intra-disciplinary research fraud, and conceptual differences within psychology as an integrated, but diverse, discipline.

"An effect is not an effect until it is replicated" - Pre-cognition or Experimenter Demand Effects

3/20/2013

 
First published 15/09/2012 at http://grahamdavey.blogspot.co.uk
There has been much talk recently about the scientific process in the light of recent claims of fraud against a number of psychologists (http://bit.ly/R8ruMg), and also the failure of researchers to replicate some controversial findings by Daryl Bem purportedly showing effects reminiscent of pre-cognition (http://bit.ly/xVmmOv). This has led to calls for replication to be the cornerstone of good science – basically “an effect is not an effect until it’s replicated” (http://bit.ly/UtE1hb). But is replication enough? Is it possible to still replicate “non-effects”? Well, replication probably isn’t enough. If we believe that a study has generated ‘effects’ that we think are spurious, then failure to replicate might be instructive, but it doesn’t tell us how or why the original study came by a significant effect. Whether the cause of the false effect is statistical or procedural, it is still important to identify this cause and empirically verify that it was indeed causing the spurious findings. This can be illustrated by a series of replication studies we have recently carried out in our experimental psychopathology labs at the University of Sussex.

Recently we’ve been running some studies looking at the effects of procedures that generate distress on cognitive appraisal processes. These studies are quite simple in design and highly effective at generating negative mood and distress in our participants (participants are usually undergraduate students participating for course credits), and pilot studies suggest that experienced distress and negative mood do indeed facilitate the use of clinically-relevant appraisal processes.

The first study we did was piloted as a final year student project. It produced nice data that supported our predictions – except for one thing. The two groups (distress group and control group) differed significantly on pre-manipulation baseline measures of mood and other clinically-relevant characteristics. Participants due to undertake the most distressing manipulation scored significantly higher on pre-experimental clinical measures of anxiety (M = 6.9, SD = 3.6 vs M = 3.8, SD = 2.5) [F(56) = 4.01, p = .05] and depression (M = 2.2, SD = 2.6 vs M = 1.1, SD = 1.1) [F(56) = 4.24, p = .04]. Was this just bad luck? The project student had administered the questionnaires herself prior to the experimental manipulations, and she had used a quasi-random participant allocation method (rotating participants to experimental conditions in a fixed pattern).

Although our experimental predictions had been supported (even when pre-experimental baseline measures were controlled for), we decided to replicate the study, this time run by another final year project student. Lo and behold, the participants due to undertake the distressing task scored significantly higher on pre-experimental measures of anxiety (M = 9.1, SD = 4.1 v M = 6.9, SD = 3.0) [F(56) = 6.01, p = .01] and depression (M = 4.3, SD = 3.7 v M = 2.4, SD = 2.4) [F(56) = 5.09, p = .02]. Another case of bad luck? Questionnaires were administered and participants allocated in the same way as in the first study.

Was this a case of enthusiastic final year project students, determined to complete a successful project, in some way conveying information to the participants about what they were imminently to undergo? Basically, was this an implicit experimenter demand effect being conveyed by an inexperienced experimenter? To try to clear this up, we decided to replicate again, this time with the study run by an experienced postdoctoral researcher – someone who was wise to the possibility of experimenter demand effects, aware that this procedure was possibly prone to them, and who would presumably be able to minimize them. To cut a long story short – we replicated the study again, but still replicated the pre-experimental group differences in mood measures! Participants who were about to undergo the distress procedure scored higher than participants about to undergo the non-stressful control condition.

At this point, we were beginning to believe in pre-cognition effects! Finally, we decided to replicate again. But this time, the experimenter would be entirely blind to the experimental condition that a participant was in. Sixty sealed packs of questionnaires and instructions were made up before any participants were tested – each pack told the participant how to complete the questionnaires, and half of the packs contained instructions for running the stressful condition while the other half contained instructions for the control condition. The experimenter merely allowed the participant to choose a pack from a box at the outset, and was entirely unaware of which condition the participant was running during the experiment. To cut another long story short – to our relief and satisfaction, the pre-experimental group differences in anxiety and depression measures disappeared. It wasn't pre-cognition after all – it was an experimenter demand effect.
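For anyone wanting to set up a similar blinding procedure, here is a minimal sketch along these lines – the pack labels and condition names are my own, purely illustrative – showing how the sealed packs could be prepared and shuffled in advance so that the experimenter never knows which condition a chosen pack contains:

import random

random.seed(2012)

N_PACKS = 60  # sixty sealed packs, as in the study described above

# Half the packs contain the stressful-condition instructions, half the control
conditions = ["stressful"] * (N_PACKS // 2) + ["control"] * (N_PACKS // 2)
random.shuffle(conditions)

# Each pack gets an opaque label; the key linking label to condition is kept
# by someone other than the experimenter until data collection is complete
allocation_key = {f"pack_{i + 1:02d}": condition
                  for i, condition in enumerate(conditions)}

# The experimenter only ever handles the opaque labels; the participant simply
# picks a pack from the box at the start of the session
box = list(allocation_key.keys())
random.shuffle(box)
print(box[:5])

The design choice matters: because the condition is fixed inside the pack before anyone meets a participant, nothing the experimenter says or does at the start of the session can leak information about what is to come.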

The point I’m making is that replication alone may not be sufficient to identify genuine effects – you can also replicate “non-effects” quite effectively - even by actively trying not to, and even more so by meticulously replicating the original procedure. If we have no faith in a particular experimental finding, it is incumbent on us as good scientists to identify the factor or factors that gave rise to that spurious finding wherever we can.

How Research Methods Textbooks Fail Final Year Project Students

3/20/2013

 
First published 05/09/2012 at http://grahamdavey.blogspot.co.uk
The time is about to come when all those fresh-faced final year empirical project students will be filing through our office doors looking for the study that's going to give them the first-class degree they are craving.

Unfortunately, as a supervisor you’ll find that their mind isn’t focused on doing scientific research – it’s focused on getting a good mark for their project. This means that most of your time as a supervisor will be spent not on training your undergraduate supervisees to do research (as it should be), but on (1) telling them what they have to do to write up a good project, and (2) reassuring them that they’ve understood what you said is required for writing up a good project.

As an empirical scientist you might believe that the most important part of the training for your undergraduate project students is learning about experimental design and about statistical analysis. Wrong. Absolutely no over-arching information about experimental design will be absorbed by the student – they will simply lie awake at night needing to know how many participants they will need to test and – more importantly – how they will get those participants.

Most project students have a small notebook they've bought from W H Smiths in which they write down the pressing questions they need to ask their supervisor at the next supervision session (just in case they forget). Questions like "Can I do this experiment in the bathroom of my student flat?", "Can I test my mother's budgerigar if I'm short of participants?", "Will it matter if my breath smells of cider when I'm coding my data?", "Do I need to worry about where I put the decimal point?", "Will it affect my participants' behaviour if I dye my hair day-glo orange in the middle of the study?"… and so on.

I believe that project students ask these kinds of questions because none of them are properly addressed or answered in standard Research Methods textbooks – an enormous oversight! Research Methods textbooks mince around talking about balanced designs, counterbalancing, control groups, demand effects, and so on. But what about the real practical issues facing a final year empirical project student? "How will I complete my experiment if I split up with my boyfriend and can't use his extended local family as participants?", "Where can I find those jumbo paper clips that I need to keep all the response sheets together?", "Why do I need to run a control condition when I could be skiing in Austria?"

Perhaps we need some new, young, motivated research methods authors to provide us with the textbooks that will answer the full range of questions asked by undergraduate empirical project students. Sadly, at present, these textbooks answer the questions that students aren’t interested in asking – let’s get real with undergraduate research training!

    Author

    Graham C. L. Davey, Ph.D. is Professor of Psychology at the University of Sussex, UK. His research interests extend across mental health problems generally, and anxiety and worry specifically. Professor Davey has published over 140 articles in scientific and professional journals and written or edited 16 books including Psychopathology; Clinical Psychology; Applied Psychology; Complete Psychology; Worrying & Psychological Disorders; and Phobias: A Handbook of Theory, Research & Treatment. He has served as President of the British Psychological Society, and is currently Editor-in-Chief of Journal of Experimental Psychopathology and Psychopathology Review. When not writing about psychology he watches football and eats curries.
