Papers from Sidcup
Graham Davey's website

Experimental Psychopathology - Is it really necessary to implant an electrode or light up the brain with a scanner to do proper Mental Health Research?

3/14/2014

 
I've just spent a very stimulating and enlightening couple of weeks, first at the Rome Workshop in Experimental Psychopathology, and then at the University of Exeter - both times talking about experimental psychopathology. But these talks were not just about how to do experimental psychopathology; they were also about how many other researchers are simply not equipped to do experimental psychopathology, or have no idea what this scientific paradigm is. And that has some very dramatic consequences for mental health funding, as well as for our broader understanding of the mechanisms that contribute to mental health problems.

Let’s be quite clear about the main issue here. Most funding for mental health research goes to high-profile, expensive, medically oriented research on the biological substrates of mental health problems. Why is that? Well, while psychologists learn about both biological and psychological mechanisms, medics simply don’t learn about psychological mechanisms - in fact, they tend to have no knowledge whatsoever of the inferential methodologies that allow psychologists to develop models of psychological processes. Rather sadly, those medics form a majority on the panels of most funding bodies for mental health research.

Is this important? Yes it is, because I'm quite happy to assert that most common mental health problems are acquired through perfectly normal psychological mechanisms involving attention, decision-making, learning, memory and other general cognitive processes. The mechanisms are not in any way abnormal - only the outcomes of these processes are abnormal. So why do we waste research time and taxpayers' money looking for abnormal neurological mechanisms or medically aberrant signatures of psychopathology when they probably do not exist?

As an experimental psychologist studying learning in nonhuman animals I learnt a lot about inferential experimental methodologies that allowed us to infer cognitive processes in any organism – human or nonhuman. These are the same types of methodologies that are used to understand most human cognitive processes - such as memory, attention, decision-making and learning. What many researchers from a medical background do not grasp is that scientific method allows us to infer the nature and structure of psychological mechanisms without having to know anything about the biological underpinnings of these mechanisms. In fact, whatever medical or biological research does subsequently to psychologists elaborating these mechanisms will merely be to substantiate the infrastructure of these mechanisms – and indeed, as radical as it may seem, it will be very little more than that.

Experimental psychopathologists should have the lead on all research questions to do with the aetiology of mental health problems. Their research is cognitive, experimental, inferential, provides evidence for the causal relationships that underlie the acquisition of mental health problems, and allows the development of testable models of mental health problems – and it’s a hell of a lot cheaper than most other medically driven approaches!

I have recently been heard to say that experimental psychopathology needs a manifesto to enable it to compete with other explanatory approaches to mental health problems such as neuroscience and genetics – well, it does. We need this manifesto to prevent other disciplinary lobbies from monopolizing funding and – most importantly – from hijacking the way we explain mental health problems. Most mental health problems develop out of perfectly natural psychological processes – not medical problems. Understanding those processes in the normal, inferential way that psychologists do research will provide the basis for good mental health research.

Mental Health & Stigma

6/2/2013

 
As promised, here is another piece from the forthcoming second edition of Psychopathology. This time, here is a new section discussing mental health stigma, its causes, why it matters, and how we can eliminate it.

There are still attitudes within most societies that view symptoms of psychopathology as threatening and uncomfortable, and these attitudes frequently foster stigma and discrimination towards people with mental health problems. Such reactions are common when people are brave enough to admit they have a mental health problem, and they can often lead to various forms of exclusion or discrimination – either within social circles or within the workplace. In the following sections we will look at (1) what mental health stigma is, (2) who holds stigmatizing beliefs and attitudes, (3) what causes stigma, (4) why stigma matters, and (5) how we can eliminate stigma.

What is mental health stigma?: Mental health stigma can be divided into two distinct types: social stigma is characterized by prejudicial attitudes and discriminating behaviour directed towards individuals with mental health problems as a result of the psychiatric label they have been given. In contrast, perceived stigma or self-stigma is the internalizing by the mental health sufferer of their perceptions of discrimination (Link, Cullen, Struening & Shrout, 1989), and perceived stigma can significantly affect feelings of shame and lead to poorer treatment outcomes (Perlick, Rosenheck, Clarkin, Sirey et al., 2001).

In relation to social stigma, studies have suggested that stigmatising attitudes towards people with mental health problems are widespread and commonly held (Crisp, Gelder, Rix, Meltzer et al., 2000; Byrne, 1997; Heginbotham, 1998). In a survey of over 1700 adults in the UK, Crisp et al. (2000) found that (1) the most commonly held belief was that people with mental health problems were dangerous – especially those with schizophrenia, alcoholism and drug dependence, (2) people believed that some mental health problems such as eating disorders and substance abuse were self-inflicted, and (3) respondents believed that people with mental health problems were generally hard to talk to. People tended to hold these negative beliefs regardless of their age, regardless of what knowledge they had of mental health problems, and regardless of whether they knew someone who had a mental health problem. More recent studies of attitudes to individuals with a diagnosis of schizophrenia or major depression convey similar findings. In both cases, a significant proportion of members of the public considered that people with mental health problems such as depression or schizophrenia were unpredictable and dangerous, and said they would be less likely to employ someone with a mental health problem (Wang & Lai, 2008; Reavley & Jorm, 2011).

Who holds stigmatizing beliefs about mental health problems?: Perhaps surprisingly, stigmatizing beliefs about individuals with mental health problems are held by a broad range of individuals within society, regardless of whether they know someone with a mental health problem, have a family member with a mental health problem, or have a good knowledge and experience of mental health problems (Crisp et al., 2000; Moses, 2010; Wallace, 2010). For example, Moses (2010) found that stigma directed at adolescents with mental health problems came from family members, peers, and teachers. 46% of these adolescents described experiencing stigmatization by family members in the form of unwarranted assumptions (e.g. that the sufferer was being manipulative), distrust, avoidance, pity and gossip; 62% experienced stigma from peers, which often led to friendship losses and social rejection (Connolly, Geller, Marton & Kutcher, 1992); and 35% reported stigma perpetrated by teachers and school staff, who expressed fear, dislike, avoidance, and under-estimation of abilities. Mental health stigma is even widespread in the medical profession, at least in part because it is given a low priority during the training of physicians and GPs (Wallace, 2010).

What factors cause stigma?: The social stigma associated with mental health problems almost certainly has multiple causes. We’ve seen in the section on historical perspectives that throughout history people with mental health problems have been treated differently, excluded and even brutalized. This treatment may come from the misguided views that people with mental health problems may be more violent or unpredictable than people without such problems, or somehow just “different”, but none of these beliefs has any basis in fact (e.g. Swanson, Holzer, Ganju & Jono, 1990). Similarly, early beliefs about the causes of mental health problems, such as demonic or spirit possession, were ‘explanations’ that would almost certainly give rise to reactions of caution, fear and discrimination. Even the medical model of mental health problems is itself an unwitting source of stigmatizing beliefs. First, the medical model implies that mental health problems are on a par with physical illnesses and may result from medical or physical dysfunction in some way (when many may not be simply reducible to biological or medical causes). This itself implies that people with mental health problems are in some way ‘different’ from ‘normally’ functioning individuals. Secondly, the medical model implies diagnosis, and diagnosis implies a label that is applied to a ‘patient’. That label may well be associated with undesirable attributes (e.g. ‘mad’ people cannot function properly in society, or can sometimes be violent), and this again will perpetuate the view that people with mental health problems are different and should be treated with caution.

We will discuss ways in which stigma can be addressed below, but it must also be acknowledged here that the media regularly play a role in perpetuating stigmatizing stereotypes of people with mental health problems. The popular press is a branch of the media that is frequently criticized for perpetuating these stereotypes. Blame can also be levelled at the entertainment media. For example, cinematic depictions of schizophrenia are often stereotypic and characterized by misinformation about symptoms, causes and treatment. In an analysis of English-language movies released between 1990 and 2010 that depicted at least one character with schizophrenia, Owen (2012) found that most schizophrenic characters displayed violent behaviour, one-third of these violent characters engaged in homicidal behaviour, and a quarter committed suicide. This suggests that negative portrayals of schizophrenia in contemporary movies are common and are likely to reinforce biased beliefs and stigmatizing attitudes towards people with mental health problems. While the media may have increased their portrayal of anti-stigmatising material over recent years, studies suggest that there has been no proportional decrease in the news media's publication of stigmatising articles, so the media remain a significant source of stigma-relevant misinformation (Thornicroft, Goulden, Shefer, Rhydderch et al., 2013).

Why does stigma matter?: Stigma embraces both prejudicial attitudes and discriminating behaviour towards individuals with mental health problems, and the social effects of this include exclusion, poor social support, poorer subjective quality of life, and low self-esteem (Livingston & Boyd, 2010). As well as its effect on the quality of daily living, stigma also has a detrimental effect on treatment outcomes, and so hinders efficient and effective recovery from mental health problems (Perlick, Rosenheck, Clarkin, Sirey et al., 2001). In particular, self-stigma is correlated with poorer vocational outcomes (employment success) and increased social isolation (Yanos, Roe & Lysaker, 2010). These factors alone represent significant reasons for attempting to eradicate mental health stigma and ensure that social inclusion is facilitated and recovery can be efficiently achieved.

How can we eliminate stigma?: We now have a good knowledge of what mental health stigma is and how it affects sufferers, both in terms of their role in society and their route to recovery. It is not surprising, then, that attention has recently turned to developing ways in which stigma and discrimination can be reduced. As we have already described, people tend to hold these negative beliefs about mental health problems regardless of their age, regardless of what knowledge they have of mental health problems, and regardless of whether they know someone who has a mental health problem. The fact that such negative attitudes appear to be so entrenched suggests that campaigns to change these beliefs will have to be multifaceted, will have to do more than just impart knowledge about mental health problems, and will need to challenge existing negative stereotypes especially as they are portrayed in the general media (Pinfold, Toulmin, Thornicroft, Huxley et al., 2003). In the UK, the “Time to Change” campaign is one of the biggest programmes attempting to address mental health stigma and is supported by both charities and mental health service providers (http://www.time-to-change.org.uk). This programme provides blogs, videos, TV advertisements, and promotional events to help raise awareness of mental health stigma and the detrimental effect this has on mental health sufferers. However, raising awareness of mental health problems simply by providing information about these problems may not be a simple solution – especially since individuals who are most knowledgeable about mental health problems (e.g. psychiatrists, mental health nurses) regularly hold strong stigmatizing beliefs about mental health themselves (Schlosberg, 1993; Caldwell & Jorm, 2001). As a consequence, attention has turned towards some methods identified in the social psychology literature for improving inter-group relations and reducing prejudice (Brown, 2010).
These methods aim to promote events encouraging mass participation social contact between individuals with and without mental health problems and to facilitate positive intergroup contact and disclosure of mental health problems (one example is the “Time to Change” Roadshow, which sets up events in prominent town centre locations with high footfall). Analysis of these kinds of inter-group events suggests that they (1) improve attitudes towards people with mental health problems, (2) increase future willingness to disclose mental health problems, and (3) promote behaviours associated with anti-stigma engagement (Evans-Lacko, London, Japhet, Rusch et al., 2012; Thornicroft, Brohan, Kassam & Lewis-Holmes, 2008). A fuller evidence-based evaluation of the Time to Change initiative can be found in a special issue dedicated to this topic in the British Journal of Psychiatry (British Journal of Psychiatry, Vol. 202, Issue s55, April 2013).

For those of you that would like to test your own knowledge of mental health problems, Time to Change provides you with a quiz to assess your own awareness of mental health problems.

Summary: Hopefully, this section has introduced you to the complex nature of mental health stigma and the effects it has on both the daily lives and recovery of individuals suffering from mental health problems. We have discussed how mental health stigma manifests itself, the effect it has on social inclusion, self-esteem, quality of life and recovery. We ended by describing the development of multifaceted programmes to combat mental health stigma and discrimination.

Where’s the Psychology in the Medical Curriculum – and Why does it Matter?

3/20/2013

 
First published 27/02/2013 at http://grahamdavey.blogspot.co.uk
That's rather an extreme blog post title, but it was inspired by the American Psychiatric Association's (APA's) recent comment that "Many of the revisions in DSM-5 will help psychiatry better resemble the rest of medicine". This alone would be enough to send shivers down the spine of most psychology-minded mental health practitioners, but it led me to thinking about where that might leave psychology as a rather different knowledge-based approach to understanding and treating mental health problems.

Specifically, if the APA want to impose a medical model on mental health, then what will our doctors and physicians be learning about how to deal with their patients with mental health problems? The incremental implications are immense. It is not just that mental health is being aligned with medicine on such an explicit basis; this issue is compounded by the fact that medical training still pays lip service to training doctors in psychological knowledge and, in particular, to a psychological approach to mental health. So has medicine taken the decision to align mental health diagnosis and treatment to fit the constraints of current medical training (rather than vice versa)?

I returned to a President's column I wrote in 2002 about the state of psychology teaching in the UK medical curriculum. The same points I made then seem to apply now. The medical curriculum is not constructed in a way that provides an explicit slot for psychology or psychological knowledge. Even though a recent manifesto for the UK medical curriculum (Tomorrow’s Doctors, 2009) makes it clear that medical students should be able to “apply psychological principles, method and knowledge to medical practice” (p15), there is probably no practical pressure for this to happen. Given that the ‘Tomorrow’s Doctors’ document does advocate more behavioural and social science teaching in the medical curriculum, I suspect that what happens in practice is that a constrained slot for ‘non-core medical teaching’ gets split up between psychology, social science and disciplines such as health economics. If a medical programme decides to take more sociology (because there are sociologists available on campus to teach it) – then there will be less psychology.

The second point I made then was related to the expectations of medical students. This was illustrated by a QAA report for a well-respected medical school. This made the point that:

“...there was a student perception that, in Phase I, the theoretical content relating to the social and behavioural sciences was too large. Particular concern was expressed about aspects of the Health Psychology Module....a number of students suggested that the emphasis placed upon theoretical aspects of these sciences in Phase I was onerous”

Well – death to psychology! My own experience of teaching medical students is that they often have a very skewed perception of science, and in particular, biological science. Interestingly, the ‘Tomorrow’s Doctors’ document advises that medical students should be able to ‘apply scientific method and approaches to medical research’ (p18). But in my experience medical students find it very difficult to conceptualize scientific method unless it is subject matter relevant – i.e. biology relevant. I have spent many hours trying to explain to medical students that scientific method can be applied to psychological phenomena that are not biology based – as long as certain principles of measurement and replicability can be maintained.

But there has been a more recent attempt to define a core curriculum for psychology in undergraduate medical education. This was the report from the Behavioural & Social Sciences Teaching in Medicine (BeSST) Psychology Steering Group (2010) (which I believe to be an HEA Psychology Network group). I am sure this report was compiled with the best of intentions, but I must admit I think its core curriculum recommendations are bizarre, and entirely miss the point of what psychology has to offer medicine! It is as if someone has gone through a first year Introduction to Psychology textbook and picked out interesting things that might catch the eye of a medical student – piecemeal! For example, the report claims that learning theory is important because it might be relevant to “the acquisition and maintenance of a needle phobia in patients who need to administer insulin” (p30). That is both pandering to the medical curriculum and massively underselling psychology as a paradigmatic way of understanding and changing behaviour!

Medical students need to understand that psychology is an entirely different, and legitimate, method of knowledge acquisition and understanding from biological science. Not all mental health problems are reducible to biological diagnoses, biological explanations or medical interventions, and attempts by the APA to shift our thinking in that direction are either delusional or self-promoting. What is most disappointing from the point of view of the development of mental health services is the impact that entrenched medically-based views such as those of the APA will have on the already introverted medical curriculum. Doctors do need to learn about medicine, but they also need to learn that mental health needs to be understood in many ways – very many of which are not traditionally biological in their aetiology or their cure.

‘Stickers’, ‘Jugglers’ and ‘Switchers & Dumpers’ – Which kind of researcher should you be?

3/20/2013

 
First published 04/12/2012 at http://grahamdavey.blogspot.co.uk
I often look back on my own research career with some surprise at where it’s all travelled to. When I was a PhD student I was a dyed-in-the-wool behaviourist loading rats into Skinner boxes and clichés into arguments. Cognitions didn’t exist – and even in the remote possibility that they might, they were of no use to a scientific psychology. I was a radical Skinnerian pursuing a brave new world in which behaviour was all that mattered and contingencies of reinforcement would win out against all the airy-fairy vagaries of other approaches to psychology. Just a few years on from this I was still wondering why my PhD thesis on the “determinants of the post-reinforcement pause on fixed-interval schedules in rats” hadn’t been nominated for a Nobel Prize! 

I’ve begun with this personal example, because it emphasizes how relatively narrow interests (and views and approaches) can seem like they are the universe – and that is especially the case when you are personally invested in a specific piece of research like a PhD thesis. But what happens later on in our academic lives? Should we stay focused and hone our skills in a focused research niche, or should we nervously wander out of that niche into new areas with new challenges requiring new skills? 

It is certainly a question for young academics to think about. Stick with what you know, or get other strings to your bow? If you are a newly graduated PhD, you are more likely than not to be a “clone” of your supervisor, and that may well be a block on you getting a lectureship at the institution in which you did your research degree. But then most recruiting Departments will want to know that you are – as they put it – “capable of independent research” before appointing you. Do you go scrabbling for that last section in your thesis entitled “Future Directions” and try to stretch out your PhD research (often in a painfully synthetic way, like seeing how far some bubble-gum will stretch – even though the ‘amount’ there is still the same)? Or do you bite the bullet and try your newly-learnt skills on some new and different problems?

You have one career lifetime (unless you’re Buddhist!) – so should you diversify or should you focus? Let’s begin with those people who focus an entire research career in one specific area – “the stickers” – often concentrating on a small, limited number of research problems, but with the benefit of developing more and more refined (and sometimes more complex) theoretical models. Cripes – how boring! Take that approach and you’ll become one or more of the following: (a) The person who sits near the front at international conferences and begins asking questions with the phrase “Thank you for your very interesting talk, but…”, (b) That butcher of a referee who everyone knows, even though your reviews are still anonymous, (c) Someone who sits in Departmental recruitment presentations openly mocking the presentation of any applicant not in your specific area of research (usually by looking down at your clasped hands and shaking your head slowly from side to side while muttering words like “unbelievable” or “where’s the science?”), or, finally, you’ll become (d) Director of a RCUK National Research Centre.

So what about taking that giant leap for researcher-kind and diversifying? Well, first, it’s arguably good to have more than one string to your bow, and to become a research “juggler”. The chances are that at some point you’ll get bored with the programme of research that you first embarked on in early career. Having at least two relatively independent streams of research means you can switch your focus from one to the other. It also increases (a) the range of journals you can publish in, (b) the funding bodies you can apply to, and (c) the diversity of nice people you can meet and chat sensibly to at conferences. It can also be a useful way of increasing your publication rate in early mid-career when you’re looking for an Associate Editorship to put on your CV or a senior lectureship to apply for.

But there is more to diversifying than generating two streams of research purely for pragmatic career reasons. If you’re a tenured academic, you will probably in principle have the luxury of being able to carry out research on anything you want to (within reason) – surely that’s an opportunity too good to miss? B.F. Skinner himself promoted the scientific principle of serendipity (a principle that seems to have gone missing from modern-day Research Methods textbooks) – that is, if something interesting crops up in your research, drop everything and study it! This apparently was how Skinner began his studies on response shaping, which eventually led to his treatise on operant conditioning. But diversity is not always a virtue. There are some entrepreneurial “switchers and dumpers” out there, who post a new (and largely unsubstantiated) theory about something in the literature, and then move on to a completely new (and often more trending) area of research, leaving researchers of the former topic to fight, bicker and prevaricate, often for years, about what eventually turns out to be a red herring, or a blind alley, or a complete flight of fancy designed to grab the headlines at the time.

Now, you’ve probably got to the point in this post where you’re desperate for me to provide you with some examples of “stickers”, “jugglers” and “switchers and dumpers” – well, I think you know who some of these people are already, and I’m not going to name names! But going back to my first paragraph, if you’d told me as a postgraduate student about the topics I would be researching now – I would have been scornfully dismissive. But somehow I got here, and through an interesting and enjoyable pathway of topics, ideas, and serendipitous routes. Research isn’t just about persevering at a problem until you’ve tackled it from every conceivable angle, it’s also an opportunity to try out as many candies in the shop as you can – as long as you sample responsibly!

The Lost 40%

3/20/2013

 
First published 02/11/2012 at http://www.psychologytoday.com/blog/why-we-worry
I’ve agonized for some time about how best to write this post. I want to try and be objective and sober about our achievements in developing successful interventions for mental health problems, yet at the same time I don’t want to diminish hope for recovery in those people who rely on mental health services to help them overcome their distress.

The place to start is a meta-analysis of cognitive therapy for worry in generalized anxiety disorder (GAD) just published by my colleagues and myself. For those of you that are unfamiliar with GAD, it is one of the most common mental health problems, is characterized by anxiety symptoms and by pathological, uncontrollable worrying, and it has a lifetime prevalence rate of between 5% and 8% in the general adult population. That means that in a UK population of around 62 million, between 3 and 5 million people will experience diagnosable symptoms of GAD in their lifetime. In a US population of 311 million these figures increase to between 15 and 25 million sufferers within their lifetime. Our meta-analysis found that cognitive therapy was indeed significantly more effective at treating pathological worrying in GAD than non-therapy controls, and we also found evidence that cognitive therapy was superior to other treatments that were not cognitive therapy based.
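For readers who want to check the prevalence arithmetic above, it can be sketched in a few lines of Python. The population and prevalence figures are simply the ones quoted in the paragraph, not additional data:

```python
# Lifetime prevalence of GAD quoted above: 5-8% of the adult population.
LOW_PREVALENCE, HIGH_PREVALENCE = 0.05, 0.08

# Approximate populations quoted above.
populations = {"UK": 62_000_000, "US": 311_000_000}

for country, population in populations.items():
    low = LOW_PREVALENCE * population / 1_000_000
    high = HIGH_PREVALENCE * population / 1_000_000
    print(f"{country}: roughly {low:.1f} to {high:.1f} million lifetime sufferers")
```

Rounding to whole millions, this reproduces the "3 to 5 million" (UK) and "15 to 25 million" (US) figures in the text.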

So, all well and good! This evidence suggests that we’ve developed therapeutic interventions that are significantly better than doing nothing and that are marginally better than some other treatments. Our results also suggest that the magnitude of these effects is slightly larger than had previously been found, possibly indicating that newer forms of cognitive therapy are increasingly effective.

But what can the service user with mental health problems make of these conclusions? On the face of it they seem warmly reassuring – we do have treatments that are more effective than doing nothing, and the efficacy of these treatments is increasing over time. But arguably, what the service user wants to know is not “Is treatment X better than treatment Y?”, but “Will I be cured?” The answer to that is not so reassuring. Our study was one of the first to look at recovery data as well as the relative efficacy of treatments. Across all of the studies for which we had data on levels of pathological worrying, the primary recovery data revealed that only 57% of sufferers were classed as recovered at 12 months following cognitive therapy – and, remember, cognitive therapy was found to be more effective than other forms of treatment. To put it another way, 43% of people who underwent cognitive therapy for pathological worrying in GAD were still not classed as recovered one year later. Presumably, they were still experiencing distressing symptoms of GAD that were adversely affecting their quality of life. I think these findings raise two important but relatively unrelated issues.

First, is a recovery rate of 57% enough to justify 50 years of developing psychotherapeutic treatments for mental health disorders such as GAD? To be sure, GAD is a very stubborn disorder. Long-term studies of GAD indicate that around 60% of people diagnosed with GAD were still exhibiting significant symptoms of the disorder 12 years later (regardless of whether they’d had treatment for these symptoms during this period). Let’s apply this to the prevalence figures I quoted earlier in this piece. This means that the number of people in the UK and the USA suffering long-term symptoms of GAD during their lifetime might be as high as 3 million and 15 million respectively. In 50 years of developing evidence-based talking therapies, have we been too obsessed with relative efficacy and not enough with recovery? Has too much time been spent just ‘tweaking’ existing interventions to make them competitive with other existing interventions? Perhaps as our starting point we should be taking a more universal view of what is required for recovery from disabling mental health problems? That overview will not just include psychological factors; it will inevitably include social, environmental and economic factors as well.
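The long-term figures above follow the same arithmetic: the 60% persistence rate applied to the upper lifetime-prevalence estimates quoted earlier in the post. A minimal sketch:

```python
# ~60% of people diagnosed with GAD still show significant symptoms 12 years on.
PERSISTENCE = 0.60

# Upper lifetime-prevalence estimates quoted earlier in the post.
lifetime_sufferers = {"UK": 5_000_000, "US": 25_000_000}

for country, sufferers in lifetime_sufferers.items():
    long_term = PERSISTENCE * sufferers / 1_000_000
    print(f"{country}: up to {long_term:.0f} million long-term sufferers")
```

This gives 3 million for the UK and 15 million for the US, matching the figures in the text.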

Second, what do we tell the service user? Mental health problems such as GAD are distressing and disabling. Hope of recovery is the belief that most service users will take into treatment, but on the basis of the figures presented in this piece, it can only be a 57% hope!  This level of hope is not just reserved for cognitive therapy for GAD or psychotherapies in general, it is a figure that pretty much covers pharmaceutical treatments for GAD as well, with the best remission/recovery rates for drug treatments being around 60% (fluoxetine) and some as low as 26%.

I have spent this post discussing recovery from GAD in detail, but I suspect similar recovery levels and similar arguments are relevant to other forms of intervention (such as exposure therapies) and other common mental health problems (such as depression and anxiety disorders generally). It may be time to start looking at the bigger picture required for recovery from mental health problems so that hope can also be extended to the 40-45% of service users for whom we have yet to openly admit that we cannot provide a ‘cure’.

Mental health research: Are you contributing to paradigm stagnation or paradigm shift?

3/20/2013

 
First published 27/08/2012 at http://grahamdavey.blogspot.co.uk
“Normal science does not aim at novelty but at clearing up the status quo. It discovers what it expects to discover.” – Thomas Kuhn.

I was struck by this quote from Thomas Kuhn last week when reading a Guardian blog about the influential philosopher of science. It’s a simple statement suggesting that so-called ‘normal science’ isn’t going to break any new ground, isn’t going to change the way we think about something, but probably will reinforce established ideas, and – perhaps even more importantly – will entrench what scientists think are the important questions that need answering. Filling in the gaps to clear up the status quo is probably a job that 95% of scientists are happy to do. It grows the CV, satisfies your Dean of School, gets you tenure and pays the mortgage.

But when I first read that quote, I actually misread it. I thought it said “Normal science does not aim at novelty but aims to maintain the status quo”! I suspect that when it boils down to it, there is not much difference between my misreading of the quote and what Kuhn had actually meant. Once scientists establish a paradigm in a particular area this has the effect of (1) framing the questions to be asked, (2) defining the procedures used to answer them, and (3) mainstreaming the models, theories and constructs within which new facts should be assimilated. I suspect that once a paradigm is established, even those agencies and instruments that provide the infrastructure for research contribute to entrenching the status quo. Funding bodies and journals are good examples. Both tend to map on to very clearly defined areas of research, and at times when more papers are being submitted to scientific journals than ever before, demand management tends to lead to journal scope shrinkage in such a way that traditional research topics are highlighted more and more, and new knowledge from other disciplinary approaches is less likely to fertilize research in a particular area.

This led me to thinking about my own research area, which is clinical psychology and psychopathology. Can we clinical psychology researchers convince ourselves that we are doing anything other than trying to clear up the status quo in a paradigmatic approach that hasn’t been seriously questioned for over half a century – and in which we might want to question its genuine achievements? Let’s just take a quick look at some relevant points:

1.         DSM still rules the way that much clinical psychology research is conducted. The launch of DSM-5 in 2013 will merely re-establish the dominance of diagnostic categories within clinical psychology research. There are some who struggle to champion transdiagnostic approaches, but they are doing this against a trend in which clinical psychology and psychiatry journals are becoming more and more reliant on diagnostic criteria for inclusion of papers. Journal of Anxiety Disorders is just one example of a journal whose scope has recently shrunk from publishing papers on anxiety to publishing papers on anxiety only in diagnosed populations. DSM-I was published in 1952 – sixty years on it has become even more entrenched as a basis for doing clinical psychology research. No paradigm shift there then!

This doesn’t represent a conspiracy between DSM and journals to consolidate DSM as the basis for clinical psychology research – it merely reflects the fact that scientific journals follow established trends rather than create new spaces within which new concatenations of knowledge can emerge. Journals will by nature be a significant conservative element in the progress of science.

2.         There is a growing isolation in much of clinical psychology research – driven in part by the shrinking scope of clinical research journals and the adherence of many of them to DSM criteria for publication. This fosters detachment from core psychological knowledge, and because of this, clinical psychology research runs the risk of re-inventing the wheel – and probably re-inventing it badly. Some years ago I expressed my doubts about the value of many clinical constructs that had become the focus of research across a range of mental health problems (Davey, 2003). Many of these constructs have been developed from clinical experience and relate to individual disorders or even individual symptoms, but I’m convinced that a majority of them simply fudge a range of different psychological processes, most of which have already been researched in the core psychological literature. I'm an experimental psychologist by training who just happens to have become interested in clinical psychology research, so I was lucky enough to be able to bring some rather different approaches to this research than those who were born and brought up in the clinical psychology way of doing things. What must not happen is for clinical psychology research to become even more insular and even more entrenched in reinventing even more wheels - or the wheels on the bus really will just keep going round and round and round!

3.         OK I'm going to be deliberately provocative here – clinical neuroscience and imaging technology cost a lot of money - so its role needs to be enshrined and ring-fenced in the fabric of psychological knowledge endeavor, doesn’t it? Does it? If that’s the case – then we’re in for a long period of paradigm stagnation. Imaging technology is the Mars Rover of cognitive science while the rest of us are using telescopes - or that's the way it seems. There are some clinical funding bodies I simply wouldn't apply to for experimental psychopathology research – ‘cos if it ain’t imaging it ain't gonna get funded - yet where does the contribution of imaging lie in the bigger knowledge picture within clinical psychology? There may well be a well thought out view somewhere out there that has placed the theoretical relevance of imaging into the fabric of clinical psychology knowledge (advice welcome on this)! There is often a view taken that whatever imaging studies throw up must be taken into account by studies undertaken at other levels of explanation - but that is an argument that is not just true of imaging, it's true of any objective and robust scientific methodology.

Certainly - identifying brain locations and networks for clinical phenomena may not be the way to go - there is growing support for psychological constructionist views of emotion, for instance, suggesting that emotions do not have either a signature brain location or a dedicated neural signature at all (e.g. Lindquist, Wager, Kober, Bliss-Moreau & Barrett, 2012). There are some very good reviews of the role of brain functions in psychological disorders - but I'm not sure what they tell us other than the fact that brain function underlies psychological disorders – as it does everything! For me, more understanding of psychological disorders can be gleaned from studying individual experience, developmental and cognitive processes, and social and cultural processes than basic brain function. Brain images are a bit like a snapshot of the family on the beach - the photo doesn't tell you very much about how the family got there or how they chose the beach or how they're going to get home.

But the point I’m trying to make is that if certain ways of doing research require significant financial investment over long periods of time (like imaging technology), then this too will contribute to paradigm stagnation.

4.         When tails begin to wag dogs you know that as a researcher you have begun to lose control over what research you can do and how you might be allowed to do it. Many researchers are aware that to get funding for their research – however ‘blue skies’ it might be – we now have to provide an applied impact story. How will our research have an impact on society? Within clinical psychology research this always seems to have been a reality. Much of clinical psychology research is driven by the need to develop interventions and to help vulnerable people in distress – which is a laudable pursuit. But does this represent the best way to do science? There is a real problem when it comes to fudging understanding and practice. There appears to be a diminishing distinction in clinical psychology between practice journals and psychopathology journals, which is odd because helping people and understanding their problems are quite different things – certainly from a scientific endeavour point of view. Inventing an intervention out of theoretical thin air and then giving it the facade of scientific integrity by testing to see if it is effective in a controlled empirical trial is not good science – but I could name what I think are quite a few popular interventions that have evolved this way – EMDR and mindfulness are just two of them (I expect there will be others who will argue that these interventions didn't come out of a theoretical void, but we still don't really know how they work when they do work). At the end of the day, to put the research focus on ‘what works in practice’ takes the emphasis away from understanding what it is that needs to be changed, and in clinical psychology it almost certainly sets research priorities within establishment views of mental health.

5.         My final point is a rather general one about achievement in clinical psychology research. We would like to believe that the last 40 years have seen significant advances in our development of interventions for mental health problems. To be sure, we’ve seen the establishment of CBT as the psychological intervention of choice for a whole range of mental health problems, and we are now experiencing the fourth wave of these therapies. This has been followed up with the IAPT initiative, in which psychological therapies are being made more accessible to individuals with common mental health problems. The past 40 years have also seen the development and introduction of second-generation antidepressants such as SSRIs. Both CBT and SSRIs are usually highlighted as state-of-the-art interventions in clinical psychology textbooks, and are hailed by clinical psychology and psychiatry respectively as significant advances in mental health science. But are they? RCTs and meta-analyses regularly show that CBT and SSRIs are superior to treatment as usual, wait-list controls, or placebos – but when you look at recovery rates, their impact is still far from stunning. I am aware that this last point is not one that I can claim reflects a genuinely balanced evidential view, but a meta-analysis we have just completed of cognitive therapy for generalized anxiety disorder (GAD) suggests that recovery rates are around 57% at follow-up, which means that 43% of those in cognitive therapy interventions for GAD do not reach basic recovery levels at the end of the treatment programme. Reviews of IAPT programmes for depression suggest no real advantage for IAPT interventions based on quality of life and functioning measures (McPherson, Evans & Richardson, 2009).
In a review article by Craske, Liao, Brown & Vervliet (2012) that is about to be published in Journal of Experimental Psychopathology, they note that even exposure therapy for anxiety disorders achieves clinically significant improvement in only 51% of patients at follow-up. I found it difficult to find studies that provided either recovery rates or measures of clinically significant improvement for SSRIs, but Arroll et al (2005) report that only 56-60% of patients in primary care responded well to SSRIs compared to 42-47% for placebos.

I may be over-cynical, but it seems that the best that our state-of-the-art clinical psychology and psychopharmacological research has been able to achieve is a recovery rate of around 50-60% for common mental health problems - compared with placebo and spontaneous remission rates of between 30-45%. Intervention journals are full of research papers describing new ‘tweaks’ to these ways of helping people with mental health problems, but are tweaks within the existing paradigms ever going to be significant? Is it time for a paradigm shift in the way we research mental health?
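The recovery arithmetic above can be made concrete. A minimal sketch, using the figures quoted in this post (a roughly 57% recovery rate for cognitive therapy against the 30-45% placebo/spontaneous-remission range, from which I've assumed a 40% midpoint), converts the difference into an absolute benefit and a number-needed-to-treat:

```python
# Illustrative arithmetic from the figures quoted in this post.
# The 40% placebo figure is an assumed midpoint of the 30-45% range above.
treatment_recovery = 0.57   # recovery rate for cognitive therapy at follow-up
placebo_recovery = 0.40     # assumed placebo / spontaneous remission rate

# Absolute benefit: extra recoveries attributable to treatment over placebo.
absolute_benefit = treatment_recovery - placebo_recovery

# Number needed to treat: patients treated per one extra recovery over placebo.
nnt = 1 / absolute_benefit

print(f"absolute benefit: {absolute_benefit:.0%}")
print(f"number needed to treat: {nnt:.1f}")
```

On these (assumed) numbers, roughly one extra person recovers for every six treated over and above placebo – one way of seeing how a "superior to placebo" headline can coexist with an unimpressive recovery rate.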

Discovering Facts in Psychology: 10 ways to create “False Knowledge” in Psychology

3/20/2013

 
First published 30/09/2012 on http://grahamdavey.blogspot.co.uk

There’s been quite a good deal of discussion recently about (1) how we validate a scientific fact (http://bit.ly/R8ruMg; http://bit.ly/T5JSJZ; http://bit.ly/xe0Rom), and (2) whether psychology – and in particular some branches of psychology – is prone to generate fallacious scientific knowledge (http://bit.ly/OCBdgJ; http://bit.ly/NKvra6). As psychologists, we are all trained (I hope) to be scientists – exploring the boundaries of knowledge and trying as best we can to create new knowledge, but in many of our attempts to pursue our careers and pay the mortgage, are we badly prone to creating false knowledge? Yes – we probably are! Here are just a few examples, and I challenge most of you psychology researchers who read this post to say you haven’t been a culprit in at least one of these processes!

Here are 10 ways to risk creating false knowledge in psychology.

1.  Create your own psychological construct. Constructs can be very useful ways of summarizing and formalizing unobservable psychological processes, but researchers who invent constructs need to know a lot about the scientific process, make sure they don’t create circular arguments, and must be in touch with other psychological research that is relevant to the understanding they are trying to create. In some sub-disciplines of psychology, I’m not sure that happens (http://bit.ly/ILDAa1).

2.  Do an experiment but make up or severely massage the data to fit your hypothesis. This is an obvious one, but is something that has surfaced in psychological research a good deal recently (http://bit.ly/QqF3cZ; http://nyti.ms/P4w43q).

3.  Convince yourself that a significant effect at p=.055 is real. How many times have psychologists tested a prediction only to find that the critical comparison just misses the crucial p=.05 value? How many times have psychologists then had another look at the data to see if it might just be possible that with a few outliers removed this predicted effect might be significant? Strangely enough, many published psychology papers are just creeping past the p=.05 value – and many more than would be expected by chance! Just how many false psychology facts has that created? (http://t.co/6qdsJ4Pm).
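The "creeping past p=.05" problem in point 3 is easy to demonstrate by simulation. The sketch below is entirely illustrative – the sample sizes, the "drop up to three outliers and re-test" rule, and the p=.05-.15 "so close" window are my assumptions, not anything from the post – but it shows how re-analysing near-miss data inflates the false-positive rate above the nominal 5% even when no true effect exists:

```python
# Simulation of point 3: when a null result just misses p = .05, "having
# another look" (here: repeatedly dropping the most extreme value from each
# group and keeping the best p seen) inflates the false-positive rate.
# All parameters are illustrative assumptions, not taken from the post.
import math
import random
import statistics

def t_test_p(a, b):
    """Two-sided Welch-style test using a normal approximation to the
    t distribution (adequate for the group sizes in this sketch)."""
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def experiment(hack, rng, n=30):
    # Both groups drawn from the SAME population: any "effect" is false.
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    p = t_test_p(a, b)
    if hack and 0.05 < p < 0.15:          # "so close -- try removing outliers"
        for _ in range(3):
            a.remove(max(a, key=abs))     # drop each group's most extreme value
            b.remove(max(b, key=abs))
            p = min(p, t_test_p(a, b))    # keep the best p-value seen
    return p < 0.05

rng = random.Random(1)
runs = 4000
honest = sum(experiment(False, rng) for _ in range(runs)) / runs
hacked = sum(experiment(True, rng) for _ in range(runs)) / runs
print(f"false-positive rate, honest analysis: {honest:.3f}")
print(f"false-positive rate, with 'another look': {hacked:.3f}")
```

Dropping extreme values from null data shrinks the variance and so pushes borderline p-values below .05 far more often than chance alone would; the "hacked" rate comes out noticeably above the honest one.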

4.  Replicate your own findings using the same flawed procedure. Well, we’ve recently seen a flood of blog posts telling us that replication is the answer to fraud and poor science. If a fact can be replicated – then it must be a fact! (http://bit.ly/R8ruMg; http://bit.ly/xe0Rom) Well – no – that’s not the case at all. If you are a fastidious researcher and attempt to replicate a study precisely, then you are also likely to replicate the same flaws that gave rise to false knowledge. We need to understand the reasons why problematic research gives rise to false positives – that is the way to real knowledge (http://bit.ly/UchW4J).

5.  Use only qualitative methods. I know this one will be controversial, but in psychology you can’t just accept what your participants say! The whole reason why psychology has developed as a science is because it has developed a broad range of techniques to access psychological processes without having to accept at face value what a participant in psychological research has to tell us. I’ve always argued that qualitative research has a place in the development of psychological knowledge, but it is in the early stage of that knowledge development and more objective methodologies may be required to understand more proximal mechanisms.

6.  Commit your whole career to a single effect, model or theory that has your name associated with it. Well, if you’ve invested your whole career and credibility in a theory or approach, then you’re not going to let it go lightly. You’ll find multiple ways to defend it, even if it's wrong, and waste a lot of other researchers’ time and energy trying to disprove you. Ways of understanding move on, just like time, and so must the intransigent psychological theorist.

7.  Take a tried and tested procedure and apply it to everything. Every now and then in psychology a new procedure surfaces that looks too good to miss. It is robust, tells you something about the psychological processes involved in a phenomenon, and you can get a publication by applying it to something that no one else has yet applied it to! So join the fashion rush – apply it to everything that moves, and some things that don’t (http://bit.ly/SX37Sn). No I wasn't thinking of brain imaging, but.... Hmmmm, let me think about that! (I was actually thinking about the Stroop!)

8.  If your finding is rejected by the first journal you submit it to, continue to submit it to journals until it’s eventually published. This is a nice way to ensure that your contribution to false knowledge will be permanently recorded. As academic researchers we are all under pressure to publish (http://bit.ly/AsIO8B), so if you believe your study has some genuine contribution to make to psychological science, don’t accept a rejection from the first journal you send it to. In fact, even if you don’t think your study has any real contribution to make to psychological knowledge at all, don’t accept a rejection from the first journal you send it to! Because you will probably get it published somewhere. I’d love to know what the statistics are on this, but I bet if you persist enough, your paper will get published.

9.  Publish your finding in a book chapter (non-peer-reviewed), or an invited review, or a journal special issue - all of which are likely to have an editorial “light touch”. Well, if you do it might not get cited much (http://t.co/D55VKWDm), but it’s a good way of getting dodgy findings (and dodgy theories) into the public domain.

10.  Do some research on some highly improbable effects - and hope that some turn up significant by chance. (http://bit.ly/QsOQNo) And it won’t matter that people can’t replicate it – because replications will only rarely get published! (http://bit.ly/xVmmOv). The more improbable your finding, the more newsworthy it will be, the more of a celebrity you will become, the more people will try to replicate your research and fail, the more you will be wasting genuine research time and effort. But it will be your 15 minutes of fame!

Finally, if you haven’t been able to generate false psychological knowledge through one of these 10 processes, then try to get your finding included in an Introduction to Psychology textbook. Once your study is enshrined in the good old Intro’ to Psych’ text, then it’s pretty much going to be accepted as fact by at least one and maybe two future generations of psychologists. And once an undergrad has learnt a “fact”, it is indelibly inscribed on their brain and is faithfully transported into future reality!

"An effect is not an effect until it is replicated" - Pre-cognition or Experimenter Demand Effects

3/20/2013

 
First published 15/09/2012 at http://grahamdavey.blogspot.co.uk
There has been much talk recently about the scientific process in the light of recent claims of fraud against a number of psychologists (http://bit.ly/R8ruMg), and also the failure of researchers to replicate some controversial findings by Daryl Bem purportedly showing effects reminiscent of pre-cognition (http://bit.ly/xVmmOv). This has led to calls for replication to be the cornerstone of good science – basically “an effect is not an effect until it’s replicated” (http://bit.ly/UtE1hb). But is replication enough? Is it possible to still replicate “non-effects”? Well, replication probably isn’t enough. If we believe that a study has generated ‘effects’ that we think are spurious, then failure to replicate might be instructive, but it doesn’t tell us how or why the original study came by a significant effect. Whether the cause of the false effect is statistical or procedural, it is still important to identify this cause and empirically verify that it was indeed causing the spurious findings. This can be illustrated by a series of replication studies we have recently carried out in our experimental psychopathology labs at the University of Sussex.

Recently we’ve been running some studies looking at the effects of procedures that generate distress on cognitive appraisal processes. These studies are quite simple in design and highly effective at generating negative mood and distress in our participants (participants are usually undergraduate students participating for course credits), and pilot studies suggest that experienced distress and negative mood do indeed facilitate the use of clinically-relevant appraisal processes.

The first study we did was piloted as a final year student project. It produced nice data that supported our predictions – except for one thing. The two groups (distress group and control group) differed significantly on pre-manipulation baseline measures of mood and other clinically-relevant characteristics. Participants due to undertake the most distressing manipulation scored significantly higher on pre-experimental clinical measures of anxiety (M=6.9, SD 3.6, v M=3.8, SD 2.5) [F(56)=4.01, p=.05], and depression (M=2.2, SD 2.6, v M=1.1, SD 1.1) [F(56)=4.24, p=.04]. Was this just bad luck? The project student had administered the questionnaires herself prior to the experimental manipulations, and she had used a quasi-random participant allocation method (rotating participants to experimental conditions in a fixed pattern).

Although our experimental predictions had been supported (even when pre-experimental baseline measures were controlled for), we decided to replicate the study, this time run by another final year project student. Lo and behold, the participants due to undertake the distressing task scored significantly higher on pre-experimental measures of anxiety (M=9.1, SD 4.1, v M=6.9, SD 3.0) [F(56)=6.01, p=.01], and depression (M=4.3, SD 3.7, v M=2.4, SD 2.4) [F(56)=5.09, p=.02]. Another case of bad luck? Questionnaires were administered and participants allocated in the same way as the first study.

Was this a case of enthusiastic final year project students determined to complete a successful project in some way conveying information to the participants about what they were to imminently undergo? Basically, was this an implicit experimenter demand effect being conveyed by an inexperienced experimenter? To try and clear this up, we decided to replicate again, this time run by an experienced postdoc researcher – someone who was wise to the possibility of experimenter demand effects, aware that this procedure was possibly prone to these demand effects, and would presumably be able to minimize them. To cut a long story short – we replicated the study again – but still replicated the pre-experimental group differences in mood measures! Participants who were about to undergo the distress procedure scored higher than participants about to undergo the unstressful control condition.

At this point, we were beginning to believe in pre-cognition effects! Finally, we decided to replicate again. But this time, the experimenter would be entirely blind to the experimental condition that a participant was in. Sixty sealed packs of questionnaires and instructions were made up before any participants were tested – half contained instructions for running the stressful condition and half for the control condition, along with guidance for the participant on completing the questionnaires. The experimenter merely allowed the participant to choose a pack from a box at the outset, and was entirely unaware which condition the participant was running during the experiment. To cut another long story short – to our relief and satisfaction, the pre-experimental group differences in anxiety and depression measures disappeared. It wasn’t pre-cognition after all - it was an experimenter demand effect.
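The fix described above – pre-prepared sealed packs chosen by the participant – is essentially concealed, pre-randomized allocation. A minimal sketch of the two schemes (the condition names and numbers are illustrative, not the study's actual materials) shows why one leaks information to an unblinded experimenter and the other does not:

```python
# Contrast of the two allocation schemes described in this post:
# a fixed rotation (predictable, so an unblinded experimenter can always
# anticipate the next condition) versus pre-shuffled sealed packs
# (balanced but unknowable to the experimenter at test time).
import random

CONDITIONS = ["distress", "control"]

def rotation_allocation(n):
    """Quasi-random fixed rotation: fully determined by participant index."""
    return [CONDITIONS[i % 2] for i in range(n)]

def sealed_packs(n, seed):
    """Concealed allocation: equal numbers of packs per condition,
    shuffled once before any participant is tested."""
    packs = CONDITIONS * (n // 2)
    random.Random(seed).shuffle(packs)
    return packs

rotation = rotation_allocation(60)
packs = sealed_packs(60, seed=42)
print(rotation[:6])                                      # strictly alternates
print(packs.count("distress"), packs.count("control"))   # still balanced
```

The rotation list can be read off from the participant index, so the experimenter always knows what is coming next; the shuffled packs keep the same 30/30 balance while making the next assignment unpredictable, which is what removed the demand effect.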

The point I’m making is that replication alone may not be sufficient to identify genuine effects – you can also replicate “non-effects” quite effectively - even by actively trying not to, and even more so by meticulously replicating the original procedure. If we have no faith in a particular experimental finding, it is incumbent on us as good scientists to identify the factor or factors that gave rise to that spurious finding wherever we can.

Designing an Intro to Psych Textbook

3/17/2013

 
Originally published 12/04/2012 at http://grahamdavey.blogspot.co.uk

                                                  “Teach your children well, their father's hell did slowly go by,
                                                    And feed them on your dreams, the one they fix, the one you'll know by”.

I've been asked to scope out a proposal for a new UK/European based Intro to Psych textbook for undergraduate students.  So what should this book look like? Simply asking people what you should put into an Intro to Psych book has its problems. Here lies the vicious cycle that leads to a plethora of clone-like text books, most of which contain much of the same material, many of the same learning features (but using different buzzy names), all boasting much the same range of web resources, all dividing psychology into similar sub-sections and as a result all perpetuating the same "preordained" syllabus – the winner is the one with most pages and the biggest website!

My recent blog titled "Whatever happened to learning theory" led to some very interesting correspondence with Eric Charles (@EPCharles) about some of the things that were right and wrong with Introductory Psychology. Eric has posted a couple of blogs discussing what he believes is wrong with the way we currently teach Intro to Psych and also making some suggestions about what an Intro to Psych textbook should do (http://bit.ly/H60Vld and http://bit.ly/H6ZpBX) - I recommend you look at these in detail. But before I summarise Eric's points it is worth considering how Intro to Psych textbooks often get scoped in the first place.

I've already edited and contributed to one Intro to Psych text - "Complete Psychology" published by Hodder HE (http://bit.ly/HcD6hU). The first edition was published in 2002, and it represented an exciting race to be the first UK full colour Intro to Psych text. The book (all 849 pages) was written in six months, and although there are many aspects of the book that I'm proud to be associated with, it was very traditional in its representation of psychology. It adhered strictly to the BPS curriculum and unashamedly portrayed this as its main virtue. It was great fun to write and to work with the other contributors at that time; it was also fun spending a summer conceiving of and actualising a range of learning and presentational features for the book. But time, and the greater resources of the other larger publishers, have overtaken this project.

The trap we now fall into is that Intro to Psych textbooks have a desperate need to be as inclusive as possible. We are all open ears to every psychology lecturer who says "you didn't include x" or "there wasn't enough of y" - so we bung it in to be as inclusive as we can and to say we cover more material and provide more resources than any other textbook. What is perplexing about asking Psychology lecturers what they want from an Intro to Psych book is that, in my experience, prior to the book being written they will say they want X, Y and Z, but once it's written and on the bookshelves they rarely use X, Y and Z. Web resources are a good example. Lecturers will say they want PowerPoint presentations, seminar guidelines, photos and videos, but there's very little evidence they use these resources very much once they've been generated. In fact, most lecturers (quite reasonably) prefer to use their own lecture support resources.

So in the production of an Intro to Psych textbook a lot of effort often goes into providing the range of topics and resources that lecturers 'say' they want, and much less goes into the overall 'concept' of the book, and as a consequence into providing a modern, integrated, challenging syllabus for students which satisfies the developing intellectual needs of psychology majors, genuinely reflects the development of psychological science, and also provides psychology minors with a suitable overview of the discipline.

To go back to Eric Charles, he makes the very valid point that Intro to Psych books often serve as the main “controllable exposure that most people will have to academic psychology”. He also points out that Intro to Psych books should (1) continually challenge students to approach psychological questions in new and unintuitive ways, rather than striving to make the subject matter fit easily into their preconceptions; (2) emphasize findings that remain generally accepted over long periods – providing a basis for the scientific value of psychology and for future research, rather than blindly focussing on cutting-edge recent research; and (3) try to expose students to the complexity of current debates rather than trying to get students to express their own opinions about current debate. Most importantly, Intro to Psych books fail to provide a vision of the field as a whole, and they fail to make it clear why the same course should talk about “neurons, eye-balls, brain waves, rats pressing levers, Piaget, introversion, compliance, and anti-social personality disorder”. In addition he suggests that Intro to Psych books should not include “trivial but attention getting findings, or now rejected findings”. For example, he (1) challenges anyone to tell him what critical insight into psychology was gained from the Stanford Prison Experiment, and (2) asks why Freud’s theories are treated in such great detail.

So what should a modern Intro to Psych syllabus look like and how should a modern Intro to Psych book portray it?

First, syllabuses designed and recommended by learned societies probably don’t help to definitively answer this question. I am a great believer in the benefits that learned societies can offer their discipline and associated professions – and this has been practically demonstrated by my commitment over the years to the British Psychological Society. However, learned societies tend to be rather loosely bound organizations that have evolved organizational structures based on fostering as many representative interests within the discipline as can be practically sustained (and all competing for a high profile and a piece of whatever cake is being offered). Promoting and representing the diversity of the discipline in this way is likely to lead to a recommended syllabus that is characterized by its breadth and diversity rather than its structure and the developmental dynamics of the subject matter. It is certainly important to have breadth in the syllabus, but this approach rarely provides conceptual structure for the discipline as a whole – usually just a categorical list of recommended topics, usually according to an historically pre-ordained formula.

Second, asking psychology lecturers what they want in either a syllabus or a textbook leads to much the same inclusive, but unstructured, outcome – and this is very much the process that publishers go through when they review proposals for a new textbook. The review process largely tells the author what is missing and needs to be included, rather than providing insight into overall structure.

Nevertheless, the contemporary pressures of satisfying fee-paying undergraduate students do lead psychology departments to think about how Intro to Psych might be structured and portrayed – if only (and rather shallowly) in a way that keeps their students happy (and rating the course highly in the National Student Survey). In particular, many students come to psychology with the aspiration to become applied psychologists. This has almost certainly led to departments including more applied psychology courses in their first-year syllabus, and even trying to teach some core psychology through applied psychology modules. There is nothing wrong with this if it successfully teaches core knowledge and keeps the students happy (see http://bit.ly/zFaVrw).

So where do we go for an Intro to Psych syllabus that genuinely reflects the dynamic development of the discipline, provides an integrated structure and vision of the field, considers important theoretical, conceptual and methodological developments, and both challenges and satisfies students?

Here are some obvious and traditional approaches:

1. The ‘shopping list’ approach – ask a cross-section of lecturers (and students) what they want to see in an Intro to Psych course, take the top 30 topics, and commission a chapter on each.

2. The ‘level of explanation’ approach – commission sections on biological psychology, cognitive psychology, and behavioural approaches.

3. The ‘core knowledge’ approach – a traditional one in which psychology is split into historically important core topics, including cognitive psychology, biological psychology, social psychology, personality and individual differences, developmental psychology, and perhaps abnormal psychology and conceptual and historical issues.

4. The ‘lifespan’ approach – group sections of the book around describing and explaining the psychology of various life stages, including pre-natal, infancy, childhood and adolescence, adulthood, and old age.

5. The ‘embedded features’ approach – take a traditional approach to defining the core areas of psychology, but include a range of teaching and learning features in each chapter that convey visions of how the discipline is developing.

This list is by no means exhaustive, and I’d be grateful for your thoughts and suggestions about what an Intro to Psych textbook should be and should look like, and what it should (and perhaps should not) include. Whatever the outcome, it needs to be engaging and make both teaching and learning natural and easy processes. But most importantly for our discipline and how we teach future generations of students, it needs to convincingly reflect dynamic changes in the content and structure of psychology, and not just pander to the current market needs of the lowest common denominator.

    Author

    Graham C. L. Davey, Ph.D. is Professor of Psychology at the University of Sussex, UK. His research interests extend across mental health problems generally, and anxiety and worry specifically. Professor Davey has published over 140 articles in scientific and professional journals and written or edited 16 books including Psychopathology; Clinical Psychology; Applied Psychology; Complete Psychology; Worrying & Psychological Disorders; and Phobias: A Handbook of Theory, Research & Treatment. He has served as President of the British Psychological Society, and is currently Editor-in-Chief of Journal of Experimental Psychopathology and Psychopathology Review. When not writing about psychology he watches football and eats curries.
