Papers from Sidcup
Graham Davey's website

"Psychology" - The Struggling Science of Mental Life

3/20/2013

 
First published 29/12/2012 at http://grahamdavey.blogspot.co.uk
Many of you may be old enough to remember George A. Miller’s book “Psychology: The Science of Mental Life”. As an undergraduate psychology student I was brought up on books whose titles variously contained the words science, psychology, behaviour and mind. These books had one main purpose – to persuade students of psychology that psychology was a legitimate scientific pursuit, using rigorous scientific methods to understand human behaviour and the human mind, all on a par with the more established sciences such as biology, physics and chemistry.

Even if you’re happy with the notion of psychology as a science, we then have the various debates about whether psychology is a biological science or a social science, and in the UK this isn’t just an issue of terminology; it is also a major issue about funding levels. Do psychologists need labs? Do undergraduate psychology students need to do lab classes to learn to be psychologists? This almost became the tail wagging the dog, as funding bodies such as HEFCE (and its predecessor the Universities Funding Council) looked to save money by re-banding psychology as a half-breed science sitting somewhere between social science and biological science. I even seem to recall that some psychology departments were designated social psychology departments and given little or no lab funding. So were students in those Departments being taught science or not? What breed of psychology was it?

Just one more example before I get to the main point. A few years ago I had the good fortune to teach a small-group elective to second-year medical students. This was a 6-week course on cognitive models of psychopathology. I was fortunate to teach this group because it contained highly motivated and intelligent students. Now, I have never viewed myself as anything other than a scientist using scientific methods to understand human behaviour in general and psychopathology in particular. But these groups of highly able and highly trained medical students inevitably had difficulty with two particular aspects of the material I was teaching them: (1) how can we use science to study “cognitions” when we can’t see them, when we make up ‘arbitrary’ concepts to describe them, and we can’t physically dissect them? and (2) at the end of the day, cognitions will always boil down to biology, so it is biology – and not cognitions – that should be the object of scientific study.

What struck me most was that these students had already developed a conception of science that was not procedure based, but was content based. It was the subject matter that defined science for them, not particularly the methodology.

My argument here is that while psychology has been touted as a science for a number of generations now, psychologists over these generations have failed to convince significant others (scientists in other disciplines, funding organizations, etc.) that psychology is a science on a par with other established sciences. Challenges to psychology as a science come in many forms and from many different sources. Here are a few examples:

(1)      Funding bodies frequently attempt their own ‘redefining’ of psychology, especially when budgets are tight, and psychology is a soft target here, with its large numbers of students offering significant savings if science-related funding is downgraded.

(2)      Students, teachers and researchers in other science disciplines often have very esoteric views of what science is, and these views revolve around their own subject matter and the techniques they themselves use to understand that subject matter. Psychologists have probably not been proactive or aggressive enough in broadcasting the ways in which psychology is a science and how it uses scientific methodologies in a highly objective and rigorous way.

(3)      Members of other science disciplines frequently have a ‘mental block’ when it comes to categorizing psychology as a science (that’s probably the nicest way I can put it!). This reminds me of the time a few years ago when I was representing psychology on the UK Science Council. There was a long discussion about how to increase the number of women taking science degrees. During this discussion it was pointed out that psychology was extremely successful at recruiting female students, so perhaps we shouldn’t be too pessimistic about recruiting women into at least some branches of science. The discussion paused briefly, and then continued as if nothing of any relevance whatsoever had been said!

(4)      All branches of knowledge are open to allegations of fraud, and there has been considerable discussion recently about fraud in science, fraud in psychology and the social sciences, and – most specifically – fraud in social psychology. Arguably, psychology is the science discipline most likely to be hurt by such allegations – not because its methodology is necessarily less rigorous than in other science disciplines or its publication standards any less high, but because many scientists in other disciplines fail to understand how psychology operates as a science. Sadly, this is even true within the discipline of psychology itself, and it is easy to take the trials and tribulations recently experienced in social psychology research as an opportunity for the more ‘hard-nosed’ end of psychology to sneer at what might be considered the softer under-belly of psychological science. One branch of psychology ‘sneering’ at another is not a clever thing to do, because it will all be grist to the mill for members of other science disciplines who would brand psychology generally as “non-scientific”.

I’ll finish by mentioning a recent report published in 2011 attempting to benchmark UK psychology research within an international context. Interestingly, this report (published jointly by the ESRC, BPS, EPS and AHPD) listed nine challenges to the competitiveness of current psychology research in the UK. A significant majority of these challenges relate to the skills and facilities necessary for pursuing psychology as a science!

Psychology still requires an orchestrated campaign to establish its scientific credentials – especially in the eyes of other science disciplines, many of which have their own distorted view of what science is, but already occupy the intellectual high ground. Challenges to psychology as a science come from many diverse sources, including funding bodies, other sciences, intra-disciplinary research fraud, and conceptual differences within psychology as an integrated, but diverse, discipline.

"An effect is not an effect until it is replicated" - Pre-cognition or Experimenter Demand Effects

3/20/2013

 
First published 15/09/2012 at http://grahamdavey.blogspot.co.uk
There has been much talk recently about the scientific process in the light of recent claims of fraud against a number of psychologists (http://bit.ly/R8ruMg), and also the failure of researchers to replicate some controversial findings by Daryl Bem purportedly showing effects reminiscent of pre-cognition (http://bit.ly/xVmmOv). This has led to calls for replication to be the cornerstone of good science – basically “an effect is not an effect until it’s replicated” (http://bit.ly/UtE1hb). But is replication enough? Is it possible to still replicate “non-effects”? Well, replication probably isn’t enough. If we believe that a study has generated ‘effects’ that we think are spurious, then failure to replicate might be instructive, but it doesn’t tell us how or why the original study came by a significant effect. Whether the cause of the false effect is statistical or procedural, it is still important to identify this cause and empirically verify that it was indeed causing the spurious findings. This can be illustrated by a series of replication studies we have recently carried out in our experimental psychopathology labs at the University of Sussex.

Recently we’ve been running some studies looking at the effects of procedures that generate distress on cognitive appraisal processes. These studies are quite simple in design and highly effective at generating negative mood and distress in our participants (participants are usually undergraduate students participating for course credits), and pilot studies suggest that experienced distress and negative mood do indeed facilitate the use of clinically-relevant appraisal processes.

The first study we did was piloted as a final year student project. It produced nice data that supported our predictions – except for one thing. The two groups (distress group and control group) differed significantly on pre-manipulation baseline measures of mood and other clinically-relevant characteristics. Participants due to undertake the most distressing manipulation scored significantly higher on pre-experimental clinical measures of anxiety (M = 6.9, SD = 3.6 vs. M = 3.8, SD = 2.5), F(56) = 4.01, p = .05, and depression (M = 2.2, SD = 2.6 vs. M = 1.1, SD = 1.1), F(56) = 4.24, p = .04. Was this just bad luck? The project student had administered the questionnaires herself prior to the experimental manipulations, and she had used a quasi-random participant allocation method (rotating participants to experimental conditions in a fixed pattern).

Although our experimental predictions had been supported (even when pre-experimental baseline measures were controlled for), we decided to replicate the study, this time run by another final year project student. Lo and behold, the participants due to undertake the distressing task scored significantly higher on pre-experimental measures of anxiety (M = 9.1, SD = 4.1 vs. M = 6.9, SD = 3.0), F(56) = 6.01, p = .01, and depression (M = 4.3, SD = 3.7 vs. M = 2.4, SD = 2.4), F(56) = 5.09, p = .02. Another case of bad luck? Questionnaires were administered and participants allocated in the same way as in the first study.
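
For readers who like to see the arithmetic, here is a minimal sketch of the kind of baseline check involved, written in Python with hypothetical data rather than our actual dataset (the group sizes, seed and SciPy usage are all illustrative assumptions):

```python
# Minimal sketch of a pre-manipulation baseline check on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical pre-manipulation anxiety scores for two groups of 29 participants
distress_group = rng.normal(loc=6.9, scale=3.6, size=29)
control_group = rng.normal(loc=3.8, scale=2.5, size=29)

# One-way ANOVA on the baseline scores (with two groups this is
# equivalent to an independent-samples t-test)
f_stat, p_value = stats.f_oneway(distress_group, control_group)
print(f"Baseline anxiety: F = {f_stat:.2f}, p = {p_value:.3f}")

# A significant result here flags a group difference that already exists
# *before* the manipulation, so any post-manipulation difference is ambiguous.
```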

Was this a case of enthusiastic final year project students, determined to complete a successful project, in some way conveying information to the participant about what they were to imminently undergo? Basically, was this an implicit experimenter demand effect being conveyed by an inexperienced experimenter? To try and clear this up, we decided to replicate again; this time it was to be run by an experienced post doc researcher – someone who was wise to the possibility of experimenter demand effects, aware that this procedure was possibly prone to these demand effects, and who would presumably be able to minimize them. To cut a long story short – we replicated the study again – but still replicated the pre-experimental group differences in mood measures! Participants who were about to undergo the distress procedure scored higher than participants about to undergo the unstressful control condition.

At this point, we were beginning to believe in pre-cognition effects! Finally, we decided to replicate again. But this time, the experimenter would be entirely blind to the experimental condition that a participant was in. Sixty sealed packs of questionnaires and instructions were made up before any participants were tested – half contained instructions for running the stressful condition and half for the control condition, along with information for the participant about how to complete the questionnaires. The experimenter merely allowed the participant to choose a pack from a box at the outset, and was entirely unaware of which condition the participant was in during the experiment. To cut another long story short – to our relief and satisfaction, the pre-experimental group differences in anxiety and depression measures disappeared. It wasn’t pre-cognition after all - it was an experimenter demand effect.
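
The logic of that fix is easy to sketch in code. This is a hypothetical illustration of blinded allocation via pre-sealed packs (Python; not our actual lab procedure or software):

```python
# Hypothetical sketch of blinded allocation via pre-sealed, shuffled packs.
import random

random.seed(42)

# Sixty packs: half for the stressful condition, half for the control condition
conditions = ["stress"] * 30 + ["control"] * 30
random.shuffle(conditions)  # allocation fixed before any participant is tested

# Each pack is identified only by an opaque number; the number-to-condition
# mapping is held by someone other than the experimenter running the sessions.
packs = {pack_id: condition for pack_id, condition in enumerate(conditions, start=1)}

# During a session the participant simply draws the next pack; the condition
# stays hidden from the experimenter until the pack is opened.
for pack_id in list(packs)[:3]:
    print(f"Pack {pack_id}: contents sealed")
```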

The point I’m making is that replication alone may not be sufficient to identify genuine effects – you can also replicate “non-effects” quite effectively - even by actively trying not to, and even more so by meticulously replicating the original procedure. If we have no faith in a particular experimental finding, it is incumbent on us as good scientists to identify the factor or factors that gave rise to that spurious finding wherever we can.

How Research Methods Textbooks Fail Final Year Project Students

3/20/2013

 
First published 05/09/2012 at http://grahamdavey.blogspot.co.uk
The time is about to come when all those fresh-faced final year empirical project students will be filing through our office doors looking for the study that’s going to give them the first class degree they crave.

Unfortunately, as a supervisor you’ll find that their mind isn’t focused on doing scientific research – it’s focused on getting a good mark for their project. This means that most of your time as a supervisor will be spent not on training your undergraduate supervisees to do research (as it should be), but on (1) telling them what they have to do to write up a good project, and (2) reassuring them that they’ve understood what you said is required for writing up a good project.

As an empirical scientist you might believe that the most important part of the training for your undergraduate project students is learning about experimental design and about statistical analysis. Wrong. Absolutely no over-arching information about experimental design will be absorbed by the student – they will simply lie awake at night needing to know how many participants they will need to test and – more importantly – how they will get those participants.

Most project students have a small notebook they’ve bought from W H Smiths and in which they write down the pressing questions they need to ask their supervisor at the next supervision session (just in case they may forget). Questions like “Can I do this experiment in my bathroom in my student flat?”, “Can I test my mother’s budgerigar if I’m short of participants?”, “Will it matter if my breath smells of cider when I’m coding my data?”, “Do I need to worry about where I put the decimal point?”, “Will it affect my participants’ behaviour if I dye my hair day-glow orange in the middle of the study?”… and so on.

I believe that project students ask these kinds of questions because none of these questions are properly addressed or answered in standard Research Methods textbooks – an enormous oversight! Research Methods textbooks mince around talking about balanced designs, counterbalancing, control groups, demand effects, and so on. But what about the real practical issues facing a final year empirical project student? “How will I complete my experiment if I split up with my boyfriend and can’t use his extended local family as participants?”, “Where can I find those jumbo paper clips that I need to keep all the response sheets together?”, “Why do I need to run a control condition when I could be skiing in Austria?”

Perhaps we need some new, young, motivated research methods authors to provide us with the textbooks that will answer the full range of questions asked by undergraduate empirical project students. Sadly, at present, these textbooks answer the questions that students aren’t interested in asking – let’s get real with undergraduate research training!

The Perfect University

3/20/2013

 
First published 13/06/2012 at http://grahamdavey.blogspot.co.uk
Why doesn’t every student in Higher Education get awarded a first class degree? Is it because they’re not intelligent enough? Is it because they’re not taught effectively? Is it because a majority of students are plain lazy? Is it because they are too spoon-fed? Well no, it’s none of those. It’s because most Universities haven’t yet fine-tuned their assessment and classification systems in a way that will allow all students – regardless of ability and potential – to get a first class degree. A majority of Universities have adjusted their assessment and classification schemes to a point where only the most delinquent of students will attain anything less than an upper-second class degree, but there is still some fine-tuning required to turn out close to 100% first class students.

Here are some tips for those Universities and institutions still striving for this level of perfection. Individually none of these factors is necessarily bad practice or illegal, and indeed many institutions strive to introduce many of these factors as examples of innovative good practice. However, if you put them all together in one scheme, you create an assessment and classification system that can turn the most delinquent and uninspiring student into a first class success. Here are the basic elements of that system:

1.         Always adopt a categorical marking scheme. Make sure that the first class band of marks covers 30% of awardable marks (e.g. 70-100%) whereas other classification bands cover only 10% (e.g. upper second class from 60-69%).  Within the first class band of marks, make sure there are as few categorical marks available to be awarded as possible and that there is a giant leap in awardable marks between a low first and a good first. For example, make the following marks the only ones awardable in the first class band, 75%, 90% and 100%. Then make sure that the assessment guidelines for 90% are as similar as possible to 75%, but with an added factor that all first class scripts would normally possess (e.g. to be awarded 90% a piece of work must have all the characteristics of a piece of work worthy of 75%, but will show “evidence of breadth of reading”).

2.         Always make sure that each piece of work is double marked, and that any discrepancies between markers are rounded up (e.g. if one marker awards 75% and the second awards 95%, then award 95%).

3.         Allow all final year students a resit option on failed papers that is not capped at the basic pass mark. Indeed, also consider allowing final year students the opportunity to resubmit any piece of work where they are not satisfied with the original mark.

4.         Include MCQs as a highly weighted component of every course/module – at both second and final year. Ensure that these MCQs are taken from a limited bank of questions that is recycled every year. Conveniently forget to adjust the marks on these MCQs for the possibility of chance correct answers (the standard correction that gets ‘forgotten’ is sketched after this list).

5.         Include as many assessments as possible where the student has the opportunity to score 100% (e.g. MCQs, assessments where there is an indisputable correct answer or answers, etc.)

6.         Have at least one course/module in the final year that is weighted to make up most of the marks for that year (e.g. a final year project/dissertation). Ensure that the credit weighting of this course is excessive (e.g. the equivalent to 4 other courses), but that the work required by the student is nowhere near equivalent to the work required of four courses. Make sure the students are aware that this is a course/module on which they should concentrate their efforts.

7.         Adopt a very liberal classification borderline “bumping up” scheme that will ensure that as many students as possible below a borderline meet the criteria for being “bumped up” into the higher classification bracket even if they haven’t achieved the required aggregate mark for that higher classification bracket. Make sure that this is a mechanistic “bumping up” process determined by an algorithm (don’t involve the external examiners in this process – they may question it!)

8.         Introduce changes to the assessment and classification processes every year. This will mean that students will usually be simultaneously graded by two schemes – the “old scheme” and the “new scheme”, and all candidates will be classified according to their ‘best’ outcome from either of the schemes.

9.         Encourage students to apply for concessions and submit mitigating evidence. Make this process as simple as possible and do not set deadlines for evidence to be submitted. In particular, allow mitigation to be submitted after the student has knowledge of their degree classification.

10.       Allow external examiners to adjust agreed marks. But only upwards, so as not to unnecessarily disadvantage any student.

11.       No need to make candidates' identities anonymous. Some good students may have an off day in the exams and names on scripts will allow the internal examiner to mark the candidate according to their ability rather than on an exam "off day". Poor students who perform above their expected ability in an exam can be identified and rewarded accordingly.

12.       External examiners need pacifying and domesticating. Make sure that they have comfortable hotels and are given expensive dinners. Always tell them that there have been IT problems in the Registry and a full summary of marks and assessment statistics is unavailable this year. Fabricate at least two admin staff illnesses which have meant that scripts and coursework could not be sent to the external for moderation. Compliant externals should also be appointed for additional years after the end of their term of office. Make sure it is clear to externals that assessment guidelines (and anything else they may query) have been imposed by the University central administration and are out of the control of the Department. Regularly change Departmental Exam Officers so that no one individual can acquire enough knowledge to ensure the assessment period is conducted according to the full set of regulations.

13.      If an external examiner attempts to question the objectivity and validity of an examination and assessment process, the Registry should reply by stating that there was not a critical mass of external examiners across disciplines raising this particular issue to require a change in University policy. University Registries should ensure that the full range of external examiners' reports are not compiled in any single place where they are freely available for general scrutiny.

14.       Finally, make sure that a directive comes down from the University Registry to all examiners to “mark generously and use the full extremes of the marking scales – especially the first class band of marks”. This, of course, is imperative if the institution is to achieve a good grading in forthcoming National Student Surveys!
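
For point 4 above, the correction that so conveniently gets forgotten is the standard formula-scoring adjustment for guessing. A minimal sketch with hypothetical numbers (Python):

```python
# Standard formula-scoring correction for guessing on MCQs:
#     corrected = right - wrong / (options - 1)
def corrected_mcq_score(n_right: int, n_wrong: int, n_options: int) -> float:
    """Subtract the marks a blind guesser would be expected to pick up."""
    return n_right - n_wrong / (n_options - 1)

# Hypothetical example: 100 four-option questions, 70 right, 30 wrong
raw = 70
corrected = corrected_mcq_score(n_right=70, n_wrong=30, n_options=4)
print(f"Raw score: {raw}/100, corrected for chance: {corrected:.0f}/100")
```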

Please feel free to suggest more practical ideas by which Universities can adjust their assessment and classification processes to generate increasing percentages of first class students. Don't forget, well qualified graduates are our future - we need more of them!

Should You Publish Your Undergraduate Students' Projects?

3/17/2013

 
Most academics and researchers now rely on their undergraduate students’ final year projects as an important research resource. These projects provide opportunities to test out new procedures, methodologies and theories at relatively low cost to the researcher. Nevertheless, no matter how closely you supervise this research, there is still a nagging doubt that you have delegated important research to a relatively inexperienced individual. How do you decide whether the research they have delivered is worthy of writing up and publishing? Below is a flow-chart that allows the inexperienced junior lecturer to make some decisions about publishing an undergraduate project[1].
[Flow-chart: deciding whether to publish an undergraduate project]
[1] This flow-chart is designed to ensure optimal career development for junior and mid-career academics and researchers.

Does a Menu Explain a Restaurant? - Clinical Constructs as Potential Category Mistakes

3/17/2013

 
First published 03/05/2012 at http://grahamdavey.blogspot.co.uk
Over the past 20 years or so, clinical psychology researchers have developed cognitive models of psychopathology that have used constructs that appear to capture the beliefs, attitudes and thought patterns associated with psychiatric symptoms. These constructs have been used in many ways: to understand and explain symptoms, and to develop new interventions that attempt to modify the psychological processes implied by the construct. Clinical psychology researchers have never been terribly good at articulating the exact theoretical nature of these constructs, but they are regularly portrayed as inferred states or processes derived most often from the clinical experiences of researchers or clinicians in their interactions with patients (Davey, 2003).[1] The purpose of these constructs is to help understand psychopathology symptoms, to provide a basis for developing interventions to treat the psychopathology, and – in the case of those who advocate cognitive explanations of psychopathology – to link thoughts, beliefs and cognitive processes to subsequent symptoms.

Hypothetical constructs have a long history in the study of psychology and human behaviour (MacCorquodale & Meehl, 1948; Cronbach & Meehl, 1955; Strauss & Smith, 2009), and their main purpose has been to help identify the theoretical mechanisms that underlie performance and behaviour (Whitely, 1983). In clinical psychology research, constructs have played an important part in the development of models of anxious psychopathology – especially in the years since cognitive approaches to understanding anxiety have become prevalent. Clinical constructs are often developed from the researcher’s own clinical experiences, and they represent hypothetical structures that usually attempt to summarize important aspects of the patient experience and integrate this with one or more theoretically important processes that the researcher believes underlie the symptoms. In the past 20-30 years many theoretically influential clinical constructs have been developed during research on the aetiology and maintenance of anxiety disorders. Some of the more influential of these include inflated responsibility (Salkovskis, 1985), intolerance of uncertainty (Dugas, Gagnon, Ladouceur & Freeston, 1998), clinical perfectionism (Shafran, Cooper & Fairburn, 2002), and thought-action fusion (Shafran & Rachman, 2004), to name but a few. There is no doubt that clinical constructs have been influential in the development of theories of anxiety-based psychopathology, and these constructs have a prima facie clinical relevance and respectability by emerging from clinical experience, idiographic assessment, illustrative case histories, exploratory qualitative methods, or content analysis of patient self-report statements (e.g. Frost, Steketee, Amir et al., 1997; Freeston, Rheaume, Letarte, Dugas & Ladouceur, 1994).

At this point it is important to understand the role that clinical psychology researchers see for the clinical constructs they develop. Without a doubt, in the majority of cases talk of a ‘causal’ or ‘explanatory’ role in the elicitation and maintenance of symptoms creeps into the discussion. For example, Koerner & Dugas (2006) note that intolerance of uncertainty  “is thought to lead to worry directly” (2006, p201); Salkovskis, Wroe, Gledhill, Morrison et al. (2000) write that “the occurrence and/or content of intrusions (thoughts, images, impulses and/or doubts) are interpreted (appraised) as indicating that the person may be responsible for harm to themselves or others. This leads both to adverse mood (anxiety and depression) and the decision and motivation to engage in neutralising behaviours (which can include a range of behaviours such as compulsive checking, washing and covert ritualising)” (2000, p348; my italics); Shafran, Thordarson & Rachman (1996) write that “increased endorsement of dysfunctional beliefs, particularly TAF [thought-action fusion] is likely to exacerbate low self-esteem, depression, anxiety, and perceived responsibility for the event.” (1996, p379). The implication of causation of construct on symptoms is further alluded to in the box-and-arrow schematic models of emotion-based disorders that have become associated with research on some of these clinical constructs (Davey, 2003). There is no doubt that such constructs help us to conceptualize the psychological processes and states involved in a specific psychopathology, but is there any basis for assuming that their role is a causal one?

In order to elevate these hypothetical constructs to the level of empirically verifiable and usable entities, the constructs have to become measurable and, in many cases, manipulable – especially if they are to prove useful in clinical interventions. This process usually proceeds with the researcher describing the defining features of the construct and then developing and validating an instrument to measure it. Once the construct’s main features have been defined and a measurement instrument developed, the construct is experimentally manipulable and objectively measurable according to standard empirical and scientific tenets. Subsequent controlled manipulation of the construct may result in observable changes in symptoms, leading us to conclude that the construct plays a direct or indirect causal role in determining the appearance or strength of the symptoms. These manipulations may be in the form of potential therapeutic interventions or in the form of a controlled experimental manipulation (e.g. the effect of manipulating inflated responsibility or intolerance of uncertainty on compulsive behaviour). At this point, the construct has become a recognizable explanatory feature of the psychopathology, supported by empirical evidence in the form of its measurable relationship with symptoms (through correlational and regression analyses) and demonstrable effects on symptoms (through experimental manipulation).

The process described above appears to be an admirable attempt by clinical researchers to objectify their clinical experiences and subject them to rigorous, scientific analysis. At the end of this process we have constructs that are measurable and manipulable and can be empirically tested in their relationship with psychopathology symptoms. However, we need to be aware that clinical constructs are not directly observable and need to be inferred from the behaviour and responses of our patients and experimental participants. Inferential techniques, by their very nature, rely on observable behaviour to tell us something about the existence and behaviour of the unobservable psychological mechanisms that underlie performance (Whitely, 1983; Strauss & Smith, 2009). What is important about these inferential processes is that we cannot use the same behavioural anchors to verify the construct and then use them as outcome measures in experiments/interventions to determine whether the construct has an explanatory role or causal effect.

This logical inconsistency appears to be what happens in the research history of many clinical constructs. The confounding factor is that the construct is verified on the basis of patient reports about their psychopathology experiences and their symptoms or on researchers’ assumptions about these experiences (e.g. Frost, Steketee, Amir et al., 1997; Shafran, Thordarson & Rachman, 1996; Chambless, Caputo, Bright & Gallagher, 1984; Dunmore, Clark & Ehlers, 1999). When unpacked, many validated measures of clinical constructs resemble a list of questions about symptoms. It should then come as no surprise that (1) measures of the construct are significantly correlated with measures of symptoms, and (2) manipulating the construct causes concomitant predictable changes in symptoms. This raises serious doubts about concluding that the construct or the psychological states defined by the construct cause the symptoms or are even an explanation of the symptoms. To be fair, there are good arguments for saying that clinical constructs have helped to develop effective interventions for anxiety disorders. But it’s impossible to say that they are effective because they address the ‘causes’ of symptoms rather than the symptoms themselves. If the same behaviours (symptoms) are used to both verify the construct and to explore the construct’s explanatory role in the psychopathology then construct and symptoms are essentially the same thing. Logically, many clinical constructs do not exist other than being extrapolated from the symptoms that they are developed to explain. This relationship between clinical constructs and the behaviours they are developed to explain is reminiscent of what Ryle (1949) called a category mistake. Ryle wrote that:

“..when we describe people as exercising qualities of mind, we are not referring to occult episodes of which their overt acts and utterances are effects; we are referring to those overt acts and utterances themselves” (1949, p26).

Given that very many clinical constructs are defined in ways that represent mental states of which symptoms are deemed to be their effects, we must seriously consider that the clinical construct approach to explaining psychopathology is also underpinned by a category mistake. In their discussion of constructs in clinical psychology research, Strauss & Smith (2009) distinguish between constructs developed as tools to measure and predict behaviour (constructs based on “nomothetic span”[2]), and those constructs that go beyond the data used to support them and postulate entities, processes or events that are not directly observed but which may underlie behaviour - known as “construct representation” (e.g. Whitely, 1983; Strauss & Smith, 2009; MacCorquodale & Meehl, 1948). It is arguable that the current approach to clinical constructs in clinical psychology research has generated a culture in which clinical constructs proliferate without being properly theoretically defined – especially in the sense that they might represent constructs based on nomothetic span or construct representation. It may well turn out that many of those clinical constructs that have been researched so avidly in the past 10-15 years are no more than basic redescriptions of the symptoms they are often thought to explain.
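
To make the circularity point concrete, here is a minimal simulation sketch (Python, entirely hypothetical data of my own devising): if a ‘construct’ scale is largely assembled from the same item content that measures the symptoms, the two scales will correlate strongly by construction, whatever the underlying causal story.

```python
# Minimal simulation of the circularity problem: a construct scale built
# largely from symptom content correlates with the symptom scale by design.
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Underlying symptom severity for 200 hypothetical participants
symptom_severity = rng.normal(size=n)

# Symptom measure: a noisy report of symptom severity
symptom_scale = symptom_severity + rng.normal(scale=0.5, size=n)

# 'Construct' measure: largely the same symptom content re-worded,
# plus a little unique variance of its own
construct_scale = 0.9 * symptom_severity + rng.normal(scale=0.5, size=n)

r = np.corrcoef(construct_scale, symptom_scale)[0, 1]
print(f"Construct-symptom correlation: r = {r:.2f}")
# The large r reflects shared item content, not evidence that the construct
# causes the symptoms.
```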

REFERENCES

Chambless DL, Caputo GC, Bright P & Gallagher R (1984) Assessment of fear of fear in agoraphobics – the body sensations questionnaire and the agoraphobic cognitions questionnaire. Journal of Consulting & Clinical Psychology, 52, 1090-1097.

Cronbach LJ & Meehl PE (1955) Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

Davey G.C.L. (2003) Doing clinical psychology research: What is interesting isn’t always useful. The Psychologist, 16, 412-416.

Dugas MJ, Gagnon F, Ladouceur R & Freeston MH (1998) Generalized anxiety disorder: A preliminary test of a conceptual model. Behaviour Research & Therapy, 36, 215-226.

Dunmore E, Clark DM & Ehlers A (1999) Cognitive factors involved in the onset and maintenance of posttraumatic stress disorder (PTSD) after physical or sexual assault. Behaviour Research & Therapy, 37, 809-829

Freeston, M. H., Rhéaume, J., Letarte, H., Dugas, M. J., & Ladouceur, R. (1994). Why do people worry? Personality and Individual Differences, 17, 791–802.

Frost R, Steketee G, Amir N, Bouvard M et al. (1997) Cognitive assessment of obsessive-compulsive disorder. Behaviour Research & Therapy, 35, 667-681.

Haslam, N. (1997). Evidence that male sexual orientation is a matter of degree. Journal of Personality and Social Psychology, 73, 862-870.

Koerner N & Dugas MJ (2006) A cognitive model of generalized anxiety disorder: The role of intolerance of uncertainty. In GCL Davey & A Wells (Eds) Worry & its psychological disorders. John Wiley.

MacCorquodale K & Meehl PE (1948) On a distinction between hypothetical constructs and intervening variables. Psychological Review, 55, 95-107.

Meehl, P. E. (1992). Factors and taxa, traits and types, differences of degree and differences in kind. Journal of Personality, 60, 117-174.

Meehl, P. E. (1995). Bootstraps taxometrics: Solving the classification problem in psychopathology. American Psychologist, 50, 266-275.

Ruscio J, Ruscio AM & Carney LM (2011) Performing taxometric analysis to distinguish categorical and dimensional variables. Journal of Experimental Psychopathology, in press.

Ryle G (1949) The Concept of Mind. Peregrine Books.

Salkovskis, P. M. (1985). Obsessional-compulsive problems: a cognitive-behavioural analysis. Behaviour Research and Therapy, 23, 571-583.

Salkovskis PM, Wroe AL, Gledhill A, Morrison N et al. (2000) Responsibility attitudes and interpretations are characteristic of obsessive compulsive disorder. Behaviour Research & Therapy, 38, 347-372.

Shafran R, Cooper Z & Fairburn CG (2002) Clinical perfectionism: A cognitive-behavioural analysis. Behaviour Research & Therapy, 40, 773-791.

Shafran R & Rachman S (2004) Thought-action fusion: A review. Journal of Behavior Therapy & Experimental Psychiatry, 35, 87-107.

Shafran R, Thordarson DS & Rachman S (1996) Thought-action fusion in obsessive compulsive disorder. Journal of Anxiety Disorders, 10, 379-391.

Strauss ME & Smith GT (2009) Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1-25.

Whitely SE (1983) Construct validity: Construct representation versus nomothetic span. Psychological Bulletin, 93, 179-197.

[1] Clinical constructs as latent categorical variables can also be inferred statistically (e.g. Meehl, 1995; Ruscio, Ruscio & Carney, 2011), but these too will usually have their origins in clinical practice or clinical theory and are used to advance the development of causal theories (e.g. Haslam, 1997; Meehl, 1992). This discussion will be limited to those clinical constructs developed to explain (rather than categorize) anxiety disorders.

[2] “Nomothetic span” refers to the pattern of significant relationships among measures of the same or different constructs (i.e. convergent and discriminant validity) (Whitely, 1983; Strauss & Smith, 2009). Thus, the descriptive validity of a construct is established by observing that it is related to measures it should be theoretically related to and not related to measures that it should not be theoretically related to.

Designing an Intro to Psych Textbook

3/17/2013

 
Originally published 12/04/2012 at http://grahamdavey.blogspot.co.uk

    “Teach your children well, their father's hell did slowly go by,
    And feed them on your dreams, the one they fix, the one you'll know by”.

I've been asked to scope out a proposal for a new UK/European based Intro to Psych textbook for undergraduate students.  So what should this book look like? Simply asking people what you should put into an Intro to Psych book has its problems. Here lies the vicious cycle that leads to a plethora of clone-like text books, most of which contain much of the same material, many of the same learning features (but using different buzzy names), all boasting much the same range of web resources, all dividing psychology into similar sub-sections and as a result all perpetuating the same "preordained" syllabus – the winner is the one with most pages and the biggest website!

My recent blog titled "Whatever happened to learning theory" led to some very interesting correspondences with Eric Charles (@EPCharles) about some of the things that were right and wrong with Introductory Psychology. Eric has posted a couple of blogs discussing what he believes is wrong with the way we currently teach Intro to Psych and also making some suggestions about what an Intro to Psych textbook should do (http://bit.ly/H60Vld and http://bit.ly/H6ZpBX) - I recommend you look at these in detail. But before I summarise Eric's points it is worth considering how Intro to Psych textbooks often get scoped in the first place.

I've already edited and contributed to one Intro to Psych text - "Complete Psychology" published by Hodder HE (http://bit.ly/HcD6hU).  The first edition was published in 2002, and it represented an exciting race to be the first UK full colour Intro to Psych text. The book (all 849 pages) was written in six months, and although there are many aspects of the book that I'm proud to be associated with, it was very traditional in its representation of psychology. It adhered strictly to the BPS curriculum and unashamedly portrayed this as its main virtue. It was great fun to write and to work with the other contributors at that time, it was also fun spending a summer conceiving of and actualising a range of learning and presentational features for the book. But time, and the greater resources of the other larger publishers, has overtaken this project.

The trap we now fall into is that Intro to Psych textbooks have a desperate need to be as inclusive as possible. We are all open ears to every psychology lecturer who says "you didn't include x" or "there wasn't enough of y" - so we bung it in to be as inclusive as we can and to say we cover more material and provide more resources than any other textbook. What is perplexing about asking Psychology lecturers what they want from an Intro to Psych book is that, in my experience, prior to the book being written they will say they want X, Y and Z, but once it's written and on the bookshelves they rarely use X, Y and Z. Web resources are a good example. Lecturers will say they want PowerPoint presentations, seminar guidelines, photos and videos, but there's very little evidence they use these resources very much once they've been generated. In fact, most lecturers (quite reasonably) prefer to use their own lecture support resources.

So in the production of an Intro to Psych textbook a lot of effort often goes into providing the range of topics and resources that lecturers 'say' they want, and much less goes into the overall 'concept' of the book, and as a consequence into providing a modern, integrated, challenging syllabus for students which satisfies the developing intellectual needs of psychology majors, genuinely reflects the development of psychological science, and also provides psychology minors with a suitable overview of the discipline.

To go back to Eric Charles, he makes the very valid point that Intro to Psych books often serve as the main “controllable exposure that most people will have to academic psychology”. He also points out that Intro to Psych books should (1) continually challenge students to approach psychological questions in new and unintuitive ways, rather than striving to make the subject matter fit easily into their preconceptions, but (2) the emphasis should be on findings that remain generally accepted over long periods – providing a basis for the scientific value of psychology and for future research, rather than blindly focussing on cutting edge recent research, and (3) Intro to Psych textbooks should try to expose students to the complexity of current debates rather than trying to get students to express their own opinions about current debate. Most importantly, Intro to Psych books fail to provide a vision of the field as a whole, and they fail to make it clear why the same course should talk about “neurons, eye-balls, brain waves, rats pressing levers, Piaget, introversion, compliance, and anti-social personality disorder”. In addition he suggests that Intro to Psych books should not include “trivial but attention getting findings, or now rejected findings”. For example, he (1) challenges anyone to tell him what critical insight into psychology was gained from the Stanford Prison Experiment, and (2) asks why Freud’s theories are treated in such great detail.

So what should a modern Intro to Psych syllabus look like and how should a modern Intro to Psych book portray it?

First, syllabuses designed and recommended by learned societies probably don’t help to definitively answer this question. I am a great believer in the benefits that learned societies can offer their discipline and associated professions – and this has been practically demonstrated by my commitment over the years to the British Psychological Society. However, learned societies tend to be rather loosely bound organizations that have evolved organizational structures based on fostering as many representative interests within the discipline as can be practically sustained (and all competing for a high profile and a piece of whatever cake is being offered). Promoting and representing the diversity of the discipline in this way is likely to lead to a recommended syllabus that is characterized by its breadth and diversity rather than its structure and the developmental dynamics of the subject matter. It is certainly important to have breadth in the syllabus, but this approach rarely provides conceptual structure for the discipline as a whole – usually just a categorical list of recommended topics, usually according to an historically pre-ordained formula.

Second, asking psychology lecturers what they want in either a syllabus or a textbook leads to much the same inclusive, but unstructured, outcome – and this is very much the process that publishers go through when they review proposals for a new text book. The review process largely tells the author what is missing and needs to be included rather than providing insight into overall structure.

Nevertheless, the contemporary pressures of satisfying fee-paying undergraduate students do lead psychology departments to think about how Intro to Psych might be structured and portrayed – if only (and rather shallowly) in a way that keeps its students happy (and rating the course highly in the National Student Survey). In particular, many students come to psychology with the aspiration to become applied psychologists. This has almost certainly led to departments including more applied psychology courses in their first year syllabus and even trying to teach some core psychology through applied psychology modules. Nothing wrong with this if it successfully teaches core knowledge and keeps the students happy (see http://bit.ly/zFaVrw).

So where do we go for an Intro to Psych syllabus that genuinely reflects the dynamic development of the discipline, provides an integrated structure and vision of the field, considers important theoretical, conceptual and methodological developments, and both challenges and satisfies students?

Here are some obvious and traditional approaches:

1.         The ‘shopping list’ approach – we can ask a cross-section of lecturers (and students) what they want to see in an Intro to Psych course, take the top 30 topics and commission a chapter on each.

2.         The ‘level of explanation’ approach – Commissioning sections on biological psychology, cognitive psychology, and behavioural approaches.

3.         The ‘core knowledge’ approach – a traditional one in which psychology is split into historically important core topics including cognitive psychology, biological psychology, social psychology, personality and individual differences, developmental psychology, and maybe abnormal psychology and conceptual and historical issues.

4.         The ‘lifespan approach’ – clumping sections of the book into describing and explaining the psychology of various life stages, including pre-natal, infancy, childhood and adolescence, adulthood, and old age.

5.         The ‘embedded features’ approach – Take a traditional approach to defining the core areas of psychology, but include a range of teaching and learning features in each chapter that convey visions of how the discipline is developing.

This list is by no means exhaustive, and I’d be grateful for your thoughts and suggestions about what an Intro to Psych textbook should be and should look like, and what it should (and perhaps should not) include. Whatever the outcome, it needs to be engaging and make both teaching and learning natural and easy processes. But most importantly for our discipline and how we teach future generations of students, it needs to convincingly reflect dynamic changes in the content and structure of psychology, and not just pander to the current market needs of the lowest common denominator.

New Research Council Regulations Governing Experimental Procedures in the Behavioural & Social Sciences

3/17/2013

 
Originally published 01/04/2012

By now most of you will be aware of the new regulations governing experimental procedures introduced by the UK research councils (and following on from similar changes already applied in Europe and the USA). For those of us conducting behavioural, social and cognitive neuroscience studies on human participants it will represent a major change in the way we conduct our experiments, treat our participants, collect our data, and develop our scientific models. The major changes have been introduced to ensure that behavioural and neuroscience research using human participants complies with a mixture of research council developments on the importance of social impact of funded research and the recent EU Court of Human Rights declarations on the rights and civil liberties of individuals as extended to human participants in experimental procedures.

The most obvious change is the introduction of regulations governing the nature and impact of distraction activities in psychological experiments. In an attempt to spread the social and economic impact of biological science research to activities that take place in the experimental procedure itself, experimenters will no longer have a free choice of distractor tasks (e.g. in memory experiments) or inter-task activities to present to their participants. Researchers will no longer be able to ask their participants to count backwards in threes to prevent rehearsal of learned material. Instead, participants must engage in an activity that represents a significant social or economic contribution. The ESRC website provides a number of examples of the socially and economically inclusive distractor tasks that can now be deployed, many of which are designed to directly benefit the institution in which the research is being conducted. These include asking participants to empty waste bins in faculty offices, mark first year lab reports, prepare sandwiches for senior management luncheon meetings, and chair student misconduct tribunals. Participants with specific vocational skills can be asked to use those skills during experimental distraction tasks, including fixing laboratory plumbing, vacuuming carpets, cooking lunch for university research employees/technicians (but not for postgraduate research students), etc. During inter-trial intervals participants educated to FE level should be urged to teach 50-min Level 1 and Level 2 undergraduate student seminars, and to write draft exam papers for finals resits. Given the dismay expressed by many researchers to these fundamental changes in research protocols, RCUK has expressed regret at not including behavioural and social science researchers in the consultation process for these changes, but confirms that discussions with Russell Group Vice-Chancellors proved to be very constructive and Vice-Chancellors were said to be unanimously supportive of the new changes.

However, the major change to research council approved experimental procedures results from recent changes to human rights legislation. No longer can participants be coerced to ‘respond as quickly as possible’ in reaction time and related studies nor can they be given a fixed time in which to recall previously learned material in memory-related experiments. According to the legislation all participants “…must be treated with equality and respect in such a way as to allow the individual to fully contemplate the various stimulus and response choice options available to them before executing a response – a response which in many cases may be final and irrevocable within the confines of the experimental procedure”. This, of course, will have major implications for many experimental procedures, including choice reaction-time studies, Implicit Association Tests, many lexical decision tasks, as well as response bias training procedures and homophone ambiguity tasks.

Of this latter group of changes, perhaps the one that will have the greatest impact on researchers is the abolition of the fixed recall period in memory tasks. In future all participants will be allowed as much time as they require to recall prior-learned material and word lists. Research council guidelines now specify that participants in such studies should be given the opportunity to recall experimental material “…over as extended a time period as is necessary and befits the status of the participant as a respected and valued member of society”. The minimum recall time now recommended by RCUK is one week, timed from the end of the learning phase of the study. These guidelines state that all participants must be given a stamped addressed envelope when leaving the laboratory so that they can jot down any material recalled in the week following the experiment and submit that material to the experimenter for proper inclusion in the study analysis. Similarly, participants can no longer be allocated to different experimental conditions on a random basis without prior consultation. All participants must be given an informed overview of each experimental condition and allowed a free choice of the condition in which they wish to participate. The participant also has the right to change this choice at any time after the study has begun, and will also have the option to sample each of the conditions before making a decision on which group to participate in. Researchers in individual institutions are encouraged to hold regular ‘fairs’ for participants that advertise and provide examples of the various experimental conditions in their studies and which will allow participants to make a fully informed choice of the experimental conditions in which they would like to participate. Placebo conditions must now be clearly labeled as such for the participant and cake provided for the participant at the end of a placebo procedure to compensate for the lack of a psychologically/biologically potent component in the experimental condition. Also, any procedures that involve deception must be approved by a locally-appointed panel of civil rights legal advisors – at least one of whom must be a fully qualified and experienced teacher of qualitative methods.

For your information, full details of these changes to the regulations governing experimental procedures in the behavioural and social sciences can be found at http://bit.ly/HeZGp7.

Whatever happened to Learning Theory?

3/17/2013

 
First published 23/03/2012 at http://grahamdavey.blogspot.co.uk
I’ve already blogged about B. F. Skinner, and – coincidentally – he has just celebrated his 108th birthday. But it led me to think about how learning theory in general seems to have drifted slowly out of our undergraduate psychology curricula, out of our animal and experimental psychology labs, and out of the list of high impact journals. I don’t mean just ‘behaviourism’, I mean learning theory and all that it embraces - from schedules of reinforcement and behaviour analysis, to associative learning and cognitive inferential models of conditioning – in both animals and humans.

In 2010, the BPS Curriculum for the Graduate Basis for Chartered Membership of the Society listed ‘learning’ as a topic under Cognitive Psychology (that would have jarred with Prof. Skinner!), and not under Biological Psychology. Interestingly, 10 years ago it was listed under both cognitive and biological psychology. In my own institution I know that learning theory has become a relatively minor aspect of Level 1 and Level 2 teaching. Until 2 years ago, I offered a final year elective called ‘Applications of Learning Theory’, but despite its applied, impact-related title the course usually recruited fewer than 10 students. I usually had to begin the first two lectures by covering the basics of associative learning. If these students had been taught anything about learning theory in Years 1 and 2 they had retained none of it. This state of affairs is quite depressing in an institution that twenty-five years ago had one of the leading animal learning labs in the world, inhabited by researchers such as Nick Mackintosh, Tony Dickinson, John Pearce, and Bob Boakes, to name but a few.

I haven’t done anything like a systematic survey of what different Psychology Departments teach in their undergraduate courses, but I suspect that learning theory no longer commands anything more than a couple of basic lectures at Level 1 or Level 2 in many departments. To be fair, most contemporary Introduction to Psychology texts usually contain a chapter devoted to learning (e.g. 1,2), but this is usually descriptive and confined to the difference between instrumental and classical conditioning, coverage of schedules of reinforcement (if you’re lucky), and a sizable focus on why learning theory has applied importance.

So why the apparent decline in the pedagogic importance of learning theory? I suspect the reasons are multiple. Most obviously, learning theory got overtaken by cognitive psychology in the 1980s and 1990s. There is an irony to this in the sense that during the 1980s, the study of associative learning had begun to develop some of the most innovative inferential methods to study what were effectively ‘cognitive’ aspects of animal learning (3, 4) and had also given rise to influential computational models of associative learning such as the Rescorla-Wagner and Pearce-Hall models (5,6). These techniques gave us access to what was actually being learnt by animals in simple (and sometimes complex) learning tasks, and began to provide a map of the cognitive mechanisms that underlay associative learning. This should have provided a solid basis from which animal learning theory could have developed into more universal models of animal consciousness and experience – but unfortunately this doesn’t appear to have happened on the scale that we might have expected. I’m still not sure why this didn’t happen, because at the time this was my vision for the future of animal learning, and one I imparted enthusiastically to my students. I think that the study of associative learning got rather bogged down in struggles over the minutiae of learning mechanisms, and as a result lost a lot of its charisma and appeal for the unattached cognitive researcher and the inquisitive undergraduate student. It certainly lost much of its significance for applied psychologists, which was one of the attractions of the radical behaviourist approach to animal learning.
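
For readers who have never met these models, here is a minimal sketch of the Rescorla-Wagner learning rule – a toy illustration only, with arbitrary stimulus names and parameter values rather than anything taken from the papers cited above. It captures the model’s central idea that learning is driven by surprise, and it reproduces ‘blocking’, where a cue that already predicts the outcome prevents a newly added cue from acquiring much associative strength:

    # Toy sketch of the Rescorla-Wagner rule; stimulus names and parameter
    # values are arbitrary choices made purely for illustration.
    def rescorla_wagner_trial(V, present, us_present, alpha=0.3, beta=1.0, lam=1.0):
        # Update the associative strengths V of the stimuli present on one trial
        total_v = sum(V[s] for s in present)                # combined prediction of the US
        prediction_error = (lam if us_present else 0.0) - total_v
        for s in present:
            V[s] += alpha * beta * prediction_error         # learning is proportional to surprise
        return V

    # Blocking: pretraining A->US means B learns very little on later AB->US trials
    V = {"A": 0.0, "B": 0.0}
    for _ in range(20):
        rescorla_wagner_trial(V, present=["A"], us_present=True)
    for _ in range(20):
        rescorla_wagner_trial(V, present=["A", "B"], us_present=True)
    print(V)   # A ends up near 1.0; B stays close to 0 because the US was already predicted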

A second factor in the decline of learning theory was almost certainly the decline in the number of animal labs in psychology departments – brought about in the 1980s and 1990s primarily by a vocal and active animal lib movement. This was certainly one factor that persuaded me to move from doing animal learning studies to human learning studies. I remember getting back into work one Monday morning to find leaflets pushed through the front door of the Psychology building by animal lib activists. These leaflets highlighted the cruel research carried out by Dr. Davey in Psychology who tortured rats by putting paper clips on their tails (7). At the time this was a standard technique used to generate stress in rats to investigate the effects of stress on feeding and drinking, but it did lead me to think hard about whether this research was important and whether there were other forms of research I should be moving towards. It was campaigns like this that led many Universities to either centralize their animal experiment facilities or to abandon them altogether. Either way, it made animal research more difficult to conduct and certainly more difficult for the interested undergraduate and postgraduate student to access.

In my own case, allied to the growing practical difficulties associated with doing animal learning research was the growing intellectual solitude of sharing a research topic with an ever-decreasing number of researchers. In the 1980s I was researching performance models of Pavlovian conditioning – basically trying to define the mechanisms by which Pavlovian associations get translated into behaviour – particularly in unrestrained animals. Eventually it became clear to me that I shared this passion with maybe only two or three other people worldwide. Neither was it going to set the world on fire (a bit like my doctoral research on the determinants of the fixed-interval post-reinforcement pause in rats!). To cut a long story short, I decided to abandon animal research and invest my knowledge of learning theory in more applied areas that held a genuine interest for the lay person. Perhaps surprisingly, it was Hans Eysenck who encouraged me to apply my knowledge of learning theory to psychopathology. During the 1980s, conditioning theory was getting a particularly bad press in the clinical psychology literature, and after I chaired an invited keynote by Hans at a BPS London Conference, he insisted I use my knowledge of conditioning to demonstrate that experimental approaches to psychopathology still had some legs (but only after he’d told me how brilliant his latest book was). This did lead to a couple of papers in which I applied my knowledge of inferential animal learning techniques to conditioning models of anxiety disorders (8,9). But for me, these were the first steps away from learning theory and into a whole new world of research which extended beyond one other researcher in Indiana, and some futile attempts to attach paper clips to the tails of hamsters (have you ever tried doing that? If not – don’t!)(7).

I was recently pleasantly surprised to discover that both the Journal of the Experimental Analysis of Behavior and the Journal of Applied Behavior Analysis are still going strong as bastions of behaviour analysis research. Sadly, Animal Learning & Behavior has now become Learning & Behavior, and the Quarterly Journal of Experimental Psychology B (the comparative half traditionally devoted largely to animal learning) has been subsumed into a single cognitive psychology QJEP. But I was even more pleased to find, when I put ‘Experimental Analysis of Behaviour Group’ into Google, that the group is still alive and kicking (http://eabg.bangor.ac.uk). This group, affectionately known as ‘E-BAG’, was the conference hub of UK learning theory during the 1970s and 1980s, and it provided a venue for regular table football games between graduate students from Bangor, Oxford, Cambridge, Sussex and Manchester, amongst others.

I’ve known for many years that I still have a book in me called ‘Applications of Learning Theory’ – but it will never get written, because there is no longer a market for it. That’s a shame, because learning theory still has a lot to offer. It offers a good grounding in analytical thinking for undergraduate students, it provides a range of imaginative inferential techniques for studying animal cognition, it provides a basic theoretical model for response learning across many areas of psychology, it provides a philosophy of explanation for understanding behaviour, and it provides a technology of behaviour change – not many topics in psychology can claim that range of benefits.

(1)      Davey G C L (2008) Complete Psychology. Hodder HE.
(2)      Hewstone M, Fincham F D & Foster J (2005) Psychology. BPS Blackwell.
(3)      Rescorla R A (1980) Pavlovian second-order conditioning. Hillsdale, NJ: Erlbaum.
(4)      Dickinson A (1980) Contemporary animal learning theory. Cambridge: Cambridge University Press.
(5)      Rescorla R A & Wagner A R (1972) A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A H Black & W F Prokasy (Eds) Classical conditioning II: Current research and theory. New York: Appleton-Century-Crofts.
(6)      Pearce J M & Hall G (1980) A model for Pavlovian learning: Variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychological Review, 87, 532-552.
(7)      Meadows P, Phillips J H & Davey G C L (1988) Tail-pinch elicited eating in rats (Rattus norvegicus) and hamsters (Mesocricetus auratus). Physiology & Behavior, 43, 429-433.
(8)      Davey G C L (1992) Classical conditioning and the acquisition of human fears and phobias: Review and synthesis of the literature. Advances in Behaviour Research & Therapy, 14, 29-66.
(9)      Davey G C L (1989) UCS revaluation and conditioning models of acquired fears. Behaviour Research & Therapy, 27, 521-528.

When measuring science distorts it: 8 things that muddy the waters of scientific integrity and progress

3/17/2013

 
If you are a scientist of almost any persuasion, one of the things you probably cherish most dearly is the objectivity and integrity of the scientific process – a process that leads us to discover and communicate what we loosely like to call ‘the truth’ about our understanding of things. But maybe the process is not as honed as it should be, and maybe it’s not as efficient as it could be. In many cases it is the desire to quantify and evaluate research output for purposes other than understanding scientific progress that is the culprit, distorting the scientific process to the point where measurement becomes an obstacle to good and efficient science. Below are 8 factors that distort the scientific process – many of which have been brought about by the desire to quantify and evaluate research. Scientific communities have discussed many of these factors previously on various social networks and in scientific blogs, but I thought it would be useful to bring some of them together.

1.         Does measurement of researchers’ scientific productivity harm science? Our current measures of scientific productivity are crude, but they are now so universally adopted that they matter for all aspects of the researcher’s career, including tenure (or unemployment), funding (or none), success (or failure), and research time (or teaching load) (Lawrence, 2008)[1]. Research productivity is measured by the number of publications, the number of citations, and the impact factors of the journals in which the work appears, and these metrics are then rewarded with money (either in the form of salaries or grants). Lawrence argues that if you need to publish “because you need a meal ticket, then you end up publishing when you are hungry – not when the research work is satisfactorily completed”. As a result, work is regularly submitted for publication when it is unfinished, when the ideas are not fully thought through, or when the data and arguments are incomplete. Publication – not the quality of the scientific knowledge reported – is paramount.

2.         But the need to publish in high-impact journals has another consequence. A journal’s impact factor correlates with the number of retractions it publishes rather than with the number of citations an individual paper in it will receive (http://bit.ly/AbFfpz)[2]. One implication of this is that the rush to publish in high-impact journals increases the pressure to perhaps “forget a control group/experiment, or leave out some data points that don’t make the story look so nice” – all behaviours that will decrease the reliability of the scientific reports being published (http://bit.ly/ArMha6).

3.         The careerism that is generated by our research quality and productivity measures not only fosters incomplete science at the point of publication, it can also give rise to exaggeration and outright fraud (http://bit.ly/AsIO8B). There are recent prominent examples of well-known and ‘respected’ researchers faking data on an almost industrial scale. One example of extended and intentional fraud is the Dutch social psychologist Diederik Stapel, whose retraction was published in the journal Science (http://bit.ly/yH28gm)[3]. In this and possibly other cases, the rewards of publication and citation outweighed the risks of being caught. Are such cases of fraudulent research isolated examples or the tip of the iceberg? They may well be the tip of a rather large iceberg. More than 1 in 10 British-based scientists or doctors report witnessing colleagues intentionally altering or fabricating data during their research (http://reut.rs/ADsX59), and a survey of US academic psychologists suggests that 1 in 10 has falsified research data (http://bit.ly/yxSL1A)[4]. If these findings can be extrapolated generally, then we might expect that 1 in 10 of the scientific articles we read contains, or is based on, doctored or even faked data.

4.         Journal impact ratings have another negative consequence for the scientific process. There is an increasing tendency for journal editors to reject submissions without review – not on the basis of methodological or theoretical rigour, but on the basis that the research lacks “novelty or general interest” (http://bit.ly/wvp9V8). This tends to reflect editors attempting to protect the impact rating of their journal by rejecting submissions that might be technically and methodologically sound but are unlikely to be cited very much. One particular type of research that falls foul of this process is replication. Replication is a cornerstone of the scientific method, yet failures to replicate appear to have a low priority for publication – even when the original study being replicated is controversial (http://bit.ly/AzyRXw). The fact that citation rate has become the gold standard for indicating the quality of a piece of research or the standing of a particular researcher misses the point that high citation rates can also result from controversial but unreplicable findings. This has led some scientists to advocate the use of an ‘r’ or ‘replicability’ index to supplement the basic citation index (http://bit.ly/xQuuEP).

5.         Whether a research finding is published and considered to be methodologically sound is usually assessed by the use of standard statistical criteria (e.g. formal statistical significance, typically a p-value of less than 0.05). But the probability that a research finding is true depends not just on the statistical power of the study and the level of statistical significance, but also on other factors to do with the context in which research on that topic is being undertaken. As John Ioannidis has pointed out, “…a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.” (Ioannidis, 2005)[5]. This leads to the conclusion that most research findings are false for most research designs and for most fields! (A worked numerical sketch of this argument appears after point 8 below.)

6.         In order to accommodate the inevitable growth in scientific publication, journals have increasingly taken to publishing research in shorter formats than the traditional scientific article. These short reports limit the length of an article, but the need for this type of article may well be driven by the academic researcher’s need to publish in order to maintain their career rather than by the publisher’s need to optimize limited publishing resources (e.g. pages in a printed journal edition). The advantage for researchers – and their need to publish and be cited – is that on a per-page basis shorter articles are cited more frequently than longer articles (Haslam, 2010)[6]. But short reports can lead to the propagation of ‘bad’ or ‘false’ science. For example, shorter, single-study articles can be poor models of science because longer, multiple-study articles often include confirmatory full or partial replications of the main findings (http://nyti.ms/wkzBpS). In addition, small studies are inherently unreliable and more likely to generate false positive results (Bertamini & Munafo, 2012)[7]. Many national research assessment exercises not only require that the quality of research be assessed in some way, they also specify a minimum quantity of outputs. Short reports – with all the disadvantages they may bring to scientific practice – will have a particular attraction for those researchers under pressure to produce quantity rather than quality.

7.         The desire to measure the applied “impact relevance” of research – especially in relation to research funding and national research assessment exercises – has inherent dangers for identifying and understanding high-quality research. For example, in the forthcoming UK Research Excellence Framework, lower-quality research for which there is good evidence of “impact” may be given a higher value than higher-quality outputs for which an “impact” case is less easy to make (http://bit.ly/y7cqPW). This shift towards the importance of research “impact” in defining research quality has the danger of encouraging researchers to pursue research relevant to short-term policy agendas rather than longer-term theoretical issues. The associated funding consequence is that research money will drift towards those organizations pursuing policy-relevant rather than theory-relevant research, with the former being inherently labile and dependent on changes in both governments and government policies.

8.         Finally, when discussing whether funding is allocated in a way appropriate to optimizing scientific progress, there is the issue of whether we fund researchers when they’re past their best. Do we neglect those researchers in their productive prime who could inject fresh zest and new ideas into the scientific research process? Research productivity peaks at age 44 (or an average of 17 years after a researcher’s first publication), but research funding peaks at age 53 – suggesting productivity declines even as funding increases (http://bit.ly/yQUFis). It’s true that these are average statistics, but it would be interesting to know whether there are inherent factors in the funding process that favour past reputation over current productivity.
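
Point 5 is worth working through numerically. The short sketch below is purely illustrative – the function name and the prior, power, and alpha values are my own assumptions rather than figures from Ioannidis’s paper – but it shows how quickly the probability that a ‘significant’ finding is true falls once underpowered studies chase unlikely hypotheses:

    # Illustrative sketch of the arithmetic behind Ioannidis's (2005) argument.
    # The function name and the prior/power/alpha values are assumptions chosen
    # purely for illustration.
    def positive_predictive_value(prior, power, alpha):
        # Probability that a statistically significant result reflects a true effect
        true_positives = power * prior            # true effects correctly detected
        false_positives = alpha * (1 - prior)     # null effects wrongly declared significant
        return true_positives / (true_positives + false_positives)

    # An exploratory field: 1 in 10 tested hypotheses true, power of only 0.35
    print(positive_predictive_value(prior=0.10, power=0.35, alpha=0.05))   # ~0.44

    # A well-powered confirmatory field: half the hypotheses true, power of 0.80
    print(positive_predictive_value(prior=0.50, power=0.80, alpha=0.05))   # ~0.94

Even with the conventional 5% significance level, a ‘significant’ result in the first scenario is more likely to be false than true – which is exactly Ioannidis’s point.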


[1] Lawrence P A (2008) Lost in publication: How measurement harms science. Ethics in Science & Environmental Politics, 8, 9-11.
[2] Fang F C & Casadevall A (2011) Retracted science and the retraction index. Infection & Immunity. doi: 10.1128/IAI.05661-11.
[3] Stapel D A & Lindenberg S (2011) Retraction. Science, 334, 1202.
[4] John L K, Loewenstein G & Prelec D (in press) Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science.
[5] Ioannidis J P A (2005) Why most published research findings are false. PLoS Medicine, doi: 10.1371/journal.pmed.0020124
[6] Haslam N (2010) Bite-size science: Relative impact of short article formats. Perspectives on Psychological Science, 5, 263-264.
[7] Bertamini M & Munafo M R (2012) Bite-size science and its undesired side effects. Perspectives on Psychological Science, 7, 67-71.

    Author

    Graham C. L. Davey, Ph.D. is Professor of Psychology at the University of Sussex, UK. His research interests extend across mental health problems generally, and anxiety and worry specifically. Professor Davey has published over 140 articles in scientific and professional journals and written or edited 16 books including Psychopathology; Clinical Psychology; Applied Psychology; Complete Psychology; Worrying & Psychological Disorders; and Phobias: A Handbook of Theory, Research & Treatment. He has served as President of the British Psychological Society, and is currently Editor-in-Chief of Journal of Experimental Psychopathology and Psychopathology Review. When not writing about psychology he watches football and eats curries.

    Archives

    September 2019
    May 2019
    August 2018
    July 2018
    June 2015
    April 2015
    November 2014
    March 2014
    December 2013
    July 2013
    June 2013
    April 2013
    March 2013

    Categories

    All
    Anxiety
    Books
    Clinical Psychology
    Journal Publishing
    Medicine
    Mental Health
    Phobias
    Psychology
    Research
    Research Councils

    RSS Feed

Proudly powered by Weebly