Papers from Sidcup
Graham Davey's website

The Perfect University

3/20/2013

 
First published 13/06/2012 at http://grahamdavey.blogspot.co.uk
Why doesn’t every student in Higher Education get awarded a first class degree? Is it because they’re not intelligent enough? Is it because they’re not taught effectively? Is it because a majority of students are plain lazy? Is it because they are too spoon-fed? Well no, it’s none of those. It’s because most Universities haven’t yet fine-tuned their assessment and classification systems in a way that will allow all students – regardless of ability and potential – to get a first class degree. A majority of Universities have adjusted their assessment and classification schemes to a point where only the most delinquent of students will attain anything less than an upper-second class degree, but there is still some fine-tuning required to turn out close to 100% first class students.

Here are some tips for those Universities and institutions still striving for this level of perfection. Individually, none of these factors is necessarily bad practice or illegal; indeed, many institutions introduce several of them as examples of innovative good practice. Put them all together in one scheme, however, and you create an assessment and classification system that can turn the most delinquent and uninspiring student into a first class success. Here are the basic elements of that system (a small illustrative sketch of how a few of them combine appears after the list):

1.         Always adopt a categorical marking scheme. Make sure that the first class band of marks covers 30% of awardable marks (e.g. 70-100%) whereas other classification bands cover only 10% (e.g. upper second class from 60-69%). Within the first class band of marks, make sure there are as few categorical marks available to be awarded as possible and that there is a giant leap in awardable marks between a low first and a good first. For example, make the following marks the only ones awardable in the first class band: 75%, 90% and 100%. Then make sure that the assessment guidelines for 90% are as similar as possible to 75%, but with an added factor that all first class scripts would normally possess (e.g. to be awarded 90% a piece of work must have all the characteristics of a piece of work worthy of 75%, but will show “evidence of breadth of reading”).

2.         Always make sure that each piece of work is double marked, and that any discrepancies between markers are rounded up (e.g. if one marker awards 75% and the second awards 95%, then award 95%).

3.         Allow all final year students a resit option on failed papers that is not capped at the basic pass mark. Indeed, also consider allowing final year students the opportunity to resubmit any piece of work where they are not satisfied with the original mark.

4.         Include MCQs as a highly weighted component of every course/module – at both second and final year. Ensure that these MCQs are taken from a limited bank of questions that is recycled every year. Conveniently forget to adjust the marks on these MCQs for the possibility of chance correct answers.

5.         Include as many assessments as possible where the student has the opportunity to score 100% (e.g. MCQs, assessments where there is an indisputable correct answer or answers, etc.).

6.         Have at least one course/module in the final year that is weighted to make up most of the marks for that year (e.g. a final year project/dissertation). Ensure that the credit weighting of this course is excessive (e.g. the equivalent to 4 other courses), but that the work required by the student is nowhere near equivalent to the work required of four courses. Make sure the students are aware that this is a course/module on which they should concentrate their efforts.

7.         Adopt a very liberal classification borderline “bumping up” scheme that ensures that as many students as possible below a borderline meet the criteria for being “bumped up” into the higher classification bracket, even if they haven’t achieved the required aggregate mark for that higher classification bracket. Make sure that this is a mechanistic “bumping up” process determined by an algorithm (don’t involve the external examiners in this process – they may question it!)

8.         Introduce changes to the assessment and classification processes every year. This will mean that students will usually be simultaneously graded by two schemes – the “old scheme” and the “new scheme”, and all candidates will be classified according to their ‘best’ outcome from either of the schemes.

9.         Encourage students to apply for concessions and submit mitigating evidence. Make this process as simple as possible and do not set deadlines for evidence to be submitted. In particular, allow mitigation to be submitted after the student has knowledge of their degree classification.

10.       Allow external examiners to adjust agreed marks. But only upwards, so as not to unnecessarily disadvantage any student.

11.       No need to make candidates' identities anonymous. Some good students may have an off day in the exams and names on scripts will allow the internal examiner to mark the candidate according to their ability rather than on an exam "off day". Poor students who perform above their expected ability in an exam can be identified and rewarded accordingly.

12.       External examiners need pacifying and domesticating. Make sure that they have comfortable hotels and are given expensive dinners. Always tell them that there have been IT problems in the Registry and a full summary of marks and assessment statistics is unavailable this year. Fabricate at least two admin staff illnesses which have meant that scripts and coursework could not be sent to the external for moderation. Compliant externals should also be appointed for additional years after the end of their term of office. Make sure it is clear to externals that assessment guidelines (and anything else they may query) have been imposed by the University central administration and are out of the control of the Department. Regularly change Departmental Exam Officers so that no one individual can acquire enough knowledge to ensure the assessment period is conducted according to the full set of regulations.

13.      If an external examiner attempts to question the objectivity and validity of an examination and assessment process, the Registry should reply by stating that not enough external examiners across disciplines have raised this particular issue to warrant a change in University policy. University Registries should ensure that the full range of external examiners' reports is not compiled in any single place where it is freely available for general scrutiny.

14.       Finally, make sure that a directive comes down from the University Registry to all examiners to “mark generously and use the full extremes of the marking scales – especially the first class band of marks”. This, of course, is imperative if the institution is to achieve a good grading in forthcoming National Student Surveys!
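For the algorithmically minded, here is a minimal sketch of how a few of these rules might fit together. It is purely illustrative and describes no real institution's regulations: the band boundaries, the "bump zone", the MCQ option count and the example marks are all hypothetical values chosen to mirror points 1, 2, 4 and 7 above.

```python
# Purely illustrative sketch of points 1, 2, 4 and 7 above.
# Every number (bands, awardable marks, borderlines, option counts) is hypothetical.

FIRST_CLASS_MARKS = (75, 90, 100)   # point 1: only three widely spaced first-class marks

def double_mark(mark_a: int, mark_b: int) -> int:
    """Point 2: resolve any discrepancy between markers by taking the higher mark."""
    return max(mark_a, mark_b)

def mcq_score(correct: int, total: int, options: int = 4, adjust_for_chance: bool = False) -> float:
    """Point 4: an MCQ percentage, with the correction for guessing (R - W/(k-1)) 'conveniently' off by default."""
    raw = correct - (total - correct) / (options - 1) if adjust_for_chance else correct
    return 100.0 * max(raw, 0) / total

def classify(aggregate: float, bump_zone: float = 2.0) -> str:
    """Point 7: mechanically 'bump up' anyone within bump_zone marks of a classification boundary."""
    if aggregate >= 70 - bump_zone:
        return "First"
    if aggregate >= 60 - bump_zone:
        return "Upper second"
    return "Lower second or below"

# Two markers disagree wildly and the student keeps the higher mark; guessing goes
# unpenalised on the MCQ paper; and an aggregate of 68.1 is bumped up to a First.
print(double_mark(75, 95), mcq_score(13, 40), classify(68.1))
```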

Please feel free to suggest more practical ideas by which Universities can adjust their assessment and classification processes to generate increasing percentages of first class students. Don't forget, well qualified graduates are our future - we need more of them!

Should You Publish Your Undergraduate Students' Projects?

3/17/2013

 
Most academics and researchers now rely on their undergraduate students’ final year projects as an important research resource. These projects provide opportunities to test out new procedures, methodologies and theories at relatively low cost to the researcher. Nevertheless, no matter how closely you supervise this research, there is still a nagging doubt that you have delegated important research to a relatively inexperienced individual. How do you decide whether the research they have delivered is worthy of writing up and publishing? Below is a flow-chart that allows the inexperienced junior lecturer to make some decisions about publishing an undergraduate project[1].
[Flow-chart image]
[1] This flow-chart is designed to ensure optimal career development for junior and mid-career academics and researchers.

Does a Menu Explain a Restaurant? - Clinical Constructs as Potential Category Mistakes

3/17/2013

 
First published 03/05/2012 at http://grahamdavey.blogspot.co.uk
Over the past 20 years or so, clinical psychology researchers have developed cognitive models of psychopathology built around constructs that appear to capture the beliefs, attitudes and thought patterns associated with psychiatric symptoms. These constructs have been used in many ways: to understand and explain symptoms, and to develop new interventions that attempt to modify the psychological processes implied by the construct. Clinical psychology researchers have never been terribly good at articulating the exact theoretical nature of these constructs, but they are regularly portrayed as inferred states or processes derived most often from the clinical experiences of researchers or clinicians in their interactions with patients (Davey, 2003).[1] The purpose of these constructs is to help understand psychopathology symptoms, to provide a basis for developing interventions to treat the psychopathology, and – in the case of those who advocate cognitive explanations of psychopathology – to link thoughts, beliefs and cognitive processes to subsequent symptoms.

Hypothetical constructs have a long history in the study of psychology and human behaviour (MacCorquodale & Meehl, 1948; Cronbach & Meehl, 1955; Strauss & Smith, 2009), and their main purpose has been to help identify the theoretical mechanisms that underlie performance and behaviour (Whitely, 1983). In clinical psychology research, constructs have played an important part in the development of models of anxious psychopathology – especially in the years since cognitive approaches to understanding anxiety have become prevalent. Clinical constructs are often developed from the researcher’s own clinical experiences, and they represent hypothetical structures that usually attempt to summarize important aspects of the patient experience and integrate this with one or more theoretically important processes that the researcher believes underlie the symptoms. In the past 20-30 years many theoretically influential clinical constructs have been developed during research on the aetiology and maintenance of anxiety disorders. Some of the more influential of these include inflated responsibility (Salkovskis, 1985), intolerance of uncertainty (Dugas, Gagnon, Ladouceur & Freeston, 1998), clinical perfectionism (Shafran, Cooper & Fairburn, 2002), and thought-action fusion (Shafran & Rachman, 2004), to name but a few. There is no doubt that clinical constructs have been influential in the development of theories of anxiety-based psychopathology, and these constructs have a prima facie clinical relevance and respectability by emerging from clinical experience, idiographic assessment, illustrative case histories, exploratory qualitative methods, or content analysis of patient self-report statements (e.g. Frost, Steketee, Amir et al., 1997; Freeston, Rheaume, Letarte, Dugas & Ladouceur, 1994).

At this point it is important to understand the role that clinical psychology researchers see for the clinical constructs they develop. Without a doubt, in the majority of cases talk of a ‘causal’ or ‘explanatory’ role in the elicitation and maintenance of symptoms creeps into the discussion. For example, Koerner & Dugas (2006) note that intolerance of uncertainty “is thought to lead to worry directly” (2006, p201); Salkovskis, Wroe, Gledhill, Morrison et al. (2000) write that “the occurrence and/or content of intrusions (thoughts, images, impulses and/or doubts) are interpreted (appraised) as indicating that the person may be responsible for harm to themselves or others. This leads both to adverse mood (anxiety and depression) and the decision and motivation to engage in neutralising behaviours (which can include a range of behaviours such as compulsive checking, washing and covert ritualising)” (2000, p348; my italics); Shafran, Thordarson & Rachman (1996) write that “increased endorsement of dysfunctional beliefs, particularly TAF [thought-action fusion] is likely to exacerbate low self-esteem, depression, anxiety, and perceived responsibility for the event.” (1996, p379). The implication that the construct causes the symptoms is further alluded to in the box-and-arrow schematic models of emotion-based disorders that have become associated with research on some of these clinical constructs (Davey, 2003). There is no doubt that such constructs help us to conceptualize the psychological processes and states involved in a specific psychopathology, but is there any basis for assuming that their role is a causal one?

In order to elevate these hypothetical constructs to the level of empirically verifiable and usable entities, the constructs have to become measurable and, in many cases, manipulable – especially if they are to prove useful in clinical interventions. This process usually begins with the researcher describing the defining features of the construct; once that set of defining features has been established, measurement instruments are developed and validated. Having defined the construct’s main features and developed a measurement instrument, the construct is now experimentally manipulable and objectively measurable according to standard empirical and scientific tenets. Subsequent controlled manipulation of the construct may result in observable changes in symptoms, leading us to conclude that the construct plays a direct or indirect causal role in determining the appearance or strength of the symptoms. These manipulations may take the form of potential therapeutic interventions or of a controlled experimental manipulation (e.g. the effect of manipulating inflated responsibility or intolerance of uncertainty on compulsive behaviour). At this point, the construct has become a recognizable explanatory feature of the psychopathology, supported by empirical evidence in the form of its measurable relationship with symptoms (through correlational and regression analyses) and demonstrable effects on symptoms (through experimental manipulation).

The process described above appears to be an admirable attempt by clinical researchers to objectify their clinical experiences and subject them to rigorous, scientific analysis. At the end of this process we have constructs that are measurable and manipulable and can be empirically tested in their relationship with psychopathology symptoms. However, we need to be aware that clinical constructs are not directly observable and need to be inferred from the behaviour and responses of our patients and experimental participants. Inferential techniques, by their very nature, rely on observable behaviour to tell us something about the existence and behaviour of the unobservable psychological mechanisms that underlie performance (Whitely, 1983; Strauss & Smith, 2009). What is important about these inferential processes is that we cannot use the same behavioural anchors to verify the construct and then use them as outcome measures in experiments/interventions to determine whether the construct has an explanatory role or causal effect.
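To make the problem concrete, here is a small simulation of my own (not drawn from any of the studies cited; Python with NumPy assumed, and all item counts and noise levels arbitrary). It shows that a ‘construct’ questionnaire whose items are, in effect, re-descriptions of the symptoms will correlate very strongly with a symptom questionnaire even when no separate causal entity exists behind either of them.

```python
# Minimal sketch of the circularity argument: a 'construct' scale built from
# symptom-like items will correlate with a symptom scale by construction.
# All item counts and noise levels are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_participants = 500

# Simulate raw symptom severity, with no hidden 'construct' behind it at all.
symptom_severity = rng.normal(size=n_participants)

# A symptom questionnaire: 10 noisy items, each a re-description of severity.
symptom_items = symptom_severity[:, None] + rng.normal(scale=0.5, size=(n_participants, 10))
symptom_score = symptom_items.mean(axis=1)

# A 'clinical construct' questionnaire whose items are also, in effect,
# questions about the same symptoms (plus a little extra noise).
construct_items = symptom_severity[:, None] + rng.normal(scale=0.7, size=(n_participants, 8))
construct_score = construct_items.mean(axis=1)

# The two scales correlate very highly -- not because the construct causes
# the symptoms, but because both are measures of the same behaviour.
r = np.corrcoef(construct_score, symptom_score)[0, 1]
print(f"construct-symptom correlation: {r:.2f}")   # roughly 0.95 in this setup
```

A high correlation of this kind, on its own, therefore tells us nothing about whether the construct explains the symptoms.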

This logical inconsistency appears to be what happens in the research history of many clinical constructs. The confounding factor is that the construct is verified on the basis of patient reports about their psychopathology experiences and their symptoms or on researchers’ assumptions about these experiences (e.g. Frost, Steketee, Amir et al., 1997; Shafran, Thordarson & Rachman, 1996; Chambless, Caputo, Bright & Gallagher, 1984; Dunmore, Clark & Ehlers, 1999). When unpacked, many validated measures of clinical constructs resemble a list of questions about symptoms. It should then come as no surprise that (1) measures of the construct are significantly correlated with measures of symptoms, and (2) manipulating the construct causes concomitant predictable changes in symptoms. This raises serious doubts about concluding that the construct or the psychological states defined by the construct cause the symptoms or are even an explanation of the symptoms. To be fair, there are good arguments for saying that clinical constructs have helped to develop effective interventions for anxiety disorders. But it’s impossible to say that they are effective because they address the ‘causes’ of symptoms rather than the symptoms themselves. If the same behaviours (symptoms) are used to both verify the construct and to explore the construct’s explanatory role in the psychopathology then construct and symptoms are essentially the same thing. Logically, many clinical constructs do not exist other than being extrapolated from the symptoms that they are developed to explain. This relationship between clinical constructs and the behaviours they are developed to explain is reminiscent of what Ryle (1949) called a category mistake. Ryle wrote that:

“…when we describe people as exercising qualities of mind, we are not referring to occult episodes of which their overt acts and utterances are effects; we are referring to those overt acts and utterances themselves” (1949, p26).

Given that very many clinical constructs are defined in ways that represent mental states of which symptoms are deemed to be the effects, we must seriously consider that the clinical construct approach to explaining psychopathology is also underpinned by a category mistake. In their discussion of constructs in clinical psychology research, Strauss & Smith (2009) distinguish between constructs developed as tools to measure and predict behaviour (constructs based on “nomothetic span”[2]), and those constructs that go beyond the data used to support them and postulate entities, processes or events that are not directly observed but which may underlie behaviour – known as “construct representation” (e.g. Whitely, 1983; Strauss & Smith, 2009; MacCorquodale & Meehl, 1948). It is arguable that the current approach to clinical constructs in clinical psychology research has generated a culture in which clinical constructs proliferate without being properly theoretically defined – especially in the sense of whether they are constructs based on nomothetic span or on construct representation. It may well turn out that many of those clinical constructs that have been researched so avidly in the past 10-15 years are no more than basic redescriptions of the symptoms they are often thought to explain.

REFERENCES

Chambless DL, Caputo GC, Bright P & Gallagher R (1984) Assessment of fear of fear in agoraphobics – the body sensations questionnaire and the agoraphobic cognitions questionnaire. Journal of Consulting & Clinical Psychology, 52, 1090-1097.
Cronbach LJ & Meehl PE (1955) Construct validity in psychological tests. Psychological Bulletin, 52, 281-302.

Davey G.C.L. (2003) Doing clinical psychology research: What is interesting isn’t always useful. The Psychologist, 16, 412-416.

Dugas MJ, Gagnon F, Ladouceur R & Freeston MH (1998) Generalized anxiety disorder: A preliminary test of a conceptual model. Behaviour Research & Therapy, 36, 215-226.

Dunmore E, Clark DM & Ehlers A (1999) Cognitive factors involved in the onset and maintenance of posttraumatic stress disorder (PTSD) after physical or sexual assault. Behaviour Research & Therapy, 37, 809-829

Freeston, M. H., Rhéaume, J., Letarte, H., Dugas, M. J., & Ladouceur, R. (1994). Why do people worry? Personality and Individual Differences, 17, 791–802.

Frost R, Steketee G, Amir N, Bouvard M et al. (1997) Cognitive assessment of obsessive-compulsive disorder. Behaviour Research & Therapy, 35, 667-681.

Haslam, N. (1997). Evidence that male sexual orientation is a matter of degree. Journal of Personality and Social Psychology, 73, 862-870.

Koerner N & Dugas MJ (2006) A cognitive model of generalized anxiety disorder: The role of intolerance of uncertainty. In GCL Davey & A Wells (Eds) Worry & its psychological disorders. John Wiley.

MacCorquodale K & Meehl PE (1948) On a distinction between hypothetical constructs and intervening variables. Psychological Review, 55, 95-107.

Meehl, P. E. (1992). Factors and taxa, traits and types, differences of degree and differences in kind. Journal of Personality, 60, 117-174.

Meehl, P. E. (1995). Bootstraps taxometrics: Solving the classification problem in psychopathology. American Psychologist, 50, 266-275.

Ruscio J, Ruscio AM & Carney LM (2011) Performing taxometric analysis to distinguish categorical and dimensional variables. Journal of Experimental Psychopathology, in press.

Ryle G (1949) The Concept of Mind. Peregrine Books.

Salkovskis, P. M. (1985). Obsessional-compulsive problems: a cognitive-behavioural analysis. Behaviour Research and Therapy, 23, 571-583.

Salkovskis PM, Wroe AL, Gledhill A, Morrison N et al. (2000) Responsibility attitudes and interpretations are characteristic of obsessive compulsive disorder. Behaviour Research & Therapy, 38, 347-372.

Shafran R, Cooper Z & Fairburn CG (2002) Clinical perfectionism: A cognitive-behavioural analysis. Behaviour Research & Therapy, 40, 773-791.

Shafran R & Rachman S (2004) Thought-action fusion: A review. Journal of Behavior Therapy & Experimental Psychiatry, 35, 87-107.

Shafran R, Thordarson DS & Rachman S (1996) Thought-action fusion in obsessive compulsive disorder. Journal of Anxiety Disorders, 10, 379-391.

Strauss ME & Smith GT (2009) Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1-25.

Whitely SE (1983) Construct validity: Construct representation versus nomothetic span. Psychological Bulletin, 93, 179-197.

[1] Clinical constructs as latent categorical variables can also be inferred statistically (e.g. Meehl, 1995; Ruscio, Ruscio & Carney, 2011), but these too will usually have their origins in clinical practice or clinical theory and are used to advance the development of causal theories (e.g. Haslam, 1997; Meehl, 1992). This discussion will be limited to those clinical constructs developed to explain (rather than categorize) anxiety disorders.

[2] “Nomothetic span” refers to the pattern of significant relationships among measures of the same or different constructs (i.e. convergent and discriminant validity) (Whitely, 1983; Strauss & Smith, 2009). Thus, the descriptive validity of a construct is established by observing that it is related to measures it should be theoretically related to and not related to measures that it should not be theoretically related to.


Designing an Intro to Psych Textbook

3/17/2013

 
Originally published 12/04/2012 at http://grahamdavey.blogspot.co.uk

          “Teach your children well, their father's hell did slowly go by,
          And feed them on your dreams, the one they fix, the one you'll know by”.

I've been asked to scope out a proposal for a new UK/European based Intro to Psych textbook for undergraduate students. So what should this book look like? Simply asking people what you should put into an Intro to Psych book has its problems. Herein lies the vicious cycle that leads to a plethora of clone-like textbooks, most of which contain much of the same material, many of the same learning features (but using different buzzy names), all boasting much the same range of web resources, all dividing psychology into similar sub-sections, and as a result all perpetuating the same "preordained" syllabus – the winner is the one with the most pages and the biggest website!

My recent blog titled "Whatever happened to learning theory" led to some very interesting correspondences with Eric Charles (@EPCharles) about some of the things that were right and wrong with Introductory Psychology. Eric has posted a couple of blogs discussing what he believes is wrong with the way we currently teach Intro to Psych and also making some suggestions about what an Intro to Psych textbook should do (http://bit.ly/H60Vld and http://bit.ly/H6ZpBX) - I recommend you look at these in detail. But before I summarise Eric's points it is worth considering how Intro to Psych textbooks often get scoped in the first place.

I've already edited and contributed to one Intro to Psych text - "Complete Psychology" published by Hodder HE (http://bit.ly/HcD6hU).  The first edition was published in 2002, and it represented an exciting race to be the first UK full colour Intro to Psych text. The book (all 849 pages) was written in six months, and although there are many aspects of the book that I'm proud to be associated with, it was very traditional in its representation of psychology. It adhered strictly to the BPS curriculum and unashamedly portrayed this as its main virtue. It was great fun to write and to work with the other contributors at that time, and it was also fun spending a summer conceiving of and actualising a range of learning and presentational features for the book. But time, and the greater resources of the other larger publishers, have overtaken this project.

The trap we now fall into is that Intro to Psych textbooks have a desperate need to be as inclusive as possible. We are all open ears to every psychology lecturer who says "you didn't include x" or "there wasn't enough of y" - so we bung it in to be as inclusive as we can and to say we cover more material and provide more resources than any other textbook. What is perplexing about asking Psychology lecturers what they want from an Intro to Psych book is that, in my experience, prior to the book being written they will say they want X, Y and Z, but once it's written and on the bookshelves they rarely use X, Y and Z. Web resources are a good example. Lecturers will say they want PowerPoint presentations, seminar guidelines, photos and videos, but there's very little evidence they use these resources very much once they've been generated. In fact, most lecturers (quite reasonably) prefer to use their own lecture support resources.

So in the production of an Intro to Psych textbook a lot of effort often goes into providing the range of topics and resources that lecturers 'say' they want, and much less goes into the overall 'concept' of the book – and, as a consequence, into providing a modern, integrated, challenging syllabus for students which satisfies the developing intellectual needs of psychology majors, genuinely reflects the development of psychological science, and also provides psychology minors with a suitable overview of the discipline.

To go back to Eric Charles, he makes the very valid point that Intro to Psych books often serve as the main “controllable exposure that most people will have to academic psychology”. He also points out that Intro to Psych books should (1) continually challenge students to approach psychological questions in new and unintuitive ways, rather than striving to make the subject matter fit easily into their preconceptions; (2) emphasize findings that remain generally accepted over long periods – providing a basis for the scientific value of psychology and for future research – rather than blindly focussing on cutting-edge recent research; and (3) try to expose students to the complexity of current debates rather than trying to get students to express their own opinions about current debate. Most importantly, Intro to Psych books fail to provide a vision of the field as a whole, and they fail to make it clear why the same course should talk about “neurons, eye-balls, brain waves, rats pressing levers, Piaget, introversion, compliance, and anti-social personality disorder”. In addition he suggests that Intro to Psych books should not include “trivial but attention getting findings, or now rejected findings”. For example, he (1) challenges anyone to tell him what critical insight into psychology was gained from the Stanford Prison Experiment, and (2) asks why Freud’s theories are treated in such great detail.

So what should a modern Intro to Psych syllabus look like and how should a modern Intro to Psych book portray it?

First, syllabuses designed and recommended by learned societies probably don’t help to definitively answer this question. I am a great believer in the benefits that learned societies can offer their discipline and associated professions – and this has been practically demonstrated by my commitment over the years to the British Psychological Society. However, learned societies tend to be rather loosely bound organizations that have evolved organizational structures based on fostering as many representative interests within the discipline as can be practically sustained (and all competing for a high profile and a piece of whatever cake is being offered). Promoting and representing the diversity of the discipline in this way is likely to lead to a recommended syllabus that is characterized by its breadth and diversity rather than its structure and the developmental dynamics of the subject matter. It is certainly important to have breadth in the syllabus, but this approach rarely provides conceptual structure for the discipline as a whole – usually just a categorical list of recommended topics, usually according to an historically pre-ordained formula.

Second, asking psychology lecturers what they want in either a syllabus or a textbook leads to much the same inclusive, but unstructured, outcome – and this is very much the process that publishers go through when they review proposals for a new text book. The review process largely tells the author what is missing and needs to be included rather than providing insight into overall structure.

Nevertheless, the contemporary pressures of satisfying fee-paying undergraduate students do lead psychology departments to think about how Intro to Psych might be structured and portrayed – if only (and rather shallowly) in a way that keeps their students happy (and scoring highly on the National Student Survey). In particular, many students come to psychology with the aspiration to become applied psychologists. This has almost certainly led to departments including more applied psychology courses in their first year syllabus and even trying to teach some core psychology through applied psychology modules. Nothing wrong with this if it successfully teaches core knowledge and keeps the students happy (see http://bit.ly/zFaVrw).

So where do we go for an Intro to Psych syllabus that genuinely reflects the dynamic development of the discipline, provides an integrated structure and vision of the field, considers important theoretical, conceptual and methodological developments, and both challenges and satisfies students?

Here are some obvious and traditional approaches:

1.         The ‘shopping list’ approach – we can ask a cross-section of lecturers (and students) what they want to see in an Intro to Psych course, take the top 30 topics and commission a chapter on each.

2.         The ‘level of explanation’ approach – Commissioning sections on biological psychology, cognitive psychology, and behavioural approaches.

3.         The ‘core knowledge’ approach – a traditional one in which psychology is split into historically important core topics including cognitive psychology, biological psychology, social psychology, personality and individual differences, developmental psychology, and maybe abnormal psychology and conceptual and historical issues.

4.         The ‘lifespan approach’ – clumping sections of the book into describing and explaining the psychology of various life stages, including pre-natal, infancy, childhood and adolescence, adulthood, and old age.

5.         The ‘embedded features’ approach – Take a traditional approach to defining the core areas of psychology, but include a range of teaching and learning features in each chapter that convey visions of how the discipline is developing.

This list is by no means exhaustive, and I’d be grateful for your thoughts and suggestions about what an Intro to Psych textbook should be and should look like, and what it should (and perhaps should not) include. Whatever the outcome, it needs to be engaging and make both teaching and learning natural and easy processes. But most importantly for our discipline and how we teach future generations of students, it needs to convincingly reflect dynamic changes in the content and structure of psychology, and not just pander to the current market needs of the lowest common denominator.

New Research Council Regulations Governing Experimental Procedures in the Behavioural & Social Sciences

3/17/2013

 
Originally published 01/04/2012

By now most of you will be aware of the new regulations governing experimental procedures introduced by the UK research councils (following on from similar changes already applied in Europe and the USA). For those of us conducting behavioural, social and cognitive neuroscience studies on human participants they will represent a major change in the way we conduct our experiments, treat our participants, collect our data, and develop our scientific models. The major changes have been introduced to ensure that behavioural and neuroscience research using human participants complies with a mixture of research council developments on the importance of the social impact of funded research and the recent EU Court of Human Rights declarations on the rights and civil liberties of individuals as extended to human participants in experimental procedures.

The most obvious change is the introduction of regulations governing the nature and impact of distraction activities in psychological experiments. In an attempt to spread the social and economic impact of biological science research to activities that take place in the experimental procedure itself, experimenters will no longer have a free choice of distractor tasks (e.g. in memory experiments) or inter-task activities to present to their participants. Researchers will no longer be able to ask their participants to count backwards in threes to prevent rehearsal of learned material. Instead, participants must engage in an activity that represents a significant social or economic contribution. The ESRC website provides a number of examples of the socially and economically inclusive distractor tasks that can now be deployed, many of which are designed to directly benefit the institution in which the research is being conducted. These include asking participants to empty waste bins in faculty offices, mark first year lab reports, prepare sandwiches for senior management luncheon meetings, and chair student misconduct tribunals. Participants with specific vocational skills can be asked to use those skills during experimental distraction tasks, including fixing laboratory plumbing, vacuuming carpets, cooking lunch for university research employees/technicians (but not for postgraduate research students), etc. During inter-trial intervals participants educated to FE level should be urged to teach 50-min Level 1 and Level 2 undergraduate student seminars, and to write draft exam papers for finals resits. Given the dismay expressed by many researchers to these fundamental changes in research protocols, RCUK has expressed regret at not including behavioural and social science researchers in the consultation process for these changes, but confirms that discussions with Russell Group Vice-Chancellors proved to be very constructive and Vice-Chancellors were said to be unanimously supportive of the new changes.

However, the major change to research council approved experimental procedures results from recent changes to human rights legislation. No longer can participants be coerced to ‘respond as quickly as possible’ in reaction time and related studies, nor can they be given a fixed time in which to recall previously learned material in memory-related experiments. According to the legislation all participants “…must be treated with equality and respect in such a way as to allow the individual to fully contemplate the various stimulus and response choice options available to them before executing a response – a response which in many cases may be final and irrevocable within the confines of the experimental procedure”. This, of course, will have major implications for many experimental procedures, including choice reaction-time studies, Implicit Association Tests, many lexical decision tasks, as well as response bias training procedures and homophone ambiguity tasks.

Of this latter group of changes, perhaps the one that will have the greatest impact on researchers is the abolition of the fixed recall period in memory tasks. In future all participants will be allowed as much time as they require to recall prior-learned material and word lists. Research council guidelines now specify that participants in such studies should be given the opportunity to recall experimental material “…over as extended a time period as is necessary and befits the status of the participant as a respected and valued member of society”. The minimum recall time now recommended by RCUK is one week, timed from the end of the learning phase of the study. These guidelines state that all participants must be given a stamped addressed envelope when leaving the laboratory so that they can jot down any material recalled in the week following the experiment and submit that material to the experimenter for proper inclusion in the study analysis. Similarly, participants can no longer be allocated to different experimental conditions on a random basis without prior consultation. All participants must be given an informed overview of each experimental condition and allowed a free choice of the condition in which they wish to participate. The participant also has the right to change this choice at any time after the study has begun, and will also have the choice to sample each of the conditions before making a decision on which group to participate in. Researchers in individual institutions are encouraged to hold regular ‘fairs’ for participants that advertise and provide examples of the various experimental conditions in their studies and which will allow participants to make a fully informed choice of the experimental conditions in which they would like to participate. Placebo conditions must now be clearly labelled as such, and cake provided for the participant at the end of a placebo procedure to compensate for the lack of a psychologically/biologically potent component in the experimental condition. Also, any procedures that involve deception must be approved by a locally-appointed panel of civil rights legal advisors – at least one of whom must be a fully qualified and experienced teacher of qualitative methods.

For your information, full details of these changes to the regulations governing experimental procedures in the behavioural and social sciences can be found at http://bit.ly/HeZGp7.

Whatever happened to Learning Theory?

3/17/2013

 
First published 23/03/2012 at http://grahamdavey.blogspot.co.uk
I’ve already blogged about B. F. Skinner, and – coincidentally – the 108th anniversary of his birth has just passed. But it led me to think about how learning theory in general seems to have drifted slowly out of our undergraduate psychology curricula, out of our animal and experimental psychology labs, and out of the list of high impact journals. I don’t mean just ‘behaviourism’; I mean learning theory and all that it embraces – from schedules of reinforcement and behaviour analysis, to associative learning and cognitive inferential models of conditioning – in both animals and humans.

In 2010, the BPS Curriculum for the Graduate Basis for Chartered Membership of the Society listed ‘learning’ as a topic under Cognitive Psychology (that would have jarred with Prof. Skinner!), and not under Biological Psychology. Interestingly, 10 years ago it was listed under both cognitive and biological psychology. In my own institution I know that learning theory has become a relatively minor aspect of Level 1 and Level 2 teaching. Until 2 years ago, I offered a final year elective called ‘Applications of Learning Theory’, but despite its applied, impact-related title the course usually recruited fewer than 10 students. I usually had to begin the first two lectures by covering the basics of associative learning. If these students had been taught anything about learning theory in Years 1 and 2, they had retained none of it. This state of affairs is quite depressing in an institution that twenty-five years ago had one of the leading animal learning labs in the world, inhabited by researchers such as Nick Mackintosh, Tony Dickinson, John Pearce, and Bob Boakes, to name but a few.

I haven’t done anything like a systematic survey of what different Psychology Departments teach in their undergraduate courses, but I suspect that learning theory no longer commands anything more than a couple of basic lectures at Level 1 or Level 2 in many departments. To be fair, most contemporary Introduction to Psychology texts usually contain a chapter devoted to learning (e.g. 1,2), but this is usually descriptive and confined to the difference between instrumental and classical conditioning, coverage of schedules of reinforcement (if you’re lucky), and a sizable focus on why learning theory has applied importance.

So why the apparent decline in the pedagogic importance of learning theory? I suspect the reasons are multiple. Most obviously, learning theory got overtaken by cognitive psychology in the 1980s and 1990s. There is an irony to this in the sense that during the 1980s, the study of associative learning had begun to develop some of the most innovative inferential methods to study what were effectively ‘cognitive’ aspects of animal learning (3, 4) and had also given rise to influential computational models of associative learning such as the Rescorla-Wagner and Pearce-Hall models (5,6). These techniques gave us access to what was actually being learnt by animals in simple (and sometimes complex) learning tasks, and began to provide a map of the cognitive mechanisms that underlay associative learning. This should have provided a solid basis from which animal learning theory could have developed into more universal models of animal consciousness and experience – but unfortunately this doesn’t appear to have happened on the scale that we might have expected. I’m still not sure why this didn’t happen, because at the time this was my vision for the future of animal learning, and one I imparted enthusiastically to my students. I think that the study of associative learning got rather bogged down in struggles over the minutiae of learning mechanisms, and as a result lost a lot of its charisma and appeal for the unattached cognitive researcher and the inquisitive undergraduate student. It certainly lost much of its significance for applied psychologists, which was one of the attractions of the radical behaviourist approach to animal learning.
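For readers who never met these models, the Rescorla-Wagner rule is simple enough to state in a few lines: associative strength changes on each trial in proportion to the prediction error, the difference between the outcome that occurred and the outcome predicted by all the cues present. The sketch below is my own minimal illustration (learning-rate values and trial numbers are arbitrary choices), showing how the rule reproduces the classic blocking effect.

```python
# Minimal sketch of the Rescorla-Wagner learning rule:
#   delta_V = alpha * beta * (lambda - sum of V for cues present on the trial)
# Parameter values below are arbitrary illustrative choices.

def rescorla_wagner(trials, alpha=0.3, beta=1.0, lambda_us=1.0):
    """Update associative strength V for each cue over a list of trials.

    Each trial is (present_cues, us_present), e.g. (("light",), True).
    """
    V = {}
    for cues, us_present in trials:
        total_v = sum(V.get(c, 0.0) for c in cues)
        lam = lambda_us if us_present else 0.0
        prediction_error = lam - total_v          # 'surprise' on this trial
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * beta * prediction_error
    return V

# Simple acquisition: a light paired with food for 20 trials...
acquisition = [(("light",), True)] * 20
# ...then blocking: a light+tone compound also paired with food.
blocking = [(("light", "tone"), True)] * 20
V = rescorla_wagner(acquisition + blocking)
print(V)   # the tone acquires almost no strength: the light already predicts the food
```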

A second factor in the decline of learning theory was almost certainly the decline in the number of animal labs in psychology departments – brought about in the 1980s and 1990s primarily by a vocal and active animal lib movement. This was certainly one factor that persuaded me to move from doing animal learning studies to human learning studies. I remember getting back into work one Monday morning to find leaflets pushed through the front door of the Psychology building by animal lib activists. These leaflets highlighted the cruel research carried out by Dr. Davey in Psychology who tortured rats by putting paper clips on their tails (7). At the time this was a standard technique used to generate stress in rats to investigate the effects of stress on feeding and drinking, but it did lead me to think hard about whether this research was important and whether there were other forms of research I should be moving towards. It was campaigns like this that led many Universities to either centralize their animal experiment facilities or to abandon them altogether. Either way, it made animal research more difficult to conduct and certainly more difficult for the interested undergraduate and postgraduate student to access.

In my own case, allied to the growing practical difficulties associated with doing animal learning research was the growing intellectual solitude of sharing a research topic with an ever decreasing number of researchers. In the 1980s I was researching performance models of Pavlovian conditioning – basically trying to define the mechanisms by which Pavlovian associations get translated into behaviour – particularly in unrestrained animals. Eventually it became clear to me that I shared this passion with maybe only two or three other people worldwide. Neither was it going to set the world on fire (a bit like my doctoral research on the determinants of the fixed-interval post-reinforcement pause in rats!). To cut a long story short, I decided to abandon animal research and invest my knowledge of learning theory into more applied areas that held a genuine interest for the lay person. Perhaps surprisingly it was Hans Eysenck who encouraged me to apply my knowledge of learning theory to psychopathology. During the 1980s, conditioning theory was getting a particularly bad press in the clinical psychology literature, and after I chaired an invited keynote by Hans at a BPS London Conference he insisted I use my knowledge of conditioning to demonstrate that experimental approaches to psychopathology still had some legs (but only after he’d told me how brilliant his latest book was). This did lead to a couple of papers in which I applied my knowledge of inferential animal learning techniques to conditioning models of anxiety disorders (8,9). But for me, these were the first steps away from learning theory and into a whole new world of research which extended beyond one other researcher in Indiana, and some futile attempts to attach paper clips to the tails of hamsters (have you ever tried doing that? If not – don’t!)(7).

I was recently pleasantly surprised to discover that both the Journal of the Experimental Analysis of Behavior and the Journal of Applied Behavior Analysis are still going strong as bastions of behaviour analysis research. Sadly, Animal Learning & Behavior has now become Learning & Behavior, and Quarterly Journal of Experimental Psychology B (the comparative half traditionally devoted largely to animal learning) has been subsumed into a single cognitive psychology QJEP. But I was also very pleasantly surprised to find that when I put ‘Experimental Analysis of Behaviour Group’ into Google the group was still alive and kicking (http://eabg.bangor.ac.uk). This group, affectionately known as ‘E-BAG’, was the conference hub of UK learning theory during the 1970s and 1980s and provided a venue for regular table football games between graduate students from Bangor, Oxford, Cambridge, Sussex and Manchester amongst others.

I’ve known for many years that I still have a book in me called ‘Applications of Learning Theory’ – but it will never get written, because there is no longer a market for it. That’s a shame, because learning theory still has a lot to offer. It offers a good grounding in analytical thinking for undergraduate students, it provides a range of imaginative inferential techniques for studying animal cognition, it provides a basic theoretical model for response learning across many areas of psychology, it provides a philosophy of explanation for understanding behaviour, and it provides a technology of behaviour change – not many topics in psychology can claim that range of benefits.

(1)      Davey G C L (2008) Complete Psychology. Hodder HE.
(2)      Hewstone M, Fincham F D & Foster J (2005) Psychology. BPS Blackwell.
(3)      Rescorla R A (1980) Pavlovian second-order conditioning. Hillsdale, NJ: Erlbaum.
(4)      Dickinson A (1980) Contemporary animal learning theory. Cambridge: Cambridge University Press.
(5)      Rescorla R A & Wagner A R (1972) A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A H Black & W F Prokasy (Eds) Classical conditioning II: Current research and theory. New York: Appleton-Century-Crofts.
(6)      Pearce J M & Hall G (1980) A model for Pavlovian learning: Variations in the effectiveness of conditioned but not of unconditioned stimuli. Psychological Review, 87, 532-552.
(7)      Meadows P, Phillips J H & Davey G C L (1988) Tail-pinch elicited eating in rats (Rattus norvegicus) and hamsters (Mesocricetus auratus). Physiology & Behavior, 43, 429-433.
(8)      Davey G C L (1992) Classical conditioning and the acquisition of human fears and phobias: Review and synthesis of the literature. Advances in Behaviour Research & Therapy, 14, 29-66.
(9)      Davey G C L (1989) UCS revaluation and conditioning models of acquired fears. Behaviour Research & Therapy, 27, 521-528.

When measuring science distorts it: 8 things that muddy the waters of scientific integrity and progress

3/17/2013

 
If you are a scientist of almost any persuasion, one of the things you probably cherish most dearly is the objectivity and integrity of the scientific process - a process that leads us to discover and communicate what we loosely like to call ‘the truth’ about our understanding of things. But maybe the process is not as honed as it should be, and maybe it’s not as efficient as it could be? In many cases the culprit is the desire to quantify and evaluate research output for purposes other than understanding scientific progress – a desire that distorts the scientific process to the point where it becomes an obstacle to good and efficient science. Below are 8 factors that lead to a distortion of the scientific process – many of which have been brought about by the desire to quantify and evaluate research. Scientific communities have discussed many of these factors previously on various social networks and in scientific blogs, but I thought it would be useful to bring some of them together.

1.         Does measurement of researchers’ scientific productivity harm science? Our current measures of scientific productivity are crude, but are now so universally adopted that they matter for all aspects of the researcher’s career, including tenure (or unemployment), funding (or none), success (or failure), and research time (or teaching load) (Lawrence, 2008)[1]. Research productivity is measured by number of publications, number of citations, and the impact factors of journal outlets, and these measures are then rewarded with money (either in the form of salaries or grants). Lawrence argues that if you need to publish “because you need a meal ticket, then you end up publishing when you are hungry – not when the research work is satisfactorily completed”. As a result, work is regularly submitted for publication when it is incomplete, when the ideas are not fully thought through, or with incomplete data and arguments. Publication – not the quality of the scientific knowledge reported – is paramount.

2.         But the need to publish in high impact journals has another consequence. Journal impact factors correlate with the number of retractions a journal publishes rather than with the number of citations an individual paper will receive (http://bit.ly/AbFfpz)[2]. One implication of this is that the rush to publish in high impact journals increases the pressure to ‘maybe’ “forget a control group/experiment, or leave out some data points that don’t make the story look so nice” – all behaviours that will decrease the reliability of the scientific reports being published (http://bit.ly/ArMha6).

3.         The careerism that is generated by our research quality and productivity measures not only fosters incomplete science at the point of publication, it can also give rise to exaggeration and outright fraud (http://bit.ly/AsIO8B). There are recent prominent examples of well-known and ‘respected’ researchers faking data on an almost industrial scale. One recent example of extended and intentional fraud is the Dutch social psychologist Diederik Stapel, whose retraction was published in the journal Science (http://bit.ly/yH28gm)[3]. In this and possibly other cases, the rewards of publication and citation outweighed the risks of being caught. Are such cases of fraudulent research isolated examples or the tip of the iceberg? They may well be the tip of a rather large iceberg. More than 1 in 10 British-based scientists or doctors report witnessing colleagues intentionally altering or fabricating data during their research (http://reut.rs/ADsX59), and a survey of US academic psychologists suggests that 1 in 10 has falsified research data (http://bit.ly/yxSL1A)[4]. If these findings can be extrapolated generally, then we might expect that 1 in 10 of the scientific articles we read contains, or is based on, doctored or even faked data.

4.         Journal impact ratings have another negative consequence for the scientific process. There is an increasing tendency for journal editors to reject submissions without review – not purely on the basis of methodological or theoretical rigour – but on the basis that the research lacks “novelty or general interest” (http://bit.ly/wvp9V8). This tends to be editors attempting to protect the impact rating of their journal by rejecting submissions that might be technically and methodologically sound, but are unlikely to get cited very much. One particular type of research that falls foul of this process is likely to be replication. Replication is a cornerstone of the scientific method, yet failures to replicate appear to have a low priority for publication – even when the original study being replicated is controversial (http://bit.ly/AzyRXw). Treating citation rate as the gold standard for the quality of a piece of research or the standing of a particular researcher misses the point that high citation rates can also result from controversial but un-replicable findings. This has led some scientists to advocate the use of an ‘r’ or ‘replicability’ index for research to supplement the basic citation index (http://bit.ly/xQuuEP).

5.         Whether a research finding is published and considered to be methodologically sound is usually assessed by the use of standard statistical criteria (e.g. formal statistical significance, typically p-values less than 0.05). But the probability that a research finding is true is not just dependent on the statistical power of the study and the level of statistical significance, but also on other factors to do with the context in which research on that topic is being undertaken. As John Ioannidis has pointed out, “…a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.” (Ioannidis, 2005)[5]. This leads to the conclusion that most research findings are false for most research designs and for most fields! (Ioannidis’s arithmetic is sketched in code after this list.)

6.         In order to accommodate the inevitable growth in scientific publication, journals have increasingly taken to publishing research in shorter formats than the traditional scientific article. These short reports limit the length of an article, but the need for this type of article may well be driven by the academic researcher’s need to publish in order to maintain their career rather than the publisher’s need to optimize limited publishing resources (e.g. pages in a printed journal edition). The advantage for researchers – and their need to publish and be cited – is that on a per page basis shorter articles are cited more frequently than longer articles (Haslam, 2010)[6]. But short reports can lead to the propagation of ‘bad’ or ‘false’ science. For example, shorter, single-study articles can be poor models of science because longer, multiple study articles may often include confirmatory full or partial replications of the main findings (http://nyti.ms/wkzBpS). In addition, small studies are inherently unreliable and more likely to generate false positive results (Bertamini & Munafo, 2012)[7]. Many national research assessment exercises require not only that quality of research be assessed in some way, but they also specify a minimum quantity requirement as well. Short reports – with all the disadvantages they may bring to scientific practice – will have a particular attraction to those researchers under pressure to produce quantity rather than quality.

7.         The desire to measure the applied “impact relevance” of research – especially in relation to research funding and national research assessment exercises – has inherent dangers for identifying and understanding high-quality research. For example, in the forthcoming UK research excellence framework, lower-quality research for which there is good evidence of “impact” may be given a higher value than higher-quality outputs for which an “impact” case is less easy to make (http://bit.ly/y7cqPW). This shift towards the importance of research “impact” in defining research quality risks encouraging researchers to pursue research relevant to short-term policy agendas rather than longer-term theoretical issues. The associated funding consequence is that research money will drift towards those organizations pursuing policy-relevant rather than theory-relevant research, with the former being inherently labile and dependent on changes in both governments and government policies.

8.         Finally, when discussing whether funding is allocated in a way that optimizes scientific progress, there is the issue of whether we fund researchers when they’re past their best. Do we neglect those researchers in their productive prime who could inject fresh zest and ideas into the scientific research process? Research productivity peaks at age 44 (on average 17 years after a researcher’s first publication), but research funding peaks at age 53 – suggesting productivity declines even as funding increases (http://bit.ly/yQUFis). True, these are average statistics, but it would be interesting to know whether there are inherent factors in the funding process that favour past reputation over current productivity.

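To make the arithmetic in point 5 concrete, here is a minimal sketch in Python of the positive predictive value formula from Ioannidis (2005) – the probability that a ‘statistically significant’ finding is actually true, given the prior odds that a tested relationship is real, the study’s power, and the significance threshold. The three scenarios and their numbers are my own illustrative assumptions, not figures from the paper.

```python
# Positive predictive value (PPV) of a 'significant' finding, following the
# formula in Ioannidis (2005): PPV = (1 - beta) * R / ((1 - beta) * R + alpha),
# where R is the prior odds that the tested relationship is real, (1 - beta)
# is the study's power, and alpha is the significance threshold.

def positive_predictive_value(prior_odds, power, alpha=0.05):
    return (power * prior_odds) / (power * prior_odds + alpha)

# Illustrative (assumed) scenarios: (label, prior odds R, power)
scenarios = [
    ("Well-powered confirmatory field",  1.00, 0.80),  # 1:1 prior odds, 80% power
    ("Typical underpowered study",       0.25, 0.35),  # 1:4 prior odds, 35% power
    ("Exploratory 'fishing' expedition", 0.05, 0.20),  # 1:20 prior odds, 20% power
]

for label, prior_odds, power in scenarios:
    print(f"{label}: PPV = {positive_predictive_value(prior_odds, power):.2f}")

# Prints roughly 0.94, 0.64 and 0.17 - in the last case most 'significant'
# findings are false, which is the sense in which "most research findings
# are false" for small, exploratory fields.
```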

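And here is a rough simulation sketch of the point in 6 about the unreliability of small studies (again Python; the true effect size, sample sizes and number of simulated studies are assumed purely for illustration): with a modest real effect, a small study usually fails to detect it, and the ‘significant’ results it does produce substantially overestimate the effect – one reason a literature built from bite-size studies can mislead.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n_sims = 0.3, 5000          # assumed modest true effect; illustrative only

for n in (20, 150):                 # small vs. moderately large samples per group
    significant_estimates = []
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(true_d, 1.0, n)
        pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
        d_hat = (treatment.mean() - control.mean()) / pooled_sd
        _, p_value = stats.ttest_ind(treatment, control)
        if p_value < 0.05:
            significant_estimates.append(d_hat)
    print(f"n={n} per group: {len(significant_estimates) / n_sims:.0%} significant; "
          f"mean 'significant' estimate d = {np.mean(significant_estimates):.2f} "
          f"(true d = {true_d})")
```
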
[1] Lawrence P A (2008) Lost in publication: How measurement harms science. Ethics in Science & Environmental Politics, 8, 9-11.
[2] Fang F C & Casadevall A (2011) Retracted science and the retraction index. Infection & Immunity. doi: 10.1128/IAI.05661-11.
[3] Stapel D A & Lindenberg S (2011) Retraction. Science, 334, 1202.
[4] John L K, Loewenstein G & Prelec D (in press) Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science.
[5] Ioannidis J P A (2005) Why most published research findings are false. PLoS Medicine, doi: 10.1371/journal.pmed.0020124.
[6] Haslam N (2010) Bite-size science: Relative impact of short article formats. Perspectives on Psychological Science, 5, 263-264.
[7] Bertamini M & Munafo M R (2012) Bite-size science and its undesired side effects. Perspectives on Psychological Science, 7, 67-71.

Pavlov's Sabbatical - The dangers of misinterpreting your experimental findings

3/17/2013

 
Very few people are aware that it is the 100th anniversary of Ivan Petrovich Pavlov’s sabbatical year at the University of Sussex. In 1912, fresh from many research successes and a Nobel Prize in Physiology or Medicine in 1904, he had managed to negotiate a year’s sabbatical leave from the surgical department of the physiological laboratory at the Institute of Experimental Medicine in St. Petersburg. I am aware of this unusual episode in Pavlov’s illustrious history because many years ago I was asked to write a chapter on Pavlov for an Edward de Bono book called ‘The Greatest Thinkers’[1], and my research uncovered this relatively unrecognized period of Pavlov’s life. At that time, Russia was riven with famine and political discontent, and Pavlov was struggling with the concept of ‘psychical secretion’. He had begun to realize that the nervous regulation of digestive gland secretion in dogs could often be influenced not only by purely physiological factors but also by what he initially called ‘psychical’ factors. He felt that the time was right to move away from the political and social turmoil in Russia to develop the overarching theory of conditioning that he believed would one day consolidate his role as the pre-eminent physiologist of the century. Having secured funding for his sabbatical year, he travelled to England. When he arrived he was overjoyed by the opportunities that the University of Sussex offered him for his research. He was based in the School of Life Sciences (a building now occupied by administrators and accountants trying to determine how the biological sciences can operate at anything less than an enormous loss), and he immediately set up a conditioning lab to further explore his theories of ‘psychical secretion’.

Pavlov loved the liberal, academic atmosphere at Sussex, and spent many hours in what was then euphemistically called ‘East Slope Bar’ (because of its ability to slope eastwards and be a bar at the same time). He had decided that his next goal was to prove that associative conditioning was a basic and universal learning process, and the most basic adaptive learning mechanism in the animal kingdom. There was no doubt that classical conditioning was widespread – it could be found in primates right down to single-celled organisms (yes, even nematodes) – but for Pavlov there was something missing. His conditioning theory was incomplete. It had to apply to animals of all kinds, creeds, political persuasions, and psychic states.

To this end, Pavlov dedicated his sabbatical year at the University of Sussex to determining whether classical conditioning applied to dead as well as living organisms. This was a stroke of genius. Only very few scientists possess the insight that allows them to project their theories into areas which are challenging and paradigm shifting (e.g. Sheldrake[2], Bem[3]), but Pavlov was such a scientist.

Pavlov began his research by looking for a source of dead dogs that would serve as subjects in his research. He very soon found that source at the ‘Goods Inwards’ door of the University Refectory. He negotiated a regular supply of dead dogs for his experiments, and the University Hospitality services have recently recognized the historical importance of this by erecting a brass plaque outside the University cafeteria commemorating their role in Pavlov’s sabbatical research.
Picture
Pavlov conducted his research on the salivary conditioning of dead dogs with his usual scientific rigour. Having carried his subjects back to his lab, he placed them in the usual experimental restraints and began the conditioning trials. I have been lucky enough to secure some original transcripts of the notes Pavlov kept on those early experiments. His excitement was palpable. As he writes:

“I placed the subject on the experimental table; I rang the bell; I waited very briefly, then I gave the dog the food… Nothing! …No salivation! I was puzzled. This had always worked before in the lab in St. Petersburg. Why was this so different at the University of Sussex? I could not believe that my universal learning principles did not also apply to dead organisms. But wait… of course! This was only the first trial. There will be no learning on the first trial! We must pair the bell with food on more occasions.”

Pavlov’s scientific logic was impeccable. He continued with his experimental procedure, but time eventually told a sad story. Although Pavlov had striven manfully to extend his so-called universal principles of learning to dead animals, it didn’t appear to work. His dead dogs failed to salivate to the bell CS even after hundreds of conditioning trials. Nevertheless, being the scientist that he was, and after many hours and days of detailed thought and analysis, Pavlov came to the obvious conclusion. It was not that dead dogs were not conditionable – they were in fact deaf. Pavlov had managed to salvage his universal principle of learning by taking a thoughtful and insightful new look at the data. Any of you who have come across a dead dog will be fully aware that deafness is indeed a feature of dead dogs, and our knowledge of this feature stems from Pavlov’s pioneering experimental work during his sabbatical year at the University of Sussex.

[1] De Bono E (1976) The Greatest Thinkers. Weidenfeld & Nicolson: London.
[2] Sheldrake R (2012) The Science Delusion. Coronet.
[3] Bem D J (2011) Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality & Social Psychology, 100, 407-425.

The Trials & Tribulations of the Journal of Null Results

3/17/2013

 
Picture
OK – straight to the point again. There will never be a Journal of Null Results. Recently Richard Wiseman tweeted “Psychologists: I am thinking of starting a journal that only publishes expts that obtained null results. Wld you submit to it?” I replied: yes, I have drawers full of studies with null results. But – having said that – I struggle with even thinking about writing them up, let alone submitting them, and the reality is that I know no editor would contemplate accepting them for publication. Even so, I have great sympathy for the idea of a Journal of Null Results.

So what would a Journal of Null Results look like? Would it be swamped with submissions from people who didn’t get significant results? That would be nice for all our young and up-and-coming researchers. It would mean that the 50% or so of PhD studies that never achieve significance would see the light of day; it would mean that all undergraduate projects – at a sweep – would suddenly be publishable. But I’m not sure that’s what’s intended. One of the cornerstones of science is replication, but we still seem quite bad at deciding when replication is necessary and when non-replication is publishable. These are the decisions upon which science – according to the principle of replication – should be based. I haven’t looked closely at the literature, but I suspect that most journal editors would not be interested in publishing (1) a straight replication of a study that is already published, or (2) a study that fails to replicate an already published study – especially if the results are null ones.

There are several issues that immediately come to mind here. The first is that we can never quite know why a study has generated null results. As a good scientist, you certainly wouldn’t design your experiment to generate predictions that would support the null hypothesis. We know that null results can be produced by low power, inappropriate control conditions, poor experimental environments, experimenter bias, and so on – so null-results studies already carry that inherent disadvantage unless they are obsessively faithful reproductions of studies that have produced significant results in the past.
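
To put a number on the low-power problem, here is a rough simulation sketch (the effect size and sample size are my own assumed values, purely for illustration): even when the effect under study is perfectly real, a typical small study will return a null result most of the time.

```python
import numpy as np
from scipy import stats

# Assume a real but modest effect (Cohen's d = 0.4) and a small study with
# 20 participants per group - values chosen only to illustrate the point.
rng = np.random.default_rng(1)
true_d, n, n_sims = 0.4, 20, 10000

nulls = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_d, 1.0, n)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value >= 0.05:
        nulls += 1

print(f"Null results in {nulls / n_sims:.0%} of simulated studies, "
      f"despite a real effect of d = {true_d}")
# Roughly three out of four such studies come up 'null', so a null result on
# its own tells us very little about whether the effect exists.
```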

There are also two distinct but related issues around replication: who decides whether a finding is so significant that it needs replicating, and will a replication generate enough citations to make an editor believe it is worth publishing? For example, in 2011 Daryl Bem published an article in Journal of Personality & Social Psychology – one of the flagship journals of the APA – apparently showing that ESP existed and that people could predict the future. For the vast majority of research findings the normal process would be that this finding – published in one of the major psychological journals – would now simply enter into core psychological knowledge. It would find its way into Introduction to Psychology textbooks and be taught as psychological fact. But it is controversial. Richard Wiseman, Chris French & Stuart Ritchie have had the salutary experience of trying to publish a replication of this finding, and their failure to get their null results published in that same journal is documented in this blog (http://chronicle.com/blogs/percolator/wait-maybe-you-cant-feel-the-future/27984). Quite strangely, Journal of Personality & Social Psychology told them ‘no, we don’t ever accept straight replication attempts’. I assume the implication is that, because it is such a high-impact journal, whatever it publishes is already rock-hard fact and needs no further justification of its scientific integrity? So where do you publish replications if not in the same journal as the original study? Wiseman et al. did submit their replication to another journal, but the reasons given for not publishing it there – as documented in the blog above – also seem bizarre. Once a finding does get published – no matter how bizarre and leftfield – there are editors bending over backwards to find reasons not to publish attempted replications.

One reason why replications may not get published, and specifically why null findings may not get published, is that editors are nowadays being pressurized to find reasons to reject scientifically acceptable studies not only because the journal simply has to manage demand, but also because a study may not generate citations. This is certainly true of successful replications, which we can assume will never be as heavily cited as the original study. In the last couple of weeks I had an article rejected by an APA journal, and one of the reasons given by the action editor was “we must make decisions based not only on the scientific merit of the work but also with an eye to the potential level of impact for the findings”. Whoa! – that is very worrying. Who is making decisions about whether an article is citable or not?

Finally, I should recount one of my own experiences of trying to get the research community to hear about null findings. During the early 2000s, my colleague Andy Field and I did quite a bit of research on evaluative conditioning. There was a small experimental literature in the 1990s suggesting that evaluative conditioning was real, important, and possessed characteristics that made it distinct from standard Pavlovian conditioning – such as occurring outside of consciousness and being resistant to extinction. Interestingly, multinational corporations suddenly got interested in this research because it suggested a useful paradigm for changing people’s preferences, and so had important ramifications for advertising generally. Andy’s PhD thesis rather eloquently demonstrated that the evaluative conditioning effects found in most of the 1990s studies could be construed as artifacts of the procedures used in those earlier studies and need not necessarily be seen as examples of associative learning (Field & Davey, 1999). We spent many subsequent years trying to demonstrate evaluative conditioning in paradigms that were artifact-free, but many very highly powered studies resulted in failure after failure to obtain significant effects. Evaluative conditioning was much more difficult to obtain than the published literature suggested. So what should we do with these bucket loads of null results? No one really wanted to publish them. In fact, we didn’t want to publish them because we thought we hadn’t quite got it right yet! We kept on thinking there must be a procedure that produces robust evaluative conditioning that we hadn’t yet refined. In 1998 I was awarded a 3-year BBSRC project grant worth £144K of taxpayers’ money to investigate evaluative conditioning. We found that it was extremely difficult to demonstrate evaluative conditioning – and it was also very difficult to publish the fact that we couldn’t demonstrate it! Andy and I attempted to publish all of our evaluative conditioning null findings in relatively mainstream journals – summarizing a total of 12 studies, largely with null results. No luck there, but we did eventually manage to publish these findings in the Netherlands Journal of Psychology (Field, Lascelles, Lester, Askew & Davey, 2008). I must admit, I don’t know what the impact factor of that journal is, and I don’t know how many people have ever read that article. But I suspect it will never make headlines in any review articles on evaluative conditioning.

The problem with null results is that they lie in the no man’s land between flawed design and genuine refutation. Our existing peer review process leads us to believe that what is published is published because this is the way we verify scientific fact. It then becomes significantly more difficult to produce evidence that a ‘scientific fact’ is wrong – and more difficult still when that evidence takes the form of a study demonstrating null results.

Field A P & Davey G C L (1999) Re-evaluating evaluative conditioning: A nonassociative explanation of conditioning effects in the visual evaluative conditioning paradigm. Journal of Experimental Psychology: Animal Behavior Processes, 25, 211-224.

Field A P, Lascelles K R R, Lester K J, Askew C & Davey G C L (2008) Evaluative conditioning: Missing, presumed dead. Netherlands Journal of Psychology, 64, 46-64.



The Evils of Journal Scope-Shrinkage: The Example of Clinical Psychology

3/17/2013

 
First published 27/01/2012 at http://grahamdavey.blogspot.co.uk
Picture
I’ll come straight to the point. The more that journals have to introduce demand management strategies, the more they end up shrinking their scope. And the more they shrink their scope, the more they force research into a cul-de-sac and isolate it from cross-fertilization with other core areas of their over-arching discipline. In this day and age there are more and more researchers, all of whom are under pressure to do research and to publish it – for the sake of both their current jobs and their future careers. Journals find themselves overwhelmed with submissions, to the point where many APA journals now have 80%+ rejection rates. So how do you manage demand? Well, at least some journals manage their demand by shrinking their scope. Demand management matters because it reduces the costs incurred in managing a submission through the journal editorial system, and the costs incurred through the Associate Editors who manage the peer review process. For most traditional journals that have a print as well as an on-line version, increased numbers of submissions mean increased costs against a fixed publication income. Effectively, scientific publishers believe they are spending good money finding reasons to reject large numbers of submissions that are of acceptable scientific quality but will never themselves earn money for the publisher.

So you shrink your scope. Often scope-shrinkage has a relatively small impact. But in some areas of psychology it can have a significant impact depending on how a journal redefines its scope. I am an experimental psychopathology researcher. I conduct the majority of my research as an experimental psychologist, but the subject matter is psychopathology, and I have traditionally published in clinical psychology journals – a good fit for the subject matter of my research, and one that also gets my research read by clinical psychologists.

But even before scope-shrinkage, I’ve sometimes encountered difficulties publishing in clinical psychology journals because I’ve used analogue rather than clinical samples, or because my research has not been viewed by editors as relevant to clinical interventions. They might just as well have said “You’re not a clinical psychologist, and your research can’t have any relevance to clinical populations because your participants weren’t a clinically diagnosed sample, and so your research is of no interest to the clinical research community!” Harsh – but that is the feeling I got.

Well, now that’s official – at least for some Elsevier journals. In 2010 Behaviour Research & Therapy – that traditional bastion of experimental psychopathology research - posted a very brief editor’s note stating “Behaviour Research & Therapy encompasses all of what is commonly referred to as cognitive behaviour therapy (CBT)” with a revised focus on processes that had a direct implication for treatment and the evaluation of empirically-supported interventions (Vol 48, iii, 2010). In effect, it had become a CBT evaluation journal. An email exchange I had with the editor confirmed that this re-scope was a consequence of the large number of submissions to the journal. To be fair, the editor did say that “the goal is not to eliminate research on experimental psychopathology, but to try to have it more ‘directly’ related to prevention and treatment”.

So where do I now go to publish my psychopathology research if it’s not clearly intervention related? BRAT’s editor did say that “a final sentence in the Discussion would not suffice” to make any research intervention relevant. Fair enough. Most of my research is on the aetiology of anxiety disorders, so one journal that I’ve published in quite frequently is another Elsevier journal, Journal of Anxiety Disorders. I submitted a manuscript in June 2010 – around the same time that BRAT had shrunk its scope to intervention-relevant papers. I received a regretful email from the action editor immediately after submission saying “we have made a decision that we will no longer review manuscripts based solely on undiagnosed or analogue samples. This decision can be found within the Editorial Guidance paragraph on the back cover of the journal. Consequently, I will be unable to accept it for publication”.

Scope-shrinkage yet again. I’m sure that these decisions about journal scope were all taken in the best of faith and genuinely meant to help deal with and manage demand, but I can’t help but think of the potential restrictions that changes such as these will place on the discipline-wide exchange of ideas and information that seeds genuine progress in any applied area. OK, so I’m now miffed that I can’t easily publish any more in journals that I used to consider automatic outlets for my research, but there must surely be a bigger and wider cost. As we get more journals with increasingly narrower scopes, it is likely to lead to researchers reading only those journals that have a direct relevance to their research and areas of interest. There could well be significantly fewer left-field ideas, fewer opportunities for the cross-fertilization of ideas. It is also likely to lead to the entrenchment of existing paradigms of research within specific areas – especially applied areas such as clinical psychology where theoretical and empirical sharpness can often be compromised by the need for serviceable outcomes.

During our own weekly lab meetings, I always bring the latest copy of Quarterly Journal of Experimental Psychology along as soon as it’s published, and we look through it for ideas that have relevance to the psychopathology processes that we’re researching. This has already been the source of some exciting new ways for us to conceptualise and study the psychopathology processes we’re interested in. With the scope-shrinkage currently occurring in at least some important clinical psychology journals, I wonder where new ways of thinking about clinically-related research problems will come from unless those researchers who publish in these journals are actively scouring the contents of journals beyond their immediate clinical remit.


    Author

    Graham C. L. Davey, Ph.D. is Professor of Psychology at the University of Sussex, UK. His research interests extend across mental health problems generally, and anxiety and worry specifically. Professor Davey has published over 140 articles in scientific and professional journals and written or edited 16 books including Psychopathology; Clinical Psychology; Applied Psychology; Complete Psychology; Worrying & Psychological Disorders; and Phobias: A Handbook of Theory, Research & Treatment. He has served as President of the British Psychological Society, and is currently Editor-in-Chief of Journal of Experimental Psychopathology and Psychopathology Review. When not writing about psychology he watches football and eats curries.
