Thoughts on Psychology

Just another WordPress.com site

Archive for the ‘Research methods’ Category

Link: fraud in psychology

There are many reasons to be cautious in consuming psychological research. This is perhaps the most egregious:

http://www.guardian.co.uk/science/2012/sep/13/scientific-research-fraud-bad-practice

Nothing new really; psychology has learnt little from the Cyril Burt case.

Written by daijones

September 17, 2012 at 4:34 am

More shoddy science reporting from the BBC

An exciting headline on the BBC News website recently:

Alzheimer’s: Diet ‘can stop brain shrinking’

Alzheimer’s is a devastating illness, and anything that can help prevent or delay its onset, or lessen its severity, is to be welcomed. The headline, and in part the article that follows, suggest that diet can reduce brain shrinkage in later life and so act protectively against Alzheimer’s. There are a couple of problems here, though, that should be quickly apparent if you read the article with a sceptical eye. The first is the suggestion that the article has anything to do with Alzheimer’s: the research wasn’t conducted on Alzheimer’s sufferers, and participants weren’t followed longitudinally to see if there was a differential incidence of Alzheimer’s. So the research can’t actually tell us anything about Alzheimer’s. To be fair, the article does mention this. In the penultimate paragraph.

A bigger problem, potentially, is with the interpretation of the research itself. The article takes the unambiguous position that a diet high in vitamins and omega-3 fatty acids caused a reduction in brain shrinkage with age. However, the research didn’t find this. Rather, it found a correlation between blood nutrient levels and brain volume: this is a quasi-experiment, so the result is essentially correlational even if a difference statistic was used to analyse it. And correlation doesn’t prove causation. Given the pre-existing evidence that education and intellectual effort increase brain complexity and volume, there are a number of possible explanations for the results found. Off the top of my head, it may be that people who are well educated tend to have higher brain volumes and also tend to eat healthier diets. Or people from higher socio-economic groups tend to be both more highly educated and more likely to follow (and able to afford) a healthy diet.

To eliminate these possibilities, you’d hope that the original research controlled for factors including education and socio-economic status. It’s behind a paywall so I can’t check, but if the researchers did control for these, the BBC didn’t think to mention it. The other result reported, a difference in performance on cognitive tasks in a sample of people without clinical deficits, suggests that there’s some relationship between diet and cognitive performance. But without knowing the educational history of the participants, it’s impossible to decide whether diet causes differences in performance, as the article suggests, or whether it’s again a matter of better educated people tending to have better diets. The latter is certainly a strong possibility, and you’d hope that the health editor who wrote the article would discuss it.
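The confounding story above is easy to demonstrate with a small simulation (all variables and numbers are hypothetical, purely for illustration): here education drives both diet quality and brain volume, diet has no causal effect on the brain at all, and yet diet and brain volume still correlate substantially.

```python
import random

random.seed(1)

# Hypothetical simulation: education drives BOTH diet quality and brain
# volume; diet has no direct effect on brain volume whatsoever.
n = 5000
education = [random.gauss(0, 1) for _ in range(n)]
diet = [e + random.gauss(0, 1) for e in education]   # better educated -> better diet
brain = [e + random.gauss(0, 1) for e in education]  # better educated -> larger volume

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = correlation(diet, brain)
print(round(r, 2))  # a clearly positive correlation, despite zero causal link
```

A researcher who only saw the diet and brain measurements could easily mistake this correlation for a causal effect, which is exactly why you'd want the original study to control for education.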

Written by daijones

January 1, 2012 at 10:25 pm

Psychology’s subjects

Psychology is usually pursued as a science akin to the natural sciences, attempting to find universal laws of behaviour that explain all humans. Central to this view is the idea that it doesn’t matter what humans we study, since we all have the same fundamental psychological processes. However, for many phenomena in human psychology this isn’t necessarily true, and arguably much of our psychological function is a reflection of our cultural background. If this is the case, then it’s a problem that Western psychology tends to only investigate Western participants and Western concepts, and then tries to apply the resultant theories to all people. For example, Hwang (2005) describes the modernising approach to intervention in the developing world, which imposed Western notions of individualism on other societies as the “right” goal for development, while Watters (2010) describes how Western definitions of mental illness are being exported to other cultures to the detriment of members of those cultures.

As the references above suggest, there is some awareness of the problem of generating “universal” theories of human nature without paying regard to cultural differences, and specifically of the problems posed by studying exclusively Western participants and assuming the results hold true for other peoples. A couple of good articles address this issue head on.

Arnett (2008) analyses research published in APA journals and finds that participants are overwhelmingly drawn from the USA, home to less than 5% of the world’s population. He analyses the problems this causes and suggests some solutions. The abstract reads:

This article proposes that psychological research published in APA journals focuses too narrowly on Americans, who comprise less than 5% of the world’s population. The result is an understanding of psychology that is incomplete and does not adequately represent humanity. First, an analysis of articles published in six premier APA journals is presented, showing that the contributors, samples, and editorial leadership of the journals are predominantly American. Then, a demographic profile of the human population is presented to show that the majority of the world’s population lives in conditions vastly different from the conditions of Americans, underlining doubts of how well American psychological research can be said to represent humanity. The reasons for the narrowness of American psychological research are examined, with a focus on a philosophy of science that emphasizes fundamental processes and ignores or strips away cultural context. Finally, several suggestions for broadening the scope of American psychology are offered.

Henrich et al. (2010) go further in identifying the problems with relying on Western participants, arguing in depth that Westerners differ from other peoples on a range of important characteristics, and that these differences colour the results of psychology research. The abstract reads:

Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers—often implicitly—assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species—frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior—hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggests that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.

References

Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63(7), 602-614.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2-3), 61-83.

Hwang, K.-K. (2005). The indigenous movement. The Psychologist, 18(2), 80-83.

Watters, E. (2010). Crazy Like Us: The Globalization of the American Psyche. New York: Simon & Schuster.

Written by daijones

October 8, 2010 at 2:01 am

Posted in Full post, Research methods

Learning About Research From a Cereal Packet

Actually, from a cereal advert, but that makes a less eye-catching heading. Has anyone seen the Special K advert that makes the following claim?
“Research shows that people who eat a healthy breakfast are more likely to be slimmer than people who eat no breakfast at all”
There’s an interesting lesson to learn from this claim. It seems counter-intuitive, surprising even. The implication, as I read it, is that eating a “healthy breakfast”, presumably Special K, leads to you being slimmer than if you ate no breakfast at all. However, there’s nothing in the claim that actually supports that implication.

Imagining how this research must have gone, I’d guess that they took a group of people who claimed to eat a healthy breakfast and a group who claimed to eat no breakfast, compared the weights or BMIs of the two groups, and found that the healthy group scored lower on average. Fine, but what we have here is a classic quasi-experiment: the groups already exist. As such, you can’t ascribe causality, and there are three possible interpretations of the finding:
* eating a healthy breakfast makes you slimmer (what they’d like you to believe)
* being slim makes you eat a healthy breakfast rather than no breakfast (unlikely)
* people who care about healthy eating are both more likely to be slim and more likely to eat a healthy breakfast (my preferred option)
Quasi-experiments are essentially correlational designs, even if you use a difference test to analyse the results. All the Special K results mean is that there’s a correlation between healthy eating and being slim. Big surprise!
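My preferred interpretation above is easy to simulate (everything here is hypothetical, invented purely to illustrate the logic): in this sketch, “health-consciousness” is the lurking variable that drives both breakfast habits and weight, while eating breakfast has no causal effect on BMI at all.

```python
import random

random.seed(2)

# Hypothetical simulation: health-conscious people are both more likely to
# eat breakfast AND slimmer. Breakfast itself does nothing to BMI here.
people = []
for _ in range(2000):
    conscious = random.random() < 0.5
    eats_breakfast = random.random() < (0.8 if conscious else 0.3)
    bmi = random.gauss(23 if conscious else 27, 2)  # conscious people are slimmer
    people.append((eats_breakfast, bmi))

def mean(xs):
    return sum(xs) / len(xs)

breakfast = [bmi for eats, bmi in people if eats]
no_breakfast = [bmi for eats, bmi in people if not eats]
print(round(mean(breakfast), 1), round(mean(no_breakfast), 1))
# The breakfast group comes out slimmer on average, even though breakfast
# is causally inert in this simulation.
```

Comparing the two group means here with a difference test would be entirely valid as a description, but it would tell you nothing about what causes what.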

Of course, Special K could have done experimental research: asking one group of people to eat a healthy breakfast for a month and another group to eat no breakfast for a month, controlling for equivalent eating behaviour at other times of day, and then comparing change in weight or BMI over time. If people in the healthy group showed a larger drop, then you could claim a causal relationship. The reason I don’t think they did this is that the advert is actually very careful not to claim a causal relationship – they don’t say “eating a healthy breakfast helps weight loss compared to eating no breakfast”. The Advertising Standards Authority wouldn’t let them make such a claim unless they’d done experimental research; the fact that they don’t make it suggests they’ve done different research, of the type I described at the start. However, they phrase their claim very cleverly, to give the impression that if you eat Special K for breakfast you’ll lose weight – the first of the three possible interpretations.

Now, I hesitate to criticise anything that encourages people to eat healthily, and particularly anything that discourages people from skipping meals. However, there’s a cautionary tale here. People routinely over-state the findings of correlational research and, outside the control of the ASA, routinely ascribe causality where there’s no basis for it. Be careful when reading about research results like this. In terms of psychology, the research described is exactly equivalent, in logical terms, to race or gender difference research. We could rewrite the claim like so:
“Research shows that people who come from a white ethnic group are more likely to score highly on IQ tests than people who come from an Afro-Caribbean ethnic group.”
This has been shown to be the case in various research projects but, as those of you who’ve suffered my rants in lectures will know, the reasons for it are highly debatable, and to my mind most likely to be the differing social opportunities and experiences of the two groups.

Written by daijones

September 11, 2010 at 6:55 pm

Posted in Full post, Research methods

Studying memory for words (and confounding variables generally)

Studies investigating memory for word lists are amongst the most popular to do for research methods assignments. This is for good reason, since such studies are usually straightforward both conceptually and methodologically. However, problems arise due to people’s choice of words. These problems are important to address in their own right, but also teach us something about doing research in general.

When people do studies of memory for word lists, it’s common for them to make up the lists off the top of their head. This is a Bad Thing. As an example, consider the study I described in the last post, comparing memory for long words with memory for short words. The logic of this study is that people are given a list of long words and/or a list of short words to remember (depending on the design). Recall for the respective kinds of list is measured and then compared using an appropriate analysis, e.g. a t-test. If a significant difference is found, we conclude that the length of words affects how well we can remember them, and in particular that the phonological loop has a time-limited capacity.
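The analysis step is simple enough to sketch. Here’s a minimal example of the comparison (the recall scores are invented for illustration; a real study would of course use real data, and the lists below deliberately have the problems discussed next):

```python
from statistics import mean, stdev

# Hypothetical recall scores (words recalled out of 10) for a group
# remembering short words and a group remembering long words.
short_words = [8, 7, 9, 8, 6, 9, 7, 8]
long_words = [5, 6, 4, 7, 5, 6, 4, 5]

def welch_t(a, b):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

t = welch_t(short_words, long_words)
print(round(t, 2))  # a large |t| suggests a reliable difference in recall
```

You’d compare the resulting t against the appropriate critical value (or get a p-value from statistical software) before declaring the difference significant.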

The logic above is sound, but only as long as we can discount any alternative explanations. However, if there can be an alternative explanation for why memory for the two lists differs, then we can’t confidently conclude that word length has an effect. Take the following two lists as an example:
List one: cat, hat, mat, bat
List two: xylophone, phenomenon, evolution, conundrum
Clearly, one list consists of short words and one of longer words. However, that’s not the only difference between the two lists. They also differ in that:
* all the words in the first list are familiar, the words in the second list less so. Familiarity affects memory.
* the words in the first list rhyme with each other, the words in the second list don’t. Rhyming affects memory.
* the words in the first list all refer to concrete objects, the words in the second list are more abstract. Concreteness affects memory.
If we find a difference between people’s memory for list one and their memory for list two, we’d want to conclude that it’s because of the word length. Actually, there are at least four possible reasons for such a difference, because there are at least four kinds of difference between the lists: word length; rhyming; familiarity; and concreteness.

(Actually, we probably wouldn’t find a difference, because even the four long words fit into the 2 second phonological loop. Almost everyone would score 4 for each list. We need more words in each list to detect a difference, which illustrates the importance of measures being fine grained enough to measure what we want.)

So, the study described above is clearly flawed because there are alternative explanations for the results – the study is potentially confounded. That’s the general issue I talked about above.

The specific issue is as follows. If you’re doing a study of memory for words, you need to think carefully about the words you use. If you’re doing a between (unrelated) design where the same list of words can be used, e.g. recall with and without interference, then you can relax a little – there’s only one list of words, so no need to worry about differences between those lists. If you’re doing a between design where there are separate lists though, e.g. long and short, you need to worry.

If you’re doing a within (related) design, then you almost always need to choose word lists carefully, because participants will be remembering more than one list of words. If you’re doing a within design where you’re testing memory with and without interference, then you can’t use the same list of words because of practice effects. You need two (or more) lists of words, but you also need to make sure that the lists are equally difficult to remember, so that the only explanation for any difference you find is that for one list there was interference.

In general, when you’re doing a study of memory for word lists where you’re using more than one list of words, you need to design matched word lists that you can show are equivalent on any possible confounding variables. Of course, the lists will still differ on the one criterion you’re manipulating as an independent variable, if there is one. So, if you’re looking at the effects of interference in a within design, you need to ensure that the words in each list are of equal length, equal familiarity, equal concreteness, etc. If you’re doing a within design looking at the effect of word length, then you need to ensure that the words in each list are of equal familiarity, equal concreteness, etc., but different in terms of word length.
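The selection logic is simple to sketch in code. This is not the MRC database itself – the words and norm values below are invented for illustration, on the 100-700 scales the database uses – but it shows the idea of filtering so that the lists differ only on the manipulated variable:

```python
# Hypothetical sketch of list matching: each word carries psycholinguistic
# norms (syllable count, familiarity, concreteness; the latter two on
# invented 100-700 scales). We select words that differ ONLY on length,
# while staying matched on the potential confounds.
words = [
    # (word, syllables, familiarity, concreteness) -- illustrative values only
    ("cat", 1, 620, 610), ("dog", 1, 630, 615), ("cup", 1, 610, 600),
    ("banana", 3, 615, 605), ("umbrella", 3, 625, 612), ("elephant", 3, 618, 608),
    ("conundrum", 3, 320, 250),  # excluded below: unfamiliar and abstract
]

def pick(target_syllables, min_fam=550, min_conc=550):
    """Return words with the target length that pass the matching criteria."""
    return [w for (w, syl, fam, conc) in words
            if syl == target_syllables and fam >= min_fam and conc >= min_conc]

short_list = pick(1)
long_list = pick(3)
print(short_list, long_list)  # both lists familiar and concrete; only length differs
```

In a real study you’d draw candidate words from the database described below, then randomly sample the number you need for each list.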

(A quick note: the word length effect arises because of the time based capacity of the phonological loop. Length in this context refers to articulatory length – how long it takes to say a word – not the number of letters in the word. The number of syllables is a rough guide to articulatory length, and certainly a better one than the number of letters.)

So, how do you get these magical matched word lists? Luckily, some kind souls have developed a publicly available database of words marked up with various psycholinguistic characteristics, including articulatory length, familiarity, concreteness, etc. The database allows you to select words according to whichever of these characteristics you want to focus on. Use the database to generate words according to whatever criteria you choose, then randomly choose the number of words you need for each list. You can then write about this in your materials section, to show how much care you’ve taken to eliminate confounding variables. You can access the database at the following address:
http://websites.psychology.uwa.edu.au/school/MRCDatabase/mrc2.html

Use the “Dict Utility Interface” link to access the old, web searchable version of the interface.

I’d recommend using sections 2 & 3 of the interface to select required values of NSYL, the number of syllables; FAM, the familiarity, where 100=not familiar, 700=very familiar; CONC, for concreteness, 100=not concrete, 700=very concrete; and PDWTYPE, part of speech, choosing INClude N, for nouns. Adjust these, then click the GO button to generate a list of words. If anyone wants help, give me a shout or leave a comment.

Written by daijones

September 11, 2010 at 6:54 pm

Posted in Full post, Research methods