Thoughts on Psychology

Just another WordPress.com site

Knowledge on the Internet

You’ll hear us say at various times that you shouldn’t rely on websites as sources to reference in coursework, though some sites are useful starting points to get your head round a question. Wikipedia’s particularly good, and particularly popular (too popular sometimes), but it has its problems. A study in Nature found that the accuracy of Wikipedia articles wasn’t far short of Encyclopaedia Britannica articles. This is sound, but at degree level we expect knowledge beyond what you find in an encyclopaedia, no matter how good. More to the point, this level of quality comes about because anyone can contribute to Wikipedia, but this is also its downfall – anyone can edit a Wikipedia article to say whatever they want. The system is self-correcting, in that people can flag disagreement and make changes to articles, and eventually articles tend to settle down to a sound position. Whenever you look something up on Wikipedia, though, there’s always a chance that you catch it at an intermediate stage where someone’s written any old nonsense.

The above is all true, and widely recognised. For most people, Wikipedia is an excellent source, provided you bear in mind that some material may be in dispute. For others, however, Wikipedia is the front line in a dastardly plot by left-wingers. Check out this Guardian article about Conservapedia, an online “encyclopaedia” set up to counteract the alleged left-wing propaganda promulgated by Wikipedia and the like:

http://www.guardian.co.uk/technology/2007/mar/02/wikipedia.news
(For the Conservapedia web site: http://conservapedia.com/Main_Page)

This development tells us something about the nature of knowledge. The people who set up Conservapedia are dissatisfied with the knowledge presented on Wikipedia, because it doesn’t fit in with what they, for whatever reasons, believe to be true. This is revealed nicely by their comments about the Democratic party – whatever your political views, any fair-minded person would find it hard to believe that one of the two mainstream US parties has a ‘true agenda’ of cowering to terrorism.

The Conservapedia site would seem to many to be politics presented as knowledge. The trouble is, this is usually true to some extent. It’s not always so blatant, but knowledge is fundamentally socially constructed, such that a given group comes to some agreement about what counts as “true” and what doesn’t. In some cases the group in question is a clear subset of society with a clear agenda behind what they believe to be true. More widely, though, any particular culture or society will have its own ways of agreeing what’s acceptable knowledge. In general in the West we prefer the scientific method as a way of finding ‘truth’, but the scientific method has its own flaws, which means that just because a claim is widely accepted doesn’t mean it’s true. In the 1940s most psychologists accepted behaviourism as a ‘true’ theory of human behaviour, but now we know better 😉 The lesson is to always be sceptical about the truth claims of others. Including those on Wikipedia.

Written by daijones

September 11, 2010 at 7:02 pm

Posted in Full post

On Word Limits in Assignments

It’s a commonly held belief that when you’re set an assignment with a particular word limit, you’re allowed to go over or under by 10%. So, for example, if the word limit is 2000 words then you’d be okay producing between 1800 and 2200 words. Outside of that range, you attract penalties for going over or under the limit. As is often the case, this belief is commonly held, but partially wrong 😉

Within the University regulations for assessment there’s no mechanism to penalise students for writing significantly less than the word limit. You can go under by as much as you want. On the other hand, there IS a set penalty for going more than 10% over the limit. If you go more than 10% over the limit then we’ll mark the work as normal and then apply the University’s standard penalty: for every additional 10% above the allowance, or part thereof, we take off 5 marks. In the case of an essay with a 2000 word limit, if you write 2190 words you’re fine; at 2230 words you lose 5 marks off the final mark; at 2450 you lose 10 marks; and so on.
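
If it helps to see that banding spelled out, here’s a rough sketch of the calculation in Python. It’s just my reading of the rule above turned into code, so treat the actual regulations as the authority, not this snippet:

```python
import math

def word_limit_penalty(word_count: int, limit: int) -> int:
    """Marks deducted under the rule described above.

    Up to 10% over the limit attracts no penalty; beyond that, every
    further 10% of the limit (or part thereof) costs 5 marks.
    """
    allowance = limit * 1.10                     # e.g. 2200 words for a 2000-word limit
    if word_count <= allowance:
        return 0
    excess = word_count - allowance              # words beyond the free allowance
    bands = math.ceil(excess / (limit * 0.10))   # number of 10% bands entered
    return 5 * bands

# The examples from the paragraph above (2000-word limit):
print(word_limit_penalty(2190, 2000))  # 0 marks off
print(word_limit_penalty(2230, 2000))  # 5 marks off
print(word_limit_penalty(2450, 2000))  # 10 marks off
```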

Having said that, word limits are there for a reason: it’s very difficult to produce a good answer to a given question in fewer words than the limit. While there’s no fixed penalty for going under the word limit, we set word limits in the expectation that students will need to write that many words, after selecting and rejecting appropriate material, to produce a good essay. If you find you’re well short of the word limit you will end up with a low mark because it’s a poor piece of work, in that you’ve probably left out material we expected to be included. The same holds true for going over the word limit – if you write 2400 words for an essay with a 2000 word limit then you’re probably waffling a bit, and you’ll lose marks for that; and then lose more marks when we apply the penalty for breaching the word limit.

As a general guide, you should aim to write somewhat more than the word limit, then identify the most relevant material to keep in, so as to get down to the limit. Part of any assessment task is deciding what goes in and what doesn’t. If you find yourself short of the word limit, think about what you might have left out that we were expecting you to include. Don’t just pad out the essay to make up the words, though – the marker will notice irrelevant padding and you’ll lose marks for poor choice of material. If you find yourself well over the word limit, look for the least relevant material that you can remove. Remember we expect proper grammar, so just deleting every occurrence of “the” isn’t a good strategy!

Written by daijones

September 11, 2010 at 6:59 pm

Posted in Full post

Rejecting over-simplistic views of mental health

A couple of interesting articles from the Guardian about issues in mental health. As you might expect from me, the interest is not only in what they say about mental health per se, but also in what they say about wider issues in psychology.

In an article on schizophrenia in black Britons, Kwame McKenzie talks about the increased incidence of schizophrenia in Afro-Caribbean groups in British society:
http://www.guardian.co.uk/commentisfree/2007/apr/02/comment.health
Particularly interesting is his observation that the comparative rates of incidence of schizophrenia between black and white groups in Britain aren’t reflected in other cultural settings. That is, in a predominantly white British culture, members of Afro-Caribbean groups are far more likely to be diagnosed as schizophrenic than members of white groups, and this difference is specific to the British cultural setting, as opposed to, say, African or Caribbean settings. This suggests that there’s something about British culture that leads to the difference in incidence rates, which in turn suggests both that simplistic claims for a causative biological basis for schizophrenia are misplaced, and that psychology as a discipline is wrong to ignore cultural factors in the illness. The author is a psychiatrist, and as such treats the diagnosis of mental health problems as unproblematic: there’s a whole separate debate about the extent to which schizophrenia is a culturally specific diagnosis, and the extent to which predominantly white psychiatrists can unproblematically diagnose mental illness in other cultural groups. I’ll leave that for now, though. (This is touched on in some of the comments on the article. In general, though, the comments vary greatly in the extent to which they’re worth reading. I wouldn’t bother.)

The more interesting article, for me at least, discusses increasing rates of depression in Western society:
http://www.guardian.co.uk/lifeandstyle/2007/apr/02/healthandwellbeing.books
The take-home messages here are about depression as a culturally caused illness, and about possible remedies that aren’t addressed by modern psychology (indeed, that are discounted as part of the medicalisation of psychological conditions in modern Western society). There are very interesting insights about how psychological states in individuals arise from the organisation of cultures and societies. More to the point, again for those who’ve heard me banging on in the past, is the observation about the changing notion of the self from the 16th century onwards. See, it’s not just me 😉

Written by daijones

September 11, 2010 at 6:58 pm

Posted in Full post

Learning About Research From a Cereal Packet

Actually, from a cereal advert, but that makes a less eye-catching heading. Anyone seen the Special K advert that makes the following claim?
“Research shows that people who eat a healthy breakfast are more likely to be slimmer than people who eat no breakfast at all”
There’s an interesting lesson to learn from this claim. It seems counter-intuitive, surprising even. The implication, as I read it, is that eating a “healthy breakfast”, presumably Special K, leads to you being slimmer than if you ate no breakfast. However, there’s nothing in the claim that actually supports that implication.

Imagining how this research must have gone, I’d guess that they took a group of people who claimed to eat a healthy breakfast and a group of people who claimed to eat no breakfast, compared the weights or BMIs of the two groups, and found that the healthy-breakfast group scored lower on average. Fine, but what we have here is a classic quasi-experiment: the groups already exist. As such, you can’t ascribe causality, and there are three possible interpretations of the finding:
* eating a healthy breakfast makes you slimmer (what they’d like you to believe)
* being slim makes you eat a healthy breakfast rather than no breakfast (unlikely)
* people who care about healthy eating are both more likely to be slim and more likely to eat a healthy breakfast (my preferred option)
Quasi-experiments are essentially correlational designs, even if you use a difference test to analyse the results. All the Special K results mean is that there’s a correlation between healthy eating and being slim. Big surprise!
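
If you like seeing this sort of thing demonstrated, here’s a tiny simulation of that third interpretation in Python. The numbers and the ‘cares about healthy eating’ variable are entirely made up by me – this isn’t the Special K data – but it shows how a lurking third variable produces exactly the group difference the advert reports, even though breakfast has no causal effect on weight at all:

```python
import random

random.seed(1)

def simulate_person():
    cares = random.random() < 0.5                       # latent variable: cares about healthy eating
    eats_breakfast = random.random() < (0.8 if cares else 0.2)
    bmi = random.gauss(24 if cares else 28, 2)          # BMI depends only on 'cares', never on breakfast
    return eats_breakfast, bmi

people = [simulate_person() for _ in range(10_000)]
breakfast_bmi = [bmi for eats, bmi in people if eats]
no_breakfast_bmi = [bmi for eats, bmi in people if not eats]

print(sum(breakfast_bmi) / len(breakfast_bmi))          # noticeably lower than...
print(sum(no_breakfast_bmi) / len(no_breakfast_bmi))    # ...this, with no causal link to breakfast
```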

Of course, Special K could have done experimental research: asking one group of people to eat a healthy breakfast for a month and another group to eat no breakfast for a month, controlling for equivalent eating behaviour at other times of day, and then comparing change in weight or BMI over time. If people in the healthy-breakfast group show a larger drop, then you can claim a causal relationship. The reason I don’t think they did this is that the advert is actually very careful not to claim a causal relationship – they don’t say “eating a healthy breakfast helps weight loss compared to eating no breakfast”. The Advertising Standards Authority wouldn’t let them make such a claim unless they’d done experimental research; the fact that they don’t make the claim suggests they’ve done different research, of the type I described at the start. However, they make their claim in a very clever way, to give the impression that if you eat Special K for breakfast, you’ll lose weight – the first of the three possible interpretations.

Now, I hesitate to criticise anything that encourages people to eat healthily, and particularly anything that discourages people from avoiding eating. However, there’s a cautionary tale here. People routinely overstate the findings of correlational research and, outside the control of the ASA, routinely ascribe causality when there’s no basis for it. Be careful when reading about research results like this. The research described is exactly equivalent, in logical terms, to race or gender difference research in psychology. We could rewrite the claim like so:
“Research shows that people who come from a white ethnic group are more likely to score highly on IQ tests than people who come from an Afro-Caribbean ethnic group.”
This has been shown to be the case in various research projects, but as those of you who’ve suffered my rants in lectures will know, the reasons for it are highly debatable, and to my mind most likely to be down to differing social opportunities and experiences for the two groups.

Written by daijones

September 11, 2010 at 6:55 pm

Posted in Full post, Research methods

Studying memory for words (and confounding variables generally)

Studies investigating memory for word lists are amongst the most popular to do for research methods assignments. This is for good reason, since such studies are usually straightforward both conceptually and methodologically. However, problems arise due to people’s choice of words. These problems are important to address in their own right, but also teach us something about doing research in general.

When people do studies of memory for word lists, it’s common for them to make up lists of words off the top of their head. This is a Bad Thing. As an example, consider the study I described in the last post, comparing memory for long words with memory for short words. The logic of this study is that people are given a list of long words and/or a list of short words to remember. Recall for the respective kinds of list is measured, and then compared using an appropriate analysis, e.g. a t-test. If a significant difference is found, then we conclude that the length of words affects how well we can remember them, and in particular that the phonological loop has a time-limited capacity.
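
(As a purely hypothetical illustration of that analysis step: if you happened to be analysing the recall scores in Python rather than SPSS, the comparison itself is only a couple of lines. The scores below are invented for the example, not real data.)

```python
from scipy import stats

# Invented recall scores: number of words correctly recalled per participant
short_word_recall = [7, 8, 6, 9, 7, 8, 7, 6, 8, 9]
long_word_recall  = [5, 6, 4, 6, 5, 7, 5, 4, 6, 5]

# Independent-samples t-test for a between-subjects design;
# use stats.ttest_rel instead if the same people saw both lists.
t, p = stats.ttest_ind(short_word_recall, long_word_recall)
print(f"t = {t:.2f}, p = {p:.3f}")
```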

The logic above is sound, but only as long as we can discount any alternative explanations. However, if there can be an alternative explanation for why memory for the two lists differs, then we can’t confidently conclude that word length has an effect. Take the following two lists as an example:
List 1: cat, hat, mat, bat
List 2: xylophone, phenomenon, evolution, conundrum
Clearly, one list consists of short words, and one list consists of longer words. However, that’s not the only difference between the two lists. The lists also differ in that:
* all the words in the first list are familiar, the words in the second list less so. Familiarity affects memory.
* the words in the first list rhyme with each other, words in the second list don’t. Rhyming affects memory.
* words in the first list all refer to concrete objects, words in the second list are more abstract. Concreteness affects memory.
If we find a difference between people’s memory for list one and their memory for list two, we’d want to conclude that it’s because of the word length. Actually, there are at least four possible reasons for such a difference, because there are at least four kinds of difference between the lists: word length; rhyming; familiarity; and concreteness.

(Actually, we probably wouldn’t find a difference, because even the four long words fit into the 2-second phonological loop. Almost everyone would score 4 for each list. We need more words in each list to detect a difference, which illustrates the importance of measures being fine-grained enough to measure what we want.)

So, the study described above is clearly flawed because there are alternative explanations for the results – the study is potentially confounded. That’s the general issue I talked about above.

The specific issue is as follows. If you’re doing a study of memory for words, you need to think carefully about the words you use. If you’re doing a between (unrelated) design where the same list of words can be used, e.g. recall with and without interference, then you can relax a little – there’s only one list of words, so there are no differences between lists to worry about. If you’re doing a between design where there are separate lists, though, e.g. long and short words, then you need to worry.

If you’re doing a within (related) design, then you almost always need to choose word lists carefully, because participants will be remembering more than one list of words. If you’re doing a within design where you’re testing memory with and without interference, then you can’t use the same list of words because of practice effects. You need two (or more) lists of words, but you also need to make sure that the lists are equally difficult to remember, so that the only explanation for any difference you find is that for one list there was interference.

In general, when you’re doing a study of memory for word lists where you’re using more than one list of words, you need to design two (or more) matched word lists that you can show are equivalent on any possible confounding variables. Of course, the lists will differ on the one criterion you’re interested in as an independent variable, if there is one. So, if you’re looking at the effects of interference in a within design, you need to ensure that the words in each list are of equal length, equal familiarity, equal concreteness, and so on. If you’re doing a within design looking at the effect of word length, then you need to ensure that the words in each list are of equal familiarity, equal concreteness, and so on, but differ in word length.

(A quick note: the word length effect arises because of the time-based capacity of the phonological loop. Length in this context refers to articulatory length – how long it takes to say a word – not the number of letters in the word. The number of syllables is a rough guide to articulatory length, and certainly a better one than the number of letters.)

So, how do you get these magical matched word lists? Luckily, some kind souls have developed a publicly available database of words marked up with various psycholinguistic characteristics, including articulatory length, familiarity, concreteness, etc. The database allows you to select words according to whichever of these characteristics you want to focus on. Use the database to generate words according to whatever criteria you choose, then randomly choose the number of words you need for each list. You can then write about this in your materials section, to show how much care you’ve taken to eliminate confounding variables. You can access the database at the following address:
http://websites.psychology.uwa.edu.au/school/MRCDatabase/mrc2.html

Use the “Dict Utility Interface” link to access the old, web searchable version of the interface.

I’d recommend using sections 2 & 3 of the interface to select required values of NSYL, the number of syllables; FAM, the familiarity, where 100=not familiar, 700=very familiar; CONC, for concreteness, 100=not concrete, 700=very concrete; and PDWTYPE, part of speech, choosing INClude N, for nouns. Adjust these, then click the GO button to generate a list of words. If anyone wants help, give me a shout or leave a comment.
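
If you’d rather do the selecting and the random sampling in one step, here’s a minimal sketch of the same idea in Python. It assumes you’ve already got your candidate words, with their NSYL, FAM and CONC values, out of the database and into a CSV file – the file name, the exact column layout and the cut-off values here are my own assumptions for illustration, not something the database itself provides:

```python
import csv
import random

def pick_words(path, n, nsyl, min_fam=400, min_conc=400):
    """Pick n words with a given syllable count that are reasonably familiar and concrete."""
    with open(path, newline="") as f:
        candidates = [
            row["WORD"]
            for row in csv.DictReader(f)
            if int(row["NSYL"]) == nsyl
            and int(row["FAM"]) >= min_fam
            and int(row["CONC"]) >= min_conc
        ]
    return random.sample(candidates, n)

# e.g. two lists matched on familiarity and concreteness, differing only in length
short_words = pick_words("mrc_words.csv", 20, nsyl=1)
long_words = pick_words("mrc_words.csv", 20, nsyl=4)
```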

Written by daijones

September 11, 2010 at 6:54 pm

Posted in Full post, Research methods

Eugenics never went away

Written by daijones

September 11, 2010 at 6:53 pm

Posted in Biological determinism, Quick Link

Offender Profiling

Nice Guardian article on criminal profiling:

http://www.guardian.co.uk/uk/2010/may/15/criminal-profiling-jon-ronson

Written by daijones

September 11, 2010 at 6:52 pm

Posted in Quick Link