I think I’ve heard variations on this headline from just about every direction in my social media over the last few days:
“Study finds non-religious kids more altruistic than religious kids”
It is based on a study published in Current Biology – a study which you can find and read for yourself here.
My reactions to the initial study were mixed. On the one hand, this is not a surprising result. I’ve already heard of research which suggests rule-based moral education interferes with actual moral reasoning in children and adults. (I can’t find a reference to back that up, so I have to hold even that belief tentatively right now. If you can point me to work on this, please let me know and I’ll add it in.) Since a good number of religious traditions focus on rule-based morality (“thou shalt” and “thou shalt not” because of what a book says), it is natural to suspect that people taught in that way will have impaired moral reasoning.
On the other hand, just about every study of social interest which hits the headlines gets distorted according to various agendas and narrative impulses (to support one or the other side, or just to make things sound more interesting). So I was prepared, on eventually reading the paper, to discover that its data supported none of the claims coming to me, primarily via the nonreligious people in my networks.
So much for initial skepticism – I had, I think, neutralized much of the impulse to uncritically accept claims just because they conformed to my preconceptions. But this was an actual study, peer-reviewed and published in a respectable journal, so there is something to learn, and I am fortunate to have training in interpreting social science research. I have read through the paper. Here is my attempt to report what I think it really says, and does not say, and what interesting questions we might want to ask next to probe the implications further.
The key number I was looking for was at the end of this sentence:
> Results from a linear regression with number of stickers shared as the dependent variable and age (1-year bins), country of origin, socioeconomic status (SES), and religious identification of the household (dummy coded) suggest that age (β_standardized = 0.39, p < 0.001), SES (β_standardized = 0.16, p < 0.001), country (β_standardized = 0.1, p < 0.01), and religious identification (β_standardized = −0.132, p < 0.001) are significant predictors of sharing (model r²_adjusted = 0.184).
The beta values indicate the size of each effect – and you can see that the age coefficient is about three times the magnitude of the religion coefficient. That is, if we take these as causes of sharing, a kid’s age has about three times the effect on sharing that the kid’s family’s religion has. But for me, it’s the last value that I was hunting for. The adjusted r² indicates how much of the variation in sharing is accounted for by the variables observed (age, country, SES, and religious identification). In this case, taking all four of these factors together, we are only able to account for about 18% of the variation in sharing.
What this means is that, even if there are no flaws in the study (and oh boy, there are flaws – see below), and the effect is real, it doesn’t tell us all that much about any particular kid. The variation between individual kids, or between kids within a particular religion, is several times as large as the observed variation between kids from different religious backgrounds. (Lynn Honey made this point in general terms in her stats talk at the recent Alberta Secular Conference: don’t just ask whether there’s a difference between group A and group B. Ask whether the difference is large relative to the unexplained variation.)
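To make that concrete, here is a toy simulation of my own (not from the paper): I generate standardized, independent predictors, apply the reported betas, and add noise sized so the model explains roughly the reported share of variance. The predictor names and the independence assumption are mine for illustration; in the real data the predictors are somewhat correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical standardized predictors, independent for simplicity.
age, ses, country, religion = rng.standard_normal((4, n))

# Standardized betas as reported in the paper.
betas = np.array([0.39, 0.16, 0.10, -0.132])
signal = betas @ np.vstack([age, ses, country, religion])

# With independent standardized predictors, explained variance is the
# sum of squared betas (~0.21 here, close to the reported 0.184).
noise_sd = np.sqrt(1 - (betas ** 2).sum())
sharing = signal + noise_sd * rng.standard_normal(n)

r2 = 1 - np.var(sharing - signal) / np.var(sharing)
print(round(r2, 2))                 # roughly 0.2
print(round(noise_sd / 0.132, 1))   # residual sd vs. the religion effect
```

The second number is the point of the exercise: the unexplained spread between individual kids is several times larger than the religion effect itself.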
One thing that was almost completely lacking from the paper was any acknowledgement of mixed-belief marriages. A year ago I read an excellent book about the topic, and it was disappointing to see it left out of this analysis – although that could simply be due to the small sample size. In the study, only Christianity, Islam, and nonbelief had large enough samples to be included in the main analysis. I would be curious to see whether diversity (kids whose parents or communities include a mix of religious and other identities) affects kids’ ability to empathize.
I am also struck that we are told only about select tests that the authors ran. While I find p-values very useful, one way they become suspect is when people run many tests and report only the ones they find interesting. What were the negative results? How can I know whether the few results we are told about are the whole story, or just the bits that make for a tidy conclusion? This is a problem with any study whose analyses are not pre-registered – i.e., just about any social science research published these days. But it is a problem.
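The multiple-testing worry is easy to demonstrate with a quick simulation (mine, not the authors’): run many two-group comparisons on pure noise and count how many come out “significant” at p < 0.05.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tests, n_per_group = 200, 50

# Two groups drawn from the SAME distribution: every "effect" is spurious.
a = rng.standard_normal((n_tests, n_per_group))
b = rng.standard_normal((n_tests, n_per_group))

# Welch-style t statistic for each of the 200 comparisons.
t = (a.mean(1) - b.mean(1)) / np.sqrt(a.var(1, ddof=1) / n_per_group
                                      + b.var(1, ddof=1) / n_per_group)

# With ~98 degrees of freedom, |t| > 1.98 corresponds roughly to p < 0.05.
false_positives = int((np.abs(t) > 1.98).sum())
print(false_positives)  # expect around 5% of 200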
Last, I want to point you to a couple of other articles people have written criticizing the study and/or the conclusions that laypeople are drawing from it, which I came across after initially drafting this post. They offer different perspectives that add to the picture above.
A secular writer who makes similar points to mine. Curiously, he ends his explanation of why we can’t draw firm conclusions from this study by stating that “this study does at least provide evidence that atheist kids are not less altruistic than religious kids.” As one commenter put it, “you can’t have it both ways”. I would say that “at least this study does not confirm that atheist kids are less altruistic than religious kids.”
A religious writer. While I agree with him that the study isn’t as conclusive as some of the headlines suggest, his critique itself is unfortunately error-ridden. For example, he says the study should have controlled for parental education. It did (at least for maternal education). He also complains that the authors conflated “mercy” with “morality” in their study of punitiveness. I don’t think they did. In fact, throughout the paper they clearly separate their discussion of the altruism test (the dictator game) and the test of punitiveness. It is clear to me that the authors think these two tests measure distinct things.
Both of these articles highlight a crucial problem with the dictator game – the central test of altruism – which I had missed. They claim that, rather than testing altruism, it may simply test obedience. This seems very plausible to me. The test was set up like this:
- The experimenter gives the child some stickers.
- The child is then invited to share some stickers with another (unseen) child by putting them in a separate envelope while the experimenter’s back is turned.
- The number of stickers shared is used as the operational measure of altruism in this study.
Yes, an altruistic child will tend to put more stickers in the envelope. But so will a more obedient child. Without deeper knowledge of the psychological literature, I am forced to take this as a profound flaw. I have my expectations about whether nonreligious kids will, on average, tend to be more obedient than religious kids. But we’re doing science here, and my empirically untested expectations carry little weight.
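Here is a sketch of why the confound matters, under an assumption of my own devising: suppose the number of stickers shared depends equally on altruism and obedience, and two groups differ only in average obedience. The sharing measure then shows a gap even though altruism is identical in both groups.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical model: altruism is identically distributed in both groups.
altruism = rng.standard_normal(n)

# Two groups that differ ONLY in average obedience.
group = rng.integers(0, 2, n)
obedience = rng.standard_normal(n) + 0.5 * group

# The operational measure: sharing reflects both traits equally.
shared = altruism + obedience

gap = shared[group == 1].mean() - shared[group == 0].mean()
print(round(gap, 2))  # a "sharing gap" despite zero altruism difference
```

Under these made-up numbers the measured gap is entirely an obedience gap, which is exactly the interpretation problem the two critiques raise.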
So the conclusion? Well, it may sound dismissive, but I think this study tells us very little. It does fail to confirm the stereotypical religious expectation that religious people are more altruistic than non-religious people. But it also fails to soundly refute it.
With appropriate follow-ups, it may become a useful bit of evidence in the picture of how our beliefs shape our moral behaviours. For now, it is mainly useful as a cautionary tale in interpreting scientific results.
The best thing about this paper? It is freely available for anyone to read and examine. And this, in my mind, puts it a step ahead of many better-designed studies that are locked behind paywalls.