Attack of the killer stats assignment

Tonight I finally submitted my first stats assignment — ten days overdue. The ‘easiest’ of three, they tell me. Univariate exploratory data analysis. Choose about ten variables from a cut-down data set drawn from the Health Survey for England. Calculate the mean and median. Make bar charts and histograms and box-and-whisker plots. Assess normality via skew and kurtosis. And here’s the hard part, for me anyway: develop hypotheses that can be tested using these methods. Wait, what?
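(For anyone who wants to reproduce this sort of exercise, here’s a minimal sketch of the whole thing in Python; the file name and variable name are hypothetical stand-ins, not the real HSE codebook entries.)

```python
# Minimal univariate EDA sketch -- the file and column names are
# hypothetical stand-ins for the Health Survey for England extract.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("hse_subset.csv")
age = df["age_started_smoking"].dropna()

print(age.mean(), age.median())  # central tendency
print(age.skew(), age.kurt())    # shape: skewness and excess kurtosis

age.plot.hist(bins=30)           # histogram
plt.savefig("uptake_hist.png")
plt.close()

age.plot.box()                   # box-and-whisker plot
plt.savefig("uptake_box.png")
```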

For most of those ten days I was grinding my gears over the sheer pointlessness of the task.  I don’t care if the distribution of data is ‘normal’ (i.e. Gaussian).  If it isn’t, I’ll use a non-parametric test in my next assignment, or I’ll apply an arithmetic transformation, or I’ll consult the literature to identify sensible cut-points to turn interval data into categorical data, or, fuck it, I’ll quote the central limit theorem and use a parametric test anyway.
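(A rough sketch of those fallbacks in Python, with toy data: the lognormal numbers, the group split and the 140-a-week cut-point are all invented for illustration.)

```python
# Sketch of the fallback options for non-normal data. All data and
# cut-points here are toy values invented for illustration.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
cigs_week = rng.lognormal(mean=3.0, sigma=0.8, size=500)  # skewed toy data
group = np.repeat(["a", "b"], 250)                        # toy grouping
a, b = cigs_week[group == "a"], cigs_week[group == "b"]

# Option 1: non-parametric test, no normality assumption needed
print(stats.mannwhitneyu(a, b))

# Option 2: arithmetic transformation first, then a parametric test
print(stats.ttest_ind(np.log(a), np.log(b)))

# Option 3: cut interval data into categories at a sensible threshold
heavy = pd.cut(cigs_week, bins=[0, 140, np.inf], labels=["light", "heavy"])
print(pd.crosstab(heavy, group))

# Option 4: quote the central limit theorem and t-test the raw data anyway
print(stats.ttest_ind(a, b))
```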

I chose variables to do with social class, material deprivation (independent variables) and smoking uptake, heaviness, and cessation (dependent variables). In my lit review, study after study reported that there’s no difference in uptake by class or education. In many there was no difference in number of quit attempts either, but poor and working class people are much less likely to succeed in quitting.

From a stats point of view, what I’m supposed to care about (in this assignment) is that ‘age of smoking uptake’ had massive kurtosis, i.e. the curve was peaked like a Saturn V rocket.  I’m supposed to care about that because it affects what tests I can later apply and whether the results they give me, from the limited sample I had at hand, can be taken as a meaningful reflection on reality, i.e. smoking in the population at large.

As Howard Becker puts it, it’s a logic of synecdoche: can we reliably take this sample to stand in for and represent the population?

What I actually cared about was the fact the peak on the graph was around 13 years of age.  Forget the stats for a moment and apply some practical intelligence.  That, right there, tells you why higher education doesn’t affect rates of smoking uptake — because most people in my sample started smoking in early high school.

Here’s why I don’t care if my data is normal: no matter what test I eventually use, it’s still just a signal. I’m not taking it as gospel even if p < 0.000…001. It’s another bit of information I’ll add to the pile along with all the studies I read and my life experience and practical judgment as a practitioner. Bent Flyvbjerg calls this phronesis, i.e. (to simplify quite a bit) good judgment in practice.

This is also why the hypothesis testing pissed me off so badly. I’m supposed to propose hypotheses that can be tested by univariate analysis.  ‘That the median number of cigarettes smoked in a week will be equivalent to a pack a day’. That’s a univariate hypothesis. It doesn’t compare anything. It’s not ‘that the median cigarettes per week is higher among working class people’ — sorry Dan, that’s bivariate.

Who cares if the first hypothesis is rejected or not? It was totally fucking arbitrary to begin with. I picked a value out of the air based on a cultural stereotype, ‘the pack-a-day smoker’. But to some people, devotees of null hypothesis significance testing, i.e. the dominant paradigm in quantitative social science, it really, really matters that I pick a hypothesis before I do any tests.

On this view, I’m to perform a pantomime of a scientific experiment, defining my hypothesis ahead of time and then using an appropriate statistical test to falsify it. Of course, falsifying my pack-a-day hypothesis wouldn’t give you any information about what the median was — it just tells you what it wasn’t. So we oh so cleverly phrase the hypothesis in the negative, then we reject it, and Karl Popper doesn’t spin in his grave. YAY!
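(Here’s the whole ritual in about ten lines, with toy data; a pack a day is 140 cigarettes a week, and scipy’s one-sample Wilcoxon signed-rank test stands in as one common way of testing a median against a fixed value.)

```python
# The pantomime, end to end: H0 is 'median cigarettes per week == 140'
# (a pack a day). Toy data; the one-sample Wilcoxon signed-rank test is
# one common way to test a median against a fixed value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
cigs_week = rng.lognormal(mean=4.5, sigma=0.6, size=300)  # toy smokers

stat, p = stats.wilcoxon(cigs_week - 140)
print(f"sample median = {np.median(cigs_week):.0f}, p = {p:.4f}")

# Rejecting H0 only tells you the median probably isn't 140 --
# nothing about what it actually is.
```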

In practice, nobody but nobody does this kind of testing on univariate data, but as students we’re being drilled in it as a matter of disciplinary socialisation. Because public health has to be Scientific, yo.

Honestly, fuck that. Public health should work. That’s what matters. Who cares if your brother-in-law who’s a research chemist thinks your degree is hard science or not?

If this didn’t scare you off, you might like this great post by Peter Freed, When Central Tendency Junkies Attack, which was inspired by the vitriolic response to his earlier, now rather prescient-seeming post, Jonah Lehrer Is Not A Neuroscientist.  TL;DR: Freed argues that statistics should be understood as metaphysics, not a science.  This is the philosophy equivalent of a 19th-century gauntlet, thrown down with a cold sneer.

Doctors for the Family — but not evidence

If a doctor wants to abide by his or her conscience on the question of gay marriage, I’m fine with that.

If the same doctor claims to speak for the evidence on the health and social impacts of homosexuality, but instead speaks from his or her moral beliefs and distorts the evidence to do so, then I have a problem.

The Herald-Sun reports today that Prof Kuruvilla George, Victoria’s Deputy Chief Psychiatrist and a Government-appointed member of the Victorian Equal Opportunity and Human Rights Commission, joined a list of signatories to a Senate Inquiry submission by “Doctors for the Family” opposing the Marriage Equality Amendment Bill 2010.

The submission was made after Prof George’s appointment to the Commission and makes his tenure there impossible;  lawful sexual activity is one of the attributes the Commission was created to protect.

Disagreeing about gay marriage isn’t the problem — it’s how the Doctors for “the” (sic) Family went about it.  They explicitly argue their case on health grounds, saying they were created to “highlight the health aspects of marriage and family and ensure a healthy future for our children.”  Their terms are medical, rather than moral.

“We submit that the evidence is clear that children who grow up in a family with a mother and father do better in all parameters than children without.”

The source they cite is a report by a law professor on a study commissioned by the Australian Christian Lobby.  This is not medical evidence;  nor is it free of bias;  nor is it accurate about the current state of the evidence, which shows that children raised by same sex parents do as well as (and sometimes better than) children of opposite sex parents.

The submission goes on to refer to ominous “health consequences of that behaviour [i.e. homosexuality] for children”, but the footnote turns out to refer to HIV and syphilis infection.  These are vanishingly uncommon among children, and they are the consequences of epidemics of HIV and syphilis, not homosexuality.

These are not complicated moral questions: they’re matters of fact.  In giving health advice a doctor has a professional and legal duty to be informed and unbiased.  In claiming to speak as doctors and to offer advice about public and children’s health, these citizens have created that expectation and then signally failed to fulfil it.

The Australian Medical Association has firmly rejected the claims, and the ABC is reporting that the Minister for Mental Health, Mary Wooldridge, has asked Prof George for an immediate explanation. Attorney-General Robert Clark, who appointed Prof George to the Commission, needs to do the same.

Given his willingness to put scientifically unfounded personal beliefs ahead of the established evidence on homosexuality and same sex parentage, Prof George’s tenure as Deputy Chief Psychiatrist for Victoria and membership of the Commission are unsustainable and should be terminated.

Building the evidence

On Friday, I presented at an interstate talkfest on HIV in culturally diverse communities.

The pre-reading for the meeting was heavy with public health strategies focused very tightly on disease reduction, and a mammoth epi report, analysing results from surveillance of HIV notifications, behavioural surveys, and mathematical modelling.

Despite a massive committee and a very prescriptive action plan, it seemed that actual action had almost come to a standstill, aside from one very capable and energetic agency.  The workshop was intended to revitalise it, but its title and main objective were ‘building the evidence’.

Does lack of evidence cause inaction?

I had only eight minutes to describe findings from our consultation report, and answer three questions posed by the organisers — not a lot of time, and not enough for reflection on the knowledge practices involved.  But I managed to squeeze in a provocative suggestion, and I was delighted when it was picked up in question time.

I suggested that health promotion planning is not especially sensitive to variations in the epidemiology.

In other words, whether the epi reports 10 or 20 new infections in a particular cultural group — double the number, a 100% difference in statistical terms — I’m still going to propose broadly the same plan of action in response.

We’ll hire a project worker, identify partners, convene a reference group, review the literature, undertake community consultation, do some rapid assessment of the causes and most useful activities/messages/channels to use, do the work and then report back on how it went.

Knowing the number of new infections is much less important than knowing the size of the group, and developing a close understanding of what resources exist in that group in our state for undertaking health promotion work.

  • There’s no use planning a social marketing campaign if the target group is geographically dispersed, with no common media channels (like newspapers and radio programs), no shared habits of using them (like reading and listening), and none of the related skills (like the print, media and health literacies needed to learn from social marketing campaigns).
  • There’s no use proposing a key opinion leaders approach if the community is so new or disorganised that it has no institutions and structures for leadership and communication.

That kind of knowledge makes a HUGE difference in health promotion planning; stats on numbers of new infections, not so much.

I pointed this out in my presentation, and sure enough, in question time, there came the obvious objection, from an epidemiologist who does a lot of interesting work in mathematical modelling.

We have to ‘live in the real world’, he said, ‘we can’t ignore the epidemiology’.

Not really what I called for.

Where those stats make a big difference is in public health.  Government funding is the ultimate zero sum game, and epi helps decision-makers objectively assign priority to competing worthy causes.  It is a hugely important tool in rational government. Around the world, where governments have ignored the epi and funded politically convenient work, HIV epidemics have exploded.

And while I’ve said that epi isn’t always terribly relevant for health promotion planning, I still read every paper and every report, cover to cover, and squeeze every last drop of meaning and use out of it. The key thing is where I use it — in my funding submissions and advocacy to government.

I want to draw a strong distinction between (1) health promotion and (2) public health.  They are not the same thing, and shouldn’t be conflated.  They have different vocabularies, pointing to different underlying concepts and philosophies, and they focus on different levels.

Obviously they are connected – since public health people fund health promotion workers.  In spaces where they overlap, however, like the talkfest I was at, you can pretty quickly see communication problems arising between the two different languages.

It’s a problem of articulation – in two senses: how we find the terms to express ourselves, and how our discourses (the languages of our disciplines) can join together and mesh at their points of connection to transmit power.

Power in this case means funding for work, but it can also mean domination, where one can override the other, and alternatively it can mean trust, where two groups who use different language and knowledge practices can still work together effectively.

Epi and behavioural surveillance are vital for the public health functions of commissioning, monitoring and evaluating health promotion activities and outcomes.  As I’ve argued here, health promotion people should know about them, but the inputs into our planning need to be broader.

Culturally diverse communities are mostly migrants, whether temporary (international students, casual workers and visitors) or permanent (skilled migration, family reunion, and humanitarian entrants), and migration patterns change all the time.

Health promotion in this area requires a process of continuous rapid assessment, and instead of dismissing this because it doesn’t look like formal epidemiology, we need to borrow principles for assessing and improving its rigour and validity from qualitative methodologies.

We also need better recognition of the difference in languages, concepts and inputs used by public health and health promotion, and the need for careful and respectful engagement at points where they interface with each other.

Without it, decision-makers and researchers will be left to hold talkfest events, to scratch their heads and wonder why their ‘evidence’ never makes it into practice.

Reply privileges

In my early twenties, the guy who fought every pointless battle on the internet was me.  These days, I pick my battles.

At any given time, I have a lot on my mind or in my notebook, and not enough time/energy to write it up.  What I love about Facebook and blogging is that I get to talk to amazingly smart people, all around the world, about stuff we care about.

When someone misrepresents me, or tries to drag me off topic, they’re trying to make me waste my time correcting them or fighting on multiple fronts.

In my head, people who do that lose their ‘reply privileges’.  Not their right to reply — which is endless — but their claim on my attention, time and energy.

If I post another message in the same thread, it will be for the audience, because I truly believe they’re smart enough to figure out what’s happening and discount it accordingly.  In gay men’s health, there’s so much we need to talk about, there’s just no point getting bogged down fighting old battles with people whose opinions are never going to change.

Bad science HIV-style

One of my favourite blogs is Ben Goldacre’s Bad Science, where he bangs on a bit about the pseudoscience of homeopathy. There’s no shortage of bullshit about HIV, either. Thousands of people died in South Africa when it was governed by AIDS denialist President Thabo Mbeki and a Health Minister who recommended beetroot, garlic and lemon juice instead of toxic Western antiretrovirals.

Now the author of the Raw Top Blog has posted an anonymous AAP report about an eminent zoologist speculating that HIV may evolve to become less deadly to its human hosts, just as SIV seemingly has in monkeys. The zoologist, Roger Short, is the same guy who wrongly claimed a good squeeze of lemon juice in the vagina would kill HIV and – as an added bonus – work as a contraceptive, too.

In the community sector, one of our key roles is translating medical and scientific knowledge of HIV into everyday language and circulating the outputs through cultures and communities. This is a good example of the same process taking place independently – and erroneously.

If you live in a country where you can access affordable anti-HIV medication, then an HIV diagnosis is no longer a death sentence in the immediate or medium term. That’s completely different from the argument that HIV is evolving to become less deadly. In fact, HIV might evolve to become more infectious, or develop resistance more quickly, as treatments apply selection pressure.

Linked on the same page Raw Top mentioned: oh, look.