Consequently, we frequently miss the mark at the level of delivery and, therefore, of impact.
In large part this is a presuppositional problem. The data on which a great deal of policy design is based is often ideological. It is also frequently more qualitative than it is quantitative.
Qualitative data is useful and important, but only to a point, and seldom at the starting point.
Qualitative data is useful in forming impressions of how policy, or the absence of policy, affects individuals, or even groups of individuals. Lived experience enriches our impressions and can, and sometimes should, influence policy design at the edges.
In a sense, qualitative data is more useful to the psychotherapist, or the practitioner, than to the policy analyst, or at the level of policy design. It should (generally) have little influence on the formation of policy in the wider sense. It is often not sufficiently robust to be generalised in any meaningful way, and it has no place at the centre of policy design, which should always (primarily) lean heavily on the analysis and extrapolation of reliable (comparative) quantitative data.
Quantitative data must always shape the bigger picture, define boundaries, and delineate the primary discourse ... with qualitative data serving as a possible mechanism for cautious fine-tuning.
Too often assertions are made, and positions taken, by those who have access to the levers of influence, that have little relation to actual facts, that are heavily reliant on ideology, and anecdote, and that are too infrequently challenged.
People are often too scared to challenge these because, through verbatim repetition, they enter the political, and public, lexicon as a given. And because their enthusiastic protagonists often have significant input and influence at the level of policy design, objectives and anticipated outcomes too often become seriously compromised.
This is especially the case with issues that are highly contentious, or where there are vested interests.
Quantitative data tends to be centring. It is, by its nature, resistant to ideology, less susceptible to manipulation, and more amenable to the wise use of public funding. In short, it is objective!
Policy will only be as good as the quality of the data that shapes it.
I was struck by an animated and highly emotional assertion by a Green MP recently, that English was beaten into indigenous people, at immeasurable cost to them. I well remember, at the very start of my teaching career, speaking with teachers who were at, or beyond, retirement age, who commented that Maori was nearly always spoken in the homes of most Maori children, and that the insistence that English was the language they used at school came from the parents themselves.
The assertion by this Green MP is one of myriad examples of how facts can be amplified, twisted, taken out of context, used to advance an agenda ... and ultimately to shape policies based on dubious, self-interested and heavily ideologically oriented assumptions.
Lived experience is important, but exceptions exist everywhere, things are always more complex, and multivariate, and fluid, than they seem, or than we would wish them to be, and we are prisoners to our worldviews.
Context is everything!
Maybe we could be a lot better at saying "show me the quantitative data" before we become too convinced of someone's argument, and before we give them access to the microphones, and the public purse. Not qualitative (subjective) data ... quantitative (objective) data.
We need to insist on proper quantitative evidence before we take something as a given, before it is enshrined (or applied presuppositionally) in policy, and before it becomes the recipient of public largesse.
Caleb Anderson, a graduate in history, economics, psychotherapy and theology, has been an educator for over thirty years, twenty as a school principal.

3 comments:
Anyone can do 'qualitative' research (Anny said, Mary said, Joey said, Paula said...... then when these carefully chosen respondents say what you knew in advance they would say, treat it as hard fact). Most of it is grossly intellectually dishonest. But anyone can do it, including DEI wonders who would have difficulty counting on their fingers.
Quantitative research is hard. Most people think it extends no further than reporting a few percentages (which much of the commercial 'research' does) but the real thing involves the application of inferential stats which most people wouldn't even recognise as stats given the esoteric symbols all over the place. This requires intense training in the methodology and rationale of these approaches. I have been on many thesis committees and questions I often posed included "Did you check whether the data were Normally distributed before applying parametric tests?" and "Wouldn't you say that conclusion was a bit optimistic given that p<0.05 finding?" (People didn't always like me!)
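The normality question raised above can be illustrated with a small sketch. This is a hypothetical example, not anything from the comment itself: it uses sample skewness and excess kurtosis as a crude screen for non-normality before reaching for a parametric test. (A real analysis would use a formal test such as Shapiro-Wilk, e.g. scipy.stats.shapiro; the limit of 1.0 here is an arbitrary illustrative threshold.)

```python
import random

def skew_kurtosis(data):
    """Sample skewness and excess kurtosis (rough normality screen)."""
    n = len(data)
    mean = sum(data) / n
    devs = [x - mean for x in data]
    m2 = sum(d ** 2 for d in devs) / n
    m3 = sum(d ** 3 for d in devs) / n
    m4 = sum(d ** 4 for d in devs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2 - 3.0   # excess kurtosis: 0 for a Normal distribution
    return skew, kurt

def looks_roughly_normal(data, limit=1.0):
    """Crude screen: flag heavy skew or heavy tails before a parametric test."""
    skew, kurt = skew_kurtosis(data)
    return abs(skew) < limit and abs(kurt) < limit

rng = random.Random(1)
gaussian = [rng.gauss(0, 1) for _ in range(1000)]       # roughly Normal sample
skewed = [rng.expovariate(1.0) for _ in range(1000)]    # strongly right-skewed
```

On the Gaussian sample the screen passes; on the exponential sample (theoretical skewness 2) it fails, which is exactly the situation where a parametric test's assumptions would be in doubt.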
I am glad I am out of academia now. Most of the 'qualitative' bullshit is just ideology dressed up as 'research', and grossly intellectually dishonest to boot given that the data are usually massaged to fit a priori conclusions that preceded the data.
No wonder so many trendy types in academe regard quantitative research as being an evil White male construct aimed at oppressing other social groups.
Fully agree Barend, but I was puzzled by the p<0.05 comment. Did you by chance mean p>0.05, which would mean that the finding was not significant at any level? Or, did you mean that p<0.05 in that experiment is not sufficiently far from significance to be likely and that p<0.01 is preferred?
The latter, Allen.
A fun activity I used to indulge in with some students was programming the computer to come up with sets of random numbers and run tests on them (e.g. Pearson). The students' faces fell when they saw apparently 'significant' results popping up.
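The classroom exercise described above is easy to reproduce. A minimal sketch, assuming pure-Python stdlib only: correlate pairs of purely random series and count how often the result clears the conventional significance threshold. The critical value |r| ≈ 0.444 for n = 20 at two-tailed α = 0.05 is taken from standard Pearson correlation tables.

```python
import random

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [a - mx for a in x]
    dy = [b - my for b in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den

rng = random.Random(0)
N, TRIALS = 20, 2000
R_CRIT = 0.444  # critical |r| for n = 20 (df = 18), two-tailed alpha = 0.05

false_hits = 0
for _ in range(TRIALS):
    x = [rng.random() for _ in range(N)]
    y = [rng.random() for _ in range(N)]
    if abs(pearson_r(x, y)) > R_CRIT:
        false_hits += 1  # 'significant' correlation between pure noise

rate = false_hits / TRIALS  # expect roughly 5% by construction
```

By construction, about one run in twenty clears the threshold, which is exactly the commenter's point: apparently 'significant' results pop up in pure noise at the advertised rate.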