Since entering tertiary education over 30 years ago, I have been active in both, but have veered more towards the latter activity, as a glance at the Publications section of my CV will attest.
My approach to research stresses the ‘science’ in ‘social science’: I treat social science research as an exercise in applied science – scientific methodology applied to social issues. This involves collecting ‘hard’ quantitative data, mostly directly from research subjects, and processing it using analytical statistics.
At the same time, I like collecting ‘soft’ data from participants in the form of comments they make and reproducing some of those to add human interest to my articles – a few pointed comments from respondents certainly make an academic paper more readable.
But I have had to change direction over the past half dozen years. My published papers now tend to be based on documentary sources – not a human subject in sight. Where my name does appear as a co-author for an article involving direct data collection from human subjects, I make sure it comes last even if I wrote most of it.
The reason for this sudden aversion to the limelight is the imposition of so-called research ethics guidelines which my university adopted in line with common American and other Western practice. There is now a research ethics committee that vets every research proposal, including all data collection instruments to be used, and the researcher must satisfy the committee where changes are required before being able to proceed. There are rules about sample selection: we’re no longer allowed to approach would-be research subjects and ask them point-blank whether they’d like to participate – we have to send out a general invitation and wait for them to approach us.
This procedural quagmire (it has added a full semester to the programme of a Master’s student conducting empirical research for a thesis in my department) is a spin-off from developments in medical research after WW2. The use of concentration camp detainees as research subjects by German medical experimenters such as the infamous Josef Mengele during that period has been well publicised. Without excusing those practices, it should however be noted that a great deal of research was carried out on people without their consent across the world, including Western countries, until well after WW2. For instance, prisoners in some jurisdictions were routinely used as guinea pigs for the testing of new drugs, and patients were often subjected to experimental treatments without really knowing what was going on.
I can fully understand misgivings that people may have about letting enthusiastic researchers loose on vulnerable populations such as sick and disabled people, and captive populations such as prisoners. I agree entirely with the caveat that subject participation must be voluntary and that consent given to gather personal data must be genuine (i.e. the chosen subject must be in a position to give free, informed consent). The use of minors such as school kids introduces ancillary considerations that need to be addressed, such as parental consent. It was vulnerable populations that those who drafted the guidelines in the early days had in mind. But there has been ‘mission creep’ since to encompass all potential research subjects whether they are in the least ‘vulnerable’ or not.
An excellent example of this arose when I contacted colleagues in Australian university teacher education units a few years ago with a view to ascertaining how much attention is being given to legal aspects of teaching as a profession during pre-service teacher training. I particularly wanted to gauge the views of post-graduate secondary teaching diploma students towards the end of their studies. There were some positive responses to my initial approach, but then I received rather terse emails from heads of research ethics committees demanding that I have my project vetted by them and declaring the study on hold until I had done so. I refused, and that was the end of that.
Let’s get real here. A university graduate is hardly likely to be coerced by little old me into participating in a survey if s/he doesn’t want to take part. Upon receiving my invitation by mail, all s/he has to do is tick the “No thanks” option and return the invitation; failing that, I’ll get the message when no response arrives within a couple of weeks. If I were in such a person’s shoes, I would be rather offended by the suggestion that some committee had taken it upon itself to dictate the terms of our engagement.
That’s very much how I feel – offended! There is a tacit message in there that I am considered likely to be an untrustworthy person who will abuse his power to badger people into cooperating with him. While this approach is certainly relevant to situations involving groups such as hospital patients and prison inmates, it is laughable when applied to someone in my position asking professional adults whether they would be willing to take part in a study on their training. This is a matter between me and those people – no third party is required, nor is it desired. For a third party to stick its nose into our business is to treat both of us with contempt. As I told one ethics committee member here, mind your own bloody business and leave other, competent people to get on with theirs! (No, it didn’t go down well.)
The effect of the sample recruitment rules now in force is, statistically speaking, devastating. The whole point of sampling, whether of a whole population or a defined subpopulation, is to generate data that can be extrapolated to that entire unit; samples must be representative of the statistical population from which they are drawn. The scientific approach is to employ random sampling so that there is no selector bias at play, which would skew the data. But we have now been hamstrung by a recruitment procedure that just about guarantees a non-representative sample. People who respond to general invitations to participate in research are people who wish to be heard, and they often have unrepresentative backgrounds and views.
It has always been the case that samples in social science research have been self-selected to some extent. Now, however, we can safely say that they are entirely, or just about entirely, self-selected. All pretence of sample randomness and thereby representativeness is now in the realm of pie-in-the-sky.
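The statistical point can be sketched with a toy simulation. Everything in it is invented for illustration – the population, the ‘attitude scores’ and the response rule are my assumptions, not data from any study – but it shows the mechanism: a random sample tracks the population mean, while a response rule that favours those who ‘wish to be heard’ drags the volunteer sample away from it.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 people, each with an invented
# 'attitude score' (roughly normal, mean 5, SD 2). Illustrative only.
population = [random.gauss(5.0, 2.0) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Random sampling: every member has an equal chance of being chosen,
# so the sample mean is an unbiased estimate of the population mean.
random_sample = random.sample(population, 200)
random_mean = sum(random_sample) / len(random_sample)

# Self-selection: assume willingness to answer a general invitation
# rises with the strength of one's views (higher score -> more likely
# to volunteer). The volunteers then over-represent high scorers.
def accepts_invitation(score):
    return random.random() < max(0.0, min(1.0, score / 20))

volunteers = [s for s in population if accepts_invitation(s)]
volunteer_mean = sum(volunteers) / len(volunteers)

print(f"population mean:     {true_mean:.2f}")
print(f"random-sample mean:  {random_mean:.2f}")
print(f"self-selected mean:  {volunteer_mean:.2f}")
```

Under this (assumed) response rule the self-selected mean sits visibly above the population mean, while the random sample stays close to it – which is all the representativeness argument amounts to.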
This pleases the more fashionable members of the social science community who have long frowned upon the application of scientific methods to social research. They pooh-pooh quantitative methodology – some commentators appear to regard it as part of a plot hatched by us evil White males. They have even invented a term to describe the facile methods they use – ‘qualitative research’. These mainly involve reporting what subjects (usually targeted interviewees) say about the issues supposedly being investigated. So you end up with a load of unsubstantiated opinion parading as research findings. But hey, as any ‘qualitative’ pseudoresearcher will tell you, there’s no such thing as objective truth anyway – yet another evil White male construct, no doubt. Feelings as opposed to facts are the arbiters of truth for them, backed up by ideologies such as ‘social constructivism’. The ‘science’ in ‘social science’ is rapidly going out of the window.
Ideology is what all this ‘ethics’ crap is about – it has nothing to do with ethics as I understand the term. ‘Ethics’ has become a smokescreen for ideological vetting of research proposals and keeping findings that may not square with PC doctrine out of the academic literature.
In my next life I may well be an academic again but the way things are going it won’t be in social science. I know where I’m not wanted and, more pointedly, I know what company I don’t want to keep.
Barend Vlaardingerbroek BA, BSc, BEdSt, PGDipLaws, MAppSc, PhD is an associate professor of education at the American University of Beirut and is a regular commentator on social and political issues. Feedback welcome at firstname.lastname@example.org