We need an urgent review of the way in which literacy and numeracy are taught in primary schools.
The New Zealand Initiative recently commented on a pilot of new reading, writing and numeracy assessments for NCEA. Just a third of students participating in the pilot met the standard for writing. About two-thirds met each of the reading and numeracy standards.
Commentators from the education sector were surprisingly relaxed about the findings.
The Ministry of Education asserted that the trial was “small-scale and not representative”. In other words, they’re saying that their own study didn’t include enough students to yield reliable results. They’re also saying that the sample of schools involved wasn’t a good snapshot of New Zealand’s schools in general.
Let’s take these claims one at a time.
The numeracy assessment included 1,055 students. That’s about the same number as in a standard political poll. With a sample that size, we can be confident that, if the assessment were run nationally, between 62% and 68% of students would meet the standard.
The other two assessments were completed by fewer students – 590 in reading and 554 in writing. Even so, that’s enough to give us fairly reliable estimates. Based on the 67% success rate in the reading assessment, between 64% and 71% of students nationally would be expected to succeed. For writing, in which just 35% of students passed the assessment, we’d expect between 31% and 39% nationally to succeed.
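As a rough check on these margins of error, a standard normal-approximation confidence interval for a sample proportion reproduces the ranges quoted above. This is an illustrative sketch, not the Ministry’s or the Initiative’s own calculation: the numeracy pass rate of 65% is inferred here from the midpoint of the quoted 62–68% range, and exact endpoints depend on rounding and the interval method used.

```python
import math

def proportion_ci(p, n, z=1.96):
    """95% normal-approximation confidence interval for a sample proportion.

    p: observed pass rate in the pilot, n: number of students assessed,
    z: critical value (1.96 for a 95% interval).
    """
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p - z * se, p + z * se

# Pilot pass rates and sample sizes as reported in the article
# (65% for numeracy is an assumed midpoint, not a quoted figure).
for label, p, n in [("numeracy", 0.65, 1055),
                    ("reading", 0.67, 590),
                    ("writing", 0.35, 554)]:
    lo, hi = proportion_ci(p, n)
    print(f"{label}: {lo:.0%} to {hi:.0%}")
```

The intervals come out at roughly 62–68% for numeracy, 63–71% for reading and 31–39% for writing, in line with the figures in the text.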
The argument that there’s nothing to worry about because the pilot was “small scale” doesn’t wash. Even performance at the upper ends of these margins of error would signal dismal results if the standards were implemented nationally.
That’s assuming, however, that the samples were not biased. This brings us to the Ministry’s second objection to the Initiative’s commentary – their claim that the sample of participating schools was not representative.
Participating schools spanned the decile range. The sample included schools in both urban and rural locations. The Ministry’s report does note that more than a third of the numeracy results came from one high decile school. Students at high decile schools generally do better in assessments than students at low decile schools.
Of participating students, 18% were Māori, 5% Pacific and 67% European. Comparing with the Ministry’s national data on student ethnicity, Māori and Pacific students were somewhat under-represented in the pilot study. European students were similarly over-represented. It’s no secret that, regrettably, Māori and Pacific students do less well on average in our education system than European students.
We might conclude that the Ministry was right – the participants in the study were not perfectly representative of the national picture. However, the ethnicity profile of the sample suggests that, if the assessments were implemented nationally, results would be even worse than they were in the pilot.
Pip Tinning, Vice President of the Association of Teachers of English, pointed out that most of the pilot participants were in Years 9 and 10. Students don’t usually commence gaining credits towards NCEA until Year 11. This is the strongest argument that the pilot assessments do not signal as great a catastrophe as the Initiative says they do.
Even so, 90% of the participating students were in Year 10. Research commissioned by the Tertiary Education Commission, published in 2014, suggests that students don’t typically make much progress in literacy or numeracy between Years 8 and 11.
Vaughan Couillault, President of the Secondary Principals Association, came up with an imaginative objection to the Initiative’s commentary. He likened the pilot to road testing a car, as opposed to testing the ability of the driver. He meant that the pilot was about making sure the assessments were of good quality rather than actually testing the students’ literacy and numeracy. His implication is that the pilot did not yield valid results for the participating students.
Couillault is correct that the primary purpose of the pilot was to evaluate the assessment process. However, his comparison with road testing is a false analogy. When cars are road tested, experienced drivers are used. Furthermore, cars are not designed to assess the skills of drivers. The pilot showed that the assessments are valid and reliable, so the results for the participating students are too.
Year 11 students may well do a little better than Year 10 students. It would, however, be most unwise to rely on one year of additional schooling to make a decisive difference to the pilot results. It’s also possible that some students didn’t take the assessment seriously, given that it was a pilot and didn’t have personal consequences for them. But even if all of the success rates came up by ten percentage points, we’d still have a quarter of our young people failing in reading and numeracy, and more than half failing in writing.
What is most disappointing is not the pilot results themselves but that influential spokespeople for leading educational organisations seem so determined to explain them away. It’s not as if there’s no corroborating evidence. Results from international tests and reports from independent education researchers point to ongoing decline in New Zealand's literacy and numeracy education.
Rather than flailing about trying to defend the indefensible, we need an urgent review of the way in which literacy and numeracy are taught in primary schools. Until our educational ostriches pull their heads out of the sand, our young people will continue to be sold short.
Dr Michael Johnston has held academic positions at Victoria University of Wellington for the past ten years. He holds a PhD in Cognitive Psychology from the University of Melbourne. This article was originally published by ThePlatform.kiwi and is published here with kind permission.
3 comments:
It would appear that all the commenters are just trying to conceal and justify incompetent teaching. Education has been going downhill for decades now.
As can be seen with the whole education ministry, the incompetence starts at the top with the minister. Reading transcripts of the PM’s statements reveals a distinct lack of literacy. In fact it appears to be endemic in politics, as displayed during question time. I notice that David Seymour has to read very slowly and precisely when questioning the PM so that she understands.
More worrying than the actual results of the sample is that many schools outright refused to participate...
A major challenge in interpreting results is the lack of random sampling - this would bias the outcomes irrespective of sample size. The only way to ensure (and enforce) random sampling would be to disallow opting out. The idea is to create an accurate statistic - not to punish the student or school.
Dr. Johnston is right. We have a serious problem if education leaders in relevant Government agencies are trying to explain poor performance away in this manner. However, I am not surprised, having worked in education research and statistics, lecturing and academic management for some years.
Clearly, there are issues in teaching literacy and numeracy at primary level, but possibly there are other systemic issues too.
Once a respected profession, teaching is no longer as attractive as it once was. This is well known. And we may be right to have concerns about the quality of literacy and numeracy teaching at primary level.
At secondary level, NCEA involves standards-based assessment, in my view a drive towards fair assessment. It does not drive curriculum or, at least, is not supposed to drive curriculum. However, NCEA has given rise to over-assessment, and criteria (standards) can be applied very brutally. Often students submit objectively very fine work but attain a low grade because they have failed to guess how an evaluator or moderator interprets a particular criterion. Too often, the result is deflation for the student. Teachers get caught out too, and I admit to having failed to guess correctly on several occasions when submitting samples for moderation – much to my embarrassment.
How does our secondary curriculum compare with those of other OECD nations? It appears to me to lag behind those of certain nations quite substantially.
People in leadership positions at relevant Government agencies should be highly trained. Most of them are, but I know a few who are supposed to manage statistics and research teams yet have little or no subject-matter expertise. No background in education, teaching, research or statistics! And are they good people managers? Oh dear! The less said, the better!
How does this stuff happen? And what about the quality of analysis and policy-making if we have people like them making the calls? Explaining underperformance away is no surprise. And the same agencies are trying to pitch indigenous knowledge as an equal to science. The situation is truly frightening!
In a recent Breaking Views article I stated that relevant agencies are promoting the use of Māori language and are working actively towards mātauranga Māori (Māori traditional knowledge) becoming valued (in my opinion, long overdue!), but possibly also being taught within our curricula as equal to “western science”. I repeat what I said in my earlier article: respect for all minorities and their world views is most desirable and strictly necessary, but elevating any indigenous or traditional knowledge to the level of science within primary and secondary education is not sensible in the twenty-first century, especially when our educational performance appears to be dropping relative to that of other nations.
Education is suffering at present and we need the intervention of genuine and well-intentioned experts such as Dr. Johnston to begin a process of repair. Please - no more managers and decision-makers promoted to mismanage something as critical as education simply because they trumpet political correctness and shout louder than the real experts!
David Lillis