
In defence of the new matric

Dr Nick Taylor writes that the criticism of the National Senior Certificate exam results has been ill-informed

The shrill tone of much of the comment that greets the annual publication of the National Senior Certificate results is perhaps understandable, given the importance of this event, not only to the future life prospects of candidates, but indeed to the development of the country. However, the ill-informed nature of much of this commentary is inexcusable, since more than enough information is available to sustain the debate at a much higher level than is currently the case. There is much to talk about on this very complex question, but in the space available I will confine myself to the three issues that seem to concern most critics: the numbers passing Maths, the standard of those passes, and the pass rate of the NSC as a whole.

In 2004 the Centre for Development and Enterprise (CDE) published a report on the state of Maths and Science in the nation's high schools. Based on an analysis of the aggregate marks of individual candidates conducted by Professor Charles Simkins, the report concluded that, while higher grade (HG) Maths passes had remained pretty static at between 20 000 and 25 000 for a number of years, there were enough students in the system with the ability to pass at this level to at least double that figure.

Two factors were inhibiting this achievement, effectively preventing tens of thousands of candidates from gaining entry to engineering and all the other high-skill career options which the country so desperately needs. The first constriction on the number of HG passes was that many schools simply did not offer Maths at the higher grade, if they offered it at all. The second problem was that many schools which did offer HG Maths persuaded the majority of students to take the subject at the easier standard grade (SG) level, even though large numbers of them passed with A, B or C symbols (60% or higher). The point is well illustrated in the table below, which supports the CDE hypothesis that the number of HG Maths passes could easily be doubled, if only all the students with the ability to do so were given the opportunity.

Table 1: HG-equivalent Maths passes, 2006-08

Year | SG: A | SG: B | SG: C  | SG: Total A-C | Higher Grade | TOTAL
2006 | 6 616 | 6 823 | 12 590 | 26 029        | 25 217       | 51 246
2007 | 7 458 | 7 488 | 13 944 | 28 890        | 25 415       | 54 305
2008 |   -   |   -   |   -    |   -           |   -          | 63 034

Note: under the new NSC, first examined in 2008, the SG/HG distinction fell away; the 2008 figure is the number of candidates achieving the HG-equivalent level (see below).
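A rough check of the doubling claim (my own back-of-envelope reading of the table, treating SG A-C passes as candidates who could plausibly have passed at HG): adding the 2006 SG A-C passes to the actual HG passes roughly doubles the HG-only figure:

$$25\,217\ \text{(HG)} + 26\,029\ \text{(SG A--C)} = 51\,246 \approx 2 \times 25\,217$$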

In order to achieve this aim, the new FET curriculum, culminating in the first NSC exam in 2008, simply changed the choices available to candidates, with all students obliged to register for either Maths or Maths Literacy. The result: 287 487 candidates wrote Maths, compared with the 41 000 and 306 000 who sat the HG and SG exams, respectively, in 2007. (A further 249 784 wrote Maths Literacy in 2008.)
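As a quick tally (assuming, as seems safe, that no candidate wrote both subjects), the compulsory registration brought the total number writing some form of Maths well above the 2007 figure:

$$287\,487 + 249\,784 = 537\,271 \quad \text{vs.} \quad 41\,000 + 306\,000 = 347\,000 \ \text{in 2007}$$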

Concerned at the prospect of many students with low-quality Maths passes knocking at the academy doors, the universities declared a 50% pass in Maths in 2008 to be equivalent to a HG pass in 2007, a level achieved by 63 034 candidates in 2008. 'Ah!' cry the critics, 'this could only have been achieved with a massive lowering of standards.' From another perspective, however, the new NSC curriculum presents an elegant administrative solution to what was an obdurate systemic blockage.

Perhaps the critics have something of a point: an increase to 63 034 candidates achieving the equivalent of HG Maths, compared with the 54 305 predicted in Table 1, may be a little high, and the universities may want to adjust their benchmark of Maths HG equivalence upward. A 26% increase in university exemptions in 2008 (from 85 000 to 107 000) may also be on the high side, although an increase of this order was also achieved in 2007.
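For what it is worth, the 26% figure checks out against the rounded totals quoted above:

$$\frac{107\,000 - 85\,000}{85\,000} \approx 25.9\% \approx 26\%$$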

Let's examine the evidence relating to standards more closely. The first question to ask here is: what is the best measure of standards? Regarding the NSC, the public debate in SA is firmly fixed on the pass rate as the primary indicator of quality. This view was encouraged by the DOE in the years 1994-2003, when the department set the pass rate as its own indicator of success. Pass rates declined steadily over the period 1994-1999, dropping from 58% in 1994 to a low of 49% in 1999. The pass rate is a ratio of passes to candidates, and the falling pass rate over this period is shown by the widening gap between the numbers of passes and candidates in the following graph.

[Graph: numbers of SC candidates and passes, 1994-2008*]

*Estimate for 2008 passes, which will only be finalised towards the end of March, once all supplementary exams have been completed and processed.
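Spelled out, the indicator in question is simply (restating the definition just given):

$$\text{pass rate} = \frac{\text{number of passes}}{\text{number of candidates}} \times 100\%$$

so that the 1994 figure of 58%, for example, means 58 passes for every 100 candidates.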

One of the first steps taken by Kader Asmal on his appointment as Minister in 1999 was to launch a campaign to improve SC pass rates. The effects were immediate, with rates rising annually to 73% in 2003. The Minister and the DOE were triumphant, declaring victory for their policies and claiming that schools were now operating more effectively. However, deeper analysis by Umalusi and others since 2003 strongly indicates that the bulk of these effects were achieved by manipulating the results by means of four measures: eliminating high risk candidates (notice the rapid drop in the number of candidates from 1999 to 2003), encouraging candidates to register at the easier standard grade level, lowering the standard of examination questions, and using political arguments rather than statistical techniques to raise raw scores during the moderation process. It is likely that, at best, only a small fraction of the rise in SC results can be attributed to improved school quality.

Following the release of these findings in 2003, Naledi Pandor's Ministry and the DOE, in cooperation with Umalusi, embarked on a systematic process of improving curriculum and assessment standards. The effects of these efforts are shown in the trends in the SC graph above over the period 2004-2008. During this time the pass rate declined (giving rise to much public condemnation of the DOE), while the number of candidates rose (showing improved throughput in the school system), the number of passes increased (providing more matriculants with better life chances), and the standard of the papers was improved through the inclusion of greater numbers of questions demanding higher cognitive skills from candidates, and the addition of a paper in all languages which required extended writing.

There is obviously still a long way to go before the system, at all levels, reaches acceptable quality benchmarks, but a marginal drop in pass rates would seem a small price to pay for the three very positive developments of the last five years. Yet public opinion, fixed on one (unsuitable) indicator, remains unimpressed.

Umalusi shares the view of the DOE that the most valid indicator of school quality is the cognitive standard of the curriculum and assessment system. This is reflected in the research conducted by Umalusi in 2008, which looked at the comparative levels of cognitive demand of the 2007 and 2008 curricula and examination papers in certain key subjects, including English Second Language (taken by over 80% of candidates), Maths and Maths Literacy. The Maths Literacy report concluded that the paper could have been pitched at a higher level. After considering all the qualitative reports, the Umalusi Statistics Committee adjusted Maths Literacy raw scores downward before release, and has recommended that a greater number of more challenging questions be included in the final paper in future.

Regarding Maths, the jury is divided, which is probably a sign that the standard of the 2008 paper was of the same order as that of 2007. There is some evidence that 2008 did lack challenge at the top of the range, reflected in a large increase in the number of distinctions. This is likely, in light of the fact that the third Maths paper, which contained the most difficult topics, was optional in 2008, and only 11 174 candidates (4%) wrote it. However, it would seem that the body of papers 1 and 2 was set at about the right level. This is an important area of debate, on which we have not heard the last word, as experts continue to apply their minds. My own view is that the DOE should incorporate the third Maths paper into the NSC qualification as soon as is administratively possible, as this is the most obvious way of improving the standard of Maths at the top end.
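The 4% figure follows directly from the candidate numbers quoted earlier (assuming the denominator is the full 287 487 who wrote Maths):

$$\frac{11\,174}{287\,487} \approx 3.9\% \approx 4\%$$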

Three kinds of indicator are available for assessing the health of the FET system: quality, measured by the level of cognitive challenge of the examination papers; quantity, measured by the numbers passing; and efficiency, measured by the pass rate. All three are important, but it is a tall order to improve them all simultaneously, and, since the overriding problem of South African schooling is quality, this must take precedence.

From the foregoing discussion it should be clear that pass rates, on their own, constitute a rather poor indicator: rising pass rates masked a distinct decline in both quality and quantity in the years 1999-2003, while falling pass rates in the last five years have been accompanied by clear gains in the quality of the curriculum and exams, and in the numbers of candidates passing. There is no doubt that government deserves strong criticism for administrative errors, such as the failure by a number of provinces to get their ducks in a row in 2008, thus delaying the release of close to 10% of results, and for many other failings, such as the lack of political will to fire incompetent principals and to instil a stronger work ethic in provinces, districts, schools and classrooms.

But it is counterproductive to make sweeping condemnations of failure on the question of the NSC curriculum and exams when there is evidence to indicate that much progress has been made in this area. A more sophisticated public debate can play an important role in effecting further improvements here, but that would require a more informed and rigorous analysis of the available data on the part of the critics.

Dr Taylor is CEO of JET Education Services and a member of the Umalusi Statistics Committee. He writes in his personal capacity.
