With more physics students than ever before getting top grades, Peter Main calls for new ways of measuring university performance to avoid “grade inflation”
Graduation ceremonies are a wonderful part of the academic calendar, where students celebrate their hard-won achievements. And these joyful events have become even happier over the last decade. In 2011 about half (51%) of graduates across all subjects at UK universities achieved an upper second-class degree, while a sixth (16%) were awarded a first-class degree. Just seven years later, 79% of all students were getting these top two degrees, with almost a third (29%) being given a first.
The proportion of students receiving the top grade, in other words, had nearly doubled – a spectacular increase by any standards. But we should hardly be surprised. The alleged quality of a university’s provision is these days measured by student satisfaction and employability – both of which can be enhanced by inflating the number of top grades. The pressure is only in one direction.
Degree classifications matter. Many recruiters, for example, consider only applicants who have “good” degrees. Some professions offer higher starting salaries to graduates with better degrees, while the ability to secure grants for PhD programmes usually depends on degree class. The rapid increase in top grades therefore raises three crucial issues. What does a degree classification mean? How do we compare standards between different subjects and institutions? And does the problem need fixing?
Most universities have descriptors to identify, for example, a first-class performance. While they are useful in telling students what qualities are likely to lead to high marks, these descriptors are far from absolute. Some universities, for example, use terms such as “excellent”, “outstanding” or “very good” to distinguish between grades, without explaining how they differ.
More importantly, degrees are typically awarded based on “norm referencing” not “criterion referencing”. In other words, each university department sets tasks and exam papers to suit its own students, and marks accordingly. Despite universities pretending otherwise, there is no common currency to degree awards – the value of a grade depends on the subject and the university. Put bluntly, it’s easier to get a first at some universities than at others.
Unfortunately there are no effective ways to compare standards between institutions. Within a given subject, such as physics, neither external accreditation (as happens in the UK and Ireland through the Institute of Physics) nor the system of external examiners leads to a common standard. And I am not even sure how to begin to compare standards between subjects.
So does degree inflation need fixing? Before we answer that, we need to ask why it’s happening. It would be lovely to think that undergraduates have simply got better, but that is hardly likely in all universities across all subjects. I also doubt that teaching has improved dramatically over such a short period. Instead, I believe grade inflation is mainly being driven by external arbiters of quality, such as the UK’s Teaching Excellence Framework (TEF) and university league tables.
Departments don’t consciously set out to award higher grades, but these systems tend to favour high marks. In the case of the TEF, its decisions are informed by the employability of graduates, student satisfaction and the proportion of students who progress from the first year of a degree to the second. As the TEF’s definition of employability includes how many students go on to postgraduate study (rather than just into work), the simplest way for a university to improve its score is to give more students good degrees. Monitoring progression from year one is also an invitation to be more lenient, while student satisfaction will not be harmed by awarding higher marks either.
There are two other inflationary factors. First, some league tables use the percentage of first-class degrees as a measure of quality. Second, and more subtly, it is increasingly a requirement for lecturers to provide a full set of notes for their courses together with worked answers for any problems set. Given that most formal physics exams test little more than rote learning, this arrangement makes it easier for students to do well.
Setting a new standard
Something needs to change. The arbitrary lines (first, upper second, etc) drawn in a continuum of performance make no sense and reinforce the false notion of a universal standard. But even a switch to, say, a grade-point average does not address the comparability issue. What’s more, direct comparisons between institutions and, particularly, subjects make no sense because programmes are trying to do different things.
A physics department at one university might be focusing on, say, mathematical physics, while another adopts a more practical approach. In both cases, departments will assess at a level consistent with the students they have, essentially norm referencing. Their grades are not, and cannot be, directly comparable. We also need to ensure that quality assurance does not apply inflationary pressure but recognizes that each programme is unique.
I would therefore like to see all programmes state what they are trying to achieve, indicating the type of students they aim to attract and the employment destinations of their graduates. A department could succeed against an unchallenging target, but potential students would be aware of that and could make appropriate judgements. Alternatively, if a department asserts high ambition – for example, claiming to take students without A-levels and produce graduates with high salaries – it had better be able to demonstrate that.
If we want to prevent grade inflation, we must stop pretending there is a common currency of grades and start measuring universities against what they are trying to achieve. Perhaps then we can shift the emphasis of a degree back towards education, rather than the mere acquisition of a qualification.