What Students Really Mean by Teaching Excellence

Updated Apr 04, 2026

Universities talk constantly about teaching excellence, but too often assess it without first asking students what excellent teaching looks like in practice. A new study published in Assessment & Evaluation in Higher Education by Claudine Fox, Martyn Parker, and Emily Locker at the University of Warwick suggests students define it through three connected ideas: meaningful support, student-centric teaching, and opportunities for growth. For institutions shaping TEF narratives, teaching awards, or module evaluations, that is far more useful than a generic satisfaction score alone. At Student Voice Analytics, we believe the most credible evidence of teaching quality comes from students in their own words.

What the study asked, and why it matters now

Despite substantial research on teaching quality, much of the existing evidence draws on staff perspectives, retrospective alumni accounts, or small qualitative samples. Fox, Parker, and Locker argue that the student voice has not been adequately incorporated into our understanding of excellence. Their mixed-methods study, combining an online survey of 79 undergraduate and postgraduate taught students with follow-up focus groups involving 11 participants, focuses solely on current students at a UK research-intensive university. That design choice matters because current students can speak to their lived experience, rather than relying on memories reshaped by time.

The timing is significant. In the wake of the pandemic, universities have adopted new pedagogies, blended delivery models, and digital tools at pace. What counted as excellent teaching in 2019 may not map neatly onto the post-COVID landscape. For institutions navigating TEF, NSS, and internal evaluation processes, understanding excellence through today's student perspective makes teaching quality more credible and more actionable.

Three pillars of excellence: what students told the researchers

Analysis of the focus group data produced three overarching themes that give universities a more practical definition of excellent learning and teaching:

1. Balanced, constructive, and supportive. Students said excellent teaching supports both their academic development and their mental health and wellbeing. This is not a peripheral concern: survey correlations showed that effective, empowering, and inclusive learning environments were associated with improved wellbeing. Students want feedback that is constructive and honest, not simply affirming, and they value lecturers who recognise when someone is struggling beyond the academic context. The takeaway is clear: support is part of teaching quality, not an optional extra.

2. Student-centric. Excellence, in students' eyes, means being placed at the centre of the learning and teaching environment. This includes responsive communication, teaching adapted to the needs of the cohort, and a genuine dialogue in which student feedback is sought, heard, and acted on through a visible feedback loop. Students are not asking for consumer-style accommodation; they are asking to be treated as active participants in their own education. For institutions, that means student voice matters most when it changes practice, not when it is merely collected.

3. Growth and development. Students described excellent teaching as creating opportunities for growth, developing not just subject knowledge but transferable skills, critical thinking, and professional readiness. This dimension varied subtly between undergraduates and postgraduate taught students, suggesting that excellent practice needs to be sensitive to educational stage. A single institutional definition of excellence may be too blunt if it ignores those differences.

The study concludes that learning and teaching is perceived as excellent when students "feel supported in both their academic work and in their mental health and wellbeing, where students are placed at the centre of the learning and teaching environment and are provided with opportunity for growth and development."

Implications for teaching awards and institutional evaluations

One of the study's most practical contributions is its relevance to teaching award schemes. The authors note that such awards have been criticised for lacking clarity about what they recognise and reward. If award criteria do not reflect how students define excellence, awards risk rewarding visibility rather than value. The findings suggest criteria should encompass the breadth of factors students identify: balanced support, student-centricity, and developmental opportunity, rather than defaulting to narrow metrics such as lecture ratings or module pass rates.

For institutions using student evaluation surveys, the study offers an equally useful warning. Standard Likert-scale questions may not capture the richness of what students mean when they rate a module or lecturer highly, which is one reason teaching evaluation surveys work better when students and staff help design them. A student who ticks "agree" for "teaching was excellent" may be thinking about the lecturer's pastoral care, their responsiveness to feedback, or the way the module built towards a clear developmental goal. Without systematic analysis of free-text comments, institutions cannot know which dimension of excellence is driving satisfaction or dissatisfaction. That makes comment analysis essential if teams want to act on scores rather than simply report them.

What this means for student voice practice

The findings reinforce a principle that is central to our work at Student Voice Analytics: quantitative survey scores alone are not enough. When students write about their experience in free-text comments, they express the nuances that sit behind the numbers. They tell you whether "excellent" meant inspirational lectures, timely feedback, or a lecturer who noticed they were struggling. By categorising and analysing these comments at scale, institutions can move from knowing that a module scored 4.2 out of 5 to understanding what made it excellent, and what would make it even better.
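To make "categorising and analysing these comments at scale" concrete, here is a minimal illustrative sketch. The three theme labels follow the study's pillars, but the keyword lists and example comments are hypothetical placeholders; real systems would use a richer taxonomy and trained classifiers rather than keyword matching.

```python
# Minimal sketch: tag free-text comments against the study's three pillars.
# Keyword lists and example comments are illustrative, not drawn from the paper.
THEME_KEYWORDS = {
    "support": ["support", "wellbeing", "feedback", "struggling", "approachable"],
    "student-centric": ["listened", "responsive", "adapted", "acted on", "dialogue"],
    "growth": ["skills", "develop", "career", "critical thinking", "confidence"],
}

def tag_comment(comment: str) -> list[str]:
    """Return every theme whose keywords appear in the comment (case-insensitive)."""
    text = comment.lower()
    return [theme for theme, words in THEME_KEYWORDS.items()
            if any(word in text for word in words)]

comments = [
    "The lecturer noticed I was struggling and offered extra support.",
    "Our feedback was genuinely acted on mid-term.",
    "The module built real critical thinking skills.",
]
for comment in comments:
    print(tag_comment(comment))
```

Even this toy version shows why comment analysis adds information a single score cannot: one comment can carry more than one theme, so the same "4.2 out of 5" can decompose very differently across modules.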

The subtle differences between undergraduate and postgraduate taught perceptions also matter. A one-size-fits-all approach to evaluating teaching quality across an institution risks missing meaningful variation. PGT students, for instance, may place greater emphasis on professional development and research-informed teaching, while undergraduates may prioritise engagement, clarity, and wellbeing support. Institutions that can segment and analyse student feedback by level and cohort are better placed to improve teaching in ways students actually notice.
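Segmenting by level and cohort can be sketched in the same spirit. Assuming comments have already been theme-tagged (as above), counting theme mentions per study level makes the UG/PGT contrast visible; the records below are invented placeholders, not study data.

```python
# Sketch: count theme mentions by study level so undergraduate (UG) and
# postgraduate taught (PGT) feedback can be compared side by side.
# The feedback records are illustrative placeholders.
from collections import Counter, defaultdict

feedback = [
    {"level": "UG", "themes": ["support", "student-centric"]},
    {"level": "UG", "themes": ["support"]},
    {"level": "PGT", "themes": ["growth"]},
    {"level": "PGT", "themes": ["growth", "student-centric"]},
]

by_level: dict[str, Counter] = defaultdict(Counter)
for record in feedback:
    # Each tagged theme in a comment increments that level's running count.
    by_level[record["level"]].update(record["themes"])

for level, counts in sorted(by_level.items()):
    print(level, dict(counts))
```

The same grouping key could be discipline, campus, or mode of study; the point is that aggregating at the institutional level alone would hide exactly the UG/PGT differences the study highlights.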

FAQ

Q: How can universities use these findings to redesign their teaching award schemes?

A: The study suggests that teaching awards should be assessed against the three dimensions students identify: balanced and constructive support (including wellbeing); student-centricity; and opportunities for growth and development. Award criteria that focus narrowly on student satisfaction scores or peer observation alone may miss much of what students value. Institutions could incorporate evidence from free-text student feedback, analysed thematically, to assess nominees against these broader criteria. This would also help address concerns about bias in nomination-based awards, where certain groups of staff may be systematically under-nominated.

Q: Does the study account for differences across disciplines, and how might subject area affect perceptions of excellence?

A: The study was conducted within a single institution, the University of Warwick, and did not systematically compare perceptions across disciplines; the authors acknowledge this as a limitation. However, the three themes identified (support, student-centricity, and growth) are broad enough to apply across disciplines, even if their specific expression varies. For example, "growth and development" might manifest as clinical competence in a medical programme or creative confidence in an arts programme. Institutions can explore disciplinary variation by analysing free-text comments at the subject level, which is where large-scale text analysis becomes particularly valuable.

Q: How does this research connect to the broader debate about metrics like the NSS and TEF?

A: The NSS and TEF both attempt to capture teaching quality, but they rely heavily on predefined questions and quantitative scales. This study demonstrates that students' understanding of excellence is richer and more multidimensional than any single survey question can capture. A student might rate overall satisfaction highly for reasons that are entirely invisible in the structured data, such as feeling personally supported during a difficult period. Institutions that supplement their NSS and internal evaluation data with systematic analysis of free-text comments can build a more complete and actionable picture of what drives student perceptions of quality, and where targeted improvements will have the greatest impact.

References

Fox, C., Parker, M. and Locker, E. "The meaning of excellence in learning and teaching to students", Assessment & Evaluation in Higher Education. DOI: 10.1080/02602938.2025.2588681

Request a walkthrough

Book a free Student Voice Analytics demo

See all-comment coverage, sector benchmarks, and reporting designed for OfS quality and NSS requirements.

  • All-comment coverage with HE-tuned taxonomy and sentiment.
  • Versioned outputs with TEF-ready reporting.
  • Benchmarks and BI-ready exports for boards and Senate.
Prefer email? info@studentvoice.ai

UK-hosted · No public LLM APIs · Same-day turnaround


© Student Voice Systems Limited, All rights reserved.