Halo effects in the student voice: unwanted correlations
By Student Voice
A halo effect is a significant correlation between two questionnaire items that should be unrelated.
At Student Voice, we are always looking for ways to improve how we capture and use student comments. A key issue is the validity of certain survey instruments, and in particular how the results of quantitative scale questions are used.
This paper analyses correlations in student evaluations of teaching (SETs). The authors used a novel identification procedure to assess the presence of halo effects and found that, despite the distortion halo effects introduce, responses to the affected questions remained informative.
Overall, the results of this experiment suggest that the distortion in evaluation questionnaires caused by halo effects need not be a concern for higher education institutions.
Halo effects occur when all responses to a questionnaire are highly correlated and reveal little more than an overall evaluation. They are difficult to identify in SETs because some correlation is expected between responses to different items. Shortening the evaluation could reduce cross-correlation, limit survey fatigue and improve response rates, but it would not help to identify halo effects.
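As a concrete illustration, here is a minimal sketch (with invented ratings and hypothetical item names) of how uniformly high inter-item correlations, the classic halo symptom, can be checked:

```python
import numpy as np
import pandas as pd

# Hypothetical 1-5 Likert responses to four SET items (invented data).
ratings = pd.DataFrame({
    "clarity":      [5, 4, 2, 5, 3, 4],
    "organisation": [5, 4, 2, 4, 3, 4],
    "feedback":     [4, 4, 2, 5, 3, 5],
    "overall":      [5, 4, 2, 5, 3, 4],
})

corr = ratings.corr()                  # pairwise Pearson correlations
mask = ~np.eye(len(corr), dtype=bool)  # ignore the diagonal of 1s
min_r = corr.values[mask].min()

print(corr.round(2))
# If even the lowest inter-item correlation is very high, the items add
# little beyond a single overall evaluation: the classic halo symptom.
print(f"minimum inter-item correlation: {min_r:.2f}")
```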
One method of identifying halo effects is to introduce a variable that should not correlate with the other variables (the questions in the survey), for example an unrelated question such as "was the lecture room large enough?". Preliminary evidence suggests that students who respond positively to the teaching items are also more likely to agree with such unrelated statements.
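A sketch of that identification idea, again on invented data, with a hypothetical room-size item alongside the overall teaching rating:

```python
import pandas as pd
from scipy.stats import pearsonr

# Invented responses: an overall teaching rating plus one question that
# should be unrelated to the instructor (room size).
df = pd.DataFrame({
    "overall_teaching":  [5, 4, 2, 5, 3, 4, 1, 5],
    "room_large_enough": [5, 4, 2, 4, 3, 4, 2, 5],
})

r, p = pearsonr(df["overall_teaching"], df["room_large_enough"])
print(f"r = {r:.2f}, p = {p:.3f}")

# Room size does not depend on the instructor, so a strong correlation
# here cannot reflect a true underlying relationship: it suggests an
# overall impression is bleeding into unrelated answers.
```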
The study looked at the effect of halo bias on the true correlations between attributes at an Italian university. Variance in the scores given to different attributes was taken as an indication of the accuracy of the instrument, and any correlation not justified by a true underlying correlation was treated as a form of halo effect.
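Put in notation (ours, not necessarily the paper's): if r_obs(i, j) is the observed correlation between items i and j, and r_true(i, j) is the true underlying correlation, then the halo component is the excess

    halo(i, j) = r_obs(i, j) − r_true(i, j)

where r_true must be estimated from information outside the questionnaire itself.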
Correlations in SETs are often used to measure the strength of the relationship between pairs of items. High correlation between SET items can be problematic for validity. However, it also suggests that long (and potentially burdensome or costly) questionnaires may be unnecessary, and that it may be better to ask a much smaller number of questions.
In summary, SETs are not a perfect measure of teaching quality. They have flaws and shortcomings that affect the validity of their use in determining a teacher's effectiveness. However, they can be used by educators to help find areas for improvement.
Halo effects are a drawback of SETs because they reduce the reliability of within-teacher distinctions by flattening the overall profile of ratings. On the other hand, halo effects can magnify differences in the mean ratings received by different teachers. The paper states that there is no reason to believe the SETs contain fundamental problems that would invalidate the data set. Block rating (giving the same score to every item) is more common in some other studies, but this study found only weak evidence that students block rate.
For the university as a whole, there is little evidence of extreme halo effects. However, responses to questions are highly correlated with each other. This is corroborative evidence for a halo effect, but to really understand whether one exists we need to bring in independent information (in this case about the rooms themselves) and map it to a question in the SET.
At the SET level, the independent information about rooms has more explanatory power than the halo effect. SETs usually have a diagnostic goal as well as a summative one, and if they are meant to provide specific feedback to improve teaching performance, halo effects are a problem.
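One way to operationalise that comparison is to regress the room-size item on both the real room data and the overall rating. The sketch below uses invented, simulated data and ordinary least squares; the variable names, and OLS itself, are our assumptions rather than the paper's identification procedure:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Invented SET-level data: real seats per enrolled student (independent
# information about the room) and the mean overall teaching rating.
seats_per_student = rng.uniform(0.8, 2.0, n)
overall_rating = rng.uniform(1.0, 5.0, n)

# Simulate room-size responses driven mostly by the real room, with a
# smaller halo contribution from the overall evaluation, plus noise.
room_item = (1.5 * seats_per_student + 0.3 * overall_rating
             + rng.normal(0.0, 0.5, n))

df = pd.DataFrame({
    "room_item": room_item,
    "seats_per_student": seats_per_student,
    "overall_rating": overall_rating,
})

fit = smf.ols("room_item ~ seats_per_student + overall_rating", data=df).fit()
print(fit.params.round(2))   # which regressor carries more weight?
print(f"R-squared: {fit.rsquared:.2f}")

# If the coefficient on seats_per_student dominates, the independent
# room information explains the item better than the halo term does.
```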
Summary:
- Halo effects are present in SETs
- Evaluations remain informative of the various aspects being judged, despite the distortion
- No evidence for a need to design evaluations differently because of halo effects
FAQ
Q: How can Student Voice initiatives more effectively incorporate text analysis to detect and mitigate halo effects in open-ended student feedback?
A: Student Voice initiatives can use text analysis to explore open-ended feedback from students by applying natural language processing (NLP) techniques. These techniques can help identify recurring themes, sentiments, and patterns that might not be evident through traditional quantitative analysis. For instance, text analysis can uncover whether positive or negative comments about teaching are influenced by unrelated factors, such as the physical classroom environment. This deeper understanding can help educators and institutions discern genuine feedback on teaching effectiveness from perceptions influenced by halo effects. By integrating these insights into the evaluation process, Student Voice efforts can contribute to a more accurate and nuanced understanding of teaching quality.
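As an illustration, here is a minimal sketch of such a pass using NLTK's VADER sentiment scorer; the comments and theme keywords are invented, and a production pipeline would use richer topic modelling:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Invented open-ended comments.
comments = [
    "Lectures were clear and well organised.",
    "Great module, although the room was far too small.",
    "Hard to hear at the back; the hall is cramped and stuffy.",
]

# Hypothetical theme keywords flagging the physical environment.
ROOM_WORDS = {"room", "hall", "cramped", "stuffy", "seats"}

for text in comments:
    sentiment = sia.polarity_scores(text)["compound"]  # -1 (neg) .. +1 (pos)
    mentions_room = bool(ROOM_WORDS & set(text.lower().split()))
    print(f"{sentiment:+.2f}  mentions_room={mentions_room}  {text}")

# Comparing sentiment between comments that do and do not mention the
# environment hints at whether unrelated factors colour the feedback.
```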
Q: What specific strategies or methodologies can be employed to ensure the validity and reliability of SETs, considering the presence of halo effects?
A: To enhance the validity and reliability of student evaluations of teaching (SETs) in the face of halo effects, institutions can adopt several strategies. One approach is to diversify the types of questions used in evaluations, combining quantitative scales with qualitative, open-ended questions. This mix allows for a broader range of student feedback, capturing nuances that scale-based questions might miss. Additionally, training sessions for students on the importance of unbiased feedback and the potential impact of halo effects can raise awareness and encourage more thoughtful responses. Implementing anonymous evaluations can also reduce social desirability bias, where students might otherwise rate instructors more favourably if they believe their responses can be traced back to them. These strategies, rooted in the principles of student voice, aim to create a more balanced and comprehensive evaluation system that better reflects the complexities of teaching effectiveness.
Q: In what ways can Student Voice initiatives leverage independent variables, like the unrelated question about the lecture room size mentioned, to better understand and interpret the data collected through SETs?
A: Student Voice initiatives can use independent variables, such as unrelated questions about the physical learning environment, to add depth to the interpretation of data collected through SETs. By analyzing responses to these unrelated questions alongside traditional evaluation metrics, educators can identify patterns that may indicate a halo effect. For example, if students' ratings of teaching quality are consistently correlated with their satisfaction with the lecture room size, this could signal that their overall perception of the course is influencing specific evaluations of teaching effectiveness. Recognising these patterns allows for a more critical examination of SET data, encouraging educators to consider external factors that may be influencing student feedback. This approach supports the broader aim of Student Voice initiatives to foster a comprehensive and nuanced understanding of the student experience, leading to more informed decisions about teaching and learning improvements.
References
[Paper Source] Edmund Cannon & Giam Pietro Cipriani, "Quantifying halo effects in students’ evaluation of teaching"
DOI: 10.1080/02602938.2021.1888868