October 1st, 2021 – Imperfect AI Autograders

This week, we have a joint presentation by Tiffany Wenting Li and Silas Hsu on their recently published work regarding imperfect AI autograders.

Please join us, and stay afterwards to socialize with the other attendees (and eat cookies)!

Attitudes Surrounding an Imperfect AI Autograder — Abstract

Deployment of AI assessment tools in education is widespread, but work on students' interactions with, and attitudes towards, imperfect autograders is comparatively lacking. This paper presents students' perceptions surrounding a 90% accurate automated short-answer grader that determined homework and exam credit in a college-level computer science course. Using surveys and interviews, we investigated students' knowledge about the autograder and their attitudes towards it.

We observed that misalignment between students' folk theories of how the autograder worked and how it actually worked could lead to suboptimal answer-construction strategies. Students overestimated the autograder's probability of marking correct answers as wrong, and higher estimates of this probability were associated with dissatisfaction and perceptions of unfairness. Many participants expressed a need for additional instruction on how to cater to the autograder. From these findings, we propose guidelines for incorporating imperfect short-answer autograders into the classroom in a manner that is considerate of students' needs.

Personal Bios

Tiffany Wenting Li is a 4th year Ph.D. student in the Computer Science Department at UIUC, advised by Dr. Karrie Karahalios and Dr. Hari Sundaram. Her research interests broadly lie in the intersection of human-computer interaction (HCI), education technology, and artificial intelligence (AI). She is excited about leveraging AI effectively and fairly to increase access to quality education. Currently, she focuses on two lines of research. First, she is working to address the imperfection and opacity of AI-driven feedback systems to maximize students’ learning gain. Her second line of research develops algorithmic systems to facilitate collaborative peer feedback exchanges at scale, optimizing for learning gain, feedback diversity, and efficiency. Before she was a Ph.D. student, she studied mathematics and economics at Cornell University.

Silas Hsu is a 4th year Ph.D. student at UIUC specializing in human-computer interaction (HCI) and advised by Dr. Karrie Karahalios. His research focuses on helping people make the most of imperfect AI systems and correct AI's mistakes. Currently, Silas pursues this goal in two areas: (1) everyday online algorithms that curate ads and feeds, making sure people have meaningful control over the content they see; and (2) algorithms that grade students' work, giving students tools to help them make sense of possibly imperfect feedback. When he's not busy, Silas continues to add to his over 20 years of classical piano experience and strives for performance quality rivaling that of professionals.