Degree

Doctor of Philosophy (PhD)

Department

Division of Computer Science and Engineering

Document Type

Dissertation

Abstract

Most existing autograders for programming assignments are based on unit testing, which is tedious to implement for programs with graphical output and cannot test for other aspects of code, such as programming style or structure. We present a novel autograding approach based on machine learning that can successfully check the quality of coding assignments from a high school-level CS-for-all computational thinking course. To evaluate our autograder, we graded 2,675 samples from five different assignments from the past three years, including open-ended problems from different units of the course curriculum. Our autograder uses features based on lexical analysis and classifies programs according to a code quality rubric. With Pearson correlation coefficients in the range of 0.80–0.96, our autograder demonstrates its usefulness in the classroom. It supports teachers in grading graphical output while also providing information on the readability, coding style, and efficiency of student submissions, lessening teachers' workload and helping them judge code quality efficiently.
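
The pipeline the abstract describes (lexical-analysis features fed to a classifier that predicts rubric scores, evaluated against human grades with Pearson correlation) can be sketched as follows. This is a minimal illustration under assumptions, not the dissertation's implementation: the token-count feature set, the RandomForestClassifier model, and the toy submissions/teacher_scores data are all hypothetical choices made for the example.

import io
import tokenize

import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestClassifier

def lexical_features(source: str) -> list[float]:
    """Count token categories using Python's own lexer as a stand-in
    for whatever lexical analysis the grader actually performs."""
    counts = {"NAME": 0, "OP": 0, "NUMBER": 0, "STRING": 0, "COMMENT": 0}
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        name = tokenize.tok_name[tok.type]
        if name in counts:
            counts[name] += 1
    lines = source.count("\n") + 1
    # Normalize counts by program length and keep length as a feature.
    return [counts[k] / lines for k in sorted(counts)] + [lines]

# Hypothetical data: student submissions paired with teacher rubric scores.
submissions = ["x = 1\nprint(x)\n",
               "# draw a square\nfor i in range(4):\n    print(i)\n"] * 10
teacher_scores = [2, 3] * 10  # rubric levels assigned by a human grader

X = np.array([lexical_features(s) for s in submissions])
y = np.array(teacher_scores)

# Train a classifier to map lexical features to rubric scores.
model = RandomForestClassifier(random_state=0).fit(X, y)
predicted = model.predict(X)

# Evaluate agreement between predicted and human-assigned scores,
# as the abstract does with Pearson correlation.
r, _ = pearsonr(predicted, y)
print(f"Pearson r = {r:.2f}")

In practice the evaluation would use held-out submissions rather than the training set; the sketch only shows how the feature extraction, classification, and correlation-based evaluation fit together.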

Date

4-3-2023

Committee Chair

Baumgartner, Gerald

DOI

10.31390/gradschool_dissertations.6089
