Massive Open Online Courses (MOOCs) have been both hyped as steps toward cheaper, more democratic education and criticized as low-quality substitutes for traditional education. In this project, we propose to address one of the major challenges in MOOCs: grading and feedback. Higher-quality grading and feedback would improve learning outcomes and add value to MOOC course credits, making MOOCs a more useful and sustainable educational resource.

Limited instructor resources in large courses make personalized feedback infeasible. We therefore turn to peer grading and feedback, which have the potential to support inexpensive and scalable MOOCs. Peer grading has so far been tested with limited success, but we believe further research can make it practical and reliable.

This project aims to develop a deeper understanding of peer grading through a combination of theoretical analysis and empirical testing. We aim to establish bounds on the reliability and scalability of peer grading systems; analysis of these fundamental properties will lay the groundwork for long-term MOOC research and development. We will use this analysis to develop practical grading and feedback systems that combine student and instructor input, for example by aggregating peers' ordinal (pairwise) comparisons of submissions, as sketched below.
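To make the flavor of ordinal peer evaluation concrete, the following is a minimal, hypothetical sketch rather than the project's actual system: it aggregates peers' pairwise comparisons of submissions into a ranking using simple win counts (a Copeland-style score). The data format and function name are illustrative assumptions.

```python
from collections import defaultdict

def rank_by_win_count(comparisons):
    """Rank submissions from peers' pairwise comparisons.

    comparisons: iterable of (winner, loser) pairs, where each pair means a
    peer judged `winner` to be better than `loser`. Returns submission ids
    sorted from highest to lowest win count (a simple Copeland-style score),
    with ties broken alphabetically for determinism.
    """
    wins = defaultdict(int)
    seen = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        seen.update((winner, loser))
    # Submissions that never won still appear, with a score of zero.
    return sorted(seen, key=lambda s: (-wins[s], s))

if __name__ == "__main__":
    # Hypothetical comparisons from several peer graders over four submissions.
    peer_judgments = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "A"), ("D", "B")]
    print(rank_by_win_count(peer_judgments))
    # -> ['A', 'D', 'B', 'C']  (A and D tie with two wins each)
```

Simple win counts ignore grader reliability and the structure of which pairs were compared; the cited papers study how such factors limit what any aggregation scheme can recover.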

Major challenges

Primary research questions

Documents

Nihar B. Shah, Joseph K. Bradley, Abhay Parekh, Martin Wainwright, and Kannan Ramchandran.
A Case for Ordinal Peer-evaluation in MOOCs.
NeurIPS Workshop on Data Driven Education, 2013.

Nihar B. Shah, Sivaraman Balakrishnan, Joseph K. Bradley, Abhay Parekh, Kannan Ramchandran, and Martin J. Wainwright.
Estimation from Pairwise Comparisons: Sharp Minimax Bounds with Topology Dependence.
Journal of Machine Learning Research, 17(58): 1-47, 2016.