How does Perusall determine comment quality?

Tags: perusall

When assigning content, Perusall can grade student comments as low, medium, or high quality. How does it determine this?

[Screenshot: sample comments on an assignment from three students: Allison, Beth, and Cory]

Assuming these comments are representative of each student's comments for this assignment (and that their comments are distributed throughout the entire assignment and submitted on time), the students would receive the following evaluations for their bodies of comments:

  • Allison: Meets expectations
    Allison's comments reveal interpretation of the text and demonstrate her understanding of concepts through analogy and synthesis of multiple concepts. Her responses are thoughtful explanations with substantiated claims and/or concrete examples. She also poses a profound question that goes beyond the material covered in the text. Finally, she applies her understanding of graphical representation to explain the relationship between concepts.
  • Beth: Improvement needed
    While Beth asks potentially insightful questions, she does not elaborate on her thought process. Her comments demonstrate superficial reading, but no thoughtful reading or interpretation of the text. When responding to other students' questions, she shows some thought but does not really address the question posed.
  • Cory: Deficient
    Cory’s comments have no real substance and do not demonstrate any thoughtful reading or interpretation of the text. His questions do not explicitly identify points of confusion. Moreover, his comments are not backed up by any reasoning or assumptions.

Perusall uses a machine learning algorithm that analyzes linguistic features of each comment to build a predictive model of the score a human instructor would give. In other words, instead of trying to write down a set of rules to measure these qualities, we create a "training set" consisting of a large number of comments along with grades given by multiple expert human graders working from the rubric, and then train an algorithm that combines the linguistic features to best predict the scores those graders assigned. In our validation work, we found that Perusall agreed with the expert human graders about as often as two humans agreed with each other. You remain in full control of how your students are evaluated.
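
For readers who want a more concrete picture, here is a minimal sketch of that kind of supervised model, written in Python with scikit-learn. It is not Perusall's actual model: the TF-IDF features, the Ridge regressor, and the toy comments and scores below are assumptions chosen only to illustrate the idea of training on human-graded comments and then predicting a score for a new comment.

    # Minimal sketch of the general approach described above: learn to predict
    # human-assigned comment scores from linguistic features of the text.
    # NOT Perusall's actual model; the feature extraction, regressor, and data
    # below are illustrative assumptions only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    # Hypothetical training set: comments paired with scores (0-2) averaged
    # across multiple expert human graders applying the rubric.
    train_comments = [
        "This connects the diagram to the definition of elasticity because ...",
        "I agree.",
        "Why does the author assume the market clears here? If it did not, ...",
        "Interesting.",
    ]
    human_scores = [2.0, 0.0, 1.5, 0.0]

    # Fit a model that maps text features to the expected human score.
    model = make_pipeline(TfidfVectorizer(), Ridge())
    model.fit(train_comments, human_scores)

    # Suggest a score for a new comment; the instructor can review or override it.
    new_comment = ["This example reminds me of supply shocks because ..."]
    print(model.predict(new_comment))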

 

From Perusall's perspective, the goal is to save you, the instructor, time by suggesting a score. By default, we do not show scores to students until you are ready to review, approve, and release them. To see how a score was calculated, click the student's grade in the gradebook; you can change the score there if needed. To adjust when scores are released to students, go to Settings > Scoring and change the "Release scores to students" setting. Under Settings > General > Analytics, you can also choose how our algorithm interacts with your course.

 

Details

Article ID: 155661
Created: Mon 11/20/23 10:41 AM
Modified: Mon 11/20/23 10:45 AM