How to Maximize Your Peer Evaluation Results

Peer evaluations

Peer evaluation is carried out by evaluators, generally to improve teaching methods. The evaluation can take the form of a formative review or a summative review for hiring or promotion purposes. Self-evaluation forms are often developed to assess the contribution, performance, skills, competencies, teamwork, or attitude of students, team members, and faculty members.

Best practices for peer evaluation activities

Critical, motivational peer feedback increases productivity and overall knowledge retention, and yields higher-quality work from students. Here are some tips and best practices for creating activities and structuring peer evaluations:

1. Validity

The outcome you should look for in a peer evaluation is validity: a student's peer evaluation should show the same depth, thought process, and insight as a professor's evaluation. This is a clear marker of success, because a professor's marking is typically held as the gold standard. A valid student evaluation also demonstrates that automating grading through peer evaluation is sustainable, because it replicates a professor's assessment.
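One simple way to put a number on validity is to correlate each submission's average peer score with the instructor's score for the same submission; a correlation near 1 suggests peers are mimicking the instructor's judgment. Here is a minimal sketch of that check. The scores are made-up illustrative data, not from any real course:

```python
# Hedged sketch: quantify validity as the Pearson correlation between
# average peer scores and instructor scores for the same submissions.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: one entry per essay.
peer_avg   = [72, 85, 64, 90, 78]   # mean of each essay's peer scores
instructor = [70, 88, 60, 93, 75]   # instructor's score for the same essay

print(round(pearson(peer_avg, instructor), 2))
```

A value close to 1 would indicate that, on average, the peer evaluations rank and score work the way the instructor does.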

2. Reliability

Reliability is measured by the consistency among peer evaluations. Unless a piece of work is inherently subjective, a collection of peer evaluations must point in the same general direction in order to provide value. This can only occur when evaluations are consistent across the board, both in depth and in the scores given. Kritik implements a variety of features to help ensure this consistency.
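A rough proxy for this kind of consistency is the spread of the scores a single submission receives from its peer evaluators: the smaller the spread, the more the evaluators agree. The sketch below uses the standard deviation of made-up scores as that proxy (real reliability analyses often use inter-rater statistics, but the idea is the same):

```python
# Hedged sketch: gauge reliability as the spread (standard deviation)
# of the peer scores that one submission received. Smaller spread means
# the evaluators broadly agree. All scores are hypothetical.
from statistics import pstdev

def score_spread(peer_scores):
    """Population standard deviation of one submission's peer scores."""
    return pstdev(peer_scores)

consistent   = [80, 82, 79, 81]   # evaluators broadly agree
inconsistent = [55, 95, 70, 85]   # evaluators disagree widely

print(round(score_spread(consistent), 2))
print(round(score_spread(inconsistent), 2))
```

Flagging submissions whose score spread is unusually large is one way to spot where evaluators need clearer criteria or more training.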

3. Maximum word count for bodies of work (creations)

For written work, essays should not exceed 1,000 words. As you might imagine, students can provide more precise, constructive feedback on shorter content. Shorter work leaves less room for variation among evaluations, and students are prompted to reach more consistent conclusions.

4. Maximum word count for evaluations

The number of words in an evaluation should also be limited, to ensure that feedback is effective, regular, and concise. According to a study conducted by West Virginia University, feedback should not exceed 50 words.

5. Clarity of criteria given to students

Keep your rubrics clear and concise. Give examples and indicators of poor, moderate, and excellent bodies of work. To transfer professional knowledge to your students, explain your thought process as well as the tips and tricks you use throughout your grading process. Naturally, this boosts the validity of your students' evaluations, as you help shape an evaluation process that mirrors your own. Guide them to give constructive feedback for further improvement.

6. Number of evaluations required of students

An excessive number of assigned evaluations will exhaust students and their time, which can degrade the quality and validity of the peer evaluations despite adequate training and instruction. At the same time, grading accuracy will be compromised if a given work receives only one or two peer evaluations. According to studies conducted at Georgia State University and Pennsylvania State University, the optimal number of evaluators per work is between four and six.

The coronavirus pandemic has shifted us to virtual classrooms, where teaching with maximum efficiency is a struggle. One way to find out what the online classroom lacks compared to a traditional one is to run the peer review process repeatedly. The peer evaluation process is also helpful for assessing the effort of group members on group projects: how much each student has contributed to the group work assignment.

References

Ballantyne, R., Hughes, K., & Mylonas, A. (2002). Developing procedures for implementing peer assessment in large classes using an action research process. Assessment & Evaluation in Higher Education, 27(5), 427-441.

García Martínez, C., Cerezo, R., Bermúdez, M., & Romero, C. (2019). Improving essay peer grading accuracy in massive open online courses using personalized weights from student's engagement and performance. Journal of Computer Assisted Learning, 35(1), 110-120.

Li, H., Xiong, Y., Zang, X., Kornhaber, M. L., Lyu, Y., Chung, K. S., & Suen, H. K. (2016). Peer assessment in the digital age: A meta-analysis comparing peer and teacher ratings. Assessment & Evaluation in Higher Education, 41(2), 245-264.

Roy, M., & Michaud, N. (2018). L'autoévaluation et l'évaluation par les pairs en enseignement supérieur : promesses et défis [Self-assessment and peer assessment in higher education: Promises and challenges]. Formation et profession, 26(2).

Seifert, T., & Feliks, O. (2019). Online self-assessment and peer-assessment as a tool to enhance student-teachers’ assessment skills. Assessment & Evaluation in Higher Education, 44(2), 169-185.

Chris Palazzo
Marketer & Educator. Blending the two here at Kritik
