Help Center

How to quickly get up and running with Kritik

How to convert your course content into Kritik Activities

Break large assignments down into smaller activities

Cumulative assignments can be transformed into smaller segments of peer evaluation that will ultimately help your students produce a higher quality final assignment.

How to:

Start by segmenting the steps necessary to create the final assignment, and create an activity for each portion. For example, for a research paper, assign:

  • One activity in which the student presents their hypothesis
  • One activity in which the student presents their data collection methodology
  • One activity in which the student presents their findings and discussions

Turn Readings into Engaging Material

Students absorb an abundance of content through weekly readings, but they rarely get to exercise that knowledge until class time or exams. You can transform readings into opportunities for students to retain the information and extend their learning through peer evaluation.

How to:

For each weekly reading, you can assign quick, frequent activities such as:

  • Creating a question based on the reading materials
  • Sharing notes and comments made on the readings
  • Teaching the readings in a creative way
  • Answering thought-provoking discussion questions
  • Creating a video explaining the contents of the reading

Homework Questions and Problem Sets

Homework questions are also a great source of peer evaluation content. Not only can students evaluate solutions to questions, but they can also investigate and build on their peers’ thought processes.

How to:

Assign one activity per set of homework questions, and ask students to clearly outline their thought processes, formulas, and diagrams for peer review. Be sure to share solutions to the questions with the class as soon as the deadline has passed.

Labs and In-Class Activities

In-class teaching methods are easily transferable to Kritik. The beauty of using Kritik for labs and in-class activities is that you can reap the benefits of the activity scheduler’s precise timing while extending the discussion well beyond class time.

How to:

Set the deadline for creation shortly after the lab or in-class activity is done. The creation phase can be used to submit lab results or findings from class. The evaluation and feedback stages then serve as a discussion board for the different conclusions your students have drawn from their findings.

Activity Templates

Kritik offers four activity templates designed around the structure of Bloom’s Taxonomy. These are suggested templates only and may be customized. Each template comes with sample instructions, objectives, and rubrics.

Bloom’s Taxonomy

Our activity templates focus on the three highest levels of cognitive thinking from Bloom’s Taxonomy: Analysis, Evaluation, and Creation.

Create Question

This template asks students to formulate a higher-order thinking question based on reading comprehension. The activity evaluates the question’s Richness, Complexity, Scope, and Relevance.

Create an Essay

This template prompts students to write an argumentative essay on a controversial topic. The rubric criteria include: Clarity of Thoughts, Accuracy, Creative and Critical Thinking, Source and Evidence.

Create Content to Teach Peers

Students are asked to teach content to their peers in a way that promotes high content retention. Students are evaluated on Organization, Relevance, Clarity, and Knowledge of Content.

Creative Communication

This template asks students to communicate course content in a creative way (e.g., an illustration, infographic, summary table, short video, animation, or anything else that conveys the message faster or makes it more engaging than plain text). Students are evaluated on Organization, Knowledge, Text and Readability, Creativity, and Visual Aids.

Best Practices

Best practices for creating activities and structuring peer evaluations

Validity

The outcome you should look for in a peer evaluation is validity. This means a student’s peer evaluation should reflect the same depth, thought process, and insight as a professor’s evaluation. This is a clear marker of success, because a professor’s marking is typically held as the gold standard. A valid student evaluation also demonstrates that peer grading is sustainable, because it replicates a professor’s assessment.

Reliability

Reliability is measured by the consistency among peer evaluations. Unless a piece of work is inherently subjective, a collection of peer evaluations must point in the same general direction in order to provide value. This can only occur when evaluations are consistent across the board, in terms of both evaluation depth and given score.

Maximum Capacity of Words on Bodies of Work (Creations)

For written work, essays should not exceed 1,000 words. As you might imagine, students can provide more precise feedback on shorter content. This leaves less room for variation in evaluations and prompts students to reach more consistent conclusions among themselves.

Maximum Capacity of Words on Evaluations

The length of evaluations should also be limited, to ensure that feedback remains effective, regular, and concise. According to a study conducted by West Virginia University, feedback should not exceed 50 words.

Clarity of Criteria Given to Students 

Keep your rubrics clear and concise. Give examples and indicators of poor, moderate, and excellent bodies of work. To transfer your professional knowledge to your students, explain your thought process as well as the tips and tricks you use throughout your own grading. Naturally, this boosts the validity of your students’ evaluations, as you help shape an evaluation process that mirrors your own.

Number of Evaluations Required of Students

An excessive number of assigned evaluations will exhaust students and their time, which can degrade the quality and validity of the peer evaluations despite adequate training and instruction. At the same time, the accuracy of grading will be compromised if only one or two peer evaluations are provided for a given work. According to studies conducted by Georgia State University and Pennsylvania State University, the optimal number of peer reviewers per work is between four and six.