How These 5 Canadian Professors Are Improving Feedback and Learning Outcomes Through Kritik

In lecture halls across Canada, instructors face a familiar tension: how do you provide meaningful, timely feedback to 300 students? How do you teach critical thinking in a first-year course with 800 enrollments? And in an era when generative AI can produce polished prose in seconds, how do you design assessments that actually measure learning?

These aren’t hypothetical questions. They reflect the daily reality for university and college instructors navigating large classes, evolving student needs, and rapidly shifting technological landscapes. The traditional model, where one instructor grades every assignment and provides all feedback, simply doesn’t scale. Even when it is feasible, it often falls short of supporting strong learning outcomes.

At the same time, academic integrity violations such as cheating and plagiarism remain common challenges in university courses. In large classes, identifying academic offences can be particularly difficult due to the sheer volume of exams and assignments instructors must review.

But across Canadian higher education, a pedagogical shift is underway. Instructors Mary-Helen Armour, Ros Woodhouse and Ian Lumb from York University, Robin Potter of Seneca Polytechnic, and Laura Reid of Western University are redesigning assessment to prioritize process over product, feedback over grading, and evaluative judgment over passive compliance. Their approaches share a common thread: structured peer assessment that helps students develop critical thinking skills while receiving faster, more diverse feedback.

This shift includes fundamentally rethinking how students learn to evaluate quality, revise their thinking, and communicate effectively: skills that matter far more in the age of AI than the ability to produce a single polished draft.

Assessment Design in the Age of AI

The rise of generative AI has forced many instructors to reconsider what they’re actually assessing. If AI can write a competent essay or solve a standard problem set, then assessment focused solely on final products becomes less meaningful. A learning-focused approach instead emphasizes student-centered, formative, and ongoing feedback, shifting attention from simply delivering comments to fostering students’ active engagement with feedback throughout their learning.

This is where process-oriented assessment and peer review offer a path forward. Learning strategies such as Socratic questioning, case study analysis, and problem-based learning foster critical thinking by encouraging students to investigate problems, analyze evidence, and develop reasoned conclusions (Hmelo-Silver, 2004). Scaffolded assignments break large projects into manageable steps, with feedback at each stage to guide the thinking process. When students work through those stages, giving peer feedback, revising, and reflecting, they demonstrate learning that AI cannot perform on their behalf and build the evaluative judgment to recognize gaps in logic and improve work based on critique.

Robin Potter emphasizes that AI-aware pedagogy doesn’t mean avoiding technology; it means designing assessment that makes learning visible. “We need assignments where students explain their thinking, critique others’ reasoning, and revise based on feedback,” she says. “These activities reveal understanding in ways that final products alone cannot.” Embedding critical thinking in the curriculum this way improves academic performance and learning outcomes. It also fosters metacognition, helping students recognize their own thinking processes and biases so they can keep improving.

Mary-Helen Armour’s approach of repeated low-stakes assignments with peer feedback serves a similar purpose. Students submit multiple drafts, receive peer and instructor commentary, and revise over time. This iterative process makes it harder to outsource work to AI and more valuable to engage authentically with feedback and revision.

The result is assessment that feels more authentic, connected to real-world practices of collaboration, critique, and continuous improvement.

How Canadian Universities and Colleges Are Rethinking Feedback and Critical Thinking at Scale

Seneca Polytechnic: Process-Oriented Assessment for Critical Thinking

The pressure on traditional assessment design has never been greater. Employers consistently rank critical thinking and problem-solving among the most important skills for graduates, according to employer surveys from organizations such as the National Association of Colleges and Employers (2016) and the World Economic Forum (2025).

In response, Canadian universities and colleges are moving toward more authentic assessment strategies that emphasize process, reflection, and the ability to evaluate and improve work over time. Thoughtful assessment design also makes feedback literacy visible and measurable, connecting individual assignments to broader strategies for student success. This shift recognizes a fundamental truth: the most valuable skill isn’t producing a perfect final product, but knowing how to assess quality, give constructive feedback, and revise based on critique.

Robin Potter at Seneca Polytechnic frames this as process-oriented assessment. “We need to move beyond just grading the final product,” she explains. “Students need to engage in analysis, reflection, and peer critique. They need to develop evaluative judgment, the ability to recognize what makes work strong or weak, and to apply that understanding to their own thinking.”

This approach aligns with growing research on feedback literacy: students learn more when they actively engage with feedback, compare their work to others, and practice giving constructive critique. In large classes, peer assessment becomes not just a practical necessity but a pedagogical advantage.

Download Resource: Building Critical Thinking Skills through Process-based Peer Assessments

Western University: Improving Feedback in Large Computer Science Classes

Teaching large classes presents unique obstacles to meaningful feedback. When one instructor is responsible for hundreds of students, feedback often arrives weeks late, feels generic, or focuses narrowly on grades rather than learning. Students submit work, receive a mark, and move on, often missing the iterative revision process that builds deeper understanding. How students perceive, seek out, and act on feedback ultimately shapes whether it improves learning outcomes.

Laura Reid discovered this challenge firsthand while teaching computer science at Western University. In a first-year course with nearly 800 students, providing timely, detailed feedback on coding assignments seemed impossible. “I initially adopted peer assessment as a way to manage grading pressure,” she says. “But I quickly realized the real impact was on student learning and understanding.” By analyzing student responses to feedback surveys, Reid was able to better understand students' perceptions and learning needs in such a large class, allowing her to refine her instructional approach.

Through structured peer evaluation using detailed rubrics, Reid’s students began reviewing each other’s code, seeing multiple approaches to the same problem, and developing stronger reasoning about what constitutes quality work. The feedback came faster, covered more perspectives, and helped students internalize technical standards in ways that instructor comments alone couldn’t achieve. That understanding of students’ learning needs and perceptions, in turn, helps instructors tailor the feedback process for maximum impact in large courses.

This experience reflects a broader pattern across disciplines. When assessment design incorporates peer review, students benefit from exposure to diverse approaches, faster feedback cycles, and the metacognitive work of evaluating others’ reasoning. These benefits matter especially in large classes where individual instructor attention is limited.

Learn more: Scaling Robust Feedback in Large Courses Without Increasing Workload

York University: Building Feedback Literacy Through Peer Review

Critical thinking at scale does not mean lowering academic standards or automating assessment. Instead, it requires intentionally designed learning experiences that give students repeated opportunities to evaluate ideas, compare approaches, and revise their work: the core activities that develop critical thinking skills.

At York University, Mary-Helen Armour uses peer assessment in large first-year and online asynchronous courses to increase student interaction and strengthen writing over time. Her approach emphasizes calibration, where students first practice evaluating sample work using rubrics before applying those criteria to peers’ drafts. Through this process, students learn how strong analysis and clear communication are defined within their discipline, while feedback literacy becomes embedded in the course design.

“Peer assessment helps students write for real audiences rather than only for instructors,” Armour explains. “They start to see their writing as communication, not just compliance. And they develop workplace-relevant skills in giving and receiving feedback.”

Armour also uses anonymous peer review strategically. Removing student identities reduces anxiety, particularly for students with accommodations or those who feel self-conscious about their writing. It also encourages more honest, constructive feedback. As a result, students focus on the quality of ideas rather than classroom social dynamics.

Also at York, Ros Woodhouse integrates peer feedback across writing, communication, iTech, and data science courses. Students review drafts, slide decks, document design, and presentation practice, often as preparation for experiential learning projects with external partners. According to Woodhouse, the goal is not simply faster feedback but stronger evaluative skills.

“The goal isn’t just faster feedback,” she notes. “It’s helping students develop stronger evaluation skills and better communication habits before their work goes into the world.”

In geoscience courses, York instructor Ian Lumb uses peer assessment to encourage students to connect disciplinary knowledge with real-world issues such as climate change and water pollution. In this context, large classes actually become an advantage.

“Students see how their peers interpret the same material differently,” Lumb explains. “That exposure strengthens critical thinking across the whole class.”

Across these examples, individual assignments, peer discussion, and structured feedback create authentic learning activities that help students interpret evidence, evaluate arguments, and refine their reasoning. Together, these strategies demonstrate how critical thinking can be developed at scale without sacrificing academic rigor.

Download Resource: Enhancing Teaching Experiences in Large Courses with Kritik

How Peer Assessment with Kritik Improves Feedback in Higher Education across North America

Kritik supports structured peer assessment in higher education, allowing instructors to expand feedback while keeping students actively involved in the learning process. Instead of relying on a single instructor’s comments, students receive peer feedback from multiple perspectives. Using structured rubrics and evaluation workflows in Kritik, students review classmates’ work, interpret assessment criteria, and compare different approaches to the same problem. This process strengthens feedback literacy, deepens engagement with course material, and supports the development of critical thinking skills.

As instructors such as Laura Reid at Western University and Robin Potter at Seneca Polytechnic have observed, the value of peer assessment is not just faster feedback. When students evaluate peers’ work and revise their own assignments based on critique, they develop the ability to judge quality, explain reasoning, and improve their work, all skills that strengthen learning outcomes in higher education, especially in the age of generative AI.

Key Principles for Scaling Feedback and Learning Outcomes

Despite differences in discipline and context, the instructors above share several design principles for peer assessment that improves student success and teaching effectiveness:

Scaffolded assignments with clear progression. Students don’t jump straight into high-stakes peer review. They practice with low-stakes assignments, calibration exercises, and sample work before evaluating peers’ substantive projects. Setting the right tone from the start helps ensure participation; instructors can begin small by asking students about their learning expectations on day one to build trust and promote collaboration.

Rubric-guided feedback that builds consistency. Detailed rubrics help students understand quality standards and provide structured, constructive feedback. Rubrics also make instructor oversight more manageable and ensure peer assessment aligns with learning outcomes. Encouraging students to ask questions as they interpret and apply feedback further supports feedback literacy and deeper learning.

Calibration and norming activities. Before peer review begins, students practice evaluating sample work together, discuss criteria, and develop a shared understanding of standards. This calibration improves feedback quality and builds confidence. Fear of peer judgment can discourage participation in large classes, so addressing how students feel about the process helps foster a more participatory classroom culture. Breaking large classes into smaller segments with specific questions or exercises can also promote participation.

Repeated practice over time. Peer assessment works best when it’s not a one-time activity. Scaffolded opportunities to give and receive feedback help students develop stronger evaluative judgment and communication skills. How students respond to the feedback they receive matters as much as the feedback itself: active engagement is what enables meaningful improvement.

Anonymity to support honesty and inclusion. Anonymous peer review reduces anxiety, encourages candor, and creates more equitable conditions for students with diverse backgrounds and accommodations. Online discussion boards can provide structured opportunities for students who are shy to participate in class, further supporting inclusion.

Instructor oversight and accountability. Successful peer assessment doesn’t mean instructors step back entirely. Instructors review peer feedback quality, intervene when necessary, and ensure students take the process seriously. Modeling questioning skills and encouraging students to practice these skills in class are essential for developing critical thinking in higher education.

Connection to authentic learning contexts. The most effective peer assessment connects to real-world practices such as workplace feedback, professional collaboration, or experiential learning with external partners, as the computer science and experiential learning examples above illustrate. This connection increases student engagement and relevance. Feedback must become part of an institution-wide culture for students to be comfortable giving and receiving it, supporting a sustainable environment for critical thinking and growth.

The Future of Feedback in Canadian Higher Education

As Canadian universities and colleges adapt to larger enrolments, evolving technology, and limited instructional resources, the need for scalable feedback processes will continue to grow. Approaches such as peer assessment offer a practical way to support student learning while maintaining meaningful engagement in large classes.

However, adopting peer feedback also requires rethinking the purpose of assessment. If assessment is designed only to assign grades efficiently, peer review may appear unnecessary. But when assessment focuses on developing critical thinking, evaluative judgment, and communication skills, peer feedback becomes an essential component of the learning process.

The instructors at York University, Seneca Polytechnic, and Western University demonstrate that feedback-rich assessment design can scale effectively across large courses. Their classroom practices show that when students learn to evaluate peers’ work, respond to critique, and revise their ideas, they develop the analytical and communication skills needed for academic and professional success.

For instructors, department leaders, and teaching and learning teams across Canada, the challenge is not whether to incorporate more feedback into assessment design. Instead, the question is how to design courses that encourage student engagement, support feedback literacy, and strengthen learning outcomes across diverse educational contexts.

Developing student feedback literacy is an ongoing process that requires thoughtful course design, consistent feedback opportunities, and reinforcement throughout the academic year. By investing in these strategies, institutions can create learning environments where students become active participants in their learning and develop the critical thinking skills needed to succeed in a rapidly changing world.

Set clear expectations for AI use in your course with our ready-to-use AI Policy Template.

Build Critical Thinking Into Every Assignment with the Kritik Toolkit

Kritik360 helps you move beyond one-time submissions by engaging students in structured peer assessment, rubric-guided evaluation, and meaningful feedback. VisibleAI adds transparency to each student’s working process, so you can guide ethical, responsible practices while gaining insight into how each student thinks and uses AI.

Together, they support deeper learning, stronger critical thinking, and greater confidence in the integrity of student work.

Try Kritik360 for free or book a discovery call to explore VisibleAI.

References

Hmelo-Silver, C. E. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16(3), 235–266. https://doi.org/10.1023/B:EDPR.0000034022.16470.f3

National Association of Colleges and Employers. (2016, April 20). Employers identify four “must have” career readiness competencies for college graduates. https://www.naceweb.org/career-readiness/competencies/4fcc5854-93f9-408e-af2a-caeb8a248f3c

World Economic Forum. (2025). The future of jobs report 2025. https://www.weforum.org/publications/the-future-of-jobs-report-2025/
