Ditch Multiple Choice Questions for your Take-Home Examinations

Traditional in-class examinations range from open-book to closed-book pen-and-paper assessments, and multiple-choice questions (MCQs) are among the most commonly used assessment methods in higher education [1]. In the higher education setting, performance on MCQs is translated into grades that represent the level of achievement students reach [2]. MCQs can cover a broad range of course material in a timely manner and give professors an efficient way to score responses through scantrons [1].

Now it is even possible to administer MCQ examinations through online platforms like Quizlet. One could argue this approach can combat plagiarism if proper conditions are set, such as allocating at most 30 seconds to a minute per question, or rewording questions so that students are less likely to find answers through a quick Google search during the exam. I have even heard of professors posting MCQ tests with wrong answers to textbook-solution sites like Chegg to easily identify plagiarizers. While I give credit to these clever ideas, I do not believe MCQs are the optimal assessment method.

Transforming Traditional Examination Methods

Despite the availability of guidelines and courses teaching the principles of creating MCQs, they are not always followed. A study by Tarrant et al. (2006) found that of 2,770 MCQs created between 2001 and 2005, 46% violated item-writing guidelines [3]. When MCQ assessments are poorly written, they adversely affect students' performance and achievement.

In addition, MCQs do not assess higher-order thinking; they focus too heavily on the recall of knowledge [3]. According to Bloom's taxonomy, which represents the domains of student learning as a hierarchy, the lowest level, remembering, amounts to rote learning [4]. For students to move from rote learners to true scholars, the levels of understanding, applying, analyzing, evaluating, and creating need to be integrated so that students can create new knowledge [5].

Bloom's taxonomy also helps instructors gauge which phase of learning students have reached with respect to the course material. MCQs do not allow students to define problems, predict, hypothesize, experiment, analyze, conclude, and self-reflect [6]. Students are given no opportunity to use their intrinsic creativity or to express ideas through this assessment method. This lack of evaluative skill-building in higher education prevents students from accessing their higher cognitive domains.

A reasonable way to test higher-order thinking is to use ill-defined or ill-structured open-ended questions with no time constraints. We have seen professors on Kritik break down their final examination or term papers into smaller activities to evaluate students. Challenging students to construct a comparison-contrast essay allows each student to draw on a very different combination of references. Let students start their preparations early, and set up an activity where students apply excerpts of class readings as citations in the test. This way, responsible students who have kept up with the course are rewarded for their knowledge of the material and can feel confident composing their examination on material they know thoroughly. Before writing the essay, students can offer one another constructive feedback and suggestions, while those who have fallen behind are identified through the peer-review process.

The peer-review process has proved effective for helping professors gauge whether students have met all the learning goals. In STEM-related disciplines, the time constraints of a single examination can make it difficult to test every learning objective; moreover, a passing grade of 50% on a single examination does not necessarily mean all learning objectives have been met. In a take-home examination with no time limit, where students have unlimited access to information, students are instead required to demonstrate that they have met every learning goal. They develop higher-order cognitive skills as they improve their ability to find, validate, select, integrate, synthesize, communicate, comprehend, and present information [5].

  1. DiBattista, David & Kurzawa, Laura. (2011). Examination of the Quality of Multiple-choice Items on Classroom Tests. Canadian Journal for the Scholarship of Teaching and Learning. 2. 10.5206/cjsotl-rcacea.2011.2.4.
  2. Yorke, M. (2009). “Faulty signals? Inadequacies of grading systems and a possible response,” in Assessment, Learning and Judgement in Higher Education, ed. G. Joughin (London, UK: Springer), 1–20.
  3. Tarrant, M., Knierim, A., Hayes, S. K., and Ware, J. (2006). The frequency of item writing flaws in multiple-choice questions used in high stakes nursing assessments. Nurs. Educ. Pract. 6, 354–363. doi:10.1016/j.nepr.2006.07.002
  4. Bloom, B. Taxonomy of Educational Objectives: The Classification of Educational Goals, Handbook 1: Cognitive Domain; David McKay: New York, NY, USA, 1956.
  5. Bengtsson, L. (2019, November 6). Take-Home Exams in Higher Education: A Systematic Review. Retrieved from [***Missing Reference***]
  6. Svoboda, W. A case for out-of-class exams. Clear. House J. Educ. Strateg. Issues Ideas 1971, 46, 231–233.

Navinaa Sanmugavadivel
Education enthusiast