
Crafting Effective Multiple Choice Questions

How this will help

Learn a three-step process to develop multiple choice questions that accurately assess student progress
Choose the best multiple choice question format based on learning objectives, validity, reliability, and time
Identify and avoid frequently made mistakes in multiple choice question design

Creating multiple choice questions may seem simple, yet it is far more complex than asking a question and providing a few options. Wordy, ambiguous questions will leave students scratching their heads, trying to decipher the question instead of tapping into their understanding of the material. 

Crafting a good multiple choice question takes more than knowledge — it requires clarity, fairness, and the empathy to anticipate missteps without setting traps.

Start With the Learning Objectives

An effective multiple choice question always begins with clarity about assessment goals: what should this question assess? In a student-centered classroom, learning objectives, activities, and assessments are tightly integrated. Revisiting the learning objectives helps narrow the focus of your question (recalling facts, identifying misconceptions, or applying concepts) and ensures you are truly measuring what students have learned.

Choose an Effective Structure and Format

The base of a multiple choice question consists of two parts – (1) the stem, in the form of a question or partial sentence, and (2) alternatives that include both the correct and incorrect answers. The incorrect answers are also known as distractors. 

The example shows what a conventional multiple choice question looks like. 

STEM: According to Keynesian economic theory, which of the following fiscal policies would be most effective in stimulating economic growth during a recession? 

ALTERNATIVES:

A. increasing taxes 

B. reducing government spending

C. lowering interest rates

D. increasing government spending on infrastructure projects

Alternatives A through C are known as distractors; D is the correct answer.
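This two-part anatomy can be sketched as a small data structure. The following Python model is purely illustrative (the class and field names are our own, not part of any assessment tool):

```python
from dataclasses import dataclass

@dataclass
class MultipleChoiceQuestion:
    stem: str                 # the question or partial sentence
    alternatives: list[str]   # all options, correct answer included
    correct_index: int        # position of the correct answer

    def distractors(self) -> list[str]:
        """Every alternative except the correct answer."""
        return [alt for i, alt in enumerate(self.alternatives)
                if i != self.correct_index]

# The Keynesian example above, expressed in this structure:
keynes = MultipleChoiceQuestion(
    stem="According to Keynesian economic theory, which of the following "
         "fiscal policies would be most effective in stimulating economic "
         "growth during a recession?",
    alternatives=[
        "increasing taxes",
        "reducing government spending",
        "lowering interest rates",
        "increasing government spending on infrastructure projects",
    ],
    correct_index=3,
)

print(keynes.distractors())  # the three incorrect options, A through C
```

Separating the correct answer from the distractors this way mirrors how item banks typically store questions, and it makes the distractor set easy to inspect in bulk.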

However, with that basic structure, multiple choice questions can take on different formats such as alternate choice, true or false, or matching questions. You may also group several questions within a written context to create a question set, which is a particularly effective approach for assessing higher-order thinking and addressing more complex problems (Haladyna et al., 2002).

Below are examples of different multiple choice question formats.

Example: Alternate choice

Which of the following would most effectively slow down the process of respiration in plants?

A. Cold weather
B. Stormy weather

Example: Matching

Match each term on the top with its description on the bottom.

A. Cytoplasm
B. Nucleus
C. Mitochondria

1. The powerhouse of the cell
2. The control center of the cell
3. The jelly-like substance where cell activities occur

Example: True-False

The capital of Uruguay is Montevideo.

A. True
B. False

Example: Context-dependent question set

Imagine you are a delegate from Massachusetts to the Constitutional Convention. You have been authorized to act on behalf of your state.

You would most likely approve of the

A. New Jersey Plan
B. Virginia Plan

You would oppose the three-fifths compromise because

A. Your state, as a rule, is strongly abolitionist
B. You will be grossly overrepresented in Congress by northern states
C. You want only a single representative house

You support the suggestion that Congress tax

A. Imports
B. Exports

Because of your state’s experience with Shays’ Rebellion, you feel

A. Farmers should not have to carry the tax burden for townspeople
B. Native Americans must be pacified before there can be peace
C. Tories ought to pay reparations

One format to avoid is the complex multiple choice question, which requires respondents to evaluate multiple items simultaneously and identify which combination of them is correct. While this format increases the difficulty of the question, it often shifts the focus away from solving the problem to navigating the format itself (Haladyna et al., 2002).

If you have one of these, consider converting it into a multiple true-false question.

Here are examples of a poorly constructed question, followed by an improved version.

Bad complex multiple choice question

Which of the following are animals?

1. Cat
2. Apple
3. Bird

A. 1&2
B. 1&3
C. 2&3

Better complex multiple choice question

Below is a list of things. Mark A if it is an animal. Mark B if it is not.

1. Cat
2. Apple
3. Bird

Construct Effective Questions

After selecting one or more formats, the next step is to craft a multiple choice question with an effective stem and plausible yet discriminating alternatives.

Construct an Effective Stem

  • The stem should be meaningful and provide a definite problem. 
  • The stem should preferably be a focused question, or a partial sentence completed by the alternatives.
  • The stem should not contain irrelevant material.
  • The stem should be phrased positively in most cases; use negation only when required by significant learning outcomes (e.g., recalling what to avoid).

Construct Effective Alternatives

  • Three alternatives are sufficient for most cases. While including more options isn’t inherently harmful, it makes the question more difficult.
  • All alternatives should be plausible and discriminating: each option should appear to be a reasonable choice while remaining clearly distinct. Common student errors are a great source of such alternatives.
  • Alternatives should be stated clearly and concisely, using as few words as possible.
  • Alternatives should be mutually exclusive.
  • Alternatives should be homogeneous in content and grammatical structure. Homogeneity helps students focus on the meaningful differences among the options and improves the question’s discrimination.
  • Alternatives should be free from clues about which response is correct.
  • The alternative “all of the above” should be avoided. It cues the students that more than one option is likely to be correct, increasing the chance of guessing.
  • Alternatives should be formatted vertically, rather than horizontally.
  • Alternatives should be placed in logical or numerical order.

While the many guidelines for writing multiple choice questions might seem daunting, they are practical and highly applicable. Familiarizing yourself with these principles and applying them as a review checklist can be a helpful strategy for test writers.
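A few of these guidelines are mechanical enough to check automatically. Below is a minimal, hypothetical Python sketch; the function name is our own, and the length-imbalance check is one common interpretation of keeping alternatives "free from clues", not a rule stated above:

```python
def review_alternatives(alternatives: list[str]) -> list[str]:
    """Return warnings for guidelines that can be checked mechanically."""
    warnings = []
    if len(alternatives) < 3:
        warnings.append("Use at least three alternatives.")
    lowered = [a.strip().lower() for a in alternatives]
    if "all of the above" in lowered:
        warnings.append('Avoid the alternative "all of the above".')
    if len(set(lowered)) != len(lowered):
        warnings.append("Alternatives should be mutually exclusive "
                        "(duplicates found).")
    lengths = [len(a) for a in alternatives]
    if lengths and max(lengths) > 2 * min(lengths):
        warnings.append("One alternative is much longer than the others, "
                        "which can act as a clue.")
    return warnings

print(review_alternatives(["increasing taxes", "all of the above"]))
```

A script like this cannot judge plausibility or homogeneity of content, so it complements, rather than replaces, a human review against the full checklist.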

Practical Tips

When crafting multiple choice questions, you should:

  • Start with learning objectives
  • Choose the most suited format
  • Construct effective stems and alternatives

Writing effective multiple choice questions is both an art and a science. It requires clarity, fairness, and a deep understanding of how students think. Your first draft will rarely be perfect, but that’s the point. Distance and revision are your best allies, turning your questions into something sharper, more focused, and undeniably better.

Resources

Multiple choice question guidelines

References

Brame, C. (2013). Writing good multiple choice test questions. Vanderbilt University Center for Teaching. Retrieved January 22, 2025. 

Haladyna, T.M., & Rodriguez, M.C. (2013). Developing and Validating Test Items (1st ed.). Routledge.

Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement In Education, 15(3), 309-333.

When teaching in an online and hybrid setting, there are two primary ways to engage and interact with your students. Synchronous activities, like live Zoom sessions, require that everyone is in the same virtual space at the same time. Asynchronous activities, like discussion boards, email, or annotation tools, allow students to engage in academic work with their peers at a time that fits within their schedule.

Synchronous and asynchronous are both useful modalities that exist along a continuum in online and hybrid classes. In some courses, every element is asynchronous (not at the same time, like email or discussion boards), while in others almost everything happens synchronously (where everyone gathers at the same time, like a Zoom session), and still other courses employ a combination of both. These key questions will help you determine where your intentions for this course fall along this continuum and help you decide when to use synchronous or asynchronous elements.

One note: both synchronous and asynchronous modalities are valuable options, with neither being inherently better. Both can be used to develop connections between students and faculty, facilitate group work, and foster critical thinking. 

What skills are most important for students to develop in your course?

Through asynchronous course elements, students practice written communication in an online setting and can be more reflective in developing and sharing complex ideas. This allows students to build upon their time management and planning skills while completing the coursework. 

Synchronous elements are useful when students are learning to develop an argument in real time, navigate time pressure, and build interpersonal skills. Students will still build time management skills, albeit in a more structured and periodic cycle.

What type of feedback will be most useful for students in your course?

Asynchronous modalities provide opportunities for more thorough and reflective feedback. For more complex tasks, feedback will require intentionality, and will most likely be based on the product of a process that is not visible to the instructor. 

Synchronous feedback strategies provide an opportunity for spontaneous, immediate feedback that allows students to make real-time adjustments to their processes. 

What role will student perspectives play in your course?

Asynchronous settings work well for students who have unmovable demands on their time, and they tend to set the expectation that every student shares their thoughts. This allows a greater diversity of students and ideas to be represented throughout the course.

Synchronous sessions can be designed to allow students to share their perspectives and relate course content to their experiences. This helps students connect with their peers and form a learning community within the synchronous session. Classroom management techniques become more relevant in synchronous sessions as vocal students may dominate discussions, creating a narrowed perspective. There is also a possibility that some students may take a back-seat through the whole course if they are not engaged in the synchronous session.

What type of time can you give to this course?

Asynchronous elements are set up ahead of time, requiring a heavier investment up-front that allows you to focus on teaching during the semester. Everything you create can be “durable”, and can be used semester to semester. While they can be iterated, it is harder to make asynchronous elements responsive to students in real-time.

Synchronous elements also require prep time, but much of the instructional lift happens live, as you attend to instructional design and facilitate student learning simultaneously. Classroom preparation is iterative, informed by previous synchronous sessions so you can take into account and address any knowledge gaps. When facilitating a synchronous element, you can adapt on the fly and check understanding in the moment to determine whether your plans need to change. Every session is constructed in the moment in a way that can be immediately responsive to student needs.

The Right Decision

There is no single “right” choice when deciding between synchronous and asynchronous teaching modalities. You’ll need to consider the learning outcomes you are working toward, your assessment needs, your strengths as an instructor, and your students’ needs as learners. Clarifying the factors driving your decision can serve as a basis for selecting the modalities that work best for your course.

COVID-19 caught everyone off guard in 2020. Suddenly, all classes had to be held online and instructors and students had to react quickly with minimal help. With time to reflect on these experiences, faculty ask themselves what methods are available to keep students engaged and motivated in an online or virtual environment.

At the Center for Academic Innovation, gameful pedagogy is one approach to increasing student engagement. This method of course design takes inspiration from how good games function and applies that to the design of learning environments. 

One key goal of gameful pedagogy, as one might guess, is leveraging student motivation. To achieve that, course designers draw on elements of Self-Determination Theory, or SDT for short. This theory centers the power of intrinsic motivation as a driver of behavior. It sits on three primary pillars: autonomy (the power of choice a learner can have in their learning experience), competency (a feeling of accomplishment derived from completing a challenge), and belongingness (a feeling of being included and heard by the environment one is in or the people around them) (Deci & Ryan, 2000). 

Yet gameful pedagogy isn’t just about SDT. Practitioners also favor an additive point-based grading system over traditional grading. In traditional deductive percentage-based grading, learners start at 100% and have points deducted as they learn, which does not align with what learning is about. 

In a gameful course, learners are treated as novices when they begin a learning journey: they start from zero and work their way up to their goals. This also gives learners the freedom to fail. From a gameful point of view, it is unfair to expect learners to be “perfect”, because mistakes are common in learning and are great growth opportunities. Gameful pedagogy therefore favors learning environments that leave space for learners to explore and offer chances to make up for mistakes.

This freedom does not mean creating an out-of-control environment. Educators can still apply limits, such as assigning different point values or requiring the completion of certain tasks to unlock others, to ensure that students are working toward the learning goals.

All of these approaches and more boil down to gameful pedagogy, and this course design method has been used in a wide range of classes, from higher education down to K-12. However, most use cases occurred in person before the 2020 COVID outbreak. Does gameful also work in online environments?
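The contrast between deductive and additive grading reduces to a simple framing difference in the arithmetic. A toy sketch (our own illustration, not GradeCraft's actual implementation):

```python
def deductive_grade(points_lost: list[float], total: float = 100.0) -> float:
    """Traditional grading: start at 100% and subtract each deduction."""
    return total - sum(points_lost)

def additive_grade(points_earned: list[float]) -> float:
    """Gameful grading: start from zero and accumulate points toward a goal."""
    return sum(points_earned)

# The same amount of student work, framed two ways:
print(deductive_grade([5.0, 10.0]))       # 85.0 -- "you lost 15 points"
print(additive_grade([30.0, 25.0, 30.0])) # 85.0 -- "you earned 85 points"
```

The final number can be identical; what changes is the direction of travel the learner experiences, from loss avoidance to progress toward a goal.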

That turns out to be a great question for Pete Bodary, clinical associate professor of applied exercise science and movement science in the School of Kinesiology.  He has taught gameful courses for several years, including MOVESCI 241. This course teaches body mass regulation assessments, principles, and strategies. It is constructed with an additive point-based grading scheme, all-optional assignments (a student has the autonomy to complete any combination of assignments to get to their desired grade/goal), a strong supportive network, and real-world relevant topics (diabetes, disordered eating, weight control, supplements and safety, etc.). 

To maintain all assignments as optional while ensuring that students are on track to the learning objectives, Bodary assigns significantly more points to certain assignments to encourage completion. Some assignments include personal dietary intake and physical activity tracking, case studies, participation and reflections on dietary and physical challenges, and more. 

In Winter 2023, he decided to give students more freedom in how they engaged with class lectures on top of the existing setup. Students could choose from three distinct sections: in-person, synchronous virtual, or asynchronous virtual. In the in-person section, students were required to attend lectures in person. In the synchronous virtual section, students participated online while lectures were live-streamed. The asynchronous virtual section allowed students the freedom to watch lecture recordings at their convenience, without the obligation to attend in real time. 

Did students in different sections perform differently in this course? The short answer is no, not significantly.

“Those who are remote do not have the ease of popping out a question, [meaning the ability to raise their hand and spontaneously ask questions], so that is one difference to consider. However, we maintain a pretty active [asynchronous] Q/A space. I don’t believe that they ‘performed’ differently,” Bodary said.   

Students engage with the course content differently, but they are all motivated and learning in their own way.

In fact, to find out students’ motivations in this course, Bodary deployed a U-M Maizey project. U-M Maizey is a generative AI customization tool that allows faculty, staff, and students to build their own U-M GPT chatbot trained on a custom dataset. Bodary set up Maizey in the Fall 2023 term for the same course with a similar structure and prompted Maizey: What is the primary motivation of students? 

By evaluating students’ activity data, Maizey summarized that students are primarily motivated by finding course materials relatable and beneficial to their own and their loved ones’ health and well-being, connecting issues from their daily lives to class content, and applying course content to real-world problems. 

Looking at this example, three key characteristics emerge: controlled freedom for students to choose how to engage with the course, opportunities for students to make personal connections with course content, and possibilities for students to apply course content in real-world situations. 

Tying these characteristics back to gameful pedagogy, they align with the three components of SDT – autonomy, belongingness, and competency. Furthermore, the additive grading system and all-optional assignment design support student exploration and the agency to choose assignments and coursework. The course format, whether in-person or online, didn’t impact students’ motivation. Instead, the fact that students can choose their own way to participate in the class may motivate them even more. 

What’s important here isn’t modality (online, in-person, or asynchronous) but rather the content and design of the course. The success of MOVESCI 241 hinges on a carefully designed course where students can meet the learning goals regardless of how they engage. The design of MOVESCI 241 is gameful, but not all gameful courses are designed this way.

If you want to use gameful pedagogy to increase engagement in your course, you can start with these steps. You can also check out GradeCraft, a learning management system (LMS) built at the center to support gameful courses. Key features that make GradeCraft a strong companion for gameful courses include the additive grading system, mechanisms for tracking tangible progress (points planner, levels, unlocks, and badges), and flexibility for both instructors and students.

Finally, if you want to learn more about gameful pedagogy or GradeCraft, please email us at [email protected], and staff would be happy to set up a conversation with you.

References

Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227-268.

Echoes of “Can we have a study guide?” still reverberate through the virtual classrooms, even as summer takes hold and the allure of relaxation sets in. Study guides offer a temporary solution to students’ hunger for knowledge, providing them with the fish they need to satisfy their immediate needs. This approach, however, creates a cycle of dependency, requiring another fix before the next test opens. This is not the way. Instead of spoon-feeding, students should be taught to fish.

Though study guides have their merits, their direct impact on learning is not always evident. Tests can be a significant source of stress for students, which in turn hampers their performance. Study guides can help alleviate this anxiety and improve exam scores (Dickson, Miller, & Devoley, 2005), but they don’t necessarily foster deep, long-term learning. If the goal is to guide students’ online study habits before a test, then they should receive guidance not only on what to study but also on how to study effectively.

Problem Roulette is the Way

Problem Roulette is an invaluable personalized online learning tool that directs students’ attention to the study skills that work best for them. It offers a collection of previous test items for students to practice with and, starting this Fall 2023, will begin providing tailored study tips based on proven theory and algorithms designed to enhance test performance. In essence, Problem Roulette will not only feed but also teach students to fish. It will give them the confidence boost they crave through exposure to test-like items, while teaching them personally relevant study skills that can be applied to new situations. 

How will Problem Roulette work in online learning environments? In short, it will harness the power of gameful learning. As students engage with practice test items, the system will collect statistics on their performance, which will then be visualized and presented on a student-facing dashboard. This feedback will include information on the number of problems completed and the number of consecutive correct answers. These metrics will be compared with predefined volume and streak goals established through previous research (Black et al., 2023), known to maximize course performance. Consequently, the game for students becomes achieving their target volume and streak goals, which intrinsically incentivizes their study. To attain these goals, however, they must study effectively by consistently answering questions correctly in a row. As students strive to meet their volume and streak targets, they will simultaneously discover the study habits that yield the best results for them individually.
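The volume and streak metrics described here reduce to a single pass over a student's answer history. A sketch of the idea (our own illustration; Problem Roulette's actual dashboard logic may differ):

```python
def volume_and_streak(results: list[bool]) -> tuple[int, int]:
    """results[i] is True if the i-th practice problem was answered correctly.

    Returns (volume, longest streak of consecutive correct answers).
    """
    longest = current = 0
    for correct in results:
        current = current + 1 if correct else 0  # a wrong answer resets the run
        longest = max(longest, current)
    return len(results), longest

# Example session: 7 problems attempted, best run of 3 correct in a row.
print(volume_and_streak([True, True, False, True, True, True, False]))  # (7, 3)
```

Comparing these two numbers against predefined volume and streak goals is what turns raw practice data into the game-like targets students work toward.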

In the realm of online teaching, Problem Roulette emerges as an empowering force, equipping students with the skills they need to become self-sufficient learners. It shifts the focus from mere information consumption to active engagement, encouraging students to take charge of their own learning journey. By embracing Problem Roulette, educators can foster a generation of online students who not only excel academically but also possess the essential skills to adapt, learn, and thrive in the digital age.

Resources

Problem Roulette

The rapid shift to emergency remote instruction during COVID-19 left many instructors questioning how best to assess students, even well after classes resumed. Some wondered whether online tests made students more likely to violate academic integrity rules. Online test proctoring made news in many higher education settings as a way to ensure academic integrity; however, others have argued it violates students’ privacy.

What is Online Proctoring?

You may be familiar with proctoring in a face-to-face or residential setting, where a designated authority oversees an exam in a controlled, specified environment. Similarly, online proctoring is a service in which either a person or an artificial intelligence algorithm monitors a learner’s environment during an online exam. However, the environment an online proctor oversees is the learner’s personal one. This monitoring can take the form of video recording and the logging of students’ keystrokes, browser data, location data, and even biometric data like test-taker eye movements.

Advocates of online proctoring cite concerns about academic integrity in the online environment as a reason to implement proctoring (Dendir & Maxwell, 2020). Some even suggest that students do not mind the additional security because they believe it supports the integrity of the test and/or degree.

Concerns and Research

While onsite proctoring for academic integrity may seem reasonable, monitoring a learner’s home environment raises questions and has the potential for harm. Online proctoring can be perceived as invasive by students, as personal information about one’s location and physical data is recorded that is not otherwise necessary for an exam. Several institutions, like U-M Dearborn, the University of California Berkeley, the University of Illinois, and the University of Oregon, have placed limitations on, if not discontinued altogether, the use of third-party proctoring services. Institutions cite issues of accessibility, bias, student privacy, and institutional culture as reasons to discourage third-party proctoring. Student and faculty groups have publicly advocated for institutions to discontinue security features like locked-down browsers and third-party monitoring. At the University of Michigan Ann Arbor, third-party proctoring is still available through vendor partners, but it generally involves a separate fee and may be expensive.

Most of the academic research on online proctoring has focused on academic integrity rather than the impact of proctoring itself. Wuthisatian (2020) found lower student achievement in online proctored exams compared to the same exam proctored onsite; students least familiar with the technology and its setup requirements performed most poorly. In addition, students who have test anxiety may experience even more anxiety in certain proctoring situations (Woldeab & Brothen, 2019). With further research, we may find that the problem is not necessarily proctoring itself, but rather the burden the technology places on students taking an online exam.

Problems with internet connections or the home testing environment may be beyond students’ control. The inability to create a “proper” testing environment has raised students’ concerns about being unjustly accused of cheating (Meulmeester, Dubois, Krommenhoek-van Es, de Jong, & Langers, 2021).

Alternatives to Proctoring

Ultimately, only the instructor can determine whether proctoring is the right choice for a class and sometimes proctoring may be the best choice for your discipline, field, or specific assessment. Particularly in a remote setting, it may feel like the integrity of your assessment (particularly a test) is beyond your control, so proctoring may feel like the only option. However, there are alternatives to proctoring exams, from using exam/quiz security measures, to re-thinking a course’s assessment strategy to deemphasize exams. If you are concerned about how and what you are assessing, the Center for Research on Learning and Teaching provides resources and consultations to discuss academic integrity and different methods of assessment. We also recommend CAI’s Faculty Proctoring document if you have questions about proctoring.


How this will help

Discover tools to help plan an online course using design strategies

If you search for “online course design” or read any book on the subject, just about every resource emphasizes the importance of planning. However, it’s easy to feel overwhelmed if you are considering moving a course online, even with support from others. Many instructors new to online teaching struggle to engage with planning on the recommended timeline (several weeks or months in advance). 

If you need help planning, this comprehensive course planning blueprint tool can help you reflect and guide your design process (want something simpler? Keep reading for additional options).

The blueprint is a spreadsheet rooted in a backward design process. While by no means comprehensive (you may still have more work to do if media or instructional designers are involved), it can give you a structure for planning your online course. It can also be a place to have conversations with others with your strategy already mapped out, cutting down on orientation time to your course. Feel free to make a copy of it for your own use.

Our planning blueprint is made up of six parts:

  1. Course information
    Course name, number of students, etc.
  2. Course goals
    4-5 goals for the course overall – not specific to particular lessons. 
  3. Learner analysis
    Some questions to reflect on what your learners might be bringing to the class
  4. Learning Objectives and Content
    Breakdown of learning objectives by week, and the content needed to support them
  5. Activities and assessments
    What are the assessments and activities that support your learning objectives?
  6. Instructor engagement plan
    What will your plan be to engage with students each week?

There are other tools available to help you plan, so feel free to find one that may align with your teaching. Ultimately, most design tools are going to walk you through a similar process, so what is most important is to find a tool that resonates with your teaching style.

Resources

University of Michigan

CAI – Online Blueprint Planning Guide

How this will help

Define the term authentic assessment
Describe the value of authentic assessments
Brainstorm ideas for authentic assessments that might work in your online course

Multiple choice questions often can’t tell an instructor everything they want to know about students’ learning. Thinking about what you, as an instructor, want to measure about student learning can help you design creative and authentic assessments to align with your learning objectives.

Assessment is a term that tends to have a lot of baggage around it in education, and it can mean a couple of different things: measuring the efficacy of a degree program’s curriculum or measuring a student’s understanding of course material, for example. This module focuses on different approaches to assessing student learning.

Multiple choice tests are one of the more common techniques in higher education for measuring a student’s understanding of a concept. With many multiple choice tests, even really well-designed ones, the data most instructors get shows how good their students are at answering multiple choice questions, not necessarily how well students understand the course material. 

Essays are another common assessment technique deployed in higher education. Essays can demonstrate different kinds and levels of learning than multiple choice type exams, but they are usually written with a faculty/instructor audience in mind and don’t necessarily reflect the skills a course is designed to teach.

Authentic assessment is a term, coined in part by Grant Wiggins, for assessments that are tightly aligned with the learning objectives of a course or learning experience and have learners working on “real world” problems. Authentic assessments usually have more than one “correct” answer but can be evaluated using a rubric that provides assurance that the data obtained from the assessment is valid.

What Makes an Assessment Authentic?

In his essay, “The Case for Authentic Assessment”, Wiggins compares authentic assessments to traditional standardized tests. Although that direct comparison isn’t necessarily relevant in most higher education courses, we can pull some key traits of authentic assessments from that comparison. Authentic assessments

  • Require students to perform, in a real-world (or simulated real-world) context, all of the tasks an adult or professional would engage in to apply what they’ve learned.
  • Involve open-ended and ill-structured problems.
  • Require learners to adopt a role to “rehearse for the complex ambiguities of the ‘game’ of adult and professional life.”
  • Require learners to justify their answer as well as the process they used to decide on that answer.
  • Are realistic, in that they aren’t timed and allow learners to use resources that would be available to them in a real-world setting.

Advantages of Authentic Assessments

Using authentic assessments can require more effort and planning on the part of the instructor. Despite that increase in effort, both learners and instructors can benefit when a course uses authentic assessments. One of the benefits that applies to both learners and instructors is the increase in interest and engagement in the task. For instructors, it is much more interesting to explore and evaluate an array of different answers and approaches (and can be educational for the instructor, too). Learners have more motivation to work on the assessment: it is novel, creates a direct connection between the assessment and the “real” world, and clearly demonstrates to the learner how much they’ve learned and where they still have room to grow (i.e. authentic assessments are much more transparent to the learner).

Other benefits for instructors include an increased awareness of students’ strengths and areas for growth (both for individual students and the class as a whole), and an opportunity to connect with each individual learner. Since authentic assessments are directly tied to learning objectives, an instructor knows, with less ambiguity, which objectives students are meeting and which ones they are not. With authentic assessments, instructors get to connect with learners as they see the unique approach each individual learner uses to solve the ill-structured problem. Many instructors teaching online value every opportunity to connect with learners they may never interact with face-to-face.

In addition to being more engaging, authentic assessments are usually more equitable for the diverse learners in a course. The design and selection of multiple choice questions can include implicit biases that disadvantage some learners. Because authentic assessments are more transparent, don’t have a single right answer and require learners to justify their process and their answer, every learner has an opportunity to ask questions, identify and use resources, and “make their case” as to how their answer demonstrates their learning.

Examples of Authentic Assessments

Because authentic assessments are tied directly to the learning objectives of a course, program, or discipline, the examples provided here are of general categories/types of authentic assessments.

  • Case studies
  • Simulations (many role playing simulations can be used online)
  • Writing to a real audience – for example, a policy brief that might be shared with a legislator, or writing a pamphlet geared toward a lay audience
  • Community-partnered research or project development

Grading Authentic Assessments

The key to grading authentic assessments is to have a rubric that keeps the grader’s focus on the most important standards you want learners to meet. The Online Teaching at Michigan site has a guide on creating and using rubrics. 

Practical Tips

  • The first step to creating an authentic assessment is to write learning objectives that describe how learners will demonstrate their learning.
  • If you typically use essays for assessing student learning, frame the writing assignment for an audience other than the instructor/instructional team, and ideally, find individuals who are part of that audience to provide feedback to the learners.
  • Have students reflect on their own academic performance on each assessment. Having them identify their own misconceptions and mistakes enhances their learning, helps to develop their metacognitive abilities, and is representative of what a professional must do when they err.
  • Have students create a lightweight portfolio where they reflect on what they learned from each assignment (either through making mistakes or by engaging in the learning that occurs when someone is assessed).
  • Explore libraries of case studies online (e.g. Case Consortium at Columbia University, National Center for Case Study Teaching in Science, and the Michigan Sustainability Cases)

Resources

University of Michigan

SEAS – Michigan sustainability cases

Other Resources

Indiana University – Authentic assessments

University of Buffalo – National center for case study teaching in science: Case types & methods

Columbia University – Case consortium

Research

Wiggins, G. (1990). The Case for Authentic Assessment. Practical Assessment, Research, and Evaluation, 2(2). Retrieved May 18, 2020.

Wiggins, G. (1989). A True Test: Toward More Authentic and Equitable Assessment. The Phi Delta Kappan, 70(9), 703-713. Retrieved May 19, 2020.

Williams, J.B. (2004, December 5-8). Creating authentic assessments: A method for the authoring of open book open web examinations. In R. Atkinson, C. McBeath, D. Jonas-Dwyer & R. Phillips (Eds). Beyond the comfort zone: Proceedings of the 21st ASCILITE Conference, 934-937. Perth, Australia.

How this will help

Introduction to social annotation reading tools
How to use social annotation in an online class

We know students may struggle to engage with assigned readings. To help remedy this, social annotation tools offer collaborative opportunities for reading, highlighting, and discussing texts online.

What is Social Annotation?

Social annotation (SA) is a peer-to-peer activity that allows students to collaboratively read, highlight, and discuss texts online. With advance planning (and a little creative thinking), you can create fun and engaging SA activities that allow students to more fully engage with your classroom readings. For example, you can see an image of an annotation-themed bingo activity below. In this post, we will introduce you to Perusall, a specific type of SA tool, and also offer guidelines for using SA in an online class.

Image depicting an activity that uses bingo as a way for students to analyze annotations.

What are Some Social Annotation Tools?

There are a variety of SA software tools you can use in your online classroom. One popular option is Perusall. This tool integrates with learning management systems like Canvas and allows instructors to upload a variety of file types for students to collaboratively annotate and discuss. Perusall also has a suite of analytic tools that gives instructors insight into posts and student engagement.

Why Should I Use Social Annotation?

Social annotation is a great way to get students talking directly to each other within a text. Instead of quoting portions of a text on a discussion board, students can comment on the text within the context of the text itself. You might want to use this tool if your class requires a heavy reading load or if your students are struggling to understand key concepts in texts.

How Can I Use SA in my Online Class?

We’ve listed five guidelines for you to consider when integrating SA into your online class. 

Guideline 1: Make sure it’s the right tool for your class.

Whenever you integrate a new tool into your online class, carefully consider your students’ needs and the learning objectives of the class. While social annotation offers some unique affordances for online texts – such as collaborative highlighting and discussions – it is not a solution for every problem. Perusall is one tool you may choose to use, but choose it with careful consideration.

Guideline 2: Make sure to provide help

Don’t assume all students are comfortable using new technology like SA. When you introduce a new tool to a class, provide some guidance on using its basic features. A quick YouTube tutorial or manual can alleviate a lot of confusion. For some initial guidance, here is a help document for students about using Perusall.

Guideline 3: Set expectations

When you introduce SA to the online classroom, it’s important to set expectations for how this tool will be used. If students don’t know why they are using SA and how it’s benefiting their learning experience, the quality of the annotations may be underwhelming, or the tool may be underutilized. Some expectations you may want to set with SA include:

  • Is annotating a requirement? Or is it optional?
  • How often would you like students to annotate?
  • Are you looking for a certain number of annotations per reading?
  • Should annotations be a certain length? 

When setting expectations, you should also consider what exemplary annotations and discussions look like. Providing examples of high-quality annotations and explaining why they are exemplary may help students in writing their own annotations.

Guideline 4: Remember to keep the dialogue going 

SA tools like Perusall allow students and instructors to dialogue about the text. Pay attention to where your students are commenting, and encourage the conversation by engaging without “telling” the answer. By reading and responding to student annotations, you can build rapport with students, and you can clarify any misunderstandings that might arise. 

Guideline 5: Highlight passages to scaffold learning

Perhaps there is a particular passage you want students to respond to. Or perhaps there’s a passage that’s difficult to understand. When you upload a new reading to Perusall, you might want to highlight certain passages for students and provide additional resources to aid their understanding. Images, videos, and discussion prompts can all be helpful ways to complement class readings. The image below shows how one instructor exemplified a course concept using an image.

Image depicting a screenshot of collaborative online annotation.

Practical Tips

  • Make sure you are ready to support a tool like Perusall. If students are experiencing a lot of technology fatigue, they may not be excited to learn a new tool.
  • Use social annotation tools to build student community. Students often generate interesting and authentic ideas in online discussions. As you are reading student discussions, consider how you might use those ideas in other parts of your class. For instance, if one student exemplifies a course concept with a personal anecdote, you might want to reiterate that anecdote for the whole class in a lecture. Incorporating student ideas into lectures and class activities will build rapport, and it will help personalize the classroom content for students. 
  • Perusall can track the number of posts each student makes. You might want to set the same sort of expectations you would for a discussion board; for example, students must initiate 1-2 comments on a document and comment on at least 2 other students’ posts for full credit.
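The participation rule sketched in the tip above is easy to make concrete. The function and thresholds below are illustrative only; Perusall’s actual exports and scoring settings are not assumed here.

```python
# Hypothetical sketch of a discussion-credit check, mirroring the tip's
# suggestion (initiate 1-2 comments, reply to at least 2 classmates).
# Names and thresholds are illustrative, not Perusall's actual API.

def full_credit(initiated: int, replies: int,
                min_initiated: int = 1, min_replies: int = 2) -> bool:
    """Return True if a student meets both participation thresholds."""
    return initiated >= min_initiated and replies >= min_replies

# Example counts for two students (invented data)
students = {
    "Student A": {"initiated": 2, "replies": 3},
    "Student B": {"initiated": 1, "replies": 1},
}

for name, counts in students.items():
    print(name, full_credit(counts["initiated"], counts["replies"]))
```

Whatever thresholds you choose, state them up front (Guideline 3) so students know what full credit requires.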

Resources

University of Michigan

LSA – Close reading assignments with Perusall

LSA – Collaborative writing with Perusall 

Other Resources

Perusall – Learn more & Support 

Ashland University – A tutorial on using Perusall as a student (it’s specific to Blackboard for the first 30 seconds)

How this will help

Define what is meant by alignment when describing course design
Describe how well aligned courses support student learning
Brainstorm possible assessments that align with your learning objectives

When designing the activities and assessments your students complete, both for practicing new skills and for demonstrating what they’ve learned, make sure that those activities map directly to your learning objectives. The verbs you used in your learning objectives are clues as to what kinds of assessments will tell you, and your learners, whether students have met those objectives.

When you worked on writing learning objectives for your course, you identified what your students would know, be able to do, and feel at the end of the course. This approach to course design, where you start by describing your learners at the end of the course and work back from there to design other course elements, is called backward design. The most popular approach to backward design was developed by Grant Wiggins and Jay McTighe in their book, “Understanding by Design.” Another approach to backward design has been described by L. Dee Fink in “Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses.” Both of these approaches, as well as other backward design models, share three key elements, all of which need to be aligned with one another:

  • Learner centered objectives for the learning experience
  • Assessments that demonstrate student learning, and 
  • Teaching strategies to prepare learners for their assessments.

What Does Alignment in a Course Look Like? 

Backward design is often called a student-centric approach to course design, and one of the best ways to describe a well aligned course is to show what the learning experience looks like from the perspective of a learner. For this depiction, let’s call our learner “Jaime.”

On the first day of the course, Jaime receives a copy of the course syllabus that has clearly articulated learning objectives, which help Jaime picture where they are headed and what objectives they should shoot for. The learning objectives include verbs like “define,” “compare and contrast,” “develop a plan,” and “critique.”

Of course, Jaime is very curious about what kinds of assignments and tests they will have to complete in the course. When they look at the assignment list, they discover that the course has a few quizzes, two relatively short essays, a major project where they have to develop a plan for how a professional might approach a relevant challenge from the field, and another assignment to critique the plans developed by their classmates.

As the semester progresses, Jaime gets the chance to practice some of the skills described in the learning objectives. They have the opportunity to write drafts of their essays and get feedback before submitting the final draft for a grade. The quizzes the professor gives focus on ensuring the students understand the foundational concepts of the course: defining key terms, matching traits of different theories to the appropriate theory. The big project for the course, developing a project plan, has been broken down into its component parts so that there is a scaffold for Jaime and their classmates to build up to such a high-level task.

In short, a well-aligned course gives learners:

  • A clear destination for their learning 
  • Opportunities to practice all of the skills they will have to demonstrate in high stakes assignments
  • Feedback during those practice opportunities so that they have the opportunity to learn from their mistakes prior to being assessed on their learning in a high stakes assignment

One tool that instructors can use to make sure their course is well-aligned is an alignment matrix: for each learning objective, the instructor lists the assignments and assessments that align with it. One example of a spreadsheet designed to help instructors structure their course design is the Fall Blueprint Planning Guide. The tab focused on Activities and Assessments is an alignment matrix that can help you put your course content, activities, and assessments in the context of both the course learning objectives and the point in the semester/course when students will be practicing and demonstrating skills and knowledge.
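An alignment matrix doesn’t have to live in a spreadsheet; the same check works in a few lines of code. The objective and assessment names below are invented for illustration, not taken from the Fall Blueprint Planning Guide.

```python
# Illustrative alignment matrix: each learning objective maps to the
# assessments that measure it. All names here are hypothetical examples.

alignment = {
    "define key terms": ["quiz 1", "quiz 2"],
    "compare and contrast theories": ["essay 1"],
    "develop a project plan": ["major project"],
    "critique a peer's plan": [],  # no aligned assessment yet
}

# Objectives with no aligned assessment are gaps in the course design.
uncovered = [obj for obj, assessments in alignment.items() if not assessments]
print(uncovered)
```

An empty list on an objective is the signal to either add an assessment or reconsider whether the objective belongs in the course.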

Practical Tips

  • When writing your learning objectives, make sure to use active verbs. When you can clearly describe what students need to do to demonstrate their learning, you are more than half way to designing the aligned assessment(s).
  • Using a Bloom’s Taxonomy wheel (like this example from Dr. Ashley Tan) can help instructors generate ideas for different assignments based on the level of knowledge or skill the learning objective is aiming for.

Resources

Other Resources

Dee Fink & Associates – A Working, Self-Study Guide to Designing Courses for Significant Learning 

Research

Fink, L. D. (2013). Creating significant learning experiences, revised and updated: an integrated approach to designing college courses. San Francisco: Jossey-Bass.

Wiggins, G. P., & McTighe, J. (2008). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.


How this will help

Understand why the workload for an online course may feel different
Estimate the workload for students in an online course

How do you know how much work is in a credit hour? For many of us, credit hours indicate how long and how often a class meets in person. What happens when that “classroom” moves online? Regardless of how we are teaching (face-to-face, distance, or online), student engagement and workload should be relatively even across courses with similar credit hour requirements.

A credit hour sets an expectation: students gain a better understanding of how much work the course will entail. The same is true for faculty: a credit hour helps you manage your time, your workload, and the amount of content. You most likely know what a 3-credit course “feels” like, at least for your normal in-person classes. But what about online? Time frequently feels different in online spaces. If you are running synchronous videoconferences in place of lectures, what might that change for the experience of both students and instructors? Let’s consider the next section as an example.
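One way to ground that feeling in numbers: under the common convention of roughly three hours of total effort (in-class plus out-of-class) per credit per week, the total effort for a course is simple arithmetic. The per-credit rate is an assumption; your institution’s definition may differ.

```python
# Back-of-the-envelope credit-hour math, assuming the common convention
# of about 3 hours of total effort per credit per week. This rate is an
# assumption, not a universal rule.

credits = 3
weeks = 15
hours_per_credit_per_week = 3

weekly_effort = credits * hours_per_credit_per_week  # hours/week
total_effort = weekly_effort * weeks                 # hours/semester
print(weekly_effort, total_effort)
```

By this convention a 3-credit, 15-week course represents about 9 hours of student effort per week, or roughly 135 hours in total, regardless of modality.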

Online Credit Hours

Online instruction will feel very different from what you are used to: in the space you occupy with students, in the amount of time the material takes, and in the perceived effort that you all put forth to teach and learn together. This difference will confound the contexts on which we would normally rely to guide us. “Am I assigning too much reading?” you might ask yourself. “Or maybe not enough?”

A 3-credit hour, 15-week course might look like this in each format:

FACE-TO-FACE:
  • two 1.5-hour lectures/week
  • one 1-hour discussion section/week
  • 50-100 pages of reading/week
  • three 5-page papers
  • a midterm with 10 hours of study/prep
  • a final exam that anticipates 20 hours of study/prep

ONLINE:
  • 4-5 short videos on key content/week
  • 2 discussion postings + 3 responses/week
  • one 30-45 minute videoconference/week
  • 50-100 pages of reading/week
  • three 5-page papers
  • a midterm with 10 hours of study/prep
  • a final take-home exam that anticipates 20 hours of study/prep

The primary difference is that instead of focusing on “seat time” (how often and how long students are in the physical classroom), online learning focuses on total course effort. Course effort recognizes that some activities (like asynchronous discussions) require time to engage with the material and create, as opposed to face-to-face classes where only presence is counted. If you compare the two formats, there isn’t a lot of difference in the assignments given. Often, the largest difference is that there are fewer lectures and more engagement through class discussions. A class discussion in a face-to-face class is bounded by seat time; an equivalent asynchronous online discussion might take a student 2-3 times as long. Students might first compose an original post (essentially a 250-500 word essay), then read and respond to several of their classmates’ posts. Recognizing these nuances can help you and your students set appropriate expectations for work in the course, leading to a more positive experience for all involved.

How Do I Make These Time Estimates?

We recommend the Workload Estimator, a widely used tool from Rice University, to help estimate how much time students should spend working based on readings and assignments. This tool takes into account not only how much reading and writing is assigned but also what type. For example, it takes less time to reflect than it does to synthesize research. For class discussion postings, estimate how long you would like the post to be, and treat it as a narrative writing piece for purposes of estimating time. These guidelines help create baseline expectations, and can be refined as you develop your own experience in these spaces.
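The estimator’s core idea can be sketched as a simple rate calculation. The per-page and per-word rates below are illustrative assumptions for the sketch, not the Rice calculator’s actual values.

```python
# A simplified workload estimate in the spirit of the Rice Workload
# Estimator. All rates below are illustrative assumptions.

def weekly_hours(reading_pages: float, pages_per_hour: float,
                 writing_words: float, words_per_hour: float,
                 other_hours: float = 0.0) -> float:
    """Estimate total weekly student workload in hours."""
    return (reading_pages / pages_per_hour
            + writing_words / words_per_hour
            + other_hours)

# e.g. 75 pages of reading at an assumed 25 pages/hour, a 400-word
# discussion post at an assumed 200 words/hour of narrative writing,
# plus 1 hour of video lectures
estimate = weekly_hours(75, 25, 400, 200, other_hours=1.0)
print(round(estimate, 1))  # 6.0
```

Comparing an estimate like this against the expected effort for the course’s credit hours shows quickly whether a week is over- or under-loaded.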

Practical Tips

  • Filming a lecture can be straightforward for counting time, but there might be other time to account for, including time spent reviewing any potential notes or slide decks that are shared.
  • Changing between many tasks has transition time that may not be accounted for, but that will have an impact on your students. Be mindful of administrative tasks that you might inadvertently give to your students while changing modality. For example, you may send more emails in an online course than you did in your face-to-face class, which is also part of instruction.
  • Learning (and teaching!) online requires good time management. Accurate estimates of how long different tasks will take will help your students plan their online study habits, benefiting both students and faculty.

Resources

University of Michigan

CAI – Keep complying

Other Resources

Rice University – Workload calculator 

Rochester Institute of Technology –  Calculating time on task in online courses 

Loyola University Maryland – Online calculator users guide