
What role can Gameful Pedagogy play in online courses?

COVID-19 caught everyone off guard in 2020. Suddenly, all classes had to be held online and instructors and students had to react quickly with minimal help. With time to reflect on these experiences, faculty ask themselves what methods are available to keep students engaged and motivated in an online or virtual environment.

At the Center for Academic Innovation, gameful pedagogy is one approach to increasing student engagement. This method of course design takes inspiration from how good games function and applies that to the design of learning environments. 

One key goal of gameful pedagogy, as one might guess, is leveraging student motivation. To achieve that, course designers draw on elements of Self-Determination Theory, or SDT for short. This theory centers the power of intrinsic motivation as a driver of behavior. It sits on three primary pillars: autonomy (the power of choice a learner has in their learning experience), competency (a feeling of accomplishment derived from completing a challenge), and belongingness (a feeling of being included and heard by one’s environment and the people in it) (Deci & Ryan, 2000). 

Yet, gameful pedagogy isn’t just about SDT. Practitioners also believe in an additive point-based grading system instead of traditional grading. In traditional deductive, percentage-based grading, learners start at 100% and have points deducted as they learn, which runs counter to what learning is about. 

In a gameful course, learners are treated as novices when they first start a learning journey: they begin at zero and work their way up to their goals. Gameful design also gives learners the freedom to fail. From a gameful point of view, it is unfair to expect learners to be “perfect” in learning environments, because mistakes are common in learning and are great growth opportunities. Gameful learning environments therefore leave space for learners to explore and offer chances to make up for mistakes. This freedom does not, however, mean creating an out-of-control environment. Educators can still apply limitations by assigning different point values, requiring the completion of certain tasks to unlock others, and so on, to ensure that students are working toward the learning goals.

All of these approaches and more make up gameful pedagogy, and this course design method has been used in a wide range of classes, from higher education down to K-12. However, most use cases occurred in person before the 2020 COVID outbreak. Does gameful pedagogy also work in online environments?
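As a rough illustration of the mechanics described above (not any actual course system), additive grading and task unlocks can be sketched in a few lines of code. The point values, grade thresholds, and task names here are hypothetical:

```python
# Illustrative sketch of additive, points-based grading with unlocks.
# All thresholds, point values, and task names are hypothetical.

GRADE_THRESHOLDS = [  # (minimum points, grade), checked from highest down
    (900, "A"),
    (800, "B"),
    (700, "C"),
]

def grade_for(points: int) -> str:
    """Map an accumulated point total to a grade; points only ever go up."""
    for minimum, grade in GRADE_THRESHOLDS:
        if points >= minimum:
            return grade
    return "In progress"

def unlocked(completed: set[str], requirements: dict[str, set[str]], task: str) -> bool:
    """A task is unlocked once all of its prerequisite tasks are completed."""
    return requirements.get(task, set()) <= completed

completed = {"intro quiz", "case study 1"}
requirements = {"final project": {"intro quiz", "case study 1"}}

print(grade_for(850))                                      # "B"
print(unlocked(completed, requirements, "final project"))  # True
```

The key design point is that `grade_for` never subtracts: a learner who attempts more work can only move up, which is the "freedom to fail" in code form.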

That turns out to be a great question for Pete Bodary, clinical associate professor of applied exercise science and movement science in the School of Kinesiology.  He has taught gameful courses for several years, including MOVESCI 241. This course teaches body mass regulation assessments, principles, and strategies. It is constructed with an additive point-based grading scheme, all-optional assignments (a student has the autonomy to complete any combination of assignments to get to their desired grade/goal), a strong supportive network, and real-world relevant topics (diabetes, disordered eating, weight control, supplements and safety, etc.). 

To maintain all assignments as optional while ensuring that students are on track to the learning objectives, Bodary assigns significantly more points to certain assignments to encourage completion. Some assignments include personal dietary intake and physical activity tracking, case studies, participation and reflections on dietary and physical challenges, and more. 

In Winter 2023, he decided to give students more freedom to engage with the class lectures on top of the existing setup. Students could choose from three distinct sections: the in-person section, the synchronous virtual section, or the asynchronous virtual section. In the in-person section, students were required to attend lectures in person. In the synchronous virtual section, students could participate online as lectures were live-streamed. The asynchronous virtual section allowed students the freedom to watch lecture recordings at their convenience without the obligation to attend lectures in real time. 

Did students in different sections perform differently in this course? The short answer is no, not significantly.

“Those who are remote do not have the ease of popping out a question, [meaning the ability to raise their hand and spontaneously ask questions], so that is one difference to consider. However, we maintain a pretty active [asynchronous] Q/A space. I don’t believe that they ‘performed’ differently,” Bodary said.   

Students engage with the course content differently, but they are all motivated and learning in their own way. In fact, to find out students’ motivations in this course, Bodary deployed a U-M Maizey project. U-M Maizey is a generative AI customization tool that allows faculty, staff, and students to build their own U-M GPT chatbot trained on a custom dataset. Bodary set up Maizey in the Fall 2023 term for the same course with a similar structure and prompted it with the question: What is the primary motivation of students? 

By evaluating students’ activity data, Maizey summarized that students are primarily motivated by finding course materials relatable and beneficial to improving their own and their loved ones’ health and well-being, by connecting issues they encounter in their daily lives to class content, and by applying course content to real-world problems. 

Looking at this example, three key characteristics emerge: controlled freedom for students to choose how to engage with the course, opportunities for students to make personal connections with course content, and possibilities for students to apply course content in real-world situations. 

Tying these characteristics back to gameful pedagogy, there is alignment between them and the three components of SDT – autonomy, belongingness, and competency. Furthermore, the additive grading system and all-optional assignment design support student exploration and agency to choose assignments and coursework.  The course format, whether in-person or online, didn’t impact students’ motivation. Instead, the fact that students can choose their own way to participate in the class may motivate them even more. 

What’s important here isn’t modality (online, in person, or asynchronous) but rather the content and design of the course. The success of MOVESCI 241 hinges on a carefully designed course where students can successfully meet the learning goals regardless of how they engage. The design of MOVESCI 241 is gameful, but not all gameful courses are designed this way. If you want to use gameful pedagogy to increase engagement in your course, you can start with these steps. You can also check out GradeCraft, a learning management system (LMS) built at the center to support gameful courses. Some key features of GradeCraft that make it a perfect companion for gameful courses are the additive grading system, mechanisms for tracking tangible progress (points planner, levels, unlocks, and badges), and functions for flexibility (highly tailorable for both instructors and students). Finally, if you want to learn more about gameful pedagogy or GradeCraft, please email us at [email protected], and staff will be happy to set up a conversation with you.

References:

Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227-268.

Echoes of “Can we have a study guide?” still reverberate through the virtual classrooms, even as summer takes hold and the allure of relaxation sets in. Study guides offer a temporary solution to students’ hunger for knowledge, providing them with the fish they need to satisfy their immediate needs. This approach, however, creates a cycle of dependency, requiring another fix before the next test opens. This is not the way. Instead of spoon-feeding, students should be taught to fish.

Though study guides have their merits, their direct impact on learning is not always evident. Tests can be a significant source of stress for students, which in turn hampers their performance. Study guides can help alleviate this anxiety and improve exam scores (Dickson, Miller, & Devoley, 2005), but they don’t necessarily foster deep, long-term learning. If the goal is to guide students’ online study habits before a test, then they should receive guidance not only on what to study but also on how to study effectively. Problem Roulette is the way.

Problem Roulette is an invaluable personalized online learning tool that directs students’ attention to the study skills that work best for them. It offers a collection of previous test items for students to practice with and, starting this Fall 2023, will begin providing tailored study tips based on proven theory and algorithms designed to enhance test performance. In essence, Problem Roulette will not only feed but also teach students to fish. It will give them the confidence boost they crave through exposure to test-like items, while teaching them personally relevant study skills that can be applied to new situations. 

How will Problem Roulette work in online learning environments? In short, it will harness the power of gameful learning. As students engage with practice test items, the system will collect statistics on their performance, which will then be visualized and presented on a student-facing dashboard. This feedback will include information on the number of problems completed and the number of consecutive correct answers. These metrics will be compared with predefined volume and streak goals established through previous research (Black et al., 2023), known to maximize course performance. Consequently, the game for students becomes achieving their target volume and streak goals, which intrinsically incentivizes their study. To attain these goals, however, they must study effectively by consistently answering questions correctly in a row. As students strive to meet their volume and streak targets, they will simultaneously discover the study habits that yield the best results for them individually.
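The volume-and-streak mechanic described above can be sketched as follows. This is a hedged illustration of the logic, not Problem Roulette's actual implementation, and the goal values are invented; real targets would come from the research cited above:

```python
# Hypothetical sketch of volume/streak feedback (not Problem Roulette's code).
# `answers` is a chronological record of attempts: True = correct, False = incorrect.

def study_stats(answers: list[bool]) -> dict:
    """Compute volume (problems attempted) and the best streak of consecutive
    correct answers from a chronological answer record."""
    best = current = 0
    for correct in answers:
        current = current + 1 if correct else 0  # a miss resets the streak
        best = max(best, current)
    return {"volume": len(answers), "best_streak": best}

# Invented goals for illustration only.
VOLUME_GOAL = 80
STREAK_GOAL = 5

answers = [True, True, False, True, True, True, True, True, False, True]
stats = study_stats(answers)
print(stats)  # {'volume': 10, 'best_streak': 5}
print(stats["volume"] >= VOLUME_GOAL, stats["best_streak"] >= STREAK_GOAL)
```

Note how the two metrics reward different behaviors: volume rewards showing up and practicing, while a streak can only be built by answering correctly again and again, which pushes students toward effective study rather than rapid guessing.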

In the realm of online teaching, Problem Roulette emerges as an empowering force, equipping students with the skills they need to become self-sufficient learners. It shifts the focus from mere information consumption to active engagement, encouraging students to take charge of their own learning journey. By embracing Problem Roulette, educators can foster a generation of online students who not only excel academically but also possess the essential skills to adapt, learn, and thrive in the digital age.

The rapid shift to emergency remote instruction during COVID-19 left many instructors questioning how best to assess students, even well after classes resumed. Concerns about academic integrity left some wondering if using online tests made students more likely to violate academic integrity rules. Online test proctoring made news in many higher education settings as a way to ensure academic integrity. However, others have argued it is a violation of students’ privacy.

What is Online Proctoring?

You may be familiar with proctoring in a face-to-face or residential setting, where a designated authority oversees an exam in a controlled, specified environment. Similarly, online proctoring is a service in which either a person or an artificial intelligence algorithm monitors a learner’s environment during an online exam. However, the environment an online proctor oversees is a learner’s personal environment. This monitoring can take the form of video recording; logging of students’ keystrokes, browser data, and location data; and even collection of biometric data like test-takers’ eye movements.

Advocates of online proctoring cite concerns about academic integrity in the online environment as a reason to implement proctoring (Dendir & Maxwell, 2020). Some even suggest that students do not mind the additional security because they believe it supports the integrity of the test and/or degree.

Online proctoring in the media and research

While online proctoring for academic integrity may seem reasonable, there have been questions about monitoring a learner’s home environment, which has the potential for harm. Online proctoring can be perceived as invasive by students, as personal information about one’s location and physical data is recorded that is not otherwise necessary for an exam. Several institutions, like U-M Dearborn, the University of California Berkeley, the University of Illinois, and the University of Oregon, have placed limitations on, if not discontinued altogether, the use of third-party proctoring services. Institutions cite issues of accessibility, bias, concerns about student privacy, and institutional culture as reasons to discourage third-party proctoring. Student and faculty groups have publicly advocated for institutions to discontinue security features like locked-down browsers and third-party monitoring. At the University of Michigan Ann Arbor, third-party proctoring generally involves a separate fee and may be expensive, but it is still available through vendor partners.

Most of the academic research involving the use of online proctoring has focused on academic integrity, rather than the impact of proctoring itself. Wuthisatian (2020) found lower student achievement in online proctored exams compared to the same exam proctored onsite. Those students who were the least familiar with technology and the requirements for setting it up performed the most poorly. In addition, students who have test anxiety may experience even more anxiety in certain proctoring situations (Woldeab & Brothen, 2019). With further research, we may find that the problem is not necessarily proctoring itself, but rather the burden and effort that technology imposes on students taking an online exam.

Problems with internet connections or the home testing environment may be beyond students’ control. The inability to create a “proper” testing environment has raised students’ concerns about being unjustly accused of cheating (Meulmeester, Dubois, Krommenhoek-van Es, de Jong, & Langers, 2021).

What are the alternatives to proctoring?

Ultimately, only the instructor can determine whether proctoring is the right choice for a class and sometimes proctoring may be the best choice for your discipline, field, or specific assessment. Particularly in a remote setting, it may feel like the integrity of your assessment (particularly a test) is beyond your control, so proctoring may feel like the only option. However, there are alternatives to proctoring exams, from using exam/quiz security measures, to re-thinking a course’s assessment strategy to deemphasize exams. If you are concerned about how and what you are assessing, the Center for Research on Learning and Teaching provides resources and consultations to discuss academic integrity and different methods of assessment. We also recommend CAI’s Faculty Proctoring document if you have questions about proctoring.


How this will help:

Discover tools to help plan an online course using design strategies

The basics

If you do any search for “online course design” or read any book on online design, just about every resource emphasizes the importance of planning. However, it’s easy to feel overwhelmed if you are considering moving a course online, even if you have support from others. Many instructors new to online teaching struggle to engage with planning an online course on the recommended timeline (several weeks or months in advance). 

If you need help planning, this comprehensive course planning blueprint tool can help you reflect and guide your design process (want something simpler? Keep reading for additional options).

The blueprint is a spreadsheet rooted in a backward design process. While by no means comprehensive (you may still have more work to do if media or instructional designers are involved), it can give you a structure for planning your online course. It can also be a place to have conversations with others with your strategy already mapped out, cutting down on orientation time to your course. Feel free to make a copy of it for your own use.

Our planning blueprint is made up of six parts:

  1. Course information
    Course name, number of students, etc.
  2. Course goals
    4-5 goals for the course overall – not specific to particular lessons. 
  3. Learner analysis
    Some questions to reflect on what your learners might be bringing to the class
  4. Learning Objectives and Content
    Breakdown of learning objectives by week, and what content is needed to support it
  5. Activities and assessments
    What are the assessments and activities that support your learning objectives?
  6. Instructor engagement plan
    What will your plan be to engage with students each week?
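If it helps to see the blueprint's shape outside of a spreadsheet, the six parts above could be mirrored by a simple data structure. This is only an illustration of how the pieces relate; the field names and example values are our own invention, not part of the blueprint tool:

```python
from dataclasses import dataclass, field

# Illustrative mirror of the six-part blueprint; names are invented for this sketch.

@dataclass
class WeeklyPlan:
    week: int
    learning_objectives: list[str]          # part 4: objectives broken down by week
    content: list[str]                      # part 4: content supporting the objectives
    activities_and_assessments: list[str]   # part 5
    engagement_plan: str                    # part 6: how the instructor engages that week

@dataclass
class CourseBlueprint:
    course_name: str                        # part 1: course information
    num_students: int
    course_goals: list[str]                 # part 2: 4-5 overall goals
    learner_analysis: list[str]             # part 3: who the learners are
    weeks: list[WeeklyPlan] = field(default_factory=list)

blueprint = CourseBlueprint(
    course_name="Example 101",
    num_students=40,
    course_goals=["Explain core concepts", "Apply concepts to real problems"],
    learner_analysis=["Mostly first-year students", "Varied technical background"],
)
blueprint.weeks.append(WeeklyPlan(
    week=1,
    learning_objectives=["Define key terms"],
    content=["Ch. 1 reading"],
    activities_and_assessments=["Intro quiz"],
    engagement_plan="Welcome video and forum replies",
))
print(blueprint.course_name, len(blueprint.weeks))
```

Whatever form you use, the value is the same: goals, learners, weekly objectives, assessments, and an engagement plan are laid out in one place before you build anything.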

There are other tools available to help you plan, so feel free to find one that may align with your teaching. Ultimately, most design tools are going to walk you through a similar process, so what is most important is to find a tool that resonates with your teaching style.

Resources

University of Michigan

CAI – Online Blueprint Planning Guide

How this will help:

Define the term authentic assessment.
Describe the value of authentic assessments.
Brainstorm ideas for authentic assessments that might work in your online course.

The basics

Multiple choice questions often can’t tell an instructor everything they want to know about students’ learning. Thinking about what you, as an instructor, want to measure about student learning can help you design creative and authentic assessments to align with your learning objectives.

Assessment is a term that tends to have a lot of baggage around it in education, and it can mean a couple of different things: measuring the efficacy of a degree program’s curriculum or measuring a student’s understanding of course material, for example. This module focuses on different approaches to assessing student learning.

Multiple choice tests are one of the more common techniques, in higher education, for measuring a student’s understanding of a concept. With many multiple choice tests, even really well-designed ones, the data most instructors are getting is how good their students are at answering multiple choice questions, not necessarily a measure of how well students understand course material. 

Essays are another common assessment technique deployed in higher education. Essays can demonstrate different kinds and levels of learning than multiple choice type exams, but they are usually written with a faculty/instructor audience in mind and don’t necessarily reflect the skills a course is designed to teach.

Authentic assessment is a term, coined in part by Grant Wiggins, for assessments that are tightly aligned with the learning objectives of a course or learning experience and have learners working on “real world” problems. Authentic assessments usually have more than one “correct” answer but can be evaluated using a rubric that provides assurance that the data obtained from the assessment is valid.

What makes an assessment authentic?

In his essay, “The Case for Authentic Assessment”, Wiggins compares authentic assessments to traditional standardized tests. Although that direct comparison isn’t necessarily relevant in most higher education courses, we can pull some key traits of authentic assessments from that comparison. Authentic assessments

  • Require students to perform, in a real world (or simulated real-world) context, all of the tasks an adult or professional would engage in to apply what they’ve learned.
  • Involve open-ended and ill-structured problems.
  • Require learners to adopt a role to “rehearse for the complex ambiguities of the ‘game’ of adult and professional life.”
  • Require learners to justify their answer as well as the process they used to decide on that answer.
  • Are realistic, in that they aren’t timed and allow learners to use resources that would be available to them in a real-world setting.

What are the advantages of using authentic assessments?

Using authentic assessments can require more effort and planning on the part of the instructor. Despite that increase in effort, both learners and instructors can benefit when a course uses authentic assessments. One of the benefits that applies to both learners and instructors is the increase in interest and engagement in the task. For instructors, it is much more interesting to explore and evaluate an array of different answers and approaches (and can be educational for the instructor, too). Learners have more motivation to work on the assessment: it is novel, creates a direct connection between the assessment and the “real” world, and clearly demonstrates to the learner how much they’ve learned and where they still have room to grow (i.e. authentic assessments are much more transparent to the learner).

Other benefits for instructors include an increased awareness of what students’ strengths and areas for growth are (both with respect to individual students and the collective), and an opportunity to connect with each individual learner. Since authentic assessments are directly tied to learning objectives, an instructor knows, with less ambiguity, what objectives students are meeting and which ones they are not. With authentic assessments, instructors get to connect with learners as they see the unique approaches each individual learner uses to solve the ill-structured problem. Many instructors teaching online value every opportunity to connect with learners they may never interact with face-to-face.

In addition to being more engaging, authentic assessments are usually more equitable for the diverse learners in a course. The design and selection of multiple choice questions can include implicit biases that disadvantage some learners. Because authentic assessments are more transparent, don’t have a single right answer and require learners to justify their process and their answer, every learner has an opportunity to ask questions, identify and use resources, and “make their case” as to how their answer demonstrates their learning.

Examples of Authentic Assessments

Because authentic assessments are tied directly to the learning objectives of a course, program, or discipline, the examples provided here are of general categories/types of authentic assessments.

  • Case studies
  • Simulations (many role playing simulations can be used online)
  • Writing to a real audience – for example, a policy brief that might be shared with a legislator, or writing a pamphlet geared toward a lay audience
  • Community-partnered research or project development

Grading Authentic Assessments

The key to grading authentic assessments is to have a rubric that keeps the grader’s focus on the most important standards you want learners to meet. The Online Teaching at Michigan site has a guide on creating and using rubrics. 

Practical tips

  • The first step to creating an authentic assessment is to write learning objectives that describe how learners will demonstrate their learning
  • If you typically use essays for assessing student learning, frame the writing assignment for an audience other than the instructor/instructional team, and ideally, find individuals who are part of that audience to provide feedback to the learners
  • Have students reflect on their own academic performance on each assessment. Having them identify their own misconceptions and mistakes enhances their learning, helps to develop their metacognitive abilities, and is representative of what a professional must do when they err.
  • Have students create a lightweight portfolio where they reflect on what they learned from each assignment (either through making mistakes or by engaging in the learning that occurs when someone is assessed).
  • Explore libraries of case studies online (e.g. Case Consortium at Columbia University, National Center for Case Study Teaching in Science, and the Michigan Sustainability Cases)

Resources

University of Michigan

SEAS- Michigan sustainability cases 

Other Resources

Indiana University – Authentic assessments

University of Buffalo – National center for case study teaching in science: Case types & methods

Columbia University – Case consortium

Research

Wiggins, Grant (1990) “The Case for Authentic Assessment,” Practical Assessment, Research, and Evaluation: Vol. 2 , Article 2. Retrieved May 18, 2020, from https://scholarworks.umass.edu/pare/vol2/iss1/2 

Wiggins, G. (1989). A True Test: Toward More Authentic and Equitable Assessment. The Phi Delta Kappan, 70(9), 703-713. Retrieved May 19, 2020, from www.jstor.org/stable/20404004

Williams, J.B. (2004). Creating authentic assessments: A method for the authoring of open book open web examinations. In R. Atkinson, C. McBeath, D. Jonas-Dwyer & R. Phillips (Eds), Beyond the comfort zone: Proceedings of the 21st ASCILITE Conference (pp. 934-937). Perth, 5-8 December. http://www.ascilite.org.au/conferences/perth04/procs/williams.html 


How this will help:

Introduction to social annotation reading tools.
How to use social annotation in an online class.

The basics

We know students may struggle to engage with assigned readings. To help remedy this, social annotation tools offer collaborative opportunities for reading, highlighting, and discussing texts online.

What is social annotation?

Social annotation (SA) is a peer-to-peer activity that allows students to collaboratively read, highlight, and discuss texts online. With advance planning (and a little creative thinking), you can create fun and engaging SA activities that allow students to engage more fully with your classroom readings. For example, you can see an image of an annotation-themed bingo activity below. In this post, we will introduce you to Perusall, a specific SA tool, and offer guidelines for using SA in an online class.

Image depicting an activity that uses bingo as a way for students to analyze annotations.

What are some social annotation tools?

There are a variety of SA software tools you can use in your online classroom. One popular option is Perusall. This tool integrates with learning management systems like Canvas, and it allows instructors to upload a variety of file types for students to collaboratively annotate and discuss. Perusall also has a suite of analytic tools that gives instructors insight into posts and student engagement.

Why should I use social annotation?

Social annotation is a great way to get students directly talking to each other within a text. Instead of using a discussion board and quoting portions of a text, students can comment directly on the text within the context of the text. You might want to use this tool if your class requires a heavy reading load or if your students are struggling to understand key concepts in texts.

How can I use SA in my online class?

We’ve listed five guidelines for you to consider when integrating SA into your online class. 

Guideline 1: Make sure it’s the right tool for your class.

Whenever you integrate a new tool into your online class, carefully consider your students’ needs and the learning objectives of the class. While social annotation offers some unique affordances for online texts – such as collaborative highlighting and discussion – it is not a solution for every problem. Perusall is a tool you may choose to use, but choose it with careful consideration.

Guideline 2: Make sure to provide help

Don’t assume all students are comfortable using new technology like SA. When you introduce a new tool to a class, provide some guidance on how to use its basic features. A quick YouTube tutorial or manual can alleviate a lot of confusion. For some initial guidance, here is a help document for students about using Perusall. 

Guideline 3: Set expectations

When you introduce SA to the online classroom, it’s important to set expectations for how this tool will be used. If students don’t know why they are using SA and how it’s benefiting their learning experience, the quality of the annotations may be underwhelming, or the tool may be underutilized. Some expectations you may want to set with SA include:

  • Is annotating a requirement? Or is it optional?
  • How often would you like students to annotate?
  • Are you looking for a certain number of annotations per reading?
  • Should annotations be a certain length? 

When setting expectations, you should also consider what exemplary annotations and discussions look like. Providing examples of high-quality annotations and explaining why they are exemplary may help students in writing their own annotations.

Guideline 4: Remember to keep the dialogue going 

SA tools like Perusall allow students and instructors to dialogue about the text. Pay attention to where your students are commenting, and encourage the conversation by engaging without “telling” the answer. By reading and responding to student annotations, you can build rapport with students, and you can clarify any misunderstandings that might arise. 

Guideline 5: Highlight passages to scaffold learning

Perhaps there is a particular passage you want students to respond to. Or perhaps there’s a passage that’s difficult to understand. When you upload a new reading to Perusall, you might want to highlight certain passages for students and provide additional resources to aid their understanding. Images, videos, and discussion prompts can all be helpful ways to complement class readings. The image below shows how one instructor exemplified a course concept using an image.

Image depicting a screenshot of collaborative online annotation.

Practical tips

  • Make sure you are ready to support a tool like Perusall. If students are experiencing a lot of technology fatigue, they may not be excited to learn a new tool.
  • Use social annotation tools to build student community. Students often generate interesting and authentic ideas in online discussions. As you are reading student discussions, consider how you might use those ideas in other parts of your class. For instance, if one student exemplifies a course concept with a personal anecdote, you might want to reiterate that anecdote for the whole class in a lecture. Incorporating student ideas into lectures and class activities will build rapport, and it will help personalize the classroom content for students. 
  • Perusall can track the number of posts each student makes. You might want to set the same sort of expectations as for a discussion board; for example, students must initiate 1-2 comments on a document and comment on at least two other students’ posts for full credit.
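The discussion-board-style expectation in the last tip could be expressed as a simple credit check. This is an illustrative sketch with hypothetical thresholds, not Perusall's actual analytics or grading logic:

```python
# Illustrative participation-credit check (hypothetical thresholds,
# not Perusall's actual analytics).

def full_credit(initiated: int, replies: int,
                min_initiated: int = 1, min_replies: int = 2) -> bool:
    """Full credit if a student initiated enough comments on the document
    and replied to enough other students' posts."""
    return initiated >= min_initiated and replies >= min_replies

print(full_credit(initiated=2, replies=2))  # True
print(full_credit(initiated=1, replies=1))  # False: not enough replies
```

Making the rule this explicit to students, whatever the actual numbers, is what turns a vague "participate" instruction into an expectation they can meet.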

Resources

University of Michigan

LSA – Close reading assignments with Perusall

LSA – Collaborative writing with Perusall 

Other Resources

Perusall – Learn more & Support 

Ashland University – A tutorial on using Perusall as a student (it’s specific to Blackboard for the first 30 seconds) 


How this will help

Define what is meant by alignment when describing course design.
Describe how well aligned courses support student learning.
Brainstorm possible assessments that align with your learning objectives.

The basics

When designing the activities and assessments your students complete, both for practicing new skills and for demonstrating what they’ve learned, make sure that those activities map directly to your learning objectives. The verbs you used in your learning objectives are clues as to what kinds of assessments will tell you, and your learners, whether students have met those objectives.

When you worked on writing learning objectives for your course, you identified what your students would know, be able to do, and feel at the end of the course. This approach to course design, where you start by describing your learners at the end of the course and move back from there to design other course elements, is called backward design. The most popular approach to backward design was developed by Grant Wiggins and Jay McTighe in their book, “Understanding by Design.” Another approach to backward design has been described by L. Dee Fink in “Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses.” Both of these approaches, as well as other backward design models, share three key elements, all of which need to be aligned with one another:

  • Learner centered objectives for the learning experience
  • Assessments that demonstrate student learning, and 
  • Teaching strategies to prepare learners for their assessments.

What does alignment in a course look like? 

Backward design is often called a student-centric approach to course design, and one of the best ways to describe a well-aligned course is to show what the learning experience looks like from the perspective of a learner. For this depiction, let’s call our learner “Jaime.”

On the first day of the course, Jaime receives a copy of the course syllabus that has clearly articulated learning objectives, which help Jaime picture where they are headed and what objectives they should shoot for. The learning objectives include verbs like, “define,”  “compare and contrast,” “develop a plan,” and “critique.” 

Of course, Jaime is very curious about what kinds of assignments and tests they will have to complete in the course. When they look at the assignment list, they discover that the course has a few quizzes, two relatively short essays, a major project where they have to develop a plan for how a professional might approach a relevant challenge from the field, and another assignment to critique the plans developed by their classmates.

As the semester progresses, Jaime gets the chance to practice some of the skills described in the learning objectives. They have the opportunity to write drafts of their essays and get feedback before submitting the final draft for a grade. The quizzes the professor gives focus on ensuring the students understand the foundational concepts of the course: defining key terms, matching traits of different theories to the appropriate theory. The big project for the course, developing a project plan, has been broken down into its component parts so that there is a scaffold for Jaime and their classmates to build up to such a high-level task.

In short, a well-aligned course gives learners:

  • A clear destination for their learning 
  • Opportunities to practice all of the skills they will have to demonstrate in high stakes assignments
  • Feedback during those practice opportunities so that they have the opportunity to learn from their mistakes prior to being assessed on their learning in a high stakes assignment

One tool that instructors can use to make sure their course is well-aligned is an alignment matrix. In an alignment matrix, the instructor lists each assignment and assessment that links to each learning objective. One example of a spreadsheet designed to help instructors structure their course design is the Fall Blueprint Planning Guide. The tab focused on Activities and Assessments is an alignment matrix that can help you put your course content, activities and assessments in context of both the course learning objectives and the point in the semester/course when students will be practicing and demonstrating skills and knowledge.
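As a rough illustration, an alignment matrix can be thought of as a simple mapping from learning objectives to assessments, which also makes gaps easy to spot. The objective and assessment names below are illustrative assumptions, not content from the Fall Blueprint Planning Guide.

```python
# Sketch of an alignment matrix: each learning objective mapped to the
# assessments that practice or demonstrate it. Names are illustrative.

alignment = {
    "Define key terms": ["Weekly quizzes"],
    "Compare and contrast theories": ["Weekly quizzes", "Essay 1"],
    "Develop a project plan": ["Scaffolded milestones", "Final project"],
    "Critique a peer's plan": ["Peer-review assignment"],
}

all_assessments = {"Weekly quizzes", "Essay 1", "Essay 2",
                   "Scaffolded milestones", "Final project",
                   "Peer-review assignment"}

# Objectives with no aligned assessment signal a gap in the course;
# assessments mapped to no objective may be busywork.
uncovered_objectives = [obj for obj, tasks in alignment.items() if not tasks]
mapped = {task for tasks in alignment.values() for task in tasks}
orphan_assessments = all_assessments - mapped

print(uncovered_objectives)         # prints []
print(sorted(orphan_assessments))   # prints ['Essay 2']
```

In this hypothetical course, every objective has at least one assessment, but “Essay 2” maps to no objective, which is exactly the kind of misalignment the matrix is designed to surface.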

Practical tips

  • When writing your learning objectives, make sure to use active verbs. When you can clearly describe what students need to do to demonstrate their learning, you are more than halfway to designing the aligned assessment(s).
  • Using a Bloom’s Taxonomy wheel (like this example from Dr. Ashley Tan) can help instructors generate ideas for different assignments based on the level of knowledge or skill the learning objective is aiming for.

Resources

Other Resources

Dee Fink & Associates – A Working, Self-Study Guide to Designing Courses for Significant Learning 

Research

Fink, L. D. (2013). Creating significant learning experiences, revised and updated: an integrated approach to designing college courses. San Francisco: Jossey-Bass.

Wiggins, G. P., & McTighe, J. (2008). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.


How this will help:

Understand why the workload for an online course may feel different.
Estimate the workload for students in an online course.

The basics

How do you know how much work is in a credit hour? For many of us, credit hours indicate how long and how often a class meets in person. What happens when that “classroom” moves online? Regardless of how we are teaching (face-to-face, distance, or online), student engagement and workload should be relatively even across courses with similar credit hour requirements.

A credit hour sets an expectation: students gain a better understanding of how much work the course will entail. The same is true for faculty: a credit hour helps you manage your time, your workload, and the amount of content. You most likely know what a 3 credit course “feels” like. At least, you likely do with your normal in-person classes. But what about online? Time frequently feels different in online spaces. If you are running synchronous videoconferences in place of lectures, what might that change for the experience of both students and instructors? Let’s consider the next section as an example.

How might credit hours look in an online course?

Online instruction will feel very different from what you are used to: in the space you occupy with students, in the amount of time the material takes, and in the perceived effort that you all put forth to teach and learn together. This difference will confound the contexts on which we would normally rely to guide us. “Am I assigning too much reading?” you might ask yourself, “Or maybe not enough?”

A 3-credit hour, 15-week course might look like this in each format:

Face-to-face:

  • two 1.5-hour lectures/week
  • one 1-hour discussion section/week
  • 50-100 pages of reading/week
  • three 5-page papers
  • a midterm with 10 hours of study/prep
  • a final exam that anticipates 20 hours of study/prep

Online:

  • 4-5 short videos on key content/week
  • 2 discussion postings + 3 responses/week
  • one 30-45 minute videoconference/week
  • 50-100 pages of reading/week
  • three 5-page papers
  • a midterm with 10 hours of study/prep
  • a final take-home exam that anticipates 20 hours of study/prep

The primary difference is that instead of focusing on “seat time” (how often and how long students are in the physical classroom), online learning focuses on total course effort. Course effort recognizes that some activities (like asynchronous discussions) require time to engage with the material and create, as opposed to face-to-face classes where only presence is counted. If you compare the two formats, there isn’t a lot of difference in terms of assignments given. Often, the largest difference will be that there are fewer lectures and more engagement through class discussions. A class discussion in a face-to-face class is bounded by seat time. An equivalent discussion in an online format might take a student 2-3 times as long, as asynchronous online discussions often take longer than in-person ones. Students might first compose an original post (essentially a 250-500 word essay), then read and respond to several of their classmates’ posts. Recognizing these nuances can help you and your students set appropriate expectations for the work in the course, leading to a more positive experience for all involved.

How do I make these time estimates?

We recommend the Workload Estimator from Rice University (https://cte.rice.edu/workload) to help estimate how much time students should be spending on readings and assignments. This tool takes into account not only reading and writing but also what type of reading or writing is assigned. For example, it takes less time to reflect than it does to synthesize research. For things like class discussion postings, estimate how long you would like the post to be, and treat it as a narrative writing piece for purposes of estimating time. These guidelines are helpful for creating baseline expectations, and can be refined as you develop your own experience in these spaces.
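To make the arithmetic behind such an estimate concrete, here is a minimal sketch of a workload estimator. The per-activity rates below are illustrative assumptions for the sake of the example; they do not reproduce the Rice calculator’s actual research-based figures.

```python
# Rough weekly workload estimator. Rates are illustrative assumptions:
# pages read per hour varies by how deeply students must engage, and
# hours per written page varies by the type of writing assigned.

READING_PAGES_PER_HOUR = {"survey": 30, "understand": 15, "engage": 10}
WRITING_HOURS_PER_PAGE = {"reflection": 0.75, "argument": 2.0, "research": 4.0}

def weekly_hours(reading_pages=0, reading_level="understand",
                 writing_pages=0.0, writing_type="reflection",
                 video_minutes=0, discussion_posts=0,
                 hours_per_post=0.5):
    """Estimate weekly student workload in hours."""
    hours = reading_pages / READING_PAGES_PER_HOUR[reading_level]
    hours += writing_pages * WRITING_HOURS_PER_PAGE[writing_type]
    hours += video_minutes / 60          # watching lecture videos
    hours += discussion_posts * hours_per_post  # composing + responding
    return round(hours, 1)

# 75 pages of careful reading, 150 minutes of video, 5 discussion posts.
print(weekly_hours(reading_pages=75, reading_level="understand",
                   video_minutes=150, discussion_posts=5))  # 10.0
```

Even with rough rates, summing activities this way makes it obvious when a single week quietly balloons past the effort a 3-credit course should demand.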

Practical tips

  • Filming a lecture can be straightforward for counting time, but there might be other time to account for, including time spent reviewing any potential notes or slide decks that are shared.
  • Changing between many tasks has transition time that may not be accounted for, but that will have an impact on your students. Be mindful of administrative tasks that you might inadvertently give to your students while changing modality. For example, you may send more emails in an online course than you did in your face-to-face class, which is also part of instruction.
  • Learning (and teaching!) online requires good time management. Accurate estimates of the amount of time different tasks will take will help your students plan their online study habits, benefiting both students and faculty.

Resources

University of Michigan

CAI – Keep complying

Other Resources

Rice University – Workload calculator 

Rochester Institute of Technology –  Calculating time on task in online courses 

Loyola University Maryland – Online calculator users guide 

How this will help:

Articulate the value of rubrics for your online course.
Describe the different types of rubrics that can help you with your grading workflow.
Develop first drafts of rubrics for your assignments and assessments.

The basics

Regardless of whether your course is online or face to face, you will need to provide feedback to your students on their strengths and areas for growth. Rubrics are one way to simplify the process of providing feedback and consistent grades to your students.

What are rubrics?

Rubrics are “scoring sheets” for learning tasks. There are multiple flavors of rubrics, but they all articulate two key variables for scoring how successful the learner has been in completing a specific task: the criteria for evaluation and the levels of performance. While you may have used rubrics in your face-to-face class, rubrics become essential when teaching online. Rubrics will not only save you time (a lot of time) when grading assignments, but they also help clarify expectations about how you are assessing students and why they received a particular grade. Rubrics also make grading feel more objective to students (“I see what I did wrong here”), rather than subjective (“The teacher doesn’t like me and that’s why I got this grade”).

When designing a rubric, ideally, the criteria for evaluation should be aligned with the learning objectives of the task. For example, if an instructor asks their learners to create an annotated bibliography for a research assignment, we can imagine that the instructor wants to give the students practice with identifying valid sources on their research topic, citing sources correctly (using the appropriate format), and summarizing sources appropriately. The criteria for evaluation in a rubric for that task might be:

  • Quality of sources
  • Accuracy of citation format for each source type
  • Coherence of summaries
  • Accuracy of summaries

The levels of performance don’t necessarily have a scale they must align with. Some rubric types might use a typical letter grading scale for their levels – these rubrics often include language like “An A-level response will….” Other rubric types have very few levels of performance; sometimes they are as simple as a binary scale – complete or incomplete (a checklist is an example of this kind of rubric). How an instructor thinks about the levels of performance in a rubric is going to depend on a number of factors, including their own personal preferences and approaches to evaluating student work, and on how the task is being used in the learning experience. If a task is not going to contribute to the final grade for the course, it might not be necessary (or make sense) to provide many fine-grained levels of performance. On the other hand, an assignment that is designed to provide detailed information to the instructor as to how proficient each student is at a set of skills might need many, highly specific levels of performance. At the end of this module, we provide examples of different types of rubrics and structures for levels of performance.

What teaching goals can rubrics help meet?

In an online course, clear communication from the instructor about their expectations is critical for student success and for the success of the course. Effective feedback, where it is clear to the learner what they have already mastered and where there are gaps in the learner’s knowledge or skills, is necessary for deep learning. Rubrics help an instructor clearly explain their expectations to the class as a whole while also making it easier to give individual students specific feedback on their learning.

Although one of the practical advantages to using rubrics is to make grading of submitted assignments more efficient, they can be used for many, not mutually exclusive, purposes:

  • highlighting growth of a student’s skills or knowledge over time
  • articulating to learners the important features of a high-quality submission
  • assessing student participation in discussion forums
  • guiding student self-assessments 
  • guiding student peer-reviews
  • providing feedback on ungraded or practice assignments to help students identify where they need to focus their learning efforts.

Examples of different rubrics

Different styles of rubrics are better fits for different task types and for fulfilling the different teaching aims of a rubric. Here we focus on five different styles with varying levels of complexity: single point rubrics, specific task rubrics, general rubrics, holistic rubrics, and analytic rubrics (Arter, J. A., & Chappuis, J., 2007).

Single point rubric

Sometimes, simple is easiest. A single point rubric can tell students whether or not they met the expectations for each criterion. We’d generally recommend not using too many criteria with single point rubrics; they aren’t meant for complicated evaluation. They are great for short assignments like discussion posts.

Example task: Write a 250-word discussion post reflecting on the purpose of this week’s readings. (20 points)

Example rubric:

Single point rubric

Specific task rubric

This style of rubric is useful for articulating the knowledge and skill objectives (and their respective levels) of a specific assignment.

Example task:

Design and build a trebuchet that is adjustable to launch a 

  • 5g weight a distance of 0.5m
  • 7g weight a distance of 0.5m
  • 10g weight a distance of 0.75m

Example rubric:

Holistic rubric

This style of rubric enables a single, overall assessment/evaluation of a learner’s performance on a task.

Example task:

Write a historical research paper discussing ….

Example rubric:

(Adapted from http://jfmueller.faculty.noctrl.edu/toolbox/rubrics.htm#versus)

General rubric

This style of rubric can be used for multiple, similar assignments to show growth (achieved and opportunities) over time.

Example task:

Write a blog post appropriate for a specific audience exploring the themes of the reading for this week.

Example rubric:

(Adapted from http://www.chronicle.com/blogs/profhacker/a-rubric-for-evaluating-student-blogs/27196)

Analytic rubric

This style of rubric is well suited to breaking apart a complex task into component skills and allows for evaluation of those components. It can also help determine the grade for the whole assignment based on performance on the component skills. This style of rubric can look similar to a general rubric but includes detailed grading information.

Example task:

Write a blog post appropriate for a specific audience exploring the themes of the reading for this week.

Example rubric:

(Adapted from http://www.chronicle.com/blogs/profhacker/a-rubric-for-evaluating-student-blogs/27196)
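To illustrate how an analytic rubric can roll component scores up into a grade for the whole assignment, here is a minimal sketch. The criteria, weights, and point scales are illustrative assumptions and are not taken from the adapted rubric above.

```python
# Sketch of analytic-rubric grading: each criterion is scored on its
# own scale, weighted, and combined into one assignment grade.
# Criteria, weights, and scales are illustrative assumptions.

RUBRIC = {
    # criterion: (weight, max points on this criterion's scale)
    "Audience awareness":      (0.3, 4),
    "Engagement with themes":  (0.4, 4),
    "Writing mechanics":       (0.3, 4),
}

def rubric_grade(scores, total_points=20):
    """Convert per-criterion scores into a weighted assignment grade."""
    fraction = sum(
        weight * scores[criterion] / max_pts
        for criterion, (weight, max_pts) in RUBRIC.items()
    )
    return round(fraction * total_points, 1)

print(rubric_grade({"Audience awareness": 4,
                    "Engagement with themes": 3,
                    "Writing mechanics": 2}))  # 15.0 out of 20
```

Keeping the weights explicit like this also communicates to students which component skills matter most in the assignment.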

Designing your own rubric

You can approach designing a rubric from multiple angles. Here we outline just one possible procedure to get started. This approach assumes the learning task is graded, but it can be generalized for other structures for levels of performance. 

  1. Start with the, “I know it when I see it,” principle. Most instructors have a sense of what makes a reasonable response to a task, even if they haven’t explicitly named those traits before. Write out as many traits of a “meets expectations” response as you can come up with – these will be your first draft of the criteria for learning.
  2. For each type of criterion, describe what an “A” response looks like. This will be your top level of performance.
  3. For complicated projects, consider moving systematically down each whole-grade level (B, C, D, F) and describe, in terms parallel to how you described the best response, what student responses at that level often look like. Or, for simpler assignments, create very simple rubrics – either the criterion was achieved or not. Rubrics do not have to be complicated!
  4. Share the rubric with a colleague to get feedback or “play test” the rubric using past student work if possible. 
  5. After grading some student responses with the rubric, you may be tempted to fine-tune some details. However, this is not recommended. For one, Canvas will not allow you to change a rubric once it has been used for grading. It is also unfair to change the metrics of grading after students have already been working from the rubric. If you find that your rubric grades students too harshly on a particular criterion, make a note of it and track the changes you want to make. You can then adjust the rubric for the next iteration of the task or course.

Practical Tips

  • Creating learning objectives for each task, as you design the task, helps to ensure there is alignment between your learning activities and assessments and your course level learning objectives. It also gives a head start for the design of the rubric.
  • When creating a rubric, start with just a few levels of performance. It is easier to expand a rubric to include more specificity in the levels of performance than it is to shrink the number of levels. Smaller rubrics are much easier for the instructor to navigate to provide feedback.
  • Using a rubric will (likely) not eliminate the need for qualitative feedback to each student, but keeping a document of commonly used responses to students that you can copy and paste from can make the feedback process even more efficient.
  • Explicitly have students self-assess their task prior to submitting it. For example, when students submit a paper online, have them include a short (100 words or less) reflection on what they think they did well on the paper and what they struggled with. That step seems obvious to experts (i.e., instructors) but isn’t obvious to all learners. If students make a habit of this, they will often end up with higher grades because they catch their mistakes before they submit their response(s).
  • Canvas and other learning management systems (LMS) have tools that allow you to create point and click rubrics. You can choose to have the tools automatically enter grades into the LMS grade book.
  • Rubrics can be used for students to self-evaluate their own performance or to provide feedback to peers.

Resources

University of Michigan

CRLT – Sample lab rubrics

Cult of Pedagogy – The single point rubric

Other Resources

The Chronicle of Higher Ed – A rubric for evaluating student blogs

Canvas – Creating a rubric in Canvas

Jon Mueller – Authentic assessment toolkit

Research

Arter, J. A., & Chappuis, J. (2007). Creating & recognizing quality rubrics. Upper Saddle River, NJ: Pearson Education.

Gilbert, P. K., & Dabbagh, N. (2004). How to structure online discussions for meaningful discourse: a case study. British Journal of Educational Technology, 36(1), 5–18. doi: 10.1111/j.1467-8535.2005.00434.x

Wyss, V. L., Freedman, D., & Siebert, C. J. (2014). The Development of a Discussion Rubric for Online Courses: Standardizing Expectations of Graduate Students in Online Scholarly Discussions. TechTrends, 58(2), 99–107. doi: 10.1007/s11528-014-0741-x

How this will help:

Give effective feedback.
Have strategies for giving feedback that reflects a supportive tone.

The basics

When teaching a face-to-face course, many instructors develop some kind of relationship with the class as well as with individual learners. You may come to associate names with faces, and in turn, writing styles, personalities, strengths, and challenging areas. As you assess and evaluate learners, you most likely provide feedback to students in a variety of ways – from formative feedback to feedback on larger assessments.

Students frequently cite feedback in an online course as being even more important than in a face-to-face course. In a face-to-face course, informal or observational interactions between student and faculty help build community. In an online course, sometimes the only perceived interaction a student may have with an instructor is through feedback. Being vigilant with feedback best practices, particularly in the first few weeks of a class, helps establish community and presence in the classroom. While you might have to adjust the modality you use for feedback in an online course compared to a face-to-face course, the good news is that the overall guidance around feedback is exactly the same.

General guidelines for effective feedback

  • Timely: Learners need feedback as quickly as possible in order to continue moving forward with their learning. Some programs may have specific requirements for assignment feedback turnaround. If not, consider setting expectations for students about feedback, particularly on anything graded. Even knowing that they should receive feedback within 3-5 days can alleviate student anxiety (and help prevent panicked emails).
  • Frequent: Feedback is crucial for learning, and without receiving feedback at regular intervals, learners may struggle to know whether they are on the right track. How frequent that feedback should be may depend on your particular course, but in an online course, some kind of weekly feedback is a good starting point.
  • Specific: Learners won’t be able to get a clear sense of what actions to take with generic feedback such as “good job” or “this needs work.” Instead, specific feedback gives clear guidance on what to do or not do.
  • Balanced: Learners need to know about both their errors and successes. When you point out to learners what they have done well, you reinforce those behaviors, help learners feel competent, and also show them how to be critically reflective. For all of us, in any domain, there are things we are doing well and things we can improve.

How you say things matters too

You probably already know emails and texts are easy to misinterpret. Sadly, so is critical feedback given to students online — particularly in discussions and on essays. Words can seem harsher when students don’t have a facial expression or tone of voice to give them context. Your intent may be misunderstood if students don’t know you well. However, you can be kind without sacrificing rigor.

Here are some ideas to help you make sure your feedback is taken the right way:

  1. Rubrics and ground rules help. If you’ve set up clear expectations from the start, giving feedback is more straightforward.
    • For grading papers, rubrics will help you give more objective feedback to students, which feels fairer to them and less uncomfortable for you. 
    • In discussions, using rubrics will help you set clear expectations for student participation and give you the guideposts you need to give students feedback if they don’t meet those expectations. Rubrics don’t have to be very detailed; it’s about setting the expectation.
  2. Try a less formal tone. Write to students online the way you talk to them one-on-one. You can use your “professor persona” when you’re lecturing. Students expect that, and it’s ok for you to present your expert self at the podium. But when you’re giving feedback, students deserve your “human persona” as well — the one whose voice is softer and less formal, the one whose words are more personal, empathetic, and encouraging.
    • If you aren’t comfortable with writing in a less formal tone, lean more heavily into using rubrics. Or, try giving audio feedback rather than written feedback.  
  3. Make it about the assignment, not the person. Every now and then, check in with yourself about whether your feelings about a student change the way you communicate with them. Ask yourself “How would I say this to my best students?” Craft feedback that shows a) you have high expectations for the student, but also b) mistakes are ok (they can learn from them), and c) they can succeed and you’ll help them get there. (This is probably how you communicate with your best students.)
  4. Remember the “feedback sandwich.” You have probably heard about the structure: open with a positive comment, offer a critique, and end on a positive. We know it’s exhausting to find positive “bread” for the feedback sandwich for dozens or hundreds of assignments. We also know that telling students what they’ve done well empowers them and reinforces their efforts. It may also make them more open to the critical feedback in the middle of the sandwich.
  5. Be careful with your language. Make sure your words don’t blame or shame. Avoid inflammatory language. Frame your questions and comments in a way that shows you’re supportive and want to help your students improve — rather than making them feel like they’re doing a bad job or aren’t good enough. If you’re not sure, find a colleague who can give you feedback. This takes some vulnerability on your part, but if you’re reading this guide, you’re already motivated to do this well.

Practical tips

  • Try to set guidelines for yourself about when feedback is due. For example, if student discussion posts are due on Thursday, try to have your feedback posted by Sunday evening. If priorities shift, that’s okay – just communicate it to students. Time management is key.
  • When providing feedback on individual assignments, it can be quicker to record audio feedback rather than typing up your comments. This also helps students hear your tone better, which reduces the chance that your feedback will be misinterpreted. There are a number of apps that do this, such as VoiceThread. This article describes the methods and benefits of giving audio feedback. You can even attach audio comments to Canvas grades.
  • For assignments, curate a document with common or easily re-used positive comments. You will be amazed at how much time you can save by not having to re-type the same sentence. 
  • Some instructors want to meet one-on-one with students to deliver feedback, such as on major assignments like papers. These can be scheduled online via videoconferencing in Canvas.
  • Incorporate peer review to spread the labor of feedback around. Learners also benefit from seeing and critiquing each others’ work. 

Resources

Other Resources

Canvas – Leaving voice feedback for assignments  

Forbes – Giving SMART feedback to millennials 

Chronicle of Higher Education – How to give your students better feedback with technology

Inside Higher Ed –  How to provide meaningful feedback online 

Contributors: Center for Research on Learning & Teaching and Academic Innovation