Generative AI for Course Design: The Basics

Introduction

Education is undergoing a significant transformation as generative artificial intelligence continues to develop at a rapid pace. It is now easier than ever for educators to experiment with generative AI in their practice and see for themselves how it can be leveraged during course development to brainstorm, synthesize, and draft everything from student communications to learning objectives.

Generative AI: The Basics

Before experimenting with Generative AI (GenAI), it is helpful to have some high-level foundational knowledge of how it works. Essentially, GenAI relies on advanced machine learning models, specifically neural networks, which are loosely inspired by how the human brain processes information. These networks are trained on large datasets, enabling them to learn language patterns, nuances, and structures. As a result, GenAI can produce contextually relevant and coherent content, a capability exemplified in tools like ChatGPT.

To better understand how GenAI tools like ChatGPT work, let’s look at a breakdown of the acronym “GPT”: 

GPT stands for “Generative Pre-trained Transformer.” It is a type of artificial intelligence model designed for natural language processing tasks. “Generative” refers to its ability to generate text based on a combination of the data it was trained on and your inputs. It can compose sentences, answer questions, and create coherent and contextually relevant paragraphs. 

The term “Pre-trained” indicates that the model has undergone extensive training on a vast dataset of text before it is fine-tuned for specific tasks. This pre-training enables the model to understand and generate human-like text. 

Finally, “Transformer” is the name of the underlying architecture used by GPT. Transformers are a neural network architecture that has proven especially effective at understanding and generating human language because it handles sequences of data, such as sentences, and supports parallel processing, which speeds up training.
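To make the idea of “generating text from your inputs” more concrete, below is a minimal sketch of how a GPT-style model can be called programmatically, in this case to draft a learning objective. It assumes the openai Python package and an API key stored in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative, not a recommendation of a specific tool or version.

    # Minimal sketch: asking a GPT-style model to draft a learning objective.
    # Assumes the `openai` Python package is installed and an API key is set
    # in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name; any chat-capable model works
        messages=[
            {"role": "system", "content": "You are an instructional design assistant."},
            {"role": "user", "content": "Draft one measurable learning objective "
                                        "for an introductory statistics course."},
        ],
    )

    # The reply is generated text that is contextually relevant to the prompt.
    print(response.choices[0].message.content)

The same pattern underlies chat interfaces like ChatGPT: your prompt is sent to a pre-trained model, which generates a response based on the language patterns it learned during training.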

The GPT series, developed by OpenAI, has seen several iterations, with each new version showing significant improvements in language understanding and generation capabilities. Some of these improvements come from training future versions of the model on data from user interactions. OpenAI is transparent that your data may be used to improve model performance, and you can opt out by following the steps outlined in the upcoming articles on how to use GenAI tools for course design, learning objectives, and more.

Does it matter which GenAI tool I use?

Not really. Individuals may prefer one tool over another based on response speed or comfort with the interface, and most GenAI tools are broadly similar. You may, however, wish to choose a tool that lets you opt out of having your data used for training purposes.

Next Steps and Considerations

In educational contexts, incorporating GenAI tools such as ChatGPT has the potential to reshape our approach to content creation and improve efficiency for educators who often find themselves pressed for time. However, it is important to acknowledge the technology’s limitations, such as potential biases, outdated information due to training data cutoffs, and fabricated information, often referred to as “hallucinations.” It is vital that you always fact-check and revise GenAI outputs to maintain the integrity and quality of your content.

In conclusion, by leveraging GenAI tools like ChatGPT, educators can navigate course design with greater ease and efficiency. From drafting learning objectives and engaging course titles to simplifying complex academic language and brainstorming assessments, GenAI has the potential to be an invaluable asset to your design work. However, it is critical to remember that these tools come with limitations, including potential biases and inaccuracies. By combining the strengths of GenAI with the expertise and critical oversight of educators, we can efficiently create new experiences for our learners.

Introduction

It is safe to say that by now, you have seen many articles, posts, opinions, and stories about ChatGPT, and the larger family of AI-powered large language models (LLMs), in relation to higher education and teaching and learning in particular. These writings offer perspectives ranging from raising concerns to celebrating new opportunities, with many mixing the two. They also continue to grow rapidly in number as new AI-powered LLMs emerge and evolve (e.g., Google’s Bard).

The intent of this piece is not to add another article sharing tips or concerns about ChatGPT. Instead, this article (1) summarizes the major concerns about ChatGPT and (2) highlights ideas about its positive implications, based on what has been published to date.

Concerns about ChatGPT

Faculty, scholars, and higher education leaders have raised several concerns about ChatGPT. These concerns stem from possible ways it can be used.

  • Using ChatGPT to cheat by asking it to write essays or answer open-ended questions on exams, in discussion forums, and on homework assignments (December 19th, 2022 NPR story) (December 6th, 2022 Atlantic story) (January 16th New York Times story).
  • Using ChatGPT to author scholarly works, which conflicts with the ethical standards of scientific inquiry. Several high-impact, high-profile journals have already formulated principles to guide authors on how to use LLM-based AI tools and why such a tool cannot be credited as an author: any attribution of authorship carries with it accountability for the scholarly work, and no AI tool can take such responsibility (January 24th, 2023 Nature editorial).
  • ChatGPT can threaten the privacy of students and faculty (and any other user). Its privacy policy states that data can be shared with third-party vendors, law enforcement, affiliates, and other users. Also, while users can delete their ChatGPT accounts, the prompts they entered into ChatGPT cannot be deleted. This poses a particular risk for users who discuss sensitive or controversial topics, as that data cannot be removed (January 2023 publication by Dr. Torrey Trust).
  • ChatGPT is not always trustworthy, as it can fabricate quotes and references. In an experiment conducted by Dr. Daniel Hickey in Indiana University Bloomington’s Instructional Systems Technology department, “ChatGPT was able to write a marginally acceptable literature review paper, but fabricated some quotes and references. With more work such as including paper abstracts in the prompts, GPT is scarily good at referencing research literature, perhaps as well as a first-year graduate student.” (January 6th, 2023 article by Dr. Daniel Hickey)

Excitement about ChatGPT

At the other end of the spectrum, several ideas express interest and excitement about ChatGPT in higher education. These ideas focus on how it can be used ethically and in a controlled manner.

  • Using ChatGPT to speed up the drafting of writing for several outlets (reports, abstracts, emails, conference proposals, press releases, recommendation letters, etc.). ChatGPT can produce elaborate drafts that must be edited to remove any possible inconsistencies or inaccuracies (December 7th, 2022 Social Science Space story).
  • Using ChatGPT to brainstorm ideas for curriculum design, lesson planning, and learning activities. The tool can provide novel ideas or remind educators of instructional techniques and strategies they have encountered in the past (January 23rd, 2023 article by Dr. David Wiley).
  • Using ChatGPT to provide students with tutoring and scaffolding. The tool can act as a virtual tutor that does not simply give students the answer but rather scaffolds them to reach the correct answer on their own (Sal Khan, founder/CEO of Khan Academy, Spring 2023 TED Talk).
  • Teaching with ChatGPT to train students on using AI tools and models, provide opportunities to exercise critical thinking skills, and improve their technological literacy (January 12th New York Times story).

Concluding Thoughts

There are major concerns about ChatGPT and the larger phenomenon of AI-powered large language models (LLMs). These concerns are legitimate, and they are counterbalanced by notable ideas about the positive implications of AI-powered LLMs in higher education classrooms. As we aspire to make evidence-based educational and learning design decisions, we should carefully review the research on AI in higher education to date and engage with the gaps as opportunities to expand knowledge and identify new opportunities and risks.

Our University’s newly formed advisory committee on the applications of generative AI is a good example of how higher education institutions ought to guide the use, evaluation, and development of emerging AI tools and services. Additionally, public discussions about generative AI and its implications for education are necessary to strengthen the public-facing mission of the University, where input from educators, students, and members of the community is welcome.

The rapid shift to emergency remote instruction during COVID-19 left many instructors questioning how best to assess students, even well after classes resumed. Concerns about academic integrity left some wondering whether online tests made students more likely to violate academic integrity rules. Online test proctoring made news in many higher education settings as a way to ensure integrity; however, others have argued it violates students’ privacy.

What is Online Proctoring?

You may be familiar with proctoring in a face-to-face or residential setting, where a designated authority oversees an exam in a controlled, specified environment. Similarly, online proctoring is a service in which either a person or an artificial intelligence algorithm monitors a learner’s environment during an online exam. However, the environment an online proctor oversees is the learner’s personal environment. This monitoring can take the form of video recording; logging of students’ keystrokes, browser data, and location data; and even collection of biometric data such as test-taker eye movements.

Advocates of online proctoring cite concerns about academic integrity in the online environment as a reason to implement proctoring (Dendir & Maxwell, 2020). Some even suggest that students do not mind the additional security because they believe it supports the integrity of the test and/or degree.

Online proctoring in the media and research

While proctoring exams in the name of academic integrity may seem reasonable, monitoring a learner’s home environment raises questions and has the potential for harm. Online proctoring can be perceived as invasive by students, as it records personal information about one’s location and body that is not otherwise necessary for an exam. Several institutions, including U-M Dearborn, the University of California Berkeley, the University of Illinois, and the University of Oregon, have limited, if not discontinued altogether, the use of third-party proctoring services. Institutions cite issues of accessibility, bias, concerns about student privacy, and institutional culture as reasons to discourage third-party proctoring. Student and faculty groups have publicly advocated for institutions to discontinue security features like locked-down browsers and third-party monitoring. At the University of Michigan Ann Arbor, third-party proctoring generally involves a separate fee and may be expensive, but it remains available through vendor partners.

Most of the academic research on online proctoring has focused on academic integrity rather than on the impact of proctoring itself. Wuthisatian (2020) found lower student achievement on online proctored exams compared to the same exam proctored onsite. Students who were least familiar with the technology and the requirements for setting it up performed most poorly. In addition, students who have test anxiety may experience even more anxiety in certain proctoring situations (Woldeab & Brothen, 2019). With further research, we may find the problem is not necessarily proctoring itself, but rather the burden the technology places on students taking an online exam.

Problems with internet connections or the home testing environment may be beyond students’ control. The inability to create a “proper” testing environment has raised students’ concerns about being unjustly accused of cheating (Meulmeester, Dubois, Krommenhoek-van Es, de Jong, & Langers, 2021).

What are the alternatives to proctoring?

Ultimately, only the instructor can determine whether proctoring is the right choice for a class, and sometimes proctoring may be the best choice for your discipline, field, or specific assessment. In a remote setting especially, it may feel like the integrity of your assessment (particularly a test) is beyond your control, so proctoring may seem like the only option. However, there are alternatives to proctoring exams, from using exam/quiz security measures to rethinking a course’s assessment strategy to deemphasize exams. If you are concerned about how and what you are assessing, the Center for Research on Learning and Teaching provides resources and consultations to discuss academic integrity and different methods of assessment. We also recommend CAI’s Faculty Proctoring document if you have questions about proctoring.
