
Ethical Considerations of Using GenAI Tools

Generative AI (GenAI) tools are becoming increasingly popular for a wide variety of uses, including in classrooms. Whether you’re generating images, building slides, or creating summaries of readings, it’s important to be thoughtful about the tools you’re using and the impact they can have on both your students and our world as a whole.

Bias

A GenAI tool is only as good as its training data; if that data contains racist or sexist content, we shouldn’t be surprised when the tool develops the same kind of bias. Bias comes in several forms, including stereotypical, gender, and political bias, and all of them can lead to certain groups being over- or underrepresented in outputs.

Bloomberg tested the biases present in the Stable Diffusion text-to-image generator in 2023. When they prompted the model to create representations of jobs considered “high-paying” and “low-paying,” the images generated of high-paying jobs were typically of people with lighter skin tones, while people with darker skin tones featured more prominently in images of low-paying jobs. Bloomberg found similar results for gender: Stable Diffusion generated roughly three images of men for every image of a woman, and when women did appear, they were typically in lower-paying and more traditional roles, like housekeeper. Prompts for jobs like “politician,” “lawyer,” “judge,” and “CEO” produced images that were almost entirely of light-skinned men.

Harmful Content

Besides being biased, GenAI can produce content that is harmful in a variety of ways. GenAI can hallucinate content that is not based on actual data and is instead fictitious or unrealistic. It can be used to produce artificial video or audio content impersonating a person’s likeness. When this kind of content is created with the permission of the person depicted, it’s commonly called “synthetic media”; when it’s created without their permission, it’s referred to as a “deepfake.” Deepfakes are often used to harass, humiliate, and spread hate speech. GenAI has made the creation of deepfakes easy and cheap, and there have been several high-profile cases in the US and Europe of children and women being abused through their creation.

Policymaking efforts to combat the proliferation of, and harm caused by, deepfakes have become common both in the U.S. and abroad, with proposals often including disclosure requirements for the use of synthetic media, at least for certain activities. While educational uses of these technologies are unlikely to be restricted or banned, users should strongly consider disclosing their use by default, both in the interest of transparency and in anticipation of any future requirements that may apply. It is also worth considering whether the companies offering these products are well positioned to comply with this quickly evolving regulatory landscape, and whether they are making reasonable efforts to help prevent the misuse of their products.

Data

The collection of data used to train GenAI models can raise a variety of privacy concerns, particularly around personal and proprietary data. Some personal data collection can be declined, although the instructions for doing so are often buried in lengthy terms of service that most users don’t read. Those terms of service also cover how the GenAI tool can use the data you put into it via prompts, so you should be cognizant of the kind of information you’re feeding it.

The Cisco 2024 Data Privacy Benchmark Study revealed that most organizations are limiting the use of GenAI, with some banning it entirely, because of data privacy and security issues. This is likely because 48% of employees surveyed admitted to entering non-public company information into GenAI tools. There is also a general lack of transparency around the datasets used to train GenAI tools: although some companies explicitly state where their training data comes from, many are vague about what the training data was and how they accessed it.

Copyright

Many believe that using content like books, images, and videos to train GenAI falls under fair use in the U.S., but multiple lawsuits are currently challenging this notion. If companies are unable to rely on fair use to acquire training data, the effectiveness and availability of GenAI is likely to decrease dramatically. The cost of licensing the enormous amount of data needed would likely drive all but the biggest companies out of the market.

The outputs created by GenAI can have their own copyright issues, depending on how much they pull from the training data. If the image generated by GenAI, for example, is substantially similar to an image in the training data, there could potentially be some liability for copyright infringement if or when the image is used. Many GenAI tools are attempting to avoid this by refusing to generate content that is similar to copyrighted material, but there are ways for creative prompters to get around these restrictions.

Although many GenAI tools claim to be trained on openly licensed content, studies show that when asked about licensing requirements, 70% of tools didn’t specify what license applied to the generated work, and those that did often suggested a more permissive license than what the original creator intended.

The use of GenAI raises ethical issues around authorship that are related to, but separate from, copyright. For example, when using information gathered from GenAI, there may be an ethical obligation to cite the original source to avoid claims of plagiarism; GenAI doesn’t typically provide citations, and when it does, those citations are frequently incorrect. There are also concerns about the displacement of human authors and artists by GenAI, which frequently come up when GenAI is used to create works in the style of particular artists or authors.

Environmental Impact

GenAI has a huge environmental impact. Research has shown that training the early chatbots, such as GPT-3, produced as much greenhouse gas as a gasoline-powered vehicle driving 1 million miles. Generating a single image with GenAI uses as much energy as fully charging a phone, and ChatGPT alone consumes as much energy every day as a small town. On top of that, the data centers that house the training data and infrastructure for these tools require large amounts of electricity to run and large amounts of water to keep from overheating. Right now, it’s nearly impossible to know the full extent of the environmental impacts of GenAI.
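As a rough plausibility check on the vehicle comparison above (and assuming the EPA’s commonly cited average of roughly 400 grams of CO2 per mile for a typical passenger car, a figure not stated in the article itself):

    1{,}000{,}000\ \text{miles} \times 400\ \tfrac{\text{g CO}_2}{\text{mile}} = 4\times10^{8}\ \text{g} = 400\ \text{metric tons of CO}_2

That is the same order of magnitude as published estimates of roughly 500 metric tons of CO2-equivalent for training GPT-3, though such estimates vary considerably with methodology and the energy mix of the data centers involved.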

Equity

GenAI raises a variety of equity concerns. Most GenAI tools are trained on data from data-rich languages and are less likely to include non-standard dialects or languages. There are also access and efficacy disparities: not everyone will have access to GenAI tools, whether because of cost, a lack of internet access, or accessibility issues with the tool itself. Underrepresented or underserved groups may find their experiences missing from training data that is optimized for some groups rather than all, limiting the efficacy of the outputs for them.

Finally, it’s important to remember that all of the legal and ethical issues discussed so far have a disproportionate effect on marginalized groups. For example, negative environmental effects tend to be felt most acutely in more vulnerable communities. Considering the major impact GenAI has on the environment, how are we going to work with these groups to help ensure they’re not further harmed?

Conclusion

Overall, there are significant legal and ethical issues we should consider before using GenAI tools. This doesn’t mean that we shouldn’t use them; it means that we should be thoughtful about when, how, and why we’re using them. And we should know that the way we use them might change in the not-so-distant future: the current lawsuits will take years to work their way through the legal system, and depending on how they are resolved, GenAI tools may have to make major changes to their training data.

Practical Tips

Here are five tips for navigating through these complex issues:

  1. Investigate the reputation of the GenAI tool and the company that created it. Perform an online search for any potential legal or ethical issues. Add search terms like “complaint,” “violation,” or “lawsuit” with the company’s name, and be sure to read product reviews.
  2. Check the terms of service. Review the terms of service and privacy policies before using GenAI. Caution should be taken before publishing materials created through GenAI.
  3. Protect sensitive data. In addition to data shared for training purposes, it should be assumed, unless otherwise stated, that data shared while using GenAI tools will be accessible to the third-party tool provider and its affiliates. Data sharing must adhere to U-M policies.
  4. Consider the ethics/limitations. Continue to remember, and remind your students, that GenAI tools are often biased, as the technology is designed to output the most statistically common results based on its training data. GenAI can also “hallucinate,” so specific claims should always be verified before sharing.
  5. Consult resources and ask for help. We are still navigating uncharted waters. Use the resources available here at U-M, including training and workshops on GenAI hosted across the university. There is also a new GenAI as a Learning Design Partner series led by U-M instructors that is freely available via Coursera.

You may have heard that there have recently been updates to the regulations implementing Title II of the Americans with Disabilities Act (ADA). These updates impact almost everything we do in the online learning environment. With the aim of reducing burdens for members of the disability community and providing equitable access to web content, the updates introduce technical guidelines that large public universities such as U-M must adhere to starting on April 24, 2026. We’ll discuss this further, and some exceptions to the rule, below.

Prohibiting Discrimination in Digital Spaces

The ADA is a civil rights law that broadly prohibits discrimination on the basis of disability. More specifically, Title II of the ADA extends that prohibition to the services, programs, and activities of state and local government entities, which include public universities. In April 2024, rulemaking by the Department of Justice updated the Title II regulations (adding a new subpart H to 28 CFR 35) by establishing specific technical standards to help ensure that all web content and mobile applications are accessible.

Prior to this update, web content under Title II was required to be accessible, but public entities had no specific direction on how to comply with the ADA’s general nondiscrimination requirements. Many organizations noted that voluntary compliance with previous digital accessibility guidelines did not result in equal access for people with disabilities. The new guidelines are intended to give people with disabilities equal access to all web-based content created by state and local government institutions.

This is important progress for removing barriers to access in our very web-based world. Universities have become increasingly reliant on technology, whether for learning, working, or transactions. With more than 10 million students enrolled in some form of distance education, ensuring all students can access the same information, engage in the same interactions, and conduct the same transactions as their nondisabled peers is critical. As online learning continues to grow, it is important to remember that more than 1 in 4 people in the US have a disability, including an estimated 13.9% of US adults with a cognitive disability impacting their concentration, memory, or decision making; 6.2% with a vision disability; and 5.5% with a hearing disability.

This is not a solution in search of a problem; individuals with disabilities consistently report challenges when accessing the web. The U.S. Department of Education’s Office for Civil Rights (OCR) noted that it has resolved and monitored more than 1,000 cases related to digital access, reported by members of the public, in recent years. These complaints addressed the accessibility of many facets of the web: public-facing websites, learning management systems, password-protected student-facing content, and mass email blasts of colleges and universities, to name a few.

Technical Standards: WCAG 2.1, Level AA

Web content is defined as the information and experiences on the web, and it must now be readily accessible to and usable by people with disabilities. This includes text, images, social media, sound, videos, scheduling tools, maps, calendars, payment systems, reservation systems, documents, etc. It also applies to web content made by a contractor or vendor. Universities may no longer rely on alternative versions or other workarounds to address barriers in inaccessible digital content, nor on reacting only when a student requests accommodations.

The technical standards themselves, WCAG 2.1 Level AA, are an international set of standards developed by the Web Accessibility Initiative (WAI) of the W3C, the World Wide Web Consortium, an organization that sets standards for web design. Generally speaking, they set clearly defined requirements for making content perceivable, operable, understandable, and robust.
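Automated tools can verify only a narrow slice of these standards (most WCAG criteria require human judgment), but as an illustration, here is a minimal sketch of one such automated check: flagging images that lack the text alternatives WCAG requires for non-text content. It assumes the third-party requests and beautifulsoup4 Python packages; the URL is a placeholder.

    # Minimal sketch of one narrow automated WCAG check: find <img>
    # elements with no alt attribute. (alt="" is valid for purely
    # decorative images, so only a missing attribute is flagged.)
    import requests
    from bs4 import BeautifulSoup

    def find_images_missing_alt(url: str) -> list[str]:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return [
            img.get("src", "<no src>")
            for img in soup.find_all("img")
            if img.get("alt") is None
        ]

    for src in find_images_missing_alt("https://example.com"):
        print("Missing alt text:", src)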

Though this is a new technical standard that all public universities must adhere to, the practice of producing and maintaining accessible content isn’t new at U-M. Since anyone at U-M can create digital content, our digital accessibility Standard Practice Guide Policy, adopted in 2022, states that any U-M developed or maintained electronic information technology (EIT) must meet the same technical standards required in the updated Title II regulations. This is to ensure that these technologies are as effective, available, and usable for individuals with disabilities as they are for individuals without disabilities. The policy applies to a wide range of technologies, from web-based applications to digital textbooks to electronic documents. Individual U-M units are responsible for maintaining the accessibility, usability, and equity of their EIT over time, in collaboration with other U-M units.

Limited Exceptions to the Rule

If we build our content to be accessible, adhering to these guidelines, we greatly reduce the chances that an individual with a disability will be unable to access it. Like a curb cut in a sidewalk, which serves not only people using wheelchairs but also bicyclists and people pushing strollers, accessible web content benefits everyone. Given this, there are only a few limited exceptions to the WCAG 2.1 AA conformance requirements, which are further explained in the Fact Sheet: New Rule on the Accessibility of Web Content and Mobile Apps Provided by State and Local Governments. Note: please defer to guidance from your university for interpretations of these exceptions. In summary, some exceptions that may come up in your teaching include:

  1. Archived web content:
    Oftentimes, there is web content that is no longer in use because it’s outdated, not needed, or duplicated elsewhere. If the content was created before the compliance date, is kept only for reference or recordkeeping, is held in a designated area for archived content, and has not been changed since it was archived, then it does not need to meet WCAG 2.1 Level AA. An example could include a 2019 report on the enrollment data for an online degree program that hasn’t been updated and is stored in an “archived” section of a website.
  2. Content posted by a third party:
    When a third party that is not posting under a contractual arrangement with the university posts content on a university website or mobile app, these standards likely do not apply. For example, if a student comments on a discussion board within your course, that comment will probably fall under this exception.
  3. Preexisting conventional documents:
    Conventional documents, such as old PDFs, word processing documents, spreadsheets, or presentations, are exempt if they were made available prior to the compliance date AND are not currently being used. An example could include a PDF for a 2022 research symposium event that is still posted on the university’s website.

Other exceptions include password-protected documents intended for a specific individual and social media posts made prior to the compliance date.

Common Questions

What if a student reports they cannot access my web content, despite WCAG 2.1, Level AA conformance?

This is definitely possible, as every person’s needs are different. In that case, you wouldn’t have to change your web content, but you would need to provide an equivalent alternative to that individual.

Can we just depend on a learner’s accommodation request?

No. Depending on accommodation requests places an undue burden on people with disabilities by requiring them to repeatedly request access to web content, and resolving those requests can take days or weeks. By designing web content to be accessible from the start, we give individuals with disabilities an equal opportunity to access it.

Are there resources and trainings available to learn more about digital accessibility that are tailored for instructional faculty?

At U-M, there are many opportunities to learn about a variety of accessibility topics, including those relevant to faculty, on the Accessibility Training page maintained by ITS and ECRT. Many additional resources are also available to help you increase the accessibility of your web content.

Education is undergoing a significant transformation as generative artificial intelligence continues to develop at a rapid pace. It is now easier than ever for educators to experiment with generative AI in their practice and see for themselves how it can be leveraged during course development to brainstorm, synthesize, and draft everything from communications to students to learning objectives.

Generative AI: The Basics

Before experimenting with Generative AI (GenAI), it is helpful to have some high-level foundational knowledge of how it works. Essentially, GenAI functions using advanced machine learning algorithms, specifically neural networks, which are loosely inspired by how the human brain processes information. These networks are trained on large datasets, enabling them to learn language patterns, nuances, and structures. As a result, GenAI can produce contextually relevant and coherent content, a capability exemplified in tools like ChatGPT.

To better understand how GenAI tools like ChatGPT work, let’s look at a breakdown of the acronym “GPT”: 

GPT stands for “Generative Pre-trained Transformer.” It is a type of artificial intelligence model designed for natural language processing tasks. “Generative” refers to its ability to generate text based on a combination of the data it was trained on and your inputs. It can compose sentences, answer questions, and create coherent and contextually relevant paragraphs. 

The term “Pre-trained” indicates that the model has undergone extensive training on a vast dataset of text before it is fine-tuned for specific tasks. This pre-training enables the model to understand and generate human-like text. 

Finally, “Transformer” is the name of the underlying architecture used by GPT. Transformers are a type of neural network architecture that has proven especially effective for understanding and generating human language, because they can handle sequences of data, such as sentences, and process them in parallel, which speeds up learning.
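To make the “Transformer” idea concrete, here is a heavily simplified sketch of scaled dot-product self-attention, the core operation that lets these models weigh every token in a sequence against every other token in parallel. It is a toy illustration in Python with NumPy, not an excerpt from any actual GPT implementation, which would add learned projections, multiple attention heads, causal masking, and many stacked layers.

    # Toy sketch of scaled dot-product self-attention, the core
    # operation inside a Transformer block.
    import numpy as np

    def self_attention(X: np.ndarray) -> np.ndarray:
        """X has shape (sequence_length, embedding_dim)."""
        d = X.shape[-1]
        # How strongly each token attends to every other token.
        scores = X @ X.T / np.sqrt(d)
        # Softmax over each row turns scores into attention weights.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each output vector is a weighted mix of all token vectors.
        return weights @ X

    # Example: a "sentence" of 4 tokens, each an 8-dimensional vector.
    tokens = np.random.rand(4, 8)
    print(self_attention(tokens).shape)  # (4, 8)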

The GPT series, developed by OpenAI, has seen several iterations, with each new version showing significant improvements in language understanding and generation. Some of these improvements come from training on user inputs. OpenAI is transparent that your data may be used to improve model performance, and you can opt out by following the steps outlined in the upcoming articles on using GenAI tools for course design, learning objectives, and more.

Does It Matter Which GenAI Tool I Use?

Not really. Individuals may prefer one tool or another based on response speed or comfort with the interface, and you may wish to choose a tool that lets you opt out of having your data used for training, but most GenAI tools are broadly similar.

Next Steps and Considerations

In educational contexts, the incorporation of GenAI tools such as ChatGPT could reshape our approach to content creation and improve efficiency for educators who often find themselves pressed for time. However, it is important to acknowledge the technology’s limitations, such as potential biases, outdated information due to insufficient training data, and incorrect information, often referred to as “hallucinations.” It is vital that you always fact-check and revise GenAI outputs to maintain the integrity and quality of your content.

In conclusion, by leveraging GenAI tools like ChatGPT, educators can navigate course design with greater ease and efficiency. From drafting learning objectives and engaging course titles to simplifying complex academic language and brainstorming assessments, GenAI has the potential to be an invaluable asset to your design work. However, it is critical to remember that these tools come with limitations, including potential biases and inaccuracies. By combining the strengths of GenAI with the expertise and critical oversight of educators, we can efficiently create new experiences for our learners.

It is safe to say that by now, you have seen many articles, posts, opinions, and stories about ChatGPT, and about large language models (LLMs) more broadly, in relation to higher education and teaching and learning in particular. These writings offer perspectives ranging from raising concerns to celebrating new opportunities, and often a mix of the two. They also continue to grow rapidly in number as new AI-powered LLMs emerge and evolve (e.g., Google’s Bard).

The intent of this piece is not to add another article sharing tips or concerns about ChatGPT. Instead, it (1) summarizes the major concerns about ChatGPT and (2) surveys ideas about its positive implications, based on what has been published to date.

Concerns about ChatGPT

Faculty, scholars, and higher education leaders have raised several concerns about ChatGPT. These concerns stem from the ways it can potentially be misused.

  • Using ChatGPT to cheat by asking it to write essays/answer open-ended questions in exams/discussion forums and homework assignments (December 19th, 2022 NPR Story) (December 6th, 2022 Atlantic Story) (January 16 New York Times Story).
  • Using ChatGPT to author scholarly works, which conflicts with the ethical standards of scientific inquiry. Several high-impact/profile journals have already formulated principles to guide authors on how to use LLM AI tools and why such a tool cannot be credited as an author: any attribution of authorship carries with it accountability for the scholarly work, and no AI tool can take such responsibility (January 24th, 2023 Nature Editorial).
  • ChatGPT can threaten the privacy of students and faculty (and any other user). Its privacy policy states that data can be shared with third-party vendors, law enforcement, affiliates, and other users. And while one can delete their ChatGPT account, the prompts they entered cannot be deleted. This poses a risk for prompts on sensitive or controversial topics, as that data cannot be removed (January 2023 Publication by Dr. Torrey Trust).
  • ChatGPT is not always trustworthy, as it can fabricate quotes and references. In an experiment conducted by Dr. Daniel Hickey of the Instructional Systems Technology department at Indiana University Bloomington, “ChatGPT was able to write a marginally acceptable literature review paper, but fabricated some quotes and references. With more work such as including paper abstracts in the prompts, GPT is scarily good at referencing research literature, perhaps as well as a first-year graduate student.” (January 6th, 2023, Article by Dr. Daniel Hickey)

Excitement about ChatGPT

At the other end of the spectrum, there have been several ideas expressing interest in and excitement about ChatGPT in higher education. These ideas stem from how it can be used ethically and in a controlled manner.

  • Using ChatGPT to speed up the drafting of writing for several outlets (reports, abstracts, emails, conference proposals, press releases, recommendation letters, etc.). ChatGPT can produce elaborate drafts that must be edited to remove any inconsistencies or inaccuracies (December 7th, 2022 Social Science Space story).
  • Using ChatGPT in the process of brainstorming ideas for curriculum design, lesson planning, and learning activities. The tool can provide some novel ideas or remind educators of some instructional techniques and strategies that they had heard about in the past (January 23rd, 2023, Article by Dr. David Wiley).
  • Using ChatGPT to provide students with tutoring/scaffolds. The tool can act like a virtual tutor that does not simply give students the answer but rather scaffolds them to reach the correct answer themselves (Sal Khan, founder/CEO of Khan Academy, Spring 2023 TED Talk); see the sketch after this list.
  • Teaching with ChatGPT to train students on using AI tools and models, provide opportunities to exercise critical thinking skills, and improve their technological literacy (January 12th New York Times story).
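As a concrete illustration of the tutoring idea above, here is a minimal sketch using OpenAI’s Python client. The model name, system prompt, and example question are illustrative assumptions, not anything prescribed by the sources cited here.

    # Minimal sketch of a scaffolding tutor built on the OpenAI API.
    # Assumes the openai package is installed and OPENAI_API_KEY is set;
    # the model name and system prompt are illustrative choices.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a patient tutor. Never give the final answer "
                    "directly. Ask guiding questions and offer hints that "
                    "help the student reach the answer themselves."
                ),
            },
            {"role": "user", "content": "Why does ice float on water?"},
        ],
    )
    print(response.choices[0].message.content)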

Concluding Thoughts

There are major concerns about ChatGPT and the larger phenomenon of AI-powered large language models (LLMs). These concerns are legitimate, and they are counterbalanced by notable ideas about the positive implications of LLMs in higher education classrooms. As we aspire to make evidence-based educational and learning design decisions, we should carefully review the research that has been done on AI in higher education to date and treat the gaps as opportunities to expand knowledge and identify new opportunities and risks.

Our University’s newly formed advisory committee on the applications of generative AI is a good example of how higher education institutions can guide the use, evaluation, and development of emerging AI tools and services. Additionally, public discussions about generative AI and its implications for education are necessary to strengthen the public-facing mission of the University, where input from educators, students, and members of the community is welcome.

The rapid shift to emergency remote instruction during COVID-19 left many instructors questioning how best to assess students, even well after classes resumed. Some wondered whether online tests made students more likely to violate academic integrity rules. Online test proctoring made news in many higher education settings as a way to ensure academic integrity; however, others have argued it violates students’ privacy.

What is Online Proctoring?

You may be familiar with proctoring in a face-to-face or residential setting, where a designated authority oversees an exam in a controlled, specified environment. Similarly, online proctoring is a service in which either a person or an artificial intelligence algorithm monitors a learner’s environment during an online exam. However, the environment an online proctor oversees is the learner’s personal environment. This monitoring can take the form of videotaping students, logging their keystrokes, and collecting browser data, location data, and even biometric data such as eye movements.

Advocates of online proctoring cite concerns about academic integrity in the online environment as a reason to implement proctoring (Dendir & Maxwell, 2020). Some even suggest that students do not mind the additional security because they believe it supports the integrity of the test and/or degree.

Concerns and Research

While proctoring for academic integrity may seem reasonable, monitoring a learner’s home environment raises questions and has the potential for harm. Online proctoring can be perceived as invasive by students, as it records personal information about one’s location and body that is not otherwise necessary for an exam. Several institutions, like U-M Dearborn, the University of California Berkeley, the University of Illinois, and the University of Oregon, have limited, if not discontinued altogether, the use of third-party proctoring services, citing issues of accessibility, bias, student privacy, and institutional culture. Student and faculty groups have publicly advocated for institutions to discontinue security features like locked-down browsers and third-party monitoring. At the University of Michigan Ann Arbor, third-party proctoring is still available through vendor partners, but it generally involves a separate fee and may be expensive.

Most of the academic research on online proctoring has focused on academic integrity rather than on the impact of proctoring itself. Wuthisatian (2020) found lower student achievement in online proctored exams compared to the same exam proctored onsite; students who were least familiar with the technology and the requirements for setting it up performed the most poorly. In addition, students who have test anxiety may experience even more anxiety in certain proctoring situations (Woldeab & Brothen, 2019). With further research, we may find that the problem is not necessarily proctoring, but rather the burden the technology places on students taking an online exam.

Problems with internet connections or the home testing environment may be beyond students’ control. The inability to create a “proper” testing environment raised students’ concerns about being unjustly accused of cheating (Meulmeester, Dubois, Krommenhoek-van Es, de Jong, & Langers, 2021).

Alternatives to Proctoring

Ultimately, only the instructor can determine whether proctoring is the right choice for a class, and sometimes it may be the best choice for your discipline, field, or specific assessment. In a remote setting especially, the integrity of an assessment (particularly a test) can feel beyond your control, making proctoring seem like the only option. However, there are alternatives, from using exam/quiz security measures to rethinking a course’s assessment strategy to deemphasize exams. If you are concerned about how and what you are assessing, the Center for Research on Learning and Teaching provides resources and consultations on academic integrity and different methods of assessment. We also recommend CAI’s Faculty Proctoring document if you have questions about proctoring.
