Ethical Considerations of Using GenAI Tools

Generative AI (GenAI) tools are becoming increasingly popular for a wide variety of uses, including in classrooms. Whether you’re generating images, building slides, or creating summaries of readings, it’s important to be thoughtful about the tools you’re using and the impact they can have on both your students and our world as a whole.

Bias

A GenAI tool is only as good as its training data; if that data contains content that is racist or sexist, we shouldn’t be surprised when the GenAI tool exhibits the same kinds of bias. Bias can take several forms, including stereotypical, gender, and political bias, and each can lead to certain groups being overrepresented or underrepresented in outputs.

Bloomberg tested the biases present in the Stable Diffusion text-to-image generator in 2023. When they prompted the model to create representations of jobs considered “high-paying” and “low-paying,” the images generated for high-paying jobs were typically of people with lighter skin tones, while people with darker skin tones featured more prominently in images of low-paying jobs. Bloomberg found similar results for gender: Stable Diffusion generated roughly three images of men for every image of a woman, and when women did appear, they were typically in lower-paying and more traditional roles, like housekeeper. Prompts for jobs like “politician,” “lawyer,” “judge,” and “CEO” led to images that were almost entirely of light-skinned men.
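
The core of an audit like Bloomberg’s is simple tallying: generate many images per occupation prompt, label each image’s perceived gender and skin tone, and compare the shares across prompts. The Python sketch below shows only the tallying step, under the assumption that images have already been labeled; the occupations, records, and counts are invented for illustration, and Bloomberg’s actual methodology involved far larger samples and more careful classification.

    from collections import Counter

    # Hypothetical audit records, one per generated image, as a human
    # annotator (or a separate classifier) might produce them.
    annotations = [
        {"prompt": "CEO", "gender": "man", "skin_tone": "light"},
        {"prompt": "CEO", "gender": "man", "skin_tone": "light"},
        {"prompt": "CEO", "gender": "woman", "skin_tone": "dark"},
        {"prompt": "housekeeper", "gender": "woman", "skin_tone": "dark"},
        {"prompt": "housekeeper", "gender": "woman", "skin_tone": "light"},
    ]

    def representation(records, attribute):
        """Tally an attribute's values per prompt so skew is visible at a glance."""
        tallies = {}
        for record in records:
            tallies.setdefault(record["prompt"], Counter())[record[attribute]] += 1
        return tallies

    for prompt, counts in representation(annotations, "gender").items():
        total = sum(counts.values())
        shares = ", ".join(f"{value}: {n / total:.0%}" for value, n in counts.items())
        print(f"{prompt}: {shares}")
    # CEO: man: 67%, woman: 33%
    # housekeeper: woman: 100%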

Harmful Content

Besides being biased, GenAI can produce content that is harmful in a variety of ways. GenAI can hallucinate content that is not based on actual data and is instead fictitious or unrealistic. It can also be used to produce artificial video or audio content impersonating a person’s likeness. When this kind of content is created with the permission of the person depicted, it’s commonly called “synthetic media”; when it’s created without permission, it’s referred to as a “deepfake.” Deepfakes are often used to harass, humiliate, and spread hate speech. GenAI has made the creation of deepfakes easy and cheap, and there have been several high-profile cases in the U.S. and Europe of children and women being abused through their creation.

Policymaking efforts to combat the proliferation of, and harm caused by, deepfakes have become common both in the U.S. and abroad, with proposals often including disclosure requirements for the use of synthetic media, at least for certain activities. While educational uses of these technologies are unlikely to be restricted or banned, users should strongly consider disclosing their use by default, both in the interest of transparency and in anticipation of future disclosure requirements that may apply. It may also be worth considering whether the companies offering these products are well positioned to comply with this quickly evolving regulatory landscape and whether they are making reasonable efforts to prevent the misuse of their products.

Data

The collection of data used to train GenAI models can raise a variety of privacy concerns, particularly around personal and proprietary data. Some personal data collection can be declined, although the instructions for doing so are often buried in lengthy terms of service that most users don’t read. Those terms of service also cover how the GenAI tool can use the data you put into it via prompting, so you should be cognizant of the kind of information you’re feeding it.

The Cisco 2024 Data Privacy Benchmark Study revealed that most organizations are limiting the use of GenAI, and some are banning it entirely, because of data privacy and security concerns. This is likely because 48% of employees surveyed admitted to entering non-public company information into GenAI tools. There’s also a general lack of transparency around what kinds of data sets have been used to train GenAI tools: although some tools explicitly state where their training data comes from, many are vague about what the training data was and how it was accessed.

Copyright

Right now, many believe that using content like books, images, and videos to train GenAI falls under fair use in the U.S., but multiple lawsuits are currently challenging this notion. If companies are unable to rely on fair use to acquire training data, the effectiveness and availability of GenAI are likely to decrease dramatically: the cost of licensing the enormous amount of data needed would likely drive all but the biggest companies out of the market.

The outputs created by GenAI can have their own copyright issues, depending on how much they pull from the training data. If an image generated by GenAI, for example, is substantially similar to an image in the training data, there could be liability for copyright infringement if or when the image is used. Many GenAI tools attempt to avoid this by refusing to generate content that is similar to copyrighted material, but creative prompters can find ways around these restrictions.
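
“Substantially similar” is a legal judgment, not a string metric, but a crude automated screen can at least flag text outputs that overlap heavily with a known source before a human reviews them. The Python sketch below uses the standard library’s difflib for this; the threshold is arbitrary, and comparable screens for images typically rely on perceptual hashing or embeddings instead.

    import difflib

    def too_similar(generated: str, source: str, threshold: float = 0.8) -> bool:
        """Flag a generated string that overlaps heavily with a known source."""
        ratio = difflib.SequenceMatcher(None, generated, source).ratio()
        return ratio >= threshold

    source = "It was the best of times, it was the worst of times."
    print(too_similar("It was the best of times, it was the worst of times!", source))  # True
    print(too_similar("A tale of two very different cities.", source))                  # False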

Although many GenAI tools claim to be trained on openly licensed content, studies show that when asked about licensing requirements, 70% of the tools didn’t specify what the license requirements were for the generated work, and those that did often suggested a more permissive license than the original creator intended.

The use of GenAI brings up ethical issues around authorship that are related to, but distinct from, copyright. For example, when using information gathered from GenAI, there may be an ethical obligation to cite the original source to avoid claims of plagiarism. GenAI doesn’t typically provide citations, and when it does, those citations are frequently incorrect. There are also concerns about the displacement of human authors and artists by GenAI; this frequently comes up when GenAI is used to create works in the style of particular artists or authors.
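
Because generated citations are so often fabricated, one cheap sanity check is confirming that a cited DOI is actually registered. The Python sketch below queries the doi.org handle API using only the standard library; the function name and example DOIs are ours, and a registered DOI only proves the work exists, not that it supports the claim being cited, so results still warrant a manual check.

    import json
    import urllib.error
    import urllib.request

    def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
        """Ask the doi.org handle API whether a DOI is registered."""
        url = f"https://doi.org/api/handles/{doi}"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return json.load(resp).get("responseCode") == 1
        except urllib.error.HTTPError:
            return False  # doi.org answers 404 for unregistered DOIs

    print(doi_resolves("10.1038/nature14539"))     # a registered DOI -> True
    print(doi_resolves("10.1234/not.a.real.doi"))  # unregistered -> False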

Environmental Impact

GenAI has a huge environmental impact. Research has shown that training early chatbot models, such as GPT-3, produced as much greenhouse gas as a gasoline-powered vehicle driving 1 million miles. Generating one image using GenAI uses as much energy as fully charging your phone, and ChatGPT alone consumes as much energy as a small town every day. On top of that, the data centers needed to house the training data and infrastructure for these tools require large amounts of electricity, as well as water to keep them from overheating. Right now, it’s nearly impossible to accurately evaluate or know the full extent of the environmental impacts of GenAI.
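
To make the phone-charge comparison concrete, here is some back-of-the-envelope arithmetic in Python. The battery capacity (about 15 Wh, or 0.015 kWh) and the class-usage numbers are assumptions for illustration, not measurements, and actual per-image energy varies widely by model and hardware.

    # If one generated image costs roughly one full phone charge, what does
    # a semester of image generation by one class add up to?
    KWH_PER_PHONE_CHARGE = 0.015   # assumed ~15 Wh smartphone battery
    students = 100                 # hypothetical class size
    images_per_student = 50        # hypothetical usage over a semester

    total_kwh = students * images_per_student * KWH_PER_PHONE_CHARGE
    print(f"Estimated energy: {total_kwh:.1f} kWh")  # Estimated energy: 75.0 kWh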

Equity

There are several types of equity concerns when it comes to GenAI. Most GenAI tools are trained on data from data-rich languages and are less likely to include non-standard dialects or other languages. There are also access and efficacy disparities. Not everyone will have access to GenAI tools, whether because of cost, a lack of internet access, or accessibility issues with the tool itself. And underrepresented or underserved groups may find their experiences missing from training data that is optimized for some groups rather than all, limiting the efficacy of the outputs for them.

Finally, it’s important to remember that all of the legal and ethical issues discussed so far have a disproportionate effect on marginalized groups. For example, negative environmental effects tend to be felt most acutely in vulnerable communities. Considering the major impact GenAI has on the environment, how are we going to work with these groups to help ensure they’re not further harmed?

Conclusion

Overall, there are significant legal and ethical issues we should consider before using GenAI tools. This doesn’t mean that we shouldn’t use them; it means that we should be thoughtful about when, how, and why we’re using them. And we should know that the way we use them might change in the not-so-distant future. The current lawsuits will take years to work their way through the legal system, and depending on how they shake out, GenAI tools may face major changes to how their training data is sourced.

Practical Tips

Here are five tips for navigating through these complex issues:

  1. Investigate the reputation of the GenAI tool and the company that created it. Perform an online search for any potential legal or ethical issues. Add search terms like “complaint,” “violation,” or “lawsuit” with the company’s name, and be sure to read product reviews.
  2. Check the terms of service. Review the terms of service and privacy policies before using GenAI. Caution should be taken before publishing materials created through GenAI.
  3. Protect sensitive data. In addition to data shared for training purposes, it should be assumed, unless otherwise stated, that data shared when using GenAI tools will be accessible to the third-party tool provider and its affiliates. Data sharing must adhere to U-M policies. A minimal redaction sketch follows this list.
  4. Consider the ethics/limitations. Continue to remember, and remind your students, that GenAI tools are often biased, as the technology is designed to output common results based on its learning model. GenAI can also “hallucinate,” so specific claims should always be verified before sharing.
  5. Consult resources and ask for help. We are still navigating uncharted waters. Utilize resources available here at U-M, including training and workshops on GenAI that are hosted across U-M. There is also a new GenAI as a Learning Design Partner series led by U-M instructors that is freely available via Coursera.
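
On tip 3, a pattern-based redaction pass is one cheap backstop before a prompt leaves your machine. The Python sketch below assumes U.S.-style formats; the patterns and placeholder tags are ours, and regexes will miss names, free-text identifiers, and anything in an unanticipated format, so this supplements rather than replaces policy and judgment.

    import re

    # Placeholder tags for a few common identifier formats (assumed, U.S.-style).
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace matches with placeholder tags before text is sent to a tool."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    prompt = "Summarize: Jane Roe (jroe@example.edu, 555-867-5309) asked about..."
    print(redact(prompt))
    # Summarize: Jane Roe ([EMAIL], [PHONE]) asked about...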

Since the beginning of the Biden administration, there have been consistent efforts to address concerns over student loan debt. The U.S. Department of Education (ED) has approached this concern both by exploring new repayment options for existing borrowers and by attempting to strengthen consumer protections for future borrowers. While ED’s primary concerns appear to lie with the for-profit sector, public and nonprofit institutions have also been impacted by the resulting regulations and guidance. And as this article highlights, the impact on online learning, regardless of sector, has been particularly significant. Meanwhile, website and online course accessibility has been another major focus area for ED in 2023, with the Department of Justice (DOJ) both joining these efforts and leading its own policy changes in this area. In addition to administrators, online course instructors and design teams need to be familiar with some of these major shifts in the regulatory landscape.

OPMs and Software Providers

What changes might we anticipate to our ability to procure software or partner with third parties when offering online programs? 

On February 15, 2023, ED announced new guidance concerning Third-Party Servicers (TPS), which has since been withdrawn pending revisions, and signaled it would also be reviewing the “bundle of services” exception to a ban on incentive compensation that is commonly used by Online Program Managers (OPMs) to share online course revenue with partner institutions. While neither announcement has materialized into actionable rule changes or formal guidance just yet, it may be worth assessing how much a program or course relies on third-party partnerships involving the “provision of educational content and instruction” in connection with a Title IV program.

ED has stated that the “effective date of the revised final guidance letter will be at least six months after its publication, to allow institutions and companies to meet any reporting requirements.” Any new audit and contractual requirements would not take effect until the following fiscal year. However, starting to create an inventory of potentially impacted partnerships and learning software now may prove valuable for meeting whatever future deadlines are announced. It is also important to exercise caution when considering new partnership opportunities that could be impacted.    

This Center for Academic Innovation (CAI) Advisory Note provides more information regarding both the proposal to expand the TPS definition and the current (and potential future) state of revenue sharing with OPMs.

Student-Consumer Protections

To the extent online instructors and design teams are involved in recruitment efforts, drafting language for syllabi and course description materials, or contributing to marketing materials, the rule changes below may be particularly important. 

In addition to state-by-state consumer protection remedies, federal student loan borrowers are able to have some or all of their loans discharged if ED finds that their institution misrepresented information about an academic program (e.g., its cost or its academic or career benefits) and the borrower experienced harm as a result. ED can then recover the amount forgiven from the institution. A revived and strengthened Borrower Defense to Repayment (BDR) rule took effect on July 1, 2023. Due to a court order, however, claims brought under the new rule cannot be adjudicated by ED, at least temporarily, pending the court’s decision on the merits of the case. Regardless, claims continue to be filed and should not be ignored, particularly as complaints can still be processed through either the 2016 or 2019 version of the BDR rule, depending on when ED received the application.

Best practices for preventing a BDR claim include: 

  • Carefully reviewing course/program descriptions and syllabi for accuracy; 
  • Avoiding language that could suggest any career or salary is guaranteed (as well as forwarding career questions to career services units whenever appropriate);
  • Being transparent about any known limitations of the course or program in consideration of the expressed goals of the individual student and ensuring all required disclosures are shared;  
  • Redirecting any financial aid or program cost questions to financial aid units; and 
  • For programs leading to professional licensure, ensuring any state-by-state research into (and corresponding disclosures for) whether programs will satisfy educational requirements in other states remains up to date. 

While any claims shared with the institution by ED should be routed to general counsel, it is important for all faculty and staff to keep records of communications with students and prospective students involving future career pathways, program costs, etc. and to be responsive to general counsel as they work to investigate claims under potentially tight timelines for submitting a response.  

Digital Accessibility

On May 19, 2023, ED and the DOJ, through their respective civil rights divisions, issued a joint Dear Colleague Letter that summarized recent online accessibility enforcement actions taken against institutions of higher education. These agencies took the opportunity to also remind institutions of higher education about their existing obligations to ensure individuals with disabilities are given equal opportunities, including as part of online activities, under both the Americans with Disabilities Act (ADA) and Section 504 of the Rehabilitation Act (Section 504), and issued the following warning:

Online accessibility for people with disabilities cannot be an afterthought. The Justice Department and Department of Education will use the ADA and Section 504 as tools to ensure that members of the disability community are able to fully participate in every education program.

The letter also stated that each agency would separately pursue rulemaking on the topic of digital accessibility as part of implementing regulations for these same statutes. While ED has not yet released proposed rules as of the time of this writing (updates can be followed in the Unified Agenda, under Section 504 in this case), the DOJ has moved forward and published proposed changes under Title II of the ADA, which applies to public institutions. The rule, as proposed, would formally adopt the Web Content Accessibility Guidelines (WCAG) 2.1 AA as a standard used to demonstrate accessibility compliance. This is the same standard adopted by the University of Michigan as part of technical guidance for its Electronic and Information Technology Accessibility SPG. 
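
Full WCAG 2.1 AA conformance cannot be verified automatically, but some individual checks can be scripted. As one narrow illustration, the Python sketch below flags img tags that lack an alt attribute (related to success criterion 1.1.1, non-text content) using only the standard library; real audits pair automated tools such as axe-core or pa11y with manual review.

    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        """Collect the src of every <img> tag that has no alt attribute."""

        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "img" and "alt" not in attrs:
                self.missing.append(attrs.get("src", "<no src>"))

    checker = MissingAltChecker()
    checker.feed('<img src="chart.png"><img src="logo.png" alt="University logo">')
    print(checker.missing)  # ['chart.png']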

With regard to educational programs specifically, the DOJ is proposing that a distinction be drawn between public and “password protected” web content. While courses open to the public, such as Massive Open Online Courses (MOOCs), would need to conform with WCAG 2.1 AA from the start, “closed” courses offered to enrolled students would not necessarily need to conform with these guidelines unless or until it becomes known that a registered student with a disability would otherwise have trouble accessing the content and fully participating in the online course. If that need is known prior to the term’s start, the course must be compliant as it launches; otherwise, the institution would have five business days to complete the necessary remediation work.

Again, this rule is not yet final, and changes could still be made. However, it should also be noted that the DOJ and ED have regularly cited WCAG in lawsuits, agency enforcement actions, and resolution agreements. These agencies may continue to do so regardless of whether these rules take effect as written, but the codification of WCAG into ADA implementing regulations would certainly add more weight to this approach to enforcement.

Finally, conformance with WCAG 2.1 AA does not guarantee compliance with the ADA or Section 504. It will remain important to work with disability services units and provide appropriate accommodations even when online course materials might be considered “accessible” under these guidelines. More information on this topic, including online course accessibility resources, can be found at U-M’s Accessibility website and on CAI’s Accessibility and Accommodations for Students with Disabilities webpage.