
Innovation Insights: Charting Equity in Online Learning Teams

Finding ways to support every student is a fundamental challenge for instructors. When the learning occurs online, ensuring an equitable experience can seem daunting, especially when students are part of teams that meet outside a professor’s purview.

According to researcher Yiwen Lin, interventions aimed at boosting student engagement and experience are effective, and the strategic use of generative AI could ensure group learning benefits every team member.

Local Inspiration

As an undergraduate student at the University of Michigan, Lin got a glimpse into her future research while attending a talk on the student support tool ECoach. Developed by the Center for Academic Innovation, ECoach software provides students personalized feedback and tailored strategies for success. 

Lin recalled attending the presentation given by ECoach founder Tim McKay, Arthur F. Thurnau Professor of Physics, Astronomy, and Education. She was struck by McKay’s finding that while female physics students did not speak frequently in class, they engaged and contributed in other meaningful and important ways.

“What he found was that women like to back-channel,” Lin said. “I thought, well, women engage, but oftentimes they just engage differently, and that’s hard to capture with an assessment that only looks at the frequency of participation.”

Lin, now a postdoctoral associate at the University of Pittsburgh, researches this deeper data with an eye on gender differences. She examines how psychological factors impact the persistence of online STEM learners, the quality of participation in team settings, and what interventions can be used to encourage more equity among students. Lin shared her research in an Innovation Insights talk titled “Charting Equity in Online Learning Teams: Opportunities and Challenges,” presented by the center.

Male vs. Female Motivation

Research examining gender differences in STEM learning has traditionally evaluated how students’ psychological experiences impact outcomes. Lin’s research delves deeper into the learning process itself, revealing some surprising findings.

In one study, Lin and her team replicated a previous research project that looked at how a sense of belonging and STEM identity impacted female students’ desire to continue in STEM. But unlike the former study, Lin’s research used a pool of international online learners, many of them graduate students. 

The results corroborated the importance of belonging and identity for women. However, when they examined the same connection for male learners, Lin’s team found that belonging and identity were also strong motivators for men. In fact, identity and belonging showed a slightly stronger link to STEM persistence for men compared to their female peers. This was the opposite of the previous findings. 

Lin believes the pool of students (international and online) may have been a factor in the divergence from past research. Either way, interventions designed to increase female learners’ sense of belonging and identity also clearly impacted male learners. Subsequent polling showed that a positive group dynamic affected both male and female retention in STEM.

“We found that facilitating effective group dynamics can be potentially quite important for cultivating a more inclusive psychological experience,” Lin said.

Beyond Quantifying Participation

It can be challenging for instructors of online courses to incorporate those interventions, especially for small groups meeting outside the virtual classroom. 

Lin outlined those challenges and the importance of diving deeper into the data in a study monitoring 88 small teams (three students per team) that were given a series of challenges to complete in a short period of time. Examining gender differences in participation, Lin’s team confirmed that women spoke less in both mixed-gender and male-majority teams, using fewer words and taking fewer speaking turns than their peers.

The team then ran a language analysis on the transcripts of the students’ collaboration and found that the female students actually provided a higher quality of participation than their male peers.

“Female students were better at responding to their teammates, building onto their contributions, and also being more cohesive with their own participation,” Lin said. 

It affirmed her assertion that research can help look beyond initial observations about frequency. Lin believes that assessing the quality of contributions will be key to developing effective tools that encourage student participation in online courses and bring more equity to small groups.

AI for an Equitable Learning Experience

What those tools may look like is an exciting proposition to Lin, especially generative AI tools that can be applied to what she describes as the “in between,” the learning experience of students as they work through their course and team assignments. 

“We sort of conceptualize that it is useful for AI to help us assess and model collaborative processes, rather than only collaborative outcomes,” Lin said. 

Generative AI tools could provide personalized support for students, identifying learning patterns that may require intervention, much like an intelligent tutoring system. Lin also sees potential in creating a similar generative AI program for teams, encouraging more equity in their collaboration and helping students from varied backgrounds and diverse perspectives interact in constructive and respectful ways. She pointed to the center’s tool Tandem as an example of how well-designed support tools can reveal more about team dynamics and help instructors better support and guide students. Tandem coaches students working on team projects and gives instructors the chance to intervene when they see a group needs assistance.

Lin acknowledges that integrating generative AI with student support comes with challenges. That is why, she says, instructor input is key to ensuring tools are built with careful consideration of privacy and bias and are extensively tested before launch. When done correctly, they could be powerful tools for building a more inclusive and equitable online learning environment.

“We wanted to think more deeply about how we can leverage AI as a tool for equity,” Lin said. “And this would perhaps be always a constant discussion in the community as we move forward with it.”

References

Lin, Y., & Nixon, N. (2024). STEM pathways in a global online course: Are male and female learners motivated the same? L@S 2024: Proceedings of the Eleventh ACM Conference on Learning @ Scale, 243–249.

Lin, Y., Dowell, N., Godfrey, A., Choi, H., & Brooks, C. (2019). Modeling gender dynamics in intra and interpersonal interactions during online collaborative learning. LAK19: Proceedings of the 9th International Conference on Learning Analytics & Knowledge, 431–435.

Nixon, N., Lin, Y., & Snow, L. (2024). Catalyzing equity in STEM teams: Harnessing generative AI for inclusion and diversity. Policy Insights from the Behavioral and Brain Sciences, 11(1), 85–92.

Lewis, N. A., Sekaquaptewa, D., & Meadows, L. A. (2019). Modeling gender counter-stereotypic group behavior: A brief video intervention reduces participation gender gaps on STEM teams. Social Psychology of Education, 22(3), 557–577.

Dowell, N., Lin, Y., Godfrey, A., & Brooks, C. (2019). Promoting inclusivity through time-dynamic discourse analysis in digitally-mediated collaborative learning. In Artificial Intelligence in Education: 20th International Conference, AIED 2019, Chicago, IL, USA, June 25–29, 2019, Proceedings, Part I (pp. 207–219). Springer International Publishing.

Generative AI (GenAI) tools are becoming increasingly popular for a wide variety of uses, including in classrooms. Whether you’re generating images, building slides, or creating summaries of readings, it’s important to be thoughtful about the tools you’re using and the impact they can have on both your students and our world as a whole.

Bias

A GenAI tool is only as good as its training data; if that data contains racist or sexist content, we shouldn’t be surprised when the tool reproduces the same biases. Bias comes in a variety of forms, including stereotypical, gender, and political bias, and all of these can lead to certain groups being inaccurately overrepresented or underrepresented in outputs.

Bloomberg tested the biases present in the Stable Diffusion text-to-image generator in 2023. When they prompted the model to create representations of jobs considered “high-paying” and “low-paying,” the images generated for high-paying jobs were typically of people with lighter skin tones, while people with darker skin tones featured more prominently in images of low-paying jobs. Bloomberg found similar results for gender: Stable Diffusion generated roughly three images of men for every image of a woman, and when women did appear, they were typically in lower-paying, more traditional roles, like housekeeper. Prompts for jobs like “politician,” “lawyer,” “judge,” and “CEO” produced images that were almost entirely of light-skinned men.

Harmful Content

Besides being biased, GenAI can produce content that is harmful in a variety of ways. GenAI can hallucinate content that is not based on actual data and is instead fictitious or unrealistic. It can also be used to produce artificial video or audio impersonating a person’s likeness. When this kind of content is created with the person’s permission, it’s commonly called “synthetic media.” When it’s created without their permission, it’s referred to as a “deepfake.” Deepfakes are often used to harass, humiliate, and spread hate speech. GenAI has made the creation of deepfakes easy and cheap, and there have been several high-profile cases in the US and Europe of women and children being abused through their creation.

Policymaking efforts to combat the proliferation of and harm caused by deepfakes have become common both in the U.S. and abroad, with proposals often including disclosure requirements for the use of synthetic media, at least for certain activities. While educational uses of these technologies are unlikely to be restricted or banned, users should strongly consider disclosing their use by default, both in the interest of transparency and in anticipation of any future requirements that may apply. It may also be worthwhile to consider whether the companies offering these products are well positioned to comply with this quickly evolving regulatory landscape, and whether they are making reasonable efforts to help prevent the misuse of their products.

Data

The collection of data used to train GenAI models can raise a variety of privacy concerns, particularly around personal and proprietary data. Some personal data collection can be declined, although the methods for doing so are often buried in lengthy terms of service that most users don’t read. Those terms of service also cover how the GenAI tool can use the data you put into the tool via prompting, so you should be cognizant of the kind of information you’re feeding it.

The Cisco 2024 Data Privacy Benchmark Study revealed that most organizations are limiting the use of GenAI, with some banning it entirely, because of data privacy and security issues. This is likely because 48% of employees surveyed admitted to entering non-public company information into GenAI tools. There’s also a general lack of transparency around what kinds of datasets have been used to train GenAI tools. Although some explicitly state where their training data comes from, many are vague about what the training data was and how they accessed it.

Copyright

Many believe that using content, like books, images, and videos, to train GenAI falls under fair use in the U.S., but multiple lawsuits are currently challenging this notion. If companies are unable to rely on fair use to acquire training data, the effectiveness and availability of GenAI is likely to decrease dramatically; the cost of licensing the enormous amount of data needed will likely drive all but the biggest companies out of the market.

The outputs created by GenAI can have their own copyright issues, depending on how much they pull from the training data. If an image generated by GenAI, for example, is substantially similar to an image in the training data, there could be liability for copyright infringement if or when the image is used. Many GenAI tools attempt to avoid this by refusing to generate content that is similar to copyrighted material, but there are ways for creative prompters to get around these restrictions.

Although many GenAI tools claim to be trained on openly licensed content, studies show that 70% of the tools didn’t specify the licensing requirements for the generated work, and those that did often suggested a more permissive license than the original creator intended.

The use of GenAI also raises ethical issues around authorship that are related to copyright but distinct from it. For example, when using information gathered from GenAI, there may be an ethical obligation to cite the original source to avoid claims of plagiarism. GenAI doesn’t typically provide citations, and when it does, those citations are frequently incorrect. There are also concerns about the displacement of human authors and artists by GenAI; this frequently comes up when GenAI is used to create works in the style of certain artists or authors.

Environmental Impact

GenAI has a huge environmental impact. Research has shown that training the early chatbots, such as GPT-3, produced as much greenhouse gas as a gasoline-powered vehicle driven for 1 million miles. Generating one image using GenAI uses as much energy as fully charging your phone, and ChatGPT alone consumes as much energy as a small town every day. On top of that, the data centers that house the training data and infrastructure for these tools require large amounts of electricity, and water to keep them from overheating. Right now, it’s nearly impossible to accurately evaluate the full extent of GenAI’s environmental impacts.

Equity

There are a variety of equity concerns when it comes to GenAI. Most GenAI tools are trained on data from data-rich languages and are less likely to include non-standard dialects or languages. There are also access and efficacy disparities. Not everyone will have access to GenAI tools, whether because of cost, a lack of internet access, or accessibility issues with the tool itself. Underrepresented or underserved groups may find their experiences missing from training data that is optimized for some groups but not all, limiting the efficacy of the outputs for them.

Finally, it’s important to remember that all of the legal and ethical issues discussed so far have a disproportionate effect on marginalized groups. For example, negative environmental effects tend to be felt most acutely in vulnerable communities. Considering the major impact GenAI has on the environment, how are we going to work with these groups to help ensure they’re not further harmed?

Conclusion

Overall, there are pretty significant legal and ethical issues we should consider before using GenAI tools. This doesn’t mean that we shouldn’t use GenAI tools; it means that we should be thoughtful about when, how, and why we’re using them. And we should know that the way we use them might change in the not so distant future. The current lawsuits will take years to work their way through the legal system, and depending on how they shake out, GenAI tools may have to go through some major changes when it comes to their training data.

Here are five tips for navigating through these complex issues:

  1. Investigate the reputation of the GenAI tool and the company that created it. Perform an online search for any potential legal or ethical issues. Add search terms like “complaint,” “violation,” or “lawsuit” with the company’s name, and be sure to read product reviews.
  2. Check the terms of service. Review the terms of service and privacy policies before using GenAI. Caution should be taken before publishing materials created through GenAI.
  3. Protect sensitive data. In addition to data shared for training purposes, it should be assumed, unless otherwise stated, that data shared when using GenAI tools will be accessible by the third-party tool provider and its affiliates. Data sharing must adhere to U-M policies.
  4. Consider the ethics/limitations. Continue to remember, and remind your students, that GenAI tools are often biased, as the technology is designed to output common results based on its learning model. GenAI can also “hallucinate,” so specific claims should always be verified before sharing.
  5. Consult resources and ask for help. We are still navigating uncharted waters. Utilize resources available at U-M, including training and workshops on GenAI hosted across campus. There is also a new GenAI as a Learning Design Partner series led by U-M instructors that is freely available via Coursera.


Educators can use generative AI to transform dense, technical material into clear, easily understandable content. This improves students’ comprehension and makes the learning experience more inclusive to a wider audience. While students are growing in their knowledge of complex academic topics, sometimes academic terminology can be a barrier. Particularly early in the course, students may not yet be familiar with the jargon and language of your subject matter. In addition, you may have learners in your course with a wide range of educational and cultural backgrounds. Some of your students may be from countries outside of the United States, and English may not be their first language. By demystifying complex concepts, jargon, and metaphors with generative AI, educators are empowered to create more equitable and effective learning environments for our diverse array of learners. 

For example, you can use the following example prompt to get started: 
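A prompt along these lines fits the description below; the exact wording here is illustrative:

“Please rewrite the following text to an 8th-10th grade reading level on the Flesch-Kincaid Grade Level scale, keeping all key concepts intact: [paste your text here]”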

In this prompt, we are asking ChatGPT to rewrite text to an 8th-10th-grade reading level on the Flesch-Kincaid Grade Level scale. This is the reading level recommended for a general adult lay audience. Feel free to adjust it to fit your target audience.

Example: An Online Course on Neuroscience

Drafting

Now imagine that you are a renowned neuroscientist and a highly regarded faculty member at Michigan Medicine. You are interested in developing an online course that will bring neuroscience concepts to a lay audience. You are excited to get started, but as you begin to develop content, you quickly realize that your typical content is aimed at seasoned medical students and filled with jargon that may be daunting to those without prior knowledge. You realize that generative AI may be able to assist you in breaking down concepts into simpler terms. 

You fill in the example prompt with some of the text from one of your old in-person presentations with key concepts that you would like to include in this online course: 

In response to your input, ChatGPT gives you the following output: 

In this example, ChatGPT keeps all of the main concepts intact while using simpler language, providing definitions of terminology used (rather than removing it entirely), and breaking the large paragraph into more digestible, smaller paragraphs or chunks. 

Refining 

As the content expert, it is important to read through the output and ensure that all key concepts remain intact. It is also up to you to determine whether the revisions are sufficient and appropriate for your audience. You may choose to ask for stylistic revisions as well. For example, ChatGPT wrote the text as though the course is currently happening, but you plan to deliver this information at the beginning of the course, describing what the learner will learn; the tense is your preference.

You can ask ChatGPT to revise with the following: 
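A revision request along these lines would work; the wording is illustrative:

“Please rewrite this text in the future tense, as an introduction delivered at the beginning of the course that describes what learners will learn.”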

ChatGPT will then go through and make the requested revisions to the text using the appropriate tense that you indicated in your input: 

Continue to refine as needed. Consider feeding examples of your tone of voice into the chat so that the content is not only accessible for learners but also retains a human element. You can also raise the expected reading level of the content as your students grow in their knowledge of the subject matter.

Generative AI can be a valuable asset to instructors looking for assistance with creating various aspects of course design. For example, generative AI, such as ChatGPT, can be a valuable tool for educators in drafting learning objectives. Using GenAI in any setting is usually a process of drafting and then refining prompts until the desired result is achieved. In this article, we will outline some ways to generate and refine learning objectives for a course.

Learning objectives are concise statements that articulate what students are expected to learn or achieve in a course. They play a crucial role in guiding both teaching strategies and assessment methods, ensuring that educational experiences are focused and effective. Clear and well-defined learning objectives are essential for aligning educational activities with desired learning outcomes. By analyzing a vast array of educational content and pedagogical methods in its training data, AI can offer a wide range of learning objective recommendations, which educators can then build off of, using their knowledge as experts in the field. 


Using your preferred GenAI tool, here is an example prompt that you can use to get started: 
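A template along these lines captures the idea; the bracketed fields and wording are illustrative:

“I am developing a course on [topic] for [audience]. Some ideas I want to cover include [key ideas]. Please draft a list of measurable learning objectives for this course.”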

This example prompt can be modified to fit your needs. For example, you may choose to add more ideas and give additional context about the course. The more detail and context you provide in your input, the better the AI output will be. So please feel free to add in outlines, syllabi, or any other materials that may help your GenAI assistant better understand your vision. 

Example: An Online Course on the Cold War

Drafting Objectives

Now that we have our example prompt, let’s see it in action. Imagine you are an instructor for an introductory online course on the Cold War. You plan to use ChatGPT to generate some ideas on potential learning objectives to get you started and guide your curriculum creation. You already have some general ideas on what you want to cover: causes, major events, and overall impact. You fill in the prompt like so:
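Filled in for this scenario, the prompt might read as follows; the wording is illustrative:

“I am developing an introductory online course on the Cold War for undergraduate students. Some ideas I want to cover include the causes of the Cold War, its major events, and its overall impact. Please draft a list of measurable learning objectives for this course.”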

You press enter and ChatGPT provides you with the following learning objectives: 

Refining

It is now up to you as the expert to determine which learning objectives are the most relevant and how you should go about revising them. For example, you may look at the list and notice that there are no learning objectives that ask the learners to create something with the knowledge they’ve acquired throughout the course (e.g., a final project). You return to ChatGPT and ask the following: 
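A follow-up request along these lines would fit; the wording is illustrative:

“Can you suggest one or two learning objectives that ask learners to create something with the knowledge they’ve gained, such as a final project?”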

In response, ChatGPT provides you with the following: 

If you disagree with this suggestion, you can reply with “More?” to get additional ideas. ChatGPT will then provide you with a longer list: 

You can repeat this process as often as you’d like, adjusting the prompt and adding additional context (e.g., outlines, key ideas, information about your teaching style) to get better responses. When formulating responses for you, ChatGPT looks at the entire chat log, so it is recommended that you continue to add to the same chat for best results.

In our next article, we’ll explore how to use Generative AI to improve accessible language in your course.

Introduction

Education is undergoing a significant transformation as generative artificial intelligence continues to develop at a rapid pace. It is now easier than ever for educators to experiment with generative AI in their practice and see for themselves how it can be leveraged during the course development process to brainstorm, synthesize, and draft everything from student communications to learning objectives.

Generative AI: The Basics

Before experimenting with Generative AI (GenAI), it is helpful to have some high-level foundational knowledge of how it works. Essentially, GenAI functions using advanced machine learning algorithms, specifically neural networks, which are loosely modeled on how the human brain processes information. These networks are trained on large datasets, enabling them to learn language patterns, nuances, and structures. As a result, GenAI can produce contextually relevant and coherent content, a capability exemplified in tools like ChatGPT.

To better understand how GenAI tools like ChatGPT work, let’s look at a breakdown of the acronym “GPT”: 

GPT stands for “Generative Pre-trained Transformer.” It is a type of artificial intelligence model designed for natural language processing tasks. “Generative” refers to its ability to generate text based on a combination of the data it was trained on and your inputs. It can compose sentences, answer questions, and create coherent and contextually relevant paragraphs. 

The term “Pre-trained” indicates that the model has undergone extensive training on a vast dataset of text before it is fine-tuned for specific tasks. This pre-training enables the model to understand and generate human-like text. 

Finally, “Transformer” is the name of the underlying architecture used by GPT. Transformers are a type of neural network architecture that has proven especially effective for tasks involving understanding and generating human language due to their ability to handle sequences of data, such as sentences, and their capacity for parallel processing, which speeds up the learning process. 
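To make this concrete, here is a minimal Python sketch of what calling a GPT-style model programmatically looks like. It assumes the openai Python package is installed and an API key is set in the OPENAI_API_KEY environment variable; the model name is illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Send a prompt to a GPT-family model and print the generated text.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute any available GPT-family model
    messages=[
        {"role": "system", "content": "You are a helpful teaching assistant."},
        {"role": "user", "content": "Explain the transformer architecture in one sentence."},
    ],
)

print(response.choices[0].message.content)
```

Each call sends the full message history, which is also why adding context to the same chat tends to improve responses.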

The GPT series, developed by OpenAI, has seen several iterations, with each new version showing significant improvements in language understanding and generation capabilities. Many of these improvements come from the model continuing to train on user inputs. OpenAI is transparent that your data may be used to improve model performance, and you can choose to opt out by following the steps outlined in the upcoming articles on how to use GenAI tools for course design, learning objectives, and more.

Does it matter which GenAI tool I use?

Not really. Most GenAI tools are broadly similar, though individuals may prefer one over another based on response speed or comfort with the interface. You may also wish to choose a tool that lets you opt out of having your data used for training purposes.

Next Steps and Considerations

In educational contexts, the incorporation of GenAI tools, such as ChatGPT, will potentially reshape our approach to content creation and improve efficiency for educators who often find themselves pressed for time. However, it is important to acknowledge the technology’s limitations, such as potential biases, outdated information due to training data cutoffs, and incorrect information, often referred to as “hallucinations.” It is vital that you always fact-check and revise GenAI outputs to maintain the integrity and high quality of your content.

In conclusion, by leveraging GenAI tools like ChatGPT, educators can navigate course design with greater ease and efficiency. From drafting learning objectives and engaging course titles to simplifying complex academic language and brainstorming assessments, GenAI has the potential to be an invaluable asset to your design work. However, it is critical to remember that these tools come with limitations, including potential biases and inaccuracies. By combining the strengths of GenAI with the expertise and critical oversight of educators, we can efficiently create new experiences for our learners.

It is safe to say that by now you have seen many articles, posts, opinions, and stories about ChatGPT, and about large language models (LLMs) more broadly, in relation to higher education and teaching and learning in particular. These writings offer perspectives ranging from raising concerns to celebrating new opportunities to a mix of both, and they continue to grow rapidly in number as new AI-powered LLMs emerge and evolve (e.g., Google’s Bard).

The intent of this piece is not to add another article sharing tips or concerns about ChatGPT. Rather, it (1) summarizes the major concerns about ChatGPT and (2) highlights ideas about its positive implications, based on what has been published to date.

Concerns about ChatGPT

Faculty, scholars, and higher education leaders have raised several concerns about ChatGPT. These concerns stem from the ways it can be misused.

  • Using ChatGPT to cheat by asking it to write essays/answer open-ended questions in exams/discussion forums and homework assignments (December 19th, 2022 NPR Story) (December 6th, 2022 Atlantic Story) (January 16 New York Times Story).
  • Using ChatGPT to author scholarly works, which conflicts with the ethical standards of scientific inquiry. Several high-profile journals have already formulated principles to guide authors on how to use LLM-based AI tools and why such a tool cannot be credited as an author: any attribution of authorship carries with it accountability for the scholarly work, and no AI tool can take such responsibility (January 24th, 2023 Nature Editorial).
  • ChatGPT can threaten the privacy of students, faculty, and any other user. Its privacy policy states that data can be shared with third-party vendors, law enforcement, affiliates, and other users. And while users can delete their ChatGPT accounts, the prompts they entered cannot be deleted. This setup puts discussions of sensitive or controversial topics at risk, as that data cannot be removed (January 2023 Publication by Dr. Torrey Trust).
  • ChatGPT is not always trustworthy, as it can fabricate quotes and references. In an experiment conducted by Dr. Daniel Hickey in Indiana University Bloomington’s Instructional Systems Technology department, “ChatGPT was able to write a marginally acceptable literature review paper, but fabricated some quotes and references. With more work such as including paper abstracts in the prompts, GPT is scarily good at referencing research literature, perhaps as well as a first-year graduate student.” (January 6th, 2023, Article by Dr. Daniel Hickey)

Excitement about ChatGPT

At the other end of the spectrum, there have been several ideas expressing interest and excitement about ChatGPT in higher education. These ideas stem from ways the tool can be used ethically and in a controlled manner.

  • Using ChatGPT to speed up the writing of drafts for several outlets (reports, abstracts, emails, conference proposals, press releases, recommendation letters, etc.). ChatGPT can produce elaborated writing that must be edited to remove any possible inconsistencies or inaccuracies (December 7th, 2022 Social Science Space story).
  • Using ChatGPT in the process of brainstorming ideas for curriculum design, lesson planning, and learning activities. The tool can provide some novel ideas or remind educators of some instructional techniques and strategies that they had heard about in the past (January 23rd, 2023, Article by Dr. David Wiley).
  • Using ChatGPT to provide students tutoring/scaffolds. The tool can act like a virtual tutor who does not simply give the answer to the student but rather scaffold them to reach the correct answers by themselves. (Sal Khan, founder/CEO of Khan Academy, Spring 2023 TED Talk)
  • Teaching with ChatGPT to train students on using AI tools and models, provide opportunities to exercise critical thinking skills, and improve their technological literacy (January 12th New York Times story).

Concluding Thoughts

There are major concerns about ChatGPT and the larger phenomenon of AI-powered large language models (LLMs). These concerns are legitimate, and they are counterbalanced by notable ideas about the positive implications of AI-powered LLMs in higher education classrooms. As we aspire to make evidence-based educational and learning design decisions, we should carefully review the research that has been done on AI in relation to higher education to date and engage with the gaps as opportunities to expand knowledge and identify new opportunities and risks.

Our University’s newly formed advisory committee on the applications of generative AI is a good example of how higher education institutions can guide the use, evaluation, and development of emergent AI tools and services. Additionally, public discussions about generative AI and its implications for education are necessary to strengthen the public-facing mission of the University, where input from educators, students, and members of the community is welcome.