By Lilian H. Hill
Artificial
intelligence (AI) is increasingly shaping how people learn, work, and access
information. From adaptive learning platforms to automated feedback tools,
adult educators are finding themselves navigating opportunities and challenges
that come with these technologies. One of the most pressing concerns is bias in
AI systems, a complex issue that raises questions of fairness, equity, and
responsibility in teaching and learning.
Concerns about
biased algorithms predate the current popularity of artificial intelligence
(Jennings, 2023). As early as the mid-1980s, a British medical school faced
legal repercussions for discrimination after using a computer system to
evaluate applicants. Although the system’s decisions mirrored those of human
reviewers, it consistently favored men and those with European-sounding names.
Decades later, Amazon attempted to streamline hiring with a similar AI tool,
only to find it was disadvantaging women, an outcome rooted in biased training
data from a male-dominated tech workforce.
OpenAI, the
creator of ChatGPT and the DALL-E image generator, has been at the center of debates
over bias since ChatGPT launched publicly in November 2022 (Jennings, 2023).
The company has actively worked to correct emerging issues, as users flagged
examples ranging from political slants to racial stereotypes. In February 2023,
OpenAI took a proactive step by publishing a clear explanation of ChatGPT’s
behavior, providing valuable insight into how the model functions and how
future improvements are being shaped.
Understanding Bias in AI
Bias in AI
occurs when algorithms produce outcomes that are systematically unfair or
unbalanced, often due to the data used to train these systems. When the data
reflects historical inequities, stereotypes, or informational gaps, AI may
unintentionally reproduce or amplify those patterns (Mehrabi et al., 2021). For
instance, résumé screening tools trained on past hiring data may undervalue
applications from women or people of color (Dastin, 2018). Similarly, language
models can generate content that perpetuates cultural stereotypes (Bender et
al., 2021), and facial recognition systems may be less accurate for specific
demographic groups, particularly individuals with dark skin (Buolamwini &
Gebru, 2018). Understanding that AI bias often mirrors societal biases enables
adult educators to engage with AI tools more critically and thoughtfully.
There are three primary sources of bias in AI systems: 1) biased training data, 2) human influence on training AI systems, and 3) the lack of a shared definition of bias.
1. Biased Training Data
AI models learn from vast datasets that reflect the world as it is, including
its prejudices. Just as humans are shaped by their environments, AI is shaped
by the data it consumes, much of which comes from a biased internet. For
instance, Amazon’s hiring algorithm penalized women because it was trained on historical
data that was male-dominated. When datasets disproportionately represent particular
groups or viewpoints, the model’s outputs reflect that imbalance. In short,
there’s no such thing as a perfectly unbiased dataset.
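To make this mechanism concrete, the following minimal Python sketch trains a toy classifier on invented, deliberately skewed "historical hiring" data and shows the skew reappearing in its predictions. The features, coefficients, and numbers are all hypothetical, not drawn from Amazon's system or any real dataset.

```python
# Hypothetical sketch: a toy screening model trained on skewed historical
# data learns to reproduce that skew. All values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# What we want the model to use: a qualification score.
qualification = rng.normal(0.0, 1.0, n)
# What it should ignore: group membership (e.g., gender), encoded 0/1.
group = rng.integers(0, 2, n)

# Historical labels bake in the bias: past reviewers favored group 0.
hired = (qualification + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates who differ only in group membership.
candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])
# The group-0 candidate receives a noticeably higher "hire" probability,
# even though the model was never explicitly told to discriminate.
```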
2. Human Influence in Training
After initial training, AI outputs are
refined through Reinforcement Learning from Human Feedback (RLHF), in which
human reviewers judge and rank candidate responses. While this helps steer the
model toward more “responsible” behavior, it also introduces personal and
cultural biases. If all reviewers share similar backgrounds, their preferences
will influence how the model responds, making complete neutrality impossible.
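A small, hypothetical simulation can illustrate the reviewer-composition point: if every reviewer in the pool shares the same stylistic preference, the preference data that RLHF learns from inherits that tilt wholesale. The reviewer probabilities below are invented for demonstration.

```python
# Hypothetical sketch of how reviewer-pool composition skews preference data.
# Each reviewer prefers a "formal" response style with some probability;
# all probabilities here are invented.
import random

random.seed(0)

def simulate_preferences(reviewer_pool, n_comparisons=10_000):
    """Count how often the 'formal' style wins a pairwise ranking."""
    formal_wins = 0
    for _ in range(n_comparisons):
        reviewer = random.choice(reviewer_pool)
        if random.random() < reviewer["p_prefers_formal"]:
            formal_wins += 1
    return formal_wins / n_comparisons

# A homogeneous panel: everyone shares the same stylistic preference.
homogeneous = [{"p_prefers_formal": 0.8}] * 5
# A mixed panel: preferences vary across backgrounds.
mixed = [{"p_prefers_formal": p} for p in (0.2, 0.4, 0.5, 0.6, 0.8)]

print(simulate_preferences(homogeneous))  # ~0.80: one style dominates
print(simulate_preferences(mixed))        # ~0.50: preferences balance out
```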
3. No Shared Definition of Bias
Even if we could remove all data that reflects human bias, we would still face
one unsolvable problem: people disagree on what bias means. While most can
agree that discrimination is harmful, opinions vary widely on how AI should
navigate complex social, political, or moral issues. Over-filtering risks
producing a model that is so neutral it becomes unhelpful, stripped of nuance
and unable to take a stand on anything meaningful.
Why This Matters for Adult Education
Adult learners
bring diverse backgrounds, identities, and experiences into the classroom. AI
tools built on non-representative data can worsen existing inequalities in
education unless developers improve their training methods and educators use
the technology thoughtfully (Klein, 2024). When AI tools are introduced without
awareness of bias, the risk is that inequities become amplified rather than
reduced (Holmes et al., 2022). For instance:
- Learners from marginalized groups may encounter
materials or assessments that do not accurately represent their knowledge
or potential.
- Automated tutoring or feedback systems may respond
differently depending on dialects, accents, or language use.
- Predictive analytics used to flag “at-risk” learners
could disproportionately affect specific student populations (Slade &
Prinsloo, 2013); a simple audit of such flags is sketched after this list.
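As a concrete illustration of the auditing idea above, here is a minimal, hypothetical sketch that compares how often an "at-risk" flag fires, and how often it fires incorrectly, across learner groups. The records and group names are invented; a real audit would use exports from your institution's analytics platform.

```python
# Hypothetical audit sketch: compare flag rates and false-positive rates of
# an "at-risk" model across learner groups. All records are invented.
from collections import defaultdict

# Each record: (group, flagged_by_model, actually_struggled)
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", True, True),  ("group_b", True, False),
    ("group_b", True, False), ("group_b", False, True),
    # ...in practice, thousands of rows from the analytics platform
]

stats = defaultdict(lambda: {"n": 0, "flagged": 0, "false_pos": 0})
for group, flagged, struggled in records:
    s = stats[group]
    s["n"] += 1
    s["flagged"] += flagged
    s["false_pos"] += flagged and not struggled  # flagged but not struggling

for group, s in sorted(stats.items()):
    print(f"{group}: flag rate {s['flagged'] / s['n']:.0%}, "
          f"false-positive rate {s['false_pos'] / s['flagged']:.0%}")
# A much higher false-positive rate for one group is a signal that the
# flagging system deserves scrutiny before it shapes interventions.
```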
Educators play
a pivotal role in mediating these risks, ensuring that AI supports equity
rather than undermining it.
What Adult Educators Should Consider
- Critical Evaluation of Tools
  - Ask: How was this AI system trained? What kinds of data were used?
  - Explore whether the developers have published documentation about bias testing (Mitchell et al., 2019); a hypothetical example of such documentation follows this list.
- Transparency with Learners
  - Explain how AI is being used in the classroom and its potential limitations.
  - Encourage learners to evaluate outputs critically rather than accepting them at face value.
- Centering Equity and Inclusion
  - Select tools that offer options for cultural and linguistic diversity.
  - Advocate for systems that are designed with universal access in mind (Holmes et al., 2022).
- Ongoing Reflection and Adaptation
  - Keep a reflective journal or log of how AI tools perform with different groups of learners.
  - Adjust teaching strategies when inequities appear.
- Collaborative Dialogue
  - Create opportunities for learners to share their experiences with AI.
  - Engage in professional learning communities where educators discuss emerging issues and solutions.
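For the bias-testing documentation mentioned under "Critical Evaluation of Tools," a model card (Mitchell et al., 2019) is one common format. Below is a purely hypothetical excerpt, represented as a Python dictionary, showing the kind of disaggregated metrics worth looking for; the tool name, dialect groups, and scores are all invented.

```python
# Hypothetical excerpt of bias-testing documentation (a "model card"),
# represented as a Python dict. Every name and number is invented.
model_card = {
    "model": "ExampleSpeechGrader v2",
    "intended_use": "Formative feedback on spoken English responses",
    "evaluation_data": "2,400 adult learners, 6 self-reported dialect groups",
    # Disaggregated metrics are the key bias-testing signal to look for:
    "accuracy_by_dialect": {
        "dialect_a": 0.94,
        "dialect_b": 0.91,
        "dialect_c": 0.82,  # a gap like this warrants caution
    },
}

# A quick screen an educator (or IT reviewer) might apply:
scores = model_card["accuracy_by_dialect"].values()
if max(scores) - min(scores) > 0.05:
    print("Warning: accuracy varies notably across dialect groups.")
```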
Moving Forward
AI literacy is
more crucial than ever. When talking about AI with your adult learners, ensure
they understand that these models are not flawless, that their responses should
not be accepted as absolute truth, and that primary sources remain the most
reliable. Until better regulations are in place for this technology, the best
approach is to "trust but verify." AI technologies are not neutral; they
mirror the values, assumptions, and imperfections of the societies
that create them. For adult educators, the challenge is not to reject AI
outright but to engage with it thoughtfully, critically, and ethically. By proactively
recognizing and addressing bias, educators can help ensure that AI contributes
to inclusive, empowering learning environments.
References
Bender, E. M., Gebru,
T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of
stochastic parrots: Can language models be too big? Proceedings of the 2021
ACM Conference on Fairness, Accountability, and Transparency, 610–623.
https://doi.org/10.1145/3442188.3445922
Buolamwini, J., &
Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in
commercial gender classification. Proceedings of Machine Learning Research,
81, 1–15. http://proceedings.mlr.press/v81/buolamwini18a.html
Dastin, J. (2018,
October 10). Amazon scraps secret AI recruiting tool that showed bias against
women. Reuters. https://www.reuters.com/article/idUSKCN1MK08G
Holmes, W.,
Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B.,
Santos, O. C., & Koedinger, K. R. (2022). Ethics of AI in education:
Towards a community-wide framework. International Journal of Artificial
Intelligence in Education, 32(4), 731–761. https://doi.org/10.1007/s40593-021-00239-0
Jennings, J. (2023, August 8). AI in education: The bias dilemma. EdTech Insights. https://www.esparklearning.com/blog/get-to-know-ai-the-bias-dilemma/
Klein, A. (2024, June 24). AI's potential for bias puts onus on educators, developers. Government Technology. https://www.govtech.com/education/k-12/ais-potential-for-bias-puts-onus-on-educators-developers
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607
Mitchell, M., Wu, S.,
Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I.
D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of
the Conference on Fairness, Accountability, and Transparency, 220–229.
https://doi.org/10.1145/3287560.3287596
Slade, S., &
Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American
Behavioral Scientist, 57(10), 1510–1529.
https://doi.org/10.1177/0002764213479366