By Lilian H. Hill
Artificial intelligence (AI) increasingly shapes access to employment, education, healthcare, housing, and public services. AI influences decisions that directly affect people's lives, including résumé screening systems, automated hiring tools, benefits eligibility algorithms, and predictive analytics in social services. Yet awareness of how these systems function, and how they can advantage or disadvantage individuals, is unevenly distributed. For marginalized communities, this gap in understanding can deepen existing inequities rather than alleviate them. Hadar Shoval (2025) notes an emerging digital divide characterized by differential engagement patterns across societal groups, one that may exacerbate educational disparities, and advocates using this framing as a basis for designing more equitable education programs that foster digital and AI literacy. Building AI awareness in marginalized communities is about cultivating informed, critical, and empowered engagement with technologies that already play a role in daily life. Adult, community, and workforce education programs are uniquely positioned to support this work because of their emphasis on relevance, equity, and learner agency.
Why AI Awareness Matters for Marginalized Communities
Marginalized communities are often disproportionately affected by algorithmic decision-making, yet have limited influence over how those systems operate. Research has shown that AI systems can reproduce and amplify historical biases when trained on inequitable data or deployed without safeguards (Benjamin, 2019; Noble, 2018). In employment, automated screening tools may disadvantage candidates with nontraditional career paths. In public services, opaque algorithms can influence eligibility determinations without providing clear avenues for appeal. Surveillance technologies frequently misidentify people with darker skin tones, with error rates significantly higher than those for lighter-skinned individuals, particularly for women and nonbinary people of color, leading to disproportionate false stops, wrongful arrests, and heightened monitoring in already overpoliced communities.
Parthasarathy and Katzman (2024) argue that AI often worsens social inequities, particularly for marginalized communities. Technical fixes and limited oversight are not enough. Instead, they call for a bottom-up approach in which funders, universities, industry leaders, and regulators partner directly with affected communities to shape the design and governance of AI. They recommend incentivizing community-driven research, integrating ethics and social sciences into AI engineering education, strengthening whistleblower protections, supporting civic organizations, and implementing equity-focused regulations that can prohibit harmful technologies.
Meaningful participation by marginalized groups, on terms that are voluntary and compensated, is essential. Parthasarathy and Katzman (2024) indicate that achieving equitable AI requires not only better rules but a deeper intellectual and moral shift toward inclusive, community-centered innovation. They emphasize that technology development agendas are typically set by technical experts and corporations that often prioritize profit or efficiency over public need. When marginalized communities are excluded from defining problems and solutions, technologies can misdiagnose social issues or reinforce structural bias. By contrast, community-engaged design values local knowledge, lived experience, and grassroots expertise, increasing the likelihood that AI tools address real-world concerns and build trust in science and governance.
They also stress that regulation must move beyond narrow technical audits to consider the broader social contexts in which AI systems operate. Equity-focused impact assessments, interdisciplinary oversight, and strong civic advocacy are necessary to prevent harm before technologies are widely deployed. Ultimately, the promise of AI lies not only in innovation but also in reimagining who has the power to shape technological futures—and ensuring that those most affected have a central voice in that process.
From Awareness to Agency
AI awareness helps learners recognize when automated systems are involved, understand their limitations, and ask critical questions about fairness, transparency, and accountability. This form of literacy supports informed consent, self-advocacy, and civic participation rather than passive acceptance of technological authority.
Hadar Shoval (2025) maintains that a significant gap exists in research examining the varied impacts of artificial intelligence on minority populations. This concern is particularly salient in educational settings, where longstanding socioeconomic and cultural inequalities may intersect with the complexities of AI integration, potentially compounding existing disparities. The study concludes that cultural and technological capital significantly influence AI adoption and recommends designing culturally responsive AI curricula.
Chee et al. (2025) conducted a systematic literature review to develop a competency framework for artificial intelligence and organized the results by educational level, including higher, community, and workforce education. Their framework, adapted here to include adult education, appears in Figure 1.
Figure 1: Pathways for AI Competency Education, Adapted from Chee et al., 2025
Building AI awareness is ultimately about agency. Adult, community, higher, and workforce education programs play a critical role in integrating AI awareness into digital literacy, career development, and civic education efforts. These programs can help ensure that emerging technologies expand opportunity rather than reinforce exclusion.
Educators can frame AI as a human-designed system shaped by social, political, and economic choices rather than an objective or unquestionable authority. When adult learners understand that algorithms reflect values, assumptions, and power structures, they are better equipped to challenge harmful outcomes and participate in shaping technology’s role in their communities.
Community-based learning environments also emphasize trust, dialogue, and collective meaning-making. Discussions of AI bias, surveillance, and data privacy can be grounded in learners’ lived experiences, validating their concerns while introducing shared language and concepts. This approach positions learners not as technology outsiders, but as knowledgeable participants capable of interpreting and responding to complex systems.
In workforce education, AI awareness supports both employability and ethical practice. Workers increasingly interact with AI-powered tools for scheduling, performance monitoring, decision support, and customer engagement. Understanding how these systems function—and where human judgment remains essential—helps learners navigate changing workplace expectations and advocate for fair use. Importantly, AI awareness also prepares learners to engage critically with narratives that frame automation as inevitable or neutral. Workforce programs can help learners distinguish between efficiency claims and actual impacts on job quality, worker autonomy, and equity (West et al., 2019).
Advancing Equity
Artificial intelligence is not a distant or abstract force; it is embedded in the systems that shape opportunity, risk, and access in everyday life. When awareness of these systems is uneven, existing inequities can deepen. But when learners understand how AI works, where it can fail, and how it reflects human choices, they gain the capacity to question, advocate, and participate.
Building AI awareness in marginalized communities is, therefore, an equity strategy. It strengthens digital literacy, supports workforce adaptability, and promotes informed civic engagement. More importantly, it affirms that those most affected by automated decision-making should not be passive subjects of technology, but active contributors to conversations about how it is designed, deployed, and regulated.
Adult, community, and workforce education programs stand at the forefront of this work. By embedding AI awareness into existing learning structures, educators can help ensure that emerging technologies expand opportunity rather than reinforce exclusion. The goal is not technical mastery alone, but shared understanding, critical reflection, and collective agency so that AI serves communities, rather than communities serving AI.
References
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
Chee, H., Ahn, S., & Lee, J. (2025). A competency framework for AI literacy: Variations by different learner groups and an implied learning pathway. British Journal of Educational Technology, 56(5), 2146–2182. https://doi.org/10.1111/bjet.13556
Hadar Shoval, D. (2025). Artificial intelligence in higher education: Bridging or widening the gap for diverse student populations? Education Sciences, 15(5), 637. https://doi.org/10.3390/educsci15050637
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Parthasarathy, S., & Katzman, J. (2024). Bringing communities in, achieving AI for all. Issues in Science and Technology, 40(4), 41–44. https://doi.org/10.58875/SLRG2529
West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race, and power in AI. AI Now Institute.
