Thursday, March 19, 2026

AI and Critical Thinking: Encouraging Informed Use, Not Blind Adoption


 

By Simone Conceição

As artificial intelligence (AI) tools become increasingly accessible, they are reshaping how people write, search, solve problems, and learn. From chatbots and essay generators to predictive text and image creation, AI offers both incredible opportunities and significant risks—especially when used without reflection or oversight.

For adult educators and lifelong learners, the central challenge is no longer simply accessing AI but using it in an informed and ethical way. To meet this challenge, education must focus on cultivating critical thinking as a core skill of AI literacy.

This blog post explores how educators can help learners engage with AI tools critically—not blindly—through strategies that foster awareness, reflection, and ethical use.

 

Beyond Convenience: Why Critical Thinking Matters

AI systems, including generative tools like ChatGPT, operate based on data patterns—not understanding. They generate convincing outputs without verifying facts, acknowledging bias, or understanding context. When users adopt AI tools without critical engagement, they risk:

  • Spreading misinformation or fabricated content
  • Accepting biased or incomplete outputs as fact
  • Becoming overly dependent on automation
  • Losing awareness of ethical and privacy concerns

Blind adoption of AI tools undermines the very goals of adult learning: empowerment, autonomy, and informed decision-making. Long and Magerko (2020) emphasize that true AI literacy requires more than tool fluency—it involves the ability to question, evaluate, and use AI responsibly.

 

Core Critical Thinking Skills for AI Use

Educators can support learners in developing the following skills to ensure informed and ethical AI use:

1. Source Awareness and Verification

AI tools may provide plausible but inaccurate or fabricated information. Learners need to verify AI-generated content against credible, external sources.

Strategy: Assign activities where learners compare AI-generated summaries with scholarly articles, highlighting discrepancies and omissions.

2. Bias Identification

Since AI tools are trained on historical data, they can reproduce societal, cultural, or ideological biases (Benjamin, 2019). Learners should be taught to recognize when outputs reflect skewed or stereotypical perspectives.

Strategy: Facilitate discussions on who is represented—or left out—in AI-generated narratives or recommendations.

3. Prompt and Input Reflection

The quality and bias of AI outputs are often shaped by user prompts. Teaching learners how to craft, revise, and evaluate prompts fosters metacognitive awareness of how AI systems work.

Strategy: Use “prompt comparison” exercises to show how framing affects responses—and reflect on the ethical implications.

4. Evaluation of Use Context

Not all tasks benefit from AI. Learners should think critically about when and how to use AI tools—and when to rely on their own judgment or creativity.

Strategy: Discuss appropriate vs. inappropriate uses of AI in academic, workplace, and civic contexts (e.g., writing a resume vs. writing a reflective journal).

 

Embedding Critical AI Literacy into Instruction

To encourage informed—not blind—adoption, instructors should model critical engagement themselves. Here are effective practices:

  • Use AI in the classroom with transparency—demonstrate tools, then critique their strengths and weaknesses together.
  • Design reflective assignments that ask learners to explain how and why they used AI tools, and to assess the quality of outputs.
  • Incorporate ethical frameworks (e.g., transparency, fairness, accountability) into course discussions about AI use.
  • Provide resources for AI literacy, such as plain-language articles, tool comparison charts, and guidelines for responsible use.

UNESCO (2021) encourages educators to empower learners as active, responsible participants in the digital ecosystem—not passive consumers of automated content.

 

Critical Thinking as a Cornerstone of AI Literacy

Artificial intelligence is not going away. But whether it becomes a force for empowerment or dependency will depend on how we prepare learners to engage with it. Critical thinking—paired with ethical reflection—must become the default mode of AI interaction in education.

At the AI Literacy Forum, part of the Adult Learning Exchange Virtual Community, adult educators, designers, and professionals are discussing how to develop these skills in inclusive, practical, and empowering ways. Moderated by Drs. Simone Conceição and Lilian Hill, the forum invites you to share your insights and explore strategies for preparing learners to use AI thoughtfully, not automatically.

 

References

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Long, D., & Magerko, B. (2020). What is AI literacy? Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3313831.3376727

UNESCO. (2021). AI and education: Guidance for policy-makers. https://unesdoc.unesco.org/ark:/48223/pf0000377071

 

Thursday, March 5, 2026

Building AI Awareness in Marginalized Communities

 


By Lilian H. Hill

 

Artificial intelligence (AI) increasingly shapes access to employment, education, healthcare, housing, and public services. AI influences decisions that directly affect people’s lives, including résumé screening systems, automated hiring tools, benefits eligibility algorithms, and predictive analytics in social services. Yet awareness of how these systems function, and how they can advantage or disadvantage individuals, is unevenly distributed. For marginalized communities, this gap in understanding can deepen existing inequities rather than alleviate them. Hadar-Shoval (2025) notes an emerging digital divide characterized by differential engagement patterns across societal groups, which may exacerbate educational disparities, and advocates using this idea as a basis for designing more equitable education programs that foster digital and AI literacy.

Building AI awareness in marginalized communities is about cultivating informed, critical, and empowered engagement with technologies that already play a role in daily life. Adult, community, and workforce education programs are uniquely positioned to support this work because of their emphasis on relevance, equity, and learner agency.

 

Why AI Awareness Matters for Marginalized Communities

Marginalized communities are often disproportionately affected by algorithmic decision-making, yet have limited influence over how those systems operate. Research has shown that AI systems can reproduce and amplify historical biases when trained on inequitable data or deployed without safeguards (Benjamin, 2019; Noble, 2018). In employment, automated screening tools may disadvantage candidates with nontraditional career paths. In public services, opaque algorithms can influence eligibility determinations without providing clear avenues for appeal. Surveillance technologies misidentify people with darker skin tones at significantly higher rates than lighter-skinned individuals, particularly women and nonbinary people of color, contributing to disproportionate false stops, wrongful arrests, and heightened monitoring in already overpoliced communities.

 

Parthasarathy and Katzman (2024) argue that AI often worsens social inequities, particularly for marginalized communities. Technical fixes and limited oversight are not enough. Instead, they call for a bottom-up approach in which funders, universities, industry leaders, and regulators partner directly with affected communities to shape the design and governance of AI. They recommend incentivizing community-driven research, integrating ethics and social sciences into AI engineering education, strengthening whistleblower protections, supporting civic organizations, and implementing equity-focused regulations that can prohibit harmful technologies.

 

Meaningful participation from marginalized groups, both voluntary and compensated, is essential. Parthasarathy and Katzman (2024) indicate that achieving equitable AI requires not only better rules but a deeper intellectual and moral shift toward inclusive, community-centered innovation. They emphasize that technology development agendas are typically set by technical experts and corporations that often prioritize profit or efficiency over public need. When marginalized communities are excluded from defining problems and solutions, technologies can misdiagnose social issues or reinforce structural bias. By contrast, community-engaged design values local knowledge, lived experience, and grassroots expertise, increasing the likelihood that AI tools address real-world concerns and build trust in science and governance.

 

They also stress that regulation must move beyond narrow technical audits to consider the broader social contexts in which AI systems operate. Equity-focused impact assessments, interdisciplinary oversight, and strong civic advocacy are necessary to prevent harm before technologies are widely deployed. Ultimately, the promise of AI lies not only in innovation but also in reimagining who has the power to shape technological futures—and ensuring that those most affected have a central voice in that process.

 

From Awareness to Agency

AI awareness helps learners recognize when automated systems are involved, understand their limitations, and ask critical questions about fairness, transparency, and accountability. This form of literacy supports informed consent, self-advocacy, and civic participation rather than passive acceptance of technological authority.

 

Hadar-Shoval (2025) maintains that a significant gap exists in research examining the varied impacts of artificial intelligence on minority populations. This concern is particularly salient in educational settings, where longstanding socioeconomic and cultural inequalities may intersect with the complexities of AI integration, potentially compounding existing disparities. He concludes that cultural and technological capital significantly influence AI adoption and recommends designing culturally responsive AI curricula.

 

Chee et al. (2025) conducted a systematic literature review to develop a competency framework for artificial intelligence, organizing the results by educational level, including higher, community, and workforce education. Their visualization has been adapted here to include adult education (see Figure 1).

 

Figure 1: Pathways for AI Competency Education. Adapted from Chee et al. (2025).

 

Building AI awareness is ultimately about agency. Adult, community, higher, and workforce education programs play a critical role in integrating AI awareness into digital literacy, career development, and civic education efforts. These programs can help ensure that emerging technologies expand opportunity rather than reinforce exclusion. 

 

Educators can frame AI as a human-designed system shaped by social, political, and economic choices rather than an objective or unquestionable authority. When adult learners understand that algorithms reflect values, assumptions, and power structures, they are better equipped to challenge harmful outcomes and participate in shaping technology’s role in their communities.

 

Community-based learning environments also emphasize trust, dialogue, and collective meaning-making. Discussions of AI bias, surveillance, and data privacy can be grounded in learners’ lived experiences, validating their concerns while introducing shared language and concepts. This approach positions learners not as technology outsiders, but as knowledgeable participants capable of interpreting and responding to complex systems.

 

In workforce education, AI awareness supports both employability and ethical practice. Workers increasingly interact with AI-powered tools for scheduling, performance monitoring, decision support, and customer engagement. Understanding how these systems function—and where human judgment remains essential—helps learners navigate changing workplace expectations and advocate for fair use. Importantly, AI awareness also prepares learners to engage critically with narratives that frame automation as inevitable or neutral. Workforce programs can help learners distinguish between efficiency claims and actual impacts on job quality, worker autonomy, and equity (West et al., 2019).

 

Advancing Equity

Artificial intelligence is not a distant or abstract force; it is embedded in the systems that shape opportunity, risk, and access in everyday life. When awareness of these systems is uneven, existing inequities can deepen. But when learners understand how AI works, where it can fail, and how it reflects human choices, they gain the capacity to question, advocate, and participate.

 

Building AI awareness in marginalized communities is, therefore, an equity strategy. It strengthens digital literacy, supports workforce adaptability, and promotes informed civic engagement. More importantly, it affirms that those most affected by automated decision-making should not be passive subjects of technology, but active contributors to conversations about how it is designed, deployed, and regulated.

 

Adult, community, and workforce education programs stand at the forefront of this work. By embedding AI awareness into existing learning structures, educators can help ensure that emerging technologies expand opportunity rather than reinforce exclusion. The goal is not technical mastery alone, but shared understanding, critical reflection, and collective agency so that AI serves communities, rather than communities serving AI.

 

References

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Chee, H., Ahn, S., & Lee, J. (2025). A competency framework for AI literacy: Variations by different learner groups and an implied learning pathway. British Journal of Educational Technology, 56(5), 2146–2182. https://doi.org/10.1111/bjet.13556

Hadar-Shoval, D. (2025). Artificial intelligence in higher education: Bridging or widening the gap for diverse student populations? Education Sciences, 15(5), 637. https://doi.org/10.3390/educsci15050637

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Parthasarathy, S., & Katzman, J. (2024). Bringing communities in, achieving AI for all. Issues in Science and Technology, 40(4), 41–44. https://doi.org/10.58875/SLRG2529

West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race, and power in AI. AI Now Institute.