Thursday, March 5, 2026

Building AI Awareness in Marginalized Communities

 


By Lilian H. Hill

 

Artificial intelligence (AI) increasingly shapes access to employment, education, healthcare, housing, and public services. AI influences decisions that directly affect people’s lives, including résumé screening systems, automated hiring tools, benefits eligibility algorithms, and predictive analytics in social services. Yet awareness of how these systems function, and how they can advantage or disadvantage individuals, is unevenly distributed. For marginalized communities, this gap in understanding can deepen existing inequities rather than alleviate them. Hadar-Shoval (2025) notes an emerging digital divide characterized by differential engagement patterns across societal groups, which may exacerbate educational disparities. The author advocates using this idea as a basis for designing more equitable education programs that foster digital and AI literacy. Building AI awareness in marginalized communities is about cultivating informed, critical, and empowered engagement with technologies that already play a role in daily life. Adult, community, and workforce education programs are uniquely positioned to support this work because of their emphasis on relevance, equity, and learner agency.

 

Why AI Awareness Matters for Marginalized Communities

Marginalized communities are often disproportionately affected by algorithmic decision-making, yet have limited influence over how those systems operate. Research has shown that AI systems can reproduce and amplify historical biases when trained on inequitable data or deployed without safeguards (Benjamin, 2019; Noble, 2018). In employment, automated screening tools may disadvantage candidates with nontraditional career paths. In public services, opaque algorithms can influence eligibility determinations without providing clear avenues for appeal. Surveillance technologies misidentify people with darker skin tones at significantly higher rates than lighter-skinned individuals, particularly women and nonbinary people of color, leading to disproportionate false stops, wrongful arrests, and heightened monitoring in already overpoliced communities.

 

Parthasarathy and Katzman (2024) argue that AI often worsens social inequities, particularly for marginalized communities. Technical fixes and limited oversight are not enough. Instead, they call for a bottom-up approach in which funders, universities, industry leaders, and regulators partner directly with affected communities to shape the design and governance of AI. They recommend incentivizing community-driven research, integrating ethics and social sciences into AI engineering education, strengthening whistleblower protections, supporting civic organizations, and implementing equity-focused regulations that can prohibit harmful technologies.

 

Meaningful, voluntary, and compensated participation from marginalized groups is essential. Parthasarathy and Katzman (2024) indicate that achieving equitable AI requires not only better rules but a deeper intellectual and moral shift toward inclusive, community-centered innovation. They emphasize that technology development agendas are typically set by technical experts and corporations that often prioritize profit or efficiency over public need. When marginalized communities are excluded from defining problems and solutions, technologies can misdiagnose social issues or reinforce structural bias. By contrast, community-engaged design values local knowledge, lived experience, and grassroots expertise, increasing the likelihood that AI tools address real-world concerns and build trust in science and governance.

 

They also stress that regulation must move beyond narrow technical audits to consider the broader social contexts in which AI systems operate. Equity-focused impact assessments, interdisciplinary oversight, and strong civic advocacy are necessary to prevent harm before technologies are widely deployed. Ultimately, the promise of AI lies not only in innovation but also in reimagining who has the power to shape technological futures—and ensuring that those most affected have a central voice in that process.

 

From Awareness to Agency

AI awareness helps learners recognize when automated systems are involved, understand their limitations, and ask critical questions about fairness, transparency, and accountability. This form of literacy supports informed consent, self-advocacy, and civic participation rather than passive acceptance of technological authority.

 

Hadar-Shoval (2025) maintains that a significant gap exists in research examining the varied impacts of artificial intelligence on minority populations. This concern is particularly salient in educational settings, where longstanding socioeconomic and cultural inequalities may intersect with the complexities of AI integration, potentially compounding existing disparities. The study concludes that cultural and technological capital significantly influence AI adoption and recommends designing culturally responsive AI curricula.

 

Chee et al. (2025) conducted a systematic literature review to develop a competency framework for artificial intelligence and organized the results by educational level, including higher, community, and workforce education. Their summary figure has been adapted here to add adult education (see Figure 1).

 

Figure 1: Pathways for AI Competency Education (adapted from Chee et al., 2025)

 

Building AI awareness is ultimately about agency. Adult, community, higher, and workforce education programs play a critical role in integrating AI awareness into digital literacy, career development, and civic education efforts. These programs can help ensure that emerging technologies expand opportunity rather than reinforce exclusion. 

 

Educators can frame AI as a human-designed system shaped by social, political, and economic choices rather than an objective or unquestionable authority. When adult learners understand that algorithms reflect values, assumptions, and power structures, they are better equipped to challenge harmful outcomes and participate in shaping technology’s role in their communities.

 

Community-based learning environments also emphasize trust, dialogue, and collective meaning-making. Discussions of AI bias, surveillance, and data privacy can be grounded in learners’ lived experiences, validating their concerns while introducing shared language and concepts. This approach positions learners not as technology outsiders, but as knowledgeable participants capable of interpreting and responding to complex systems.

 

In workforce education, AI awareness supports both employability and ethical practice. Workers increasingly interact with AI-powered tools for scheduling, performance monitoring, decision support, and customer engagement. Understanding how these systems function—and where human judgment remains essential—helps learners navigate changing workplace expectations and advocate for fair use. Importantly, AI awareness also prepares learners to engage critically with narratives that frame automation as inevitable or neutral. Workforce programs can help learners distinguish between efficiency claims and actual impacts on job quality, worker autonomy, and equity (West et al., 2019).

 

Advancing Equity

Artificial intelligence is not a distant or abstract force; it is embedded in the systems that shape opportunity, risk, and access in everyday life. When awareness of these systems is uneven, existing inequities can deepen. But when learners understand how AI works, where it can fail, and how it reflects human choices, they gain the capacity to question, advocate, and participate.

 

Building AI awareness in marginalized communities is, therefore, an equity strategy. It strengthens digital literacy, supports workforce adaptability, and promotes informed civic engagement. More importantly, it affirms that those most affected by automated decision-making should not be passive subjects of technology, but active contributors to conversations about how it is designed, deployed, and regulated.

 

Adult, community, and workforce education programs stand at the forefront of this work. By embedding AI awareness into existing learning structures, educators can help ensure that emerging technologies expand opportunity rather than reinforce exclusion. The goal is not technical mastery alone, but shared understanding, critical reflection, and collective agency so that AI serves communities, rather than communities serving AI.

 

References

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Chee, H., Ahn, S., & Lee, J. (2025). A competency framework for AI literacy: Variations by different learner groups and an implied learning pathway. British Journal of Educational Technology, 56(5), 2146–2182. https://doi.org/10.1111/bjet.13556

Hadar-Shoval, D. (2025). Artificial intelligence in higher education: Bridging or widening the gap for diverse student populations? Education Sciences, 15(5), 637. https://doi.org/10.3390/educsci15050637

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

Parthasarathy, S., & Katzman, J. (2024). Bringing communities in, achieving AI for all. Issues in Science and Technology, 40(4), 41–44. https://doi.org/10.58875/SLRG2529

West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race, and power in AI. AI Now Institute.

 

 

Thursday, February 19, 2026

Microlearning and AI: Bite-Sized Strategies for Skill Development


 

By Simone Conceição

In an era marked by fast-changing technologies and shrinking attention spans, microlearning has emerged as a powerful strategy for adult skill development. At the same time, artificial intelligence (AI) is reshaping how learning content is delivered, accessed, and personalized. Together, microlearning and AI form an ideal pairing, enabling educators and training providers to deliver targeted, accessible, and adaptive learning experiences that meet the needs of modern learners.

This blog post explores how AI enhances microlearning, what this means for adult education and workforce development, and how to implement effective strategies in practice.

 

What Is Microlearning?

Microlearning refers to the delivery of short, focused learning segments designed to meet specific objectives. These sessions typically range from 2 to 10 minutes and often incorporate multimedia elements like videos, quizzes, infographics, or interactive modules.

In adult learning environments, microlearning is especially valuable because it:

  • Respects the time constraints of working adults
  • Supports just-in-time learning in real-world contexts
  • Encourages spaced repetition for knowledge retention
  • Aligns with mobile-first, digital learning preferences

Microlearning isn't just about reducing content—it's about designing meaningful, focused learning that is purposefully small and highly relevant (Hug, 2017).

 

How AI Enhances Microlearning

Artificial intelligence can significantly expand the effectiveness of microlearning by making it personalized, adaptive, and data-informed. Here's how:

1. Content Personalization

AI-powered platforms analyze user behavior and learning history to deliver tailored microlearning modules. Learners receive content aligned with their skill gaps, goals, or preferences—maximizing relevance and motivation.

Example: An AI system identifies a learner’s weakness in data analysis and pushes a 5-minute video on interpreting visualizations, followed by a quiz.
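The personalization loop above can be sketched without any machine learning at all: the core logic is a mapping from weak topics to short content. Real platforms infer gaps from much richer behavioral data; the topic names, module titles, and 70% mastery threshold below are all hypothetical.

```python
# Hypothetical sketch of skill-gap-driven microlearning recommendation.
# Topics, module titles, and the mastery threshold are invented examples.
MICRO_MODULES = {
    "data_visualization": "5-min video: Interpreting charts and dashboards",
    "spreadsheets": "3-min interactive: Formulas and cell references",
    "data_cleaning": "4-min walkthrough: Spotting and fixing bad records",
}

def recommend(quiz_scores: dict[str, float], mastery: float = 0.7) -> list[str]:
    """Return a micro-module for each topic scored below the mastery bar,
    weakest topic first."""
    gaps = sorted(
        (topic for topic, score in quiz_scores.items() if score < mastery),
        key=lambda t: quiz_scores[t],
    )
    return [MICRO_MODULES[t] for t in gaps if t in MICRO_MODULES]

scores = {"data_visualization": 0.55, "spreadsheets": 0.90, "data_cleaning": 0.60}
for module in recommend(scores):
    print(module)  # weakest skill surfaces first
```

The AI layer in a real platform replaces the quiz-score dictionary with inferred mastery estimates, but the recommendation step remains this kind of gap-to-content mapping.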

2. Automated Content Generation

Generative AI tools such as ChatGPT, Jasper, or Copilot can assist instructors in creating bite-sized quizzes, lesson summaries, and flashcards aligned with specific learning objectives.

This reduces instructor workload and allows for faster development of microlearning libraries (Zawacki-Richter et al., 2019).

3. Spaced Repetition and Review

AI systems can schedule timely refreshers or follow-up questions based on when a learner is most likely to forget content, applying the principles of cognitive science to improve retention.

Example: Tools like Anki use spaced repetition algorithms to resurface learning at optimal intervals.
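Anki's classic scheduler derives from the SM-2 algorithm, whose core update rule fits in a few lines. The sketch below follows SM-2's published constants (ease floor of 1.3, first intervals of 1 and 6 days) but is a simplification for illustration, not Anki's exact implementation.

```python
def sm2_step(interval_days: int, ease: float, quality: int) -> tuple[int, float]:
    """One SM-2-style review: returns (next interval in days, new ease factor).

    quality: learner's self-rated recall, 0 (blackout) to 5 (perfect).
    A failed review (quality < 3) resets the interval to 1 day; lowering
    ease on failure is an Anki-like tweak (classic SM-2 leaves it unchanged).
    """
    if quality < 3:
        return 1, max(1.3, ease - 0.2)
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days == 0:   # first successful review
        return 1, ease
    if interval_days == 1:   # second successful review
        return 6, ease
    return round(interval_days * ease), ease

# Well-recalled material reappears at roughly geometric intervals:
interval, ease = 0, 2.5
for quality in (5, 4, 5):
    interval, ease = sm2_step(interval, ease, quality)
    print(interval, round(ease, 2))  # intervals printed: 1, 6, then 16 days
```

Because the multiplier compounds, review gaps widen as recall strengthens, which is the cognitive-science rationale the paragraph above describes.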

4. Real-Time Feedback and Assessment

AI-driven tools can provide instant feedback on short tasks or quizzes, helping adult learners self-correct and reinforce knowledge immediately (Ifenthaler & Yau, 2020).
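At its core, instant feedback is a check-and-respond loop; AI-driven platforms replace the exact-match check below with NLP-based scoring of open-ended answers. The quiz item, accepted answers, and feedback wording here are invented for illustration.

```python
# Minimal instant-feedback sketch for a short-answer quiz item.
# Accepted answers and feedback text are hypothetical examples.
def give_feedback(answer: str, accepted: set[str]) -> str:
    """Return immediate feedback, normalizing case and whitespace."""
    if answer.strip().lower() in accepted:
        return "Correct! You're ready for the next module."
    return "Not quite. Revisit the 2-minute refresher on this concept and try again."

accepted = {"mean", "average"}
print(give_feedback(" Average ", accepted))  # correct path
print(give_feedback("median", accepted))     # remediation path
```

The pedagogical point is the immediacy: the learner self-corrects within the same micro-session rather than waiting for a graded return.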

 

Applications in Adult and Workforce Learning

Microlearning supported by AI is gaining momentum in areas such as:

  • Professional certification prep (e.g., cybersecurity, project management)
  • Onboarding and compliance training in workplace settings
  • Digital literacy and upskilling programs for underserved populations
  • Language learning and soft skills development (e.g., communication, leadership)

Adarkwah (2024) argues that when integrated into AI-enhanced ecosystems, microlearning becomes a flexible, equitable solution for upskilling in diverse learning environments.

 

Best Practices for Implementing AI-Powered Microlearning

To maximize impact, educators and program designers should:

  1. Define Clear, Measurable Objectives: Each microlearning unit should address a specific skill or concept.
  2. Use AI Tools Judiciously: Rely on AI for support, but vet content for accuracy, bias, and alignment with learner needs.
  3. Design for Mobile and Accessibility: Ensure content is device-agnostic and compatible with assistive technologies.
  4. Provide Learner Autonomy: Allow learners to choose their learning paths or repeat modules as needed.
  5. Collect and Respond to Data: Use analytics to adapt future content and support learners who may be disengaging.

 

Microlearning + AI = Scalable, Personalized, Lifelong Learning

The convergence of microlearning and AI represents a powerful shift in how adult learners access and apply knowledge. These small, smart learning moments—delivered through AI-driven platforms—can accelerate skill development, reduce barriers, and support lifelong learning goals.

The AI Literacy Forum at the Adult Learning Exchange Virtual Community, moderated by Drs. Simone Conceição and Lilian Hill, invites educators, designers, and adult learning professionals to explore and exchange practical strategies like these. Join the discussion and help shape how emerging technologies serve adult learners across contexts.

 

References

Adarkwah, M. A. (2024). GenAI-infused adult learning in the digital era: A conceptual framework for higher education. Adult Learning, 36(3), 149–161. https://doi.org/10.1177/10451595241271161

Hug, T. (2017). Didactics of microlearning: Concepts, discourses and examples. In T. Hug (Ed.), Didactics of microlearning: Concepts, discourses and examples (pp. 3–22). Waxmann Verlag.

Ifenthaler, D., & Yau, J. Y.-K. (2020). Utilising learning analytics to support study success in higher education: A systematic review. Educational Technology Research and Development, 68, 1961–1990. https://doi.org/10.1007/s11423-020-09788-z

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – Where are the educators? International Journal of Educational Technology in Higher Education, 16, 1–27. https://doi.org/10.1186/s41239-019-0171-0

 

 

Thursday, February 5, 2026

Teaching AI Literacy in Workforce and Community Education Programs


 

By Lilian H. Hill

Artificial intelligence (AI) is increasingly embedded in systems that shape work, learning, and civic life. To participate fully in the workforce, the next generation of workers and community members must be AI-literate and possess the requisite technological skills. From résumé screening and performance analytics to scheduling software and generative writing tools, AI now influences how adults access opportunities, make decisions, and engage with institutions. As a result, AI literacy has emerged as a critical educational priority in workforce and community education programs serving adult learners navigating rapid technological change.

 

Why AI Literacy Matters for Adult Learners

AI is often encountered not as an abstract technology but as a gatekeeper. Algorithmic systems influence hiring, promotion, credit access, healthcare decisions, and the information people see online. Without a foundational understanding of how these systems operate, adults may experience AI as opaque, uncontestable, or inevitable (Eubanks, 2018).

 

AI literacy equips learners to interpret, question, and respond to these systems. In workforce contexts, it supports adaptability and employability by helping learners understand how AI tools are used in their fields and how to collaborate effectively with automated systems rather than defer to them uncritically (OECD, 2021). In community education settings, AI literacy supports informed citizenship, privacy protection, and collective agency in the face of expanding algorithmic governance.

 

In August 2025, the U.S. Departments of Labor, Commerce, and Education released America’s Talent Strategy: Building the Workforce for the Golden Age. The document is aspirational and contains a vision statement that prioritizes investing in American workers by strengthening the workforce development system, delivering job-ready talent to employers, and ensuring accountability in preparing workers for the jobs critical to the nation’s economic future. The report articulates five pillars of strategic action, spanning industry-driven strategies, worker mobility, integrated systems, accountability, flexibility, and innovation. The last of these aims to create new models of workforce innovation built to match the speed and scale of AI-driven economic transformation. The Talent Strategy is expected to shape how Workforce Innovation and Opportunity Act (WIOA) programs are funded, evaluated, and administered.

 

Federal funding priorities are likely to reflect the Talent Strategy’s objectives, and states, workforce boards, and partner organizations that align their programs accordingly may be better positioned in future funding and accountability processes. Unfortunately, these initiatives may reduce funding for community literacy programs that teach reading and print literacy. Yet national and international literacy assessments have consistently shown that approximately 30% of American adults have low literacy skills and another 20% have only basic skills, meaning that almost half of American adults struggle with the reading needed to function well in daily life. The most recent Programme for the International Assessment of Adult Competencies (PIAAC) survey, conducted in 2023, indicates that U.S. adults’ literacy skills may have declined.

 

Defining AI Literacy in Adult, Community, and Workforce Education

AI literacy includes the knowledge, skills, and dispositions needed to understand how AI systems function, how they influence decisions, and how humans can exercise judgment and responsibility in relation to them (Ng et al., 2021). For adult learners, AI literacy integrates practical application with critical reflection, ethical awareness, and contextual understanding.

 

Across adult, community, and workforce education settings, AI literacy is most effective when grounded in real-world contexts. Learners regularly encounter AI through employment systems, digital services, educational platforms, and workplace technologies. Instruction should therefore emphasize recognizing AI use, understanding its impacts on access and opportunity, and developing the ability to question automated outcomes.

 

These educational contexts also share a focus on equity, participation, and agency. Adult and community programs often support learners navigating structural barriers, while workforce education emphasizes informed and responsible use of AI in professional roles. In all settings, AI literacy should frame AI as a human-designed system shaped by social values, enabling learners to engage with technology thoughtfully, ethically, and with confidence.

 

Core Dimensions of AI Literacy Instruction

The core dimensions of AI literacy provide a framework for helping adult learners develop the knowledge, skills, and dispositions needed to engage with artificial intelligence thoughtfully, ethically, and effectively. Rather than focusing on technical mastery, these dimensions emphasize understanding how AI works, critically evaluating its impacts, protecting personal data, and maintaining human judgment and agency in real-world contexts where AI increasingly shapes decisions and opportunities.

 

1. Foundational Understanding
Adult learners benefit from clear, accessible explanations of what artificial intelligence is—and what it is not—because public discourse around AI often alternates between alarmist narratives and unrealistic promises. Foundational AI literacy instruction should introduce core concepts such as algorithms, training data, automation, and generative systems using plain language and concrete, everyday examples rather than technical abstractions (Long & Magerko, 2020). Emphasizing that AI systems do not possess consciousness, intent, or understanding, but instead operate by identifying patterns within large datasets, helps learners develop realistic expectations about AI’s capabilities and limitations (Russell & Norvig, 2021). This demystification is especially important for adult learners who may feel intimidated by highly technical explanations or unsettled by rapid technological change. By grounding AI concepts in familiar tools such as recommendation systems, spell-checkers, or scheduling software, educators can reduce fear, prevent overreliance, and support informed, confident engagement with AI technologies.
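A useful demystification exercise is a toy recommender built from a few invented viewing histories: a handful of lines of counting reproduce the familiar "people like you also watched" behavior with no understanding anywhere in the system. All names and data below are made up for the exercise.

```python
# Toy "recommendation system" for classroom demystification: pure pattern
# matching over invented histories (1 = watched, 0 = not watched).
histories = {
    "Ana":   {"cooking": 1, "gardening": 1, "chess": 0, "jazz": 0},
    "Ben":   {"cooking": 1, "gardening": 0, "chess": 1, "jazz": 1},
    "Carla": {"cooking": 1, "gardening": 1, "chess": 0, "jazz": 1},
}

def overlap(a: dict, b: dict) -> int:
    """Count topics two viewers treated the same way; this is the whole 'model'."""
    return sum(a[t] == b[t] for t in a)

def recommend_for(name: str) -> list[str]:
    """Suggest what the most similar other viewer watched that `name` hasn't."""
    me = histories[name]
    neighbor = max((n for n in histories if n != name),
                   key=lambda n: overlap(me, histories[n]))
    return [t for t, seen in histories[neighbor].items() if seen and not me[t]]

print(recommend_for("Ana"))  # Ana's closest match is Carla, who also watched jazz
```

Walking learners through this by hand, counting matches, picking the nearest neighbor, copying that neighbor's choices, makes plain that the system finds patterns rather than possessing intent or comprehension.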

 

2. Critical Evaluation and Bias Awareness
AI systems are not neutral or objective; they reflect the social, cultural, and institutional assumptions embedded in their design and training data. Effective AI literacy instruction must therefore help learners critically evaluate AI outputs rather than accepting them as authoritative or unbiased. This includes understanding how biased, incomplete, or historically unequal data can reproduce and amplify discrimination, particularly for marginalized populations (Benjamin, 2019; Noble, 2018). Adult learners should be taught to question how AI-generated information is produced, whose interests it serves, and what perspectives may be absent, and to verify outputs using independent and credible sources. These skills are especially critical in employment, education, healthcare, and public services, where AI-informed decisions can significantly affect access to opportunities and resources. Developing critical evaluation skills empowers learners to engage with AI thoughtfully and ethically, rather than passively or distrustfully.

 

3. Data Privacy and Ethical Use
Protecting personal data in the age of AI is important because personal information has become a powerful resource—or currency—that shapes how individuals are represented, evaluated, and treated across education, employment, healthcare, finance, and public services. AI systems rely on large-scale data to make predictions and recommendations, and when that data is inaccurate or taken out of context, it can lead to misclassification, unfair decision-making, and long-term consequences that are difficult for individuals to see or contest (Eubanks, 2018). AI literacy must address how personal data are collected, stored, and reused. Adult learners need to understand that inputs into generative AI systems may be retained or used to train models, requiring caution when sharing sensitive or identifiable information. Ethical literacy supports safer participation and informed consent in digital environments (Zuboff, 2019).

 

4. Human Judgment and Agency
Rather than positioning AI as a replacement for human expertise, adult education should emphasize human oversight, responsibility, and ethical decision-making. AI literacy instruction should reinforce that while AI systems can support human work by increasing efficiency or identifying patterns, they cannot account for lived experience, contextual nuance, moral reasoning, or accountability (Nissenbaum, 2010). Learners benefit from explicit discussion of the limits of automation and the continuing need for human judgment in interpreting results, making final decisions, and addressing unintended consequences. Framing AI as a tool rather than an authority helps preserve learner agency and counters narratives that portray technological systems as inevitable or uncontestable. This emphasis is particularly important in workforce contexts, where fears of automation and displacement can undermine confidence and obscure opportunities for meaningful use of AI tools (Eubanks, 2018).

 

Implications for Workforce and Community Education Programs
Workforce and community education programs are well-positioned to support AI literacy because they are grounded in practical application, real-world challenges, and adult learners’ lived experiences. Integrating AI literacy into existing curricula, such as career readiness, digital skills development, professional writing, or civic education, allows learners to connect abstract concepts to the tasks and decisions they encounter in daily life. Research suggests that participatory, discussion-based approaches are especially effective for adult learners navigating complex technologies, as they validate prior knowledge and encourage collective sense-making (Brookfield, 2013). AI literacy instruction in these settings should therefore be dialogic and reflective, inviting learners to share concerns, ask critical questions, and examine how AI systems shape their workplaces and communities. By centering discussion, ethical reflection, and agency, workforce and community education programs can foster not only technical understanding but also confidence, critical awareness, and responsible engagement with AI.

 

Conclusion

As AI systems continue to shape access to opportunity and participation in society, AI literacy must be treated as a core component of adult education. Workforce and community education programs play a crucial role in helping learners develop not only technical familiarity with AI, but also the critical judgment, ethical awareness, and agency needed to navigate an increasingly algorithmic world.

 

References

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.

Brookfield, S. D. (2013). Powerful techniques for teaching adults. Jossey-Bass.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3313831.3376727

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504–509.

Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford University Press.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

OECD. (2021). Artificial intelligence, skills and work. OECD Publishing.

Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

 

 

 

Thursday, January 22, 2026

Preparing Instructors for AI Integration: Professional Learning Strategies

 


 

 

By Simone Conceição

 

As artificial intelligence (AI) becomes a transformative force in education, instructors across all disciplines and levels—especially in adult and continuing education—must be prepared to integrate AI tools responsibly, effectively, and equitably into their teaching practices. However, the rapid pace of technological change has left many educators uncertain about how to begin, what tools to use, and what ethical considerations to address.

 

This post outlines key professional learning strategies that institutions and educators can adopt to build confidence, competence, and critical awareness around AI in teaching and learning.

 

Why Faculty Development Is Critical for AI Integration

Effective AI integration doesn’t begin with technology—it begins with pedagogy. According to Zawacki-Richter et al. (2019), most AI research in higher education has focused on technological capabilities, often overlooking the pedagogical and professional needs of instructors. Without appropriate support, educators may underutilize tools, reinforce bias, or resist AI altogether.

 

Adult educators must cultivate both technical fluency and andragogical insight when navigating AI, ensuring that use of these tools aligns with adult learning principles such as relevance, autonomy, and critical reflection.

 

Professional Learning Strategies for AI Integration

1. Start with Foundational AI Literacy. Instructors need a working understanding of how AI functions, what types of tools are available, and how algorithms use data to generate outcomes.

  • Offer self-paced modules or short workshops on AI basics.
  • Use plain-language explanations and real-world examples.
  • Introduce key terms such as machine learning, natural language processing, and generative AI.

 

Goal: Reduce fear and foster curiosity by demystifying the technology.

 

2. Contextualize AI within Pedagogical Practice. AI should be introduced not as a standalone innovation, but as a tool that supports learning goals.

  • Explore case studies showing how AI enhances feedback, scaffolding, or engagement.
  • Encourage faculty to align AI use with course outcomes, not convenience alone.
  • Include discussions on AI’s role in formative assessment and inclusive practices.

 

Goal: Ensure instructional use is meaningful and learner-centered.

 

3. Encourage Exploration and Experimentation. Hands-on experience builds confidence. Provide protected time and space for faculty to explore AI tools and assess their potential.

  • Organize low-stakes “sandbox” sessions.
  • Host faculty learning communities focused on experimentation.
  • Provide small grants or micro-credentials for course redesign projects that integrate AI.

 

Goal: Empower instructors to learn by doing in a supportive environment.

 

4. Facilitate Ethical and Critical Discussions. Professional learning should include ethical inquiry—not just technical training.

  • Discuss issues such as data privacy, algorithmic bias, authorship, and transparency.
  • Introduce frameworks like those from Holmes et al. (2022) for ethical AI in education.
  • Encourage reflection on how AI may impact learner equity and agency.

 

Goal: Promote responsible, reflective AI use aligned with educational values.

 

5. Model AI Use in Faculty Development. Lead by example: integrate AI tools into the professional learning experience itself.

  • Use generative AI to personalize workshop content or simulate scenarios.
  • Demonstrate how AI can streamline feedback or facilitate knowledge construction.

 

Goal: Show—not just tell—how AI can be pedagogically productive.

 

Institutional Support for Sustainable AI Integration

In addition to individual professional development, institutions should:

  • Create cross-functional AI task forces involving faculty, learning designers, and IT staff.
  • Develop guidance on appropriate and transparent AI use, including academic integrity policies.
  • Recognize and reward faculty who engage in innovative, ethical AI practices.

 

Finally, institutions should embed AI into broader digital transformation strategies, ensuring it complements rather than disrupts existing instructional and student support systems.

 


Conclusion: Building a Culture of AI Readiness

Preparing instructors for AI integration is not just a technical challenge—it is a professional learning imperative. Through sustained, collaborative, and values-driven professional development, educators can harness AI’s potential while remaining grounded in human-centered teaching.

 

At the AI Literacy Forum in the Adult Learning Exchange Virtual Community, faculty developers and educators are invited to share practices, ask questions, and collaborate on creating inclusive, ethical, and engaging AI-enhanced learning environments. Moderated by Drs. Simone Conceição and Lilian Hill, the forum is a space for growing collective capacity in the age of AI.


 

References

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., & Santos, O. C. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(4), 575–617. https://doi.org/10.1007/s40593-021-00239-1

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–Where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27. https://doi.org/10.1186/s41239-019-0171-0

Thursday, January 8, 2026

Data Privacy and Security for Adult Learners in AI Systems

 


By Lilian H. Hill

 

Artificial intelligence (AI) systems are now embedded in many adult learning environments, including learning management systems, adaptive learning platforms, writing and tutoring tools, learning analytics dashboards, and virtual advising systems. These technologies promise personalization, efficiency, and expanded access to learning. At the same time, they raise critical concerns about data privacy and security, especially for adult learners navigating education alongside their professional, familial, and civic responsibilities.

 

Understanding how AI systems collect, analyze, store, and protect learner data is essential for fostering trust, supporting ethical practice, and empowering adult learners to make informed decisions about their participation in AI-enabled learning environments.

 

Why Data Privacy Is Especially Important for Adult Learners

Data privacy for adult learners in AI systems hinges on three principles: data minimization, strong security, and transparency. Only necessary data should be collected, and they should be used ethically, with learners retaining control over their information. Security measures such as multi-factor authentication, encryption, access controls, and regular audits protect sensitive information from breaches. Because user inputs to generative AI tools can be used to train the underlying models, learners should also exercise caution before sharing private data.

 

Adult learners differ from traditional-age students in ways that heighten the stakes of data privacy. Many adult learners are employed professionals whose learning data may intersect with workplace evaluations, licensure requirements, or career advancement. Others may be returning to education after long absences or engaging in learning to reskill in rapidly changing labor markets. These contexts make confidentiality, consent, and control over personal information particularly important (Kasworm, 2010; Rose et al., 2023).

 

AI systems collect extensive data, including demographic information, learning behaviors, written assignments, discussion posts, performance metrics, and engagement patterns. When these data are inadequately protected or used beyond their original purpose, adult learners may face risks such as loss of privacy, data misuse, reputational harm, or unintended surveillance (Azevedo et al., 2025; Prinsloo & Slade, 2017).

 

How AI Systems Use Learner Data

AI-driven learning technologies rely on data to function. Algorithms analyze learner inputs to personalize content, generate feedback, predict performance, or automate decision-making processes. While these capabilities can support learning, they also introduce complexity and opacity. Learners may not know what data are collected, how long they are retained, or how algorithmic decisions are made (Zuboff, 2019).

 

From an ethical perspective, transparency is critical. Responsible AI systems should clearly communicate what data are collected and why, how data are processed and analyzed, whether data are shared with third parties, how long data are retained, and what rights learners have to access, correct, or delete their data. Without transparency, learners are asked to trust systems they may not fully understand, undermining autonomy and informed consent (Floridi et al., 2018).

 

Data Security Risks in AI-Enabled Learning

Beyond privacy, data security refers to the technical and organizational safeguards that protect information from unauthorized access, breaches, or misuse. Educational institutions and technology vendors increasingly store learner data in cloud-based systems, which can be vulnerable to cyberattacks if not adequately secured (Azevedo et al., 2025; Means et al., 2020).

 

Despite the rapid adoption of AI tools, institutional guidance on their responsible integration into higher education remains uneven. Where policies exist, they differ substantially in scope, enforceability, and levels of faculty involvement, leaving many educators uncertain about what is permitted, encouraged, or restricted (Azevedo et al., 2025). As a result, institutions face an increasing imperative to develop AI policies that not only address emerging risks but also provide faculty with clarity, support, and flexibility.

 

For adult learners, data breaches may expose not only academic information but also sensitive personal and professional details. Strong data security practices such as encryption, access controls, regular audits, and incident response planning are essential to minimizing these risks. Institutions have an ethical responsibility to ensure that efficiency and innovation do not come at the expense of learner protection.

 

Power, Surveillance, and Learning Analytics

AI systems in education often operate through learning analytics, which track and analyze learner behavior to inform instructional decisions. While analytics can identify students who need support, they can also create surveillance environments that disproportionately affect adult learners who balance learning with work, caregiving, or health challenges (Prinsloo & Slade, 2017).

 

When predictive models label learners as “at risk,” those classifications may shape how instructors, advisors, or systems interact with them. Without careful governance, such systems risk reinforcing bias, reducing learner agency, and privileging efficiency over human judgment (Selwyn, 2019).

 

Empowering Adult Learners Through Digital Literacy

Supporting data privacy and security is not solely a technical challenge; it is also an educational one. Adult learners benefit from opportunities to develop digital and data literacy, including understanding privacy policies, consent mechanisms, and the implications of sharing data with AI systems (Selwyn, 2016).

 

Educators and institutions can empower learners by explaining how AI tools work in accessible language, providing choices about tool use when possible, modeling ethical and transparent data practices, and encouraging critical reflection on technology’s role in learning. Such practices align with adult learning principles that emphasize autonomy, relevance, and respect for learners’ lived experiences (Knowles et al., 2015).

 

Toward Ethical and Trustworthy AI in Adult Learning

As AI becomes more prevalent in adult education, data privacy and security must be treated as foundational—not optional—components of effective learning design. Ethical AI systems prioritize learner rights, minimize data collection to what is necessary, protect data rigorously, and involve learners as informed participants rather than passive data sources (Floridi et al., 2018).

 

For adult learners, trust is central. When learners trust that their data are being handled responsibly, they are more likely to engage meaningfully with AI tools, experiment with new forms of learning, and fully benefit from technological innovation. Protecting data privacy and security is therefore not only a legal or technical obligation, but a pedagogical and ethical one.

 

References

Azevedo, L., Robles, P., Best, E., & Mallinson, D. J. (2025). Institutional policies on artificial intelligence in higher education: Frameworks and best practices for faculty. New Directions for Adult and Continuing Education, 2025(188), 70–78. https://doi.org/10.1002/ace.70013

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Kasworm, C. E. (2010). Adult learners in a research university: Negotiating undergraduate student identity. Adult Education Quarterly, 60(2), 143–160. https://doi.org/10.1177/0741713609336110

Knowles, M. S., Holton, E. F., & Swanson, R. A. (2015). The adult learner (8th ed.). Routledge.

Means, B., Bakia, M., & Murphy, R. (2020). Learning online: What research tells us about whether, when and how. Routledge.

Prinsloo, P., & Slade, S. (2017). An elephant in the learning analytics room: The obligation to act. Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 46–55. https://doi.org/10.1145/3027385.3027406

Rose, A. D., Ross-Gordon, J., & Kasworm, C. E. (2023). Creating a place for adult learners in higher education: Challenges and opportunities. Routledge.

Selwyn, N. (2019). Education and technology: Key issues and debates (3rd ed.). Bloomsbury.

Selwyn, N. (2019). What’s the problem with learning analytics? Journal of Learning Analytics, 6(3), 11–19. https://doi.org/10.18608/jla.2019.63.3

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.