
Thursday, January 8, 2026

Data Privacy and Security for Adult Learners in AI Systems

 


By Lilian H. Hill

 

Artificial intelligence (AI) systems are now embedded in many adult learning environments, including learning management systems, adaptive learning platforms, writing and tutoring tools, learning analytics dashboards, and virtual advising systems. These technologies promise personalization, efficiency, and expanded access to learning. At the same time, they raise critical concerns about data privacy and security, especially for adult learners navigating education alongside their professional, familial, and civic responsibilities.

 

Understanding how AI systems collect, analyze, store, and protect learner data is essential for fostering trust, supporting ethical practice, and empowering adult learners to make informed decisions about their participation in AI-enabled learning environments.

 

Why Data Privacy Is Especially Important for Adult Learners

Data privacy for adult learners in AI systems hinges on data minimization, strong security, and transparency. Only necessary data should be collected, used ethically, and kept under learners' control. Security measures such as multi-factor authentication, encryption, and regular audits protect sensitive information from breaches. Learners should also be aware that inputs to generative AI tools may be used to train the underlying models, which calls for caution when sharing private information.
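
As a minimal illustration of data minimization, the Python sketch below strips a learner record down to an explicit allow-list of fields before anything is shared with an external AI service. The field names and the record itself are invented for illustration, not drawn from any specific platform.

    # Illustrative sketch: field names and values are hypothetical.
    ALLOWED_FIELDS = {"learner_id", "course_id", "quiz_scores"}

    def minimize_record(record: dict) -> dict:
        """Keep only the fields an external AI service actually needs."""
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    learner_record = {
        "learner_id": "anon-4821",       # pseudonymous ID, not a real name
        "course_id": "ADED-601",
        "quiz_scores": [0.82, 0.91],
        "email": "learner@example.com",  # sensitive: never shared
        "employer": "Acme Corp",         # sensitive: dropped before sharing
    }

    payload = minimize_record(learner_record)  # only allow-listed fields remain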

 

Adult learners differ from traditional-age students in ways that heighten the stakes of data privacy. Many adult learners are employed professionals whose learning data may intersect with workplace evaluations, licensure requirements, or career advancement. Others may be returning to education after long absences or engaging in learning to reskill in rapidly changing labor markets. These contexts make confidentiality, consent, and control over personal information particularly important (Kasworm, 2010; Rose et al., 2023).

 

AI systems collect extensive data, including demographic information, learning behaviors, written assignments, discussion posts, performance metrics, and engagement patterns. When these data are inadequately protected or used beyond their original purpose, adult learners may face risks such as loss of privacy, data misuse, reputational harm, or unintended surveillance (Azevedo et al., 2025; Prinsloo & Slade, 2017).

 

How AI Systems Use Learner Data

AI-driven learning technologies rely on data to function. Algorithms analyze learner inputs to personalize content, generate feedback, predict performance, or automate decision-making processes. While these capabilities can support learning, they also introduce complexity and opacity. Learners may not know what data are collected, how long they are retained, or how algorithmic decisions are made (Zuboff, 2019).

 

From an ethical perspective, transparency is critical. Responsible AI systems should clearly communicate what data are collected and why, how data are processed and analyzed, whether data are shared with third parties, how long data are retained, and what rights learners have to access, correct, or delete their data. Without transparency, learners are asked to trust systems they may not fully understand, undermining autonomy and informed consent (Floridi et al., 2018).

 

Data Security Risks in AI-Enabled Learning

Beyond privacy, data security refers to the technical and organizational safeguards that protect information from unauthorized access, breaches, or misuse. Educational institutions and technology vendors increasingly store learner data in cloud-based systems, which can be vulnerable to cyberattacks if not adequately secured (Azevedo et al., 2025; Means et al., 2020).

 

Despite the rapid adoption of AI tools, institutional guidance on their responsible integration into higher education remains uneven. Where policies exist, they differ substantially in scope, enforceability, and levels of faculty involvement, leaving many educators uncertain about what is permitted, encouraged, or restricted (Azevedo et al., 2025). As a result, institutions face an increasing imperative to develop AI policies that not only address emerging risks but also provide faculty with clarity, support, and flexibility.

 

For adult learners, data breaches may expose not only academic information but also sensitive personal and professional details. Strong data security practices such as encryption, access controls, regular audits, and incident response planning are essential to minimizing these risks. Institutions have an ethical responsibility to ensure that efficiency and innovation do not come at the expense of learner protection.
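
To make one of these safeguards concrete, the sketch below shows what encryption at rest can look like using Python's third-party cryptography library (Fernet symmetric encryption). This is a minimal sketch, not a complete security design; in practice the key would be loaded from a managed key store, never generated or stored alongside the data.

    # Minimal sketch, assuming "pip install cryptography" has been run.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in production, load from a key store
    fernet = Fernet(key)

    record = b'{"learner_id": "anon-4821", "quiz_scores": [0.82, 0.91]}'
    token = fernet.encrypt(record)   # ciphertext is safe to store or back up
    assert fernet.decrypt(token) == record  # only key holders can read it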

 

Power, Surveillance, and Learning Analytics

AI systems in education often operate through learning analytics, which track and analyze learner behavior to inform instructional decisions. While analytics can identify students who need support, they can also create surveillance environments that disproportionately affect adult learners who balance learning with work, caregiving, or health challenges (Prinsloo & Slade, 2017).

 

When predictive models label learners as “at risk,” those classifications may shape how instructors, advisors, or systems interact with them. Without careful governance, such systems risk reinforcing bias, reducing learner agency, and privileging efficiency over human judgment (Selwyn, 2019).
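
The toy sketch below illustrates the governance point. The features, weights, and threshold are invented; the essential detail is the human_review flag, which ensures the model only proposes a label while a person decides what, if anything, to do with it.

    # Toy illustration: features, weights, and threshold are all invented.
    def classify(logins_per_week: float, avg_quiz: float) -> dict:
        # Crude linear score; real systems use trained models.
        score = 0.6 * (1 - min(logins_per_week / 5, 1)) + 0.4 * (1 - avg_quiz)
        return {
            "score": round(score, 2),
            "flagged": score > 0.5,   # invented threshold
            "human_review": True,     # never act on the label automatically
        }

    # An adult learner who logs in rarely (e.g., due to caregiving) but
    # performs well is still flagged, showing why human context must come
    # before any automated action:
    print(classify(logins_per_week=1, avg_quiz=0.9))
    # {'score': 0.52, 'flagged': True, 'human_review': True}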

 

Empowering Adult Learners Through Digital Literacy

Supporting data privacy and security is not solely a technical challenge; it is also an educational one. Adult learners benefit from opportunities to develop digital and data literacy, including understanding privacy policies, consent mechanisms, and the implications of sharing data with AI systems (Selwyn, 2016).

 

Educators and institutions can empower learners by explaining how AI tools work in accessible language, providing choices about tool use when possible, modeling ethical and transparent data practices, and encouraging critical reflection on technology’s role in learning. Such practices align with adult learning principles that emphasize autonomy, relevance, and respect for learners’ lived experiences (Knowles et al., 2015).

 

Toward Ethical and Trustworthy AI in Adult Learning

As AI becomes more prevalent in adult education, data privacy and security must be treated as foundational—not optional—components of effective learning design. Ethical AI systems prioritize learner rights, minimize data collection to what is necessary, protect data rigorously, and involve learners as informed participants rather than passive data sources (Floridi et al., 2018).

 

For adult learners, trust is central. When learners trust that their data are being handled responsibly, they are more likely to engage meaningfully with AI tools, experiment with new forms of learning, and fully benefit from technological innovation. Protecting data privacy and security is therefore not only a legal or technical obligation, but a pedagogical and ethical one.

 

References

Azevedo, L., Robles, P., Best, E., & Mallinson, D. J. (2025). Institutional policies on artificial intelligence in higher education: Frameworks and best practices for faculty. New Directions for Adult and Continuing Education, 2025(188), 70–78. https://doi.org/10.1002/ace.70013

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Kasworm, C. E. (2010). Adult learners in a research university: Negotiating undergraduate student identity. Adult Education Quarterly, 60(2), 143–160. https://doi.org/10.1177/0741713609336110

Knowles, M. S., Holton, E. F., & Swanson, R. A. (2015). The adult learner (8th ed.). Routledge.

Means, B., Bakia, M., & Murphy, R. (2020). Learning online: What research tells us about whether, when and how. Routledge.

Prinsloo, P., & Slade, S. (2017). An elephant in the learning analytics room: The obligation to act. Proceedings of the Seventh International Learning Analytics & Knowledge Conference, 46–55. https://doi.org/10.1145/3027385.3027406

Rose, A. D., Ross-Gordon, J., & Kasworm, C. E. (2023). Creating a place for adult learners in higher education: Challenges and opportunities. Routledge.

Selwyn, N. (2016). Education and technology: Key issues and debates (2nd ed.). Bloomsbury.

Selwyn, N. (2019). What’s the problem with learning analytics? Journal of Learning Analytics, 6(3), 11–19. https://doi.org/10.18608/jla.2019.63.3

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

 

Thursday, November 27, 2025

The Role of AI in Inclusive Learning Environments


 

By Simone C. O. Conceição

 

As artificial intelligence (AI) becomes increasingly integrated into educational tools and systems, it holds the potential to advance inclusive teaching and learning—if applied with care and intentionality. AI can support learners with diverse needs, streamline accessibility features, and personalize learning pathways. At the same time, it can reinforce inequities if not thoughtfully designed and implemented.

 

This post explores how AI can promote inclusion in adult education, the challenges to be aware of, and strategies educators can use to ensure AI supports equitable learning environments for all.

 

What Is Inclusive Education in the Age of AI?

Inclusive education aims to ensure that all learners—regardless of ability, language, background, or identity—can access and fully participate in meaningful learning experiences. With AI, this vision expands beyond physical accessibility to encompass digital inclusion, personalized support, and equity in learning outcomes.

 

AI tools can help realize this vision by offering assistive technologies, adapting content in real time, and identifying learner needs through data-driven insights (UNESCO, 2021). However, true inclusivity depends not just on access to tools, but on how they are developed, selected, and used by educators.

 

Opportunities: How AI Can Support Inclusion

1. Adaptive Learning for Diverse Needs. AI can adjust the pace, format, and complexity of content based on a learner’s interactions. This is particularly beneficial for adult learners with varying literacy levels, learning differences, or limited prior experience in digital environments (Holmes et al., 2022).

Example: Adaptive platforms like ALEKS or Knewton Alta personalize instruction by identifying learning gaps and adjusting content delivery accordingly.
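
As a minimal sketch of the underlying idea (thresholds are invented for illustration, not taken from any vendor), an adaptive system can adjust difficulty from a rolling window of a learner's recent answers:

    # Minimal sketch of adaptive pacing; thresholds are invented.
    from collections import deque

    class AdaptivePacer:
        def __init__(self, window: int = 5):
            self.recent = deque(maxlen=window)  # last N answers (1 = correct)
            self.level = 1                      # current difficulty level

        def record(self, correct: bool) -> int:
            self.recent.append(1 if correct else 0)
            accuracy = sum(self.recent) / len(self.recent)
            if accuracy > 0.8:
                self.level += 1                 # ready for harder material
            elif accuracy < 0.4 and self.level > 1:
                self.level -= 1                 # slow down and reinforce
            return self.level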

 

2. Assistive Technologies. AI powers tools like real-time transcription (e.g., Otter.ai), text-to-speech (e.g., Microsoft Immersive Reader), and automated captioning—all of which improve access for learners with disabilities or English language learners.

These tools align with Universal Design for Learning (UDL) principles, which emphasize providing multiple means of engagement, representation, and expression (CAST, 2018).

 

3. Multilingual and Cultural Accessibility. AI-driven translation tools, such as Google Translate or DeepL, can break down language barriers and support culturally diverse learners. Additionally, AI chatbots and voice assistants can be trained in various dialects and languages to offer support beyond the dominant culture.

 

4. Equity Through Predictive Analytics. Learning analytics supported by AI can help identify learners who may be falling behind—based on patterns in engagement or assessment data—and enable early intervention (Ifenthaler & Yau, 2020). When used ethically, this can prevent learners from being overlooked due to implicit bias or lack of visibility in online environments.

 

Challenges and Ethical Considerations

Despite these opportunities, there are risks that must be addressed to ensure AI truly serves inclusion:

  • Bias in Training Data: If AI systems are trained on datasets that lack diversity, they may reproduce stereotypes or exclude underrepresented groups.
  • Privacy Concerns: Collecting sensitive learner data for personalization or analytics raises questions about consent, surveillance, and autonomy.
  • Technology Access Gaps: AI-powered tools often assume stable internet, updated devices, and digital fluency—conditions not all adult learners have.

 

Without intentional design, AI tools can amplify exclusion rather than mitigate it.

 

Strategies for Ethical and Inclusive AI Use

Educators, designers, and institutions can take the following steps to promote inclusive AI use:

  1. Evaluate Tools for Bias and Accessibility
    Choose vendors and platforms that are transparent about their algorithms and committed to accessibility standards.
  2. Involve Diverse Learners in Design and Testing
    Co-design AI-enhanced tools with input from learners of different ages, abilities, and cultural backgrounds.
  3. Provide Digital Literacy Support
    Ensure learners have the skills and support to use AI-powered tools confidently and critically.
  4. Ensure Human Oversight
    Use AI as a support—not a replacement—for relational teaching, dialogue, and community-building.
  5. Establish Data Ethics Protocols
    Be clear with learners about what data are collected, how they are used, and what choices they have in the process (see the sketch after this list).
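
As one way to make the fifth step concrete, the sketch below (all categories, retention periods, and field names are invented) expresses a data-collection disclosure as a structure that can be shown to learners and stored alongside each consent decision:

    # Illustrative sketch: every value here is hypothetical.
    DATA_DISCLOSURE = {
        "collected": ["quiz responses", "discussion posts", "login times"],
        "purpose": "personalized feedback and early academic support",
        "shared_with_third_parties": False,
        "retention_days": 365,
        "learner_rights": ["access", "correction", "deletion", "opt-out"],
    }

    def record_consent(learner_id: str, granted: bool) -> dict:
        # Log the decision next to the exact disclosure the learner saw.
        return {"learner_id": learner_id, "granted": granted,
                "disclosure": DATA_DISCLOSURE}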

Conclusion: Inclusion Must Be Intentional

AI is not inherently inclusive—but it can be a powerful tool for inclusion when paired with ethical practice, thoughtful pedagogy, and an unwavering commitment to equity. Integrating AI into education requires thoughtful consideration to ensure it advances equitable learning and protects the rights and needs of all students.

 

The AI Literacy Forum, hosted by the Adult Learning Exchange Virtual Community, offers a space for adult educators to discuss, question, and share resources related to equitable AI integration. Moderated by Drs. Simone Conceição and Lilian Hill, the forum welcomes your voice in shaping a more inclusive digital learning future.

 


 

References

CAST. (2018). Universal Design for Learning Guidelines version 2.2. http://udlguidelines.cast.org

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., & Santos, O. C. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(4), 575–617. https://doi.org/10.1007/s40593-021-00239-1

Ifenthaler, D., & Yau, J. Y.-K. (2020). Utilising learning analytics to support study success in higher education: A systematic review. Educational Technology Research and Development, 68, 1961–1990. https://doi.org/10.1007/s11423-020-09788-z

UNESCO. (2021). AI and education: Guidance for policy-makers. https://unesdoc.unesco.org/ark:/48223/pf0000377071

 

Thursday, November 13, 2025

Addressing Bias in AI: What Adult Educators Should Consider


 

By Lilian H. Hill

 

Artificial intelligence (AI) is increasingly shaping how people learn, work, and access information. From adaptive learning platforms to automated feedback tools, adult educators are finding themselves navigating opportunities and challenges that come with these technologies. One of the most pressing concerns is bias in AI systems, a complex issue that raises questions of fairness, equity, and responsibility in teaching and learning.

 

Concerns about biased algorithms predate the current popularity of artificial intelligence (Jennings, 2023). As early as the mid-1980s, a British medical school faced legal repercussions for discrimination after using a computer system to evaluate applicants. Although the system’s decisions mirrored those of human reviewers, it consistently favored men and those with European-sounding names. Decades later, Amazon attempted to streamline hiring with a similar AI tool, only to find it was disadvantaging women—an outcome rooted in biased training data from a male-dominated tech workforce.

 

OpenAI, the creator of ChatGPT and the DALL-E image generator, has been at the center of debates over bias since ChatGPT launched publicly in November 2022 (Jennings, 2023). The company has actively worked to correct emerging issues, as users flagged examples ranging from political slants to racial stereotypes. In February 2023, OpenAI took a proactive step by publishing a clear explanation of ChatGPT’s behavior, providing valuable insight into how the model functions and how future improvements are being shaped.

 

Understanding Bias in AI

Bias in AI occurs when algorithms produce outcomes that are systematically unfair or unbalanced, often due to the data used to train these systems. When the data reflects historical inequities, stereotypes, or informational gaps, AI may unintentionally reproduce or amplify those patterns (Mehrabi et al., 2022). For instance, résumé screening tools trained on past hiring data may undervalue applications from women or people of color (Dastin, 2018). Similarly, language models can generate content that perpetuates cultural stereotypes (Bender et al., 2021), and facial recognition systems may be less accurate for specific demographic groups, particularly individuals with dark skin (Buolamwini & Gebru, 2018). Understanding that AI bias often mirrors societal biases enables adult educators to engage with AI tools more critically and thoughtfully.
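
One simple, widely used check for systematically unbalanced outcomes is the disparate-impact ratio (the "four-fifths rule"): compare favorable-outcome rates across groups and flag ratios below 0.8. A minimal sketch with invented numbers:

    # Minimal sketch of a disparate-impact check; all counts are invented.
    rate_a = 40 / 100   # group A favorable-outcome rate: 0.40
    rate_b = 24 / 100   # group B favorable-outcome rate: 0.24

    ratio = rate_b / rate_a   # 0.60
    if ratio < 0.8:           # common rule-of-thumb threshold
        print(f"Possible disparate impact: ratio = {ratio:.2f}")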

There are three primary sources of bias in AI systems: (1) biased training data, (2) human influence during training, and (3) the lack of a shared definition of bias.

 

1. Biased Training Data

AI models learn from vast datasets that reflect the world as it is, including its prejudices. Just as humans are shaped by their environments, AI is shaped by the data it consumes, much of which comes from a biased internet. For instance, Amazon’s hiring algorithm penalized women because it was trained on historical data that was male-dominated. When datasets disproportionately represent particular groups or viewpoints, the model’s outputs reflect that imbalance. In short, there’s no such thing as a perfectly unbiased dataset.

 

2. Human Influence in Training

After initial training, AI outputs are refined through Reinforcement Learning from Human Feedback (RLHF), in which human reviewers judge and rank responses. While this helps shape AI into behaving more like a “responsible” human, it also introduces personal and cultural biases. If all reviewers share similar backgrounds, their preferences will influence how the model responds, making complete neutrality impossible.
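
To make the mechanism concrete: reward models in RLHF are commonly trained on pairs of responses that human reviewers have ranked, using a pairwise preference loss. The sketch below hard-codes reward scores for illustration; real systems learn them with a neural network. The key point for bias is visible in the math: whatever the reviewers preferred is exactly what the loss teaches the model to prefer.

    # Minimal sketch of the pairwise preference loss; scores are invented.
    import math

    def preference_loss(r_chosen: float, r_rejected: float) -> float:
        # -log(sigmoid(r_chosen - r_rejected)): small when the reward model
        # agrees with the human ranking, large when it disagrees.
        return -math.log(1 / (1 + math.exp(-(r_chosen - r_rejected))))

    print(preference_loss(r_chosen=2.0, r_rejected=0.5))  # ~0.20, agreement
    print(preference_loss(r_chosen=0.5, r_rejected=2.0))  # ~1.70, disagreement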

 

3. No Shared Definition of Bias


Even if we could remove all data that reflects human bias, we would still face one unsolvable problem: people disagree on what bias means. While most can agree that discrimination is harmful, opinions vary widely on how AI should navigate complex social, political, or moral issues. Over-filtering risks producing a model that is so neutral it becomes unhelpful, stripped of nuance and unable to take a stand on anything meaningful.

 

Why This Matters for Adult Education

Adult learners bring diverse backgrounds, identities, and experiences into the classroom. AI tools built on non-representative data can worsen existing inequalities in education unless developers improve their training methods and educators use the technology thoughtfully (Klein, 2024). When AI tools are introduced without awareness of bias, the risk is that inequities become amplified rather than reduced (Holmes et al., 2022). For instance:

 

  • Learners from marginalized groups may encounter materials or assessments that do not accurately represent their knowledge or potential.
  • Automated tutoring or feedback systems may respond differently depending on dialects, accents, or language use.
  • Predictive analytics used to flag “at-risk” learners could disproportionately affect specific student populations (Slade & Prinsloo, 2013).

 

Educators play a pivotal role in mediating these risks, ensuring that AI supports equity rather than undermining it.

 

What Adult Educators Should Consider

  1. Critical Evaluation of Tools
    • Ask: How was this AI system trained? What kinds of data were used?
    • Explore whether the developers have published documentation about bias testing (Mitchell et al., 2019).
  2. Transparency with Learners
    • Explain how AI is being used in the classroom and its potential limitations.
    • Encourage learners to evaluate outputs critically rather than accepting them at face value.
  3. Centering Equity and Inclusion
    • Select tools that offer options for cultural and linguistic diversity.
    • Advocate for systems that are designed with universal access in mind (Holmes et al., 2022).
  4. Ongoing Reflection and Adaptation
    • Keep a reflective journal or log of how AI tools perform with different groups of learners.
    • Adjust teaching strategies when inequities appear.
  5. Collaborative Dialogue
    • Create opportunities for learners to share their experiences with AI.
    • Engage in professional learning communities where educators discuss emerging issues and solutions.

 

Moving Forward

AI literacy is more crucial than ever. When talking about AI with your adult learners, ensure they understand that these models are not flawless, their responses shouldn't be accepted as the absolute truth, and that primary sources remain the most reliable. Until better regulations are in place for this technology, the best approach is to "trust but verify." AI technologies are not neutral—they mirror the values, assumptions, and imperfections of the societies that create them. For adult educators, the challenge is not to reject AI outright but to engage with it thoughtfully, critically, and ethically. By proactively recognizing and addressing bias, educators can help ensure that AI contributes to inclusive, empowering learning environments.

 

References

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15. http://proceedings.mlr.press/v81/buolamwini18a.html

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/idUSKCN1MK08G

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(4), 575–617. https://doi.org/10.1007/s40593-021-00239-1

Jennings, J. (2023, August 8). AI in education: The bias dilemma. EdTech Insights. https://www.esparklearning.com/blog/get-to-know-ai-the-bias-dilemma/

Klein, A. (2024, June 24). AI's potential for bias puts onus on educators, developers. Government Technology. https://www.govtech.com/education/k-12/ais-potential-for-bias-puts-onus-on-educators-developers

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2022). A survey on bias and fairness in machine learning. ACM Computing Surveys, 55(6), 1–35. https://doi.org/10.1145/3457607

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596

Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529. https://doi.org/10.1177/0002764213479366

 

 

 

Thursday, October 30, 2025

Ethical Use of AI in Teaching and Learning

 


By Simone C. O. Conceição

 

Artificial Intelligence (AI) is rapidly becoming a fixture in educational practice. Whether through chatbots offering academic support, automated grading systems, adaptive learning platforms, or generative tools like ChatGPT, AI promises to improve efficiency, accessibility, and personalization. However, with great power comes significant ethical responsibility.

 

As AI becomes embedded in teaching and learning environments, educators must consider how to integrate these tools ethically, ensuring they enhance—not diminish—the quality, fairness, and inclusivity of education.

 

Why AI Ethics Matter in Education

AI systems differ from traditional software because they evolve based on data, learn from patterns, and often operate without full transparency. This complexity introduces serious ethical risks, including privacy breaches, algorithmic bias, and diminished human agency (Floridi et al., 2018).

 

In educational contexts, these concerns are amplified. Learners—especially adults returning to education or navigating online environments—place trust in systems to guide their progress. Ethical use of AI ensures that learners are respected as individuals, not treated as data points, and that educational systems support inclusion, equity, and agency (Holmes et al., 2022).

 

Key Principles for Ethical AI Integration

1. Transparency and Explainability. Educators and students should understand when AI is used and how it functions. For example, if an AI grades an assignment or suggests learning paths, users should know how those decisions are made.

 

Example: Platforms like Gradescope provide AI-assisted grading while allowing instructors to view, verify, and modify outcomes.

 

2. Fairness and Bias Prevention. AI systems can unintentionally replicate biases found in their training data, leading to unfair recommendations or assessments.

 

Best practice: Choose AI tools that have been tested for equity across diverse learner populations. Regularly review outputs for disproportionate patterns.

 

3. Privacy and Data Ethics. AI systems often require access to learner data. Mishandling this data can violate privacy or lead to surveillance-style practices (Slade & Prinsloo, 2013).

 

Recommendation: Always inform learners about what data are collected, why they are needed, and how they will be used. Select platforms that comply with FERPA or other data protection laws.

 

4. Human Oversight. AI should support, not supplant, the role of the educator. Human judgment remains crucial for understanding context, emotions, and individual needs.

 

Reminder: Use AI for administrative and instructional support—but retain personal engagement for grading, feedback, and mentorship.

 

5. Equity and Access. Not all learners have equal access to high-speed internet, modern devices, or digital fluency. Ethical use means considering how AI tools impact learners from different backgrounds.

 

Action: Provide alternatives to AI-based tools when needed and offer digital literacy support to close usage gaps.

 

Ethical Challenges in Practice

Despite the best intentions, real-world implementation often raises dilemmas:

  • Should an AI that flags a student for "low engagement" notify the instructor immediately or wait for more context?
  • How do you handle learner consent in systems where data are automatically collected?
  • What safeguards are needed to prevent overreliance on AI-generated feedback?

 

These questions don’t have one-size-fits-all answers, but they underscore the importance of developing institutional policies, faculty guidelines, and learner consent protocols.

 

Preparing Educators and Learners for Ethical AI Use

Ethical use of AI in education starts with awareness and professional development. Faculty should be equipped not only to use AI tools, but to evaluate their implications critically. Similarly, adult learners should be encouraged to reflect on how AI affects their learning experience and data footprint.

 

Holmes et al. (2022) call for embedding AI ethics into digital literacy efforts so learners can become informed users and responsible digital citizens.

 


Continue the Conversation

AI’s influence on education will only grow. Educators must lead conversations about ethics—not as a constraint, but as a framework for responsible innovation. The AI Literacy Forum, hosted by the Adult Learning Exchange Virtual Community, provides a collaborative space to explore these challenges.

 

Moderated by Dr. Simone Conceição and Dr. Lilian Hill, the forum invites educators, designers, and learners to reflect on ethical practices, share resources, and build a more equitable digital learning future.


 

References

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., & Santos, O. C. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(4), 575–617. https://doi.org/10.1007/s40593-021-00239-1

Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529. https://doi.org/10.1177/0002764213479366