
Thursday, October 30, 2025

Ethical Use of AI in Teaching and Learning

By Simone C. O. Conceição

 

Artificial Intelligence (AI) is rapidly becoming a fixture in educational practice. Whether through chatbots offering academic support, automated grading systems, adaptive learning platforms, or generative tools like ChatGPT, AI promises to improve efficiency, accessibility, and personalization. However, with great power comes significant ethical responsibility.

 

As AI becomes embedded in teaching and learning environments, educators must consider how to integrate these tools ethically, ensuring they enhance—not diminish—the quality, fairness, and inclusivity of education.

 

Why AI Ethics Matter in Education

AI systems differ from traditional software because they evolve based on data, learn from patterns, and often operate without full transparency. This complexity introduces serious ethical risks, including privacy breaches, algorithmic bias, and diminished human agency (Floridi et al., 2018).

 

In educational contexts, these concerns are amplified. Learners—especially adults returning to education or navigating online environments—place trust in systems to guide their progress. Ethical use of AI ensures that learners are respected as individuals, not treated as data points, and that educational systems support inclusion, equity, and agency (Holmes et al., 2022).

 

Key Principles for Ethical AI Integration

1. Transparency and Explainability. Educators and students should understand when AI is used and how it functions. For example, if an AI grades an assignment or suggests learning paths, users should know how those decisions are made.

 

Example: Platforms like Gradescope provide AI-assisted grading while allowing instructors to view, verify, and modify outcomes.

 

2. Fairness and Bias Prevention. AI systems can unintentionally replicate biases found in their training data, leading to unfair recommendations or assessments.

 

Best practice: Choose AI tools that have been tested for equity across diverse learner populations. Regularly review outputs for disproportionate patterns.
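A review like this can be partly automated. The sketch below is a minimal, illustrative fairness audit (not any specific platform's feature): it compares pass rates on AI-graded work across learner groups and flags a group whose rate falls below four-fifths of the highest rate, a common rule of thumb for spotting disproportionate outcomes. The group labels, data, and threshold are all assumptions for demonstration.

```python
# Illustrative fairness audit over AI-graded outcomes.
# Data and group labels are hypothetical, not from a real platform.

def pass_rates(outcomes):
    """Compute the pass rate per group from (group, passed) records."""
    totals, passes = {}, {}
    for group, passed in outcomes:
        totals[group] = totals.get(group, 0) + 1
        passes[group] = passes.get(group, 0) + (1 if passed else 0)
    return {g: passes[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose rate is below threshold x the highest group's rate
    (the "four-fifths" rule of thumb)."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Hypothetical records: (group label, passed?)
outcomes = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = pass_rates(outcomes)   # A: 0.75, B: 0.25
flags = flag_disparity(rates)  # B is flagged: 0.25 / 0.75 < 0.8
```

A flagged group is a prompt for human investigation, not proof of bias; the point is to make the review routine rather than occasional.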

 

3. Privacy and Data Ethics. AI systems often require access to learner data. Mishandling this data can violate privacy or lead to surveillance-style practices (Slade & Prinsloo, 2013).

 

Recommendation: Always inform learners about what data are collected, why they are needed, and how they will be used. Select platforms that comply with FERPA or other data protection laws.
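One concrete way to operationalize this disclosure is a simple data inventory that states what is collected, why, and for how long, from which plain-language notices can be generated for learners. The sketch below is a hypothetical example; the field names, purposes, and retention periods are assumptions, not requirements of FERPA or any particular tool.

```python
# Illustrative data inventory for a course's AI tools.
# All fields, purposes, and retention periods are hypothetical examples.

DATA_INVENTORY = [
    {"field": "quiz_scores", "purpose": "adaptive practice recommendations",
     "retention_days": 180, "shared_with_third_parties": False},
    {"field": "login_timestamps", "purpose": "engagement alerts to the instructor",
     "retention_days": 90, "shared_with_third_parties": False},
]

def disclosure_summary(inventory):
    """Turn the inventory into plain-language lines to share with learners."""
    return [
        f"We collect {item['field']} to support {item['purpose']}; "
        f"it is kept for {item['retention_days']} days."
        for item in inventory
    ]

for line in disclosure_summary(DATA_INVENTORY):
    print(line)
```

Keeping the inventory in one place also makes it easier to verify that no tool collects data without a stated purpose.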

 

4. Human Oversight. AI should support, not supplant, the role of the educator. Human judgment remains crucial for understanding context, emotions, and individual needs.

 

Reminder: Use AI for administrative and instructional support—but retain personal engagement for grading, feedback, and mentorship.

 

5. Equity and Access. Not all learners have equal access to high-speed internet, modern devices, or digital fluency. Ethical use means considering how AI tools impact learners from different backgrounds.

 

Action: Provide alternatives to AI-based tools when needed and offer digital literacy support to close usage gaps.

 

Ethical Challenges in Practice

Despite the best intentions, real-world implementation often raises dilemmas:

  • Should an AI that flags a student for "low engagement" notify the instructor immediately, or wait until there is more context?

  • How do you handle learner consent in systems where data are automatically collected?
  • What safeguards are needed to prevent overreliance on AI-generated feedback?
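The first dilemma above can be softened in system design by routing automated flags to a human review queue rather than triggering notifications or interventions directly. The sketch below is one possible shape for such a safeguard; the threshold, field names, and `ReviewQueue` class are illustrative assumptions, not a reference to any real system.

```python
# Sketch of a human-in-the-loop review queue: an automated rule flags
# possible low engagement, but only queues the case for instructor review.
# The threshold and data fields are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds AI-flagged cases until an instructor reviews them."""
    pending: list = field(default_factory=list)

    def flag(self, learner_id, logins_last_week, note):
        # The system records context for the instructor; it takes no action itself.
        self.pending.append({"learner": learner_id,
                             "logins": logins_last_week,
                             "note": note})

def check_engagement(queue, learner_id, logins_last_week, min_logins=2):
    """Queue a case for human review when activity drops below the threshold."""
    if logins_last_week < min_logins:
        queue.flag(learner_id, logins_last_week,
                   "Low activity; instructor to check context before contacting.")

queue = ReviewQueue()
check_engagement(queue, "learner-17", logins_last_week=0)
check_engagement(queue, "learner-42", logins_last_week=5)
# Only learner-17 is queued for human review.
```

The design choice here is that the algorithm narrows attention while the instructor retains the decision, which also addresses the overreliance concern in the third question.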

 

These questions don’t have one-size-fits-all answers, but they underscore the importance of developing institutional policies, faculty guidelines, and learner consent protocols.

 

Preparing Educators and Learners for Ethical AI Use

Ethical use of AI in education starts with awareness and professional development. Faculty should be equipped not only to use AI tools, but to evaluate their implications critically. Similarly, adult learners should be encouraged to reflect on how AI affects their learning experience and data footprint.

 

Holmes et al. (2022) call for embedding AI ethics into digital literacy efforts so learners can become informed users and responsible digital citizens.

 


Continue the Conversation

AI’s influence on education will only grow. Educators must lead conversations about ethics—not as a constraint, but as a framework for responsible innovation. The AI Literacy Forum, hosted by the Adult Learning Exchange Virtual Community, provides a collaborative space to explore these challenges.

 

Moderated by Dr. Simone Conceição and Dr. Lilian Hill, the forum invites educators, designers, and learners to reflect on ethical practices, share resources, and build a more equitable digital learning future.


 

References

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., & Santos, O. C. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32(4), 575–617. https://doi.org/10.1007/s40593-021-00239-1

Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57(10), 1510–1529. https://doi.org/10.1177/0002764213479366