
Thursday, August 21, 2025

AI-Assisted Feedback and Assessment: Opportunities and Limitations


 

By Lilian H. Hill

 

Knowledge assessment determines how well students have learned and evaluates the effectiveness of teaching content and strategies for future improvement (Hill, 2020). Research has shown that incorporating knowledge assessments and effective feedback during instruction can boost both student motivation and overall learning effectiveness (Minn, 2022). AI innovations in education promise faster, scalable, and personalized guidance for learning. While AI-based automation can reduce the labor-intensive aspects of conducting learning assessments, its true value lies in enabling a deeper understanding of students and freeing up time to respond creatively to teachable moments. A key priority with AI is ensuring that humans remain actively involved and in control, with attention given to all those participating in the process—students, educators, and others who support learners (U.S. Department of Education, 2023). This blog post explores the opportunities and limitations of using AI for feedback and assessment, along with best practices for effective integration.

 

Opportunities

AI-driven personalized input is revolutionizing education by creating dynamic, tailored learning experiences that foster student engagement, improve learning outcomes, and equip individuals with the skills needed to thrive in a rapidly evolving world. AI recognizes patterns within data and automates decisions to create an adaptive learning environment: a technology-enhanced educational system that uses data and algorithms to personalize instruction in real time, based on each learner’s performance, needs, and preferences. Effective adaptive learning environments depend on three key adaptations: (a) delivering precise, timely, and meaningful feedback during problem-solving; (b) organizing learning content to match each student’s unique skill level and proficiency; and (c) enhancing formative assessment feedback loops.

 

1.    Timely and Scalable Feedback

AI feedback leverages advancements in natural language processing to provide automated, personalized evaluations that can be scaled according to predefined criteria. AI systems can deliver instant feedback at scale, which is valuable in large classes or for repetitive tasks. According to a 2025 review of educational measurement technology, AI-powered scoring and personalized feedback enhance consistency and speed in assessment delivery. Drawing on extensive linguistic databases, these systems generate responses that mimic human engagement with student work. This technology has sparked considerable discussion in academic contexts, with the potential to transform teaching and learning practices (Zapata et al., 2025).

 

2.    Personalized Input and Adaptive Growth

Adaptive learning systems are essential for delivering personalized experiences in online instruction, particularly in large-enrollment courses such as MOOCs and in intelligent tutoring systems. For example, in a randomized controlled trial involving 259 undergraduates, researchers found that students receiving AI-generated feedback showed significant improvements across various writing dimensions compared to traditional instruction, with particularly strong effects on organization and content development (Zhang, 2025). The study also revealed that students valued the feedback’s usefulness over its surface ease of use.

 

3.    Enhanced Formative Assessment Loops

Technological interventions can create more personalized, timely feedback loops that facilitate deeper engagement with learning. Formative assessment has long been a central application of educational technology because feedback loops are essential for enhancing teaching and learning. When paired intentionally with human oversight, AI may enable richer formative assessment feedback loops, helping teachers adapt instruction based on student progress.

 

Limitations and Key Concerns

Creating machine learning models that deliver meaningful, personalized, and authentic feedback demands substantial involvement from human domain experts. Choices about whose expertise is included, how it is gathered, and how it is applied significantly influence the relevance and quality of the feedback produced. These models also require ongoing maintenance and refinement to align with changing contexts, evolving theories, and diverse student needs. Without continuous updates, feedback can quickly become outdated or misaligned with current learner requirements. Key limitations include (a) concerns about AI system accuracy, (b) loss of contextual understanding and embedded bias, (c) overreliance that diminishes human interaction, and (d) important ethical and pedagogical challenges.

 

1.    Accuracy

Researchers have recorded numerous cases of AI systems making harmful decisions due to coding errors or biased training data. Such failures have produced inaccurate teaching evaluations, caused job and license losses, and discriminated based on names, addresses, gender, and skin color. AI systems can sometimes exploit shortcuts without capturing the deeper intent of their designers or the domain’s full complexity. For instance, a 2017 image recognition system “cheated” by identifying a copyright tag linked to horse images instead of learning to recognize images of horses (Sample, 2017).

 

2.    Context Loss and Bias

Lindsay et al. (2025) note that the convenience of automation carries the risk of neglecting the distinct needs of minority or atypical learners because they are more difficult to standardize and address. For example, automated essay scoring (AES) systems often rely on surface features like essay length or keywords, making them insensitive to nuance, creativity, and genuine content understanding. In experiments with several chatbots, Taylor (2024) found that AI-generated feedback tends to be generic, offering variations of the same feedback to multiple students. Algorithmic bias is also a concern. Models trained on unbalanced data can amplify cultural or linguistic disparities, potentially disadvantaging Black, Indigenous, and People of Color (BIPOC) or non‑native English speakers unless bias mitigation strategies are in place.
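To make the surface-feature concern above concrete, here is a minimal, hypothetical sketch in Python. It does not represent any particular AES product; it simply shows how a scorer that counts only words and keywords can reward padded, meaningless text over a short, accurate answer.

```python
# Hypothetical surface-feature scorer (not any real AES system): it rewards only
# length and keyword frequency, so meaningless padding earns the top score.
TARGET_KEYWORDS = {"photosynthesis", "chlorophyll", "sunlight", "glucose"}

def surface_feature_score(essay: str) -> float:
    """Score an essay from 0 to 10 using only word count and keyword hits."""
    words = essay.lower().split()
    length_points = min(len(words) / 50, 5)    # up to 5 points for sheer length
    keyword_points = min(sum(w.strip(".,") in TARGET_KEYWORDS for w in words), 5)
    return round(length_points + keyword_points, 1)

thoughtful = "Photosynthesis converts sunlight into glucose inside chlorophyll-rich cells."
padded = "photosynthesis chlorophyll sunlight glucose " * 60   # meaningless repetition

print(surface_feature_score(thoughtful))  # modest score despite accurate content
print(surface_feature_score(padded))      # top score despite saying nothing
```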

 

3.    Over-reliance and Reduced Human Interaction

Evidence suggests that when students depend too heavily on AI-generated feedback, they lose opportunities for critical reflection and dialogue, both of which are key foundations for higher-order thinking and deep learning. A recent comparative study found that students tend to mistrust AI feedback when it is not combined with human guidance, while academic staff were more open, especially when AI suggestions augmented rather than replaced instructor feedback (Henderson et al., 2025). Moreover, educators’ reflections indicate that adopting AI for meaningful feedback may increase instructor workload and complexity compared to traditional teaching methods, especially when contextual interpretation is needed (Taylor, 2024).

 

4.    Ethical and Pedagogical Considerations

Generative AI tools raise important ethical questions, notably about participation, impact, fairness, and how systems evolve over time. Unless systems are carefully designed to be inclusive, AI-generated feedback may marginalize minority learners with unique needs (Lindsay et al., 2025). The National Council on Measurement in Education’s AIME group has similarly stressed validity, equity, and transparency as pillars for responsible AI in educational measurement (Bulut et al., 2024). With thoughtful implementation, ethical frameworks, educator training, and human oversight, AI can enhance education without sacrificing critical thinking or integrity.

 

Best Practices for Implementation

  • Keep humans in the loop. Use AI as a supplement, not a replacement, for instructor-led feedback and assessment.
  • Pilot first. Collect user feedback on pilot deployments before full-scale adoption to ensure transparency, acceptance, and reliability.
  • Disclose AI use. State clearly when AI tools produce summaries or initial feedback, including platform and prompt details when appropriate.
  • Educate users. Teach students to interpret AI output critically and support educators in leveraging feedback meaningfully.
  • Audit for bias and fairness. Apply algorithmic audits and explainable AI techniques to evaluate model performance across diverse groups (a minimal sketch follows this list).
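
As a starting point for the audit practice above, here is a minimal sketch in Python of a group-wise check. The group labels, outcomes, and predictions are illustrative placeholders; a real audit would use your own assessment data and additional fairness metrics.

```python
# Minimal group-wise audit sketch: compare simple metrics across learner groups.
from collections import defaultdict

records = [
    # (group, true_outcome, model_prediction); placeholder data for illustration
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 1, 1),
]

by_group = defaultdict(list)
for group, truth, pred in records:
    by_group[group].append((truth, pred))

for group, pairs in by_group.items():
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    positive_rate = sum(p for _, p in pairs) / len(pairs)
    print(f"{group}: accuracy={accuracy:.2f}, positive_rate={positive_rate:.2f}")

# Large gaps in accuracy or positive rate across groups signal the need to revisit
# the training data and model before relying on automated feedback or scores.
```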

 

References

Bulut, O., Beiting-Parrish, M., Casablanca, J. M., Slater, S. C., Jiao, H., Song, D. … Morilova, P. (2024). The rise of artificial intelligence in educational measurement: Opportunities and ethical challenges. Journal of Educational Measurement and Evaluation, 5(3). https://doi.org/10.59863/miql7785

Henderson, M., Bearman, M., Chung, J., Fawns, T., Buckingham Shum, S., Matthews, K. E., & de Mello Heredia, J. (2025). Comparing Generative AI and teacher feedback: Student perceptions of usefulness and trustworthiness. Assessment & Evaluation in Higher Education, 1–16. https://doi.org/10.1080/02602938.2025.2502582

Hill, L. H. (Ed.). (2020). Assessment, evaluation, and accountability in adult education. Stylus Publishing.

Minn, S. (2022). AI-assisted knowledge assessment techniques for adaptive learning environments. Computers and Education: Artificial Intelligence, 3, 100050. https://doi.org/10.1016/j.caeai.2022.100050

Sample, I. (2017, November 5). Computer says no: Why making AIs fair, accountable, and transparent is crucial. The Guardian. https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial

Taylor, P. (2024, September 6). The imperfect tutor: Grading, feedback and AI. Inside Higher Ed. https://www.insidehighered.com/opinion/career-advice/teaching/2024/09/06/challenges-using-ai-give-feedback-and-grade-students

U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. Washington, DC.

Zapata, G. C., Cope, B., Kalantzis, M., Tzirides, A. O. (Olnancy), Saini, A. K., Searsmith, D., … Abrantes da Silva, R. (2025). AI and peer reviews in higher education: Students’ multimodal views on benefits, differences and limitations. Technology, Pedagogy and Education, 1–19. https://doi.org/10.1080/1475939X.2025.2480807

Zhang, K. (2025). Enhancing critical writing through AI feedback: A randomized control study. Behavioral Sciences, 15(5), 600. https://doi.org/10.3390/bs15050600


 

Thursday, July 17, 2025

How AI Is Shaping the Future of Work and Lifelong Learning


 

By Simone C. O. Conceição 

 

Artificial intelligence (AI) is no longer a futuristic concept—it is a present-day force driving change across industries, reshaping job roles, and redefining what it means to learn throughout life. For adult learners, educators, and workforce development professionals, understanding how AI is influencing work and lifelong learning is essential for staying current, competitive, and empowered.


This post examines how AI is transforming the workforce and learning systems, identifies key challenges, and discusses strategies for adult educators, trainers, and program designers to prepare learners for success in this evolving landscape.

 

The Impact of AI on the Workforce

AI is automating routine tasks, augmenting human decision-making, and generating new types of work across sectors. From healthcare and manufacturing to finance and education, AI technologies are streamlining operations and creating new efficiencies. At the same time, the types of jobs available, and the skills required to perform them, are undergoing rapid change.

 

The World Economic Forum (2023) estimates that by 2027, AI and automation will have displaced 85 million jobs globally, while also creating 97 million new roles that require different competencies, especially in analytical thinking, creativity, and digital literacy. Many of these new roles will require continuous skill upgrading, a hallmark of lifelong learning in the modern economy.

 

These projections underscore the need for reskilling and ongoing professional development across all sectors, making adaptability, digital fluency, and lifelong learning competencies not only desirable but necessary. Jobs that involve predictable, repetitive tasks are most at risk of automation, while roles requiring human judgment, emotional intelligence, and adaptability are likely to expand. As such, adult learners must not only upgrade their technical knowledge but also develop soft skills that machines cannot replicate.

 

Brynjolfsson and McAfee (2014) argue that while technology increases productivity and creates new opportunities, it also widens skill gaps and can exacerbate socioeconomic inequality if not accompanied by inclusive reskilling efforts. For this reason, integrating AI awareness into workforce development is essential—not just to prepare individuals for new roles, but to help them understand the larger forces shaping labor markets.

 

AI and Lifelong Learning

Lifelong learning, once a theoretical ideal, has become a practical necessity. AI is reshaping how learning happens in several ways:

  • Personalized learning pathways: AI-powered platforms can tailor content to learners' needs, enabling them to progress at their own pace.
  • Just-in-time training: AI systems can deliver microlearning modules or refresher content in real time based on job performance data.
  • Predictive analytics: Institutions and employers use AI to identify learning gaps and tailor programs to evolving industry demands.
  • Credentialing and upskilling: AI is facilitating the rise of short-term, skills-based credentials that align more closely with labor market trends.

For adult learners, especially those navigating career transitions or returning to education, these innovations offer flexible, relevant, and responsive options for growth.

 

Challenges and Considerations

Despite its potential, the integration of AI into work and learning presents serious challenges:

  • Equity and access: Not all learners have equal access to technology or support systems, which can deepen existing educational and economic divides (Robinson et al., 2015).
  • Algorithmic bias: AI systems trained on biased data may perpetuate existing inequalities, leading to unfair outcomes in hiring, promotion, admissions, and learning recommendations (O’Neil, 2017).
  • Digital literacy gaps: Many adult learners lack the foundational digital and data literacy skills necessary to engage with AI-enhanced systems.

 

Educators and policymakers must address these challenges to ensure that the benefits of AI are distributed in an equitable and ethical manner. These concerns underscore the need for intentional design of inclusive learning environments that support diverse learners and cultivate a critical awareness of how technology impacts educational and economic opportunities.

 

Preparing for an AI-Enhanced Future

To thrive in this new landscape, adult learners must cultivate AI literacy—the ability to understand, interact with, and evaluate AI technologies. Educators, trainers, and program designers play a key role in equipping adults with the mindset and skills to thrive in an AI-enhanced society. Effective strategies include:

  • Integrating discussions of AI and automation into workforce readiness programs
  • Promoting project-based and experiential learning that engages learners with real-world AI tools
  • Encouraging critical reflection on the social and ethical dimensions of AI
  • Creating accessible, flexible learning pathways that account for learners' varying levels of tech proficiency

 

AI is not a replacement for human talent—it is a tool that can expand opportunities when used thoughtfully and inclusively. As noted by Schleicher (2018) of the OECD, education systems must shift from preparing learners for specific jobs to equipping them with lifelong competencies, including learning how to learn, adapting to change, and making informed choices in complex environments.

 

Join the Conversation

The AI Literacy Forum at the Adult Learning Exchange Virtual Community provides a platform for educators, practitioners, and learners to explore how AI is transforming work and lifelong learning. Moderated by Dr. Simone Conceição and Dr. Lilian Hill, the forum fosters critical conversations, resource sharing, and professional collaboration.

 

We invite you to join the conversation and help shape a future where AI enhances—not replaces—human potential in work and learning.

 

References

Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.

O’Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.

Robinson, L., Cotten, S. R., Ono, H., Quan-Haase, A., Mesch, G., Chen, W., ... & Stern, M. J. (2015). Digital inequalities and why they matter. Information, Communication & Society, 18(5), 569–582.

Schleicher, A. (2018). The future of education and skills: Education 2030. The future we want. OECD Education Directorate.

World Economic Forum. (2023). The Future of Jobs Report 2023. https://www.weforum.org/publications/the-future-of-jobs-report-2023/


 

 

Thursday, June 19, 2025

Demystifying AI: A Beginner’s Guide for Educators and Learners

 


 

By Simone C. O. Conceição

 

Artificial intelligence (AI) is increasingly shaping how we live, work, and learn. Yet for many adult educators and learners, AI remains an abstract or intimidating concept—often viewed as complex, technical, or only relevant to data scientists and tech professionals. In reality, AI is already embedded in the tools and platforms we use every day, and understanding its fundamental principles is now crucial for effective digital participation.

 

This post offers an accessible introduction to AI, examines its relevance to adult education, and outlines key steps for developing AI literacy. Readers are also encouraged to continue the conversation in the AI Literacy Forum, moderated by Dr. Simone Conceição and Dr. Lilian Hill.

 

What Is Artificial Intelligence?

Artificial intelligence refers to computer systems that perform tasks typically requiring human intelligence, such as recognizing speech, analyzing data, or making decisions. A significant branch of AI is machine learning, where systems improve their performance by learning from data over time.

 

One recent development in this space is generative AI, which can produce original content such as text, images, or audio. Tools like ChatGPT, DALL·E, and others are designed to respond to user prompts with information, summaries, visuals, and more.

 

Why AI Literacy Matters in Adult Education

For adult learners and educators alike, AI literacy is becoming as fundamental as traditional digital literacy. As Wolff et al. (2016) emphasize, literacy in a data-driven society requires not only technical proficiency but also critical awareness of how technologies shape access to knowledge, decision-making, and power.

 

Long and Magerko (2020) further define AI literacy as a multidimensional framework involving conceptual understanding, applied skills, and ethical reflection. In educational settings, this means helping learners not just use AI tools but understand how they function, question how they are built, and consider their broader social impacts.

In the context of adult education, AI literacy can help:

  • Empower learners to use AI tools for writing, research, and communication
  • Enable educators to adopt AI for personalized instruction, feedback, and course design
  • Support workforce readiness as AI becomes embedded across industries
  • Foster ethical reflection on privacy, data usage, and algorithmic bias

Rather than replacing human educators, AI can serve as a tool to augment teaching and support differentiated instruction.

 

Key Concepts and Terms

Understanding the following terms provides a foundation for AI literacy:

  • Artificial Intelligence (AI): The ability of machines to perform tasks that typically require human intelligence
  • Machine Learning (ML): A process where machines improve performance through data analysis
  • Generative AI: AI that creates new content, such as writing, images, or audio
  • Algorithm: A set of rules or calculations used by AI to make decisions
  • Bias in AI: Systematic errors in output due to biased data or design flaws

Critically engaging with these terms allows adult learners to move from passive users of AI to informed participants in a data-driven society.
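
To make the term “machine learning” concrete, here is a minimal, illustrative sketch in Python using the widely available scikit-learn library. The numbers are toy values invented for the example, not a real dataset; the point is simply that the model learns a pattern from labeled examples rather than being given an explicit rule.

```python
# Minimal machine-learning sketch: the model infers a rule from labeled examples.
from sklearn.tree import DecisionTreeClassifier

# Toy examples: [hours_studied, prior_quiz_score] -> passed the course (1) or not (0)
X = [[1, 40], [2, 55], [8, 80], [7, 90], [3, 50], [9, 85]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)                   # the "learning from data" step
print(model.predict([[6, 75]]))   # apply the learned pattern to a new learner
```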

 

Steps Toward Building AI Literacy

Becoming AI-literate doesn't mean becoming an AI expert. It means developing the ability to understand, question, and use AI tools thoughtfully. Here are a few ways to start:

  • Explore AI in action: Try tools like ChatGPT or Microsoft Copilot in a learning or teaching activity
  • Encourage discussion: Create space in classrooms or programs for critical conversations about ethics and AI
  • Integrate AI literacy: Include AI-related concepts in digital literacy, workforce development, and lifelong learning curricula
  • Engage in community learning: Participate in spaces like the AI Literacy Forum to exchange ideas and stay informed

 

Connect with the Community

The Adult Learning Exchange Virtual Community offers a collaborative space for exploring these topics in greater depth. In the AI Literacy Forum, moderated by Drs. Simone Conceição and Lilian Hill, professionals from diverse sectors discuss how AI is influencing adult learning, share practical strategies, and examine critical concerns such as equity, bias, and data ethics.

 

We invite you to join the conversation, share your insights, and help shape the understanding and application of AI literacy in adult education.

 

References

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3313831.3376727

Wolff, A., Gooch, D., Montaner, J. J. C., Rashid, U., & Kortuem, G. (2016). Creating an understanding of data literacy for a data-driven society. The Journal of Community Informatics, 12(3). https://doi.org/10.15353/joci.v12i3.3275

 

Thursday, June 5, 2025

AI Literacy: What It Is and How It Affects Adult Education

 

Image Credit: Ali Pizani at Pexels


By Lilian H. Hill

AI literacy refers to the set of knowledge, skills, and ethical awareness necessary to understand, evaluate, and interact with AI systems in informed and socially responsible ways. As artificial intelligence (AI) technologies are integrated into nearly every aspect of life, understanding how these systems function is essential for individuals and societies alike. Long and Magerko (2020) defined AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool online, at home, and in the workplace” (p. 2). Laupichler et al. (2023) explain that AI literacy refers to the skills and understanding of AI that adults, especially non-experts with no computer science background, should have. Based on an exploratory review of the literature, Ng et al. (2021) identify four key aspects of AI literacy:

1.    know and understand,

2.    use and apply,

3.    evaluate and create, and

4.    understand ethical issues 

 



At its core, AI literacy involves both conceptual and ethical dimensions. On the conceptual side, it requires a foundational understanding of how AI works. This includes familiarity with:

•      Algorithms, the sets of rules AI systems use to solve problems,

•      Machine learning, which enables AI systems to learn from data and improve over time, and

•      Neural networks, which mimic the structure of the human brain and are designed to recognize patterns in data.

 

It also includes an understanding of automation and how AI systems can replace or augment human decision-making. These concepts empower individuals to engage with AI technologies more confidently and to evaluate their strengths and limitations.

 

AI literacy extends well beyond technical comprehension. It involves the ability to critically evaluate AI systems in terms of accuracy, transparency, and fairness (Long & Magerko, 2020). AI systems are often described as “black boxes,” meaning that their internal workings are obscure, even to their developers. This makes it difficult for users to understand how decisions are made or to contest biased outcomes. For example, when AI is used in hiring or credit scoring, it may reflect or even amplify existing societal biases, particularly if it is trained on historical data that already includes discrimination. Individuals with AI literacy are equipped to ask essential questions: Who designed this system? What data was it trained on? Who benefits, and who might be harmed?

 

Data rights are a critical concern in the context of AI training, as massive datasets containing personal and publicly available information are needed to develop effective machine learning models. When AI systems are trained on data that includes sensitive or identifiable information, such as social media posts, biometric data, or online behavior, there is a risk of infringing on individuals' rights to privacy, consent, and data ownership. Many individuals are unaware that their digital interactions and even records may be collected and used for AI development without their explicit permission, raising serious ethical and legal concerns (Crawford, 2021). Issues of data provenance, consent, and transparency become especially pressing when such data are used in systems that influence decisions related to hiring, law enforcement, healthcare, or education. Ensuring that individuals retain control over how their data are used requires the enforcement of robust data protection laws, implementation of informed consent mechanisms, and use of privacy-preserving techniques like data anonymization and minimization (Veale & Binns, 2017). As the capabilities of AI systems continue to expand, prioritizing data rights is essential for protecting individual autonomy and fostering public trust in AI technologies (Solove, 2025).

Equally important is the ethical and social dimension of AI literacy (Crawford, 2021). AI is not a neutral technology. It is shaped by the values, assumptions, and power structures of those who build and deploy it. Ethical AI literacy encompasses awareness of how AI can perpetuate systemic inequalities, its impact on privacy and surveillance, and its contribution to labor displacement or environmental degradation. For instance, AI-driven surveillance systems have been disproportionately used against marginalized communities, raising concerns about civil liberties. In addition, the environmental impact of training large AI models, including the carbon emissions from running massive data centers, is increasingly recognized as a significant ethical concern.

Civic and societal engagement are also critical components of AI literacy. Across disciplines and sectors, there is a growing recognition of the need for public involvement in decisions surrounding the use of AI. Engaging the public in governmental decision-making is essential for supporting democratic processes and reducing the potential harms associated with AI. However, the opaque nature of AI systems, their rapid evolution, and the substantial resources they demand can hinder meaningful civic participation (Sieber et al., 2024). Informed citizens are better equipped to participate in democratic processes related to AI, such as public consultations, advocating for equitable AI policy, and demanding algorithmic accountability. As AI becomes central to public decision-making, from predictive policing to resource allocation, AI literacy allows people to challenge unjust uses and propose alternatives that are more transparent and inclusive.

The importance of AI literacy cannot be overstated. AI literacy enables people not only to use AI tools effectively but also to critically assess their impact and participate in shaping their development. It promotes individual empowerment by helping people make informed decisions about their digital lives, such as protecting their data, choosing platforms that respect privacy, and recognizing manipulative algorithms. It also contributes to social equity by ensuring that marginalized groups are not left behind in the algorithmic age. Furthermore, AI literacy prepares workers for the changing demands of the labor market and supports critical thinking in the face of misinformation and automated influence in democratic systems.

AI Literacy and Adult Education

AI literacy is playing a growing role in shaping the goals and methods of adult education by equipping learners with the critical understanding needed to navigate, evaluate, and utilize AI in both personal and professional contexts. As AI becomes increasingly integrated into workplaces, civic life, and everyday decision-making, adult learners must develop a foundational understanding of how AI systems operate, their capabilities and limitations, and the ethical implications of their use. Adult education programs that integrate AI literacy foster digital agency, enabling learners to make informed choices about their data, interact responsibly with AI technologies, and participate in public discourse about the societal impacts of AI (Long & Magerko, 2020).

AI literacy in adult education promotes lifelong learning and workforce adaptability. According to the World Economic Forum (2025), workers can expect that approximately 39% of their current skills will either be significantly transformed or rendered obsolete. Leading the demand for new competencies are skills in AI and big data, followed closely by expertise in networks, cybersecurity, and overall technology literacy. Alongside these technical proficiencies, there will be a growing emphasis on human-centric capabilities such as creative thinking, resilience, adaptability, curiosity, and a commitment to lifelong learning, all of which are anticipated to become increasingly vital in the evolving workforce landscape.

Storey and Wagner (2024) comment that AI has transformed the role of adult educators by evolving the learning environment into an open, intelligent system that adapts to learners' needs. They further state that this shift presents ongoing challenges, including ethical concerns regarding data privacy, intellectual property, cybersecurity, and academic integrity, all of which must be continually addressed and regulated in tandem with AI’s rapid advancement. To ensure meaningful and relevant learning experiences, adult educators must adopt research-based approaches to curriculum design that incorporate AI literacy and competencies. The integration of AI in adult education prompts educators to reconsider and redefine their roles, pushing them to enhance their andragogical strategies, analytical thinking, and digital literacy.

Integrating AI literacy into adult learning environments can help reduce digital inequality by ensuring that all learners, regardless of background, have access to knowledge that is increasingly essential in a digitally mediated society (UNESCO, 2021). This approach promotes equitable participation in the evolving digital economy and enhances democratic engagement by fostering informed citizenship in an era of algorithmic influence.

Conclusion

To cultivate AI literacy, (a) educational institutions must integrate it into curricula, (b) governments and organizations should promote public awareness, and (c) workplaces should provide training that addresses both the technical and ethical aspects of AI. Civic organizations can also play a key role by making AI literacy accessible to underserved communities. As AI continues to shape the future, AI literacy is no longer optional. It is a fundamental skill for navigating, questioning, and influencing the increasingly automated world.

 

At the Adult Learning Exchange Virtual Community, we invite you to share your experiences, tools, and questions in the AI Literacy Forum, moderated by Drs. Simone Conceição and Lilian Hill. Together, we can explore how to harness AI for more inclusive, effective, and empowering adult learning. 

 

References

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3313831.3376727

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing.

Sieber, R., Brandusescu, A., Sangiambut, S., & Adu-Daako, A. (2024). What is civic participation in artificial intelligence? Environment and Planning B: Urban Analytics and City Science, 0(0). https://doi.org/10.1177/23998083241296200

Solove, D. J. (2025). On privacy and technology. Oxford University Press.

Storey, V. A., & Wagner, A. (2024). Integrating artificial intelligence (AI) into adult education: Opportunities, challenges, and future directions. International Journal of Adult Education and Technology, 15(1), 1–15. https://doi.org/10.4018/IJAET.345921

UNESCO. (2021). AI and education: Guidance for policy-makers. https://unesdoc.unesco.org/ark:/48223/pf0000376709

World Economic Forum. (2025). The Future of Jobs Report 2025. https://www.weforum.org/publications/the-future-of-jobs-report-2025/digest/