Thursday, September 18, 2025

Exploring ChatGPT and Other Generative Tools in the Adult Classroom


By Lilian H. Hill

 

Generative AI (GenAI) tools like ChatGPT, DALL·E, and Copilot are quickly becoming part of the everyday digital landscape. Generative AI refers to systems that can produce new content (text, images, audio, or video) based on patterns learned from vast datasets. Tools like ChatGPT, Claude, Gemini, and DALL·E can generate human-like responses, summarize complex ideas, or create original examples in seconds. Using these tools, a teacher can quickly produce tailored practice materials, conversational prompts, or real-world scenarios aligned to learners’ needs. For adult educators, these technologies present both exciting opportunities and important questions about how they can, and should, be integrated into teaching and learning. Used thoughtfully, GenAI can become a powerful partner in creating richer, more personalized, and more engaging educational experiences.

 

Why GenAI is Applicable to Adult Education

Adult learners often bring a wealth of prior knowledge, diverse life experiences, and specific goals to the classroom. The pairings below link principles of andragogy to ways of incorporating GenAI tools in instruction (Adarkwah, 2024):

  • Personalized, self-directed learning: Adults typically bring diverse backgrounds, experiences, and learning goals. GenAI tools can tailor explanations, examples, and practice materials to individual needs, supporting self-paced and self-directed learning.
  • Immediate relevance and application: Adult learners often seek education that directly connects to their careers, personal growth, or problem-solving in daily life. GenAI can generate context-specific resources, simulations, or writing support aligned with real-world tasks.
  • Flexibility and accessibility: Many adults balance education with jobs, families, and other responsibilities. GenAI offers on-demand tutoring, feedback, and content generation, making learning more flexible and accessible.
  • Support for diverse skill levels: Adult classrooms can vary widely in terms of prior knowledge, literacy levels, or digital skills. GenAI adapts dynamically, providing scaffolded explanations for beginners and advanced insights for experienced learners.
  • Enhancement of critical thinking and creativity: Adults often bring rich experiences that allow them to critique and expand on generated outputs. GenAI serves as a partner in brainstorming, reflection, and creative problem-solving rather than just a source of answers.
  • Lifelong learning orientation: Adult education emphasizes continuous learning beyond formal degrees. GenAI supports this by offering lifelong, personalized, and low-cost opportunities for exploration and skill-building.

 

Practical Classroom Applications

The most effective use comes when instructors frame GenAI as a support tool, not a replacement—encouraging learners to use outputs as starting points for critical analysis, revision, and discussion. Here are six ways educators can integrate generative tools (Storey & Wagner, 2024):

 

1.    Personalized Learning Assistance: Because adult learners bring different skill levels and backgrounds to the classroom, GenAI can serve as an adaptive learning assistant. Learners can ask the tool to re-explain difficult concepts in simpler terms, provide step-by-step guidance, or create analogies that connect with their professional experiences. In addition, GenAI can generate study aids such as practice quizzes, flashcards, and summaries that align with class content, helping learners prepare more effectively.
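Instructors who make this kind of request often can script the prompt itself so it stays consistent across topics and learners. The sketch below is purely illustrative: the helper function, its name, and the prompt wording are assumptions, not part of any specific tool; the resulting text could be pasted into ChatGPT or sent through an API.

```python
def build_quiz_prompt(topic: str, level: str, n_questions: int = 5) -> str:
    """Assemble a reusable prompt asking a GenAI tool for practice questions
    tailored to a learner's level. All wording here is illustrative."""
    return (
        f"You are a tutor for adult learners. Write {n_questions} "
        f"multiple-choice questions on '{topic}' for a {level} learner. "
        "For each question, give four options, mark the correct answer, "
        "and add a one-sentence explanation tied to a workplace example."
    )

# Example: three beginner-level questions on a workplace literacy task.
prompt = build_quiz_prompt("reading a pay stub", "beginner", 3)
print(prompt)
```

Because the topic, level, and question count are parameters, the same template can produce differentiated materials for every learner in a mixed-skill classroom.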

 

2.    Writing and Communication Support: Adult learners can use GenAI as a tool for drafting and revising various forms of writing, from essays and reports to professional emails. For those learning English as an additional language, GenAI tools can provide grammar corrections, vocabulary suggestions, and conversational practice. Instructors can then guide learners in refining the AI-generated drafts, turning the process into a valuable exercise in editing and communication.

 

3.    Career and Professional Development: GenAI offers practical applications in career-focused education. Learners can use it to draft resumes, cover letters, or professional profiles, which they can then refine through peer review or instructor feedback. The technology can also simulate job interviews by posing realistic, industry-specific questions, giving learners the opportunity to rehearse their responses in a low-stakes environment before entering the real job market.

 

4.    Critical Thinking and Media Literacy: One powerful use of GenAI in adult education is cultivating critical thinking. Learners can be tasked with analyzing AI-generated content to identify potential bias, inaccuracies, or missing perspectives. They can also engage in fact-checking exercises, comparing the AI’s responses against credible sources. These activities not only strengthen critical evaluation skills but also build media and digital literacy, both of which are essential in today’s information-rich society.

 

5.    Creative Applications: Adult learners can use GenAI to brainstorm project ideas, develop proposals, or solve workplace-related problems in innovative ways. The tool can also support storytelling and reflective writing by generating prompts that help learners articulate personal narratives or professional case studies. In this way, AI fosters both creative expression and deeper engagement with course material.

 

6.    Accessibility and Inclusivity: GenAI can play a crucial role in making learning more accessible for adults with diverse needs. It can simplify complex texts into plain language for learners with lower literacy levels or reframe content in different formats, such as visual diagrams or role-play scenarios, to suit various learning styles. This flexibility helps ensure that all learners, regardless of background, can engage meaningfully with course materials.

 

Ethical and Pedagogical Considerations

While GenAI tools offer benefits for adult education, their use also raises important ethical and pedagogical concerns that educators must address thoughtfully (Reihanian et al., 2025).

 

A key issue is accuracy, as these tools are prone to generating responses that sound authoritative but contain factual errors or incomplete information. Such outputs, sometimes called AI hallucinations, can mislead students and educators alike. This makes it essential for both educators and learners to adopt verification practices, such as cross-checking AI outputs with credible sources.

 

Another concern is bias, since AI systems are trained on vast datasets that may carry historical or cultural stereotypes. If left unexamined, these biases can influence the output and potentially even reinforce inequities.

 

Equally important is transparency. Learners need to understand not only the capabilities of generative tools but also their limitations, including how they arrive at certain outputs and why their responses should be treated critically rather than accepted at face value.

 

Finally, assessment integrity presents a pedagogical challenge. Instructors must consider how to design assignments and evaluation strategies that encourage authentic learning while discouraging overreliance on AI-generated content. This may involve clarifying expectations around responsible use, integrating AI literacy into the curriculum, and developing assessments that prioritize process, reflection, and critical thinking alongside final products.

 

Collectively, these considerations highlight the importance of using GenAI in ways that enhance learning without compromising ethical standards or academic integrity.

 

Keeping Humans in the Loop

GenAI should not replace educators. Instead, it should enhance their role. Teachers remain essential for providing context, fostering critical thinking, and building the human connections that are at the heart of learning. By positioning AI as a co-pilot rather than an autopilot, educators can ensure that technology supports, rather than dictates, the learning process. As tools like ChatGPT continue to evolve, adult educators have an opportunity to shape how they are used in ways that promote equity, creativity, and lifelong learning. The key is to remain curious, informed, and willing to experiment—while keeping learners’ needs and goals at the center of the process.

 

References

Adarkwah, M. A. (2024). GenAI-infused adult learning in the digital era: A conceptual framework for higher education. Adult Learning, 36(3), 149–161. https://doi.org/10.1177/10451595241271161

Reihanian, I., Hou, Y., Chen, Y., & Zheng, Y. (2025). A review of generative AI in computer science education: Challenges and opportunities in accuracy, authenticity, and assessment. arXiv. https://arxiv.org/html/2507.11543v1

Storey, V., & Wagner, A. (2024). Integrating artificial intelligence (AI) into adult education. International Journal of Adult Education and Technology, 15(1). https://doi.org/10.4018/IJAET.345921

 

Thursday, September 4, 2025

Integrating AI into Online Course Design: Tools and Strategies


 

By Simone C. O. Conceição

 

As artificial intelligence (AI) continues to reshape higher education and adult learning, learning designers and educators are exploring new ways to integrate AI into online course design. Whether enhancing student engagement, improving learning outcomes, or streamlining administrative tasks, AI-powered tools can support more effective, efficient, and personalized online instruction.

 

This post examines practical tools and strategies for integrating AI into online course design, along with key considerations for ensuring an ethical, inclusive, and learner-centered implementation.

 

Why Integrate AI into Online Course Design?

AI offers a variety of benefits in the online learning environment, including:

  • Personalized learning experiences that adapt to individual student needs
  • Efficient content creation and curation using generative AI
  • Data-informed decision-making through learning analytics
  • Automated support via chatbots and tutoring systems
  • Enhanced accessibility through transcription, translation, and adaptive technologies

By integrating AI thoughtfully, educators can shift from static course materials to dynamic learning environments that respond to student progress and preferences.

 

AI Tools for Online Course Design

The following summary pairs categories of AI tools with examples and their functions in online course development and delivery:

  • Generative AI for Content Creation (e.g., ChatGPT, Jasper, Copilot): Create quiz questions, discussion prompts, and summaries; save instructor time.
  • Adaptive Learning Platforms (e.g., Knewton Alta, Smart Sparrow): Adjust content delivery based on learner performance; personalize learning paths.
  • Intelligent Tutoring Systems (ITS) (e.g., Carnegie Learning, ALEKS): Simulate tutoring; offer real-time feedback and scaffolding for mastery learning.
  • AI-Powered Analytics Tools (e.g., Analytics Canvas, Brightspace Insights): Provide predictive insights into student engagement, risk, and performance.
  • Supportive AI Assistants (e.g., Watson Assistant, Pounce): Answer FAQs, assist with navigation, and offer 24/7 learner support.
  • Accessibility and Language Tools (e.g., Otter.ai, Microsoft Immersive Reader, Google Translate): Enable transcription and translation; enhance access and support for multilingual learners.

 

Strategies for Effective Integration

To use AI tools responsibly and effectively in online course design, consider the following strategies:

  • Align AI tools with learning objectives: Ensure that the technology supports, rather than distracts from, the intended outcomes.
  • Maintain instructor presence: AI should augment, not replace, instructor interaction and feedback.
  • Support digital and AI literacy: Help learners understand the tools they’re using and how to use them critically and ethically.
  • Pilot tools before full-scale implementation: Test features and gather feedback to ensure usability and accessibility.
  • Ensure transparency: Let learners know when and how AI is being used, especially if their data is collected or used to inform decisions.

 

Ethical and Pedagogical Considerations

While AI can enrich online learning, it also raises concerns around:

  • Data privacy and consent
  • Algorithmic bias
  • Over-automation of instruction
  • Access disparities for underserved learners

As Holmes et al. (2019) note, integrating AI into education necessitates careful consideration of ethics, inclusion, and pedagogical intent. The goal should always be to enhance human learning, rather than replace the relational and contextual elements that define effective teaching.

 

Join the Discussion

At the AI Literacy Forum hosted by the Adult Learning Exchange Virtual Community, educators, designers, and adult learning professionals are exchanging ideas, tools, and practices for using AI in teaching and learning. Moderated by Dr. Simone Conceição and Dr. Lilian Hill, the forum is a space for thoughtful dialogue and community support.

 

We invite you to share your questions, strategies, and experiences as we explore how to design more responsive, inclusive, and AI-informed online learning environments.

 

References

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign.

 

 

Thursday, August 21, 2025

AI-Assisted Feedback and Assessment: Opportunities and Limitations


 

By Lilian H. Hill

 

Knowledge assessment determines how well students have learned and evaluates the effectiveness of teaching content and strategies for future improvement (Hill, 2020). Research has shown that incorporating knowledge assessments and effective feedback during instruction can boost both student motivation and overall learning effectiveness (Minn, 2022). AI innovations in education promise faster, scalable, and personalized guidance for learning. While AI-based automation can reduce the labor-intensive aspects of conducting learning assessments, its true value lies in enabling a deeper understanding of students and freeing up time to respond creatively to teachable moments. A key priority with AI is ensuring that humans remain actively involved and in control, with attention given to all those participating in the process—students, educators, and others who support learners (U.S. Department of Education, 2023). This blog post explores the opportunities and limitations of using AI for feedback and assessment, along with best practices for effective integration.

 

Opportunities

AI-driven personalized inputs are revolutionizing education by creating dynamic, tailored learning experiences that foster student engagement, improve learning outcomes, and equip individuals with the skills needed to thrive in a rapidly evolving world. AI recognizes patterns within data and automates decisions to create an adaptive learning environment, a technology-enhanced educational system that uses data and algorithms to personalize instruction in real time, based on each learner’s performance, needs, and preferences. Effective adaptive learning environments depend on three key adaptations: (a) delivering precise, timely, and meaningful feedback during problem-solving, (b) organizing learning content to match each student’s unique skill level and proficiency, and (c) enhancing formative assessment feedback loops.
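The core of adaptation (b), matching content to a learner's current proficiency, can be illustrated with a toy rule. The thresholds and level range below are illustrative assumptions, not any particular platform's algorithm: difficulty steps up after two consecutive correct answers and down after two consecutive misses.

```python
def next_difficulty(current: int, recent_correct: list[bool],
                    min_level: int = 1, max_level: int = 5) -> int:
    """Toy adaptive rule: raise difficulty after two consecutive correct
    answers, lower it after two consecutive misses, otherwise hold steady."""
    if len(recent_correct) >= 2 and all(recent_correct[-2:]):
        return min(current + 1, max_level)   # learner is ready for more
    if len(recent_correct) >= 2 and not any(recent_correct[-2:]):
        return max(current - 1, min_level)   # step back and rebuild
    return current                           # mixed results: stay put

# A learner at level 3 under three different recent-answer histories.
print(next_difficulty(3, [True, True]))    # 4
print(next_difficulty(3, [False, False]))  # 2
print(next_difficulty(3, [True, False]))   # 3
```

Real adaptive platforms use far richer learner models, but the feedback-loop shape is the same: performance data flows in, and the next piece of content is chosen in response.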

 

1.    Timely and Scalable Feedback

AI feedback leverages advancements in natural language processing to provide automated, personalized evaluations that can be scaled according to predefined criteria. AI systems can deliver instant feedback at scale, which is valuable in large classes or for repetitive tasks. According to a 2025 review of educational measurement technology, AI-powered scoring and personalized feedback enhance consistency and speed in assessment delivery. Drawing on extensive linguistic databases, these systems generate responses that mimic human engagement with student work. This technology has sparked considerable discussion in academic contexts, with the potential to transform teaching and learning practices (Zapata et al., 2025).

 

2.    Personalized Input and Adaptive Growth

Adaptive learning systems are essential for delivering personalized experiences in online instruction, particularly for courses with large enrollments, such as MOOCs, and in intelligent tutoring systems. For example, in a randomized controlled trial involving 259 undergraduates, researchers found that students receiving AI-generated feedback showed significant improvements across various writing dimensions compared to traditional instruction, with particularly strong effects on organization and content development (Zhang, 2025). The intervention also revealed that students valued usefulness over surface ease of use.

 

3.    Enhanced Formative Assessment Loops

Technological interventions can create more personalized, timely feedback loops that facilitate deeper engagement with learning. Formative assessment has long been a central application of educational technology, as feedback loops are essential for enhancing teaching and learning. AI may enable richer feedback loops by supporting formative assessment—when paired intentionally with human oversight—helping teachers adapt instruction based on student progress.

 

Limitations and Key Concerns

Creating machine learning models that deliver meaningful, personalized, and authentic feedback demands substantial involvement from human domain experts. Choices about whose expertise is included, how it is gathered, and when it is applied significantly influence the relevance and quality of the feedback produced. These models also require ongoing maintenance and refinement to align with changing contexts, evolving theories, and diverse student needs. Without continuous updates, feedback can quickly become outdated or misaligned with current learner requirements. Key limitations include (a) concerns about AI system accuracy, (b) loss of contextual understanding and embedded bias, (c) overreliance that diminishes human interaction, and (d) important ethical and pedagogical challenges.

 

1.    Accuracy

Researchers have recorded numerous cases of AI systems making harmful decisions due to coding errors or biased training data. Such failures have produced inaccurate teaching evaluations, caused people to lose jobs and professional licenses, and led to discrimination based on names, addresses, gender, and skin color. AI systems can sometimes exploit shortcuts without capturing the deeper intent of their designers or the domain’s full complexity. For instance, a 2017 image recognition system “cheated” by identifying a copyright tag linked to horse images instead of learning to recognize images of horses (Sample, 2017).

 

2.    Context Loss and Bias

Lindsay et al. (2025) note that the convenience of automation carries the risk of neglecting the distinct needs of minority or atypical learners because they are more difficult to standardize and address. For example, automated essay scoring (AES) systems often rely on surface features like essay length or keywords, making them insensitive to nuance, creativity, and accurate content understanding. In experiments with several chatbots, Taylor (2024) found that AI-generated feedback tends to be generic, providing variations of the same feedback for multiple students. Algorithmic bias is also a concern. Models trained on unbalanced data can amplify cultural or linguistic disparities, potentially disadvantaging Black, Indigenous, and People of Color (BIPOC) or non‑native English speakers unless bias mitigation strategies are in place.
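The surface-feature weakness is easy to demonstrate with a toy scorer (entirely hypothetical, written for illustration only): because it rewards only length and keyword counts, a coherent essay and a word-scrambled copy of it receive identical scores.

```python
def surface_score(essay: str, keywords: set[str]) -> float:
    """Toy automated essay scorer of the kind the critique describes:
    it rewards only length and keyword hits, not meaning."""
    words = essay.lower().split()
    length_score = min(len(words) / 100, 1.0)  # longer merely looks "better"
    keyword_score = sum(w in keywords for w in set(words)) / max(len(keywords), 1)
    return round(0.5 * length_score + 0.5 * keyword_score, 2)

kws = {"photosynthesis", "chlorophyll", "sunlight"}
coherent = "Photosynthesis uses sunlight and chlorophyll to make sugar " * 10
shuffled = "sugar chlorophyll make uses photosynthesis sunlight and to " * 10
# Same length, same keywords -> identical score, despite scrambled meaning.
print(surface_score(coherent, kws) == surface_score(shuffled, kws))  # True
```

Production AES systems are more sophisticated than this sketch, but the underlying concern stands: any metric built from surface features can be satisfied without genuine understanding.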

 

3.    Over-reliance and Reduced Human Interaction

Evidence suggests that when students depend too heavily on AI-generated feedback, their opportunities for critical reflection and dialogue, both key foundations of higher-order thinking and deep learning, diminish. A recent comparative study found that students tend to mistrust AI feedback when it is not combined with human guidance, while academic staff are more open, especially if AI suggestions augment rather than replace instructor feedback (Henderson et al., 2025). Moreover, educators’ reflections indicate that adopting AI for meaningful feedback may increase instructor workload and complexity compared to traditional teaching methods, especially when contextual interpretation is needed (Taylor, 2024).

 

4.    Ethical and Pedagogical Considerations

Generative AI tools raise essential ethical dimensions—notably involving participation, impact, fairness, and evolution over time. Unless systems are carefully designed to be inclusive, AI-generated feedback may marginalize minority learners with unique needs (Lindsay et al., 2025). The National Council on Educational Measurement’s AIME group has similarly stressed validity, equity, and transparency as pillars for responsible AI in educational measurement (Bulut et al., 2024). With thoughtful implementation, ethical frameworks, educator training, and human oversight, AI can enhance education without sacrificing critical thinking or integrity.

 

Best Practices for Implementation

  • Keep humans in the loop. Use AI as a supplement, not a replacement, for instructor-led feedback and assessment.
  • Pilot first. Collect user feedback on pilot deployments before full-scale adoption to ensure transparency, acceptance, and reliability.
  • Disclose AI use. State clearly when AI tools produce summaries or initial feedback, including platform and prompt details when appropriate.
  • Educate users. Teach students to interpret AI output critically and support educators in leveraging feedback meaningfully.
  • Audit for bias and fairness. Apply algorithmic audits and explainable AI techniques to evaluate model performance across diverse groups.
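A bias-and-fairness audit can start with something as simple as comparing mean AI-assigned scores across learner groups and flagging large gaps for human review. The sketch below uses synthetic field names and data, purely for illustration; a real audit would use the institution's own records and more rigorous fairness metrics.

```python
from collections import defaultdict

def mean_score_by_group(records: list[dict]) -> dict[str, float]:
    """Toy fairness check: mean AI-assigned score per learner group,
    so large gaps can be flagged for review. Field names are synthetic."""
    totals, counts = defaultdict(float), defaultdict(int)
    for r in records:
        totals[r["group"]] += r["score"]
        counts[r["group"]] += 1
    return {g: round(totals[g] / counts[g], 2) for g in totals}

# Synthetic audit data: AI feedback scores for two learner groups.
sample = [
    {"group": "native_english", "score": 0.82},
    {"group": "native_english", "score": 0.78},
    {"group": "multilingual", "score": 0.64},
    {"group": "multilingual", "score": 0.60},
]
print(mean_score_by_group(sample))  # {'native_english': 0.8, 'multilingual': 0.62}
```

A gap like the one above does not by itself prove bias, but it is exactly the kind of signal that should trigger a closer look at the model, the rubric, and the training data.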

 

References

Bulut, O., Beiting-Parrish, M., Casablanca, J. M., Slater, S. C., Jiao, H., Song, D., … Morilova, P. (2024). The rise of artificial intelligence in educational measurement: Opportunities and ethical challenges. Journal of Educational Measurement and Evaluation, 5(3). https://doi.org/10.59863/miql7785

Henderson, M., Bearman, M., Chung, J., Fawns, T., Buckingham Shum, S., Matthews, K. E., & de Mello Heredia, J. (2025). Comparing Generative AI and teacher feedback: Student perceptions of usefulness and trustworthiness. Assessment & Evaluation in Higher Education, 1–16. https://doi.org/10.1080/02602938.2025.2502582

Hill, L. H. (Ed.). (2020). Assessment, evaluation, and accountability in adult education. Stylus Publishing.

Minn, S. (2022). AI-assisted knowledge assessment techniques for adaptive learning environments. Computers and Education: Artificial Intelligence, 3, 100050. https://doi.org/10.1016/j.caeai.2022.100050

Sample, I. (2017, November 5). Computer says no: Why making AIs fair, accountable, and transparent is crucial. The Guardian. https://www.theguardian.com/science/2017/nov/05/computer-says-no-why-making-ais-fair-accountable-and-transparent-is-crucial

Taylor, P. (2024, September 6). The imperfect tutor: Grading, feedback and AI. Inside Higher Ed. https://www.insidehighered.com/opinion/career-advice/teaching/2024/09/06/challenges-using-ai-give-feedback-and-grade-students

U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. Washington, DC.

Zapata, G. C., Cope, B., Kalantzis, M., Tzirides, A. O. (Olnancy), Saini, A. K., Searsmith, D., … Abrantes da Silva, R. (2025). AI and peer reviews in higher education: Students’ multimodal views on benefits, differences and limitations. Technology, Pedagogy and Education, 1–19. https://doi.org/10.1080/1475939X.2025.2480807

Zhang, K. (2025). Enhancing critical writing through AI feedback: A randomized control study. Behavioral Sciences, 15(5), 600. https://doi.org/10.3390/bs15050600


 

Tuesday, August 12, 2025

Understanding Generative AI: Benefits, Risks, and Ethical Use

 


By Lilian H. Hill

 

Generative Artificial Intelligence (GenAI) refers to systems that can create new content such as text, images, music, or even video based on patterns learned from large datasets. Unlike traditional AI systems that classify or predict, generative models generate original content. These sophisticated tools are popular and widely available at a low cost. Tools like OpenAI’s ChatGPT, DALL·E, and Google’s Gemini are notable examples (Bommasani et al., 2021).

 

GenAI is rapidly transforming the way we work, create, and communicate. From producing human-like text and generating realistic images to assisting in software development and content creation, GenAI is no longer a futuristic concept; it’s a tool many of us are already using, knowingly or not. But as with any powerful technology, its potential comes with critical questions about benefits, risks, ethics, and responsible use.

 

Benefits of GenAI
GenAI offers a wide range of benefits across sectors by enhancing creativity, efficiency, and accessibility. Some key advantages include:

 

1.    Creativity and Content Generation. GenAI can produce text, images, music, code, and video, supporting creative professionals and everyday users. It enables rapid prototyping of ideas, assists in drafting content, and offers inspiration for writers, designers, educators, and artists.

 

2.    Efficiency and Automation. By automating repetitive or time-consuming tasks—such as summarizing documents, composing emails, or generating reports—GenAI saves time and increases productivity. In industries like marketing or journalism, it can streamline content creation workflows.

 

3.    Personalization. GenAI can tailor content to individual preferences or needs. For example, in education, it can create adaptive learning materials suited to different skill levels. In business, it can generate personalized marketing messages or customer support responses.

 

4.    Accessibility. GenAI helps break down barriers to access by generating content in different formats and languages. For instance, it can convert text to audio, simplify complex language, or create visual aids, making information more inclusive for people with diverse needs.

 

5.    Support for Learning and Skill Development. Tools powered by GenAI can act as tutors or writing assistants, offering feedback, explanations, or examples. This empowers learners to practice and improve their skills in real-time, whether they’re learning a new language, writing an essay, or studying a complex concept.

 

6.    Innovation in Research and Development. GenAI accelerates discovery by simulating ideas, generating hypotheses, or assisting with data interpretation. In fields like drug discovery or materials science, it can suggest novel compounds or design prototypes more quickly than traditional methods.

 

Risks and Challenges

Despite its promise, GenAI presents several risks:

 

1.    Spreading Misinformation. AI-generated content can be used to create convincing fake news, propaganda, deepfakes, or misleading scientific papers, which can undermine trust and amplify social harm (Zellers et al., 2019). Fleming (2023) noted that AI tools can generate distorted historical accounts, enabling malicious actors to flood the public sphere with misinformation and hateful content. The global reach of social media enables falsehoods and conspiracy theories to spread instantly across borders.

 

2.    Bias and Fairness. Generative models can replicate and amplify the biases found in the data they were trained on, including stereotypes based on race, gender, or disability (Bender et al., 2021). This can lead to discriminatory output or harmful content, even when unintended. With the rise of GenAI, concerns around data justice have grown, as these technologies rely on large datasets that may carry embedded biases. For example, a GenAI-driven predictive policing system that draws from historically biased crime data could disproportionately target communities of color, leading to over-policing and further marginalization.

 

3.    Intellectual Property and Plagiarism. GenAI tools can produce text, images, music, and other forms of content that closely resemble or even replicate existing works that are often shared without clear attribution. This raises complex questions about authorship, originality, and ownership in both academic and creative domains (Crawford, 2021). Users may unknowingly commit plagiarism or violate intellectual property laws. The rapid proliferation of AI-generated content is prompting urgent discussions about how to define and protect original work in the age of GenAI.

 

4.    Environmental Impacts. Artificial intelligence can be considered an extractive industry because of its significant environmental footprint. Training large AI models requires substantial computing power, resulting in high energy consumption, and data centers rely on extracting finite natural resources, such as lithium. In this way, AI parallels traditional extractive industries by drawing heavily on both human and natural resources, often without equitable returns or sustainability safeguards (Crawford, 2021).

 

Ethical Use and Best Practices

Ethical use of GenAI begins with transparency. Users should disclose when AI-generated content is used, especially in educational, professional, or public communication contexts. For researchers and educators, citing tools appropriately and understanding their limitations is crucial.

 

Human oversight is essential. While AI can support decisions, it should not replace human judgment in contexts like grading, hiring, or healthcare. Ensuring accountability for AI-assisted decisions is crucial for maintaining trust and upholding ethical integrity (Floridi & Cowls, 2019). Inclusive and responsible design of AI systems requires incorporating diverse data, testing for bias, minimizing environmental impacts, and involving stakeholders, which is key to building technology that serves all members of society fairly.

 

Conclusion

GenAI is a powerful tool with immense potential to enhance human creativity and productivity. But to realize its benefits responsibly, we must remain vigilant about its risks and committed to ethical practices. As users, educators, researchers, and citizens, our role is to use GenAI wisely.

 

References

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., ... & Liang, P. (2021). On the opportunities and risks of foundation models. Stanford University. https://arxiv.org/abs/2108.07258

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Fleming, M. (2023, June 13). Healing our troubled information ecosystem. Medium. https://melissa-fleming.medium.com/healing-our-troubled-information-ecosystem-cf2e9e8a4bed

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1

Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems, 32, 9051–9062.