Sector: Education

Hard Questions: Education

  • Call for Book Chapters: Our AI Problems (Edited Volume)

    We believe that there are no easy answers when it comes to artificial intelligence and society. Across jurisdictions and decision-making bodies, those who develop or enforce regulations are confronted with difficult questions. These challenges arise for many reasons: the issues are often embedded in complex sociotechnical systems, lack straightforward solutions, or involve tensions between competing values and needs.

    The editors hold that AI can be of great service to humanity. At the same time, current regulatory frameworks lag far behind what is needed to ensure just, safe, and equitable access and outcomes.

    Policymakers and subject-matter specialists are increasingly converging on a shared set of especially challenging issues, and society at large is beginning to join these conversations. Accordingly, the proposed volume is envisioned as addressing the following areas: Economics and Power; Democracy and Trust; Risks Large and Small; Building Bridges and Inclusion; Media and Art; Environment and Health; Justice, Security, and Defense.

    If you are interested in contributing, we would be delighted to hear from you. If you know colleagues or collaborators who might wish to participate, please feel free to share this call with them as well.

    Deadline for chapter abstracts (250–300 words): 15 January 2026
    Deadline for chapter draft submission (8,000–10,000 words; US English; APA style): 31 March 2026
    Deadline for final revisions: 15 May 2026

    Edited by Tammy Mackenzie, Ashley Elizabeth Muller, and Branislav Radeljić

    For more information about the editors, please see: Fellows
    Submissions and questions: Contact Branislav Radeljić, Ph.D., Director of Research.

  • Whose Identity Counts? / Keynote

    Whose Identity Counts? explores how AI shapes whose voices are heard and whose are overlooked. Drawing on her research at the University of Cambridge, Hannah highlights the role of language and culture in building more inclusive technologies.

    See the presentation here.

  • WiCyS Vulnerability Disclosure Program

    We are proud to see that our Fellow, cybersecurity specialist Temitope Banjo, CISM, will be joining Women in CyberSecurity (WiCyS)’s Vulnerability Disclosure Program.

  • Canary in the Mine: An LLM Augmented Survey of Disciplinary Complaints to the Ordre des ingénieurs du Québec (OIQ) (Peer Reviewed)

    This study investigates disciplinary incidents involving engineers in Quebec, shedding light on critical gaps in engineering education. Through a comprehensive review of the Ordre des ingénieurs du Québec (OIQ)’s disciplinary register for 2010 to 2024, researchers in engineering education and in human resources management for technological development conducted a thematic analysis of reported incidents to identify patterns, trends, and areas for improvement. The analysis aims to uncover the most common types of disciplinary incidents, their underlying causes, and the implications for how engineering education addresses (or fails to address) these issues. Our findings identify recurring themes, analyze root causes, and offer recommendations for engineering educators and students to mitigate similar incidents. This research can inform curriculum development, professional development, and performance evaluation, ultimately fostering a culture of professionalism and ethical responsibility in engineering. By providing empirical evidence of disciplinary incidents and their causes, this study contributes to evidence-based practices for engineering education and professional development, enhancing the engineering education community’s understanding of professionalism and ethics.
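
    As an illustrative sketch only (the study's actual pipeline is not reproduced here), LLM-augmented thematic coding of register entries could look like the following Python fragment. The theme list, prompt wording, and call_llm wrapper are hypothetical placeholders, not the taxonomy or model the researchers used.

        # Hypothetical sketch of LLM-assisted thematic coding; not the
        # authors' pipeline. Themes and prompt wording are placeholders.
        THEMES = ["negligence", "conflict of interest", "misrepresentation", "other"]

        PROMPT = ("Classify this disciplinary complaint into one of these themes: "
                  + ", ".join(THEMES) + ". Reply with the theme only.\n\n")

        def call_llm(prompt: str) -> str:
            # Placeholder: substitute a real LLM API call here.
            return "other"

        def code_register(entries: list[str]) -> dict[str, str]:
            # Suggest a theme for each register entry; a human analyst then
            # reviews and reconciles the machine-suggested codes.
            return {e: call_llm(PROMPT + e).strip().lower() for e in entries}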

    More Information

  • Whole-Person Education for AI Engineers: Presented to CEEA (Peer Reviewed)

    This autoethnographic study explores the need for interdisciplinary education spanning both technical and philosophical skills. It leverages whole-person education as a theoretical approach for AI engineering education, addressing the limitations of current paradigms that prioritize technical expertise over ethical and societal considerations. Drawing on a collaborative autoethnography with fourteen diverse stakeholders, the study identifies key motivations driving the call for change, including the need for global perspectives, bridging the gap between academia and industry, integrating ethics and societal impact, and fostering interdisciplinary collaboration. The findings challenge the myths of technological neutrality and technosaviourism, advocating for a future where AI engineers are equipped not only with technical skills but also with the ethical awareness, social responsibility, and interdisciplinary understanding necessary to navigate the complex challenges of AI development. The study provides valuable insights and recommendations for transforming AI engineering education to ensure the responsible development of AI technologies.

    More Information

  • WIP: Gen AI in Engineering Education and the Da Vinci Cube (Peer Reviewed)

    As generative AI (GenAI) tools rapidly transform the engineering landscape, a critical question emerges: Are current educational innovations adequately preparing engineers for the socio-technical challenges of the future? This work-in-progress paper presents two key contributions. First, we build on prior work presenting a systematic review of over 160 scholarly articles on GenAI implementations in engineering education, revealing a predominant focus on enhancing technical proficiency while often neglecting essential socio-technical competencies. Second, we apply an emerging framework—the da Vinci Cube (dVC)—to support engineering educators in critically evaluating GenAI-driven innovations. The dVC framework extends traditional models of innovation by incorporating three dimensions: the pursuit of knowledge, consideration of use, and contemplation of sentiment. Our analysis suggests that while GenAI tools can improve problem-solving and technical efficiency, engineering education must also address ethical, human-centered, and societal impacts. The dVC framework provides a structured lens for assessing how GenAI tools are integrated into curricula and research, encouraging a more holistic, reflective approach. Ultimately, this paper aims to provoke dialogue on the future of engineering education and to challenge the prevailing assumption that technical skill development alone is sufficient in an AI-mediated world.
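
    Purely as an illustration of how the three dVC dimensions could be operationalized in a course rubric, consider the sketch below; the 0–5 scale, field names, and threshold are assumptions for illustration, not part of the paper's framework.

        # Illustrative rubric only; the scale and threshold are assumptions.
        from dataclasses import dataclass

        @dataclass
        class DVCScore:
            knowledge: int   # pursuit of knowledge
            use: int         # consideration of use
            sentiment: int   # contemplation of sentiment

            def is_holistic(self, threshold: int = 3) -> bool:
                # A GenAI innovation should be strong on every dimension,
                # not only technical proficiency.
                return min(self.knowledge, self.use, self.sentiment) >= threshold

        tool = DVCScore(knowledge=4, use=4, sentiment=2)
        print(tool.is_holistic())  # False: the sentiment dimension lags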

    More Information

  • Work in Progress: Exclusive Rhetoric in AI Conference Mission Statements

    AI conferences are pivotal spaces for knowledge exchange, collaboration, and shaping the trajectory of research, practice, and education. This paper presents preliminary findings from an analysis of AI conference mission statements, investigating how their stated goals affect who is welcomed into AI conversations. We find that many mission statements reflect assumptions that may unintentionally narrow participation and reinforce disciplinary and institutional silos. This limits engagement from a broad range of contributors—including educators, students, working professionals, and even younger users—who are essential to a thriving AI ecosystem. We advocate for clearer framing that supports democratizing and demystifying AI. By broadening participation and intentionally fostering cross-sector and interdisciplinary connections, AI conferences can help unlock more innovation.

    More Information

  • Presenting to the United Nations

    Our Director, Tammy Mackenzie, was honoured to present our recommendations to the United Nations Committee on the Formation of a Scientific Panel on AI. We recommended that the committee include civil society in its work and that meetings be held in countries where safe travel can be guaranteed for delegates. You can consult our recommendations here.

    See the PDF of the Consultation here: Google Drive

  • Pre-conference Workshop: Université de l’Alberta Annual Conference

    We were pleased to sponsor the 2025 Campus Saint-Jean Annual Conference of the University of Alberta. Two Aula Fellows attended and offered a workshop for faculty. The event was well attended, and we were happy to receive feedback that the workshop empowered faculty to carry conversations about the complexities of AI in society and at the university beyond the conference and into their fields of work. Some of the attendees have since joined us as Fellows.

    More Information

  • Les leviers du pouvoir dans l’IA

    The Aula Fellowship is proud to sponsor the annual faculty conference of the University of Alberta’s Campus Saint-Jean. Our Director, Tammy Mackenzie, is presenting there on the levers of power in AI in Albertan society and across the Francophonie, with the goal of putting decision-making power back into the hands of the people involved: all of us.

    To learn more, see: https://www.ualberta.ca/en/campus-saint-jean/congress/index.html

  • Potential and perils of large language models as judges of unstructured textual data

    Rapid advancements in large language models (LLMs) have unlocked remarkable capabilities in processing and summarizing unstructured text data. This has implications for the analysis of rich, open-ended datasets, such as survey responses, where LLMs hold the promise of efficiently distilling key themes and sentiments. However, as organizations increasingly turn to these powerful AI systems to make sense of textual feedback, a critical question arises: can we trust LLMs to accurately represent the perspectives contained within these text-based datasets? While LLMs excel at generating human-like summaries, there is a risk that their outputs may inadvertently diverge from the true substance of the original responses. Discrepancies between the LLM-generated outputs and the actual themes present in the data could lead to flawed decision-making, with far-reaching consequences for organizations. This research investigates the effectiveness of LLM-as-judge models in evaluating the thematic alignment of summaries generated by other LLMs. We used an Anthropic Claude model to generate thematic summaries from open-ended survey responses, with Amazon’s Titan Express, Nova Pro, and Meta’s Llama serving as judges. This LLM-as-judge approach was compared to human evaluations using Cohen’s kappa, Spearman’s rho, and Krippendorff’s alpha, validating a scalable alternative to traditional human-centric evaluation methods. Our findings reveal that while LLM-as-judge models offer a scalable solution comparable to human raters, humans may still excel at detecting subtle, context-specific nuances. Our research contributes to the growing body of knowledge on AI-assisted text analysis, and we provide recommendations for future research, emphasizing the need for careful consideration when generalizing LLM-as-judge models across various contexts and use cases.
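
    As a minimal sketch of the agreement statistics named above (Cohen’s kappa, Spearman’s rho, and Krippendorff’s alpha), the following Python fragment compares human ratings with one LLM judge's ratings of the same items. The rating arrays and scale are illustrative placeholders, not data from the study.

        # Minimal sketch: comparing LLM-as-judge ratings to human ratings.
        # The rating arrays below are illustrative placeholders only.
        import numpy as np
        import krippendorff                      # pip install krippendorff
        from scipy.stats import spearmanr
        from sklearn.metrics import cohen_kappa_score

        human = [3, 2, 3, 1, 2, 3, 1, 2]         # human thematic-alignment ratings
        judge = [3, 2, 2, 1, 2, 3, 1, 3]         # one LLM judge, same items

        kappa = cohen_kappa_score(human, judge)  # chance-corrected agreement
        rho, _ = spearmanr(human, judge)         # rank correlation
        alpha = krippendorff.alpha(              # reliability across raters
            reliability_data=np.array([human, judge]),
            level_of_measurement="ordinal",
        )
        print(f"kappa={kappa:.2f}, rho={rho:.2f}, alpha={alpha:.2f}")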

    More Information

  • Evaluating Online AI Detection Tools: An Empirical Study Using Microsoft Copilot-Generated Content

    We examine eight freely available online AI detection tools using text samples produced by Microsoft Copilot, assessing their accuracy and consistency. We fed each tool a short sentence and a small paragraph and recorded its estimate. Our findings reveal significant inconsistencies and limitations in these tools, with many failing to accurately identify Copilot-authored text. Our results suggest that educators should not rely on these tools to check for AI use.

    More Information