Category: Society

  • Aula Convening Guideline 2025 Ed.

    These Aula Convening Guidelines are for people working on technology governance and AI in society: six guidelines for convening communities for legitimate collective decision-making on how AI is implemented in society.

    Since our founding in 2023, Aula Fellows have hosted and participated in hundreds of conversations on AI in more than 30 countries and regions. We have spoken with people who have a variety of needs, spanning Learning AI, Living with AI, Working with AI, and Shaping AI.

    We developed these guidelines over three project phases, drawing on the common elements that make for conversations in which communities make decisions about AI. Our goal is not a new type of consultation, but to ensure that community convenings are conducive to collective decision-making on AI.

    In 2026 we will be reaching out to partner organizations to continue to refine these guidelines and to bring them to more groups of people.

    The guidelines are complete and available now under a Creative Commons license as this V.01, 2025 Edition.

    Link to the PDF.

  • Call for Book Chapters: Our AI Problems (Edited Volume)

    We believe that there are no easy answers when it comes to artificial intelligence and society. Across jurisdictions and decision-making bodies, those who develop or enforce regulations are confronted with difficult questions. These challenges arise for many reasons: the issues are often embedded in complex sociotechnical systems, lack straightforward solutions, or involve tensions between competing values and needs.

    The editors hold that AI can be of great service to humanity. At the same time, current regulatory frameworks lag far behind what is needed to ensure just, safe, and equitable access and outcomes.

    Policymakers and subject-matter specialists are increasingly converging on a shared set of especially challenging issues, and society at large is learning to join these conversations. Accordingly, the proposed volume is envisioned as addressing the following areas: Economics and Power; Democracy and Trust; Risks Large and Small; Building Bridges and Inclusion; Media and Art; Environment and Health; Justice, Security, and Defense.

    If you are interested in contributing, we would be delighted to hear from you. If you know colleagues or collaborators who might wish to participate, please feel free to share this call with them as well.

    Deadline for chapter abstracts (250–300 words): 15 January 2026
    Deadline for chapter draft submission (8,000–10,000 words; US English; APA style): 31 March 2026
    Deadline for final revisions: 15 May 2026

    Edited by Tammy Mackenzie, Ashley Elizabeth Muller, and Branislav Radeljić

    For more info about the editors, please see: Fellows
    Submissions and questions: Contact Branislav Radeljić, Ph.D., Director of Research.

  • Book review of Human Power:
Seven Traits for the Politics of the AI Machine Age

    I am a practitioner in the field of AI policymaking, as a civil society advocate and a researcher. I was excited to read Ms. Gry Hasselbalch’s book because she has a strong reputation for telling people the truth and for not backing down from values-based work. I’ve had the opportunity to hear her present in the past.

    This was exactly the read I hoped for, and more. She describes our “human powers” like unpacking a really great care package, full of everything you love but forgot you were missing. And in detail: quotable, academic detail, heading off through history and into the conversations between people about how AI policy needs come to be enacted. I love it. It’s the next best thing to being in the room.

    The best part for me, as a social systems geek, is that she has been in this work: she ties each of our human powers to policy power as you read, so it builds you up. And she brings it all together in the final chapter, with direct conversations with the people making the decisions about the challenges they face. For me, this type of thinking underpins what we’re doing with the Aula Fellowship: connecting people to these conversations. She also gives me, personally, a lot of analogies and examples that bring clarity to the conversations we’re having around hard questions. I am not a habitual book reviewer, but count me in as a book recommender. I liked this, a lot, and it’s already proving useful to how I think and talk about tech policy. It’s a reminder that we as people have choices in how this is going to affect the future. And it’s a cheerful reminder that we humans get to keep all the good stuff, like loving each other and creating society.

    Thank you for your work, Ms. Hasselbalch.

  • Obama Foundation Fellow: Victoria Kuketz

    We are proud to announce Aula Fellow Victoria Kuketz’s recent appointment as an Obama Fellow. Follow Victoria for news of her Fellowship this year, during which she will concentrate on inclusion and rational governance.

    More Information

  • AI and Human Oversight: A Risk-Based Framework for Alignment

    As Artificial Intelligence (AI) technologies continue to advance, protecting human autonomy and promoting ethical decision-making are essential to fostering trust and accountability. Human agency (the capacity of individuals to make informed decisions) should be actively preserved and reinforced by AI systems. This paper examines strategies for designing AI systems that uphold fundamental rights, strengthen human agency, and embed effective human oversight mechanisms. It discusses key oversight models, including Human-in-Command (HIC), Human-in-the-Loop (HITL), and Human-on-the-Loop (HOTL), and proposes a risk-based framework to guide the implementation of these mechanisms. By linking the level of AI model risk to the appropriate form of human oversight, the paper underscores the critical role of human involvement in the responsible deployment of AI, balancing technological innovation with the protection of individual values and rights. In doing so, it aims to ensure that AI technologies are used responsibly, safeguarding individual autonomy while maximizing societal benefits.
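    As an illustration of the risk-based approach the abstract describes, a framework of this kind can be sketched as a simple mapping from risk tier to oversight model. The tier names, example domains, and assignments below are hypothetical assumptions for demonstration, not the paper’s actual framework:

```python
# Illustrative sketch only: maps hypothetical AI risk tiers to the
# oversight models named in the abstract (HIC, HITL, HOTL). The tiers
# and their assignments are assumptions, not the paper's framework.
from enum import Enum

class Risk(Enum):
    MINIMAL = 1   # e.g. spam filtering (hypothetical example)
    LIMITED = 2   # e.g. conversational assistants (hypothetical example)
    HIGH = 3      # e.g. hiring or credit decisions (hypothetical example)
    CRITICAL = 4  # e.g. safety-critical infrastructure (hypothetical example)

def oversight_model(risk: Risk) -> str:
    """Suggest a human-oversight model for a given risk tier."""
    if risk is Risk.MINIMAL:
        return "HOTL"  # Human-on-the-Loop: monitor, intervene when needed
    if risk in (Risk.LIMITED, Risk.HIGH):
        return "HITL"  # Human-in-the-Loop: a human approves each decision
    return "HIC"       # Human-in-Command: a human retains full control

print(oversight_model(Risk.HIGH))  # HITL
```

    The design point is simply that oversight intensity increases monotonically with model risk; a real deployment would refine both the tiers and the escalation rules.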

    More Information

  • Oui, mais je LLM !

    Generative AI plays tricks on us, manipulating our perception of the truth while trying to become our confidant and creating a relationship of dependence. But we can also, in turn, use it to extract poorly secured privileged information, using tactics adapted from social engineering.

    The lack of experience with this technology, and the rush to put it everywhere, exposes us to new risks.

    I present an overview of basic cybersecurity concepts revisited for generative AI, the various risks these algorithms pose, and prevention advice for integrating them well into our computer systems and professional practice.

    More Information

  • West Island Women’s Center

    Presenting a workshop on navigating the hard and strange questions about AI in society and in our lives.

    More Information

  • The EU AI Act – Enabling the Next Generation Internet (NGI)

    How the pioneering AI law enables the NGI’s aim of establishing key technological building blocks of tomorrow’s Internet and shaping the future Internet as an interoperable platform ecosystem that embodies the values that Europe holds dear: openness, inclusivity, transparency, privacy, cooperation, and protection of data.

    More Information

  • The Architecture of Responsible AI: Balancing Innovation and Accountability

    Artificial Intelligence (AI) has become a key factor driving change in industries, organizations, and society. While technological capabilities advance rapidly, the mechanisms guiding AI implementation reveal critical structural flaws (Closing the AI accountability gap). Herein lies an opportunity to architect a future in which we collaboratively design systems that leverage AI to augment human capabilities while upholding ethical integrity.

    More Information

  • Whole-Person Education for AI Engineers: Presented to CEEA (Peer Reviewed)

    This autoethnographic study explores the need for interdisciplinary education spanning both technical and philosophical skills. It leverages whole-person education as a theoretical approach in AI engineering education to address the limitations of current paradigms that prioritize technical expertise over ethical and societal considerations. Drawing on a collaborative autoethnography of fourteen diverse stakeholders, the study identifies key motivations driving the call for change, including the need for global perspectives, bridging the gap between academia and industry, integrating ethics and societal impact, and fostering interdisciplinary collaboration. The findings challenge the myths of technological neutrality and technosaviourism, advocating for a future where AI engineers are equipped not only with technical skills but also with the ethical awareness, social responsibility, and interdisciplinary understanding necessary to navigate the complex challenges of AI development. The study provides insights and recommendations for transforming AI engineering education to ensure the responsible development of AI technologies.

    More Information

  • Canary in the Mine: An LLM Augmented Survey of Disciplinary Complaints to the Ordre des ingénieurs du Québec (OIQ) (Peer Reviewed)

    This study investigates disciplinary incidents involving engineers in Quebec, shedding light on critical gaps in engineering education. Through a comprehensive review of the disciplinary register of the Ordre des ingénieurs du Québec (OIQ) for 2010 to 2024, researchers from engineering education and from human resources management in technological development laboratories conducted a thematic analysis of reported incidents to identify patterns, trends, and areas for improvement. The analysis aims to uncover the most common types of disciplinary incidents, their underlying causes, and their implications for how engineering education addresses (or fails to address) these issues. Our findings identify recurring themes, analyze root causes, and offer recommendations for engineering educators and students to mitigate similar incidents. This research has implications for curriculum development, professional development, and performance evaluation, ultimately fostering a culture of professionalism and ethical responsibility in engineering. By providing empirical evidence of disciplinary incidents and their causes, this study contributes to evidence-based practices for engineering education and professional development, enhancing the engineering education community’s understanding of professionalism and ethics.

    More Information

  • Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session

    A multidisciplinary, reflective autoethnography by some of the people who are building the Permanent Symposium on AI. Includes the history of the project.

    RQ 1: What challenges unite AI policy and technology?

    RQ 2: How to design the PSAI?

    RQ 3: What factors influence the adoption and scalability of the PSAI?

    This is the flagship project of the Aula Fellowship.

    Read the Poster