Category: Tammy Mackenzie, M.B.A.

  • Aula Convening Guidelines, 2025 Ed.

    These Aula Convening Guidelines are for people working on tech governance and AI in society: six guidelines for convening communities for legitimate collective decision-making on how AI is implemented in society.

    Since our founding in 2023, Aula Fellows have hosted and participated in hundreds of conversations on AI in more than 30 countries and regions. We have spoken with people who have a variety of needs, spanning Learning AI, Living with AI, Working with AI, and Shaping AI.

    We have worked through three project phases to develop these guidelines, drawing on the common elements of conversations in which communities make decisions about AI. Our goal is not a new type of consultation, but rather to see to it that community convenings are conducive to collective decision-making on AI.

    In 2026, we will be reaching out to partner organizations to continue refining these guidelines and to bring them to more groups of people.

    The guidelines are complete and available now under a Creative Commons license, in this V.01, 2025 Edition.

    Link to the PDF.

  • Call for Book Chapters: Our AI Problems (Edited Volume)

    We believe that there are no easy answers when it comes to artificial intelligence and society. Across jurisdictions and decision-making bodies, those who develop or enforce regulations are confronted with difficult questions. These challenges arise for many reasons: the issues are often embedded in complex sociotechnical systems, lack straightforward solutions, or involve tensions between competing values and needs.

    The editors hold that AI can be of great service to humanity. At the same time, current regulatory frameworks lag far behind what is needed to ensure just, safe, and equitable access and outcomes.

    Policymakers and subject-matter specialists are increasingly converging on a shared set of especially challenging issues. Society is learning to join in the conversations. Accordingly, the proposed volume is envisioned as addressing the following areas: Economics and Power; Democracy and Trust; Risks Large and Small; Building Bridges and Inclusion; Media and Art; Environment and Health; Justice, Security, and Defense.

    If you are interested in contributing, we would be delighted to hear from you. If you know colleagues or collaborators who might wish to participate, please feel free to share this call with them as well.

    Deadline for chapter abstracts (250–300 words): 15 January 2026
    Deadline for chapter draft submission (8,000–10,000 words; US English; APA style): 31 March 2026
    Deadline for final revisions: 15 May 2026

    Edited by Tammy Mackenzie, Ashley Elizabeth Muller, and Branislav Radeljić

    For more information about the editors, please see: Fellows
    Submissions and questions: Contact Branislav Radeljić, Ph.D., Director of Research.

  • Levers of Power in the Field of AI

    Forthcoming study, now available on arXiv:

    Levers of Power in the Field of AI
    An Ethnography of Personal Influence in Institutionalization

    Who holds power over decisions in our society? How do these people influence decisions, and how are they themselves influenced? How is this the same or different when it comes to questions about AI? These are some of the questions we set out to understand.

    Abstract: This paper examines how decision-makers in academia, government, business, and civil society navigate questions of power in implementations of artificial intelligence (AI). The study explores how individuals experience and exercise “levers of power”, which are presented as social mechanisms that shape institutional responses to technological change. The study reports on responses to personalised questionnaires designed to gather insight into a decision-maker’s institutional purview, based on an institutional governance framework developed from the work of neo-institutionalists. Findings present the anonymized, real responses and circumstances of respondents in the form of twelve fictional personas of high-level decision-makers from North America and Europe. These personas illustrate how personal agency, organizational logics, and institutional infrastructures may intersect in the governance of AI. The decision-makers’ responses to the questionnaires then inform a discussion of the field-level personal power of decision-makers, methods of fostering institutional stability in times of change, and methods of influencing institutional change in the field of AI. The final section of the discussion presents a table of the dynamics of the levers of power in the field of AI for change makers, and five testable hypotheses for institutional and social movement researchers. In summary, this study provides insight on the means for policymakers within institutions, and their counterparts in civil society, to personally engage with AI governance.

    Read on arXiv.

  • ISED Canada Consultation to Define the Next Chapter of Canada’s AI leadership

    Aula Fellows contributed to the recent consultation on the Government of Canada’s AI Strategy. Our principal recommendations are that the government empower civil society to take part in decision-making and support small businesses. These measures will ensure not just social acceptability, but also fiscal and technical fitness for purpose.

    Read the full consultation document here.

  • Book review of Human Power:
Seven Traits for the Politics of the AI Machine Age

    I am a practitioner in the field of AI policymaking, as a civil society advocate and a researcher. I was excited to read Ms. Gry Hasselbalch’s book because she has a very good reputation for telling people the truth and for not backing down on values-based work. I’ve had the opportunity to hear her present in the past.

    This was exactly the read I hoped for, and more. She describes our “human powers” like someone unpacking a really great care package, full of everything you love but forgot you were missing. And in detail: quotable, academic detail, heading off through history and into the conversations between people about how AI policy needs come to be enacted. I love it. It’s the next best thing to being in the room.

    The best part for me, as a social systems geek, is that she has been in this work: she ties each of our human powers to policy power as you read, so it builds you up, and she brings it all together in the final chapter with direct conversations with the people making the decisions, about the challenges they face. For me, this type of thinking underpins what we’re doing with the Aula Fellowship: connecting people to these conversations. She also gives me, personally, a lot of analogies and examples that bring clarity to the conversations we’re having around hard questions. I am not a habitual book reviewer, but count me in as a book recommender. I liked this a lot, and it’s already proving useful to how I think and talk about tech policy. It’s a reminder that we as people have choices in how this is going to affect the future. And it’s a cheerful reminder that we humans get to keep all the good stuff, like loving each other and creating society.

    Thank you for your work, Ms. Hasselbalch.

  • AIMS Hackathon Against Modern Slavery

    We are proud to announce that an Aula Team has joined the AIMS Hackathon 2025: AI Against Modern Slavery in Supply Chains. This is an issue that touches everyone on earth, and that everyone can take part in fixing.

    We will be examining problems in this space, working with, among other things, an open data set of 15,000+ annual corporate reports and Walk Free’s Global Slavery Index, looking for ways to identify, mitigate, and eradicate modern slavery.

    We are seeing what we can do to help. How do you see it? Want to check out the data and let us know? We’ll be sharing, returning, and building collaborations. Thank you, and all honour to the Hackathon conveners and director Adriana Eufrosina Bora:

    Fundación Pasos Libres: project link https://lnkd.in/gdsczfKc
    Mila – Quebec Artificial Intelligence Institute: the project link also includes links to all of the open data sets and studies done so far: https://lnkd.in/dAApAvqu
    Queensland University of Technology (QUT): https://lnkd.in/ehG66MXs

    The business reports database on GitHub, built and hosted by The Future Society: https://lnkd.in/eUa6an9s
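
    For readers who want to explore the business reports database, here is a minimal sketch (in Python) of how one might scan a local export of corporate statements for modern-slavery disclosure language. The file name, column names, and keyword list below are illustrative assumptions for the sketch, not the dataset’s actual schema.

        # Minimal sketch: keyword scan over a local CSV export of corporate statements.
        # "statements.csv", its columns, and the keyword list are assumed for illustration.
        import csv

        KEYWORDS = ["forced labour", "child labour", "recruitment fees",
                    "supply chain audit", "remediation", "worker voice"]

        def scan_statements(path):
            """Count keyword mentions in each company's statement text."""
            results = []
            with open(path, newline="", encoding="utf-8") as f:
                for row in csv.DictReader(f):
                    text = row["statement_text"].lower()  # assumed column name
                    hits = {kw: text.count(kw) for kw in KEYWORDS if kw in text}
                    if hits:
                        results.append((row["company"], hits))  # assumed column name
            return results

        if __name__ == "__main__":
            for company, hits in scan_statements("statements.csv"):
                print(company, hits)

    A scan like this only surfaces candidate statements for human review; it says nothing on its own about actual compliance.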

    There’s a world-class group of trainers. Numerous other partners are providing support, including The Future Society, Walk Free, UNESCO, the International Committee of the Red Cross – ICRC, Australian Red Cross, and the governments of Australia, Canada, and the UK. And many more to come.

    Get to the heart of the matter by hearing from survivors: Faith, Love, and Human Trafficking: The Story of Karola De la Cuesta. On Goodreads and available at most online retailers in English and Spanish (ask your library): https://lnkd.in/eitSUk4c

    If, like us, you are also working on these issues, we welcome your interest in potential collaborations. Check out “How to Get Involved”.

    Infographic from Respect International: https://lnkd.in/exRb_NNA

    AIMS Hackathon

  • West Island Women’s Center

    Presenting a workshop on navigating the hard and strange questions about AI in society and in our lives.

    More Information

  • Tech Tool: the Survivor’s Dashboard

    A dashboard of curated information for survivors of modern slavery and the people who work to rescue others. This tool is available for collaborations. Please contact our Technical Director, François Pelletier, for more information.

  • Canary in the Mine: An LLM Augmented Survey of Disciplinary Complaints to the Ordre des ingénieurs du Québec (OIQ) (Peer Reviewed)

    This study investigates disciplinary incidents involving engineers in Quebec, shedding light on critical gaps in engineering education. Through a comprehensive review of the Ordre des ingénieurs du Québec (OIQ)’s disciplinary register for 2010 to 2024, researchers from engineering education and human resources management in technological development laboratories conducted a thematic analysis of reported incidents to identify patterns, trends, and areas for improvement. The analysis aims to uncover the most common types of disciplinary incidents, their underlying causes, and the implications for how engineering education addresses (or fails to address) these issues. Our findings identify recurring themes, analyze root causes, and offer recommendations for engineering educators and students to mitigate similar incidents. This research has implications for curriculum development, professional development, and performance evaluation, ultimately fostering a culture of professionalism and ethical responsibility in engineering. By providing empirical evidence of disciplinary incidents and their causes, this study contributes to evidence-based practices for engineering education and professional development, enhancing the engineering education community’s understanding of professionalism and ethics.
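
    As a rough illustration of how an LLM-augmented survey of this kind can be set up (a hedged sketch only, not the study’s actual pipeline; the model name, theme labels, and prompt wording are all assumptions), one might ask a chat model to assign one theme per incident summary:

        # Illustrative sketch of LLM-assisted thematic coding; not the authors' pipeline.
        # Theme labels, prompt wording, and model name are assumptions for this example.
        from openai import OpenAI

        THEMES = ["negligence", "conflict of interest", "misrepresentation",
                  "unlicensed practice", "record-keeping failures"]

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def code_incident(summary):
            """Ask the model to assign exactly one theme to an incident summary."""
            prompt = (
                "Classify this engineering disciplinary incident into exactly one of "
                f"the following themes: {', '.join(THEMES)}.\n\n"
                f"Incident: {summary}\n\nAnswer with the theme name only."
            )
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # assumed; any capable chat model would do
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content.strip()

        # Hypothetical example:
        # print(code_incident("Engineer sealed plans outside their field of competence."))

    In practice, machine-assigned codes like these would be validated against human coding on a sample before any thematic claims are made.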

    More Information

  • Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session

    A multidisciplinary, reflective autoethnography by some of the people who are building the Permanent Symposium on AI. Includes the history of the project.

    RQ 1: What challenges unite AI policy and tech?

    RQ 2: How to design the PSAI?

    RQ 3: What factors influence the adoption and scalability of the PSAI?

    This is the flagship project of the Aula Fellowship.

    Read the Poster

  • Whole-Person Education for AI Engineers: Presented to CEEA (Peer Reviewed)

    This autoethnographic study explores the need for interdisciplinary education spanning both technical and philosophical skills. As such, it leverages whole-person education as a theoretical approach for AI engineering education, addressing the limitations of current paradigms that prioritize technical expertise over ethical and societal considerations. Drawing on a collaborative autoethnography of fourteen diverse stakeholders, the study identifies key motivations driving the call for change, including the need for global perspectives, bridging the gap between academia and industry, integrating ethics and societal impact, and fostering interdisciplinary collaboration. The findings challenge the myths of technological neutrality and technosaviourism, advocating for a future where AI engineers are equipped not only with technical skills but also with the ethical awareness, social responsibility, and interdisciplinary understanding necessary to navigate the complex challenges of AI development. The study provides valuable insights and recommendations for transforming AI engineering education to ensure the responsible development of AI technologies.

    More Information

  • WIP: Gen AI in Engineering Education and the Da Vinci Cube (Peer Reviewed)

    As generative AI (GenAI) tools rapidly transform the engineering landscape, a critical question emerges: Are current educational innovations adequately preparing engineers for the socio-technical challenges of the future? This work-in-progress paper presents two key contributions. First, we build on prior work presenting a systematic review of over 160 scholarly articles on GenAI implementations in engineering education, revealing a predominant focus on enhancing technical proficiency while often neglecting essential socio-technical competencies. Second, we apply an emerging framework—the da Vinci Cube (dVC)—to support engineering educators in critically evaluating GenAI-driven innovations. The dVC framework extends traditional models of innovation by incorporating three dimensions: the pursuit of knowledge, consideration of use, and contemplation of sentiment. Our analysis suggests that while GenAI tools can improve problem-solving and technical efficiency, engineering education must also address ethical, human-centered, and societal impacts. The dVC framework provides a structured lens for assessing how GenAI tools are integrated into curricula and research, encouraging a more holistic, reflective approach. Ultimately, this paper aims to provoke dialogue on the future of engineering education and to challenge the prevailing assumption that technical skill development alone is sufficient in an AI-mediated world.
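
    To make the three dVC dimensions concrete, here is a minimal sketch of how an educator might record an assessment of a GenAI teaching tool; the 0–5 scale and the “holistic” threshold are assumed conventions for this sketch, not taken from the paper.

        # Illustrative sketch of a dVC-style assessment record.
        # The 0-5 scale and the "holistic" threshold are assumptions, not from the paper.
        from dataclasses import dataclass

        @dataclass
        class DVCAssessment:
            pursuit_of_knowledge: int        # does the tool deepen understanding?
            consideration_of_use: int        # are practical applications weighed?
            contemplation_of_sentiment: int  # are human and societal impacts weighed?

            def is_holistic(self, threshold=3):
                """Flag tools where no dimension is neglected (assumed criterion)."""
                return min(self.pursuit_of_knowledge,
                           self.consideration_of_use,
                           self.contemplation_of_sentiment) >= threshold

        # A code autograder that is strong technically but weak on societal reflection:
        print(DVCAssessment(5, 4, 1).is_holistic())  # False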

    More Information