  • Fundraiser for: Aula @ AI House Davos 2026

    We are preparing The Aula Fellowship’s annual report, “Our AI Questions”. Would you consider sponsoring a Hard Question on AI?

    Link to support with funding: Sponsoring a Hard Question on AI

    Important: We have been informed of fraudsters attempting to pose as Fellows. We run our campaign exclusively through Zeffy; beware of any other platform or email. We only use @theaulafellowship.org email accounts.

    The first part of our sponsorship budget has been received! Thank you. Our Director Tammy Mackenzie and Tech Lead François Pelletier, along with 30 Fellows working from their homes around the world, will join AI House Davos ’26 and WEF-adjacent conversations during the week of the World Economic Forum, January 19th to 24th, 2026.

    Why this matters

    AI House Davos is a non-profit organization that grew out of a history of activism in sustainability. Its founders realized there was value in having intersectional spaces during the WEF that are topic-focused rather than hosted by a specific country or other organization.

    Our Fellows have attended and hosted hundreds of convenings on AI, and we have learned from this. Going to AI House and WEF-adjacent conversations in Davos will empower us to work with the AI House Davos community, so that we can participate in connecting decision makers to communities.

    Your support goes to travel costs, salaries for the team, and communications materials. This directly supports Aula’s mission to ensure that everyone can access the conversation on AI.

    There are several custom sponsorship arrangements, all meant to provide you with information, connections, and the amplification of your work.

    • Live conversations with Tammy or François while they are in Davos. See the AI House Davos 2026 Schedule for topic ideas or choose your own.
    • For $1,000, your organization can sponsor a Hard Question from Society in our Annual Report.
    • If you are working on getting people access to the conversation on AI, we can present your work in the report section on making connections.
    • For $3,000, you or your organization can request a Private or Custom Question, with an Executive briefing or Podcast episode.
    • $10 to $300: any personal amount is much appreciated and helps us support Aula’s mission and teams. You will be treated with the same respect and attention to detail as other Sponsors.

    As a sponsor, you can choose your preferred level of visibility at Davos, in the Report, and for related activities.


    This year’s Public Questions are:

    Money Question: How can we fund sovereign AI?
    SDG 9: Industry, Innovation, and Infrastructure & SDG 8: Decent Work and Economic Growth

    Power Question: What can we learn from climate policymaking failures, and do more effectively in AI policy?
    SDG 10: Reduced Inequalities & SDG 17: Partnerships for the Goals

    Life Question: How can we give nature and communities protection from the pollution and resource extraction of AI data centres?
    SDG 13: Climate Action & SDG 11: Sustainable Cities and Communities

    This is how you can help us.

    1. Fund: Every dollar counts. Contribute now to help us reach our goal.
    2. Share: Share our campaign with your friends, family, and on social media. Your advocacy can amplify our impact.
    3. Participate: This mission can only be done together. Volunteer, bring opportunities, do what you can. 

    Thank you for supporting the mission to ensure everyone can access the conversation on AI.

    Sponsor the Aula Fellowship.

    Risks and Mitigations

    UPDATE: Success for Stage 1! Our Director Tammy Mackenzie and Technical Lead François Pelletier will be going to Davos 2026 for the whole week. Next step: we need to support the teams. 30 Fellows are participating from home.

    Prior version, December 15th 2025:

    If we don’t raise the first 2 × $9,000 quickly enough, we don’t know if our Director and Director of Research can make it to Davos for the full week, and we know we can’t schedule our Impact Team. What we can already confirm is that some Aula Fellows will be in town that week, and that there are usually 25 or more events online. And we know that we will be working to put out our report this coming spring, come what may. Most of all, we know that we can only meet this mission together.

    Sponsor the Aula Fellowship.

    Additional information:

    The Aula Fellows on this project are:

    Tammy Mackenzie, MBA. Institutionalism and Multidisciplinary AI. Canada.

    Branislav Radeljic, Ph.D. Political science. UK.

    François Pelletier, M.Sc. Data science and civil organization. Canada.

    Ley Muller, Ph.D., Standards and Health.

    Uloma Okoro, LLM, MBA, Law, Regulation, civil organization. Nigeria.

    Hager Hesham, MA, Journalism and human rights, Egypt

    Leslie Salgado, Ph.D. Rhetoric, Media, Rights and Health. Canada.

    Andrew Ham, MIA, Governance and coordination. USA.

    Gwendolyn Alston, Ph.D., Documentarist, USA.

  • Aula Convening Guidelines 2025 Ed.

    The Aula Convening Guidelines are for people working on tech governance and AI in society: six guidelines for convening communities for legitimate collective decision-making on how AI is implemented in society.

    Since our founding in 2023, Aula Fellows have hosted and participated in hundreds of conversations on AI in more than 30 countries and regions. We have spoken with people who have a variety of needs, spanning Learning AI, Living with AI, Working with AI, and Shaping AI.

    We developed these guidelines over three project phases, drawing on the common elements of conversations in which communities make decisions about AI. Our goal is not a new type of consultation, but rather to ensure that community convenings are conducive to collective decision-making on AI.

    In 2026, we will reach out to partner organizations to continue refining these guidelines and to bring them to more groups of people.

    The guidelines are complete and available now under a Creative Commons license, in this V.01, 2025 Edition.

    Link to the PDF.

  • Call for Book Chapters: OUR AI PROBLEMS

    Call for Book Chapters: Our AI Problems (Edited Volume)

    We believe that there are no easy answers when it comes to artificial intelligence and society. Across jurisdictions and decision-making bodies, those who develop or enforce regulations are confronted with difficult questions. These challenges arise for many reasons: the issues are often embedded in complex sociotechnical systems, lack straightforward solutions, or involve tensions between competing values and needs.

    The editors hold that AI can be of great service for humanity. At the same time, current regulatory frameworks lag far behind what is needed to ensure just, safe, and equitable access and outcomes. 

    Policymakers and subject-matter specialists are increasingly converging on a shared set of especially challenging issues. Society is learning to join in the conversations. Accordingly, the proposed volume is envisioned as addressing the following areas: Economics and Power; Democracy and Trust; Risks Large and Small; Building Bridges and Inclusion; Media and Art; Environment and Health; Justice, Security, and Defense.

    If you are interested in contributing, we would be delighted to hear from you. If you know colleagues or collaborators who might wish to participate, please feel free to share this call with them as well.

    Deadline for chapter abstracts (250–300 words): 31 January 2026
    Deadline for chapter draft submission (8,000–10,000 words; US English; APA style): 31 March 2026
    Deadline for final revisions: 15 May 2026

    Edited by Tammy Mackenzie, Ashley Elizabeth Muller, and Branislav Radeljić

    For more info about the editors, please see: Fellows
    Submissions and questions: Contact Branislav Radeljić, Ph.D., Director of Research.

  • ISED Canada Consultation to Define the Next Chapter of Canada’s AI leadership

    Aula Fellows contributed to the recent consultation on the Government of Canada’s AI Strategy. Our principal recommendations are that the government empower civil society inclusion in decision-making and support small businesses. These will ensure not just social acceptability, but also fiscal and technical fitness for purpose.

    Read the full consultation document here.

  • Book review of Human Power:
Seven Traits for the Politics of the AI Machine Age

    I am a practitioner in the field of AI policymaking, as a civil society advocate and a researcher. I was excited to read Ms. Gry Hasselbalch’s book because she has a strong reputation for telling people the truth and for not backing down on values-based work. I have had the opportunity to hear her present in the past.

    This was exactly the read I hoped for, and more. She describes our “human powers” like unpacking a really great care package, full of everything you love but forgot you were missing. And in detail: in quotable, academic detail, heading off through history and into the conversations between people about how AI policy comes to be enacted. I love it. It’s the next best thing to being in the room.

    The best part for me, as a social systems geek, is that she has been in this work: she ties each of our human powers to policy power as you read, so it builds you up. And she brings it all together in the final chapter, in direct conversations with the people making the decisions about the challenges they face. For me, this type of thinking underpins what we’re doing with the Aula Fellowship: connecting people to these conversations. She also gives me, personally, a lot of analogies and examples that help bring clarity to the conversations we’re having around hard questions.

    I am not a habitual book reviewer, but count me in as a book recommender. I liked this, a lot, and it’s already proving useful to how I think and talk about tech policy. It’s a reminder that we as people have choices in how this is going to affect the future. And it’s a cheerful reminder that we humans get to keep all the good stuff, like loving each other and creating society.

    Thank you for your work, Ms. Hasselbalch.

  • A Response to Government of Canada’s AI Strategy Task Force

    The Aula Fellowship is proud to join over 40 civil advocacy organizations in an open letter to the government of Canada. Let’s lead the world in crafting tech governance that works for all. Thank you to Alex Tveit of the Sustainable Impact Foundation for intellectual and operational leadership in this space.

    See more on LinkedIn

  • Rogers Cybersecure Catalyst Fellow 2025-2026

    We are very proud to announce that Aula Fellow Jake Okechukwu Effoduh has been named a Rogers Cybersecure Catalyst Fellow for 2025-2026.

    Read more: “Fighting for justice beyond borders: Jake Okechukwu Effoduh’s journey from grassroots advocacy to cyber law.”

  • State of AI Policy in Africa 2025

    Robert’s perspective is featured by the report authors, with an emphasis on transparent, sovereign AI infrastructure.

    See the Report here.

  • Whose Identity Counts? / Keynote

    Whose Identity Counts? explores how AI shapes whose voices are heard and whose are overlooked. Drawing on her research at the University of Cambridge, Hannah highlights the role of language and culture in building more inclusive technologies.

    See the presentation here.