Our mission is to ensure that everyone can access the conversation on how AI is being implemented in society.

About The Aula Fellowship:

Fellows speak for themselves, have full independence over their work, and do not speak on behalf of one another. What unites us is our shared mission.

We are a think-tank and non-profit NGO facilitating the global conversation on AI. Each Fellow brings their own expertise, drawn from science, engineering, social justice, governance, law, education, philosophy, business, and more. In ancient times, in Latin-speaking regions, an aula was the place where a community gathered to discuss matters of shared importance. The Aula Fellowship is a modern aula: a gathering of interdisciplinary Fellows from around the world, committed to advancing the responsible proliferation of AI.

Governance and Civil Advocacy Tools →

The Aula Convening Guidelines, 2025 ed.

For people working on tech governance and AI in society:

Convene communities for legitimate collective decision-making on how AI is implemented in society.

Levers of Power in the Field of AI

Ongoing research and collective work on how individuals in institutions and in civil society can work together to steer our social systems.

Our most recent work is now on arXiv and has been accepted to ASEE 2026.

The Aula Early Assessment Instrument for AI Implementations in SMEs (AulaEAI v.01)

This instrument is a lightweight assessment of a proposed AI implementation in a small or medium organization, whether a company, an institution, or a non-profit.

Research programs in engineering education, institutionalism, management, politics, governance, tech, and international cooperation. Open to researchers from all disciplines and skillsets. All types of research are conducted by our Fellows and the Fellowship; we work together on trade, journalistic, legal, policy, and academic publications, among others.

[Expression of Interest to Join an Orientation Session →]

Fellows share opportunities for professional development and collaborate to ensure that our communities, as specialists and as people, can connect with each other and make decisions together about AI in society.

[Collaborate With The Aula Fellowship →]

Please get in touch if you want to collaborate on a specific project with several Fellows. Otherwise, reach out to individual Fellows directly.

We work with individuals, communities, and institutions who are trying to make sense of what AI means for their work, their people, their missions, and their futures. We’re not here to simplify AI or sell you a solution. We create space for the conversations.

What we offer:

  • Presentations, events, and conference curation.
  • Community and governance support.
  • Mediation and arbitration, including negotiations and forensics (select topics).

→ Learn more about our Services.

→ Email us at info@theaulafellowship.org

Book a call

There are several ways to engage with the Aula Fellowship. For more information on each of these options, see Get Involved or Support the Mission.

Are you already working on AI in society and looking for opportunities to expand your impact? Please see the Expression of Interest to Join an Orientation Session form.

[See our Works & News →]

Highlights of our collected works are presented on our Works & News page.

Highlights of our academic and scholarly research are presented under the Research sub-topics of Works & News.

Individual scholars among the Fellows link to their Google Scholar or ResearchGate accounts in their bios. Individual Fellows’ records can be found in Works & News, listed alphabetically.

Follow us on LinkedIn for current news.

Recent Works:

  • The Aula Convening Guidelines, 2025 ed.

    These Aula Convening Guidelines are for people working on tech governance and AI in society: six guidelines for convening communities for legitimate collective decision-making on how AI is implemented in society. Since our founding in 2023, Aula Fellows have hosted and participated in hundreds of conversations in…

    Read more…

  • Call for Book Chapters: OUR AI PROBLEMS

    Call for Book Chapters: Our AI Problems (Edited Volume). We believe that there are no easy answers when it comes to artificial intelligence and society. Across jurisdictions and decision-making bodies, those who develop or enforce regulations are confronted with difficult questions. These challenges arise for many reasons: the issues are often embedded in complex sociotechnical…

    Read more…

  • Levers of Power in the Field of AI

    Forthcoming study, now available on arXiv: Levers of Power in the Field of AI: An Ethnography of Personal Influence in Institutionalization. Who holds power over decisions in our society? How do these people influence decisions, and how are they themselves influenced? How is this the same or different when it comes to questions about AI?

    Read more…

  • Three Fellows’ Works in Nature This Year

    We are proud to announce that Fellows have been published three times in Nature this year, so far! We are delighted to be connecting people to the conversation in these ways. Thank you for your work, Fellows Jake Okechukwu Effoduh and Peer Herholz! Here are the works in question: Nature: Opinion: The path for AI in…

    Read more…

  • ISED Canada Consultation to Define the Next Chapter of Canada’s AI Leadership

    Aula Fellows contributed to the recent consultation on the Government of Canada’s AI Strategy. Our principal recommendations are that the government needs to empower civil-society inclusion in decision-making and to support small businesses. These steps will ensure not just social acceptability but also fiscal and technical fitness for purpose. Read the full consultation document here.

    Read more…

  • Book review of Human Power: Seven Traits for the Politics of the AI Machine Age

    Book review of Human Power: Seven Traits for the Politics of the AI Machine Age. I am a practitioner in the field of AI policymaking, as a civil society advocate and a researcher. I was excited to read Ms. Gry Hasselbalch because she has a very good reputation for telling people the truth and for not…

    Read more…

[See our Works & News →]