Author: Aula Blog Editor

  • A Response to Government of Canada’s AI Strategy Task Force

    The Aula Fellowship is proud to join over 40 civil advocacy organizations in an open letter to the Government of Canada. Let’s lead the world in crafting tech governance that works for all. Thank you to Alex Tveit of the Sustainable Impact Foundation for intellectual and operational leadership in this space.

    See more on LinkedIn

  • Rogers Cybersecure Catalyst Fellow 2025-2026

    We are very proud to announce that Aula Fellow Jake Okechukwu Effoduh has been named a Rogers Cybersecure Catalyst Fellow for 2025-2026.

    Read more: “Fighting for justice beyond borders: Jake Okechukwu Effoduh’s journey from grassroots advocacy to cyber law.”

  • Whose Identity Counts? / Keynote

    Whose Identity Counts? explores how AI shapes whose voices are heard and whose are overlooked. Drawing on her research at the University of Cambridge, Hannah highlights the role of language and culture in building more inclusive technologies.

    See the presentation here.

  • State of AI Policy in Africa 2025

    Robert’s perspective is featured by the report’s authors, with an emphasis on transparent, sovereign AI infrastructure.

    See the Report here.

  • AI and Human Oversight: A Risk-Based Framework for Alignment

    As Artificial Intelligence (AI) technologies continue to advance, protecting human autonomy and promoting ethical decision-making are essential to fostering trust and accountability. Human agency (the capacity of individuals to make informed decisions) should be actively preserved and reinforced by AI systems. This paper examines strategies for designing AI systems that uphold fundamental rights, strengthen human agency, and embed effective human oversight mechanisms. It discusses key oversight models, including Human-in-Command (HIC), Human-in-the-Loop (HITL), and Human-on-the-Loop (HOTL), and proposes a risk-based framework to guide the implementation of these mechanisms. By linking the level of AI model risk to the appropriate form of human oversight, the paper underscores the critical role of human involvement in the responsible deployment of AI, balancing technological innovation with the protection of individual values and rights. In doing so, it aims to ensure that AI technologies are used responsibly, safeguarding individual autonomy while maximizing societal benefits.

    More Information
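    To make the paper’s core idea concrete, here is a minimal sketch, in Python, of how an assessed risk tier might be mapped to a required oversight mechanism. The tiers and assignments below are illustrative assumptions for discussion, not the framework proposed in the paper.

    ```python
    from enum import Enum


    class RiskLevel(Enum):
        """Illustrative risk tiers; the paper's own taxonomy may differ."""
        MINIMAL = 1
        LIMITED = 2
        HIGH = 3
        UNACCEPTABLE = 4


    class Oversight(Enum):
        HOTL = "Human-on-the-Loop"   # humans monitor outputs and can intervene
        HITL = "Human-in-the-Loop"   # humans approve individual decisions
        HIC = "Human-in-Command"     # humans retain full authority over the system

    # Hypothetical mapping for illustration only: higher model risk
    # calls for a more direct form of human oversight.
    OVERSIGHT_BY_RISK = {
        RiskLevel.MINIMAL: Oversight.HOTL,
        RiskLevel.LIMITED: Oversight.HOTL,
        RiskLevel.HIGH: Oversight.HITL,
        RiskLevel.UNACCEPTABLE: Oversight.HIC,
    }


    def required_oversight(risk: RiskLevel) -> Oversight:
        """Return the oversight mechanism assumed for a given risk tier."""
        return OVERSIGHT_BY_RISK[risk]


    if __name__ == "__main__":
        for tier in RiskLevel:
            print(f"{tier.name:>12} -> {required_oversight(tier).value}")
    ```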

  • Dataset: Curated database and methodology for Modern Slavery Context Analyses

    This dataset is not yet available for collaborations. We expect to report on it in December 2025. Please contact our Research Director, Dr. Branislav Radeljic, for more information.

    More Information

  • Generative AI and the Future of News: Examining AI’s Agency, Power, and Authority

    This special issue interrogates how artificial intelligence (AI), particularly generative AI (GenAI), is reshaping journalism at a moment of profound uncertainty for the profession. The rapid rise of GenAI technologies, especially following the release of tools like ChatGPT, has intensified longstanding tensions between economic precarity, technological innovation, and journalistic values. Across diverse contexts in the Global North and South, articles examine how AI is simultaneously heralded as a source of efficiency, personalization, and newsroom survival, while also feared as a destabilizing force that threatens jobs, erodes professional norms, and concentrates power in the hands of technology corporations.

    More Information

  • Obama Foundation Fellow: Victoria Kuketz

    We are proud to announce Aula Fellow Victoria Kuketz’s recent appointment as an Obama Fellow. Follow Victoria for news of her Fellowship this year, where she will be concentrating on inclusion and rational governance.

    More Information

  • Oui, mais je LLM !

    Generative AI plays tricks on us, manipulating our perception of the truth by trying to become our confidant and by creating a relationship of dependence. But we can also turn it around and use it to extract poorly secured privileged information, using tactics adapted from social engineering.

    The lack of experience with this technology, and the rush to put it everywhere, exposes us to new risks.

    I present an overview of basic cybersecurity concepts revisited for generative AI, the various risks these algorithms pose, and prevention tips for integrating them properly into our IT systems and our professional practice.

    More Information

  • Aula Fellow Emmanuel Taiwo named a Vanier Scholar

    We are proud to announce that Aula Fellow Emmanuel Taiwo has been named a recipient of the Vanier Canada Graduate Scholarship Award for 2025.

    From their site: “The Vanier award recognizes PhD students at Canadian universities who demonstrate excellence across three key areas, namely, leadership, academic performance and research potential. Widely regarded as one of the most prestigious scholarship awards at the doctoral level, Vanier Scholars are seen as some of the best of the best doctoral researchers in Canada.”

    IMPACT Lab doctoral candidate named recipient of prestigious Vanier Scholarship Award!

  • AIMS Hackathon Against Modern Slavery

    We are proud to announce that an Aula Team has joined the AIMS Hackathon 2025: AI Against Modern Slavery in Supply Chains. This is an issue that touches everyone on earth, and that everyone can take part in fixing.

    We will be examining problems in this space, working with, among other things, an open data set of 15,000+ annual corporate reports and Walk Free’s Global Slavery Index, to find ways to identify, mitigate, and eradicate modern slavery.

    We are seeing what we can do to help. How do you see it? Want to check out the data and let us know? We will be sharing what we find, reporting back, and building collaborations. Thank you and all honour to the Hackathon conveners, and director Adriana Eufrosina Bora:

    Fundación Pasos Libres: project link: https://lnkd.in/gdsczfKc
    Mila – Quebec Artificial Intelligence Institute: project link (also includes links to all of the open data sets and studies done so far): https://lnkd.in/dAApAvqu
    QUT (Queensland University of Technology): https://lnkd.in/ehG66MXs

    The business reports database on GitHub, built and hosted by The Future Society: https://lnkd.in/eUa6an9s

    There’s a world-class group of trainers, and numerous other partners are providing support, including The Future Society, Walk Free, UNESCO, the International Committee of the Red Cross – ICRC, the Australian Red Cross, and the governments of Australia, Canada, and the UK, with many more to come.

    Get to the heart of the matter by hearing from survivors: Faith, Love, and Human Trafficking: The Story of Karola De la Cuesta. On Goodreads and available at most online retailers in English and Spanish (ask your library): https://lnkd.in/eitSUk4c

    If, like us, you are also working on these issues, we welcome your interest in potential collaborations. Check out “How to Get Involved”.

    Infographic from Respect International: https://lnkd.in/exRb_NNA

    AIMS Hackathon