Category: Fellows' Projects

Aula Fellow Project

  • Generative AI and the Future of News: Examining AI’s Agency, Power, and Authority

    This special issue interrogates how artificial intelligence (AI), particularly generative AI (GenAI), is reshaping journalism at a moment of profound uncertainty for the profession. The rapid rise of GenAI technologies, particularly following the release of tools like ChatGPT, has intensified longstanding tensions between economic precarity, technological innovation, and journalistic values. Across diverse contexts in the Global North and South, articles examine how AI is simultaneously heralded as a source of efficiency, personalization, and newsroom survival, while also feared as a destabilizing force that threatens jobs, erodes professional norms, and concentrates power in the hands of technology corporations.

    More Information

  • Obama Foundation Fellow: Victoria Kuketz

    We are proud to announce Aula Fellow Victoria Kuketz's recent appointment as an Obama Fellow. Follow Victoria for news of her Fellowship this year, during which she will concentrate on inclusion and rational governance.

    More Information

  • AI and Human Oversight: A Risk-Based Framework for Alignment

    As Artificial Intelligence (AI) technologies continue to advance, protecting human autonomy and promoting ethical decision-making are essential to fostering trust and accountability. Human agency (the capacity of individuals to make informed decisions) should be actively preserved and reinforced by AI systems. This paper examines strategies for designing AI systems that uphold fundamental rights, strengthen human agency, and embed effective human oversight mechanisms. It discusses key oversight models, including Human-in-Command (HIC), Human-in-the-Loop (HITL), and Human-on-the-Loop (HOTL), and proposes a risk-based framework to guide the implementation of these mechanisms. By linking the level of AI model risk to the appropriate form of human oversight, the paper underscores the critical role of human involvement in the responsible deployment of AI, balancing technological innovation with the protection of individual values and rights. In doing so, it aims to ensure that AI technologies are used responsibly, safeguarding individual autonomy while maximizing societal benefits.
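
    The pairing of model risk level with an oversight mechanism can be sketched as a simple lookup. The risk tiers and the mapping below are illustrative assumptions for this sketch, not the paper's actual framework:

    ```python
    from enum import Enum

    class Risk(Enum):
        """Hypothetical risk tiers for an AI system (illustrative only)."""
        MINIMAL = 1
        LIMITED = 2
        HIGH = 3
        UNACCEPTABLE = 4

    # Assumed mapping from risk tier to oversight mechanism: the lower the
    # risk, the lighter the human involvement; past a threshold, deployment
    # is simply not permitted.
    OVERSIGHT = {
        Risk.MINIMAL: "HOTL",       # Human-on-the-Loop: monitor, intervene on alerts
        Risk.LIMITED: "HITL",       # Human-in-the-Loop: a human approves each decision
        Risk.HIGH: "HIC",           # Human-in-Command: a human retains full control
        Risk.UNACCEPTABLE: "PROHIBIT",  # deployment not permitted
    }

    def required_oversight(risk: Risk) -> str:
        """Return the oversight mechanism required for a given risk tier."""
        return OVERSIGHT[risk]
    ```

    The point of the lookup is that oversight is not one-size-fits-all: the mechanism is chosen by the risk assessment, not by the convenience of the deployer.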

    More Information

  • Oui, mais je LLM !

    Generative AI plays tricks on us, manipulating our perception of the truth as it tries to become our confidant and build a relationship of dependency. But we can also turn the tables and use it to extract poorly secured privileged information, using tactics adapted from social engineering.

    Inexperience with this technology, combined with the rush to deploy it everywhere, exposes us to new risks.

    This talk offers an overview of core cybersecurity concepts revisited for generative AI, the various risks these algorithms pose, and practical advice for integrating them safely into our computer systems and professional practice.
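
    As a toy illustration of the social-engineering risks mentioned above, here is a naive input screen that flags common prompt-injection phrasings before untrusted text is concatenated into a prompt. The pattern list is an illustrative assumption and is trivially bypassed; real defenses keep privileged information out of the model's context entirely:

    ```python
    import re

    # Illustrative (and deliberately naive) list of phrasings often seen in
    # prompt-injection attempts. A real system cannot rely on pattern matching.
    INJECTION_PATTERNS = [
        r"ignore (all |any )?(previous|prior) instructions",
        r"reveal (your|the) (system )?prompt",
        r"you are now",
    ]

    def looks_like_injection(user_text: str) -> bool:
        """Flag untrusted input that matches a known injection phrasing."""
        text = user_text.lower()
        return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
    ```

    A filter like this is a speed bump, not a wall; the sketch only shows why untrusted text and privileged context should never share a prompt in the first place.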

    More Information

  • Aula Fellow Emmanuel Taiwo named a Vanier Scholar

    We are proud to announce that Aula Fellow Emmanuel Taiwo has been named a recipient of the Vanier Canada Graduate Scholarship Award for 2025.

    From their site: “The Vanier award recognizes PhD students at Canadian universities who demonstrate excellence across three key areas, namely, leadership, academic performance and research potential. Widely regarded as one of the most prestigious scholarship awards at the doctoral level, Vanier Scholars are seen as some of the best of the best doctoral researchers in Canada.”

    IMPACT Lab doctoral candidate named recipient of prestigious Vanier Scholarship Award!

  • AIMS Hackathon Against Modern Slavery

    We are proud to announce that an Aula Team has joined the AIMS Hackathon 2025: AI Against Modern Slavery in Supply Chains. This is an issue that touches everyone on earth, and that everyone can take part in fixing.

    We will be examining problems in this space, drawing on, among other resources, an open data set of 15,000+ annual corporate reports and Walk Free's Global Slavery Index, to find ways to identify, mitigate, and eradicate modern slavery.
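
    As a hypothetical first pass over a corpus like the open report data set, one could screen each document for indicator terms before any deeper analysis. The file layout and term list below are assumptions for illustration, not the hackathon's actual methodology:

    ```python
    from pathlib import Path

    # Assumed indicator terms; a real screen would use a vetted taxonomy.
    INDICATORS = ["forced labour", "debt bondage", "recruitment fees", "child labour"]

    def screen_report(text: str) -> list[str]:
        """Return the indicator terms mentioned in one report (case-insensitive)."""
        lowered = text.lower()
        return [term for term in INDICATORS if term in lowered]

    def screen_corpus(folder: str) -> dict[str, list[str]]:
        """Map each plain-text report in a folder to the terms it mentions."""
        return {p.name: screen_report(p.read_text(errors="ignore"))
                for p in Path(folder).glob("*.txt")}
    ```

    Keyword hits are only a triage signal: a report that names "recruitment fees" may be disclosing a safeguard, not a violation, so flagged documents still need human review.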

    We are exploring what we can do to help. How do you see it? Want to check out the data and let us know? We'll be sharing what we find and building collaborations. Thank you and all honour to the Hackathon conveners and director, Adriana Eufrosina Bora:

    Fundación Pasos Libres: project link https://lnkd.in/gdsczfKc
    Mila – Quebec Artificial Intelligence Institute: project link also includes links to all of the open data sets and studies done so far: https://lnkd.in/dAApAvqu
    Queensland University of Technology (QUT): https://lnkd.in/ehG66MXs

    The business reports database on GitHub, built and hosted by The Future Society: https://lnkd.in/eUa6an9s

    There's a world-class group of trainers. Numerous other partners are providing support, including The Future Society, Walk Free, UNESCO, the International Committee of the Red Cross (ICRC), the Australian Red Cross, and the governments of Australia, Canada, and the UK, with many more to come.

    Get to the heart of the matter by hearing from survivors: Faith, Love, and Human Trafficking: The Story of Karola De la Cuesta. On Goodreads and available at most online retailers in English and Spanish (ask your library): https://lnkd.in/eitSUk4c

    If, like us, you are working on these issues, we welcome your interest in potential collaborations. Check out "How to Get Involved".

    Infographic from Respect International: https://lnkd.in/exRb_NNA

    AIMS Hackathon

  • West Island Women’s Center

    Presenting a workshop on navigating the hard and strange questions about AI in society and in our lives.

    More Information

  • Tech Tool: the Survivor’s Dashboard

    A dashboard of curated information for survivors of modern slavery and the people who work to rescue others. This tool is available for collaborations. Please contact our Technical Director, François Pelletier, for more information.

    We developed a lightweight version of this tool for the WEF summit in Davos: https://theaulafellowship.org/survivors-dashboard/

  • AWS blog: “AI judging AI”

    “Picture this: Your team just received 10,000 customer feedback responses. The traditional approach? Weeks of manual analysis. But what if AI could not only analyze this feedback but also validate its own work? Welcome to the world of large language model (LLM) jury systems deployed using Amazon Bedrock. As more organizations embrace generative AI, particularly LLMs for various applications, a new challenge has emerged: ensuring that the output from these AI models aligns with human perspectives and is accurate and relevant to the business context.”
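
    The jury idea in the excerpt can be sketched as a majority vote over several independent judges. The judge functions below are stand-ins for what, in the AWS post, would be calls to models hosted on Amazon Bedrock; the labels and heuristics are assumptions for illustration:

    ```python
    from collections import Counter
    from typing import Callable

    # A judge takes a candidate output and returns a verdict label.
    Judge = Callable[[str], str]

    def jury_verdict(output: str, judges: list[Judge]) -> str:
        """Collect one verdict per judge and return the majority label."""
        votes = Counter(judge(output) for judge in judges)
        verdict, _count = votes.most_common(1)[0]
        return verdict

    # Stub judges for demonstration; each applies its own crude heuristic
    # in place of a real model call.
    def judge_length(text: str) -> str:
        return "relevant" if len(text) > 20 else "irrelevant"

    def judge_keyword(text: str) -> str:
        return "relevant" if "refund" in text.lower() else "irrelevant"

    def judge_optimist(text: str) -> str:
        return "relevant"
    ```

    Using several imperfect judges and taking the majority is the core of the jury design: no single judge has to be reliable, only the panel as a whole.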

    Read the work on their blog: https://aws.amazon.com/blogs/machine-learning/ai-judging-ai-scaling-unstructured-text-analysis-with-amazon-nova/

  • WiCyS Vulnerability Disclosure Program

    We are proud and happy to see that our Fellow, cybersecurity specialist Temitope Banjo-CISM, will be joining the Women in CyberSecurity (WiCyS) Vulnerability Disclosure Program.