  • Oui, mais je LLM !

    Generative AI plays tricks on us, manipulating our perception of truth by trying to become our confidant and by creating a relationship of dependence. But we can, in turn, use it to extract poorly secured privileged information, using tactics adapted from social engineering.

    The lack of experience with this technology, and the rush to deploy it everywhere, expose us to new risks.

    I present an overview of basic cybersecurity concepts revisited for generative AI, the various risks these algorithms pose, and practical advice for integrating them safely into our IT systems and our professional practice.

    More Information

  • Aula Fellow Emmanuel Taiwo named a Vanier Scholar

    We are proud to announce that Aula Fellow Emmanuel Taiwo has been named a recipient of the Vanier Canada Graduate Scholarship Award for 2025.

    From their site: “The Vanier award recognizes PhD students at Canadian universities who demonstrate excellence across three key areas, namely, leadership, academic performance and research potential. Widely regarded as one of the most prestigious scholarship awards at the doctoral level, Vanier Scholars are seen as some of the best of the best doctoral researchers in Canada.”

    IMPACT Lab doctoral candidate named recipient of prestigious Vanier Scholarship Award!

  • AIMS Hackathon Against Modern Slavery

    We are proud to announce that an Aula Team has joined the AIMS Hackathon 2025: AI Against Modern Slavery in Supply Chains. This is an issue that touches everyone on earth, and that everyone can take part in fixing.

    We will be examining problems in this space and, among other things, an open data set of 15,000+ annual corporate reports and Walk Free’s Global Slavery Index, for ways to identify, mitigate, and eradicate modern slavery.

    We are seeing what we can do to help. How do you see it? Want to check out the data and let us know? We’ll be sharing, returning, and building collaborations. Thank you and all honour to the Hackathon conveners, and director Adriana Eufrosina Bora:

    Fundación Pasos Libres: project link https://lnkd.in/gdsczfKc
    Mila – Quebec Artificial Intelligence Institute: project link (also includes links to all of the open data sets and studies done so far): https://lnkd.in/dAApAvqu
    QUT (Queensland University of Technology): https://lnkd.in/ehG66MXs

    The business reports database on GitHub, built and hosted by The Future Society: https://lnkd.in/eUa6an9s

    There’s a world-class group of trainers. Numerous other partners are providing support, including The Future Society, Walk Free, UNESCO, the International Committee of the Red Cross – ICRC, Australian Red Cross, and the governments of Australia, Canada, and the UK, with many more to come.

    Get to the heart of the matter by hearing from survivors: Faith, Love, and Human Trafficking: The Story of Karola De la Cuesta. On Goodreads and available at most online retailers in EN and SP (ask your library): https://lnkd.in/eitSUk4c

    If like us you are also working on these issues, we welcome your interest in potential collaborations. Check out “How to Get Involved”.

    Infographic from Respect International: https://lnkd.in/exRb_NNA

    AIMS Hackathon

  • West Island Women’s Center

    Presenting a workshop on navigating the hard and strange questions on AI in society and in our lives.

    More Information

  • AWS blog: “AI judging AI”

    “Picture this: Your team just received 10,000 customer feedback responses. The traditional approach? Weeks of manual analysis. But what if AI could not only analyze this feedback but also validate its own work? Welcome to the world of large language model (LLM) jury systems deployed using Amazon Bedrock. As more organizations embrace generative AI, particularly LLMs for various applications, a new challenge has emerged: ensuring that the output from these AI models aligns with human perspectives and is accurate and relevant to the business context.”

    Read the work on their blog: https://aws.amazon.com/blogs/machine-learning/ai-judging-ai-scaling-unstructured-text-analysis-with-amazon-nova/
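    The jury pattern the post describes can be sketched in a few lines: several independent judges each label the same output, and a majority vote decides. The sketch below is only an illustration of the idea, not the AWS implementation; the judge here is a stub heuristic standing in for a real model call (e.g., via Amazon Bedrock), and all function names and labels are made up for the example.

```python
# Minimal sketch of an LLM "jury": each judge labels a piece of feedback
# independently, and the majority label wins.
from collections import Counter

def judge_relevance(response: str, judge_name: str) -> str:
    """Stub judge: labels a response 'relevant' or 'irrelevant'.
    A real judge would send the response plus a rubric to an LLM."""
    # Trivial keyword heuristic standing in for a model call.
    return "relevant" if "refund" in response.lower() else "irrelevant"

def jury_verdict(response: str, judges: list[str]) -> str:
    """Collect one vote per judge and return the majority label."""
    votes = Counter(judge_relevance(response, j) for j in judges)
    return votes.most_common(1)[0][0]

feedback = "The refund process took three weeks, far too long."
print(jury_verdict(feedback, ["judge-a", "judge-b", "judge-c"]))
```

    In practice each judge would be a different model or prompt, so that disagreement between judges flags outputs needing human review.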

  • WiCyS Vulnerability Disclosure Program

    Proud and happy to see that our Fellow, cybersecurity specialist Temitope Banjo-CISM, will be joining Women in CyberSecurity (WiCyS)’s Vulnerability Disclosure Program.

  • Shaping AI Justice Together: Join Canada’s First Black Consultation on Responsible AI Governance

    Announcement by Jake Okechukwu Effoduh: “I’m convening 40 brilliant minds at the Lincoln Alexander School of Law to ensure Canada’s new AI governance framework centers racial equity, accountability, and justice.

    Calling all Black and racial justice experts working at the forefront of algorithmic fairness: this is your invitation to help define Canada’s AI future.

    If you or someone you know is doing this critical work, step forward. Share widely, and be part of this historic conversation.”

    Read more about this on LinkedIn

  • The Architecture of Responsible AI: Balancing Innovation and Accountability

    Artificial Intelligence (AI) has become a key factor driving change in industries, organizations, and society. While technological capabilities advance rapidly, the mechanisms guiding AI implementation reveal critical structural flaws (Closing the AI accountability gap). There lies an opportunity to collaboratively design systems that leverage AI to augment human capabilities while upholding ethical integrity.

    More Information

  • The EU AI Act – Enabling the Next Generation Internet (NGI).

    How this pioneering AI law enables the NGI’s aim of establishing the key technological building blocks of tomorrow’s Internet, shaping it as an interoperable platform ecosystem that embodies the values Europe holds dear: openness, inclusivity, transparency, privacy, cooperation, and protection of data.

    More Information

  • Democracy Dialogues Lead Boldly, Inspire Globally: Meet the 2025-2026 Obama Foundation Scholars

    Quoting from the organizers: “In a moment that former President Barack Obama describes as a ‘political crisis of the sort that we haven’t seen before,’ we are proud to welcome the 2025–2026 Obama Foundation Scholars to the TMU campus for a special in-person episode of our Democracy Dialogues series. With three Canadians among this year’s global cohort—Victoria Kuketz (TMU’s own), Khalid Hashi, and Michelle Cartier—this is a rare chance to meet inspiring leaders working across disciplines, borders, and systems to develop solutions to some of the most pressing challenges of our times.

    Open to students, community members, and leaders from across all sectors, this event is an opportunity to engage with the Obama Scholars, and reflect on how each of us can respond to the crisis we are experiencing. Engage in meaningful discussion, connect with the next generation of global leaders, and consider the kind of world we want to build – now.

    Be part of a conversation on transforming trust into action and participation into impact.

    Learn more about the Obama Foundation Scholars here and the Democratic Engagement Exchange here.

    About Democracy Dialogues:

    Democracy Dialogues is a public conversation series hosted by the Democratic Engagement Exchange at TMU. Each episode brings together thought leaders and community voices to explore the challenges and possibilities of building a more inclusive and resilient democracy.”

    See the event recording here: https://www.torontomu.ca/arts/news-events/2025/10/democracy-dialogues-lead-boldly--inspire-globally--meet-the-2025/

  • Beyond mere automation: A techno-functional framework for reimagining gen-AI in supply chain operations

    As Generative AI (Gen-AI) continues to evolve rapidly, its potential to transform supply chain operations remains largely unexplored. Focusing on the retail supply chain, this paper presents a taxonomy diagram that categorizes trends in Gen-AI adoption across various functions, thereby mapping current Gen-AI capabilities and identifying immediate opportunities and potential challenges. We identify several key patterns in Gen-AI integration, including the automation of routine cognitive tasks and the enhancement of human decision-making capabilities. We posit that while Gen-AI shows immense promise in improving supply chain efficiency and resilience, successful implementation requires careful consideration of existing workflows, user capabilities, and organizational readiness. Finally, we present a cohesive vision for scaling Gen-AI in supply chain operations. Ultimately, this position paper provides insights both for practitioners looking to implement Gen-AI solutions and for researchers exploring the future of AI in and for supply chain management.

    Read the full workshop report here.

  • Dis/Misinformation, WhatsApp Groups, and Informal Fact-Checking Practices in Namibia

    This chapter contributes to our understanding of the organic and informal user correction practices emerging in WhatsApp groups in Namibia, South Africa, and Zimbabwe. This is important in a context where formal infrastructures for correcting and debunking dis/misinformation have been dominated by top-down initiatives, including platform-centric content moderation and professional fact-checking processes. Unlike social platforms such as Twitter and Facebook, which can perform content moderation and take down offending content, the end-to-end encrypted (E2EE) infrastructure of WhatsApp creates a very different scenario where the same approach is not possible: only the users involved in a conversation have access to the content shared, shielding false and abusive content from detection or removal. As Kuru et al. (2022) opine, the privacy of end-to-end encryption provides a highly closed communication space, posing a different set of challenges for misinformation detection and intervention than more open social media such as Facebook and Twitter. In this regard, false and misleading information on WhatsApp constitutes “a distinctive problem” (Kuru et al. 2022; Melo et al. 2020). As Reis et al. (2020, 2) observe, “the end-to-end encrypted (E2EE) structure of WhatsApp creates a very different scenario” where content moderation and fact-checking at scale are not possible. Fact-checking WhatsApp groups, which have been flagged as major distributors of mis- and disinformation, is equally difficult.

    More Information