Category: Hard Questions: Society

  • AI and Human Oversight: A Risk-Based Framework for Alignment

As Artificial Intelligence (AI) technologies continue to advance, protecting human autonomy and promoting ethical decision-making are essential to fostering trust and accountability. Human agency (the capacity of individuals to make informed decisions) should be actively preserved and reinforced by AI systems. This paper examines strategies for designing AI systems that uphold fundamental rights, strengthen human agency, and embed effective human oversight mechanisms. It discusses key oversight models, including Human-in-Command (HIC), Human-in-the-Loop (HITL), and Human-on-the-Loop (HOTL), and proposes a risk-based framework to guide their implementation. By matching the level of AI model risk to the appropriate form of human oversight, the paper underscores the critical role of human involvement in the responsible deployment of AI, balancing technological innovation with the protection of individual autonomy, values, and rights while maximizing societal benefits.

    More Information

  • Obama Foundation Fellow: Victoria Kuketz

We are proud to announce Aula Fellow Victoria Kuketz’s recent appointment as an Obama Foundation Fellow. Follow Victoria for news of her Fellowship this year, during which she will focus on inclusion and rational governance.

    More Information

  • Oui, mais je LLM !

Generative AI plays tricks on us, manipulating our perception of truth while trying to become our confidant and creating a relationship of dependence. But we, in turn, can use it to extract poorly secured privileged information, applying tactics adapted from social engineering.

Our lack of experience with this technology, combined with the rush to deploy it everywhere, exposes us to new risks.

I present an overview of core cybersecurity concepts revisited for generative AI, the various risks these algorithms pose, and prevention advice for integrating them soundly into our computer systems and professional practice.

    More Information

  • West Island Women’s Center

Presenting a workshop on navigating the hard and strange questions about AI in society and in our lives.

    More Information

  • The Architecture of Responsible AI: Balancing Innovation and Accountability

Artificial Intelligence (AI) has become a key driver of change in industries, organizations, and society. While technological capabilities advance rapidly, the mechanisms guiding AI implementation reveal critical structural flaws (Closing the AI accountability gap). This presents an opportunity to collaboratively design systems that leverage AI to augment human capabilities while upholding ethical integrity.

    More Information

  • The EU AI Act – Enabling the Next Generation Internet (NGI).

How the pioneering AI law supports the NGI’s aim of establishing the key technological building blocks of tomorrow’s Internet and shaping it as an interoperable platform ecosystem that embodies the values Europe holds dear: openness, inclusivity, transparency, privacy, cooperation, and protection of data.

    More Information

  • Whole-Person Education for AI Engineers: Presented to CEEA (Peer Reviewed)

This autoethnographic study explores the need for interdisciplinary education spanning both technical and philosophical skills. It leverages whole-person education as a theoretical approach for AI engineering education, addressing the limitations of current paradigms that prioritize technical expertise over ethical and societal considerations. Drawing on a collaborative autoethnography of fourteen diverse stakeholders, the study identifies key motivations driving the call for change, including the need for global perspectives, bridging the gap between academia and industry, integrating ethics and societal impact, and fostering interdisciplinary collaboration. The findings challenge the myths of technological neutrality and technosaviourism, advocating for a future where AI engineers are equipped not only with technical skills but also with the ethical awareness, social responsibility, and interdisciplinary understanding necessary to navigate the complex challenges of AI development. The study provides insights and recommendations for transforming AI engineering education to ensure the responsible development of AI technologies.

    More Information

  • Canary in the Mine: An LLM Augmented Survey of Disciplinary Complaints to the Ordre des ingénieurs du Québec (OIQ) (Peer Reviewed)

This study investigates disciplinary incidents involving engineers in Quebec, shedding light on critical gaps in engineering education. Through a comprehensive review of the disciplinary register of the Ordre des ingénieurs du Québec (OIQ) for 2010 to 2024, researchers from engineering education and human resources management in technological development laboratories conducted a thematic analysis of reported incidents to identify patterns, trends, and areas for improvement. The analysis aims to uncover the most common types of disciplinary incidents, their underlying causes, and their implications for how engineering education addresses (or fails to address) these issues. Our findings identify recurring themes, analyze root causes, and offer recommendations for engineering educators and students to mitigate similar incidents. This research has implications for curriculum development, professional development, and performance evaluation, ultimately fostering a culture of professionalism and ethical responsibility in engineering. By providing empirical evidence of disciplinary incidents and their causes, this study contributes to evidence-based practices for engineering education and professional development, enhancing the engineering education community’s understanding of professionalism and ethics.

    More Information

  • Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session

    A multidisciplinary, reflective autoethnography by some of the people who are building the Permanent Symposium on AI. Includes the history of the project.

    RQ 1: Challenges that unite AI policy & tech

    RQ 2: How to design the PSAI?

    RQ 3: What factors influence the adoption and scalability of the PSAI?

    This is the Flagship project of the Aula Fellowship.

    Read the Poster

  • Nature Opinion: The path for AI in poor nations does not need to be paved with billions

    Researchers in low- and middle-income countries show that home-grown artificial-intelligence technologies can be developed, even without large external investments.

    More Information

  • Saptarishi Futures: An Indian Intergenerational Wayfinding Framework

An Intergenerational Future Study model contextualized within Indian mythology, folklore, and generational value systems. The framework fuses ancient cultural wisdom with modern anticipatory governance to imagine just, inclusive, and regenerative futures across generations.

    More Information

  • Yakshi: A Transmedia Narrative Exploration

    At Smart Story Labs, we are excited to announce a new GitHub project that dives into the transmedia narrative of Yakshi – reimagining this South Asian folklore spirit as a lens to explore cross-cultural storytelling, feminist hauntings, and ecological narratives.

    More Information