This special issue interrogates how artificial intelligence (AI), particularly generative AI (GenAI), is reshaping journalism at a moment of profound uncertainty for the profession. The rapid rise of GenAI technologies, especially following the release of tools like ChatGPT, has intensified longstanding tensions between economic precarity, technological innovation, and journalistic values. Across diverse contexts in the Global North and South, articles examine how AI is simultaneously heralded as a source of efficiency, personalization, and newsroom survival, and feared as a destabilizing force that threatens jobs, erodes professional norms, and concentrates power in the hands of technology corporations.
Category: 4/ Fellows' Projects
Aula Fellow Project
-

AI and Human Oversight: A Risk-Based Framework for Alignment
As Artificial Intelligence (AI) technologies continue to advance, protecting human autonomy and promoting ethical decision-making are essential to fostering trust and accountability. Human agency (the capacity of individuals to make informed decisions) should be actively preserved and reinforced by AI systems. This paper examines strategies for designing AI systems that uphold fundamental rights, strengthen human agency, and embed effective human oversight mechanisms. It discusses key oversight models, including Human-in-Command (HIC), Human-in-the-Loop (HITL), and Human-on-the-Loop (HOTL), and proposes a risk-based framework to guide the implementation of these mechanisms. By linking the level of AI model risk to the appropriate form of human oversight, the paper underscores the critical role of human involvement in the responsible deployment of AI, balancing technological innovation with the protection of individual values and rights. In doing so, it aims to ensure that AI technologies are used responsibly, safeguarding individual autonomy while maximizing societal benefits.
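The core idea of the framework — linking a model's risk tier to the minimum appropriate form of human oversight — can be sketched in code. The tiers and the specific mapping below are illustrative assumptions for this note, not the paper's actual scheme:

```python
from enum import Enum


class Risk(Enum):
    """Illustrative AI model risk tiers (assumed, not from the paper)."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3


class Oversight(Enum):
    """Oversight models discussed in the paper."""
    HOTL = "Human-on-the-Loop"   # humans monitor and can intervene
    HITL = "Human-in-the-Loop"   # humans approve individual decisions
    HIC = "Human-in-Command"     # humans retain full command and veto power


def required_oversight(risk: Risk) -> Oversight:
    """Map a model's risk tier to a minimum oversight mechanism.

    Higher-risk systems demand tighter human involvement; this
    mapping is a hypothetical example of the paper's risk-based approach.
    """
    if risk is Risk.MINIMAL:
        return Oversight.HOTL
    if risk is Risk.LIMITED:
        return Oversight.HITL
    return Oversight.HIC
```

For example, under this sketch a high-risk system (say, one making consequential decisions about individuals) would require Human-in-Command oversight, while a low-risk recommender could run under Human-on-the-Loop monitoring.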
-

Aula Fellow Emmanuel Taiwo named a Vanier Scholar
We are proud to announce that Aula Fellow Emmanuel Taiwo has been named a recipient of the Vanier Canada Graduate Scholarship Award for 2025.
From their site: “The Vanier award recognizes PhD students at Canadian universities who demonstrate excellence across three key areas, namely, leadership, academic performance and research potential. Widely regarded as one of the most prestigious scholarship awards at the doctoral level, Vanier Scholars are seen as some of the best of the best doctoral researchers in Canada.”
IMPACT Lab doctoral candidate named recipient of prestigious Vanier Scholarship Award!
-

AWS blog: “AI judging AI”
“Picture this: Your team just received 10,000 customer feedback responses. The traditional approach? Weeks of manual analysis. But what if AI could not only analyze this feedback but also validate its own work? Welcome to the world of large language model (LLM) jury systems deployed using Amazon Bedrock. As more organizations embrace generative AI, particularly LLMs for various applications, a new challenge has emerged: ensuring that the output from these AI models aligns with human perspectives and is accurate and relevant to the business context.”
Read the work on their blog: https://aws.amazon.com/blogs/machine-learning/ai-judging-ai-scaling-unstructured-text-analysis-with-amazon-nova/
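At its simplest, an LLM jury aggregates independent judgments from several models and only accepts a label when enough judges agree. The sketch below shows that aggregation step in isolation; the label names, the quorum rule, and the idea that each vote would come from a separate model invocation (e.g., distinct Bedrock models) are assumptions for illustration, not the blog post's implementation:

```python
from collections import Counter


def jury_verdict(judgments: list[str], quorum: float = 0.5) -> str:
    """Aggregate independent LLM judge labels by majority vote.

    Each entry in `judgments` is one judge's label for the same item
    (in practice, each could come from a separate model invocation).
    Returns 'no_consensus' unless the top label exceeds the quorum
    fraction of votes.
    """
    if not judgments:
        return "no_consensus"
    label, count = Counter(judgments).most_common(1)[0]
    return label if count / len(judgments) > quorum else "no_consensus"
```

A unanimous or majority jury returns the winning label; a split jury (e.g., one "positive" and one "negative" vote) returns `no_consensus`, flagging the item for human review — which is where the "AI judging AI" validation loop hands back to people.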
-

WiCyS Vulnerability Disclosure Program
We are proud and happy to see that our Fellow, cybersecurity specialist Temitope Banjo-CISM, will be joining Women in CyberSecurity (WiCyS)'s Vulnerability Disclosure Program.





