Hard Questions: Inclusion

  • Obama Foundation Fellow: Victoria Kuketz

    We are proud to announce Aula Fellow Victoria Kuketz’s recent appointment as an Obama Fellow. Follow Victoria for news of her Fellowship this year, during which she will concentrate on inclusion and rational governance.

    More Information

  • Work in Progress: Exclusive Rhetoric in AI Conference Mission Statements

    AI conferences are pivotal spaces for knowledge exchange, collaboration, and shaping the trajectory of research, practice, and education. This paper presents preliminary findings from an analysis of AI conference mission statements, investigating how their stated goals affect who is welcomed into AI conversations. We find that many mission statements reflect assumptions that may unintentionally narrow participation and reinforce disciplinary and institutional silos. This limits engagement from a broad range of contributors—including educators, students, working professionals, and even younger users—who are essential to a thriving AI ecosystem. We advocate for clearer framing that supports democratizing and demystifying AI. By broadening participation and intentionally fostering cross-sector and interdisciplinary connections, AI conferences can help unlock more innovation.

    More Information

  • Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session

    A multidisciplinary, reflective autoethnography by some of the people building the Permanent Symposium on AI (PSAI), including the history of the project.

    RQ 1: What challenges unite AI policy & tech?

    RQ 2: How to design the PSAI?

    RQ 3: What factors influence the adoption and scalability of the PSAI?

    This is the Flagship project of the Aula Fellowship.

    Read the Poster

  • Saptarishi Futures: An Indian Intergenerational Wayfinding Framework

    An Intergenerational Future Study model contextualized within Indian mythology, folklore, and generational value systems. The model fuses ancient cultural wisdom with modern anticipatory governance to imagine just, inclusive, and regenerative futures across generations.

    More Information

  • Towards Real Diversity and Gender Equality in Artificial Intelligence

    This is an Advancement Report for the Global Partnership on Artificial Intelligence (GPAI) project “Towards Real Diversity and Gender Equality in Artificial Intelligence: Evidence-Based Promising Practices and Recommendations.” It describes, at a high level, the strategy, approach, and progress of the project thus far in its efforts to provide governments and other stakeholders of the artificial intelligence (AI) ecosystem with recommendations, tools, and promising practices to integrate Diversity and Gender Equality (DGE) considerations into the AI life cycle and related policy-making. The report starts with an overview of the human rights perspective, which serves as the framework upon which this project builds. AI systems can pose risks and harms to global populations, and disproportionate risks and harms to women and other marginalized populations when these groups are not considered throughout the AI life cycle; acknowledging these domains makes clear the need to address such inequalities.

    More Information

  • World AI: Women in AI

    The Aula Fellowship was present to discuss social and environmental concerns related to the marketing of AI as a panacea. The event brings together technology companies, civil society, and decision-makers. We were able to connect with other non-profits and universities in this sector and to build collaborations with several attendees and presenters.

    More Information

  • ‘Mind the gap’: artificial intelligence and journalism training in Southern African journalism schools

    This article examines journalism schools’ (J-schools’) responses to the artificial intelligence (AI) ‘disruption’. It offers a critical, exploratory examination of how J-schools in Southern Africa are responding to the AI wave in their journalism curriculums. We answer the question: how are Southern African J-schools responding to AI in their curriculums? Using a disruptive innovation theoretical lens, and through a documentary review of university teaching initiatives and accredited journalism curriculums augmented by in-depth interviews, we demonstrate that AI has opened up new horizons for journalism training in multi-dimensional ways. However, it has also brought challenges, including covert forms of resistance to AI integration by some journalism educators. Furthermore, resource constraints and the obduracy of J-schools’ curriculums contribute to the slow introduction of AI in J-schools.

    More Information

  • Options and Motivations for International AI Benefit Sharing

    Advanced AI systems could generate substantial economic and other societal benefits, but these benefits may not be widely shared by default. For a range of reasons, a number of prominent actors and institutions have called for efforts to expand access to AI’s benefits. In this report, we define the concept of international AI benefit sharing (“benefit sharing”) as efforts to support and accelerate international access to AI’s economic or broader societal benefits. Calls for benefit sharing typically invoke at least one of three motivations: (1) supporting inclusive economic growth and sustainable development, (2) fostering technological self-determination in low- and middle-income countries, and (3) advancing geopolitical objectives, including strengthening international partnerships on AI governance. Notably, as a subset of the third motive, some powerful actors – like the US government – may support benefit sharing as a tool to further their economic and national security interests. Benefit sharing could be implemented by (1) sharing AI resources (e.g., computing power or data), (2) expanding access to AI systems, or (3) transferring a portion of the financial proceeds from AI commercialisation or AI-driven economic growth. Depending on the objective that benefit sharing is intended to achieve, each of these approaches offers distinct opportunities and implementation challenges. These challenges include the potential for some benefit-sharing options to raise security concerns and increase certain global risks. Actors interested in benefit sharing may consider implementing low-risk forms of benefit sharing immediately, while launching cooperative international discussions to develop more comprehensive, mutually beneficial initiatives.

    More Information

  • IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding

    Spoken by more than 1.5 billion people in the Indian subcontinent, Indic languages present unique challenges and opportunities for natural language processing (NLP) research due to their rich cultural heritage, linguistic diversity, and complex structures. IndicMMLU-Pro is a comprehensive benchmark designed to evaluate Large Language Models (LLMs) across Indic languages, building upon the MMLU-Pro (Massive Multitask Language Understanding) framework. Covering major languages such as Hindi, Bengali, Gujarati, Marathi, Kannada, Punjabi, Tamil, Telugu, and Urdu, our benchmark addresses the unique challenges and opportunities presented by the linguistic diversity of the Indian subcontinent. This benchmark encompasses a wide range of tasks in language comprehension, reasoning, and generation, meticulously crafted to capture the intricacies of Indian languages. IndicMMLU-Pro provides a standardized evaluation framework to push the research boundaries in Indic language AI, facilitating the development of more accurate, efficient, and culturally sensitive models. This paper outlines the benchmark’s design principles, task taxonomy, and data collection methodology, and presents baseline results from state-of-the-art multilingual models. (A sketch of the multiple-choice evaluation loop such a benchmark implies follows this entry.)

    More Information
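
    As a concrete illustration of how an MMLU-style benchmark is typically consumed, the sketch below shows a per-language multiple-choice evaluation loop in Python. The item fields, the ask_model stub, and the language codes are illustrative assumptions, not IndicMMLU-Pro’s actual data schema or API.

        # Minimal sketch of a multiple-choice evaluation loop of the kind an
        # MMLU-style benchmark implies. The item format and the ask_model stub
        # are illustrative assumptions, not the benchmark's published schema.
        from dataclasses import dataclass

        @dataclass
        class MCItem:
            question: str        # question text in the target Indic language
            options: list[str]   # candidate answers (MMLU-Pro items use up to 10)
            answer_index: int    # index of the correct option
            language: str        # e.g. "hi" for Hindi, "ta" for Tamil

        def ask_model(item: MCItem) -> int:
            """Stub: send the question and options to the model under test and
            return the index of the option it chooses. Replace with a real client."""
            raise NotImplementedError

        def accuracy_by_language(items: list[MCItem]) -> dict[str, float]:
            """Per-language accuracy, the headline number such benchmarks report."""
            correct: dict[str, int] = {}
            total: dict[str, int] = {}
            for item in items:
                total[item.language] = total.get(item.language, 0) + 1
                if ask_model(item) == item.answer_index:
                    correct[item.language] = correct.get(item.language, 0) + 1
            return {lang: correct.get(lang, 0) / n for lang, n in total.items()}

    Reporting accuracy per language, rather than a single pooled score, is what makes coverage gaps across the covered Indic languages visible.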

  • Data Journalism Appropriation in African Newsrooms: A Comparative Study of Botswana and Namibia

    Data journalism has received relatively limited academic attention in Southern Africa, with even less focus on smaller countries such as Botswana and Namibia. This article seeks to address this gap by exploring how selected newsrooms in these countries have engaged with data journalism, the ways it has enhanced their daily news reporting, and its impact on newsgathering and production routines. The study reveals varied patterns in the adoption of technology for data journalism across the two contexts. While certain skills remain underdeveloped, efforts to train journalists in data journalism have been evident. These findings support the argument that in emerging economies, the uneven adoption of data journalism technologies is influenced by exposure to these tools and practices.

    More Information

  • Potential and perils of large language models as judges of unstructured textual data

    Rapid advancements in large language models (LLMs) have unlocked remarkable capabilities for processing and summarizing unstructured text data. This has implications for the analysis of rich, open-ended datasets, such as survey responses, where LLMs hold the promise of efficiently distilling key themes and sentiments. However, as organizations increasingly turn to these powerful AI systems to make sense of textual feedback, a critical question arises: can we trust LLMs to accurately represent the perspectives contained within these text-based datasets? While LLMs excel at generating human-like summaries, there is a risk that their outputs may inadvertently diverge from the true substance of the original responses. Discrepancies between the LLM-generated outputs and the actual themes present in the data could lead to flawed decision-making, with far-reaching consequences for organizations. This research investigates the effectiveness of LLM-as-judge models in evaluating the thematic alignment of summaries generated by other LLMs. We utilized an Anthropic Claude model to generate thematic summaries from open-ended survey responses, with Amazon’s Titan Express, Nova Pro, and Meta’s Llama serving as judges. This LLM-as-judge approach was compared to human evaluations using Cohen’s kappa, Spearman’s rho, and Krippendorff’s alpha, validating a scalable alternative to traditional human-centric evaluation methods. Our findings reveal that while LLM-as-judge models offer a scalable solution comparable to human raters, humans may still excel at detecting subtle, context-specific nuances. Our research contributes to the growing body of knowledge on AI-assisted text analysis. Further, we provide recommendations for future research, emphasizing the need for careful consideration when generalizing LLM-as-judge models across various contexts and use cases. (A small worked example of the agreement statistics used here follows this entry.)

    More Information
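
    As a small worked example of the agreement statistics named above, the sketch below compares a set of LLM-judge ratings against human ratings using Cohen’s kappa and Spearman’s rho. The rating values are toy placeholders rather than data from the study, and Krippendorff’s alpha (which additionally handles multiple raters and missing values) is omitted to keep the example to widely available scikit-learn and SciPy calls.

        # Toy comparison of LLM-as-judge ratings against human ratings using two of
        # the agreement statistics named in the abstract. The rating values below
        # are placeholders, not data from the study.
        from sklearn.metrics import cohen_kappa_score
        from scipy.stats import spearmanr

        # Hypothetical 1-5 thematic-alignment ratings for eight summaries.
        human_ratings = [5, 4, 4, 2, 3, 5, 1, 4]
        judge_ratings = [5, 4, 3, 2, 3, 4, 2, 4]

        # Cohen's kappa treats ratings as labels and corrects for chance agreement;
        # quadratic weights penalise large disagreements more heavily.
        kappa = cohen_kappa_score(human_ratings, judge_ratings, weights="quadratic")

        # Spearman's rho asks whether the judge ranks summaries in the same order
        # as the human raters, regardless of absolute rating values.
        rho, p_value = spearmanr(human_ratings, judge_ratings)

        print(f"Weighted Cohen's kappa: {kappa:.2f}")
        print(f"Spearman's rho: {rho:.2f} (p = {p_value:.3f})")

    High agreement on both statistics would suggest the judge model tracks human judgments of thematic alignment; the study’s caution is that aggregate agreement can still mask the subtle, context-specific nuances that human raters catch.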

  • Towards regulating AI: A natural, labour and capital resources perspective

    Policymakers looking at artificial intelligence (AI) applications are thinking about what we as a society want to achieve and what we need to protect, yet it is not commonly known that AI applications require intensive natural, labour, and capital resources.

    More Information