Category: Research

In this category:

1. Peer-Reviewed:
   • Research Papers
   • Chapters
   • Conference Proceedings

2. Pre-Prints. Pre-prints are standard in some fields but are not always peer reviewed.

  • Call for Book Chapters: Our AI Problems (Edited Volume)

    We believe that there are no easy answers when it comes to artificial intelligence and society. Across jurisdictions and decision-making bodies, those who develop or enforce regulations are confronted with difficult questions. These challenges arise for many reasons: the issues are often embedded in complex sociotechnical systems, lack straightforward solutions, or involve tensions between competing values and needs.

    The editors hold that AI can be of great service to humanity. At the same time, current regulatory frameworks lag far behind what is needed to ensure just, safe, and equitable access and outcomes.

    Policymakers and subject-matter specialists are increasingly converging on a shared set of especially challenging issues, and the broader public is joining these conversations. Accordingly, the proposed volume is envisioned as addressing the following areas: Economics and Power; Democracy and Trust; Risks Large and Small; Building Bridges and Inclusion; Media and Art; Environment and Health; Justice, Security, and Defense.

    If you are interested in contributing, we would be delighted to hear from you. If you know colleagues or collaborators who might wish to participate, please feel free to share this call with them as well.

    Deadline for chapter abstracts (250–300 words): 15 January 2026
    Deadline for chapter draft submission (8,000–10,000 words; US English; APA style): 31 March 2026
    Deadline for final revisions: 15 May 2026

    Edited by Tammy Mackenzie, Ashley Elizabeth Muller, and Branislav Radeljić

    For more info about the editors, please see: Fellows
    Submissions and questions: Contact Branislav Radeljić, Ph.D., Director of Research.

  • AI and Human Oversight: A Risk-Based Framework for Alignment

    As Artificial Intelligence (AI) technologies continue to advance, protecting human autonomy and promoting ethical decision-making are essential to fostering trust and accountability. Human agency (the capacity of individuals to make informed decisions) should be actively preserved and reinforced by AI systems. This paper examines strategies for designing AI systems that uphold fundamental rights, strengthen human agency, and embed effective human oversight mechanisms. It discusses key oversight models, including Human-in-Command (HIC), Human-in-the-Loop (HITL), and Human-on-the-Loop (HOTL), and proposes a risk-based framework to guide the implementation of these mechanisms. By linking the level of AI model risk to the appropriate form of human oversight, the paper underscores the critical role of human involvement in the responsible deployment of AI, balancing technological innovation with the protection of individual values and rights. In doing so, it aims to ensure that AI technologies are used responsibly, safeguarding individual autonomy while maximizing societal benefits.
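    The core idea of the framework (linking the level of model risk to the appropriate oversight mode) can be pictured as a simple lookup. The sketch below is illustrative only: the four risk tiers and their assignments are assumptions for the example, not the paper's actual mapping.

    ```python
    from enum import Enum

    class RiskLevel(Enum):
        # Illustrative four-tier risk scale (assumed, not the paper's taxonomy).
        MINIMAL = 1
        LIMITED = 2
        HIGH = 3
        UNACCEPTABLE = 4

    class Oversight(Enum):
        NONE_REQUIRED = "no dedicated oversight"
        HOTL = "Human-on-the-Loop"   # humans monitor and can intervene
        HITL = "Human-in-the-Loop"   # humans approve individual decisions
        HIC = "Human-in-Command"     # humans retain full control, including non-deployment

    # Higher model risk -> stronger form of human involvement.
    OVERSIGHT_BY_RISK = {
        RiskLevel.MINIMAL: Oversight.NONE_REQUIRED,
        RiskLevel.LIMITED: Oversight.HOTL,
        RiskLevel.HIGH: Oversight.HITL,
        RiskLevel.UNACCEPTABLE: Oversight.HIC,
    }

    def required_oversight(risk: RiskLevel) -> Oversight:
        """Return the oversight mechanism mandated for a given risk tier."""
        return OVERSIGHT_BY_RISK[risk]
    ```

    The point of the table is that the oversight decision is made per risk tier, not per application: a high-risk model always triggers per-decision human approval under this (assumed) assignment.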

    More Information

  • Generative AI and the Future of News: Examining AI’s Agency, Power, and Authority

    This special issue interrogates how artificial intelligence (AI), particularly generative AI (GenAI), is reshaping journalism at a moment of profound uncertainty for the profession. The rapid rise of GenAI technologies, particularly following the release of tools like ChatGPT, has intensified longstanding tensions between economic precarity, technological innovation, and journalistic values. Across diverse contexts in the Global North and South, articles examine how AI is simultaneously heralded as a source of efficiency, personalization, and newsroom survival, while also feared as a destabilizing force that threatens jobs, erodes professional norms, and concentrates power in the hands of technology corporations.

    More Information

  • AWS blog: “AI judging AI”

    “Picture this: Your team just received 10,000 customer feedback responses. The traditional approach? Weeks of manual analysis. But what if AI could not only analyze this feedback but also validate its own work? Welcome to the world of large language model (LLM) jury systems deployed using Amazon Bedrock. As more organizations embrace generative AI, particularly LLMs for various applications, a new challenge has emerged: ensuring that the output from these AI models aligns with human perspectives and is accurate and relevant to the business context.”

    Read the work on their blog: https://aws.amazon.com/blogs/machine-learning/ai-judging-ai-scaling-unstructured-text-analysis-with-amazon-nova/
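    The jury idea in the excerpt (several independent model judgments aggregated into one verdict) can be sketched without any Bedrock calls. Here the judge labels are placeholder strings; in the blog's setting each label would come from a separate LLM invocation, which is omitted:

    ```python
    from collections import Counter

    def jury_verdict(judgments: list[str]) -> tuple[str, float]:
        """Aggregate independent judge labels by majority vote.

        Returns the winning label and the share of judges that agreed with it.
        Ties are broken arbitrarily by insertion order.
        """
        if not judgments:
            raise ValueError("need at least one judgment")
        counts = Counter(judgments)
        label, votes = counts.most_common(1)[0]
        return label, votes / len(judgments)

    # Three hypothetical judges score one piece of customer feedback.
    label, agreement = jury_verdict(["positive", "positive", "negative"])
    ```

    The agreement ratio is what lets such a system flag low-consensus items for human review rather than trusting any single model's output.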

  • Dis/Misinformation, WhatsApp Groups, and Informal Fact-Checking Practices in Namibia

    This chapter contributes to our understanding of organic and informal user correction practices emerging in WhatsApp groups in Namibia, South Africa, and Zimbabwe. This is important in a context where formal infrastructures for correcting and debunking dis/misinformation have been dominated by top-down initiatives, including platform-centric content moderation practices and professional fact-checking processes. Unlike social platforms such as Twitter and Facebook, which can moderate and take down offending content, the end-to-end encrypted (E2EE) infrastructure of WhatsApp creates a very different scenario in which the same approach is not possible: only the users involved in a conversation have access to the content shared, shielding false and abusive content from detection or removal. As Kuru et al. (2022) opine, the privacy of end-to-end encryption provides a highly closed communication space, posing a different set of challenges for misinformation detection and intervention than more open social media such as Facebook and Twitter. In this regard, false and misleading information on WhatsApp constitutes “a distinctive problem” (Kuru et al. 2022; Melo et al. 2020). As Reis et al. (2020, 2) observe, “the end-to-end encrypted (E2EE) structure of WhatsApp creates a very different scenario” in which content moderation and fact-checking at scale are not possible. Fact-checking WhatsApp groups, which have been flagged as major distributors of mis- and disinformation, is equally difficult.

    More Information

  • Shifting the Gaze? Photojournalism Practices in the Age of Artificial Intelligence

    In this article, we explore the impact of artificial intelligence (AI) technologies on photojournalism in the less-researched contexts of Botswana and Zimbabwe. We aim to understand how AI technologies, which are proliferating across aspects of news production, are affecting one of journalism’s respected and enduring trades: photojournalism. We answer the question: In what ways are AI-driven technologies impacting photojournalism practices? Furthermore, we investigate how photojournalists perceive their roles and the ethical considerations that come to the fore as AI begins to technically influence photojournalism. We deploy an eclectic analytical framework consisting of critical technology theory, disruptive innovation theory, and Baudrillard’s concept of simulation to theorise how AI technologies affect photojournalism in Botswana and Zimbabwe. Data were collected using in-depth interviews with practising photojournalists and …

    More Information

  • Whole-Person Education for AI Engineers: Presented to CEEA (Peer Reviewed)

    This autoethnographic study explores the need for interdisciplinary education spanning both technical and philosophical skills. To that end, it leverages whole-person education as a theoretical approach for AI engineering education, addressing the limitations of current paradigms that prioritize technical expertise over ethical and societal considerations. Drawing on a collaborative autoethnography of fourteen diverse stakeholders, the study identifies key motivations driving the call for change, including the need for global perspectives, bridging the gap between academia and industry, integrating ethics and societal impact, and fostering interdisciplinary collaboration. The findings challenge the myths of technological neutrality and technosaviourism, advocating for a future where AI engineers are equipped not only with technical skills but also with the ethical awareness, social responsibility, and interdisciplinary understanding necessary to navigate the complex challenges of AI development. The study provides valuable insights and recommendations for transforming AI engineering education to ensure the responsible development of AI technologies.

    More Information

  • WIP: Gen AI in Engineering Education and the Da Vinci Cube (Peer Reviewed)

    As generative AI (GenAI) tools rapidly transform the engineering landscape, a critical question emerges: Are current educational innovations adequately preparing engineers for the socio-technical challenges of the future? This work-in-progress paper presents two key contributions. First, we build on prior work presenting a systematic review of over 160 scholarly articles on GenAI implementations in engineering education, revealing a predominant focus on enhancing technical proficiency while often neglecting essential socio-technical competencies. Second, we apply an emerging framework—the da Vinci Cube (dVC)—to support engineering educators in critically evaluating GenAI-driven innovations. The dVC framework extends traditional models of innovation by incorporating three dimensions: the pursuit of knowledge, consideration of use, and contemplation of sentiment. Our analysis suggests that while GenAI tools can improve problem-solving and technical efficiency, engineering education must also address ethical, human-centered, and societal impacts. The dVC framework provides a structured lens for assessing how GenAI tools are integrated into curricula and research, encouraging a more holistic, reflective approach. Ultimately, this paper aims to provoke dialogue on the future of engineering education and to challenge the prevailing assumption that technical skill development alone is sufficient in an AI-mediated world.

    More Information

  • Nature Opinion: The path for AI in poor nations does not need to be paved with billions

    Researchers in low- and middle-income countries show that home-grown artificial-intelligence technologies can be developed, even without large external investments.

    More Information

  • Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session

    A multidisciplinary, reflective autoethnography by some of the people who are building the Permanent Symposium on AI. Includes the history of the project.

    RQ 1: Challenges that unite AI policy & tech

    RQ 2: How to design the PSAI?

    RQ 3: What factors influence the adoption and scalability of the PSAI?

    This is the Flagship project of the Aula Fellowship.

    Read the Poster

  • Work in Progress: Exclusive Rhetoric in AI Conference Mission Statements

    AI conferences are pivotal spaces for knowledge exchange, collaboration, and shaping the trajectory of research, practice, and education. This paper presents preliminary findings from an analysis of AI conference mission statements, investigating how their stated goals affect who is welcomed into AI conversations. We find that many mission statements reflect assumptions that may unintentionally narrow participation and reinforce disciplinary and institutional silos. This limits engagement from a broad range of contributors, including educators, students, working professionals, and even younger users, who are essential to a thriving AI ecosystem. We advocate for clearer framing that supports democratizing and demystifying AI. By broadening participation and intentionally fostering cross-sector and interdisciplinary connections, AI conferences can help unlock more innovation.

    More Information

  • PARADIM: A Platform to Support Research at the Interface of Data Science and Medical Imaging

    This paper describes PARADIM, a digital infrastructure designed to support research at the interface of data science and medical imaging, with a focus on Research Data Management best practices. The platform is built from open-source components and rooted in the FAIR principles through strict compliance with the DICOM standard. It addresses key needs in data curation, governance, privacy, and scalable resource management. Supporting every stage of the data science discovery cycle, the platform offers robust functionalities for user identity and access management, data de-identification, storage, annotation, as well as model training and evaluation. Rich metadata are generated all along the research lifecycle to ensure the traceability and reproducibility of results. PARADIM hosts several medical image collections and allows the automation of large-scale, computationally intensive pipelines (e.g., automatic segmentation, dose calculations, AI model evaluation). The platform fills a gap at the interface of data science and medical imaging, where digital infrastructures are key in the development, evaluation, and deployment of innovative solutions in the real world.

    More Information