Category: Research

In this category:

1. Peer-Reviewed:

  • Research Papers
  • Chapters
  • Conference Proceedings

2. Pre-Prints: standard in some fields, though not always peer reviewed.

  • AI and Human Oversight: A Risk-Based Framework for Alignment

    As Artificial Intelligence (AI) technologies continue to advance, protecting human autonomy and promoting ethical decision-making are essential to fostering trust and accountability. Human agency (the capacity of individuals to make informed decisions) should be actively preserved and reinforced by AI systems. This paper examines strategies for designing AI systems that uphold fundamental rights, strengthen human agency, and embed effective human oversight mechanisms. It discusses key oversight models, including Human-in-Command (HIC), Human-in-the-Loop (HITL), and Human-on-the-Loop (HOTL), and proposes a risk-based framework to guide the implementation of these mechanisms. By linking the level of AI model risk to the appropriate form of human oversight, the paper underscores the critical role of human involvement in the responsible deployment of AI, balancing technological innovation with the protection of individual values and rights. In doing so, it aims to ensure that AI technologies are used responsibly, safeguarding individual autonomy while maximizing societal benefits.

    More Information
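The paper's central move is linking a model's risk level to an oversight mechanism. A minimal sketch of how such a risk-to-oversight mapping might look (the tier names and assignments here are illustrative assumptions, not the paper's normative recommendation):

```python
from enum import Enum

class Oversight(Enum):
    """Oversight models discussed in the paper."""
    HIC = "Human-in-Command"    # human retains full decision authority
    HITL = "Human-in-the-Loop"  # human approves each individual decision
    HOTL = "Human-on-the-Loop"  # human monitors and can intervene

def oversight_for(risk: str) -> Oversight:
    """Map an AI system's risk tier to an oversight model.

    Hypothetical mapping: higher risk demands tighter human control.
    """
    mapping = {
        "high": Oversight.HIC,
        "medium": Oversight.HITL,
        "low": Oversight.HOTL,
    }
    if risk not in mapping:
        raise ValueError(f"unknown risk tier: {risk!r}")
    return mapping[risk]
```

The design choice the sketch encodes is the paper's core claim: oversight is not one-size-fits-all but should scale with the stakes of the deployment.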

  • Generative AI and the Future of News: Examining AI’s Agency, Power, and Authority

    This special issue interrogates how artificial intelligence (AI), particularly generative AI (GenAI), is reshaping journalism at a moment of profound uncertainty for the profession. The rapid rise of GenAI technologies, particularly following the release of tools like ChatGPT, has intensified longstanding tensions between economic precarity, technological innovation, and journalistic values. Across diverse contexts in the Global North and South, articles examine how AI is simultaneously heralded as a source of efficiency, personalization, and newsroom survival, while also feared as a destabilizing force that threatens jobs, erodes professional norms, and concentrates power in the hands of technology corporations.

    More Information

  • Dis/Misinformation, WhatsApp Groups, and Informal Fact-Checking Practices in Namibia

This chapter contributes to our understanding of organic and informal user correction practices emerging in WhatsApp groups in Namibia, South Africa, and Zimbabwe. This is important in a context where formal infrastructures for correcting and debunking dis/misinformation have been dominated by top-down initiatives, including platform-centric content moderation practices and professional fact-checking processes. Unlike social platforms such as Twitter and Facebook, which can perform content moderation and take down offending content, the end-to-end encrypted (E2EE) infrastructure of WhatsApp creates a very different scenario where the same approach is not possible: only the users involved in a conversation have access to the content shared, shielding false and abusive content from being detected or removed. As Kuru et al. (2022) opine, the privacy of end-to-end encryption provides a highly closed communication space, posing a different set of challenges for misinformation detection and intervention than more open social media such as Facebook and Twitter. In this regard, false and misleading information on WhatsApp constitutes "a distinctive problem" (Kuru et al. 2022; Melo et al. 2020). As Reis et al. (2020, 2) observe, "the end-to-end encrypted (E2EE) structure of WhatsApp creates a very different scenario" where content moderation and fact-checking at scale are not possible. Fact-checking WhatsApp groups, which have been flagged as major distributors of mis- and disinformation, is equally difficult.

    More Information

  • Canary in the Mine: An LLM Augmented Survey of Disciplinary Complaints to the Ordre des ingénieurs du Québec (OIQ) (Peer Reviewed)

This study investigates disciplinary incidents involving engineers in Quebec, shedding light on critical gaps in engineering education. Through a comprehensive review of the Ordre des ingénieurs du Québec (OIQ)'s disciplinary register for 2010 to 2024, researchers from engineering education and human resources management in technological development laboratories conducted a thematic analysis of reported incidents to identify patterns, trends, and areas for improvement. The analysis aims to uncover the most common types of disciplinary incidents, their underlying causes, and the implications for how engineering education addresses (or fails to address) these issues. Our findings identify recurring themes, analyze root causes, and offer recommendations for engineering educators and students to mitigate similar incidents. This research has implications for informing curriculum development, professional development, and performance evaluation, ultimately fostering a culture of professionalism and ethical responsibility in engineering. By providing empirical evidence of disciplinary incidents and their causes, this study contributes to evidence-based practices for engineering education and professional development, enhancing the engineering education community's understanding of professionalism and ethics.

    More Information

  • Shifting the Gaze? Photojournalism Practices in the Age of Artificial Intelligence

In this article, we explore the impact of artificial intelligence (AI) technologies on photojournalism in the less-researched contexts of Botswana and Zimbabwe. We aim to understand how AI technologies, now proliferating across news production, are affecting one of journalism's respected and enduring trades: photojournalism. We answer the question: in what ways are AI-driven technologies impacting photojournalism practices? Furthermore, we investigate how photojournalists perceive their roles and the ethical considerations that come to the fore as AI begins to technically influence photojournalism. We deploy an eclectic analytical framework consisting of critical technology theory, disruptive innovation theory, and Baudrillard's concept of simulation to theorise how AI technologies affect photojournalism in Botswana and Zimbabwe. Data were collected using in-depth interviews with practising photojournalists and …

    More Information

  • Whole-Person Education for AI Engineers: Presented to CEEA (Peer Reviewed)

This autoethnographic study explores the need for interdisciplinary education spanning both technical and philosophical skills. As such, it leverages whole-person education as a theoretical approach needed in AI engineering education to address the limitations of current paradigms that prioritize technical expertise over ethical and societal considerations. Drawing on a collaborative autoethnography of fourteen diverse stakeholders, the study identifies key motivations driving the call for change, including the need for global perspectives, bridging the gap between academia and industry, integrating ethics and societal impact, and fostering interdisciplinary collaboration. The findings challenge the myths of technological neutrality and technosaviourism, advocating for a future where AI engineers are equipped not only with technical skills but also with the ethical awareness, social responsibility, and interdisciplinary understanding necessary to navigate the complex challenges of AI development. The study provides valuable insights and recommendations for transforming AI engineering education to ensure the responsible development of AI technologies.

    More Information

  • WIP: Gen AI in Engineering Education and the Da Vinci Cube (Peer Reviewed)

    As generative AI (GenAI) tools rapidly transform the engineering landscape, a critical question emerges: Are current educational innovations adequately preparing engineers for the socio-technical challenges of the future? This work-in-progress paper presents two key contributions. First, we build on prior work presenting a systematic review of over 160 scholarly articles on GenAI implementations in engineering education, revealing a predominant focus on enhancing technical proficiency while often neglecting essential socio-technical competencies. Second, we apply an emerging framework—the da Vinci Cube (dVC)—to support engineering educators in critically evaluating GenAI-driven innovations. The dVC framework extends traditional models of innovation by incorporating three dimensions: the pursuit of knowledge, consideration of use, and contemplation of sentiment. Our analysis suggests that while GenAI tools can improve problem-solving and technical efficiency, engineering education must also address ethical, human-centered, and societal impacts. The dVC framework provides a structured lens for assessing how GenAI tools are integrated into curricula and research, encouraging a more holistic, reflective approach. Ultimately, this paper aims to provoke dialogue on the future of engineering education and to challenge the prevailing assumption that technical skill development alone is sufficient in an AI-mediated world.

    More Information

  • Nature Opinion: The path for AI in poor nations does not need to be paved with billions

    Researchers in low- and middle-income countries show that home-grown artificial-intelligence technologies can be developed, even without large external investments.

    More Information

  • Work in Progress: Exclusive Rhetoric in AI Conference Mission Statements

AI conferences are pivotal spaces for knowledge exchange, collaboration, and shaping the trajectory of research, practice, and education. This paper presents preliminary findings from an analysis of AI conference mission statements, investigating how their stated goals affect who is welcomed into AI conversations. We find that many mission statements reflect assumptions that may unintentionally narrow participation and reinforce disciplinary and institutional silos. This limits engagement from a broad range of contributors, including educators, students, working professionals, and even younger users, who are essential to a thriving AI ecosystem. We advocate for clearer framing that supports democratizing and demystifying AI. By broadening participation and intentionally fostering cross-sector and interdisciplinary connections, AI conferences can help unlock more innovation.

    More Information

  • Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session

    A multidisciplinary, reflective autoethnography by some of the people who are building the Permanent Symposium on AI. Includes the history of the project.

RQ 1: What challenges unite AI policy and technology?

RQ 2: How should the PSAI be designed?

RQ 3: What factors influence the adoption and scalability of the PSAI?

    This is the Flagship project of the Aula Fellowship.

    Read the Poster

  • PARADIM: A Platform to Support Research at the Interface of Data Science and Medical Imaging

    This paper describes PARADIM, a digital infrastructure designed to support research at the interface of data science and medical imaging, with a focus on Research Data Management best practices. The platform is built from open-source components and rooted in the FAIR principles through strict compliance with the DICOM standard. It addresses key needs in data curation, governance, privacy, and scalable resource management. Supporting every stage of the data science discovery cycle, the platform offers robust functionalities for user identity and access management, data de-identification, storage, annotation, as well as model training and evaluation. Rich metadata are generated all along the research lifecycle to ensure the traceability and reproducibility of results. PARADIM hosts several medical image collections and allows the automation of large-scale, computationally intensive pipelines (e.g., automatic segmentation, dose calculations, AI model evaluation). The platform fills a gap at the interface of data science and medical imaging, where digital infrastructures are key in the development, evaluation, and deployment of innovative solutions in the real world.

    More Information
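The de-identification and traceability steps described above can be pictured with a toy, dictionary-based sketch. The field names echo DICOM attributes, but the function and its log format are hypothetical illustrations, not PARADIM's actual implementation:

```python
import copy

# Tags treated as direct identifiers; an illustrative subset only.
IDENTIFYING_KEYS = {"PatientName", "PatientID", "PatientBirthDate"}

def deidentify(record: dict, placeholder: str = "ANONYMIZED") -> dict:
    """Return a copy of an image-metadata record with identifying
    fields replaced, plus a provenance note for traceability."""
    out = copy.deepcopy(record)
    removed = sorted(IDENTIFYING_KEYS & out.keys())
    for key in removed:
        out[key] = placeholder
    # Record what was changed, in the spirit of the rich metadata
    # the platform generates for reproducibility.
    out["DeidentificationLog"] = {"replaced_fields": removed}
    return out
```

For example, `deidentify({"PatientName": "Doe^Jane", "Modality": "CT"})` masks the name, leaves the modality untouched, and attaches a log of the replaced fields; the original record is not mutated.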

  • The Philanthrocapitalism of Google News Initiative in Africa, Latin America, and the Middle East – Empirical Reflections

In recent years, media organizations globally have increasingly benefited from financial support from digital platforms. In 2018, Google launched the Google News Initiative (GNI) Innovation Challenge aimed at bolstering journalism by encouraging innovation in media organizations. This study, conducted through 36 in-depth interviews with GNI beneficiaries in Africa, Latin America, and the Middle East, reveals that despite its narrative of enhancing technological innovation for the media's future, this scheme inadvertently fosters dependence and extends the philanthrocapitalism concept to the media industry on a global scale. Employing a theory-building approach, our research underscores the emergence of a new form of 'philanthrocapitalism' that prompts critical questions about the dependency of media organizations on big tech and the motives of these tech giants in their evolving relationship with such institutions. We also demonstrate that the GNI Innovation Challenge, while ostensibly promoting sustainable business models through technological innovation, poses challenges for organizations striving to sustain and develop these projects. The proposed path to sustainability by the GNI is found to be indirect and difficult for organizations to navigate, hindering their adoption of new technologies. Additionally, the study highlights the creation of a dependency syndrome among news organizations, driven by the perception that embracing GNI initiatives is crucial for survival in the digital age. Ultimately, the research contributes valuable insights to the understanding of these issues, aiming to raise awareness among relevant stakeholders and conceptualize philanthrocapitalism through a new lens.

    More Information