This special issue interrogates how artificial intelligence (AI), particularly generative AI (GenAI), is reshaping journalism at a moment of profound uncertainty for the profession. The rapid rise of GenAI technologies, especially following the release of tools like ChatGPT, has intensified longstanding tensions between economic precarity, technological innovation, and journalistic values. Across diverse contexts in the Global North and South, the articles examine how AI is simultaneously heralded as a source of efficiency, personalization, and newsroom survival, and feared as a destabilizing force that threatens jobs, erodes professional norms, and concentrates power in the hands of technology corporations.
Hard Questions: Media
-

Dis/Misinformation, WhatsApp Groups, and Informal Fact-Checking Practices in Namibia
This chapter contributes to our understanding of the organic and informal user correction practices emerging in WhatsApp groups in Namibia, South Africa, and Zimbabwe. This matters in a context where the formal infrastructures for correcting and debunking dis/misinformation have been dominated by top-down initiatives, namely platform-centric content moderation and professional fact-checking processes. Unlike social platforms such as Twitter and Facebook, which can moderate and take down offending content, the end-to-end encrypted (E2EE) infrastructure of WhatsApp makes the same approach impossible: only the users involved in a conversation have access to the content shared, shielding false and abusive content from detection or removal. As Kuru et al. (2022) opine, the privacy of end-to-end encryption provides a highly closed communication space, posing a different set of challenges for misinformation detection and intervention than more open social media such as Facebook and Twitter. In this regard, false and misleading information on WhatsApp constitutes "a distinctive problem" (Kuru et al. 2022; Melo et al. 2020). As Reis et al. (2020, 2) observe, "the end-to-end encrypted (E2EE) structure of WhatsApp creates a very different scenario" in which content moderation and fact-checking at scale are not possible. Fact-checking WhatsApp groups, which have been flagged as major distributors of mis- and disinformation, is equally difficult.
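To make the technical point concrete, here is a minimal illustrative sketch in Python, using the PyNaCl library as a stand-in for an end-to-end encryption scheme (WhatsApp itself uses the Signal protocol; nothing below reflects its actual implementation). It shows why a relay server carrying end-to-end encrypted messages cannot read, and therefore cannot moderate or fact-check, the content it transmits.

from nacl.public import PrivateKey, Box

# Each user holds a private key; only public keys are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message so that only Bob can decrypt it.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"claim circulating in the group chat")

# A relaying server sees only the ciphertext. Without a private key it cannot
# inspect the payload, which is why platform-side moderation and fact-checking
# at scale are not possible on E2EE channels.
print(ciphertext.hex()[:64], "...")

# Bob, holding his own private key, can decrypt and read the message.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext).decode())

It is this asymmetry that pushes correction work into the groups themselves, which is where the informal fact-checking practices examined in the chapter emerge.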
-

Shifting the Gaze? Photojournalism Practices in the Age of Artificial Intelligence
In this article, we explore the impact of artificial intelligence (AI) technologies on photojournalism in the less-researched contexts of Botswana and Zimbabwe. We aim to understand how AI technologies, which are proliferating across aspects of news production, are impacting one of journalism's respected and enduring trades: photojournalism. We answer the question: in what ways are AI-driven technologies impacting photojournalism practices? Furthermore, we investigate how photojournalists perceive their roles and the ethical considerations that come to the fore as AI begins to technically influence photojournalism. We deploy an eclectic analytical framework consisting of critical technology theory, disruptive innovation theory and Baudrillard's concept of simulation to theorise how AI technologies affect photojournalism in Botswana and Zimbabwe. Data were collected using in-depth interviews with practising photojournalists and …
-

The Philanthrocapitalism of Google News Initiative in Africa, Latin America, and the Middle East – Empirical Reflections
In recent years, media organizations globally have increasingly benefited from financial support from digital platforms. In 2018, Google launched the Google News Initiative (GNI) Innovation Challenge, aimed at bolstering journalism by encouraging innovation in media organizations. This study, conducted through 36 in-depth interviews with GNI beneficiaries in Africa, Latin America, and the Middle East, reveals that despite its narrative of enhancing technological innovation for the media's future, the scheme inadvertently fosters dependence and extends the concept of philanthrocapitalism to the media industry on a global scale. Employing a theory-building approach, our research underscores the emergence of a new form of 'philanthrocapitalism' that prompts critical questions about the dependency of media organizations on big tech and the motives of these tech giants in their evolving relationship with such institutions. We also demonstrate that the GNI Innovation Challenge, while ostensibly promoting sustainable business models through technological innovation, poses challenges for organizations striving to sustain and develop these projects. The path to sustainability proposed by the GNI is found to be indirect and difficult for organizations to navigate, hindering their adoption of new technologies. Additionally, the study highlights the creation of a dependency syndrome among news organizations, driven by the perception that embracing GNI initiatives is crucial for survival in the digital age. Ultimately, the research contributes valuable insights to the understanding of these issues, aiming to raise awareness among relevant stakeholders and to conceptualize philanthrocapitalism through a new lens.
-

Yakshi: A Transmedia Narrative Exploration
At Smart Story Labs, we are excited to announce a new GitHub project that dives into the transmedia narrative of Yakshi – reimagining this South Asian folklore spirit as a lens to explore cross-cultural storytelling, feminist hauntings, and ecological narratives.
-

‘Mind the gap’: artificial intelligence and journalism training in Southern African journalism schools
This article examines journalism schools' (J-schools) responses to the Artificial Intelligence (AI) 'disruption'. It provides a critical, exploratory examination of how J-schools in Southern Africa are responding to the AI wave in their journalism curriculums. We answer the question: how are Southern African J-schools responding to AI in their curriculums? Using a disruptive innovation theoretical lens and a documentary review of university teaching initiatives and accredited journalism curriculums, augmented by in-depth interviews, we demonstrate that AI has opened up new horizons for journalism training in multi-dimensional ways. However, it has also brought challenges, including covert forms of resistance to AI integration by some journalism educators. Furthermore, resource constraints and the obduracy of J-schools' curriculums contribute to the slow introduction of AI in J-schools.
-

Mediatized discourses on Europeanization in Spain
Political and media polarization has had a detrimental impact on democratic principles and democratic processes on a global scale. In Europe, such polarization has eroded trust in national and European institutions and has challenged the basic values that stand at the heart of the European integration project. The aim of this study is to analyze Spanish media discourses on Europeanization and to identify the key areas in which polarizing narratives related to Europeanization are most prevalent. To conduct our study, six national media outlets were selected based on four criteria: media format, ownership, ideology, and consumption. A final sample of 540 news items collected between July 2021 and March 2022 was selected for analysis. Using a qualitative methodological approach, the study was carried out in two stages. In the first phase, we conducted a content analysis to identify the main topics discussed in relation to the European Union and the actors represented in them. This led to the identification of polarizing narratives and discourses emerging in the context of the discussed topics. In the second phase, we used critical discourse analysis to analyze these polarizing discourses.
-

Data Journalism Appropriation in African Newsrooms: A Comparative Study of Botswana and Namibia
Data journalism has received relatively limited academic attention in Southern Africa, with even less focus on smaller countries such as Botswana and Namibia. This article seeks to address this gap by exploring how selected newsrooms in these countries have engaged with data journalism, the ways it has enhanced their daily news reporting, and its impact on newsgathering and production routines. The study reveals varied patterns in the adoption of technology for data journalism across the two contexts. While certain skills remain underdeveloped, efforts to train journalists in data journalism have been evident. These findings support the argument that in emerging economies, the uneven adoption of data journalism technologies is influenced by exposure to these tools and practices.
-

AI as a New Public Intellectual?
In a dialogue with ChatGPT, I asked if it could be considered a public intellectual.
-

Data Journalism, Accountability and Transparency in Zimbabwe’s ‘New Dispensation’: Some Empirical Reflections
In this chapter, we explore the intersection of data journalism practices with issues of (governance) transparency and accountability. We advance the argument that data journalism can be instrumental in helping journalists seek accountability in opaque regimes that have an uneasy relationship with watchdog journalism. We use Zimbabwe's post-coup regime to demonstrate that at the centre of political authoritarianism is a refusal to account and a culture of non-transparency. Faced with such conditions, the media can utilise publicly available data sources to exercise their responsibility. Data journalism is, hence, critical as a media practice that provides avenues for journalists in semi-authoritarian regimes to continuously pursue their mandates as accountability seekers. Our chapter contributes to emerging literature on data journalism in Africa, especially in semi-authoritarian contexts like Zimbabwe's.
-

Evaluating Online AI Detection Tools: An Empirical Study Using Microsoft Copilot-Generated Content
We examine eight freely available online AI detection tools using text samples produced by Microsoft Copilot, assessing their accuracy and consistency. We feed each tool a short sentence and a small paragraph and record its estimate of whether the text is AI-generated. Our findings reveal significant inconsistencies and limitations in these tools, with many failing to accurately identify Copilot-authored text. Our results suggest that educators should not rely on these tools to check for AI use.
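As a rough illustration of the kind of consistency check described above (a minimal sketch in Python, not the authors' analysis pipeline; the tool names, scores, and threshold are hypothetical placeholders), the estimates recorded from each detector can be tabulated and summarised to show how much the tools disagree on the same sample:

from statistics import mean, pstdev

# Hypothetical estimates (% probability that the text is AI-generated) for two
# Copilot-written samples, as read off each detector's web interface.
# Tool names and values are illustrative placeholders, not the study's data.
scores = {
    "short_sentence": {"tool_A": 12, "tool_B": 88, "tool_C": 0, "tool_D": 55},
    "small_paragraph": {"tool_A": 97, "tool_B": 40, "tool_C": 63, "tool_D": 5},
}

THRESHOLD = 50  # treat an estimate of 50% or more as a "flagged as AI" verdict

for sample, by_tool in scores.items():
    values = list(by_tool.values())
    flagged = [name for name, v in by_tool.items() if v >= THRESHOLD]
    print(
        f"{sample}: mean={mean(values):.1f}%  spread(sd)={pstdev(values):.1f}  "
        f"flagged by {len(flagged)}/{len(by_tool)} tools"
    )

Even this simple summary makes the problem visible: when the spread across detectors is large, or only a minority of tools flag a sample known to be machine-written, the verdicts cannot all be right.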


