A dashboard of curated information for survivors of modern slavery and the people who work to rescue others. This tool is available for collaboration. Please contact our Technical Director, François Pelletier, for more information.
Hard Questions: Economy
-

Keeping Players Hooked: Story-Driven iGaming Ecosystem
This GitHub project explores how to:
✅ Build modular narrative systems that expand over seasons and quests (see the sketch after this list).
✅ Design story-powered payment systems that turn transactions into experiences.
✅ Grow sustainable gaming enterprises around live storytelling, community co-creation, and ethical monetization.
✅ Create ecosystems where players return not out of compulsion but out of love for the story.
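As an illustration of the first goal, here is a minimal, hypothetical sketch of a modular quest/season data model. The class names and fields are assumptions made for illustration, not code taken from the project's repository.

```python
# Hypothetical sketch (not from the project): a modular narrative model
# where a season is a container of quests, so new story arcs can be
# appended over time without modifying existing content.
from dataclasses import dataclass, field


@dataclass
class Quest:
    quest_id: str
    title: str
    unlocks: list[str] = field(default_factory=list)  # ids of follow-up quests


@dataclass
class Season:
    season_id: str
    quests: dict[str, Quest] = field(default_factory=dict)

    def add_quest(self, quest: Quest) -> None:
        """Register a new quest; existing quests are left untouched."""
        self.quests[quest.quest_id] = quest


# Example: extending a live season with a community-written quest.
season = Season("season-1")
season.add_quest(Quest("q1", "The Opening Hand", unlocks=["q2"]))
season.add_quest(Quest("q2", "The River Card"))
```

Treating quests as data rather than hard-coded logic is what would let such an ecosystem grow season over season without rewrites.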
-

The philanthrocapitalism of the Google News Initiative in Africa, Latin America, and the Middle East: empirical reflections
In recent years, media organizations globally have increasingly benefited from financial support from digital platforms. In 2018, Google launched the Google News Initiative (GNI) Innovation Challenge, aimed at bolstering journalism by encouraging innovation in media organizations. This study, conducted through 36 in-depth interviews with GNI beneficiaries in Africa, Latin America, and the Middle East, reveals that despite its narrative of enhancing technological innovation for the media’s future, the scheme inadvertently fosters dependence and extends the concept of philanthrocapitalism to the media industry on a global scale. Employing a theory-building approach, our research underscores the emergence of a new form of ‘philanthrocapitalism’ that prompts critical questions about the dependency of media organizations on big tech and the motives of these tech giants in their evolving relationship with such institutions. We also demonstrate that the GNI Innovation Challenge, while ostensibly promoting sustainable business models through technological innovation, poses challenges for organizations striving to sustain and develop these projects. The path to sustainability proposed by the GNI is found to be indirect and difficult for organizations to navigate, hindering their adoption of new technologies. Additionally, the study highlights the creation of a dependency syndrome among news organizations, driven by the perception that embracing GNI initiatives is crucial for survival in the digital age. Ultimately, the research contributes valuable insights into these issues, aiming to raise awareness among relevant stakeholders and to conceptualize philanthrocapitalism through a new lens.
-

Potential and perils of large language models as judges of unstructured textual data
Rapid advancements in large language models have unlocked remarkable capabilities for processing and summarizing unstructured text data. This has implications for the analysis of rich, open-ended datasets, such as survey responses, where LLMs hold the promise of efficiently distilling key themes and sentiments. However, as organizations increasingly turn to these powerful AI systems to make sense of textual feedback, a critical question arises: can we trust LLMs to accurately represent the perspectives contained within these text-based datasets? While LLMs excel at generating human-like summaries, there is a risk that their outputs may inadvertently diverge from the true substance of the original responses. Discrepancies between the LLM-generated outputs and the actual themes present in the data could lead to flawed decision-making, with far-reaching consequences for organizations. This research investigates the effectiveness of LLM-as-judge models in evaluating the thematic alignment of summaries generated by other LLMs. We used an Anthropic Claude model to generate thematic summaries from open-ended survey responses, with Amazon’s Titan Express, Nova Pro, and Meta’s Llama serving as judges. This LLM-as-judge approach was compared to human evaluations using Cohen’s kappa, Spearman’s rho, and Krippendorff’s alpha, validating it as a scalable alternative to traditional human-centric evaluation methods. Our findings reveal that while LLM-as-judge models offer a scalable solution comparable to human raters, humans may still excel at detecting subtle, context-specific nuances. Our research contributes to the growing body of knowledge on AI-assisted text analysis, and we provide recommendations for future research, emphasizing the need for careful consideration when generalizing LLM-as-judge models across contexts and use cases.
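To make the agreement statistics concrete, here is a minimal sketch of how an LLM judge’s ratings could be compared against human ratings using two of the statistics the abstract names. The rating values are invented placeholders for illustration, not data from the study.

```python
# Hypothetical illustration: measuring agreement between human raters and
# an LLM judge on thematic-alignment scores (1-5 scale). The ratings below
# are placeholder values, not results from the paper.
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Alignment scores for ten summaries, one score per rater (placeholders).
human_ratings = [4, 3, 5, 2, 4, 5, 3, 4, 2, 5]
llm_judge_ratings = [4, 3, 4, 2, 5, 5, 3, 4, 3, 5]

# Cohen's kappa treats scores as categorical labels; quadratic weighting
# gives partial credit for near-misses on an ordinal scale.
kappa = cohen_kappa_score(human_ratings, llm_judge_ratings, weights="quadratic")

# Spearman's rho measures rank-order agreement between the two raters.
rho, p_value = spearmanr(human_ratings, llm_judge_ratings)

print(f"Weighted Cohen's kappa: {kappa:.3f}")
print(f"Spearman's rho: {rho:.3f} (p = {p_value:.3f})")

# Krippendorff's alpha, the paper's third statistic, generalizes to more
# than two raters and is available via the third-party `krippendorff` package.
```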