Hard Questions: Economy

  • Potential and perils of large language models as judges of unstructured textual data

    Rapid advancements in large language models (LLMs) have unlocked remarkable capabilities for processing and summarizing unstructured text data. This has implications for the analysis of rich, open-ended datasets, such as survey responses, where LLMs hold the promise of efficiently distilling key themes and sentiments. However, as organizations increasingly turn to these powerful AI systems to make sense of textual feedback, a critical question arises: can we trust LLMs to accurately represent the perspectives contained within these text-based datasets? While LLMs excel at generating human-like summaries, there is a risk that their outputs may inadvertently diverge from the true substance of the original responses. Discrepancies between the LLM-generated outputs and the actual themes present in the data could lead to flawed decision-making, with far-reaching consequences for organizations. This research investigates the effectiveness of LLM-as-judge models in evaluating the thematic alignment of summaries generated by other LLMs. We used an Anthropic Claude model to generate thematic summaries from open-ended survey responses, with Amazon's Titan Express, Nova Pro, and Meta's Llama serving as judges. This LLM-as-judge approach was compared to human evaluations using Cohen's kappa, Spearman's rho, and Krippendorff's alpha, validating a scalable alternative to traditional human-centric evaluation methods. Our findings reveal that while LLM-as-judge models offer a scalable solution comparable to human raters, humans may still excel at detecting subtle, context-specific nuances. Our research contributes to the growing body of knowledge on AI-assisted text analysis, and we provide recommendations for future research, emphasizing the need for careful consideration when generalizing LLM-as-judge models across contexts and use cases.
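
    The agreement statistics named above can be computed directly. As a minimal sketch (the 1-3 thematic-alignment ratings below are illustrative, not data from the study), Cohen's kappa for two raters over categorical labels is:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical labels."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters labelled at random with their own marginals.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical alignment ratings from a human rater and an LLM judge.
human = [1, 2, 2, 3, 1, 2]
judge = [1, 2, 3, 3, 1, 2]
print(round(cohens_kappa(human, judge), 2))  # 0.75
```

    In practice one would reach for established implementations (e.g. `scipy.stats.spearmanr` for Spearman's rho); the point of the sketch is only that kappa corrects raw percent agreement for the agreement two raters would reach by chance.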

    More Information

  • Towards regulating AI: A natural, labour and capital resources perspective

    Policymakers examining artificial intelligence (AI) applications are considering what we as a society want to achieve and what we need to protect, yet it is not commonly known that AI applications consume intensive natural, labour and capital resources.

    More Information

  • Ceiba Law Firm Annual Retreat

    We were pleased to be of service to Ceiba Law for their annual retreat this year. Our director Tammy Mackenzie joined the firm's partners and associates to discuss the changes and opportunities AI brings for lawyers in cybersecurity and corporate law. A huge shout-out to partners Vanessa Henri, Elodie Meyer and Shawn Ford for an invigorating retreat and an inspiring firm ethos: a law firm purpose-built for the 21st century.

    For more information on Ceiba Law, see https://ceiba.law/

  • Evaluating Online AI Detection Tools: An Empirical Study Using Microsoft Copilot-Generated Content

    We examine eight freely available online AI detection tools using text samples produced by Microsoft Copilot, assessing their accuracy and consistency. We feed each tool a short sentence and a small paragraph and record its estimate. Our findings reveal significant inconsistencies and limitations, with many tools failing to accurately identify Copilot-authored text. Our results suggest that educators should not rely on these tools to check for AI use.

    More Information

  • The Climate Imperative: How AI Can Transform Africa’s Future

    Africa contributes minimally to global greenhouse gas emissions but bears a disproportionate burden of climate change impacts. This article explores how artificial intelligence (AI) can bolster conservation and sustainability efforts across the continent. While challenges such as reliance on technological imports and digital divides persist, AI offers transformative potential by enhancing early prediction, disaster preparedness, and environmental management. Examples like Rwanda's Wastezon, Ghana's Okuafo Foundation, and Kenya's Kuzi illustrate successful AI-driven initiatives. The article proposes adapting a public health prevention model (primary, secondary, and tertiary prevention) to structure AI-based environmental interventions. This approach would enable early detection of climate risks, timely mitigation efforts, and rehabilitation of damaged ecosystems. The authors also caution about AI's environmental costs, including energy-intensive operations and resource extraction, advocating for ethical and Africa-centered AI solutions. Overall, the article argues that innovative, community-driven, and preventive uses of AI are essential for building climate resilience in Africa.

    More Information

  • Advancements in Modern Recommender Systems: Industrial Applications in Social Media, E-commerce, Entertainment, and Beyond

    In the current digital era, the proliferation of online content has overwhelmed users with vast amounts of information, necessitating effective filtering mechanisms. Recommender systems have become indispensable in addressing this challenge, tailoring content to individual preferences and significantly enhancing user experience. This paper delves into the latest advancements in recommender systems, analyzing 115 research papers and 10 articles, and dissecting their application across various domains such as e-commerce, entertainment, and social media. We categorize these systems into content-based, collaborative, and hybrid approaches, scrutinizing their methodologies and performance. Despite their transformative impact, recommender systems grapple with persistent issues like scalability, cold-start problems, and data sparsity. Our comprehensive review not only maps the current landscape of recommender system research but also identifies critical gaps and future directions. By offering a detailed analysis of datasets, simulation platforms, and evaluation metrics, we provide a robust foundation for developing next-generation recommender systems poised to deliver more accurate, efficient, and personalized user experiences, inspiring innovative solutions to drive forward the evolution of recommender technology.
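
    As a minimal illustration of the content-based approach categorized above (the item names and feature vectors are hypothetical), a user profile can be averaged from liked items and unseen items ranked by cosine similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(item_features, liked, top_k=2):
    """Rank unseen items by similarity to the mean feature vector of liked items."""
    dims = len(next(iter(item_features.values())))
    profile = [sum(item_features[i][d] for i in liked) / len(liked) for d in range(dims)]
    candidates = [i for i in item_features if i not in liked]
    return sorted(candidates, key=lambda i: cosine(profile, item_features[i]), reverse=True)[:top_k]

# Hypothetical items described by (action, romance) feature weights.
features = {
    "Movie A": [1.0, 0.0],
    "Movie B": [0.9, 0.1],
    "Movie C": [0.0, 1.0],
}
print(recommend(features, liked=["Movie A"]))  # ['Movie B', 'Movie C']
```

    Collaborative approaches replace the hand-crafted feature vectors with patterns learned from other users' interactions, and hybrid systems combine the two; the cold-start and sparsity issues noted above arise precisely because new users and items lack those interaction signals.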

    More Information

  • Qualitative Insights Tool (QualIT): LLM Enhanced Topic Modeling

    Topic modeling is a widely used technique for uncovering thematic structures from large text corpora. However, most topic modeling approaches, e.g. Latent Dirichlet Allocation (LDA), struggle to capture the nuanced semantics and contextual understanding required to accurately model complex narratives. Recent advancements in this area include methods like BERTopic, which have demonstrated significantly improved topic coherence and thus established a new standard for benchmarking. In this paper, we present a novel approach, the Qualitative Insights Tool (QualIT), that integrates large language models (LLMs) with existing clustering-based topic modeling approaches. Our method leverages the deep contextual understanding and powerful language generation capabilities of LLMs to enrich the clustering-based topic modeling process. We evaluate our approach on a large corpus of news articles and demonstrate substantial improvements in topic coherence and topic diversity compared to baseline topic modeling techniques. On the 20 ground-truth topics, our method shows 70% topic coherence (vs 65% and 57% benchmarks) and 95.5% topic diversity (vs 85% and 72% benchmarks). Our findings suggest that the integration of LLMs can unlock new opportunities for topic modeling of dynamic and complex text data, as is common in talent management research contexts.
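
    Of the two metrics reported above, topic diversity is commonly computed as the fraction of unique words across each topic's top-k word list (1.0 means no topic shares a word with another). A minimal sketch, with toy topics rather than the paper's data:

```python
def topic_diversity(topics):
    """Fraction of unique words across all topics' top-word lists."""
    all_words = [word for topic in topics for word in topic]
    return len(set(all_words)) / len(all_words)

# Toy top-3 word lists for three topics; "pay" appears in two topics.
topics = [
    ["salary", "bonus", "pay"],
    ["manager", "team", "pay"],
    ["office", "remote", "commute"],
]
print(topic_diversity(topics))  # 8 unique words / 9 total
```

    Topic coherence is the more involved of the two, typically scored from co-occurrence statistics of each topic's top words in a reference corpus, so in practice it is computed with an established library implementation rather than by hand.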

    More Information

  • Skills Lab Panel “Building Bridges” for the Culture and Cohesion Summit

    Join Victoria Kuketz for the intercultural Skills Lab Panel “Building Bridges” at the Culture and Cohesion Summit.

    More Information

  • Generative AI through the Lens of Institutional Theory

    This study examines the adoption of Generative AI (GenAI) systems through the lens of Institutional Theory. Using a mixed-methods approach, we analyze how coercive, normative, and mimetic pressures influence GenAI integration in organizations. Key findings reveal: (1) regulatory frameworks significantly shape GenAI adoption strategies, with variations across industries and regions; (2) organizations balance conformity to institutional norms with innovation, often through strategic decoupling; (3) GenAI's unique capabilities challenge traditional institutional pressures, necessitating new governance models; and (4) early GenAI adopters emerge as new sources of mimetic pressure, accelerating industry-wide adoption. We propose a novel framework capturing the interplay between GenAI characteristics and institutional dynamics, contributing to both Institutional Theory and the AI adoption literature.

    More Information

  • Toward a trustworthy and inclusive data governance policy for the use of artificial intelligence in Africa

    This article proposes five ideas that the design of data governance policies for the trustworthy use of artificial intelligence (AI) in Africa should consider. The first is for African states to assess their domestic strategic priorities, strengths, and weaknesses. The second is a human-centric approach to data governance, which involves data processing practices that protect the security of personal data and the privacy of data subjects; ensure that personal data are processed in a fair, lawful, and accountable manner; minimize the harmful effects of personal data misuse or abuse on data subjects and other victims; and promote a beneficial, trusted use of personal data. The third is for the data policy to align with supranational rights-respecting AI standards such as the African Charter on Human and Peoples' Rights and the AU Convention on Cyber Security and Personal Data Protection. The fourth is for states to be critical about the extent to which AI systems can be relied on in certain public sectors or departments. The fifth and final proposition is to prioritize the use of representative and interoperable data and to ensure a transparent procurement process for AI systems from abroad where no local options exist.

    More Information