Hard Questions: Economy

  • Towards regulating AI: A natural, labour and capital resources perspective

    Policymakers examining artificial intelligence (AI) applications are weighing what we as a society want to achieve and what we need to protect, yet it is not widely recognized that AI applications consume intensive natural, labour, and capital resources.

    More Information

  • Evaluating Online AI Detection Tools: An Empirical Study Using Microsoft Copilot-Generated Content

    Examining eight freely available online AI detection tools using text samples produced by Microsoft Copilot, we assess their accuracy and consistency. We feed each tool a short sentence and a small paragraph and record its estimate. Our findings reveal significant inconsistencies and limitations, with many tools failing to accurately identify Copilot-authored text. Our results suggest that educators should not rely on these tools to check for AI use.

    More Information
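
The evaluation protocol described above (feeding every tool the same short sentence and small paragraph, then comparing the returned estimates) can be sketched as a small harness. The detector functions below are hypothetical stand-ins, not the actual online tools, and the scores are invented for illustration.

```python
from statistics import mean, pstdev

# Hypothetical stand-ins for online detectors: each returns an
# estimated probability (0.0-1.0) that the input text is AI-generated.
def detector_a(text: str) -> float:
    return 0.92 if len(text) > 200 else 0.40

def detector_b(text: str) -> float:
    return 0.55

def detector_c(text: str) -> float:
    return 0.75 if len(text) > 200 else 0.10

DETECTORS = {"A": detector_a, "B": detector_b, "C": detector_c}

def evaluate(samples: dict, threshold: float = 0.5) -> dict:
    """Score every (sample, detector) pair; all samples are AI-written,
    so a score above the threshold counts as a correct detection."""
    results = {}
    for name, text in samples.items():
        scores = {d: fn(text) for d, fn in DETECTORS.items()}
        results[name] = {
            "scores": scores,
            "accuracy": mean(s > threshold for s in scores.values()),
            "spread": pstdev(scores.values()),  # inconsistency across tools
        }
    return results

samples = {
    "sentence": "AI will reshape how organizations manage talent.",
    "paragraph": "A" * 300,  # placeholder for a Copilot-written paragraph
}
report = evaluate(samples)
```

A study-style conclusion falls out of the numbers: the same tools that agree on the long sample disagree sharply on the short one, which is exactly the inconsistency the abstract reports.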

  • Qualitative Insights Tool (QualIT): LLM Enhanced Topic Modeling

    Topic modeling is a widely used technique for uncovering thematic structures from large text corpora. However, most topic modeling approaches, such as Latent Dirichlet Allocation (LDA), struggle to capture the nuanced semantics and contextual understanding required to accurately model complex narratives. Recent advancements in this area include methods like BERTopic, which have demonstrated significantly improved topic coherence and thus established a new standard for benchmarking. In this paper, we present a novel approach, the Qualitative Insights Tool (QualIT), that integrates large language models (LLMs) with existing clustering-based topic modeling approaches. Our method leverages the deep contextual understanding and powerful language generation capabilities of LLMs to enrich the topic modeling process using clustering. We evaluate our approach on a large corpus of news articles and demonstrate substantial improvements in topic coherence and topic diversity compared to baseline topic modeling techniques. On the 20 ground-truth topics, our method shows 70% topic coherence (vs 65% & 57% benchmarks) and 95.5% topic diversity (vs 85% & 72% benchmarks). Our findings suggest that the integration of LLMs can unlock new opportunities for topic modeling of dynamic and complex text data, as is common in talent management research contexts.

    More Information

  • Skills Lab Panel “Building Bridges” at the Culture and Cohesion Summit

    Join Victoria Kuketz for the intercultural Skills Lab panel “Building Bridges” at the Culture and Cohesion Summit.

    More Information

  • Generative AI through the Lens of Institutional Theory

    This study examines the adoption of Generative AI (GenAI) systems through the lens of Institutional Theory. Using a mixed-methods approach, we analyze how coercive, normative, and mimetic pressures influence GenAI integration in organizations. Key findings reveal: (1) regulatory frameworks significantly shape GenAI adoption strategies, with variations across industries and regions; (2) organizations balance conformity to institutional norms with innovation, often through strategic decoupling; (3) GenAI’s unique capabilities challenge traditional institutional pressures, necessitating new governance models; and (4) early GenAI adopters emerge as new sources of mimetic pressure, accelerating industry-wide adoption. We propose a novel framework capturing the interplay between GenAI characteristics and institutional dynamics, contributing to both Institutional Theory and AI adoption literature.

    More Information

  • The Climate Imperative: How AI Can Transform Africa’s Future

    Africa contributes minimally to global greenhouse gas emissions but bears a disproportionate burden of climate change impacts. This article explores how artificial intelligence (AI) can bolster conservation and sustainability efforts across the continent. While challenges such as technological import reliance and digital divides persist, AI offers transformative potential by enhancing early prediction, disaster preparedness, and environmental management. Examples like Rwanda’s Wastezon, Ghana’s Okuafo Foundation, and Kenya’s Kuzi illustrate successful AI-driven initiatives. The article proposes adapting a public health prevention model (primary, secondary, and tertiary prevention) to structure AI-based environmental interventions. This approach would enable early detection of climate risks, timely mitigation efforts, and rehabilitation of damaged ecosystems. The authors also caution about AI’s environmental costs, including energy-intensive operations and resource extraction, advocating for ethical and Africa-centered AI solutions. Overall, the article argues that innovative, community-driven, and preventive uses of AI are essential for building climate resilience in Africa.

    More Information

  • Advancements in Modern Recommender Systems: Industrial Applications in Social Media, E-commerce, Entertainment, and Beyond

    In the current digital era, the proliferation of online content has overwhelmed users with vast amounts of information, necessitating effective filtering mechanisms. Recommender systems have become indispensable in addressing this challenge, tailoring content to individual preferences and significantly enhancing user experience. This paper delves into the latest advancements in recommender systems, analyzing 115 research papers and 10 articles, and dissecting their application across various domains such as e-commerce, entertainment, and social media. We categorize these systems into content-based, collaborative, and hybrid approaches, scrutinizing their methodologies and performance. Despite their transformative impact, recommender systems grapple with persistent issues like scalability, cold-start problems, and data sparsity. Our comprehensive review not only maps the current landscape of recommender system research but also identifies critical gaps and future directions. By offering a detailed analysis of datasets, simulation platforms, and evaluation metrics, we provide a robust foundation for developing next-generation recommender systems poised to deliver more accurate, efficient, and personalized user experiences, inspiring innovative solutions to drive forward the evolution of recommender technology.

    More Information
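
As a concrete illustration of the collaborative approach surveyed above, the sketch below predicts scores for a user's unrated items from the ratings of similar users (user-based collaborative filtering with cosine similarity). The ratings matrix is toy data, not drawn from any dataset in the paper.

```python
import numpy as np

# Toy user-item ratings matrix (0 = unrated); rows are users, columns items.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user: int, k: int = 2) -> list:
    """Predict scores for the user's unrated items from the k most
    similar users, then return the items ranked by predicted score."""
    sims = np.array([cosine_sim(R[user], R[u]) if u != user else -1.0
                     for u in range(len(R))])
    neighbours = np.argsort(sims)[::-1][:k]
    preds = {}
    for item in np.where(R[user] == 0)[0]:
        num = sum(sims[u] * R[u, item] for u in neighbours)
        den = sum(abs(sims[u]) for u in neighbours)
        preds[item] = num / den if den else 0.0
    return sorted(preds, key=preds.get, reverse=True)
```

Even this toy version exhibits the cold-start problem the survey highlights: a brand-new user with an all-zero row has no similar neighbours to borrow ratings from.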

  • Reconciling methodological paradigms: Employing large language models as novice qualitative research assistants in talent management research

    Qualitative data collection and analysis approaches, such as those employing interviews and focus groups, provide rich insights into customer attitudes, sentiment, and behavior. However, manually analyzing qualitative data requires extensive time and effort to identify relevant topics and thematic insights. This study proposes a novel approach to address this challenge by leveraging Retrieval Augmented Generation (RAG) based Large Language Models (LLMs) for analyzing interview transcripts. The novelty of this work lies in strategizing the research inquiry as one that is augmented by an LLM that serves as a novice research assistant. This research explores the mental model of LLMs to serve as novice qualitative research assistants for researchers in the talent management space. A RAG-based LLM approach is extended to enable topic modeling of semi-structured interview data, showcasing the versatility of these models beyond their traditional use in information retrieval and search. Our findings demonstrate that the LLM-augmented RAG approach can successfully extract topics of interest, with significant coverage compared to manually generated topics from the same dataset. This establishes the viability of employing LLMs as novice qualitative research assistants. Additionally, the study recommends that researchers leveraging such models lean heavily on quality criteria used in traditional qualitative research to ensure rigor and trustworthiness of their approach. Finally, the paper presents key recommendations for industry practitioners seeking to reconcile the use of LLMs with established qualitative research paradigms, providing a roadmap for the effective integration of these powerful, albeit novice, AI tools in the analysis of qualitative datasets within talent management research.

    More Information
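
The RAG pattern described above (retrieve the transcript passages most relevant to a research question, then hand them to an LLM) can be sketched minimally. TF-IDF retrieval stands in here for an embedding-based vector store, and `ask_llm` is a hypothetical placeholder for the model call; neither is the study's actual stack.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy interview-transcript chunks (invented for illustration).
chunks = [
    "I stayed because my manager invested in my career development.",
    "Remote work flexibility was the main reason I joined the company.",
    "Compensation mattered less to me than a clear promotion path.",
    "The office coffee is excellent.",
]

vec = TfidfVectorizer()
chunk_vecs = vec.fit_transform(chunks)

def retrieve(query: str, k: int = 2) -> list:
    """Return the k transcript chunks most similar to the query."""
    sims = cosine_similarity(vec.transform([query]), chunk_vecs)[0]
    return [chunks[i] for i in sims.argsort()[::-1][:k]]

def ask_llm(question: str, context: list) -> str:
    # Hypothetical placeholder: a real system would prompt an LLM with
    # the question plus the retrieved chunks and return its answer.
    return f"{question} [answered from {len(context)} retrieved chunks]"

question = "Did career development or promotion opportunities matter?"
answer = ask_llm(question, retrieve(question))
```

Grounding the LLM in retrieved transcript excerpts, rather than letting it answer freely, is what makes the "novice research assistant" framing auditable against traditional qualitative quality criteria.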

  • Tech Tool: TechAIRS Confidential AI Reporting System Application

    A curated OODA (observe, orient, decide, act) triage system for AI incident reporting. This tool is available for collaborations. Please contact our Technical Director, François Pelletier, for more information.

    More Information

  • Toward a trustworthy and inclusive data governance policy for the use of artificial intelligence in Africa

    This article proposes five ideas that the design of data governance policies for the trustworthy use of artificial intelligence (AI) in Africa should consider. The first is for African states to assess their domestic strategic priorities, strengths, and weaknesses. The second is a human-centric approach to data governance, which involves data processing practices that protect the security of personal data and the privacy of data subjects; ensure that personal data are processed in a fair, lawful, and accountable manner; minimize the harmful effect of personal data misuse or abuse on data subjects and other victims; and promote a beneficial, trusted use of personal data. The third is for the data policy to be in alignment with supranational rights-respecting AI standards like the African Charter on Human and Peoples' Rights and the AU Convention on Cyber Security and Personal Data Protection. The fourth is for states to be critical about the extent to which AI systems can be relied on in certain public sectors or departments. The fifth and final proposition is the need to prioritize the use of representative and interoperable data and ensure a transparent procurement process for AI systems from abroad where no local options exist.

    More Information

  • OECD Gender Equality in Technology Governance

    Director Mackenzie represented the Aula Fellowship and brought an AI perspective to conversations at this global conference for equity. We stand together, or we fall.

    More Information

  • Reimagining AI Conference Mission Statements to Promote Inclusion in the Emerging Institutional Field of AI

    AI conferences play a crucial role in education by providing a platform for knowledge sharing, networking, and collaboration, shaping the future of AI research and applications, and informing curricula and teaching practices. This work-in-progress, innovative practice paper presents preliminary findings from textual analysis of mission statements from select artificial intelligence (AI) conferences to uncover information gaps and opportunities that hinder inclusivity and accessibility in the emerging institutional field of AI. By examining language and focus, we identify potential barriers to entry for individuals interested in the AI domain, including educators, researchers, practitioners, and students from underrepresented groups. Our paper employs the Language as Symbolic Action (LSA) framework [1] to reveal information gaps such as the absence of explicit emphasis on DEI, undefined promises of business and personal empowerment and power, and occasional elitism. These preliminary findings uncover opportunities for improvement, including the need for more inclusive language, an explicit commitment to diversity, equity, and inclusion (DEI) initiatives, clearer communication about conference goals and expectations, and an emphasis on strategies to address power imbalances and promote equal opportunities for participation. The impact of our work is twofold: 1) we demonstrate preliminary results from applying the Language as Symbolic Action framework to textual analysis of mission statements, and 2) our preliminary findings will be valuable to the education community in understanding gaps in current AI conferences and, consequently, outreach. Our work is thus of practical use for conference organizers, engineering and CS educators, researchers in AI-related domains, and the broader AI community. Our paper highlights the need for more intentional and inclusive conference design to foster a diverse and vibrant community of AI professionals.

    More Information
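
A first step toward the kind of textual gap analysis described above, checking mission statements for explicit DEI language, could be sketched as simple term matching. The statements and term list below are illustrative only; they are not the paper's data, and the LSA framework involves far richer rhetorical analysis than keyword counts.

```python
# Illustrative mission statements (invented, not the paper's dataset).
statements = {
    "ConfA": "We advance state-of-the-art AI research among the world's elite labs.",
    "ConfB": ("We welcome students, educators, and practitioners from "
              "underrepresented groups and are committed to diversity, "
              "equity, and inclusion."),
}

# Hypothetical indicator terms; "accessib" catches accessible/accessibility.
DEI_TERMS = ["diversity", "equity", "inclusion", "accessib", "underrepresented"]

def dei_coverage(text: str) -> list:
    """Return the DEI-related indicator terms explicitly present."""
    lowered = text.lower()
    return [term for term in DEI_TERMS if term in lowered]

coverage = {name: dei_coverage(text) for name, text in statements.items()}
```

A statement with empty coverage, like the first example, surfaces exactly the kind of information gap the paper describes: no explicit commitment for an outsider to point to.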