Hard Questions: Economy

  • Tech Tool: the Survivor’s Dashboard

    A dashboard of curated information for survivors of modern slavery and the people who work to rescue others. This tool is available for collaborations. Please contact our Technical Director, François Pelletier, for more information.

  • The Architecture of Responsible AI: Balancing Innovation and Accountability

    Artificial Intelligence (AI) has become a key factor driving change in industries, organizations, and society. While technological capabilities advance rapidly, the mechanisms guiding AI implementation reveal critical structural flaws (Closing the AI accountability gap). This creates an opportunity to collaboratively design systems that leverage AI to augment human capabilities while upholding ethical integrity.

    More Information

  • Work in Progress: Exclusive Rhetoric in AI Conference Mission Statements

    AI conferences are pivotal spaces for knowledge exchange, collaboration, and shaping the trajectory of research, practice, and education. This paper presents preliminary findings from an analysis of AI conference mission statements, investigating how their stated goals affect who is welcomed into AI conversations. We find that many mission statements reflect assumptions that may unintentionally narrow participation and reinforce disciplinary and institutional silos. This limits engagement from a broad range of contributors, including educators, students, working professionals, and even younger users, who are essential to a thriving AI ecosystem. We advocate for clearer framing that supports democratizing and demystifying AI. By broadening participation and intentionally fostering cross-sector and interdisciplinary connections, AI conferences can help unlock more innovation.

    More Information

  • Keeping Players Hooked: Story-Driven iGaming Ecosystem

    This GitHub project explores how to:

    ✅ Build modular narrative systems that expand over seasons and quests.
    ✅ Design story-powered payment systems that turn transactions into experiences.
    ✅ Grow sustainable gaming enterprises around live storytelling, community co-creation, and ethical monetization.
    ✅ Create ecosystems where players return not out of compulsion, but love for the story.
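    As a sketch of what the modular narrative system described above might look like, the following hypothetical Python fragment models a season as a container of quests that unlock one another as they are completed. All class, field, and quest names here are illustrative assumptions, not identifiers taken from the GitHub project.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Quest:
        name: str
        story_beat: str                              # narrative payload shown to the player
        unlocks: list = field(default_factory=list)  # quest names opened on completion

    @dataclass
    class Season:
        title: str
        quests: dict = field(default_factory=dict)

        def add_quest(self, quest: Quest) -> None:
            self.quests[quest.name] = quest

        def complete(self, name: str) -> list:
            """Return the quests unlocked by finishing the named quest."""
            return [self.quests[n] for n in self.quests[name].unlocks if n in self.quests]

    # Hypothetical season content: the prologue unlocks chapter 1.
    season = Season("Season 1: The Drowned Archive")
    season.add_quest(Quest("prologue", "A letter arrives...", unlocks=["chapter_1"]))
    season.add_quest(Quest("chapter_1", "The archive door opens."))
    print([q.name for q in season.complete("prologue")])  # ['chapter_1']
    ```

    Because each season is a self-contained container, new seasons and quests can be added without touching existing ones, which is one way the "expand over seasons and quests" goal could be realized.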

    More Information

  • Nature Opinion: The path for AI in poor nations does not need to be paved with billions

    NATURE

    Researchers in low- and middle-income countries show that home-grown artificial-intelligence technologies can be developed, even without large external investments.

    More Information

  • The philanthrocapitalism of Google News Initiative in Africa, Latin America, and the Middle East – empirical reflections

    In recent years, media organizations globally have increasingly benefited from financial support from digital platforms. In 2018, Google launched the Google News Initiative (GNI) Innovation Challenge aimed at bolstering journalism by encouraging innovation in media organizations. This study, conducted through 36 in-depth interviews with GNI beneficiaries in Africa, Latin America, and the Middle East, reveals that despite its narrative of enhancing technological innovation for the media’s future, this scheme inadvertently fosters dependence and extends the philanthrocapitalism concept to the media industry on a global scale. Employing a theory-building approach, our research underscores the emergence of a new form of ‘philanthrocapitalism’ that prompts critical questions about the dependency of media organizations on big tech and the motives of these tech giants in their evolving relationship with such institutions. We also demonstrate that the GNI Innovation Challenge, while ostensibly promoting sustainable business models through technological innovation, poses challenges for organizations striving to sustain and develop these projects. The proposed path to sustainability by the GNI is found to be indirect and difficult for organizations to navigate, hindering their adoption of new technologies. Additionally, the study highlights the creation of a dependency syndrome among news organizations, driven by the perception that embracing GNI initiatives is crucial for survival in the digital age. Ultimately, the research contributes valuable insights to the understanding of these issues, aiming to raise awareness among relevant stakeholders and conceptualize philanthrocapitalism through a new lens.

    More Information

  • Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session

    A multidisciplinary, reflective autoethnography by some of the people who are building the Permanent Symposium on AI. Includes the history of the project.

    RQ 1: Challenges that unite AI policy & tech

    RQ 2: How to design the PSAI?

    RQ 3: What factors influence the adoption and scalability of the PSAI?

    This is the flagship project of the Aula Fellowship.

    Read the Poster

  • What We Do Not Know: GPT Use in Business and Management

    This systematic review examines peer-reviewed studies on the application of GPT in business management, revealing significant knowledge gaps. Despite identifying interesting research directions such as best practices, benchmarking, performance comparisons, and social impacts, our analysis yields only 42 relevant studies in the 22 months since GPT's release. There are so few studies looking at any particular sector or subfield that management researchers, business consultants, policymakers, and journalists do not yet have enough information to make well-founded statements on how GPT is being used in businesses. The primary contribution of this paper is a call to action for further research. We provide a description of current research and identify knowledge gaps on the use of GPT in business. We cover the management subfields of finance, marketing, human resources, strategy, operations, production, and analytics, excluding retail and sales. We discuss gaps in knowledge of GPT's potential consequences on employment, productivity, environmental costs, oppression, and small businesses. We propose how management consultants and the media can help fill those gaps. We call for practical work on business control systems as they relate to existing and foreseeable AI-related business challenges. This work may be of interest to managers, to management researchers, and to people working on AI in society.

    More Information

  • World AI: Women in AI

    The Aula Fellowship was present to discuss social and environmental concerns related to the marketing of AI as a panacea. The event brings together technology companies, civil society, and decision-makers. We were able to connect with other non-profits and universities in this sector, and to build collaborations with several attendees and presenters.

    More Information

  • Université de l’Alberta Annual Conference: The Levers of Power in AI

    The Levers of Power in AI
    Speaker: Tammy MacKenzie
    Annual congress of the Campus Saint-Jean, Université de l’Alberta, April 25, 2025, Edmonton, AB (Canada).

    More Information

  • Easy to read, easier to write: the politics of AI in consultancy trade research

    AI systems have been rapidly implemented across all sectors, in organizations of all sizes, and in every country. In this article, we conduct a bibliometric review of references in recent consultancy reports on AI use in business, policymaking, and strategic management. The uptake of these reports is high. We find three positive factors: a focus on client-facing solutions, speed of production, and ease of access. We find that the evidentiary quality of reports is often unsatisfactory because of references-clubbing with other consultancy reports, references to surveys without transparency, or poor or missing references. To optimize the utility of consultancy reports for decision-makers and their pertinence for policy, we present recommendations for the quality assessment of consultancy reporting on AI’s use in organizations. We discuss how to improve general knowledge of AI use in business and policymaking through effective collaborations between consultants and management scientists. In addition to being of interest to managers and consultants, this work may also be of interest to media, political scientists, and business-school communities.

    More Information

  • Potential and perils of large language models as judges of unstructured textual data

    Rapid advancements in large language models (LLMs) have unlocked remarkable capabilities for processing and summarizing unstructured text data. This has implications for the analysis of rich, open-ended datasets, such as survey responses, where LLMs hold the promise of efficiently distilling key themes and sentiments. However, as organizations increasingly turn to these powerful AI systems to make sense of textual feedback, a critical question arises: can we trust LLMs to accurately represent the perspectives contained within these text-based datasets? While LLMs excel at generating human-like summaries, there is a risk that their outputs may inadvertently diverge from the true substance of the original responses. Discrepancies between the LLM-generated outputs and the actual themes present in the data could lead to flawed decision-making, with far-reaching consequences for organizations. This research investigates the effectiveness of LLM-as-judge models in evaluating the thematic alignment of summaries generated by other LLMs. We utilized an Anthropic Claude model to generate thematic summaries from open-ended survey responses, with Amazon’s Titan Express, Nova Pro, and Meta’s Llama serving as judges. This LLM-as-judge approach was compared to human evaluations using Cohen’s kappa, Spearman’s rho, and Krippendorff’s alpha, validating a scalable alternative to traditional human-centric evaluation methods. Our findings reveal that while LLM-as-judge models offer a scalable solution comparable to human raters, humans may still excel at detecting subtle, context-specific nuances. Our research contributes to the growing body of knowledge on AI-assisted text analysis. Further, we provide recommendations for future research, emphasizing the need for careful consideration when generalizing LLM-as-judge models across various contexts and use cases.
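    To illustrate the kind of agreement statistics this study relies on, the following self-contained Python sketch computes two of them, Cohen's kappa and Spearman's rho, for a judge's ratings against a human rater's ratings. The 5-point scores below are invented purely for the example; they are not data from the paper, and the paper additionally uses Krippendorff's alpha, which is omitted here.

    ```python
    from collections import Counter

    def cohens_kappa(a, b):
        """Agreement between two raters beyond what chance predicts."""
        n = len(a)
        p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
        ca, cb = Counter(a), Counter(b)
        p_e = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)  # chance agreement
        return (p_o - p_e) / (1 - p_e)

    def ranks(xs):
        """1-based ranks; tied values share the mean of their ranks."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r, i = [0.0] * len(xs), 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1                            # mean of ranks i+1 .. j+1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    def spearman_rho(a, b):
        """Spearman's rho: Pearson correlation of the rank vectors."""
        ra, rb = ranks(a), ranks(b)
        n = len(ra)
        ma, mb = sum(ra) / n, sum(rb) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
        sa = sum((x - ma) ** 2 for x in ra) ** 0.5
        sb = sum((y - mb) ** 2 for y in rb) ** 0.5
        return cov / (sa * sb)

    # Invented 5-point thematic-alignment scores for ten summaries:
    human = [5, 4, 4, 3, 5, 2, 4, 3, 5, 1]
    judge = [5, 4, 3, 3, 5, 2, 4, 2, 4, 1]
    print(f"kappa = {cohens_kappa(human, judge):.2f}")       # kappa = 0.62
    print(f"rho   = {spearman_rho(human, judge):.2f}")       # rho   = 0.93
    ```

    Kappa asks whether the raters assign the same label more often than their label frequencies alone would predict, while rho asks whether they order the summaries similarly, so the two statistics can diverge on the same data.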

    More Information