Sector: Economy

Hard Questions: Economy

  • Nature Opinion: The path for AI in poor nations does not need to be paved with billions

    NATURE

    Researchers in low- and middle-income countries show that home-grown artificial-intelligence technologies can be developed, even without large external investments.

    More Information

  • The philanthrocapitalism of Google News Initiative in Africa, Latin America, and the Middle East–empirical reflections

    In recent years, media organizations globally have increasingly benefited from financial support from digital platforms. In 2018, Google launched the Google News Initiative (GNI) Innovation Challenge aimed at bolstering journalism by encouraging innovation in media organizations. This study, conducted through 36 in-depth interviews with GNI beneficiaries in Africa, Latin America, and the Middle East, reveals that despite its narrative of enhancing technological innovation for the media’s future, this scheme inadvertently fosters dependence and extends the philanthrocapitalism concept to the media industry on a global scale. Employing a theory-building approach, our research underscores the emergence of a new form of ‘philanthrocapitalism’ that prompts critical questions about the dependency of media organizations on big tech and the motives of these tech giants in their evolving relationship with such institutions. We also demonstrate that the GNI Innovation Challenge, while ostensibly promoting sustainable business models through technological innovation, poses challenges for organizations striving to sustain and develop these projects. The path to sustainability proposed by the GNI is found to be indirect and difficult for organizations to navigate, hindering their adoption of new technologies. Additionally, the study highlights the creation of a dependency syndrome among news organizations, driven by the perception that embracing GNI initiatives is crucial for survival in the digital age. Ultimately, the research contributes valuable insights to the understanding of these issues, aiming to raise awareness among relevant stakeholders and conceptualize philanthrocapitalism through a new lens.

    More Information

  • Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session

    A multidisciplinary, reflective autoethnography by some of the people who are building the Permanent Symposium on AI. Includes the history of the project.

    RQ 1: Challenges that unite AI policy & tech

    RQ 2: How to design the PSAI?

    RQ 3: What factors influence the adoption and scalability of the PSAI?

    This is the Flagship project of the Aula Fellowship.

    Read the Poster

  • Presenting to the United Nations

    Our Director, Tammy Mackenzie, was honoured to present our recommendations to the United Nations Committee on the Formation of a Scientific Panel on AI. We recommended that the committee include civil society in this work and that meetings be held in countries where safe travel can be guaranteed for delegates. You can consult our recommendations here.

    See the PDF of the Consultation here: Google Drive

  • Université de l’Alberta Annual Conference: The Levers of Power in AI

    The Levers of Power in AI
    Speaker: Tammy Mackenzie
    Congrès du Campus Saint-Jean, Université de l’Alberta, April 25, 2025, Edmonton, AB (Canada).

    More Information

  • What We Do Not Know: GPT Use in Business and Management

    This systematic review examines peer-reviewed studies on the application of GPT in business management, revealing significant knowledge gaps. Despite identifying interesting research directions such as best practices, benchmarking, performance comparisons, and social impacts, our analysis yields only 42 relevant studies in the 22 months since GPT's release. There are so few studies of any particular sector or subfield that management researchers, business consultants, policymakers, and journalists do not yet have enough information to make well-founded statements on how GPT is being used in businesses. The primary contribution of this paper is a call to action for further research. We provide a description of current research and identify knowledge gaps on the use of GPT in business. We cover the management subfields of finance, marketing, human resources, strategy, operations, production, and analytics, excluding retail and sales. We discuss gaps in knowledge of GPT's potential consequences for employment, productivity, environmental costs, oppression, and small businesses. We propose how management consultants and the media can help fill those gaps. We call for practical work on business control systems as they relate to existing and foreseeable AI-related business challenges. This work may be of interest to managers, to management researchers, and to people working on AI in society.

    More Information

  • World AI: Women in AI

    The Aula Fellowship was present to discuss social and environmental concerns related to the marketing of AI as a panacea. The event brings together technology companies, civil society, and decision-makers. We connected with other non-profits and universities in this sector and built collaborations with several attendees and presenters.

    More Information

  • Easy to read, easier to write: the politics of AI in consultancy trade research

    AI systems have been rapidly implemented across all sectors, in organizations of all sizes, and in every country. In this article, we conduct a bibliometric review of references in recent consultancy reports on AI use in business, policymaking, and strategic management. The uptake of these reports is high. We find three positive factors: a focus on client-facing solutions, speed of production, and ease of access. We find that the evidentiary quality of reports is often unsatisfactory because of references-clubbing with other consultancy reports, references to surveys without transparency, or poor or missing references. To optimize the utility of consultancy reports for decision-makers and their pertinence for policy, we present recommendations for the quality assessment of consultancy reporting on AI's use in organizations. We discuss how to improve general knowledge of AI use in business and policymaking through effective collaborations between consultants and management scientists. In addition to being of interest to managers and consultants, this work may also be of interest to media, political scientists, and business-school communities.

    More Information

  • Potential and perils of large language models as judges of unstructured textual data

    Rapid advancements in large language models (LLMs) have unlocked remarkable capabilities for processing and summarizing unstructured text data. This has implications for the analysis of rich, open-ended datasets, such as survey responses, where LLMs hold the promise of efficiently distilling key themes and sentiments. However, as organizations increasingly turn to these powerful AI systems to make sense of textual feedback, a critical question arises: can we trust LLMs to accurately represent the perspectives contained within these text-based datasets? While LLMs excel at generating human-like summaries, there is a risk that their outputs may inadvertently diverge from the true substance of the original responses. Discrepancies between the LLM-generated outputs and the actual themes present in the data could lead to flawed decision-making, with far-reaching consequences for organizations. This research investigates the effectiveness of LLM-as-judge models in evaluating the thematic alignment of summaries generated by other LLMs. We utilized an Anthropic Claude model to generate thematic summaries from open-ended survey responses, with Amazon's Titan Express, Nova Pro, and Meta's Llama serving as judges. This LLM-as-judge approach was compared to human evaluations using Cohen's kappa, Spearman's rho, and Krippendorff's alpha, validating a scalable alternative to traditional human-centric evaluation methods. Our findings reveal that while LLM-as-judge models offer a scalable solution comparable to human raters, humans may still excel at detecting subtle, context-specific nuances. Our research contributes to the growing body of knowledge on AI-assisted text analysis. Further, we provide recommendations for future research, emphasizing the need for careful consideration when generalizing LLM-as-judge models across various contexts and use cases.

    A minimal worked illustration of the agreement metrics named here appears below, after the link.

    More Information
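
    The sketch below is a minimal, self-contained illustration of two of the agreement statistics named in the abstract above. The ratings are invented placeholders, not data from the study, and the sketch assumes scikit-learn and SciPy are available.

        # Hypothetical illustration of comparing an LLM judge against a human rater.
        # The ratings are invented placeholders, not data from the study.
        from sklearn.metrics import cohen_kappa_score  # Cohen's kappa (categorical agreement)
        from scipy.stats import spearmanr              # Spearman's rho (rank correlation)

        # Thematic-alignment ratings on a 1-5 scale for ten summaries:
        # one set from a human rater, one from an LLM acting as judge.
        human_ratings = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
        llm_judge_ratings = [5, 4, 3, 3, 5, 2, 4, 4, 5, 4]

        # Cohen's kappa corrects raw agreement for agreement expected by chance.
        kappa = cohen_kappa_score(human_ratings, llm_judge_ratings)

        # Spearman's rho asks whether the two raters rank the summaries similarly,
        # even if their absolute scores differ.
        rho, p_value = spearmanr(human_ratings, llm_judge_ratings)

        print(f"Cohen's kappa: {kappa:.2f}")
        print(f"Spearman's rho: {rho:.2f} (p = {p_value:.3f})")

    Krippendorff's alpha, also cited in the abstract, is not included in scikit-learn or SciPy and is typically computed with a dedicated package such as the third-party krippendorff library.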

  • Towards regulating AI: A natural, labour and capital resources perspective

    Policymakers who are looking at artificial intelligence (AI) applications are thinking about what we as a society want to achieve and what we need to protect, yet it is not commonly known that AI applications require intensive natural, labour, and capital resources.

    More Information

  • Ceiba Law Firm Annual Retreat

    We were pleased to be of service to Ceiba Law for their annual retreat this year. Our director, Tammy Mackenzie, joined the firm's partners and associates to discuss the changes and opportunities that AI presents for lawyers in cybersecurity and corporate law. A huge shout-out to partners Vanessa Henri, Elodie Meyer, and Shawn Ford for an invigorating retreat and an inspiring firm ethos: a law firm purpose-built for the 21st century.

    For more information on Ceiba Law, see https://ceiba.law/

  • Evaluating Online AI Detection Tools: An Empirical Study Using Microsoft Copilot-Generated Content

    We examine eight freely available online AI detection tools using text samples produced by Microsoft Copilot, assessing their accuracy and consistency. We feed each tool a short sentence and a small paragraph and record its estimate. Our findings reveal significant inconsistencies and limitations, with many tools failing to accurately identify Copilot-authored text. Our results suggest that educators should not rely on these tools to check for AI use.

    An illustrative sketch of tabulating such detector estimates appears below, after the link.

    More Information
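
    As an illustration of the kind of comparison described above, the sketch below tabulates detector estimates for two Copilot-generated samples and counts how many tools flag each one. The tool names, percentages, and threshold are hypothetical placeholders, not results from the study.

        # Hypothetical illustration of tabulating AI-detection estimates.
        # Tool names and percentages are invented placeholders, not results from the study.

        # Each detector is assumed to return an estimated likelihood (in percent)
        # that a text is AI-generated. Both samples here are Copilot-generated,
        # so a correct call is a high score.
        estimates = {
            "detector_a": {"short_sentence": 12.0, "small_paragraph": 88.0},
            "detector_b": {"short_sentence": 55.0, "small_paragraph": 95.0},
            "detector_c": {"short_sentence": 3.0, "small_paragraph": 41.0},
        }

        THRESHOLD = 50.0  # scores above this are treated as "flagged as AI-written"

        for sample in ("short_sentence", "small_paragraph"):
            flagged = sum(1 for scores in estimates.values() if scores[sample] > THRESHOLD)
            print(f"{sample}: {flagged}/{len(estimates)} tools flagged the Copilot text as AI-generated")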