Author: Aula Blog Editor

  • Qualitative Insights Tool (QualIT): LLM Enhanced Topic Modeling

    Topic modeling is a widely used technique for uncovering thematic structures in large text corpora. However, most topic modeling approaches, e.g., Latent Dirichlet Allocation (LDA), struggle to capture the nuanced semantics and contextual understanding required to accurately model complex narratives. Recent advancements in this area include methods like BERTopic, which have demonstrated significantly improved topic coherence and thus established a new standard for benchmarking. In this paper, we present a novel approach, the Qualitative Insights Tool (QualIT), that integrates large language models (LLMs) with existing clustering-based topic modeling approaches. Our method leverages the deep contextual understanding and powerful language generation capabilities of LLMs to enrich the clustering-based topic modeling process. We evaluate our approach on a large corpus of news articles and demonstrate substantial improvements in topic coherence and topic diversity compared to baseline topic modeling techniques. On the 20 ground-truth topics, our method shows 70% topic coherence (vs. 65% and 57% for the benchmarks) and 95.5% topic diversity (vs. 85% and 72%). Our findings suggest that the integration of LLMs can unlock new opportunities for topic modeling of dynamic and complex text data, as is common in talent management research contexts.
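Of the two metrics reported above, topic diversity is straightforward to reproduce: it is commonly defined as the fraction of unique words among the top-k keywords of all topics. A minimal sketch of that common definition (not the authors' implementation) in Python:

```python
def topic_diversity(topics: list[list[str]]) -> float:
    """Fraction of unique keywords across all topic keyword lists.

    A score of 1.0 means no keyword is shared between topics;
    lower scores indicate redundant, overlapping topics.
    """
    all_words = [word for topic in topics for word in topic]
    if not all_words:
        return 0.0
    return len(set(all_words)) / len(all_words)


# Example: two topics that share one keyword ("growth").
topics = [["salary", "promotion", "growth"], ["growth", "culture", "team"]]
print(round(topic_diversity(topics), 3))  # 5 unique words out of 6 -> 0.833
```

Under this definition, the reported 95.5% diversity means that almost every keyword QualIT surfaces is unique to its topic.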

    More Information

  • Skills Lab Panel “Building Bridges”, for the Culture and Cohesion Summit.

    Join Victoria Kuketz for the intercultural Skills Lab panel “Building Bridges” at the Culture and Cohesion Summit.

    More Information

  • The Climate Imperative: How AI Can Transform Africa’s Future

    Africa contributes minimally to global greenhouse gas emissions but bears a disproportionate burden of climate change impacts. This article explores how artificial intelligence (AI) can bolster conservation and sustainability efforts across the continent. While challenges such as technological import reliance and digital divides persist, AI offers transformative potential by enhancing early prediction, disaster preparedness, and environmental management. Examples like Rwanda’s Wastezon, Ghana’s Okuafo Foundation, and Kenya’s Kuzi illustrate successful AI-driven initiatives. The article proposes adapting a public health prevention model (primary, secondary, and tertiary prevention) to structure AI-based environmental interventions. This approach would enable early detection of climate risks, timely mitigation efforts, and rehabilitation of damaged ecosystems. The authors also caution about AI’s environmental costs, including energy-intensive operations and resource extraction, advocating for ethical and Africa-centered AI solutions. Overall, the article argues that innovative, community-driven, and preventive uses of AI are essential for building climate resilience in Africa.

    More Information

  • Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems

    Artificial Intelligence (AI) has paved the way for revolutionary decision-making processes which, if harnessed appropriately, can contribute to advancements in various sectors, from healthcare to economics. However, its black-box nature presents significant ethical challenges related to bias and transparency. AI applications are heavily affected by biases, which produce inconsistent and unreliable findings, incur significant costs and consequences, and highlight and perpetuate inequalities and unequal access to resources. Hence, developing safe, reliable, ethical, and trustworthy AI systems is essential. Our team of researchers, part of the Transdisciplinary Scholarship Initiative at the University of Calgary, conducts research on Trustworthy and Responsible AI, including fairness, bias mitigation, reproducibility, generalization, interpretability, and authenticity. In this paper, we review and discuss the intricacies of AI biases: definitions, methods of detection and mitigation, and metrics for evaluating bias. We also discuss open challenges regarding the trustworthiness and widespread application of AI across diverse domains of human-centric decision making, as well as guidelines to foster Responsible and Trustworthy AI models.

    More Information

  • Unravelling socio-technological barriers to AI integration: A qualitative study of Southern African newsrooms

    This study explores the socio-technological barriers to the adoption of artificial intelligence (AI)-powered solutions in five countries of the Global South: South Africa, Lesotho, Eswatini, Botswana, and Zimbabwe. Through 20 in-depth interviews with key stakeholders, it examines the distribution and circulation of AI technologies within selected newsrooms. Furthermore, the article explores socio-technological obstacles to the integration of AI among journalists. Lastly, it examines the consequences of these socio-technological obstacles for journalism. The article specifically seeks to answer three questions: How are AI technologies integrated in southern African newsrooms? What are the socio-technological barriers attendant to the use of AI in selected news organisations of sub-Saharan Africa? What are the implications of these socio-technological barriers for the process of news production in these newsrooms?

    More Information

  • Dataset: Engineering Education Research on LLMs (Full Systematic Review)

    This dataset is available for collaborations. Please contact our Research Director, Dr. Branislav Radeljic, for more information.

    Used in: Path to Personalization: A Systematic Review of GenAI in Engineering Education

    More Information

  • Path to Personalization: A Systematic Review of GenAI in Engineering Education

    This systematic review provides a comprehensive synthesis of 162 articles on Generative Artificial Intelligence (GenAI) in engineering education (EE), making two specific contributions to advance research in the space. First, we develop a taxonomy that categorizes the current research landscape, identifying key areas such as Coding or Writing Assistance, Design Methodology, and Personalization. Second, we highlight significant gaps and opportunities, such as a lack of customer-centricity and a need for increased transparency in future research, paving the way for increased personalization in GenAI-augmented engineering education. There are indications of widening lines of enquiry, for example into human-AI collaboration and multidisciplinary learning. We conclude that there are opportunities to enrich engineering epistemology and competencies through the use of GenAI tools for educators and students, as well as a need for further research into best and novel practices. Our discussion serves as a roadmap for researchers and educators, guiding the development of GenAI applications that will continue to transform the engineering education landscape, in classrooms and in the workforce.

    More Information

  • Reconciling methodological paradigms: Employing large language models as novice qualitative research assistants in talent management research

    Qualitative data collection and analysis approaches, such as those employing interviews and focus groups, provide rich insights into customer attitudes, sentiment, and behavior. However, manually analyzing qualitative data requires extensive time and effort to identify relevant topics and thematic insights. This study proposes a novel approach to address this challenge by leveraging Retrieval Augmented Generation (RAG) based Large Language Models (LLMs) for analyzing interview transcripts. The novelty of this work lies in framing the research inquiry as one augmented by an LLM that serves as a novice research assistant. This research explores the mental model of LLMs serving as novice qualitative research assistants for researchers in the talent management space. A RAG-based LLM approach is extended to enable topic modeling of semi-structured interview data, showcasing the versatility of these models beyond their traditional use in information retrieval and search. Our findings demonstrate that the LLM-augmented RAG approach can successfully extract topics of interest, with significant coverage compared to manually generated topics from the same dataset. This establishes the viability of employing LLMs as novice qualitative research assistants. Additionally, the study recommends that researchers leveraging such models lean heavily on quality criteria used in traditional qualitative research to ensure rigor and trustworthiness of their approach. Finally, the paper presents key recommendations for industry practitioners seeking to reconcile the use of LLMs with established qualitative research paradigms, providing a roadmap for the effective integration of these powerful, albeit novice, AI tools in the analysis of qualitative datasets within talent management research.
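The retrieval half of a RAG pipeline like the one described can be sketched with simple lexical similarity: transcripts are split into chunks, the chunks most relevant to a research question are retrieved, and those excerpts would then be placed into an LLM prompt asking the model, as a "novice research assistant", to propose topics grounded in them. A minimal sketch under those assumptions, not the authors' implementation (a production system would use learned embeddings rather than term-frequency cosine similarity):

```python
import re
from collections import Counter
from math import sqrt


def chunk_transcript(text: str, size: int = 40) -> list[str]:
    """Split a transcript into fixed-size word chunks for retrieval."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def tf(text: str) -> Counter:
    """Bag-of-words term frequencies (lowercased)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query (the 'R' in RAG).

    The retrieved chunks would next be inserted into an LLM prompt for
    topic extraction; that generation step is omitted here.
    """
    q = tf(query)
    return sorted(chunks, key=lambda c: cosine(q, tf(c)), reverse=True)[:k]


chunks = chunk_transcript(
    "I mostly stayed for the mentorship and clear promotion paths. "
    "The cafeteria food was fine but parking was always a problem.",
    size=10,
)
print(retrieve("career growth and promotion", chunks, k=1))
```

Grounding the LLM's topic suggestions in retrieved excerpts, rather than the full corpus, is what lets the approach scale to long interview transcripts while keeping each suggestion traceable to source passages.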

    More Information

  • Tech Tool: TechAIRS Confidential AI Reporting System Application

    A curated OODA (observe, orient, decide, act) triage system for AI incident reporting. This tool is available for collaborations. Please contact our Technical Director, François Pelletier, for more information.

    More Information

  • Toward a trustworthy and inclusive data governance policy for the use of artificial intelligence in Africa

    This article proposes five ideas that the design of data governance policies for the trustworthy use of artificial intelligence (AI) in Africa should consider. The first is for African states to assess their domestic strategic priorities, strengths, and weaknesses. The second is a human-centric approach to data governance, which involves data processing practices that protect the security of personal data and the privacy of data subjects; ensure that personal data are processed in a fair, lawful, and accountable manner; minimize the harmful effect of personal data misuse or abuse on data subjects and other victims; and promote a beneficial, trusted use of personal data. The third is for the data policy to be in alignment with supranational rights-respecting standards like the African Charter on Human and Peoples' Rights and the African Union Convention on Cyber Security and Personal Data Protection. The fourth is for states to be critical about the extent to which AI systems can be relied on in certain public sectors or departments. The fifth and final proposition is the need to prioritize the use of representative and interoperable data and to ensure a transparent procurement process for AI systems from abroad where no local options exist.

    More Information

  • Collision Toronto: Booth

    Emmanuel Taiwo and Tammy Mackenzie represented the Aula Fellowship on the social good start-up stage. Thank you, Collision Toronto, for this excellent experience with old and new friends, working for good AI together in excellent company. Thanks also to Aula Fellows Rubaina Khan, Victoria Kuketz, and Marisa Eleuterio for technical support throughout!

    More Information

  • Collision Toronto: Presentation

    Victoria Kuketz presented the Aula Fellowship from the Non-Profits Stage. Thank you, Victoria!

    More Information