This autoethnographic study explores the need for interdisciplinary education spanning both technical and philosophical skills. To that end, it leverages whole-person education as a theoretical approach for AI engineering education, addressing the limitations of current paradigms that prioritize technical expertise over ethical and societal considerations. Drawing on a collaborative autoethnography involving fourteen diverse stakeholders, the study identifies key motivations driving the call for change, including the need for global perspectives, bridging the gap between academia and industry, integrating ethics and societal impact, and fostering interdisciplinary collaboration. The findings challenge the myths of technological neutrality and technosaviourism, advocating for a future where AI engineers are equipped not only with technical skills but also with the ethical awareness, social responsibility, and interdisciplinary understanding necessary to navigate the complex challenges of AI development. The study provides valuable insights and recommendations for transforming AI engineering education to ensure the responsible development of AI technologies.
-

Work in Progress: Exclusive Rhetoric in AI Conference Mission Statements
AI conferences are pivotal spaces for knowledge exchange, collaboration, and shaping the trajectory of research, practice, and education. This paper presents preliminary findings from an analysis of AI conference mission statements, investigating how their stated goals affect who is welcomed into AI conversations. We find that many mission statements reflect assumptions that may unintentionally narrow participation and reinforce disciplinary and institutional silos. This limits engagement from a broad range of contributors, including educators, students, working professionals, and even younger users, who are essential to a thriving AI ecosystem. We advocate for clearer framing that supports democratizing and demystifying AI. By broadening participation and intentionally fostering cross-sector and interdisciplinary connections, AI conferences can help unlock more innovation.
-

Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session
A multidisciplinary, reflective autoethnography by some of the people who are building the Permanent Symposium on AI. Includes the history of the project.
RQ 1: What challenges unite AI policy and technology?
RQ 2: How should the PSAI be designed?
RQ 3: What factors influence the adoption and scalability of the PSAI?
This is the flagship project of the Aula Fellowship.
-

Towards Real Diversity and Gender Equality in Artificial Intelligence
This is an Advancement Report for the Global Partnership on Artificial Intelligence (GPAI) project “Towards Real Diversity and Gender Equality in Artificial Intelligence: Evidence-Based Promising Practices and Recommendations.” It describes, at a high level, the strategy, approach, and progress of the project thus far in its efforts to provide governments and other stakeholders of the artificial intelligence (AI) ecosystem with recommendations, tools, and promising practices to integrate Diversity and Gender Equality (DGE) considerations into the AI life cycle and related policy-making. The report begins with an overview of the human rights perspective that frames the project. By acknowledging domains where AI systems can pose risks and harms to global populations, and where those risks and harms fall disproportionately on women and other marginalized populations because these groups are not considered throughout the AI life cycle, the report makes clear the need to address such inequalities.
-

What We Do Not Know: GPT Use in Business and Management
This systematic review examines peer-reviewed studies on the application of GPT in business and management, revealing significant knowledge gaps. Despite identifying interesting research directions such as best practices, benchmarking, performance comparisons, and social impacts, our analysis yields only 42 relevant studies from the 22 months since GPT's release. There are so few studies examining any particular sector or subfield that management researchers, business consultants, policymakers, and journalists do not yet have enough information to make well-founded statements on how GPT is being used in businesses. The primary contribution of this paper is a call to action for further research. We describe current research and identify knowledge gaps on the use of GPT in business. We cover the management subfields of finance, marketing, human resources, strategy, operations, production, and analytics, excluding retail and sales. We discuss gaps in knowledge of GPT's potential consequences for employment, productivity, environmental costs, oppression, and small businesses. We propose how management consultants and the media can help fill those gaps. We call for practical work on business control systems as they relate to existing and foreseeable AI-related business challenges. This work may be of interest to managers, management researchers, and people working on AI in society.
-

Pre-conference workshop: Université de l’Alberta Conférence Annuelle
We were pleased to sponsor the 2025 Campus St Jean Annual Conference of the University of Alberta. Two Aula Fellows were present, and offered a workshop for faculty. The event was well attended. As Fellows, we were happy to receive feedback that the workshop empowered faculty to continue conversations on the complexities of AI in society and at the University, outside the conference and into their fields of work. Some of the attendees have since joined us as Fellows.
-

United Nations Commission on the creation of a Scientific Panel on AI
Consultation on the governance of the UN’s Scientific Advisory Panel on AI. Posted on LinkedIn.
-

Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems
Artificial Intelligence (AI) has paved the way for revolutionary decision-making processes which, if harnessed appropriately, can contribute to advancements in various sectors, from healthcare to economics. However, its black-box nature presents significant ethical challenges related to bias and transparency. AI applications are heavily affected by bias, producing inconsistent and unreliable findings that lead to significant costs and consequences and that highlight and perpetuate inequalities and unequal access to resources. Hence, developing safe, reliable, ethical, and trustworthy AI systems is essential. Our team of researchers working on Trustworthy and Responsible AI, part of the Transdisciplinary Scholarship Initiative at the University of Calgary, conducts research on Trustworthy and Responsible AI, including fairness, bias mitigation, reproducibility, generalization, interpretability, and authenticity. In this paper, we review and discuss the intricacies of AI biases, their definitions, methods of detection and mitigation, and metrics for evaluating bias. We also discuss open challenges regarding the trustworthiness and widespread application of AI across diverse domains of human-centric decision making, as well as guidelines to foster Responsible and Trustworthy AI models.
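For readers unfamiliar with bias metrics, the short Python sketch below illustrates two widely used group-fairness measures, demographic parity difference and equal opportunity difference. It is a generic, self-contained example for illustration only, with invented data; it does not reproduce the team's own methods or code.

# Illustrative only: minimal implementations of two common group-fairness metrics.
# The data below is invented for demonstration; this is not the paper's code.

def rate(values):
    """Fraction of positive (1) outcomes in a list of 0/1 values."""
    return sum(values) / len(values) if values else 0.0

def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    g0 = [p for p, g in zip(y_pred, groups) if g == 0]
    g1 = [p for p, g in zip(y_pred, groups) if g == 1]
    return abs(rate(g0) - rate(g1))

def equal_opportunity_difference(y_true, y_pred, groups):
    """Absolute difference in true-positive rates between two groups."""
    tpr = {}
    for g in (0, 1):
        positives = [p for t, p, gr in zip(y_true, y_pred, groups) if gr == g and t == 1]
        tpr[g] = rate(positives)
    return abs(tpr[0] - tpr[1])

if __name__ == "__main__":
    # Hypothetical labels and predictions for ten cases split across two groups.
    y_true = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 1, 0, 0]
    groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    print("Demographic parity difference:", demographic_parity_difference(y_pred, groups))
    print("Equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, groups))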
-

Beyond the algorithm: Empowering AI practitioners through liberal education
As AI technology continues to transform society, there is a growing need for engineers and technologists to develop interdisciplinary skills to address complex, society-wide problems. However, there is a gap in understanding how to effectively design and deliver interdisciplinary education programs for AI-related training. This paper addresses this gap by reporting on a successful summer school program that brought together specialists from around the world to engage in deliberations on responsible AI, as part of a Summer School in Responsible AI led by Mila – Quebec Artificial Intelligence Institute. Through deep-dive autoethnographic reflections from five individuals, who were either organizers or participants, augmented with end-of-program feedback, we provide a rich description of the program’s planning, activities, and impact. Specifically, our study draws from engineering education research, bridging the gap between research and practice to answer three research questions related to the program: (1) How did the program design enable a more effective understanding of interdisciplinary problem-sets? (2) How did participants experience the interdisciplinary work of the program? (3) Did the program affect participants’ impact on interdisciplinary problem-sets after the program? Our findings highlight the benefits of interdisciplinary, holistic, and hands-on approaches to AI education and provide insights for fellow engineering education researchers on how to design effective programs in this field.
-

Reimagining AI Conference Mission Statements to Promote Inclusion in the Emerging Institutional Field of AI
AI conferences play a crucial role in education by providing a platform for knowledge sharing, networking, and collaboration, shaping the future of AI research and applications, and informing curricula and teaching practices. This work-in-progress, innovative practice paper presents preliminary findings from textual analysis of mission statements from select artificial intelligence (AI) conferences to uncover information gaps and opportunities that hinder inclusivity and accessibility in the emerging institutional field of AI. By examining language and focus, we identify potential barriers to entry for individuals interested in the AI domain, including educators, researchers, practitioners, and students from underrepresented groups. Our paper employs the Language as Symbolic Action (LSA) framework [1] to reveal information gaps such as the absence of explicit emphasis on DEI, vague promises of business and personal empowerment and power, and occasional elitism. These preliminary findings uncover opportunities for improvement, including the need for more inclusive language, an explicit commitment to diversity, equity, and inclusion (DEI) initiatives, clearer communication about conference goals and expectations, and an emphasis on strategies to address power imbalances and promote equal opportunities for participation. The impact of our work is twofold: 1) we demonstrate preliminary results from applying the Language as Symbolic Action framework to textual analysis of mission statements, and 2) our preliminary findings will be valuable to the education community in understanding gaps in current AI conferences and, consequently, in outreach. Our work is thus of practical use for conference organizers; educators in engineering, CS, and other AI-related domains; researchers; and the broader AI community. Our paper highlights the need for more intentional and inclusive conference design to foster a diverse and vibrant community of AI professionals.
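As a purely hypothetical illustration of how mission-statement text can be scanned for explicit inclusion-related language, the Python sketch below counts DEI-related terms in two invented example statements. This is not the Language as Symbolic Action analysis used in the paper, which is interpretive and rhetorical rather than computational; the term list and statements are assumptions introduced only for demonstration.

# Hypothetical illustration: flag mission statements with no explicit DEI terms.
# Term list and example statements are invented; not quotes from any real conference.
from collections import Counter
import re

DEI_TERMS = {"diversity", "equity", "inclusion", "accessibility", "underrepresented"}

def dei_term_counts(mission_statement: str) -> Counter:
    """Count occurrences of inclusion-related terms in a mission statement."""
    words = re.findall(r"[a-z]+", mission_statement.lower())
    return Counter(w for w in words if w in DEI_TERMS)

if __name__ == "__main__":
    statements = {
        "Conference A": "We advance state-of-the-art AI research among leading experts.",
        "Conference B": "We welcome a diverse community and promote equity and inclusion in AI.",
    }
    for name, text in statements.items():
        counts = dei_term_counts(text)
        print(f"{name}: {'no explicit DEI language' if not counts else dict(counts)}")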
-

From the classroom to the newsroom: A critical route to introduce AI in journalism education
From a computer vision application to monitor election transparency in Argentina to automated real estate texts in Norway, and everything in between, Artificial Intelligence-powered tools are changing journalism. Scholars have taken note, and academic work on AI in journalism has gained considerable ground in the last five years. However, research on how journalism education deals with AI's influence on the industry is scarce. Based on a self-training method using freely available online courses for journalists and a review of university teaching initiatives, this article proposes key elements for tracing teaching trajectories to introduce AI into the journalism curriculum. Included are recommendations for drawing a path to teach journalism students to think critically about AI and, at the same time, to understand the tools available for reporting and investigating in a complex context in which journalism finds itself in a profound state of crisis.

