Leslie Salgado, Ph.D.
Biography
LinkedIn
Google Scholar

  • Aula Convening Guidelines, 2025 Ed.

    The Aula Convening Guidelines are for people working on tech governance and AI in society: six guidelines for convening communities for legitimate collective decision-making on how AI is implemented in society.

    Since our founding in 2023, Aula Fellows have hosted and participated in hundreds of conversations on AI in more than 30 countries and regions. We have spoken with people who have a variety of needs, spanning learning AI, living with AI, working with AI, and shaping AI.

    We developed these guidelines over three project phases, drawing on the common elements of conversations in which communities make decisions about AI. Our goal is not a new type of consultation, but to ensure that community convenings are conducive to collective decision-making on AI.

    In 2026, we will reach out to partner organizations to continue refining these guidelines and to bring them to more groups of people.

    The guidelines are complete and available now under a Creative Commons license in this V.01, 2025 Edition.

    Link to the PDF.

  • Tackling AI Transparency Concerns in Biomedical Research: Bringing a Communication-Participatory Approach to the Conversation

    Announcement by Leslie Salgado: “Happy to announce that my book chapter ‘Tackling AI Transparency Concerns in Biomedical Research: Bringing a Communication-Participatory Approach to the Conversation’ is now published as part of the book ‘Artificial Intelligence in Biobanking: Ethical, Legal and Societal Challenges.’ In my chapter, I address questions concerning transparency, explainability, and interpretability from a communication-participatory stance.” Congratulations, Leslie, and thanks for your work!

    Read it at Routledge

  • Developing the Permanent Symposium on AI (poster): Presented at Engineering and Public Policy Division (EPP) Poster Session

    A multidisciplinary, reflective autoethnography by some of the people who are building the Permanent Symposium on AI. Includes the history of the project.

    RQ 1: What challenges unite AI policy and technology?

    RQ 2: How to design the PSAI?

    RQ 3: What factors influence the adoption and scalability of the PSAI?

    This is the flagship project of the Aula Fellowship.

    Read the Poster

  • Whole-Person Education for AI Engineers: Presented to CEEA (Peer Reviewed)

    This autoethnographic study explores the need for interdisciplinary education spanning both technical and philosophical skills. As such, this study leverages whole-person education as a theoretical approach needed in AI engineering education to address the limitations of current paradigms that prioritize technical expertise over ethical and societal considerations. Drawing on a collaborative autoethnography approach of fourteen diverse stakeholders, the study identifies key motivations driving the call for change, including the need for global perspectives, bridging the gap between academia and industry, integrating ethics and societal impact, and fostering interdisciplinary collaboration. The findings challenge the myths of technological neutrality and technosaviourism, advocating for a future where AI engineers are equipped not only with technical skills but also with the ethical awareness, social responsibility, and interdisciplinary understanding necessary to navigate the complex challenges of AI development. The study provides valuable insights and recommendations for transforming AI engineering education to ensure the responsible development of AI technologies.

    More Information

  • Work in Progress: Exclusive Rhetoric in AI Conference Mission Statements

    AI conferences are pivotal spaces for knowledge exchange, collaboration, and shaping the trajectory of research, practice, and education. This paper presents preliminary findings from an analysis of AI conference mission statements, investigating how their stated goals affect who is welcomed into AI conversations. We find that many mission statements reflect assumptions that may unintentionally narrow participation and reinforce disciplinary and institutional silos. This limits engagement from a broad range of contributors, including educators, students, working professionals, and even younger users, who are essential to a thriving AI ecosystem. We advocate for clearer framing that supports democratizing and demystifying AI. By broadening participation and intentionally fostering cross-sector and interdisciplinary connections, AI conferences can help unlock more innovation.

    More Information

  • Towards Real Diversity and Gender Equality in Artificial Intelligence

    This is an Advancement Report for the Global Partnership on Artificial Intelligence (GPAI) project “Towards Real Diversity and Gender Equality in Artificial Intelligence: Evidence-Based Promising Practices and Recommendations.” It describes, at a high level, the strategy, approach, and progress of the project thus far in its efforts to provide governments and other stakeholders of the artificial intelligence (AI) ecosystem with recommendations, tools, and promising practices to integrate Diversity and Gender Equality (DGE) considerations into the AI life cycle and related policy-making. The report starts with an overview of the human rights perspective, which serves as the framework upon which this project is building. By acknowledging domains where AI systems can pose risks and harms to global populations, and further, where they pose disproportionate risks and harms to women and other marginalized populations due to a lack of consideration for these groups throughout the AI life cycle, the need to address such inequalities becomes clear.

    More Information

  • Pre-conference workshop: Université de l’Alberta Annual Conference

    We were pleased to sponsor the 2025 Campus Saint-Jean Annual Conference of the University of Alberta. Two Aula Fellows attended and offered a workshop for faculty. The event was well attended. As Fellows, we were happy to receive feedback that the workshop empowered faculty to continue conversations on the complexities of AI in society and at the University, beyond the conference and into their fields of work. Some of the attendees have since joined us as Fellows.

    More Information

  • United Nations Commission on the creation of a Scientific Panel on AI

    Consultation on the governance of the UN’s Scientific Advisory Panel on AI. Posted on LinkedIn.

    More Information

  • What We Do Not Know: GPT Use in Business and Management

    This systematic review examines peer-reviewed studies on the application of GPT in business management, revealing significant knowledge gaps. Despite identifying interesting research directions such as best practices, benchmarking, performance comparisons, and social impacts, our analysis yields only 42 relevant studies from the 22 months since its release. There are so few studies looking at a particular sector or subfield that management researchers, business consultants, policymakers, and journalists do not yet have enough information to make well-founded statements on how GPT is being used in businesses. The primary contribution of this paper is a call to action for further research. We provide a description of current research and identify knowledge gaps on the use of GPT in business. We cover the management subfields of finance, marketing, human resources, strategy, operations, production, and analytics, excluding retail and sales. We discuss gaps in knowledge of GPT’s potential consequences on employment, productivity, environmental costs, oppression, and small businesses. We propose how management consultants and the media can help fill those gaps. We call for practical work on business control systems as they relate to existing and foreseeable AI-related business challenges. This work may be of interest to managers, to management researchers, and to people working on AI in society.

    More Information

  • Les leviers du pouvoir dans l’IA (The Levers of Power in AI)

    The Aula Fellowship is proud to sponsor the annual faculty conference of Campus Saint-Jean at the University of Alberta. Our director, Tammy Mackenzie, is presenting on the levers of power in AI in Albertan society and the Francophonie, with the goal of putting decision-making power back into the hands of the people involved: all of us.

    For more information, see: https://www.ualberta.ca/en/campus-saint-jean/congress/index.html

  • Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems

    Artificial Intelligence (AI) has paved the way for revolutionary decision-making processes which, if harnessed appropriately, can contribute to advancements in various sectors, from healthcare to economics. However, its black-box nature presents significant ethical challenges related to bias and transparency. AI applications are hugely impacted by biases, producing inconsistent and unreliable findings that lead to significant costs and consequences and that highlight and perpetuate inequalities and unequal access to resources. Hence, developing safe, reliable, ethical, and Trustworthy AI systems is essential. Our team of researchers working with Trustworthy and Responsible AI, part of the Transdisciplinary Scholarship Initiative within the University of Calgary, conducts research on Trustworthy and Responsible AI, including fairness, bias mitigation, reproducibility, generalization, interpretability, and authenticity. In this paper, we review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias. We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making, as well as guidelines to foster Responsible and Trustworthy AI models.

    More Information

  • Reimagining AI Conference Mission Statements to Promote Inclusion in the Emerging Institutional Field of AI

    AI conferences play a crucial role in education by providing a platform for knowledge sharing, networking, and collaboration, shaping the future of AI research and applications, and informing curricula and teaching practices. This work-in-progress, innovative practice paper presents preliminary findings from textual analysis of mission statements from select artificial intelligence (AI) conferences to uncover information gaps and opportunities that hinder inclusivity and accessibility in the emerging institutional field of AI. By examining language and focus, we identify potential barriers to entry for individuals interested in the AI domain, including educators, researchers, practitioners, and students from underrepresented groups. Our paper employs the Language as Symbolic Action (LSA) framework [1] to reveal information gaps in areas such as no explicit emphasis on DEI, undefined promises of business and personal empowerment and power, and occasional elitism. These preliminary findings uncover opportunities for improvement, including the need for more inclusive language, an explicit commitment to diversity, equity, and inclusion (DEI) initiatives, clearer communication about conference goals and expectations, and emphasis on strategies to address power imbalances and promote equal opportunities for participation. The impact of our work is twofold: 1) we demonstrate preliminary results from applying the Language as Symbolic Action framework to text analysis of mission statements, and 2) our preliminary findings will be valuable to the education community in understanding gaps in current AI conferences and, consequently, outreach. Our work is thus of practical use for conference organizers, engineering and CS educators, researchers in other AI-related domains, and the broader AI community. Our paper highlights the need for more intentional and inclusive conference design to foster a diverse and vibrant community of AI professionals.

    More Information