Natalie Perez, Ph.D.
Biography (under construction)
Google Scholar

  • Aula Convening Guideline 2025 Ed.

    The Aula Convening Guidelines, 2025 ed.

    The Aula Convening Guidelines are for people working on technology governance and AI in society: six guidelines for convening communities for legitimate collective decision-making on how AI is implemented in society.

    Since our founding in 2023, Aula Fellows have hosted and participated in hundreds of conversations about AI in more than 30 countries and regions. We have spoken with people who have a variety of needs, spanning Learning AI, Living with AI, Working with AI, and Shaping AI.

    We developed these guidelines over three project phases, drawing on the common elements of conversations in which communities make decisions about AI. Our goal is not a new type of consultation, but to ensure that community convenings are conducive to collective decision-making on AI.

    In 2026 we will reach out to partner organizations to continue refining these guidelines and to bring them to more groups of people.

    The guidelines are complete and available now under a Creative Commons license in this v.01, 2025 Edition.

    Link to the PDF.

  • Levers of Power in the Field of AI

    Forthcoming study, now available on arXiv:

    Levers of Power in the Field of AI
    An Ethnography of Personal Influence in Institutionalization

    Who holds power over decisions in our society? How do these people influence decisions, and how are they influenced? How is this the same or different when it comes to questions about AI? These are some of the questions we set out to understand.

    Abstract: This paper examines how decision-makers in academia, government, business, and civil society navigate questions of power in implementations of artificial intelligence (AI). The study explores how individuals experience and exercise “levers of power”, presented as social mechanisms that shape institutional responses to technological change. The study reports on responses to personalised questionnaires designed to gather insight into each decision-maker’s institutional purview, based on an institutional governance framework developed from the work of neo-institutionalists. Findings present the anonymized, real responses and circumstances of respondents in the form of twelve fictional personas of high-level decision-makers from North America and Europe. These personas illustrate how personal agency, organizational logics, and institutional infrastructures may intersect in the governance of AI. The decision-makers’ responses to the questionnaires then inform a discussion of the field-level personal power of decision-makers, methods of fostering institutional stability in times of change, and methods of influencing institutional change in the field of AI. The final section of the discussion presents a table of the dynamics of the levers of power in the field of AI for change-makers, and five testable hypotheses for institutional and social movement researchers. In summary, this study provides insight into the means for policymakers within institutions, and their counterparts in civil society, to personally engage with AI governance.

    Read on arXiv.

  • AWS blog: “AI judging AI”

    “Picture this: Your team just received 10,000 customer feedback responses. The traditional approach? Weeks of manual analysis. But what if AI could not only analyze this feedback but also validate its own work? Welcome to the world of large language model (LLM) jury systems deployed using Amazon Bedrock. As more organizations embrace generative AI, particularly LLMs for various applications, a new challenge has emerged: ensuring that the output from these AI models aligns with human perspectives and is accurate and relevant to the business context.”

    Read the work on their blog: https://aws.amazon.com/blogs/machine-learning/ai-judging-ai-scaling-unstructured-text-analysis-with-amazon-nova/

  • Whole-Person Education for AI Engineers: Presented to CEEA (Peer Reviewed)

    This autoethnographic study explores the need for interdisciplinary education spanning both technical and philosophical skills. As such, the study leverages whole-person education as a theoretical approach needed in AI engineering education to address the limitations of current paradigms that prioritize technical expertise over ethical and societal considerations. Drawing on a collaborative autoethnography of fourteen diverse stakeholders, the study identifies key motivations driving the call for change, including the need for global perspectives, bridging the gap between academia and industry, integrating ethics and societal impact, and fostering interdisciplinary collaboration. The findings challenge the myths of technological neutrality and technosaviourism, advocating for a future where AI engineers are equipped not only with technical skills but also with the ethical awareness, social responsibility, and interdisciplinary understanding necessary to navigate the complex challenges of AI development. The study provides valuable insights and recommendations for transforming AI engineering education to ensure the responsible development of AI technologies.

    More Information

  • Work in Progress: Exclusive Rhetoric in AI Conference Mission Statements

    AI conferences are pivotal spaces for knowledge exchange, collaboration, and shaping the trajectory of research, practice, and education. This paper presents preliminary findings from an analysis of AI conference mission statements, investigating how their stated goals affect who is welcomed into AI conversations. We find that many mission statements reflect assumptions that may unintentionally narrow participation and reinforce disciplinary and institutional silos. This limits engagement from a broad range of contributors—including educators, students, working professionals, and even younger users—who are essential to a thriving AI ecosystem. We advocate for clearer framing that supports democratizing and demystifying AI. By broadening participation and intentionally fostering cross-sector and interdisciplinary connections, AI conferences can help unlock more innovation.

    More Information