The Global Trust Challenge is a policy accelerator. Wherever you are in the world, if you’re building technology policy, please take a look. It supports policy development in phases, with technical assistance, networking, and, in later phases, financial support for pilot projects for participants whose projects advance through those phases. The idea is that we can build solidarity through initiatives like this: people who can support, supporting, and people who can contribute, contributing.
What do we know about trust? It’s in actions over time. It’s not something that you can instill in people; it’s something that is earned by your actions. It’s in getting to know each other. It’s also in making room for each other’s human failings and poor decisions while we’re learning better. It’s having values, like caring for others and compassion. It’s giving each other a chance, and it’s also being resolute in protecting the vulnerable. I am an imperfect human, and it isn’t always easy to know who else is imperfect, but we can row together in the direction of a democratic, fair, and rights-based society.
There are good people throughout society. Some people are working on tech diplomacy. How do we know who can be trusted not to make things worse? How do we learn to work together with people whose work, life situations, and personal trajectories are invisible to us? How can we make policy if people won’t trust policymakers enough to follow it? How can we make products that people will want to pay for, at a fair price, without discounting the future?
If we are to have AI in society that is safe and good for society, rather than continued and exacerbated algorithmic harms from our systems, then we must steer and design these systems with everyone in mind, including the non-human life on our planet. The alternative system for political and financial control of AI, namely technocracy, has always been, and continues to be, a narrow-minded and functionally inadequate view of the needs at hand.
I personally support the Global Trust Challenge. It’s a collective conversation over time. I don’t know everyone involved. How could I? These are big, big organizations. I do know that the organizations and the people I’ve met have exhibited honesty about our major problems, courage in taking action in their professional lives, and hope when they make plans.
I want to thank Strategic GTC Partner Gilles Fayad (AI Commons, IEEE), as well as Sebastian Hallensleben (OECD, EU), Anna Jahn (McGill), Benjamin Prud’homme (Mila), and the other organizers of the Digital Trust Convention series. I am fairly sure none of them are perfect humans either, like me, but I am also fairly sure that they are on the side of constructive democratic decision-making on how AI gets rolled out in society. You can tell by their choices over time.
I am an accountant and a political scientist, a pragmatist, not an ideologue. This can work, if we want it to. It has in the past.
For more information:
Launching the Aula Outreach Campaign for the Global Trust Challenge.
Global Trust Challenge: https://www.globalchallenge.ai/
Digital Trust Convention 2025: https://mila.quebec/en/event/digital-trust-convention-2025
