What I learned about AI at the U.N.
On our moral imperative to learn, the power of narrative agency, and a proposed ethic of competitive collaboration
This summer, I attended an AI for Good conference at the U.N. headquarters in Geneva, Switzerland. I have been asked many times for my reflections on emerging technology from the perspective of the global stage, and especially in the wake of today’s AI meeting at the White House, I thought I’d finally give my answer.
Preface
I’m going to share some of my notes from the AI for Good Global Summit at the ITU, but first, I’d like to preface them with some personal thoughts on our unique agency in shaping AI’s global future.
We’re all well aware that AI is developing at a hyper-exponential rate. My belief is that it is our moral imperative to focus on its real and enormous potential to contribute to good (e.g., its contributions to climate, education, and medicine). We are the AI generation, born into the prime window that will determine its future.
AI for good is a story we are writing now, and maybe not for much longer. This is an urgent opportunity, our urgent opportunity, and we are the generation that either will or will not seize it.
Our unique narrative agency and moral imperative to learn
The story we tell about AI and the reality of AI are not separate - they are mutually informing and co-constructed. I’ve compared generative AI to a camera before: both are tech apparatuses that can only see the world through our eyes. AI is inherently derivative, shaped by the narratives, assumptions, and values we project onto it. What we say AI is becomes what AI will be. Because it cannot be overstated: how we think of, speak about, and photograph AI ultimately shapes its outcome.1
In this decisive moment, we hold a rare kind of agency: to guide the narrative in the right direction, and step consciously into the world it will help create.
This is an urgent responsibility we must approach with clarity, confidence, and care, knowing its potential for impact. The conference acknowledged the “tech refugees” who, whether due to a lack of resources or mere (and often fear-driven) resistance, will be left out of the AI revolution. AI is here and it is now, and we cannot look away. If we have the privilege to engage with this technology, it is our civic obligation to do so - responsibly.
This starts with becoming educated. Don’t be afraid to approach the conversation, no matter how little you think you know. As Signal president Meredith Whittaker urged us in Geneva: “Ask questions. Unravel the sweater of hype.”2


An ethic of competitive collaboration in the “race for AI”
The U.N.’s AI for Good conference hosted an international sandbox of leaders whose top priority was tech inclusion and collaboration. The second I got back to the States, I was immersed in familiar U.S.-centric rhetoric framing AI development as an arms race. While I understand why competition is necessary in establishing the U.S. as the global AI leader, I’ve come to believe in an ethic of competitive collaboration.3
This is why the U.S. must lead well. At present, both on the international stage and domestically, American innovation is incentivized by capital and nationalism over equity and inclusivity.
In the U.S., tech leadership is being shaped by profit, ego, and spectacle. We’re watching Zuckerberg design models to game third-party benchmarks instead of serving real human needs, while attempting to win over top recruits in a talent arms race with fancy Tahoe dinners and enormous financial bribes. Meanwhile, Musk’s Grok chatbot is being programmed to be “anti-woke” unprompted, while also parroting white genocide conspiracy theories about South Africa apropos of nothing. If this is the character of our current leadership, we can’t assume “winning the race” means anything good unless values - not just velocity - guide the path forward.4
At the U.N. conference, there was a noticeable absence of Silicon Valley insiders and general American representation. The only moment of tension on the main stage came when American speaker Jennifer Bachus, Acting Head of the Bureau of Cyberspace and Digital Policy, defended Trump’s regulatory rollbacks and made the room go silent with her response to a European motion for U.S. accountability through regulation: “We won’t accept that.”
Moments like these indicate the growing gap between those building AI and those considering its global consequences. Bridging this tech-ethics gap (e.g., by continually encouraging technologists to engage internationally on questions like these) is vital in this historic moment.
The need for ethical caution for a global AI future
Because AI leadership will shape global norms, values, and governance structures, the stakes could not be higher. So, if and when the U.S. “wins the race,” it must understand its global impact and prioritize ethics like those addressed at the U.N. The U.S. mustn’t merely defend its victory. It must learn to listen.
With that in mind, here are a few key takeaways and reflections from the AI for Good conference. The conference frequently referenced the recently adopted UN Global Digital Compact and the new AI Standards Exchange Database as foundational tools for global AI governance and implementation.


Notes from the AI For Good conference at the U.N.
From principles to practice: balancing regulation with innovation
There is a need to move from vague ethical principles to specific, standardized capability evaluations for AI. The key is understanding that AI rules from a global perspective cannot be one-size-fits-all. A given country should not be beholden to another country’s AI practices, built from an entirely different infrastructure that they had no role in defining.
As Yoshua Bengio said, “the international governance of AI will need to be multistakeholder, multilayered, and multifaceted.” The conference stressed that international standards - not centralization or uniform requirements - are critical to balancing innovation with safety.
And, as Robert F. Trager reminded us, “just as the three secrets to French cooking are butter, butter, and butter, the secrets to AI governance are benchmarking, benchmarking, and benchmarking.”
Will.i.am - yes, will.i.am made an appearance - balanced this sentiment with a musical metaphor. “You don’t want to have regulation that stifles innovation,” he said. “It’s like censorship and lyric writing.”
Cameron F. Kerry emphasized the importance of standards in building trust and ensuring innovation, explaining that, “if we want people to engage with it, we need them to trust it.” AI is moving quickly, and so should we.
Current AI safety practices lack specificity and standardization. Developers must ask concrete questions, like whether a model could autonomously launch a cyber or nuclear attack. Governance must evolve alongside rapid AI progress, with continual benchmarking and policy adaptation.
Safeguarding humanity: managing agentic AI and ensuring transparency
The conference underscored that as AI-generated content becomes more realistic, we need blatantly obvious indicators (beyond just metadata) to show what’s real and what’s generated to combat misinformation and maintain trust.
Transparency in distinguishing the real from the fake is essential, and there are specific ways to ensure it in visual work. Brian Tse called for global redlines and standardized practices, such as - photographers, this one is for you - watermarking and metadata. He noted, “We have more regulations on dumplings than we do on AI today,” highlighting the regulatory gap.
The World Economic Forum has identified mis- and disinformation as the highest global risk today. Alessandra Sala, Director of AI at Shutterstock, spoke about the recent use of deepfakes in a village in southern Spain and in Swiss politics, and about the importance of authenticity initiatives such as the AI Multimedia Authenticity Initiative, the Content Authenticity Initiative, and the World Intellectual Property Organization.
Agentic AI presents significant risks. IBM defines agentic AI as an artificial intelligence system that can accomplish a specific goal with limited supervision.
Yoshua Bengio, who contributed to the UK International AI Safety Report, described frontier models as showing “signs of deception, manipulation, and self-preservation” and warned that “If we make AI like people, we make sociopaths.” He proposed training non-agentic AI (without goals or human-like traits) to act as guardrails for agentic AI. I thought this was especially astute and necessary.
I also really appreciated how Moriba Jah, of Gaiaverse, emphasized mutual respect as our relationship with AI evolves. He urged us to “treat the AI with kindness, with respect, as a collaborator.”5
Global stewardship: inclusion, connectivity, and diverse leaders
There are myriad divides that the U.S. needs to acknowledge if it is to become the leading nation in AI. Bridging these AI divides - across gender, the tech-ethics gap, and global inclusivity - is essential to ensure no one is left behind.
Kate Kallot of Amini emphasized building Africa’s data infrastructure so the Global South leads in AI, asking, “Whose freedom is today’s AI actually serving?” She spoke about how the Global South should not only be included in development but also be given space to be an active contributor.
H.E. Eng. Abdullah Amer Alswaha stressed the importance of multilingual, culturally diverse AI systems for governance that serves global humanity.
Cherie Blair highlighted the need for more women in AI leadership, stating, “Business is gender-inclusive, but in essence, it was built by men for men.” Increasing diversity is vital to building trust, integrity, and inclusive design that serves everyone.
Estonia’s President Alar Karis reminded us, “No country can succeed alone. We must ensure no one is left behind.”
Ending with a note on ethical stewardship in AI development
As Doreen Bogdan-Martin concluded the conference, “let this be the moment we turned things around, not lost control.”



So, my thoughts on AI?
Narrative agency matters. Our civic obligation is to engage. Education over fear. “Winning the AI race” without ethics is meaningless. The U.S. should aim for an ethic of competitive collaboration. Agentic AI poses real risks and needs guardrails. Standardizing transparency is critical (visual journalists, this section is for you). One-size-fits-all global governance will not work. The Global South must be a contributor, not a recipient. Diverse leadership builds trust. AI governance must reflect cultural and linguistic variety. Stewardship, not spectacle, should define AI governance.
And the U.S. in particular must shift from hype to humility, profit to principle, nationalistic rhetoric to global collaboration, and vague ethics to rigorous, inclusive governance.
Perhaps our children’s children will look back on the ethicists of today as the ones who steered the AI revolution in the right direction. Let’s be them. Let’s step up to the plate, knowing that AI’s future won’t be saved by speed or spectacle - it’ll be shaped by the stories we tell, the ethics we uphold, and whether we have the bravery to lead with humility instead of hype.
1. For my photographer readers: notice that rephrasing to “how we imagine or picture AI” further reflects this correspondent relationship.
2. Some basic vocab:
Narrow AI: AI designed for one specific task, like sorting emails or detecting objects.
Generative AI: AI that creates original content like text, images, or music.
Agentic AI: AI that can act autonomously to complete multi-step goals.
Machine Learning (ML): A technique where AI learns patterns from data to make decisions.
Large Language Models (LLMs): AI trained on massive text data to understand and generate language.
Frontier Models: The most advanced, powerful AI systems pushing the limits of current capabilities.
3. The global leader of AI in its defining stages also leads the establishment of its norms, values, development, deployment, and governance. A U.S. victory over China, its top competitor, means a more democratic establishment of these global norms.
4. A note on irony: We’re already seeing the first waves of job displacement. What I find striking about the looming replacement of corporate roles is the irony that educated, white-collar workers are among the first to face displacement - much as affluent, coastal Californians were among the first to feel the direct impacts of climate change. Existential disaster doesn’t care about our privilege. So, if global inclusion hasn’t moved you to care about AI ethics, perhaps self-interest will: no one is exempt.
5. I personally say “thank you” to my AI, not to anthropomorphize it but as an expression of gratitude that maintains a positive and respectful spirit toward technology. This is intended to frame technology as a helper and contribute to a broader ethos of human-centered AI.