The Time is Ripe
After nineteen years at Google, I’m beginning something new: The Promethean Collective, an effort to help Civil Society find its strength in the AI era. If we want these technologies to serve the common good, both the public and private sectors must come to see Civil Society not as an afterthought, but as an indispensable partner—with the legitimacy to convene hard conversations in its own right. During the Industrial Revolution, Civil Society was the driving force that humanized industrialization, steadily dismantling a hellscape of workweeks of six 16-hour days and annual workplace mortality rates that could exceed 10%, and building new institutions and legal reforms that reimagined what shared progress could mean. Its successors can do the same today if we equip them with the understanding and confidence to humanize the age of artificial intelligence.
We won’t chase consensus or push a single framework. Our work supports plurality—the idea that every organization should understand AI well enough to apply its own values, serve its own community, and act from its own sense of purpose. Below is a personal reflection on how this conviction took root in me, and why the task feels urgent. If it resonates, subscribe to our Substack or head to theproco.org to explore our first piece of research, or register your interest in becoming a member of the Collective when we launch fully.
The time is ripe
This spring, after almost nineteen years, I left Google. Today, I announce The Promethean Collective—a nonprofit, non-partisan organization meant to prepare Civil Society (the sphere of society where individuals, non-profits, and other groups organize to advance their shared interests, outside of the direct control of the state and the market) to thrive in a future with AI, navigating and shaping its impact to ensure it serves the common good, rather than further concentrating power among the already privileged.
Neither the focus of the Collective nor my realization that the work had to be done outside Google emerged overnight. They crystallized from beliefs I’ve carried for years, losses that reshaped my understanding of time’s urgency, and a recognition that now is the moment to ensure Civil Society becomes the indispensable partner our governance ecosystem needs.
Let me start with what I believe. E.O. Wilson observed in 2009 that “the real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.” I think he’s right, and I think we’re living through that crisis now—one that will only deepen unless we start acting decisively and soon. Wilson’s observation has always been paired in my mind with an assertion by Jared Diamond that I first encountered in high school and that has always, for me, held the deep resonance of provocative truth: that the agricultural revolution was the catastrophe from which humanity has never recovered. While Diamond’s argument doesn’t account for the complex trade-offs involved—hunter-gatherer societies may have had shorter lifespans, even if they enjoyed better health spans—his core insight remains compelling. Archaeological evidence shows that average human height and weight dropped significantly after the advent of agriculture and didn’t recover to pre-agricultural levels for thousands of years—a testament to the disruption of human well-being. Despite being viewed as progress, agriculture led to a profound deterioration in human social equality and environmental stability, trapping humanity in cycles of hierarchical oppression and scarcity-driven conflict.
The Industrial Revolution offers a more hopeful timeline. While it initially reproduced many of agriculture’s harms—concentrating power, degrading working conditions, and deepening inequality—Civil Society movements managed to “reclaim” much of this human debt within generations rather than millennia. The transformation from Manchester’s “dark Satanic mills” to regulated workplaces with safety standards, reasonable hours, and worker protections happened in decades, not centuries. This suggests that conscious, organized effort can dramatically accelerate humanity’s recovery from technological disruption.
AI, I believe, has the potential to give us our first real opportunity to recover the hidden costs of the agricultural revolution (i.e., to correct the systemic misalignment of incentives, driven by the scarcity mindset, that has plagued nearly every human society since). But that recovery won’t happen automatically. It requires intentional choices based on understanding how these technologies impact people and communities—something our ancestors who ushered in the agricultural revolution could never have imagined, and which we can only secure humanely through deep engagement with communities.
That leads me directly to my third foundational belief, which concerns Civil Society’s unique position in our social fabric. Unlike for-profit companies driven by the imperative to maximize shareholder value, and unlike government with its monopoly on legitimate force, Civil Society organizations (CSOs) exist only because they have earned the trust of the communities they serve. That trust-based legitimacy gives Civil Society institutions, and their leaders, a distinctive capacity to bridge divides and advocate for the common good. This includes Civil Society organizations across the entire political spectrum—legitimacy in democratic governance requires that all perspectives have meaningful representation, not just those we agree with.
Yet we cannot assume that existing Civil Society capacity is sufficient for this moment. Like the giants whose shoulders we stand on today—the labor organizers, suffragettes, and social reformers who renegotiated the social contract during the Industrial Revolution—today’s Civil Society leaders must grow into new competencies and forms of power necessary to shape the AI age. Our analysis of these historical movements, detailed in our forthcoming “Humanizing the Machine: A Civil Society Playbook for the AI Age,” reveals specific patterns and strategies that enabled Civil Society to transform industrialization from a force of exploitation into one that ultimately expanded human capabilities and freedoms. The Promethean Collective is committed to ensuring Civil Society develops the capability and quality needed for a world where these organizations will be uniquely necessary.
These beliefs were forged through experience.
A year as a volunteer social worker in Portland showed me that homelessness is a political choice, not a personal failure; policy creates the conditions under which people live their lives and which, fairly quickly, people treat as simply ‘the way things are.’
In my doctoral thesis I explored how metropolitan and national authorities adapted London’s governance in the face of inexorable population growth as plague’s impact subsided and vast swaths of land changed hands in the aftermath of the Reformation. My research revealed that halting, imperfect processes are shaped primarily by narrow and generally misaligned institutional incentives. A missed opportunity to change course in 1607 meant that by 1800 London was the largest city in the world… governed primarily through its parish churches. It wasn’t until 1889 that national and civic authorities seized a kairotic moment and created the boroughs and their unifying London County Council.
My three phases at Google—from individual contributor on Android and privacy work, to leading public policy across Europe, the Middle East, and Africa, to directing foresight work—taught me that technology’s trajectory is never neutral, and that good strategic thinking is essential but is also one of the first things lost when organizational incentives are misaligned.
Perhaps most importantly, my foresight work revealed that your single best asset in thinking about the future is diversity of perspectives. Nothing poisons the clarity of your vision like being surrounded by people who think the same way you do. This insight connects to my deepest understanding of politics: its purpose is to help us collectively agree how to live together with people who are totally different from us. We don’t need politics in a world where everyone is the same.
But beliefs become actions only when we know, deep in our bones, that something we care about is at stake. Public events in the past few years have pointed to the severity of the legitimacy crisis that looms unless Civil Society rises to the challenge as god-like technology begins running ahead of institutional capacity. The need for deep and competent Civil Society engagement has been thrown into stark relief by recent AI executive orders and questions about whether companies have any incentives to push back, given the scale of government contracts at stake for them.
The first moment of clarity was watching Mark Zuckerberg’s April 2018 Senate testimony after the Cambridge Analytica scandal. Not a single senator asked a good question. Like so many others, my first reaction was to chuckle about how little the senators understood about technology. But it was a ‘laugh to avoid crying’ about the deeper failure: senators have staffers, staffers rely on think tanks, think tanks depend on Civil Society, academics, and watchdog groups to flag emerging issues. The poor questioning didn’t represent the failure of individuals or even of one institution. It portended the complete breakdown of the ecosystem that should inform governance decisions.
The second, eighteen months ago, was the controversy over Google’s Gemini model generating racially diverse imagery of historical figures, including Black Nazis. When the media circus spilled into a second week I despaired that, amidst all the swirl, there seemed to be no effort—or perhaps no capacity—to redirect attention toward substantive issues. This absence of Civil Society voices who could discern signal from noise and say “There are real things we should worry about with AI. This isn’t one of them” reflected the very “missing middle” problem our research into AI training has identified: the program managers and specialists who would normally staff such substantive engagement lack the AI literacy to participate meaningfully in these moments. It illustrated a hunch that has become core to the Collective’s theory of change: a poorly informed Civil Society becomes reactive and unfocused in deciding where to direct its attention and pressure.
These public failures revealed the depth of Wilson’s crisis—the dangerous gap between our god-like technology and our medieval institutions. Government regulators can be captured or politically motivated, and companies will optimize for compliance over substance. Civil Society organizations have an accountability to communities that creates different incentives entirely. Yet when Civil Society lacks the capacity to engage substantively with AI governance, we get both ineffective congressional hearings AND reactive outrage cycles—leaving the field clear for whatever regulatory approach is politically expedient. Recent revelations about the administration’s use of massive federal contracts—worth up to $200 million per company—to pressure AI labs into ideological compliance underscore how government capture operates not through direct regulation but through economic leverage that companies find difficult to resist.
I found myself increasingly restless in my role, watching as promising initiatives got absorbed into bureaucratic processes like a slime mold finding its way around obstacles. I told myself that surely, with this title and team and company, I had the best chance of making a meaningful impact. Then my friends started dying.
I’m still reeling from last July, when my dear friend Benji died from brain cancer. He was not just my friend, he was my doppelganger. For several years we went as each other for Halloween. There’s nothing wilder than looking across a room and seeing yourself dancing. There’s something deeply bittersweet about knowing I’ll never experience that uncanny, dissociative recognition again. Two days after Benji’s funeral, another friend died by suicide. I still can’t hold these two losses together in my head or heart simultaneously. They seem intent on occupying different universes within my mind. Two other friends—both also within a few years of their 40th birthdays—died as well, one in April and one in September. Then, the day before my own 44th birthday in December, one of the friends I most adore in London was put in a medically induced coma to fight a brain infection. He survived, and is flourishing in his recovery. When my role was eliminated soon thereafter, his resilience inspired me to think big.
Margaret Atwood’s short story “Happy Endings” has lived rent-free in my head for almost two-thirds of my life. She sketches six distinct stories, each ending identically: John and Mary die. Her observation lingers: “So much for endings. Beginnings are always more fun. True connoisseurs, however, are known to favor the stretch in between, since it’s the hardest to do anything with.”
Until last July, I consistently asked myself three questions – intentionally value-neutral questions – when considering opportunities that crossed my path:
Does the work matter?
Does my role matter to the work?
Do I make a unique contribution to the role?
Last summer, I couldn’t put my hand on my heart and say yes to those questions—but they had also started to feel sterile to me. Not that they were unnecessary, but that they were insufficient. I couldn’t imagine anyone’s life being appreciably better because of my work… even if I was an order of magnitude better at it. But every afternoon I had a visceral sense (which a puppy is uniquely qualified to help one discern) of how much more joy Percy’s life would include if he got a third trip to the beach that day. I knew I needed to change things.
The values that shaped me point toward this moment. My connection to Jesuits is lifelong—my favorite uncle was a Jesuit, and after college I spent a year of volunteer work with the Jesuit Volunteer Corps in Portland. JVC’s tagline is “ruined for life”—once you’ve experienced their four values of community, spirituality, simple living, and social justice deeply, you can’t go back to living without them. This built on my Jesuit education, which taught me to be open to growth, intellectually competent, and committed to doing justice—to see serving others not as charity but as duty, to address root causes rather than symptoms, and to live in solidarity with marginalized communities. As a Rhodes Scholar, I signed up to “esteem the performance of public duties as their highest aim” and to “fight the world’s fight.” These aren’t abstract ideals—they’re practical commitments that demand intellectual rigor in service of the common good. Right now, ensuring AI serves humanity rather than concentrated power is the world’s fight.
At Google, I learned that information democratizes power and accelerates progress—but that theory of change is incomplete. Information without wisdom, access without agency, and speed without direction can amplify existing inequalities rather than resolve them. We need something more intentional.
The Promethean Collective represents that intentional choice. Our mission is to prepare Civil Society to thrive in a future with AI, navigating and shaping its impact to ensure it serves the common good. While CSOs will be our primary clients, our theory of change compels us to work toward strengthening the entire AI governance ecosystem. We believe that a competent, confident, and consistent Civil Society will improve outcomes for everyone—AI labs, policymakers, academics, journalists, and public bodies will all benefit when Civil Society can engage as knowledgeable partners rather than reactive critics.
I anticipate several objections to this vision. Some will argue that Civil Society is too fragmented and under-resourced to meaningfully engage with AI governance—that we should focus on regulatory capture or corporate accountability instead. We see the fragmented nature of Civil Society as one of its greatest assets, and we see our work as ensuring that the plurality of voices, each focused on the interests of its own community, can speak competently, confidently, and consistently on behalf of that community. Others will claim that AI literacy training is futile because the technology changes too rapidly, or that Civil Society organizations should stick to their core missions rather than venture into technical domains. The most pointed criticism will be that this approach is naive about power—that well-intentioned training programs can’t compete with the lobbying budgets and revolving doors that truly shape policy.

These concerns have merit, but they miss a crucial point: the current approach isn’t working. Regulatory capture is precisely what happens when Civil Society lacks the capacity to provide informed alternatives. Corporate accountability fails when watchdog groups can’t ask sophisticated questions. And the “stick to your lane” mentality ignores how AI will reshape every sector Civil Society touches—from housing advocacy to environmental justice to democratic participation itself. The Promethean Collective isn’t proposing training as a silver bullet, but as essential infrastructure for a governance ecosystem that currently defaults to either elite technocratic management or populist backlash, with little substantive middle ground.
We’ll focus on education—not training people to use AI, but teaching them to think about it critically. Our approach will be more like a poem than a polemic: we never want to tell someone they’re wrong. Rather, we want to help them see things clearly, perhaps from an angle they hadn’t considered before, letting that slight change in perspective become a doorway to their own discoveries. We want organizations to become savvy consumers who know when AI can further their missions, astute critics who ask hard questions of technology companies, and thoughtful planners who can anticipate their communities’ needs ten or twenty years into the future. This progression mirrors my own journey at Google—from building technology as an individual contributor, to advocating for responsible approaches across regions as a policy specialist, to thinking strategically about its long-term societal impact.
Our research into the global AI training landscape for Civil Society has revealed both the scale of the challenge and pathways forward. We’ve identified what we call the “missing middle” problem: while high-cost strategic training serves senior leaders and free tool-specific courses reach frontline staff, the crucial program managers and specialists who translate strategy into action remain underserved. These are precisely the people who would normally staff Civil Society’s substantive policy engagement—their absence from AI literacy initiatives helps explain why responses default to either elite strategic hand-waving or frontline reactive criticism.
We’re developing a comprehensive curriculum strategy that addresses these gaps through modular, tiered approaches: foundational literacy that builds critical thinking skills, specialized tracks for different organizational roles and mission areas, and advanced capacity in areas like algorithmic auditing and policy advocacy. Our emphasis will be on experiential learning, localized content development, and sustainable support ecosystems—ensuring that Civil Society organizations don’t just receive training but develop lasting capabilities to navigate an AI-transformed landscape. This spring we’ll launch pilot programs to test and refine our approach with partner organizations across different sectors.
Our approach will rest on four pillars:
Education, to build deep AI literacy within Civil Society;
Amplification, to help organizations develop compelling narratives and share best practices;
Connection, to forge sustainable bridges between Civil Society, technologists, and policymakers; and
Listening, to systematically gather insights from Civil Society’s experiences with AI.
As an early example of our commitment to creating practices that challenge assumed hierarchies of knowledge, expertise, and power, we’re exploring mutual mentorship opportunities where Civil Society leaders serve as mentors to help us understand their worlds, while we provide guidance on AI developments. This reciprocal learning approach ensures our work remains grounded in community realities rather than abstract theory.
We’ll be nonpartisan and policy-agnostic—we’ll measure success by Civil Society’s agency, not predetermined outcomes.
An uninformed, reactive Civil Society leaves technology companies with two rational but problematic choices: become timid and risk-averse to avoid controversy, or treat Civil Society concerns as news cycles to survive rather than valuable signals to engage with. Neither serves innovation or democracy well.
The name isn’t accidental. Prometheus stole fire as a deliberate act of rebellion against monopolies of power. We act in sympathy with Prometheus, but we don’t bring fire ourselves—we help others learn to tend it.
We’re incorporating as a membership-based CIO in England because, like the hyperparameters that determine an AI model’s fundamental capabilities and limits, early structural decisions quickly create path dependencies that are almost impossible to shake off later. By the time Google’s founders opened their 2004 IPO letter with “Google is not a conventional company. We do not intend to become one,” they had already structured it in a way that made the outcome perhaps not inevitable, but certainly the path of least resistance. Structural decisions made in the earliest days generally prove more consequential than any mission statement.
For the past few months, time has felt nothing like the slender but strong rope that usually pulls me forward into an uncertain future. Instead, it’s been like a densely woven brocade, in which countless threads in a dozen layers build, together, an intricate pattern. The interaction of gravity, light, and air movement makes parts that seemed dark or indistinct suddenly become sources of warmth or illumination. Threads that seemed to have snuck into the cloth without intention suddenly resonate with meaning, revealing a coherence that existed independent of my recognition.
This shift in perspective comes at a moment when time feels ripe for transformation. The convergence of AI’s accelerating capabilities, democracy’s fragility, and Civil Society’s unique potential creates a window for intervention. Ancient Greeks recognized two forms of time: Chronos, linear and measurable, and Kairos, moments ripe with meaningful potential. This is a kairotic moment.
Now comes the steady work of Chronos—the predictable rhythms that will carry this vision into reality. Like the reliable turning of seasons, we’re establishing the foundational practices that will sustain us:
Regular Updates: Our newsletter (subscribe here) will start as weekly essays, which we’ll supplement in due course with monthly operational updates about the work and what we’re learning from it. Not the polished announcements of a finished organization, but the honest documentation of something taking shape.
Weekly Study Hall: Our weekly Study Halls are structured, collaborative sessions designed to turn thoughtful reflection into meaningful action. Their purpose is to refine, advance, and implement ideas that prepare Civil Society for thriving in a future shaped by AI, ensuring work is informed by a plurality of perspectives and collective wisdom. These sessions adhere to strict confidentiality, operating under the Chatham House Rule. You can learn more and register your interest here.
Office Hours: Each day, travel permitting, I’ll have time set aside for the kind of unhurried conversation that digital life often crowds out—space for questions, reflection, and the mutual support that real collaboration requires. You can book a slot here.
Building this vision requires more than good intentions—it demands a founding team with diverse expertise and deep commitment. We’re seeking leaders who understand both the urgency of this moment and the patient work required to build lasting institutional capacity. Whether your background is in Civil Society organizing, AI research, educational design, technology governance, or mission-driven leadership, there’s a role in shaping what comes next.
The time is ripe. The collective is gathering. The work begins.




