Why "philosopher builders" are crucial in the AI era, with Harry Law from the Cosmos Institute
This week, we’re speaking to Harry Law, previously a senior researcher at Google DeepMind and currently a researcher at the Cosmos Institute – an institute that supports technologists using AI to “advance human flourishing” and has funded nearly 100 projects since launching just over a year ago. We spoke about why “philosopher builders” are important in the age of AI, and how that cuts across standard ideas of what makes a great founder.
FORM: what is the Cosmos Institute and why does it exist?
The Cosmos Institute supports technologists who use AI to advance human flourishing. Since launching just over a year ago, we’ve funded nearly 100 projects, incubated new organisations, and worked with the University of Oxford, the Aspen Institute, and leading AI labs.
Our goal is to cultivate ‘philosopher-builders’: people who know not only what to create, but why to create it in the first place. That might be someone building recommender systems aligned with the person you want to be (instead of the person who just wants easy gratification), producing critical-thinking tools to avoid outsourcing deliberation, or introducing autonomy-preserving solutions for navigating the information environment.
Cosmos’ work is organised around three core themes: truth-seeking (the ability to inquire openly and correct our errors), human autonomy (the cultivated capacity for self-direction), and decentralisation (systems that resist coercion, capture, and control). These goods thrive in a space where disagreement is welcomed, authority is questioned, and better answers emerge through debate. AI should be built to strengthen this ecosystem, not collapse it into conformity through consensus-by-algorithm. Our model involves providing builders with the training they need to create initiatives like these, incubating those that show real promise, and helping to source follow-on funding from established backers to help them scale.
On a personal note, most days I use AI to help me think through problems and test the quality of my thinking. But each time I boot up ChatGPT or Claude, I wonder how much of the final output is the product of my thinking and how much can be attributed to the machine. Exploring this tension – between AI’s capacity to augment us and its capacity to replace us – is one of the reasons I joined Cosmos.
FORM: In VC, there is an obsession with trying to pin down exactly which profiles are likely to build something truly huge. How do “philosopher builders” fit into the broader debate on founder profiles, and why are they important?
In VC, talk of “founder profiles” often boils down to personality tests: the obsessive, the resilient, the second-time operator. Philosopher-builders cut across that. The point isn’t that they have a type of temperament but that they carry a strong set of first principles into the act of building. Think about how LinkedIn’s networked structure changed how careers work, how Stripe’s global connectivity removed geographic constraints on commerce, or how Ethereum built the infrastructure for self-executing organisations.
These companies represent a fundamental shift from the previous generation of garage and dorm-room hackers to philosopher-builders who use vision to attract talent, customers, and capital. They’re people who can articulate why a new institution, protocol, or product should exist in the first place, and who can use that vision to win allies to their cause.
The philosopher-builder ideal is, in a way, the return of an older founder archetype. Early modern thinkers were clearly not “operators” in the startup sense, but they were theorists whose ideas were operationalised by others into constitutions, legal codes, and eventually states. Philosopher-builders are the antidote to the obsession with founder “traits” because they’re defined by an orientation toward human flourishing rather than a set of psychological or professional markers.
FORM: Cosmos talk about building the “philosophy-to-code pipeline” – how does this fit into the focus on AI that supports human flourishing?
At the dawn of liberal democracy, there was a ‘philosophy-to-law’ pipeline that turned ideas into legal structures. Thinkers like Locke, Montesquieu, and Smith wrote about natural rights, the separation of powers, and free markets. Within a generation or two, those ideas became the bedrock of constitutions and legal codes.
The philosophy-to-code model takes inspiration from that episode. Rather than seeing philosophical ideas as interesting but ultimately abstract, we think they can be used to directly shape the development and deployment of AI systems. Today, we already see early signs of this approach taking shape. Anthropic’s Constitutional AI bakes an explicit “bill of rights” into its alignment process; X’s Community Notes layers crowd-vetted context onto posts using an open-source algorithm with no manual override; and Meta’s open-weight Llama 3 and Apple’s on-device models widen user control.
As these examples show, AI that advances human flourishing can take many different forms. Your idea of the good life might not be the same as mine, but what our ideas have in common is that each of us arrives at them ourselves. The philosophy-to-code ethos is based on this observation – backing projects that find a way to preserve human flourishing in the age of thinking machines.