Last year a lot of tech people got very worked up about regulation. Specifically, regulatory capture. Around the world, governments were drafting new rules on AI safety that, depending on who you asked, were basically hand-written by Sam Altman with the explicit purpose of undermining startups and OpenAI’s competitors.
Then, a few weeks ago, Rishi Sunak came out with a strong defence of open source AI, which was lauded by Meta, a16z and others. So, was this regulatory capture too?
No. The much more boring answer is this: it’s the outcome of a policy process in which governments, of their own volition and with their own criteria and agency, ‘manage stakeholders’ of all types (big AI companies, startups, civil society, academics, the public, etc.) and then come to a view. But that’s a lot less catchy.
So where has all this distrust come from? We thought it was worth unpacking:
Why most claims of AI ‘regulatory capture’ are misdirected
Within tech, those who warn about ‘regulatory capture’ frame almost any attempt to think about AI safety or risk as malicious, anti-competitive behaviour by big companies to undermine startups. Yes, we shouldn’t be stifling competition, progress or acceleration. But it’s unfair to caricature anyone with earnest concerns as a pure rent-seeker or a decel/doomer.
Money and influence do matter, particularly when there’s an imbalance between governments and big companies with an information and resource advantage. But Mark Warner’s definition of ‘selling’ helps unpack what’s really happening here: “finding something truly valuable to the other person and helping them understand that on their own terms”. Policy engagement is partly about sales, and this definition correctly emphasises decision-makers’ own agency and success criteria — which include safety. Just because they might be persuaded by something doesn’t mean it isn’t the right answer. Private incentive does not preclude public interest — it’s for governments to discriminate between the two.
Instead, many of those complaining about regulatory capture overlook the agency of politicians and officials to make their own decisions, and oversimplify or ignore the trade-offs in order to promote their own view. We might disagree with officials’ conclusions — and it’s our job in the tech community to help upgrade government and regulators’ understanding of AI so they can make better decisions — but let’s avoid discrediting the decision-making altogether. It’s a point that’s been made before: shouting about ‘regulatory capture’ is a bad long-term strategy for tech people.
No doubt, policymakers and regulators need to do a much better job engaging a broader range of perspectives across tech. And we have definitely seen cases of clear regulatory capture across nearly every regulated industry we work in. But in practice, we should be very wary about accusations that risk making regulators even less likely to engage more widely.
It’s unfortunate, but it’s the reality: for an industry that often (correctly) observes how ‘everything is sales’, it’s surprising how quickly people tend to forget that lesson as soon as they start talking about policy. Officials are people too. Complaining about ‘regulatory capture’, even when it’s true, is rarely a great way to convince them to speak to you, especially when the whiner’s self-interest is so transparent: it’s not a surprise that cries of ‘regulatory capture’ are often made by people whose private motivations are for things to be regulated differently. Regulators/officials know that as much as anyone else.
As Steve Blank writes:
Bill [Gurley]’s closing line, “The reason why Silicon Valley is so successful is that it’s so fxxxng far away from Washington”, received great applause. Unfortunately, for startups entering a regulated market, following this advice might not be the optimum path.
Finally, if you disagree with a policy, why give all the credit to a nefarious actor? Why let officials off the hook? I’m sure the policy teams at OpenAI, Google and Meta would love to claim every bit of AI regulation as their own, but that’s not how it works. The EU’s Digital Services Act shows how Big Tech can spend vast sums on lobbying and still lose. Governments have agency! If you disagree with a policy, direct your anger at those who actually made the decisions, rather than claiming some malicious competitor is pulling the strings.
Why, and how, founders & policymakers must engage instead
1. Founders & startups need to get better at engaging
Rather than complaining about other (larger) companies acting in their own self-interest, it’s way past time for startups to fight back. That’s why we exist, and we’re glad to see Y Combinator and a16z’s recent ‘little tech’ efforts following our lead ;) To win, the best founders optimise their regulatory strategy like any other function, be it product, go-to-market, hiring or fundraising. The regulatory treatment of your business is something to be optimised along with everything else. That goes for businesses in already-regulated markets (fintech, health, etc.), but also for those that will be regulated in the future (lab-grown food, genAI, etc.). Broadly, startups building in these markets should be engaging on:
Offence: when there’s an opportunity to support a change to policy/regulation that would benefit the business; and/or
Defence: when there is a potential change that others are pushing for, either inside government or from other stakeholders, that would be detrimental to the business.
Sometimes, after a quick assessment, it’s clear that there’s little to be gained and so resource-constrained startups decide “engaging policy-makers” doesn’t meet their ROI hurdle. When we talk to our portfolio, we always say:
If you are really trying to build a huge business for the long term, you may still need to lay the groundwork for scale; but
You need to do it in the most efficient way possible, because it can be a vast time suck. Doing it properly should always meet the ROI hurdle, because you should be able to do it very, very efficiently.
As part of our work, we help to expand companies’ policy capability, to drive value from these engagements and enable some of the more marginal opportunities to be screened in. Startups are almost always going to be the underdog, but we’re glad to see more efforts to get organised.
For founders, whatever you choose, this should be a strategic decision. Get too sucked in and you could be distracted from other work. But fail to engage and you could be leaving opportunity on the table.
2. Policymakers need help to build resilience to capture
On the government side, policymakers have three options available to them:
No engagement with the industry: Lower chance of undue outside influence, but much harder to understand and grip novel tech where industry has information advantage.
Unstructured, informal engagement: This is where the risk of regulatory capture is highest, particularly through cognitive capture. In this scenario, the regulator begins to think like the regulated industry, often on account of giving that industry privileged (or even exclusive) access to policy-makers.
Structured, broad engagement: This is the ideal scenario, where policymakers can draw on the insight and perspectives of every relevant group rather than relying solely on existing networks. But this requires some investment: incumbents are easier to find, more forthcoming, and better resourced than early-stage companies. They shout louder, and officials know they’ll still be around in 18 months. One way to overcome this is for government to build its own ‘dealflow engine’, to improve its visibility of early-stage companies and help officials understand who’s worth engaging with.
We recently asked Sarah Munby (Permanent Secretary at DSIT, the UK’s science & technology department) about how she thinks about building resilience against regulatory capture, while still encouraging structured, broad engagement across industry. Her response was excellent: Governments need a variety of input and officials should always be open to new evidence and arguments, but internal capability and vigilance are essential to discriminate between different views.
Arriving at a balanced view requires policymakers to be well informed, to parse competing perspectives and to form their own. Tech shifts incredibly quickly, at a pace that policy and regulation cannot possibly match – so regulating in these markets requires a degree of seeing into the future that frankly no one can manage (least of all policymakers). Policymakers inevitably favour existing businesses over the non-existent future ones they can’t quite foresee.
This is precisely what we’ve been arguing with our fixtheregulators.com campaign. UK regulatory capacity is now creaking, not least because frontier technologies like AI have massively increased regulators’ learning curve — at a time when resources have not kept up. To fix regulatory capture, we need to fix regulatory capacity. That means tackling:
Resource: investing in regulatory capacity to understand and approve new technology
Rules: creating flex where frontier technologies are constrained by legacy legislation
Risk appetite: giving regulators the incentive, duty and political cover to take measured risks
Overcoming this requires focus and investment. Governments and regulators might still make decisions different from what you and we would like — as is their prerogative. But we should absolutely correct the structural barriers that inhibit better decisions and leave openings for regulatory capture. That is what VCs & startups should be calling for.