The AI Safety Summit is the end of the beginning of regulating AI: let’s not repeat this broken process
When we started Form in 2018, we repeated two things ad nauseam to LPs:
Regulated markets will inevitably grow as a % of GDP
Regulation of tech will be one of the biggest policy issues of our lifetime
The development of AI safety policy suggests we’ll be proven right faster than we expected. One thing we didn’t predict was that the AI revolution would be so rapid that the policy-making process itself would have to change to be fit for purpose.
AI policy is not normal. Usually, regulatory frameworks are designed for markets where tech is changing at the margins, and most of the economic, technological and social aspects of the market are slow moving. But AI is diffusing through markets at a pace that existing frameworks and authorities will almost certainly be unable to cope with.
As a result, early attempts at UK regulation have been reactive, fragmented, and quickly out of date. The UK started off with a decidedly “pro-innovation” approach to AI regulation. Then tabloid headlines about killer robots ended up on the Prime Minister’s desk. Now the UK is hosting the AI Safety Summit, and the future of the wider AI policy landscape is unclear. This ad hoc process is not healthy for stable policymaking.
We need to change the process. Here’s a place to start:
Properly fund the regulators to ensure the UK captures the value of AI
The policy-making process will only deliver on safety and growth if we invest in policy-making and regulatory capacity. Too often, investors point to disruptive policy interventions and urge governments to simply “get out of the way”. But this mistakes outcome for process: weak, underfunded regulators with little incentive to enable innovation are no friend to industry. AI companies in health, energy, finance and education will find it harder to scale if we don’t fund the regulators in the sectors where public trust and impact hang most in the balance.
This almost certainly means more cash for regulators. But it also means upskilling and incentivising regulators so they can properly balance innovation, progress and safety. The UK has done well to bring people who have worked (or still work) close to the frontier of technology into government and regulators (e.g. Ian Hogarth and many of his hires into the Frontier AI Taskforce). This approach needs to become the rule, not the exception. Formalising this into a career path, with competitive pay, would be a great way to harness those who want to contribute to the ecosystem from the other side of the table.
Recognise that different players need to collaborate for AI Safety
At the same time, we need to recognise the inevitable limitations of the traditional regulator model. The half-life of tech knowledge is rapidly shortening, so that even the most well-informed technologists switching to the regulator side quickly become outdated. We have to acknowledge that even well-funded regulators can’t be the only ones to bear the whole burden of responsible AI, and broaden the net of those responsible for keeping people safe.
We need to think of AI safety as a value chain, with different responsibilities sitting at different stages. Yes, the regulator and the founder bear some of these, but so does the investor, who often has a front-row seat to the latest developments. General Catalyst’s work on an AI Investor charter is a step in the right direction here, setting out the duties an investor may (or may not) have. There may be duties for developers (e.g. model evaluations) and duties in the deployment of software once it reaches users (e.g. moderating content, KYC, etc.).
Adopt an “always on” mindset for industry engagement
We need to encourage, not criticise, engagement from industry - very often the people closest to tech developments. Claims that OpenAI is simply driven by regulatory capture mistake its incentives: against a backdrop of growing concern about existential AI risks, and calls to pause development altogether, its primary motivation is to ensure there is a sustainable, thriving market for AI in the future. It is still essential that policymakers hold firm on the distinction between frontier, general-purpose AI systems and narrow AI applications: it might be reasonable for the UK to evaluate models from OpenAI, Anthropic and DeepMind, but this shouldn’t set a precedent for every AI company. So long as they hold that line, decision-makers should absolutely engage with the big AI companies to better understand what it is they’re regulating.
Crucially, there needs to be a structural shift towards greater engagement on an ongoing basis. Policy-making around AI will need to be agile, and so will investor and industry engagement. At the moment, much of this engagement is ad hoc and can easily become self-serving, weakening trust in industry engagement precisely when it’s needed most. Fortunately, organisations like the Startup Coalition are already playing their part here, brokering a stronger relationship between policymakers, investors and startups.
Ensure the new approach to AI policy-making prioritises public trust
AI is going to completely transform most aspects of everyone’s lives - and even if we firmly believe it will be positive in the long term, there will almost certainly be some short-term adverse impacts. It is our responsibility to ensure not only that the content of policy balances growth and adverse impacts, but also that the policy-making process itself has the public’s trust - so that any negative impacts cannot be blamed on the policy-making process. As VCs, we understandably want to create an environment that enables our startup ecosystem to flourish, but this can’t be done at the cost of public trust. Doing so would see us all lose out in the long term.