Whisper it: AI is a regulated market.
Yes, the UK has sensibly avoided new, over-arching regulation like the EU. But like it or not, the reality for UK AI companies is that model training comes with copyright legal risks, personalisation and safety features are shaped by data policies, market power is quickly scrutinised by economic regulators, and sectoral regulators are all developing their own rules in areas like fintech, health, drones and AVs. (If you really want to, feel free to read 13 major regulators’ ‘strategic approach to AI’ statements here. Or check out the AI Bill that may drop later today.)
Mostly, startups can and should focus on building, instead of getting too bogged down in the uncertainty. But sometimes there’s inconsistency. Sometimes customers cite regulatory risk when hesitating over a contract. Or maybe you’re creating something new, and neither you nor the regulators know where things stand. In that context, the Digital Regulation Cooperation Forum — a group of 4 ‘digital’ regulators in the UK made up of the FCA, ICO, CMA and Ofcom — has launched a service to help: the AI & Digital Hub. We don’t love the name, and we’re keen to hear feedback on the service itself. But a) it’s a pilot, so improvement is the point, and b) we absolutely support the intent to simplify things for founders — as we’ve repeatedly called for.
So we spoke to Kate Jones, CEO at the DRCF, to hear more about her journey, the role of the DRCF in AI regulation, and what startups should expect from the new initiative.
Kate Jones, DRCF CEO, on AI, regulatory innovation, and helping companies to move quickly:
The DRCF was created and funded by the regulators themselves, rather than established by statute. It acts as connective tissue between the FCA, CMA, ICO and Ofcom, rather than as a ‘super regulator’ above them all.
It coordinates work across regulators — for example, research into how consumers experience generative AI, work on competition and interoperability, and guidance on data protection and child safety. The idea is that working together is more efficient for regulators, and produces more consistent guidance for industry, than each tackling these issues in parallel.
Navigating different regulatory remits is complex for startups without in-house regulatory expertise or large legal budgets. The DRCF wants to make it easier for companies to do the right thing, and to reassure them that they won’t be undercut by bad actors.
The AI & Digital Hub is a free advice service for innovators with a query about how their product interacts with regulation. It's not going to be a binary yes/no, but the aim is to produce a consolidated response within 8 weeks, providing guidance about the systems and processes companies need to have in place to be compliant and helping products get to market more quickly.
But it’s a one-year pilot: for now, the DRCF is testing whether there’s demand for a service like this and whether it can be run effectively. Queries can be submitted anonymously and guidance will be published anonymously, while regulators will also use the Hub to identify future work they need to carry out.
Full interview with Kate Jones, CEO of the Digital Regulation Cooperation Forum
FORM: Can you share a little bit about your personal background and how you came to work at the DRCF?
KJ: Yes, absolutely. It's a pleasure to be here. I'm a human rights lawyer and an international lawyer by background — I was a lawyer with the UK Foreign Office for a long time. After I left central government and while I was at Oxford University, I became really interested in the governance of emerging tech, originally from a human rights point of view and then gradually that opened out into other areas as well.
I became interested in the DRCF because I could see that in order for digital governance to work, the interplay between regulatory remits is very important. I had seen for quite a long time the challenges to human rights law and data protection law posed by, for example, the rapid growth of social media platforms and a surveillance economy.
And then suddenly I became aware of competition law, potentially offering answers to some of those challenges. But immediately I encountered questions about how competition law — with its calls for more interoperability and more data portability — ties in with the data protection considerations.
And so you get to that interplay between regulatory remits. And indeed, one of the first pieces of work by the DRCF — before I joined it — was on exactly that issue, and was the first DRCF product that I saw. It made me think that this entity, right at the confluence of digital strategy and regulatory remits, would be a fascinating place to work.
For those that aren't aware of the DRCF, can you give a little bit of a background for them?
The DRCF, or Digital Regulation Cooperation Forum, was established in 2020 by its four member regulators. That's Ofcom, which has responsibility for online safety; the Competition and Markets Authority, which has responsibility for competition and consumer issues, including implementation of the new Digital Markets, Competition and Consumers Act; the Information Commissioner's Office, which covers data protection and privacy; and the Financial Conduct Authority, which oversees financial services. The DRCF is currently chaired by the CEO of the Financial Conduct Authority.
The DRCF was created by its member regulators rather than by statute. So it's a creature of the regulators, funded by the regulators, and exists in order to help them to do digital governance better, by helping them make sure they're coherent in approach. I gave the example just now of the interplay between competition law and data protection law, but there are various others.
For example, take the interplay between online safety on the one hand, to make sure that children are protected from harms online, and data protection on the other, ensuring that companies don’t capture more of children’s personal data than they should. Ofcom and the ICO are working hand in glove on those issues under the auspices of the DRCF. There are other examples like that as well.
And then our other big area is collaboration. We are all tackling certain shared challenges. Take AI: there are lots of things that each of our regulators needs to do, such as understanding how consumers are experiencing generative AI, what kind of issues there might be, what they might need to lean into. Doing that consumer research together is more efficient than each doing it in parallel.
Or similarly, each of our regulators has an interest in having a thriving algorithmic assurance market. Working on that together, again, makes more sense than each doing it separately, and avoids the possibility of slightly conflicting recommendations. We've each got an interest in seeing that AI is fair, so what does fairness mean? We've drawn together the work that each of our four regulators is doing on that, and drawn some conclusions. Similarly, we have a shared interest in exploring what transparency means in AI, so now we're working on that one too. And so on.
In all these areas it makes sense for our regulators to work together, and hopefully that means efficiencies both for regulators and, crucially, for industry. Hopefully it means that regulators are jointly creating a more coherent picture, and one that becomes easier for industry to navigate, than if we were each engaged in a range of entirely separate activities.
With that in mind, you’ve now launched the AI & Digital Hub. We’ve shared a bit of feedback here — some companies were aware of it, many weren’t, and reactions varied — but can you explain what it is for those who aren’t aware, and how you landed on it as a concept?
We're really excited about the Hub. It’s an opportunity for innovators to access a one-stop shop for advice from all of our member regulators at the same time, for free. Innovators can put in a query to us through our website and it will be distributed as relevant to the respective regulators. And then, within 8 weeks, we are aiming for innovators to receive one consolidated response, setting out what they need to consider in order to comply with the regulatory requirements that affect them. The Hub is open to anyone, as long as their query meets the Hub’s eligibility criteria.
The Hub launched in April, so it's still pretty new. It took a lot of work to get it into shape. And it's the first time we're delivering a service together as four regulators.
The Hub is a free source of help for innovators, which we think will enable them to get their new products to market more quickly and save them money – with practical, informal advice specific to their ideas, including how to launch their product responsibly. Putting a query in doesn't mean that you're exposing yourself to more regulatory supervision than you would otherwise face. We are keen to talk to innovators to help identify and solve their regulatory challenges, including giving them the opportunity, in some cases, to discuss with technical experts the issues they are experiencing with what they are trying to do.
What do you think the pain points are for startups, both in AI and more broadly, who are new to building in regulated spaces? What is the context that the Hub is responding to on the industry side?
We hear from industry that the regulatory environment is complex. There are lots of different regulators with different responsibilities. If you're a startup and perhaps you haven't got access to large consultancy or legal budgets, I can see that navigating your way through those different regulatory remits can be quite a challenge.
So we're keen to take measures to make it easier for businesses. That's the main pain point that we are aware of. We did some research and heard that innovators who were working with new technologies are keen to get streamlined regulatory support, to enable them to understand and work through the regulatory landscape more easily.
We also hear from companies that they want clarity on what they need to do, in order to do the right thing. Most companies want to demonstrate that they are taking a responsible approach, but it needs to be clear to them what that entails. So it’s important that we make clear what a responsible approach looks like.
And secondly, most companies don't want to be undercut by competitors who aren't doing the right thing. So they want regulation to be effective as regards their competitors as well as themselves. We're really aware of that too.
You might have seen that we've got a relatively new three-year vision. It prioritises not only regulatory effectiveness, but also our aims to support innovation and economic growth, as well as to protect consumers. We see those aims as key to what we do and how we work with industry.
Can you also give a sense of what startups should expect — what the offer is very practically — and the time commitment?
Innovators can submit a query to the Hub setting out what they're planning to do and therefore what they need advice on. It's okay if they're not quite clear exactly what advice they need, because we can then have a dialogue with them and help them work out what their question is. Once we've had that dialogue, we do all the hard work and take a collaborative approach to forming a response.
Then within eight weeks, innovators will get a reply. It's unlikely to be a reply that says, yes you can do this, no you can't do that. It's more likely to be a reply that says, ‘If you want to do this, these are the steps you need to take and things you need to think about’, ‘This is the process you need to have in place’, or, ‘Have you thought about data protection requirements, which entail the following…’ So it aims to be a helpful, practical and specific roadmap for innovators to see what they need to do to launch their product responsibly.
The Hub is a one-year pilot. What would you like to prove? Also, it’s one thing to get four regulators to agree on an approach, and another to deliver a service that starts from innovators’ needs and organises around them. How might that play into the long-term vision of the Hub?
First of all, we will be keen to see whether there is actually demand for a service like this — so all the more reason, if you're an innovator, to get your query in within the one-year pilot, to help demonstrate that demand is there!
Second, to show that it is possible to deliver a service like this effectively. I'm confident that it is, because of all the work that we have done in setting it up and the way in which each of our four regulators is committed to delivering it well. But of course, we will have to review that as time goes on.
Third, we’re planning to publish the advice that we give, in anonymised form. People's names and details wouldn't be published, but we hope that the service we're offering will have a wider benefit than solely for the people who apply for it. That's something that we would liaise with people on after they submit their queries.
That leads me to the fourth point: we're hoping that this service will help us regulators to gain a clearer understanding of innovators’ needs, and to spot areas where demand shows we need to do more work together — for example, how financial services and data protection rules interrelate if you're thinking about financial apps or digital ID. We're hoping to be signposted in that way through the service as well.
I’m really intrigued: at a practical, quite granular level, what does it look like for four regulators to come together and build a service like this?
We did research previously to establish whether there was a market demand for the Hub. We were also fortunate that DSIT offered us funding for it.
In terms of what it looks like — and actually this is a broader point about the DRCF as a whole — we work a lot on institutionalising effective collaboration between regulators.
We work with four fiercely independent bodies, who each have quite different ways of working. We've really focused on how we encourage them to work together. There are various elements to that. One is the strategic buy-in we have at the top from the CEOs of all four member organisations, encouraging everybody to lean in. Another is the way that we work to make sure that we understand each of our regulators’ interests.
There's a third point about how we organise the work. We in the DRCF core team are the hub, the centre of the wheel if you like, and then our projects are delivered by people within each regulator. So we've got a team of people, not just from one of the four regulators but across the four regulators, who are all invested, all talking to each other. We make sure they get to know each other in person as well. There are lawyers and other advisors involved too, so we've got that well-networked team working very well together. Without that oil of strong working relationships, collaboration isn't necessarily effective.
There's something interesting there about retaining internal capability and embedding it within the regulators, rather than extracting it. That will matter with AI too, which is going to affect regulators beyond the four DRCF members. For example, AI in safety-critical environments — whether autonomous vehicles or medical devices — may be regulated in similar ways, by focusing on how a continually iterating model evolves rather than certifying every single model state. But it's going to be important that the MHRA and others build their own capability there, rather than have it extracted from them.
That's absolutely right. AI is not a sector; it's going to permeate everywhere. It's really important that existing regulators, both sectoral and cross-economy, are all involved in the regulation of AI.
AI isn’t something that can easily be separated from, for example, financial services or communications or the medical sector and so on. It's like electricity. Whatever direction regulation takes in the future, in my view it’s important to have this collaboration between organisations on common issues and on coherence as well.
People sometimes have the misconception that DRCF is some kind of super regulator. We're absolutely not that. We’re a connective tissue; the statutory mandates remain with each of our four members. We're there to facilitate what they do, rather than do it for them.
I completely agree with what you're saying about risk-based regulation. We're seeing technology develop so fast that there's no way a very rules-based approach to regulation could keep up. And it wouldn't be right to try to have rules that keep up, because in my view fast regulation isn’t necessarily good regulation. A risk-based approach makes regulation much more future-proofed: it can be adapted as technology evolves, and it puts the onus much more on industry to make sure that the emerging tech they're adopting complies with the standards regulation sets. You can see that approach in, for example, the Online Safety Act, the FCA's Consumer Duty, and across the activities of a range of regulators.
The other important aspect to getting regulation right, as well as taking a risk-based approach, is to have a multi-stakeholder conversation. AI in particular, and the digital world as a whole, is developing so rapidly that it doesn't work if we think about regulation in a bubble.
Quite a lot of what I do is about talking with industry, talking with the third sector, as well as talking with government and parliament, and trying to make sure that there is as broad a set of inputs as possible into our work.
You can check out more info about the DRCF’s AI & Digital Hub here, and if you have feedback, let us know and we can pass it on to the regulators (anonymously!).
Hit reply with follow up questions, suggestions of other technology or policy leaders we should interview, or get in touch if you’re building at the frontier of tech and regulation.