Two ways technology could help to better regulate technology. Part 2: Enforcement
This is the second piece in a two-part series on how to regulate technology, and how technology itself might help.
By Oliver Marsh, writer on technology policy, founder of The Data Skills Consultancy, and former Comms at No. 10 Downing Street and Official at DCMS. Part 1 of this two-part series by Oliver is available here.
As anyone who’s ever tried to change something knows, having good ideas is a small fraction of the task. Even if legislation is well drafted, people and organisations must be aware of it and follow it. This does not always work, certainly not immediately. Despite the long lead-up to the GDPR coming into effect, there was a great deal of confusion around its introduction, and compliance was slow.
When enforcing laws, you could choose to prioritise either:
1. stopping bad things happening, or
2. providing redress after bad things have happened
They aren’t mutually exclusive – sufficiently punitive redress methods (2) might, in theory, dissuade people from doing bad things in the first place (1). But as with any strategy, you should decide what to focus on and acknowledge the trade-offs. In particular, option 1 ideally requires an extensive infrastructure to monitor people, allow or disallow activity, and so on. A major example here is driving: driving tests, licences, speed cameras, traffic policing, MOTs, and suchlike are widespread. Option 2 can rely instead on victims coming forward to highlight harms. An example here is defamation. Most democratic countries would not want a system of government-mandated censors pre-checking everything before it appears in the public sphere for potential defamation. If people are defamed, they can claim redress afterwards. This saves the need to maintain a costly and intrusive monitoring architecture, but means more bad things happen in the first place. It’s a trade-off.
Technology Regulation as Defamation
Data protection laws, like the GDPR, are currently more like defamation than driving. This is not necessarily because of how the laws are drafted; some requirements of the GDPR, like appointing a data protection officer or putting contracts in place before transferring data, look a lot like pre-emptive protections against harm. But there is nothing like the hardcore enforcement one sees around driving. There is no test to qualify as a data protection officer; no regular MOT to test privacy protections. The EU AI Act, which has the potential to be one of the most consequential pieces of technology regulation in the world, has many provisions intended to stop serious harms. But, as academics Michael Veale and Frederik Zuiderveen Borgesius note, “scratching the surface finds arcane electrical standardisation bodies with no fundamental rights experience expected to write the real rules, which providers will quietly self-assess against.”
One could argue that a defamation-style approach to technology regulation is justifiable. The costs of a more hardcore enforcement system could be immense, for both governments and regulated bodies. Giving citizens powers of redress is a big step forward from no power at all. But adopting a less rigorous screening system means accepting that bad outcomes will slip through the net, that many citizens will be unaware of their rights, and that burdens will be placed on victims. And, most importantly, at what point do the ‘bads’ of technology become so consequential that redress isn’t enough? Choosing who to hire? Diagnosing illnesses? Criminalising, even killing, people?
What might stronger enforcement look like?
Various factors make stricter enforcement difficult for many 21st-century technologies. It is much easier to write code than to get hold of a car; it is much harder for outside observers to spot dangerous or malfunctioning code than a dodgy vehicle or rogue driver. Nonetheless, one could imagine extreme, technologically enabled approaches to enforcement. Here is one hypothetical picture. Personal data from a citizen in a country can only be accessed via a national data trust. Any technology which falls under a regulation – which would probably be a lot of technology – must be connected to a centralised government system that can monitor and restrict activities (and push updates when regulations change). There would be numerous concerns with such an approach – not least the centralisation of powers (and data) with government, and the risks of technological malfunction.
But we could explore less extreme alternatives. Companies could be required to use certified software which advises on the legality of certain actions. By interfacing with a government system, the technology could be alerted when legislation changes and flag potential rule-breaking back to the government, while limiting direct government collection and retention of data. Important technological decision-making, for example a hiring decision, could require two independent government-certified systems to be used, each providing a check against the other. Even these more limited approaches would have problems: how they would function at an international level, the burdens they would impose, the risk of creating cabal-like markets of regulatory technology. And although the risks of technological malfunction – or just insidious, hard-to-detect systemic issues or biases – might be less impactful if decisions aren’t entirely made by technology, they could still be very problematic.
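To make the dual-check idea a little more concrete, here is a minimal sketch in Python. It is purely illustrative: the CertifiedChecker interface, the HiringDecision fields, and the idea that a flagged decision is reported back to a regulator are all assumptions made for the sake of the example, not a description of any real system.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical illustration only: the names, interfaces, and "certified checker"
# concept below are assumptions for the sake of the sketch, not a real API.


@dataclass
class HiringDecision:
    candidate_id: str
    recommend_hire: bool
    rationale: str


class CertifiedChecker(Protocol):
    """Interface a government-certified compliance system might expose."""

    def review(self, decision: HiringDecision) -> bool:
        """Return True if the decision appears compliant, False otherwise."""
        ...


def flag_for_review(decision: HiringDecision, approvals: list[bool]) -> None:
    # In the hypothetical scheme, only this limited report would go back to
    # the government system, rather than the underlying personal data.
    print(f"Decision on {decision.candidate_id} flagged for review: {approvals}")


def cross_checked_decision(decision: HiringDecision,
                           checker_a: CertifiedChecker,
                           checker_b: CertifiedChecker) -> bool:
    """Allow the decision to proceed only if two independent certified
    systems both approve it; otherwise flag it for regulator review."""
    approvals = [checker_a.review(decision), checker_b.review(decision)]
    if all(approvals):
        return True
    flag_for_review(decision, approvals)
    return False
```

The design point is simply that two independently built and certified systems must both approve a consequential decision, so a failure or bias in one is more likely to be caught by the other.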
But against all of those problems we must ask: is that worse than the situation we have today? If we want to prevent bads, I’d argue we should be asking more imaginative – and probably more challenging – questions about enforcement. The risks of technology are high, and the track record of governments quickly and effectively limiting technological power is not comforting. Creating technology shows off human creativity, problem-solving, and boldness. We should apply the same qualities when thinking about how we control technology.
For more resources and information on this topic, see:
Ada Lovelace Institute, 2021, Regulate to Innovate.
Henry Armstrong, Chris Gorst, and Jen Rae, 2019, Renewing Regulation: ‘Anticipatory Regulation’ in an Age of Disruption, Nesta.