Two ways technology could help to better regulate technology: Part 1 - Legislating
By Oliver Marsh, writer on technology policy, founder of The Data Skills Consultancy, and formerly a communications adviser at No. 10 Downing Street and an official at DCMS. Part 2 of this two-part series by Oliver is available here.
The risks presented by emerging technologies are, arguably, amongst the greatest existential threats humanity faces. Despite this, there are substantial gaps in governments’ abilities to effectively regulate technology.
In this two-part discussion, I outline some strategic considerations around how to regulate technology and how technology itself could support these ends.
As background, from 2019 to 2021 I helped design the UK’s international data transfer assessment process at the Department for Digital, Culture, Media and Sport (DCMS), having previously worked on counter-disinformation and social media at Number 10 Downing Street. I also took up a position as a Policy Fellow at the Royal Academy of Engineering, with a research question of whether technology itself could help us improve technology regulation. This two-part discussion will draw on my experience of this work as a window into broader strategic questions of technology regulation. The first piece focuses on one end of the regulatory pipeline: designing legislation to achieve particular ends. The second focuses on the – in my view much harder – problem of enforcement.
Challenges of legislating for technology
All legislation struggles with the issue of ‘future-proofing’ – how to ensure laws stay appropriate as circumstances change. However, the speed with which digital technology can change (by comparison with, say, an oil rig or a road) heightens this challenge. This is not necessarily because of specific new technologies, but can be due to broader trends in usage. For instance, the GDPR was drafted between 2012 and 2015 and took effect in 2018, yet it already struggles with some very common uses of technology. One key example is machine learning: if someone objects to their data being used to train a model, does that mean the model must be entirely re-trained with their data removed? Even a 2020 report to the European Parliament acknowledged that “the GDPR does not provide sufficient guidance” for controllers in this respect, despite the ubiquity of machine learning approaches today.
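The retraining question has a technical core: whether a system keeps enough internal state to subtract one person’s contribution, or whether honouring an objection really does mean rebuilding from scratch. A toy sketch can make the distinction concrete – purely illustrative, using a simple mean as a stand-in for a trained model; real machine-learning ‘unlearning’ is far harder, and nothing here is GDPR guidance:

```python
# Toy illustration: "unlearning" one person's data from a model.
# The "model" is just a mean of user scores, standing in for something
# trained on personal data.

def train(records):
    """Full retraining: must revisit every remaining record."""
    return sum(records.values()) / len(records)

class IncrementalMean:
    """Keeps sufficient statistics (sum and count) so that one record
    can be removed in O(1), without retraining over all the data."""
    def __init__(self, records):
        self.total = sum(records.values())
        self.count = len(records)

    def forget(self, records, user):
        # Subtract one user's contribution and delete their record.
        self.total -= records.pop(user)
        self.count -= 1

    @property
    def value(self):
        return self.total / self.count

records = {"alice": 4.0, "bob": 2.0, "carol": 3.0}
model = IncrementalMean(records)
model.forget(records, "bob")

# The cheap removal matches what a full retrain would have produced.
assert abs(model.value - train(records)) < 1e-9
```

For a mean, the sufficient statistics make removal trivial; for a deep neural network, no such compact summary exists, which is why “does the whole model need retraining?” is a genuinely open question rather than a drafting oversight.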
A related problem is how abstract legislation can seem, particularly when it deals with already quite abstract processes of moving data around. Legislation can be precise: an example is the list of ‘high risk’ use cases of artificial intelligence in the EU AI Act. But this approach risks missing problems, and is at greater risk of going out of date quickly. Mechanisms for updating such lists quickly may exist, such as secondary legislation in the UK, but they often raise concerns about appropriate levels of scrutiny. The alternative to precision is to employ broader principles. For instance, the GDPR is supposed to ensure privacy and control over data in a “technology agnostic” way. Well-written principles can, in theory, apply even as circumstances change, but the language must be chosen extremely carefully to avoid ambiguity or obsolescence. Moreover, principles often require trade-offs between different ethical concerns, such as between privacy and the benefits of sharing data, which can further complicate debates over the regulations.
“Principles often require trade-offs between different ethical concerns, such as between privacy and the benefits of sharing data.”
Crowdsourced policymaking
There have recently been many calls for increased democratic participation in policymaking, with associated innovations. It can sometimes seem that technology policy is too specialised to leave much room for non-specialists. However, technology policy is not just about specific issues and tools; it is part of a nation’s self-understanding. For example, will post-Brexit Britain be a buccaneering, deregulated nation aiming to challenge Silicon Valley, or a big-state nation which directs investment towards public-sector technology (one idea could be £350 million per week on NHS R&D)? This is a conversation citizens can, and should, be part of.
Moreover, beyond the broad philosophical questions, crowdsourcing with non-specialists can help address those issues of abstraction and future-proofing. Wider involvement can help formulate principles that have broad democratic acceptance, and also stress-test the language used to describe these principles. Non-specialists, when sufficiently well-informed, can sometimes be better at forecasting potential scenarios than specialists; in technology policy in particular, they could help to flag and account for potential future trends in how technology might be used in everyday contexts.
“Wider involvement can help formulate principles that have broad democratic acceptance.”
I, alongside others, have proposed ways in which governments can hear from a wider range of voices. One of the quickest wins could be greater use of crowdsourcing technology, as examples from vTaiwan to Polis have shown. Digital methods of connecting citizens to government can be unrepresentative, but so are many offline methods; used in conjunction, they can facilitate multiple conversations about policy and citizen-driven proposals.
There is valuable work going on within the UK government to improve consultations, and this should be expanded; the recent Conference on the Future of Europe is an interesting example to consider of a very wide-scale consultation process with both digital and non-digital components. Finally, digital consultation opportunities should persist in some form long after the legislation has been introduced, to support evaluation of its functioning ‘in the wild’ – including to evaluate its performance as situations change.
I have, naturally, skated over some practical difficulties. Government communicators often struggle to get widespread attention on governmental matters. If governments were to succeed in engaging citizens in a broad swathe of policy areas, then there could also be intense competition for attention between policy areas. For all the potential benefits, engaging non-specialists in data protection law or AI regulation is unlikely to be an easy sell. But it is not an impossible sell; from concerns over losing jobs to robots, to the positives of making social media more pleasant for everyone, there are ways of engaging broad interest. The key point is: we must move beyond engaging ‘stakeholders’ and ‘specialists’ when creating technology legislation; we should be ambitious and imaginative in the methods we use to engage society in the process of regulating technology.
To be continued in Part 2.