Part 1: Public-purpose technology is not neutral
Interview with Dr. Stephanie Hare, author of Technology Is Not Neutral
Dr. Stephanie Hare is a leading researcher, speaker, and broadcaster in technology ethics. In this exclusive two-part interview, Stephanie discusses key themes of her book Technology Is Not Neutral, and their relationship to public-purpose technology, with StateUp’s Rachel Osnos.
RO: In the book, a core theme in the discussion of whether technology can be considered neutral is impact versus intention. So I'm curious, from your perspective, how do you see public-purpose tech fitting into this discussion?
SH: Well, in theory, public-purpose tech is designed to have a positive impact. So what do we mean by that? That's the tricky bit, because it'd be so easy to say it just means it's good. But what do you do when positives are actually in conflict? Saving money for the public purse could be viewed as good, but what if you're saving money by, for example, sending refugees to Rwanda, because somebody's decided that that's somehow better or cheaper than processing their applications here in the UK? And I'm not sure it is, by the way; I haven't seen the math. But it's clearly not good from an environmental perspective; it cannot be good to be flying all these people out to a third country. And that's also a country with a very questionable human rights record.
So, why are we doing this? We have systems in place. What is it really about? It's actually about deterring people from even seeking asylum here. So, you can have public objectives that conflict, and depending on your priorities, they will be seen as good or bad. So even when we look at impact and intention through the book’s framework, it's not a straightforward, easy calculus.
To give another example, during the pandemic we saw lots of neighbourhoods here in London being made into low-traffic areas, which sounds amazing, right? It's great for reducing air pollution, and London's air is really toxic and makes lots of people sick. So you want to reduce that. But if you don't pair low-traffic zones with viable alternatives, you hurt your delivery people, and you hurt families who may need to carry small children around and do their grocery shopping. It's not possible for them to put everything onto a bicycle - that bicycle model isn't actually going to work for everyone. It's also ableist; it's not accessible for the whole population. So it's a great aim, but it has to be balanced and considered holistically.
So this is a really tough framework, if you want to do public interest tech in any way, because you're still going to be dealing with competing, good goals.
So this is a really tough framework if you want to do public-interest tech in any way, because you're still going to be dealing with competing good goals.
RO: What lessons do you think public-purpose tech entrepreneurs or founders, investors, and governments should take away from a tech ethics approach? Maybe there are specific elements or areas of the framework that you take us through in the book that are even more important for some stakeholders than others in this equation?
SH: What I've tried to put out is by no means a definitive take on this; it's more for people who are new to this, or for people who perhaps are not new to this but are just tired and looking for a different view. So, I had two different audiences in mind while creating this book, because I've been both - I've been the new person, and now I'm the exhausted 45-year-old! So, I'm constantly having to learn new stuff, and I really value it when somebody gives me a fresh take.
I find it really helpful to have methodologies or, if you will, intellectual frameworks. So very, very early on, when I was a young strategist starting out, I had a boss who was like, PEST, just remember PEST: Politics, Economics, Society, Tech. Anything that we're talking about, just put it through those four boxes in your brain, and run them almost in parallel, to see what the impacts are. I thought, that's quite clever. So this works any time you get put on the spot in an interview, or any time you have to take something apart very quickly. If somebody proposes an idea, you can test it.
The situation I just gave with the low-traffic neighbourhood is a great example of using an intellectual framework to test an idea: anything that reduces air pollution seems good, but in testing you see how it could be super ableist and could hurt small businesses. What are the alternatives that you're offering? You can't take something away without offering some sort of solution in its place. So that kind of nimbleness and agility, the spirit of that, was what I had in mind in writing this book. And the reason I bring in philosophy is because not all of us study it, and it gives you a "software of the mind," a phrase I love, borrowed from another philosopher. The framework is like a Swiss army knife - one tool with six component tools.
So for any technology you're thinking about, you begin by parsing it through the lens of metaphysics: what is reality? That sounds like a stoner question, but it's really important - again, consider the reality of a low-traffic situation and the reality of the problem of London's air pollution. I'm deliberately using a non-technical, social example because I think that's a useful starting point. Everyone in London, all of us, breathes this air. Okay, so the reality is, the air is disgusting. We can measure it. We know it's bad, because we have all these different ways to determine this; we've just done metaphysics.
Then there's epistemology, in terms of sources of knowledge and logic: how do we know? We know because we can measure it; we've got the sensors. Then there's the political philosophy point of: whose concerns matter more? Is it asthmatics, children who are growing up with their lungs being hurt, and people who can't breathe? Is it the small business owners who are hurt by having to spend so much more on fuel, or who can't deliver and are going out of business? Is it families who are like, this is great, but your solution is for people to ride a bike? Well, that doesn't work if I have three small kids and I need to do my weekly shop. Or if I'm elderly, what am I supposed to do? Right? So that's your political philosophy angle of: whose power matters here? Who's included and who isn't?
Then there's the aesthetic lens: what is our experience, aesthetically, of living in a really polluted city versus not? We remember that during the first and second lockdowns it looked like the air here had been scrubbed clean. It was insane. I've never seen the air like that before, because all of the particulate pollution from driving had dropped. It was stunning, and it showed you what was possible. I also have asthma, so I'm super sensitive to this discussion. Now you can see the days when it's like a brown layer of sludge just sitting over the city, and we're breathing that.
Then there's the ethics lens, which is: is this a good thing or a bad thing? Is it a good thing to try to reduce air pollution? Or is it a bad thing, on balance? Does it hurt us? Because sometimes the road to hell is paved with good intentions: you can have a good goal, but in pursuing that good goal you can leave a path of destruction in your wake and create ten new problems in trying to solve one. What does it mean to live a good life, for a city and for all the inhabitants of that city?
In looking through those six lenses, you can go so much deeper and richer and really build out a philosophical approach to technology, but I wanted to do the absolute basics for people who haven't studied philosophy, to kind of get them going. To begin these discussions, I like to move deliberately from a non-technological example to a technological example, to remind us that technology and tools exist in a context. If you launch something into a city, it's going to affect different people differently. We all went through the pandemic, but we did not have the same experience of the pandemic at all. That's a very humbling thing, and I think it's very useful for technologists to bear in mind.
This is important for a lot of solutions that we build with taxpayer money, like contact tracing apps. The apps only worked on phones that could run the API, and that's a technological problem that has nothing to do with government policy. But the fact is that taxpayer money was giving a potential advantage - protection from a virus - only to people who could afford the iPhones and Android phones recent enough to run the API, in spite of how much we talk about digital exclusion and accessibility and inclusivity all the time in the UK. That was an example of: you can't afford it, too bad.
RO: Are there certain pitfalls for technologists that you see more frequently?
SH: I think technologists can sometimes be tempted to rush in and start building fast, thinking they will iterate and sometimes even find the use case and demonstrate the value proposition as they go. In the case of the pandemic, for instance, I think the technologists didn't stop to ask themselves first: will this actually help break the chain of transmission? They could have been benchmarking in real time against all of the other countries, like France, that were trying these apps for contact tracing and vaccine passports for domestic use, etc. If you compare the number of cases, the number of hospitalisations, and the number of deaths, it all kind of equalised eventually. If they had paid attention to those things much earlier, they might have stopped building certain things.
To give a positive example, the Zoe app, through which people reported their symptoms, was spun up in five days. Why didn't the government do that? Why did it take a private-sector company or a university to do it? The Zoe app was also picking up symptoms much earlier than the public health authorities were. In addition, I think they were quite honest with people about how this app functioned as an experiment, as experimental technology, saying "we're crowdsourcing, and we want your feedback on what's working and what's not." There was a lot of outreach.
Whereas I think the UK government, because of a political agenda, wanted to keep saying that everything was "world-beating," and the apps were presented as the key to our future - and then the government had to walk it back when they clearly weren't the key to our future and our freedom. That is where I think technologists could do a much better job, and maybe they don't want to get into politics - and I understand that.
However, that's why I believe technology is not neutral: you got into politics the minute you started building it.
So your job as a technologist could actually have been to work much more closely on demonstrating the value proposition - and if you couldn't see it, then you had to be honest about that and kill it.
For example, in the five months between the pandemic wave of last summer and the Omicron wave in the winter, the contact tracing app was quietly dropped from the government's plan for managing Omicron; they went straight to vaccine passports for domestic use. We were no longer trying to use the contact tracing app. Was there any announcement that we could delete all that from our phones? Or about what was going to happen with the data? Or whether the app was actually worth the money? There's been little-to-no follow-through from the media, the government, or technologists to discuss in public what worked, what didn't work, and the lessons learned. That's why I wanted, in the book, to at least track this as part of current affairs. If we want this to be part of our pandemic-preparedness toolbox going forward, we need to do this rigorous work. Where is that analysis happening now?
Read Part 2 of the interview with Dr. Stephanie Hare, author of Technology Is Not Neutral, here.
Feedback:
Select feedback from the last post is available at the bottom of the page here. Tell us what you think about part 1 of our interview with Dr. Stephanie Hare here.