Alex Steer

Better communication through data

The need for speed in AI ethics

1219 words | ~6 min

A few weeks back LBB asked me and a few others for our views on the open letter calling for a pause to large-scale AI research. This is what I said:

Sometimes you need old wisdom to think about new things. In the eleventh century, King Cnut set his throne on the sea shore and commanded the tide to stop. Spoiler: it did not. He was demonstrating the futility of believing you can stop the unstoppable. It’s a good, if unusual, starting point for thinking about how to take ethical actions in the AI era.

There are good reasons to be suspicious of those AI enthusiasts who argue that there should be no checks and balances on the development of this new technology. This is the ‘move fast and break things’ mentality of Silicon Valley, hyperscaled to a scenario where the speed of movement is incredible, and the scale of breakage potentially immense. We urgently need applied AI ethics, and this should not be left to technology companies (many of whom have laid off their AI ethicists recently).

But I have little sympathy for this open letter. Demanding a halt to the development of new technology, and using crude scaremongering language to do so, is not a credible ethical position. We need AI ethics that can deal with the world as it is and as it will be. When change accelerates, that matters even more.

As King Cnut knew, the tide won’t stop because you want it to.

Technology ethics, which absolutely should not be left to technologists (of which I am sort of one), is fascinating because technology often proceeds in leaps, not steps. This means ethical frameworks may not exist for decisions about the implications of new technologies, or it may not be obvious which frameworks to apply. Often these implications arise from the intersection of capability and speed, and this is true of many of the ethical dilemmas arising from machine learning and AI at the moment. In much the same way that the ethics of the right to bear arms, as defined in the US Bill of Rights, were framed around the capability of a gun but not the speed of a modern automatic weapon, so the ethics of AI are founded on beliefs about the capability of prediction, but not the speed of modern cloud computing.

I've written before about the impact of computational speed on consent when it comes to data disclosure. Telling a random person that you like Hello Kitty has a low consent barrier because of the presumption that there is limited damage that person can do with that information. Telling Facebook you like Hello Kitty a few years ago should have had a high consent barrier, because Cambridge Analytica used that information, together with lots of your other profile data and that of millions of other people, to make predictions about your personality type and voting intentions. But almost nobody could have imagined that this was how the information was being used. Some of the ethical concerns around AI are an extrapolation of this: they expose the fact that some ethical frameworks are founded on the presumption that predictions are slow, broad and inaccurate. When predictions become fast, specific and precise, new forms of discrimination become possible in fields like media, advertising, insurance, recruitment and commerce.

The ethics of decisioning and recommendation have, at least, seen some work done on them over the last decade or so, as the direction of travel has been reasonably clear. We have also seen specific ethical actions being taken, not least the proliferation of privacy policies designed (some well, some not) to curb the creation of consent-free identity graphs and unlimited trading of personal data. (Less attention has been paid to discrimination itself than to data trading, admittedly, indicating that privacy theory has been a more dominant force than theories about the ethics of prediction; hence a single company collecting vast amounts of predictive data about a person is less frowned upon than many companies collaborating to do the same.)

Generative AI presents a whole new set of challenges because, it turns out, computers have suddenly got very good at predicting not only the next best action but the next best pixel or bit. Methods like diffusion models have got very good, very fast, at learning how to output new images, videos, sounds, etc. that correspond to novel inputs (e.g. 'Abraham Lincoln playing a Wurlitzer') based on huge amounts of training data. Again, a lot of the ethical frameworks we have for these things, including those encoded in IP law, labour law, etc., have not previously been tested against such capability and such speed. This opens up whole new problematic areas: for example, whether the copyright holders of training data should have any ownership over generated outputs, or any right to prevent their work from being used as input; or whether people whose jobs involve tasks that generative AI can perform should have any protection from being replaced.
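To make the 'next best pixel' idea a bit more concrete, here is a deliberately toy sketch (mine, and nothing like a production diffusion system): it corrupts simple 1-D signals with noise, fits a crude predictor of that noise, and then 'generates' by repeatedly subtracting the predicted noise from a random starting point. Real diffusion models do this with large neural networks trained on billions of images, but the principle - learn to reverse added noise, step by step - is the same in spirit.

```python
# Toy illustration only (my own sketch, not a real diffusion model):
# learn to predict the noise added to simple 1-D signals, then "generate"
# by repeatedly removing predicted noise from a random starting point.
import numpy as np

rng = np.random.default_rng(0)

def sample_data(n, length=32):
    # Smooth sine waves standing in for training images.
    x = np.linspace(0, 2 * np.pi, length)
    freqs = rng.uniform(1, 3, size=(n, 1))
    phases = rng.uniform(0, 2 * np.pi, size=(n, 1))
    return np.sin(freqs * x + phases)

# Forward process: corrupt clean data with Gaussian noise at a known level.
clean = sample_data(2000)
noise = rng.normal(size=clean.shape)
noisy = clean + 0.5 * noise

# "Model": a linear least-squares predictor of the noise from the noisy input.
# Real systems use large neural networks, but the training objective is
# similar in spirit: estimate the noise so it can be removed.
W, *_ = np.linalg.lstsq(noisy, noise, rcond=None)

# Reverse process: start from pure noise and repeatedly subtract predicted
# noise, nudging the sample towards something resembling the training data.
sample = rng.normal(size=(1, clean.shape[1]))
for _ in range(10):
    sample = sample - 0.5 * (sample @ W)

print(np.round(sample[0], 2))
```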

Hence my point, I guess. Technology ethics frameworks need to be designed to account for the current and potential future speed of technology, because speed is such a significant contributing factor to ethical decisions. Speed (or at least, speed-for-a-given-cost) determines the scale and breadth of applications. Being recognised by your local policeman as you walk down the street once a week, and being continually tagged by face-recognition technology hundreds of times a day along with all your fellow citizens, are two variations of the same capability, but the vastly lower speed-for-cost of recognising each extra person using facial recognition AI may lead to wildly different outcomes. Hence the futility of banning research and development until ethical questions are worked out: doing so requires using the law to make rulings on things for which no ethical framework exists (saying, effectively, 'stop doing this in case we turn out to think it's bad'), and then asking ethicists and legislators to speculate about the likely speed increases of technology capabilities, and to 'sign off' on the resumption of R&D. Since speed gains for many technologies, including AI, seem to follow a power law, the consequences of getting these speculations even slightly wrong can be enormous. It's far better in these cases (I think) to make ethical decisions based on extreme possibilities rather than spend too much time trying to work out which are the likely ones. Imagine, for example, that AI becomes able to make incredibly accurate predictions about people, things and networks, faster and more accurately than people can. How should we live, and what should we value, in a world that looks like that?
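A rough back-of-the-envelope illustration of why those speculations are so fragile (the numbers here are made up purely to show the shape of the problem): if a capability grows as a power of time, a small error in the assumed exponent turns into a very large error in the forecast only a few years out.

```python
# Purely illustrative numbers: if capability grows as (years ** exponent),
# a small error in the assumed exponent compounds into a huge forecast error.
years = 10
for exponent in (3.0, 3.5, 4.0):
    print(f"exponent {exponent}: ~{years ** exponent:,.0f}x after {years} years")
# exponent 3.0 -> ~1,000x; 3.5 -> ~3,162x; 4.0 -> ~10,000x
```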

That kind of ethical decision-making - using extremes, not just likelihood estimates - is far more productive and interesting, and allows us to make fundamental ethical decisions in advance more often. It's why Minority Report is such a fascinating story. It poses the ethical question: in a world where you have a system that can predict crimes, is it right to arrest people before they commit them? The narrative drama hinges on the question: what if the system is wrong? But the really interesting ethical question is: What if it's not?

# Alex Steer (27/05/2023)