A provocation: How does public digital policy avoid the dumpster fire?

The following is the text of a provocation delivered by Mark Brown to The Digital Humanity in Health and Care seminar series at Futurelabs in Leeds on 27th June 2017.

We invented policy as a way of making things happen. It’s the spell we cast, waving our wands and mumbling our incantations, then expecting the rabbit to spring out of the hat. Policy is like a map you make of a country before you’ve explored it. It guesses what might go wrong and tries to wish into existence what might go right.

Policy cuts both ways: it mandates some things to happen and attempts to prevent others.

When we look at digital humanities we need to be asking: what exactly is it we want to happen?

This is less of a stupid question than it appears. We are currently living through an era of unprecedented change, or at the very least an acceleration of change. Technology is, whether we like it or not, changing our working lives, our social lives, our lives as part of a community and even what it means to be alive altogether.

It’s becoming increasingly clear that technology which, in isolation, appears to be of interest only to a tiny few enthusiasts, hackers and journalists may actually change the very structure of our towns, cities, governments and ultimately countries. Social media, for example, is still considered by some as a kind of fad or hobby, a frivolous curse upon people’s attention spans and family meals. It can be that: a cavalcade of holiday photos and funny cats. But the same technology, the same digital experience, is also now increasingly fundamental to how people relate to the world.

If reports are to be believed, it’s possible that state and non-state actors have manipulated the fabric of social media to shift the direction of world-defining referendums and elections. The idea of fake news, and the speed with which it spreads, may have undermined many people’s trust in the possibility of objective truth. A lonely, angry man with the most powerful position in the world sits in his bedroom tweeting his thoughts to the rest of the world, making stock markets rise and fall and sending armies to red alert. The business of selling books online seemed at first to be an issue only for tweedy booksellers listening to Radio 4 in dusty shops; instead, over a decade, it knocked out staples of our high streets one by one. Our NHS ran aground in places across the country after a ransomware attack, as if it had opened a rude picture and then couldn’t close the hundreds of pop-ups that followed.

There is no way now to separate the human world and the digital world. There is no tech space and meatspace. There never really has been, but until relatively recently it was still possible to put tech news in the file marked ‘boffins and nerds’. Now, if we deal with people we are also dealing with digital, whether we like it or not.

I’m not a tech person. I’m not a social policy person.  I am, as we all are, a person living in a digital world.  I’m interested in power and politics and technology.  I’m interested in what levers we can pull to avoid the worst of possible futures.

When I talk about tech I tend to fall back on the techniques that futurists use to think about the future and what that tells us we should be doing now. Futurists deal in three kinds of futures: probable, possible and preferable.

Probable futures are the futures that seem likely to happen if nothing major changes from now; they’re about extrapolating from present events. They’re the futures that are like now, but morer.

Possible futures are the kind of futures you explore by being playful and mucking about. Sometimes they’re explored through stories, or films, or art or imagination. They’re the futures where we look at how bad, how good, how weird the future might be. They’re the versions of the future where we explore our desires or look at how one particular thing might change everything. It’s the jetpacks and flying cars future, if you like. The future that we try on to see how it makes us feel. Possible futures are where we try on our ‘what-ifs’ for size.

Preferable futures are the futures that are somewhere between the probable futures and the possible ones. Preferable futures are the ones that we look at and think ‘that’s where we’re trying to get to’. Preferable futures are the ones that we try to bring about by making decisions and taking actions now, bending the path of the future further towards where we want the future to be and away from the things we think will happen if we don’t do anything and just let events unfold. A preferable future is the one you get to by playing through a possible future and playing through the probable one and thinking ‘how do we get closer to what we want to happen, rather than what will happen anyway’.

This brings us back to my question: what do we actually want to happen?  And that’s where things get sticky and where the discussion of ethics and values comes into play.

Delivering public services is a political act.  The shape of public services and how they feel are defined by political and historical realities. The decision of who pays tax, what taxes they pay, upon whom those taxes are spent and who it is that does the work is political.  The ‘social good’ is not an uncontested idea.

With the ascendancy of digital as a transformative agent in society, we are looking at very particular historical forces shaping what might come next. People, technology and institutions intermingle.

To me there are a number of issues that govern what digital public services might look like.

These are based upon ideas about efficiency, ideas about profit, and ideas about the transfer of responsibility away from paid workers and onto individuals.

Efficiency and austerity

There is a persistent notion that technology will always be labour-saving. The vacuum cleaner, microwave oven and fridge freezer were sold as technology that would liberate housewives from domestic drudgery, which, within that sexist conception, they did. They freed women to join the labour market, which is brilliant, but they didn’t really remove the gender division around housework: working women still, on average, do much more of the house and caring work. In the public sector we often confuse technology with the idea of efficiency: technology will do things more quickly, more cheaply, with less work. The question we have to ask is whether this is the correct way to view the role of technology: as a kind of magic partner to austerity and limited spending horizons.

One of the most important areas is working out what is transactional and what is relational in public services: where do people want straightforward, easy-to-access processes, and where do they want people and complexity and care and support? I think we have to face that different people have different needs from the same public service process. Anywhere a process is more than a simple exchange of information will mean that some people are looking for something more. It often seems to me that we spend a lot of time trying to convert relationships in public services into transactions. Obviously, I do not want to meet weekly for an hour with a representative of the council every time I pay my council tax, but I might very much value something more than a text message to check up on me in my own home. The trick we will have to master is establishing who needs what from our processes and making technology a lever to make that happen. Intrinsic to that is providing digital to people who want digital transactions and then using the savings to increase the people time available to those who want and need it. But how would we do that?

Profit and privatisation of the commons

In the classic business sense, disruption means finding a business model that, once introduced to the market, makes it impossible for everyone else to do business in the way they previously had. The ambitious digital disruptor moving into the social problem space may talk a very good game about disrupting the problem; but hidden in that, wittingly or unwittingly, is a wish to disrupt the market, which means finding a way of inserting themselves and their idea into the stream of money currently flowing in that particular sector.

An ambitious digital disruptor wanting to solve social problems may not see that they are ultimately trying to break completely the existing ways of delivering services and place themselves there instead. Or they may, because they can see that they would get to keep the money if they were successful. In the same way that robots have rendered much manual labour obsolete, the digital disruptor might be aiming to do the same in an area of social need. And much like robots and labour, they know that it’s the people who invent the robots who get to keep all the money. It is conceivable that we might, forever, give away elements of our public services to private companies. Managing that will be a policy challenge, especially if tech is not well understood by public professionals, as recent events such as the Google DeepMind deal with the Royal Free London NHS Foundation Trust suggest.

To quote from a news story:

“The deal with The Royal Free was quietly signed in September 2015 and it gave DeepMind permission to process 1.6 million NHS patient records from November 2015 to November 2016. The records belong to patients that have visited Royal Free Hospital.

“DeepMind said it needed access to the medical records to help it test its kidney monitoring mobile app, which is called Streams and has the potential to save lives by sending out alerts to clinicians when their patient’s condition suddenly deteriorates.

“But medical records contain some of our most private information.”

Google really did get a very good deal there.

I recently reviewed a paper for The Mental Elf on the ethics of recommending digital services to patients in mental health: Bauer et al.’s (2017) open access paper Ethical perspectives on recommending digital technology for patients with mental illness, if you want to have a look. The paper’s authors had some very strong observations:

“The authors were at pains to point out that even healthcare professionals enthusiastic for the implementation of digital technologies may not have an understanding of the wider digital economy and the potential points of tension between its practices and accepted ethical standards. As such, they recommend healthcare professionals should have access to education and regular updates on the state of the industry from independent sources rather than the seller of services themselves.”

Devices, websites and apps regularly capture and transmit data about individuals.  As the authors of the paper say: “In the past, it was only profitable to collect personal data about the rich and famous (Goldfarb and Tucker 2012). The costs of data capture, storage, and distribution are now so low that it is profitable to collect personal data about everyone… Data from sources that appear harmless and unrelated may be combined to detect highly sensitive information, such as predicting sexual orientation from Facebook Likes (Kosinski et al. 2013).”  The authors quote Eric Schmidt (Executive Chairman of Alphabet, Google’s parent company): “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about. (Saint 2010).”

They go on to say: “At first glance, the use of personal data for commercial profiling and medical monitoring purposes may look identical. But the motivation for using algorithms to define emotions or mental state for commercial organizations is to make money, not to help patients… There must be a clear distinction between the algorithmic findings from the practice of psychiatry, and commercial findings for profit, even though similar analytic approaches are used.”

The authors were very worried about the deal with the devil that public services might make with private providers, where the data is what really interests the private provider.

Companies provide free-to-the-user services by collecting huge amounts of data, which is turned into data products sold on to third parties. This includes data from medical websites and apps. This trade allows for the creation of decision-making tools that operate without human involvement, something that concerns the authors. “The collected data based on tracking behaviors enable automated decision-making, such as consumer profiling, risk calculation, and measurement of emotion,” they write. “These algorithms broadly impact our lives in education, insurance, employment, government services, criminal justice, information filtering, real-time online marketing, pricing, and credit offers” (Yulinsky 2012; Executive Office 2016; Pasquale 2015). The authors worry that these algorithms may compound biases and introduce new inequalities, and that healthcare apps and websites might feed into this. The authors claim that “people divulge information online because they are susceptible to manipulations that promote disclosure” (Acquisti et al. 2015). People are keen to keep their medical data private but often inadvertently share large amounts of data about themselves as they interact with digital products and services. In the use of medical websites and apps, the line becomes blurred.

How does public digital policy avoid the dumpster fire?

The culture of Silicon Valley is increasingly looking, in the American phraseology, like a dumpster fire. Libertarian ideas run riot, with the very ideas our public services in the UK are founded upon seen as a deadly infringement of the individual’s right to choice. Low tax, low regulation is the mantra. The ethics of digital creators, investors and developers may run contrary to our core purposes and ethics in running public services such as health and care.

This shades into my final concern: how do we decide whether digital technology represents a transfer of power to individuals, and not just a way of saying ‘you’ve got your own tools, make your own public services’? This is, in some ways, that libertarian impulse that always positions doing it yourself as the ultimate goal, doing away with public services altogether.

I think the spectrum we need to be looking at is a line with care at one end and autonomy at the other. Depending on who we are and what our problems, needs and desires are, we will be positioned somewhere along that line. We may be at one end for some things and at the opposite for others. There are some things in my life where I want the help, concern and care of others. There are others where I want other people to just get out of my way. As with the consideration of whether services should be transactional or relational, so we must look at whether people want autonomy or care, and when and how they want it.

We really need to be using these kinds of thoughts as a way of exploring not just reactive public policy but public policy developed by considering probable and possible futures so that we can actually decide upon a preferable one.

We need to be leading the discussion about the future that we want and the future of public services that we want.  Future policy should reflect the future we want.

I really don’t think we know what people want from digital public services because people do not feel they have any power in the face of digital acceleration.  People are still taught to consider digital as a specialism rather than a practical thread of life that affects everything.  In the end, the only thing that will make any public service digital policy humane is people.  People who make policy, people who are on the receiving end of policy, people who put policy into action: it’s up to us to make the arguments for the best of all digital worlds.

What is the best of all digital worlds?  Well, that’s a work in progress.

Mark Brown is development director of Social Spider CIC. He is @markoneinfour on Twitter.
