Back in 2010 I wrote here about the idea that in a world where digital technology is the norm, our ideas about information have to change. I referred to the concept of a 'greppable world' (after grep, the faithful old Unix text-searching utility):
> In a greppable world, nothing that is made public can be buried in detail. Form and arrangement of texts can be remade on the fly. Tasks like concordancing, which used to take years, can be done in seconds using a script. And, of course, huge stacks of documents can be skimmed for salient details just as fast.
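To make that claim concrete, here is a minimal sketch of the kind of concordancing script the passage alludes to: a keyword-in-context search over a text. The function name and the sample sentence are illustrative, not from any particular tool.

```python
import re

def concordance(text, keyword, width=30):
    """List every occurrence of keyword with `width` characters of context."""
    hits = []
    for m in re.finditer(re.escape(keyword), text, re.IGNORECASE):
        left = text[max(m.start() - width, 0):m.start()]
        right = text[m.end():m.end() + width]
        hits.append(f"{left}[{m.group()}]{right}")
    return hits

sample = "To be, or not to be, that is the question."
for line in concordance(sample, "be"):
    print(line)
```

Run against a whole corpus rather than one sentence, a script like this does in seconds what pre-digital concordancers spent careers on.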
> Right now, Facebook’s facial-recognition software can sense who is in your pictures and make tagging suggestions, but what if the social network could further learn behaviors and preferences by reading the Gap sweatshirt you’re wearing and seeing that Coca-Cola can in your hand?
What is it that's new and alarming about this concept? It's not the ability of Facebook to see the contents of your photos and make judgements about you. One reason we wear Gap sweatshirts or drink Coca-Cola is to make statements about our preferences. And as the owner of the platform hosting your pics, Facebook could always have gone through your photos and used the information in them to target you. It's just that, with over 140 billion photos on Facebook, doing so would have taken a long time. Now (or at least soon), it can be done automatically, and much faster.
The arguments we've seen about text mining over the last few years can now convincingly be applied to image mining too. As with the ability to hunt out text (impossible with books, then possible with documents and filesystems, now possible with huge unstructured data sets), we can argue about whether its applications are desirable, harmful, or intrusive. Those who disagree with the Mashable piece above can, I think fairly, point out that if you're that concerned about people judging you for the content of your photos, you shouldn't put them online.
What's clear, though, is that rising processing power has privacy implications, because it changes the practical limits on information processing. Tasks that could be done slowly by people (like going through all your photos) can now be done quickly by algorithms. This does not mean they should be regulated, but they should be discussed. People should know that the time-and-effort cost of gathering and analysing information is falling fast; and this should change how they approach their personal privacy, their roles as citizens, and even what they expect from products and services. A faster world is a less private world by default, in practice if not in theory, and thinking about privacy and openness needs to take this into account.
# Alex Steer (02/07/2012)