Facebook and the new errors of judgment
Another day, another Facebook privacy story. Well, sort of. BBC News reports on the publication by security consultant Ron Bowes of a data file containing the publicly available details of 100 million Facebook users.
Predictably, there's anger and concern, much of it directed at Facebook. But I'm not sure the company itself is entirely to blame. I'm starting to think it's a generational thing among internet users.
This is, after all, publicly available information. Yes, Facebook's privacy settings are convoluted and off-putting, but it is possible to lock down virtually everything with some determination and a small amount of time. Or - and I know nobody wants to hear this - you can simply not use Facebook if you're seriously concerned about putting personal data online. In much the same way that, back in the olden days when I learned to use the interwebs, you might consider not having a web page.
So yes, there's still cause to grumble at Facebook for some of its points of view on privacy; but there's a more general problem. It's that a lot of social network users don't really know how computers work any more.
Specifically, they don't know how scripting works: the ease and speed with which a script can read a publicly accessible site, extract structured data and write it to a file. Once written, the script doesn't need supervision or amendment (as long as it's not buggy). It just runs and runs.
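To make that concrete, here is a minimal sketch of the kind of script I mean - not Bowes's actual code, and the directory URL and markup pattern are invented for illustration. The point is how little there is to it: a loop, a fetch, a pattern match, a file.

```python
# Hypothetical sketch: walk a public listing page by page, pull out names and
# profile links, and append them to a CSV. The URL and the regex are made up;
# the shape of the task is the point.
import csv
import re
import urllib.request

BASE_URL = "https://example.com/directory/{page}"  # hypothetical public listing

with open("profiles.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["name", "profile_url"])
    for page in range(1, 1000):
        html = urllib.request.urlopen(BASE_URL.format(page=page)).read().decode("utf-8", "ignore")
        # Naive extraction: grab the href and anchor text of anything that looks like a profile link.
        for match in re.finditer(r'<a href="(/profile/[^"]+)">([^<]+)</a>', html):
            writer.writerow([match.group(2), match.group(1)])
```

Set it going before bed and it will have worked through a thousand pages by morning, with no further effort from anyone.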
This breaks through what I call the 'effort threshold' for privacy. We tend to assume that there are some things people won't do to us because the cost in effort outweighs the possible benefits. For example, assume someone who lives in south-west London doesn't like me and wants to punch me. That's a problem if I'm also in south-west London, and I might have a word with the local police. However, I live in South Africa, so I assume that the effort required to give me a black eye is great enough that the risk of him flying out is pretty low. I'm beyond the effort threshold.
The problem comes when we get the effort threshold wrong. That's easy to do online when you don't understand how tasks are performed by computers (and by people who know how to use them).
We're now well beyond the point at which most internet users had a pretty decent knowledge of computing generally. We can complain about this ('it's always September...'), or we can accept that it's a symptom of a maturing technology. It's probably the first time a media technology has had a cost of participation so low that this kind of 'mass error' has been a problem.
So by all means let's argue about privacy versus openness, and about settings on social networks (and whatever comes after). But let's also talk about what new kinds of knowledge are needed to avoid mass error with the technologies that increasingly define how the world works. That's a much bigger conversation. We're used to talking about a generation that's growing up digital as if that's a benefit, but it can also make that generation less well equipped to function than the members of the previous generation who had to learn their way into the future, one grep at a time.
(And if you don't get the pun in the previous sentence, ask a geek.)
# Alex Steer (29/07/2010)