In February 2012 there was a metaphorical explosion in online privacy. A Singapore-based blogger discovered that social-networking app Path had a dramatic way of expanding its pool of potential customers: if you installed the app on your phone, it would grab all the information in your contacts list – without permission – and upload it to Path’s servers. There the data would sit, unencrypted, protected only by whatever security skills the Path web team possessed.
As the news spread, researchers looked more deeply at other apps, and discovered that many were hoovering up contact details for all they were worth. Rumours abounded that some uploaded app databases contained the contact details for Silicon Valley’s tech royalty, including intensely private numbers, simply because some early adopters of those apps had contacts within that royalty.
Path was investigated by the US Federal Trade Commission, fined $800,000, and told to delete the data it had gathered from users aged under 13 – roughly 3,000 accounts. Dave Morin, the chief executive, said plaintively that "it wasn't until we gave our account verification system a second look that we realised there was a problem."
That incident forced Apple to update its iPhone software so that users had to give their explicit permission to let an app access their contacts – although, if you agree, you still don’t know whether they will then grab the lot and upload them to the cloud. (The same applies on Android phones.) Amid all the noise about the EU’s GDPR, which brings in tough penalties for companies’ misuse of or carelessness with user data, we’re still too relaxed about what might happen to our data.
After all, what happens to the data that you hand over to a company if it goes bust? (It's more usually a question of when: 95 per cent of start-ups don't make it through their first five years.) Do you know? In the case of Path, it was bought in May 2015 by Kakao, the South Korean company behind the KakaoTalk messaging app, largely because Path had become popular in Indonesia and southeast Asia – and its new owner might apply very different standards from Path's own.
That’s the problem with our modern, connected world: we don’t know where our data will end up. Initially we get excited about signing up to new services, and pouring ourselves – our name, address, phone numbers, date of birth, likes and dislikes, personal photographs, personal connections – into them.
We tend not to wonder what will happen later. Will these companies share our details with others, or might they somehow be passed on? If the database behind the app is hacked, where will your details and photos end up? If the entire company is hacked, everything – including, potentially, credit card and bank details – is at risk.
And that's before you think about the way in which we can happily hand over our data – in effect, our personalities – to services we trust, even temporarily. The classic example of this is the innocent-looking "personality quiz" that about 270,000 people were paid to take on Facebook in 2014; through their connections to friends, and friends' friends, the personal data of about 87 million people was harvested and found its way to Cambridge Analytica, where the suspicion is that some of it was used to target ads at American voters in the 2016 presidential election, and possibly some British voters in the Brexit referendum in the same year.
It seemed so innocuous: a personality quiz? But that, and as many as 200 other Facebook-based apps, were busy acquiring all the data they could; a very modern land grab. Facebook has banned them, but that leaves open the question of quite where the data they acquired has ended up.
Yet even while people eagerly hand over all sorts of personal data just so they can find out whether they would have been a Slytherin or a Ravenclaw, they will often be distrustful of larger organisations, citing security fears. It’s not unreasonable; those big organisations, particularly banks, are gigantic targets for hackers. But for precisely that reason, they’re motivated to take extra care around security.
Small companies, by contrast, often come to security too late. In 2012 the cloud storage company Dropbox was hacked and 68 million usernames and passwords were stolen – though the passwords were hashed, not stored as readable text. (Responsibly stored passwords are kept as a "hash", the output of a one-way mathematical function; when you log in, the password you type is run through the same function and the result compared with the stored version. If they match, it must be the correct one.) Adobe, maker of Flash Player, discovered in 2013 that up to 150 million account details had been stolen. And LinkedIn, a business social network, was hacked in 2012, and about 117 million user account details were stolen.
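That store-and-compare step is simple enough to sketch in a few lines of Python. This is an illustration, not any particular company's scheme – the salt size, iteration count and choice of function here are assumptions for the sake of the example:

```python
import hashlib
import hmac
import os

def store_password(password):
    """Hash a password with a random salt; the readable original is never kept."""
    salt = os.urandom(16)  # random salt: identical passwords get different hashes
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(password, salt, stored):
    """Run the typed password through the same function and compare the results."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, stored)  # constant-time comparison

salt, stored = store_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, stored))  # True
print(check_password("password123", salt, stored))                   # False
```

If the two digests match, the typed password must have been the right one – which is exactly the check a login page performs, without ever holding the original in its database.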
The question about those stolen passwords, once they're in the wild, is how hard it is to get back to the original text. The scrambling can't simply be reversed like a code; instead, attackers run likely guesses through the same one-way function that produced the stored version and look for matches – and how long that takes varies enormously with the scheme used and the strength of the password. The risk is heightened by the fact that we tend to be lazy about passwords. How many online passwords do you have? If you're very security-minded, then the answer is "the same as the number of online services that I use." That is, every time you sign in to a different service – such as Facebook, Twitter, Amazon, Pinterest, Dropbox, Nest, Gmail, Yahoo, the App Store or Google Play, Windows, your electricity supplier, your gas supplier, your bank – you have to recall a different password, and perhaps even a different username.
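The guessing process is no more sophisticated than this sketch of a dictionary attack against an unsalted SHA-1 digest (the scheme LinkedIn was using at the time of its breach); the three-word list is a stand-in for the multi-million-entry wordlists attackers actually run:

```python
import hashlib

def dictionary_crack(leaked_hex, wordlist):
    """Hash each guess and compare with the leaked digest; nothing is 'decrypted'."""
    for guess in wordlist:
        if hashlib.sha1(guess.encode()).hexdigest() == leaked_hex:
            return guess
    return None  # no guess matched; a stronger password survives this attack

# A leaked, unsalted SHA-1 digest of a weak password.
leaked = hashlib.sha1(b"password123").hexdigest()
print(dictionary_crack(leaked, ["letmein", "qwerty", "password123"]))  # password123
```

A salted, deliberately slow scheme forces the attacker to repeat that work for every account and every guess – which is why how the passwords were stored matters as much as the fact that they were stolen.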
If you’re a touch lazy, then you’ve probably re-used the same username and password somewhere. And who, faced with the baby chick-style demands from new sites and services to sign up, hasn’t taken the path of least resistance at some point and just stuck in the familiar email address and password – the one with your first pet’s name and your date of birth?
That opens up problems when a weak storage scheme – or a weak password, such as the classic “password123” – is involved. Once you reuse a password, everything it protects becomes vulnerable. Some companies, such as Twitter, will block you from choosing any of the 100 most common (and weakest) passwords; you’ll have to pick something different. And password managers such as 1Password offer far more robust ways to create strong passwords; they can also check yours against online repositories of credentials exposed in public breaches, and warn you off them. And those sites that ask for your birth date? Hackers tend to give January 1 1970: it’s the “epoch”, the moment from which the Unix-style systems that power most of the internet count time. It’s easy to remember – nearly as easy as your own.
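Both defences – blocking the most common passwords, and checking against repositories of breached ones – come down to a simple lookup. A minimal sketch, with a tiny illustrative blocklist standing in for the real lists; the prefix function mirrors how breach-checking services such as Have I Been Pwned can be queried without the password itself ever leaving your machine:

```python
import hashlib

# Tiny stand-in for a real "100 most common passwords" blocklist.
COMMON_PASSWORDS = {"password123", "123456", "qwerty", "letmein", "iloveyou"}

def is_blocked(password):
    """Reject any password that appears on the common-password list."""
    return password.lower() in COMMON_PASSWORDS

def breach_check_prefix(password):
    """First five hex characters of the SHA-1 digest: enough for a breach-database
    range lookup without sending the full password, or even its full hash."""
    return hashlib.sha1(password.encode()).hexdigest().upper()[:5]

print(is_blocked("password123"))           # True
print(len(breach_check_prefix("hunter2"))) # 5
```

The service returns every leaked digest starting with those five characters, and your own machine checks locally whether the full digest is among them.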
But hacks are a way of life. On the "dark web", user credentials are traded by the million; your personal information might be worth only a few pennies at best. The LinkedIn data – those 117 million account details – was put up for sale for about $2,200 in 2016; that's about 50,000 accounts for every dollar, 500 for every penny. That's how valuable being able to log in to your business details and see all your connections is to a hacker: not very, because they're so plentiful. More personal data, such as credit card numbers, or the details required to steal your identity and set up a line of credit, is more expensive – but nothing like its value to you: in 2015, the data needed for ID theft would cost about $20.
That doesn't mean that such data isn't much more valuable to you or me. Having your email hacked, or your shopping account hacked (or, worst of all, your bank account hacked) is an awful experience; it leaves one feeling both powerless and angry. Sometimes the damage can't be undone: in the case of Sony Pictures, whose executives' emails were leaked by North Korean hackers, or John Podesta, Hillary Clinton's campaign chairman, whose personal email account was hacked and passed to Wikileaks, the effect can be dramatic and lasting. Even if the damage can be reversed, the time and worry take their own toll.
So next time you’re about to sign up for a new service, ask yourself: does this feel like it’s going to last? And if not, why should you trust it with anything more than an email? After all, companies rise and fall – but your personal data will always be about you.
Charles Arthur's latest book Cyber Wars: Hacks That Shocked the Business World is out now from Kogan Page