There are certain risks associated with sending your data up to the cloud for safekeeping. Chief among these, some would say, is privacy. If your data is not encrypted in the cloud with a key that only you know, or have access to, then you don’t know who is reading it. This fear was awakened once again recently with the revelation that the US government was running PRISM and accessing ISP data more or less directly. What else could they be plugging into? I believe we will find that the US government had direct access to all of Facebook, as well as to other sites. The corresponding denials by the companies involved seemed rather well-coordinated.
How else are we putting ourselves at risk, and can we ever be sure that we’re not being taken for a ride when using computers? While some had argued that cloud-based services would be a road towards subscription enslavement, this apparently has not happened: Google Reader was shut down rather than turned into a subscription service, and Picasa Web images were pushed to Google Plus. Another example is Yahoo! Mail, which quietly opened a previously paid-for IMAP interface to customers some time before March 2011.
In other recent news, however, Adobe is following in Microsoft’s footsteps and converting its Creative Suite (including the popular Photoshop) to a rental model that requires computers to connect to the internet at regular intervals. Adobe products have been known to “phone home” for some time. For consumers, such home-phoney activities carry uncertainty about exactly what data is transmitted. In fact, few of us check our outgoing connections meticulously enough to catch the encrypted, privacy-violating rogue packets that some software may send. This is even more true of our mobile phones, an area in which 100% open source solutions are still limited due to the relative lack of open-sourced apps.
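Checking what your machine is saying to the outside world is at least a start, even if encrypted payloads remain opaque. The following is a minimal, Linux-only sketch of that idea: it reads the kernel’s IPv4 TCP socket table from `/proc/net/tcp` (format documented in proc(5)) and lists established non-loopback connections. It is an illustration, not an audit tool — it will not tell you which process opened a connection, nor what is inside the packets, which is of course the author’s point.

```python
# Sketch: list established outbound IPv4 TCP connections on Linux
# by parsing /proc/net/tcp directly (standard library only).
import socket
import struct


def _decode_addr(hex_addr):
    """Decode the kernel's little-endian hex 'ADDR:PORT' into ('1.2.3.4', port)."""
    ip_hex, port_hex = hex_addr.split(":")
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return ip, int(port_hex, 16)


def parse_proc_net_tcp(path="/proc/net/tcp"):
    """Return a list of (local, remote, state) tuples for each IPv4 TCP socket."""
    conns = []
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local = _decode_addr(fields[1])   # local address:port
            remote = _decode_addr(fields[2])  # remote address:port
            state = int(fields[3], 16)        # 0x01 == ESTABLISHED
            conns.append((local, remote, state))
    return conns


if __name__ == "__main__":
    # Established connections to non-loopback addresses are the ones
    # worth scrutinising: who is this machine talking to, and why?
    for local, remote, state in parse_proc_net_tcp():
        if state == 0x01 and not remote[0].startswith("127."):
            print(f"{local[0]}:{local[1]} -> {remote[0]}:{remote[1]}")
```

Tools like `ss`, `lsof -i`, or `tcpdump` give a richer view (including owning processes and packet contents), but the sketch shows how little machinery is needed to at least see the connection endpoints.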
To be rigorous about privacy and economic-freedom-to-perpetually-do-as-you-please-with-software-you-paid-for, locally hosted open source software is the only way to go. And if you want to be really rigorous, you must read and comprehend the entire source code of everything you use. But is that still possible? Haven’t we got to a place where that source code is Too Long, Didn’t Read (TLDR)? Instead, in my impression, we treat open source developers as saints, presumably in the realisation that if they’re anything else, we may still be screwed. Code review helps, but how do you know the review coverage of the code you use? Did one person review it after it was written, or ten? Who were they, who do they work for, and how smart are they? Maybe some of them work for the NSA. Perhaps the best idea is to stay offline except on those occasions when you know explicitly why you are online, and the activity is in fact essential.

Humans have a tendency to destroy the resources they depend on. Agriculture helped us grow our populations beyond what was naturally sustainable purely by gathering and hunting, and we subsequently hunted and collected many species to extinction, and continue to do so today. We drilled a hole in the ozone layer, poisoned fisheries with oil spills, are wreaking havoc with naval sonar technology and fracking, are too undisciplined to use risky nuclear technologies safely, stir up hatred with drone-mediated killings of civilians in countries far beyond our borders, and may be raising sea levels, threatening the homes of hundreds of millions of people. Why should the Internet be a safe and enduring place?