WordPress’ freemium model fails

Today, I noticed for the first time that WordPress would now like to place ads alongside my content. The only way to avoid this is to pay them, and that’s not what I blog for. I’m also not inclined to place my own advertising to cover costs, so I’m left with no choice but to conclude that freemium is dead, and that I should leave this platform in favour of a privately hosted blog somewhere. It’s a shame, as the keyword economy is thereby broken. Is there anyone other than Technorati to manage keyword clouds? (Google, and hence Blogspot, are evil, so I won’t go there.)

Privacy and freedom with subscription and cloud

There are certain risks associated with sending your data up to the cloud for safe-keeping. Chief among these, some would say, is privacy. If your data is not encrypted in the cloud with a key that only you know, or have access to, then you don’t know who’s reading it. This fear was once again awakened recently with the realisation that the US government was running PRISM and accessing ISP data more or less directly. What else could they be plugging into? I believe we will find that the US government had direct access to all of Facebook, as well as other sites. The corresponding denials by the companies involved seemed rather well-coordinated.

How else are we putting ourselves at risk, and can we ever be sure that we’re not being taken for a ride when using computers? While some had argued that cloud-based services would be a road towards subscription enslavement, this apparently has not happened: Google Reader was shut down rather than made a subscription service, and Picasa Web images were pushed to Google Plus. Another example is Yahoo! Mail, which silently opened a previously paid-for IMAP interface to customers some time before March 2011.

In other recent news, however, Adobe is following in the footsteps of Microsoft and converting its Creative Suite (including the popular Photoshop) to a rental model that requires computers to connect to the internet at regular intervals. Adobe products had been known to “phone home” for some time. For consumers, such home-phoney activities carry uncertainty about exactly what data is transmitted. In fact, few of us check our outgoing connections meticulously enough to catch the encrypted, privacy-violating rogue packets that some software may send. This is even more true of our mobile phones, an area in which 100% open source solutions are still limited due to the relative lack of open-sourced apps.

To be rigorous about privacy and economic-freedom-to-perpetually-do-as-you-please-with-software-you-paid-for, locally hosted open source software is the only way to go. And if you want to be really rigorous, you must read and comprehend the entire source code of everything you use. But is that still possible? Haven’t we gotten to a place where that source code is Too Long, Didn’t Read (TL;DR)? Instead, in my impression, we treat open source developers as saints, presumably due to the realisation that if they’re anything else, we may still be screwed. Code review helps, but how do you know the review coverage of the code you use? Did one person review it after it was written, or ten? Who were they, who do they work for, and how smart are they? Maybe some of them work for the NSA. Perhaps the best idea is to stay offline except on occasions when you know explicitly why you are going online, and the activity is in fact essential.

Humans have a tendency to destroy the resources they depend on. Agriculture helped us grow our populations beyond the point naturally sustainable purely by gathering and hunting, and we subsequently hunted and collected many species to extinction, and continue to do so today. We drilled a hole in the ozone layer, poisoned fisheries with oil spills, are wreaking havoc with naval sonic technology and fracking, are too undisciplined to use risky nuclear technologies safely, stir hatred with drone-mediated killings of civilians in countries far beyond our borders, and may be raising sea levels to the point of threatening the homes of hundreds of millions of people. Why should the Internet be a safe and enduring place?

Apple and Adobe

Much has been written lately about Apple falling out with Adobe. Here’s my version of that story.

Apple is always keen to push its customers forward to the next release of its operating system. In the most recent case, it offered a very cheap mini-upgrade which didn’t even change the name all that much (Leopard to Snow Leopard). This is important for Apple because it makes life better for developers coding for the platform, and, remember? Developers, developers, developers, de… you get the picture.

Adobe put a dent in Apple’s plans when it caused a kerfuffle about possible incompatibilities of CS3, or even CS4, with Snow Leopard. This may have put some customers off upgrading and detracted from Apple’s long-term strategy. I could fully understand this if Adobe felt it had been a loyal partner to Apple all along and was now being left out on the iPhone. Perhaps Adobe never really understood, or had no tolerance for, the fact that Apple’s reasons for leaving out Flash run far deeper. The reason is MySpace, YouTube, and Facebook: music! Apple budged a little by offering H.264-encoded videos in a special YouTube application, but the key here was that Apple could keep tabs on what was and wasn’t added, and could prevent both hit singles and blockbusters from being shared free of (its) charge.

Apple knew that Adobe would be mad that Flash wasn’t going to be on the iPad either, so Steve decided to launch a pre-emptive strike by shouting very loudly about how terribly Apple has suffered at the hands of evil Flash (my observation is that no browser crashes as frequently and effectively as Safari, and to make things worse, it doesn’t have any crash recovery either, but that’s just a by-the-by for Steve’s personal introspection).

Meanwhile, the reason Adobe keeps a product-for-product edge in market share over Apple’s Pro apps (Aperture, Final Cut Studio, Logic Studio) would seem to be that Adobe also serves Windows customers. So in spite of the fact that Adobe’s files are not always compatible cross-platform (well, neither are Microsoft’s), they at least give themselves that appearance. How to describe Adobe’s position? Well, it needs to make sure it remains tolerated on the growing Mac platform. However, at least for the time being, Apple can’t afford to lose Adobe either. Adobe stands to lose the most coercive product in its lineup if the web of the future doesn’t do Flash, but if YouTube et al. found their way unencumbered onto Apple’s mobile devices (via jailbreaking, perhaps), Apple would lose a big chunk of its iTunes store revenue. So these two companies can hurt each other a lot, and at the moment it doesn’t look like Apple is going to play nice (all the while taking the mickey out of Google – not that I have any sympathy for that latter company, given its recent anti-privacy antics).

Book review: The Art Book

I recently picked up a copy of The Art Book at a very reasonable price. It’s basically a list of major artists, one per page, with an article about a “typical” piece of work by the artist, usually a well-known one, and the bottom three quarters of the page displaying that work of art. Each article starts with a description of the work, then moves on to general observations about works by the artist, sometimes with side notes about his life (rather unavoidable in cases like van Gogh). In all, 500 artists are featured, including sculptors (e.g. Rodin) and less categorizable examples (Koons, Merz). Having been shown in an art gallery may have been the criterion for inclusion.
Entirely missing are Bouguereau and Spitzweg, a slight pain given the recent interest in Biedermeier art. Similarly difficult to explain is the absence of Riemenschneider, the unsurpassed genius of woodwork, while his contemporary painter colleague, Cranach the Elder, has been included (although not his son). The general impression is that while other forms of art are included, the focus is still firmly on works done in the plane, especially painting (photography is entirely absent, though Gerhard Richter is there). I had previously thought of Audubon as merely a highly accomplished book illustrator, but he is there too, testimony to the focus on painting. Furthermore, the German traditions may have been slightly underemphasised. The Young British Artists are thankfully missing, although their predecessor, Joseph Beuys, is there.
Artists are presented in alphabetical order, with cross-references suggested. These cross-connections are not always comprehensive, unfortunately: there is no link, for example, between Miró and Klee. The glossary at the end of the book is basic and was of little utility to me, but will probably help complete newbies.
Now, what could be the value of such a comprehensive selection that gives so little attention to each individual artist? For me, it makes an excellent internet companion that gives me all the essential artists I may have missed, along with suggestions of other artists to explore. After a while, I got tired of the over-abundance of boring modern paintings which, once the experience has been had, need no longer be preserved – we have Kline, Klein, Hodgkin, Heron, Hayter, and that’s just two letters’ worth. I graduated from the book with a strong yearning for beautiful art, accomplished in craftsmanship as well as composition and aesthetics. All in all, it is a very good selection that leaves little to be desired, and a highly useful reference work at an affordable price; given that it was published in 1994, it remains remarkably up-to-date and comprehensive.

Phaidon Press, ISBN-10: 071484487X, ISBN-13: 978-0714844879.

Visitor tracking – server or JavaScript?

I was annoyed a few years ago when my hosting provider decided to remove access to the server logs. Previously, they’d provided nice pie charts as well as raw access data indicating which parts of a site were more popular. Importantly, because it was server access data, it didn’t matter whether the requested items were HTML pages, JPEGs or other downloadable items.

The removal of this feature seems to have happened at quite a few hosting companies; how else to explain the demand for Google Analytics et al.? Leave the hard work to the user’s browser, they said, and send the data to a separate, dedicated server. Server speeds have increased over the same period, so why were features taken away from users? And why, indeed, hand data over to Google et al. when you could get it from your server host?

Of course, we all realise that this client-side tracking is part of why we’re now all crying for faster multi-tabbed browsing, a demand that may have helped Google elbow its way into the browser-war theatre with its new offering, Chrome.

It’s quite possible, though, that the penny-pinching users themselves were at fault: unwilling to pay extra for access to their server logs, they drove down the price of web hosting by seeking out cheaper and cheaper hosts, in utter disregard of reliability of service and of features only required by people who actually care who looks at their web pages.

While I don’t have any data available to me right now to determine the exact root cause of the current situation, I do want to point out the duplicated effort: web browsers already send their user agent string, identifying the exact browser version and operating system, with every HTTP request, and in the default configuration the server will usually keep access logs recording at least the user’s IP address and the file path accessed.
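To illustrate just how much of that data is already sitting in the server log, here’s a minimal sketch that parses a single line of the common Apache/nginx “combined” log format (the sample line, field names and helper function are my own made-up illustrations, not anything from a real site):

```python
import re

# Apache/nginx "combined" log format: client IP, identity, user, timestamp,
# request line, status code, response size, referrer, user agent.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse_line(line):
    """Extract visitor details from one combined-format log line."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

# A made-up example line in combined format:
sample = ('203.0.113.7 - - [10/Jun/2013:13:55:36 +0200] '
          '"GET /blog/art-book-review.html HTTP/1.1" 200 5120 '
          '"http://example.org/" '
          '"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) Safari/534.59.8"')

hit = parse_line(sample)
print(hit['ip'], hit['path'], hit['agent'])
```

The visitor’s IP, the exact resource requested (HTML page or JPEG alike), the referrer and the full user agent string are all there, with no JavaScript in the visitor’s browser required.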

How many of you really feel that you need more data? Do you need to know your users’ screen resolutions? Do you actually compute heat maps for your pages? Have you ever used analysis software for raw server logs, and what is wrong with your hosting company providing web access to analysis tools based solely on server logs?

Looking forward to your comments.