The currently beta-tested and about-to-be-released Microsoft Studios game Halo 5 contains a storyline about a rogue AI named Cortana. The recently released Microsoft Windows 10 also ships with an AI assistant named Cortana. Somebody thought this was a good idea.
I don’t think I could live with myself after exposing my friends to government spying. Maybe it’s time to stop using big web services altogether.
There are certain risks associated with sending your data up to the cloud for safe-keeping. Chief among these, some would say, is privacy. If your data is not encrypted in the cloud with a passkey that only you know, or have access to, then you don’t know who’s reading it. This fear was once again awakened recently with the revelation that the US government was running PRISM and accessing ISP data more or less directly. What else could they be plugging into? I believe we will find that the US government had direct access to all of Facebook, as well as other sites. The corresponding denials by the companies involved seemed rather well-coordinated.
How else are we putting ourselves at risk, and can we ever be sure that we’re not getting taken for a ride when using computers? While some had argued that cloud-based services would be a road towards subscription enslavement, this apparently has not happened, with Google Reader being shut down rather than made a subscription service, and Picasa Web images being migrated to Google Plus. Another example is Yahoo! Mail, which quietly opened a previously paid-for IMAP interface to all customers some time before March 2011.
In other recent news, however, Adobe is following in the footsteps of Microsoft and converting its Creative Suite (including the popular Photoshop) to a rental model that requires computers to connect to the internet at regular intervals. Adobe products had been known to “phone home” for some time. For consumers, such home-phoney activities carry the uncertainty of exactly what data is transmitted. In fact, few of us actually check our outgoing connections meticulously enough to capture encrypted privacy-violating rogue packets that some software may send. This is even more true of our mobile phones, an area in which 100% open source solutions are still limited due to the relative scarcity of open-sourced apps.
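Checking outgoing connections is less exotic than it sounds. As a minimal sketch, assuming a Linux system where the kernel’s /proc/net/tcp table is readable (the field layout and the “01 = established” state code are Linux-specific, and this only covers IPv4 TCP, not UDP or IPv6):

```python
def parse_proc_net_tcp_line(line):
    """Parse one data line of /proc/net/tcp into (local, remote, state).

    Addresses appear as little-endian hex, e.g. 0100007F:0050 is 127.0.0.1:80.
    """
    fields = line.split()

    def hex_to_addr(h):
        ip_hex, port_hex = h.split(":")
        # Reverse the byte order of the 8 hex digits to get dotted-quad form.
        octets = [str(int(ip_hex[i:i + 2], 16)) for i in range(6, -2, -2)]
        return ".".join(octets) + ":" + str(int(port_hex, 16))

    return hex_to_addr(fields[1]), hex_to_addr(fields[2]), fields[3]

def established_connections(path="/proc/net/tcp"):
    """Yield (local, remote) pairs for currently established TCP connections."""
    with open(path) as f:
        next(f)  # skip the header line
        for line in f:
            local, remote, state = parse_proc_net_tcp_line(line)
            if state == "01":  # 01 = TCP_ESTABLISHED
                yield local, remote
```

Run in a loop, this catches which hosts your software talks to, though not, of course, what it says to them once the traffic is encrypted.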
To be rigorous about privacy and economic-freedom-to-perpetually-do-as-you-please-with-software-you-paid-for, locally hosted open source software is the only way to go. And if you want to be really rigorous, you must read and comprehend the entire source code of everything you use. But is that still possible? Haven’t we gotten to a place where that source code is Too Long, Didn’t Read (TLDR)? Instead, in my impression, we treat open source developers as saints, presumably out of the realisation that if they’re anything else, we may still be screwed. Code review helps, but how do you know the review coverage of the code you use? Did one person review it after it was written, or ten? Who were they, who do they work for, and how smart are they? Maybe some of them work for the NSA. Perhaps the best idea is to stay offline except on those occasions where you explicitly know why you’re going online, and the activity is in fact essential.

Humans have a tendency to destroy resources they depend on. Agriculture helped us grow our populations beyond the point naturally sustainable purely on gathering and hunting, and we subsequently hunted and collected many species to extinction, and continue to do so today. We drilled a hole in the ozone layer, poisoned fisheries with oil spills, are wreaking havoc with naval sonic technology and fracking, are too undisciplined to use risky nuclear technologies safely, stir hatred with drone-mediated killings of civilians in countries far beyond our bounds, and may be raising sea levels with the consequence of threatening the homes of hundreds of millions of people. Why should the Internet be a safe and enduring place?
I recently picked up a copy of The Art Book at a very reasonable price. It’s basically a list of major artists, one per page, with an article about a “typical” piece of the artist’s work, usually a well-known one, and the bottom three quarters of the page displaying that work of art. Each article starts with a description of the work, then moves on to general observations about the artist’s œuvre, sometimes with side notes about his life (rather unavoidable in cases like van Gogh). In all, 500 artists are featured, including sculptors (e.g. Rodin) and less categorizable examples (Koons, Merz). Having been shown in an art gallery may have been the criterion.
Entirely missing are Bouguereau and Spitzweg, a slight pain given the recent interest in Biedermeier art. Similarly difficult to explain is the absence of Riemenschneider, the unsurpassed genius of woodcarving, while his contemporary painter colleague, Cranach the Elder, has been included (although not his son). The general impression is that while other forms of art are included, the focus is still firmly on works done in the plane, especially painting (photography is entirely absent, but Gerhard Richter is there). I had previously thought of Audubon as a highly accomplished book illustrator, but he is there, testimony to the focus on painting. Furthermore, the German traditions may have been slightly underemphasised. The Young British Artists are thankfully missing, although their predecessor, Joseph Beuys, is there.
Artists are presented in alphabetical order, with cross-references suggested. These cross-connections are not always comprehensive, unfortunately: there is no link, for example, between Miró and Klee. The glossary at the end of the book is basic and was of little utility to me, but will probably help complete newbies.
Now, what could be the value of such a comprehensive selection that gives so little attention to each individual artist? For me, it makes an excellent internet companion that gives me all the essential artists I may have missed, along with suggestions of other artists to explore. After a while, I got tired of the over-abundance of boring modern paintings which, once the experience has been had, need no longer be preserved – we have Kline, Klein, Hodgkin, Heron, Hayter, and that’s just two letters’ worth. I graduated from the book with a strong yearning for beautiful art, accomplished in craftsmanship as well as composition and aesthetics. All in all, it is a very good selection that leaves little to be desired: a highly useful reference work at an affordable price, and, for a book published in 1994, remarkably up-to-date and comprehensive.
Phaidon Press, ISBN-10: 071484487X, ISBN-13: 978-0714844879.
I was annoyed a few years ago when my hosting provider decided to remove access to the server logs. Previously, they’d provided nice pie charts as well as raw access data indicating which parts of a site were more popular. Importantly, because it was server access data, it didn’t matter whether the requested items were html pages, jpegs or other downloadable items.
The removal of this feature seems to have happened at quite a few hosting companies – how else to explain the demand for Google Analytics et al.? Leave the hard work to the user’s browser, they said, and send the data to a separate, dedicated server. Server speeds have increased over the same period, so why were features taken away from users? And why, indeed, hand data over to Google et al., when you could get it from your server host?
Of course, we all realise that this is why we’re now crying for faster multi-tabbed browsing, a demand that may have helped Google elbow its way into the browser war theatre with its new offering, Chrome.
It’s quite possible, though, that the penny-pinching users themselves were at fault, unwilling to pay extra for access to their server logs – driving down the price of web hosting by seeking out cheaper and cheaper hosts, in utter disregard of service reliability and of features required only by people who actually care who looks at their web pages.
While I don’t have any data available to me right now to determine the exact root cause of the current situation, I do want to point out the duplicated effort: web browsers send their user agent string, identifying the exact browser version and operating system, with every HTTP request, and in the default server configuration, access logs will usually be kept by the server, recording at least the user’s IP address and the file path accessed on the server.
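That default log data already supports basic popularity analysis. A minimal sketch, assuming Apache’s “combined” log format (the field layout is an assumption; your host’s format may differ):

```python
import re
from collections import Counter

# Matches Apache's default "combined" format:
# ip identd user [time] "method path protocol" status bytes "referer" "agent"
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

def top_paths(lines, n=10):
    """Count the most requested paths -- HTML pages, JPEGs and other
    downloads alike, since the server logs every request."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("path")] += 1
    return counts.most_common(n)
```

The same regex also captures the user-agent and referer fields, so browser statistics and incoming-link reports come out of the very same file – no client-side JavaScript, and no third party, required.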
How many of you really feel that you need more data? Do you need to know your user’s screen resolution? Do you actually compute heat maps for your pages? Have you ever used analysis software for raw server logs, and what is wrong with your hosting company providing web access to analysis tools that are solely based on server logs?
Looking forward to your comments.
Verified by Visa is a mechanism by which the identity of a Visa card holder making an online payment is supposed to be confirmed. In order to pass this additional verification step, the user has to enter certain characters of a previously set password. If the user cannot remember the password (or does not know it because he has fraudulently obtained the card details), he has the option of clicking “Forgotten password”, after which he will be allowed to reset the password by entering only information that is actually printed on the credit card, plus the date of birth of the card holder (which can be found on other documents typically carried in the same wallet). I’m not impressed – are you? Think about it: in a password of 6 to 12 (iirc) characters, including at least one letter and one numerical digit, can you recall which is the 9th character without writing the whole password down? I’m not convinced that a coherently thinking individual was behind this idea.
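For illustration, the challenge described above boils down to something like this (the function name and exact details are my own sketch, not Visa’s specification):

```python
def partial_password_challenge(password, positions):
    """Return the characters at the requested 1-indexed positions --
    roughly what a partial-password prompt asks the user to type."""
    return "".join(password[p - 1] for p in positions)
```

A human answering for, say, positions 2 and 9 has to reconstruct the entire password to count along it, which is exactly the usability problem: the scheme protects against keyloggers capturing the whole password, at the cost of making legitimate users reach for pen and paper.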
It’s now clear to me that with our capacity to distribute large works of art, such as books, music, and films, to global audiences of millions, and with many computer programmers’ opposition to paying for digital goods (resulting in the quick breaking of every digital rights management system yet deployed), we will see a re-emergence of patrons who support artists in recording albums, writing books, and making films. It is also possible that these patrons will be corporate bodies rather than individual persons, especially in the early days of this cultural trend. Once audiences have become fully inured to TV and online ads, such sponsorship will be the best way to reach audiences disenfranchised from traditional media, whose advertising already communicates little about the products and services portrayed and instead tries to appeal to emotions, which can be seen as deceptive. Additionally, it is clear that many corporations are wealthy enough to pay for high-quality works of art and may welcome the opportunity not to be limited to the typical duration of a TV ad. Agencies that put corporations in touch with promising artists stand to make good margins, and will be desirable employers. Most of the actual trade will be carried out online. As an example of this trend, I would cite the TED conference.
Addendum, same day: I also think it’s likely that this will raise the quality of pop culture, as patrons with economic interests won’t want to be associated with mediocre contributions. More education and genuinely witty entertainment, less l’art pour l’art.