Firefox extensions, broadband speed, and why software should be modular

I remember blogging some time ago about how the browser was becoming the new platform, in effect the only part of the operating system that the user should have to see. I have for some time been using Scrapbook, and now the advent of Zotero has convinced me that “build it and they will come” really works: developers are taking on the browser as the new platform. What I am very pleased to note is that after years of different GUI applications doing things in isolation (and notwithstanding Apple’s very clever, if underused, Automator), some interaction effects are finally emerging, where several extensions in combination let you do things that none of the developers could have foreseen. To give an example, I can capture a page in Scrapbook, stripping out most of the JavaScript while maintaining the layout, and then edit the source. This has countless applications that I shan’t go into here.
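As a rough illustration of that capture-and-strip step (this is not Scrapbook’s actual code, and the file names are made up), removing the scripts from a saved page while leaving the layout-bearing markup intact can be sketched in a few lines of Python:

    # A minimal sketch, not Scrapbook's implementation: drop <script> blocks
    # and inline event handlers from saved HTML, leaving the markup alone.
    import re

    def strip_scripts(html: str) -> str:
        # Remove <script>...</script> blocks entirely.
        html = re.sub(r"<script\b[^>]*>.*?</script>", "", html,
                      flags=re.IGNORECASE | re.DOTALL)
        # Remove inline handlers such as onclick="..." or onmouseover='...'.
        html = re.sub(r"\s+on\w+\s*=\s*(\"[^\"]*\"|'[^']*')", "", html,
                      flags=re.IGNORECASE)
        return html

    # "captured_page.html" stands in for a hypothetical saved capture.
    with open("captured_page.html") as f:
        cleaned = strip_scripts(f.read())
    with open("captured_page_clean.html", "w") as f:
        f.write(cleaned)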

This is very similar to an idea that Jef Raskin presented many years ago in his book The Humane Interface, namely to create an operating system that allowed users to add commands. If I remember correctly, Jef envisaged these commands being purchased rather than downloaded for free, not foreseeing that open source software would replace much of the commercial market of this kind. I remember wondering at the time how this was any different from Unix, where you can string together commands using pipes to gain additional functionality. From my reading of the book, I don’t recall that Jef explained in detail why users should extend the command set themselves. Having used Firefox for a while now, I think I understand what he intended.
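To make the Unix comparison concrete – purely as a toy, with a made-up log file – here is the spirit of a pipeline like grep error log.txt | sort | uniq -c rebuilt as three small composable Python functions:

    # Small commands composed like a shell pipeline: each function consumes
    # and produces a stream of lines, so they chain in any order.
    from collections import Counter

    def grep(lines, pattern):
        return (line for line in lines if pattern in line)

    def sort_lines(lines):
        return iter(sorted(lines))

    def uniq_c(lines):
        counts = Counter(lines)
        return (f"{n:7d} {line}" for line, n in sorted(counts.items()))

    # Roughly: grep error log.txt | sort | uniq -c   ("log.txt" is invented)
    with open("log.txt") as f:
        for line in uniq_c(sort_lines(grep(f, "error"))):
            print(line, end="")

The point is the same one the pipe makes: each piece stays small, and the user composes functionality the developers never anticipated.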

I use about a dozen extensions in Firefox (and it remains stable, touch wood!), but if you’d put me in front of a machine that had all of these extensions available to start with, I would not have known what to do. This is why I can’t bear to use Opera: it has a lot of nice features, but they aren’t very accessible. If Opera were modular, I might like it better. So the message is to allow people to extend functionality themselves, because that way, they grow with the technology and can better adapt to it.

Web applications are often limited by current broadband speeds and availability, as well as by server response times; Firefox extensions are helping to bridge this temporary gap.

Incidentally, has anyone tried Onspeed? Their proprietary compression technologies sound impressive, but I wonder whether they have the bandwidth and processing power to match their claims that they can speed up even 8Mb/s broadband.
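For what it’s worth, the basic arithmetic behind such services is easy to check for yourself. This is just generic zlib, not Onspeed’s proprietary pipeline, and “page.html” stands in for any saved page:

    # Text-heavy pages compress to a fraction of their size; already-compressed
    # media (JPEG, MP3) barely shrink, which is why compressing proxies tend
    # to recompress images lossily instead.
    import zlib

    def compressed_fraction(data: bytes) -> float:
        return len(zlib.compress(data, 9)) / len(data)

    with open("page.html", "rb") as f:   # hypothetical sample page
        html = f.read()
    print(f"HTML compresses to {compressed_fraction(html):.0%} of its size")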

What about consolidation?

What I see in the technology space at the moment is a lot of marginal technologies coming to the fore – has Bluetooth substantially improved our lives? WiFi? Mostly, new technologies seem to introduce more liabilities than they solve problems. I may be forgiven for thinking that my previous laptop had better build quality than my current one, and ditto for digital cameras; I hear people talking about the megapixel myth. And I see companies filing patent after patent for new technologies whose main purpose seems to be to cripple competitors. I see a similar stagnation in science, where people are putting out more and more research papers but lack the ingenuity to try to put it all together in a comprehensive way. Science was somewhat sexier in the Victorian age, and stuff was getting done. In my research environment, I feel there are too many people looking into superficially interesting details of their study systems and trying to make a case for spending tax and charity money on their research.

Meanwhile, a lot of publications are fading into the background. Basically, anything that isn’t available as a PDF is going to fall behind, including a number of turn-of-the-century (19th/20th) works that were full of data. Darwin’s Descent of Man is one such example, page for page full of data, anecdotal evidence &c. Instead of spending resources on adding yet more reams of data, the sensible thing to do is to look really thoughtfully at what we already have. For the most part, the reviews I read cover small subject areas rather than the bigger picture. Perhaps we are lacking talented individuals who can do this kind of work, getting ever more mired in the demands of our tools. I recently noticed that whereas ten years ago I could walk out of my house and shut the door behind me, I now have to check whether I have my mobile phone and laptop power cord with me. And when it rains, I still get wet. How is that for progress?

So how can we consolidate existing technologies into more mature products that are driven less by feature count and more by seamless integration, moderate but compelling functionality and, quite simply, technology getting out of our way? I first thought that patents were getting in our way, and that it would be an idea to declare a patent-free year, or a patent-free two years – essentially, a window in which companies could freely prey on each other’s technologies in order to create devices with an unprecedented combination of features, without devices getting clunky because companies have to work around each other’s patents. Then I thought that the problem really was with governments not providing enough basic services. Shouldn’t it be the government’s job to provide a basic, completely interoperable computing platform for its citizens that commercial companies can then build applications for? Universities already produce significant output to this effect, but it is unclear to me whether the promise of publicly funded research is ever realised, or whether it is similarly encumbered by patents or royalties.

I once noticed that websites rarely copy their designs from each other. No doubt it is frowned upon and, besides, immediately obvious, because a website’s graphical design is part of the first impression you and I would form. There is some element of human pride that deters most of us from copying each other’s work. I have no doubt that this also applies to computing hardware – be it desktop computers or mobile phones. Few people are prepared to engage in serious plagiarism. It would waste their opportunity to express themselves, to create something unique and lasting, something that carries their memory. Thinking along this line, I am not convinced that if patents were eliminated, we would see clone after clone of perfectly integrated devices. On the contrary, I think we would see just as many crappy devices as we see today. Microsoft has been unhappy with its hardware partners on several occasions, most recently spawning the Zune in response to a lack of promising MP3 players running a Windows derivative and plugging into its content distribution network.

This lesson to some extent comes out of Asian copycat devices that mimic, say, the iPod. Of those devices that actually work, few are really identical in functionality. Often, companies add on an FM receiver, voice recorder, or different manual controls. So differentiation is at the heart of the human spirit, and resists the forces of consumer demand. After all, we’ve learnt that people don’t know what they want until you show it to them. So let’s find a way to ditch the patent paranoia, and build an interoperable platform for 21st century services. (On a separate note, it would be interesting to find out why some government-funded technology projects fail, as seems to be the case with “Quaero”.)

Zooooom

Right, I was going to tell you the other thing that Linux developers don’t get about OS X. The problem starts with the fact that most Linux developers haven’t read Jef Raskin’s equivalent of Mein Kampf (in the sense that Hitler laid out what he was going to do in Mein Kampf, but most liberals in Germany did not read the book and so were caught by an avoidable surprise). Microsoft would only have needed to read Jef Raskin’s book thoroughly and develop faster than Apple – which they were well poised to do – in order to edge ahead on usability (while avoiding certain Apple patents, such as putting the application menu at the screen edge).

Here’s a quick hint:

  • Exposé: Zoom
  • Spaces: Zoom
  • Time Machine: Zoom

Okay, I think we’re getting the idea here. And did you know that the green button on the title bar is called the “zoom” button? It’s not for maximising, it’s for zooming. And then there are the zoom sliders in apps such as iPhoto and Yep. What chronology is to storytelling, zooming is to work environment visualisation. Google Earth? Photosynth? Bingo. And zooming is indefinitely extensible. As an aside, this is also how the iPod works: you zoom into the artist, then the album, then the song. Hierarchical layers. And the column view in the Finder is the same idea turned on its side. I would really, really like to see this clarity of paradigm in Linux.
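If that sounds abstract, the pattern is trivial to write down. Here is a toy sketch of the iPod’s hierarchical zoom (the library contents are invented):

    # Each zoom step descends one named layer: artists -> albums -> songs.
    library = {
        "Artist A": {"Album 1": ["Song a", "Song b"], "Album 2": ["Song c"]},
        "Artist B": {"Album 3": ["Song d"]},
    }

    def zoom(node, *path):
        """Descend one layer per path element, like each click on the wheel."""
        for key in path:
            node = node[key]
        return node

    print(list(zoom(library)))                      # top level: the artists
    print(list(zoom(library, "Artist A")))          # zoomed in: their albums
    print(zoom(library, "Artist A", "Album 1"))     # fully zoomed: the songs

The Finder’s column view simply lays all of these layers out side by side at once.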

How to get Linux up to scratch

I’ve been watching the recent competition between Compiz, Beryl and Metisse with some worry, as Linux developers seem overly keen to make the case that their graphics are at least as good as those of Windows Vista or Mac OS X, while aspects of the Linux desktop that would actually improve productivity get left behind. Is Linux stagnating? Are we really copycats after all, and while Microsoft isn’t improving their product in ways that go beyond what Linux already has, and Steve is not letting on what the new features in OS X are, we can’t make any progress on the core product? I’ll be posting more about this in a while, but here are some suggestions for how to improve Ubuntu (as an example…):

  • Bluetooth support, especially a GUI
  • a WiFi GUI that allows discovering networks (it’s silly that I have to use a separate program, Kismet, to do this! – see the sketch after this list)
  • expand the screen resolution GUI to support external monitors and screen spanning
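On the WiFi point, the plumbing already exists and just needs a front end. A minimal sketch of what such a GUI would wrap, assuming the wireless-tools iwlist scanner and an interface called eth1 (yours may be wlan0, and scanning usually needs root):

    # Call the command-line scanner and pull out the network names (ESSIDs).
    import re
    import subprocess

    def scan_essids(interface="eth1"):
        out = subprocess.run(["iwlist", interface, "scan"],
                             capture_output=True, text=True).stdout
        return re.findall(r'ESSID:"([^"]*)"', out)

    for essid in scan_essids():
        print(essid)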

So the old saying that Linux is no use for laptops just doesn’t die…