The desktop I'd like to see

I don’t think I’m being fusty for suggesting that most computer users
see the computer as a tool for better living, not as a thing in itself
that’s designed for their delight. So why are developers still pushing
the desktop race toward richer interfaces whose existence is supposed
to justify itself?

Things have gotten to the point where Microsoft has to release its
latest operating system as a kind of two-tier offering: one tier with
all of the compositing and 3D effects that take advantage of recent
graphics cards, and the other directed at users whose systems lack
these cards, or whose laptops raise concerns about battery life.

No surprise: Microsoft has been loading up its software and loading
down users’ hardware for years. But the open source world has pursued
a comparable path since Xgl was introduced for the X Window
System. It, too, takes advantage of modern graphics cards to compete
with proprietary desktops on the basis of dazzle.

You don’t have to write back to tell me that a good use of color,
icons, and other graphical features can save people time and prevent
errors. I know that, and I’m not using this blog to dwell on the pros
and cons of compositing desktops. I could point out that saving time
and eliminating errors doesn’t seem to be the stated purpose of the
advances everybody seems happy to label eye candy. What I want to
write about is how to save a lot more time with a functional approach
to the desktop.

The component alternative

I’d rather have lean visual effects with minimal distractions (which
can look very attractive) and let desktop developers focus on getting
programs to be more open to each other and work together more tightly.
I’m getting tired of moving from one application silo to another, a
division I find increasingly arbitrary.

If I’ve developed a complex equation in my spreadsheet, I’d like to
extract it and use it in other settings. If I write a powerful
pattern-matching (regular expression) subroutine in Perl, I’d like to
apply it to the page in my browser or the document in my word
processor.
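
The idea can be sketched concretely. Here is a hypothetical example (in Python rather than the Perl mentioned above; the function name and pattern are invented): a pattern-matching routine written once, exposed as a classic stdin/stdout filter so that any program able to hand over plain text could reuse it.

```python
import re
import sys

def extract_emails(text):
    """A pattern-matching routine written once and reusable anywhere
    that can supply plain text: a browser selection, a word-processor
    document, or a pipe on the command line."""
    pattern = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    return pattern.findall(text)

if __name__ == "__main__":
    # Behave as a Unix filter: read stdin, write one match per line.
    for match in extract_emails(sys.stdin.read()):
        print(match)
```

The point is not the regex itself but the packaging: the same routine serves interactive programs and shell pipelines without being rewritten for each silo.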

I explored this idea five years ago in articles such as
“Applications, User Interfaces, and Servers in the Soup,”
which tentatively suggested the end of the computer application as we
know it. I had high praise for Microsoft’s component architecture. I
saw that it allowed tight integration of services from different
vendors, but I haven’t seen the same innovation on the part of small
contributors. Integration is harder than it should be, a game only
corporations can play. (One such corporation, Groove, is now part of
Microsoft.)

Since then we’ve seen GreaseMonkey, mash-ups everywhere on the
Internet, lots of REST and Ajax services, and gadgets and widgets
from a variety of sources. Many people have compared this movement to
the Unix philosophy of many small programs working together. How can
desktop and application developers now push this potential forward?

Desktop designers could find ways to help users quickly add components
to the display (as applications are added now) and organize them.
Considerable ingenuity could be invested in displaying components and
making it easy to search for them. Interfaces could make it easy to
enter input and to pipe data from one component to another. Naturally,
standards would help. Each of these advances would touch off an
explosion of hacker activity on components.
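
The piping just described follows the Unix model: small, single-purpose stages composed into a chain. A minimal sketch of that composition, using Python generators as stand-ins for desktop components (all names here are invented for illustration):

```python
import re

def lines(text):
    """Source component: split a blob of text into records."""
    for line in text.splitlines():
        yield line

def grep(pattern, stream):
    """Filter component: pass through only matching records."""
    rx = re.compile(pattern)
    for item in stream:
        if rx.search(item):
            yield item

def count(stream):
    """Sink component: reduce the stream to a single result."""
    return sum(1 for _ in stream)

# Piping data from one component to the next, as on a shell command line:
text = "alpha\nbeta\ngamma\nbetamax\n"
result = count(grep("beta", lines(text)))  # like: lines | grep beta | count
```

Each stage knows nothing about the others beyond the stream of records passing between them, which is exactly the loose coupling a component desktop would need its interfaces and standards to provide.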

It’s important for desktops to make these components easy to call up,
and for them to be much more lightweight than current applications.
The biggest productivity boost for a user is fast response time,
because the user has her mind on some complex sequence of steps and
can lose track of her thoughts in between calling up applications. A
valuable goal would be for a new component to start up and be ready
for input in no more than half a second.

Free software components would work best in such an environment, where
components are tightly coupled and are created by many different
developers, especially amateurs. Users need the source because they’ll
continually run into situations the original developers did not code
for. Developers will impose size limits without even realizing it.
Inputs may be designed for a simple scalar when the user wants to
supply a tuple, or for a plain text string when the user wants to
supply some form of rich text. Components will need localization for
different languages and cultures.

I’ll handle aesthetics on the walls of my office, on the furniture in
my house, and in the music on my player. I really don’t live for the
moment when someone on the train next to me says, “Hey, what’s that
beautiful desktop you’re running?” I’d rather have all the results of
my previous work (and work done by millions of others around the
world) at my fingertips, so I’d have more free time to write blogs
about what I’d rather have.

A response

I circulated these ideas for comment, and was treated to a fascinating
discussion with Federico Mena Quintero of the
GNOME and Mono projects, an early employee of Ximian (now part of
Novell, of course). He referred to an estimate by Frederick P. Brooks in “The
Mythical Man-Month” (not exactly the latest research in the technology
space, but it still demands attention):

  • To turn a personal tool into an application robust enough for other
    people to use takes three times as much work as developing the
    personal tool. This includes polishing, documentation, and debugging.

  • To turn an application into a library (which Brooks calls a
    “programming system product”) so it can be used in the way I’ve asked
    as a component also requires three times as much work as developing
    the application.

So the gap between “scratching your own itch” in free software and
producing a component is a nine-fold increase in work! Federico
reports that he’s found the estimates accurate during his team’s work
on Evolution and Nautilus.

And he goes on to ask: how many end-users benefit from the extra
three-fold work that goes into turning an application into a
component? “Most people who use the software will want a finished
product. They want to buy a cake, not buy the ingredients to bake it
themselves.” And the ultimate retort of the practitioner to the
armchair theorist: “You are confusing the malleability of software
with the actual work needed to modify it.”

Federico may have identified the difficulties that explain why so few
tools become available in combinable component form, but I think the
demand is present. People can learn to use simple pipelines if the
mechanics are presented to them in an understandable, fill-in-the-form
manner. Besides, the genius of the Internet is that tinkerers can share
the mash-ups they’ve developed with others.

Picking up my thread, David Neary, a prominent GNOME community member,
pointed me to several pieces of free software infrastructure that lay
the basis for cooperation between applications:

  • Telepathy,
    which will give VoIP and instant messaging programs access
    to data maintained by other applications, so you can do such things as
    retrieve contact information from your contacts database. On a more
    sophisticated level, presence and multiple conversations could be
    controlled through a single server.

    Telepathy is based on the Free Desktop Project’s D-Bus protocol.
    I like the first benefit promoted by Telepathy:
    “Breaking down previously complicated monolithic applications into
    simpler components which are concerned with only one group of…”
  • XDND,
    where the DND stands for “drag and drop.” X has supported
    cut-and-paste from the beginning through selections, and many programs
    now go further and implement true drag and drop, letting you drag
    images in and out of different programs as well. But these programs
    use a variety of protocols, which defeats the purpose. Projects
    supporting XDND include GNOME and GTK+, KDE and Qt, StarOffice,
    Mozilla, and J2EE.

    (Data sharing is one step toward my goal of sharing the processing
    itself by piping data from one application to another.)

  • Representatives from different projects are finalizing specs on how to
    share patterns, palettes, curves, bitmaps, and several other types of
    data used in graphical programs such as the GIMP.
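
The XDND entry above hinges on format negotiation: a drag source offers a preference-ordered list of data types, and the target picks the best one it understands. The heart of that handshake is tiny; here is a hypothetical sketch (invented function, simplified to MIME-style type strings):

```python
def negotiate_format(offered, accepted):
    """Pick the first format the target accepts from the source's
    preference-ordered offer -- roughly what a drag-and-drop target
    does when it inspects the source's advertised type list.

    Returns None when the two sides share no format, which is why a
    variety of incompatible protocols 'defeats the purpose'."""
    for fmt in offered:          # source lists formats best-first
        if fmt in accepted:      # target takes the best one it understands
            return fmt
    return None

source_offers = ["image/png", "text/uri-list", "text/plain"]
target_accepts = {"text/uri-list", "text/plain"}
chosen = negotiate_format(source_offers, target_accepts)
```

When every toolkit speaks the same negotiation protocol, any source can drop onto any target; when each uses its own, the `None` branch is what users actually experience.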

Some notes about performance

Making software faster is difficult work, as Mena Quintero reports,
because the problems are hard to find and are usually spread over a
number of components written by different people.

KDE developer Lubos Lunak has spent several years
on performance testing, and finally determined that the main
bottlenecks lie in underlying software: notably the time it takes to
do disk seeks when reading from numerous files. Lubos
wishes for a filesystem
that would “linearize” related files: “making sure that related files
(not just one file) are in one contiguous area of the disk.” In email to me,
he admits that figuring out which files should go together and ensuring
they do so “is non-trivial to achieve and probably impossible to
ensure completely.”
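
Linearization is essentially an allocation policy. The following toy sketch (an invented layout scheme, not a real filesystem API) shows the idea: files belonging to the same group land in one contiguous run of blocks, so loading the group costs one seek instead of one per file.

```python
def linearize(groups, block_size=4096):
    """Toy allocator: lay out each group of related files in one
    contiguous run of blocks.  'groups' maps a group name to a dict
    of {filename: size_in_bytes}; returns
    {filename: (first_block, block_count)}."""
    layout = {}
    next_block = 0
    for group, files in groups.items():
        for name, size in files.items():
            nblocks = -(-size // block_size)   # ceiling division
            layout[name] = (next_block, nblocks)
            next_block += nblocks              # stay contiguous within the group
    return layout

apps = {"editor": {"editor.bin": 9000, "editor.ui": 2000},
        "mail":   {"mail.bin": 5000}}
layout = linearize(apps)
# editor.bin fills blocks 0-2 and editor.ui sits at block 3, so one
# sequential read brings in everything the editor needs at startup.
```

The hard part Lubos identifies is not this bookkeeping but deciding what belongs in a group, and keeping the layout contiguous as files grow and shrink.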

He also points to problems in the linker’s resolution of symbols in
shared libraries. Developer Michael Meeks has proposed
ways to improve performance on Linux by providing new
internal storage measures in the GNU linker, as explained in an article.

Given that performance analysis is difficult and a lot less fun than
other desktop development, it’s amazing that it gets done in a free
software project with high volunteer participation.
(Lunak works for SUSE.) But David Faure, also of the KDE
team, says, “KDE is very aware of performance issues, and new
KDE releases are often faster than previous ones.” X developers also
do a lot of optimization (regularly reported by
Behdad Esfahbod)
on the Cairo and Pango rendering engines.

Chris Tyler, X developer, says that Xgl and AIGLX (a 3D,
OpenGL-compliant system that represents an alternative to Xgl on the X
Window System) actually speed up some applications because these
systems fully render all windows into memory. He writes, “they are
never damaged by other overlapping windows or by switching virtual
desktops. This means that the clients almost never receive redraw
requests and remote apps appear much more responsive.”

AIGLX should be faster than Xgl, Chris says, because AIGLX runs
directly on the host’s X server. Xgl requires two X servers, one
layered on top of the other, because X doesn’t have pure-GL video
drivers. But benchmarks show varying results in comparing Xgl and
AIGLX, so AIGLX could use more optimization. Still, Chris says the
commands that both systems pass to the graphics card can be handled by
any card produced over the past half decade, although some of them
can’t do it with acceptable performance.

Mena Quintero adds that compositing can make repainting look faster by
eliminating flicker. When a traditional windowing program redraws its
window, it has to receive a separate request for each component and
paint them one at a time. When you move a window, this produces a
flicker (even when the graphics library uses double-buffering) that
produces the impression of slow response. A compositor creates the
complete image in off-screen memory and paints it without making paint
requests to underlying windows.

The difference can be compared to moving meal items in a cafeteria.
Traditional repainting is like going back and forth to your chair to
carry your plate, your cup, your fork, your napkin, etc. Compositing
is like loading them all on a tray and moving them as a unit. Federico
says compositing may take longer to repaint the screen, but because of
the lack of flicker, it seems faster psychologically.
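
Compositing in miniature looks like this hypothetical sketch (invented data structures, with characters standing in for pixels): every window renders into an off-screen buffer, back to front, and the finished frame goes to the screen in one operation, which is where the flicker disappears.

```python
def composite(windows, width, height):
    """Render every window into an off-screen frame buffer, back to
    front, then return the finished frame -- the single 'tray' that
    gets carried to the screen, with no per-window paint requests."""
    frame = [["." for _ in range(width)] for _ in range(height)]
    for win in windows:                       # back-to-front stacking order
        for dy in range(win["h"]):
            for dx in range(win["w"]):
                frame[win["y"] + dy][win["x"] + dx] = win["ch"]
    return ["".join(row) for row in frame]    # one finished image, blitted once

frame = composite([{"x": 0, "y": 0, "w": 3, "h": 2, "ch": "a"},
                   {"x": 2, "y": 1, "w": 2, "h": 1, "ch": "b"}],
                  width=5, height=2)
# frame == ["aaa..", "aabb."]
```

Note that the overlapped region of window “a” is simply painted over in the buffer; the screen never shows the intermediate state, which is the flicker a traditional expose-and-redraw cycle would display.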

Major free software desktop applications (such as Firefox) still tend
to be slow. There’s a limit to how much desktop developers can do to
help, and even to how much the application developers themselves can
do. Performance is harder to address than features, but the developers
of the various layers of the free software desktop are working on
both.

Update, 17 May 2007: Thanks for the comments. The various histories of component technologies are nice to have here in one place, but I’m not talking just about using components. It’s one thing (and a necessary foundation) to build programs out of reusable, pluggable components. It’s another to expose them to end-users in ways that make it convenient for users to bypass large-scale applications or plug in their own little scripts, and that’s what I’d like to see next. –Andy

  • Donald


    To your point about “tight integration of services from different vendors” as part of a modern desktop user experience, check out some of the online desktop work that Red Hat is leading in the GNOME community. That umbrella effort spans individual projects like the web services centric Big Board panel prototype for GNOME and the open Mugshot service which glues together user activities across third party services, with APIs that enable deeper integration into the desktop. Lots of activity here and a huge opportunity for open source to combine with online services to factor out much of the cruft that has accumulated in the traditional desktop OS over the last 20 years.

  • Exactly. I would rather have a functional desktop with less eye candy that offers good performance than one with high-end visual effects. That’s exactly the reason I wonder why Microsoft is still investing heavily in their Windows development.

  • Israel Alvarez
  • The world unfortunately migrated away from your proposed desktop environment when the IBM PC, and later the Mac, became popular. And then it all went downhill from there ;-)
    The interim Dynabook behaved similarly to your wishes, and if the Xerox PARC work had continued, I bet we’d have a whole different desktop paradigm today.

    Today this work manifests itself in a free Smalltalk environment called Squeak. Although not yet ready for the average user out of the box, it’s being used in many applications (e.g., the Seaside web server, the OLPC, and a 3D virtual environment called Croquet), and it’s well worth anyone reading O’Reilly using and expanding. (Please see for more info and to download. And please tell me if you do! I’m interested in learning more about those who use Squeak.)

    Fortunately, Kay, Ingalls and others are extending the Dynabook idea from the metal up.
    Please see
    their NSF Proposal on their plans. It’s going to be interesting to watch and participate.

  • Richard


    …was that un-subtle enough to pass muster? Not to be disparaging, but what you’re after sounds like it’d have to be a perfectly implemented amalgamation of BeOS, Plan9 and a (magically!) simple-yet-fast COM or CORBA equivalent, and implemented on a LISP machine to boot. You could try making that if you really wanted, but it’s pretty much guaranteed that you’d end up wallowing in the same tarpit as MULTICS did.

  • Ahhh, thank you Richard, it’s finally safe to post now.

    Everyone rushes out to post their own interpretation of a suitable match. Just notice that they’re all programmers’ solutions, and note how many have actually been designed for desktop-manifest usage. If you want to get anywhere on this issue, I suggest going back where you started, to The Mythical Man-Month, and deciding what you need and how to accomplish it in a way that won’t require an extra zero at the end of your hour sheet.

    Right now, no desktops work as component managers because there are no components. Components aren’t here because there are no other components to interact with. The best we get is some tightly coupled REST clients accessing loosely coupled data providers. There are web pages, mash-ups, and plenty of Dashboard and Google widgets to go around.

    Of course, how many of these components are reusable? How many interact in loosely coupled fashions with other components? How many enhance the user experience beyond that of a normal program, and what distinction grants them these supplemental capabilities?

    Opera is a great component manager. F11 hides and displays them from an overlay, launching them is a cinch. Of course the components are all tightly coupled to their loose data providers. Plan9 was the only system listed here that really has a notion of loose coupling (the divine namespaces).

    I feel this thread is rather insubstantial until someone codes otherwise or proposes an idea with implementation content behind it. It’s hard to rationally and productively discuss abstract ideas without solid grounding, and I’m not sure what this thread is really grounded on.

  • Neil

    Microsoft spent the 1990s heartily pursuing a similar vision with Object Linking & Embedding (OLE), OLE servers/containers/compound documents, Visual Basic for Applications, along with COM and DCOM. It was (by my judgment) a total disaster. You only had to embed an Excel table into a Word document once to see it was useless (or an equation, audio recording, graph, etc.).

    Extensibility in Windows has cost the world billions of dollars dealing with security vulnerabilities. Certainly the damage far outweighs any benefit from these features, which, ironically, the vast majority of users did not understand, which did not work as they expected, and which they had no reasonable way to secure.

    There are staggering challenges to richly integrated components, primarily: security, user interface, storage (including granularity of storage), code quality, and sharing/networks as it relates to all of these.

    “Gadgets” and “pipes” are taking off on the web only because browser security is reaching moderate maturity. On the web you can feel reasonably secure that gadgets won’t invade your personal documents or spy on you. Notice that there are no (as far as I know) gadgets to display your bank or credit card balance. What would it take for you to be comfortable using such a gadget, other than having written it yourself?

    A “casual” gadget on the desktop, one that you can search for and quickly integrate into your desktop, and that has enough access to your personal data and computer’s resources to be useful, demands a security model way beyond anything we have today. Ultimately, security is an ease-of-use issue and we have a very long way to go.

    Even if we achieved such a security model, I’m skeptical that it could solve real-world end-user problems on the desktop. Look at the difficulty novice Bluetooth users face trying to securely connect just 2 or 3 devices.

    “Legos” of functionality that you plug together in complex ways have been an engineering fantasy forever, but most desktop end users don’t take the Wallace & Gromit/Rube Goldberg approach to making breakfast; they pop something into the microwave and move on to something else.

  • Zakaria

    A post from Planet GNOME led me here.

    It kind of matches my favorite desktop vision, as presented in Aza Raskin’s TechTalk
    “Away with Applications: The Death of the Desktop.”

    There are several ideas in there:

    1. Forget 3D and even overlapping windows; just put your windows on one screen that holds all of them, with a zoomable interface.
    2. Use the power of language with a new command-line interface.
    3. Instead of building bloated applications, build user-visible components that users themselves can mix and match. The demo is a spell-check component that the user can apply in any app by selecting text in any text field or text area and invoking the spell-check command.
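
The third idea above can be sketched as a command registry: components are registered once and then invoked on whatever text is selected, regardless of which application the selection came from. This is a hypothetical illustration (all names invented; a real desktop would route the selection over something like D-Bus rather than a Python dict):

```python
COMMANDS = {}

def command(name):
    """Register a user-visible component under a command name."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("upcase")
def upcase(selection):
    """Toy component: transform the selected text."""
    return selection.upper()

@command("word-count")
def word_count(selection):
    """Toy component: report on the selected text."""
    return str(len(selection.split()))

def invoke(name, selection):
    """Any application hands over its selected text; the component
    hands back a result, with no knowledge of the host app."""
    return COMMANDS[name](selection)
```

A spell checker registered this way would work identically on a browser page, a mail message, or a spreadsheet cell, which is precisely the escape from application silos argued for above.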