The desktop I'd like to see

I don’t think I’m being fusty for suggesting that most computer users
see the computer as a tool for better living, not as a thing in itself
that’s designed for their delight. So why are developers still pushing
the desktop race toward richer interfaces whose existence is supposed
to justify itself?

Things have gotten to the point where Microsoft has to release its
latest operating system in a kind of two-tier offering: one with all
of the compositing and 3D Aero effects that take advantage of recent
graphics cards, and the other directed at users with systems that
lack these cards, or with laptops that raise concerns about battery
life.

No surprise: Microsoft has been loading up its software and loading
down users’ hardware for years. But the open source world has pursued
a comparable path since Xgl was introduced for the X Window System.
It, too, takes advantage of modern graphics cards to compete with
proprietary desktops on the basis of dazzle.

You don’t have to write back to tell me that a good use of color,
icons, and other graphical features can save people time and prevent
errors. I know that, and I’m not using this blog to dwell on the pros
and cons of compositing desktops. I could point out, though, that
saving time and eliminating errors is rarely cited as the purpose of
the advances everybody seems happy to label eye candy. What I want to
write about is how to save a lot more time with a functional approach
to the desktop.

The component alternative

I’d rather have lean visual effects with minimal distractions (which
can look very attractive) and let desktop developers focus on getting
programs to be more open to each other and work together more tightly.
I’m getting tired of moving from one application silo to another, a
division I find increasingly arbitrary.

If I’ve developed a complex equation in my spreadsheet, I’d like to
extract it and use it in other settings. If I write a powerful
pattern-matching (regular expression) subroutine in Perl, I’d like to
apply it to the page in my browser or the document in my word
processor.
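
As a concrete (and purely hypothetical) illustration, the kind of
component I have in mind could be as small as this Python sketch: a
pattern-matching subroutine written once and exposed as a plain
stdin-to-stdout filter, so that a browser, word processor, or shell
could hand it text and read back the matches. The pattern and the
script name are my own inventions for the example.

    #!/usr/bin/env python
    # A sketch of a reusable pattern-matching component: read text on
    # stdin, print every match of the pattern on stdout. Any program
    # that can write to a pipe can reuse it.
    import re
    import sys

    # The pattern would normally come from the user or from a saved
    # component; this one, which matches ISO dates, is only an example.
    PATTERN = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

    def extract(text):
        """Return every match of PATTERN in the given text."""
        return PATTERN.findall(text)

    if __name__ == "__main__":
        for match in extract(sys.stdin.read()):
            print(match)

Any program that can write to a pipe could then reuse the same
subroutine; for instance, a tool such as xclip could feed it the
current X selection (xclip -o | python extract_dates.py, where
extract_dates.py is just my name for the script above).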

I explored this idea five years ago in articles such as
Applications, User Interfaces, and Servers in the Soup,
which tentatively suggested the end of the computer application as we
know it. I had high praise for Microsoft’s component architecture. I
saw that it allowed tight integration of services from different
vendors, but I haven’t seen the same innovation on the part of small
contributors. Integration is harder than it should be, a game only
corporations can play. (One such corporation, Groove, is now part of
Microsoft.)

Since then we’ve seen GreaseMonkey, mash-ups everywhere on the
Internet, lots of REST and Ajax services, and the gadgets and widgets
from Microsoft’s live.com, Google, and other sources. Many people
have compared this movement to the Unix
philosophy of many small programs working together. How can desktop
and application developers now push forward this potential?

Desktop designers could find ways to help users quickly add components
to the display (as applications are added now) and organize them.
Considerable ingenuity could be invested in displaying components and
making it easy to search for them. Interfaces could make it easy to
enter input and to pipe data from one component to another. Naturally,
standards would help. Each of these advances would touch off an
explosion of hacker activity on components.
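
To make the piping idea concrete, here is a small sketch of my own
(in Python, with hypothetical component names) of how a desktop could
chain such components the way a Unix pipeline does, each one reading
text on stdin and writing text on stdout:

    # A sketch of wiring stdin/stdout components together, the way a
    # Unix pipeline does. The component commands are hypothetical; the
    # point is that the desktop can chain them without knowing what
    # they do internally.
    import subprocess

    def run_pipeline(commands, text):
        """Feed text through a list of commands in order and return
        the final output."""
        data = text.encode()
        for cmd in commands:
            result = subprocess.run(cmd, input=data,
                                    stdout=subprocess.PIPE, check=True)
            data = result.stdout
        return data.decode()

    if __name__ == "__main__":
        print(run_pipeline(
            [["python", "extract_dates.py"], ["sort", "-u"]],
            "Meetings on 2007-05-17 and 2007-05-17, then 2007-06-01.\n",
        ))

A real desktop shell would present the same chaining through a
fill-in-the-form interface rather than code, but the plumbing
underneath could be exactly this simple.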

It’s important for desktops to make these components easy to call up,
and for them to be much more lightweight than current applications.
The biggest productivity boost for a user is fast response time,
because the user has her mind on some complex sequence of steps and
can lose her train of thought while waiting for applications to come
up. A valuable goal would be to have a new component start up and be
ready for input in no more than half a second.

Free software components would work best in such an environment, where
components are tightly coupled and are created by many different
developers, especially amateurs. Users need the source because they’ll
continually run into situations the original developers did not code
for. Developers will impose size limits without even realizing it.
Inputs may be designed for a simple scalar when the user wants to
supply a tuple, or for a plain text string when the user wants to
supply some form of rich text. Components will need localization for
different languages and cultures.

I’ll handle aesthetics on the walls of my office, on the furniture in
my house, and in the music on my player. I really don’t live for the
moment when someone on the train next to me says, “Hey, what’s that
beautiful desktop you’re running?” I’d rather have all the results of
my previous work (and work done by millions of others around the
world) at my fingertips, so I’d have more free time to write blogs
about what I’d rather have.

A response

I circulated these ideas for comment, and was treated to a fascinating
discussion from Federico Mena Quintero of the
GNOME and Mono projects, an early employee of Ximian (now part of Novell,
of course). He referred to an estimate by Frederick P. Brooks in “The
Mythical Man-Month” (not exactly the latest research in the technology
space, but it still demands attention):

  • To turn a personal tool into an application robust enough for other
    people to use takes three times as much work as developing the
    personal tool. This includes polishing, documentation, and debugging.

  • To turn an application into a library (which Brooks calls a
    “programming system product”) so it can be used in the way I’ve asked
    as a component also requires three times as much work as developing
    the application.

So the gap between “scratching your own itch” in free software and
producing a component is a nine-fold increase in work! Federico
reports that he’s found the estimates accurate during his team’s work
on Evolution and Nautilus.

And he goes on to ask: how many end-users benefit from the extra
three-fold work that goes into turning an application into a
component? “Most people who use the software will want a finished
product. They want to buy a cake, not buy the ingredients to bake it
themselves.” And the ultimate retort of the practitioner to the
armchair theorist: “You are confusing the malleability of software
with the actual work needed to modify it.”

Federico may have identified the difficulties that explain why so few
tools become available in combinable component form, but I think the
demand is present. People can learn to use simple pipelines if the
mechanics are presented to them in an understandable, fill-in-the-form
manner. Besides, the genius of the Internet is that tinkerers can
share the mash-ups they’ve developed with others.

Picking up my thread, David Neary, a prominent GNOME community member,
pointed me to several pieces of free software infrastructure that lay
the basis for cooperation between applications:

  • Telepathy,
    which will give VoIP and instant messaging programs access
    to data maintained by other applications, so you can do such things as
    retrieve contact information from your contacts database. On a more
    sophisticated level, presence and multiple conversations could be
    controlled through a single server.

    Telepathy is based on the Free Desktop Project’s D-Bus protocol.
    I like the first benefit promoted in Telepathy’s
    System_Overview:
    “Breaking down previously complicated monolithic applications into
    simpler components which are concerned with only one group of
    functionality.”
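    (A minimal sketch of calling a D-Bus service appears after this list.)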

  • XDND,
    where the DND stands for “drag and drop.” X has supported
    cut-and-paste from the beginning through selections, and many programs
    now go further and implement true drag and drop, letting you drag
    images in and out of different programs as well. But these programs
    use a variety of protocols, which defeats the purpose. Projects
    listed
    as supporting XDND include GNOME and GTK+, KDE and Qt, StarOffice,
    Mozilla, and J2EE.

    (Data sharing is one step toward my goal of sharing the processing
    itself by piping data from one application to another.)

  • Representatives from different projects are finalizing specs on how to
    share patterns, palettes, curves, bitmaps, and several other types of
    data used in graphical programs such as the GIMP.
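
Since Telepathy and much of this cross-application plumbing rest on
D-Bus, here is a minimal sketch (assuming the python-dbus bindings are
installed) of asking the session bus which services are currently
offering themselves, the first step a component-aware desktop would
take before wiring two of them together:

    # A minimal sketch of talking to the D-Bus session bus with
    # python-dbus. It only lists the names registered on the bus; a
    # component-aware desktop would go on to introspect them and offer
    # their methods to the user.
    import dbus

    bus = dbus.SessionBus()

    # org.freedesktop.DBus is the bus's own bookkeeping service.
    proxy = bus.get_object("org.freedesktop.DBus",
                           "/org/freedesktop/DBus")
    iface = dbus.Interface(proxy, dbus_interface="org.freedesktop.DBus")

    for name in iface.ListNames():
        # Reverse-DNS names are services other applications can call;
        # names beginning with ":" are anonymous connections.
        print(name)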

Some notes about performance

Making software faster is difficult work, as Mena Quintero reports,
because the problems are hard to find and are usually spread over a
number of components written by different people.

KDE developer Lubos Lunak has spent several years
on performance testing, and finally determined that the main
bottlenecks lie in underlying software: notably the time it takes to
do disk seeks when reading from numerous files. Lubos
wishes for a filesystem
that would “linearize” related files: “making sure that related files
(not one, files) are one contiguous area of the disk.” In email to me,
he admits that figuring out what files should go together and ensuring
they do so “is non-trivial to achieve and probably impossible to
ensure completely.”

He also points to problems in the linker’s resolution of symbols in
shared libraries. OpenOffice.org developer Michael Meeks has proposed
ways to improve OpenOffice.org performance on Linux by providing new
internal storage measures in the GNU linker, as explained in an
LWN.net article.

Given that performance analysis is difficult and a lot less fun than
other desktop development, it’s amazing that it gets done in a free
software project with high volunteer participation.
(Lunak works for SUSE.) But David Faure, also of the KDE
team, says, “KDE is very aware of performance issues, and new
KDE releases are often faster than previous ones.” X developers also
do a lot of optimization (regularly reported by
Behdad Esfahbod)
on the Cairo and Pango rendering engines.

Chris Tyler, X developer, says that Xgl and AIGLX (a 3D,
OpenGL-compliant system that represents an alternative to Xgl on the X
Window System) actually speed up some applications because these
systems fully render all windows into memory. He writes, “they are
never damaged by other overlapping windows or by switching virtual
desktops. This means that the clients almost never receive redraw
requests and remote apps appear much more responsive.”

AIGLX should be faster than Xgl, Chris says, because AIGLX runs
directly on the host’s X server. Xgl requires two X servers, one
layered on top of the other, because X doesn’t have pure-GL video
drivers. But benchmarks show varying results in comparing Xgl and
AIGLX, so AIGLX could use more optimization. Still, Chris says the
commands that both systems pass to the graphics card can be handled by
any card produced over the past half decade, although some of them
can’t do it with acceptable performance.

Mena Quintero adds that compositing can make repainting look faster by
eliminating flicker. When a traditional windowing program redraws its
window, it receives a separate request for each part of the window and
paints them one at a time. When you move a window, this produces a
flicker (even when the graphics library uses double-buffering) that
gives the impression of slow response. A compositor assembles the
complete image in offscreen memory and paints it without sending paint
requests to the underlying windows.

The difference can be compared to moving meal items in a cafeteria.
Traditional repainting is like going back and forth to your chair to
carry your plate, your cup, your fork, your napkin, etc. Compositing
is like loading them all on a tray and moving them as a unit. Federico
says compositing may take longer to repaint the screen, but because of
the lack of flicker, it seems faster psychologically.
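
To put the cafeteria analogy in code, here is a toy sketch of my own
(not real X or toolkit code) of the two repaint paths:

    # A toy illustration of traditional versus composited repainting.
    # In the traditional path each window paints itself in response to
    # its own expose request, so intermediate states are visible as
    # flicker. A compositor keeps each window's finished image in
    # offscreen memory and puts the assembled screen up in one step.

    class Window:
        def __init__(self, name):
            self.name = name
            self.offscreen = None   # compositor's cached rendering

        def render(self):
            # Stands in for expensive drawing work.
            return "[%s]" % self.name

    def blit_to_screen(pixels):
        print("screen now shows:", pixels)

    def traditional_repaint(windows):
        # One expose request per window, painted one at a time.
        for w in windows:
            blit_to_screen(w.render())

    def composited_repaint(windows):
        # Render (or reuse) each window offscreen, then paint once.
        for w in windows:
            if w.offscreen is None:
                w.offscreen = w.render()
        blit_to_screen("".join(w.offscreen for w in windows))

    if __name__ == "__main__":
        desktop = [Window("terminal"), Window("browser")]
        traditional_repaint(desktop)   # two partial updates
        composited_repaint(desktop)    # one complete update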

Major free software desktop applications (such as Firefox and
OpenOffice.org) still tend to be slow. There’s a limit to how much
desktop developers can do to help, and even to how much the
application developers themselves can do. Performance is harder to
address than features, but the
developers of the various layers of the free software desktop are
working on both.

Update, 17 May 2007: Thanks for the comments. The various histories of component technologies are nice to have here in one place, but I’m not talking just about using components. It’s one thing (and a necessary foundation) to build programs out of reusable, pluggable components. It’s another to expose them to end-users in ways that make it convenient for users to bypass large-scale applications or plug in their own little scripts, and that’s what I’d like to see next. –Andy
