Software patents, prior art, and revelations of the Peer to Patent review

A report from the Peer to Patent initiative shows that the project is having salutary effects on the patent system.
Besides the greater openness that Peer to Patent promotes in
evaluating individual patent applications, it is creating a new
transparency and understanding of the functioning of the patent system
as a whole. I’ll give some background to help readers understand the
significance of Manny Schecter’s newsletter item, which concerns prior
art that exists outside of patents. I’ll add my own comments about
software patents.

Let’s remind ourselves of the basic rule of patenting: no one should
get a patent for something that was done before by someone else. Even
if you never knew that some guy on the other side of the world thought
of adding a new screw to a boiler, you can’t get a patent on
the idea of adding a screw in that place for that purpose. The other
guy’s work is called prior art, and such prior art can be
found in all kinds of places: marketing brochures, academic journals,
or actual objects that operate currently or operated any time in the
past. For software (which is of particular interest to most readers
of this blog), prior art could well be source code.

Now for the big lapse at the U.S. Patent Office: they rarely look for
prior art out in the real world. They mostly check previously granted
U.S. patents–a pretty narrow view of technology. And that has
seriously harmed software patenting.

Up until the early 1980s, software was considered a form of thinking rather than a process or machine, and therefore unpatentable. Patents started to be granted on software in the United States in the early 1980s and took off in a big way in the 1990s. (A useful history has been put up by Bitlaw.) This sudden turn meant that patent examiners were asked to evaluate applications in a field where no patents existed previously, so of course they couldn’t find prior art. It would have been quixotic in any case to expect examiners, allowed fewer than 20 hours per patent, to learn a new field like software and go out among millions of lines of code searching for examples of what they were being asked to grant patents for.

In many parts of the world, software is still considered unsuitable for patenting. It’s worth noting, though, that European patent offices have been handing out patents on software without acknowledging them as such, because a hard-fought battle by free software advocates has kept software officially unpatentable there.

In the U.S., patents have been handed out right and left for two decades now, so prior art does exist within software patents. But that only makes things worse. First, the bad patents handed out over those first decades continue to weigh down software with lawsuits that lack merit. Second, the precedent of so many unmerited patents gives examiners the impression that it’s OK to grant patents on the same kinds of overly broad and obvious claims now.

Now to Schecter’s article. He says the patent office has long acknowledged that it looks mostly to patents for prior art, but it won’t admit that this is a problem. One has to prove to the office that important prior art exists out in the field, and that this prior art can actually lead to the denial of applications.

And Peer to Patent has accomplished that. From Schecter:

Approximately 20% of patent applications in the pilot were rejected in
view of prior art references submitted through Peer To Patent, and
over half of the references applied by examiners as grounds for those
rejections were non-patent prior art.

The discussion over the patent process, which has progressed painfully slowly over many years, now takes a decisive step forward: prior art in the field should be taken into account during patent examination. The next question is how.

Peer to Patent and related efforts such as Article One Partners offer a powerful step toward a solution. Much of the tinkering proposed in current debates, such as changing the number of patent examiners or the damages awarded for infringement (a bill was debated in the Senate today, I’ve heard), will do far less to cut down the backlog of 700,000 applications and improve outcomes than serious public input could achieve.

I am not a zealot on the subject of software patents. I’ve read a lot of patent applications and court rulings about patents (see, for instance, my analysis of the Bilski decision) and explored the case for software patents sympathetically in another article. But I have to come down on the side of the position that software and business processes, like other areas of pure human thought, have no place in the patent system.

Maybe Rivest, Shamir, and Adleman deserved their famous patent (now expired) on public-key cryptography; that was a huge leap of thought that made a historic change in how computers are used in the world. But the modern patents I’ve seen are nothing like the RSA algorithm. They represent cheap patches on tired old practices. Proponents of software patents may win their battle in the halls of power, but judging by the patents their policy has led to, they have lost the argument. Sorry, there’s just too much crap out there.
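For readers who haven’t looked at the algorithm itself, here is a minimal sketch of textbook RSA in Python, just to illustrate the kind of machinery that patent covered. The tiny primes, exponent, and message are arbitrary toy values chosen for illustration; real implementations use enormous primes and padding schemes, and this is nothing like production code.

    # Toy RSA, for illustration only: tiny primes, no padding, not secure.
    def egcd(a, b):
        """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    p, q = 61, 53                # two small primes (real keys use huge ones)
    n = p * q                    # public modulus: 3233
    phi = (p - 1) * (q - 1)      # Euler's totient: 3120
    e = 17                       # public exponent, coprime with phi
    d = egcd(e, phi)[1] % phi    # private exponent: inverse of e mod phi (2753)

    message = 65                         # any number smaller than n
    ciphertext = pow(message, e, n)      # encrypt: m^e mod n
    recovered = pow(ciphertext, d, n)    # decrypt: c^d mod n
    assert recovered == message

The leap wasn’t the arithmetic; it was the insight that anyone can encrypt with the public pair (e, n) while only the holder of d can decrypt, because recovering d requires factoring n.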

  • Jose_X

    The mighty US government doesn’t cover the prior art well (an impossible task to manage), yet implicitly this is the burden the courts place on each defendant right after the plaintiff places the USPTO patent grant on the table.

    As an example, it took Red Hat $3 million of its own money as defendant to get a few patents tossed, essentially doing part of the job that belonged to the USPTO. Most defendants cannot afford to do the job the law presumes the USPTO is doing and do it according to exacting court standards.

    Despite the USPTO being unable to do its critical job, it still doles out buckets of patents each day, fueling an extremely wide-scale protection racket.

    Related to the fact that patents are being granted that overlap with the prior art: even if all the prior art were known, we’d still have a bunch of stifling patents being granted.

    Why?

    Because the bar to inventiveness is extremely low: *non-obvious* to a person having *ordinary* skill in the art.

    This means geniuses, very smart people, and even average practitioners will be tied down by a patent for the better part of 20 years whenever someone with ordinary skill finds something non-obvious.

    Obviously, this is very bad and broken. It means that potentially over 1 million software developers might be tied down for at least a large fraction of 20 years by some of the patent grants that just meet this bar (e.g., the one-click patent).

    To gain a bit of appreciation, let’s look at this problem another way in order to see just how likely “I” am to get a patent. After all, if the bar is so low, then I should be able to get my patents, right?

    Test: If we consider 100,000 things one can do with an iPod that are non-obvious to an average iPod user, how likely is any person (or even the smartest people) to be the very first to do any of those things, so as to get a patent for it?

    Almost anything you can think of, someone will beat you to it. So no matter how likely you are to eventually rediscover it for yourself, someone else will have gotten a monopoly ahead of you. And almost surely, in order to spend any significant amount of time with an iPod, you will have to do many of those patented things and hence can be blocked by any of those patent holders.

    So you just want to write basic software to solve your problem, i.e., you just want to spend a little time with an iPod, but you still have to deal with the fact that many patents will exist and you likely won’t own any of them.

    This demonstrates the problem with a low bar as well as with independent invention not being recognized.

    We might even try to see if some subgroup of people is more likely than others to get a patent. Well, the answer is easy. The very, very wealthy are the only ones who can afford anything resembling a minimal number of patents to do anything of any use. Even one patent can be very costly to acquire and maintain. Now imagine 10 or 100. Surely, if you are decently skilled, you will have in one year many more than 100 ideas that are non-obvious to a person having ordinary skill in the art, yet you will likely not be able to afford patents on even a tiny fraction of them.

    Process patents applied to consumer devices (like software instructions being carried out by a programmable device) present this issue of many, many potential creators (note the very low costs). These monopoly grants come with ridiculously high opportunity costs (and loads of free speech abridgments). This is why such process patents (pure information patents) are not able to meet the Constitutional requirement of “promoting the progress”. And it’s not just common sense that tells us this. Professional researchers are saying it as well ( http://www.stlr.org/volumes/volume-x-2008-2009/torrance/ ).

  • Jose_X

    Even when we consider things like RSA encryption, we can’t forget that all progress is a team effort by all of society, present and past. Even lesser mortals contribute (think of the butterfly effect, but for ideas).

    Even when a few people really stand out, they do not work in isolation (in isolation, we’d all vegetate in a cave making grunts and licking the walls for mineral nutrients). Not only would giving a monopoly to one short-change progress, it would be unfair to those not getting it.

    Note that in mathematics, literature, music, law, and any other pure information discipline (where we create using idealized models that avoid the tricky issues of Mother Nature), we don’t award patents. Besides being stifling, such patents would abridge Constitutional free expression rights.

    Einstein did not work in a vacuum. Even mythical Prometheus got help from the gods.

    If we give a monopoly to captain Einstein, then all the other individuals on team Einstein get short-changed. These are the team members without whom captain Einstein would have failed.

    To use a minerals/mining analogy, if society is approaching hitting paydirt and does most of the work cumulatively, it is unfair to let one person rush out and lay claim to a particular pocket of the mine lying right next to the big payload mineral riches.

    One, we don’t want the gold ($$ and demand) to run out. It might run out in less than 20 years.

    Two, the primary discoverer already can capitalize on the discovery ahead of everyone else. This is an advantage. It would be torture on the rest for them to be tied down while this one accumulates riches for 20 years. [This analogy would apply for high quality patents where possibly no practical alternative is likely to be created during 20 years.]

    In fact, some might argue that the inventor still would owe society enough debt to warrant quick revelation of the discovery. Many open source software developers and scientists tend to loosely apply this voluntary principle.

    We all gain more from society’s riches than we put back.

    When you work in a team, as we all do within society, it is unfair to let one team member get all the credit (a monopoly). And worse, to give that credit to the person who ran the last leg of the race when perhaps the greatest work was done in an earlier leg.

  • Jose_X

    There is a new FTC report (covered at http://www.techdirt.com/articles/20110308/01101513393/ftc-puts-patent-trolls-notice.shtml ) which criticizes certain aspects of our patent system (patent trolling); however, it does defend the patent system as a whole.

    What I found interesting in glancing at the paper is that the defense being made of a patent system doesn’t apply to software.

    [I went into a bit more detail in a comment there. I mention this to avoid repeating here and because I didn’t notice anyone at the time having posted about that. I only read bits of the pdf paper, so I might be 99% wrong about the conclusion I drew. On the other hand, the report might be packed with material useful to support an anti-swpat stance.]